| Commit message | Author | Files | Lines |
|
Some macros are unused; delete them.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
The dev_warn is using the platform driver, which was removed in the
previous patch.
Let's replace dev_warn() with pr_warn().
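A minimal sketch of the substitution (the message text is illustrative):

    /* before: needs a struct device, gone since the previous patch */
    dev_warn(&pdev->dev, "failed to initialize the timer\n");

    /* after: plain printk-level warning, no device required */
    pr_warn("failed to initialize the timer\n");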
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Remove some legacy code and replace it with the clksrc-of code.
Do some cleanup and code consolidation.
Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Implement an ARM delay timer to be used for udelay(). This allows us to
skip the delay loop calibration at boot on the Marvell BG2, BG2Q and
BG2CD platforms. After this patch, udelay() is also unaffected by CPU
frequency changes.
Note: when there are several possible delay timers, we may not select
the "best" one. Take one Marvell Berlin platform as an example: we have
an arch timer and a dw-apb timer. The arch timer runs at 25MHz while
the dw-apb timer runs at 100MHz, so the current selection would choose
the dw-apb timer. But the dw-apb timer sits on the APB bus while the
arch timer sits in the CPU, so the cost of accessing the dw-apb timer
is higher than for the arch timer. We could introduce a "rating"
concept for delay timers, but this approach "brings a lot of complexity
and workarounds in the code for a small benefit" as pointed out by
Daniel.
Later, Arnd pointed out "However, we could argue that this actually
doesn't matter at all, because the entire point of the ndelay()/
udelay()/mdelay() functions is to waste CPU cycles doing not much at
all, so we can just as well waste them reading the timer register
than spinning on the CPU reading the arch timer more often.", so we
simply register the dw-apb based delay timer.
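For illustration, the ARM delay timer registration follows roughly this
pattern (the register offset and variable names here are illustrative,
not the actual driver code):

    static void __iomem *timer_base;

    static unsigned long dw_apb_read_current_timer(void)
    {
            /* the timer counts down; invert so the value increases */
            return ~readl_relaxed(timer_base + 0x04);
    }

    static struct delay_timer dw_apb_delay_timer = {
            .read_current_timer = dw_apb_read_current_timer,
    };

    /* in the timer init code, once the clock rate is known */
    dw_apb_delay_timer.freq = rate;
    register_current_timer_delay(&dw_apb_delay_timer);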
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
In order to compile on all architectures without errors with
'allyesconfig', make sure the platform selects GENERIC_CLOCKEVENTS.
Without this patch, the newly added drivers prevent the kernel from
compiling on PARISC.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Let the platform's Kconfig select the clock instead of having a reverse
dependency from the driver to the platform options.
Add the COMPILE_TEST option for compilation test coverage. Due to the
non-portable 'delay' code, this driver is only compilable on ARM.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Tested-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Reviewed-by: Chanwoo Choi <cw00.choi@samsung.com>
|
|
Let the platform's Kconfig select the clock instead of having a reverse
dependency from the driver to the platform options.
Add the COMPILE_TEST option for compilation test coverage.
This change is debatable, as the option itself in Kconfig allows
selecting the driver for the platform or not. With this change the prcmu
timer is always selected.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Due to the non-portable code for the delay timer, this option is only
available for the ARM architecture.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
The driver depends on the common clock framework, hence the added
dependency on COMMON_CLK.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Remove the <asm/time.h> header inclusion, which is pointless.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Due to the non-portable code for the delay timer, this option is only
available for the ARM architecture.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
The driver depends on the common clock framework, hence the added
dependency on COMMON_CLK.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
The driver uses the atomic_io API, which is not portable, so the
compilation is restricted to ARM only.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Due to the non-portable 'delay' code, the compilation is restricted to
the ARM architecture only.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Due to the dsb() usage, this driver is only compilable on ARM and
ARM64.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Instead of having the clocksource's Kconfig depend on the arch, let the
arch select the timer it needs.
The CLKSRC_OF dependency is removed because it is already selected by
ARCH_PXA, and it is added for SA1100.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Increase the compilation test coverage by adding the COMPILE_TEST option.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Change the Kconfig selection rule by letting the STI arch select
the timer.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Maxime Coquelin <maxime.coquelin@st.com>
|
|
In order to be consistent with how the rest of the drivers are compiled,
let's introduce the COMPILE_TEST option. Unfortunately, the delay.h code
is not portable, so the compilation test coverage is restricted to the
ARM architecture.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
The dsb() instruction is pointless in this code.
Remove it.
That also fixes the ARM64 compilation issue.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Tested-by: Caesar Wang <wxt@rock-chips.com>
|
|
When we try to compile a clocksource driver with the COMPILE_TEST option,
we can't select GENERIC_SCHED_CLOCK because the sched_clock() symbol
would be duplicated with the one defined for x86.
In order to fix that, we don't select GENERIC_SCHED_CLOCK in the
driver's Kconfig file but define some empty stub functions for the
different symbols in order to prevent unresolved ones.
This patch fixes the COMPILE_TEST option for the compile test coverage of
the clocksource drivers. Without this patch, we can't add the COMPILE_TEST
option for the clocksource drivers using GENERIC_SCHED_CLOCK.
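The stubs follow this pattern in <linux/sched_clock.h> (a simplified
sketch):

    #ifdef CONFIG_GENERIC_SCHED_CLOCK
    extern void sched_clock_register(u64 (*read)(void), int bits,
                                     unsigned long rate);
    #else
    /* empty stub: lets drivers build on arches (e.g. x86) that do not
     * use the generic sched_clock framework */
    static inline void sched_clock_register(u64 (*read)(void), int bits,
                                            unsigned long rate)
    {
    }
    #endif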
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Let's clean up the macros to fix such trivial style-check details.
Signed-off-by: Caesar Wang <wxt@rock-chips.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Allow the timer core to change the SMP affinity of the broadcast timer
IRQ by setting the CLOCK_EVT_FEAT_DYNIRQ flag.
This reduces interrupt pressure and wakeups on CPU0 as well as vastly
reducing the number of timer broadcast IPIs.
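In a driver this amounts to one extra flag when setting up the broadcast
clock_event_device (the surrounding flags are illustrative):

    evt->features = CLOCK_EVT_FEAT_PERIODIC |
                    CLOCK_EVT_FEAT_ONESHOT |
                    CLOCK_EVT_FEAT_DYNIRQ;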
Signed-off-by: Lucas Stach <dev@lynxeye.de>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
Add an error path to free the evt struct allocated by kzalloc() at the
beginning of mtk_timer_init().
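A sketch of the unwinding (label name and the mapping call are
simplified, not the exact driver code):

    evt = kzalloc(sizeof(*evt), GFP_KERNEL);
    if (!evt)
            return;

    evt->gpt_base = of_io_request_and_map(node, 0, "mtk-timer");
    if (IS_ERR(evt->gpt_base))
            goto err_kzalloc;
    ...
    err_kzalloc:
            kfree(evt);     /* don't leak evt on failures after kzalloc() */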
Acked-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Alexey Klimov <alexey.klimov@linaro.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
1) Change pr_warn()s to pr_err()s. These messages are actually errors and
not warnings.
2) Add missing \n.
3) The error message for kzalloc() failure is removed, as suggested by Joe
Perches: there is a generic stack dump for allocation failures.
Signed-off-by: Alexey Klimov <alexey.klimov@linaro.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
It's a bit unclear which subsystem/driver emits certain messages to dmesg
in the function mtk_timer_init(). Use pr_fmt to auto-prefix the messages
appropriately.
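The usual pattern, placed before the #include lines so every pr_*() call
in the file picks up the prefix (the prefix string is illustrative):

    #define pr_fmt(fmt) "mtk_timer: " fmt

    pr_warn("Can't get resource\n");
    /* dmesg now shows: "mtk_timer: Can't get resource" */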
Acked-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Alexey Klimov <alexey.klimov@linaro.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
While going through the nohz code I got stumped by some of it.
This patch adds a few comments clarifying the code; based on discussion
with Thomas.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Link: http://lkml.kernel.org/r/20151119162106.GO3816@twins.programming.kicks-ass.net
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
We cache all hotpath members of a clocksource in the timekeeper
core, so there is no general requirement to cache-line align struct
clocksource. Remove the enforced alignment.
That allows users who need to wrap struct clocksource in their own
struct to align that struct themselves without getting extra padding.
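For example, a hypothetical driver wrapping struct clocksource can now
enforce the alignment itself:

    struct foo_clocksource {
            struct clocksource cs;  /* no longer forces its own alignment */
            void __iomem *base;
    } ____cacheline_aligned;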
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Cc: Marc Gonzalez <marc_gonzalez@sigmadesigns.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Mans Rullgard <mans@mansr.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Sebastian Frias <sebastian_frias@sigmadesigns.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1511191209000.3898@nanos
|
|
Adjust the kmem_cache_alloc_bulk API before we have any real users.
Adjust the API to return type 'int' instead of the previous 'bool'. This
is done to allow future extension of the bulk alloc API.
A future extension could be to allow SLUB to stop at a page boundary, when
specified by a flag, and then return the number of objects.
The advantage of this approach is that it would make it easier to let
bulk alloc run without local IRQs disabled, with an approach of cmpxchg
"stealing" the entire c->freelist or page->freelist. To avoid
overshooting we would stop processing at a slab-page boundary; else we
always end up returning some objects at the cost of another cmpxchg.
To stay compatible with future users of this API that link against an
older kernel when using the new flag, we need to return the number of
allocated objects with this API change.
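The resulting prototype and a usage sketch (at this point the SLUB
implementation is still all-or-nothing: 0 on failure, the full count on
success):

    int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
                              size_t size, void **p);

    void *objs[16];

    if (!kmem_cache_alloc_bulk(cache, GFP_KERNEL, ARRAY_SIZE(objs), objs))
            return -ENOMEM; /* nothing was allocated */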
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The initial implementation missed kmem cgroup support in the
kmem_cache_free_bulk() call; add it.
If CONFIG_MEMCG_KMEM is not enabled, the compiler should be smart enough
not to add any asm code.
Incoming bulk free objects can belong to different kmem cgroups, and the
object free call can happen at a later point outside the memcg context.
Thus, we need to keep the original kmem_cache, to correctly verify that a
memcg object matches against its "root_cache" (s->memcg_params.root_cache).
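Roughly, the fix resolves the per-object cache while walking the bulk
array (a sketch based on the cache_from_obj() helper in mm/slab.h):

    /* the caller may pass only the root cache; each object can belong
     * to a different memcg child cache of that root */
    df->s = cache_from_obj(s, object);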
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The call slab_pre_alloc_hook() interacts with kmemcg and is not allowed to
be called several times inside the bulk alloc for loop, due to the call to
memcg_kmem_get_cache().
This would result in hitting the VM_BUG_ON in __memcg_kmem_get_cache.
As suggested by Vladimir Davydov, change slab_post_alloc_hook() to be able
to handle an array of objects.
A subtle detail is that the loop iterator "i" in slab_post_alloc_hook()
must have the same type (size_t) as the size argument. This makes it
easier for the compiler to realize that it can remove the loop when all
the debug statements inside it evaluate to nothing. Note, this is only an
issue because the kernel is compiled with the GCC option
-fno-strict-overflow.
In slab_alloc_node() the compiler inlines and optimizes the invocation of
slab_post_alloc_hook(s, flags, 1, &object) by removing the loop and
accessing the object directly.
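The reworked hook looks roughly like this (simplified sketch):

    static inline void slab_post_alloc_hook(struct kmem_cache *s,
                                            gfp_t flags, size_t size,
                                            void **p)
    {
            size_t i;       /* same type as 'size' so the loop can go */

            flags &= gfp_allowed_mask;
            for (i = 0; i < size; i++) {
                    void *object = p[i];

                    kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
                    kmemleak_alloc_recursive(object, s->object_size, 1,
                                             s->flags, flags);
                    kasan_slab_alloc(s, object);
            }
            memcg_kmem_put_cache(s);
    }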
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reported-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Suggested-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This change focuses on improving the speed of object freeing in the
"slowpath" of kmem_cache_free_bulk.
The calls slab_free (fastpath) and __slab_free (slowpath) have been
extended with support for bulk free, which amortizes the overhead of
the (locked) cmpxchg_double.
To use the new bulking feature, we build what I call a detached
freelist. The detached freelist takes advantage of three properties:
1) the free function call owns the object that is about to be freed,
thus writing into this memory is synchronization-free.
2) many freelists can co-exist side-by-side in the same slab-page,
each with a separate head pointer.
3) it is the visibility of the head pointer that needs synchronization.
Given these properties, the brilliant part is that the detached
freelist can be constructed without any need for synchronization: it
is built directly in the page objects. The detached freelist is
allocated on the stack of the kmem_cache_free_bulk call, so the
freelist head pointer is not visible to other CPUs.
All objects in a SLUB freelist must belong to the same slab-page.
Thus, constructing the detached freelist is about matching objects
that belong to the same slab-page. The bulk free array is scanned in
a progressive manner with a limited look-ahead facility.
Kmem debug support is handled in the call to slab_free().
Notice kmem_cache_free_bulk no longer needs to disable IRQs. This
only slowed down single-object bulk free by approx 3 cycles.
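The structure behind this looks roughly like:

    struct detached_freelist {
            struct page *page;      /* slab-page all objects belong to */
            void *tail;             /* last object in the freelist */
            void *freelist;         /* head pointer, on our stack only */
            int cnt;                /* number of objects on the freelist */
    };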
Performance data:
Benchmarked[1] obj size 256 bytes on CPU i7-4790K @ 4.00GHz
SLUB fastpath single object quick reuse: 47 cycles(tsc) 11.931 ns
To get stable and comparable numbers, the kernel has been booted with
"slab_merge" (this also improves performance for larger bulk sizes).
Performance data, compared against fallback bulking:
bulk - fallback bulk - improvement with this patch
1 - 62 cycles(tsc) 15.662 ns - 49 cycles(tsc) 12.407 ns- improved 21.0%
2 - 55 cycles(tsc) 13.935 ns - 30 cycles(tsc) 7.506 ns - improved 45.5%
3 - 53 cycles(tsc) 13.341 ns - 23 cycles(tsc) 5.865 ns - improved 56.6%
4 - 52 cycles(tsc) 13.081 ns - 20 cycles(tsc) 5.048 ns - improved 61.5%
8 - 50 cycles(tsc) 12.627 ns - 18 cycles(tsc) 4.659 ns - improved 64.0%
16 - 49 cycles(tsc) 12.412 ns - 17 cycles(tsc) 4.495 ns - improved 65.3%
30 - 49 cycles(tsc) 12.484 ns - 18 cycles(tsc) 4.533 ns - improved 63.3%
32 - 50 cycles(tsc) 12.627 ns - 18 cycles(tsc) 4.707 ns - improved 64.0%
34 - 96 cycles(tsc) 24.243 ns - 23 cycles(tsc) 5.976 ns - improved 76.0%
48 - 83 cycles(tsc) 20.818 ns - 21 cycles(tsc) 5.329 ns - improved 74.7%
64 - 74 cycles(tsc) 18.700 ns - 20 cycles(tsc) 5.127 ns - improved 73.0%
128 - 90 cycles(tsc) 22.734 ns - 27 cycles(tsc) 6.833 ns - improved 70.0%
158 - 99 cycles(tsc) 24.776 ns - 30 cycles(tsc) 7.583 ns - improved 69.7%
250 - 104 cycles(tsc) 26.089 ns - 37 cycles(tsc) 9.280 ns - improved 64.4%
Performance data, compared current in-kernel bulking:
bulk - curr in-kernel - improvement with this patch
1 - 46 cycles(tsc) - 49 cycles(tsc) - improved (cycles:-3) -6.5%
2 - 27 cycles(tsc) - 30 cycles(tsc) - improved (cycles:-3) -11.1%
3 - 21 cycles(tsc) - 23 cycles(tsc) - improved (cycles:-2) -9.5%
4 - 18 cycles(tsc) - 20 cycles(tsc) - improved (cycles:-2) -11.1%
8 - 17 cycles(tsc) - 18 cycles(tsc) - improved (cycles:-1) -5.9%
16 - 18 cycles(tsc) - 17 cycles(tsc) - improved (cycles: 1) 5.6%
30 - 18 cycles(tsc) - 18 cycles(tsc) - improved (cycles: 0) 0.0%
32 - 18 cycles(tsc) - 18 cycles(tsc) - improved (cycles: 0) 0.0%
34 - 78 cycles(tsc) - 23 cycles(tsc) - improved (cycles:55) 70.5%
48 - 60 cycles(tsc) - 21 cycles(tsc) - improved (cycles:39) 65.0%
64 - 49 cycles(tsc) - 20 cycles(tsc) - improved (cycles:29) 59.2%
128 - 69 cycles(tsc) - 27 cycles(tsc) - improved (cycles:42) 60.9%
158 - 79 cycles(tsc) - 30 cycles(tsc) - improved (cycles:49) 62.0%
250 - 86 cycles(tsc) - 37 cycles(tsc) - improved (cycles:49) 57.0%
Performance with normal SLUB merging is significantly slower for
larger bulking. This is believed to (primarily) be an effect of not
having to share the per-CPU data-structures, as tuning per-CPU size
can achieve similar performance.
bulk - slab_nomerge - normal SLUB merge
1 - 49 cycles(tsc) - 49 cycles(tsc) - merge slower with cycles:0
2 - 30 cycles(tsc) - 30 cycles(tsc) - merge slower with cycles:0
3 - 23 cycles(tsc) - 23 cycles(tsc) - merge slower with cycles:0
4 - 20 cycles(tsc) - 20 cycles(tsc) - merge slower with cycles:0
8 - 18 cycles(tsc) - 18 cycles(tsc) - merge slower with cycles:0
16 - 17 cycles(tsc) - 17 cycles(tsc) - merge slower with cycles:0
30 - 18 cycles(tsc) - 23 cycles(tsc) - merge slower with cycles:5
32 - 18 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:4
34 - 23 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:-1
48 - 21 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:1
64 - 20 cycles(tsc) - 48 cycles(tsc) - merge slower with cycles:28
128 - 27 cycles(tsc) - 57 cycles(tsc) - merge slower with cycles:30
158 - 30 cycles(tsc) - 59 cycles(tsc) - merge slower with cycles:29
250 - 37 cycles(tsc) - 56 cycles(tsc) - merge slower with cycles:19
Joint work with Alexander Duyck.
[1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/slab_bulk_test01.c
[akpm@linux-foundation.org: BUG_ON -> WARN_ON;return]
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Make it possible to free a freelist with several objects by adjusting the
API of slab_free() and __slab_free() to have head, tail and an objects
counter (cnt).
Tail being NULL indicates a single-object free of the head object. This
allows compiler inline constant propagation in slab_free() and
slab_free_freelist_hook() to avoid adding any overhead in the case of a
single object free.
This allows a freelist with several objects (all within the same
slab-page) to be freed using a single locked cmpxchg_double in
__slab_free() and with an unlocked cmpxchg_double in slab_free().
Object debugging on the free path is also extended to handle these
freelists. When CONFIG_SLUB_DEBUG is enabled it will also detect if
objects don't belong to the same slab-page.
These changes are needed for the next patch to bulk free the detached
freelists it introduces and constructs.
Micro benchmarking showed no performance reduction due to this change
when debugging is turned off (compiled with CONFIG_SLUB_DEBUG).
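The adjusted internal signature, roughly; a single-object free passes
tail == NULL and cnt == 1 so the constants propagate:

    static __always_inline void slab_free(struct kmem_cache *s,
                                          struct page *page, void *head,
                                          void *tail, int cnt,
                                          unsigned long addr);

    /* single object free, e.g. from kmem_cache_free() */
    slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);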
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Adjust the linker script and map_pages() to map kernel text and data on
physical 1MB huge/large pages.
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
This patch adds huge page support to allow userspace to allocate huge
pages and to use the hugetlbfs filesystem on 32- and 64-bit Linux kernels.
A later patch will add kernel support to map kernel text and data on
huge pages.
The only requirement is that the kernel needs to be compiled for a
PA8X00 CPU (PA2.0 architecture). Older PA1.x CPUs do not support
variable page sizes. 64-bit kernels are compiled for PA2.0 by default.
Technically, on parisc, multiple physical huge pages may be needed to
emulate standard 2MB huge pages.
|
|
Use the 22-bit instead of the 17-bit branch instruction on a 64-bit kernel
to reach the do_syscall_trace_exit function from the gateway page.
A huge-page-enabled kernel may need the additional branch distance bits.
Signed-off-by: Helge Deller <deller@gmx.de>
|