author | Jeremy Fitzhardinge <jeremy@goop.org> | 2009-02-27 22:25:21 +0100
---|---|---
committer | Ingo Molnar <mingo@elte.hu> | 2009-03-02 12:07:48 +0100
commit | db949bba3c7cf2e664ac12e237c6d4c914f0c69d | (patch)
tree | 4de65831dd1de95f642bed15bc9788edd74c48da | /arch/x86/kernel/process_32.c
parent | Merge branch 'x86/pat' into x86/core | (diff)
download | linux-db949bba3c7cf2e664ac12e237c6d4c914f0c69d.tar.xz | linux-db949bba3c7cf2e664ac12e237c6d4c914f0c69d.zip
x86-32: use non-lazy io bitmap context switching
Impact: remove 32-bit optimization to prepare unification
x86-32 and -64 differ in the way they context-switch tasks
with IO permission bitmaps. x86-64 simply copies the next
task's IO bitmap into place (if any) on context switch. x86-32
invalidates the bitmap on context switch, so that the next
IO instruction will fault; at that point it installs the
appropriate IO bitmap.
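For illustration, here is a minimal userspace sketch of the two
strategies. The `fake_tss`/`fake_task` types, the `io_bitmap_valid`
flag, and all sizes are made-up stand-ins: on real hardware validity
is signalled through the TSS's `io_bitmap_base` offset
(`INVALID_IO_BITMAP_OFFSET` in the diff below), not a boolean.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define IO_BITMAP_BYTES 128	/* ports 0..1023, one bit per port */

struct fake_tss {
	uint8_t io_bitmap[IO_BITMAP_BYTES];
	int io_bitmap_valid;	/* stand-in for x86_tss.io_bitmap_base */
};

struct fake_task {
	uint8_t *io_bitmap_ptr;	/* NULL if the task never called ioperm() */
	unsigned io_bitmap_max;	/* bytes of the bitmap actually in use */
};

/* x86-64 style: eagerly copy the next task's bitmap on every switch. */
static void switch_eager(struct fake_tss *tss, const struct fake_task *next)
{
	if (next->io_bitmap_ptr) {
		memcpy(tss->io_bitmap, next->io_bitmap_ptr, next->io_bitmap_max);
		tss->io_bitmap_valid = 1;
	} else {
		tss->io_bitmap_valid = 0;
	}
}

/*
 * x86-32 lazy style: only invalidate on switch; the first IN/OUT the
 * task executes faults, and the #GP handler installs the bitmap then.
 */
static void switch_lazy(struct fake_tss *tss)
{
	tss->io_bitmap_valid = 0;
}

static void gp_handler_install(struct fake_tss *tss, const struct fake_task *cur)
{
	memcpy(tss->io_bitmap, cur->io_bitmap_ptr, cur->io_bitmap_max);
	tss->io_bitmap_valid = 1;	/* the faulting IO instruction is retried */
}

int main(void)
{
	uint8_t bm[IO_BITMAP_BYTES];
	memset(bm, 0xff, sizeof(bm));	/* all ports denied by default */
	bm[0] = 0xfe;			/* clear bit 0: allow port 0 */

	struct fake_task x_server = { .io_bitmap_ptr = bm, .io_bitmap_max = 1 };
	struct fake_tss tss = { .io_bitmap_valid = 0 };

	switch_eager(&tss, &x_server);		/* bitmap usable immediately */
	switch_lazy(&tss);			/* bitmap invalid... */
	gp_handler_install(&tss, &x_server);	/* ...until the first IO faults */
	printf("bitmap valid: %d\n", tss.io_bitmap_valid);
	return 0;
}
```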
The lazy scheme makes context switching IO-bitmap-using tasks
a bit less expensive, at the cost of making the next IO instruction
slower due to the extra fault. This tradeoff only makes sense
if IO-bitmap-using processes are relatively common but actually
execute IO instructions only rarely.
However, in a typical desktop system, the only process likely
to be using IO bitmaps is the X server, and nothing at all on
a server. Therefore the lazy context switch doesn't really win
all that much, and it's just a gratuitous difference from the
64-bit code.
This patch removes the lazy context switch, with a view to
unifying this code in a later change.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'arch/x86/kernel/process_32.c')
-rw-r--r-- | arch/x86/kernel/process_32.c | 36 |
1 file changed, 9 insertions, 27 deletions
```diff
diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index 646da41a620a..a59314e877f0 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -248,11 +248,8 @@ void exit_thread(void)
 		/*
 		 * Careful, clear this in the TSS too:
 		 */
-		memset(tss->io_bitmap, 0xff, tss->io_bitmap_max);
+		memset(tss->io_bitmap, 0xff, t->io_bitmap_max);
 		t->io_bitmap_max = 0;
-		tss->io_bitmap_owner = NULL;
-		tss->io_bitmap_max = 0;
-		tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET;
 		put_cpu();
 	}
 
@@ -458,34 +455,19 @@ __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 			hard_enable_TSC();
 	}
 
-	if (!test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
+	if (test_tsk_thread_flag(next_p, TIF_IO_BITMAP)) {
 		/*
-		 * Disable the bitmap via an invalid offset. We still cache
-		 * the previous bitmap owner and the IO bitmap contents:
+		 * Copy the relevant range of the IO bitmap.
+		 * Normally this is 128 bytes or less:
 		 */
-		tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET;
-		return;
-	}
-
-	if (likely(next == tss->io_bitmap_owner)) {
+		memcpy(tss->io_bitmap, next->io_bitmap_ptr,
+		       max(prev->io_bitmap_max, next->io_bitmap_max));
+	} else if (test_tsk_thread_flag(prev_p, TIF_IO_BITMAP)) {
 		/*
-		 * Previous owner of the bitmap (hence the bitmap content)
-		 * matches the next task, we dont have to do anything but
-		 * to set a valid offset in the TSS:
+		 * Clear any possible leftover bits:
 		 */
-		tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET;
-		return;
+		memset(tss->io_bitmap, 0xff, prev->io_bitmap_max);
 	}
-	/*
-	 * Lazy TSS's I/O bitmap copy. We set an invalid offset here
-	 * and we let the task to get a GPF in case an I/O instruction
-	 * is performed. The handler of the GPF will verify that the
-	 * faulting task has a valid I/O bitmap and, it true, does the
-	 * real copy and restart the instruction. This will save us
-	 * redundant copies when the currently switched task does not
-	 * perform any I/O during its timeslice.
-	 */
-	tss->x86_tss.io_bitmap_base = INVALID_IO_BITMAP_OFFSET_LAZY;
 }
 
 /*
```
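One subtlety in the new copy path: the memcpy length is
max(prev->io_bitmap_max, next->io_bitmap_max), not just next's length.
A task's bitmap buffer is initialized all-ones ("every port denied") by
sys_ioperm(), and cleared bits grant access, so any permissive bytes
the previous task left in the TSS beyond the next task's range get
overwritten by the 0xff tail of the next task's buffer; the else-if
branch similarly re-poisons just the range prev used when next has no
bitmap at all. The sketch below is a hypothetical userspace demo of
that reasoning, with a MAX macro standing in for the kernel's max():

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define IO_BITMAP_BYTES 128
#define MAX(a, b) ((a) > (b) ? (a) : (b))	/* stand-in for kernel max() */

int main(void)
{
	uint8_t tss_bitmap[IO_BITMAP_BYTES];
	uint8_t prev_bm[IO_BITMAP_BYTES], next_bm[IO_BITMAP_BYTES];

	/* Task bitmaps start all-ones ("every port denied"); a cleared
	 * bit grants access to the corresponding port. */
	memset(prev_bm, 0xff, sizeof(prev_bm));
	memset(next_bm, 0xff, sizeof(next_bm));

	unsigned prev_max = 64, next_max = 16;
	prev_bm[40] = 0x00;	/* prev may use ports 320..327 */
	next_bm[4] = 0x00;	/* next may use ports 32..39 */

	/* prev was running, so its bitmap is live in the TSS. */
	memcpy(tss_bitmap, prev_bm, prev_max);

	/* Switch to next, copying max() bytes as the patch does. */
	memcpy(tss_bitmap, next_bm, MAX(prev_max, next_max));

	/*
	 * prev's stale "allowed" byte has been overwritten by the 0xff
	 * tail of next's buffer. A copy of only next_max bytes would
	 * have left tss_bitmap[40] == 0x00, i.e. next could have
	 * accessed prev's ports.
	 */
	assert(tss_bitmap[40] == 0xff);
	assert(tss_bitmap[4] == 0x00);
	printf("no stale IO permissions leaked\n");
	return 0;
}
```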