path: root/include/asm-x86_64/processor.h
Commit message | Author | Date | Files | Lines

* i386/x86_64: move headers to include/asm-x86 | Thomas Gleixner | 2007-10-11 | 1 file, -439/+0
  Move the headers to include/asm-x86 and fixup the header install make rules.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

* Remove unnecessary cast in prefetch() | Serge Belyshev | 2007-10-05 | 1 file, -1/+1
  It is OK to call the prefetch() function with a NULL argument, as
  specifically commented in include/linux/prefetch.h. But in standard C it
  is invalid to dereference a NULL pointer (see C99 standard 6.5.3.2
  paragraph 4 and note #84). prefetch() has a memory reference for its
  argument, and newer gcc versions (4.3 and above) use that to conclude
  that the "x" argument is non-null, wreaking havoc everywhere prefetch()
  was inlined. Fixed by removing the cast and changing the asm constraint.

  [ It seems in theory gcc 4.2 could miscompile this too, although no cases
    are known. In 2.6.24 we should probably switch to __builtin_prefetch()
    instead, but this is a simpler fix for now. -- AK ]

  Signed-off-by: Serge Belyshev <belyshev@depni.sinp.msu.ru>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

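  A minimal sketch (not the actual kernel patch) of the hazard described
  above and of the __builtin_prefetch() alternative mentioned in the
  message; prefetch_cast() and prefetch_builtin() are illustrative names:

      /* Dereferencing the argument in the "m" constraint tells GCC that *x
       * is accessed, so once this is inlined GCC may assume x != NULL and
       * delete later NULL checks in the caller. */
      static inline void prefetch_cast(const void *x)
      {
              asm volatile("prefetcht0 %0" : : "m" (*(const char *)x));
      }

      /* The builtin carries no such non-NULL implication for the optimizer. */
      static inline void prefetch_builtin(const void *x)
      {
              __builtin_prefetch(x);
      }
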
* x86: Replace NSC/Cyrix specific chipset access macros by inlined functions. | Juergen Beisert | 2007-07-22 | 1 file, -11/+0
  Due to index register access ordering problems, a line like this fails
  (and does nothing) when using macros:

      setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x88);

  With inlined functions this line works as expected.

  Note about a side effect: it seems that on Geode GX1 based systems the
  "suspend on halt power saving feature" was never enabled due to this
  wrong macro expansion. With inlined functions it will be enabled, but
  this will stop the TSC when the CPU runs into a HLT instruction. The
  kernel then outputs something like this:

      Clocksource tsc unstable (delta = -472746897 ns)

  This is the 3rd version of this patch.
  - Adding missed arch/i386/kernel/cpu/mtrr/state.c
    Thanks to Andres Salomon
  - Adding some big fat comments into the new header file
    Suggested by Andi Kleen

  AK: fixed x86-64 compilation

  Signed-off-by: Juergen Beisert <juergen@kreuzholzen.de>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

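  A sketch of why the macro form breaks, assuming the classic Cyrix
  index/data port pair (index written to 0x22, data accessed at 0x23) and
  outb()/inb() as in <asm/io.h>; the _inl names are illustrative, not the
  kernel's:

      /* Macro versions: the data argument of setCx86() expands *inside* the
       * macro body, so the nested getCx86() re-selects the index register and
       * reads the data port between the outer index write and the outer data
       * write. The final data write is no longer immediately preceded by the
       * index write it needs, and the chip ignores it. */
      #define getCx86(reg)       ({ outb((reg), 0x22); inb(0x23); })
      #define setCx86(reg, data) do { outb((reg), 0x22); outb((data), 0x23); } while (0)

      /* Inline-function versions: the argument expression (including any
       * nested getCx86() call) is fully evaluated *before* the body runs, so
       * each data access is paired with its own index write. */
      static inline unsigned char getCx86_inl(unsigned char reg)
      {
              outb(reg, 0x22);
              return inb(0x23);
      }

      static inline void setCx86_inl(unsigned char reg, unsigned char data)
      {
              outb(reg, 0x22);
              outb(data, 0x23);
      }
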
* x86: remove support for the Rise CPU | Adrian Bunk | 2007-07-22 | 1 file, -1/+0
  The Rise CPUs were only very short-lived, and there are no reports of
  anyone both owning one and running Linux on it. Googling for the printk
  string "CPU: Rise iDragon" didn't find any dmesg available online. If it
  turns out that, against all expectations, there are actually users,
  reverting this patch would be easy. This patch will make the kernel
  images smaller by a few bytes for all i386 users.

  Signed-off-by: Adrian Bunk <bunk@stusta.de>
  Acked-by: Dave Jones <davej@redhat.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* Use a new CPU feature word to cover features that are spread around | Venki Pallipadi | 2007-07-12 | 1 file, -0/+1
  Some Intel features are spread around in different CPUID leafs like 0x5,
  0x6 and 0xA. Make this feature detection code common across i386 and
  x86_64. Display the Intel Dynamic Acceleration feature in /proc/cpuinfo.
  This feature will be enabled automatically by the current acpi-cpufreq
  driver. Refer to the Intel Software Developer's Manual for more details
  about the feature.

  Thanks to hpa (H. Peter Anvin) for making the code that detects the
  scattered features data-driven.

  Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
  Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* Unify the CPU features vectors between i386 and x86-64 | H. Peter Anvin | 2007-07-12 | 1 file, -2/+0
  Unify the handling of the CPU features vectors between i386 and x86-64.
  This also adopts the collapsing of features which are required at
  compile-time into constant tests from x86-64 to i386.

  Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* [PATCH] x86: Clean up x86 control register and MSR macros (corrected) | H. Peter Anvin | 2007-05-02 | 1 file, -31/+0
  This patch is based on Rusty's recent cleanup of the EFLAGS-related
  macros; it extends the same kind of cleanup to control registers and
  MSRs. It also unifies these between i386 and x86-64; at least with
  regards to MSRs, the two had definitely gotten out of sync.

  Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  Signed-off-by: Andi Kleen <ak@suse.de>

* [PATCH] x86-64: fix arithmetic in comment | Avi Kivity | 2007-05-02 | 1 file, -1/+1
  The xmm space on x86_64 is 256 bytes.

  Signed-off-by: Avi Kivity <avi@qumranet.com>
  Signed-off-by: Andi Kleen <ak@suse.de>

* [PATCH] x86-64: Use X86_EFLAGS_IF in x86-64/irqflags.h. | Andi Kleen | 2007-05-02 | 1 file, -21/+1
  As per the i386 patch: move X86_EFLAGS_IF et al out to a new header,
  processor-flags.h, so we can include it from irqflags.h and use it in
  raw_irqs_disabled_flags(). As a side effect, we can now use these flags
  in .S files.

  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  Signed-off-by: Andi Kleen <ak@suse.de>

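  The helper named above boils down to a single flag test; a hedged sketch
  of the idea (0x200 is the architectural Interrupt Flag bit, the rest is a
  sketch rather than the verbatim header):

      #define X86_EFLAGS_IF  0x00000200      /* Interrupt Flag (bit 9) */

      /* Interrupts are disabled iff IF is clear in the saved flags value. */
      static inline int raw_irqs_disabled_flags(unsigned long flags)
      {
              return !(flags & X86_EFLAGS_IF);
      }
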
* [PATCH] x86-64: Fix interrupt race in idle callback (3rd try) | Venkatesh Pallipadi | 2006-12-07 | 1 file, -0/+8
  Idle callbacks have some races: enter_idle() sets isidle, and subsequent
  interrupts can happen on that CPU before the CPU goes to idle. Due to
  this, an IDLE_END can get called before IDLE_START. To avoid these races,
  disable interrupts before enter_idle() and make sure that all idle
  routines do not enable interrupts before entering idle.

  Note that poll_idle() still has this race, as it has to enable interrupts
  before going to idle. But all other idle routines have the race fixed.

  Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
  Signed-off-by: Andi Kleen <ak@suse.de>

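  A rough sketch of the ordering the fix describes. Only enter_idle() is
  named in the message above; __exit_idle() is an assumed counterpart, the
  other helpers are standard kernel idiom, and the loop is deliberately
  simplified:

      static void idle_loop_sketch(void)
      {
              while (!need_resched()) {
                      local_irq_disable();    /* close the race window first */
                      enter_idle();           /* IDLE_START notification */
                      /*
                       * The idle routine must not re-enable interrupts before
                       * halting; "sti; hlt" (safe_halt) enables and halts
                       * atomically, so no interrupt can slip in between.
                       */
                      safe_halt();
                      __exit_idle();          /* assumed IDLE_END notification */
              }
      }
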
* ACPI: Processor native C-states using MWAIT | Venkatesh Pallipadi | 2006-10-14 | 1 file, -0/+2
  Intel processors starting with the Core Duo support processor native
  C-states using the MWAIT instruction.
  Refer: Intel Architecture Software Developer's Manual,
  http://www.intel.com/design/Pentium4/manuals/253668.htm

  Platform firmware exports the support for native C-states to the OS using
  the ACPI _PDC and _CST methods.
  Refer: Intel Processor Vendor-Specific ACPI: Interface Specification,
  http://www.intel.com/technology/iapc/acpi/downloads/302223.htm

  With processor native C-states, we use the MWAIT instruction on the
  processor to enter different C-states (C1, C2, C3). We won't use the
  special IO ports to enter C-states, and no SMM mode etc. is required.
  Overall this means better C-state support.

  One major advantage of using MWAIT for all C-states is that, together
  with the "treat interrupt as break event" feature of MWAIT, we can now
  get accurate timing for the time spent in the C1, C2, ... states.

  Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Len Brown <len.brown@intel.com>

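  A hedged sketch of the MONITOR/MWAIT pair this relies on. The .byte
  sequences are the instruction encodings (older assemblers lack the
  mnemonics); the function names are made up and the hint values are the
  caller's choice:

      /* MONITOR arms an address range; MWAIT then idles, with EAX carrying a
       * C-state hint and ECX extensions such as "treat interrupt as break
       * event". A write to the monitored range or an interrupt wakes the CPU. */
      static inline void monitor_sketch(const void *addr, unsigned long ecx, unsigned long edx)
      {
              asm volatile(".byte 0x0f, 0x01, 0xc8"           /* monitor */
                           : : "a" (addr), "c" (ecx), "d" (edx));
      }

      static inline void mwait_sketch(unsigned long eax, unsigned long ecx)
      {
              asm volatile(".byte 0x0f, 0x01, 0xc9"           /* mwait */
                           : : "a" (eax), "c" (ecx));
      }
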
* [PATCH] x86_64: Save original IST values for checking stack addresses | Keith Owens | 2006-08-31 | 1 file, -0/+6
  The values in init_tss.ist[] can change when an IST event occurs. Save
  the original IST values for checking stack addresses when debugging or
  doing stack traces.

  Signed-off-by: Keith Owens <kaos@ocs.com.au>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

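  A minimal sketch of the idea; the structure and variable names are
  guesses, not a quote of the patch (the hardware TSS has seven IST slots):

      /* Keep an unmodified per-CPU copy of the IST stack tops so stack-walk
       * and debugging code can still classify addresses after init_tss.ist[]
       * has been adjusted by an IST event. */
      struct orig_ist {
              unsigned long ist[7];
      };

      DECLARE_PER_CPU(struct orig_ist, orig_ist);
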
* [PATCH] x86_64: moving phys_proc_id and cpu_core_id to cpuinfo_x86 | Rohit Seth | 2006-06-26 | 1 file, -0/+4
  Most of the fields of cpuinfo are defined in the cpuinfo_x86 structure.
  This patch moves the phys_proc_id and cpu_core_id for each processor to
  the cpuinfo_x86 structure as well.

  Signed-off-by: Rohit Seth <rohitseth@google.com>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] i386/x86-64: Emulate CPUID4 on AMD | Andi Kleen | 2006-06-26 | 1 file, -0/+1
  Intel systems report the cache level data from CPUID 4 in sysfs. Add a
  CPUID 4 emulation for AMD CPUs to report the same information for them.
  This allows programs to read this information in a uniform way. The AMD
  way to report this is less flexible, so some assumptions are hardcoded
  (e.g. no L3).

  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* Don't include linux/config.h from anywhere else in include/ | David Woodhouse | 2006-04-26 | 1 file, -1/+0
  Signed-off-by: David Woodhouse <dwmw2@infradead.org>

* [PATCH] arch/i386/kernel/microcode.c: remove the obsolete microcode_ioctl | Adrian Bunk | 2006-03-28 | 1 file, -3/+0
  Nowadays, even Debian stable ships a microcode_ctl utility recent enough
  to no longer use this ioctl.

  Signed-off-by: Adrian Bunk <bunk@stusta.de>
  Acked-by: Tigran Aivazian <tigran_aivazian@symantec.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] sched: new sched domain for representing multi-core | Siddha, Suresh B | 2006-03-27 | 1 file, -0/+4
  Add a new sched domain for representing multi-core with shared caches
  between cores. Consider a dual-package system, each package containing
  two cores and with the last level cache shared between cores within a
  package. If there are two runnable processes, with this patch those two
  processes will be scheduled on different packages.

  On such systems, with this patch we have observed an 8% perf improvement
  with the specJBB (2 warehouse) benchmark and a 35% improvement with
  CFP2000 rate (with 2 users).

  This new domain will come into play only on multi-core systems with
  shared caches. On other systems, this sched domain will be removed by the
  domain degeneration code. This new domain can also be used for
  implementing a power savings policy (see the OLS 2005 CMP kernel
  scheduler paper for more details; I will post another patch for the power
  savings policy soon).

  Most of the arch/* file changes are for the cpu_coregroup_map()
  implementation.

  Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
  Cc: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] x86_64: Flexmap for 32bit and randomized mappings for 64bit | Andi Kleen | 2006-01-17 | 1 file, -0/+2
  Another try at this. For 32bit, follow the 32bit implementation from
  Ingo: mappings grow down from the end of the stack now and vary randomly
  by 1GB. Randomized mappings for 64bit just vary the normal mmap break by
  1TB. I didn't bother implementing full flex mmap for 64bit because it
  shouldn't be needed there.

  Cc: mingo@elte.hu
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] x86_64: Allow nesting of int3 by default for kprobes | Andi Kleen | 2006-01-16 | 1 file, -7/+0
  This unbreaks recursive kprobes, which didn't work anymore due to an
  earlier patch which converted the debug entry point to use an IST. This
  also allows nesting of the debug entry point too.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] amd64: task_pt_regs() | Al Viro | 2006-01-12 | 1 file, -2/+2
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] x86_64: Inclusion of ScaleMP vSMP architecture patches - vsmp_align | Ravikiran G Thirumalai | 2006-01-12 | 1 file, -0/+6
  vSMP specific alignment patch to
  1. Define INTERNODE_CACHE_SHIFT for vSMP
  2. Use this for alignment of critical structures
  3. Use INTERNODE_CACHE_SHIFT for ARCH_MIN_TASKALIGN, and let the slab
     align task_struct allocations to the internode cacheline size
  4. Introduce and use ARCH_MIN_MMSTRUCT_ALIGN for mm_struct slab
     allocations.

  Signed-off-by: Ravikiran Thirumalai <kiran@scalemp.com>
  Signed-off-by: Shai Fultheim <shai@scalemp.com>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] x86_64: Move int 3 handler to debug stack and allow to increase it. | Jan Beulich | 2006-01-12 | 1 file, -2/+0
  This
  - switches the INT3 handler to run on an IST stack (to cope with
    breakpoints set by a kernel debugger on places where the kernel's %gs
    base hasn't been set up, yet); the IST stack used is shared with the
    INT1 handler's
    [AK: this also allows setting a kprobe on the interrupt/exception entry
    points]
  - allows nesting of INT1/INT3 handlers so that one can, with a kernel
    debugger, debug (at least) the user-mode portions of the INT1/INT3
    handling; the nesting isn't actively enabled here since a kernel-
    debugger-free kernel doesn't need it

  Signed-Off-By: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] x86-64/i386: Intel HT, Multi core detection fixes | Siddha, Suresh B | 2005-11-15 | 1 file, -1/+3
  Fields obtained through cpuid vector 0x1 (ebx[16:23]) and vector 0x4
  (eax[14:25], eax[26:31]) indicate the maximum values and might not always
  be the same as what is available and what the OS sees. So make sure the
  "siblings" and "cpu cores" values in /proc/cpuinfo reflect the values as
  seen by the OS instead of what the cpuid instruction says. This will also
  fix the buggy BIOS cases (for example, where cpuid on a single core cpu
  says there are "2" siblings even when HT is disabled in the BIOS:
  http://bugzilla.kernel.org/show_bug.cgi?id=4359).

  Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

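  A hedged sketch of where the "maximum" sibling count in question comes
  from; the helper name is made up, cpuid() is used as in the kernel
  headers:

      /* CPUID leaf 1, EBX bits 23:16 report the *maximum* number of logical
       * processors per physical package, not how many are actually enabled,
       * which is why /proc/cpuinfo should not trust it directly. */
      static unsigned int max_siblings_from_cpuid(void)
      {
              unsigned int eax, ebx, ecx, edx;

              cpuid(1, &eax, &ebx, &ecx, &edx);
              return (ebx >> 16) & 0xff;
      }
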
* [PATCH] x86-64: Set the stack pointer correctly in init_thread and init_tss | Andi Kleen | 2005-09-12 | 1 file, -1/+7
  Signed-off-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] Replace extern inline with static inline in asm-x86_64/* | Adrian Bunk | 2005-09-12 | 1 file, -2/+2
  They should be identical in the kernel now, but this makes it consistent
  with other code.

  Signed-off-by: Adrian Bunk <bunk@stusta.de>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] x86_64: prefetchw() can fall back to prefetch() if !3DNOW | Eric Dumazet | 2005-09-08 | 1 file, -1/+1
  If the cpu lacks the 3DNOW feature, we can use a normal prefetcht0
  instruction instead of a NOP5. "prefetchw (%rxx)" and "prefetcht0 (%rxx)"
  have the same length, ranging from 3 to 5 bytes depending on the
  register. So this patch even helps AMD64, shortening the length of the
  code.

  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Acked-by: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

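  A sketch of how such a fallback is typically expressed with the kernel's
  instruction-patching ("alternatives") macros; illustrative, not
  necessarily the literal patch:

      /* At boot, the alternatives code patches in "prefetchw" on CPUs that
       * advertise X86_FEATURE_3DNOW; everyone else keeps the plain
       * "prefetcht0", which has the same encoded length. */
      static inline void prefetchw_sketch(void *x)
      {
              alternative_input("prefetcht0 (%1)",
                                "prefetchw (%1)",
                                X86_FEATURE_3DNOW,
                                "r" (x));
      }
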
* [PATCH] i386: cleanup serialize msr | Zachary Amsden | 2005-09-05 | 1 file, -0/+5
  i386 arch cleanup. Introduce the serialize macro to serialize processor
  state. Why the microcode update needs it I am not quite sure, since
  wrmsr() is already a serializing instruction, but it is a microcode
  update, so I will keep the semantic the same, since this could be a
  timing workaround. As far as I can tell, this has always been there since
  the original microcode update source.

  Signed-off-by: Zachary Amsden <zach@vmware.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

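  A hedged sketch of what such a helper can look like; CPUID is
  architecturally serializing, which is the property being used here. The
  name is illustrative and the real kernel definition may differ:

      /* Executing CPUID forces all previously issued instructions to complete
       * before execution continues, serializing processor state. */
      static inline void serialize_cpu_sketch(void)
      {
              unsigned int eax = 0, ebx, ecx, edx;

              /* leaf 0 is always valid; the outputs are discarded */
              asm volatile("cpuid"
                           : "+a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
                           : : "memory");
      }
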
* [PATCH] i386 / desc_empty macro is incorrect | Zachary Amsden | 2005-08-16 | 1 file, -1/+1
  Chuck Ebbert noticed that the desc_empty macro is incorrect. Fix it.
  Thankfully, this is not used as a security check, but it can falsely
  overwrite TLS segments with carefully chosen base / limits. I do not
  believe this is an issue in practice, but it is a kernel bug.

  Signed-off-by: Zachary Amsden <zach@vmware.com>
  Signed-off-by: Chris Wright <chrisw@osdl.org>
  [ x86-64 had the same problem, and the same fix. Linus ]
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

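  A sketch of the kind of bug described above, assuming the check summed
  the two descriptor words instead of OR-ing them; the macro bodies and the
  a/b field names are illustrative, not a quote of the patch:

      /* With '+', two nonzero halves can cancel out (a == -b modulo 2^32), so
       * a live descriptor with a carefully chosen base/limit looks "empty". */
      #define desc_empty_buggy(desc)  (!((desc)->a + (desc)->b))

      /* With '|', the test is true only when both words really are zero. */
      #define desc_empty_fixed(desc)  (!((desc)->a | (desc)->b))
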
* [PATCH] xen: x86_64: Add macro for debugreg | Vincent Hanquez | 2005-06-23 | 1 file, -0/+8
  Add two macros to set and get a debug register on x86_64. This is useful
  for Xen because it will only need to redefine each macro to a hypervisor
  call.

  Signed-off-by: Vincent Hanquez <vincent.hanquez@cl.cam.ac.uk>
  Cc: Ian Pratt <m+Ian.Pratt@cl.cam.ac.uk>
  Cc: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

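  A hedged sketch of what such accessor macros look like on bare metal
  (token pasting builds the %dbN register name); a paravirtualized port can
  then redefine the same macros as hypercalls. This is a guess at the
  native form, not a quote of the patch:

      /* Native access: move to/from the debug register named by 'register'. */
      #define get_debugreg(var, register)                             \
              asm volatile("movq %%db" #register ", %0" : "=r" (var))

      #define set_debugreg(value, register)                           \
              asm volatile("movq %0, %%db" #register : : "r" (value))
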
* [PATCH] x86_64: TASK_SIZE fixes for compatibility mode processes | Suresh Siddha | 2005-06-22 | 1 file, -5/+6
  The appended patch sets up the compatibility mode TASK_SIZE properly.
  This fixes at least three known bugs that can be encountered while
  running compatibility mode apps.

  a) A malicious 32bit app can have an elf section at 0xffffe000. During
     exec of this app, we will have a memory leak as insert_vm_struct() is
     not checking for the return value in syscall32_setup_pages() and thus
     not freeing the vma allocated for the vsyscall page. And instead of
     the exec failing (as it has addresses > TASK_SIZE), we were allowing
     it to succeed previously.

  b) With a 32bit app, hugetlb_get_unmapped_area/arch_get_unmapped_area may
     return addresses beyond 32 bits, ultimately causing corruption because
     of wrap-around and resulting in SEGFAULT, instead of returning ENOMEM.

  c) A 32bit app doing the mmap below will now fail:

     mmap((void *)(0xFFFFE000UL), 0x10000UL, PROT_READ|PROT_WRITE,
          MAP_FIXED|MAP_PRIVATE|MAP_ANON, 0, 0);

  Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
  Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
  Cc: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

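  A hedged sketch of the resulting shape of the definitions; the constants
  and the personality check are assumptions drawn from the description
  above, not a verbatim quote of the patch:

      /* 64-bit tasks keep the full (guard-paged) 47-bit limit; 32-bit
       * compatibility tasks are capped so nothing above 32 bits (or above the
       * 3GB personality limit) can be handed out by mmap and friends. */
      #define TASK_SIZE64       (0x800000000000UL - 4096)
      #define IA32_PAGE_OFFSET  ((current->personality & ADDR_LIMIT_3GB) ? \
                                 0xc0000000UL : 0xFFFFe000UL)

      #define TASK_SIZE  (test_thread_flag(TIF_IA32) ? IA32_PAGE_OFFSET : TASK_SIZE64)
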
* [PATCH] x86_64: Remove x86_apicid field | Andi Kleen | 2005-05-17 | 1 file, -1/+0
  Remove the x86_apicid field.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] x86_64: Add a guard page at the end of the 47bit address space | Andi Kleen | 2005-05-17 | 1 file, -2/+2
  This works around a bug in the AMD K8 CPUs.

  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] x86_64: Rename the extended cpuid level field | Andi Kleen | 2005-04-17 | 1 file, -1/+1
  It was confusingly named.

  Signed-off-by: Andi Kleen <ak@suse.de>

  DESC
  x86_64: Switch SMP bootup over to new CPU hotplug state machine
  EDESC
  From: "Andi Kleen" <ak@suse.de>

  This will allow hotplug CPU in the future and in general cleans up a lot
  of crufty code. It also should plug some races that the old hackish way
  introduces. Remove one old race workaround in NMI watchdog setup that is
  not needed anymore.

  I removed the old total sum of bogomips reporting code. The brag value of
  BogoMips has been greatly devalued in the last years on the open market.

  Real CPU hotplug will need some more work, but the infrastructure for it
  is there now.

  One drawback: the new TSC sync algorithm is less accurate than before.
  The old way of zeroing TSCs is too intrusive to do later. Instead the TSC
  of the BP is duplicated now, which is less accurate.

  Cc: <rusty@rustcorp.com.au>
  Cc: <mingo@elte.hu>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* Linux-2.6.12-rc2 (tag: v2.6.12-rc2) | Linus Torvalds | 2005-04-17 | 1 file, -0/+462
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!