path: root/arch/sparc64/kernel/smp.c
Commit message (Author, Date; Files, Lines)
* sparc,sparc64: unify kernel/ (Sam Ravnborg, 2008-12-04; 1 file, -1412/+0)
  o Move all files from sparc64/kernel/ to sparc/kernel - rename as appropriate
  o Update sparc/Makefile to the changes
  o Update sparc/kernel/Makefile to include the sparc64 files
  NOTE: This commit changes link order on sparc64! Link order had to change for either of sparc32 and sparc64. And assuming sparc64 sees more testing than sparc32, change link order on sparc64, where issues will be caught faster.
  Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: Make %pil level 15 a pseudo-NMI. (David S. Miller, 2008-12-04; 1 file, -2/+2)
  So that we can profile code even in a local_irq_disable() section, only write 14 (instead of 15) into the %pil register to disable IRQs. This allows PIL level 15 to serve as a pseudo-NMI.
  Signed-off-by: David S. Miller <davem@davemloft.net>
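  A minimal sketch of the idea, in the style of the sparc64 irqflags helpers; the PIL_* names mirror later sparc code and the helper name follows later kernels, so treat them as illustrative:

      #define PIL_NORMAL_MAX  14  /* highest level masked by local_irq_disable() */
      #define PIL_NMI         15  /* still delivered: the pseudo-NMI level */

      /* Disable only "normal" IRQs: raise %pil to 14, not 15, so that
       * level-15 interrupts (e.g. a profiling timer) still get through. */
      static inline void arch_local_irq_disable(void)
      {
          __asm__ __volatile__("wrpr %0, %%pil"
                               : /* no outputs */
                               : "i" (PIL_NORMAL_MAX)
                               : "memory");
      }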
* sparc64: Stop using memory barriers for atomics and locks. (David S. Miller, 2008-12-04; 1 file, -6/+5)
  The kernel always executes in the TSO memory model now, so none of this stuff is necessary any more.
  With helpful feedback from Nick Piggin.
  Signed-off-by: David S. Miller <davem@davemloft.net>
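  For example, a lock release no longer needs a trailing membar under TSO; a sketch of the simplified unlock, assuming the usual sparc64 byte lock (names illustrative):

      typedef struct {
          volatile unsigned char lock;    /* 0 = free, non-zero = held */
      } arch_spinlock_t;

      /* TSO never reorders a store ahead of earlier loads or stores, so a
       * plain byte store releases the lock; the old membar #StoreStore |
       * #LoadStore sequence before the store can simply be dropped. */
      static inline void arch_spin_unlock(arch_spinlock_t *lock)
      {
          __asm__ __volatile__("stb %%g0, [%0]"
                               : /* no outputs */
                               : "r" (lock)
                               : "memory");
      }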
* sparc64 trivial section misannotations (Al Viro, 2008-11-30; 1 file, -2/+2)
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* sparc64: Add missing notify_cpu_starting() call. (David S. Miller, 2008-10-13; 1 file, -0/+4)
  Commit e545a6140b698b2494daf0b32107bdcc5e901390 ("kernel/cpu.c: create a CPU_STARTING cpu_chain notifier") added a notify_cpu_starting() notifier event, and hit every arch except sparc64. Fix that missed case.
  Signed-off-by: David S. Miller <davem@davemloft.net>
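  A sketch of where the call lands in the secondary-CPU bringup path (surrounding setup elided, shapes illustrative):

      #include <linux/cpu.h>
      #include <linux/smp.h>

      void smp_callin(void)
      {
          int cpuid = smp_processor_id();

          /* ... trap table, MMU and per-cpu timer setup elided ... */

          notify_cpu_starting(cpuid);    /* the previously missing event */

          /* ... the cpu then marks itself online and enters the idle loop ... */
      }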
* sparc64: Prevent sparc64 from invoking irq handlers on offline CPUs (Paul E. McKenney, 2008-09-03; 1 file, -4/+4)
  Make sparc64 refrain from clearing a given to-be-offlined CPU's bit in the cpu_online_mask until it has processed pending irqs. This change prevents other CPUs from being blindsided by an apparently offline CPU nevertheless changing globally visible state.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: Fix IPI call locking. (David S. Miller, 2008-09-03; 1 file, -6/+4)
  When I switched sparc64 over to the generic helpers for smp_call_function(), I didn't convert the dinky call_lock we had. Use ipi_call_lock() and ipi_call_unlock().
  Signed-off-by: David S. Miller <davem@davemloft.net>
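  A sketch of the conversion, wrapped in a hypothetical helper; the 2008-era cpu_online_map API is assumed:

      #include <linux/cpumask.h>
      #include <linux/smp.h>

      static void mark_cpu_online(int cpuid)    /* illustrative wrapper */
      {
          /* Serialize against concurrent smp_call_function() senders while
           * this cpu flips its online bit; a private sparc64 call_lock
           * previously did this job. */
          ipi_call_lock();
          cpu_set(cpuid, cpu_online_map);
          ipi_call_unlock();
      }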
* sparc64: Don't MAGIC_SYSRQ ifdef smp_fetch_global_regs and support code. (David S. Miller, 2008-08-10; 1 file, -4/+0)
  Based upon a report and initial patch by Friedrich Oslage.
  The intention is to provide this facility for __trigger_all_cpu_backtrace even if MAGIC_SYSRQ is not set. The only part that should have MAGIC_SYSRQ ifdef protection is the sparc_globalreg_op sysrq registration and immediate code.
  Signed-off-by: David S. Miller <davem@davemloft.net>
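  A sketch of the resulting split: only the sysrq registration keeps the ifdef, so __trigger_all_cpu_backtrace can always reach the dump code (initcall placement illustrative; sparc_globalreg_op itself is sketched under the 2008-05-20 entry below):

      #include <linux/init.h>
      #include <linux/sysrq.h>

      /* smp_fetch_global_regs() and its support code build unconditionally */

      #ifdef CONFIG_MAGIC_SYSRQ
      static int __init sparc_globalreg_init(void)
      {
          return register_sysrq_key('y', &sparc_globalreg_op);
      }
      core_initcall(sparc_globalreg_init);
      #endif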
* sparc64: Remove all cpumask_t local variables in xcall dispatch. (David S. Miller, 2008-08-05; 1 file, -24/+9)
  All of the xcall delivery implementation is cpumask agnostic, so we can pass around pointers to const cpumask_t objects everywhere.
  The sad remaining case is the argument to arch_send_call_function_ipi().
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: Kill error_mask from hypervisor_xcall_deliver(). (David S. Miller, 2008-08-05; 1 file, -13/+7)
  It can eat up a lot of stack space when NR_CPUS is large. We retain some of its functionality by reporting at least one of the cpus which are seen in error state.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: Build cpu list and mondo block at top-level xcall_deliver(). (David S. Miller, 2008-08-05; 1 file, -44/+69)
  Then modify all of the xcall dispatch implementations to get passed and use this information.
  Now none of the xcall dispatch implementations needs to be mindful of details such as "is current cpu in the list?" and "is cpu online?"
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: Disable local interrupts around xcall_deliver_impl() invocation. (David S. Miller, 2008-08-05; 1 file, -17/+15)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: Make all xcall_deliver's go through common helper function. (David S. Miller, 2008-08-05; 1 file, -4/+9)
  This just facilitates the next changeset where we'll be building the cpu list and mondo block in this helper function.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: Make smp_cross_call_masked() take a cpumask_t pointer. (David S. Miller, 2008-08-04; 1 file, -7/+11)
  Ideally this could be simplified further such that we could pass the pointer down directly into the xcall_deliver() implementation. But if we do that we need to do the "cpu_online(cpu)" and "cpu != self" checks down in those functions.
  Signed-off-by: David S. Miller <davem@davemloft.net>
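  The resulting shape, as a sketch; the argument names are a guess from neighboring commits:

      /* Callers now hand over a pointer rather than a cpumask_t by value,
       * keeping large NR_CPUS masks off the stack; the cpu_online(cpu) and
       * cpu != self filtering still happens at this level. */
      static void smp_cross_call_masked(unsigned long *func, u32 ctx,
                                        u64 data1, u64 data2,
                                        const cpumask_t *mask);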
* sparc64: Directly call xcall_deliver() in smp_start_sync_tick_client. (David S. Miller, 2008-08-04; 1 file, -4/+2)
  We know the cpu is online and not the current cpu here.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: Call xcall_deliver() directly in some cases. (David S. Miller, 2008-08-04; 1 file, -23/+10)
  For these cases the callers make sure:
  1) The cpus indicated are online.
  2) The current cpu is not in the list of indicated cpus.
  Therefore we can pass a pointer to the mask directly.
  One of the motivations in this transformation is to make use of "&cpumask_of_cpu(cpu)" which evaluates to a pointer to constant data in the kernel and thus takes up no stack space.
  Hopefully someone in the future will change the interface of arch_send_call_function_ipi() such that it passes a const cpumask_t pointer so that this will optimize even further.
  Signed-off-by: David S. Miller <davem@davemloft.net>
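  A sketch of the call pattern this enables; xcall_deliver()'s argument layout is assumed from the surrounding commits:

      /* &cpumask_of_cpu(cpu) points at constant kernel data, so no
       * cpumask_t copy lands on the stack even with a huge NR_CPUS. */
      static void send_one_xcall(u64 data0, int cpu)    /* illustrative */
      {
          /* caller guarantees: cpu is online and is not the current cpu */
          xcall_deliver(data0, 0, 0, &cpumask_of_cpu(cpu));
      }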
* sparc64: Use cpumask_t pointers and for_each_cpu_mask_nr() in xcall_deliver. (David S. Miller, 2008-08-04; 1 file, -18/+21)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: Use xcall_deliver() consistently. (David S. Miller, 2008-08-04; 1 file, -23/+17)
  There remained some spots still vectoring to the appropriate *_xcall_deliver() function manually.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: Use function pointer for cross-call sending. (David S. Miller, 2008-08-04; 1 file, -6/+13)
  Initialize it using the smp_setup_processor_id() hook.
  Signed-off-by: David S. Miller <davem@davemloft.net>
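  A sketch of the selection; tlb_type and the cheetah/hypervisor dispatch names appear elsewhere in this log, while the spitfire variant and the exact pointer signature are assumptions:

      static void (*xcall_deliver_impl)(u64, u64, u64, const cpumask_t *);

      void __init smp_setup_processor_id(void)
      {
          if (tlb_type == spitfire)
              xcall_deliver_impl = spitfire_xcall_deliver;
          else if (tlb_type == cheetah || tlb_type == cheetah_plus)
              xcall_deliver_impl = cheetah_xcall_deliver;
          else
              xcall_deliver_impl = hypervisor_xcall_deliver;
      }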
* sparc64: Kill smp_report_regs(). (David S. Miller, 2008-07-31; 1 file, -6/+0)
  All the call sites are #if 0'd out and we have a much more useful global cpu dumping facility these days. smp_report_regs() is way too verbose to be usable.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: Convert to generic helpers for IPI function calls. (David S. Miller, 2008-07-18; 1 file, -70/+17)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* smp_call_function: get rid of the unused nonatomic/retry argument (Jens Axboe, 2008-06-26; 1 file, -8/+4)
  It's never used and the comments refer to nonatomic and retry interchangeably. So get rid of it.
  Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
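  The interface change, for reference:

      /* Before:
       *     int smp_call_function(void (*func)(void *), void *info,
       *                           int nonatomic, int wait);
       * After: */
      int smp_call_function(void (*func)(void *), void *info, int wait);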
* Add generic helpers for arch IPI function calls (Jens Axboe, 2008-06-26; 1 file, -5/+6)
  This adds kernel/smp.c, which contains helpers for IPI function calls. In addition to supporting the existing smp_call_function() in a more efficient manner, it also adds a more scalable variant called smp_call_function_single() for calling a given function on a single CPU only.
  The core of this is based on the x86-64 patch from Nick Piggin, with lots of changes since then. "Alan D. Brunelle" <Alan.Brunelle@hp.com> has contributed lots of fixes and suggestions as well. Also thanks to Paul E. McKenney <paulmck@linux.vnet.ibm.com> for reviewing RCU usage and getting rid of the data allocation fallback deadlock.
  Acked-by: Ingo Molnar <mingo@elte.hu>
  Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
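  A usage sketch of the single-CPU helper (post-cleanup signature, without the retry argument; the caller is illustrative):

      #include <linux/kernel.h>
      #include <linux/smp.h>

      static void remote_work(void *info)
      {
          printk(KERN_INFO "running on cpu %d\n", smp_processor_id());
      }

      static void poke_cpu_one(void)    /* illustrative */
      {
          /* run remote_work on CPU 1 and wait for it to complete */
          smp_call_function_single(1, remote_work, NULL, 1);
      }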
* sparc64: Add global register dumping facility. (David S. Miller, 2008-05-20; 1 file, -0/+10)
  When a cpu really is stuck in the kernel, it can often be impossible to figure out which cpu is stuck where. The worst case is when the stuck cpu has interrupts disabled.
  Therefore, implement a global cpu state capture that uses SMP message interrupts which are not disabled by the normal IRQ enable/disable APIs of the kernel. As long as we can get a sysrq 'y' to the kernel, we can get a dump. Even if the console interrupt cpu is wedged, we can trigger it from userspace using /proc/sysrq-trigger.
  The output is made compact so that the facility remains usable on high cpu count systems, which is where it will likely find itself the most useful :)
  Signed-off-by: David S. Miller <davem@davemloft.net>
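  A sketch of the sysrq hook this adds; sparc_globalreg_op and smp_fetch_global_regs() are named in later commits above, while the handler body, strings, and 2008-era handler signature are assumptions:

      #include <linux/sysrq.h>

      static void sysrq_handle_globreg(int key, struct tty_struct *tty)
      {
          /* cross-call every cpu at a level the normal IRQ enable/disable
           * APIs do not mask, capture %tstate/%tpc/%o7/%i7 into a global
           * buffer, then print one compact line per cpu */
          smp_fetch_global_regs();
      }

      static struct sysrq_key_op sparc_globalreg_op = {
          .handler    = sysrq_handle_globreg,
          .help_msg   = "global-regs(Y)",
          .action_msg = "Show Global CPU Regs Only",
      };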
* Revert "[SPARC64]: Wrap SMP IPIs with irq_enter()/irq_exit()."David S. Miller2008-05-041-22/+5
| | | | | | | | | This reverts commit 2664ef44cf5053d2b7dff01cecac70bc601a5f68. Ingo moved around where the softlockup dependency sits so this change is no longer necessary. Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: remove duplicated include (Huang Weiyi, 2008-04-29; 1 file, -1/+0)
  Remove duplicated include file <asm/timer.h> in arch/sparc64/kernel/smp.c.
  Signed-off-by: Huang Weiyi <hwy@cn.fujitsu.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc: Add kgdb support. (David S. Miller, 2008-04-29; 1 file, -0/+10)
  Current limitations:
  1) On SMP single stepping has some fundamental issues, shared with other sw single-step architectures such as mips and arm.
  2) On 32-bit sparc we don't support SMP kgdb yet. That requires some reworking of the IPI mechanisms and infrastructure on that platform.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Wrap SMP IPIs with irq_enter()/irq_exit(). (David S. Miller, 2008-04-25; 1 file, -5/+22)
  Otherwise all sorts of bad things can happen, including spurious softlockup reports. Other platforms have this same bug, in one form or another, but just don't see the issue because they don't sleep as long as sparc64 can in NOHZ.
  Thanks to some brilliant debugging by Peter Zijlstra.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Call real_setup_per_cpu_areas() earlier and use lmb_alloc(). (David S. Miller, 2008-04-24; 1 file, -3/+8)
  We have to do it like this before we can move the PROM and MDESC device tree code over to using lmb_alloc().
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix sparse warnings in arch/sparc64/kernel/time.c (David S. Miller, 2008-03-26; 1 file, -1/+2)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Remove most limitations to kernel image size. (David S. Miller, 2008-03-22; 1 file, -8/+9)
  Currently kernel images are limited to 8MB in size, and this causes problems especially when enabling features that take up a lot of kernel image space, such as lockdep.
  The code now will align the kernel image size up to 4MB and map that many locked TLB entries. So, the only practical limitation is the number of available locked TLB entries, which is 16 on Cheetah and 64 on pre-Cheetah sparc64 cpus. Niagara cpus don't actually have hw locked TLB entry support. Rather, the hypervisor transparently provides support for "locked" TLB entries since it runs with physical addressing and does the initial TLB miss processing.
  Fully utilizing this change requires some help from SILO, a patch for which will be submitted to the maintainer. Essentially, SILO will only currently map up to 8MB for the kernel image and that needs to be increased.
  Note that neither this patch nor the SILO bits will help with network booting. The openfirmware code will only map up to a certain amount of kernel image during a network boot and there isn't much we can do about that other than to implement a layered network booting facility. Solaris has this, and calls it "wanboot", and we may implement something similar at some point.
  Signed-off-by: David S. Miller <davem@davemloft.net>
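  The resulting arithmetic, as a sketch:

      /* Locked 4MB TLB entries needed for a kernel image of ksize bytes;
       * the practical cap is the locked-entry count: 16 on Cheetah, 64 on
       * pre-Cheetah cpus. */
      static unsigned long kernel_image_tlb_entries(unsigned long ksize)
      {
          return (ksize + (4UL << 20) - 1) >> 22;    /* round up to 4MB */
      }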
* [SPARC64]: Fix cpu trampoline et al. mismatch warnings. (Sam Ravnborg, 2008-02-21; 1 file, -1/+1)
  Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* calibrate_delay() must be __cpuinit (Adrian Bunk, 2008-02-06; 1 file, -2/+0)
  calibrate_delay() must be __cpuinit, not __{dev,}init. I've verified that this is correct for all users.
  While doing the latter, I also did the following cleanups:
  - remove pointless additional prototypes in C files
  - ensure all users #include <linux/delay.h>
  This fixes the following section mismatches with CONFIG_HOTPLUG=n, CONFIG_HOTPLUG_CPU=y:
  WARNING: vmlinux.o(.text+0x1128d): Section mismatch: reference to .init.text.1:calibrate_delay (between 'check_cx686_slop' and 'set_cx86_reorder')
  WARNING: vmlinux.o(.text+0x25102): Section mismatch: reference to .init.text.1:calibrate_delay (between 'smp_callin' and 'cpu_coregroup_map')
  Signed-off-by: Adrian Bunk <bunk@kernel.org>
  Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
  Cc: Richard Henderson <rth@twiddle.net>
  Cc: "Luck, Tony" <tony.luck@intel.com>
  Cc: Ralf Baechle <ralf@linux-mips.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Ingo Molnar <mingo@elte.hu>
  Cc: Christian Zankel <chris@zankel.net>
  Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
  Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
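  The fix itself is just the annotation; a sketch:

      #include <linux/init.h>

      /* __cpuinit keeps the function available for CPU hotplug bringup even
       * when CONFIG_HOTPLUG=n would discard __init/__devinit text. */
      void __cpuinit calibrate_delay(void)
      {
          /* BogoMIPS measurement loop elided */
      }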
* [SPARC64]: Fix endless loop in cheetah_xcall_deliver(). (David S. Miller, 2007-12-12; 1 file, -6/+13)
  We need to mask out the proper bits when testing the dispatch status register, else we can see unrelated NACK bits from previous cross call sends.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add missing "space" (Joe Perches, 2007-12-05; 1 file, -2/+3)
  Signed-off-by: Joe Perches <joe@perches.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: __inline__ --> inline (David S. Miller, 2007-10-27; 1 file, -2/+2)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* Convert cpu_sibling_map to be a per cpu variable (Mike Travis, 2007-10-16; 1 file, -9/+8)
  Convert cpu_sibling_map from a static array sized by NR_CPUS to a per_cpu variable. This saves sizeof(cpumask_t) * NR unused cpus. Access is mostly from startup and CPU HOTPLUG functions.
  Signed-off-by: Mike Travis <travis@sgi.com>
  Cc: Andi Kleen <ak@suse.de>
  Cc: Christoph Lameter <clameter@sgi.com>
  Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: "Luck, Tony" <tony.luck@intel.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
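  A sketch of the conversion and the new access pattern (2007-era cpumask API; the accessor is illustrative):

      #include <linux/cpumask.h>
      #include <linux/percpu.h>

      /* Before: cpumask_t cpu_sibling_map[NR_CPUS]; space for cpus that can
       * never exist. After: one mask per possible cpu. */
      DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);

      static int cpu_has_sibling(int cpu, int sib)    /* illustrative */
      {
          /* cpu_sibling_map[cpu] becomes per_cpu(cpu_sibling_map, cpu) */
          return cpu_isset(sib, per_cpu(cpu_sibling_map, cpu));
      }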
* [SPARC64]: check fork_idle() error (Akinobu Mita, 2007-10-04; 1 file, -0/+2)
  Check the return value of fork_idle() to catch errors.
  Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix memory leak when cpu hotplugging. (David S. Miller, 2007-08-09; 1 file, -5/+0)
  Every time a cpu is added via hotplug, we allocate the per-cpu MONDO queues but we never free them up. Freeing isn't easy since the first cpu gets this memory from bootmem.
  Therefore, the simplest thing to do to fix this bug is to allocate the queues for all possible cpus at boot time.
  Signed-off-by: David S. Miller <davem@davemloft.net>
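  A sketch of the shape of the fix; the function name and the allocator helper are hypothetical:

      #include <linux/cpumask.h>
      #include <linux/init.h>

      void __init init_cpu_send_mondo_info(void)    /* illustrative name */
      {
          int cpu;

          /* Allocate once, at boot, for every possible cpu: hot-added cpus
           * reuse their queues and nothing ever needs freeing (the boot
           * cpu's queues come from bootmem and cannot be returned anyway). */
          for_each_possible_cpu(cpu)
              alloc_mondo_queues(cpu);    /* hypothetical helper */
      }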
* [SPARC64]: dr-cpu unconfigure support. (David S. Miller, 2007-07-16; 1 file, -12/+106)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Clear cpu_{core,sibling}_map[] in smp_fill_in_sib_core_maps() (David S. Miller, 2007-07-16; 1 file, -0/+2)
  When we hot-plug in new cpus, the core_id and proc_id of existing cpus can change. So in order to set the cpu groups correctly we need to clear the maps out completely first.
  Signed-off-by: David S. Miller <davem@davemloft.net>
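  A sketch of the reset at the top of smp_fill_in_sib_core_maps(); the 2007-era array-based maps and the iterator choice are assumptions:

      void smp_fill_in_sib_core_maps(void)
      {
          unsigned int i;

          /* core_id/proc_id of existing cpus may change on hot-add, so
           * start from empty maps before rebuilding them. */
          for_each_present_cpu(i) {
              cpus_clear(cpu_core_map[i]);
              cpus_clear(cpu_sibling_map[i]);
          }

          /* ... rebuild both maps from the machine description ... */
      }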
* [SPARC64]: Fix leak when DR added cpu does not bootup. (David S. Miller, 2007-07-16; 1 file, -6/+6)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: More sensible udelay implementation. (David S. Miller, 2007-07-16; 1 file, -24/+0)
  Take a page from the powerpc folks and just calculate the delay factor directly.
  Since frequency scaling chips use a system-tick register, the value is going to be the same system-wide.
  Signed-off-by: David S. Miller <davem@davemloft.net>
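  A sketch of the idea with illustrative names; since the system-tick frequency is uniform across cpus, the factor is computed once instead of being calibrated per cpu:

      #include <linux/param.h>    /* HZ */

      unsigned long loops_per_jiffy;

      /* Assuming __delay(n) spins for n system ticks, setting
       * loops_per_jiffy = tick_freq / HZ makes the generic udelay(us)
       * arithmetic, __delay(us * loops_per_jiffy * HZ / 1000000), come
       * out to us * tick_freq / 10^6 ticks, i.e. microseconds. */
      static void __init setup_delay_factor(unsigned long tick_freq_hz)
      {
          loops_per_jiffy = tick_freq_hz / HZ;
      }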
* [SPARC64]: SMP build fixes. (David S. Miller, 2007-07-16; 1 file, -2/+12)
  With the move of ldom_startcpu_cpuid() into smp.c some other things need to follow along:
  1) smp.c is not a driver so we can't use "PFX" macro in the printk calls.
  2) smp.c now needs asm/io.h and asm/hvtramp.h; ds.c no longer does.
  3) kimage_addr_to_ra() also needs to move into smp.c.
  While we're here, update copyright info and my email address in smp.c.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix build regressions added by dr-cpu changes. (David S. Miller, 2007-07-16; 1 file, -1/+51)
  Do not select HOTPLUG_CPU from SUN_LDOMS; that causes HOTPLUG_CPU to be selected even on non-SMP, which is illegal.
  Only build hvtramp.o when SMP, just like trampoline.o.
  Protect dr-cpu code in ds.c with HOTPLUG_CPU.
  Likewise move ldom_startcpu_cpuid() to smp.c and protect it and the call site with SUN_LDOMS && HOTPLUG_CPU.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Initial LDOM cpu hotplug support. (David S. Miller, 2007-07-16; 1 file, -18/+37)
  Only adding cpus is supported at the moment; removal will come next.
  When new cpus are configured, the machine description is updated. When we get the configure request we pass in a cpu mask of to-be-added cpus to the mdesc CPU node parser so it only fetches information for those cpus. That code also proceeds to update the SMT/multi-core scheduling bitmaps.
  cpu_up() does all the work and we return the status back over the DS channel.
  CPUs via dr-cpu need to be booted straight out of the hypervisor, and this requires:
  1) A new trampoline mechanism. CPUs are booted straight out of the hypervisor with MMU disabled and running in physical addresses with no mappings installed in the TLB. The new hvtramp.S code sets up the critical cpu state, installs the locked TLB mappings for the kernel, and turns the MMU on. It then proceeds to follow the logic of the existing trampoline.S SMP cpu bringup code.
  2) All calls into OBP have to be disallowed when domaining is enabled. Since cpus boot straight into the kernel from the hypervisor, OBP has no state about that cpu and therefore cannot handle being invoked on that cpu. Luckily it's only a handful of interfaces which can be called after the OBP device tree is obtained, for example: rebooting, halting, powering-off, and setting options node variables.
  CPU removal support will require some infrastructure changes here. Namely, we'll have to process the requests via a true kernel thread instead of in a workqueue. workqueues run on a per-cpu thread, but when unconfiguring we might need to force the thread to execute on another cpu if the current cpu is the one being removed. Removal of a cpu also causes the kernel to destroy that cpu's workqueue running thread.
  Another issue on removal is that we may have interrupts still pointing to the cpu-to-be-removed. So new code will be needed to walk the active INO list and retarget those interrupts as needed.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sched: zap the migration init / cache-hot balancing code (Ingo Molnar, 2007-07-09; 1 file, -27/+0)
  The SMP load-balancer uses the boot-time migration-cost estimation code to attempt to improve the quality of balancing. The reason for this code is that the discrete priority queues do not preserve the order of scheduling accurately, so the load-balancer skips tasks that were running on a CPU 'recently'.
  This code is fundamentally fragile: the boot-time migration cost detector doesn't really work on systems that have large L3 caches, it caused boot delays on large systems, and the whole cache-hot concept made the balancing code pretty nondeterministic as well. (and hey, i wrote most of it, so i can say it out loud that it sucks ;-)
  Under CFS the same purpose of cache affinity can be achieved without any special cache-hot special-case: tasks are sorted in the 'timeline' tree and the SMP balancer picks tasks from the left side of the tree, thus the most cache-cold task is balanced automatically.
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* [SPARC64]: Fix {mc,smt}_capable(). (David S. Miller, 2007-06-05; 1 file, -0/+2)
  It's not just sun4v hypervisor platforms that should return true for this; sun4u with UltraSPARC-IV should return true too.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Proper multi-core scheduling support. (David S. Miller, 2007-06-05; 1 file, -1/+18)
  The scheduling domain hierarchy is:
    all cpus --> cpus that share an instruction cache --> cpus that share an integer execution unit
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Eliminate NR_CPUS limitations. (David S. Miller, 2007-05-29; 1 file, -1/+18)
  Cheetah systems can have cpuids as large as 1023, although physical systems don't have that many cpus.
  Only three limitations existed in the kernel preventing arbitrary NR_CPUS values:
  1) dcache dirty cpu state stored in page->flags on D-cache aliasing platforms. With some build time calculations and some build-time BUG checks on page->flags layout, this one was easily solved.
  2) The cheetah XCALL delivery code could only handle a cpumask with up to 32 cpus set. Some simple looping logic clears that up too.
  3) thread_info->cpu was a u8, easily changed to a u16.
  There are a few spots in the kernel that still put NR_CPUS sized arrays on the kernel stack, but that's not a sparc64 specific problem.
  Signed-off-by: David S. Miller <davem@davemloft.net>
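  Point 3 is the simplest to show; a sketch (surrounding fields elided):

      #include <linux/types.h>

      struct thread_info {
          /* ... other fields elided ... */
          u16 cpu;    /* was u8, which capped cpuids at 255; Cheetah
                       * systems can have cpuids up to 1023 */
      };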