path: root/arch/sparc64/kernel/entry.S
* sparc64: Split entry.S up into separate files. (David S. Miller, 2008-04-28, 1 file, -2575/+0)
  entry.S was a hodge-podge of several totally unrelated sets of assembler routines, ranging from FPU trap handlers to hypervisor call functions. Split it up into topic-sized pieces.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: %l6 trap return handling no longer necessary. (David S. Miller, 2008-04-24, 1 file, -18/+17)
  Now that we indicate the "restart system call" in the trap type field of pt_regs->magic, we don't need to set the %l6 boolean in all of the trap return paths. And we therefore don't need to pass it to do_notify_resume().
  Signed-off-by: David S. Miller <davem@davemloft.net>
* sparc64: cleanup after SunOS/Solaris binary emulation removal (Adrian Bunk, 2008-04-24, 1 file, -1/+1)
  The following cleanups are now possible:
  - arch/sparc64/kernel/entry.S: ret_sys_call no longer has to be global
  - arch/sparc64/kernel/sparc64_ksyms.c: remove no longer used prototypes
  Signed-off-by: Adrian Bunk <bunk@kernel.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC]: Remove SunOS and Solaris binary support. (David S. Miller, 2008-04-22, 1 file, -59/+2)
  As per Documentation/feature-removal-schedule.txt.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Make save_stack_trace() more efficient. (David S. Miller, 2008-03-25, 1 file, -0/+30)
  Doing a 'flushw' every stack trace capture creates so much overhead that it makes lockdep next to unusable. We only care about the frame pointer chain and the function caller program counters, so flush those by hand to the stack frame. This is significantly more efficient than a 'flushw' because:
  1) We only save 16 bytes per active register window to the stack.
  2) This doesn't push the entire register window context of the current call chain out of the cpu, forcing register window fill traps as we return back down.
  Note that we can't use 'restore' and 'save' instructions to move around the register windows because that wouldn't work on Niagara processors. They optimize 'save' into a new register window by simply clearing out the registers instead of pulling them in from the on-chip register window backing store.
  Based upon a report by Tom Callaway.
  Signed-off-by: David S. Miller <davem@davemloft.net>
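  As an illustration only (not the kernel's save_stack_trace() itself), once the active windows have been flushed to the stack the frame-pointer-chain walk boils down to something like the C sketch below, assuming the conventional sparc64 frame layout: stack pointer biased by 2047, saved %i6/%i7 at offsets 112/120 of the register save area.

      #include <stddef.h>

      #define STACK_BIAS  2047UL
      #define FP_OFFSET   (14 * 8)   /* saved %i6 (caller's frame pointer) */
      #define PC_OFFSET   (15 * 8)   /* saved %i7 (caller's return address) */

      /* Collect up to 'max' return PCs by following saved frame pointers. */
      static size_t walk_fp_chain(unsigned long fp, unsigned long *pcs, size_t max)
      {
          size_t n = 0;

          while (fp && n < max) {
              unsigned long frame = fp + STACK_BIAS;

              pcs[n++] = *(unsigned long *)(frame + PC_OFFSET);
              fp       = *(unsigned long *)(frame + FP_OFFSET);
          }
          return n;
      }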
* [SPARC]: Move over to arch_ptrace(). (David S. Miller, 2008-02-07, 1 file, -4/+0)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix two kernel linear mapping setup bugs. (David S. Miller, 2007-12-13, 1 file, -0/+12)
  This was caught and identified by Greg Onufer.
  Since we set up the 256M/4M bitmap table after taking over the trap table, it's possible for some 4M mappings to get loaded into the TLB beforehand in areas that will later be covered by 256M mappings. This can cause illegal TLB multiple-match conditions. Fix this by setting up the bitmap before we take over the trap table.
  Next, __flush_tlb_all() was not doing anything on hypervisor platforms. Fix by adding sun4v_mmu_demap_all() and calling it.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Access ivector_table[] using physical addresses. (David S. Miller, 2007-10-14, 1 file, -6/+6)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Make IVEC pointers 64-bit. (David S. Miller, 2007-10-14, 1 file, -4/+4)
  Currently we chain IVEC entries using 32-bit "pointers" because we know that the ivector_table is in the main kernel image, thus below 4GB. This uses proper 64-bit pointers instead.
  Whilst this bloats up the kernel image size, this sets the infrastructure necessary to significantly shrink the kernel size by using physical addresses and dynamically allocating the ivector table.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix args to sun4v_ldc_revoke(). (David S. Miller, 2007-06-13, 1 file, -2/+3)
  First argument is LDC channel ID, then mapping cookie, then the MTE revoke cookie.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Provide mmu statistics via sysfs. (David Miller, 2007-06-05, 1 file, -0/+22)
  If the system supports hypervisor based statistics, allow them to be fetched, enabled, and disabled via sysfs.
  Enable and disable via the boolean:
    /sys/devices/system/cpu/cpuN/mmustat_enable
  Statistic values are provided under:
    /sys/devices/system/cpu/cpuN/mmu_status/
  Signed-off-by: David S. Miller <davem@davemloft.net>
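  A hedged user-space illustration (not part of the commit): enabling the statistics for one cpu and reading the flag back might look like this, with the sysfs paths taken from the commit text above.

      #include <stdio.h>

      int main(void)
      {
          const char *path = "/sys/devices/system/cpu/cpu0/mmustat_enable";
          char buf[16];
          FILE *f;

          f = fopen(path, "w");           /* turn hypervisor MMU statistics on */
          if (!f)
              return 1;
          fputs("1\n", f);
          fclose(f);

          f = fopen(path, "r");           /* read the flag back */
          if (f && fgets(buf, sizeof(buf), f))
              printf("mmustat_enable = %s", buf);
          if (f)
              fclose(f);

          /* Individual counters are files under the per-cpu mmu_status/ directory. */
          return 0;
      }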
* [SPARC64]: Fix service channel hypervisor function names. (David Miller, 2007-06-05, 1 file, -20/+20)
  sed 's/scv/svc/'
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add missing NCS and SVC hypervisor interfaces. (David S. Miller, 2007-05-31, 1 file, -0/+72)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fill holes in hypervisor APIs and fix KTSB registry. (David S. Miller, 2007-05-29, 1 file, -7/+547)
  Several interfaces were missing and others misnumbered or improperly documented.
  Also, make sure to check the return value when registering the kernel TSBs with the hypervisor. This helped to find the 4MB kernel TSB alignment bug fixed in a previous changeset.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Use machine description and OBP properly for cpu probing. (David S. Miller, 2007-05-29, 1 file, -0/+9)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Report proper system soft state to the hypervisor. (David S. Miller, 2007-05-29, 1 file, -0/+12)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add hypervisor API negotiation and fix console bugs. (David S. Miller, 2007-05-16, 1 file, -0/+94)
  Hypervisor interfaces need to be negotiated in order to use some API calls reliably. So add a small set of interfaces to request API versions and query current settings.
  This allows us to fix some bugs in the hypervisor console:
  1) If we can negotiate API group CORE of at least major 1 minor 1, we can use con_read and con_write, which can improve console performance quite a bit.
  2) When we do a console write request, we should hold the spinlock around the whole request, not a byte at a time. Otherwise it is easy for output from different cpus to get mixed with each other.
  3) Use consistent udelay() based polling, udelay(1) each loop with a limit of 1000 polls, to handle a stuck hypervisor console.
  Signed-off-by: David S. Miller <davem@davemloft.net>
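  The write-side pattern described in points 2) and 3) can be sketched roughly as below; con_lock, hv_con_putchar() and the HV_EWOULDBLOCK value are illustrative stand-ins for this sketch, not the driver's actual names.

      #include <linux/spinlock.h>
      #include <linux/delay.h>

      #define HV_EWOULDBLOCK  16              /* placeholder value for this sketch */

      static DEFINE_SPINLOCK(con_lock);
      long hv_con_putchar(char c);            /* stand-in for the hypervisor call */

      static void con_write_buf(const char *buf, int len)
      {
          unsigned long flags;
          int i;

          spin_lock_irqsave(&con_lock, flags);    /* hold for the whole request */
          for (i = 0; i < len; i++) {
              int limit = 1000;

              while (hv_con_putchar(buf[i]) == HV_EWOULDBLOCK) {
                  if (--limit <= 0)
                      goto out;                   /* console stuck, give up */
                  udelay(1);
              }
          }
      out:
          spin_unlock_irqrestore(&con_lock, flags);
      }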
* [SPARC64]: Add irqtrace/stacktrace/lockdep support. (David S. Miller, 2006-12-10, 1 file, -1/+26)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC]: Fix robust futex syscalls and wire up migrate_pages. (David S. Miller, 2006-11-06, 1 file, -2/+1)
  When I added the robust futex syscall entries, I forgot to bump NR_SYSCALLS. The current situation is error-prone because NR_SYSCALLS lives in entry.S where the system call limit checks are enforced. Move the definition to asm/unistd.h in order to make this mistake much more difficult to make.
  And wire up sys_migrate_pages since the powerpc folks implemented the compat wrapper for us.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* Remove obsolete #include <linux/config.h> (Jörn Engel, 2006-06-30, 1 file, -1/+0)
  Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
  Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [SPARC64]: Move over to GENERIC_HARDIRQS. (David S. Miller, 2006-06-20, 1 file, -1/+1)
  This is the long overdue conversion of sparc64 over to the generic IRQ layer.
  The kernel image is slightly larger, but the BSS is ~60K smaller due to the reduced size of struct ino_bucket. A lot of IRQ implementation details, including ino_bucket, were moved out of asm-sparc64/irq.h and are now private to arch/sparc64/kernel/irq.c, and most of the code in irq.c totally disappeared.
  One thing that's different at the moment is IRQ distribution; we do it at enable_irq() time. If the cpu mask is ALL then we round-robin using a global rotating cpu counter, else we pick the first cpu in the mask to support single cpu targeting. This is similar to what powerpc's XICS IRQ support code does.
  This works fine on my UP SB1000, and the SMP build goes fine and runs on that machine, but lots of testing on different setups is needed.
  Signed-off-by: David S. Miller <davem@davemloft.net>
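  The distribution rule described above (round-robin for an "ALL" mask, first cpu otherwise) boils down to something like the following self-contained sketch; plain bitmaps stand in for the kernel's cpumask API, so take this as an outline rather than the actual code.

      #include <stdint.h>

      static unsigned int irq_rr;                     /* global rotating counter */

      /* Pick a target cpu for an irq given its affinity mask and the online mask. */
      static int pick_irq_target(uint64_t affinity, uint64_t online, int nr_cpus)
      {
          int cpu;

          if ((affinity & online) == online) {        /* mask covers all cpus: round-robin */
              do {
                  cpu = irq_rr++ % nr_cpus;
              } while (!(online & (1ULL << cpu)));
              return cpu;
          }

          for (cpu = 0; cpu < nr_cpus; cpu++)         /* otherwise: first cpu in the mask */
              if (affinity & online & (1ULL << cpu))
                  return cpu;
          return 0;
      }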
* [SPARC64]: Send all device interrupts via one PIL. (David S. Miller, 2006-06-20, 1 file, -7/+4)
  This is the first in a series of cleanups that will hopefully allow a seamless attempt at using the generic IRQ handling infrastructure in the Linux kernel.
  Define PIL_DEVICE_IRQ and vector all device interrupts through there.
  Get rid of the ugly pil0_dummy_{bucket,desc}, instead vector the timer interrupt directly to a specific handler since the timer interrupt is the only event that will be signaled on PIL 14.
  The irq_worklist is now in the per-cpu trap_block[].
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix bugs in SUN4V cpu mondo dispatch. (David S. Miller, 2006-03-20, 1 file, -0/+28)
  There were several bugs in the SUN4V cpu mondo dispatch code.
  In fact, if we ever got an EWOULDBLOCK or other error from the hypervisor call, we'd potentially send a cpu mondo multiple times to the same cpu, and even worse we could loop until the timeout resending the same mondo over and over to such cpus.
  So let's bulletproof this thing as follows:
  1) Implement cpu_mondo_send() and cpu_state() hypervisor calls in arch/sparc64/kernel/entry.S, add prototypes to asm/hypervisor.h
  2) Don't build and update the cpulist using inline functions, this was causing the cpu mask to not get updated in the caller.
  3) Disable interrupts during the entire mondo send, otherwise our cpu list and/or mondo block could get overwritten if we take an interrupt and do a cpu mondo send on the current cpu.
  4) Check for all possible error return types from the cpu_mondo_send() hypervisor call. In particular:
     HV_EOK) Our work is done, all cpus have received the mondo.
     HV_CPUERROR) One or more of the cpus in the cpu list we passed to the hypervisor are in error state. Use cpu_state() calls over the entries in the cpu list to see which ones. Record them in "error_mask" and report this after we are done sending the mondo to cpus which are not in error state.
     HV_EWOULDBLOCK) We need to keep trying.
     Any other error we consider fatal, we report the event and exit immediately.
  5) We only timeout if forward progress is not made. Forward progress is defined as having at least one cpu get the mondo successfully in a given cpu_mondo_send() call. Otherwise we bump a counter and delay a little. If the counter hits a limit, we signal an error and report the event.
  Also, smp_call_function_mask() error handling reports the number of cpus incorrectly.
  Signed-off-by: David S. Miller <davem@davemloft.net>
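  Point 5)'s forward-progress rule has the shape of the sketch below. The helper names (hv_cpu_mondo_send(), drop_error_cpus()), the error-code values and the stall limit are stand-ins for this sketch, and it assumes the send helper prunes delivered cpus from the list.

      #include <linux/delay.h>

      enum { HV_EOK = 0, HV_EWOULDBLOCK = 1, HV_CPUERROR = 2 };   /* placeholder values */

      long hv_cpu_mondo_send(unsigned int *cpu_list, int *cnt);   /* stand-in */
      int  drop_error_cpus(unsigned int *cpu_list, int cnt);      /* uses cpu_state() */

      static int send_cpu_mondo(unsigned int *cpus, int cnt)
      {
          int stuck = 0;

          while (cnt > 0) {
              int before = cnt;
              long err = hv_cpu_mondo_send(cpus, &cnt);

              if (err == HV_EOK)
                  return 0;                       /* every cpu got the mondo */
              if (err == HV_CPUERROR)
                  cnt = drop_error_cpus(cpus, cnt);   /* record them, keep going */
              else if (err != HV_EWOULDBLOCK)
                  return -1;                      /* fatal: report and bail */

              if (cnt < before)
                  stuck = 0;                      /* forward progress was made */
              else if (++stuck > 100)
                  return -1;                      /* no progress: signal error */
              udelay(2);
          }
          return 0;
      }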
* [SPARC64]: Add sun4v_cpu_yield(). (David S. Miller, 2006-03-20, 1 file, -0/+9)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix uniprocessor IRQ targeting on SUN4V. (David S. Miller, 2006-03-20, 1 file, -1/+3)
  We need to use the real hardware processor ID when targeting interrupts, not the "define to 0" thing the uniprocessor build gives us.
  Also, fill in the Node-ID and Agent-ID fields properly on sun4u/Safari.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add sun4v_cpu_qconf() hypervisor call. (David S. Miller, 2006-03-20, 1 file, -0/+12)
  Call it from register_one_mondo().
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: do_fptrap needs to load the thread reg into %g6. (David S. Miller, 2006-03-20, 1 file, -0/+1)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Implement rest of generic interrupt hypervisor calls. (David S. Miller, 2006-03-20, 1 file, -1/+65)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Move devino_to_sysino out of pci_sun4v_asm.S (David S. Miller, 2006-03-20, 1 file, -0/+12)
  It is not PCI specific, it is for all system interrupts.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Patch up mmu context register writes for sun4v. (David S. Miller, 2006-03-20, 1 file, -10/+70)
  sun4v uses ASI_MMU instead of ASI_DMMU.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add explicit register args to trap state loading macros. (David S. Miller, 2006-03-20, 1 file, -4/+4)
  This, as well as making the code cleaner, allows a simplification in the TSB miss handling path.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Refine code sequences to get the cpu id. (David S. Miller, 2006-03-20, 1 file, -79/+5)
  On uniprocessor, it's always zero, so optimize for that.
  On SMP, the jmpl to the stub kills the return address stack in the cpu branch prediction logic, so expand the code sequence inline and use a code patching section to fix things up. This also allows better and explicit register selection, which will be taken advantage of in a future changeset.
  The hard_smp_processor_id() function is big, so do not inline it.
  Fix up tests for Jalapeno to also test for Serrano chips. These tests want "jbus Ultra-IIIi" cases to match, so that is what we should test for.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Kill hard-coded %pstate setting in sparc_exit. (David S. Miller, 2006-03-20, 1 file, -2/+3)
  Just flip the bit off of whatever it's currently set to. PSTATE_IE is guaranteed to be enabled when we get here.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Kill {save,restore}_alternate_globals() (David S. Miller, 2006-03-20, 1 file, -70/+0)
  No longer needed now that we no longer have hard-coded alternate global register usage.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Eliminate all usage of hard-coded trap globals. (David S. Miller, 2006-03-20, 1 file, -17/+105)
  UltraSPARC has special sets of global registers which are switched to for certain trap types. There is one set for MMU related traps, one set for Interrupt Vector processing, and another set (called the Alternate globals) for all other trap types.
  For what seems like forever we've hard coded the values in some of these trap registers. Some examples include:
  1) Interrupt Vector global %g6 holds the current processor's interrupt work struct where received interrupts are managed for IRQ handler dispatch.
  2) MMU global %g7 holds the base of the page tables of the currently active address space.
  3) Alternate global %g6 held the current_thread_info() value.
  Such hardcoding has resulted in some serious issues in many areas. There are some code sequences where having another register available would help clean up the implementation. Taking traps such as cross-calls from the OBP firmware requires some tricky code sequences wherein we have to save away and restore all of the special sets of global registers when we enter/exit OBP. We were also using the IMMU TSB register on SMP to hold the per-cpu area base address, which doesn't work any longer now that we actually use the TSB facility of the cpu.
  The implementation is pretty straightforward. One tricky bit is getting the current processor ID, as that is different on different cpu variants. We use a stub with a fancy calling convention which we patch at boot time. The calling convention is that the stub is branched to and the (PC - 4) to return to is in register %g1. The cpu number is left in %g6. This stub can be invoked by using the __GET_CPUID macro.
  We use an array of per-cpu trap state to store the current thread and physical address of the current address space's page tables. The TRAP_LOAD_THREAD_REG loads %g6 with the current thread from this table; it uses __GET_CPUID and also clobbers %g1. TRAP_LOAD_IRQ_WORK is used by the interrupt vector processing to load the current processor's IRQ software state into %g6. It also uses __GET_CPUID and clobbers %g1. Finally, TRAP_LOAD_PGD_PHYS loads the physical address base of the current address space's page tables into %g7; it clobbers %g1 and uses __GET_CPUID.
  Many refinements are possible, as well as some tuning, with this stuff in place.
  Signed-off-by: David S. Miller <davem@davemloft.net>
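  A rough C-level view of the per-cpu trap state array described above; the struct and field names here are illustrative only, the real layout lives in the sparc64 headers.

      #include <linux/threads.h>              /* NR_CPUS */

      struct thread_info;                     /* opaque here */

      /* One slot per cpu, replacing values that used to live in hard-coded
       * global registers.  Indexed by the cpu id that the __GET_CPUID stub
       * leaves in %g6. */
      struct trap_cpu_state {
          struct thread_info *thread;         /* loaded into %g6 by TRAP_LOAD_THREAD_REG */
          unsigned long pgd_paddr;            /* loaded into %g7 by TRAP_LOAD_PGD_PHYS */
      };

      static struct trap_cpu_state trap_state[NR_CPUS];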
* [SPARC]: Wire up sys_unshare(). (David S. Miller, 2006-02-08, 1 file, -1/+1)
  Also, the Solaris syscall table is sized differently, and does not go beyond entry 255, so trim off the excess entries.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC]: Increase NR_SYSCALLS to 299 (David S. Miller, 2006-01-22, 1 file, -1/+1)
  To let new syscalls through.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC]: Add support for *at(), ppoll, and pselect syscalls. (David S. Miller, 2006-01-19, 1 file, -23/+0)
  This also includes by necessity _TIF_RESTORE_SIGMASK support, which actually resulted in a lot of cleanups. The sparc signal handling code is quite a mess and I should clean it up some day.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix ptrace/strace (Richard Mortimer, 2006-01-09, 1 file, -5/+2)
  Don't clobber register %l0 while checking TI_SYS_NOERROR value in syscall return path. This bug was introduced by: db7d9a4eb700be766cc9f29241483dbb1e748832
  Problem narrowed down by Luis F. Ortiz and Richard Mortimer. I tried using %l2 as suggested by Luis and that works for me.
  Looking at the code I wonder if it makes sense to simplify the code a little bit. The following works for me but I'm not sure how to exercise the "NOERROR" codepath.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix userland FPU state corruption. (David S. Miller, 2005-10-07, 1 file, -18/+21)
  We need to use stricter memory barriers around the block load and store instructions we use to save and restore the FPU register file.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Replace cheetah+ code patching with variables. (David S. Miller, 2005-10-05, 1 file, -35/+8)
  Instead of code patching to handle the page size fields in the context registers, just use variables from which we get the proper values.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Probe D/I/E-cache config and use. (David S. Miller, 2005-09-26, 1 file, -9/+18)
  At boot time, determine the D-cache, I-cache and E-cache size and line-size. Use them in cache flushes when appropriate.
  This change was motivated by discovering that the D-cache on UltraSparc-IIIi and later is 64K not 32K, and the flushes done by the Cheetah error handlers were assuming a 32K size.
  There are still some pieces of code that are hard coding things and will need to be fixed up at some point.
  While we're here, fix the D-cache and I-cache parity error handlers to run with interrupts disabled, and when the trap occurs at trap level > 1, log the event via a counter displayed in /proc/cpuinfo.
  Signed-off-by: David S. Miller <davem@davemloft.net>
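  The "use them in cache flushes" part amounts to looping over the probed size in probed-line-size steps rather than assuming a fixed geometry; a schematic C version, with flush_one_line() standing in for the per-line flush (an ASI store in the real assembler):

      void flush_one_line(unsigned long addr);            /* stand-in per-line flush */

      /* Flush a whole cache using the size and line size probed at boot,
       * instead of hard-coded assumptions. */
      static void flush_whole_cache(unsigned long cache_size, unsigned long line_size)
      {
          unsigned long addr;

          for (addr = 0; addr < cache_size; addr += line_size)
              flush_one_line(addr);
      }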
* [SPARC64]: Move kernel TLB miss handling into a separate file. (David S. Miller, 2005-09-22, 1 file, -153/+0)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Verify vmalloc TLB misses more strictly. (David S. Miller, 2005-09-20, 1 file, -20/+19)
  Arrange the modules, OBP, and vmalloc areas such that a range verification can be done quite minimally.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Do not expand CHEETAH_LOG_ERROR 3 times. (David S. Miller, 2005-08-31, 1 file, -136/+173)
  We only need to expand this thing once, saving some text section space.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Revamp Spitfire error trap handling. (David S. Miller, 2005-08-29, 1 file, -103/+163)
  Current uncorrectable error handling was poor enough that the processor could just loop taking the same trap over and over again. Fix things up so that we at least get a log message and perhaps even some register state.
  In the process, much consolidation became possible, particularly with the correctable error handler.
  Prefix assembler and C function names with "spitfire" to indicate that these are for Ultra-I/II/IIi/IIe only.
  More work is needed to make these routines robust and featureful to the level of the Ultra-III error handlers.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Do not call winfix_dax blindly (David S. Miller, 2005-08-29, 1 file, -0/+16)
  Verify we really are taking a data access exception trap, at TL1, from one of the window spill/fill handlers. Else call a new function, data_access_exception_tl1, to log the error.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix trap state reading for instruction_access_exception. (David S. Miller, 2005-08-29, 1 file, -11/+4)
  1) Read ASI_IMMU SFSR not ASI_DMMU.
  2) IMMU has no SFAR, read TPC instead.
  3) Delete old and incorrect comment about the DTLB protection trap having a dependency on the SFSR contents in order to function correctly.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Move syscall success and newchild state out of thread flags. (David S. Miller, 2005-07-25, 1 file, -9/+8)
  These two bits were accessed non-atomically from assembler code. So, in order to eliminate any potential races resulting from that, move these pieces of state into two bytes elsewhere in struct thread_info.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add syscall auditing support. (David S. Miller, 2005-07-11, 1 file, -5/+5)
  Signed-off-by: David S. Miller <davem@davemloft.net>