path: root/arch
* powerpc/fsl: Update defconfigs to enable some standard FSL HW features
    Kumar Gala, 2012-01-04 (4 files changed, -15/+30)

    corenet64_smp_defconfig:
      - enabled rapidio

    corenet32_smp_defconfig:
      - enabled hugetlbfs, rapidio

    mpc85xx_smp_defconfig:
      - enabled P1010RDB, hugetlbfs, SPI, SDHC, Crypto/CAAM

    mpc85xx_smp_defconfig:
      - enabled hugetlbfs, SPI, SDHC, Crypto/CAAM

    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* powerpc: Add TBI PHY node to first MDIO bus
    Andy Fleming, 2012-01-04 (5 files changed, -2/+24)

    Systems which use the fsl_pq_mdio driver need to specify an address
    for TBI PHY transactions such that the address does not conflict with
    any PHYs on the bus (all transactions to that address are directed to
    the onboard TBI PHY).  The driver used to scan for a free address if
    no address was specified, however this ran into issues when the PHY
    Lib was fixed so that all MDIO transactions were protected by a
    mutex.  As it is, the code was meant to serve as a transitional tool
    until the device trees were all updated to specify the TBI address.

    The best fix for the mutex issue was to remove the scanning code, but
    it turns out some of the newer SoCs have started to omit the tbi-phy
    node when SGMII is not being used.  As such, these devices will now
    fail unless we add a tbi-phy node to the first mdio controller.

    Signed-off-by: Andy Fleming <afleming@freescale.com>
    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* sbc834x: put full compat string in board match check
    Paul Gortmaker, 2012-01-04 (1 file changed, -2/+2)

    The commit 883c2cfc8bcc0fd00c5d9f596fb8870f481b5bda:

        "fix of_flat_dt_is_compatible() to match the full compatible string"

    causes silent boot death on the sbc8349 board because it was just
    looking for 8349 and not 8349E -- as originally there were non-E
    (no SEC/encryption) chips available.  Just add the E to the board
    detection string since all boards I've seen were manufactured with
    the E versions.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* powerpc/fsl-pci: Allow 64-bit PCIe devices to DMA to any memory address
    Kumar Gala, 2012-01-04 (1 file changed, -0/+55)

    There is an issue on FSL-BookE 64-bit devices (P5020) in which PCIe
    devices that are capable of doing 64-bit DMAs (like an Intel e1000)
    do not function and crash the kernel if we have >4G of memory in the
    system.

    The reason is that the existing code only sets up one inbound window
    for access to system memory across PCIe.  That window is limited to a
    32-bit address space.  So on such systems we'll end up utilizing
    SWIOTLB for dma mappings.  However SWIOTLB dma ops implement
    dma_alloc_coherent() as dma_direct_alloc_coherent().  Thus we can end
    up with dma addresses that are not accessible because of the inbound
    window limitation.

    We could possibly set the SWIOTLB alloc_coherent op to
    swiotlb_alloc_coherent(), however that does not address the issue,
    since swiotlb_alloc_coherent() will behave almost identically to
    dma_direct_alloc_coherent(): the device's coherent_dma_mask will be
    greater than any address allocated by swiotlb_alloc_coherent() and
    thus we'll never bounce buffer it into a range that would be dma-able.

    The easiest and best solution is to just make it so that a 64-bit
    capable device is able to DMA to any internal system address.  We
    accomplish this by opening up a second inbound window that maps all
    of memory above the internal SoC address width so we can set it up to
    access all of the internal SoC address space if needed.  We then fix
    up the dma_ops and dma_offset for PCIe devices with a dma mask
    greater than the maximum internal SoC address.

    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
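    [A small stand-alone illustration of the decision described in the
    last paragraph; the window size, offset and the classify() helper are
    assumptions made up for the example, not the actual fsl_pci.c code:]

        #include <stdio.h>
        #include <stdint.h>

        /* illustrative constants, not the real SoC values */
        #define MAX_SOC_ADDR      0xFFFFFFFFFULL   /* 36-bit internal address space */
        #define PCI64_DMA_OFFSET  (1ULL << 36)     /* start of the 2nd inbound window */

        /* Decide how a device with the given DMA mask would be handled. */
        static void classify(uint64_t dma_mask)
        {
                if (dma_mask > MAX_SOC_ADDR)
                        printf("mask %#llx: direct DMA via the large inbound window "
                               "(dma offset %#llx)\n",
                               (unsigned long long)dma_mask,
                               (unsigned long long)PCI64_DMA_OFFSET);
                else
                        printf("mask %#llx: keep SWIOTLB bounce buffering\n",
                               (unsigned long long)dma_mask);
        }

        int main(void)
        {
                classify(0xFFFFFFFFULL);         /* 32-bit-only device */
                classify(0xFFFFFFFFFFFFFFFFULL); /* 64-bit capable device */
                return 0;
        }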
* powerpc: Fix unpaired probe_hcall_entry and probe_hcall_exit
    Li Zhong, 2012-01-03 (2 files changed, -3/+3)

    Unpaired calling of probe_hcall_entry and probe_hcall_exit might
    happen as follows, which could cause an incorrect preempt count:

        __trace_hcall_entry => trace_hcall_entry -> probe_hcall_entry
                                                     => get_cpu_var
                                                     => preempt_disable

        __trace_hcall_exit  => trace_hcall_exit  -> probe_hcall_exit
                                                     => put_cpu_var
                                                     => preempt_enable

    where both A => B and A -> B mean A calls B, but => means A calls B
    through the function name and B will definitely be called, while ->
    means A calls B through a function pointer, so B might not be called
    if the function pointer is not set.

    So the error happens when only one of probe_hcall_entry and
    probe_hcall_exit gets called during a hcall.

    This patch moves the preempt count operations from probe_hcall_entry
    and probe_hcall_exit to their callers.

    Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
    Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    CC: stable@kernel.org [v2.6.32+]
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
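    [Minimal stand-alone sketch of the pairing problem and the shape of
    the fix; the hook names are stand-ins for probe_hcall_entry and
    probe_hcall_exit, not the kernel patch itself:]

        #include <stdio.h>

        static int preempt_count;

        /* optional hooks; either one may be unregistered (NULL) */
        static void (*entry_hook)(void);
        static void (*exit_hook)(void);

        static void traced_hcall(void)
        {
                /* the fix: keep the paired operations in the caller ... */
                preempt_count++;
                if (entry_hook)
                        entry_hook();   /* ... not hidden inside this hook */

                /* the hypervisor call itself would go here */

                if (exit_hook)
                        exit_hook();
                preempt_count--;        /* always pairs with the ++ above */
        }

        int main(void)
        {
                traced_hcall();
                printf("preempt_count = %d\n", preempt_count); /* stays 0 */
                return 0;
        }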
* powerpc/boot: Change the WARN to INFO for boot wrapper overlap message
    Suzuki Poulose, 2011-12-21 (1 file changed, -2/+2)

    commit c55aef0e5bc6 ("powerpc/boot: Change the load address for the
    wrapper to fit the kernel") introduced a WARNING to inform the user
    that the uncompressed kernel would overlap the boot uncompressing
    wrapper code.  Change it to an INFO.

    I initially thought this would be a 'WARNING' for those boards where
    the link_address should be fixed, so that the user could take action
    accordingly.  Changing the same to INFO.

    Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
* powerpc/44x: Fix build error on currituck platform
    Josh Boyer, 2011-12-20 (1 file changed, -1/+1)

    The MPIC_PRIMARY define was recently made "default" and the meaning
    was inverted to MPIC_SECONDARY.  This causes compile errors in
    currituck now, so fix it to the new manner of allocating mpics.

    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
* powerpc/boot: Change the load address for the wrapper to fit the kernel
    Suzuki Poulose, 2011-12-20 (1 file changed, -0/+20)

    The wrapper code which uncompresses the kernel in case of a 'ppc' boot
    is by default loaded at 0x00400000 and the kernel will be uncompressed
    to fit the location 0-0x00400000.  But with dynamic relocations, the
    size of the kernel may exceed 0x00400000 (4M).  This would cause an
    overlap of the uncompressed kernel and the boot wrapper, causing a
    failure in boot.  The message looks like:

        zImage starting: loaded at 0x00400000 (sp: 0x0065ffb0)
        Allocating 0x5ce650 bytes for kernel ...
        Insufficient memory for kernel at address 0!
        (_start=00400000, uncompressed size=00591a20)

    This patch shifts the load address of the boot wrapper code to the
    next higher MB, according to the size of the uncompressed vmlinux.
    With the patch, we get the following message while building the image:

        WARN: Uncompressed kernel (size 0x5b0344) overlaps the address of
        the wrapper(0x400000)
        WARN: Fixing the link_address of wrapper to (0x600000)

    Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
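    [Quick stand-alone illustration of the address math in the entry
    above -- round the uncompressed size up to the next megabyte.  This
    is not the wrapper's build script, just the arithmetic it performs:]

        #include <stdio.h>

        #define MB 0x100000UL

        int main(void)
        {
                /* size taken from the WARN message above */
                unsigned long vmlinux_size = 0x5b0344;
                unsigned long link_address = (vmlinux_size + MB - 1) & ~(MB - 1);

                printf("link_address = 0x%lx\n", link_address); /* 0x600000 */
                return 0;
        }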
* powerpc/44x: Enable CRASH_DUMP for 440x
    Suzuki Poulose, 2011-12-20 (1 file changed, -2/+2)

    Now that we have relocatable kernel, supporting CRASH_DUMP only
    requires turning the switches on for UP machines.

    We don't have kexec support on 47x yet.  Enabling SMP support would
    be done as part of enabling the PPC_47x support.

    Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
    Cc: Josh Boyer <jwboyer@gmail.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
* powerpc/44x: Enable CONFIG_RELOCATABLE for PPC44x
    Suzuki Poulose, 2011-12-20 (2 files changed, -3/+94)

    The following patch adds relocatable kernel support - based on
    processing of dynamic relocations - for PPC44x kernel.

    We find the runtime address of _stext and relocate ourselves based on
    the following calculation:

        virtual_base = ALIGN(KERNELBASE,256M) + MODULO(_stext.run,256M)

    relocate() is called with the Effective Virtual Base Address (as
    shown below):

                            | Phys. Addr | Virt. Addr |
        Page (256M)         |------------|------------|
        Boundary            |            |            |
                            |            |            |
        Kernel Load         |____________|__ _ _ _ _ _|<- Effective
        Addr (_stext)       |            |     ^      |   Virt. Base Addr
                            |            |     |      |
                            |            |reloc_offset|
                            |            |     |      |
                            |            |_____v______|<- (KERNELBASE)%256M
                            |            |            |
        Page (256M)         |------------|------------|
        Boundary            |            |            |

    The virt_phys_offset is updated accordingly, i.e.,

        virt_phys_offset = effective kernel virt base - kernstart_addr

    I have tested the patches on 440x platforms only.  However this
    should work fine for PPC_47x also, as we only depend on the runtime
    address and the current TLB XLAT entry for the startup code, which is
    available in r25.  I don't have access to a 47x board yet.  So, it
    would be great if somebody could test this on 47x.

    Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Kumar Gala <galak@kernel.crashing.org>
    Cc: Tony Breeds <tony@bakeyournoodle.com>
    Cc: Josh Boyer <jwboyer@gmail.com>
    Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
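    [Worked example of the virtual_base formula quoted above as a small
    stand-alone program; KERNELBASE and the load address are assumed
    values chosen only for illustration:]

        #include <stdio.h>

        #define SZ_256M    0x10000000UL
        #define KERNELBASE 0xc0000000UL  /* assumed, already 256M aligned */

        int main(void)
        {
                unsigned long stext_run = 0x01400000UL; /* kernel loaded at 20M */
                unsigned long virtual_base =
                        (KERNELBASE & ~(SZ_256M - 1)) + (stext_run % SZ_256M);

                printf("virtual_base = 0x%lx\n", virtual_base); /* 0xc1400000 */
                return 0;
        }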
* powerpc: Define virtual-physical translations for RELOCATABLE
    Suzuki Poulose, 2011-12-20 (2 files changed, -3/+89)

    We find the runtime address of _stext and relocate ourselves based on
    the following calculation:

        virtual_base = ALIGN(KERNELBASE,KERNEL_TLB_PIN_SIZE) +
                       MODULO(_stext.run,KERNEL_TLB_PIN_SIZE)

    relocate() is called with the Effective Virtual Base Address (as
    shown below):

                            | Phys. Addr | Virt. Addr |
        Page                |------------|------------|
        Boundary            |            |            |
                            |            |            |
        Kernel Load         |____________|__ _ _ _ _ _|<- Effective
        Addr (_stext)       |            |     ^      |   Virt. Base Addr
                            |            |     |      |
                            |            |reloc_offset|
                            |            |     |      |
                            |            |_____v______|<- (KERNELBASE)%TLB_SIZE
                            |            |            |
        Page                |------------|------------|
        Boundary            |            |            |

    On BookE, we need __va() & __pa() early in the boot process to access
    the device tree.  Currently this has been defined as:

        #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) -
                                  PHYSICAL_START + KERNELBASE)

    where:
        PHYSICAL_START is kernstart_addr - a variable updated at runtime.
        KERNELBASE is the compile time Virtual base address of the kernel.

    This won't work for us, as kernstart_addr is dynamic and will yield
    different results for __va()/__pa() for the same mapping.

    e.g., let the kernel be loaded at 64MB and KERNELBASE be 0xc0000000
    (same as PAGE_OFFSET).  In this case, we would be mapping 0 to
    0xc0000000, and kernstart_addr = 64M.  Now:

        __va(1MB) = (0x100000) - (0x4000000) + 0xc0000000 = 0xbc100000

    which is wrong.  It should be: 0xc0000000 + 0x100000 = 0xc0100000.

    On platforms which support AMP, like PPC_47x (based on 44x), the
    kernel could be loaded at highmem.  Hence we cannot always depend on
    the compile time constants for mapping.

    Here are the possible solutions:

    1) Update kernstart_addr (PHYSICAL_START) to match the physical
       address of the compile time KERNELBASE value, instead of the
       actual Physical_Address(_stext).  The disadvantage is that we may
       break other users of PHYSICAL_START.  They could be replaced with
       __pa(_stext).

    2) Redefine __va() & __pa() with a relocation offset:

        #ifdef CONFIG_RELOCATABLE_PPC32
        #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) -
                         PHYSICAL_START + (KERNELBASE + RELOC_OFFSET)))
        #define __pa(x) ((unsigned long)(x) + PHYSICAL_START -
                         (KERNELBASE + RELOC_OFFSET))
        #endif

       where RELOC_OFFSET could be:

       a) A variable, say relocation_offset (like kernstart_addr),
          updated at boot time.  This impacts performance, as we have to
          load an additional variable from memory.

          OR

       b) #define RELOC_OFFSET ((PHYSICAL_START & PPC_PIN_SIZE_OFFSET_MASK) - \
                                (KERNELBASE & PPC_PIN_SIZE_OFFSET_MASK))

          This introduces more calculations for doing the translation.

    3) Redefine __va() & __pa() with a new variable, i.e.,

        #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))

       where VIRT_PHYS_OFFSET:

        #ifdef CONFIG_RELOCATABLE_PPC32
        #define VIRT_PHYS_OFFSET virt_phys_offset
        #else
        #define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
        #endif /* CONFIG_RELOCATABLE_PPC32 */

       where virt_phys_offset is updated at runtime to:
       Effective KERNELBASE - kernstart_addr.

       Taking our example above:

        virt_phys_offset = effective_kernelstart_vaddr - kernstart_addr
                         = 0xc4000000 - 0x4000000
                         = 0xc0000000

       and __va(0x100000) = 0xc0000000 + 0x100000 = 0xc0100000, which is
       what we want.

    I have implemented (3) in the following patch, which has the same
    cost of operation as the existing one.

    I have tested the patches on 440x platforms only.  However this
    should work fine for PPC_47x also, as we only depend on the runtime
    address and the current TLB XLAT entry for the startup code, which is
    available in r25.  I don't have access to a 47x board yet.  So, it
    would be great if somebody could test this on 47x.

    Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Kumar Gala <galak@kernel.crashing.org>
    Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
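    [Stand-alone check of option (3) using the 64MB example above; the
    program runs in user space and is illustrative only:]

        #include <stdio.h>

        static unsigned long virt_phys_offset;

        #define __va(x) ((void *)((unsigned long)(x) + virt_phys_offset))
        #define __pa(x) ((unsigned long)(x) - virt_phys_offset)

        int main(void)
        {
                unsigned long kernstart_addr = 0x4000000UL;        /* loaded at 64MB */
                unsigned long effective_kernelbase = 0xc4000000UL; /* runtime virt base */

                virt_phys_offset = effective_kernelbase - kernstart_addr;

                printf("__va(1MB)       = %p\n", __va(0x100000UL));   /* 0xc0100000 */
                printf("__pa(__va(1MB)) = 0x%lx\n", __pa(__va(0x100000UL)));
                return 0;
        }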
* powerpc: Process dynamic relocations for kernel
    Suzuki Poulose, 2011-12-20 (6 files changed, -23/+256)

    The following patch implements the dynamic relocation processing for
    the PPC32 kernel.  relocate() accepts the target virtual address and
    relocates the kernel image to the same.

    Currently the following relocation types are handled:

        R_PPC_RELATIVE
        R_PPC_ADDR16_LO
        R_PPC_ADDR16_HI
        R_PPC_ADDR16_HA

    The last 3 relocations in the above list depend on the value of the
    symbol whose index is encoded in the relocation entry.  Hence we also
    need the symbol table for processing such relocations.

    Note: The GNU ld for ppc32 produces buggy relocations for relocation
    types that depend on symbols.  The value of the symbols with
    STB_LOCAL scope should be assumed to be zero. - Alan Modra

    Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
    Signed-off-by: Josh Poimboeuf <jpoimboe@linux.vnet.ibm.com>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Alan Modra <amodra@au1.ibm.com>
    Cc: Kumar Gala <galak@kernel.crashing.org>
    Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
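    [Simplified user-space sketch of how an R_PPC_RELATIVE entry is
    applied; the kernel's relocate() is implemented in assembly and also
    handles the ADDR16_* types, so treat this only as an outline of the
    idea:]

        #include <elf.h>
        #include <stdint.h>

        /* Patch each R_PPC_RELATIVE site to "runtime base + addend". */
        static void apply_relative_relocs(Elf32_Rela *rela, unsigned long count,
                                          uint8_t *image, uint32_t runtime_base)
        {
                for (unsigned long i = 0; i < count; i++) {
                        if (ELF32_R_TYPE(rela[i].r_info) != R_PPC_RELATIVE)
                                continue;   /* ADDR16_LO/HI/HA need the symtab */

                        uint32_t *where = (uint32_t *)(image + rela[i].r_offset);
                        *where = runtime_base + rela[i].r_addend;
                }
        }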
* powerpc/44x: Enable DYNAMIC_MEMSTART for 440x
    Suzuki Poulose, 2011-12-20 (2 files changed, -1/+13)

    DYNAMIC_MEMSTART (old RELOCATABLE) was restricted only to PPC_47x
    variants of 44x.  This patch enables DYNAMIC_MEMSTART for 440x based
    chipsets.

    Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
    Cc: Josh Boyer <jwboyer@gmail.com>
    Cc: Kumar Gala <galak@kernel.crashing.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: linux ppc dev <linuxppc-dev@lists.ozlabs.org>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
* powerpc: Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE
    Suzuki Poulose, 2011-12-20 (10 files changed, -31/+56)

    The current implementation of CONFIG_RELOCATABLE in BookE is based on
    mapping the page aligned kernel load address to KERNELBASE.  This
    approach however is not enough for platforms where the TLB page size
    is large (e.g., 256M on 44x).  So we are renaming the RELOCATABLE
    used currently in BookE to DYNAMIC_MEMSTART to reflect the actual
    method.

    The CONFIG_RELOCATABLE for PPC32 (BookE), based on processing of the
    dynamic relocations, will be introduced later in the patch series.

    This change would allow the use of the old method of RELOCATABLE for
    platforms which can afford to enforce the page alignment (platforms
    with smaller TLB size).

    Changes since v3:

    * Introduced a new config, NONSTATIC_KERNEL, to denote a kernel which
      is either RELOCATABLE or DYNAMIC_MEMSTART (Suggested by: Josh Boyer)

    Suggested-by: Scott Wood <scottwood@freescale.com>
    Tested-by: Scott Wood <scottwood@freescale.com>
    Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
    Cc: Scott Wood <scottwood@freescale.com>
    Cc: Kumar Gala <galak@kernel.crashing.org>
    Cc: Josh Boyer <jwboyer@gmail.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: linux ppc dev <linuxppc-dev@lists.ozlabs.org>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
* powerpc: Fix old bug in prom_init setting of the color
    Benjamin Herrenschmidt, 2011-12-19 (1 file changed, -1/+1)

    We have an array of 16 entries and a loop of 32 iterations... oops.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Only use initrd_end as the limit for alloc_bottom if it's inside the RMO
    Paul Mackerras, 2011-12-19 (1 file changed, -8/+9)

    As the kernels and initrds get bigger, boot-loaders and possibly
    kexec-tools will need to place the initrd outside the RMO.  When this
    happens we end up with no lowmem and the boot doesn't get very far.

    Only use initrd_end as the limit for alloc_bottom if it's inside the
    RMO.

    Signed-off-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Fix comment explaining our VSID layout
    Anton Blanchard, 2011-12-19 (1 file changed, -4/+3)

    We support 16TB of user address space and half a million contexts, so
    update the comment to reflect this.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Fix wrong divisor in usecs_to_cputime
    Andreas Schwab, 2011-12-19 (2 files changed, -8/+8)

    Commit d57af9b (taskstats: use real microsecond granularity for CPU
    times) renamed msecs_to_cputime to usecs_to_cputime, but failed to
    update all numbers on the way.  This causes nonsensical cpu
    idle/iowait values to be displayed in /proc/stat (the only user of
    usecs_to_cputime so far).

    This also renames __cputime_msec_factor to __cputime_usec_factor,
    adapting its value and using it directly in cputime_to_usecs instead
    of doing two multiplications.

    Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
    Acked-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
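    [Back-of-the-envelope illustration of why a stale per-millisecond
    factor produces nonsensical values once the input is microseconds;
    the 512MHz timebase is an assumed example, not a number from the
    patch:]

        #include <stdio.h>

        int main(void)
        {
                unsigned long tb_ticks_per_sec = 512000000UL;            /* assumed */
                unsigned long usec_factor = tb_ticks_per_sec / 1000000;  /* 512 */
                unsigned long msec_factor = tb_ticks_per_sec / 1000;     /* 512000 */

                unsigned long usecs = 250;
                printf("correct: %lu ticks\n", usecs * usec_factor);
                printf("buggy  : %lu ticks (1000x too big)\n", usecs * msec_factor);
                return 0;
        }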
* powerpc/mm: Fix section mismatch for read_n_cells
    David Rientjes, 2011-12-19 (1 file changed, -1/+1)

    read_n_cells() cannot be marked as .devinit.text since it is
    referenced from two functions that are not in that section:
    of_get_lmb_size() and hot_add_drconf_scn_to_nid().

    Signed-off-by: David Rientjes <rientjes@google.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/mm: Fix section mismatch for mark_reserved_regions_for_nid
    David Rientjes, 2011-12-19 (1 file changed, -1/+1)

    mark_reserved_regions_for_nid() is only called from do_init_bootmem(),
    which is in .init.text, so it must be in the same section to avoid a
    section mismatch warning.

    Reported-by: Subrata Modak <subrata@linux.vnet.ibm.com>
    Signed-off-by: David Rientjes <rientjes@google.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Add __SANE_USERSPACE_TYPES__ to asm/types.h for LL64
    Matt Evans, 2011-12-19 (1 file changed, -1/+4)

    PPC64 uses long long for u64 in the kernel, but powerpc's asm/types.h
    prevents 64-bit userland from seeing this definition, instead
    defaulting to u64 == long in userspace.  Some user programs (e.g.
    kvmtool) may actually want LL64, so this patch adds a check for
    __SANE_USERSPACE_TYPES__ so that, if defined, int-ll64.h is included
    instead.

    Signed-off-by: Matt Evans <matt@ozlabs.org>
    Acked-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: POWER7 optimised copy_to_user/copy_from_user using VMX
    Anton Blanchard, 2011-12-19 (5 files changed, -2/+744)

    Implement a POWER7 optimised copy_to_user/copy_from_user using VMX.
    For large aligned copies this new loop is over 10% faster, and for
    large unaligned copies it is over 200% faster.

    If we take a fault we fall back to the old version; this keeps things
    relatively simple and easy to verify.

    On POWER7 unaligned stores rarely slow down - they only flush when a
    store crosses a 4KB page boundary.  Furthermore this flush is handled
    completely in hardware and should be 20-30 cycles.

    Unaligned loads on the other hand flush much more often - whenever
    crossing a 128 byte cache line, or a 32 byte sector if either sector
    is an L1 miss.

    Considering this information we really want to get the loads aligned
    and not worry about the alignment of the stores.  Microbenchmarks
    confirm that this approach is much faster than the current unaligned
    copy loop that uses shifts and rotates to ensure both loads and
    stores are aligned.

    We also want to try and do the stores in cacheline aligned, cacheline
    sized chunks.  If the store queue is unable to merge an entire
    cacheline of stores then the L2 cache will have to do a
    read/modify/write.  Even worse, we will serialise this with the
    stores in the next iteration of the copy loop since both iterations
    hit the same cacheline.

    Based on this, the new loop does the following things:

    1 - 127 bytes
        Get the source 8 byte aligned and use 8 byte loads and stores.
        Pretty boring and similar to how the current loop works.

    128 - 4095 bytes
        Get the source 8 byte aligned and use 8 byte loads and stores,
        1 cacheline at a time.  We aren't doing the stores in cacheline
        aligned chunks so we will potentially serialise once per
        cacheline.  Even so it is much better than the loop we have today.

    4096 - bytes
        If both source and destination have the same alignment get them
        both 16 byte aligned, then get the destination cacheline aligned.
        Do cacheline sized loads and stores using VMX.

        If source and destination do not have the same alignment, we get
        the destination cacheline aligned, and use permute to do aligned
        loads.

        In both cases the VMX loop should be optimal - we always do
        aligned loads and stores and are always doing stores in cacheline
        aligned, cacheline sized chunks.

    To be able to use VMX we must be careful about interrupts and
    sleeping.  We don't use the VMX loop when in an interrupt (which
    should be rare anyway) and we wrap the VMX loop in
    disable/enable_pagefault and fall back to the existing
    copy_tofrom_user loop if we do need to sleep.

    The VMX breakpoint of 4096 bytes was chosen using this microbenchmark:

        http://ozlabs.org/~anton/junkcode/copy_to_user.c

    Since we are using VMX and there is a cost to saving and restoring
    the user VMX state there are two broad cases we need to benchmark:

    - Best case - userspace never uses VMX
    - Worst case - userspace always uses VMX

    In reality a userspace process will sit somewhere between these two
    extremes.  Since we need to test both aligned and unaligned copies we
    end up with 4 combinations.  The point at which the VMX loop begins
    to win is:

        0% VMX:    aligned 2048 bytes,  unaligned 2048 bytes
        100% VMX:  aligned 16384 bytes, unaligned 8192 bytes

    Considering this is a microbenchmark, the data is hot in cache and
    the VMX loop has better store queue merging properties, we set the
    breakpoint to 4096 bytes, a little below the unaligned breakpoints.

    Some future optimisations we can look at:

    - Looking at the perf data, a significant part of the cost when a
      task is always using VMX is the extra exception we take to restore
      the VMX state.  As such we should do something similar to the x86
      optimisation that restores FPU state for heavy users.  ie:

        /*
         * If the task has used fpu the last 5 timeslices, just do a full
         * restore of the math state immediately to avoid the trap; the
         * chances of needing FPU soon are obviously high now
         */
        preload_fpu = tsk_used_math(next_p) && next_p->fpu_counter > 5;

      and

        /*
         * fpu_counter contains the number of consecutive context switches
         * that the FPU is used.  If this is over a threshold, the lazy fpu
         * saving becomes unlazy to save the trap.  This is an unsigned char
         * so that after 256 times the counter wraps and the behavior turns
         * lazy again; this to deal with bursty apps that only use FPU for
         * a short time
         */

    - We could create a paca bit to mirror the VMX enabled MSR bit and
      check that first, avoiding multiple calls to enable_kernel_altivec.
      That should help with iovec based system calls like readv.

    - We could have two VMX breakpoints, one for when we know the user
      VMX state is loaded into the registers and one when it isn't.  This
      could be a second bit in the paca so we can calculate the break
      points quickly.

    - One suggestion from Ben was to save and restore the VSX registers
      we use inline instead of using enable_kernel_altivec.

    [BenH: Fixed a problem with preempt and fixed build without
    CONFIG_ALTIVEC]

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Use rwsem.h from generic location
    Richard Kuo, 2011-12-16 (2 files changed, -132/+2)

    As of commit dd472da38, rwsem.h was moved into asm-generic.  This
    patch removes the arch file and points the build at its new location.

    Signed-off-by: Richard Kuo <rkuo@codeaurora.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
*   Merge remote-tracking branch 'jwb/next' into next
|\
    Benjamin Herrenschmidt, 2011-12-16 (20 files changed, -13/+1158)

    Conflicts:
        arch/powerpc/platforms/40x/ppc40x_simple.c
| * powerpc/47x: Add support for the new IBM currituck platform
    Tony Breeds, 2011-12-09 (8 files changed, -1/+688)

    Based on original work by David 'Shaggy' Kleikamp.

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
| * powerpc/476fpe: Add 476fpe SoC code
    Tony Breeds, 2011-12-09 (6 files changed, -1/+84)

    Based on original work by David 'Shaggy' Kleikamp.

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
| * powerpc/boot: Add mfdcrx
    Tony Breeds, 2011-12-09 (1 file changed, -0/+6)

    Needed for currituck support.

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
| * powerpc/boot: Add extended precision shifts to the boot wrapper.
    Tony Breeds, 2011-12-09 (1 file changed, -0/+52)

    The upcoming currituck patches will need to do 64-bit shifts, which
    will fail with an undefined symbol without this patch.

    I looked at linking against libgcc, but we can't guarantee that
    libgcc was compiled with soft-float.  Also, using ../lib/div64.S or
    ../kernel/misc_32.S will break the build, as the .o's need to be
    built with different flags for the bootwrapper vs the kernel.  So for
    now the easiest option is to just copy code from
    arch/powerpc/kernel/misc_32.S.  I don't think this code changes too
    often ;P

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
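    [Illustration in plain C (not the wrapper's assembly) of what a
    64-bit left-shift helper such as libgcc's __ashldi3 has to do on a
    32-bit target: build the result from the two 32-bit halves, handling
    shifts of 32 or more bits specially:]

        #include <stdio.h>
        #include <stdint.h>

        static uint64_t shl64(uint32_t hi, uint32_t lo, unsigned int n)
        {
                uint32_t out_hi, out_lo;

                if (n >= 32) {          /* everything moves into the high word */
                        out_hi = lo << (n - 32);
                        out_lo = 0;
                } else if (n == 0) {    /* avoid the undefined lo >> 32 below */
                        out_hi = hi;
                        out_lo = lo;
                } else {
                        out_hi = (hi << n) | (lo >> (32 - n));
                        out_lo = lo << n;
                }
                return ((uint64_t)out_hi << 32) | out_lo;
        }

        int main(void)
        {
                /* 0x180000000 << 4 == 0x1800000000 */
                printf("0x%llx\n", (unsigned long long)shl64(0x1, 0x80000000, 4));
                return 0;
        }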
| * powerpc/44x: Removing dead CONFIG_PPC47x
    Christoph Egger, 2011-12-09 (1 file changed, -4/+0)

    CONFIG_PPC47x doesn't exist in Kconfig, and no 476 processor calls
    this function ppc44x_pin_tlb() as it has its own ppc47x_pin_tlb().
    This code is probably an artifact of the original 476 code that
    shouldn't have made it upstream.

    Signed-off-by: Christoph Egger <siccegge@cs.fau.de>
    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
| * powerpc/44x: pci: Setup the dma_window properties for each pci_controller
    Tony Breeds, 2011-12-09 (1 file changed, -0/+6)

    Needed if you want to use swiotlb, harmless otherwise.

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
| * powerpc/44x: pci: Add a want_sdr flag into ppc4xx_pciex_hwops
    Tony Breeds, 2011-12-09 (1 file changed, -6/+14)

    Currituck doesn't need nor use SDR, so aborting the pci setup if
    there is no sdr-base would be bad.

    Add a flag to ppc4xx_pciex_hwops for the backends to state if they
    need SDR and then only complain and abort if they do and it's not
    found in the device tree.

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
| * powerpc/44x: pci: Use PCI_BASE_ADDRESS_MEM_PREFETCH rather than magic value.
    Tony Breeds, 2011-12-09 (1 file changed, -1/+1)

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
| * powerpc/40x: Add APM8018X SOC support
    Tanmay Inamdar, 2011-11-30 (5 files changed, -0/+307)

    The AppliedMicro APM8018X embedded processor targets embedded
    applications that require low power and a small footprint.  It
    features a PowerPC 405 processor core built in a 65nm low-power CMOS
    process with a five-stage pipeline executing up to one instruction
    per cycle.  The family has 128-kbytes of on-chip memory, a 128-bit
    local bus and on-chip DDR2 SDRAM controller with 16-bit interface.

    Signed-off-by: Tanmay Inamdar <tinamdar@apm.com>
    Signed-off-by: Josh Boyer <jwboyer@gmail.com>
* | powerpc/pmac: Fix SMP kernels on pre-core99 UP machines
    Benjamin Herrenschmidt, 2011-12-16 (1 file changed, -1/+1)

    The code for "powersurge" SMP would kick in and cause a crash at boot
    due to the lack of a NULL test.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* | powerpc/pmac: Simplify old pmac PIC interrupt handling
    Benjamin Herrenschmidt, 2011-12-16 (1 file changed, -28/+6)

    In the old days, we treated all interrupts from the legacy Apple home
    made interrupt controllers as level, with a trick reading the "level"
    register along with the "event" register to work around bugs where it
    would occasionally fail to latch some events.  Doing so appeared to
    work fine for both level and edge interrupts.

    Later on, we discovered in Darwin source the magic masks that define
    which interrupts are actually level and which are edge, and
    implemented a different algorithm, more similar to what Apple does,
    that treats those differently.

    I recently discovered however that this caused problems (including
    loss of interrupts) with an old Wallstreet PowerBook when trying to
    use the internal modem (connected to a cascaded controller).

    It looks like some interrupts are treated as edge while they are
    really level, and I'm starting to seriously doubt the correctness of
    the Darwin code (which has other obvious bugs when you read it,
    so ...)

    This patch reverts to our original behaviour of treating everything
    as a level interrupt.  It appears to solve the problems with the
    modem on the Wallstreet, and everything else seems to be working
    properly as well.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* | Merge branch 'kexec' into next
|\ \
    Benjamin Herrenschmidt, 2011-12-16 (6 files changed, -203/+214)
| * | powerpc/kdump: Only save CPU state first time through the secondary CPU capture code
    Anton Blanchard, 2011-12-08 (1 file changed, -12/+21)

    We might enter the secondary CPU capture code twice, eg if we have to
    unstick some CPUs with a system reset.  In this case we don't want to
    overwrite the state on CPUs that had made it into the capture code
    OK, so use the cpus_state_saved cpumask for that and make it local to
    crash_ipi_callback.

    For controlling progress now use atomic_t cpus_in_crash to count how
    many CPUs have made it into the kdump code, and time_to_dump to tell
    everyone it's time to dump.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | powerpc/kdump: Delay before sending IPI on a system reset
    Anton Blanchard, 2011-12-08 (1 file changed, -7/+26)

    If we enter the kdump code via system reset, wait a bit before
    sending the IPI to capture all secondary CPUs.  Without it we race
    with the hypervisor that is issuing the system reset to each CPU.  If
    the IPI gets there first the system reset oops output then shows the
    register state of the IPI handler which is not what we want.

    I took the opportunity to add defines for all the various delays we
    have.  There's no need for cpu_relax when we are doing an mdelay, so
    remove them too.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | powerpc/xics: Reset the CPPR if H_EOI fails
    Anton Blanchard, 2011-12-08 (1 file changed, -6/+7)

    I have an intermittent kdump fail where the hypervisor fails an
    H_EOI.  As a result our CPPR is never reset to 0xff and we no longer
    accept interrupts.

    This patch calls icp_hv_set_cppr to reset the CPPR if H_EOI fails,
    fixing the kdump fail.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | powerpc: Reduce pseries panic timeout from 180s to 10s
    Anton Blanchard, 2011-12-08 (1 file changed, -0/+2)

    We've had a 180 second panic timeout on ppc64 for as long as I can
    remember.  This patch reduces it to 10 seconds on pseries for a few
    reasons:

    - Almost all pseries machines have a hypervisor console so panic
      output will be available in a scrollback buffer.

    - The 180 seconds impacts our availability, users (other than kernel
      hackers) just want the box to come back around so it can continue
      its work.

    - I spend a lot of my life staring at the 180 second panic timeout.
      Many pseries machines take minutes to power cycle, so it's quicker
      to sit through the 180 seconds than it is to power cycle.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | powerpc: Rework die()
    Anton Blanchard, 2011-12-08 (2 files changed, -56/+74)

    Our die() code was based off a very old x86 version.  Update it to
    mirror the current x86 code.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | powerpc: Cleanup crash/kexec code
    Anton Blanchard, 2011-12-08 (2 files changed, -19/+3)

    Remove some unnecessary defines and fix some spelling mistakes.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | powerpc/kdump: Use setjmp/longjmp to handle kdump and system reset recursion
    Anton Blanchard, 2011-12-08 (1 file changed, -15/+57)

    We can handle recursion caused by system reset by reusing the crash
    shutdown fault handler.

    Since we don't have an OS triggerable NMI, if all CPUs don't make it
    into kdump then we tell the user to issue a system reset.  However if
    we have a panic timeout set we cannot wait forever and must continue
    the kdump.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
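    [Toy user-space illustration of the setjmp/longjmp idea described
    above, with a SIGSEGV handler standing in for the kernel's crash
    shutdown fault handler; this is not the kdump code itself:]

        #include <setjmp.h>
        #include <signal.h>
        #include <stdio.h>

        static sigjmp_buf crash_jmp;

        static void fault_handler(int sig)
        {
                (void)sig;
                siglongjmp(crash_jmp, 1);   /* skip the faulting step, keep going */
        }

        static void risky_shutdown_step(void)
        {
                volatile int *p = NULL;
                *p = 1;                     /* simulated recursive fault */
        }

        int main(void)
        {
                signal(SIGSEGV, fault_handler);

                if (sigsetjmp(crash_jmp, 1) == 0)
                        risky_shutdown_step();
                else
                        printf("step faulted, continuing with the dump\n");

                printf("dump complete\n");
                return 0;
        }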
| * | powerpc: Remove broken and complicated kdump system reset code
    Anton Blanchard, 2011-12-08 (3 files changed, -101/+25)

    We have a lot of complicated logic that handles possible recursion
    between kdump and a system reset exception.  We can solve this in a
    much simpler way using the same setjmp/longjmp tricks xmon does.

    As a first step, this patch removes the old system reset code.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | powerpc: Give us time to get all oopses out before panicking
    Anton Blanchard, 2011-12-08 (1 file changed, -1/+13)

    I've been seeing truncated output when people send system reset info
    to me.  We should see a backtrace for every CPU, but the panic() code
    takes the box down before they all make it out to the console.

    The panic code runs unlocked so we also see corrupted console output.

    If we are going to panic, then delay 1 second before calling into the
    panic code.  Move oops_exit inside the die lock and put a newline
    between oopses for clarity.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* | | Merge branch 'ps3' into next
|\ \ \
    Benjamin Herrenschmidt, 2011-12-16 (8 files changed, -147/+154)
| * | | powerpc/ps3: Update ps3_defconfig
    Geoff Levand, 2011-12-08 (1 file changed, -19/+20)

    Refresh ps3_defconfig to latest kernel sources and change the options:

        CONFIG_PPP=m        to CONFIG_PPP=n
        CONFIG_NAMESPACES=y to CONFIG_NAMESPACES=n
        CONFIG_NUMA=y       to CONFIG_NUMA=n

    Signed-off-by: Geoff Levand <geoff@infradead.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | powerpc/ps3: Add __init to ps3_smp_probe
    Geoff Levand, 2011-12-08 (1 file changed, -1/+1)

    Add an __init annotation to the ps3_smp_probe() routine.  Fixes build
    warnings like these when CONFIG_DEBUG_SECTION_MISMATCH=y:

        WARNING: Section mismatch in reference from the function .ps3_smp_probe()

    Signed-off-by: Geoff Levand <geoff@infradead.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | powerpc/ps3: Fix PS3 repository build warnings
    Geoff Levand, 2011-12-08 (1 file changed, -67/+68)

    Fix some PS3 repository.c build warnings when DEBUG is defined.  Also
    change most pr_debug calls to pr_devel calls.

    Fixes warnings like these:

        format '%lx' expects type 'long unsigned int', but argument 7 has type 'u64'

    Signed-off-by: Geoff Levand <geoff@infradead.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * | | powerpc/ps3: Fix hcall lv1_read_repository_node
    Geoff Levand, 2011-12-08 (2 files changed, -3/+3)

    The lv1 hcall #91 should be named lv1_read_repository_node, and not
    lv1_get_repository_node_value.  Adjust the lv1 hcall table and all
    calls.

    Signed-off-by: Geoff Levand <geoff@infradead.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>