Commit log, newest first. Each entry: commit message (Author, Date, Files changed, Lines -removed/+added)
* cpumask: Use nr_cpu_ids in seq_cpumask (Rusty Russell, 2008-12-29, 1 file, -1/+1)

    Impact: cleanup, futureproof

    nr_cpu_ids is the (badly named) runtime limit on possible CPU numbers; i.e. the variable version of NR_CPUS. With the new cpumask operators, only bits less than this are defined. So we should use it everywhere, rather than NR_CPUS. Eventually this will make it possible to allocate cpumasks of the minimal length at runtime.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
    Acked-by: Ingo Molnar <mingo@elte.hu>

* cpumask: use new cpumask API in drivers/infiniband/hw/ipath (Rusty Russell, 2008-12-29, 1 file, -4/+4)

    Impact: cleanup

    We're moving from handing around cpumask_t's to handing around struct cpumask *'s. cpus_*, cpumask_t and cpu_*_map are deprecated: convert to cpumask_*, cpu_*_mask.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Ralph Campbell <infinipath@qlogic.com>

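    [Example] A minimal sketch of the conversion pattern these driver commits apply; the mask and helper names below are illustrative, not taken from the ipath or ehca sources:

        #include <linux/cpumask.h>

        static struct cpumask tracked_cpus;     /* hypothetical driver-private mask */

        static void track_if_online(int cpu)
        {
                /* Deprecated style being removed:
                 *      if (cpu_isset(cpu, cpu_online_map))
                 *              cpu_set(cpu, tracked_cpus);
                 */
                if (cpumask_test_cpu(cpu, cpu_online_mask))
                        cpumask_set_cpu(cpu, &tracked_cpus);
        }
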
* cpumask: use new cpumask API in drivers/infiniband/hw/ehca (Rusty Russell, 2008-12-29, 1 file, -5/+5)

    Impact: cleanup

    We're moving from handing around cpumask_t's to handing around struct cpumask *'s. cpus_*, cpumask_t and cpu_*_map are deprecated: convert to cpumask_*, cpu_*_mask.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Acked-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
    Tested-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
    Cc: Christoph Raisch <raisch@de.ibm.com>

* cpumask: use for_each_online_cpu() in drivers/infiniband/hw/ehca/ehca_irq.c (Rusty Russell, 2008-12-29, 1 file, -4/+3)

    Impact: cleanup

    In future, accessing cpu numbers beyond nr_cpu_ids (the runtime limit) will be undefined. We can avoid future problems by using for_each_online_cpu() here.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Acked-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
    Tested-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
    Cc: Christoph Raisch <raisch@de.ibm.com>

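    [Example] A sketch of the iteration change described above, with a hypothetical do_something() callback standing in for the driver's per-cpu work:

        #include <linux/cpumask.h>

        static void visit_online_cpus(void (*do_something)(int cpu))
        {
                int cpu;

                /* Old pattern, which touches indices all the way up to NR_CPUS
                 * even when far fewer CPUs can ever exist:
                 *      for (cpu = 0; cpu < NR_CPUS; cpu++)
                 *              if (cpu_online(cpu))
                 *                      do_something(cpu);
                 */
                for_each_online_cpu(cpu)
                        do_something(cpu);
        }
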
* cpumask: arch_send_call_function_ipi_mask: core (Rusty Russell, 2008-12-29, 1 file, -1/+7)

    Impact: new API to reduce stack usage

    We're weaning the core code off handing cpumask's around on-stack. This introduces arch_send_call_function_ipi_mask().

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

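    [Example] One plausible shape of the transition shim, assuming unconverted architectures keep the old hook that takes a cpumask_t by value (the actual fallback is not quoted in this log):

        #ifndef arch_send_call_function_ipi_mask
        #define arch_send_call_function_ipi_mask(maskp) \
                arch_send_call_function_ipi(*(maskp))
        #endif
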
* cpumask: smp_call_function_many() (Rusty Russell, 2008-12-29, 2 files, -97/+57)

    Impact: Implementation change to remove cpumask_t from stack.

    Actually change smp_call_function_mask() to smp_call_function_many(). We avoid cpumasks on the stack in this version. (S390 has its own version, but that's going away apparently.)

    We have to do some dancing to figure out if 0 or 1 other cpus are in the mask supplied and the online mask without allocating a tmp cpumask. It's still fairly cheap.

    We allocate the cpumask at the end of the call_function_data structure: if allocation fails we fallback to smp_call_function_single rather than using the baroque quiescing code (which needs a cpumask on stack).

    (Thanks to Hiroshi Shimamoto for spotting several bugs in previous versions!)

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
    Cc: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
    Cc: npiggin@suse.de
    Cc: axboe@kernel.dk

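    [Example] A sketch of the "cpumask at the end of the structure" idea, using hypothetical names; the real call_function_data layout is not shown in this log:

        #include <linux/cpumask.h>
        #include <linux/slab.h>

        struct call_sketch {                    /* hypothetical, not the kernel's struct */
                void (*func)(void *info);
                void *info;
                unsigned long mask_bits[];      /* nr_cpu_ids bits of cpumask storage */
        };

        static struct call_sketch *call_sketch_alloc(gfp_t gfp)
        {
                /* cpumask_size() is the runtime size of one cpumask in bytes. */
                return kmalloc(sizeof(struct call_sketch) + cpumask_size(), gfp);
        }

        static struct cpumask *call_sketch_mask(struct call_sketch *cs)
        {
                return to_cpumask(cs->mask_bits);
        }
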
* cpumask: make set_cpu_*/init_cpu_* out-of-line (Rusty Russell, 2008-12-29, 2 files, -46/+54)

    They're only for use in boot/cpu hotplug code anyway, and this avoids the use of deprecated cpu_*_map.

    Stephen Rothwell points out that gcc 4.2.4 (on powerpc at least) didn't like the cast away of const anyway:

        include/linux/cpumask.h: In function 'set_cpu_possible':
        include/linux/cpumask.h:1052: warning: passing argument 2 of 'cpumask_set_cpu' discards qualifiers from pointer target type

    So this kills two birds with one stone.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

* cpumask: make cpumask.h eat its own dogfood. (Rusty Russell, 2008-12-29, 1 file, -37/+38)

    Changes:
    1) cpumask_t to struct cpumask
    2) cpus_weight_nr to cpumask_weight
    3) cpu_isset to cpumask_test_cpu
    4) ->bits to cpumask_bits()
    5) cpu_*_map to cpu_*_mask
    6) for_each_cpu_mask_nr to for_each_cpu

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

* cpumask: switch over to cpu_online/possible/active/present_mask: core (Rusty Russell, 2008-12-29, 2 files, -72/+52)

    Impact: cleanup

    This implements the obsolescent cpu_online_map in terms of cpu_online_mask, rather than the other way around. Same for the other maps.

    The documentation comments are also updated to refer to _mask rather than _map.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>

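    [Example] A sketch of the inversion described above, assuming the obsolescent *_map names become aliases built on the new *_mask pointers:

        /* Canonical objects after the switch: */
        extern const struct cpumask *const cpu_online_mask;
        extern const struct cpumask *const cpu_possible_mask;

        /* Obsolescent names kept as aliases on top of them (the cast drops const
         * for legacy code that still writes through the old names): */
        #define cpu_online_map          (*(cpumask_t *)cpu_online_mask)
        #define cpu_possible_map        (*(cpumask_t *)cpu_possible_mask)
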
* bitmap: fix seq_bitmap and seq_cpumask to take const pointer (Rusty Russell, 2008-12-29, 2 files, -3/+5)

    Impact: cleanup

    seq_bitmap just calls bitmap_scnprintf on the bits: that arg can be const. Similarly, seq_cpumask just calls seq_bitmap.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

* bitmap: test for constant as well as small size for inline versions (Rusty Russell, 2008-12-29, 1 file, -16/+19)

    Impact: reduce text size

    bitmap_zero et al have a fastpath for nbits <= BITS_PER_LONG, but this should really only apply where the nbits is known at compile time.

    This only saves about 1200 bytes on an allyesconfig kernel, but with cpumasks going variable that number will increase.

            text      data     bss      dec       hex      filename
        35327852   5035607  6782976  47146435  2cf65c3   vmlinux-before
        35326640   5035607  6782976  47145223  2cf6107   vmlinux-after

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

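    [Example] A sketch of the compile-time-constant test, using an assumed helper name and shown applied to a bitmap_zero-style operation:

        #include <linux/bitmap.h>
        #include <linux/string.h>

        /* Take the single-word fastpath only when the size is a compile-time
         * constant, so runtime-sized bitmaps go to the general code. */
        #define small_const_nbits(nbits) \
                (__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG)

        static inline void bitmap_zero_sketch(unsigned long *dst, int nbits)
        {
                if (small_const_nbits(nbits))
                        *dst = 0UL;
                else
                        memset(dst, 0, BITS_TO_LONGS(nbits) * sizeof(unsigned long));
        }
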
* cpumask: make CONFIG_NR_CPUS always valid. (Rusty Russell, 2008-12-29, 1 file, -8/+8)

    Impact: cleanup

    Currently we have NR_CPUS, which is 1 on UP, and CONFIG_NR_CPUS on SMP. If we make CONFIG_NR_CPUS always valid (and always 1 on !SMP), we can skip the middleman.

    This also allows us to find and check all the unaudited NR_CPUS usage as we prepare for v. large NR_CPUS.

    To avoid breaking every arch, we cheat and do this for the moment in the header if the arch doesn't.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>

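    [Example] A sketch of the interim header fallback mentioned above, for architectures whose Kconfig does not yet define NR_CPUS:

        #ifndef CONFIG_NR_CPUS
        #define CONFIG_NR_CPUS  1       /* !SMP, or an arch not yet providing it */
        #endif

        #define NR_CPUS         CONFIG_NR_CPUS
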
*   Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 (Rusty Russell, 2008-12-29, 2974 files, -81236/+144501)
|\

| *   Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc (Linus Torvalds, 2008-12-29, 254 files, -2728/+7138)
| |\

    * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (144 commits)
      powerpc/44x: Support 16K/64K base page sizes on 44x
      powerpc: Force memory size to be a multiple of PAGE_SIZE
      powerpc/32: Wire up the trampoline code for kdump
      powerpc/32: Add the ability for a classic ppc kernel to be loaded at 32M
      powerpc/32: Allow __ioremap on RAM addresses for kdump kernel
      powerpc/32: Setup OF properties for kdump
      powerpc/32/kdump: Implement crash_setup_regs() using ppc_save_regs()
      powerpc: Prepare xmon_save_regs for use with kdump
      powerpc: Remove default kexec/crash_kernel ops assignments
      powerpc: Make default kexec/crash_kernel ops implicit
      powerpc: Setup OF properties for ppc32 kexec
      powerpc/pseries: Fix cpu hotplug
      powerpc: Fix KVM build on ppc440
      powerpc/cell: add QPACE as a separate Cell platform
      powerpc/cell: fix build breakage with CONFIG_SPUFS disabled
      powerpc/mpc5200: fix error paths in PSC UART probe function
      powerpc/mpc5200: add rts/cts handling in PSC UART driver
      powerpc/mpc5200: Make PSC UART driver update serial errors counters
      powerpc/mpc5200: Remove obsolete code from mpc5200 MDIO driver
      powerpc/mpc5200: Add MDMA/UDMA support to MPC5200 ATA driver
      ...

    Fix trivial conflict in drivers/char/Makefile as per Paul's directions

| | * powerpc/44x: Support 16K/64K base page sizes on 44x (Ilya Yanok, 2008-12-28, 10 files, -48/+130)

    This adds support for 16k and 64k page sizes on PowerPC 44x processors.

    The PGDIR table is much smaller than a page when using 16k or 64k pages (512 and 32 bytes respectively) so we allocate the PGDIR with kzalloc() instead of __get_free_pages().

    One PTE table covers rather a large memory area when using 16k or 64k pages (32MB or 512MB respectively), so we can easily put FIXMAP and PKMAP in the area covered by one PTE table.

    Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
    Signed-off-by: Vladimir Panfilov <pvr@emcraft.com>
    Signed-off-by: Ilya Yanok <yanok@emcraft.com>
    Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc: Force memory size to be a multiple of PAGE_SIZE (Hollis Blanchard, 2008-12-28, 1 file, -1/+15)

    Ensure that total memory size is page-aligned, because otherwise mark_bootmem() gets upset.

    This error case was triggered by using 64 KiB pages in the kernel while arch/powerpc/boot/4xx.c arbitrarily reduced the amount of memory by 4096 (to work around a chip bug that affects the last 256 bytes of physical memory).

    Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/32: Wire up the trampoline code for kdump (Dale Farnsworth, 2008-12-23, 3 files, -1/+16)

    Wire up the trampoline code for ppc32 to relay exceptions from the vectors at address 0 to vectors at address 32MB, and modify Kconfig to enable Kdump support for all classic powerpcs.

    Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
    Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/32: Add the ability for a classic ppc kernel to be loaded at 32M (Dale Farnsworth, 2008-12-23, 5 files, -14/+15)

    Add the ability for a classic ppc kernel to be loaded at an address of 32MB. This is done by fixing a few places that assume we are loaded at address 0, and by changing several uses of KERNELBASE to use PAGE_OFFSET instead.

    Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
    Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/32: Allow __ioremap on RAM addresses for kdump kernel (Anton Vorontsov, 2008-12-23, 1 file, -0/+2)

    While for debugging it is good to catch bogus users of ioremap, for kdump support it is more convenient to use __ioremap for copy_oldmem_page() (exactly as we do for PPC64 currently).

    Note that copy_oldmem_page() calls __ioremap with flags set to '0', so it should be safe with regard to the caches.

    The other option is to use kmap_atomic_pfn()[1], but it will not work for kernels compiled without HIGHMEM. That is, in the case of a board with 256MB RAM and crashkernel=64M@32M, the !HIGHMEM capturing kernel maps the 0-96M range, which does not include all the memory needed to capture the dump. And, obviously, accessing anything above 96M will cause faults.

    [1] http://ozlabs.org/pipermail/linuxppc-dev/2007-November/046747.html

    Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/32: Setup OF properties for kdump (Dale Farnsworth, 2008-12-23, 2 files, -52/+40)

    Refactor the setting of kdump OF properties, moving the common code from machine_kexec_64.c to machine_kexec.c where it can be used on both ppc64 and ppc32. This will be needed for kdump to work on ppc32 platforms.

    Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
    Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/32/kdump: Implement crash_setup_regs() using ppc_save_regs() (Anton Vorontsov, 2008-12-23, 2 files, -10/+7)

    This replaces the dummy crash_setup_regs function with a full-fledged crash_setup_regs implementation. On PPC32 we simply use the new ppc_save_regs function to dump the registers.

    Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc: Prepare xmon_save_regs for use with kdump (Anton Vorontsov, 2008-12-23, 5 files, -5/+12)

    Today the arch/powerpc/xmon/setjmp.S file contains only the xmon_save_regs function. We want to use it for kdump purposes, so let's move the file into arch/powerpc/kernel/ and give the function a more generic name (ppc_save_regs).

    Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc: Remove default kexec/crash_kernel ops assignments (Anton Vorontsov, 2008-12-23, 7 files, -43/+0)

    Default ops are implicit now.

    Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc: Make default kexec/crash_kernel ops implicit (Anton Vorontsov, 2008-12-23, 1 file, -12/+9)

    This removes the need for each platform to specify default kexec and crash kernel ops, thus effectively adding working kexec support for most 6xx/7xx/7xxx-based boards.

    Platforms that can't cope with the default ops will explode in some weird way (a hang or reboot is most likely), which means that the board's kexec support should be fixed or blacklisted via a dummy _prepare callback returning -ENOSYS.

    Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc: Setup OF properties for ppc32 kexec (Dale Farnsworth, 2008-12-23, 2 files, -19/+39)

    Refactor the setting of kexec OF properties, moving the common code from machine_kexec_64.c to machine_kexec.c where it can be used on both ppc64 and ppc32. This is needed for kexec to work on ppc32 platforms.

    Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
    Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/pseries: Fix cpu hotplug (Sebastien Dugue, 2008-12-23, 1 file, -1/+1)

    Currently, pseries_cpu_die() calls msleep() while polling RTAS for the status of the dying cpu. However, if the cpu that is going down also happens to be the one doing the tick then we're hosed, as the tick_do_timer_cpu 'baton' is only passed later on in tick_shutdown() when _cpu_down() does the CPU_DEAD notification. Therefore jiffies won't be updated anymore.

    This replaces that msleep() with a cpu_relax() to make sure we're not going to schedule at that point.

    With this patch my test box survives a 100k iterations hotplug stress test on _all_ cpus, whereas without it, it quickly dies after ~50 iterations.

    Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
    Cc: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

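    [Example] A sketch of the polling change, with a hypothetical query_stopped() standing in for the RTAS status query used by the real code:

        #include <asm/processor.h>

        extern int query_stopped(unsigned int pcpu);    /* hypothetical RTAS status query */

        static void wait_for_cpu_death(unsigned int pcpu)
        {
                int tries, cpu_status = 1;

                for (tries = 0; tries < 25; tries++) {
                        cpu_status = query_stopped(pcpu);
                        if (cpu_status == 0 || cpu_status == -1)
                                break;
                        cpu_relax();    /* was: msleep(1), which may schedule */
                }
        }
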
| | * powerpc: Fix KVM build on ppc440 (Paul Mackerras, 2008-12-23, 1 file, -0/+1)

    Commit 2a4aca1144394653269720ffbb5a325a77abd5fa ("powerpc/mm: Split low level tlb invalidate for nohash processors") changed a call to _tlbia to _tlbil_all but didn't include the header that defines _tlbil_all, leading to a build failure on 440 if KVM is enabled. This fixes it.

    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/cell: add QPACE as a separate Cell platform (Benjamin Krill, 2008-12-22, 5 files, -16/+175)

    Since the QPACE (Chromodynamics Parallel Computing on the Cell Broadband Engine) platform doesn't use an iommu and has neither PCI devices nor an MPIC, much less setup and configuration is needed. So far all devices are detected as OF devices. A notifier function is used to set the dma_ops for the of_platform bus.

    Further, this patch splits PPC_CELL_NATIVE into PPC_CELL_COMMON (the parts that are shared with the QPACE platform) and the rest.

    Signed-off-by: Benjamin Krill <ben@codiert.org>
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>

| | * powerpc/cell: fix build breakage with CONFIG_SPUFS disabled (Arnd Bergmann, 2008-12-22, 1 file, -2/+2)

    CBE_THERM and OPROFILE_CELL both cannot be built with SPU_FS disabled, so make the dependency explicit.

    Reported-by: Milton Miller <miltonm@bga.com>
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>

| | * powerpc/mpc5200: fix error paths in PSC UART probe function (Wolfram Sang, 2008-12-21, 1 file, -8/+15)

    - error cases for mapbase and irq were unbundled
    - mapped irq now gets disposed on error

    Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
    Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| | * powerpc/mpc5200: add rts/cts handling in PSC UART driver (Wolfram Sang, 2008-12-21, 2 files, -5/+47)

    Add RTS/CTS support for the PSC of the MPC5200B. Tested with a Phytec MPC5200B-IO.

    Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
    Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| | * powerpc/mpc5200: Make PSC UART driver update serial errors counters (René Bürgel, 2008-12-21, 1 file, -2/+8)

    This patch adds the capability to the mpc52xx-uart to report framing errors, parity errors, breaks and overruns to userspace. These values may be requested in userspace by using the ioctl TIOCGICOUNT.

    Signed-off-by: René Bürgel <r.buergel@unicontrol.de>
    Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| | * powerpc/mpc5200: Remove obsolete code from mpc5200 MDIO driver (Wolfram Sang, 2008-12-21, 1 file, -4/+1)

    As this driver polls for a complete MDIO transaction, there is no need to enable interrupts for it. Furthermore, make both checks for freeing MDIO-bus irqs consistent.

    Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
    Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| | * powerpc/mpc5200: Add MDMA/UDMA support to MPC5200 ATA driver (Tim Yamin, 2008-12-21, 3 files, -78/+488)

    This patch adds MDMA/UDMA support using BestComm for DMA on the MPC5200 platform. Based heavily on previous work by Freescale (Bernard Kuhn, John Rigby) and Domen Puncer.

    With this patch, a SanDisk Extreme IV CF card gets read speeds of approximately 26.70 MB/sec.

    Signed-off-by: Tim Yamin <plasm@roo.me.uk>
    Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| | * powerpc/mpc5200: Disable bestcomm prefetching when ATA DMA enabled (Grant Likely, 2008-12-21, 3 files, -5/+21)

    When ATA DMA is enabled, bestcomm prefetching does not work. This patch adds a function to disable bestcomm prefetch when the ATA Bestcomm task is initialized.

    Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| | * powerpc/mpc5200: Bestcomm fixes to ATA support (Tim Yamin, 2008-12-21, 2 files, -3/+3)

    1) ata.h has dst_pa in the wrong place (needs to match what the BestComm task microcode in bcom_ata_task.c expects); fix it.

    2) The BestComm ATA task priority was changed to maximum in bestcomm_priv.h; this fixes a deadlock issue experienced with heavy DMA occurring on both the ATA and Ethernet BestComm tasks, e.g. when downloading a large file over a LAN to disk.

    Signed-off-by: Tim Yamin <plasm@roo.me.uk>
    Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| | * powerpc/mpc5200: Bugfix on handling variable sized buffer descriptors (Grant Likely, 2008-12-21, 1 file, -19/+42)

    The buffer descriptors for the ATA BestComm task are larger than the current definition for bcom_bd. This causes problems because the various bcom_... functions dereference the buffer descriptor pointer by using the array operator, which doesn't work when the buffer descriptors are a different size.

    This patch adds the bcom_get_bd() function, which uses the value in bcom_task.bd_size to calculate the offset into the BD table.

    This patch also changes the definition of bcom_bd to specify a data size of 0 instead of 1 so that it will never work if anyone attempts to dereference the bd list as an array (as opposed to something that might work even though it is wrong).

    Finally, this patch moves the definition of bcom_bd up in the file to eliminate a forward declaration.

    Based on patch originally written by Tim Yamin.

    Signed-off-by: Tim Yamin <plasm@roo.me.uk>
    Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

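    [Example] A sketch of a bd_size-based accessor like the bcom_get_bd() described above; the bd and bd_size field names follow the commit text, and the surrounding bestcomm types are assumed to come from the driver's own headers:

        static inline struct bcom_bd *bcom_get_bd(struct bcom_task *tsk,
                                                  unsigned int index)
        {
                /* Step by the task's real descriptor size instead of indexing an
                 * array of struct bcom_bd, since ATA descriptors are larger. */
                return (struct bcom_bd *)((void *)tsk->bd + index * tsk->bd_size);
        }
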
| | * powerpc/mpc5200: Make internal 5200 PIC the default interrupt controller (Grant Likely, 2008-12-21, 1 file, -0/+2)

    The MPC5200 internal interrupt controller setup function needs to set the default interrupt controller when it is called. Without this, irq_create_of_mapping() cannot be called without first determining the pointer to the irq controller (i.e. called with controller = NULL).

    Reported-by: Steven Cavanagh <scavanagh@secretlab.ca>
    Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| | * powerpc/mpc5200: Document and tidy irq driver (Grant Likely, 2008-12-21, 5 files, -122/+189)

    This patch adds documentation to the mpc5200 interrupt controller driver and cleans up some minor coding conventions. It also moves the contents of mpc52xx_pic.h into the driver proper (except for a small common bit that is moved to the common mpc52xx.h) because the information encoded there is not required by any other part of kernel code.

    Finally, for code readability's sake, the L2_OFFSET shift value is removed because the code using it resolves to a noop.

    Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| | * powerpc: Fix missing 'blr' in _tlbia() (Benjamin Herrenschmidt, 2008-12-21, 1 file, -0/+1)

    The rework of the MMU code dropped a much missed 'blr' instruction.

    Brown-Paper-Bag-Worn-By: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| | * powerpc/bootwrapper: Use the child-bus #address-cells to decide which range entry to use (Scott Wood, 2008-12-21, 1 file, -1/+1)

    The correct #address-cells was still used for the actual translation, so the impact is only a possibility of choosing the wrong range entry or failing to find any match. Most common cases were not affected.

    Signed-off-by: Scott Wood <scottwood@freescale.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc: Const-qualify Device Node Argument to DCR Resource Extent API (Grant Erickson, 2008-12-21, 2 files, -4/+5)

    Add const qualifier to the device_node argument for dcr_resource_{start,len}, as of_get_property also const-qualifies this argument.

    Signed-off-by: Grant Erickson <gerickson@nuovations.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/44x: 44x TLB doesn't need "Guarded" set for all pages (Benjamin Herrenschmidt, 2008-12-21, 1 file, -1/+11)

    After discussing with chip designers, it appears that it's not necessary to set G everywhere on 440 cores. The various core errata related to prefetch should be sorted out by firmware by disabling icache prefetching in CCR0. We add the workaround to the kernel however just in case oooold firmwares don't do it.

    This is valid for -all- 4xx core variants. Later ones hard wire the absence of prefetch but it doesn't harm to clear the bits in CCR0 (they should already be cleared anyway).

    We still leave G=1 on the linear mapping for now; we need to stop over-mapping RAM to be able to remove it.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: Kumar Gala <galak@kernel.crashing.org>
    Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/mm: Rework usage of _PAGE_COHERENT/NO_CACHE/GUARDED (Benjamin Herrenschmidt, 2008-12-21, 9 files, -77/+68)

    Currently, we never set _PAGE_COHERENT in the PTEs; we just OR it in in the hash code based on some CPU feature bit. We also manipulate _PAGE_NO_CACHE and _PAGE_GUARDED by hand in all sorts of places.

    This changes the logic so that instead, the PTE now contains _PAGE_COHERENT for all normal RAM pages that have I = 0 on platforms that need it. The hash code clears it if the feature bit is not set.

    It also adds some clean accessors to set up various valid combinations of access flags and changes various bits of code to use them instead. This should help having the PTE actually containing the bit combinations that we really want.

    I also removed _PAGE_GUARDED from _PAGE_BASE on 44x and instead set it explicitly from the TLB miss. I will ultimately remove it completely as it appears that it might not be needed after all, but in the meantime, having it in the TLB miss makes things a lot easier.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: Kumar Gala <galak@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/mm: Runtime allocation of mmu context maps for nohash CPUs (Benjamin Herrenschmidt, 2008-12-21, 3 files, -54/+116)

    This makes the MMU context code used for CPUs with no hash table (except 603) dynamically allocate the various maps used to track the state of contexts.

    Only the main free map and the CPU 0 stale map are allocated at boot time. Other CPU maps are allocated when those CPUs are brought up and freed if they are unplugged.

    This also moves the initialization of the MMU context management slightly later during the boot process, which should be fine as it's really only needed when userland is first started anyway.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: Kumar Gala <galak@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/44x: No need to mask MSR:CE, ME or DE in _tlbil_va on 440 (Benjamin Herrenschmidt, 2008-12-21, 1 file, -9/+10)

    The handlers for Critical, Machine Check or Debug interrupts will save and restore MMUCR nowadays, thus we only need to disable normal interrupts when invalidating TLB entries.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: Kumar Gala <galak@kernel.crashing.org>
    Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/mm: Split low level tlb invalidate for nohash processors (Benjamin Herrenschmidt, 2008-12-21, 7 files, -249/+292)

    Currently, the various forms of low level TLB invalidations are all implemented in misc_32.S for 32-bit processors, in a fairly scary mess of #ifdef's and with interesting duplication, such as a whole bunch of code for FSL _tlbie and _tlbia which are no longer used.

    This moves things around such that _tlbie is now defined in hash_low_32.S and is only used by the 32-bit hash code, and all nohash CPUs use the various _tlbil_* forms that are now moved to a new file, tlb_nohash_low.S.

    I moved all the definitions for that stuff out of include/asm/tlbflush.h, as they are really internal mm stuff, into mm/mmu_decl.h.

    The code should have no functional changes. I kept some variants inline for trivial forms on things like 40x and 8xx.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: Kumar Gala <galak@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/mm: Add SMP support to no-hash TLB handling (Benjamin Herrenschmidt, 2008-12-21, 10 files, -57/+281)

    This commit moves the whole no-hash TLB handling out of line into a new tlb_nohash.c file, and implements some basic SMP support using IPIs and/or broadcast tlbivax instructions.

    Note that I'm using local invalidations for D->I cache coherency. At worst, if another processor is trying to execute the same code and has the old entry in its TLB, it will just take a fault and re-do the TLB flush locally (it won't re-do the cache flush in any case).

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: Kumar Gala <galak@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/mm: Introduce MMU features (Benjamin Herrenschmidt, 2008-12-21, 16 files, -60/+268)

    We're soon running out of CPU features and I need to add some new ones for various MMU related bits, so this patch separates the MMU features from the CPU features.

    I moved over the 32-bit MMU related ones, added base features for MMU type families, but didn't move over any 64-bit only feature yet.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: Kumar Gala <galak@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| | * powerpc/mm: Rework context management for CPUs with no hash table (Benjamin Herrenschmidt, 2008-12-21, 6 files, -54/+234)

    This reworks the context management code used by 4xx, 8xx and freescale BookE. It adds support for SMP by implementing a concept of stale context map to lazily flush the TLB on processors where a context may have been invalidated. This also contains the ground work for generalizing such lazy TLB flushing by just picking up a new PID and marking the old one stale. This will be implemented later.

    This is a first implementation that uses a global spinlock. Ideally, we should try to get at least the fast path (context ID already assigned) lockless or limited to a per context lock, but for now this will do.

    I tried to keep the UP case reasonably simple to avoid adding too much overhead to 8xx, which does a lot of context stealing since it effectively has only 16 PIDs available.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: Kumar Gala <galak@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>