path: root/include/asm-ia64
* [IA64] Fix bug in ia64 specific down() function (Zoltan Menyhart, 2006-01-17; 1 file, -4/+4)

  Chen, Kenneth W wrote:
  > The memory order semantics for include/asm-ia64/semaphore.h:down()
  > doesn't look right. It is using atomic_dec_return, which eventually
  > translates into ia64_fetch_and_add() that uses release semantics.
  > Shouldn't it use acquire semantics?

  Use ia64_fetchadd() instead of atomic_dec_return().

  Acked-by: Ken Chen <kenneth.w.chen@intel.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
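  A minimal sketch of the corrected fast path, assuming ia64_fetchadd()
  takes an explicit "acq" completer and returns the pre-add value (the
  exact accessor arguments are an assumption, not quoted from the patch):

      /* include/asm-ia64/semaphore.h (sketch): taking a semaphore must
       * use acquire semantics so that later memory accesses cannot be
       * reordered before the decrement. */
      static inline void
      down(struct semaphore *sem)
      {
              might_sleep();
              /* fetchadd4.acq returns the old value: old < 1 means the
               * new count went negative, so fall into the slow path */
              if (ia64_fetchadd(-1, &sem->count.counter, acq) < 1)
                      __down(sem);
      }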
* [IA64] Zonelists for nodes without cpus (Jack Steiner, 2006-01-17; 1 file, -0/+4)

  If a node runs out of memory, ensure that memory on nodes without cpus
  is used before using memory on nodes with cpus.

  Signed-off-by: Jack Steiner <steiner@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64-SGI] sn2 mutex conversion (Jes Sorensen, 2006-01-17; 2 files, -5/+7)

  Migrate sn2 code to use mutexes and completion events rather than
  semaphores.

  Signed-off-by: Jes Sorensen <jes@sgi.com>
  Acked-by: Dean Nelson <dcn@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* Pull perfmon-montecito into release branch (Tony Luck, 2006-01-17; 1 file, -2/+2)
  * [IA64] Perfmon for Montecito (Stephane Eranian, 2006-01-16; 1 file, -2/+2)

    Add Montecito PMU description table for perfmon2.

    Signed-off-by: Stephane Eranian <eranian@hpl.hp.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Cleanup of arch/ia64/sn and include/asm-ia64/sn (Prarit Bhargava, 2006-01-17; 13 files, -1501/+1501)

  Replace uintX_t declarations with uX declarations.
  Replace intX_t declarations with sX declarations.

  Signed-off-by: Prarit Bhargava <prarit@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] pal cache flush patch (Xu, Anthony, 2006-01-17; 1 file, -1/+1)

  The PAL spec has changed since 2002 (the new SDM is at
  http://developer.intel.com/design/itanium/manuals/iiasdmanual.htm):
  all PAL calls should now be invoked with psr.ic=1, and it is the
  caller's responsibility to handle possible TLB misses.
  ia64_pal_cache_flush() was written against the old spec and is
  obsolete; this patch makes it conform to the new spec.

  Signed-off-by: Anthony Xu <anthony.xu@intel.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [PATCH] Altix: ioc3 serial support (Patrick Gefre, 2006-01-15; 1 file, -0/+241)

  Add driver support for a 2-port PCI IOC3-based serial card on Altix
  boxes. This is a re-submission: on the original submission I was asked
  to organize the code so that the MIPS ioc3 ethernet and serial parts
  could be used with this driver. Stanislaw Skowronek was kind enough to
  provide the shim layer for this - thanks Stanislaw. This patch includes
  the shim layer and the Altix PCI ioc3 serial driver. The MIPS merged
  ioc3 ethernet and serial support is forthcoming.

  Signed-off-by: Patrick Gefre <pfg@sgi.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [IA64] prevent accidental modification of args in jprobe handler (Zhang Yanmin, 2006-01-13; 1 file, -0/+6)

  When a jprobe is hit, the function parameters of the original function
  should be saved before the jprobe handler executes and restored after
  it returns, because the jprobe handler might change register values
  due to tail-call optimization by gcc.

  Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
  Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Handle debug traps in fsys mode (Jason Uhlenkott, 2006-01-13; 1 file, -1/+3)

  We need to handle debug traps in fsys mode non-fatally. They can
  happen now that we have fsyscalls which contain probe instructions.

  Signed-off-by: Jason Uhlenkott <jasonuhl@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64-SGI] Fix sn_flush_device_kernel & spinlock initialization (Prarit Bhargava, 2006-01-13; 1 file, -1/+2)

  This patch separates the sn_flush_device_list struct into kernel and
  common (both kernel- and PROM-accessible) structures. As it was, if
  the size of a spinlock_t changed (due to additional CONFIG options,
  etc.) the SAL call which populated the sn_flush_device_list structs
  would erroneously write data (and cause memory corruption and/or a
  panic).

  This patch does the following:
  1. Removes sn_flush_device_list and adds sn_flush_device_common and
     sn_flush_device_kernel.
  2. Adds a new SAL call to populate a sn_flush_device_common struct per
     device, not per widget as previously done.
  3. Correctly initializes each device's sn_flush_device_kernel
     spinlock_t struct (before, it was only doing each widget's first
     device).

  Signed-off-by: Prarit Bhargava <prarit@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64-SGI] Altix BTE error handling fixes (Russ Anderson, 2006-01-13; 1 file, -1/+1)

  Altix (shub2) pushes the BTE clean-up into SAL. This patch correctly
  interfaces with the now-implemented SAL call. It also fixes a bug when
  delaying clean-up to allow busy BTEs to complete (or error out).

  Signed-off-by: Russ Anderson <rja@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64-SGI] move xpc.h to include/asm-ia64/sn (cleanup) (Dean Nelson, 2006-01-13; 1 file, -4/+4)

  Clean up a few items after moving xpc.h from arch/ia64/sn/kernel to
  include/asm-ia64/sn.

  Signed-off-by: Dean Nelson <dcn@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64-SGI] move xpc.h to include/asm-ia64/sn (Dean Nelson, 2006-01-13; 1 file, -0/+1274)

  Move xpc.h from arch/ia64/sn/kernel to include/asm-ia64/sn without
  change.

  Signed-off-by: Dean Nelson <dcn@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64-SGI] ensure XPC disengage request is processed (Dean Nelson, 2006-01-13; 1 file, -1/+3)

  This patch fixes a problem in XPC disengage processing whereby it was
  not seeing the request to disengage from a remote partition, so the
  disengage wasn't happening. The disengagement is supposed to transpire
  during the time an XPC channel is disconnecting, and should be
  completed before the channel is declared to be disconnected.

  Signed-off-by: Dean Nelson <dcn@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [PATCH] ia64: task_pt_regs() (Al Viro, 2006-01-12; 4 files, -8/+8)

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] ia64: task_thread_info() (Al Viro, 2006-01-12; 1 file, -0/+9)

  On ia64, thread_info lives at a constant offset from task_struct, and
  the stack is embedded into the same beast. Set __HAVE_THREAD_FUNCTIONS
  and make task_thread_info() just add a constant.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
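  A sketch of what such an accessor looks like; using IA64_TASK_SIZE as
  the offset constant is an illustrative assumption, not a quote from
  the patch:

      /* include/asm-ia64/thread_info.h (sketch): thread_info sits right
       * after the task_struct in the same allocation, so no pointer has
       * to be chased - just add a compile-time constant. */
      #define task_thread_info(tsk) \
              ((struct thread_info *)((char *)(tsk) + IA64_TASK_SIZE))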
* [PATCH] scheduler cache-hot-autodetect (akpm@osdl.org, 2006-01-12; 1 file, -2/+0)

  From: Ingo Molnar <mingo@elte.hu>

  This is the latest version of the scheduler cache-hot-auto-tune patch.

  The first problem was that detection time scaled with O(N^2), which is
  unacceptable on larger SMP and NUMA systems. To solve this:

  - I've added a 'domain distance' function, which is used to cache
    measurement results. Each distance is only measured once. This means
    that e.g. on NUMA distances of 0, 1 and 2 might be measured, on HT
    distances 0 and 1, and on SMP distance 0 is measured. The code walks
    the domain tree to determine the distance, so it automatically
    follows whatever hierarchy an architecture sets up. This cuts down
    on the boot time significantly and removes the O(N^2) limit. The
    only assumption is that migration costs can be expressed as a
    function of domain distance - this covers the overwhelming majority
    of existing systems, and is a good guess even for more asymmetric
    systems.

    [ People hacking systems that have asymmetries that break this
      assumption (e.g. different CPU speeds) should experiment a bit
      with the cpu_distance() function. Adding a ->migration_distance
      factor to the domain structure would be one possible solution -
      but let's first see the problem systems, if they exist at all.
      Let's not overdesign. ]

  Another problem was that only a single cache-size was used for
  measuring the cost of migration, and most architectures didn't set
  that variable up. Furthermore, a single cache-size does not fit NUMA
  hierarchies with L3 caches and does not fit HT setups, where different
  CPUs will often have different 'effective cache sizes'. To solve this
  problem:

  - Instead of relying on a single cache-size provided by the platform
    and sticking to it, the code now auto-detects the 'effective
    migration cost' between two measured CPUs, via iterating through a
    wide range of cachesizes. The code searches for the maximum
    migration cost, which occurs when the working set of the
    test-workload falls just below the 'effective cache size'. I.e.
    real-life optimized search is done for the maximum migration cost,
    between two real CPUs.

    This, amongst other things, has the positive effect that if e.g.
    two CPUs share a L2/L3 cache, a different (and accurate) migration
    cost will be found than between two CPUs on the same system that
    don't share any caches.

  (The reliable measurement of migration costs is tricky - see the
  source for details.)

  Furthermore I've added various boot-time options to override/tune
  migration behavior.

  Firstly, there's a blanket override for autodetection:

      migration_cost=1000,2000,3000

  will override the depth 0/1/2 values with 1msec/2msec/3msec values.

  Secondly, there's a global factor that can be used to increase (or
  decrease) the autodetected values:

      migration_factor=120

  will increase the autodetected values by 20%. This option is useful to
  tune things in a workload-dependent way - e.g. if a workload is
  cache-insensitive then CPU utilization can be maximized by specifying
  migration_factor=0.

  I've tested the autodetection code quite extensively on x86, on 3
  P3/Xeon/2MB, and the autodetected values look pretty good:

  Dual Celeron (128K L2 cache):

      ---------------------
      migration cost matrix (max_cache_size: 131072, cpu: 467 MHz):
      ---------------------
                [00]    [01]
      [00]:     -       1.7(1)
      [01]:     1.7(1)  -
      ---------------------
      cacheflush times [2]: 0.0 (0) 1.7 (1784008)
      ---------------------

  Here the slow memory subsystem dominates system performance, and even
  though caches are small, the migration cost is 1.7 msecs.

  Dual HT P4 (512K L2 cache):

      ---------------------
      migration cost matrix (max_cache_size: 524288, cpu: 2379 MHz):
      ---------------------
                [00]    [01]    [02]    [03]
      [00]:     -       0.4(1)  0.0(0)  0.4(1)
      [01]:     0.4(1)  -       0.4(1)  0.0(0)
      [02]:     0.0(0)  0.4(1)  -       0.4(1)
      [03]:     0.4(1)  0.0(0)  0.4(1)  -
      ---------------------
      cacheflush times [2]: 0.0 (33900) 0.4 (448514)
      ---------------------

  Here it can be seen that there is no migration cost between two HT
  siblings (CPU#0/2 and CPU#1/3 are separate physical CPUs). A fast
  memory system makes inter-physical-CPU migration pretty cheap: 0.4
  msecs.

  8-way P3/Xeon [2MB L2 cache]:

      ---------------------
      migration cost matrix (max_cache_size: 2097152, cpu: 700 MHz):
      ---------------------
                [00]    [01]    [02]    [03]    [04]    [05]    [06]    [07]
      [00]:     -       19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
      [01]:     19.2(1) -       19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
      [02]:     19.2(1) 19.2(1) -       19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
      [03]:     19.2(1) 19.2(1) 19.2(1) -       19.2(1) 19.2(1) 19.2(1) 19.2(1)
      [04]:     19.2(1) 19.2(1) 19.2(1) 19.2(1) -       19.2(1) 19.2(1) 19.2(1)
      [05]:     19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) -       19.2(1) 19.2(1)
      [06]:     19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) -       19.2(1)
      [07]:     19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) -
      ---------------------
      cacheflush times [2]: 0.0 (0) 19.2 (19281756)
      ---------------------

  This one has huge caches and a relatively slow memory subsystem - so
  the migration cost is 19 msecs.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Ashok Raj <ashok.raj@intel.com>
  Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
  Cc: <wilder@us.ibm.com>
  Signed-off-by: John Hawkes <hawkes@sgi.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
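  As an illustration of the domain-distance idea described above, a
  hypothetical sketch (not the patch's code; for_each_domain() is the
  scheduler-internal iterator over a CPU's sched-domain hierarchy):

      /* Walk up cpu1's sched-domain tree until a level spans cpu2; the
       * number of levels climbed is the "distance" used to cache
       * migration-cost measurements - one measurement per distance. */
      static int domain_distance(int cpu1, int cpu2)
      {
              int distance = 0;
              struct sched_domain *sd;

              for_each_domain(cpu1, sd) {
                      if (cpu_isset(cpu2, sd->span))
                              return distance;
                      distance++;
              }
              return distance;
      }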
* [PATCH] sched: add cacheflush() asm (Ingo Molnar, 2006-01-12; 1 file, -0/+1)

  Add a per-arch sched_cacheflush() which is a write-back cacheflush
  used by the migration-cost calibration code at bootup time.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Cc: Nick Piggin <nickpiggin@yahoo.com.au>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
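  On ia64 this can be expressed as a SAL call; a sketch, where using
  argument 3 to select "flush and write back all cache levels" is an
  assumption about the SAL cache-flush encoding:

      /* include/asm-ia64/system.h (sketch) */
      static inline void sched_cacheflush(void)
      {
              ia64_sal_cache_flush(3);  /* write back + invalidate all levels */
      }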
* [PATCH] kprobes: fix build breakage (Ananth N Mavinakayanahalli, 2006-01-10; 1 file, -1/+1)

  The following patch (against 2.6.15-rc5-mm3) fixes a kprobes build
  break due to changes introduced to the kprobe locking in
  2.6.15-rc5-mm3. In addition, the patch reverts the open-coding of
  kprobe_mutex.

  Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
  Acked-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] kprobes: arch_remove_kprobe (Anil S Keshavamurthy, 2006-01-10; 1 file, -0/+1)

  Currently arch_remove_kprobe() is only implemented/required for x86_64
  and powerpc. All other architectures, like IA64, i386 and sparc64,
  implement a dummy function which is called from the arch-independent
  kprobes.c file. This patch removes the dummy functions and replaces
  them with

      #define arch_remove_kprobe(p, s)    do { } while (0)

  Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] kprobes: changed from using spinlock to mutex (Anil S Keshavamurthy, 2006-01-10; 1 file, -5/+0)

  Since the kprobes runtime exception handlers are now lock-free (this
  code path uses RCU to walk the list), there is no need for
  register/unregister_kprobe() to use spin_{lock/unlock}_irq{save/restore}.
  Serialization during registration/unregistration is now possible with
  just a mutex. In the process, this patch also fixes a minor memory
  leak for x86_64 and powerpc.

  Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] kprobes: cleanup include/asm/kprobes.h (Anil S Keshavamurthy, 2006-01-10; 1 file, -8/+0)

  The arch-specific kprobes.h files never get included when
  CONFIG_KPROBES is turned off, so checking for CONFIG_KPROBES in these
  arch-specific kprobes.h files is not appropriate. Likewise, the
  function kprobes_exception_notify() defined there is not needed when
  CONFIG_KPROBES is off. Compile-tested for both CONFIG_KPROBES=y and n.

  Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] Generic ioctl.h (Brian Gerst, 2006-01-10; 1 file, -77/+1)

  Most arches copied the i386 ioctl.h. Combine them into a generic
  header.

  Signed-off-by: Brian Gerst <bgerst@didntduck.org>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
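  The shared header encodes an ioctl's direction, type, number, and
  argument size into one command word. A sketch of the conventional
  encoding (field widths as commonly defined; stated as an assumption,
  not quoted from this patch):

      #define _IOC_NRBITS     8
      #define _IOC_TYPEBITS   8
      #define _IOC_SIZEBITS   14
      #define _IOC_DIRBITS    2

      #define _IOC_NRSHIFT    0
      #define _IOC_TYPESHIFT  (_IOC_NRSHIFT + _IOC_NRBITS)
      #define _IOC_SIZESHIFT  (_IOC_TYPESHIFT + _IOC_TYPEBITS)
      #define _IOC_DIRSHIFT   (_IOC_SIZESHIFT + _IOC_SIZEBITS)

      #define _IOC_NONE       0U
      #define _IOC_WRITE      1U
      #define _IOC_READ       2U

      /* build one command word from direction/type/number/size */
      #define _IOC(dir, type, nr, size)          \
              (((dir)  << _IOC_DIRSHIFT)  |      \
               ((type) << _IOC_TYPESHIFT) |      \
               ((nr)   << _IOC_NRSHIFT)   |      \
               ((size) << _IOC_SIZESHIFT))

      #define _IOR(type, nr, size)  _IOC(_IOC_READ,  (type), (nr), sizeof(size))
      #define _IOW(type, nr, size)  _IOC(_IOC_WRITE, (type), (nr), sizeof(size))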
* [PATCH] mutex subsystem, add default include/asm-*/mutex.h files (Arjan van de Ven, 2006-01-10; 1 file, -0/+9)

  Add the per-arch mutex.h files for the remaining architectures. We
  default to asm-generic/mutex-dec.h, because that performs quite well
  on most arches. Arches that do not have atomic decrement/increment
  instructions should switch to mutex-xchg.h instead. Arches can also
  provide their own implementation for the mutex fastpath primitives.

  Signed-off-by: Arjan van de Ven <arjan@infradead.org>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
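  A sketch of the decrement-based lock fastpath that mutex-dec.h
  provides (close in spirit to the header, but not quoted from it):

      /* count: 1 = unlocked, 0 = locked, negative = locked with waiters */
      static inline void
      __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
      {
              if (unlikely(atomic_dec_return(count) < 0))
                      fail_fn(count);    /* contended: take the slowpath */
      }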
* [PATCH] mutex subsystem, add atomic_xchg() to all arches (Ingo Molnar, 2006-01-10; 1 file, -0/+1)

  Add atomic_xchg() to all the architectures. Needed by the new mutex
  code.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Arjan van de Ven <arjan@infradead.org>
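  On an arch with a native xchg() this is a one-line wrapper; a sketch:

      /* swap `new` into the counter, returning the previous value */
      #define atomic_xchg(v, new)     (xchg(&((v)->counter), (new)))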
* [PATCH] /dev/mem: validate mmap requests (Bjorn Helgaas, 2006-01-09; 1 file, -0/+1)

  Add a hook so architectures can validate /dev/mem mmap requests. This
  is analogous to validation we already perform in the read/write paths.

  The identity mapping scheme used on ia64 requires that each 16MB or
  64MB granule be accessed with exactly one attribute (write-back or
  uncacheable). This avoids "attribute aliasing", which can cause a
  machine check.

  Sample problem scenario:
  - Machine supports VGA, so it has uncacheable (UC) MMIO at 640K-768K
  - efi_memmap_init() discards any write-back (WB) memory in the first
    granule
  - Application (e.g., "hwinfo") mmaps /dev/mem, offset 0
  - hwinfo receives UC mapping (the default, since memmap says "no WB
    here")
  - Machine check abort (on chipsets that don't support UC access to WB
    memory, e.g., sx1000)

  In the scenario above, the only choices are:
  - Use WB for the hwinfo mmap. Can't do this because it causes
    attribute aliasing with the UC mapping for the VGA MMIO space.
  - Use UC for the hwinfo mmap. Can't do this because the chipset may
    not support UC for that region.
  - Disallow the hwinfo mmap with -EINVAL. That's what this patch does.

  Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
  Cc: Hugh Dickins <hugh@veritas.com>
  Cc: "Luck, Tony" <tony.luck@intel.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
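  A sketch of how such a hook could sit in the /dev/mem mmap path; the
  hook name valid_mmap_phys_addr_range() mirrors the read/write-path
  validator's naming, but treat the exact signature as an assumption:

      static int mmap_mem(struct file *file, struct vm_area_struct *vma)
      {
              unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;

              /* let the architecture veto mappings that would create
               * attribute aliasing (e.g. the ia64 granule rules) */
              if (!valid_mmap_phys_addr_range(offset,
                                              vma->vm_end - vma->vm_start))
                      return -EINVAL;

              /* ... establish the cached/uncached mapping as before ... */
              return 0;
      }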
* [PATCH] remove gcc-2 checks (Andrew Morton, 2006-01-09; 2 files, -6/+2)

  Remove various things which were checking for gcc-1.x and gcc-2.x
  compilers.

  From: Adrian Bunk <bunk@stusta.de>
  Some documentation updates and removal of some code paths for
  gcc < 3.2.

  Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Signed-off-by: Adrian Bunk <bunk@stusta.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] consolidate asm/futex.h (Jeff Dike, 2006-01-09; 1 file, -48/+1)

  Most of the architectures have the same asm/futex.h. This consolidates
  them into asm-generic, with the arches including it from their own
  asm/futex.h. In the case of UML, this reverts the old broken futex.h
  and goes back to using the same one as almost everyone else.

  Signed-off-by: Jeff Dike <jdike@addtoit.com>
  Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
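  After the consolidation, an arch's futex.h reduces to a shim; a sketch
  of what include/asm-ia64/futex.h would look like (guard macro name
  assumed):

      #ifndef _ASM_FUTEX_H
      #define _ASM_FUTEX_H

      #include <asm-generic/futex.h>

      #endif /* _ASM_FUTEX_H */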
* [PATCH] Kill L1_CACHE_SHIFT_MAX (Ravikiran G Thirumalai, 2006-01-09; 1 file, -2/+0)

  Kill L1_CACHE_SHIFT_MAX from all arches. Since L1_CACHE_SHIFT_MAX is
  not used anymore with the introduction of INTERNODE_CACHE, remove it.

  Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
  Signed-off-by: Shai Fultheim <shai@scalex86.org>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] Swap Migration V5: sys_migrate_pages interface (Christoph Lameter, 2006-01-09; 1 file, -1/+2)

  sys_migrate_pages implementation using swap-based page migration.

  This is the original API proposed by Ray Bryant in his posts during
  the first half of 2005 on linux-mm@kvack.org and
  linux-kernel@vger.kernel.org.

  The intent of sys_migrate_pages is to migrate the memory of a process.
  A process may have migrated to another node, and its memory was
  allocated optimally for the prior context. sys_migrate_pages allows
  shifting the memory to the new node.

  sys_migrate_pages is also useful if a process's available memory nodes
  have changed through cpuset operations, to manually move the process's
  memory. Paul Jackson is working on an automated mechanism that will
  allow an automatic migration if the cpuset of a process is changed.
  However, a user may decide to manually control the migration.

  This implementation is put into the policy layer since it uses
  concepts and functions that are also needed for mbind and friends. The
  patch also provides a do_migrate_pages function that may be useful for
  cpusets to automatically move memory. sys_migrate_pages does not
  modify policies, in contrast to Ray's implementation.

  The current code here is based on the swap-based page migration
  capability and thus is not able to preserve the physical layout
  relative to its containing nodeset (which may be a cpuset). When
  direct page migration becomes available, the implementation needs to
  be changed to do an isomorphic move of pages between different
  nodesets. The current implementation simply evicts all pages in the
  source nodeset that are not in the target nodeset.

  Patch supports ia64, i386 and x86_64.

  Signed-off-by: Christoph Lameter <clameter@sgi.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
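  The interface shape, sketched from the description above (nodemask
  arguments passed the way mbind passes them; the exact prototype is an
  assumption):

      /* move pages of process `pid` that currently live on `old_nodes`
       * over to `new_nodes` */
      asmlinkage long sys_migrate_pages(pid_t pid, unsigned long maxnode,
                      const unsigned long __user *old_nodes,
                      const unsigned long __user *new_nodes);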
* [PATCH] atomic_long_t & include/asm-generic/atomic.h V2 (Christoph Lameter, 2006-01-06; 1 file, -0/+1)

  Several counters already have the need to use 64-bit atomic variables
  on 64-bit platforms (see mm_counter_t in sched.h). We have to do ugly
  ifdefs to fall back to 32-bit atomics on 32-bit platforms.

  The VM statistics patch that I am working on will also make more
  extensive use of atomic64.

  This patch introduces a new type, atomic_long_t, by providing
  definitions in asm-generic/atomic.h that work similarly to the C
  "long" type. It is 32 bits on 32-bit platforms and 64 bits on 64-bit
  platforms. It also cleans up the determination of the mm_counter_t in
  sched.h.

  Signed-off-by: Christoph Lameter <clameter@sgi.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
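  A sketch of the width-switching idea in simplified macro form (the
  real header may wrap these as inline functions):

      #if BITS_PER_LONG == 64
      typedef atomic64_t atomic_long_t;
      #define ATOMIC_LONG_INIT(i)   ATOMIC64_INIT(i)
      #define atomic_long_read(l)   atomic64_read(l)
      #define atomic_long_inc(l)    atomic64_inc(l)
      #else
      typedef atomic_t atomic_long_t;
      #define ATOMIC_LONG_INIT(i)   ATOMIC_INIT(i)
      #define atomic_long_read(l)   atomic_read(l)
      #define atomic_long_inc(l)    atomic_inc(l)
      #endif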
* [PATCH] kill last zone_reclaim() bits (Andrew Morton, 2006-01-06; 1 file, -1/+1)

  Remove the last bits of Martin's ill-fated sys_set_zone_reclaim().

  Cc: Martin Hicks <mort@wildopensource.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] madvise(MADV_REMOVE): remove pages from tmpfs shm backing store (Badari Pulavarty, 2006-01-06; 1 file, -0/+1)

  Here is the patch to implement madvise(MADV_REMOVE) - which frees up a
  given range of pages and its associated backing store. The current
  implementation supports only shmfs/tmpfs; other filesystems return
  -ENOSYS.

  "Some app allocates large tmpfs files, then when some task quits and
  some client disconnects, some memory can be released. However the only
  way to release tmpfs-swap is to MADV_REMOVE." - Andrea Arcangeli

  Databases want to use this feature to drop a section of their
  bufferpool (shared memory segments) - without writing back to
  disk/swap space. This feature is also useful for supporting hot-plug
  memory on UML.

  Concerns raised by Andrew Morton:
  - "We have no plan for holepunching! If we _do_ have such a plan (or
    might in the future) then what would the API look like? I think
    sys_holepunch(fd, start, len), so we should start out with that."
  - Using madvise is very weird, because people will ask "why do I need
    to mmap my file before I can stick a hole in it?"
  - None of the other madvise operations call into the filesystem in
    this manner. "A broad question is: is this capability an MM
    operation or a filesystem operation? truncate, for example, is a
    filesystem operation which sometimes has MM side-effects. madvise is
    an mm operation and with this patch, it gains FS side-effects, only
    they're really, really significant ones."

  Comments:
  - Andrea suggested the fs operation too, but then it's more efficient
    to have it as an mm operation with fs side effects, because
    userspace doesn't immediately know the fd and physical offset of the
    range. It's possible to fix that up in userland and use the fs
    operation, but it's more expensive; the vmas are already in the
    kernel and we can use them.

  Short-term plan & future direction:
  - We seem to need this interface only for shmfs/tmpfs files in the
    short term. We have to add hooks into the filesystem for correctness
    and completeness. This is what this patch does.
  - In the future, the plan is to support both fs and mmap APIs. This
    also involves (other) filesystem-specific functions to be
    implemented.
  - The current patch doesn't support VM_NONLINEAR - which can be
    addressed in the future.

  Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
  Cc: Hugh Dickins <hugh@veritas.com>
  Cc: Andrea Arcangeli <andrea@suse.de>
  Cc: Michael Kerrisk <mtk-manpages@gmx.net>
  Cc: Ulrich Drepper <drepper@redhat.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
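  From userspace the call is a plain madvise() on a mapped tmpfs range;
  a minimal sketch:

      #include <sys/mman.h>
      #include <stdio.h>

      /* Punch out [addr, addr+len): frees the pages and their tmpfs
       * backing store; other filesystems return ENOSYS per this patch. */
      static void drop_range(void *addr, size_t len)
      {
              if (madvise(addr, len, MADV_REMOVE) != 0)
                      perror("madvise(MADV_REMOVE)");
      }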
* [FLS64]: generic version (Stephen Hemminger, 2006-01-03; 1 file, -0/+1)

  Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
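  The commit message is terse; the generic construction builds on the
  existing 32-bit fls(). A sketch:

      /* find last (most significant) set bit; fls64(0) == 0 */
      static inline int fls64(__u64 x)
      {
              __u32 h = x >> 32;

              if (h)
                      return fls(h) + 32;
              return fls(x);
      }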
* [PATCH] x86_64/ia64: Fix compilation error for node_to_first_cpu (Ravikiran G Thirumalai, 2005-12-24; 1 file, -1/+1)

  Fixes a compiler error in node_to_first_cpu: __ffs expects an unsigned
  long as its parameter, but a cpumask_t was being passed instead. The
  macro node_to_first_cpu was not yet used in the x86_64 and ia64
  arches, so we never hit this. This patch replaces __ffs with the
  first_cpu macro, similar to other arches.

  Signed-off-by: Alok N Kataria <alokk@calsoftinc.com>
  Signed-off-by: Ravikiran G Thirumalai <kiran@scalex86.org>
  Signed-off-by: Shai Fultheim <shai@scalex86.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
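  The before/after, sketched from the description (exact macro spellings
  assumed):

      /* before: breaks once instantiated, since __ffs() takes an
       * unsigned long, not a cpumask_t */
      #define node_to_first_cpu(node)   (__ffs(node_to_cpumask(node)))

      /* after: first_cpu() knows how to scan a cpumask_t */
      #define node_to_first_cpu(node)   (first_cpu(node_to_cpumask(node)))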
* [IA64] disable preemption in udelay() (John Hawkes, 2005-12-16; 1 file, -9/+1)

  The udelay() inline for ia64 uses the ITC. If CONFIG_PREEMPT is
  enabled and the platform has unsynchronized ITCs and the calling task
  migrates to another CPU while doing the udelay loop, then the
  effective delay may be too short or very, very long. This patch
  disables preemption around 100 usec chunks of the overall desired
  udelay time. This minimizes preemption-holdoffs.

  udelay() is now too big to be inline, so move it out of line and
  export it.

  Signed-off-by: John Hawkes <hawkes@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
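  A sketch of the chunking idea; itc_spin() here is a hypothetical
  helper standing in for the ITC busy-wait loop:

      /* Keep preemption disabled for at most ~100us at a time, so the
       * ITC-based spin never straddles a CPU migration, but long delays
       * don't create long preempt-off regions. */
      void udelay(unsigned long usecs)
      {
              while (usecs > 0) {
                      unsigned long chunk = usecs > 100 ? 100 : usecs;

                      preempt_disable();
                      itc_spin(chunk);  /* hypothetical: spin on this CPU's ITC */
                      preempt_enable();
                      usecs -= chunk;
              }
      }
      EXPORT_SYMBOL(udelay);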
* [IA64] Split 16-bit severity field in sal_log_record_header (Tony Luck, 2005-12-13; 1 file, -1/+2)

  The ERR_SEVERITY item is defined as an 8-bit item in the SAL
  documentation (section B.2.1, rev. December 2003), but as a u16 in
  sal.h. This has the side effect that current code in mca.c may not
  call ia64_sal_clear_state_info() upon receiving corrected platform
  errors if there are bits set in the validation byte. Reported by
  Xavier Bru.

  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Define an ia64 version of __raw_read_trylock (Keith Owens, 2005-12-12; 1 file, -1/+11)

  IA64 is using the generic version of __raw_read_trylock, which always
  waits for the lock to be free instead of returning when the lock is in
  use. Define an ia64 version of __raw_read_trylock which behaves
  correctly, and drop the generic one.

  Signed-off-by: Keith Owens <kaos@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
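  A try-once read lock on ia64 can be built from a 4-byte compare-and-
  exchange: snapshot the lock with the writer bit clear, bump the reader
  count, and attempt a single swap. A sketch (field and intrinsic names
  are assumptions):

      static inline int __raw_read_trylock(raw_rwlock_t *x)
      {
              union {
                      raw_rwlock_t lock;
                      __u32 word;
              } old, new;

              old.lock = new.lock = *x;
              old.lock.write_lock = new.lock.write_lock = 0;
              ++new.lock.read_counter;
              /* succeeds only if no writer holds the lock; never spins */
              return (u32)ia64_cmpxchg4_acq((__u32 *)(x),
                                            new.word, old.word) == old.word;
      }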
* [IA64] Fix missing parameter for local_add/sub (Christoph Lameter, 2005-12-07; 1 file, -2/+2)

  The local add/sub macros need a parameter to specify the addend and
  subtrahend, respectively.

  Signed-off-by: Christoph Lameter <clameter@sgi.org>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
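  The shape of the fix, sketched under the assumption that ia64's
  local_t wraps an atomic64_t member named `a`:

      /* before (broken: no addend/subtrahend argument) */
      #define local_add(l)      atomic64_add(&(l)->a)
      /* after */
      #define local_add(i, l)   atomic64_add((i), &(l)->a)
      #define local_sub(i, l)   atomic64_sub((i), &(l)->a)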
* [IA64] Change SET_PERSONALITY to comply with comment in binfmt_elf.c (Robin Holt, 2005-12-06; 1 file, -0/+2)

  We have a customer application which trips a bug. The problem arises
  when a driver attempts to call do_munmap on an area which is mapped,
  but because current->thread.task_size has been set to 0xC0000000, the
  call to do_munmap fails, thinking it is an unmap beyond the user's
  address space.

  The comment in fs/binfmt_elf.c in load_elf_library(), before the call
  to SET_PERSONALITY(), indicates that task_size must not be changed for
  the running application until flush_thread - but it is for ia64
  executing ia32 binaries. This patch moves the setting of task_size
  from SET_PERSONALITY() to flush_thread() as indicated. The customer
  application is no longer able to trip the bug.

  Signed-off-by: Robin Holt <holt@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64-SGI] altix: pci_window fixup (John Keller, 2005-12-06; 1 file, -3/+17)

  Altix-only patch to add fixup code that sets up
  pci_controller->window. This code is a temporary fix until ACPI
  support on Altix is added. Also corrects the usage of
  pci_dev->sysdata, which had previously been used to reference
  platform-specific device info, to now point to a pci_controller
  struct.

  Signed-off-by: John Keller <jpk@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Make pfn_valid more precise for SGI Altix systems (Dean Roe, 2005-11-29; 1 file, -1/+2)

  A single SGI Altix system can be divided into multiple partitions,
  each running its own instance of the Linux kernel. pfn_valid() is
  currently not optimal for any but the first partition, since it does
  not compare the pfn with min_low_pfn before calling the more costly
  ia64_pfn_valid().

  Signed-off-by: Dean Roe <roe@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
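  The resulting test, sketched: cheap range comparisons first, the
  costly lookup last.

      /* sketch of the discontig-memory pfn_valid() after the change */
      #define pfn_valid(pfn)                    \
              (((pfn) >= min_low_pfn) &&        \
               ((pfn) < max_low_pfn)  &&        \
               ia64_pfn_valid(pfn))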
* [IA64-SGI] support for older versions of PROM (Jack Steiner, 2005-11-21; 1 file, -0/+34)

  Add support for old versions of the SN PROMs. Eventually this support
  will be deleted, but it is useful right now to continue supporting
  older PROMs.

  Signed-off-by: Jack Steiner <steiner@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] altix: fix copyright in tioce .h files (Mark Maule, 2005-11-18; 2 files, -29/+14)

  Fix up copyright in tioce header files.

  Signed-off-by: Mark Maule <maule@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [PATCH] atomic: inc_not_zero (Nick Piggin, 2005-11-14; 1 file, -0/+10)

  Introduce an atomic_inc_not_zero operation. Make this a special case
  of atomic_add_unless, because lockless pagecache actually wants
  atomic_inc_not_negativeone due to its offset refcount.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
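  The usual construction: atomic_add_unless() loops on atomic_cmpxchg(),
  and atomic_inc_not_zero() is its (1, 0) special case. A sketch:

      /* add `a` to `v` unless v == u; non-zero if the add happened */
      #define atomic_add_unless(v, a, u)                              \
      ({                                                              \
              int c, old;                                             \
              c = atomic_read(v);                                     \
              while (c != (u) &&                                      \
                     (old = atomic_cmpxchg((v), c, c + (a))) != c)    \
                      c = old;                                        \
              c != (u);                                               \
      })

      #define atomic_inc_not_zero(v)    atomic_add_unless((v), 1, 0)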
* [PATCH] atomic: cmpxchg (Nick Piggin, 2005-11-14; 1 file, -0/+2)

  Introduce an atomic_cmpxchg operation.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
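  On an arch with a native cmpxchg() (ia64 included) this is a thin
  wrapper; a sketch:

      /* if *v == old, set *v = new; always returns the prior value */
      #define atomic_cmpxchg(v, old, new)   \
              (cmpxchg(&((v)->counter), (old), (new)))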
* [IA64] 4-level page tables (Robin Holt, 2005-11-11; 3 files, -18/+85)

  This patch introduces 4-level page tables to ia64. I have run some
  benchmarks and found nothing interesting. Performance has consistently
  fallen within the noise range.

  It also introduces a config option (setting the default to 3 levels).
  The config option prevents having 4-level page tables with a 64k base
  page size.

  Signed-off-by: Robin Holt <holt@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [PATCH] PCI: Change MSI to use physical delivery mode always (Ashok Raj, 2005-11-11; 1 file, -3/+0)

  MSI hardcoded the delivery mode to logical delivery mode. Recently
  x86_64 moved to physical mode addressing to support physflat mode.
  With this mode enabled, I noticed that my eth devices with MSI weren't
  working: msi_address_init() was hardcoded to use logical mode for i386
  and x86_64, so when we switched to physical mode, things stopped
  working. Since we don't use lowest-priority delivery with MSI anyway -
  it's always directed to just a single CPU - it's safe and simpler to
  use physical mode always, even when we use logical delivery mode for
  IPIs or other ioapic RTEs.

  Signed-off-by: Ashok Raj <ashok.raj@intel.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* Pull context-bitmap into release branch (Tony Luck, 2005-11-10; 2 files, -34/+48)