path: root/arch/sh/mm/cache-sh4.c
Commit message  (Author, Date, Files changed, Lines removed/added)
* sh: Mass ctrl_in/outX to __raw_read/writeX conversion.  (Paul Mundt, 2010-01-26, 1 file, -5/+5)
* sh: Kill off the special uncached section and fixmap.  (Paul Mundt, 2010-01-21, 1 file, -2/+2)
* sh: Optimise flush_dcache_page() on SH4  (Matt Fleming, 2010-01-02, 1 file, -10/+3)
* sh: Can't compare physical and virtual addresses for aliases  (Matt Fleming, 2009-12-09, 1 file, -2/+1)
* sh: Drop associative writes for SH-4 cache flushes.  (Matt Fleming, 2009-12-04, 1 file, -2/+2)
* Merge branch 'sh/stable-updates'  (Paul Mundt, 2009-11-09, 1 file, -1/+4)
|\
| * sh: Account for cache aliases in flush_icache_range()  (Matt Fleming, 2009-11-09, 1 file, -1/+4)
* | sh: Do not apply virt_to_phys() to a physical address  (Matt Fleming, 2009-10-30, 1 file, -2/+1)
* | Merge branch 'sh/stable-updates'  (Paul Mundt, 2009-10-16, 1 file, -14/+12)
|\|
| * sh: Fix up single page flushing to use PAGE_SIZE.  (Valentin Sitdikov, 2009-10-16, 1 file, -12/+10)
* | sh: Prepare for dynamic PMB support  (Matt Fleming, 2009-10-10, 1 file, -3/+3)
* | sh: Obliterate the P1 area macros  (Matt Fleming, 2009-10-10, 1 file, -1/+1)
* | Merge branch 'sh/cachetlb'  (Paul Mundt, 2009-10-10, 1 file, -421/+75)
|\ \
| |/
|/|
| * sh: Fix up redundant cache flushing for PAGE_SIZE > 4k.  (Paul Mundt, 2009-09-09, 1 file, -1/+1)
| * sh: Rework sh4_flush_cache_page() for coherent kmap mapping.  (Paul Mundt, 2009-09-09, 1 file, -27/+48)
| * sh: Kill off segment-based d-cache flushing on SH-4.  (Paul Mundt, 2009-09-09, 1 file, -271/+20)
| * sh: Kill off broken PHYSADDR() usage in sh4_flush_dcache_page().  (Paul Mundt, 2009-09-09, 1 file, -2/+2)
| * sh: sh4_flush_cache_mm() optimizations.  (Paul Mundt, 2009-09-09, 1 file, -120/+4)
* | sh: Sprinkle __uses_jump_to_uncached  (Matt Fleming, 2009-10-09, 1 file, -1/+1)
|/
* sh: Cleanup whitespace damage in sh4_flush_icache_range().  (Paul Mundt, 2009-09-09, 1 file, -30/+33)
* Revert "sh: Kill off now redundant local irq disabling."  (Paul Mundt, 2009-09-01, 1 file, -26/+35)
* Merge branch 'master' into sh/smp  (Paul Mundt, 2009-09-01, 1 file, -13/+61)
|\
| * sh: Fix dcache flushing for N-way write-through caches.  (Matt Fleming, 2009-09-01, 1 file, -21/+27)
| * sh: Fix problems with cache flushing when cache is in write-through mode  (Stuart Menefy, 2009-08-24, 1 file, -0/+34)
| * sh: Improve comments int SH4 cache flushing code  (Stuart Menefy, 2009-08-24, 1 file, -0/+11)
* | sh: Fix up sh4_flush_dcache_page() build on UP.  (Paul Mundt, 2009-08-27, 1 file, -1/+2)
* | sh: Kill off now redundant local irq disabling.  (Paul Mundt, 2009-08-21, 1 file, -35/+26)
* | sh: Make cache flushers SMP-aware.  (Paul Mundt, 2009-08-21, 1 file, -17/+37)
* | sh: Fix up cache-sh4 build on SMP.  (Paul Mundt, 2009-08-20, 1 file, -1/+1)
* | sh: Migrate SH-4 cacheflush ops to function pointers.  (Paul Mundt, 2009-08-15, 1 file, -41/+46)
* | sh: Kill off unused flush_icache_user_range().  (Paul Mundt, 2009-08-15, 1 file, -14/+0)
* | sh: Don't export flush_dcache_all().  (Paul Mundt, 2009-08-15, 1 file, -1/+1)
* | sh: Move alias computation to shared cache init.  (Paul Mundt, 2009-08-15, 1 file, -53/+5)
* | sh: Centralize the CPU cache initialization routines.  (Paul Mundt, 2009-08-15, 1 file, -1/+1)
* | sh: NO_CONTEXT ASID optimizations for SH-4 cache flush.  (Paul Mundt, 2009-08-14, 1 file, -0/+9)
* | sh: Split out SH-4 __flush_xxx_region() ops.  (Paul Mundt, 2009-08-04, 1 file, -60/+0)
* | sh: Migrate from PG_mapped to PG_dcache_dirty.  (Paul Mundt, 2009-07-22, 1 file, -1/+9)
|/
* sh: uninline flush_icache_all().  (Paul Mundt, 2008-09-08, 1 file, -1/+1)
* sh: Optimized flush_icache_range() implementation.  (Chris Smith, 2008-07-28, 1 file, -31/+36)
* sh: Preparation for uncached jumps through PMB.  (Stuart Menefy, 2008-01-28, 1 file, -7/+7)
* sh: Calculate cache aliases on L2 caches.  (Paul Mundt, 2007-09-24, 1 file, -0/+15)
* sh: Fix alias calculation for non-aliasing cases.  (Paul Mundt, 2007-09-24, 1 file, -2/+2)
* sh: Avoid smp_processor_id() in cache desc paths.  (Paul Mundt, 2007-09-21, 1 file, -31/+31)
* sh: Reclaim beginning of P3 space for vmalloc area.  (Paul Mundt, 2007-07-25, 1 file, -3/+0)
* sh: Add kmap_coherent()/kunmap_coherent() interface for SH-4.  (Paul Mundt, 2007-07-24, 1 file, -11/+0)
* sh: Revert lazy dcache writeback changes.  (Paul Mundt, 2007-03-05, 1 file, -11/+1)
* sh: Fixup cpu_data references for the non-boot CPUs.  (Paul Mundt, 2007-02-13, 1 file, -32/+33)
* sh: Lazy dcache writeback optimizations.  (Paul Mundt, 2007-02-13, 1 file, -1/+11)
* sh: Convert remaining remap_area_pages() users to ioremap_page_range().  (Paul Mundt, 2006-12-12, 1 file, -1/+1)
* sh: Fixup various PAGE_SIZE == 4096 assumptions.  (Paul Mundt, 2006-12-06, 1 file, -2/+2)