author	Vineet Gupta <vgupta@synopsys.com>	2015-10-17 13:24:14 +0200
committer	Vineet Gupta <vgupta@synopsys.com>	2019-10-28 20:12:32 +0100
commit	1355ea2e603d76af6b1381873e37b1aec22a18a0 (patch)
tree	54d0bd6bbd412d2bd8e393554db413c1baef0a04
parent	ARC: mm: tlb flush optim: Make TLBWriteNI fallback to TLBWrite if not available (diff)
ARC: mm: tlb flush optim: elide repeated uTLB invalidate in loop
The unconditional full TLB flush (on, say, ASID rollover) iterates over each entry and uses TLBWrite to zero it out. TLBWrite by design also invalidates the uTLBs, so we end up invalidating them as many times as there are entries (512 or 1K).

Optimize this by using the weaker TLBWriteNI command in the loop, which doesn't touch the uTLBs, and a single explicit IVUTLB outside the loop to invalidate them all at once.

Given this optimization, the IVUTLB is now needed on MMUv4 too, where the uTLBs and JTLB are otherwise kept coherent by the TLBInsertEntry / TLBDeleteEntry commands.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
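The restructured flush loop looks roughly like the sketch below. It is a minimal illustration, assuming ARC aux-register helpers and command encodings similar to those in arch/arc/mm/tlb.c (write_aux_reg, ARC_REG_TLBINDEX, ARC_REG_TLBCOMMAND, TLBWriteNI, TLBIVUTLB); the names and loop structure here are illustrative, not a verbatim copy of the patch.

	/*
	 * Sketch: zero out every JTLB entry without hammering the uTLBs
	 * on each iteration, then invalidate the uTLBs exactly once.
	 * Helper/macro names are assumed, per the lead-in above.
	 */
	static void flush_all_tlb_entries(unsigned int num_entries)
	{
		unsigned int entry;

		for (entry = 0; entry < num_entries; entry++) {
			/* select the JTLB slot to overwrite */
			write_aux_reg(ARC_REG_TLBINDEX, entry);

			/*
			 * TLBWriteNI writes the (zeroed) entry but, unlike
			 * TLBWrite, does not also invalidate the uTLBs.
			 */
			write_aux_reg(ARC_REG_TLBCOMMAND, TLBWriteNI);
		}

		/* one explicit uTLB invalidate instead of 512/1K implicit ones */
		write_aux_reg(ARC_REG_TLBCOMMAND, TLBIVUTLB);
	}

With TLBWrite in the loop, the uTLB invalidate happened once per entry as a side effect; moving to TLBWriteNI plus a trailing IVUTLB keeps the end state identical while paying the uTLB invalidation cost only once.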