Commit message | Author | Age | Files | Lines
* regmap: cache: Fix typo in cache_bypass parameter description | Andrew F. Davis | 2016-03-23 | 1 | -1/+1
|   Setting the flag 'cache_bypass' will bypass the cache, not the hardware. Fix the comment accordingly. Fixes: 0eef6b0415f5 ("regmap: Fix doc comment") Signed-off-by: Andrew F. Davis <afd@ti.com> Signed-off-by: Mark Brown <broonie@kernel.org>
* Merge remote-tracking branch 'regmap/topic/update-bits' into regmap-next | Mark Brown | 2016-03-05 | 2 | -226/+88
|\
| * regmap: replace regmap_write_bits() | Kuninori Morimoto | 2016-03-05 | 2 | -32/+3
| |   commit 23b92e4cf5fd ("regmap: remove regmap_write_bits()") removed regmap_write_bits(), but an MFD driver was still using it, so commit e30fccd6771d ("regmap: Keep regmap_write_bits()") restored it in its original open-coded style. This patch reimplements regmap_write_bits() on top of regmap_update_bits_base(). Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
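    For reference, the reworked helper reduces to a thin wrapper around regmap_update_bits_base() with the force flag set; a minimal sketch of the resulting definition (the trailing arguments are change, async, force):

        #define regmap_write_bits(map, reg, mask, val) \
                regmap_update_bits_base(map, reg, mask, val, NULL, false, true)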
| * regmap: add regmap_fields_force_xxx() macros | Kuninori Morimoto | 2016-02-26 | 2 | -14/+4
| |   Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| * regmap: add regmap_field_force_xxx() macros | Kuninori Morimoto | 2016-02-26 | 1 | -0/+4
| |   Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| * regmap: merge regmap_fields_update_bits() into macro | Kuninori Morimoto | 2016-02-19 | 2 | -28/+2
| |   This patch merges regmap_fields_update_bits() into a macro built on regmap_fields_update_bits_base(). Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| * regmap: merge regmap_fields_write() into macro | Kuninori Morimoto | 2016-02-19 | 2 | -24/+3
| |   This patch merges regmap_fields_write() into a macro built on regmap_fields_update_bits_base(). Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| * regmap: add regmap_fields_update_bits_base() | Kuninori Morimoto | 2016-02-19 | 2 | -0/+44
| |   This patch adds a new regmap_fields_update_bits_base(), built on regmap_update_bits_base(). The current regmap_fields_xxx() helpers can then be merged into it as macros. Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| * regmap: merge regmap_field_update_bits() into macro | Kuninori Morimoto | 2016-02-19 | 2 | -22/+2
| |   This patch merges regmap_field_update_bits() into a macro built on regmap_field_update_bits_base(). Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| * regmap: merge regmap_field_write() into macro | Kuninori Morimoto | 2016-02-19 | 2 | -17/+3
| |   This patch merges regmap_field_write() into a macro built on regmap_field_update_bits_base(). Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| * regmap: add regmap_field_update_bits_base() | Kuninori Morimoto | 2016-02-19 | 2 | -1/+38
| |   This patch adds a new regmap_field_update_bits_base(), built on regmap_update_bits_base(). The current regmap_field_xxx() helpers can then be merged into it as macros. Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| * regmap: merge regmap_update_bits_check_async() into macro | Kuninori Morimoto | 2016-02-19 | 2 | -50/+2
| |   Current regmap has several similar update functions (regmap_update_bits(), regmap_update_bits_async(), regmap_update_bits_check(), regmap_update_bits_check_async()) whose differences are small. Furthermore, a *force* write option can be added in the future. This patch merges regmap_update_bits_check_async() into a macro built on regmap_update_bits_base(). Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| * regmap: merge regmap_update_bits_check() into macro | Kuninori Morimoto | 2016-02-19 | 2 | -37/+2
| |   Current regmap has several similar update functions (regmap_update_bits(), regmap_update_bits_async(), regmap_update_bits_check(), regmap_update_bits_check_async()) whose differences are small. Furthermore, a *force* write option can be added in the future. This patch merges regmap_update_bits_check() into a macro built on regmap_update_bits_base(). Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| * regmap: merge regmap_update_bits_async() into macro | Kuninori Morimoto | 2016-02-19 | 2 | -44/+2
| |   Current regmap has several similar update functions (regmap_update_bits(), regmap_update_bits_async(), regmap_update_bits_check(), regmap_update_bits_check_async()) whose differences are small. Furthermore, a *force* write option can be added in the future. This patch merges regmap_update_bits_async() into a macro built on regmap_update_bits_base(). Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| * regmap: merge regmap_update_bits() into macro | Kuninori Morimoto | 2016-02-19 | 2 | -32/+3
| |   Current regmap has several similar update functions (regmap_update_bits(), regmap_update_bits_async(), regmap_update_bits_check(), regmap_update_bits_check_async()) whose differences are small. Furthermore, a *force* write option can be added in the future. This patch merges regmap_update_bits() into a macro built on regmap_update_bits_base(). Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| * regmap: add regmap_update_bits_base() | Kuninori Morimoto | 2016-02-19 | 2 | -0/+51
| |   Current regmap has several similar update functions (regmap_update_bits(), regmap_update_bits_async(), regmap_update_bits_check(), regmap_update_bits_check_async()) whose differences are small. Furthermore, a *force* write option can be added in the future. This patch adds a new regmap_update_bits_base() that merges all of these features into one function; the functions above can then be reduced to macros around it, as the sketch below shows. Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
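    With the base function in place, the four variants collapse into one-line macros that differ only in their change/async arguments; a sketch of the resulting definitions (trailing arguments are change, async, force):

        #define regmap_update_bits(map, reg, mask, val) \
                regmap_update_bits_base(map, reg, mask, val, NULL, false, false)
        #define regmap_update_bits_async(map, reg, mask, val) \
                regmap_update_bits_base(map, reg, mask, val, NULL, true, false)
        #define regmap_update_bits_check(map, reg, mask, val, change) \
                regmap_update_bits_base(map, reg, mask, val, change, false, false)
        #define regmap_update_bits_check_async(map, reg, mask, val, change) \
                regmap_update_bits_base(map, reg, mask, val, change, true, false)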
| |
|  \
|   \
|    \
|     \
|      \
*-----. \ Merge remote-tracking branches 'regmap/topic/devm-irq', 'regmap/topic/doc', 'regmap/topic/irq' and 'regmap/topic/stride' into regmap-next | Mark Brown | 2016-03-05 | 5 | -12/+157
|\ \ \ \ \
| | | | * | regcache: flat: Introduce register stride order | Xiubo Li | 2016-02-19 | 1 | -5/+15
| | | | | |   Here we introduce regcache_flat_get_index(), which uses the register stride order and a bit shift to save memory in the flat cache. This costs a little access performance, since the shift is needed to compute the index into the cache array, but that cost is negligible for memory-mapped I/O. Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| | | | * | regcache: Introduce the index parsing API by stride order | Xiubo Li | 2016-02-19 | 1 | -0/+6
| | | | | |   This introduces regcache_get_index_by_order() for the regmap cache, which uses the register stride order and a bit shift to improve performance. Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com> Signed-off-by: Mark Brown <broonie@kernel.org>
| | | | * | regmap: core: Introduce register stride order | Xiubo Li | 2016-02-19 | 2 | -6/+23
| | | | |/ | | | | |   Since the register stride is always a power of two, and a bit shift is much faster than multiplication or division, introduce the stride order and use a shift to convert between an index and a register offset, improving performance. Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com> Signed-off-by: Mark Brown <broonie@kernel.org>
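    A minimal sketch of the conversion (assuming a reg_stride_order field on struct regmap; ilog2() is the kernel's integer log2 helper):

        /* The stride is a power of two, so capture its log2 once at init. */
        map->reg_stride_order = ilog2(map->reg_stride);

        /* Cache index -> register address: shift instead of multiply. */
        reg = index << map->reg_stride_order;

        /* Register address -> cache index: shift instead of divide. */
        index = reg >> map->reg_stride_order;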
| | | * | regmap: irq: Enable irq retriggering for nested irqs | Grygorii Strashko | 2016-02-29 | 1 | -0/+1
| | | | |   When nested interrupts are handled with the regmap irq framework, pending interrupts must be marked for resend on enable_irq(); otherwise events may be lost for nested irqs. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Tested-by: Nishanth Menon <nm@ti.com> Signed-off-by: Mark Brown <broonie@kernel.org>
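    Given the +1 diffstat, the change plausibly amounts to one call in the mapping path that tells the IRQ core about the parent interrupt so pending events can be resent; a sketch, not the verified diff:

        static int regmap_irq_map(struct irq_domain *h, unsigned int virq,
                                  irq_hw_number_t hw)
        {
                struct regmap_irq_chip_data *data = h->host_data;

                irq_set_chip_data(virq, data);
                irq_set_chip(virq, &data->irq_chip);
                irq_set_nested_thread(virq, 1);
                irq_set_parent(virq, data->irq); /* allow resend via the parent */
                irq_set_noprobe(virq);

                return 0;
        }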
| | | * | regmap: irq: add devm apis for regmap_{add,del}_irq_chip | Laxman Dewangan | 2016-02-15 | 2 | -0/+90
| | | | |   Add device-managed APIs for regmap_add_irq_chip() and regmap_del_irq_chip() so that the device framework handles freeing them. This helps in two ways: 1. It maintains the ordering of resource allocation and deallocation. With the plain APIs: regmap_add_irq_chip(&d); devm_request_threaded_irq(virq); and on the free path: regmap_del_irq_chip(d); followed by removing the irq registration. Here the regmap irq chip is deleted before the irq is freed, which forces the use of non-devm irq registration. With the devm APIs the sequence is maintained properly: devm_regmap_add_irq_chip(&d); devm_request_threaded_irq(virq); and deallocation happens in reverse order via the device framework. 2. There is no need to delete the regmap_irq_chip in the error path or in the remove callback, so there is less code on those paths. Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com> Signed-off-by: Mark Brown <broonie@kernel.org>
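    A probe-path sketch of the ordering this enables (the foo_* names are hypothetical placeholders):

        static int foo_probe(struct platform_device *pdev)
        {
                struct regmap_irq_chip_data *irq_data;
                int virq, ret;

                /* Freed automatically, and only after the IRQ below is released. */
                ret = devm_regmap_add_irq_chip(&pdev->dev, foo->map, foo->irq,
                                               IRQF_ONESHOT, 0, &foo_irq_chip,
                                               &irq_data);
                if (ret)
                        return ret;

                virq = regmap_irq_get_virq(irq_data, FOO_IRQ_ALARM);
                return devm_request_threaded_irq(&pdev->dev, virq, NULL,
                                                 foo_irq_thread, IRQF_ONESHOT,
                                                 "foo", foo);
        }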
| | | * | regmap: irq: dispose all virtual irq before removing domain | Laxman Dewangan | 2016-02-09 | 1 | -0/+21
| | | |/ | | | |   All virtual irqs mapped for hwirqs on the chip's irq domain must be disposed of before that domain is removed. Hence, dispose of all mapped irqs before deleting the irq domain in regmap_del_irq_chip(). Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com> Reviewed-by: Javier Martinez Canillas <javier@osg.samsung.com> Tested-by: Javier Martinez Canillas <javier@osg.samsung.com> Signed-off-by: Mark Brown <broonie@kernel.org>
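    The disposal loop in regmap_del_irq_chip() then looks roughly like this (a sketch based on the description above):

        /* Dispose of every mapped virq before removing the domain. */
        for (hwirq = 0; hwirq < chip->num_irqs; hwirq++) {
                unsigned int virq = irq_find_mapping(d->domain, hwirq);

                if (virq)
                        irq_dispose_mapping(virq);
        }
        irq_domain_remove(d->domain);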
| | * | regmap: clarify meaning of max_register | Stefan Agner | 2016-01-22 | 1 | -1/+1
| | | |   The exact meaning of max_register is not entirely clear. Follow the common wording and use "address" instead of "index". Signed-off-by: Stefan Agner <stefan@agner.ch> Signed-off-by: Mark Brown <broonie@kernel.org>
| * | | regmap: irq: add devm apis for regmap_{add,del}_irq_chip | Laxman Dewangan | 2016-03-05 | 2 | -0/+90
| | |/ | |/|   Add device-managed APIs for regmap_add_irq_chip() and regmap_del_irq_chip() so that the device framework handles freeing them. This helps in two ways: 1. It maintains the ordering of resource allocation and deallocation. With the plain APIs: regmap_add_irq_chip(&d); devm_request_threaded_irq(virq); and on the free path: regmap_del_irq_chip(d); followed by removing the irq registration. Here the regmap irq chip is deleted before the irq is freed, which forces the use of non-devm irq registration. With the devm APIs the sequence is maintained properly: devm_regmap_add_irq_chip(&d); devm_request_threaded_irq(virq); and deallocation happens in reverse order via the device framework. 2. There is no need to delete the regmap_irq_chip in the error path or in the remove callback, so there is less code on those paths. Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com> Signed-off-by: Mark Brown <broonie@kernel.org>
* | | Merge remote-tracking branch 'regmap/topic/mmio' into regmap-next | Mark Brown | 2016-03-05 | 14 | -145/+191
|\ \ \
| * \ \ Merge branch 'fix/mmio' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap into regmap-mmio | Mark Brown | 2016-02-05 | 0 | -0/+0
| |\ \ \
| * | | | regmap: cache: Fall back to register by register read for cache defaults | Mark Brown | 2016-02-02 | 1 | -11/+30
| | | | |   If we are unable to read the cache defaults for a regmap then fall back on attempting to read them register by register. This is going to be painfully slow for large regmaps but might be adequate for smaller ones. Signed-off-by: Mark Brown <broonie@kernel.org> [maciej: Use cache_bypass around read and skipping of unreadable regs] Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name> Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com> Tested-by: Fabio Estevam <fabio.estevam@nxp.com> Signed-off-by: Mark Brown <broonie@kernel.org>
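    The fallback is roughly shaped like the loop below, with the cache bypassed so the reads hit the hardware (a sketch; regmap_readable() and regmap_volatile() are regmap's internal accessibility helpers):

        map->cache_bypass = true;
        for (i = 0; i < map->num_reg_defaults_raw; i++) {
                unsigned int reg = i * map->reg_stride;
                unsigned int val;

                /* Skip registers we cannot meaningfully snapshot. */
                if (!regmap_readable(map, reg) || regmap_volatile(map, reg))
                        continue;

                ret = regmap_read(map, reg, &val);
                if (ret)
                        break;
                /* ... record val as the cache default for reg ... */
        }
        map->cache_bypass = false;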
| * | | | regmap: Return an error if a caller attempts to do an unsupported raw read | Mark Brown | 2016-02-01 | 1 | -0/+3
| | | | |   regmaps without raw I/O access can't implement raw I/O operations; return an error if someone tries to do that rather than crashing. Signed-off-by: Mark Brown <broonie@kernel.org>
| * | | | regmap: mmio: Convert to regmap_bus and fix accessor usage | Mark Brown | 2016-01-27 | 1 | -120/+139
| | | | |   Currently regmap-mmio uses the __raw accessors to read and write from memory. This is not safe as these interact poorly with spinlocks and are not guaranteed to generate emulated instructions on at least ARM where regmap is commonly used. The APIs that are provided all provide some byte swapping so this is difficult to do with the current regmap-mmio implementation which attempts to use the regmap core byte swapping. We can fix this by modernising the MMIO implementation to use reg_read() and reg_write() operations which were added after the API was implemented and pass simple unsigned integers through to the bus, making use of the formatting provided by the I/O accessors using a similar pattern to that used by the core. This will be less efficient for block I/O operations since we now enable and disable any required clocks per register but it is not clear that any users of regmap-mmio actually use block I/O and there is room to optimise later. This removes support for big endian I/O on 64 bit registers since no I/O accessors are provided, no current users were found and support can be added easily once they are available. In addition make the default endianness little endian. This was the behaviour prior to 29bb45f25ff305 (regmap-mmio: Use native endianness for read/write) and is the behaviour desired by most existing users, the users have been audited and those that need native endianness converted to request it explicitly. Previously native was documented as the default but due to the byte swapping in the accessors this was not correctly implemented. Fixes: 29bb45f25ff305 (regmap-mmio: Use native endianness for read/write) Reported-by: Johannes Berg <johannes@sipsolutions.net> Tested-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: Mark Brown <broonie@kernel.org>
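    The shape of the converted bus, passing plain unsigned integers through reg_read()/reg_write() and letting the ordered I/O accessors do the formatting (a sketch; regmap_mmio_context stands in for the driver's private state):

        static int regmap_mmio_write(void *context, unsigned int reg,
                                     unsigned int val)
        {
                struct regmap_mmio_context *ctx = context;

                writel(val, ctx->regs + reg); /* ordered accessor, not __raw_*() */
                return 0;
        }

        static int regmap_mmio_read(void *context, unsigned int reg,
                                    unsigned int *val)
        {
                struct regmap_mmio_context *ctx = context;

                *val = readl(ctx->regs + reg);
                return 0;
        }

        static const struct regmap_bus regmap_mmio = {
                .reg_write = regmap_mmio_write,
                .reg_read  = regmap_mmio_read,
        };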
| * | | | MIPS: dt: Explicitly specify native endian behaviour for syscon | Mark Brown | 2016-01-27 | 10 | -1/+10
| | | | |   On many MIPS systems the endianness of IP blocks is kept the same as that of the CPU by the hardware. This includes the system controllers on these systems, which are controlled via syscon; syscon uses the regmap API, which used readl() and writel() to interact with the hardware, meaning that all writes were converted to little endian when writing to the hardware. This caused a bad interaction with the regmap core in big endian mode since it was not aware of the byte swapping and so ended up performing little endian writes. Unfortunately when this issue was noticed it was addressed by updating the DT for the affected devices to specify them as little endian. This happened to work since it resulted in two endianness swaps which cancelled each other out and gave little endian behaviour, but it meant that the DT was clearly not accurately describing the hardware. The intention of commit 29bb45f25ff305 (regmap-mmio: Use native endianness for read/write) was to fix this by making regmap default to native endianness, but this breaks most other MMIO users where the hardware has a fixed endianness and the implementation uses the __raw accessors, which are not intended to be used outside of architecture code. Instead use the newly added native-endian DT property to say exactly what we want for these systems. Fixes: 29bb45f25ff305 (regmap-mmio: Use native endianness for read/write) Reported-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Ralf Baechle <ralf@linux-mips.org>
| * | | | regmap: Add explicit native endian flag to DT bindings | Mark Brown | 2016-01-27 | 2 | -4/+9
| | |/ / | |/| |   Currently the binding document says that if no endianness is configured we use native endian, but this is not in fact true for all binding types, and we do have some devices that really want native endianness, such as Broadcom MIPS SoCs, where switching the endianness of the CPU also switches the endianness of external IPs. Provide an explicit option for this. Signed-off-by: Mark Brown <broonie@kernel.org>
* | | | Merge remote-tracking branch 'regmap/fix/raw' into regmap-linus | Mark Brown | 2016-03-05 | 1 | -2/+2
|\ \ \ \
| * | | | regmap: pass buffer size to regmap_raw_read() in regcache_hw_init() | Maciej S. Szmigiero | 2016-01-15 | 1 | -2/+2
| | | | |   regcache_hw_init() uses regmap_raw_read() to initialize the cache when reg_defaults_raw isn't provided. The last parameter to regmap_raw_read() is the buffer size in bytes; however, regcache_hw_init() called it with the number of registers to read instead, which causes problems if cached registers aren't one byte wide. This wasn't triggered by any current in-tree drivers since they either have one-byte registers or provide reg_defaults_raw explicitly. Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name> Signed-off-by: Mark Brown <broonie@kernel.org>
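    The fix amounts to passing a byte count rather than a register count (a sketch; the exact field used for the byte size is an assumption):

        /* Before (wrong): the size argument was a register count. */
        ret = regmap_raw_read(map, 0, tmp_buf, map->num_reg_defaults_raw);

        /* After: the size must be in bytes. */
        ret = regmap_raw_read(map, 0, tmp_buf,
                              map->num_reg_defaults_raw * map->cache_word_size);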
* | | | | Linux 4.5-rc6 (tag: v4.5-rc6) | Linus Torvalds | 2016-02-28 | 1 | -1/+1
| | | | |
* | | | | Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2016-02-28 | 2 | -131/+244
|\ \ \ \ \   Pull perf fixes from Thomas Gleixner: "A rather largish series of 12 patches addressing a maze of race conditions in the perf core code from Peter Zijlstra" * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: perf: Robustify task_function_call() perf: Fix scaling vs. perf_install_in_context() perf: Fix scaling vs. perf_event_enable() perf: Fix scaling vs. perf_event_enable_on_exec() perf: Fix ctx time tracking by introducing EVENT_TIME perf: Cure event->pending_disable race perf: Fix race between event install and jump_labels perf: Fix cloning perf: Only update context time when active perf: Allow perf_release() with !event->ctx perf: Do not double free perf: Close install vs. exit race
| * | | | | perf: Robustify task_function_call() | Peter Zijlstra | 2016-02-25 | 1 | -20/+20
| | | | | |   Since there is no serialization between task_function_call() doing task_curr() and the other CPU doing context switches, we could end up not sending an IPI even if we had to. And I'm not sure I still buy my own argument we're OK. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dvyukov@google.com Cc: eranian@google.com Cc: oleg@redhat.com Cc: panand@redhat.com Cc: sasha.levin@oracle.com Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160224174948.340031200@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | | perf: Fix scaling vs. perf_install_in_context() | Peter Zijlstra | 2016-02-25 | 1 | -45/+70
| | | | | |   Completely reworks perf_install_in_context() (again!) in order to ensure that there will be no ctx time hole between add_event_to_ctx() and any potential ctx_sched_in(). Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dvyukov@google.com Cc: eranian@google.com Cc: oleg@redhat.com Cc: panand@redhat.com Cc: sasha.levin@oracle.com Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160224174948.279399438@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | | perf: Fix scaling vs. perf_event_enable() | Peter Zijlstra | 2016-02-25 | 1 | -19/+23
| | | | | |   Similar to perf_event_enable_on_exec(), ensure that event timings are consistent across perf_event_enable(). Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dvyukov@google.com Cc: eranian@google.com Cc: oleg@redhat.com Cc: panand@redhat.com Cc: sasha.levin@oracle.com Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160224174948.218288698@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | | perf: Fix scaling vs. perf_event_enable_on_exec() | Peter Zijlstra | 2016-02-25 | 1 | -0/+1
| | | | | |   The recent commit 3e349507d12d ("perf: Fix perf_enable_on_exec() event scheduling") caused this by moving task_ctx_sched_out() from before __perf_event_mask_enable() to after it. The overlooked consequence of that change is that task_ctx_sched_out() would update the ctx time fields, and now __perf_event_mask_enable() uses stale time. In order to fix this, explicitly stop our context's time before enabling the event(s). Reported-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dvyukov@google.com Cc: eranian@google.com Cc: panand@redhat.com Cc: sasha.levin@oracle.com Cc: vince@deater.net Fixes: 3e349507d12d ("perf: Fix perf_enable_on_exec() event scheduling") Link: http://lkml.kernel.org/r/20160224174948.159242158@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | | perf: Fix ctx time tracking by introducing EVENT_TIME | Peter Zijlstra | 2016-02-25 | 1 | -12/+30
| | | | | |   Currently any ctx_sched_in() call will re-start the ctx time tracking; this means that calls like:
| | | | | |
| | | | | |     ctx_sched_in(.event_type = EVENT_PINNED);
| | | | | |     ctx_sched_in(.event_type = EVENT_FLEXIBLE);
| | | | | |
| | | | | |   will have a hole in their ctx time tracking. This is likely harmless but can confuse things a little. By adding EVENT_TIME, we can have the first ctx_sched_in() (is_active: 0 -> !0) start the time and any further ctx_sched_in() will leave the timestamps alone. Secondly, this allows for an early disable like:
| | | | | |
| | | | | |     ctx_sched_out(.event_type = EVENT_TIME);
| | | | | |
| | | | | |   which would update the ctx time (if the ctx is active) and any further calls to ctx_sched_out() would not further modify the ctx time. For ctx_sched_in() any 0 -> !0 transition will automatically include EVENT_TIME. For ctx_sched_out(), any transition that clears EVENT_ALL will automatically clear EVENT_TIME. These two rules ensure that under normal circumstances we need not bother with EVENT_TIME and get natural ctx time behaviour. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dvyukov@google.com Cc: eranian@google.com Cc: oleg@redhat.com Cc: panand@redhat.com Cc: sasha.levin@oracle.com Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160224174948.100446561@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
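    A sketch of the flag layout this implies; EVENT_ALL deliberately excludes EVENT_TIME, so a transition that clears EVENT_ALL can also clear EVENT_TIME per the rules above:

        enum event_type_t {
                EVENT_FLEXIBLE = 0x1,
                EVENT_PINNED   = 0x2,
                EVENT_TIME     = 0x4,
                EVENT_ALL      = EVENT_FLEXIBLE | EVENT_PINNED,
        };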
| * | | | | perf: Cure event->pending_disable race | Peter Zijlstra | 2016-02-25 | 1 | -3/+3
| | | | | |   Because event_sched_out() checks event->pending_disable _before_ actually disabling the event, it can happen that the event fires after it checks but before it gets disabled. This would leave event->pending_disable set and the queued irq_work will try and process it. However, if the event trigger was during schedule(), the event might have been de-scheduled by the time the irq_work runs, and perf_event_disable_local() will fail. Fix this by checking event->pending_disable _after_ we call event->pmu->del(). This depends on the latter being a compiler barrier, such that the compiler does not lift the load and re-creates the problem. Tested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dvyukov@google.com Cc: eranian@google.com Cc: oleg@redhat.com Cc: panand@redhat.com Cc: sasha.levin@oracle.com Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160224174948.040469884@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
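    The reordering in event_sched_out() is roughly as below; ->del() acting as a compiler barrier is what makes the late check safe (a sketch, not the verified diff):

        event->pmu->del(event, 0); /* compiler barrier: the load cannot be lifted */
        event->oncpu = -1;

        /* Only now can the event no longer fire. */
        if (event->pending_disable) {
                event->pending_disable = 0;
                state = PERF_EVENT_STATE_OFF;
        }
        event->state = state;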
| * | | | | perf: Fix race between event install and jump_labels | Peter Zijlstra | 2016-02-25 | 2 | -11/+44
| | | | | |   perf_install_in_context() relies upon the context switch hooks to have scheduled in events when the IPI misses its target -- after all, if the task has moved from the CPU (or wasn't running at all), it will have to context switch to run elsewhere. This however doesn't appear to be happening. It is possible for the IPI to not happen (task wasn't running) only to later observe the task running with an inactive context. The only possible explanation is that the context switch hooks are not called. Therefore put in a sync_sched() after toggling the jump_label to guarantee all CPUs will have them enabled before we install an event. A simple if (0->1) sync_sched() will not in fact work, because any further increment can race and complete before the sync_sched(). Therefore we must jump through some hoops. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dvyukov@google.com Cc: eranian@google.com Cc: oleg@redhat.com Cc: panand@redhat.com Cc: sasha.levin@oracle.com Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160224174947.980211985@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
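    The "hoops" boil down to an inc-not-zero fast path plus a locked slow path that enables the jump label and synchronizes before the count becomes visible; a sketch under those assumptions:

        static void account_event(struct perf_event *event)
        {
                if (atomic_inc_not_zero(&perf_sched_count))
                        return; /* already enabled on every CPU */

                mutex_lock(&perf_sched_mutex);
                if (!atomic_read(&perf_sched_count)) {
                        static_branch_enable(&perf_sched_events);
                        /* All CPUs must see the hooks before any install. */
                        synchronize_sched();
                }
                atomic_inc(&perf_sched_count);
                mutex_unlock(&perf_sched_mutex);
        }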
| * | | | | perf: Fix cloning | Peter Zijlstra | 2016-02-25 | 2 | -15/+15
| | | | | |   Alexander reported that when the 'original' context gets destroyed, no new clones happen. This can happen irrespective of the ctx switch optimization; any task can die, even the parent, and we want to continue monitoring the task hierarchy until we either close the event or no tasks are left in the hierarchy. perf_event_init_context() will attempt to pin the 'parent' context during clone(). At that point current is the parent, and since current cannot have exited while executing clone(), its context cannot have passed through perf_event_exit_task_context(). Therefore perf_pin_task_context() cannot observe ctx->task == TASK_TOMBSTONE. However, since inherit_event() does:
| | | | | |
| | | | | |     if (parent_event->parent)
| | | | | |             parent_event = parent_event->parent;
| | | | | |
| | | | | |   it looks at the 'original' event when it does is_orphaned_event(). This can return true if the context that contains this event has passed through perf_event_exit_task_context(). And thus we'll fail to clone the perf context. Fix this by adding a new state: STATE_DEAD, which is set by perf_release() to indicate that the filedesc (or kernel reference) is dead and there are no observers for our data left. Only for STATE_DEAD will is_orphaned_event() be true and inhibit cloning. STATE_EXIT is otherwise preserved such that is_event_hup() remains functional and will report when the observed task hierarchy becomes empty. Reported-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Tested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dvyukov@google.com Cc: eranian@google.com Cc: oleg@redhat.com Cc: panand@redhat.com Cc: sasha.levin@oracle.com Cc: vince@deater.net Fixes: c6e5b73242d2 ("perf: Synchronously clean up child events") Link: http://lkml.kernel.org/r/20160224174947.919845295@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | | perf: Only update context time when active | Peter Zijlstra | 2016-02-25 | 1 | -6/+6
| | | | | |   Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dvyukov@google.com Cc: eranian@google.com Cc: oleg@redhat.com Cc: panand@redhat.com Cc: sasha.levin@oracle.com Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160224174947.860690919@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | | perf: Allow perf_release() with !event->ctx | Peter Zijlstra | 2016-02-25 | 1 | -3/+13
| | | | | |   In the err_file: fput(event_file) case, the event will not yet have been attached to a context. However perf_release() does assume it has been. Cure this. Tested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dvyukov@google.com Cc: eranian@google.com Cc: oleg@redhat.com Cc: panand@redhat.com Cc: sasha.levin@oracle.com Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160224174947.793996260@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | | perf: Do not double free | Peter Zijlstra | 2016-02-25 | 1 | -1/+6
| | | | | |   In case of: err_file: fput(event_file), we'll end up calling perf_release() which in turn will free the event. Do not then free the event _again_. Tested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dvyukov@google.com Cc: eranian@google.com Cc: oleg@redhat.com Cc: panand@redhat.com Cc: sasha.levin@oracle.com Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160224174947.697350349@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | | perf: Close install vs. exit race | Peter Zijlstra | 2016-02-25 | 1 | -9/+26
| | | | | |   Consider the following scenario:
| | | | | |
| | | | | |     CPU0                                  CPU1
| | | | | |
| | | | | |     ctx = find_get_ctx();
| | | | | |                                           perf_event_exit_task_context()
| | | | | |     mutex_lock(&ctx->mutex);
| | | | | |     perf_install_in_context(ctx, ...);
| | | | | |     /* NO-OP */
| | | | | |     mutex_unlock(&ctx->mutex);
| | | | | |
| | | | | |     ...
| | | | | |
| | | | | |     perf_release()
| | | | | |     WARN_ON_ONCE(event->state != STATE_EXIT);
| | | | | |
| | | | | |   Since the event doesn't pass through perf_remove_from_context(), because perf_install_in_context() NO-OPs because the ctx is dead, and perf_event_exit_task_context() will not observe the event because it's not attached yet, the event->state will not be set. Solve this by revalidating ctx->task after we acquire ctx->mutex and failing the event creation as a whole. Tested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dvyukov@google.com Cc: eranian@google.com Cc: oleg@redhat.com Cc: panand@redhat.com Cc: sasha.levin@oracle.com Cc: vince@deater.net Link: http://lkml.kernel.org/r/20160224174947.626853419@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
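    The revalidation step, in sketch form (TASK_TOMBSTONE is the sentinel perf_event_exit_task_context() leaves in ctx->task):

        mutex_lock(&ctx->mutex);
        if (ctx->task == TASK_TOMBSTONE) {
                err = -ESRCH;
                goto err_locked; /* fail the event creation as a whole */
        }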
* | | | | | Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2016-02-28 | 5 | -7/+15
|\ \ \ \ \ \   Pull x86 fixes from Thomas Gleixner: "This update contains: - Hopefully the last ASM CLAC fixups - A fix for the Quark family related to the IMR lock which makes kexec work again - An off-by-one fix in the MPX code. Ironic, isn't it? - A fix for X86_PAE which addresses once more an unsigned long vs phys_addr_t hiccup" * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/mpx: Fix off-by-one comparison with nr_registers x86/mm: Fix slow_virt_to_phys() for X86_PAE again x86/entry/compat: Add missing CLAC to entry_INT80_32 x86/entry/32: Add an ASM_CLAC to entry_SYSENTER_32 x86/platform/intel/quark: Change the kernel's IMR lock bit to false
| * | | | | | x86/mpx: Fix off-by-one comparison with nr_registers | Colin Ian King | 2016-02-26 | 1 | -1/+1
| | | | | | |   In the unlikely event that regno == nr_registers, we get an array overrun on regoff because the invalid register check is currently off-by-one. Fix this with a check that regno is >= nr_registers instead. Detected with static analysis using CoverityScan. Fixes: fcc7ffd67991 "x86, mpx: Decode MPX instruction to get bound violation information" Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/1456512931-3388-1-git-send-email-colin.king@canonical.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
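    The corrected bound check, in sketch form:

        if (regno >= nr_registers) { /* was: regno > nr_registers */
                WARN_ONCE(1, "decoded an instruction with an invalid register");
                return -EINVAL;
        }
        return regoff[regno];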