author      Sean Christopherson <seanjc@google.com>    2022-11-19 02:34:45 +0100
committer   Paolo Bonzini <pbonzini@redhat.com>        2022-12-02 19:22:32 +0100
commit      7f2b47f22b825c16d9843e6e78bbb2370d2c31a0
tree        00363dd12be1cbf933f7ebf0282f9c9ba3c76671 /tools/include/asm-generic
parent      KVM: arm64: selftests: Enable single-step without a "full" ucall()
tools: Take @bit as an "unsigned long" in {clear,set}_bit() helpers
Take @bit as an unsigned long instead of a signed int in clear_bit() and
set_bit() so that they match the double-underscore versions, __clear_bit()
and __set_bit(). This will allow converting users that really don't want
atomic operations to the double-underscores without introducing a
functional change, which will in turn allow making {clear,set}_bit()
atomic (as advertised).
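For context, here is a minimal standalone sketch (not part of this patch) of what an atomic variant could look like once all callers tolerate "unsigned long nr". It uses the GCC/Clang __atomic builtins; the name set_bit_atomic(), the fallback __BITS_PER_LONG definition, and the relaxed memory ordering are assumptions for illustration, not what this series ultimately adopts:

/*
 * Illustrative sketch only: one possible atomic set_bit() once the
 * signature takes "unsigned long nr".  The helper name, the fallback
 * __BITS_PER_LONG, and the relaxed ordering are assumptions for the
 * example, not taken from this series.
 */
#ifndef __BITS_PER_LONG
#define __BITS_PER_LONG (8 * (int)sizeof(unsigned long))
#endif

static inline void set_bit_atomic(unsigned long nr, unsigned long *addr)
{
	/* Atomically OR the target bit into its containing word. */
	__atomic_fetch_or(&addr[nr / __BITS_PER_LONG],
			  1UL << (nr % __BITS_PER_LONG),
			  __ATOMIC_RELAXED);
}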
Practically speaking, this _should_ have no functional impact. KVM's
selftests usage is either hardcoded (Hyper-V tests) or is artificially
limited (arch_timer test and dirty_log test). In KVM, the dirty_log test is
the only mildly interesting case, as its use is indirectly restricted to
unsigned 32-bit values, but in theory it could generate a negative value
when cast to a signed int. But in that case, taking an "unsigned long"
is actually a bug fix.
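To make the potential bug concrete, here is a standalone sketch (not from the commit); the bit number 0x80000000 is a hypothetical unsigned 32-bit value with the high bit set, not one the selftests are known to generate:

/*
 * Standalone sketch, not from the commit: an unsigned 32-bit bit
 * number with the high bit set wraps negative when squeezed through
 * "int nr", producing a negative (out-of-bounds) word index, whereas
 * "unsigned long nr" indexes the intended word.
 */
#include <stdio.h>

#define __BITS_PER_LONG 64	/* assuming a 64-bit build */

int main(void)
{
	unsigned int bit = 0x80000000u;	/* hypothetical bit number */

	/* Old prototype, set_bit(int nr, ...): nr goes negative. */
	printf("as int:           nr = %d, word index = %d\n",
	       (int)bit, (int)bit / __BITS_PER_LONG);

	/* New prototype, set_bit(unsigned long nr, ...): correct index. */
	printf("as unsigned long: nr = %lu, word index = %lu\n",
	       (unsigned long)bit, (unsigned long)bit / __BITS_PER_LONG);
	return 0;
}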
Perf's usage is more difficult to audit, but any code that is affected
by the switch is likely already broken. perf_header__{set,clear}_feat()
and perf_file_header__read() effectively use only hardcoded enums with
small, positive values; atom_new() passes an unsigned long, but its value
is capped at 128 via NR_ATOM_PER_PAGE, etc...
The only real potential for breakage is in the perf flows that take a
"cpu", but it's unlikely perf is subtly relying on a negative index into
bitmaps, e.g. "cpu" can be "-1", but only as a "not valid" placeholder.
Note, tools/testing/nvdimm/ makes heavy use of set_bit(), but that code
builds into a kernel module of sorts, i.e. it pulls in all of the kernel's
headers and so gets the kernel's atomic set_bit(). The NVDIMM test
usage of atomics is likely unnecessary, e.g. ndtest_dimm_register() sets
bits in a local variable, but that's neither here nor there as far as
this change is concerned.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20221119013450.2643007-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'tools/include/asm-generic')
-rw-r--r--  tools/include/asm-generic/bitops/atomic.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/include/asm-generic/bitops/atomic.h b/tools/include/asm-generic/bitops/atomic.h
index 2f6ea28764a7..f64b049d236c 100644
--- a/tools/include/asm-generic/bitops/atomic.h
+++ b/tools/include/asm-generic/bitops/atomic.h
@@ -5,12 +5,12 @@
 #include <asm/types.h>
 #include <asm/bitsperlong.h>
 
-static inline void set_bit(int nr, unsigned long *addr)
+static inline void set_bit(unsigned long nr, unsigned long *addr)
 {
 	addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG);
 }
 
-static inline void clear_bit(int nr, unsigned long *addr)
+static inline void clear_bit(unsigned long nr, unsigned long *addr)
 {
 	addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG));
 }