author    | Andrew Morton <akpm@linux-foundation.org> | 2016-02-03 22:44:12 +0100
committer | Ingo Molnar <mingo@kernel.org>            | 2016-02-09 12:03:25 +0100
commit    | a63f38cc4ccfa076f87fc3d0c276ee62e710f953
tree      | 2c00ddc832f5f814958b54efca84506e4de64108
parent    | Merge branch 'locking/urgent' into locking/core, to pick up fixes
locking/lockdep: Convert hash tables to hlists
Mike said:
: CONFIG_UBSAN_ALIGNMENT breaks the x86-64 kernel with lockdep enabled,
: i.e. a kernel with CONFIG_UBSAN_ALIGNMENT=y fails to load without even
: an error message.
:
: The problem is that the UBSAN callbacks use spinlocks and might be called
: before lockdep is initialized. In particular, this line in the
: reserve_ebda_region() function causes the problem:
:
: lowmem = *(unsigned short *)__va(BIOS_LOWMEM_KILOBYTES);
:
: If I put lockdep_init() before the reserve_ebda_region() call in
: x86_64_start_reservations(), the kernel loads fine.
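To make the failure mode concrete: BIOS_LOWMEM_KILOBYTES is the odd
physical address 0x413, so reading an unsigned short from it is a
misaligned load, and CONFIG_UBSAN_ALIGNMENT instruments exactly such
loads with a runtime callback. A minimal userspace sketch of the
trigger (illustrative only; the buffer and values are made up, build
with "gcc -fsanitize=alignment" to see the UBSAN report):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* Fake BIOS data area; 0x0280 = 640 KiB of low memory. */
            unsigned char bda[4] = { 0x00, 0x80, 0x02, 0x00 };

            /* bda + 1 plays the role of the odd address 0x413: a
             * misaligned uint16_t load, which the alignment
             * sanitizer intercepts with a runtime callback. */
            uint16_t lowmem = *(uint16_t *)(bda + 1);

            printf("lowmem = %u KiB\n", lowmem);
            return 0;
    }

In the kernel, the UBSAN report path takes spinlocks, so that callback
reaches lockdep's not-yet-initialized hash tables and boot fails with
no output, as described above.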
Fix this ordering issue permanently: change lockdep so that it uses hlists
for the hash tables. Unlike a list_head, an hlist_head is already in its
initialized (empty) state when it is all zeroes, so lockdep is ready for
operation immediately upon boot - lockdep_init() need not have run.
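The property being relied on, in a minimal userspace sketch (simplified
stand-ins for the kernel's types, not the real <linux/list.h>
definitions):

    #include <stdio.h>

    /* Simplified stand-ins for the kernel's list types. */
    struct list_head  { struct list_head *next, *prev; };
    struct hlist_node { struct hlist_node *next, **pprev; };
    struct hlist_head { struct hlist_node *first; };

    /* An empty list_head must point at itself, so a zeroed one is
     * invalid until INIT_LIST_HEAD() has run - for lockdep's hash
     * tables, that meant waiting for lockdep_init(). */
    static int list_empty(const struct list_head *h)
    {
            return h->next == h;
    }

    /* An empty hlist_head is simply first == NULL, which static
     * (BSS) zero-initialization provides before any code runs. */
    static int hlist_empty(const struct hlist_head *h)
    {
            return h->first == NULL;
    }

    int main(void)
    {
            static struct list_head  lh;  /* zeroed, like early-boot BSS */
            static struct hlist_head hh;

            printf("zeroed list_head looks empty:  %d\n", list_empty(&lh));   /* 0: broken */
            printf("zeroed hlist_head looks empty: %d\n", hlist_empty(&hh));  /* 1: valid  */
            return 0;
    }

Since lockdep's hash tables are static arrays in BSS, they are all
zeroes from the moment the kernel image is mapped, so the hlist
variants are usable arbitrarily early.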
The patch will also save some memory: an hlist_head is a single pointer,
where a list_head is two.
lockdep_init() and lockdep_initialized can probably be done away with now.
Suggested-by: Mike Krinkin <krinkin.m.u@gmail.com>
Reported-by: Mike Krinkin <krinkin.m.u@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Cc: mm-commits@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>