path: root/fs/bcachefs/btree_locking.h
Commit message | Author | Age | Files | Lines
* bcachefs: Improve trace_trans_restart_relock (Kent Overstreet, 2024-01-21; 1 file, -8/+1)
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: btree_trans always has stats (Kent Overstreet, 2024-01-06; 1 file, -6/+3)
  Reserve slot 0 for unknown (when we overflow), to avoid some branches.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: kill btree_path.idx (Kent Overstreet, 2024-01-01; 1 file, -1/+0)
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: trans_for_each_path_with_node() no longer uses path->idx (Kent Overstreet, 2024-01-01; 1 file, -1/+2)
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: trans_for_each_path() no longer uses path->idx (Kent Overstreet, 2024-01-01; 1 file, -1/+2)
  path->idx is now a code smell: we should be using path_idx_t, since it's stable across btree path reallocation. This is also a bit faster, using the same loop counter vs. fetching path->idx from each path we iterate over.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
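
A minimal userspace sketch of the stability argument, with invented types (not the bcachefs code): a loop index remains a valid identity across reallocation of the array, where a held pointer or a stored per-element back-index would go stale.

/*
 * Illustrative sketch only: iterating a growable array by index
 * survives reallocation; pointers into the old allocation do not.
 * Error handling is elided for brevity.
 */
#include <stdio.h>
#include <stdlib.h>

struct path {
    int idx;    /* stored back-index: stale if entries ever move */
    int level;
};

struct paths {
    struct path *data;
    size_t nr, capacity;
};

/* Grow the array; any struct path * taken before this call is stale. */
static void paths_push(struct paths *p, struct path v)
{
    if (p->nr == p->capacity) {
        p->capacity = p->capacity ? p->capacity * 2 : 4;
        p->data = realloc(p->data, p->capacity * sizeof(*p->data));
    }
    p->data[p->nr++] = v;
}

int main(void)
{
    struct paths p = { 0 };

    for (int i = 0; i < 5; i++)
        paths_push(&p, (struct path) { .idx = i, .level = i });

    /*
     * Index-based iteration, in the spirit of the patched
     * trans_for_each_path(): the loop counter is the identity, so a
     * reallocation inside the loop body is survivable after
     * re-reading p.data.
     */
    for (size_t i = 0; i < p.nr; i++)
        printf("path %zu level %d\n", i, p.data[i].level);

    free(p.data);
    return 0;
}
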
* bcachefs: Refactor trans->paths_allocated to be standard bitmap (Kent Overstreet, 2024-01-01; 1 file, -1/+1)
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Don't downgrade locks on transaction restart (Kent Overstreet, 2023-11-02; 1 file, -4/+14)
  We should only be downgrading locks on success - otherwise, our transaction restarts won't be getting the correct locks and we'll livelock.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Fix silent enum conversion error (Kent Overstreet, 2023-10-22; 1 file, -1/+1)
  This changes mark_btree_node_locked() to take an enum btree_node_locked_type, not a six_lock_type, since BTREE_NODE_UNLOCKED is -1, which may cause problems converting back and forth to six_lock_type if short enums are in use. With this change, we never store BTREE_NODE_UNLOCKED in a six_lock_type enum.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
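
A small standalone demonstration of the pitfall being fixed, using the enumerator values named in the message above (the surrounding code is invented, not the kernel's): built with -fshort-enums, the -1 sentinel does not survive a round trip through the all-non-negative enum.

/*
 * Sketch of the short-enums pitfall. With -fshort-enums, an enum
 * whose enumerators are all non-negative may be stored in an
 * unsigned char, so round-tripping a -1 sentinel through it
 * silently changes the value.
 */
#include <stdio.h>

enum six_lock_type { SIX_LOCK_read, SIX_LOCK_intent, SIX_LOCK_write };

enum btree_node_locked_type {
    BTREE_NODE_UNLOCKED     = -1,   /* negative: needs a signed type */
    BTREE_NODE_READ_LOCKED  = SIX_LOCK_read,
    BTREE_NODE_INTENT_LOCKED = SIX_LOCK_intent,
};

int main(void)
{
    /* Storing the -1 sentinel in a six_lock_type is the bug: */
    enum six_lock_type t = (enum six_lock_type) BTREE_NODE_UNLOCKED;

    /*
     * With short enums, six_lock_type may be unsigned char, so t
     * becomes 255 rather than -1 and this check is silently false.
     */
    if ((int) t == -1)
        printf("sentinel survived the round trip\n");
    else
        printf("sentinel corrupted: t == %d\n", (int) t);

    return 0;
}
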
* bcachefs: Assorted fixes for clang (Kent Overstreet, 2023-10-22; 1 file, -2/+2)
  clang had a few more warnings about enum conversion, and also didn't like the opts.c initializer.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Assorted sparse fixes (Kent Overstreet, 2023-10-22; 1 file, -3/+3)
  - endianness fixes
  - mark some things static
  - fix a few __percpu annotations
  - fix silent enum conversions
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: bch2_trans_unlock_noassert() (Kent Overstreet, 2023-10-22; 1 file, -0/+2)
  This fixes a spurious assert in the btree node read path.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* six locks: Seq now only incremented on unlock (Kent Overstreet, 2023-10-22; 1 file, -2/+2)
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* six locks: Documentation, renaming (Kent Overstreet, 2023-10-22; 1 file, -2/+2)
  - Expanded and revamped overview documentation in six.h, giving an overview of all features
  - docbook-comments for all external interfaces
  - Rename some functions for simplicity, e.g. six_lock_ip_type() -> six_lock_ip()
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* six locks: Kill six_lock_state union (Kent Overstreet, 2023-10-22; 1 file, -2/+2)
  As suggested by Linus, this drops the six_lock_state union in favor of raw bitmasks.

  On the one hand, bitfields give more type-level structure to the code. However, a significant amount of the code was working with six_lock_state as a u64/atomic64_t, and the conversions from the bitfields to the u64 were deemed a bit too out-there.

  More significantly, because bitfield order is poorly defined (#ifdef __LITTLE_ENDIAN_BITFIELD can be used, but is gross), incrementing the sequence number would overflow into the rest of the bitfield if the compiler didn't put the sequence number at the high end of the word.

  The new code is a bit saner when we're on an architecture without real atomic64_t support - all accesses to lock->state now go through atomic64_*() operations. On architectures with real atomic64_t support, we additionally use atomic bit ops for setting/clearing individual bits.

  Text size: 7467 bytes -> 4649 bytes - compilers still suck at bitfields.

  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
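
A sketch of the technique in plain C11 atomics, with an invented field layout (the real six-lock masks differ): explicit shifts pin each field's position in the word, so the sequence increment is a single atomic add that cannot spill into a neighboring field.

/* A sketch, not the six-lock code: the field layout is hypothetical. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define HELD_INTENT_BIT 0x0000000100000000ULL /* hypothetical flag bit */
#define SEQ_SHIFT       40                    /* hypothetical position */
#define SEQ_MASK        (~0ULL << SEQ_SHIFT)

struct lock_state { _Atomic uint64_t v; };

/* Explicit shifts pin the seq at the high end of the word. */
static uint32_t state_seq(uint64_t v)
{
    return (uint32_t)((v & SEQ_MASK) >> SEQ_SHIFT);
}

static void seq_increment(struct lock_state *l)
{
    /* One atomic add; no #ifdef __LITTLE_ENDIAN_BITFIELD games. */
    atomic_fetch_add_explicit(&l->v, 1ULL << SEQ_SHIFT,
                              memory_order_release);
}

static void set_intent_held(struct lock_state *l)
{
    /* Individual flags become plain atomic bit ops. */
    atomic_fetch_or_explicit(&l->v, HELD_INTENT_BIT,
                             memory_order_acquire);
}

int main(void)
{
    struct lock_state l = { .v = 0 };

    set_intent_held(&l);
    seq_increment(&l);
    seq_increment(&l);

    printf("seq=%u intent_held=%d\n",
           state_seq(atomic_load(&l.v)),
           (atomic_load(&l.v) & HELD_INTENT_BIT) != 0);
    return 0;
}
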
* six locks: Kill six_lock_pcpu_(alloc|free) (Kent Overstreet, 2023-10-22; 1 file, -1/+1)
  six_lock_pcpu_alloc() is an unsafe interface: it's not safe to allocate or free the percpu reader count on an existing lock that's in use; the only safe time to allocate percpu readers is when the lock is first being initialized.

  This patch adds a flags parameter to six_lock_init(), and instead of six_lock_pcpu_free() we now expose six_lock_exit(), which does the same thing but is less likely to be misused.

  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
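
A userspace sketch of the interface shape, with invented names (the real API is six_lock_init()/six_lock_exit() in the kernel; the flag name and internals below are made up): init-time-only configuration becomes a flags argument, and teardown is a paired exit function, so there is no window where an in-use lock's state can be reallocated.

/* Hypothetical analogue of the init-flags + paired-exit pattern. */
#include <stdlib.h>

#define MYLOCK_INIT_PCPU (1U << 0)  /* hypothetical flag */

struct mylock {
    int *pcpu_readers;  /* stand-in for a percpu reader count */
};

static int mylock_init(struct mylock *l, unsigned flags)
{
    l->pcpu_readers = NULL;

    /* Percpu readers can only be requested here, at init time: */
    if (flags & MYLOCK_INIT_PCPU) {
        l->pcpu_readers = calloc(16, sizeof(*l->pcpu_readers));
        if (!l->pcpu_readers)
            return -1;
    }
    return 0;
}

static void mylock_exit(struct mylock *l)
{
    free(l->pcpu_readers);
    l->pcpu_readers = NULL;
}

int main(void)
{
    struct mylock l;

    if (mylock_init(&l, MYLOCK_INIT_PCPU))
        return 1;
    /* ... lock in use; nothing may toggle pcpu_readers now ... */
    mylock_exit(&l);
    return 0;
}
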
* bcachefs: Centralize btree node lock initialization (Kent Overstreet, 2023-10-22; 1 file, -1/+7)
  This fixes some confusion in the lockdep code due to initializing btree node/key cache locks with the same lockdep key, but different names.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Use six_lock_ip() (Kent Overstreet, 2023-10-22; 1 file, -7/+9)
  This uses the new _ip() interface to six locks and hooks it up to btree_path->ip_allocated, when available.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Switch to local_clock() for fastpath time source (Kent Overstreet, 2023-10-22; 1 file, -3/+3)
  local_clock() isn't always completely accurate - e.g. on machines with TSC drift - but ktime_get_ns() overhead is too high, unfortunately.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: bch2_btree_node_relock_notrace() (Kent Overstreet, 2023-10-22; 1 file, -2/+14)
  Most of the node_relock_fail trace events are generated from bch2_btree_path_verify_level(), when debugcheck_iterators is enabled - but we're not interested in these trace events; they don't indicate that we're in a slowpath.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Ensure bch2_btree_node_lock_write_nofail() never fails (Kent Overstreet, 2023-10-22; 1 file, -8/+4)
  In order for bch2_btree_node_lock_write_nofail() to never produce a deadlock, we must ensure we're never holding read locks when using it. Fortunately, it's only used from code paths where any read locks may be safely dropped.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Delete old deadlock avoidance code (Kent Overstreet, 2023-10-22; 1 file, -41/+7)
  This deletes our old lock ordering based deadlock avoidance code.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* bcachefs: Print deadlock cycle in debugfs (Kent Overstreet, 2023-10-22; 1 file, -0/+1)
  In the event that we're not finished debugging the cycle detector, this adds a new file to debugfs that shows what the cycle detector finds, if anything. By comparing this with btree_transactions, which shows held locks for every btree_transaction, we'll be able to determine if it's the cycle detector that's buggy or something else.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Deadlock cycle detector (Kent Overstreet, 2023-10-22; 1 file, -17/+48)
  We've outgrown our own deadlock avoidance strategy.

  The btree iterator API provides an interface where the user doesn't need to concern themselves with lock ordering - different btree iterators can be traversed in any order. Without special care, this will lead to deadlocks.

  Our previous strategy was to define a lock ordering internally, and whenever we attempt to take a lock and trylock() fails, we'd check if the current btree transaction is holding any locks that cause a lock ordering violation. If so, we'd issue a transaction restart, and then bch2_trans_begin() would re-traverse all previously used iterators, but in the correct order.

  That approach had some issues, though.

  - Sometimes we'd issue transaction restarts unnecessarily, when no deadlock would have actually occurred. Lock ordering restarts have become our primary cause of transaction restarts, on some workloads totaling 20% of actual transaction commits.

  - To avoid deadlock or livelock, we'd often have to take intent locks when we only wanted a read lock: with the lock ordering approach, it is actually illegal to hold _any_ read lock while blocking on an intent lock, and this has been causing us unnecessary lock contention.

  - It was getting fragile - the various lock ordering rules are not trivial, and we'd been seeing occasional livelock issues related to this machinery.

  So, since bcachefs is already a relational database masquerading as a filesystem, we're stealing the next traditional database technique and switching to a cycle detector for avoiding deadlocks.

  When we block taking a btree lock, after adding ourselves to the waitlist but before sleeping, we do a DFS of btree transactions waiting on other btree transactions, starting with the current transaction and walking our held locks, and transactions blocking on our held locks.

  If we find a cycle, we emit a transaction restart. Occasionally (e.g. the btree split path) we cannot allow the lock() operation to fail, so if necessary we'll tell another transaction that it has to fail.

  Result: trans_restart_would_deadlock events are reduced by a factor of 10 to 100, and we'll be able to delete a whole bunch of grotty, fragile code.

  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
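
A minimal wait-for-graph sketch of the idea, with invented structures (the real detector does a DFS over all of a transaction's held locks and each lock's possibly multiple holders; this collapses that to a single-waiter chain):

/*
 * Sketch only, not the bcachefs implementation: each transaction may
 * be blocked on one lock, and each lock has one holder. Before
 * sleeping, a blocked transaction walks holder -> blocked-on ->
 * holder...; reaching the starting transaction again means a
 * deadlock cycle, so it restarts instead of sleeping.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct trans;

struct lock {
    struct trans *holder;       /* transaction holding this lock */
};

struct trans {
    struct lock *blocked_on;    /* lock we're waiting for, or NULL */
};

/* Returns true if blocking 'start' would complete a wait-for cycle. */
static bool would_deadlock(struct trans *start)
{
    struct trans *t = start;

    /*
     * Walk the chain of waiters; a real implementation must also
     * handle multiple held locks per transaction, multiple read
     * holders per lock, and concurrent graph updates.
     */
    while (t->blocked_on && t->blocked_on->holder) {
        t = t->blocked_on->holder;
        if (t == start)
            return true;
    }
    return false;
}

int main(void)
{
    struct trans a = { 0 }, b = { 0 };
    struct lock la = { .holder = &a }, lb = { .holder = &b };

    a.blocked_on = &lb;     /* A waits on B's lock... */
    b.blocked_on = &la;     /* ...and B waits on A's: a cycle */

    printf("deadlock: %s\n", would_deadlock(&a) ? "yes" : "no");
    return 0;
}
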
* bcachefs: bch2_btree_path_upgrade() now emits transaction restart (Kent Overstreet, 2023-10-22; 1 file, -6/+13)
  Centralizing the transaction restart/tracepoint in bch2_btree_path_upgrade() lets us improve the tracepoint - now it emits old and new locks_want.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Convert more locking code to btree_bkey_cached_common (Kent Overstreet, 2023-10-22; 1 file, -8/+8)
  Ideally, all the code in btree_locking.c should be converted, but then we'd want to convert btree_path to point to btree_bkey_cached_common too, and then we'd be in for a much bigger cleanup - but a bit of incremental cleanup will still be helpful for the next patches.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: bch2_btree_node_lock_write_nofail() (Kent Overstreet, 2023-10-22; 1 file, -1/+10)
  Taking a write lock will be able to fail, with the new cycle detector - unless we pass it nofail, which is possible but not preferred.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: New locking functions (Kent Overstreet, 2023-10-22; 1 file, -0/+18)
  In the future, with the new deadlock cycle detector, we won't be using bare six_lock_* anymore: lock wait entries will all be embedded in btree_trans, and we will need a btree_trans context whenever locking a btree node.

  This patch plumbs a btree_trans to the few places that need it, and adds two new locking functions:
  - btree_node_lock_nopath, which may fail, returning a transaction restart, and
  - btree_node_lock_nopath_nofail, to be used in places where we know we cannot deadlock (i.e. because we're holding no other locks).

  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* bcachefs: Mark write locks before taking lock (Kent Overstreet, 2023-10-22; 1 file, -2/+7)
  six locks are unfair: while a thread is blocked trying to take a write lock, new read locks will fail. The new deadlock cycle detector makes use of our existing lock tracing, so we need to tell it we're holding a write lock before we take the lock for it to work correctly.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Delete time_stats for lock contended times (Kent Overstreet, 2023-10-22; 1 file, -24/+1)
  Since we've now got time_stats for lock hold times (per btree transaction), we don't need this anymore.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Improve bch2_btree_node_relock() (Kent Overstreet, 2023-10-22; 1 file, -1/+2)
  This moves the IS_ERR_OR_NULL() check to the inline part, since that's a fast path event.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Track held write locks (Kent Overstreet, 2023-10-22; 1 file, -8/+18)
  The upcoming lock cycle detection code will need to know precisely which locks every btree_trans is holding, including write locks - this patch updates btree_node_locked_type to include write locks.
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Switch btree locking code to struct btree_bkey_cached_common (Kent Overstreet, 2023-10-22; 1 file, -12/+17)
  This is just some type safety cleanup.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* bcachefs: Kill nodes_intent_locked (Kent Overstreet, 2023-10-22; 1 file, -19/+7)
  Previously, we used two different bit arrays for tracking held btree node locks. This patch switches to an array of two-bit integers, which will let us track, in a future patch, when we hold a write lock.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
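
An illustrative sketch of the packing, with a hypothetical state encoding (not the bcachefs helpers): two bits per btree level in one integer, which leaves room for a fourth, write-locked state.

/* Sketch only: pack two-bit lock states, one pair per btree level. */
#include <stdint.h>
#include <stdio.h>

enum node_locked {
    NODE_UNLOCKED = 0,
    NODE_READ     = 1,
    NODE_INTENT   = 2,
    NODE_WRITE    = 3,  /* representable once we have two bits */
};

static enum node_locked get_locked(uint32_t locked, unsigned level)
{
    return (locked >> (level * 2)) & 3;
}

static uint32_t set_locked(uint32_t locked, unsigned level,
                           enum node_locked v)
{
    locked &= ~(3U << (level * 2));         /* clear the two bits */
    return locked | ((uint32_t) v << (level * 2));
}

int main(void)
{
    uint32_t locked = 0;

    locked = set_locked(locked, 0, NODE_INTENT);
    locked = set_locked(locked, 3, NODE_READ);

    printf("level 0: %d, level 3: %d\n",
           get_locked(locked, 0), get_locked(locked, 3));
    return 0;
}
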
* bcachefs: Better use of locking helpers (Kent Overstreet, 2023-10-22; 1 file, -22/+28)
  Held btree locks are tracked in btree_path->nodes_locked and btree_path->nodes_intent_locked. Upcoming patches are going to change the representation in struct btree_path, so this patch switches to proper helpers instead of direct access to these fields.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* bcachefs: Reorganize btree_locking.[ch] (Kent Overstreet, 2023-10-22; 1 file, -67/+78)
  Tidy things up a bit before doing more work in this file.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* bcachefs: btree_locking.c (Kent Overstreet, 2023-10-22; 1 file, -0/+41)
  Start to centralize some of the locking code in a new file; more locking code will be moving here in the future.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* bcachefs: Rename lock_held_stats -> btree_transaction_stats (Kent Overstreet, 2023-10-22; 1 file, -9/+21)
  Going to be adding more things to this in the next patch.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Tracepoint improvements (Kent Overstreet, 2023-10-22; 1 file, -0/+8)
  Our types are exported to the tracepoint code, so it's not necessary to break things out individually when passing them to tracepoints - we can also call other functions from TP_fast_assign().
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* bcachefs: BTREE_ITER_NO_NODE -> BCH_ERR codes (Kent Overstreet, 2023-10-22; 1 file, -0/+15)
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* bcachefs: Don't set should_be_locked on paths that aren't locked (Kent Overstreet, 2023-10-22; 1 file, -0/+8)
  It doesn't make any sense to set should_be_locked on btree_paths that aren't locked, and is often a bug - this patch adds assertions and fixes some of those bugs.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* bcachefs: Fix bch2_btree_trans_to_text() (Kent Overstreet, 2023-10-22; 1 file, -1/+1)
  bch2_btree_trans_to_text() is used to print btree_transactions owned by other threads; thus, it needs to be particularly careful. This fixes a null ptr deref caused by racing with the owning thread changing path->l[].b.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* bcachefs: EINTR -> BCH_ERR_transaction_restart (Kent Overstreet, 2023-10-22; 1 file, -18/+20)
  Now that we have error codes, with subtypes, we can switch to our own error code for transaction restarts - and even better, a distinct error code for each transaction restart reason: clearer code and better debugging.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
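
A standalone sketch of the subtyped-error-code pattern, with invented names (the real codes are the BCH_ERR_* family): each specific restart reason records a parent class, so callers can match on "any transaction restart" while tracepoints and logs keep the precise reason.

/* Sketch only: a tiny error-code hierarchy with class matching. */
#include <stdbool.h>
#include <stdio.h>

enum err {
    ERR_transaction_restart,                    /* class */
    ERR_transaction_restart_would_deadlock,     /* subtype */
    ERR_transaction_restart_relock,             /* subtype */
    ERR_nr,
};

static const enum err err_parent[ERR_nr] = {
    [ERR_transaction_restart]                = ERR_transaction_restart,
    [ERR_transaction_restart_would_deadlock] = ERR_transaction_restart,
    [ERR_transaction_restart_relock]         = ERR_transaction_restart,
};

/* Does @e equal @class, or descend from it? */
static bool err_matches(enum err e, enum err class)
{
    while (true) {
        if (e == class)
            return true;
        if (err_parent[e] == e)     /* root of its hierarchy */
            return false;
        e = err_parent[e];
    }
}

int main(void)
{
    enum err e = ERR_transaction_restart_would_deadlock;

    /* Caller only cares about the class; e keeps the exact reason. */
    if (err_matches(e, ERR_transaction_restart))
        printf("restart the transaction (reason %d)\n", e);
    return 0;
}
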
* bcachefs: added lock held time stats (Daniel Hill, 2023-10-22; 1 file, -8/+36)
  We now record the length of time btree locks are held and expose this in debugfs. Enabled via CONFIG_BCACHEFS_LOCK_TIME_STATS.
  Signed-off-by: Daniel Hill <daniel@gluo.nz>
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: lock time stats prep work. (Daniel Hill, 2023-10-22; 1 file, -3/+5)
  We need the caller name and a place to store our results; btree_trans provides this.
  Signed-off-by: Daniel Hill <daniel@gluo.nz>
  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
* bcachefs: Also show when blocked on write locks (Kent Overstreet, 2023-10-22; 1 file, -17/+29)
  This consolidates some of the btree node lock path, so that when we're blocked taking a write lock on a node it shows up in bch2_btree_trans_to_text(), along with intent and read locks.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* Revert "bcachefs: Add more assertions for locking btree iterators out of order"Kent Overstreet2023-10-221-15/+3
| | | | | | | | | Figured out the bug we were chasing, and it had nothing to do with locking btree iterators/paths out of order. This reverts commit ff08733dd298c969aec7c7828095458f73fd5374. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* bcachefs: Add more assertions for locking btree iterators out of order (Kent Overstreet, 2023-10-22; 1 file, -3/+15)
  btree_path_traverse_all() traverses btree iterators in sorted order, and thus shouldn't see transaction restarts due to potential deadlocks - but sometimes we do. This patch adds some more assertions and tracks some more state to help track this down.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* bcachefs: btree_path (Kent Overstreet, 2023-10-22; 1 file, -58/+59)
  This splits btree_iter into two components: btree_iter is now the externally visible component, and it points to a btree_path, which is now reference counted.

  This means we no longer have to clone iterators up front if they might be mutated - btree_path can be shared by multiple iterators, and cloned if an iterator would mutate a shared btree_path.

  This will help us use iterators more efficiently, as well as slimming down the main long lived state in btree_trans, and significantly cleans up the logic for iterator lifetimes.

  Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
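
A userspace sketch of the clone-on-write sharing described above, with invented structures (not the bcachefs types): iterators share one refcounted path, and a clone happens only when mutating a path that is still shared.

/* Sketch only: refcounted path, cloned lazily before mutation. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct path {
    int  ref;
    long pos;   /* stand-in for the real traversal state */
};

struct iter {
    struct path *path;
};

static struct path *path_get(struct path *p) { p->ref++; return p; }

static void path_put(struct path *p)
{
    if (!--p->ref)
        free(p);
}

/* Ensure @it owns its path exclusively before mutating it. */
static void iter_make_mut(struct iter *it)
{
    if (it->path->ref > 1) {
        struct path *clone = malloc(sizeof(*clone));

        memcpy(clone, it->path, sizeof(*clone));
        clone->ref = 1;
        path_put(it->path);
        it->path = clone;
    }
}

int main(void)
{
    struct path *p = malloc(sizeof(*p));
    *p = (struct path) { .ref = 1, .pos = 10 };

    struct iter a = { .path = p };
    struct iter b = { .path = path_get(p) };    /* shared, no copy */

    iter_make_mut(&b);      /* the clone happens here, lazily */
    b.path->pos = 20;

    printf("a.pos=%ld b.pos=%ld\n", a.path->pos, b.path->pos);
    path_put(a.path);
    path_put(b.path);
    return 0;
}
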
* bcachefs: Further reduce iter->trans usage (Kent Overstreet, 2023-10-22; 1 file, -16/+14)
  This is prep work for splitting btree_path out from btree_iter - btree_path will not have a pointer to btree_trans.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
* bcachefs: Reduce iter->trans usage (Kent Overstreet, 2023-10-22; 1 file, -6/+11)
  Disfavoured, and should go away.
  Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>