author | Linus Torvalds <torvalds@linux-foundation.org> | 2020-06-06 01:26:36 +0200
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2020-06-06 01:26:36 +0200
commit | 9daa0a27a0bce6596be287fb1df372ff80bb1087 (patch)
tree | 0dae8f626d71339d79131a3f664abbe36547c5da /fs/afs/rxrpc.c
parent | Merge tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/t... (diff)
parent | afs: Adjust the fileserver rotation algorithm to reprobe/retry more quickly (diff)
Merge tag 'afs-next-20200604' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
Pull AFS updates from David Howells:
"There's some core VFS changes which affect a couple of filesystems:
- Make the inode hash table RCU safe and provide some RCU-safe
accessor functions. The search can then be done without taking the
inode_hash_lock. Care must be taken because the object may be in the
process of being deleted and no wait is made (see the sketch after
this list).
- Allow iunique() to avoid taking the inode_hash_lock.
- Allow AFS's callback processing to avoid taking the inode_hash_lock
when using the inode table to find an inode to notify.
- Improve Ext4's time updating. Konstantin Khlebnikov said "For now,
I've plugged this issue with try-lock in ext4 lazy time update.
This solution is much better."
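The RCU-safe hash search in the first bullet can be pictured roughly as follows. This is illustrative only: my_find_inode_rcu() and its bucket-head argument are invented for the sketch and are not the accessor the series actually adds; only the pattern matters — walk the chain under rcu_read_lock(), skip inodes that are being freed, never wait.

```c
#include <linux/fs.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>

/* Hypothetical helper: look up an inode in one hash bucket without taking
 * inode_hash_lock.  The caller must hold rcu_read_lock(); hash-chain
 * removal is assumed to use hlist_del_init_rcu(), so the walk is safe.
 */
static struct inode *my_find_inode_rcu(struct hlist_head *bucket,
                                       struct super_block *sb,
                                       unsigned long ino)
{
        struct inode *inode;

        hlist_for_each_entry_rcu(inode, bucket, i_hash) {
                if (inode->i_sb != sb || inode->i_ino != ino)
                        continue;
                /* No waiting is done: an inode that is on its way out is
                 * simply skipped, so the caller must tolerate a miss.
                 */
                if (READ_ONCE(inode->i_state) & (I_WILL_FREE | I_FREEING))
                        continue;
                return inode;
        }
        return NULL;
}
```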
Then there's a set of changes to make a number of improvements to the
AFS driver:
- Improve callback (ie. third party change notification) processing
by:
(a) Relying more on the fact we're doing this under RCU and by
using fewer locks. This makes use of the RCU-based inode
searching outlined above.
(b) Moving to keeping volumes in a tree indexed by volume ID
rather than a flat list (see the volume-lookup sketch after this list).
(c) Making the server and volume records logically part of the
cell. This means that a server record now points directly at
the cell and the tree of volumes is there. This removes an N:M
mapping table, simplifying things.
- Improve keeping NAT or firewall channels open for the server
callbacks to reach the client by actively polling the fileserver on
a timed basis, instead of only doing it when we have an operation
to process (see the keepalive sketch after this list).
- Improve detection of delayed or lost callbacks by including the
parent directory in the list of file IDs to be queried when doing a
bulk status fetch from lookup. We can then check to see if our copy
of the directory has changed under us without us getting notified.
- Determine aliasing of cells (such as a cell that is pointed to by a
DNS alias). This allows us to avoid having ambiguity due to
apparently different cells using the same volume and file servers.
- Improve the fileserver rotation to do more probing when it detects
that all of the addresses to a server are listed as non-responsive.
It's possible that an address that previously stopped responding
has become responsive again.
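For item (b) above, here is a minimal sketch of a volume lookup keyed by volume ID in an rbtree; struct my_volume, its fields and my_volume_lookup() are invented names for illustration, not the AFS driver's real types.

```c
#include <linux/rbtree.h>
#include <linux/types.h>

struct my_volume {
        struct rb_node  cell_node;      /* link in the cell's volume tree */
        u64             vid;            /* volume ID: the search key */
};

/* Hypothetical lookup: an O(log n) descent instead of walking a flat list. */
static struct my_volume *my_volume_lookup(struct rb_root *volumes, u64 vid)
{
        struct rb_node *p = volumes->rb_node;

        while (p) {
                struct my_volume *v = rb_entry(p, struct my_volume, cell_node);

                if (vid < v->vid)
                        p = p->rb_left;
                else if (vid > v->vid)
                        p = p->rb_right;
                else
                        return v;
        }
        return NULL;
}
```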
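For the NAT/firewall keepalive bullet, here is a sketch of timed polling with a delayed work item; the 30-second interval, struct my_server and my_fileserver_probe() are assumptions for illustration, not the driver's actual probe machinery.

```c
#include <linux/workqueue.h>
#include <linux/jiffies.h>

#define MY_PROBE_INTERVAL       (30 * HZ)       /* assumed polling period */

struct my_server {
        struct delayed_work probe_work;
};

static void my_fileserver_probe(struct work_struct *work)
{
        struct my_server *server =
                container_of(work, struct my_server, probe_work.work);

        /* Issue a lightweight RPC to the fileserver here so the reply path
         * (and any NAT/firewall state along it) stays open, then rearm.
         */
        schedule_delayed_work(&server->probe_work, MY_PROBE_INTERVAL);
}

static void my_server_start_probing(struct my_server *server)
{
        INIT_DELAYED_WORK(&server->probe_work, my_fileserver_probe);
        schedule_delayed_work(&server->probe_work, MY_PROBE_INTERVAL);
}
```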
Beyond that, lay some foundations for making some calls asynchronous:
- Turn the fileserver cursor struct into a general operation struct,
hanging the parameters off it rather than keeping them in local
variables, and hanging the results off it rather than off the call
struct.
- Implement some general operation handling code and simplify the
callers of operations that affect a volume or a volume component
(such as a file). Most of the operation is now done by core code.
- Operations are supplied with a table of operations to issue
different variants of RPCs and to manage the completion, where all
the required data is held in the operation object, thereby allowing
these to be called from a workqueue.
- Put the standard "if (begin), while(select), call op, end" sequence
into a canned function that just emulates the current behaviour for
now (a rough sketch of this sequence follows).
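A rough sketch of the operation shape described above, using invented names (my_operation, my_operation_ops, my_do_operation) rather than the real afs_operation types; it only shows how keeping all state in the operation object makes the canned begin/select/issue/end sequence callable from a workqueue later.

```c
#include <linux/types.h>

struct my_operation;

/* Hypothetical method table: one entry per operation type. */
struct my_operation_ops {
        const char *name;
        void (*issue_rpc)(struct my_operation *op);     /* send the RPC variant */
        void (*success)(struct my_operation *op);       /* apply the reply */
        void (*put)(struct my_operation *op);           /* release resources */
};

struct my_operation {
        const struct my_operation_ops *ops;
        int error;
        /* ...all parameters and results live here, not in locals... */
};

/* Assumed helpers standing in for "begin", "select a server" and "end". */
bool my_begin_operation(struct my_operation *op);
bool my_select_fileserver(struct my_operation *op);
int my_end_operation(struct my_operation *op);

/* Canned "if (begin), while (select), call op, end" sequence.  Because
 * everything it needs is in *op, the same function could equally be run
 * from a work item for an asynchronous variant.
 */
int my_do_operation(struct my_operation *op)
{
        if (my_begin_operation(op)) {
                while (my_select_fileserver(op))
                        op->ops->issue_rpc(op);
                if (op->error == 0 && op->ops->success)
                        op->ops->success(op);
        }
        return my_end_operation(op);
}
```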
There are also some fixes interspersed:
- Don't let the EACCES from ICMP6 mapping reach the user as such,
since it's confusing as to whether it's a filesystem error. Convert
it to EHOSTUNREACH.
- Don't use the epoch value acquired through probing a server. If we
have two servers with the same UUID but in different cells, it's
hard to draw conclusions from them having different epoch values.
- Don't interpret the argument to the CB.ProbeUuid RPC as a
fileserver UUID and look up a fileserver from it.
- Deal with servers in different cells having the same UUIDs. In the
event that a CB.InitCallBackState3 RPC is received, we have to
break the callback promises for every server record matching that
UUID.
- Don't let afs_statfs return values that go below 0 (see the clamping
sketch after this list).
- Don't make server-selection and address-selection decisions based on
the running fileserver probe state. Only make decisions based on the
final state, as the running state is cleared at the start of probing"
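For the afs_statfs item, here is a small standalone example of the clamping idea (not the actual fix): the counters are unsigned, so a naive total - used wraps to a huge value if a reply briefly reports used > total.

```c
#include <stdint.h>
#include <stdio.h>

/* Clamp before subtracting so the result can never go "below zero". */
static uint64_t blocks_avail(uint64_t total, uint64_t used)
{
        if (used > total)
                return 0;
        return total - used;
}

int main(void)
{
        /* Without the clamp the first call would wrap to 2^64 - 50. */
        printf("%llu\n", (unsigned long long)blocks_avail(100, 150)); /* 0  */
        printf("%llu\n", (unsigned long long)blocks_avail(100, 40));  /* 60 */
        return 0;
}
```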
Acked-by: Al Viro <viro@zeniv.linux.org.uk> (fs/inode.c part)
* tag 'afs-next-20200604' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs: (27 commits)
afs: Adjust the fileserver rotation algorithm to reprobe/retry more quickly
afs: Show more a bit more server state in /proc/net/afs/servers
afs: Don't use probe running state to make decisions outside probe code
afs: Fix afs_statfs() to not let the values go below zero
afs: Fix the by-UUID server tree to allow servers with the same UUID
afs: Reorganise volume and server trees to be rooted on the cell
afs: Add a tracepoint to track the lifetime of the afs_volume struct
afs: Detect cell aliases 3 - YFS Cells with a canonical cell name op
afs: Detect cell aliases 2 - Cells with no root volumes
afs: Detect cell aliases 1 - Cells with root volumes
afs: Implement client support for the YFSVL.GetCellName RPC op
afs: Retain more of the VLDB record for alias detection
afs: Fix handling of CB.ProbeUuid cache manager op
afs: Don't get epoch from a server because it may be ambiguous
afs: Build an abstraction around an "operation" concept
afs: Rename struct afs_fs_cursor to afs_operation
afs: Remove the error argument from afs_protocol_error()
afs: Set error flag rather than return error from file status decode
afs: Make callback processing more efficient.
afs: Show more information in /proc/net/afs/servers
...
Diffstat (limited to 'fs/afs/rxrpc.c')
-rw-r--r-- | fs/afs/rxrpc.c | 45 |
1 file changed, 26 insertions, 19 deletions
```diff
diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
index e313dae01674..8fc8fb406a5a 100644
--- a/fs/afs/rxrpc.c
+++ b/fs/afs/rxrpc.c
@@ -181,8 +181,7 @@ void afs_put_call(struct afs_call *call)
 		if (call->type->destructor)
 			call->type->destructor(call);
 
-		afs_put_server(call->net, call->server, afs_server_trace_put_call);
-		afs_put_cb_interest(call->net, call->cbi);
+		afs_unuse_server_notime(call->net, call->server, afs_server_trace_put_call);
 		afs_put_addrlist(call->alist);
 		kfree(call->request);
 
@@ -281,18 +280,19 @@ static void afs_load_bvec(struct afs_call *call, struct msghdr *msg,
 			  struct bio_vec *bv, pgoff_t first, pgoff_t last,
 			  unsigned offset)
 {
+	struct afs_operation *op = call->op;
 	struct page *pages[AFS_BVEC_MAX];
 	unsigned int nr, n, i, to, bytes = 0;
 
 	nr = min_t(pgoff_t, last - first + 1, AFS_BVEC_MAX);
-	n = find_get_pages_contig(call->mapping, first, nr, pages);
+	n = find_get_pages_contig(op->store.mapping, first, nr, pages);
 	ASSERTCMP(n, ==, nr);
 
 	msg->msg_flags |= MSG_MORE;
 	for (i = 0; i < nr; i++) {
 		to = PAGE_SIZE;
 		if (first + i >= last) {
-			to = call->last_to;
+			to = op->store.last_to;
 			msg->msg_flags &= ~MSG_MORE;
 		}
 		bv[i].bv_page = pages[i];
@@ -322,13 +322,14 @@ static void afs_notify_end_request_tx(struct sock *sock,
  */
 static int afs_send_pages(struct afs_call *call, struct msghdr *msg)
 {
+	struct afs_operation *op = call->op;
 	struct bio_vec bv[AFS_BVEC_MAX];
 	unsigned int bytes, nr, loop, offset;
-	pgoff_t first = call->first, last = call->last;
+	pgoff_t first = op->store.first, last = op->store.last;
 	int ret;
 
-	offset = call->first_offset;
-	call->first_offset = 0;
+	offset = op->store.first_offset;
+	op->store.first_offset = 0;
 
 	do {
 		afs_load_bvec(call, msg, bv, first, last, offset);
@@ -338,7 +339,7 @@ static int afs_send_pages(struct afs_call *call, struct msghdr *msg)
 		bytes = msg->msg_iter.count;
 		nr = msg->msg_iter.nr_segs;
 
-		ret = rxrpc_kernel_send_data(call->net->socket, call->rxcall, msg,
+		ret = rxrpc_kernel_send_data(op->net->socket, call->rxcall, msg,
 					     bytes, afs_notify_end_request_tx);
 		for (loop = 0; loop < nr; loop++)
 			put_page(bv[loop].bv_page);
@@ -348,7 +349,7 @@ static int afs_send_pages(struct afs_call *call, struct msghdr *msg)
 		first += nr;
 	} while (first <= last);
 
-	trace_afs_sent_pages(call, call->first, last, first, ret);
+	trace_afs_sent_pages(call, op->store.first, last, first, ret);
 	return ret;
 }
 
@@ -383,16 +384,18 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp)
 	 */
 	tx_total_len = call->request_size;
 	if (call->send_pages) {
-		if (call->last == call->first) {
-			tx_total_len += call->last_to - call->first_offset;
+		struct afs_operation *op = call->op;
+
+		if (op->store.last == op->store.first) {
+			tx_total_len += op->store.last_to - op->store.first_offset;
 		} else {
 			/* It looks mathematically like you should be able to
 			 * combine the following lines with the ones above, but
 			 * unsigned arithmetic is fun when it wraps...
 			 */
-			tx_total_len += PAGE_SIZE - call->first_offset;
-			tx_total_len += call->last_to;
-			tx_total_len += (call->last - call->first - 1) * PAGE_SIZE;
+			tx_total_len += PAGE_SIZE - op->store.first_offset;
+			tx_total_len += op->store.last_to;
+			tx_total_len += (op->store.last - op->store.first - 1) * PAGE_SIZE;
 		}
 	}
 
@@ -538,13 +541,15 @@ static void afs_deliver_to_call(struct afs_call *call)
 
 		ret = call->type->deliver(call);
 		state = READ_ONCE(call->state);
+		if (ret == 0 && call->unmarshalling_error)
+			ret = -EBADMSG;
 		switch (ret) {
 		case 0:
 			afs_queue_call_work(call);
 			if (state == AFS_CALL_CL_PROC_REPLY) {
-				if (call->cbi)
+				if (call->op)
 					set_bit(AFS_SERVER_FL_MAY_HAVE_CB,
-						&call->cbi->server->flags);
+						&call->op->server->flags);
 				goto call_complete;
 			}
 			ASSERTCMP(state, >, AFS_CALL_CL_PROC_REPLY);
@@ -957,9 +962,11 @@ int afs_extract_data(struct afs_call *call, bool want_more)
 /*
  * Log protocol error production.
  */
-noinline int afs_protocol_error(struct afs_call *call, int error,
+noinline int afs_protocol_error(struct afs_call *call,
 				enum afs_eproto_cause cause)
 {
-	trace_afs_protocol_error(call, error, cause);
-	return error;
+	trace_afs_protocol_error(call, cause);
+	if (call)
+		call->unmarshalling_error = true;
+	return -EBADMSG;
 }
```
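The last two hunks above wire up call->unmarshalling_error: afs_protocol_error() now latches the flag and returns -EBADMSG, and afs_deliver_to_call() re-checks the flag after a nominally successful delivery. Here is a minimal standalone sketch of that latch-and-check pattern, with my_call, my_protocol_error() and my_deliver() as invented stand-ins:

```c
#include <errno.h>
#include <stdbool.h>

struct my_call {
        bool unmarshalling_error;
};

/* Each decode step reports trouble by latching the flag on the call... */
int my_protocol_error(struct my_call *call)
{
        if (call)
                call->unmarshalling_error = true;
        return -EBADMSG;
}

/* ...and the delivery loop turns an apparently clean decode into an error
 * if any step latched it, so a bad reply cannot be silently accepted.
 */
int my_deliver(struct my_call *call, int (*deliver)(struct my_call *))
{
        int ret = deliver(call);

        if (ret == 0 && call->unmarshalling_error)
                ret = -EBADMSG;
        return ret;
}
```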