path: root/net/ceph

Commit message    Author    Age    Files    Lines
* Merge tag 'ceph-for-4.15-rc1' of git://github.com/ceph/ceph-client    Linus Torvalds    2017-11-21    4    -4/+18

    Pull ceph updates from Ilya Dryomov:
    "We have a set of file locking improvements from Zheng, rbd rw/ro state handling code cleanup from myself and some assorted CephFS fixes from Jeff.

    rbd now defaults to single-major=Y, lifting the limit of ~240 rbd images per host for everyone"

    * tag 'ceph-for-4.15-rc1' of git://github.com/ceph/ceph-client:
      rbd: default to single-major device number scheme
      libceph: don't WARN() if user tries to add invalid key
      rbd: set discard_alignment to zero
      ceph: silence sparse endianness warning in encode_caps_cb
      ceph: remove the bump of i_version
      ceph: present consistent fsid, regardless of arch endianness
      ceph: clean up spinlocking and list handling around cleanup_cap_releases()
      rbd: get rid of rbd_mapping::read_only
      rbd: fix and simplify rbd_ioctl_set_ro()
      ceph: remove unused and redundant variable dropping
      ceph: mark expected switch fall-throughs
      ceph: -EINVAL on decoding failure in ceph_mdsc_handle_fsmap()
      ceph: disable cached readdir after dropping positive dentry
      ceph: fix bool initialization/comparison
      ceph: handle 'session get evicted while there are file locks'
      ceph: optimize flock encoding during reconnect
      ceph: make lock_to_ceph_filelock() static
      ceph: keep auth cap when inode has flocks or posix locks
* libceph: don't WARN() if user tries to add invalid key    Eric Biggers    2017-11-13    1    -1/+3

    The WARN_ON(!key->len) in set_secret() in net/ceph/crypto.c is hit if a user tries to add a key of type "ceph" with an invalid payload as follows (assuming CONFIG_CEPH_LIB=y):

        echo -e -n '\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' \
          | keyctl padd ceph desc @s

    This can be hit by fuzzers. As this is merely bad input and not a kernel bug, replace the WARN_ON() with return -EINVAL.

    Fixes: 7af3ea189a9a ("libceph: stop allocating a new cipher on every crypto request")
    Cc: <stable@vger.kernel.org> # v4.10+
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
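    Below is a minimal userspace C sketch of the pattern this commit describes: treat a zero-length key as bad input and return -EINVAL instead of triggering a warning. The struct and function names are invented for illustration and are not the libceph code.

        #include <errno.h>
        #include <stdio.h>

        struct demo_key {               /* stand-in for the real key payload */
                unsigned int len;
                const void *data;
        };

        static int set_secret_sketch(const struct demo_key *key)
        {
                if (key->len == 0)      /* bad user input, not a kernel bug */
                        return -EINVAL;
                /* ... set up the cipher from key->data ... */
                return 0;
        }

        int main(void)
        {
                struct demo_key empty = { .len = 0, .data = NULL };

                printf("empty key -> %d (expected %d)\n",
                       set_secret_sketch(&empty), -EINVAL);
                return 0;
        }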
* ceph: mark expected switch fall-throughs    Gustavo A. R. Silva    2017-11-13    3    -3/+15

    In preparation to enabling -Wimplicit-fallthrough, mark switch cases where we are expecting to fall through.

    Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
    [idryomov@gmail.com: amended "Older OSDs" comment]
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
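    For illustration only, a standalone C example of how a deliberate fall-through can be annotated so that -Wimplicit-fallthrough stays quiet; the kernel used comment-style markers like the one below at the time of this commit (the switch statement itself is made up):

        #include <stdio.h>

        static const char *state_name(int state)
        {
                switch (state) {
                case 0:
                        printf("initializing\n");
                        /* fall through */
                case 1:
                        return "starting";
                case 2:
                        return "running";
                default:
                        return "unknown";
                }
        }

        int main(void)
        {
                printf("%s\n", state_name(0));
                return 0;
        }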
* Merge branch 'work.get_user_pages_fast' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs    Linus Torvalds    2017-11-17    1    -2/+2

    Pull get_user_pages_fast() conversion from Al Viro:
    "A bunch of places switched to get_user_pages_fast()"

    * 'work.get_user_pages_fast' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
      ceph: use get_user_pages_fast()
      pvr2fs: use get_user_pages_fast()
      atomisp: use get_user_pages_fast()
      st: use get_user_pages_fast()
      via_dmablit(): use get_user_pages_fast()
      fsl_hypervisor: switch to get_user_pages_fast()
      rapidio: switch to get_user_pages_fast()
      vchiq_2835_arm: switch to get_user_pages_fast()
* ceph: use get_user_pages_fast()    Al Viro    2017-09-23    1    -2/+2

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* License cleanup: add SPDX GPL-2.0 license identifier to files with no license    Greg Kroah-Hartman    2017-11-02    25    -0/+25

    Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boiler plate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of the use cases:
     - file had no licensing information it it.
     - file was a */uapi/* one with no licensing information in it,
     - file was a */uapi/* one with existing licensing information,

    Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier to be applied to a file was done in a spreadsheet of side by side results from of the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few 1000 files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files assessed. Kate Stewart did a file by file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to be applied to the file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging was:
     - Files considered eligible had to be source code files.
     - Make and config files were included as candidates if they contained >5 lines of source
     - File already had some variant of a license header in it (even if <5 lines).

    All documentation files were explicitly excluded.

    The following heuristics were used to determine which SPDX license identifiers to apply.

     - when both scanners couldn't find any license traces, file was considered to have no license information in it, and the top level COPYING file license applied.

       For non */uapi/* files that summary was:

            SPDX license identifier                             # files
            ----------------------------------------------------|------
            GPL-2.0                                               11139

       and resulted in the first patch in this series.

       If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:

            SPDX license identifier                             # files
            ----------------------------------------------------|------
            GPL-2.0 WITH Linux-syscall-note                         930

       and resulted in the second patch in this series.

     - if a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or had no licensing in it (per prior point). Results summary:

            SPDX license identifier                             # files
            ----------------------------------------------------|------
            GPL-2.0 WITH Linux-syscall-note                         270
            GPL-2.0+ WITH Linux-syscall-note                        169
            ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
            ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
            LGPL-2.1+ WITH Linux-syscall-note                        15
            GPL-1.0+ WITH Linux-syscall-note                         14
            ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
            LGPL-2.0+ WITH Linux-syscall-note                         4
            LGPL-2.1 WITH Linux-syscall-note                          3
            ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
            ((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1

       and that resulted in the third patch in this series.

     - when the two scanners agreed on the detected license(s), that became the concluded license(s).

     - when there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred.

     - In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics).

     - When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation.

     - If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time.

    In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation. Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there was new insights. The Windriver scanner is based on an older version of FOSSology in part, so they are related.

    Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files.

    In initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier.

    Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with:
     - a full scancode scan run, collecting the matched texts, detected license ids and scores
     - reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct
     - reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct

    This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified.

    These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to the file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types.) Finally Greg ran the script using the .csv files to generate the patches.

    Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
    Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
    Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
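    The different comment types mentioned above refer to the kernel's SPDX placement convention, roughly as in this illustrative example (file names hypothetical):

        // SPDX-License-Identifier: GPL-2.0        <- first line of a .c source file
        /* SPDX-License-Identifier: GPL-2.0 */     <- first line of a .h header file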
* libceph: don't allow bidirectional swap of pg-upmap-items    Ilya Dryomov    2017-09-19    1    -10/+25

    This reverts most of commit f53b7665c8ce ("libceph: upmap semantic changes").

    We need to prevent duplicates in the final result. For example, we can currently take [1,2,3] and apply [(1,2)] and get [2,2,3], or take [1,2,3] and apply [(3,2)] and get [1,2,2]. The rest of the system is not prepared to handle duplicates in the result set like this.

    The reverted piece was intended to allow [1,2,3] and [(1,2),(2,1)] to get [2,1,3], to reorder primaries. First, this bidirectional swap is hard to implement in a way that also prevents dups. For example, [1,2,3] and [(1,4),(2,3),(3,4)] would give [4,3,4], but if we just dropped the last step we'd have [4,3,3], which is also invalid, etc. Simpler to just not handle bidirectional swaps. In practice, they are not needed: if you just want to choose a different primary then use primary_affinity, or pg_upmap (not pg_upmap_items).

    Cc: stable@vger.kernel.org # 4.13
    Link: http://tracker.ceph.com/issues/21410
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    Reviewed-by: Sage Weil <sage@redhat.com>
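    A hypothetical standalone C sketch of the constraint described above: a (from, to) pg-upmap-item is skipped when "to" is already present in the raw OSD set, so the result never contains duplicates. Names and control flow are simplified; this is not the kernel implementation.

        #include <stdbool.h>
        #include <stdio.h>

        static bool contains(const int *osds, int n, int osd)
        {
                for (int i = 0; i < n; i++)
                        if (osds[i] == osd)
                                return true;
                return false;
        }

        static void apply_upmap_items(int *osds, int n,
                                      const int (*items)[2], int nitems)
        {
                for (int i = 0; i < nitems; i++) {
                        int from = items[i][0], to = items[i][1];

                        if (contains(osds, n, to))      /* would create a duplicate */
                                continue;
                        for (int j = 0; j < n; j++)
                                if (osds[j] == from)
                                        osds[j] = to;
                }
        }

        int main(void)
        {
                int osds[] = { 1, 2, 3 };
                const int items[][2] = { { 1, 2 } };    /* would yield [2,2,3]; skipped */

                apply_upmap_items(osds, 3, items, 1);
                printf("%d %d %d\n", osds[0], osds[1], osds[2]);        /* 1 2 3 */
                return 0;
        }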
* ceph: more accurate statfs    Douglas Fuller    2017-09-06    1    -1/+5

    Improve accuracy of statfs reporting for Ceph filesystems comprising exactly one data pool. In this case, the Ceph monitor can now report the space usage for the single data pool instead of the global data for the entire Ceph cluster. Include support for this message in mon_client and leverage it in ceph/super.

    Signed-off-by: Douglas Fuller <dfuller@redhat.com>
    Reviewed-by: Yan, Zheng <zyan@redhat.com>
    Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* ceph: nuke startsync op    Yanhu Cao    2017-09-06    1    -5/+0

    startsync is a no-op, has been for years. Remove it.

    Link: http://tracker.ceph.com/issues/20604
    Signed-off-by: Yanhu Cao <gmayyyha@gmail.com>
    Reviewed-by: "Yan, Zheng" <zyan@redhat.com>
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: make RECOVERY_DELETES feature create a new interval    Ilya Dryomov    2017-08-01    2    -1/+9

    This is needed so that the OSDs can regenerate the missing set at the start of a new interval where support for recovery deletes changed.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    Reviewed-by: Sage Weil <sage@redhat.com>
* libceph: upmap semantic changes    Ilya Dryomov    2017-08-01    1    -28/+11

    - apply both pg_upmap and pg_upmap_items
    - allow bidirectional swap of pg-upmap-items

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    Reviewed-by: Sage Weil <sage@redhat.com>
* crush: assume weight_set != null implies weight_set_size > 0    Ilya Dryomov    2017-08-01    2    -1/+5

    Reflects ceph.git commit 5e8fa3e06b68fae1582c9230a3a8d1abc6146286.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    Reviewed-by: Sage Weil <sage@redhat.com>
* libceph: fallback for when there isn't a pool-specific choose_arg    Ilya Dryomov    2017-08-01    1    -1/+11

    There is now a fallback to a choose_arg index of -1 if there isn't a pool-specific choose_arg set. If you create a per-pool weight-set, that works for that pool. Otherwise we try the compat/default one. If that doesn't exist either, then we use the normal CRUSH weights.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    Reviewed-by: Sage Weil <sage@redhat.com>
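    An illustrative C sketch of the lookup order described above; the tiny lookup table and labels are made up, and only the fallback sequence (pool id, then the -1 "compat" slot, then plain CRUSH weights) mirrors the commit text.

        #include <stddef.h>
        #include <stdio.h>

        struct choose_arg_map { long long pool; const char *label; };

        static const struct choose_arg_map maps[] = {
                { .pool = -1, .label = "compat/default weight-set" },
                { .pool =  3, .label = "per-pool weight-set for pool 3" },
        };

        static const char *lookup(long long pool)
        {
                for (size_t i = 0; i < sizeof(maps) / sizeof(maps[0]); i++)
                        if (maps[i].pool == pool)
                                return maps[i].label;
                return NULL;
        }

        static const char *choose_arg_for(long long pool)
        {
                const char *m = lookup(pool);

                if (!m)
                        m = lookup(-1);         /* fall back to the -1 "compat" index */
                return m ? m : "normal CRUSH weights";
        }

        int main(void)
        {
                printf("pool 3: %s\n", choose_arg_for(3));
                printf("pool 7: %s\n", choose_arg_for(7));
                return 0;
        }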
* libceph: don't call ->reencode_message() more than once per message    Ilya Dryomov    2017-08-01    1    -3/+3

    Reencoding an already reencoded message is a bad idea. This could happen on Policy::stateful_server connections (!CEPH_MSG_CONNECT_LOSSY), such as MDS sessions.

    This didn't pop up in testing because currently only OSD requests are reencoded and OSD sessions are always lossy.

    Fixes: 98ad5ebd1505 ("libceph: ceph_connection_operations::reencode_message() method")
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    Reviewed-by: "Yan, Zheng" <zyan@redhat.com>
* libceph: make encode_request_*() work with r_mempool requests    Ilya Dryomov    2017-08-01    1    -3/+6

    Messages allocated out of ceph_msgpool have a fixed front length (pool->front_len). Asserting that the entire front has been filled while encoding is thus wrong.

    Fixes: 8cb441c0545d ("libceph: MOSDOp v8 encoding (actual spgid + full hash)")
    Reported-by: "Yan, Zheng" <zyan@redhat.com>
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    Reviewed-by: "Yan, Zheng" <zyan@redhat.com>
* libceph: potential NULL dereference in ceph_msg_data_create()    Dan Carpenter    2017-07-17    1    -2/+4

    If kmem_cache_zalloc() returns NULL then the INIT_LIST_HEAD(&data->links); will Oops. The callers aren't really prepared for NULL returns so it doesn't make a lot of difference in real life.

    Fixes: 5240d9f95dfe ("libceph: replace message data pointer with list")
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
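    A userspace C sketch of the fix described above: check the allocation before initializing the embedded list head. The structures are simplified stand-ins, not the libceph definitions.

        #include <stdio.h>
        #include <stdlib.h>

        struct list_head { struct list_head *next, *prev; };
        struct msg_data { struct list_head links; int type; };

        static struct msg_data *msg_data_create(int type)
        {
                struct msg_data *data = calloc(1, sizeof(*data));

                if (!data)      /* previously the list head was initialized unconditionally */
                        return NULL;
                data->links.next = data->links.prev = &data->links;
                data->type = type;
                return data;
        }

        int main(void)
        {
                struct msg_data *d = msg_data_create(1);

                printf("%s\n", d ? "allocated" : "allocation failed");
                free(d);
                return 0;
        }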
* libceph: don't call encode_request_finish() on MOSDBackoff messages    Ilya Dryomov    2017-07-17    1    -1/+4

    encode_request_finish() is for MOSDOp messages. Calling it on MOSDBackoff ack-block messages corrupts them.

    Fixes: a02a946dfe96 ("libceph: respect RADOS_BACKOFF backoffs")
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: use alloc_pg_mapping() in __decode_pg_upmap_items()    Ilya Dryomov    2017-07-17    1    -1/+1

    ... otherwise we die in insert_pg_mapping(), which wants pg->node to be empty, i.e. initialized with RB_CLEAR_NODE.

    Fixes: 6f428df47dae ("libceph: pg_upmap[_items] infrastructure")
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: set -EINVAL in one place in crush_decode()    Ilya Dryomov    2017-07-17    1    -11/+12

    No sooner had Dan fixed this issue in commit 293dffaad8d5 ("libceph: NULL deref on crush_decode() error path") than I brought it back. Add a new label and set -EINVAL once, right before failing.

    Fixes: 278b1d709c6a ("libceph: ceph_decode_skip_* helpers")
    Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: NULL deref on osdmap_apply_incremental() error path    Dan Carpenter    2017-07-17    1    -3/+3

    There are hidden gotos in the ceph_decode_* macros. We need to set the "err" variable on these error paths otherwise we end up returning ERR_PTR(0) which is NULL. It causes NULL dereferences in the callers.

    Fixes: 6f428df47dae ("libceph: pg_upmap[_items] infrastructure")
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    [idryomov@gmail.com: similar bug in osdmap_decode(), changelog tweak]
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
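    A minimal C sketch of the pattern this fixes, under the assumption stated in the commit text: the decode helpers jump to an error label on short input, so "err" must already hold an error code or the caller ends up returning ERR_PTR(0), i.e. NULL. The macro and names below are simplified stand-ins, not the ceph_decode_* helpers themselves.

        #include <errno.h>
        #include <stdio.h>

        #define decode_need(have, need)                         \
                do {                                            \
                        if ((have) < (need))                    \
                                goto e_inval;   /* hidden goto */ \
                } while (0)

        static int decode_something(int avail)
        {
                int err = -EINVAL;      /* set before the hidden goto can fire */

                decode_need(avail, 8);
                return 0;

        e_inval:
                return err;
        }

        int main(void)
        {
                printf("short buffer -> %d\n", decode_something(4));
                return 0;
        }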
* Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random    Linus Torvalds    2017-07-15    1    -1/+5

    Pull random updates from Ted Ts'o:
    "Add wait_for_random_bytes() and get_random_*_wait() functions so that callers can more safely get random bytes if they can block until the CRNG is initialized.

    Also print a warning if get_random_*() is called before the CRNG is initialized. By default, only one single-line warning will be printed per boot. If CONFIG_WARN_ALL_UNSEEDED_RANDOM is defined, then a warning will be printed for each function which tries to get random bytes before the CRNG is initialized. This can get spammy for certain architecture types, so it is not enabled by default"

    * tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
      random: reorder READ_ONCE() in get_random_uXX
      random: suppress spammy warnings about unseeded randomness
      random: warn when kernel uses unseeded randomness
      net/route: use get_random_int for random counter
      net/neighbor: use get_random_u32 for 32-bit hash random
      rhashtable: use get_random_u32 for hash_rnd
      ceph: ensure RNG is seeded before using
      iscsi: ensure RNG is seeded before use
      cifs: use get_random_u32 for 32-bit lock random
      random: add get_random_{bytes,u32,u64,int,long,once}_wait family
      random: add wait_for_random_bytes() API
* ceph: ensure RNG is seeded before using    Jason A. Donenfeld    2017-06-20    1    -1/+5

    Ceph uses the RNG for various nonce generations, and it shouldn't accept using bad randomness. So, we wait for the RNG to be properly seeded. We do this by calling wait_for_random_bytes() in a function that is certainly called in process context, early on, so that all subsequent calls to get_random_bytes are necessarily acceptable.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    Cc: Ilya Dryomov <idryomov@gmail.com>
    Cc: "Yan, Zheng" <zyan@redhat.com>
    Cc: Sage Weil <sage@redhat.com>
    Signed-off-by: Theodore Ts'o <tytso@mit.edu>
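    A rough userspace analogue of the same idea (not the kernel patch itself, which calls wait_for_random_bytes()): getrandom(2) with flags set to 0 blocks until the kernel RNG has been initialized, so the nonce below is never produced from an unseeded pool. Linux-specific, glibc 2.25+.

        #include <stdio.h>
        #include <sys/random.h>
        #include <sys/types.h>

        int main(void)
        {
                unsigned char nonce[16];

                /* flags == 0: blocks until the kernel RNG is initialized */
                if (getrandom(nonce, sizeof(nonce), 0) != (ssize_t)sizeof(nonce)) {
                        perror("getrandom");
                        return 1;
                }
                for (size_t i = 0; i < sizeof(nonce); i++)
                        printf("%02x", nonce[i]);
                printf("\n");
                return 0;
        }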
* libceph: osd_state is 32 bits wide in luminous    Ilya Dryomov    2017-07-07    2    -10/+21

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* crush: remove an obsolete comment    Ilya Dryomov    2017-07-07    1    -5/+0

    Reflects ceph.git commit dca1ae1e0a6b02029c3a7f9dec4114972be26d50.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* crush: crush_init_workspace starts with struct crush_work    Ilya Dryomov    2017-07-07    1    -1/+1

    It is not just a pointer to crush_work, it is the whole structure. That is not a problem since it only contains a pointer. But it will be a problem if new data members are added to crush_work.

    Reflects ceph.git commit ee957dd431bfbeb6dadaf77764db8e0757417328.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph, crush: per-pool crush_choose_arg_map for crush_do_rule()    Ilya Dryomov    2017-07-07    2    -3/+200

    If there is no crush_choose_arg_map for a given pool, a NULL pointer is passed to preserve existing crush_do_rule() behavior.

    Reflects ceph.git commits 55fb91d64071552ea1bc65ab4ea84d3c8b73ab4b, dbe36e08be00c6519a8c89718dd47b0219c20516.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* crush: implement weight and id overrides for straw2    Ilya Dryomov    2017-07-07    2    -19/+57

    bucket_straw2_choose needs to use weights that may be different from weight_items. For instance to compensate for an uneven distribution caused by a low number of values. Or to fix the probability bias introduced by conditional probabilities (see http://tracker.ceph.com/issues/15653 for more information).

    We introduce a weight_set for each straw2 bucket to set the desired weight for a given item at a given position. The weight of a given item when picking the first replica (first position) may be different from the weight used for the second replica (second position). For instance the weight matrix for a given bucket containing items 3, 7 and 13 could be as follows:

                  position 0    position 1
        item  3     0x10000      0x100000
        item  7     0x40000       0x10000
        item 13     0x40000       0x10000

    When crush_do_rule picks the first of two replicas (position 0), items 7 and 3 are four times more likely to be chosen by bucket_straw2_choose than item 13. When choosing the second replica (position 1), item 3 is ten times more likely to be chosen than items 7 and 13.

    By default the weight_set of each bucket exactly matches the content of item_weights for each position to ensure backward compatibility.

    bucket_straw2_choose compares items by using their id. The same ids are also used to index buckets and they must be unique. For each item in a bucket an array of ids can be provided for placement purposes and they are used instead of the ids. If no replacement ids are provided, the legacy behavior is preserved.

    Reflects ceph.git commit 19537a450fd5c5a0bb8b7830947507a76db2ceca.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
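    A simplified C sketch of the lookup this introduces (data and names invented; not the crush mapper code): each bucket can carry a per-position weight_set, and when it is absent the original item_weights are used, matching the backward-compatibility note above. Item indexes 0, 1, 2 stand for items 3, 7, 13 from the example matrix.

        #include <stdio.h>

        #define NUM_ITEMS     3
        #define NUM_POSITIONS 2

        static const int item_weights[NUM_ITEMS] = { 0x10000, 0x40000, 0x40000 };

        /* weight_set[position][item], mirroring the matrix above */
        static const int weight_set[NUM_POSITIONS][NUM_ITEMS] = {
                { 0x10000,  0x40000, 0x40000 },   /* position 0 */
                { 0x100000, 0x10000, 0x10000 },   /* position 1 */
        };

        static int weight_for(int item, int position, int have_weight_set)
        {
                if (have_weight_set)
                        return weight_set[position][item];
                return item_weights[item];        /* legacy behavior */
        }

        int main(void)
        {
                printf("item index 0, position 1:     0x%x\n", weight_for(0, 1, 1));
                printf("item index 0, no weight_set:  0x%x\n", weight_for(0, 1, 0));
                return 0;
        }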
* libceph: apply_upmap()    Ilya Dryomov    2017-07-07    1    -2/+95

    Previously, pg_to_raw_osds() didn't filter for existent OSDs because raw_to_up_osds() would filter for "up" ("up" is predicated on "exists") and raw_to_up_osds() was called directly after pg_to_raw_osds(). Now, with the apply_upmap() call in there, nonexistent OSDs in pg_to_raw_osds() output can affect apply_upmap(). Introduce remove_nonexistent_osds() to deal with that.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: compute actual pgid in ceph_pg_to_up_acting_osds()    Ilya Dryomov    2017-07-07    1    -6/+6

    Move raw_pg_to_pg() call out of get_temp_osds() and into ceph_pg_to_up_acting_osds(), for upcoming apply_upmap().

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: pg_upmap[_items] infrastructure    Ilya Dryomov    2017-07-07    2    -3/+155

    pg_temp and pg_upmap encodings are the same (PG -> array of osds), except for the incremental remove: it's an empty mapping in new_pg_temp for pg_temp and a separate old_pg_upmap set for pg_upmap. (This isn't to allow for empty pg_upmap mappings -- apparently, pg_temp just wasn't looked at as an example for pg_upmap encoding.)

    Reuse __decode_pg_temp() for decoding pg_upmap and new_pg_upmap. __decode_pg_temp() stores into pg_temp union member, but since pg_upmap union member is identical, reading through pg_upmap later is OK.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: ceph_decode_skip_* helpers    Ilya Dryomov    2017-07-07    1    -22/+3

    Some of these won't be as efficient as they could be (e.g. ceph_decode_skip_set(... 32 ...) could advance by len * sizeof(u32) once instead of advancing by sizeof(u32) len times), but that's fine and not worth a bunch of extra macro code.

    Replace skip_name_map() with ceph_decode_skip_map as an example.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
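    As a rough illustration of the efficiency remark above (plain standalone C, not the actual macros): skipping a counted set of fixed-size little-endian elements can advance the cursor once by len * sizeof(u32) instead of once per element.

        #include <stdint.h>
        #include <stdio.h>

        /* Skip a "set": a 32-bit count followed by count 32-bit values.
         * Returns the number of bytes consumed, or 0 if the buffer is short. */
        static size_t skip_set_u32(const uint8_t *p, size_t avail)
        {
                uint32_t count;

                if (avail < sizeof(count))
                        return 0;
                count = (uint32_t)p[0] | (uint32_t)p[1] << 8 |
                        (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
                if (avail - sizeof(count) < (size_t)count * sizeof(uint32_t))
                        return 0;
                /* one jump of count * sizeof(u32), not count small jumps */
                return sizeof(count) + (size_t)count * sizeof(uint32_t);
        }

        int main(void)
        {
                const uint8_t buf[] = { 2, 0, 0, 0,     /* count = 2 */
                                        1, 0, 0, 0,  2, 0, 0, 0 };

                printf("skipped %zu bytes\n", skip_set_u32(buf, sizeof(buf)));
                return 0;
        }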
* libceph: kill __{insert,lookup,remove}_pg_mapping()    Ilya Dryomov    2017-07-07    1    -72/+15

    Switch to DEFINE_RB_FUNCS2-generated {insert,lookup,erase}_pg_mapping().

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: introduce and switch to decode_pg_mapping()    Ilya Dryomov    2017-07-07    1    -67/+83

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: don't pass pgid by value    Ilya Dryomov    2017-07-07    1    -10/+10

    Make __{lookup,remove}_pg_mapping() look like their ceph_spg_mapping counterparts: take const struct ceph_pg *.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: respect RADOS_BACKOFF backoffs    Ilya Dryomov    2017-07-07    4    -0/+684

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: avoid unnecessary pi lookups in calc_target()    Ilya Dryomov    2017-07-07    2    -28/+34

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: use target pi for calc_target() calculations    Ilya Dryomov    2017-07-07    1    -1/+8

    For luminous and beyond we are encoding the actual spgid, which requires operating with the correct pg_num, i.e. that of the target pool.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: always populate t->target_{oid,oloc} in calc_target()    Ilya Dryomov    2017-07-07    1    -11/+4

    need_check_tiering logic doesn't make a whole lot of sense. Drop it and apply tiering unconditionally on every calc_target() call instead.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: make sure need_resend targets reflect latest map    Ilya Dryomov    2017-07-07    2    -9/+26

    Otherwise we may miss events like PG splits, pool deletions, etc when we get multiple incremental maps at once. Because check_pool_dne() can now be fed an unlinked request, finish_request() needed to be taught to handle unlinked requests.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: delete from need_resend_linger before check_linger_pool_dne()    Ilya Dryomov    2017-07-07    1    -0/+1

    When processing a map update consisting of multiple incrementals, we may end up running check_linger_pool_dne() on a lingering request that was previously added to need_resend_linger list. If it is concluded that the target pool doesn't exist, the request is killed off while still on need_resend_linger list, which leads to a crash on a NULL lreq->osd in kick_requests():

        libceph: linger_id 18446462598732840961 pool does not exist
        BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
        IP: ceph_osdc_handle_map+0x4ae/0x870

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: resend on PG splits if OSD has RESEND_ON_SPLIT    Ilya Dryomov    2017-07-07    2    -11/+17

    Note that ceph_osd_request_target fields are updated regardless of RESEND_ON_SPLIT.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: drop need_resend from calc_target()    Ilya Dryomov    2017-07-07    1    -7/+11

    Replace it with more fine-grained bools to separate updating ceph_osd_request_target fields and the decision to resend.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: MOSDOp v8 encoding (actual spgid + full hash)    Ilya Dryomov    2017-07-07    1    -19/+134

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: ceph_connection_operations::reencode_message() method    Ilya Dryomov    2017-07-07    1    -2/+5

    Give upper layers a chance to reencode the message after the connection is negotiated and ->peer_features is set. OSD client will use this to support both luminous and pre-luminous OSDs (in a single cluster): the former need MOSDOp v8; the latter will continue to be sent MOSDOp v4.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: encode_{pgid,oloc}() helpers    Ilya Dryomov    2017-07-07    1    -23/+27

    Factor out encode_{pgid,oloc}() and use ceph_encode_string() for oid.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: introduce ceph_spg, ceph_pg_to_primary_shard()    Ilya Dryomov    2017-07-07    3    -3/+48

    Store both raw pgid and actual spgid in ceph_osd_request_target.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: new pi->last_force_request_resend    Ilya Dryomov    2017-07-07    1    -0/+37

    The old (v15) pi->last_force_request_resend has been repurposed to make pre-RESEND_ON_SPLIT clients that don't check for PG splits but do obey pi->last_force_request_resend resend on splits. See ceph.git commit 189ca7ec6420 ("mon/OSDMonitor: make pre-luminous clients resend ops on split").

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: fold [l]req->last_force_resend into ceph_osd_request_target    Ilya Dryomov    2017-07-07    1    -11/+10

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: support SERVER_JEWEL feature bits    Ilya Dryomov    2017-07-07    1    -1/+7

    Only MON_STATEFUL_SUB, really. MON_ROUTE_OSDMAP and OSDSUBOP_NO_SNAPCONTEXT are irrelevant.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* libceph: handle non-empty dest in ceph_{oloc,oid}_copy()    Ilya Dryomov    2017-07-07    1    -4/+6

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>