Opening a verity device is an expensive operation. The kernelspace
operations are mostly sequential, with a global lock held regardless of
which device is being opened. In userspace, jumps in and out of multiple
libraries are required, and when signatures are used there are
additional cryptographic checks.
We know when two devices are identical: they have the same root hash.
If libcryptsetup returns EEXIST, double-check that the hashes are really
the same, and that either both or neither have a signature, and if
everything matches simply remount the already-open device. The kernel
will do reference counting for us.
In order to quickly and reliably discover whether a device is already
open, change the node naming scheme from '/dev/mapper/major:minor-verity'
to '/dev/mapper/$roothash-verity'.
Unfortunately libdevmapper is not 100% reliable, so in some cases it
will say that the device already exists and is active, when in reality
it is not usable. Fall back to an individually-activated unique device
name in those cases, for robustness.
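
As an illustration of the lookup the new naming scheme enables — a
minimal sketch, not the systemd implementation; the helper names and
buffer sizes below are made up — deriving the node name from the root
hash turns "is this device already open?" into a simple path check:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical helper: render a root hash as lowercase hex.
     * Assumes n <= 64 bytes (i.e. up to SHA-512). */
    static void hash_to_hex(char *out, const uint8_t *hash, size_t n) {
            static const char table[] = "0123456789abcdef";
            for (size_t i = 0; i < n; i++) {
                    out[2 * i]     = table[hash[i] >> 4];
                    out[2 * i + 1] = table[hash[i] & 0xf];
            }
            out[2 * n] = '\0';
    }

    /* Returns nonzero if a node for this root hash is already set up. */
    static int verity_device_exists(const uint8_t *hash, size_t n) {
            char hex[2 * 64 + 1], path[sizeof(hex) + 32];

            hash_to_hex(hex, hash, n);
            snprintf(path, sizeof(path), "/dev/mapper/%s-verity", hex);
            return access(path, F_OK) == 0;
    }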

It's line-by-line the same logic, hence use the common implementation.

In case the passwd/group file is symlinked, follow things correctly.
Follow-up for: #16512
Addresses: https://github.com/systemd/systemd/pull/16512#discussion_r458073677

../src/home/homectl-pkcs11.c:19:13: warning: ‘pkcs11_callback_data_release’ defined but not used [-Wunused-function]
19 | static void pkcs11_callback_data_release(struct pkcs11_callback_data *data) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~

The docs for XZ don't seem to answer this at first blush, or maybe
I'm looking in the wrong place... This might make XZ less terribly slow,
but on the other hand, almost nobody uses it, so it doesn't matter that
much.

This should be more efficient with no downsides. Same considerations as in the
previous commit hold.

decompress_blob_zstd() would allocate ever-bigger buffers in a loop,
trying to get a buffer big enough to decompress the input data. This is
wasteful, since we can just query the size of the decompressed data from
the compressed header. Worse, it doesn't work when the output size is
capped, i.e. when dst_max != 0. If the decompressed blob happened to be
bigger than dst_max, decompression would fail with -ENOBUFS. We need to
use "stream decompression" instead, and only get
min(uncompressed size, dst_max) bytes of output.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1856037 in a second way.
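
A minimal sketch of the stream-decompression idea using the public zstd
API (the function name and error conventions are made up; here dst_max
is simply the size of the caller-supplied output buffer):

    #include <errno.h>
    #include <stddef.h>
    #include <zstd.h>

    /* Decompress into dst, stopping at dst_max bytes even if the frame
     * is larger. Returns 0 and the produced size, or a negative
     * errno-style code. */
    static int decompress_capped(const void *src, size_t src_size,
                                 void *dst, size_t dst_max, size_t *ret_size) {
            /* The frame header usually records the decompressed size up
             * front (it may be ZSTD_CONTENTSIZE_UNKNOWN for streamed
             * input), so callers can size buffers without guessing: */
            unsigned long long c = ZSTD_getFrameContentSize(src, src_size);
            if (c == ZSTD_CONTENTSIZE_ERROR)
                    return -EBADMSG;        /* not a valid zstd frame */

            ZSTD_DStream *ds = ZSTD_createDStream();
            if (!ds)
                    return -ENOMEM;
            (void) ZSTD_initDStream(ds);

            ZSTD_inBuffer in = { src, src_size, 0 };
            ZSTD_outBuffer out = { dst, dst_max, 0 };

            while (in.pos < in.size && out.pos < out.size) {
                    size_t k = ZSTD_decompressStream(ds, &out, &in);
                    if (ZSTD_isError(k)) {
                            ZSTD_freeDStream(ds);
                            return -EBADMSG;
                    }
                    if (k == 0)
                            break;          /* frame fully decoded */
            }

            *ret_size = out.pos;    /* == min(uncompressed size, dst_max) */
            ZSTD_freeDStream(ds);
            return 0;
    }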

We might add more compression types in the future; an unknown
compression type should be treated as unsupported, not as a format
error.
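
A sketch of the distinction (the flag constants are illustrative,
loosely mirroring the journal's object flags):

    #include <errno.h>
    #include <stdint.h>

    enum {
            COMPRESSED_XZ   = 1 << 0,
            COMPRESSED_LZ4  = 1 << 1,
            COMPRESSED_ZSTD = 1 << 2,
    };

    static int check_compression(uint8_t flags) {
            if (flags & ~(COMPRESSED_XZ | COMPRESSED_LZ4 | COMPRESSED_ZSTD))
                    /* A scheme we don't know (yet): report "unsupported
                     * here", not "corrupt file" (-EBADMSG). */
                    return -EOPNOTSUPP;
            return 0;
    }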

SD_JOURNAL_FOREACH_DATA() and SD_JOURNAL_FOREACH_UNIQUE() would
immediately terminate when a field couldn't be accessed. This can happen
for example when a field is compressed with an unavailable compression
format. But this is likely the wrong thing to do: the caller might want
to iterate over the fields without being interested in all of them.
coredumpctl is like this: it uses SD_JOURNAL_FOREACH_DATA() but only
uses a subset of the fields.
Add two new functions sd_journal_enumerate_good_data() and
sd_journal_enumerate_good_unique() that retry sd_journal_enumerate_data()
and sd_journal_enumerate_unique() if the return value is something that
applies to a single field: ENOBUFS, E2BIG, EOPNOTSUPP.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1856037.
An alternative would be to make the macros themselves smarter instead of
adding new symbols, and do the looping internally in the macro. I don't
like that approach for two reasons. First, it would embed the logic in
the macro, so recompilation would be required if we decide to update the
logic. With the current version of the patch, recompilation is required
to use the new symbols, but after that, library upgrades are enough. So
the current approach is safer in case further updates are needed.
Second, our headers use primitive C, and it is hard to write such macros
without relying on newer language features.
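
A sketch of the retry logic. The skip step is a stand-in: a caller of
the public API cannot advance the cursor past a broken field, which is
exactly why these have to be library functions rather than macros:

    #include <errno.h>
    #include <stddef.h>
    #include <systemd/sd-journal.h>

    /* Stand-in for the internal "advance past the current field" step;
     * not a real libsystemd symbol. */
    extern void journal_skip_field(sd_journal *j);

    static int enumerate_good_data(sd_journal *j, const void **data, size_t *size) {
            for (;;) {
                    int r = sd_journal_enumerate_data(j, data, size);
                    if (r >= 0)
                            return r;  /* 1: got a field; 0: no more fields */
                    if (r != -ENOBUFS && r != -E2BIG && r != -EOPNOTSUPP)
                            return r;  /* a real error: propagate */
                    /* Only this one field is inaccessible: skip it and
                     * go on to the next. */
                    journal_skip_field(j);
            }
    }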

handling
Follow-up for: 26698337f3842842af51cd007485f1dcd7c43cf2

Let's split this out into its own helper function that we can reuse in
various places.
Also, let's avoid signed values where we can, so that we can cover more
of the available time range.
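
For illustration, this mirrors the kind of helper systemd keeps in
time-util.h, where usec_t is an unsigned 64-bit microsecond count:
unsigned arithmetic with explicit saturation covers the full range
without signed-overflow hazards:

    #include <stdint.h>

    typedef uint64_t usec_t;                  /* as in time-util.h */
    #define USEC_INFINITY ((usec_t) UINT64_MAX)

    /* Add two microsecond values, saturating instead of wrapping. */
    static usec_t usec_add(usec_t a, usec_t b) {
            if (a > USEC_INFINITY - b)
                    return USEC_INFINITY;
            return a + b;
    }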

Fixes: #16506

Let's use the new flag wherever we read key material/passphrases/hashes
off disk, so that people can easily plug in their own IPC service as a
backend if they like.
(My main goal was actually to support this for crypttab key files — i.e.
to allow AF_UNIX sockets to be specified as the third column in crypttab
— but that's harder to implement, since those keys are read via
libcryptsetup's API, not ours.)
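
Since the backend contract is just "client connects, reads the secret,
gets EOF", such a service can be tiny. A hedged sketch — the socket path
and payload below are invented for illustration:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void) {
            struct sockaddr_un sa = { .sun_family = AF_UNIX };
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);

            if (fd < 0)
                    return 1;
            strncpy(sa.sun_path, "/run/mykeys/luks.sock", sizeof(sa.sun_path) - 1);
            (void) unlink(sa.sun_path);
            if (bind(fd, (struct sockaddr*) &sa, sizeof(sa)) < 0 ||
                listen(fd, 1) < 0)
                    return 1;

            for (;;) {
                    int c = accept(fd, NULL, NULL);
                    if (c < 0)
                            continue;
                    /* Hand out the secret; closing the connection
                     * signals "the key is complete". */
                    (void) write(c, "super-secret", strlen("super-secret"));
                    close(c);
            }
    }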

There's really no reason to prohibit this, hence don't.

Optionally, teach read_full_file() the ability to connect to an AF_UNIX
socket if the specified path points to one.
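
What the reader side amounts to, sketched with plain POSIX calls
(read_full_file() itself does more — this just shows the
connect-and-read-until-EOF idea):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/un.h>
    #include <unistd.h>

    static ssize_t read_via_socket(const char *path, char *buf, size_t n) {
            struct sockaddr_un sa = { .sun_family = AF_UNIX };
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            ssize_t total = 0, k;

            if (fd < 0)
                    return -1;
            strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
            if (connect(fd, (struct sockaddr*) &sa, sizeof(sa)) < 0) {
                    close(fd);
                    return -1;
            }
            /* Read until EOF (or until the buffer is full). */
            while ((size_t) total < n &&
                   (k = read(fd, buf + total, n - total)) > 0)
                    total += k;
            close(fd);
            return total;
    }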

reading file

Also, drop one unnecessary sd_device_unref(), as dev_db_clone will be
unref()ed in udev_event_free().

TEST-50-DISSECT

the wrong file in the non-EFI case
According to the docs, and to the
org.freedesktop.login1.get-reboot-to-boot-loader-menu code, the
(oneshot) boot-loader-menu timeout should be stored in
/run/systemd/reboot-to-boot-loader-menu, but the set method was storing
it in /run/systemd/reboot-to-loader-menu.
This commit fixes that. Note that the fixed name is also a better match
for the D-Bus call names and for the related
/run/systemd/reboot-to-boot-loader-entry file, so fixing the set code,
rather than the get code and docs, seems like the right thing to do
here.
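
The underlying bug class — two call sites spelling the same runtime path
differently — is best prevented with a single shared constant; a sketch
(the macro name is made up):

    /* Shared between the set and get code paths, so the two spellings
     * cannot drift apart again. */
    #define REBOOT_TO_BOOT_LOADER_MENU_FILE \
            "/run/systemd/reboot-to-boot-loader-menu"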

Also, filter out DNS servers which do not match link ifindex.