repart: Add MakeSymlinks=
Similar to MakeDirectories=, but creates symlinks in the filesystem.
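For illustration only (the exact value syntax is an assumption here, it is not
spelled out in this commit message), a repart.d drop-in could then carry
something like:

    [Partition]
    Type=root
    Format=ext4
    MakeDirectories=/var /opt
    MakeSymlinks=/opt/data:/var/data

where each entry is read as the symlink to create and the target it should
point to.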
TEST-64-UDEV-STORAGE is invoked with the subtest appended, so TEST_SKIP=TEST-64-UDEV-STORAGE
does not work. Fix it by using TEST_SKIP as a partial match.
Follow-up for ddc91af4eaa32511f92c83b2c24d9cc0425fd5f5
Also, don't abuse RET_GATHER in verb_cat(), where the failures
are most likely unrelated to each other.
Closes #34281
This configures an identity mapping similar to
systemd-nspawn --private-users=identity.
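In /proc/<pid>/uid_map terms, an identity mapping is simply a 1:1 mapping of
the first 65536 UIDs/GIDs (shown for illustration, mirroring what
--private-users=identity documents):

    0 0 65536

i.e. container UID/GID 0..65535 map straight to the same host IDs, with no
shift applied.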
nspawn: make --volatile work with -U
For issue #34254.
nspawn: enable FUSE in containers
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Linux kernel v4.18 (2018-08-12) added user-namespace support to FUSE, and
bumped the FUSE version to 7.27 (see: da315f6e0398 (Merge tag
'fuse-update-4.18' of
git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse, Linus Torvalds,
2018-06-07)). This means that on such kernels it is safe to enable FUSE in
nspawn containers.
In outer_child(), before calling copy_devnodes(), check the FUSE version to
decide whether to enable (>=7.27) or disable (<7.27) FUSE in the container. We
look at the FUSE version instead of the kernel version in order to enable FUSE
support on older-versioned kernels that may have the mentioned patchset
backported ([as requested by @poettering][1]). However, I am not sure that
this is safe; user-namespace support is not a documented part of the FUSE
protocol, which is what FUSE_KERNEL_VERSION/FUSE_KERNEL_MINOR_VERSION are meant
to capture. While the same patchset
- added FUSE_ABORT_ERROR (which is all that the 7.27 version bump
is documented as including),
- bumped FUSE_KERNEL_MINOR_VERSION from 26 to 27, and
- added user-namespace support
these 3 things are not inseparable; it is conceivable to me that a backport
could include the first 2 of those things and exclude the 3rd; perhaps it would
be safer to check the kernel version.
Do note that our get_fuse_version() function uses the fsopen() family of
syscalls, which were not added until Linux kernel v5.2 (2019-07-07); so if
nothing has been backported, then the minimum kernel version for FUSE-in-nspawn
is actually v5.2, not v4.18.
Pass whether or not to enable FUSE to copy_devnodes(); have copy_devnodes()
copy in /dev/fuse if enabled.
Pass whether or not to enable FUSE back over fd_outer_socket to run_container()
so that it can pass that to append_machine_properties() (via either
register_machine() or allocate_scope()); have append_machine_properties()
append "DeviceAllow=/dev/fuse rw" if enabled.
For testing, simply check that /dev/fuse can be opened for reading and writing,
but that actually reading from it fails with EPERM. The test assumes that if
FUSE is supported (/dev/fuse exists), then the testsuite is running on a kernel
with FUSE >= 7.27; I am unsure how to go about writing a test that validates
that the version check disables FUSE on old kernels.
[1]: https://github.com/systemd/systemd/issues/17607#issuecomment-745418835
Closes #17607
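Read as a hedged sketch of that version gate (the macro and helper names here
are illustrative, not the actual nspawn code):

    #include <stdbool.h>

    /* Illustrative only: FUSE gained user-namespace support together with the
     * protocol 7.27 bump, so only expose FUSE to the container if the probed
     * protocol version is at least 7.27. */
    #define FUSE_PROTO(major, minor) ((major) * 1000U + (minor))

    static bool fuse_ok_for_container(unsigned major, unsigned minor) {
            /* major/minor would come from a probe such as the get_fuse_version()
             * mentioned above, which needs fsopen() and hence kernel >= 5.2. */
            return FUSE_PROTO(major, minor) >= FUSE_PROTO(7, 27);
    }

If the check passes, copy_devnodes() copies /dev/fuse into the container and
"DeviceAllow=/dev/fuse rw" is appended to the machine properties, exactly as
described above.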
Right now it mostly duplicates a test that already exists in
TEST-50-DISSECT.mountfsd.sh, but it serves as a template for more unprivileged
nspawn tests.
The .cred suffix is stripped from a credential as it is imported from
the ESP, hence it should not be included in the credential name embedded
in the credential.
Fixes: #33497
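For example (the ESP directory shown here is only for illustration, it is not
quoted in this commit message):

    ESP file:        .../loader/credentials/mycred.cred
    credential name: mycred        (not "mycred.cred")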
So far you had to pick:
1. Use a signed PCR TPM2 policy to lock your disk (i.e. the UKI vendor
blesses your setup via signature)
or
2. Use a pcrlock policy (i.e. local system blesses your setup via
dynamic local policy stored in NV index)
It was not possible to combine these two, because TPM2 access policies do
not allow the combination of PolicyAuthorize (used to implement #1
above) and PolicyAuthorizeNV (used to implement #2) in a single policy,
unless one is "further upstream" (and can simply remove the other from
the policy freely).
This is quite limiting of course, since we actually do want to enforce
on each TPM object that both the OS vendor policy and the local policy
must be fulfilled, without the chance for the vendor or the local system
to disable the other.
This patch addresses this: instead of trying to find a way to come up
with some adventurous scheme to combine both policies into one TPM2
policy, we simply shard the symmetric LUKS decryption key: one half we
protect via the signed PCR policy, and the other we protect via the
pcrlock policy. Only if both halves can be acquired can the disk be
decrypted.
This means:
1. We simply double the unlock key in length in case both policies shall
be used.
2. We store two resulting TPM policy hashes in the LUKS token JSON, one
for each policy.
3. We store two sealed TPM policy key blobs in the LUKS token JSON, for
both halves of the LUKS unlock key.
This patch keeps the "sharding" logic relatively generic (i.e. the low
level logic is actually fine with more than 2 shards), because I figure
sooner or later we might have to encode more shards, for example if we
add further TPM2-based access policies, such as when combining FIDO2
with TPM2, or implementing TOTP for this.
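A minimal sketch of the sharding step itself, assuming two equally sized
shards (names and sizes are illustrative, not the actual LUKS token code):

    #include <string.h>

    #define SHARD_SIZE 32   /* assumed size of each half of the unlock key */
    #define N_SHARDS   2    /* signed-PCR-policy shard + pcrlock-policy shard */

    /* Split the doubled unlock key into its shards; each shard is then sealed
     * under its own TPM2 policy, and unlocking concatenates the unsealed
     * shards again to recover the full key. */
    static void split_unlock_key(const unsigned char key[N_SHARDS * SHARD_SIZE],
                                 unsigned char shards[N_SHARDS][SHARD_SIZE]) {
            for (unsigned i = 0; i < N_SHARDS; i++)
                    memcpy(shards[i], key + i * SHARD_SIZE, SHARD_SIZE);
    }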
repart: initialize seed earlier
For issue #34257.
core: follow-ups for recent PRs
Addresses https://github.com/systemd/systemd/pull/32487#issuecomment-2328465309
network: make qdisc reconfigurable
Currently, if for example a traffic control object already exists, networkd
will silently do nothing, even if the settings in the network file for the
traffic control object have changed. Let's instead replace the object if it
already exists so that new settings from the network file are applied as
expected.
Fixes #31226
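Conceptually this is the switch from "create, and silently skip if present" to
an atomic replace, i.e. what the following tc invocation does (shown only as an
analogy, not what networkd literally executes):

    tc qdisc replace dev eth0 root fq_codel

On the netlink level this corresponds to sending the request with
NLM_F_CREATE|NLM_F_REPLACE instead of treating an existing object as nothing
to do.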
(where BindJournalSockets=yes is implied)
Now that mkfs.btrfs is adding support for compressing the generated
filesystem (https://github.com/kdave/btrfs-progs/pull/882), let's
add general support for specifying the compression algorithm and
compression level to use.
We opt to not parse the specified compression algorithm and instead
pass it on as is to the mkfs tool. This has a few benefits:
- We support every compression algorithm supported by every tool
automatically.
- Users don't need to modify systemd-repart if a mkfs tool learns a
new compression algorithm in the future.
- We don't need to maintain a bunch of per-filesystem tables to map
from our generic compression algorithm enum to the filesystem-specific
names.
We don't add support for btrfs just yet until the corresponding PR
in btrfs-progs is merged.
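As a hedged illustration (the option names are an assumption here, the commit
message above does not quote them), a repart.d drop-in would then hand the
algorithm and level through to the mkfs tool verbatim:

    [Partition]
    Type=root
    Format=squashfs
    Compression=zstd
    CompressionLevel=15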
Required for various tests in TEST-58-REPART.
The original regex didn't cover the `run-unit-tests.py` script that
made the old framework pull Python into the test image, which in turn
allowed the new TEST-69-SHUTDOWN Python script to get executed in the
old framework's image, causing unexpected failures with the latest Python on
Rawhide.
Force means force: we skip checks with PID1 for existing units, but
then bail out with EEXIST if the files are actually there. Overwrite
everything instead.
Hopefully fixes #34204.
For issue #34104.
These operations might require slow I/O, and thus might block PID1's main
loop for an indeterminate amount of time. Instead of performing them
inline, fork a worker process and stash away the D-Bus message, and reply
once we get a SIGCHLD indicating they have completed. That way we don't
break compatibility and callers can continue to rely on the fact that when
they get the method reply the operation either succeeded or failed.
To keep backward compatibility, unlike reload control processes, these
are run inside init.scope and not the target cgroup. Unlike ExecReload,
this is under our control and is not defined by the unit. This is necessary
because previously the operation also wasn't run from the target cgroup,
so suddenly forking a copy-on-write copy of pid1 into the target cgroup
will make memory usage spike, and if there is a MemoryMax= or MemoryHigh=
set and the cgroup is already close to the limit, it will cause an OOM
kill, where previously it would have worked fine.
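The flow can be pictured roughly like the following sketch (illustrative only,
using public sd-bus calls; the struct and helper names are assumptions, not
the actual PID1 code):

    #include <sys/types.h>
    #include <unistd.h>
    #include <systemd/sd-bus.h>

    /* Sketch: run the slow operation in a forked worker, stash the D-Bus
     * request, and only reply once SIGCHLD tells us how the worker exited. */
    typedef struct Operation {
            pid_t pid;
            sd_bus_message *message;   /* the stashed method call */
    } Operation;

    static int operation_start(sd_bus_message *m, Operation *op, int (*work)(void)) {
            pid_t pid = fork();
            if (pid < 0)
                    return -1;
            if (pid == 0)
                    _exit(work() < 0 ? 1 : 0);   /* child: do the slow I/O */

            op->pid = pid;
            op->message = sd_bus_message_ref(m);  /* reply later, on SIGCHLD */
            return 0;
    }

    static int operation_finish(Operation *op, int status) {
            int r = status == 0 ? sd_bus_reply_method_return(op->message, NULL)
                                : sd_bus_reply_method_errorf(op->message, SD_BUS_ERROR_FAILED,
                                                             "Operation failed");
            op->message = sd_bus_message_unref(op->message);
            return r;
    }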
In some cases (e.g. SUSE Tumbleweed) this is needed, as a library (libz) is
not in the default path, so it fails to run.
The TEST-64-UDEV-STORAGE tests fail before we even start the test.
Let's set show_status=error to get more information when those failures
happen.
sysupdate: Handle incomplete versions
|
| |
| |
| |
| | |
To make sure we don't regress on #33339
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
One of the major pain points of managing fleets of headless nodes is
that when something fails at startup, unless debug level was already
enabled (which it usually isn't, as it's a firehose), one needs to manually
enable it and pray the issue can be reproduced, which often is really
hard and time consuming, just to get extra info. Usually the extra log
messages are enough to triage an issue.
This new option makes it so that when a service fails and is restarted
due to Restart=, the log level for that unit is set to debug, so that all
setup code in pid1 and sd-executor logs at debug level, and also a new
DEBUG_INVOCATION=1 env var is passed to the service itself, so that it
knows it should start with a higher log level. Once the unit succeeds
or reaches the rate limit, the original level is restored.
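On the service side, honoring the new variable can be as simple as this sketch
(the variable name comes from the commit message above; everything else is
assumed):

    #include <stdlib.h>
    #include <string.h>
    #include <syslog.h>

    /* Sketch: bump the service's own log level when PID1 signals that this
     * invocation is a debug restart via DEBUG_INVOCATION=1. */
    static int initial_log_level(void) {
            const char *e = getenv("DEBUG_INVOCATION");
            return e && strcmp(e, "1") == 0 ? LOG_DEBUG : LOG_INFO;
    }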
I don't actually need this anymore since we're going with a
unit-based approach for the containers stuff internally, so
let's just revert it.
Fixes #34085
This reverts commit ce2291730d5f91190e97e7c515ac772ae4970062.
network/routing-policy-rule: follow up for recent change
|
| |
| |
| |
| |
| |
| | |
after reconfiguration
For issue #34068.
We usually configure a test rule with a unique priority. Hence, finding
the rule by priority reduces the lines of output, and we can debug more easily.
Also print short comments on each check; that's helpful when the check is
called several times.
sysupdate: Implement dbus service
network: further rework for routing policy rule