author     Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>    2022-07-02 10:33:49 +0200
committer  Lennart Poettering <lennart@poettering.net>        2022-07-05 14:40:01 +0200
commit     b8df7f8629cb310beac982a4779b27eabe5362c6 (patch)
tree       211b3ba7dba433c95b58b7342c5140d590dedb5e /units/user@.service.in
parent     update TODO (diff)
user: delegate cpu controller, assign weights to user slices
So far we didn't enable the cpu controller because of the overhead of the accounting. If I'm reading things correctly, delegation was enabled for a while for the units with user and pam context set, i.e. for user@.service too. a931ad47a8623163a29d898224d8a8c1177ffdaf added the explicit Delegate=yes|no switch, but it was initially set to 'yes'. acc8059129b38d60c1b923670863137f8ec8f91a disabled delegation for user@.service with the justification that CPU accounting is expensive, but half a year later a88c5b8ac4df713d9831d0073a07fac82e884fb3 changed the default to DefaultCPUAccounting=yes for kernels >= 4.15 with the justification that CPU accounting is inexpensive there.

In my (very noncomprehensive) testing, I don't see a measurable overhead when the cpu controller is enabled for user slices. I tried some repeated compilations, and there was no statistical difference, but the noise level was fairly high. Maybe better benchmarking would reveal a difference.

The goal of this change is very simple: currently all of the user session, including services like the display server and pipewire, is under user@.service. This means that when e.g. a compilation job is started in the session's app.slice, the processes in session.slice compete for CPU and can be starved. In particular, audio starts to stutter, etc. With the CPU controller enabled, I can start 'ninja -C build -j40' in a tab and this doesn't have any noticeable effect on audio.

I don't think the particular values matter too much: the CPU controller is work-conserving, and presumably the session slice would never need more than e.g. one full CPU, i.e. half or a quarter of the available CPU resources on even the smallest of today's machines. app.slice and session.slice are assigned equal weights, and background.slice a smaller fraction. CPUWeight=100 is the default, but I wrote it out explicitly to make it easier for users to see how the split is done. So effectively this should result in session.slice getting as much power as it needs.

If it turns out that this does have a noticeable overhead, we could make it opt-in. But I think the benefit to usability is important enough to enable it by default. Without something like this the session is not really usable with background tasks.
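The diffstat below is limited to user@.service.in, so the weight assignments themselves are not visible in this excerpt. A minimal sketch of how the split described above could look in the per-user slice units; the value for background.slice is illustrative and not taken from this commit:

# app.slice (sketch) — regular user applications; explicit even though 100 is the default
[Slice]
CPUWeight=100

# session.slice (sketch) — display server, pipewire, etc.; same weight as app.slice
[Slice]
CPUWeight=100

# background.slice (sketch) — lower-priority background work, a smaller fraction (illustrative value)
[Slice]
CPUWeight=20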
Diffstat (limited to 'units/user@.service.in')
-rw-r--r--   units/user@.service.in   2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/units/user@.service.in b/units/user@.service.in
index 85fc3c907e..eff0d5bcb6 100644
--- a/units/user@.service.in
+++ b/units/user@.service.in
@@ -21,7 +21,7 @@ Type=notify
ExecStart={{ROOTLIBEXECDIR}}/systemd --user
Slice=user-%i.slice
KillMode=mixed
-Delegate=pids memory
+Delegate=pids memory cpu
TasksMax=infinity
TimeoutStopSec=120s
KeyringMode=inherit
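If the default split doesn't fit a particular workload, the weights can be adjusted per user with an ordinary drop-in, e.g. created via 'systemctl --user edit app.slice'. A sketch, with an illustrative value:

# ~/.config/systemd/user/app.slice.d/override.conf (sketch)
# Lower the weight of user applications relative to session.slice.
[Slice]
CPUWeight=50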