| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
| |
It's time. sd-json was already done earlier in this cycle, let's now
make sd-varlink public too.
This is mostly just a search/replace job of epic proportions.
I left some functions internal (mostly IDL handling), and I turned some
static inline calls into regular calls.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is preparation for making our Varlink API a public API. Since our
Varlink API is built on top of our JSON API we need to make that public
first (it's a nice API, but there are already enough JSON APIs out there;
this is purely about the Varlink angle).
I made most of the json.h APIs public, and just placed them in
sd-json.h. Sometimes I wasn't so sure however, since the underlying data
structures would have to be made public too. If in doubt I didn't risk
it, and moved the relevant API to src/libsystemd/sd-json/json-util.h
instead (without any sd_* symbol prefixes).
This is mostly a giant search/replace patch.
|
|
|
|
|
| |
Otherwise the default log target is the console and we won't use
the journal socket even if it is available.
|
|
|
|
|
|
| |
Several test cases intentionally trigger assert_return(). So, to avoid
failing the entire test, this introduces several macros that temporarily
make assert_return() non-critical.
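A minimal sketch of the idea, assuming a (hypothetical) flag that
assert_return() consults before treating a failed precondition as fatal;
the actual macro and symbol names in the tree may differ:

    #include <stdbool.h>

    /* Hypothetical flag that assert_return() is assumed to check before
     * treating a failed precondition as fatal in test builds. */
    static bool assert_return_is_critical = true;

    /* Run an expression that is expected to trip assert_return() without
     * failing the whole test: clear the flag around it, then restore it. */
    #define ASSERT_RETURN_EXPECTED(expr)                              \
            do {                                                      \
                    bool _saved = assert_return_is_critical;          \
                    assert_return_is_critical = false;                \
                    expr;                                             \
                    assert_return_is_critical = _saved;               \
            } while (false)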
|
|
|
|
|
|
| |
To avoid timeouts with larger inputs.
Resolves: #29856
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
varlink_dispatch() is a simple wrapper around json_dispatch() that
returns a clean, standards-compliant InvalidParameter error back to
clients if the specified JSON cannot be parsed properly.
For this, json_dispatch() is extended to return the offending field's
name. Because it already has quite a few parameters, I then renamed
json_dispatch() to json_dispatch_full() and made json_dispatch() a
wrapper around it that passes the new argument as NULL. While doing so I
figured we should also get rid of the bad= argument in the short
wrapper, since it's only used in the OCI code.
To simplify the OCI code this adds a second wrapper oci_dispatch()
around json_dispatch_full(), that fills in bad= the way we want.
Net result: instead of one json_dispatch() call there are now:
1. json_dispatch_full() for the fully featured mother of all dispatchers.
2. json_dispatch() for the simpler version that you want to use most of
the time.
3. varlink_dispatch(), which generates nice Varlink errors.
4. oci_dispatch(), which does the OCI-specific error handling.
And that's all there is.
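A rough sketch of how these layers might fit together (signatures
simplified and partly hypothetical, relying on the internal JSON/Varlink
types; not the exact prototypes):

    /* json_dispatch_full(): the fully featured dispatcher, here assumed
     * to report the offending field via reterr_bad_field. */
    int json_dispatch_full(JsonVariant *v, const JsonDispatch table[],
                           JsonDispatchCallback bad, JsonDispatchFlags flags,
                           void *userdata, const char **reterr_bad_field);

    /* json_dispatch(): the common-case wrapper, no bad= callback and no
     * interest in the offending field name. */
    static inline int json_dispatch(JsonVariant *v, const JsonDispatch table[],
                                    JsonDispatchFlags flags, void *userdata) {
            return json_dispatch_full(v, table, NULL, flags, userdata, NULL);
    }

    /* varlink_dispatch(): on parse failure, turn the offending field name
     * into an InvalidParameter error for the client (error helper name
     * assumed). */
    int varlink_dispatch(Varlink *link, JsonVariant *parameters,
                         const JsonDispatch table[], void *userdata) {
            const char *bad_field = NULL;
            int r;

            r = json_dispatch_full(parameters, table, NULL, 0, userdata, &bad_field);
            if (r < 0) {
                    if (bad_field)
                            return varlink_error_invalid_parameter_name(link, bad_field);
                    return r;
            }

            return 0;
    }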
|
|
|
|
|
|
|
|
| |
We use it for more than just pipe() arrays, for example also for
socketpair(). Hence let's give it a generic name.
Also add EBADF_TRIPLET to mirror this for things like
stdin/stdout/stderr arrays, which we use a bunch of times.
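Something along these lines (illustrative, not necessarily the literal
definitions):

    #include <errno.h>

    /* Initializers for fd arrays that haven't been opened yet, using
     * -EBADF as the "no fd yet" marker. */
    #define EBADF_PAIR    { -EBADF, -EBADF }
    #define EBADF_TRIPLET { -EBADF, -EBADF, -EBADF }

    /* Usage: */
    int sp[2]    = EBADF_PAIR;     /* e.g. to be filled by socketpair() */
    int stdio[3] = EBADF_TRIPLET;  /* e.g. stdin/stdout/stderr to pass on */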
|
|
|
|
|
|
| |
Even more than with the previous commit, this is not a trivial function
and there's no reason to believe this will actually be inlined nor that
it would be beneficial.
|
|
|
|
|
|
|
|
|
| |
This is preparation for #28891, which adds a bunch more helpers around
"struct iovec", at which point this really deserves its own .c/.h file.
The idea is that sooner or later we can consider "struct iovec" an
entirely generic mechanism for referencing some binary blob, and the
go-to type for this purpose whenever we need one.
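For illustration, helpers of the kind meant here might look like this
(names are illustrative, not necessarily the ones added in #28891):

    #include <string.h>
    #include <sys/uio.h>

    /* Treat "struct iovec" as a generic reference to some binary blob. */
    static inline struct iovec iovec_make(void *base, size_t len) {
            return (struct iovec) { .iov_base = base, .iov_len = len };
    }

    static inline struct iovec iovec_make_string(const char *s) {
            /* A NUL-terminated string is just a blob of strlen(s) bytes. */
            return iovec_make((char*) s, strlen(s));
    }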
|
|
|
|
|
|
|
|
|
|
| |
Make sure we don't log anything when running in "fuzzing" mode. Also,
while at it, unify the setup logic into a helper, pretty similar to
the test_setup_logging() one.
Addresses:
- https://github.com/systemd/systemd/pull/29558#pullrequestreview-1676060607
- https://github.com/systemd/systemd/pull/29558#discussion_r1358940663
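A sketch of what such a setup helper could look like (the environment
variable and log levels are assumptions, not the exact implementation):

    #include <stdlib.h>
    #include <syslog.h>

    #include "log.h"   /* systemd-internal logging API */

    static void fuzz_setup_logging(void) {
            /* Assumed opt-in switch for verbose local debugging runs. */
            if (getenv("SYSTEMD_FUZZ_OUTPUT"))
                    log_set_max_level(LOG_DEBUG);
            else
                    /* Effectively silence logging while fuzzing. */
                    log_set_max_level(LOG_CRIT);
    }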
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
json_append() is a useful wrapper around json_variant_merge(). However,
I think the naming of both functions should be cleaned up a bit.
I think "merge" is a better word than "append", since it does
decidedly more than just append: it replaces existing fields of the same
name, hence "merge" sounds more appropriate. This is as opposed to the
similar operations for arrays, where no such override logic is applied
and we really just append, hence those functions are called "append"
already.
To make clearer that "merge" is about objects, and "append" about
arrays, also include "object" in the name.
Also, include "json_variant" in the name, like we do for almost all
other functions in the JSON API that take a JSON object as primary
input, and hence are kinda object methods.
Finally, let's follow the logic that helpers that combine json_build()
with some other operation get suffixed with "b" like we already have in
some cases.
Hence:
json_variant_merge() → json_variant_merge_object()
json_append() → json_variant_merge_objectb()
This mirrors nicely the existing:
json_variant_append_array()
json_variant_append_arrayb()
This also drops the variant of json_append() that takes a va_arg
parameter (i.e. json_appendv()). We have no user of that so far, and
given its nature as a helper function only, I don't see that happening,
and if it happens after all, it's trivial to bring back.
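Roughly, the renamed calls are then used like this (v, w and r are
placeholders; the build macros are the usual JSON_BUILD_* ones):

    /* Merge one object into another (was: json_variant_merge()): */
    r = json_variant_merge_object(&v, w);

    /* Build and merge in one step (was: json_append()): */
    r = json_variant_merge_objectb(&v,
                    JSON_BUILD_OBJECT(
                            JSON_BUILD_PAIR("example", JSON_BUILD_STRING("value"))));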
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In case one of the allocations fails.
For example:
AddressSanitizer:DEADLYSIGNAL
=================================================================
==17==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7fb352a476e5 bp 0x7ffe45154850 sp 0x7ffe45154008 T0)
==17==The signal is caused by a READ memory access.
==17==Hint: address points to the zero page.
SCARINESS: 10 (null-deref)
#0 0x7fb352a476e5 (/lib/x86_64-linux-gnu/libc.so.6+0x1886e5) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
#1 0x435878 in __interceptor_strlen /src/llvm-project/compiler-rt/lib/asan/../sanitizer_common/sanitizer_common_interceptors.inc
#2 0x4de1e4 in LLVMFuzzerTestOneInput /work/build/../../src/systemd/src/fuzz/fuzz-calendarspec.c:20:21
#3 0x4deea8 in NaloFuzzerTestOneInput (/build/fuzz-calendarspec+0x4deea8)
#4 0x4fde33 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15
#5 0x4fd61a in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*, bool, bool*) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:514:3
#6 0x4fece9 in fuzzer::Fuzzer::MutateAndTestOne() /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:757:19
#7 0x4ff9b5 in fuzzer::Fuzzer::Loop(std::__Fuzzer::vector<fuzzer::SizedFile, std::__Fuzzer::allocator<fuzzer::SizedFile> >&) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:895:5
#8 0x4eed1f in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:912:6
#9 0x4ef5e8 in LLVMFuzzerRunDriver /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:925:10
#10 0x4df105 in main (/build/fuzz-calendarspec+0x4df105)
#11 0x7fb3528e3082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
#12 0x41f80d in _start (/build/fuzz-calendarspec+0x41f80d)
Found by Nallocfuzz.
|
|
|
|
|
|
|
| |
And make compress_xyz() return 0 on success, as we know which compression
algorithm is used when calling compress_blob().
Follow-up for 2360352ef02548723ac0c8eaf5ff6905eb9eeca5.
|
| |
|
| |
|
|
|
|
| |
In some places, initialization is dropped when unnecessary.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
-1 was used everywhere, but -EBADF or -EBADFD started being used in various
places. Let's make things consistent in the new style.
Note that there are two candidates:
EBADF 9 Bad file descriptor
EBADFD 77 File descriptor in bad state
Since we're initializing the fd, we're just assigning a value that means
"no fd yet", so it's just a bad file descriptor, and the first errno fits
better. If instead we had a valid file descriptor that became invalid because
of some operation or state change, the other errno would fit better.
In some places, initialization is dropped if unnecessary.
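In practice the pattern looks like this (illustrative):

    #include <errno.h>
    #include <fcntl.h>

    static int open_devnull(void) {
            int fd = -EBADF;   /* "no fd yet", instead of the old-style -1 */

            fd = open("/dev/null", O_RDONLY | O_CLOEXEC);
            if (fd < 0)
                    return -errno;

            return fd;
    }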
|
| |
|
|
|
|
|
| |
util.h is now about logarithms only, so we can rename it. Many files included
util.h for no apparent reason… Those includes are dropped.
|
|
|
|
|
|
| |
The man page says nothing about "e". Glibc clearly accepts it without fuss, but
it is meaningless for a memory object (and probably doesn't work). This use is
not portable, so let's avoid it.
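I.e. the portable spelling is simply plain "r" (illustrative sketch):

    #include <stdio.h>
    #include <string.h>

    /* No "e": O_CLOEXEC semantics are meaningless for a memory stream. */
    static FILE* memstream_from_string(const char *text) {
            return fmemopen((void*) text, strlen(text), "r");
    }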
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The compiler may decide computations like these are not doing anything
and optimize them away. This would defeat the whole fuzzing
exercise. This macro will force the compiler to materialize the value
no matter what. It should be less prone to accidents compared to using
log functions, which would either slow things down or still optimize the
value away (or simply move it into the if branch the log macros create).
The benefit over assert_se would be that no requirement is made on the
value itself. If we are fine getting a string of any size (including
zero), an assert_se would either create a noisy compiler warning about
conditions that would always be met or yet again optimize the whole
thing away.
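One common way to implement such a macro (a sketch, not necessarily the
literal definition used here):

    /* Pretend the value is consumed by an empty asm statement, so the
     * compiler must materialize it and cannot optimize the computation
     * away. */
    #define DO_NOT_OPTIMIZE(value)                                  \
            do {                                                    \
                    asm volatile("" : : "g"(value) : "memory");     \
            } while (false)

Typical fuzzer usage is then DO_NOT_OPTIMIZE(some_call_under_test(input)),
which forces the result to be computed without making any claim about its
value, unlike assert_se().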
|
| |
|
|
|
|
| |
Operate on image/directory, and also take files to install from it
|
|
|
|
|
| |
This way we can still call fuzzers on old samples, but oss-fuzz will not waste
its and our time finding overly large inputs.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Without the size limits, oss-fuzz creates huge samples that time out. Usually
this is because some of our code has bad algorithmic complexity. For data like
configuration samples we don't need to care about this: non-rogue configs are
rarely more than a few items, and a bit of a slowdown with a few hundred items
is acceptable. This wouldn't be OK for processing of untrusted data though.
We need to set the limit in two ways: through .options and in the code. The
first because it nicely allows libFuzzer to avoid wasting time, and the second
because fuzzers like honggfuzz and AFL don't support .options.
While at it, let's fix an off-by-one (65535 is the largest offset for a
power-of-two size, but we're checking the size here).
Co-authored-by: Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
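A sketch of the in-code half of the limit (the constant shown is
illustrative); the matching max_len= entry in the fuzzer's .options file
covers libFuzzer-based runs:

    #include <stddef.h>
    #include <stdint.h>

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
            /* Skip overly large inputs that would only time out. */
            if (size > 65536)
                    return 0;

            /* ... hand data/size to the code under test ... */
            return 0;
    }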
|
|\
| |
| | |
More coverage in fuzz-json
|
| |
| |
| |
| | |
This might even work ;)
|
| | |
|
| |
| |
| |
| | |
Similarly to other fuzzers… this makes development easier.
|
| |
| |
| |
| |
| |
| |
| | |
https://oss-fuzz.com/testcase-detail/5680508182331392 has the
first timeout with 811kb of input. As in the other cases, the code
is known to be slow with lots of repeated entries and we're fine with
that.
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Coverage data shows that we didn't test calendar_spec_next_usec() and
associated functions at all.
The input samples so far were only used until the first NUL. We take advantage
of that by using the part until the second NUL as the starting timestamp,
retaining backwards compatibility for how the first part is used.
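Schematically, the input splitting works like this (simplified sketch;
the real fuzzer NUL-terminates its copy of the input first):

    #include <stddef.h>
    #include <string.h>

    /* Split a NUL-terminated buffer of the given size into the calendar
     * spec (up to the first NUL) and the starting timestamp (the part
     * after it, up to the second NUL). */
    static void split_input(const char *buf, size_t size,
                            const char **ret_spec, const char **ret_timestamp) {
            size_t l = strnlen(buf, size);

            *ret_spec = buf;
            *ret_timestamp = l + 1 < size ? buf + l + 1 : "";
    }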
|
|/
|
|
|
|
| |
calendar_spec_from_string() already calls calendar_spec_normalize(), so
there is no point in calling it from the fuzzer. Once that's removed, there's
just one internal caller and it can be made static.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
Given we have two different types for the journal object flags and the
Compression enum, let's make the latter a regular non-sparse enum, and
thus remove some surprises. We have to convert anyway between the two,
and already do via COMPRESSION_FROM_OBJECT().
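For illustration, a regular non-sparse enum of this kind, plus the
conversion from the journal object flags, could look roughly like this
(the flag names are assumed from journal-def.h):

    #include <errno.h>

    typedef enum Compression {
            COMPRESSION_NONE,
            COMPRESSION_XZ,
            COMPRESSION_LZ4,
            COMPRESSION_ZSTD,
            _COMPRESSION_MAX,
            _COMPRESSION_INVALID = -EINVAL,
    } Compression;

    static inline Compression compression_from_object_flags(unsigned flags) {
            if (flags & OBJECT_COMPRESSED_XZ)
                    return COMPRESSION_XZ;
            if (flags & OBJECT_COMPRESSED_LZ4)
                    return COMPRESSION_LZ4;
            if (flags & OBJECT_COMPRESSED_ZSTD)
                    return COMPRESSION_ZSTD;
            return COMPRESSION_NONE;
    }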
|
|
|
|
|
|
|
|
|
|
|
|
| |
The compression helpers are used both in journal code and in coredump
code, and there's a good chance we'll use them later for other stuff.
Let's hence move them into src/basic/, to make them a proper internal
API we can use from everywhere where that's desirable. (pstore might be
a candidate, for example)
No real code changes, just some moving around, build system
rearrangements, and stripping of journal-def.h inclusion.
|
|
|
|
|
|
| |
Not having to provide the full path in the source tree is much
nicer and the produced lists can also be used anywhere in the source
tree.
|
|
|
|
|
|
|
|
| |
Empty files and empty strings seem to have triggered various
issues in the past, so it seems they shouldn't be ignored by the
fuzzers just because fmemopen() can't handle them.
Prompted by https://github.com/systemd/systemd/pull/21939#issuecomment-1003113669
|
|
|
|
|
|
| |
It wasn't picked up automatically because it's not in
test/fuzz/fuzz-fido-id-desc/. But looking at the contents, it doesn't seem to
be in the expected input format either.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We recently started making more use of malloc_usable_size() and rely on
it (see the string_erase() story). Given that we don't really support
systems where malloc_usable_size() cannot be trusted beyond statistics
anyway, let's go all in and rework GREEDY_REALLOC() on top of it:
instead of passing around and maintaining the currently allocated size
everywhere, let's just derive it automatically from
malloc_usable_size().
I am mostly after this for the simplicity this brings. It also brings
minor efficiency improvements I guess, but things become so much nicer
to look at if we can avoid these allocation size variables everywhere.
Note that the malloc_usable_size() man page says relying on it wasn't
"good programming practice", but I think it does this for reasons that
don't apply here: the greedy realloc logic specifically doesn't rely on
the returned extra size, beyond the fact that it is equal or larger than
what was requested.
(This commit was supposed to be a quick patch btw, but apparently we use
the greedy realloc stuff quite a bit across the codebase, so this ends
up touching *a lot* of code.)
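The core of the idea, as a simplified sketch (not the literal macro; the
overflow checks and element-size handling of the real code are omitted):

    #include <malloc.h>
    #include <stdlib.h>

    /* Grow *p so it can hold at least "need" elements of "size" bytes,
     * deriving the current capacity from the allocator itself instead of
     * a separately tracked "allocated" variable. */
    static void* greedy_realloc(void **p, size_t need, size_t size) {
            void *q;

            /* malloc_usable_size(NULL) is 0, so this covers the empty case too. */
            if (malloc_usable_size(*p) / size >= need)
                    return *p;

            /* Grow greedily: allocate twice what is asked for right now. */
            q = realloc(*p, need * 2 * size);
            if (!q)
                    return NULL;

            return *p = q;
    }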
|
|
|
|
| |
This is useful when debugging.
|
|
|
|
|
| |
There's also fuzz-bus-label, but despite the name, it tests code that is in
src/shared/, so it shouldn't move.
|
|
|
|
| |
Also use _cleanup_free_ in one more place.
|
| |
|
| |
|
| |
|
| |
|