| Commit message | Author | Age | Files | Lines |
|
Like JSON_BUILD_PAIR_CALLBACK(), but doesn't add anything to the variant
if the callback doesn't put anything in the return argument.
|
This does the opposite of the previous patch: it undoes the "-" → "_"
mapping of enum values when we parse enums back in.
|
Most of our enums are mapped to strings that use dashes ("-") as word
separators, i.e. "foo-bar-baz". However, Varlink enums do not allow "-"
as a separator, see:
https://varlink.org/Interface-Definition
Hence, let's add some simple glue to automatically turn "-" into "_"
when serializing our enums.
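A minimal sketch of the mapping this describes (the helper name is made up
here purely for illustration; the real glue lives in systemd's JSON/Varlink
code):

#include <stdlib.h>
#include <string.h>

/* Illustrative only: turn "foo-bar-baz" into the Varlink-compatible
 * "foo_bar_baz". Returns a newly allocated string, or NULL on OOM. */
static char* enum_name_to_varlink(const char *s) {
        char *t = strdup(s);
        if (!t)
                return NULL;

        for (char *p = t; *p; p++)
                if (*p == '-')
                        *p = '_';

        return t;
}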
|
In 99% of uses of sd_json_build() we want to build an object as the
outermost construct. Let's shorten this most common case a bit by
adding sd_json_buildo(), which implies this. This allows us to shorten
much of our code, all across the tree.
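Roughly what the shortcut looks like in use (a sketch assuming the sd-json
builder macros of the public API; exact macro spellings may differ):

#include <systemd/sd-json.h>

static int build_example(sd_json_variant **ret) {
        /* Before: the outermost object has to be spelled out.
         * return sd_json_build(ret,
         *                      SD_JSON_BUILD_OBJECT(
         *                              SD_JSON_BUILD_PAIR_STRING("name", "foo"),
         *                              SD_JSON_BUILD_PAIR_UNSIGNED("value", 7)));
         */

        /* After: sd_json_buildo() implies the enclosing object. */
        return sd_json_buildo(ret,
                              SD_JSON_BUILD_PAIR_STRING("name", "foo"),
                              SD_JSON_BUILD_PAIR_UNSIGNED("value", 7));
}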
|
Currently, when dispatching JSON objects into C structs we either insist
on the field type or we don't. Let's extend this model a bit: depending
on two new fields, either allow or refuse null types in addition to the
specified type.
This is useful for example when dispatching enums, as it allows us to
explicitly refuse null in various scenarios where we allow multiple
types.
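A dispatch-table sketch to make this concrete (it needs systemd's internal
json.h; struct layout and flag spellings approximate the API of the time,
and the Foo struct is hypothetical):

#include <stddef.h>

typedef struct Foo {
        char *name;
        char *extra;
} Foo;

static const JsonDispatch dispatch_table[] = {
        /* name     expected type                callback              offset                flags */
        { "name",   JSON_VARIANT_STRING,         json_dispatch_string, offsetof(Foo, name),  JSON_MANDATORY },
        /* For "extra" the type is left unspecified; the new fields described
         * above are what would let us say explicitly whether a JSON null is
         * acceptable here or must be refused. */
        { "extra",  _JSON_VARIANT_TYPE_INVALID,  json_dispatch_string, offsetof(Foo, extra), 0 },
        {}
};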
|
This is preparation for making our Varlink API a public API. Since our
Varlink API is built on top of our JSON API we need to make that public
first (it's a nice API, but there are already enough JSON APIs out there;
this is purely about the Varlink angle).
I made most of the json.h APIs public, and just placed them in
sd-json.h. Sometimes I wasn't so sure however, since the underlying data
structures would have to be made public too. If in doubt I didn't risk
it, and moved the relevant API to src/libsystemd/sd-json/json-util.h
instead (without any sd_* symbol prefixes).
This is mostly a giant search/replace patch.
|
Previously, the flag would completely refuse formatting a JSON object if
any field of it was marked sensitive. With this change we'll simply
replace the sub-object with the string "<sensitive data>", and show
everything else.
This is tremendously useful when debugging, since it means that we can
again trace Varlink calls through the stack: we can show all the message
metadata and just suppress the actually sensitive parameters.
The ability to debug this matters, and we should not hide more
information than we can get away with, to keep things debuggable and
maintainable.
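In practice this means a formatting call along these lines now succeeds and
just masks the secret (the flag name is my assumption, it is not spelled out
in the message above):

_cleanup_free_ char *text = NULL;

/* Assumed flag name; illustrative only. */
r = json_variant_format(v, JSON_FORMAT_CENSOR_SENSITIVE, &text);
/* text might now read: {"user":"lennart","secret":"<sensitive data>"} */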
|
tpm2-util: convert various things over to struct iovec rather than data ptr + size
|
Returns -EPERM if any node in the variant is marked as sensitive. This is
useful to avoid leaking sensitive data into log messages and the like.
|
JSON famously is problematic with integers beyond 53 bits, because
JavaScript stores everything in double-precision floating point.
Various implementations in other languages can deal with signed 64 bit
integers, and a few can deal with unsigned 64 bit too (like ours).
Typically, programs that need more than 53 bits of accuracy encode integers
as decimal strings, to make sure that even if consumers can't really
process larger values they at least won't corrupt the data while passing
it along. This is also recommended by I-JSON (RFC 7493).
To maximize compatibility with other implementations let's add first-class
parsing support for such objects in the json_dispatch() API.
This makes json_dispatch_uint64() and related calls parse such
integers-formatted-as-decimal-strings as uint64_t. This logic will only
be enabled if the "type" field of JsonDispatch is left unspecified (i.e.
set to negative/_JSON_VARIANT_TYPE_INVALID) though, hence on its own it does
not change anything in effect.
This is purely about consuming such values; whether we should generate
them too is a discussion for a separate PR.
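The string-to-integer fallback boils down to something like this (a
simplified sketch, not the actual json_dispatch_uint64() code, which is
stricter about leading signs and whitespace):

#include <errno.h>
#include <inttypes.h>
#include <stdlib.h>

static int parse_uint64_string(const char *s, uint64_t *ret) {
        char *end = NULL;

        errno = 0;
        unsigned long long u = strtoull(s, &end, 10);
        if (errno != 0)
                return -errno;
        if (!end || end == s || *end != '\0')
                return -EINVAL;   /* trailing garbage or empty string */

        *ret = (uint64_t) u;
        return 0;
}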
|
json_append() is a useful wrapper around json_variant_merge(). However,
I think the naming should be cleaned up a bit for both functions.
I think "merge" is the better word than "append", since it does
decidedly more than just append: it replaces existing fields of the same
name, hence "merge" sounds more appropriate. This is as opposed to the
similar operations for arrays, where no such override logic is applied
and we really just append, hence those functions are called "append"
already.
To make it clearer that "merge" is about objects, and "append" about
arrays, also include "object" in the name.
Also, include "json_variant" in the name, like we do for almost all
other functions in the JSON API that take a JSON object as primary
input, and hence are kind of object methods.
Finally, let's follow the logic that helpers that combine json_build()
with some other operation get suffixed with "b", like we already have in
some cases.
Hence:
json_variant_merge() → json_variant_merge_object()
json_append() → json_variant_merge_objectb()
This mirrors nicely the existing:
json_variant_append_array()
json_variant_append_arrayb()
This also drops the variant of json_append() that takes a va_arg
parameter (i.e. json_appendv()). We have no user of that so far, and
given its nature as a helper function only I don't see that happening,
and if it happens after all it's trivial to bring back.
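For example, merging the objects {"a":1,"b":2} and {"b":3,"c":4} yields
{"a":1,"b":3,"c":4} (the second "b" wins), while appending 3 to the array
[1,2] simply yields [1,2,3]; that asymmetry is what the merge/append naming
is meant to capture.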
|
"static inline" makes sense in .h files. But in .c files it's useless
decoration, the compiler should just make its own decisions there, and
it can do that.
hence, replace all remaining uses of "static line" by a simple" static"
in all .c files (but keep them in .h files, where they make sense)
|
We generate this implicitly, hence we generally don't include it
explicitly.
|
Recent gcc versions have started to trigger false positive
maybe-uninitialized warnings. Let's make sure we initialize
variables annotated with _cleanup_ to avoid these.
|
The source would be set implicitly when parsing from a named file. But
it's useful to specify the source also for cases where we're parsing a
string that is already in memory. I noticed the lack of this API when
trying to write tests, but it seems generally useful to be able to
specify a source name when parsing things.
|
util.h is now about logarithms only, so we can rename it. Many files included
util.h for no apparent reason… Those includes are dropped.
|
It is useful to distinguish if json_parse_file() got no input or invalid input.
Use different return codes for the two cases.
|
This also drops an unnecessary cast.
|
Follow-up for 337712e777bff389f53e26d5b378d2ceba7d98a8, aka "the great
un-long-double-ification of 2021".
|
This converts to TEST macro where it is trivial.
Some additional notable changes:
- simplify HAVE_LIBIDN #ifdef in test-dns-domain.c
- use saved_argc/saved_argv in test-copy.c, test-path-util.c,
test-tmpfiles.c and test-unit-file.c
|
This macro is like JSON_BUILD_STRING() but uses our JSON library's
ability to use literal strings directly as JsonVariant objects.
This changes all of our codebase to use this new macro whenever we build
JSON objects from literal strings.
(I tried to make this automatic, i.e. to detect in JSON_BUILD_STRING()
whether something is a literal string and thus do this stuff
automatically, but I couldn't find a nice way.)
This should reduce memory usage of our JSON code a bit. Constant strings
we use very often will now be shared and mapped directly from the ELF
image.
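Usage is a drop-in replacement wherever the string is a literal; a sketch
(macro spellings as in systemd's json.h of the time, surrounding call
illustrative):

r = json_build(&v, JSON_BUILD_OBJECT(
                       JSON_BUILD_PAIR("type", JSON_BUILD_CONST_STRING("method_call")),
                       JSON_BUILD_PAIR("id", JSON_BUILD_UNSIGNED(id))));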
|
The rest of our JSON code tries hard to magically convert NULL inputs
into "null" JSON objects; let's make sure this also works with
json_variant_set_field().
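I.e. a call like the following is expected to produce a JSON null field
rather than an error (a fragment for illustration):

r = json_variant_set_field(&v, "optional", NULL);
/* v now contains ... "optional": null ... */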
|
We were already asserting that the intmax_t and uintmax_t types
are the same as int64_t and uint64_t. Pretty much everywhere in
the code base we use the latter types. In principle intmax_t could
be something different on some new architecture, and then the code would
fail to compile or behave differently. We actually do not want the code
to behave differently on those architectures, because that'd break
interoperability. So let's just use int64_t/uint64_t since that's what
we intend to use.
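The assertion in question, expressed in plain C11 (systemd spells it with
its own assert_cc() helper):

#include <stdint.h>

_Static_assert(sizeof(intmax_t) == sizeof(int64_t), "intmax_t is not 64 bit");
_Static_assert(sizeof(uintmax_t) == sizeof(uint64_t), "uintmax_t is not 64 bit");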
|
It seems that the implementation of long double on ppc64el doesn't really work:
long double cast to integer and back compares as unequal to itself. Strangely,
this effect happens without optimization and both with gcc and clang, so it
seems to be an effect of how long double is implemented by the architecture.
Dumping the values shows the following pattern:
00 00 00 00 00 00 24 40 00 00 00 00 00 00 00 00 # long double v = 10;
00 00 00 00 00 00 24 40 00 00 00 00 00 00 80 39 # (long double)(intmax_t) v
Instead of trying to make this work, I think it's most reasonable to switch to
normal doubles. Notably, we had no tests for floating point behaviour. The
first test we added (for values not even outside the range of double)
showed failures.
Common implementations of JSON (in particular JavaScript) use 64 bit double.
If we stick to this, users are likely to be happy when they exchange data with
those tools. Exporting values that cannot be represented in other tools would
just cause interop problems.
I don't think the extra precision would be much used. Long double seems to make
most sense as a transient format used in calculations to get extra precision in
operations, and not a storage or exchange format. So I expect low-level
numerical routines that have to know about hardware to make use of it, but it
shouldn't be used by our (higher-level) system library. In particular, we would
have to add tests for implementations conforming to IEEE 754, and those that
don't conform, and account for various implementation differences. It just
doesn't seem worth the effort.
https://en.wikipedia.org/wiki/Long_double#Implementations shows that the
situation is "complicated":
> On the x86 architecture, most C compilers implement long double as the 80-bit
> extended precision type supported by x86 hardware. An exception is Microsoft
> Visual C++ for x86, which makes long double a synonym for double. The Intel
> C++ compiler on Microsoft Windows supports extended precision, but requires
> the /Qlong‑double switch for long double to correspond to the hardware's
> extended precision format.
> Compilers may also use long double for the IEEE 754 quadruple-precision
> binary floating-point format (binary128). This is the case on HP-UX,
> Solaris/SPARC, MIPS with the 64-bit or n32 ABI, 64-bit ARM (AArch64) (on
> operating systems using the standard AAPCS calling conventions, such as
> Linux), and z/OS with FLOAT(IEEE). Most implementations are in software, but
> some processors have hardware support.
> On some PowerPC and SPARCv9 machines, long double is implemented as a
> double-double arithmetic, where a long double value is regarded as the exact
> sum of two double-precision values, giving at least a 106-bit precision; with
> such a format, the long double type does not conform to the IEEE
> floating-point standard. Otherwise, long double is simply a synonym for
> double (double precision), e.g. on 32-bit ARM, 64-bit ARM (AArch64) (on
> Windows and macOS) and on 32-bit MIPS (old ABI, a.k.a. o32).
> With the GNU C Compiler, long double is 80-bit extended precision on x86
> processors regardless of the physical storage used for the type (which can be
> either 96 or 128 bits). On some other architectures, long double can be
> double-double (e.g. on PowerPC) or 128-bit quadruple precision (e.g. on
> SPARC). As of gcc 4.3, a quadruple precision is also supported on x86, but as
> the nonstandard type __float128 rather than long double.
> Although the x86 architecture, and specifically the x87 floating-point
> instructions on x86, supports 80-bit extended-precision operations, it is
> possible to configure the processor to automatically round operations to
> double (or even single) precision. Conversely, in extended-precision mode,
> extended precision may be used for intermediate compiler-generated
> calculations even when the final results are stored at a lower precision
> (i.e. FLT_EVAL_METHOD == 2). With gcc on Linux, 80-bit extended precision is
> the default; on several BSD operating systems (FreeBSD and OpenBSD),
> double-precision mode is the default, and long double operations are
> effectively reduced to double precision. (NetBSD 7.0 and later, however,
> defaults to 80-bit extended precision). However, it is possible to override
> this within an individual program via the FLDCW "floating-point load
> control-word" instruction. On x86_64, the BSDs default to 80-bit extended
> precision. Microsoft Windows with Visual C++ also sets the processor in
> double-precision mode by default, but this can again be overridden within an
> individual program (e.g. by the _controlfp_s function in Visual C++). The
> Intel C++ Compiler for x86, on the other hand, enables extended-precision
> mode by default. On IA-32 OS X, long double is 80-bit extended precision.
So, in short, the only thing that can be said is that nothing can be said. In
common scenarios, we are getting only a bit of extra precision (80 bits instead
of 64), but use space for padding. In other scenarios we are getting no extra
precision. And the variance in implementations is a big issue: we can expect
strange differences in behaviour between architectures, systems, compiler
versions, compilation options, and even the other things that the program is
doing.
Fixes #21390.
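A minimal reproduction of the failing round-trip described at the top of
this message (illustrative, not the actual test code):

#include <assert.h>
#include <stdint.h>

int main(void) {
        long double v = 10;

        /* Per the message above, this held elsewhere but failed on ppc64el
         * with its double-double implementation of long double. */
        assert(v == (long double)(intmax_t) v);
        return 0;
}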
|
Test that we don't lose accuracy without bounds for extreme values, and
validate that nan/inf/-inf actually get converted to null properly.
|
JSON strings must be utf-8-clean. We also verify this in json_parse_string()
so we would reject a message with invalid utf-8 anyway.
It would probably be slightly cheaper to detect non-conforming strings in
serialization, but then we'd have to fail serialization. By doing this early,
we give the caller a chance to handle the error nicely.
The test is adjusted to contain a valid utf-8 string after decoding of the
utf-32 encoding in json ("विवेकख्यातिरविप्लवा हानोपायः।", something about the
cessation of ignorance).
|
Let's add a concept of normalization: as preparation for signing json
records let's add a mechanism to bring JSON records into a well-defined
order so that we can safely validate JSON records.
This adds two booleans to each JsonVariant object: "sorted" and
"normalized". The latter indicates whether a variant is fully sorted
(i.e. all keys of objects listed in alphabetical order) recursively down
the tree. The former is a weaker property: it only checks whether the
keys of the object itself are sorted. All variants which are
"normalized" are also "sorted", but not vice versa.
The knowledge of the "sorted" property is then used to optimize
searching for keys in the variant by using bisection.
Both properties are determined at the moment the variants are allocated.
Since our objects are immutable this is safe.
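The bisection that the "sorted" bit enables is the classic binary search
over the object's keys; roughly (illustrative types, not the actual
JsonVariant internals):

#include <string.h>
#include <sys/types.h>

/* Return the index of 'needle' in the sorted key array, or -1. */
static ssize_t find_key(char **keys, size_t n, const char *needle) {
        size_t lo = 0, hi = n;

        while (lo < hi) {
                size_t mid = (lo + hi) / 2;
                int c = strcmp(keys[mid], needle);

                if (c == 0)
                        return (ssize_t) mid;
                if (c < 0)
                        lo = mid + 1;
                else
                        hi = mid;
        }

        return -1;
}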
|
This will call json_variant_sensitive() internally while parsing for
each allocated sub-variant. This is better than calling it a posteriori
at the end, because partially parsed variants will always be properly
erased from memory this way.
|
Should finally fix oss-fuzz-14688.
8688c29b5aece49805a244676cba5bba0196f509 wasn't enough.
The buffer retrieved from the memstream has a size that is the same as the
written data. When we do write(f, s, strlen(s)), no terminating NUL is
written, and the buffer is not (necessarily) a proper C string.
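The defensive pattern this implies is simply to write the terminator
explicitly before handing the buffer around; a self-contained sketch (not
the actual systemd code):

#include <stdio.h>
#include <stdlib.h>

static int example(char **ret) {
        char *buf = NULL;
        size_t sz = 0;

        FILE *f = open_memstream(&buf, &sz);
        if (!f)
                return -1;

        fputs("{\"a\":1}", f);
        fputc('\0', f);           /* guarantee the buffer is a proper C string */

        if (fclose(f) != 0) {     /* flushes and finalizes buf/sz */
                free(buf);
                return -1;
        }

        *ret = buf;
        return 0;
}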
|
This might make things marginally faster. I didn't benchmark though.
|
Fixes #11600.
The code was effectively doing:
json_build(..., ({ char **_x = ((char**) ((const char*[]) {"one", "two", "three", "four", NULL })); _x; }));
but there was no guarantee that the storage for the array that _x points to
survives past the end of the block. Essentially, STRV_MAKE cannot be used
inline inside of a block like this.
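One way out, as a sketch: give the string array a named variable whose
lifetime spans the whole call instead of materializing it inside a
statement expression (JSON_BUILD_STRV as in systemd's json.h; the
surrounding call is illustrative):

const char *elements[] = { "one", "two", "three", "four", NULL };

r = json_build(&v, JSON_BUILD_OBJECT(
                       JSON_BUILD_PAIR("list", JSON_BUILD_STRV((char**) elements))));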
|
Found by inspecting results of running this small program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print any line that is identical to the line immediately before it. */
int main(int argc, const char **argv) {
        for (int i = 1; i < argc; i++) {
                FILE *f;
                char line[1024], prev[1024], *r;
                int lineno;

                prev[0] = '\0';
                lineno = 1;
                f = fopen(argv[i], "r");
                if (!f)
                        exit(1);
                do {
                        r = fgets(line, sizeof(line), f);
                        if (!r)
                                break;
                        if (strcmp(line, prev) == 0)
                                printf("%s:%d: error: dup %s", argv[i], lineno, line);
                        lineno++;
                        strcpy(prev, line);
                } while (!feof(f));
                fclose(f);
        }
        return 0;
}
|
The test fails under valgrind, so there was an exception for valgrind.
Unfortunately that check only works when valgrind-devel headers are
available during build. But it is possible to have just valgrind installed,
or simply install it after the build, and then "valgrind test-json" would
fail.
It also seems that even without valgrind, this fails on some arm32 CPUs.
Let's do the usual-style test for absolute and relative differences.
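The "usual-style" comparison amounts to accepting either a small absolute
or a small relative difference, along these lines (epsilon values are
illustrative, not the ones used in the test):

#include <math.h>
#include <stdbool.h>

static bool fp_close(double a, double b) {
        double d = fabs(a - b);

        return d < 1e-10 ||                          /* absolute difference */
               d < 1e-6 * fmax(fabs(a), fabs(b));    /* relative difference */
}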
|
Just some extra care to avoid any ambiguities in what we read.
|
Quite often when we generate objects some fields should only be
generated under some conditions. Let's add high-level support for that.
Matching the existing JSON_BUILD_PAIR() this adds
JSON_BUILD_PAIR_CONDITIONAL() which is very similar, but takes an
additional parameter: a boolean condition. If true this acts like
JSON_BUILD_PAIR(), but if false then the whole pair is suppressed.
This sounds simple, but requires a tiny bit of complexity: when complex
sub-variants are used in fields, then we also need to suppress them.
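A usage sketch, assuming the condition is passed as the first argument ahead
of the usual name/value pair (names and surrounding call are illustrative):

r = json_build(&v, JSON_BUILD_OBJECT(
                       JSON_BUILD_PAIR("name", JSON_BUILD_STRING(name)),
                       JSON_BUILD_PAIR_CONDITIONAL(uid_is_valid(uid),
                                                   "uid", JSON_BUILD_UNSIGNED(uid))));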
|