| Commit message | Author | Age | Files | Lines |
| |
This macro is like JSON_BUILD_STRING() but uses our json library's
ability to use literal strings directly as JsonVariant objects.
This changes all of our codebase to use this new macro whenever we build
JSON objects from literal strings.
(I tried to make this automatic, i.e. to detect in JSON_BUILD_STRING()
whether the argument is a literal string and thus do this stuff
automatically, but I couldn't find a way.)
This should reduce memory usage of our JSON code a bit. Constant strings
we use very often will now be shared and mapped directly from the ELF
image.
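For example, a call site then looks something like this (the field name
here is made up for illustration):
    /* "user" is a literal, hence JSON_BUILD_CONST_STRING() lets the
     * variant reference it in the ELF image instead of copying it. */
    JsonVariant *v = NULL;
    int r;
    r = json_build(&v, JSON_BUILD_OBJECT(
                    JSON_BUILD_PAIR("type", JSON_BUILD_CONST_STRING("user"))));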
|
| |
We were already asserting that the intmax_t and uintmax_t types
are the same as int64_t and uint64_t. Pretty much everywhere in
the code base we use the latter types. In principle intmax_t could
be something different on some new architecture, and then the code would
fail to compile or behave differently. We actually do not want the code
to behave differently on those architectures, because that'd break
interoperability. So let's just use int64_t/uint64_t since that's what
we intend to use.
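I.e. checks along these lines (sketched with our assert_cc()):
    assert_cc(sizeof(intmax_t) == sizeof(int64_t));
    assert_cc(sizeof(uintmax_t) == sizeof(uint64_t));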
|
| |
It seems that the implementation of long double on ppc64el doesn't really work:
long double cast to integer and back compares as unequal to itself. Strangely,
this effect happens without optimization and both with gcc and clang, so it
seems to be an effect of how long double is implemented by the architecture.
Dumping the values shows the following pattern:
00 00 00 00 00 00 24 40 00 00 00 00 00 00 00 00 # long double v = 10;
00 00 00 00 00 00 24 40 00 00 00 00 00 00 80 39 # (long double)(intmax_t) v
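A stand-alone sketch of the failing round trip, per the dump above:
    #include <assert.h>
    #include <stdint.h>

    int main(void) {
            long double v = 10;

            /* Compares unequal on ppc64el's double-double long double,
             * with both gcc and clang, even without optimization. */
            assert(v == (long double)(intmax_t) v);
            return 0;
    }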
Instead of trying to make this work, I think it's most reasonable to switch to
normal doubles. Notably, we had no tests for floating point behaviour. The
first test we added (even for values not outside the range of double)
showed failures.
Common implementations of JSON (in particular JavaScript) use 64 bit double.
If we stick to this, users are likely to be happy when they exchange data with
those tools. Exporting values that cannot be represented in other tools would
just cause interop problems.
I don't think the extra precision would be much used. Long double seems to make
most sense as a transient format used in calculations to get extra precision in
operations, and not a storage or exchange format. So I expect low-level
numerical routines that have to know about hardware to make use of it, but it
shouldn't be used by our (higher-level) system library. In particular, we would
have to add tests for implementations conforming to IEEE 754, and those that
don't conform, and account for various implementation differences. It just
doesn't seem worth the effort.
https://en.wikipedia.org/wiki/Long_double#Implementations shows that the
situation is "complicated":
> On the x86 architecture, most C compilers implement long double as the 80-bit
> extended precision type supported by x86 hardware. An exception is Microsoft
> Visual C++ for x86, which makes long double a synonym for double. The Intel
> C++ compiler on Microsoft Windows supports extended precision, but requires
> the /Qlong-double switch for long double to correspond to the hardware's
> extended precision format.
> Compilers may also use long double for the IEEE 754 quadruple-precision
> binary floating-point format (binary128). This is the case on HP-UX,
> Solaris/SPARC, MIPS with the 64-bit or n32 ABI, 64-bit ARM (AArch64) (on
> operating systems using the standard AAPCS calling conventions, such as
> Linux), and z/OS with FLOAT(IEEE). Most implementations are in software, but
> some processors have hardware support.
> On some PowerPC and SPARCv9 machines, long double is implemented as a
> double-double arithmetic, where a long double value is regarded as the exact
> sum of two double-precision values, giving at least a 106-bit precision; with
> such a format, the long double type does not conform to the IEEE
> floating-point standard. Otherwise, long double is simply a synonym for
> double (double precision), e.g. on 32-bit ARM, 64-bit ARM (AArch64) (on
> Windows and macOS) and on 32-bit MIPS (old ABI, a.k.a. o32).
> With the GNU C Compiler, long double is 80-bit extended precision on x86
> processors regardless of the physical storage used for the type (which can be
> either 96 or 128 bits). On some other architectures, long double can be
> double-double (e.g. on PowerPC) or 128-bit quadruple precision (e.g. on
> SPARC). As of gcc 4.3, a quadruple precision is also supported on x86, but as
> the nonstandard type __float128 rather than long double.
> Although the x86 architecture, and specifically the x87 floating-point
> instructions on x86, supports 80-bit extended-precision operations, it is
> possible to configure the processor to automatically round operations to
> double (or even single) precision. Conversely, in extended-precision mode,
> extended precision may be used for intermediate compiler-generated
> calculations even when the final results are stored at a lower precision
> (i.e. FLT_EVAL_METHOD == 2). With gcc on Linux, 80-bit extended precision is
> the default; on several BSD operating systems (FreeBSD and OpenBSD),
> double-precision mode is the default, and long double operations are
> effectively reduced to double precision. (NetBSD 7.0 and later, however,
> defaults to 80-bit extended precision). However, it is possible to override
> this within an individual program via the FLDCW "floating-point load
> control-word" instruction. On x86_64, the BSDs default to 80-bit extended
> precision. Microsoft Windows with Visual C++ also sets the processor in
> double-precision mode by default, but this can again be overridden within an
> individual program (e.g. by the _controlfp_s function in Visual C++). The
> Intel C++ Compiler for x86, on the other hand, enables extended-precision
> mode by default. On IA-32 OS X, long double is 80-bit extended precision.
So, in short, the only thing that can be said is that nothing can be said. In
common scenarios, we are getting only a bit of extra precision (80 bits instead
of 64), but waste space on padding. In other scenarios we get no extra
precision. And the variance in implementations is a big issue: we can expect
strange differences in behaviour between architectures, systems, compiler
versions, compilation options, and even the other things that the program is
doing.
Fixes #21390.
|
| |
compound statements
Compound statements are this stuff: ({ … })
Compound literals are this stuff: (type) { … }
We use compound statements a lot in macro definitions. They have one
drawback though: they define a code block of their own, hence if macro
invocations nested within them use compound literals, the literals'
lifetime is limited to that code block, which might be unexpected.
Thankfully, we can rework things from compound statements to compound
literals in the case of json.h: they don't open a new code block, and
hence do not suffer from the problem explained above.
The interesting thing about compound literals is that they also work
for simple types, not just for structs/unions/arrays. We can use this
here for a typechecked implicit conversion: we want to superficially
typecheck arguments to the json_build() varargs function, and we do that
by assigning the specified arguments to our compound literals, which
does the minimal amount of typechecking and ensures that types are
propagated correctly.
We need one special tweak for this: sd_id128_t is not a simple type but
a union. Using compound literals for initializing that would mean
specifying the components of the union, not a complete sd_id128_t. Our
hack around that: instead of passing the object directly via the stack
we now take a pointer (and thus a simple type) instead.
Nice side effect of all this: compound literals are C99, while compound
statements are a GCC extension, hence we move closer to standard C.
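A simplified sketch of the trick (BUILD_UNSIGNED() and consume() are
made up; the real json.h macros are more involved):
    #include <stdint.h>

    /* Wrapping the argument in a compound literal of the expected type
     * makes the compiler apply initialization rules: integers are
     * converted, but e.g. a pointer argument is diagnosed instead of
     * being silently reinterpreted by va_arg() later. */
    #define BUILD_UNSIGNED(u) "u", (uint64_t) { u }

    static void consume(const char *first, ...) { (void) first; }

    int main(void) {
            consume(BUILD_UNSIGNED(42));        /* OK: converted to uint64_t */
            /* consume(BUILD_UNSIGNED("x")); */ /* diagnosed: pointer to integer */
            return 0;
    }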
Fixes: #20501
Replaces: #20512
|
| |
Prompted by https://bugzilla.redhat.com/show_bug.cgi?id=1930875 in which
I had previously used json_dispatch_unsigned and passed a return variable of
type unsigned when json_dispatch_unsigned writes a uintmax_t.
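The mismatch looks like this (field names made up):
    #include <stdint.h>

    struct reply {
            unsigned bad;      /* 4 bytes: writing a uintmax_t (8 bytes on
                                * our targets) through its address clobbers
                                * whatever comes next */
            uintmax_t good;    /* matches what json_dispatch_unsigned()
                                * actually stores */
    };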
|
| |
json.[ch] is a very generic implementation, and cmdline argument parsing
doesn't fit there.
|
| |
As suggested in https://github.com/systemd/systemd/pull/11484#issuecomment-775288617.
This does not touch anything exposed in src/systemd. Changing the defines there
would be a compatibility break.
Note that tests are broken after this commit. They will be fixed in the next one.
|
| |
This adds a new flag JSON_FORMAT_OFF that is a marker for "no JSON
output please!".
Of course, this flag sounds pointless in a JSON implementation, however
this is useful in code that can generate JSON output, but also more
human friendly output (for example our table formatters).
With this in place various tools that so far maintained one boolean
field "arg_json" that controlled whether JSON output was requested at
all and another field "arg_json_format_flags" for selecting the precise
JSON output flags may merge them into one, simplifying code a bit.
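I.e. something like this (output() is made up; table_print() and
json_variant_dump() used roughly as in our tree):
    static JsonFormatFlags arg_json_format_flags = JSON_FORMAT_OFF;

    static void output(Table *t, JsonVariant *v) {
            if (arg_json_format_flags & JSON_FORMAT_OFF)
                    (void) table_print(t, stdout);  /* human-friendly path */
            else
                    json_variant_dump(v, arg_json_format_flags, stdout, NULL);
    }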
|
| |
This is similar to the base64 support, but fixed-size hash values are
typically best presented as a series of hex digits, hence allow storing
them like that here too.
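Usage might look like this (a JSON_BUILD_HEX builder mirroring the
base64 one is an assumption of this sketch):
    uint8_t hash[32];  /* filled in elsewhere */
    JsonVariant *v = NULL;
    int r;
    r = json_build(&v, JSON_BUILD_OBJECT(
                    JSON_BUILD_PAIR("fingerprint", JSON_BUILD_HEX(hash, sizeof(hash)))));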
|
| |
Let's make sure we never lose the bit when copying a variant, after all
the data contained is still going to be sensitive after the copy.
|
| |
This reworks the user validation infrastructure. There are now two
modes. In regular mode we are strict and test against a strict set of
valid chars. And in "relaxed" mode we just filter out some really
obvious, dangerous stuff. i.e. strict is whitelisting what is OK, but
"relaxed" is blacklisting what is really not OK.
The idea is that we use strict mode whenever we allocate a new user
(i.e. in sysusers.d or homed), while "relaxed" mode is used when we
process users registered elsewhere (i.e. userdb, logind, …).
The requirements on user name validity vary wildly. SSSD thinks it's
fine to embed "@" for example, while the suggested NAME_REGEX field on
Debian does not even allow uppercase chars…
This effectively liberalizes a lot of what we expect from user names.
The code that warns about questionable user names is now optional and
only used at places such as unit file parsing, so that it doesn't show
up on every userdb query, but only when processing configuration files
that know better.
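Roughly (a sketch only, not the actual rules in our user-util code):
    #include <ctype.h>
    #include <stdbool.h>
    #include <string.h>

    /* Strict: whitelist a conservative set, for names we allocate. */
    static bool user_name_valid_strict(const char *u) {
            if (!u || !*u)
                    return false;
            if (!isalpha((unsigned char) u[0]) && u[0] != '_')
                    return false;
            for (const char *p = u + 1; *p; p++)
                    if (!isalnum((unsigned char) *p) && !strchr("_-", *p))
                            return false;
            return true;
    }

    /* Relaxed: only blacklist the obviously dangerous, for foreign names. */
    static bool user_name_valid_relaxed(const char *u) {
            if (!u || !*u || strcmp(u, ".") == 0 || strcmp(u, "..") == 0)
                    return false;
            for (const char *p = u; *p; p++)
                    if (*p == ':' || *p == '/' || (unsigned char) *p < ' ')
                            return false;
            return true;
    }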
Fixes: #15149 #15090
|
| |
This takes inspiration from JSON_COLOR_AUTO: it will automatically map
to JSON_PRETTY if connected to a TTY and JSON_NEWLINE otherwise.
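I.e. conceptually (flag names per the description above, exact check
assumed):
    if (flags & JSON_FORMAT_PRETTY_AUTO)
            flags |= isatty(STDOUT_FILENO) ? JSON_FORMAT_PRETTY : JSON_FORMAT_NEWLINE;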
|
| |
This adds json_dispatch_const_string() which is similar to
json_dispatch_string() but doesn't store a strdup()'ed copy of the
string, but a pointer directly into the JSON record.
This should simplify cases where the json variant sticks around long
enough anyway.
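For example (JsonDispatch layout as in json.h; struct and field names
made up):
    struct user_info {
            const char *user_name;  /* points into the variant, not strdup()ed */
    };

    static const JsonDispatch dispatch_table[] = {
            { "userName", JSON_VARIANT_STRING, json_dispatch_const_string,
              offsetof(struct user_info, user_name), 0 },
            {}
    };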
|
| |
json_variant_set_field_boolean() helpers
|
| |
This is particularly useful when no trailing \n is generated, i.e. stdio
doesn't flush the output on its own.
|
| |
Let's add a concept of normalization: as preparation for signing JSON
records, let's add a mechanism to bring them into a well-defined
order so that we can safely validate them.
This adds two booleans to each JsonVariant object: "sorted" and
"normalized". The latter indicates whether a variant is fully sorted
(i.e. all keys of objects listed in alphabetical order) recursively down
the tree. The former is a weaker property: it only checks whether the
keys of the object itself are sorted. All variants which are
"normalized" are also "sorted", but not vice versa.
The knowledge of the "sorted" property is then used to optimize
searching for keys in the variant by using bisection.
Both properties are determined at the moment the variants are allocated.
Since our objects are immutable this is safe.
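The bisection then conceptually looks like this (the pairwise key/value
layout is an assumption of this sketch):
    #include <string.h>

    /* Keys are sorted, so we can halve the search space each step. */
    static JsonVariant *find_sorted(JsonVariant **pairs, size_t n_pairs,
                                    const char *key) {
            size_t lo = 0, hi = n_pairs;

            while (lo < hi) {
                    size_t mid = lo + (hi - lo) / 2;
                    int c = strcmp(json_variant_string(pairs[2 * mid]), key);

                    if (c == 0)
                            return pairs[2 * mid + 1];
                    if (c < 0)
                            lo = mid + 1;
                    else
                            hi = mid;
            }

            return NULL;
    }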
|
| |
Only works for arrays of strings, of course.
|
| |
This will call json_variant_sensitive() internally while parsing for
each allocated sub-variant. This is better than calling it a posteriori
at the end, because partially parsed variants will always be properly
erased from memory this way.
|
| |
This is an "at" function, similar to json_parse_file().
|
| |
An object marked with this flag will be erased from memory when it is
freed. This is useful for dealing with sensitive data (key material,
passphrases) encoded in JSON objects.
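Conceptually, the free path then does something like this (field names
made up; explicit_bzero() stands in for whatever erasure helper is
used):
    if (v->sensitive)
            explicit_bzero(v->payload, v->payload_size);  /* scrub before freeing */
    free(v);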
|
| |
We use assert_cc(...); almost everywhere. Let's always require that.
https://github.com/systemd/systemd/issues/12997#issuecomment-510103988
|
| |
This replaces the internal uses of __FILE__ with the new macro.
|
| |
Let's add an additional paranoia check, and not accept embedded NUL
bytes in strings, just in case.
|
| |
Let's exit the loop early in case the variant is not actually an object
or array. This is safer since otherwise we might end up iterating
through these variants and access fields that aren't of the type we
expect them to be and then bad things happen.
Of course, this doesn't absolve users of these macros from checking the
type of the variant explicitly beforehand, but it makes things less bad
if they forget to do so.
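Simplified sketch of the guard (the real macro differs in detail):
    #define JSON_VARIANT_ARRAY_FOREACH(i, v)                          \
            for (size_t _i = 0;                                       \
                 json_variant_is_array(v) &&                          \
                 _i < json_variant_elements(v) &&                     \
                 ((i) = json_variant_by_index(v, _i));                \
                 _i++)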
|
| |
There's no point in returning the "key" within each loop iteration as a
JsonVariant object. Let's simplify things and return it as a string. That
simplifies usage (since the caller doesn't have to convert the object to
the string anymore) and is safe since we already validate that keys are
strings when an object JsonVariant is allocated.
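Usage then becomes (v assumed to be an object variant):
    const char *k;
    JsonVariant *e;

    JSON_VARIANT_OBJECT_FOREACH(k, e, v)
            printf("%s\n", k);  /* k is now a plain string, no conversion needed */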
|
| |
This means we need to include many more headers in various files that simply
included util.h before, but it seems cleaner to do it this way.
|