path: root/src/shared/json.h
Commit message · Author · Age · Files · Lines

* json: introduce several macros for building json object (Yu Watanabe, 2021-11-25, 1 file, -1/+57)

* json: add new JSON_BUILD_CONST_STRING() macro (Lennart Poettering, 2021-11-25, 1 file, -0/+1)
  This macro is like JSON_BUILD_STRING(), but uses our json library's ability to use literal strings directly as JsonVariant objects. This changes all of our codebase to use this new macro whenever we build JSON objects from literal strings. (I tried to make this automatic, i.e. to detect in JSON_BUILD_STRING() whether something is a literal string, and thus do this stuff automatically, but I couldn't find a way.)

  This should reduce memory usage of our JSON code a bit: constant strings we use very often will now be shared and mapped directly from the ELF image.
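
  A minimal usage sketch, assuming the json_build()/JSON_BUILD_OBJECT()/JSON_BUILD_PAIR() helpers from this header; the "state" field is an invented example:

      #include "json.h"

      static int build_status(JsonVariant **ret) {
              /* "running" is a string literal, so JSON_BUILD_CONST_STRING() can
               * map it directly from the ELF image instead of allocating a copy. */
              return json_build(ret, JSON_BUILD_OBJECT(
                              JSON_BUILD_PAIR("state", JSON_BUILD_CONST_STRING("running"))));
      }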

* shared/json: use int64_t instead of intmax_t (Zbigniew Jędrzejewski-Szmek, 2021-11-18, 1 file, -16/+10)
  We were already asserting that the intmax_t and uintmax_t types are the same as int64_t and uint64_t, and pretty much everywhere in the code base we use the latter types. In principle, intmax_t could be something different on some new architecture, and then the code would fail to compile or would behave differently. We actually do not want the code to behave differently on those architectures, because that'd break interoperability. So let's just use int64_t/uint64_t, since that's what we intend to use.
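
  The compile-time guard the message alludes to might look like this; a sketch using the assert_cc() macro from this code base (the exact location of the original assertions is not shown in the log):

      #include <stdint.h>
      #include "macro.h"

      /* If some architecture ever makes intmax_t wider than 64 bits, fail
       * the build instead of silently changing on-the-wire behaviour. */
      assert_cc(sizeof(intmax_t) == sizeof(int64_t));
      assert_cc(sizeof(uintmax_t) == sizeof(uint64_t));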

* shared/json: stop using long double (Zbigniew Jędrzejewski-Szmek, 2021-11-18, 1 file, -5/+5)
  It seems that the implementation of long double on ppc64el doesn't really work: a long double cast to integer and back compares as unequal to itself. Strangely, this effect happens without optimization and both with gcc and clang, so it seems to be an effect of how long double is implemented by the architecture. Dumping the values shows the following pattern:

      00 00 00 00 00 00 24 40 00 00 00 00 00 00 00 00  # long double v = 10;
      00 00 00 00 00 00 24 40 00 00 00 00 00 00 80 39  # (long double)(intmax_t) v

  Instead of trying to make this work, I think it's most reasonable to switch to normal doubles. Notably, we had no tests for floating point behaviour; the first test we added (even for values not outside the range of double) showed failures.

  Common implementations of JSON (in particular JavaScript) use 64-bit double. If we stick to this, users are likely to be happy when they exchange data with those tools. Exporting values that cannot be represented in other tools would just cause interop problems.

  I don't think the extra precision would be much used. Long double seems to make most sense as a transient format used in calculations to get extra precision in operations, and not as a storage or exchange format. So I expect low-level numerical routines that have to know about hardware to make use of it, but it shouldn't be used by our (higher-level) system library. In particular, we would have to add tests for implementations conforming to IEEE 754 and for those that don't conform, and account for various implementation differences. It just doesn't seem worth the effort.

  https://en.wikipedia.org/wiki/Long_double#Implementations shows that the situation is "complicated":

  > On the x86 architecture, most C compilers implement long double as the
  > 80-bit extended precision type supported by x86 hardware. An exception is
  > Microsoft Visual C++ for x86, which makes long double a synonym for
  > double. The Intel C++ compiler on Microsoft Windows supports extended
  > precision, but requires the /Qlong-double switch for long double to
  > correspond to the hardware's extended precision format.
  >
  > Compilers may also use long double for the IEEE 754 quadruple-precision
  > binary floating-point format (binary128). This is the case on HP-UX,
  > Solaris/SPARC, MIPS with the 64-bit or n32 ABI, 64-bit ARM (AArch64) (on
  > operating systems using the standard AAPCS calling conventions, such as
  > Linux), and z/OS with FLOAT(IEEE). Most implementations are in software,
  > but some processors have hardware support.
  >
  > On some PowerPC and SPARCv9 machines, long double is implemented as a
  > double-double arithmetic, where a long double value is regarded as the
  > exact sum of two double-precision values, giving at least a 106-bit
  > precision; with such a format, the long double type does not conform to
  > the IEEE floating-point standard. Otherwise, long double is simply a
  > synonym for double (double precision), e.g. on 32-bit ARM, 64-bit ARM
  > (AArch64) (on Windows and macOS) and on 32-bit MIPS (old ABI, a.k.a. o32).
  >
  > With the GNU C Compiler, long double is 80-bit extended precision on x86
  > processors regardless of the physical storage used for the type (which
  > can be either 96 or 128 bits). On some other architectures, long double
  > can be double-double (e.g. on PowerPC) or 128-bit quadruple precision
  > (e.g. on SPARC). As of gcc 4.3, a quadruple precision is also supported
  > on x86, but as the nonstandard type __float128 rather than long double.
  >
  > Although the x86 architecture, and specifically the x87 floating-point
  > instructions on x86, supports 80-bit extended-precision operations, it is
  > possible to configure the processor to automatically round operations to
  > double (or even single) precision. Conversely, in extended-precision
  > mode, extended precision may be used for intermediate compiler-generated
  > calculations even when the final results are stored at a lower precision
  > (i.e. FLT_EVAL_METHOD == 2). With gcc on Linux, 80-bit extended precision
  > is the default; on several BSD operating systems (FreeBSD and OpenBSD),
  > double-precision mode is the default, and long double operations are
  > effectively reduced to double precision. (NetBSD 7.0 and later, however,
  > defaults to 80-bit extended precision). However, it is possible to
  > override this within an individual program via the FLDCW "floating-point
  > load control-word" instruction. On x86_64, the BSDs default to 80-bit
  > extended precision. Microsoft Windows with Visual C++ also sets the
  > processor in double-precision mode by default, but this can again be
  > overridden within an individual program (e.g. by the _controlfp_s
  > function in Visual C++). The Intel C++ Compiler for x86, on the other
  > hand, enables extended-precision mode by default. On IA-32 OS X, long
  > double is 80-bit extended precision.

  So, in short, the only thing that can be said is that nothing can be said. In common scenarios we are getting only a bit of extra precision (80 bits instead of 64) but use space for padding; in other scenarios we are getting no extra precision. And the variance in implementations is a big issue: we can expect strange differences in behaviour between architectures, systems, compiler versions, compilation options, and even the other things that the program is doing.

  Fixes #21390.
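
  A minimal reproducer for the round trip described above; on most platforms this prints nothing, but per the dump in the commit message it would trigger on ppc64el:

      #include <stdint.h>
      #include <stdio.h>

      int main(void) {
              long double v = 10;

              /* On ppc64el's double-double long double, casting to an integer
               * and back was observed to compare unequal to the original. */
              if (v != (long double)(intmax_t) v)
                      printf("long double round trip is lossy here\n");

              return 0;
      }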

* json: rework JSON_BUILD_XYZ() macros to use compound literals instead of compound statements (Lennart Poettering, 2021-08-23, 1 file, -15/+15)

  Compound statements are this stuff: ({ … }). Compound literals are this stuff: (type) { … }.

  We use compound statements a lot in macro definitions. They have one drawback though: they define a code block of their own, hence if macro invocations are nested within them that use compound literals, the lifetime of those literals is limited to the code block, which might be unexpected. Thankfully, we can rework things from compound statements to compound literals in the case of json.h: they don't open a new code block, and hence do not suffer from the problem explained above.

  The interesting thing about compound literals is that they also work for simple types, not just for structs/unions/arrays. We can use this here for a typechecked implicit conversion: we want to superficially typecheck arguments to the json_build() varargs function, and we do that by assigning the specified arguments to our compound literals, which does the minimal amount of typechecking and ensures that types are propagated on correctly.

  We need one special tweak for this: sd_id128_t is not a simple type but a union. Using a compound literal for initializing that would mean specifying the components of the union, not a complete sd_id128_t. Our hack around that: instead of passing the object directly via the stack we now take a pointer (and thus a simple type) instead.

  Nice side effect of all this: compound literals are C99, while compound statements are a GCC extension, hence we move closer to standard C.

  Fixes: #20501
  Replaces: #20512
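
  The lifetime difference in a nutshell; a sketch with invented macro names, not the actual json.h macros:

      /* Compound statement (GCC extension): the literal lives only inside
       * ({ ... }), so the returned pointer dangles once the block ends. */
      #define INT_PTR_UNSAFE(x) ({ int *_p = &(int) { x }; _p; })

      /* Compound literal (C99): its lifetime is the caller's enclosing
       * block, so the pointer stays valid there. */
      #define INT_PTR_OK(x) (&(int) { x })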

* json: make JSON_VARIANT_ARRAY/OBJECT_FOREACH() nestable (Yu Watanabe, 2021-05-14, 1 file, -13/+17)

* tree-wide: use UINT64_MAX or friends (Yu Watanabe, 2021-03-04, 1 file, -1/+1)

* json: rename json_dispatch_{integer,unsigned} -> json_dispatch_{intmax,uintmax} (Anita Zhang, 2021-02-26, 1 file, -4/+4)
  Prompted by https://bugzilla.redhat.com/show_bug.cgi?id=1930875, in which I had previously used json_dispatch_unsigned and passed a return variable of type unsigned, when json_dispatch_unsigned writes a uintmax_t.
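
  The hazard behind the rename, sketched with an invented dispatch table, assuming the JsonDispatch table layout used by json_dispatch(); the struct is hypothetical, but the size mismatch is the point:

      #include <stddef.h>
      #include "json.h"

      struct info {
              unsigned count;  /* 4 bytes on common ABIs */
      };

      /* BUG: json_dispatch_uintmax (previously named json_dispatch_unsigned)
       * writes a uintmax_t (8 bytes) through this offset, overrunning the
       * 4-byte member; the field would need to be uintmax_t. The new name
       * makes the expected storage type obvious. */
      static const JsonDispatch table[] = {
              { "count", JSON_VARIANT_UNSIGNED, json_dispatch_uintmax, offsetof(struct info, count), 0 },
              {}
      };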

* Move and rename parse_json_argument() function (Zbigniew Jędrzejewski-Szmek, 2021-02-15, 1 file, -2/+0)
  json.[ch] is a very generic implementation, and cmdline argument parsing doesn't fit there.

* tree-wide: use -EINVAL for enum invalid values (Zbigniew Jędrzejewski-Szmek, 2021-02-10, 1 file, -1/+1)
  As suggested in https://github.com/systemd/systemd/pull/11484#issuecomment-775288617.

  This does not touch anything exposed in src/systemd; changing the defines there would be a compatibility break. Note that tests are broken after this commit. They will be fixed in the next one.

* json: add generic cmdline parser for --json= switch (Lennart Poettering, 2021-01-09, 1 file, -0/+2)

* json: add new json format flag for disabling JSON output (Lennart Poettering, 2021-01-09, 1 file, -0/+1)
  This adds a new flag JSON_FORMAT_OFF that is a marker for "no JSON output, please!". Of course, this flag sounds pointless in a JSON implementation; however, it is useful in code that can generate JSON output but also more human-friendly output (for example, our table formatters).

  With this in place, various tools that so far maintained one boolean field "arg_json" (controlling whether JSON output was requested at all) and another field "arg_json_format_flags" (for selecting the precise JSON output flags) may merge them into one, simplifying the code a bit.
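
  What the merged pattern might look like in a tool; a sketch assuming json_variant_dump() and the Table formatter from format-table.h, with arg_json_format_flags as the single remaining field:

      #include <stdio.h>
      #include "format-table.h"
      #include "json.h"

      static JsonFormatFlags arg_json_format_flags = JSON_FORMAT_OFF;

      static int output_result(JsonVariant *v, Table *t) {
              /* One flags field now also encodes "JSON off", so the separate
               * arg_json boolean can go away. */
              if (arg_json_format_flags & JSON_FORMAT_OFF)
                      return table_print(t, stdout);

              json_variant_dump(v, arg_json_format_flags, stdout, NULL);
              return 0;
      }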

* json: add APIs for quickly inserting hex blobs as JSON strings (Lennart Poettering, 2020-12-17, 1 file, -0/+4)
  This is similar to the base64 support, but fixed-size hash values are typically better presented as a series of hex values, hence store them here like that too.
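
  A sketch of how the hex support might pair with the builder, assuming a JSON_BUILD_HEX() macro analogous to the base64 one (the exact names added by this commit are not shown in the log); the field name is invented:

      #include <stddef.h>
      #include <stdint.h>
      #include "json.h"

      static int build_digest(const uint8_t *hash, size_t n, JsonVariant **ret) {
              /* Encode the fixed-size hash as a hex JSON string, the way
               * base64 is used for arbitrary binary blobs. */
              return json_build(ret, JSON_BUILD_OBJECT(
                              JSON_BUILD_PAIR("sha256", JSON_BUILD_HEX(hash, n))));
      }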

* license: LGPL-2.1+ -> LGPL-2.1-or-later (Yu Watanabe, 2020-11-09, 1 file, -1/+1)

* json: also add explicit dispatchers for 'int' and 'unsigned' (Lennart Poettering, 2020-08-26, 1 file, -0/+6)

* json: add support for byte arrays to json builder (Lennart Poettering, 2020-08-26, 1 file, -0/+2)

* json: add helpers for dealing with id128 + strv (Lennart Poettering, 2020-08-12, 1 file, -0/+6)

* json: when making a copy of a json variant, propagate the sensitive bit (Lennart Poettering, 2020-04-29, 1 file, -0/+1)
  Let's make sure we never lose the bit when copying a variant; after all, the data contained is still going to be sensitive after the copy.

* user-util: rework how we validate user names (Lennart Poettering, 2020-04-08, 1 file, -0/+1)
  This reworks the user validation infrastructure. There are now two modes: in regular mode we are strict and test against a strict set of valid chars, and in "relaxed" mode we just filter out some really obvious, dangerous stuff. I.e. strict mode is whitelisting what is OK, while "relaxed" mode is blacklisting what is really not OK.

  The idea is that we use strict mode whenever we allocate a new user (i.e. in sysusers.d or homed), while "relaxed" mode is used when we process users registered elsewhere (i.e. userdb, logind, …).

  The requirements on user name validity vary wildly. SSSD thinks it's fine to embed "@", for example, while the suggested NAME_REGEX field on Debian does not even allow uppercase chars…

  This effectively liberalizes a lot of what we expect from usernames. The code that warns about questionable user names is now optional and only used at places such as unit file parsing, so that it doesn't show up on every userdb query, but only when processing configuration files that know better.

  Fixes: #15149 #15090
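
  A sketch of how the two modes might surface in the API; the function and flag names here are assumptions based on the description above, not confirmed by this log:

      #include <errno.h>
      #include "user-util.h"

      static int check_new_user(const char *name) {
              /* Strict mode: whitelist, for users we allocate ourselves
               * (sysusers.d, homed). */
              return valid_user_group_name(name, 0) ? 0 : -EINVAL;
      }

      static int check_foreign_user(const char *name) {
              /* Relaxed mode: blacklist only the really dangerous stuff,
               * for users registered elsewhere (userdb, logind, …). */
              return valid_user_group_name(name, VALID_USER_RELAX) ? 0 : -EINVAL;
      }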

* json: add new output flag JSON_PRETTY_AUTO (Lennart Poettering, 2019-12-02, 1 file, -8/+9)
  This takes inspiration from JSON_COLOR_AUTO: it will automatically map to JSON_PRETTY if connected to a TTY, and to JSON_NEWLINE otherwise.

* json: add const string dispatcher (Lennart Poettering, 2019-12-02, 1 file, -0/+1)
  This adds json_dispatch_const_string(), which is similar to json_dispatch_string() but doesn't store a strdup()'ed copy of the string, just a pointer directly into the JSON record. This should simplify cases where the json variant sticks around long enough anyway.
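
  A dispatch-table sketch using the new helper; the struct and field names are invented, and note that the variant must outlive the parsed struct, since only a pointer into it is stored:

      #include <stddef.h>
      #include "json.h"

      struct record {
              const char *name;  /* points into the JsonVariant, not strdup()'ed */
      };

      static const JsonDispatch dispatch_table[] = {
              { "name", JSON_VARIANT_STRING, json_dispatch_const_string, offsetof(struct record, name), 0 },
              {}
      };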

* json: teach json_build() to build arrays from C arrays of JsonVariant (Lennart Poettering, 2019-12-02, 1 file, -0/+2)

* json: add more dispatch helpers (Lennart Poettering, 2019-12-02, 1 file, -0/+4)

* json: add json_variant_set_field_integer() and json_variant_set_field_boolean() helpers (Lennart Poettering, 2019-12-02, 1 file, -0/+2)

* json: add json_variant_unbase64() helper (Lennart Poettering, 2019-12-02, 1 file, -0/+2)

* json: add new flag for forcing a flush after dumping json data to file (Lennart Poettering, 2019-12-02, 1 file, -0/+1)
  This is particularly useful when no trailing \n is generated, i.e. when stdio doesn't flush the output on its own.

* json: add explicit log call for ENOMEM (Lennart Poettering, 2019-12-02, 1 file, -0/+3)

* json: add ability to generate empty arrays/objects in json builder (Lennart Poettering, 2019-12-02, 1 file, -0/+2)

* json: allow putting together base64 fields with json_build() (Lennart Poettering, 2019-12-02, 1 file, -0/+2)

* json: add new helper json_variant_append_array() (Lennart Poettering, 2019-12-02, 1 file, -0/+2)

* json: add new helper json_variant_new_base64() (Lennart Poettering, 2019-12-02, 1 file, -0/+1)

* json: add concept of normalization (Lennart Poettering, 2019-12-02, 1 file, -0/+5)
  Let's add a concept of normalization: as preparation for signing JSON records, let's add a mechanism to bring them into a well-defined order, so that we can safely validate them.

  This adds two booleans to each JsonVariant object: "sorted" and "normalized". The latter indicates whether a variant is fully sorted (i.e. all keys of objects listed in alphabetical order) recursively down the tree. The former is a weaker property: it only checks whether the keys of the object itself are sorted. All variants which are "normalized" are also "sorted", but not vice versa.

  The knowledge of the "sorted" property is then used to optimize searching for keys in a variant, by using bisection.

  Both properties are determined at the moment the variants are allocated. Since our objects are immutable, this is safe.
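
  How this might be used before signing or comparing records; a sketch assuming json_variant_normalize() and json_variant_format() helpers with these signatures:

      #include "json.h"

      static int serialize_normalized(JsonVariant **v, char **ret) {
              int r;

              /* Recursively sort all object keys, so that two semantically
               * equal records serialize to byte-identical strings. */
              r = json_variant_normalize(v);
              if (r < 0)
                      return r;

              return json_variant_format(*v, 0, ret);
      }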

* json: add json_variant_merge() helper (Lennart Poettering, 2019-12-02, 1 file, -0/+2)

* json: add json_variant_set_field_string() and json_variant_set_field_unsigned() (Lennart Poettering, 2019-12-02, 1 file, -0/+2)

* json: add json_variant_strv() helper that converts a json variant to an strv (Lennart Poettering, 2019-12-02, 1 file, -0/+2)
  Only works for arrays of strings, of course.

* json: optionally, make string checks stricter when dispatching strings (Lennart Poettering, 2019-12-02, 1 file, -2/+3)

* json: add flags parameter to json_parse_file(), for parsing "sensitive" data (Lennart Poettering, 2019-12-02, 1 file, -5/+9)
  This will call json_variant_sensitive() internally while parsing, for each allocated sub-variant. This is better than calling it a posteriori at the end, because this way partially parsed variants will always be properly erased from memory.
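
  A sketch of the flag in use, assuming a JSON_PARSE_SENSITIVE flag and this json_parse_file() signature (the line/column out-parameters are for error reporting):

      #include <stdio.h>
      #include "json.h"

      static int load_keyfile(FILE *f, const char *path, JsonVariant **ret) {
              unsigned line = 0, column = 0;

              /* Every sub-variant is marked sensitive as it is allocated, so
               * even partially parsed data is erased from memory on failure. */
              return json_parse_file(f, path, JSON_PARSE_SENSITIVE, ret, &line, &column);
      }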

* json: add json_parse_file_at() helper (Lennart Poettering, 2019-12-02, 1 file, -1/+6)
  This is an "at" function, similar to json_parse_file().

* json: add a new "sensitive" flag for JsonVariant objects (Lennart Poettering, 2019-12-02, 1 file, -0/+2)
  An object marked with this flag will be erased from memory when it is freed. This is useful for dealing with sensitive data (key material, passphrases) encoded in JSON objects.

* json: add new json_variant_set_field() helper (Lennart Poettering, 2019-12-02, 1 file, -0/+2)

* json: add new API json_variant_filter() for dropping fields from objects (Lennart Poettering, 2019-12-02, 1 file, -0/+2)

* json: add new json_variant_is_blank_{object,array}() helpers (Lennart Poettering, 2019-12-02, 1 file, -0/+2)

* Drop trailing slash from assert_cc() definition (Zbigniew Jędrzejewski-Szmek, 2019-07-17, 1 file, -2/+2)
  We use assert_cc(...); almost everywhere. Let's always require that.

  https://github.com/systemd/systemd/issues/12997#issuecomment-510103988

* tree-wide: use PROJECT_FILE instead of __FILE__ (Zbigniew Jędrzejewski-Szmek, 2019-07-04, 1 file, -1/+1)
  This replaces the internal uses of __FILE__ with the new macro.

* codespell: fix spelling errors (Ben Boeckel, 2019-04-29, 1 file, -2/+2)

* json: let's not accept embedded NUL bytes when allocating JSON strings (Lennart Poettering, 2019-04-26, 1 file, -1/+1)
  Let's add an additional paranoia check and not accept embedded NUL bytes in strings, just in case.

* json: be more careful when iterating through a JSON object/array (Lennart Poettering, 2019-04-12, 1 file, -2/+4)
  Let's exit the loop early in case the variant is not actually an object or array. This is safer, since otherwise we might end up iterating through these variants, accessing fields that aren't of the type we expect them to be, and then bad things happen.

  Of course, this doesn't absolve users of these macros from checking the type of the variant explicitly beforehand, but it makes it less bad if they forget to do so.
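
  Typical defensive usage after this change; a sketch assuming json_variant_is_object() and the foreach macro as declared in this header:

      #include <stdio.h>
      #include "json.h"

      static void print_keys(JsonVariant *v) {
              const char *key;
              JsonVariant *value;

              /* The macro now exits early on non-objects, but callers should
               * still check the type explicitly. */
              if (!json_variant_is_object(v))
                      return;

              JSON_VARIANT_OBJECT_FOREACH(key, value, v)
                      printf("%s\n", key);
      }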

* json: simplify JSON_VARIANT_OBJECT_FOREACH() macro a bit (Lennart Poettering, 2019-04-12, 1 file, -1/+1)
  There's no point in returning the "key" within each loop iteration as a JsonVariant object. Let's simplify things and return it as a string. That simplifies usage (since the caller doesn't have to convert the object to a string anymore) and is safe, since we already validate that keys are strings when an object JsonVariant is allocated.

* headers: remove unneeded includes from util.h (Zbigniew Jędrzejewski-Szmek, 2019-03-27, 1 file, -1/+2)
  This means we need to include many more headers in various files that simply included util.h before, but it seems cleaner to do it this way.

* nspawn-oci: use SYNTHETIC_ERRNO (Zbigniew Jędrzejewski-Szmek, 2019-03-21, 1 file, -3/+3)