path: root/src/test/test-json.c
Commit message (Author, Date; Files, Lines)
* json-util: Add JSON_BUILD_PAIR_CALLBACK_NON_NULL() (Daan De Meyer, 2024-09-03; 1 file, -0/+12)
  Like JSON_BUILD_PAIR_CALLBACK(), but doesn't add anything to the variant if the callback doesn't put anything in the return argument.
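  For illustration, a rough sketch of how this might be used, assuming the same callback shape as JSON_BUILD_PAIR_CALLBACK(), i.e. int (*)(sd_json_variant **ret, const char *name, void *userdata); the names below are made up:

      /* Hypothetical callback: fills in *ret only if there is something to report. */
      static int build_status(sd_json_variant **ret, const char *name, void *userdata) {
              const char *status = userdata;

              if (!status) {
                      *ret = NULL; /* return argument left empty: the pair is suppressed */
                      return 0;
              }

              return sd_json_variant_new_string(ret, status);
      }

      /* The "status" field only appears in the object if the callback produced a variant. */
      r = sd_json_buildo(&v, JSON_BUILD_PAIR_CALLBACK_NON_NULL("status", build_status, current_status));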
* json-util: Add JSON_BUILD_STRING_ORDERED_SET() (Daan De Meyer, 2024-09-03; 1 file, -0/+14)
* json: make it easy to dispatch our enums (Lennart Poettering, 2024-06-20; 1 file, -8/+13)
  This does the opposite of the previous patch: it undoes the "-" → "_" mapping of enum values when we try to parse enums again.
* json: make it easy to serialize our enums to json (Lennart Poettering, 2024-06-20; 1 file, -7/+11)
  Most of our enums are mapped to strings that use dashes ("-") as word separators, i.e. "foo-bar-baz". However, Varlink enums do not allow "-" as separator, see: https://varlink.org/Interface-Definition Hence, let's add some simple glue to automatically turn "-" into "_" when serializing our enums.
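  Schematically, the mapping amounts to no more than this (an illustrative helper, not the actual glue code; the dispatch entry above applies the inverse before parsing):

      #include <stdlib.h>
      #include <string.h>

      /* Turn "foo-bar-baz" into "foo_bar_baz" so it is a valid Varlink enum identifier. */
      static char *enum_string_to_varlink(const char *s) {
              char *t = strdup(s);
              if (!t)
                      return NULL;
              for (char *p = t; *p; p++)
                      if (*p == '-')
                              *p = '_';
              return t;
      }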
* sd-json: add sd_json_build() wrapper macro that implies SD_JSON_BUILD_OBJECT() (Lennart Poettering, 2024-06-19; 1 file, -0/+15)
  In 99% of uses of sd_json_build() we want to build an object as the outermost construct. Let's shorten this most common case a bit by adding sd_json_buildo(), which implies this. This allows us to shorten much of our code, all across the tree.
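  For example (a sketch using the public sd-json API):

      _cleanup_(sd_json_variant_unrefp) sd_json_variant *v = NULL;
      int r;

      /* Equivalent to sd_json_build(&v, SD_JSON_BUILD_OBJECT(...)), minus the boilerplate. */
      r = sd_json_buildo(&v,
                         SD_JSON_BUILD_PAIR("name", SD_JSON_BUILD_STRING("foo")),
                         SD_JSON_BUILD_PAIR("value", SD_JSON_BUILD_UNSIGNED(7)));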
* json: extend JsonDispatch flags with nullable and refuse-null flags (Lennart Poettering, 2024-06-15; 1 file, -0/+80)
  Currently, when dispatching json objects into C structs we either insist on the field type or we don't. Let's extend this model a bit: depending on two new flags, either allow or refuse null types in addition to the specified type. This is useful for example when dispatching enums, as it allows us to explicitly refuse null in various scenarios where we allow multiple types.
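  A sketch of what a dispatch table using the new flags might look like; the flag spellings SD_JSON_NULLABLE and SD_JSON_REFUSE_NULL are assumptions derived from the commit subject:

      typedef struct Settings {
              char *name;
              uint64_t count;
      } Settings;

      static const sd_json_dispatch_field table[] = {
              /* a string, but JSON null is tolerated too (assumed flag name) */
              { "name",  SD_JSON_VARIANT_STRING,   sd_json_dispatch_string, offsetof(Settings, name),  SD_JSON_NULLABLE    },
              /* an unsigned integer; an explicit JSON null is refused (assumed flag name) */
              { "count", SD_JSON_VARIANT_UNSIGNED, sd_json_dispatch_uint64, offsetof(Settings, count), SD_JSON_REFUSE_NULL },
              {}
      };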
* json: add sd_json_dispatch_double() helper (Lennart Poettering, 2024-06-15; 1 file, -0/+44)
* test: extend JSON test coverage (Lennart Poettering, 2024-06-12; 1 file, -1/+95)
* libsystemd: turn json.[ch] into a public API (Lennart Poettering, 2024-06-12; 1 file, -384/+386)
  This is preparation for making our Varlink API a public API. Since our Varlink API is built on top of our JSON API we need to make that public first (it's a nice API, but there are already enough JSON APIs out there; this is purely about the Varlink angle). I made most of the json.h APIs public, and just placed them in sd-json.h. Sometimes I wasn't so sure, however, since the underlying data structures would have to be made public too. If in doubt I didn't risk it, and moved the relevant API to src/libsystemd/sd-json/json-util.h instead (without any sd_* symbol prefixes). This is mostly a giant search/replace patch.
* ASSERT_STREQ for simple cases (Ivan Kruglov, 2024-04-15; 1 file, -18/+18)
* ASSERT_NULL/ASSERT_NOT_NULL (Ivan Kruglov, 2024-04-10; 1 file, -2/+2)
* json: replace JSON_FORMAT_REFUSE_SENSITIVE with JSON_FORMAT_CENSOR_SENSITIVE (Lennart Poettering, 2024-01-16; 1 file, -16/+20)
  Previously, the flag would completely refuse formatting a JSON object if any field of it was marked sensitive. With this change we'll simply replace the subobject with the string "<sensitive data>", and show everything else. This is tremendously useful when debugging, since it means that we can again trace varlink calls through the stack: we can show all the message metadata and just suppress the actually sensitive parameters. The ability to debug this matters, and we should not hide more information than we can get away with, to keep things debuggable and maintainable.
* Merge pull request #30754 from poettering/iovecification (Lennart Poettering, 2024-01-05; 1 file, -0/+21)
  tpm2-util: convert various things over to struct iovec rather than data ptr + size
    * test: add unit tests for the new iovec helpers (from the merged branch; Lennart Poettering, 2024-01-05; 1 file, -0/+21)
* test: add unit test for JSON_DISPATCH_ENUM_DEFINE() (Lennart Poettering, 2024-01-05; 1 file, -0/+51)
* json: add JSON_FORMAT_REFUSE_SENSITIVE to json_variant_format() (Luca Boccassi, 2024-01-03; 1 file, -0/+105)
  Returns -EPERM if any node in the variant is marked as sensitive; useful to avoid leaking data to log messages and so on.
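  Roughly, usage looks like this sketch (json_variant_sensitive() marks the variant, and formatting is then expected to fail with -EPERM):

      _cleanup_free_ char *s = NULL;
      int r;

      json_variant_sensitive(v); /* e.g. v carries a password field */

      r = json_variant_format(v, JSON_FORMAT_REFUSE_SENSITIVE, &s);
      if (r == -EPERM)
              log_debug("Not formatting variant: it contains sensitive data.");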
* json: teach dispatch logic to also take numbers formatted as strings (Lennart Poettering, 2023-11-07; 1 file, -0/+62)
  JSON famously is problematic with integers beyond 53 bits, because JavaScript stores everything in double precision floating point. Various implementations in other languages can deal with signed 64 bit integers, and a few can deal with unsigned 64 bit too (like ours). Typically, programs that need more than 53 bits of accuracy encode integers as decimal strings, to make sure that even if consumers can't really process larger values they at least won't corrupt the data while passing it along. This is also recommended by I-JSON (RFC 7493). To maximize compatibility with other implementations let's add first-class parsing support for such objects in the json_dispatch() API. This makes json_dispatch_uint64() and related calls parse such integers-formatted-as-decimal-strings as uint64_t. This logic will only be enabled if the "type" field of JsonDispatch is left unspecified (i.e. set to negative/_JSON_VARIANT_TYPE_INVALID), hence on its own it does not change anything in effect. This is purely about consuming such values; whether we should generate them too is a discussion for a separate PR.
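  A sketch of a dispatch table entry relying on this; per the text above, leaving the type column unspecified lets json_dispatch_uint64() accept both forms:

      typedef struct Data {
              uint64_t counter;
      } Data;

      static const JsonDispatch table[] = {
              /* type is _JSON_VARIANT_TYPE_INVALID, so both 1234 and "1234" dispatch into counter */
              { "counter", _JSON_VARIANT_TYPE_INVALID, json_dispatch_uint64, offsetof(Data, counter), 0 },
              {}
      };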
* json: rename json_append() → json_variant_merge_objectb() (Lennart Poettering, 2023-08-24; 1 file, -4/+4)
  json_append() is a useful wrapper around json_variant_merge(). However, I think the naming of both functions should be cleaned up a bit. I think "merge" is the better word than "append", since it does decidedly more than just append: it replaces existing fields of the same name, hence "merge" sounds more appropriate. This is as opposed to the similar operations for arrays, where no such override logic is applied and we really just append; hence those functions are called "append" already. To make clearer that "merge" is about objects, and "append" about arrays, also include "object" in the name. Also, include "json_variant" in the name, like we do for almost all other functions in the JSON API that take a JSON object as primary input, and hence are kinda object methods. Finally, let's follow the logic that helpers which combine json_build() with some other operation get suffixed with "b", like we already have in some cases. Hence:
      json_variant_merge() → json_variant_merge_object()
      json_append()        → json_variant_merge_objectb()
  This mirrors nicely the existing:
      json_variant_append_array()
      json_variant_append_arrayb()
  This also drops the variant of json_append() that takes a va_arg parameter (i.e. json_appendv()). We have no user of that so far, and given its nature as a helper function only I don't see that happening, and if it happens after all it's trivial to bring back.
* tree-wide: drop "static inline" use in .c filesLennart Poettering2023-08-211-1/+1
| | | | | | | | | "static inline" makes sense in .h files. But in .c files it's useless decoration, the compiler should just make its own decisions there, and it can do that. hence, replace all remaining uses of "static line" by a simple" static" in all .c files (but keep them in .h files, where they make sense)
* tree-wide: drop trailing newline from various log calls (Lennart Poettering, 2023-07-10; 1 file, -4/+4)
  We generate this implicitly, hence we generally don't include it explicitly.
* tree-wide: Fix false positives on newer gcc (Daan De Meyer, 2023-05-23; 1 file, -1/+1)
  Recent gcc versions have started to trigger false positive maybe-uninitialized warnings. Let's make sure we initialize variables annotated with _cleanup_ to avoid these.
* json: add helper for adding variant to array suppressing duplicates (Lennart Poettering, 2022-12-15; 1 file, -0/+25)
* shared/json: make it possible to specify source name for strings too, add tests (Zbigniew Jędrzejewski-Szmek, 2022-12-01; 1 file, -0/+63)
  The source would be set implicitly when parsing from a named file. But it's useful to specify the source also for cases where we're parsing a string that is already in memory. I noticed the lack of this API when trying to write tests, but it seems generally useful to be able to specify a source name when parsing things.
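  Presumably the call then looks something like this sketch (the function name json_parse_with_source() and the source label are illustrative):

      unsigned line = 0, column = 0;
      int r;

      r = json_parse_with_source("{\"a\": 1}", "builtin-defaults", 0, &v, &line, &column);
      if (r < 0)
              /* errors can now cite "builtin-defaults:1:2" instead of an anonymous string */
              return log_error_errno(r, "Failed to parse JSON: %m");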
* basic: rename util.h to logarithm.h (Zbigniew Jędrzejewski-Szmek, 2022-11-08; 1 file, -1/+0)
  util.h is now about logarithms only, so we can rename it. Many files included util.h for no apparent reason… Those includes are dropped.
* shared/json: use different return code for empty input (Zbigniew Jędrzejewski-Szmek, 2022-10-19; 1 file, -0/+18)
  It is useful to distinguish whether json_parse_file() got no input or invalid input. Use different return codes for the two cases.
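  That lets callers tell the two cases apart; a sketch, assuming (this is an assumption, the commit only says the codes differ) that empty input yields -ENODATA:

      r = json_parse_file(f, path, 0, &v, NULL, NULL);
      if (r == -ENODATA)
              log_debug("%s is empty, falling back to defaults.", path); /* assumed return code */
      else if (r < 0)
              return log_error_errno(r, "Failed to parse %s: %m", path);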
* json: introduce json_append() (Yu Watanabe, 2022-09-03; 1 file, -0/+15)
* json: use fpclassify() or its helper functions (Yu Watanabe, 2022-07-21; 1 file, -6/+12)
* test: use fabs() as the argument is double (Yu Watanabe, 2022-07-21; 1 file, -6/+6)
  This also drops an unnecessary cast.
* test: JSON_BUILD_REAL nowadays expects 'double', not 'long double' (Lennart Poettering, 2022-05-09; 1 file, -1/+1)
  Follow-up for 337712e777bff389f53e26d5b378d2ceba7d98a8, aka "the great un-long-double-ification of 2021".
* test: Use TEST macro (Jan Janssen, 2021-11-25; 1 file, -79/+56)
  This converts to the TEST macro where it is trivial. Some additional notable changes:
  - simplify HAVE_LIBIDN #ifdef in test-dns-domain.c
  - use saved_argc/saved_argv in test-copy.c, test-path-util.c, test-tmpfiles.c and test-unit-file.c
* json: add new JSON_BUILD_CONST_STRING() macro (Lennart Poettering, 2021-11-25; 1 file, -7/+7)
  This macro is like JSON_BUILD_STRING() but uses our json library's ability to use literal strings directly as JsonVariant objects. This changes all of our codebase to use the new macro whenever we build JSON objects from literal strings. (I tried to make this automatic, i.e. to detect in JSON_BUILD_STRING() whether something is a literal string and thus do this stuff automatically, but I couldn't find a way.) This should reduce memory usage of our JSON code a bit. Constant strings we use very often will now be shared and mapped directly from the ELF image.
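  The mechanical change then looks like this (sketch):

      /* Before: the literal is copied into a freshly allocated string variant. */
      r = json_build(&v, JSON_BUILD_OBJECT(
                             JSON_BUILD_PAIR("type", JSON_BUILD_STRING("system"))));

      /* After: the literal is referenced in place, straight from the ELF image. */
      r = json_build(&v, JSON_BUILD_OBJECT(
                             JSON_BUILD_PAIR("type", JSON_BUILD_CONST_STRING("system"))));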
* json: don't assert() if we add a NULL element via json_variant_set_field() (Lennart Poettering, 2021-11-25; 1 file, -0/+24)
  The rest of our JSON code tries hard to magically convert NULL inputs into "null" JSON objects; let's make sure this also works with json_variant_set_field().
* shared/json: use int64_t instead of intmax_t (Zbigniew Jędrzejewski-Szmek, 2021-11-18; 1 file, -12/+12)
  We were already asserting that the intmax_t and uintmax_t types are the same as int64_t and uint64_t. Pretty much everywhere in the code base we use the latter types. In principle intmax_t could be something different on some new architecture, and then the code would fail to compile or behave differently. We actually do not want the code to behave differently on those architectures, because that'd break interoperability. So let's just use int64_t/uint64_t, since that's what we intend to use.
* shared/json: stop using long double (Zbigniew Jędrzejewski-Szmek, 2021-11-18; 1 file, -17/+12)
  It seems that the implementation of long double on ppc64el doesn't really work: a long double cast to integer and back compares as unequal to itself. Strangely, this effect happens without optimization and both with gcc and clang, so it seems to be an effect of how long double is implemented by the architecture. Dumping the values shows the following pattern:
      00 00 00 00 00 00 24 40 00 00 00 00 00 00 00 00  # long double v = 10;
      00 00 00 00 00 00 24 40 00 00 00 00 00 00 80 39  # (long double)(intmax_t) v
  Instead of trying to make this work, I think it's most reasonable to switch to normal doubles. Notably, we had no tests for floating point behaviour. The first test we added (for values not even outside the range of double) showed failures.
  Common implementations of JSON (in particular JavaScript) use 64 bit double. If we stick to this, users are likely to be happy when they exchange data with those tools. Exporting values that cannot be represented in other tools would just cause interop problems. I don't think the extra precision would be much used. Long double seems to make most sense as a transient format used in calculations to get extra precision in operations, not as a storage or exchange format. So I expect low-level numerical routines that have to know about hardware to make use of it, but it shouldn't be used by our (higher-level) system library. In particular, we would have to add tests for implementations conforming to IEEE 754, and those that don't conform, and account for various implementation differences. It just doesn't seem worth the effort.
  https://en.wikipedia.org/wiki/Long_double#Implementations shows that the situation is "complicated":
  > On the x86 architecture, most C compilers implement long double as the 80-bit extended precision type supported by x86 hardware. An exception is Microsoft Visual C++ for x86, which makes long double a synonym for double. The Intel C++ compiler on Microsoft Windows supports extended precision, but requires the /Qlong‑double switch for long double to correspond to the hardware's extended precision format.
  > Compilers may also use long double for the IEEE 754 quadruple-precision binary floating-point format (binary128). This is the case on HP-UX, Solaris/SPARC, MIPS with the 64-bit or n32 ABI, 64-bit ARM (AArch64) (on operating systems using the standard AAPCS calling conventions, such as Linux), and z/OS with FLOAT(IEEE). Most implementations are in software, but some processors have hardware support.
  > On some PowerPC and SPARCv9 machines, long double is implemented as a double-double arithmetic, where a long double value is regarded as the exact sum of two double-precision values, giving at least a 106-bit precision; with such a format, the long double type does not conform to the IEEE floating-point standard. Otherwise, long double is simply a synonym for double (double precision), e.g. on 32-bit ARM, 64-bit ARM (AArch64) (on Windows and macOS) and on 32-bit MIPS (old ABI, a.k.a. o32).
  > With the GNU C Compiler, long double is 80-bit extended precision on x86 processors regardless of the physical storage used for the type (which can be either 96 or 128 bits). On some other architectures, long double can be double-double (e.g. on PowerPC) or 128-bit quadruple precision (e.g. on SPARC). As of gcc 4.3, a quadruple precision is also supported on x86, but as the nonstandard type __float128 rather than long double.
  > Although the x86 architecture, and specifically the x87 floating-point instructions on x86, supports 80-bit extended-precision operations, it is possible to configure the processor to automatically round operations to double (or even single) precision. Conversely, in extended-precision mode, extended precision may be used for intermediate compiler-generated calculations even when the final results are stored at a lower precision (i.e. FLT_EVAL_METHOD == 2). With gcc on Linux, 80-bit extended precision is the default; on several BSD operating systems (FreeBSD and OpenBSD), double-precision mode is the default, and long double operations are effectively reduced to double precision. (NetBSD 7.0 and later, however, defaults to 80-bit extended precision.) However, it is possible to override this within an individual program via the FLDCW "floating-point load control-word" instruction. On x86_64, the BSDs default to 80-bit extended precision. Microsoft Windows with Visual C++ also sets the processor in double-precision mode by default, but this can again be overridden within an individual program (e.g. by the _controlfp_s function in Visual C++). The Intel C++ Compiler for x86, on the other hand, enables extended-precision mode by default. On IA-32 OS X, long double is 80-bit extended precision.
  So, in short, the only thing that can be said is that nothing can be said. In common scenarios, we are getting only a bit of extra precision (80 bits instead of 64), but use space for padding. In other scenarios we are getting no extra precision. And the variance in implementations is a big issue: we can expect strange differences in behaviour between architectures, systems, compiler versions, compilation options, and even the other things that the program is doing.
  Fixes #21390.
* test-json: add test that makes sure floats are somewhat reasonably implemented (Lennart Poettering, 2021-11-15; 1 file, -0/+54)
  Test that we don't lose accuracy without bounds for extreme values, and validate that nan/inf/-inf actually get converted to null properly.
* license: LGPL-2.1+ -> LGPL-2.1-or-later (Yu Watanabe, 2020-11-09; 1 file, -1/+1)
* test-json: add function headers (Zbigniew Jędrzejewski-Szmek, 2020-09-01; 1 file, -8/+28)
* shared/json: reject non-utf-8 strings (Zbigniew Jędrzejewski-Szmek, 2020-09-01; 1 file, -1/+1)
  JSON strings must be utf-8-clean. We also verify this in json_parse_string(), so we would reject a message with invalid utf-8 anyway. It would probably be slightly cheaper to detect non-conforming strings in serialization, but then we'd have to fail serialization. By doing this early, we give the caller a chance to handle the error nicely. The test is adjusted to contain a valid utf-8 string after decoding of the utf-32 encoding in json ("विवेकख्यातिरविप्लवा हानोपायः।", something about the cessation of ignorance).
* json: use our regular way to turn off compiler warnings (Lennart Poettering, 2020-05-25; 1 file, -3/+2)
* json: add concept of normalization (Lennart Poettering, 2019-12-02; 1 file, -2/+90)
  As preparation for signing JSON records, let's add a mechanism to bring JSON records into a well-defined order so that we can safely validate them. This adds two booleans to each JsonVariant object: "sorted" and "normalized". The latter indicates whether a variant is fully sorted (i.e. all keys of objects listed in alphabetical order) recursively down the tree. The former is a weaker property: it only checks whether the keys of the object itself are sorted. All variants which are "normalized" are also "sorted", but not vice versa. The knowledge of the "sorted" property is then used to optimize searching for keys in the variant by using bisection. Both properties are determined at the moment the variants are allocated. Since our objects are immutable this is safe.
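  The bisection idea, schematically (an illustrative stand-alone search, not the actual implementation):

      #include <string.h>
      #include <sys/types.h>

      /* Binary search over keys known to be sorted; O(log n) instead of a linear scan. */
      static ssize_t find_key(char *const *keys, size_t n, const char *needle) {
              size_t lo = 0, hi = n;

              while (lo < hi) {
                      size_t mid = lo + (hi - lo) / 2;
                      int c = strcmp(keys[mid], needle);

                      if (c == 0)
                              return (ssize_t) mid;
                      if (c < 0)
                              lo = mid + 1;
                      else
                              hi = mid;
              }

              return -1; /* not found */
      }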
* json: add flags parameter to json_parse_file(), for parsing "sensitive" data (Lennart Poettering, 2019-12-02; 1 file, -6/+6)
  This will call json_variant_sensitive() internally while parsing for each allocated sub-variant. This is better than calling it a posteriori at the end, because partially parsed variants will always be properly erased from memory this way.
* shared/varlink: add missing terminator in json strings (Zbigniew Jędrzejewski-Szmek, 2019-05-30; 1 file, -0/+4)
  Should finally fix oss-fuzz-14688; 8688c29b5aece49805a244676cba5bba0196f509 wasn't enough. The buffer retrieved from the memstream has a size equal to the written data. When we do write(f, s, strlen(s)), no terminating NUL is written, so the buffer is not (necessarily) a proper C string.
* Add fmemopen_unlocked() and use unlocked ops in fuzzers and some other tests (Zbigniew Jędrzejewski-Szmek, 2019-04-12; 1 file, -1/+2)
  This might make things marginally faster. I didn't benchmark though.
* test-json: use standard test intro (Zbigniew Jędrzejewski-Szmek, 2019-02-25; 1 file, -4/+2)
* test-json: avoid deep stack recursion under msan (Zbigniew Jędrzejewski-Szmek, 2019-02-25; 1 file, -0/+7)
* test-json: do not pass ephemeral array as initializer to JSON_BUILD_STRV (Zbigniew Jędrzejewski-Szmek, 2019-02-11; 1 file, -2/+4)
  Fixes #11600. The code was effectively doing:
      json_build(...,
                 ({
                         char **_x = ((char**) ((const char*[]) {"one", "two", "three", "four", NULL}));
                         _x;
                 }));
  but there was no guarantee that the storage for the array that _x points to survives past the end of the block. Essentially, STRV_MAKE cannot be used inline inside of a block like this.
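  The fix is to give the array storage a scope that outlives the json_build() call, along these lines (sketch):

      /* The compound literal behind STRV_MAKE now lives for the whole enclosing block. */
      char **l = STRV_MAKE("one", "two", "three", "four");

      r = json_build(&v, JSON_BUILD_OBJECT(
                             JSON_BUILD_PAIR("list", JSON_BUILD_STRV(l))));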
* Delete duplicate lines (Topi Miettinen, 2019-01-12; 1 file, -1/+0)
  Found by inspecting the results of running this small program:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      int main(int argc, const char **argv) {
              for (int i = 1; i < argc; i++) {
                      FILE *f;
                      char line[1024], prev[1024], *r;
                      int lineno;

                      prev[0] = '\0';
                      lineno = 1;

                      f = fopen(argv[i], "r");
                      if (!f)
                              exit(1);

                      do {
                              r = fgets(line, sizeof(line), f);
                              if (!r)
                                      break;

                              /* Report any line that is identical to the one before it. */
                              if (strcmp(line, prev) == 0)
                                      printf("%s:%d: error: dup %s", argv[i], lineno, line);

                              lineno++;
                              strcpy(prev, line);
                      } while (!feof(f));

                      fclose(f);
              }
      }
* test-json: check absolute and relative difference in floating point test (Zbigniew Jędrzejewski-Szmek, 2019-01-03; 1 file, -9/+7)
  The test fails under valgrind, so there was an exception for valgrind. Unfortunately that check only works when valgrind-devel headers are available during the build. But it is possible to have just valgrind installed, or simply install it after the build, and then "valgrind test-json" would fail. It also seems that even without valgrind, this fails on some arm32 CPUs. Let's do the usual-style test for absolute and relative differences.
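  The usual-style check, schematically: accept the value if either the absolute or the relative difference is within bounds (the tolerances here are made up for the sketch):

      #include <math.h>
      #include <stdbool.h>

      static bool floats_close(double a, double b) {
              double d = fabs(a - b);

              return d <= 1e-10 ||                       /* absolute difference */
                     d <= 1e-6 * fmax(fabs(a), fabs(b)); /* relative difference */
      }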
* fileio: when reading a full file into memory, refuse inner NUL bytes (Lennart Poettering, 2018-12-17; 1 file, -1/+1)
  Just some extra care to avoid any ambiguities in what we read.
* json: teach json builder "conditional" object fields (Lennart Poettering, 2018-11-28; 1 file, -0/+16)
  Quite often when we generate objects some fields should only be generated under certain conditions. Let's add high-level support for that. Matching the existing JSON_BUILD_PAIR(), this adds JSON_BUILD_PAIR_CONDITIONAL(), which is very similar but takes an additional parameter: a boolean condition. If "true" this acts like JSON_BUILD_PAIR(), but if false then the whole pair is suppressed. This sounds simple, but requires a tiny bit of complexity: when complex sub-variants are used in fields, then we also need to suppress them.
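  Usage is then roughly (sketch; the condition comes first, otherwise this mirrors JSON_BUILD_PAIR()):

      r = json_build(&v, JSON_BUILD_OBJECT(
                             JSON_BUILD_PAIR("id", JSON_BUILD_UNSIGNED(id)),
                             /* the pair is emitted only when a hostname is actually known */
                             JSON_BUILD_PAIR_CONDITIONAL(!!hostname, "hostname", JSON_BUILD_STRING(hostname))));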