Also, make test_hashmap_ensure_allocated() actually test
hashmap_ensure_allocated().
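A sketch of what such a test can look like (an assumed shape, not the actual test code; the return-value convention is non-negative on success):

static void test_hashmap_ensure_allocated(void) {
        Hashmap *m = NULL;

        /* The first call must allocate the map... */
        assert_se(hashmap_ensure_allocated(&m, &string_hash_ops) >= 0);
        assert_se(m);

        /* ...and a second call on an already-allocated map is a no-op. */
        assert_se(hashmap_ensure_allocated(&m, &string_hash_ops) >= 0);

        hashmap_free(m);
}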
Effectively this runs two more tests, also covering ordered hashmaps.
This setup is a bit confusing, so let's add a comment.
So far, we'd use hashmap_free_free to free both keys and values along with
the hashmap. I think it's better to make this more encapsulated: in this variant
the way contents are freed can be decided when the hashmap is created, and
users of the hashmap can always use hashmap_free.
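A minimal sketch of the resulting usage, assuming a DEFINE_HASH_OPS_FULL-style macro that records a key destructor and a value destructor in the hash_ops (the macro name follows hash-funcs.h, but the exact signature here is illustrative):

#include <stdlib.h>

#include "hash-funcs.h"
#include "hashmap.h"
#include "macro.h"

/* Keys and values are strdup()ed strings, to be freed with free(). */
DEFINE_HASH_OPS_FULL(owned_string_hash_ops,
                     char, string_hash_func, string_compare_func, free,
                     char, free);

static void demo(void) {
        Hashmap *m;
        char *k, *v;

        assert_se(m = hashmap_new(&owned_string_hash_ops));
        assert_se(k = strdup("key"));
        assert_se(v = strdup("value"));
        assert_se(hashmap_put(m, k, v) == 1);

        /* No hashmap_free_free() needed: the destructors configured at
         * creation time are applied to every key and value. */
        m = hashmap_free(m);
}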
This means we need to include many more headers in various files that simply
included util.h before, but it seems cleaner to do it this way.
The point here is to compare the speed of hashmap_destroy with free and with a
different freeing function, so that the implementation details of
hashmap_clear can be evaluated.
Results:
current code:
/* test_hashmap_free (slow, 1048576 entries) */
string_hash_ops test took 2.494499s
custom_free_hash_ops test took 2.640449s
string_hash_ops test took 2.287734s
custom_free_hash_ops test took 2.557632s
string_hash_ops test took 2.299791s
custom_free_hash_ops test took 2.586975s
string_hash_ops test took 2.314099s
custom_free_hash_ops test took 2.589327s
string_hash_ops test took 2.319137s
custom_free_hash_ops test took 2.584038s
code with a patch which restores the "fast path" using:
for (idx = skip_free_buckets(h, 0); idx != IDX_NIL; idx = skip_free_buckets(h, idx + 1))
in the case where both free_key and free_value are either free or NULL:
/* test_hashmap_free (slow, 1048576 entries) */
string_hash_ops test took 2.347013s
custom_free_hash_ops test took 2.585104s
string_hash_ops test took 2.311583s
custom_free_hash_ops test took 2.578388s
string_hash_ops test took 2.283658s
custom_free_hash_ops test took 2.621675s
string_hash_ops test took 2.334675s
custom_free_hash_ops test took 2.601568s
So the test is noisy, but there clearly is no significant difference with the
"fast path" restored. I'm surprised by this, but it shows that the current
"safe" implementation does not cause a performance loss.
When the code is compiled with optimization, those times are significantly
lower (e.g. 1.1s and 1.4s), but again, there is no difference with the "fast
path" restored.
The difference between string_hash_ops and custom_free_hash_ops is the
additional cost of modifying a global variable and of the extra function call.
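For context, the shape of such a measurement as a self-contained sketch (this is not the test's actual code; N_ENTRIES matches the 1048576 above, and custom_free stands in for the custom_free_hash_ops destructor; it isolates the free()-vs-indirect-destructor component rather than traversing a hashmap):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define N_ENTRIES (1 << 20)

static char *entries[N_ENTRIES];

static void custom_free(void *p) {
        free(p);        /* stands in for a destructor doing extra work */
}

static double timestamp(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
        for (int pass = 0; pass < 2; pass++) {
                void (*destructor)(void *) = pass == 0 ? free : custom_free;
                double start;
                char buf[32];

                for (size_t i = 0; i < N_ENTRIES; i++) {
                        snprintf(buf, sizeof buf, "key %zu", i);
                        entries[i] = strdup(buf);
                }

                start = timestamp();
                for (size_t i = 0; i < N_ENTRIES; i++)
                        destructor(entries[i]);   /* indirect call, as with custom ops */
                printf("%s took %.6fs\n",
                       pass == 0 ? "plain free()" : "custom destructor",
                       timestamp() - start);
        }
        return 0;
}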
This makes it slightly easier to watch for performance changes.
Acks in https://github.com/systemd/systemd/issues/9320.
Let's unify and beautify our remaining copyright statements, with a
unicode ©. This means our copyright statements are now always formatted
the same way. Yay.
This part of the copyright blurb stems from the GPL use recommendations:
https://www.gnu.org/licenses/gpl-howto.en.html
The concept appears to originate in times where version control was per
file, instead of per tree, and was a way to glue the files together.
Ultimately, we don't live in that world anymore, and this information is
entirely useless anyway, as people are very welcome to copy these files
into any projects they like, and they shouldn't have to change bits that
are part of our copyright header for that.
Hence, let's just get rid of this old cruft, and shorten our codebase a
bit.
Files which are installed as-is (any .service and other unit files, .conf
files, .policy files, etc.) are left as-is. My assumption is that SPDX
identifiers are not yet that well known, so it's better to retain the
extended header to avoid any doubt.
I also kept any copyright lines. We can probably remove them, but it'd be
nice to obtain explicit acks from all involved authors before doing that.
This follows what the kernel is doing, cf.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5fd54ace4721fc5ce2bb5aef6318fcf17f421460.
test-hashmap is a very good test, but it gets in the way when one wants to
compile and quickly test changes.
The test was failing at -O2+ with gcc 5.3 and 6.0.
"val1" == "val1" and "val1" != "val1" are both valid.
http://stackoverflow.com/questions/4843640/why-is-a-a-in-c
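Reduced to a standalone illustration (not the test itself):

#include <stdio.h>
#include <string.h>

int main(void) {
        const char *a = "val1", *b = "val1";

        /* Whether identical string literals share storage is up to the
         * compiler, so both branches are possible, and the outcome may
         * change with -O2. */
        if (a == b)
                puts("literals merged");
        else
                puts("distinct copies");

        /* The fix: compare contents, not addresses. */
        if (strcmp(a, b) == 0)
                puts("always equal by value");
        return 0;
}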
string-util.[ch]
There are more than enough calls doing string manipulations to deserve
their own files, hence do something about it.
This patch also sorts the #include blocks of all files that needed to be
updated, according to the sorting suggestions from CODING_STYLE. Since
pretty much every file needs our string manipulation functions this
effectively means that most files have sorted #include blocks now.
Also touches a few unrelated include files.
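For illustration, a block sorted the way CODING_STYLE suggests (system headers first, then our own headers, each group alphabetized; the file names here are just examples):

#include <errno.h>
#include <stdlib.h>
#include <string.h>

#include "macro.h"
#include "string-util.h"
#include "util.h"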
The purpose of testing with the crippled hash function is to cover
the otherwise very unlikely codepath in bucket_calculate_dib() where
it has to fall back to recomputing the hash value.
This unlikely path was not covered by test-hashmap anymore after
57217c8f "test: hashmap - cripple the hash function by truncating the
input rather than the output".
Restore the test coverage by increasing the number of entries in the test.
The number was determined empirically by checking with lcov.
All our hash functions are based on siphash24(), factor out
siphash_init() and siphash24_finalize() and pass the siphash
state to the hash functions rather than the hash key.
This simplifies the hash functions, and in particular makes
composition simpler as calling siphash24_compress() repeatedly
on separate chunks of input has the same effect as first
concatenating the input and then calling siphash24_compress()
on the result.
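The composition property in sketch form, assuming the post-patch API shape (struct siphash plus siphash24_init(), siphash24_compress() and a siphash24_finalize() that yields the hash; the exact finalize signature is an assumption here):

#include <assert.h>
#include <stdint.h>

#include "siphash24.h"

int main(void) {
        static const uint8_t key[16] = {
                0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
        };
        struct siphash chunked, whole;

        /* Feed "foo" and "bar" as two separate chunks... */
        siphash24_init(&chunked, key);
        siphash24_compress("foo", 3, &chunked);
        siphash24_compress("bar", 3, &chunked);

        /* ...and "foobar" in one go. */
        siphash24_init(&whole, key);
        siphash24_compress("foobar", 6, &whole);

        /* Composition: chunked and concatenated input hash identically. */
        assert(siphash24_finalize(&chunked) == siphash24_finalize(&whole));
        return 0;
}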
test: hashmap - cripple the hash function by truncating the input rather
than the output
The reason for the crippled hash function is to reduce the distribution
of the hash function; do this by truncating the domain rather than the
range. This does introduce a change in behaviour, as the range is no longer
contiguous, which greatly reduces collisions.
This is needed as a follow-up patch will no longer allow individual hash
functions to alter the output directly.
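Sketched, the two ways of crippling a hash function (assumed shapes, using the pre-patch convention of hash functions returning the value directly):

#include <stdint.h>

#include "hashmap.h"   /* string_hash_func() and HASH_KEY_SIZE, assumed here */

/* Truncating the range: the output is squeezed into [0, 255] and stays
 * contiguous, so entries pile up in a small contiguous bucket region. */
static unsigned long crippled_range_hash(const void *p, const uint8_t key[HASH_KEY_SIZE]) {
        return string_hash_func(p, key) & 0xff;
}

/* Truncating the domain: only the first byte of the input is hashed. There
 * are still at most 256 distinct outputs, but they are spread over the full
 * range instead of packed together, which greatly reduces collisions. */
static unsigned long crippled_domain_hash(const void *p, const uint8_t key[HASH_KEY_SIZE]) {
        char first[2] = { *(const char *) p, '\0' };
        return string_hash_func(first, key);
}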
Currently, the HASHMAP iterators stop at the first NULL entry in a
hashmap. This is non-obvious and breaks users like sd-device, which
legitimately store NULL values in a hashmap.
Fix all the iterators by taking a pointer to the value storage, instead of
returning it. The iterators now return a boolean that tells whether the
end of the list was reached.
Current users of HASHMAP_FOREACH() are *NOT* changed to explicitly check
for NULL. If it turns out there were users that inserted NULL into
hashmaps but didn't properly check for it during iteration, then we
really want to find those and fix them.
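Under the new contract, iteration looks roughly like this (a sketch; only the HASHMAP_FOREACH spelling is taken from the patch, the rest is illustrative):

#include <stdio.h>

#include "hashmap.h"

static void dump_values(Hashmap *h) {
        Iterator i;
        void *v;

        /* The loop ends only when the iterator reports exhaustion, so a
         * NULL v is a legitimate stored value, not an end-of-map signal. */
        HASHMAP_FOREACH(v, h, i)
                printf("%p\n", v);
}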
CID#1299016
Check that hashmap_put() with string ops handles keys with a different
pointer but the same string value.
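In test form, roughly (a sketch, assuming hashmap_put()'s usual return convention: 1 for a new entry, 0 for an exact duplicate, -EEXIST for the same key with a different value):

static void test_put_same_string_value(void) {
        Hashmap *m;
        char a[] = "key", b[] = "key";   /* equal contents, distinct storage */
        void *v = (void *) "val";

        assert_se(m = hashmap_new(&string_hash_ops));
        assert_se(hashmap_put(m, a, v) == 1);

        /* string_hash_ops compares contents, so b refers to the same entry: */
        assert_se(hashmap_put(m, b, (void *) "other") == -EEXIST);
        assert_se(hashmap_put(m, b, v) == 0);   /* exact duplicate: no-op */

        hashmap_free(m);
}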
Check return value of hashmap_ensure_allocated().
CID#1250807.
A reimplementation of hashmaps will follow, and it will use 0.8.
It cannot fail in the current hashmap implementation, but it may fail in
alternative implementations (unless a sufficiently large reservation has
been placed beforehand).
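The calling convention this implies, sketched with a hypothetical helper (the hash_ops-style signature is assumed):

static int track(Hashmap **h, const char *key, void *value) {
        int r;

        /* Allocate the map lazily; an alternative implementation may
         * genuinely fail here, so propagate the return value. */
        r = hashmap_ensure_allocated(h, &string_hash_ops);
        if (r < 0)
                return r;

        return hashmap_put(*h, key, value);
}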
Test more corner cases and error states in several tests.
Add new tests for:
hashmap_move
hashmap_remove
hashmap_remove2
hashmap_remove_value
hashmap_remove_and_replace
hashmap_get2
hashmap_first
In test_hashmap_many additionally test with an intentionally bad hash
function.
test-hashmap-ordered.c is generated from test-hashmap-plain.c simply by
substituting "ordered_hashmap" for "hashmap" etc.
In the cases where tests rely on the order of entries, a distinction
between plain and ordered hashmaps is made using the ORDERED macro,
which is defined only for test-hashmap-ordered.c.
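For instance, an order-sensitive test can be shared by both files like this (an assumed shape, not a quote from the test):

static void test_iteration_order(void) {
        Hashmap *m;   /* becomes OrderedHashmap in the generated file */
        char *v, *seen[2];
        unsigned n = 0;
        Iterator i;

        assert_se(m = hashmap_new(&string_hash_ops));
        assert_se(hashmap_put(m, "key 1", (void *) "val 1") == 1);
        assert_se(hashmap_put(m, "key 2", (void *) "val 2") == 1);

        HASHMAP_FOREACH(v, m, i)
                seen[n++] = v;
        assert_se(n == 2);

#ifdef ORDERED
        /* Only the ordered variant guarantees insertion order on iteration. */
        assert_se(streq(seen[0], "val 1") && streq(seen[1], "val 2"));
#endif

        hashmap_free(m);
}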