author		Alexei Starovoitov <ast@fb.com>		2016-02-02 07:39:53 +0100
committer	David S. Miller <davem@davemloft.net>	2016-02-06 09:34:35 +0100
commit		824bd0ce6c7c43a9e1e210abf124958e54d88342 (patch)
tree		10d1841e462152a97c3d855f17568684d5246dd0 /include/uapi
parent		ethtool: Declare netdev_rss_key as __read_mostly. (diff)
bpf: introduce BPF_MAP_TYPE_PERCPU_HASH map
Introduce the BPF_MAP_TYPE_PERCPU_HASH map type, used to maintain
accurate counters without the BPF_XADD instruction, which turned out
to be too costly for high-performance network monitoring.
In the typical use case the 'key' is the flow tuple or another
long-lived object that sees a lot of events per second.
bpf_map_lookup_elem() returns the per-cpu area.
Example:
struct {
	u32 packets;
	u32 bytes;
} *ptr = bpf_map_lookup_elem(&map, &key);

if (ptr) {
	/* ptr points to the this_cpu area of the value, so the
	 * following increments will not collide with other cpus
	 */
	ptr->packets++;
	ptr->bytes += skb->len;
}
bpf_map_update_elem() atomically creates a new element in which all
per-cpu values are zero-initialized and the this_cpu value is
populated with the given 'value'.
Note that a non-per-cpu hash map always allocates a new element and
then deletes the old one after an RCU grace period to maintain
atomicity of the update; a per-cpu hash map instead updates element
values in-place.
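As a rough sketch of how the two helpers could be combined in a program,
assuming the struct bpf_map_def/SEC("maps") convention from samples/bpf
and a made-up 'struct flow_key' ('key' and 'skb' are as in the example
above; none of this is part of the patch itself):

struct pair {
	u32 packets;
	u32 bytes;
};

struct bpf_map_def SEC("maps") flow_map = {
	.type = BPF_MAP_TYPE_PERCPU_HASH,
	.key_size = sizeof(struct flow_key),
	.value_size = sizeof(struct pair),
	.max_entries = 1024,
};

	struct pair *ptr, init = { 1, skb->len };

	ptr = bpf_map_lookup_elem(&flow_map, &key);
	if (ptr) {
		/* this cpu's copy of the value; no BPF_XADD needed */
		ptr->packets++;
		ptr->bytes += skb->len;
	} else {
		/* creates the element with the other cpus' copies zeroed
		 * and this cpu's copy set to 'init'
		 */
		bpf_map_update_elem(&flow_map, &key, &init, BPF_ANY);
	}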
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/uapi')
-rw-r--r--	include/uapi/linux/bpf.h	1
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index aa6f8571de13..43ae40c8763e 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -81,6 +81,7 @@ enum bpf_map_type {
 	BPF_MAP_TYPE_ARRAY,
 	BPF_MAP_TYPE_PROG_ARRAY,
 	BPF_MAP_TYPE_PERF_EVENT_ARRAY,
+	BPF_MAP_TYPE_PERCPU_HASH,
 };
 
 enum bpf_prog_type {
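For completeness, a minimal user-space sketch (not part of this patch) of
creating such a map directly with the bpf(2) syscall, using only fields
that already exist in union bpf_attr; the key/value sizes and max_entries
are arbitrary example values:

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int create_percpu_hash(void)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.map_type = BPF_MAP_TYPE_PERCPU_HASH;
	attr.key_size = sizeof(uint32_t);	/* e.g. a flow hash */
	attr.value_size = sizeof(uint64_t);	/* per-cpu counter */
	attr.max_entries = 1024;

	/* returns a map fd on success, -1 with errno set on failure */
	return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
}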