author     Donald Sharp <sharpd@cumulusnetworks.com>    2017-09-27 02:06:13 +0200
committer  Donald Sharp <sharpd@cumulusnetworks.com>    2017-09-27 02:06:13 +0200
commit     65d4e0c69b5cb6e7811784ef563e1b23ba162cff (patch)
tree       f23648ad5e15b7050dd250a210baa5d550078fe8 /bgpd/bgp_nexthop.h
parent     Merge pull request #1248 from vjardin6WIND/cleanup (diff)
bgpd: Reduce multiaccess_check_v4 overhead for subgroups
Perf results at scale (>1k peers) showed a non-trivial amount of time
spent in bgp_multiaccess_check_v4. Examination of the function shows
that we look up the nexthop's connected node on every call, and must
unlock it again after each iteration. Rewrite so that the nexthop node
is looked up once. This cuts the node lookups roughly in half, which
should yield some performance improvement. There are probably better
things to do here, but they would require deeper thought.

Signed-off-by: Donald Sharp <sharpd@cumulusnetworks.com>
Diffstat (limited to 'bgpd/bgp_nexthop.h')
-rw-r--r--  bgpd/bgp_nexthop.h  2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/bgpd/bgp_nexthop.h b/bgpd/bgp_nexthop.h
index b482778fd..2c5b2ab11 100644
--- a/bgpd/bgp_nexthop.h
+++ b/bgpd/bgp_nexthop.h
@@ -82,6 +82,8 @@ extern int bgp_nexthop_lookup(afi_t, struct peer *peer, struct bgp_info *,
 			      int *, int *);
 extern void bgp_connected_add(struct bgp *bgp, struct connected *c);
 extern void bgp_connected_delete(struct bgp *bgp, struct connected *c);
+extern int bgp_subgrp_multiaccess_check_v4(struct in_addr nexthop,
+					   struct update_subgroup *subgrp);
 extern int bgp_multiaccess_check_v4(struct in_addr, struct peer *);
 extern int bgp_config_write_scan_time(struct vty *);
 extern int bgp_nexthop_self(struct bgp *, struct in_addr);
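
The .c-side implementation is not part of this diff (the diffstat is limited
to bgp_nexthop.h), but the pattern the commit message describes can be
sketched. The following is a minimal, self-contained C model, not FRR's
actual code: struct subnet, connected_lookup(), and the flat peer array are
hypothetical stand-ins for bgp->connected_table and the subgroup's peer
list. It shows the optimization itself: resolve the nexthop's connected
node once, outside the peer loop, and reuse it for every peer instead of
repeating the lookup (and the matching unlock) per call.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for an entry in bgp->connected_table. */
struct subnet {
	uint32_t network; /* network address, host byte order */
	uint32_t mask;    /* netmask, host byte order */
};

/* Connected subnets of the speaker (illustrative data only). */
static const struct subnet connected[] = {
	{ 0xC0A80100, 0xFFFFFF00 }, /* 192.168.1.0/24 */
	{ 0x0A000000, 0xFF000000 }, /* 10.0.0.0/8 */
};

/* Stand-in for the connected-table node lookup the commit talks about. */
static const struct subnet *connected_lookup(uint32_t addr)
{
	for (size_t i = 0; i < sizeof(connected) / sizeof(connected[0]); i++)
		if ((addr & connected[i].mask) == connected[i].network)
			return &connected[i];
	return NULL;
}

/*
 * Sketch of the subgroup-wide check: the nexthop's connected node is
 * resolved exactly once, then compared against each peer's node.  The
 * per-peer bgp_multiaccess_check_v4() path effectively redid the
 * nexthop lookup on every call.
 */
static int subgrp_multiaccess_check(uint32_t nexthop,
				    const uint32_t *peer_addrs, size_t npeers)
{
	const struct subnet *nh_net = connected_lookup(nexthop); /* once */

	if (!nh_net)
		return 0;

	for (size_t i = 0; i < npeers; i++)
		if (connected_lookup(peer_addrs[i]) != nh_net)
			return 0; /* a peer is not on the nexthop's segment */

	return 1; /* nexthop and all subgroup peers share one subnet */
}

int main(void)
{
	uint32_t peers[] = { 0xC0A80102, 0xC0A80103 }; /* 192.168.1.2/.3 */

	printf("%d\n", subgrp_multiaccess_check(0xC0A80101, peers, 2)); /* 1 */
	printf("%d\n", subgrp_multiaccess_check(0x0A000001, peers, 2)); /* 0 */
	return 0;
}

In this model a subgroup of n peers costs n+1 lookups instead of 2n, which
is the roughly-half reduction the commit message claims; the real function
additionally avoids the per-call lock/unlock churn on the route node.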