a118a5ba43
What I did: Added support so that when TSA is applied on a Line Card (LC), the LC is completely isolated from all e-BGP peer devices, both those connected to this LC and those connected to remote LCs.

Why I did it: Currently, when TSA is executed on an LC, routes are withdrawn only from its directly connected e-BGP peers. e-BGP peers on remote LCs can/will (via i-BGP) still hold routes pointing to, and attracting traffic towards, the isolated LC.

How I did it: When TSA is applied on an LC, all routes advertised via i-BGP are tagged with the no-export community, so that a remote LC receiving these routes does not re-advertise them to its connected e-BGP peers (an illustrative sketch of this outbound tagging follows below). In addition, when a route carrying no-export is received over i-BGP, we match on that community and set the route's local preference to a lower value (80) so that the route is removed from the forwarding database. The following scenario explains why this is needed:

- LC1 advertises R1 to LC3
- LC2 advertises R1 to LC3
- On LC3 we have multi-path/ECMP over both LC1 and LC2
- On LC3, R1 received from LC1 is considered the best route over R1 received from LC2 and is advertised to LC3's e-BGP peers
- Now we apply TSA on LC2
- LC3 receives R1 from LC2 with community no-export, and from LC1 the same as before (no change)
- LC3 still attracts traffic for R1, since R1 is still advertised to its e-BGP peers (R1 from LC1 remains the best route)
- LC3 forwards to both LC1 and LC2 (ECMP), which is a problem because LC2 is in TSA mode and should not receive traffic

To fix this scenario, we lower the preference of R1 received from LC2 so that it is removed from the multi-path/ECMP group.

How I verified it: A unit test has been added to verify that template generation is correct. The functionality was verified manually. The sonic-mgmt test case will be updated accordingly.

Please note this PR is on top of #16714, which needs to be merged first.

Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
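The sending-side half of the change (tagging i-BGP advertisements with no-export while TSA is active) lives in the TSA templates of this PR rather than in the file shown below. A minimal sketch of the idea in FRR route-map syntax, purely for illustration: the route-map name TSA_TO_BGP_INTERNAL_PEER_* and the sequence numbers are hypothetical, not taken from the PR's actual templates.

! Hypothetical sketch (not the PR's actual template): while TSA is
! active, tag everything advertised to i-BGP peers with no-export so
! that remote LCs do not re-advertise these routes to their e-BGP
! peers. The receive side (FROM_BGP_INTERNAL_PEER_* in the template
! below) matches no-export and lowers local-preference to 80.
route-map TSA_TO_BGP_INTERNAL_PEER_V4 permit 10
  set community no-export additive
!
route-map TSA_TO_BGP_INTERNAL_PEER_V6 permit 10
  set community no-export additive
!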
69 lines · 2.2 KiB · Django/Jinja
!
! template: bgpd/templates/internal/policies.conf.j2
!
!
{% from "common/functions.conf.j2" import get_ipv4_loopback_address %}
!
{% if CONFIG_DB__DEVICE_METADATA['localhost']['sub_role'] == 'BackEnd' %}
route-map FROM_BGP_INTERNAL_PEER_V4 permit 1
  set originator-id {{ get_ipv4_loopback_address(CONFIG_DB__LOOPBACK_INTERFACE, "Loopback4096") | ip }}
!
route-map FROM_BGP_INTERNAL_PEER_V6 permit 1
  set ipv6 next-hop prefer-global
  on-match next
!
route-map FROM_BGP_INTERNAL_PEER_V6 permit 2
  set originator-id {{ get_ipv4_loopback_address(CONFIG_DB__LOOPBACK_INTERFACE, "Loopback4096") | ip }}
{% elif CONFIG_DB__DEVICE_METADATA['localhost']['switch_type'] == 'chassis-packet' %}
bgp community-list standard DEVICE_INTERNAL_COMMUNITY permit {{ constants.bgp.internal_community }}
bgp community-list standard NO_EXPORT permit no-export
!
route-map FROM_BGP_INTERNAL_PEER_V4 permit 1
  match community DEVICE_INTERNAL_COMMUNITY
  set comm-list DEVICE_INTERNAL_COMMUNITY delete
  set tag {{ constants.bgp.internal_community_match_tag }}
!
route-map FROM_BGP_INTERNAL_PEER_V4 permit 2
  match community NO_EXPORT
  set local-preference 80
!
route-map FROM_BGP_INTERNAL_PEER_V6 permit 1
  set ipv6 next-hop prefer-global
  on-match next
!
route-map FROM_BGP_INTERNAL_PEER_V6 permit 2
  match community DEVICE_INTERNAL_COMMUNITY
  set comm-list DEVICE_INTERNAL_COMMUNITY delete
  set tag {{ constants.bgp.internal_community_match_tag }}
!
route-map FROM_BGP_INTERNAL_PEER_V6 permit 3
  match community NO_EXPORT
  set local-preference 80
!
route-map TO_BGP_INTERNAL_PEER_V4 permit 1
  match ip address prefix-list PL_LoopbackV4
  set community {{ constants.bgp.internal_community }}
!
route-map TO_BGP_INTERNAL_PEER_V6 permit 2
  match ipv6 address prefix-list PL_LoopbackV6
  set community {{ constants.bgp.internal_community }}
!
{% else %}
route-map FROM_BGP_INTERNAL_PEER_V6 permit 1
  set ipv6 next-hop prefer-global
  on-match next
!
{% endif %}
!
route-map FROM_BGP_INTERNAL_PEER_V4 permit 100
!
route-map FROM_BGP_INTERNAL_PEER_V6 permit 100
!
route-map TO_BGP_INTERNAL_PEER_V4 permit 100
!
route-map TO_BGP_INTERNAL_PEER_V6 permit 100
!
!
! end of template: bgpd/templates/internal/policies.conf.j2
!
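For reference, assuming an illustrative constants.bgp.internal_community of 12345:555 and constants.bgp.internal_community_match_tag of 101 (made-up values, not taken from this PR), the chassis-packet IPv4 receive path above renders roughly as shown below. Since FRR's default local-preference is 100, the value 80 applied to no-export routes makes them lose best-path selection and drop out of the ECMP group, as described in the PR text.

! Illustrative rendering of the chassis-packet IPv4 branch
bgp community-list standard DEVICE_INTERNAL_COMMUNITY permit 12345:555
bgp community-list standard NO_EXPORT permit no-export
!
route-map FROM_BGP_INTERNAL_PEER_V4 permit 1
  match community DEVICE_INTERNAL_COMMUNITY
  set comm-list DEVICE_INTERNAL_COMMUNITY delete
  set tag 101
!
route-map FROM_BGP_INTERNAL_PEER_V4 permit 2
  match community NO_EXPORT
  set local-preference 80
!
route-map FROM_BGP_INTERNAL_PEER_V4 permit 100
!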