Put a flag for fast-reboot into the db using the EXPIRE feature. This flag is used in other parts of SONiC to start in fast-reboot mode. If we reload the config, the state in the db will be removed.
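A minimal sketch of the idea, not the actual fast-reboot script (the key name, STATE_DB index, and TTL are assumptions):

```python
import redis

# STATE_DB is assumed to be redis logical database 6 on the local instance.
state_db = redis.Redis(host="127.0.0.1", port=6379, db=6)

# Set the fast-reboot flag with an EXPIRE so it cleans itself up.
state_db.set("FAST_REBOOT|system", "1", ex=180)

# Elsewhere in SONiC, a component can test the flag to decide whether to
# start in fast-reboot mode.
fast_reboot_mode = state_db.get("FAST_REBOOT|system") is not None
```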
* adding quotes for string comparison with special characters
* Update dockers/docker-sonic-telemetry/telemetry.sh
Co-Authored-By: Joe LeVeque <jleveque@users.noreply.github.com>
* Update dockers/docker-sonic-telemetry/telemetry.sh
Co-Authored-By: Joe LeVeque <jleveque@users.noreply.github.com>
* Start the bgp_eoiu_mark service to populate the BGP EOIU marker if so configured
* Address code review comments: check db value via "-v" option in sonic-cfggen
* Address code review comment 2: check the string against 'true' directly, instead of counting
* Update start.sh
Added python-libpcap to be used by the arp_responder.py utility. It is needed to set conf.use_pcap, which ensures that L2pcapListenSocket uses libpcap instead of Linux PF_PACKET sockets. With libpcap, the VLAN field is not stripped when the application receives the packet.
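A minimal scapy sketch of this behaviour (the interface name "eth0" is an assumption):

```python
from scapy.config import conf

conf.use_pcap = True  # use libpcap instead of Linux PF_PACKET sockets

from scapy.sendrecv import sniff

# Capture one ARP frame; with libpcap the 802.1Q/VLAN header is preserved.
pkts = sniff(iface="eth0", filter="arp", count=1)
```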
* Rename asn/deployment_id_asn_map.yaml to constants/constants.yaml
* Fix bgp templates
* Add community for loopback when bgpd is isolated
* Use correct community value
Now it's possible to add and remove peers based on ConfigDB
- What I did
Fixed functionality for dynamically adding/removing static bgp peers.
- How I did it
Split the default BGP template into a BGP part and a BGP peer part
Changed bgpcfgd to use the split templates.
- How to verify it
Build an image and run it on your DUT
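For example, a hedged sketch of adding and removing a static peer through ConfigDB so that bgpcfgd picks it up (the neighbor IP and field values are assumptions):

```python
from swsssdk import ConfigDBConnector

config_db = ConfigDBConnector()
config_db.connect()

# Add a peer; bgpcfgd renders the per-peer template and applies it to FRR.
config_db.set_entry("BGP_NEIGHBOR", "10.0.0.33", {
    "asn": "64600",
    "name": "ARISTA01T0",
    "holdtime": "180",
    "keepalive": "60",
})

# Remove the peer again; passing None deletes the entry.
config_db.set_entry("BGP_NEIGHBOR", "10.0.0.33", None)
```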
The files (/etc/frr/*.conf) generated by start.sh are owned by root when they are newly created.
This causes an error when executing "copy running-config startup-config" in vtysh because of a privilege issue.
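A hedged sketch of the kind of fix (the "frr" user/group name is an assumption):

```python
import glob
import shutil

# After the config files are generated, hand ownership back to the frr
# user so vtysh can rewrite them.
for conf_file in glob.glob("/etc/frr/*.conf"):
    shutil.chown(conf_file, user="frr", group="frr")
```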
While doing CLI changes for SNMP configuration, a few changes were made in the backend to handle the modified CLI.
**Changes**
- "community" for "snmp trap" is also made as "configurable". snmpd_conf.j2 is modified to handle the same.
- Changed the snmp.yml file generation from postStartAction to preStartAction in docker_image_ctl.j2 specific to SNMP docker, to ensure that the snmp.yml is generated before sonic-cfggen generates the snmpd.conf.
- Made the code common for the management VRF and the default VRF. Users can configure the SNMP trap and SNMP listening IP for both the management VRF and the default VRF.
* [SNMP] management VRF SNMP support
This commit adds SNMP support for Management VRF using l3mdev.
The included patch provides VRF support: there is no single
"listendevice" configuration; rather, multiple agentAddress
config options can each have their own "interface" to bind to
using "ip%interface". The snmpd.conf file is generated
accordingly from the snmp.yml file and the redis database info.
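For illustration, a hedged snmpd.conf fragment of this style of binding (the addresses and the mgmt VRF device name are assumptions, not taken from the actual template):

```
# Listen on the default VRF as before
agentAddress 10.1.0.32
# Bind this address to the management VRF device via the ip%interface syntax
agentAddress 10.250.0.5%mgmt
```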
Adding the comments from SNMP patch 1376 below:
--------------------------------------------
Since the Linux kernel added support for Virtual Routing
and Forwarding (VRF) in version 4.3
(Note: these won't compile on non-linux platforms)
https://www.kernel.org/doc/Documentation/networking/vrf.txt
Linux users could not use snmpd in its current form to
bind specific listening IP addresses to specific VRF
devices. A simplified description of a VRF interface
is an interface that is a master (a container of sorts)
that collects a set of physical interfaces to form a
routing table.
This set of two patches (one for the V5-7-patches branch and one
for the V5-8-patches branch) is almost identical to an earlier
patch. There is no single "listendevice" configuration. Rather,
multiple agentAddress config options can each have their own
"interface" to bind to using the <ip>%<interface> syntax.
-------------------------------------------
Signed-off-by: Harish Venkatraman <harish_venkatraman@dell.com>
Introduce a new "sflow" container (if ENABLE_SFLOW is set). The new docker will include:
hsflowd : the host-sflow based daemon that acts as the sFlow agent
psample : built from the libpsample repository; useful for debugging sampled packets/groups
sflowtool : dumps sFlow samples locally (e.g. with an in-unit collector)
In the case of SONiC-VS, enable the psample & act_sample kernel modules.
VS' syncd needs iproute2=4.20.0-2~bpo9+1 & libcap2-bin=1:2.25-1 to support tc-sample
tc-syncd is provided as a convenience tool for debugging (e.g. tc-syncd filter show ...)
Update the interfaces of bgpcfgd from swsssdk to swsscommon to unify the set of interfaces with other components. Meanwhile, with the swsscommon interface we can listen to multiple tables in one thread.
Signed-off-by: Ze Gan ganze718@gmail.com
- What I did
Move the interface of bgpcfgd from swsssdk to swsscommon, because bgpcfgd needs to listen to more events in the future and we want to maintain one kind of API; swsscommon is more suitable than swsssdk.
- How I did it
Refactored BGPConfigDaemon into two components, Daemon and BGPConfigManager. We can register new managers with the Daemon object if we want to listen to more events.
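A rough sketch of the one-thread, multi-table subscription pattern swsscommon allows (table names, DB index, and connection details are assumptions, not the actual bgpcfgd code):

```python
from swsscommon import swsscommon

CONFIG_DB = 4  # CONFIG_DB index in the default database map (assumption)
db = swsscommon.DBConnector(CONFIG_DB, "127.0.0.1", 6379, 0)

sel = swsscommon.Select()
tables = []
for name in ("BGP_NEIGHBOR", "DEVICE_METADATA"):
    tbl = swsscommon.SubscriberStateTable(db, name)
    sel.addSelectable(tbl)
    tables.append((name, tbl))

while True:
    state, _ = sel.select(1000)  # wait up to 1s for any subscribed table
    if state != swsscommon.Select.OBJECT:
        continue
    for name, tbl in tables:
        key, op, fvs = tbl.pop()  # op is SET/DEL, fvs is a field/value list
        if key:
            print(name, op, key, dict(fvs))
```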
This is the first step toward moving different database tables into different database instances.
In this PR we only handle the creation of multiple database instances, based on the user configuration in /etc/sonic/database_config.json.
We keep the current method of creating a single database instance if no extra/new DATABASE configuration exists in database_config.json.
If the user configures more db instances in database_config.json, we create those new db instances along with the original db instance that exists today.
The configuration is as below; later we can add more db-related information if needed:
{
    ...
    "DATABASE": {
        "redis-db-01": {
            "port": "6380",
            "database": ["APPL_DB", "STATE_DB"]
        },
        "redis-db-02": {
            "port": "6381",
            "database": ["ASIC_DB"]
        }
    }
    ...
}
The detailed description is in the design doc at Azure/SONiC#271.
The main idea is: when database.sh starts, we check the configuration and generate the corresponding scripts.
The rc.local service handles the old_config copy when loading a new image. There is no dependency between rc.local and the database service today; for safety, and to make sure the copy operation is done before the database tries to read it, we make the database service run after rc.local.
Then, when the database docker starts, we check the configuration and generate the corresponding scripts/.conf files inside the database docker as well.
Based on those conf files, we create the database instances as required.
At last, we ping_pong check that the databases are up and continue.
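A minimal sketch of that check, assuming local redis instances and the database_config.json layout shown above (the file path comes from the description; everything else is illustrative):

```python
import json
import time

import redis

with open("/etc/sonic/database_config.json") as f:
    cfg = json.load(f)

for name, inst in cfg.get("DATABASE", {}).items():
    client = redis.Redis(host="127.0.0.1", port=int(inst["port"]))
    while True:
        try:
            if client.ping():  # redis answers PONG once the instance is up
                print("{} on port {} is ready".format(name, inst["port"]))
                break
        except redis.exceptions.ConnectionError:
            pass
        time.sleep(1)
```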
Signed-off-by: Dong Zhang d.zhang@alibaba-inc.com
* [docker-fpm-frr]: Generate separated staticd.conf for staticd
Generate staticd.conf by templates/staticd.conf.j2 with config DB data
* [docker-fpm-frr]: Remove default_route block from zebra.conf.j2
default_route block already moved to staticd.conf.j2
* [docker-fpm-frr]: Add test for staticd.conf.j2 template
* Add test for staticd.conf.j2 template
* Correct the sample output of zebra.conf.j2 template
* Fix a typo in test_zebra_frr
* [docker-fpm-frr]: Fix test_j2files test errors
* Fix test errors in test_j2files.py and test_j2files_t2_chassis_fe.py
* Fix typo in test_j2files_t2_chassis_fe.py