#### Why I did it
The speed configuration in the SAI profile files has a wrong bitmap value for the 10/50G speed option.
#### How I did it
Fixed the bitmap to the correct value for all SPC1 devices.
#### How to verify it
Configure ports with 10/50G speed on these platforms using this fix.
Backport of https://github.com/Azure/sonic-buildimage/pull/7031 to the 201911 branch
#### Why I did it
To enable parsing of the `AutoNegotiation` element from the `LinkMetadata` section of the minigraph file
#### How I did it
Parse the value of the `AutoNegotiation` element from the `LinkMetadata` section of the minigraph file. If the element is present, an `autoneg` key will be added to the port in the `PORT` table of Config DB with a value of either `0` or `1`.
If an `autoneg` value is also present in port_config.ini, the value from the minigraph will take precedence and override it.
Also remove `AutoNegotiation` and `EnableAutoNegotiation` elements from the `DeviceInfo` section, as we will use this data in the `LinkMetadata` section to determine whether to enable auto-negotiation for a port.
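A minimal sketch of the parsing described above; the XML layout of the `LinkMetadata` section shown here is a simplified assumption, and only the `AutoNegotiation` element name and the resulting `autoneg` key come from the description:
```
# Hedged sketch: map an AutoNegotiation element onto an 'autoneg' key ('0'/'1')
# in a PORT-table-like dictionary. The XML shape here is illustrative only.
import xml.etree.ElementTree as ET

SAMPLE_LINK_METADATA = """
<LinkMetadata>
  <Link><Port>Ethernet0</Port><AutoNegotiation>true</AutoNegotiation></Link>
  <Link><Port>Ethernet4</Port></Link>
</LinkMetadata>
"""

def apply_autoneg(link_metadata, port_table):
    for link in link_metadata.findall("Link"):
        port = link.findtext("Port")
        autoneg = link.findtext("AutoNegotiation")
        if port is None or autoneg is None:
            continue  # element absent: leave any port_config.ini value alone
        # The minigraph value takes precedence over any port_config.ini value
        port_table.setdefault(port, {})["autoneg"] = "1" if autoneg.lower() == "true" else "0"

port_table = {"Ethernet0": {"autoneg": "0"}, "Ethernet4": {}}
apply_autoneg(ET.fromstring(SAMPLE_LINK_METADATA), port_table)
print(port_table)   # Ethernet0 now has autoneg '1'; Ethernet4 is untouched
```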
Why I did it
It was observed that on a multi-asic DUT bootup, the internal BGP sessions between ASICs were taking more time to get ESTABLISHED than the external BGP sessions. The internal sessions were coming up almost exactly 120 seconds later.
On multi-asic platforms, the bgp dockers (one per ASIC) are brought up around the same time at switch start, and they try to establish BGP sessions with neighbors (in peer ASICs) which may not yet be completely up. This results in a BGP connect failure, and the retry happens after 120 seconds, which is the default Connect Retry Timer.
How I did it
Add the command to set the BGP session connect retry timer to 10 seconds for internal BGP neighbors.
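A minimal sketch of what this amounts to in FRR terms: the per-neighbor `timers connect` statement lowers the ConnectRetryTimer from its 120-second default. The ASN and neighbor addresses below are hypothetical, and printing the statements stands in for how the configuration is actually pushed (bgpd template / vtysh):
```
# Hedged sketch: apply FRR's per-neighbor connect retry timer to internal
# (inter-ASIC) BGP neighbors only. ASN and peer IPs are made up.
INTERNAL_NEIGHBORS = ["10.1.0.1", "10.1.0.5"]   # hypothetical internal peer IPs
BGP_ASN = 65100                                  # hypothetical ASN

def internal_connect_retry_cmds(asn, neighbors, retry_secs=10):
    cmds = ["router bgp {}".format(asn)]
    for nbr in neighbors:
        # Lowers the ConnectRetryTimer from the 120s default to retry_secs
        cmds.append(" neighbor {} timers connect {}".format(nbr, retry_secs))
    return cmds

print("\n".join(internal_connect_retry_cmds(BGP_ASN, INTERNAL_NEIGHBORS)))
```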
Included commits in sonic-py-swsssdk
```
63c75c1 2021-03-14 | Workaround Mellanox default vlan has no SAI_VLAN_ATTR_VLAN_ID attribute (#103) [Qi Luo]
```
Included commits in sonic-snmpagent
```
a8c6e36 2021-03-15 | Implement rfc4363 FdbUpdater for lag inside vlan (#204) [Qi Luo]
```
It is possible to have a DHCP relay configuration with no servers/helpers, which results in the DHCP container crashing. This PR fixes the issue by not starting the DHCP relay for VLANs with no DHCP helpers.
resolves: #6931
closes: #6931
Do not add a program group for DHCP relay on VLANs with no DHCP helpers.
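A minimal sketch of that guard, assuming a config-db-style VLAN dictionary and a hypothetical `add_program_group()` helper standing in for the supervisord/template logic:
```
# Hedged sketch: only create a dhcrelay program group for VLANs that actually
# have DHCP helpers configured; add_program_group() is a hypothetical helper.
def add_program_group(vlan, helpers):
    """Stand-in for emitting a supervisord program group for dhcrelay."""
    print("dhcrelay group for {} -> {}".format(vlan, ",".join(helpers)))

vlan_table = {
    "Vlan1000": {"dhcp_servers": ["192.0.2.1", "192.0.2.2"]},
    "Vlan2000": {},  # no helpers -> must not start a relay instance
}

for vlan_name, vlan in vlan_table.items():
    helpers = vlan.get("dhcp_servers", [])
    if not helpers:
        continue  # skipping avoids launching dhcrelay with no servers (the crash)
    add_program_group(vlan_name, helpers)
```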
Unit test
d81828c6740f2d4fca59fe3ec1d0adb1088a9dbb (HEAD -> 201911, origin/201911) Updated lldpRemManAddrTable to use all the management ip address associated with interface. (#201)
093a3c2c5bc688ddc5e5362dc657f19175e12ce8 Fix fdb_vlanmac() on corner cases (#193)
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
- Why I did it
To pick up new features and fixes from SDK/FW and SAI
SDK/FW new features:
All | Added support for multiple modules and cable types. For the full list, contact Nvidia networking support
Spectrum-3 | SN4600C | Added support for up to 5W on ports 49 to 64.
SDK/FW bug fixes:
All | Fast reboot | Fast boot failure from latest 201811 to 201911 and above
Spectrum | 10GbE/1GbE Transceiver (FTLX8574D3BCV) stopped working after firmware upgrade
Spectrum-2 | When device is rebooted with locked Optical Transceivers in split mode, the firmware may get stuck
Spectrum-2 | SN3700 | When connecting at 200GbE to Ixia K400, Ixia receives CRC errors
Spectrum-2 | SN3800 | On rare occasions packet loss may be experienced due to signal integrity issues
Spectrum-2 | When the port is a member of a LAG, after a warmboot and port toggle on the peer-side, the port remains down
Spectrum-3 | SN4700 | While using Optic cable in Split 4x1 mode in PAM4, when two first ports are toggled, the other 2 ports go down
Spectrum-3 | SN4700 | When working in 400GbE, deleting the headroom configuration (changing buffer size to zero) on the fly may cause continual packet drops
SAI
All | Counters | Update tunnel decap counter to capture VNI miss
- How I did it
Update the related version number in the make files and update the submodule pointer accordingly.
- How to verify it
Run regression tests and everything works as expected.
Included commits in sonic-py-swsssdk repo
```
4e0c561 2019-11-19 | read portchannel name from LAG_NAME_MAP_TABLE in COUNTERS_DB (#51) [anilkpandey]
```
Included commits in sonic-snmpagent repo
```
02dc2ce 2021-03-12 | add mock tables for LAG_NAME_MAP_TABLE in COUNTERS_DB (#202) [Qi Luo]
```
Why I did it
To monitor the SSD health condition on the DellEMC S6100 platform post upgrade.
A daemon is introduced to monitor the SSD every hour.
To check for SSD status at boot time and at the time of cold-reboot.
All these changes are supported only for newer SSD firmware.
Added a platform_reboot_pre_check script to prevent cold-reboot based on SSD status.
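A minimal sketch of such an hourly monitoring loop; the `get_ssd_health()` helper and the log wording are hypothetical stand-ins, not the actual DellEMC daemon code:
```
# Hedged sketch of an hourly SSD health poll logged to syslog.
import time
import syslog

POLL_INTERVAL_SECS = 3600  # "every one hour" per the description above

def get_ssd_health():
    """Hypothetical helper returning (health_percent, status_string)."""
    raise NotImplementedError

def monitor_loop():
    while True:
        try:
            health, status = get_ssd_health()
            syslog.syslog(syslog.LOG_INFO,
                          "SSD health: {}%, status: {}".format(health, status))
        except Exception as e:
            syslog.syslog(syslog.LOG_ERR, "SSD monitor failed: {}".format(e))
        time.sleep(POLL_INTERVAL_SECS)
```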
Depends on Azure/sonic-utilities#1472
DO NOT MERGE UNTIL ABOVE PR IS MERGED
Closes issue #6982.
The issue was root-caused to our use of the unix socket as the default mechanism for reading from the DB (#5250). The redis unix socket is created as follows:
admin@str--acs-1:~$ ls -lrt /var/run/redis/redis.sock
srwxrw---- 1 root redis 0 Mar 6 01:57 /var/run/redis/redis.sock
So it used to work fine for the user "root", or if the user was part of the redis group (admin was made part of the redis group by default).
Check if the user has sudo permissions; if so, use the redis unix socket, else fall back to the TCP socket.
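A hedged sketch of that selection; checking the effective UID stands in for the sudo-permission check, and the connector parameters are illustrative rather than the actual DB-connector change:
```
# Hedged sketch: prefer the group-protected unix socket when privileged,
# otherwise fall back to the TCP socket on localhost.
import os
import redis

REDIS_UNIX_SOCKET = "/var/run/redis/redis.sock"

def connect_to_redis(db=0):
    if os.geteuid() == 0:   # stand-in for "user has sudo permissions"
        return redis.Redis(unix_socket_path=REDIS_UNIX_SOCKET, db=db)
    return redis.Redis(host="127.0.0.1", port=6379, db=db)
```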
This PR is cherry-pick of master
https://github.com/Azure/sonic-buildimage/pull/6920
Why I did it
Add support for BGP Monitors on multi asic SONiC platforms.
How I did it
On multi-ASIC SONiC platforms, the BGP monitor session will be established from the backend ASIC.
To achieve this, the following changes are done:
Add BGP monitor configuration on the backend ASIC.
The BGP monitor configuration is present in the DPG of the device in the minigraph.xml of a multi-ASIC device, so this configuration will be added to the config_db of the host when the minigraph is loaded.
To add configuration for this in the Backend ASIC, a new class MultiAsicBgpMonCfg is added to the hostcfgd service to update the config_db of the backend ASIC when the BGP_MONITOR table of the host config_db is updated.
This way incremental BGP_MONITOR configuration can also be handled.
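A hedged sketch of that hostcfgd helper: it mirrors BGP_MONITOR entries from the host config_db into the backend ASIC's config_db. The connector handle, table name, and method signature below are illustrative assumptions, not the exact hostcfgd code:
```
# Hedged sketch of the MultiAsicBgpMonCfg idea described above.
class MultiAsicBgpMonCfg(object):
    def __init__(self, backend_config_db):
        # backend_config_db: a ConfigDBConnector-like handle already connected
        # to the backend ASIC namespace
        self.backend_db = backend_config_db

    def cfg_handler(self, key, data):
        """Invoked on every BGP_MONITOR table change in the host config_db."""
        if data:
            # Added/updated monitor session on the host -> copy to the backend ASIC
            self.backend_db.set_entry("BGP_MONITOR", key, data)
        else:
            # Deleted on the host -> remove from the backend ASIC too, so
            # incremental configuration stays in sync
            self.backend_db.set_entry("BGP_MONITOR", key, None)
```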
Changes to establish the BGP session with the BGP monitor:
Add a route in the host main routing table pointing to one of the pre-defined backend ASICs.
Add an iptables rule on the front ASIC to mark BGP packets whose destination is the IPv4 loopback.
Add an ip rule in the front ASIC namespace to match the marked BGP packets and look up the default table.
Program the default route in the default table of the frontend ASIC namespace as part of start.sh of the BGP container.
This needs to be done as part of start.sh, otherwise the FRR default route will get overwritten.
How to verify it
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
Co-authored-by: Arvind <arlakshm@microsoft.com>
To run VNET route consistency check periodically.
For any failure, monit will raise an alert based on the return code.
The tool will log required details.
- [201911][acl] Expand VLAN into VLAN members when creating an ACL table (#1477)
- [201911][acl-loader] Add support for matching on ICMP and VLAN info (#1476)
- [201911][acl-loader] Improve input validation for acl_loader (#1481)
Signed-off-by: Danny Allen <daall@microsoft.com>
#### Why I did it
Change buffer config for new SKU Mellanox-SN2700-D40C8S8
#### How I did it
Reuse the buffer config of SKU Mellanox-SN2700-D48C8
#### How to verify it
Run sonic-mgmt QoS tests and all passed
This is to address the issue where executing "mmuconfig -p egress_lossy_profile" causes a syncd failure with SAI_STATUS_INSUFFICIENT_RESOURCE for the attribute SAI_BUFFER_PROFILE_ATTR_RESERVED_BUFFER_SIZE.
This change also requires the change from (https://github.com/Azure/sonic-swss/pull/1649)
This SAI change was already tested as part of the (https://github.com/Azure/sonic-swss/pull/1649) PR.
1. Made the command next-hop-self force applicable only on the back-end ASIC BGP. This is done so that the iBGP session running on the backend can send the eBGP-learnt nexthop. The back-end ASIC FRR is able to recursively resolve the eBGP nexthop in its routing table since it knows about all the connected routes advertised from the front-end ASIC.
2. Made all front-end ASIC BGP instances use the global loopback IP (Loopback0) as router-id, and the back-end ASIC BGP use Loopback4096 as router-id and originator-id for the Route-Reflector. This is done so that routes learnt by an external peer do not show Loopback4096 as the router-id in `show ip bgp <route-prefix>` output (illustrated in the sketch after this list).
3. To handle the above change, Loopback4096 needs to be passed from the BGP manager for jinja2 template generation. This was missing, and this change/fix is also needed for https://github.com/Azure/sonic-buildimage/blob/master/dockers/docker-fpm-frr/frr/bgpd/templates/dynamic/instance.conf.j2#L27
4. Enhancement to add multi-asic specific bgpd template generation unit test cases.
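As an illustration of items 1 and 2 above (the addresses and ASN are made up, and printing the lines stands in for the jinja2-rendered FRR configuration in docker-fpm-frr):
```
# Hedged illustration of the front-end vs back-end BGP settings described above.
LOOPBACK0_IP = "10.1.0.1"      # hypothetical global loopback, front-end router-id
LOOPBACK4096_IP = "8.0.0.32"   # hypothetical internal loopback, back-end router-id

def router_id_lines(is_backend, asn=65100):
    lines = ["router bgp {}".format(asn)]
    if is_backend:
        lines.append(" bgp router-id {}".format(LOOPBACK4096_IP))
        # Internal iBGP peers advertise themselves as nexthop even for
        # eBGP-learnt routes ('force')
        lines.append(" neighbor INTERNAL_PEER_V4 next-hop-self force")
    else:
        lines.append(" bgp router-id {}".format(LOOPBACK0_IP))
    return lines

print("\n".join(router_id_lines(is_backend=True)))
```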
Enable BBR config allowas-in 1 for internal peers
Why I did it:
To advertise BBR routes learnt via an eBGP peer in one ASIC/namespace to another ASIC/namespace over iBGP via the Route Reflector.
Install pyroute2, needed for sonic-utilities, in the sonic-slave-stretch docker.
Submodule update of sonic-utilities to the commit 9297d5c5a00e64b5dea94a49a69cb776ac862bdc
The port channels were not getting cleaned up, as the cleanup activity was taking more than 10 seconds, which is the default docker timeout after which a SIGKILL will be sent.
Fix Issue #6537
- Why I did it
The current multi-asic VS hwsku consists of 6 ASICs, with each ASIC having 32 interfaces. When bringing this up, the below issue was seen:
When all 32 interfaces (SONiC interfaces and Linux interfaces) are set to 9100 MTU, a DMA error is seen: "DMA: Out of SW-IOMMU space for 4096 bytes at device 0000:06:03.0", which can be fixed by updating swiotlb=65536 in /host/grub/grub.cfg. In order to keep the multi-asic VS lighter and easier to bring up and test, a new hwsku 'msft_four_asic_vs' is added to represent a 4-asic hwsku with 2 frontend ASICs and 2 backend ASICs, each ASIC having 8 interfaces interconnected by port channels.
- How I did it
Add the msft_four_asic_hwsku directory with the right number of ASIC directories (4) and update the port_config.ini and lanemap.ini files to include the 8 ports' information.
Add topology.sh script to create the internal asic-asic connectivity.
- How to verify it
Update asic.conf with the 4 asic information as below and build sonic-vs.img:
NUM_ASIC=4
DEV_ID_ASIC_0=0
DEV_ID_ASIC_1=1
DEV_ID_ASIC_2=2
DEV_ID_ASIC_3=3
Modify sonic_multiasic.xml to have 8 front panel interfaces.
Create the virtual switch using the "sudo virsh sonic_multiasic.xml" command.
Start the topology service and load config_db files for the switch and each ASIC.
Ensure that all internal interfaces and port channels come up.
multi-asic vs testbed:
Bring up a multi-asic VS testbed with a multi-asic image (asic.conf updated to 4 ASICs) and using the t1-lag topology.
./testbed-cli.sh -t vtestbed.csv -m veos_vtb -k ceos add-topo vms-kvm-four-asic-t1-lag password.txt
Load minigraph/config_dbs.
Ensure all internal and external interfaces come up.
No change on single asic vs.
61acd3a2e4a457f3bc706cbfaf3162b947763864 (HEAD -> 201911, origin/201911) [xcvrd] Change in xcvrd ports cache creation, now ports are being fetched from config DB (#5892) (#155)
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
The current multi-asic VS hwsku consists of 6 ASICs, with each ASIC having 32 interfaces.
When bringing this up, the below issue was seen:
When all 32 interfaces in each namespace (SONiC interfaces and Linux interfaces) are set to 9100 MTU, a DMA error is seen: "DMA: Out of SW-IOMMU space for 4096 bytes at device 0000:06:03.0", which can be fixed by updating swiotlb=65536 in /host/grub/grub.cfg.
Signed-off-by: SuvarnaMeenakshi <sumeenak@microsoft.com>
Update topology script to retrieve hwsku from minigraph if hwsku information is not available in config_db.
Fix clean up of interfaces in msft_multi_asic_vs hwsku topology script.
- Why I did it
When bringing up multi-asic VS switch, topology service is started during boot up.
Topology service starts a shell script which runs the topology script present in the /usr/share/sonic/device// directory. To invoke the hwsku-specific script, the topology script tries to retrieve the hwsku information from config_db.
During initial boot-up, config_db might not be populated. In order to start the topology service before config_db is updated, the topology script is updated to get hwsku information from minigraph.xml if it is available.
This will be helpful to bring up multi-asic VS testbed by loading minigraph and starting topology service.
- How I did it
Update topology.sh script to retrieve hwsku information from minigraph.xml.
Fix the cleanup function in the msft_multi_asic_vs topology script.
- How to verify it
single-asic VS - no change; topology service is only enabled for multi-asic VS.
multi-asic VS - Bring up the multi-asic VS image, copy the minigraph to the VS image, and start the topology service. The topology service should start successfully.
To test the cleanup function fix, start the topology service and make sure interfaces are created and moved to the right namespaces.
Stop the topology service and make sure the namespaces do not have any interfaces and all front-end interfaces are present in the default namespace.
What I did:
For multi-asic platforms, added an iptables v4 rule to allow communication on the docker bridge IP.
For multi-asic platforms, extended the iptables v4 rules to iptables v6 as well.
For multi-asic platforms, made all internal rules applicable to all protocols (not filtered based on TCP/UDP). This is done to be consistent with the localhost rule.
For multi-asic platforms, made the NAT rule (to forward traffic from a namespace to the host) generic for all protocols, and also use the source IP, if present, for matching.
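A hedged sketch (in the caclmgrd style of building iptables command strings) of what such protocol-agnostic internal rules might look like; the bridge and IPv6 addresses and the INPUT chain are assumptions for illustration only:
```
# Hedged sketch: generate v4 and v6 allow rules for the docker bridge IP
# without any -p tcp/-p udp filter, matching the description above.
DOCKER_BRIDGE_IPV4 = "240.127.1.1"   # hypothetical per-namespace docker bridge IP
DOCKER_BRIDGE_IPV6 = "fd00::1"       # hypothetical v6 counterpart

def build_internal_allow_rules(v4_ip, v6_ip):
    rules = []
    rules.append("iptables -A INPUT -d {}/32 -j ACCEPT".format(v4_ip))
    rules.append("ip6tables -A INPUT -d {}/128 -j ACCEPT".format(v6_ip))
    return rules

for rule in build_internal_allow_rules(DOCKER_BRIDGE_IPV4, DOCKER_BRIDGE_IPV6):
    print(rule)   # caclmgrd-style code would run these via subprocess
```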