This addresses an issue where SAI operations were observed to sometimes take a very long time to complete (over 45 ms). The ALPM distributed thread was determined to be the cause.
The fix is to disable this debug thread, which has no functional purpose.
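For reference, the change boils down to a single Broadcom config.bcm property. A minimal sketch, assuming the property name used in the referenced PRs (verify against #9190/#9199):

```
# Disable the ALPM distributed hit-bit thread. It is a debug aid with no
# functional purpose, and it can stall SAI operations for tens of ms.
# Property name is an assumption taken from the referenced fix; verify it.
l3_alpm_hit_skip=1
```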
Preliminary tests look fine: BGP neighbors were all up with the proper routes programmed, and all interfaces are up.
Manually ran the fib test cases on 7050CX3 (TD3), TD2, TH, TH2, and TH3 based platforms, and they all passed.
Note: the testing was done on the 20201230 image; this change is being ported to the master branch.
No need to port this to 20201230 branch as a separate PR was already done for that branch. (#9190)
This PR ports the changes made by #9199, which could not be cherry-picked directly to the 202106 branch.
Why I did it
The first 4 ports on this DUT are breakout ports and might not always be connected in the lab. Mark them as 'RJ45' to skip the SFP check, since they are disabled by default.
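A sketch of what marking the ports can look like in the hwsku's hwsku.json, assuming this platform uses the per-interface port_type attribute (the file, port names, and breakout layout here are illustrative):

```
{
    "interfaces": {
        "Ethernet0": { "port_type": "RJ45" },
        "Ethernet1": { "port_type": "RJ45" },
        "Ethernet2": { "port_type": "RJ45" },
        "Ethernet3": { "port_type": "RJ45" }
    }
}
```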
How to verify it
run platform test_reboot.py
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
* Update default cable len to 0m for TD2 (#8298)
* Update sonic-cfggen tests with the correct cable len
Signed-off-by: Neetha John <nejo@microsoft.com>
As part of the buffer reclamation effort for TD2, the default cable length is set to 0m, so unused ports are assigned a cable length of 0m.
Why I did it
To align with the changes in Azure/sonic-swss#1830
How to verify it
- With the default cable length set to 0m and the associated changes in swss, the CABLE_LENGTH table had '0m' set for unused ports, and accordingly more space was reserved for the shared pool (see the sketch below)
- sonic-cfggen tests passed with the cable length update
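A minimal sketch of the resulting CONFIG_DB content, assuming one connected port (Ethernet0, whose 40m length is illustrative) and the remaining ports unused:

```
{
    "CABLE_LENGTH": {
        "AZURE": {
            "Ethernet0": "40m",
            "Ethernet4": "0m",
            "Ethernet8": "0m"
        }
    }
}
```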
Why I did it
The 7050 S4Q31 MMU configuration is missing the ALPM configuration, so not enough memory is reserved for routes. Orchagent crashes on a nightly testbed with 6400 route entries.
How I did it
Add the missing ALPM configurations.
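A sketch of the ALPM properties involved, using the config.bcm keys commonly found on SONiC Broadcom platforms (assumed to match this fix; verify against the actual diff):

```
# Enable ALPM in combined mode so table memory is reserved for LPM routes
l3_alpm_enable=2
# Allow 128-bit IPv6 prefixes to be stored in ALPM
ipv6_lpm_128b_enable=1
```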
How to verify it
Loaded the configuration on a testbed and verified that the new configuration exists and there are no more crashes.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
PG profile settings need to be aligned with Arista-7050-QX-32S
How I did it
Copy over the current settings from Arista-7050-QX-32S and define params for 10G and 1G speeds as well
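For context, these settings live in the hwsku's pg_profile_lookup.ini, one row per (speed, cable length) pair. A sketch of the layout with illustrative numbers (not the actual Arista-7050-QX-32S values):

```
# PG lossless profiles.
# speed    cable    size    xon    xoff    threshold
  1000     5m       56368   18432  55120   -3
  10000    5m       56368   18432  55120   -3
  40000    5m       56368   18432  55120   -3
```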
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
Need proper MMU and QoS settings for Arista-7050QX-32S-S4Q31
How I did it
Updated the settings based on Arista-7050-QX-32S
However, in SAI 3.7 the default ECMP behaviour changed to 128 groups with 128 members each.
This change makes sure we use the same ECMP group and members-per-group values on 3.7 as well, so that the behaviour is consistent.
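A sketch of the kind of config.bcm override this implies; the property name below is an assumption (a common Broadcom ECMP sizing knob), not confirmed from the actual change:

```
# Pin ECMP group/member sizing so SAI 3.7 matches the pre-3.7 behaviour
# instead of its new 128-group x 128-member default.
# Property name is an assumption; verify against the actual commit.
l3_max_ecmp_mode=1
```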
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>