- Why I did it
Update hw-mgmt to a new version to pick up support for the SN4600C A1 system.
- How I did it
Update the pointer of the hw-mgmt submodule
Update the hw-mgmt version number
Remove the stale code patch to the hw-mgmt userspace code.
- How to verify it
Run platform regression on Mellanox platforms.
Signed-off-by: Kebo Liu <kebol@nvidia.com>
Why I did it
Fix typo and missing files in SN3800 and SN4600C's buffer templates
How I did it
Rename ingress_lossless_xoff_size => ingress_lossless_pool_xoff
Add missing files for SN4600C-D100C12S2
How to verify it
Deploy the fix and verify that the device comes up.
Signed-off-by: Stephen Sun <stephens@nvidia.com>
#### Why I did it
To pick up fixes from submodule sonic-sairedis which include the following fixes:
```
commit 1027eef3a331e84560827c7584ee8009baf434d5 (HEAD -> 202012, origin/202012)
Author: gechiang <62408185+gechiang@users.noreply.github.com>
Date: Wed Dec 8 03:13:34 2021 -0800
[202012] Prevent other notification event storms to keep enqueue unchecked and drained all memory that leads to crashing the switch router (#976)
commit 94455e50d3444dcd60093b7a39c7f427337a94d2
Author: VenkatCisco <77468614+VenkatCisco@users.noreply.github.com>
Date: Tue Jun 15 03:23:20 2021 -0700
Add cisco-8000 checks to syncd_init_common (#839)
commit 2df539483ed68519c3c9c6df958d3ed2f31dd629
Author: Kamil Cudnik <kcudnik@gmail.com>
Date: Mon Dec 6 20:50:23 2021 +0100
[lgtm] Add gmock libs to lgtm (#979)
```
#### Why I did it
Update sonic-swss-common
54879741 [202012][schema] Add vnet route tunnel and advertise network tables for state_db (Azure/sonic-swss-common#563)
a5394f9d Update for BFD, default route table (Azure/sonic-swss-common#550)
Update sonic-swss
fbbe5bcc [202012][pfc_detect] fix RedisReply errors (Azure/sonic-swss#2078)
5762b0c2 [Reclaim buffer][202012] Reclaim unused buffer for dynamic buffer model (Azure/sonic-swss#1985)
33e9bd19 [Document][202012] Supply the missing ingress/egress port profile list in document (Azure/sonic-swss#2066)
1b6ffba1 [Reclaiming buffer][202012] Support reclaiming buffer in traditional buffer model (Azure/sonic-swss#2063)
afb33f16 [202012] Update default route status to state DB (Azure/sonic-swss#2009) (Azure/sonic-swss#2067)
b9c44f75 Common code update for reclaiming buffer (backport community PR Azure/sonic-swss#1996 to 202106/202012) (Azure/sonic-swss#2061)
cf5182d8 [request parser] Allow request parser to parse multiple values
#### Why I did it
The capability files were incorrect in comparison to the marketing spec of the SN4410 platform.
#### How I did it
Aligned the capability files according to the marketing spec.
#### How to verify it
Did basic manual sanity checks:
- Check if critical docker containers were UP
- Check if interfaces were created and were UP
- Check the interfaces created in the syncd docker container by executing the sx_api_ports_dump.py script
- Check the logs from switch startup; everything was OK
- Verified the port breakout
- Why I did it
To have the ability to use the PRM sniffer.
- How I did it
Enabled the option in configure flags.
- How to verify it
Built and ran on a switch. Enabled the feature at runtime and checked the sniffer recording.
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
Why I did it
There are scenarios in which End-of-RIB from some of the peers arrives after reconciliation. In such scenarios, if the route selection deferral timer has the default value of 360 seconds, FRR would not set up routes and all routes would be removed after reconciliation. This PR reduces the route selection deferral timer so that at least the routes to part of the peers get restored at the point of reconciliation.
Fixes #7488
How I did it
Reduce route selection deferral timer for bgp graceful restart to 15 seconds.
- Create a script in the orchagent docker container which listens for these encapsulated packets which are trapped to CPU (indicating that they cannot be routed/no neighbor info exists for the inner packet). When such a packet is received, the script will issue a ping command to the packet's inner destination IP to start the neighbor learning process.
- This script is also resilient to portchannel status changes (i.e. interface going up or down). An interface going down does not affect traffic sniffing on interfaces which are still up. When an interface comes back up, we restart the sniffer to start capturing traffic on that interface again.
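A rough, hypothetical sketch of such a script using scapy; the interface name, the BPF filter, and the assumption that the trapped packets are IP-in-IP encapsulated are illustrative, not the actual orchagent-container script:
```python
# Hypothetical sketch only: ping the inner destination of trapped IP-in-IP packets
# so the neighbor learning process starts; names/filters here are assumptions.
import subprocess
from scapy.all import IP, sniff

def handle_packet(pkt):
    # For an IP-in-IP packet, the second IP layer is the inner (original) header.
    inner = pkt.getlayer(IP, 2)
    if inner is not None:
        # Ping the inner destination to kick off neighbor resolution.
        subprocess.Popen(["ping", "-c", "1", inner.dst],
                         stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

# Sniff on an up portchannel interface; when an interface flaps back up, the real
# script restarts the sniffer so that interface is captured again.
sniff(iface="PortChannel0001", filter="ip proto 4", prn=handle_packet, store=False)
```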
Backport https://github.com/Azure/sonic-buildimage/pull/9259 to 202012
#### Why I did it
The Nvidia platform API does not support setting the LED to orange.
#### How I did it
Allow the user to set the LED to orange.
#### How to verify it
Manual test
**- Why I did it**
I'm updating the jinja2 template to support getting SNMP information from the redis configdb.
I'm using the format approved here:
https://github.com/Azure/SONiC/pull/718
This will pave the way for us to deprecate the use of the snmp.yml in the future.
Right now we will still be using both the snmp.yml and configdb to get variable information in order to create the snmpd.conf via the sonic-cfggen tool.
**- How I did it**
I first updated the SNMP Schema in PR #718 to get that approved as a standardized format.
Then I verified I could add SNMP configs to the configdb using this standard schema. Once the configs were added to the configdb, I updated the snmpd.conf.j2 file to support the updates via the configdb while still using the variables in the snmp.yml file in parallel. This way we will have backward compatibility until we can fully migrate to the configdb only.
By updating the snmpd.conf.j2 template and running the sonic-cfggen tool, snmpd.conf gets generated using the values in both the configdb and the snmp.yml file.
Co-authored-by: trvanduy <trvanduy@microsoft.com>
- Why I did it
Also recalculated all parameters with the latest algorithm with per-speed peer response time taken into account
- How I did it
Detailed information of each SKU:
C64:
t0: 32 100G downlinks and 32 100G uplinks
t1: 56 100G downlinks and 8 100G uplinks with 2km-cable supported
D112C8: 112 50G downlinks and 8 100G uplinks.
D48C40: 48 50G downlinks, 32 100G downlinks, and 8 100G uplinks
D100C12S2: 4 100G downlinks, 2 10G downlinks, 100 50G downlinks, and 8 100G uplinks
2km cable is supported for C64 on t1 only
- How to verify it
Run regression test (QoS)
Signed-off-by: Stephen Sun <stephens@nvidia.com>
Why I did it
Arista did not update the DB with EEPROM info. The previous PR had issues and was reverted.
How I did it
Had the Arista EEPROM class inherit the class that has the method to update the DB (see the sketch after this entry). Updated platform API methods for Arista on 202012.
How to verify it
In redis-cli the keys and values can be seen. Can use sonic-mgmt testing to verify behavior, and see the chassis platform API methods have not regressed.
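A hypothetical sketch of the inheritance change described in this entry; it is not the actual Arista code, and the TlvInfoDecoder constructor arguments, module path, and EEPROM sysfs path are assumptions based on sonic-platform-common conventions:
```python
# Illustrative only: an EEPROM class inheriting the TLV decoder that can push
# decoded contents into the DB; paths and arguments are assumptions.
from sonic_platform_base.sonic_eeprom import eeprom_tlvinfo

class Eeprom(eeprom_tlvinfo.TlvInfoDecoder):
    """EEPROM class inheriting the decoder that can update the DB."""

    def __init__(self, eeprom_path="/sys/class/i2c-adapter/i2c-1/1-0050/eeprom"):
        # TlvInfoDecoder(path, start, status, ro) provides read_eeprom() and
        # update_eeprom_db(), so the decoded TLV contents can be written to the DB.
        super().__init__(eeprom_path, 0, '', True)

    def refresh_db(self):
        eeprom_data = self.read_eeprom()
        self.update_eeprom_db(eeprom_data)
```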
- Why I did it
Support zero buffer profiles
1. Add buffer profiles and pool definition for zero buffer profiles
2. Support applying zero profiles on INACTIVE PORTS
3. Enable dynamic buffer manager to load zero pools and profiles from a JSON file
- How I did it
Add buffer profiles and pool definition for zero buffer profiles
If the buffer model is static:
* Apply normal buffer profiles to admin-up ports
* Apply zero buffer profiles to admin-down ports
If the buffer model is dynamic:
* Apply normal buffer profiles to all ports
* The buffer manager will take care of it when a port is shut down
Update buffers_config.j2 to support INACTIVE PORTS by extending the existing macros to generate the various buffer objects, including PGs, queues, ingress/egress profile lists
Originally, all the macros to generate the above buffer objects took active ports only as an argument.
Now that buffer items need to be generated on inactive ports as well, an extra argument representing the inactive ports needs to be added.
To be backward compatible, a new series of macros is introduced that takes both active and inactive ports as arguments.
The original version (with active ports only) is checked first. If it is not defined, then the extended version is called.
Only vendors who support zero profiles need to change their buffer templates
Enable buffer manager to load zero pools and profiles from a JSON file:
The JSON file is provided on a per-platform basis
It is copied from the platform/<vendor> folder to the /usr/share/sonic/templates folder at compile time and rendered when the swss container is created (see the loader sketch at the end of this section).
To make code clean and reduce redundant code, extract common macros from buffer_defaults_t{0,1}.j2 of all SKUs to two common files:
One in Mellanox-SN2700-D48C8 for single ingress pool mode
The other in ACS-MSN2700 for double ingress pool mode
Those files for all other SKUs are symbolic links to the above files
Update sonic-cfggen test accordingly:
* Adjust example output file of JSON template for unit test
* Add a unit test for Mellanox's new buffer templates.
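As a purely illustrative sketch of the "load zero pools and profiles from a JSON file" step above: the key layout used here ("BUFFER_POOL_TABLE:<name>", "BUFFER_PROFILE_TABLE:<name>") and the file path are assumptions, not the documented format of zero_profiles.json, and the real loader is the C++ dynamic buffer manager.
```python
# Illustration only: split a rendered zero-profiles JSON file into pools and profiles.
import json

def load_zero_buffer_items(path="/usr/share/sonic/templates/zero_profiles.json"):
    with open(path) as f:
        entries = json.load(f)
    pools, profiles = {}, {}
    for entry in entries:
        for key, fields in entry.items():
            if not isinstance(fields, dict):
                continue  # skip metadata fields such as an "OP" marker
            table, _, name = key.partition(":")
            if table == "BUFFER_POOL_TABLE":
                pools[name] = fields
            elif table == "BUFFER_PROFILE_TABLE":
                profiles[name] = fields
    return pools, profiles
```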
- How to verify it
Regression test.
Unit test in sonic-cfggen
Run regression test and manually test.
Signed-off-by: stephens <stephens@nvidia.com>
#### Why I did it
Merged from master branch: https://github.com/Azure/sonic-buildimage/pull/9443
Fix the issue where nodesource.list cannot be read; it is caused by the full path not being used.
```
2021-12-03T06:59:26.0019306Z Removing intermediate container 77cfe980cd36
2021-12-03T06:59:26.0020872Z ---> 528fd40e60f6
2021-12-03T06:59:26.0021457Z Step 81/81 : RUN post_run_buildinfo
2021-12-03T06:59:26.0841136Z ---> Running in d804bd7e1b06
2021-12-03T06:59:29.1626594Z [91mDEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality.
2021-12-03T06:59:34.2960105Z [0m[91m/usr/bin/sed: can't read nodesource.list: No such file or directory
2021-12-03T06:59:34.5094880Z [0mThe command '/bin/sh -c post_run_buildinfo' returned a non-zero code: 2
```
6c6151b Fix unstable unit tests (state change handler wasn't invoked) (#8)
2f7dc0a support code diff coverage (#5)
83f0002 Force mux state switch to standby if triggered from Cli (#6)
signed-off-by: Jing Zhang zhangjing@microsoft.com
Contains the following commits
239cb5c [flex counter] Flex counter threads consume too much CPU resources (Azure/sonic-utilities#1925)
8a3b41a [load_minigraph] Delay pfcwd start until the buffer templates are rendered (Azure/sonic-utilities#1937)
Fix no space left on device issue in tmpfs.
2021-12-01T06:30:40.1651742Z cp: write error: No space left on device
2021-12-01T06:30:40.1652225Z Failure: local_fs_run():/dev/vdb Unable to copy /tmp/tmp.gl4Sgp/onie-installer.bin to tmpfs
- Why I did it
To include latest fixes.
1. On CMIS modules, after low power configuration, the firmware waited for the module state to be ModuleReady instead of ModuleLowPower, causing delays.
2. When connecting Spectrum devices with optical transceivers that support RXLOS, a remote-side port down might cause the switch firmware to get stuck and cause unexpected switch behavior.
3. On rare occasions, when working with port rates of 1GbE or 10GbE and congestion occurs, packets may get stuck in the chip and may cause the switch to hang.
4. When ECMP has a high number of next-hops based on VLAN interfaces, in some rare cases packets will get a wrong VLAN tag and will be dropped.
5. When using SN4600C with copper or optical loopback cables at NRZ speeds, the link may come up with long link-up times (up to 70 seconds).
6. When connecting SN4600C to SN4600C after Fastboot in 50GbE No_FEC mode with a copper cable, the link up time may take ~20 seconds.
- How I did it
Updated SDK submodule and relevant makefiles with the required versions.
- How to verify it
Build an image and run tests from "sonic-mgmt".
Signed-off-by: Volodymyr Samotiy <volodymyrs@nvidia.com>
Support marvell-armhf dpkg cache and the azp check.
Waiting for PR #9381 to be merged into the 202012 branch, so only the azp template change is in this PR.
Move the VS build to a new stage BuildVS, change the Test stage to depend only on BuildVS, and run BuildVS and the other platforms' builds in parallel. The Test stage does not depend on the marvell-armhf build, which reduces the overall build time caused by the longer build time of the marvell-armhf build.
Why I did it
Fix the issue where some of the version files are not used.
One example is the version file version-py3-all-armhf: when building marvell-armhf, the version file is expected to be used, but it is not.
- Consolidate the two [Service] sections by moving the ExecStartPre line for mark_dhcp_packet.py to the first section and removing the second.
- Make the mark_dhcp_packet.py file executable
- Also clean up mark_dhcp_packet.py
- Remove unused imports
- Fix spacing and line lengths to conform to PEP8
Signed-off-by: Lawrence Lee <lawlee@microsoft.com>
Why I did it
Resolves #8979 and #9055
How I did it
Remove the file static.conf.j2, which adds the default route on eth0, from the bgp docker
Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>
- Why I did it
This is to update the common sonic-buildimage infra for reclaiming buffer.
- How I did it
Render zero_profiles.j2 to zero_profiles.json for vendors that support reclaiming buffer
The zero profiles will be referenced in PR [Reclaim buffer] Reclaim unused buffers by applying zero buffer profiles #8768 on Mellanox platforms and there will be test cases to verify the behavior there.
Rendering is done here to pass the Azure pipeline.
Load zero_profiles.json when the dynamic buffer manager starts
Generate inactive port list to reclaim buffer
Signed-off-by: Stephen Sun <stephens@nvidia.com>
- Why I did it
When a PSU is powered off, the PSU is still in the switch and the airflow is still the same. In this case, it is not necessary to set the FAN speed to 100%.
- How I did it
When a PSU is powered off, don't treat it as absent.
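A minimal sketch of the intended policy, assuming the standard platform API methods get_presence() and get_powergood_status(); the helper name and the speed values are illustrative, not the actual Nvidia thermal-control code:
```python
# Illustration only: decide the fan speed from PSU presence, not power state.
def pick_fan_speed(psus, normal_speed=60):
    """Return the fan speed percentage to apply for the current PSU state."""
    for psu in psus:
        if not psu.get_presence():
            # A truly absent PSU leaves a hole in the chassis and breaks the
            # airflow, so boost the fans to full speed.
            return 100
        # A present but powered-off PSU (get_powergood_status() == False) keeps
        # the airflow intact, so it is intentionally NOT treated as absent here.
    return normal_speed
```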
- How to verify it
Adjust existing unit test case
Add new case in sonic-mgmt
Backport https://github.com/Azure/sonic-buildimage/pull/9068 to 202012
#### Why I did it
The command `monit summary -B` can no longer display the status of each critical process, so system-health should not depend on it and needs another way to monitor the status of critical processes. This PR addresses that. monit is still used by system-health to do the file system check as well as customized checks.
#### How I did it
1. Get container names from FEATURE table
2. For each container, collect critical process names from file critical_processes
3. Use "docker exec -it <container_name> bash -c 'supervisorctl status'" to get the process status inside the container, parse the output, and check whether any critical process has exited
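A rough sketch of steps 2-3, not the actual system-health code; the parsing and the RUNNING/EXITED handling are simplified assumptions:
```python
# Illustration only: query supervisorctl inside a container and flag critical
# processes that are not running.
import subprocess

def get_process_states(container):
    """Run supervisorctl inside the container and return {process: state}."""
    result = subprocess.run(
        ["docker", "exec", container, "bash", "-c", "supervisorctl status"],
        capture_output=True, text=True)
    states = {}
    for line in result.stdout.splitlines():
        fields = line.split()
        if len(fields) >= 2:
            # e.g. "bgpd  RUNNING  pid 42, uptime 1:02:03"
            states[fields[0]] = fields[1]
    return states

def failed_critical_processes(container, critical_processes):
    """Return the critical processes that are not in RUNNING state."""
    states = get_process_states(container)
    return [proc for proc in critical_processes
            if states.get(proc, "EXITED") != "RUNNING"]
```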
#### How to verify it
1. Add unit test case to cover it
2. Adjust sonic-mgmt cases to cover it
3. Manual test
Update sonic-sairedis submodule to get the below fixes:
7389704 [202012] Add ACL_TABLE object to break before make list (Azure/sonic-sairedis#971)
f334349 Fix hung issue when installing linux kernel modules (Azure/sonic-sairedis#969)
When we update a SAI package downloaded from a remote server, we currently need to update the version file as well, but the reproducible build feature is not enabled in master, so the issue can only be detected when merging the code into the release branches, such as 202106, 202012, etc.
The reproducible build feature is meant to reduce build failures, so there is no need to break the build when the version is not specified. If the version is not specified, the best choice is to accept the version from the remote server.
Co-authored-by: Ubuntu <xumia@xumia-vm1.jqzc3g5pdlluxln0vevsg3s20h.xx.internal.cloudapp.net>
#### Why I did it
With the current code, the delay takes place even if a simple 'config reload' command is executed, and this is not desired.
This delay should be used only when fast-rebooting.
#### How I did it
Change the type of delay to OnBootSec instead of OnActiveSec.
#### How to verify it
Fast-reboot with this PR and observe the delay.
Run the 'config reload' command and observe that no delay occurs.
The recent release of redis 4.0.0 or newer (for python3) breaks the sonic-config-engine unit test. Pin it to the last known good version.
ref: https://pypi.org/project/redis/#history
[cherry-pick PR #9123 ]
Why I did it
When sshd realizes that a login can't succeed due to internal device state
or configuration, instead of failing right there it proceeds to prompt for
a password, so that the user does not get any clue about where the failure point is.
Yet to ensure that this login does not proceed, sshd replaces the user-provided password
with a specific pattern of characters matching the length of the user-provided password.
This pattern is "<BS><LF><CR><DEL>INCORRECT", which is bound to fail.
If the user-provided length is smaller or equal, the password is overwritten with a substring of the pattern.
If the user-provided length is greater, the pattern is repeated until the length is exhausted.
But if the PAM TACACS+ plugin sent this password to AAA, the user could get
locked out by AAA for providing an incorrect value.
How I did it
Hence this fix matches the obtained password against the pattern. If it matches, the login fails just before
reaching the AAA server.
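An illustrative sketch of the matching logic described above (the actual change lives in the PAM TACACS+ plugin, which is written in C; this Python snippet only mirrors the idea):
```python
# Illustration only: detect sshd's impossible-to-match placeholder password.
PATTERN = "\b\n\r\x7fINCORRECT"  # <BS><LF><CR><DEL>INCORRECT

def is_sshd_placeholder(password):
    """True if the password is sshd's placeholder pattern, repeated/truncated to its length."""
    if not password:
        return False
    # sshd repeats (or truncates) the pattern to exactly the user-provided length.
    repeated = PATTERN * (len(password) // len(PATTERN) + 1)
    return password == repeated[:len(password)]

# If is_sshd_placeholder() is true, fail the authentication locally instead of
# forwarding the value to the TACACS+ (AAA) server.
```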
How to verify it
Make sure tacacs is properly configured.
Try logging in as, say "user-A"; ensure it succeeds
Pick another user, say user-B, and ensure this user has not logged into this device before (look into /etc/passwd & folders under /home)
Disable monit service (as that could fix the issue using disk_check.py)
Start TCP dump for all TACACS servers.
Simulate Read-only disk
Try logging in using user-B.
Verify it fails, after 3 attempts
Stop tcp dump.
TCP dump should show "authentication" for user-A only
6f198d0 (HEAD -> 202012, origin/202012) [Y-Cable][Broadcom] upgrade to support Broadcom Y-Cable API to release (#230)
1c3e422 SSD Health: Retrieve SSD health and temperature values from generic SSD info (#229)
Signed-off-by: vaibhav-dahiya <vdahiya@microsoft.com>