- Why I did it
Update SAI version - 1.22.0.0
Update SDK/FW version - 4.5.2318/2010_2318
SAI Changes:
1. Port FEC fix for multiple speeds
2. Next hop group optimized bulk API
3. Support BFD remote-disc exchange in negotiation stage
4. Reduce verbosity of the "shared database already exists" print
SDK/FW Fixes:
1. CR space timeout on Hold and Release GW at warmboot
2. SPC-1 port stuck in PHY_UP after the peer side rebooted
3. Memory leak in sx_api_router_ecmp_update_set
- How I did it
Update the pointers to the new SAI and SDK/FW
- How to verify it
Run regression tests
- Why I did it
New changes in this new HW-MGMT package:
1. hw-mgmt: chassis events: Fix voltmon address conflict on connecting
2. hw-mgmt: topology: Add COMEX BRDWL respin support
a. Remove the A2D sensor from all COMEX BRDWL boards
b. Add COMEX BRDWL boards with register defined (config3)
- How I did it
Advance the hw-mgmt repo pointer and update the hw-mgmt version number
- How to verify it
Run platform-related regression test cases on the new testbed.
Signed-off-by: Kebo Liu <kebol@nvidia.com>
Why I did it
This PR cherry-picks #11448 to the 202012 branch after resolving conflicts.
There were conflicts in:
files/build_templates/qos_config.j2
src/sonic-config-engine/tests/test_j2files.py
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
This SKU was missed in the previous PR #11398.
How I did it
Update the dynamic threshold to 0 and the ECN settings to 2 MB/10 MB/5%
How to verify it
Updated unit tests to use the modified values for the 7260 ECN settings.
- Why I did it
MSN3700/3700C/4600C have been re-spun; the new HW versions of these platforms have different sensors, so the correct sensor.conf needs to be applied to each.
- How I did it
Add new sensor.conf files for the new re-spun platforms.
Enhance the get_sensors_conf_path logic for the related platforms so that the correct sensor.conf is loaded for each hardware version (see the sketch below).
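A minimal sketch of what such version-aware selection could look like (the helper name get_sensors_conf_path comes from the description; the hw_revision file path, the per-version file naming, and the fallback behavior are illustrative assumptions):

```python
import os

def _get_hw_revision():
    # Hypothetical source of the board re-spin/revision value; the real
    # platform API that reports this is platform specific.
    try:
        with open("/run/hw-management/system/hw_revision") as f:
            return f.read().strip()
    except OSError:
        return ""

def get_sensors_conf_path(platform_dir):
    """Return the sensors.conf matching the HW version of this platform."""
    hw_rev = _get_hw_revision()
    if hw_rev:
        candidate = os.path.join(platform_dir, "sensors_{}.conf".format(hw_rev))
        if os.path.isfile(candidate):
            return candidate
    # Default for the original (non re-spun) hardware version.
    return os.path.join(platform_dir, "sensors.conf")
```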
- How to verify it
Run the sensors test on the different hardware versions of these platforms.
Signed-off-by: Kebo Liu <kebol@nvidia.com>
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
Improve throughput and latency for 7260 deployments
How I did it
Update the dynamic threshold to 0 and the ECN settings to 2 MB/10 MB/5%
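For reference, a hedged sketch of the kind of entries these values translate into, expressed as ConfigDB-style data (the profile names and field set follow the usual SONiC WRED_PROFILE/BUFFER_PROFILE schema, but the exact keys used by the 7260 templates are not confirmed here; 2 MB and 10 MB are shown in bytes):

```python
# Illustrative only; the real values live in the hwsku's qos/buffer templates.
wred_profile = {
    "WRED_PROFILE": {
        "AZURE_LOSSLESS": {
            "wred_green_enable": "true",
            "green_min_threshold": "2097152",    # 2 MB
            "green_max_threshold": "10485760",   # 10 MB
            "green_drop_probability": "5",       # 5%
        }
    }
}

buffer_profile = {
    "BUFFER_PROFILE": {
        "ingress_lossless_profile": {
            "dynamic_th": "0",                   # dynamic threshold set to 0
        }
    }
}
```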
How to verify it
Updated unit tests to use the modified values for the 7260 ECN settings.
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
There is a need to select different MMU profiles based on the deployment type.
How I did it
There will be separate subfolders (RDMA-CENTRIC, TCP-CENTRIC, BALANCED) in each hwsku folder, which contain deployment-specific MMU and QoS settings. The SonicQosProfile attribute in the minigraph will be used to determine which settings to use. If that attribute is not present, the default settings that exist in the hwsku folder will be used.
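A rough sketch of the selection step, assuming a helper in the config generation path (the subfolder names come from the description; the function name and the way the minigraph attribute reaches it are illustrative):

```python
import os

QOS_PROFILE_DIRS = ("RDMA-CENTRIC", "TCP-CENTRIC", "BALANCED")

def select_qos_dir(hwsku_dir, sonic_qos_profile=None):
    """Pick the directory holding the MMU/QoS templates for a hwsku.

    sonic_qos_profile is the SonicQosProfile value parsed from the minigraph.
    When it is missing or unknown, fall back to the default files that sit
    directly in the hwsku folder.
    """
    if sonic_qos_profile:
        profile = sonic_qos_profile.upper()
        candidate = os.path.join(hwsku_dir, profile)
        if profile in QOS_PROFILE_DIRS and os.path.isdir(candidate):
            return candidate
    return hwsku_dir
```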
Why I did it
When any test job in the Test stage fails, rerunning does not work: the Test stage is skipped automatically, so there is no chance to rerun it, the test checks stay in a failed status, and the PR is blocked from merging forever.
This is caused by the dependency condition on the Test stage; we should explicitly specify the status of the BuildVS stage.
How I did it
Fix stage dependency logic.
Update sonic-linux-kernel submodule pointer to include the following:
* [202012][patch] mlxsw: i2c: Prevent transaction execution for special chip states ([#282](https://github.com/Azure/sonic-linux-kernel/pull/282))
Signed-off-by: dprital <drorp@nvidia.com>
Signed-off-by: Neetha John <nejo@microsoft.com>
This PR contains the following commits
5a54bd7 Added cisco config platform commands (Azure/sonic-utilities#2241)
62c1640 [config/load_mgmt_config] Support load IPv6 mgmt IP (Azure/sonic-utilities#2206)
c061a18 Fix header for the output table following 'show ipv6 interface' command (Azure/sonic-utilities#2219)
ecca18ff [202012] Update load minigraph to load backend acl (Azure/sonic-utilities#2235)
Why I did it
The dhcp6relay daemon may crash due to a null pointer access to the ifa_addr member of struct ifaddrs. An interface is not guaranteed to have a valid ifa_addr; this is the case for some special virtual/pseudo interfaces.
How I did it
Check that the ifa_addr pointer is valid before accessing it.
Why I did it
Fix a missing Debian package causing a reproducible build issue.
The gnupg2 package should be added to the version file.
https://dev.azure.com/mssonic/build/_build/results?buildId=118139&view=logs&j=88ce9a53-729c-5fa9-7b6e-3d98f2488e3f&t=8d99be27-49d0-54d0-99b1-cfc0d47f0318
The following packages have unmet dependencies:
gnupg2 : Depends: gnupg (>= 2.2.27-2+deb11u2) but 2.2.27-2+deb11u1 is to be installed
E: Unable to correct problems, you have held broken packages.
The issue was caused by gnupg2 being removed without the removal being detected.
sonic-buildimage/build_debian.sh, line 250 in 4fb6cf0:
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y remove software-properties-common gnupg2 python3-gi
How I did it
Export the Debian packages whenever any Debian package is removed.
Why I did it
The storage backend has all VLAN members tagged. If untagged packets are received on those links, they are counted as RX_DROPS, which can lead to false alarms in monitoring tools. This ACL hides those drops.
How I did it
Created an ACL template that is loaded during minigraph load for backend devices. The template allows tagged VLAN packets and drops untagged ones.
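Conceptually, the rules the template produces reduce to something like the entries below, shown as ConfigDB-style data (the rule names, priorities, and match fields here are illustrative assumptions; the real shape is defined by the backend ACL template itself):

```python
# Illustrative only: allow traffic tagged with the backend VLAN, drop the rest.
backend_acl_rules = {
    "ACL_RULE": {
        "DATAACL|RULE_1": {
            "PRIORITY": "9999",
            "VLAN_ID": "1000",           # hypothetical backend VLAN id
            "PACKET_ACTION": "FORWARD",
        },
        "DATAACL|DEFAULT_DROP": {
            "PRIORITY": "1",
            "PACKET_ACTION": "DROP",     # catches untagged packets
        },
    }
}
```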
How to verify it
Unit tests
Signed-off-by: Neetha John <nejo@microsoft.com>
#### Why I did it
These methods make some convenient platform and chassis information accessible through sonic-py-common. They were refactored from sonic-utilities and are used in the `show platform summary` and `show version` commands.
#### How I did it
There are two methods; one is `get_platform_info()`, which simply calls local methods to collect useful platform information into a dictionary format. This came directly from sonic-utilities.
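A condensed sketch of what such a method returns (the helper calls below are ones that exist in sonic-py-common to the best of my knowledge, but the exact field set and implementation of the real method are not reproduced here):

```python
from sonic_py_common import device_info, multi_asic

def get_platform_info_sketch():
    # Collect the identifiers that `show platform summary` displays.
    version_info = device_info.get_sonic_version_info()
    return {
        "platform": device_info.get_platform(),
        "hwsku": device_info.get_hwsku(),
        "asic_type": version_info.get("asic_type") if version_info else "",
        "asic_count": multi_asic.get_num_asics(),
    }
```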
Signed-off-by: Neetha John <nejo@microsoft.com>
Backport #11221
Why I did it
For the storage backend, certain rules will be applied to the DATAACL table to allow only VLAN-tagged packets and drop untagged packets.
How I did it
Create the DATAACL table if the device is a storage backend device.
To avoid ACL resource issues, remove the EVERFLOW-related tables if the device is a storage backend device (see the sketch below).
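A simplified sketch of that decision (the function and table handling below are illustrative; the actual change lives in the minigraph parsing / template logic):

```python
def adjust_acl_tables_for_storage_backend(acl_tables, is_storage_backend):
    """Illustrative: drop EVERFLOW tables and ensure DATAACL exists on
    storage backend devices; leave other devices untouched."""
    if not is_storage_backend:
        return acl_tables

    # Free ACL resources by removing mirror (EVERFLOW) tables ...
    acl_tables = {name: tbl for name, tbl in acl_tables.items()
                  if not name.startswith("EVERFLOW")}

    # ... and make sure a DATAACL table exists for the vlan-tag rules.
    acl_tables.setdefault("DATAACL", {
        "type": "L3",
        "stage": "ingress",
        "policy_desc": "DATAACL",
    })
    return acl_tables
```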
How to verify it
Added the following unit tests:
Verify that the EVERFLOW ACL tables are removed and the DATAACL table is added for a storage backend ToR.
Verify that no DATAACL tables are created and the EVERFLOW tables remain for a storage backend leaf.
- Why I did it
This is for the eventual support of multiple architectures for the mellanox platform.
- How I did it
Change the location of the binaries in Switch-SDK-drivers so that the path specifies the target architecture in addition to the target distribution that the debians are built for.
This is the most straightforward way to separate binaries built against different architectures and selectively target them for installation in the mellanox SONiC image.
- How to verify it
Build SONiC for mellanox and verify it compiles successfully.
Why I did it
Support using symbolic links in the platform folder to reduce the image size.
The current solution copies each lazy installation target (xxx.deb file) into each of the folders in the platform folder. The size keeps growing as more and more packages are added to the platform folder. For cisco-8000, for example, the size grows to up to 2 GB, although most of it is duplicate packages within the platform folder.
How I did it
Create a new folder under platform/common, copy all the deb packages into it, and replace the copies in the other folders that use these packages with symbolic links to the common folder.
Why platform.tar?
We implemented a patch for this (see #10775), but the problem is that ONIE uses a very old unzip version that cannot handle symbolic links.
The current solution is similar to PR #10775, but packs the platform folder into a tar package, which ONIE can handle. During installation, the tar package is extracted to the original folder and then removed.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
Why I did it
This PR aims to fix an issue (#10088) by enhancing the memory_checker script.
Specifically, if a container was not created successfully while the device booted/rebooted, then memory_checker does not need to check its memory usage.
How I did it
In the memory_checker script, a function is added to get the names of running containers. If the specified container name is not in the current list of running containers, the script exits without checking its memory usage (see the sketch below).
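Conceptually the added check looks like the following sketch (the real memory_checker may query Docker differently and logs via syslog rather than print):

```python
import subprocess
import sys

def get_running_container_names():
    """Return the names of currently running containers via `docker ps`."""
    output = subprocess.check_output(
        ["docker", "ps", "--format", "{{.Names}}"], text=True)
    return output.split()

def check_memory_or_exit(container_name):
    if container_name not in get_running_container_names():
        # Container was never created (or has been removed); nothing to check.
        print("[memory_checker] {} is not running, exit without checking "
              "memory usage.".format(container_name))
        sys.exit(0)
    # ... the existing memory usage check continues here ...
```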
How to verify it
I tested on a lab device with the following steps:
Stop the telemetry container with the command sudo systemctl stop telemetry.service
Remove the telemetry container with the command docker rm telemetry
Check that the memory_checker script run by Monit generates the syslog message saying it will exit without checking the memory usage of telemetry.
Issue fixed: when performing a warm-reboot or fast-reboot from 201811 or 201911 to 202012, the kernel command line contains duplicate information. This issue is related to a change made to make the 202012 boot0 file more future-proof.
A cold reboot brings everything back to a clean slate, but that is not always desirable.
Changes done:
Added logic to properly detect the end of the Aboot cmdline when the cmdline-aboot-end delimiter is not set (clean case)
Added logic to regenerate the Aboot cmdline when cmdline-aboot-end is set but duplicate parameters exist before it (dirty case), and reorganized some code to handle duplicate parameters in the allowlist; see the sketch below
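A rough illustration of the dirty-case cleanup, assuming the goal is simply to drop repeated kernel parameters while leaving allowlisted keys alone (the delimiter handling and the actual shell implementation in boot0 are not shown):

```python
def dedup_cmdline(params, allowlist=()):
    """Keep the last occurrence of each parameter key; keys in the allowlist
    may legitimately appear more than once and are kept as-is."""
    seen = set()
    kept = []
    for param in reversed(params):
        key = param.split("=", 1)[0]
        if key in allowlist or key not in seen:
            kept.append(param)
            seen.add(key)
    return list(reversed(kept))

# Example: dedup_cmdline("console=ttyS0 rw console=ttyS0".split())
#          -> ['rw', 'console=ttyS0']
```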
- Why I did it
Configure a different DSCP_TO_TC_MAP between uplink and downlink on T1 switches in the dual ToR scenario.
On the T1 uplink, DSCP 2 and 6 are both mapped to TC 1 to avoid such traffic occupying lossless buffers.
On the T1 downlink, they are mapped to TC 2 and 6 respectively (unchanged).
- How I did it
For vendors who want to configure different DSCP_TO_TC_MAP between uplinks and downlinks on T1, they should:
Define the generate_dscp_to_tc_map macro in the SKU's qos.json.j2 file
Define the map AZURE for downlink and AZURE_UPLINK for uplink (sketched below)
Define the jinja2 variable different_dscp_to_tc_map as True
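The two maps differ only in how DSCP 2 and 6 are classified; shown here as plain data for clarity (illustrative fragments only; the real maps are rendered by the qos.json.j2 macro):

```python
# DSCP_TO_TC_MAP|AZURE (downlink): DSCP 2/6 keep their TCs.
dscp_to_tc_downlink = {
    "2": "2",
    "6": "6",
    # remaining DSCP values unchanged
}

# DSCP_TO_TC_MAP|AZURE_UPLINK (uplink): DSCP 2/6 both go to TC 1 so this
# traffic does not occupy lossless buffers.
dscp_to_tc_uplink = {
    "2": "1",
    "6": "1",
    # remaining DSCP values identical to AZURE
}
```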
Signed-off-by: Stephen Sun <stephens@nvidia.com>
Why I did it
- To reduce the rc.local script execution time.
- The rc.local script takes around 22 seconds on the S6100.
How I did it
- Made platform-modules-s6100.service and s6100-lpc-monitor.service run asynchronously with respect to the rc.local script.
How to verify it
- Load the image with the changes; the rc.local execution time drops from approximately 22 seconds to approximately 14 seconds during warm-/fast-reboot upgrades.
- sonic-mgmt test results.