Why I did it
[Ci] Support the SONiC reproducible build in Azure Pipelines
Enable the reproducible build on the master branch.
Enable the mirror-snapshot-based build on 202205 and later branches, which support snapshot builds.
How I did it
Enable the build flag on Azure Pipelines.
How to verify it
Update sonic-platform-daemons submodule head to include:
05dd3bd mihirpat1 Wed Feb 22 09:19:13 2023 -0800 Update CMIS module types for 2x100G AOC support (sonic-net/sonic-platform-daemons#339)
f132d12 vdahiya12 Thu Feb 9 18:01:38 2023 -0800 [ycabled] add more coverage to ycabled; add minor name change for vendor API CLI return key-values pairs (sonic-net/sonic-platform-daemons#338)
Update sonic-platform-common submodule head to include:
85c20cd mihirpat1 Wed Feb 22 09:18:20 2023 -0800 Update host electrical interface for 2x100G AOC (sonic-net/sonic-platform-common#346)
Signed-off-by: Mihir Patel <patelmi@microsoft.com>
Why I did it
Fix a similar issue to the one seen in #13739, but only for DCS-7050CX3-32S.
How I did it
Add a kernel parameter to tell libata to disable NCQ
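For reference only, the generic libata mechanism for this is the kernel command-line fragment below; the exact form used for DCS-7050CX3-32S (including any device qualifier such as 2.00:) is an assumption, not taken from this change:
```
libata.force=noncq
```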
How to verify it
The message ata2.00: FORCE: horkage modified (noncq) should appear in the dmesg output.
Test results using: fio --direct=1 --rw=randrw --bs=64k --ioengine=libaio --iodepth=64 --runtime=120 --numjobs=4
with NCQ
READ: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=3136MiB (3288MB), run=120053-120053msec
WRITE: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=3161MiB (3315MB), run=120053-120053msec
without NCQ
READ: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=2647MiB (2775MB), run=120069-120069msec
WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=2665MiB (2795MB), run=120069-120069msec
On SONiC VoQ chassis, speed changes from 400G to 100G need to be supported on 400G linecards.
To enable this, the port lanes need to be changed along with the speed. This PR contains the changes to update the port lanes when such a speed change happens.
This PR is intended only for VoQ chassis linecards. These platforms today have 400G ports with 8 serdes lanes, and 100G will operate with 4 serdes lanes. When the port speed changes from 400G to 100G, the first 4 lanes will be used for the 100G port; see the illustration below.
Platforms which support 2x50G PAM4 or 100G PAM4 serdes or other combinations are not handled in this PR.
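As an illustration only (port name, lane numbers, and values are hypothetical, not taken from this change), the effect on a CONFIG_DB PORT entry would look roughly like this:
```
PORT|Ethernet0 (before, 400G over 8 lanes):  lanes: 72,73,74,75,76,77,78,79   speed: 400000
PORT|Ethernet0 (after, 100G over first 4):   lanes: 72,73,74,75               speed: 100000
```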
Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>
To support 64 cores on Arista SKUs. Fixes aristanetworks/sonic#77
Remapped recycle ports to lower core port IDs and set appl_param_nof_ports_per_modid to 64.
#### Why I did it
Add support for California-SB237 conformance.
https://github.com/sonic-net/SONiC/tree/master/doc/California-SB237
#### How I did it
Expire user passwords during build
#### How to verify it
Enable the build flag and check that the default user is prompted for a new password.
Fixes #11873.
#### Why I did it
When loading from minigraph, don't create the members@ array in the PORTCHANNEL table in config_db for port channels. It is no longer needed or used.
In addition, when adding a port channel member from the CLI, that member doesn't get added to the members@ array, resulting in a bit of inconsistency. This change gets rid of that inconsistency, as illustrated below.
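For illustration (names and field values are hypothetical), the intent is that port-channel membership lives only in PORTCHANNEL_MEMBER, and no members@ list is written to the PORTCHANNEL entry:
```
PORTCHANNEL|PortChannel0001                      <- no "members@" list written here anymore
    admin_status: up
    mtu: 9100

PORTCHANNEL_MEMBER|PortChannel0001|Ethernet0     <- membership is tracked here instead
```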
- Why I did it
FW for the Spectrum-4 ASIC is not yet available.
- How I did it
Remove the Spectrum-4 ASIC firmware binaries from the Mellanox FW make files.
Remove Spectrum-4 ASIC handling from the firmware upgrade scripts.
- How to verify it
Run regression test
Why I did it
LGTM is deprecated and its badge no longer works.
GitHub code scanning shows alerts in the Security tab; it does not provide a badge.
How I did it
Remove LGTM badge.
How to verify it
- Why I did it
The flow of the SONiC reproducible build assumes that web packages (retrieved by curl and wget) are located on a single web file server.
Currently there is no way to easily download the packages from the various file servers where they are currently deployed.
In this change we offer a utility to download all relevant packages and upload them to a specific file server.
- How I did it
We implemented a Python script that parses the various version files (generated by the SONiC reproducible build) and identifies all relevant packages.
The script then downloads the files and uploads them to the destination server specified by the user; see the sketch below.
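A minimal sketch of the idea, not the actual utility: the version-file format, function names, and the HTTP PUT upload endpoint below are assumptions.
```python
#!/usr/bin/env python3
"""Sketch: mirror web packages listed in reproducible-build version files."""
import os
import sys
import urllib.request

def parse_version_file(path):
    """Yield package URLs from a version file.

    Assumed format: one 'NAME==URL' entry per line (the real files may differ).
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            _, _, url = line.partition("==")
            if url.startswith("http"):
                yield url

def mirror(url, dest_base_url, workdir="/tmp"):
    """Download one package and upload it to the destination file server via HTTP PUT."""
    name = os.path.basename(url)
    local = os.path.join(workdir, name)
    urllib.request.urlretrieve(url, local)
    with open(local, "rb") as f:
        req = urllib.request.Request(f"{dest_base_url}/{name}", data=f.read(), method="PUT")
        urllib.request.urlopen(req)

if __name__ == "__main__":
    dest = sys.argv[1]                 # e.g. https://files.example.com/sonic-mirror (hypothetical)
    for version_file in sys.argv[2:]:  # e.g. target/versions/default/versions-web (assumed location)
        for pkg_url in parse_version_file(version_file):
            mirror(pkg_url, dest)
```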
- How to verify it
The script was run and verified manually.
Which release branch to backport (provide reason below if selected)
Feature will be added to master only
Signed-off-by: oreiss <oren.reiss@gmail.com>
Fixes: #13395
This fix resolves ownership configuration for vcache:
```
Step 24/40 : RUN pip3 install j2cli
 ---> Running in fcc39df62a98
chown: missing operand after '/sonic/target/vcache/docker-base-bullseye'
Try 'chown --help' for more information.
```
Originally the issue was introduced here: #13287
- Why I did it
To fix ownership configuration
- How I did it
Removed redundant stuff
Signed-off-by: Nazarii Hnydyn <nazariig@nvidia.com>
Why I did it
Seastone does not have the PSU fan status LEDs, and this needs to be reflected in platform.json.
How I did it
Set the PSU fan status LED availability to false, as sketched below.
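A hypothetical platform.json fragment showing the intent; the exact key names and structure used for Seastone are assumptions based on this description, not taken from the change:
```json
"psus": [
    {
        "name": "PSU 1",
        "fans": [
            {
                "name": "PSU 1 Fan 1",
                "status_led": { "available": false }
            }
        ]
    }
]
```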
How to verify it
Verify it with platform_tests/api/test_psu_fans.py::TestPsuFans::test_set_fans_led case.
- Why I did it
Need to add the possibility to choose between dropping packets (using ACL) on ingress or egress in Dual ToR scenario
- How I did it
Add new attribute "mux_tunnel_ingress_acl" to SYSTEM_DEFAULTS table
- How to verify it
Check that the new attribute exists in Redis:
admin@sonic:~$ redis-cli -n 4
127.0.0.1:6379[4]> HGETALL SYSTEM_DEFAULTS|mux_tunnel_ingress_acl
1) "state"
2) "false"
Signed-off-by: Andriy Yurkiv <ayurkiv@nvidia.com>
Fixes #12047. After the C++ implementation of sonic-db-cli, the sonic-db-cli PING command tries to initialize the global database for all database instances at startup. If any instance's database-config.json is not ready yet, it will crash and generate a core file. PR sonic-net/sonic-swss-common#701 only fixes the crash and the process abortion.
Signed-off-by: mlok <marty.lok@nokia.com>
d768d19 Remove warning msg when a transceiver op takes > 200ms
7451689 Support the module.py in IMM to query the Supervisor card eeprom info
Signed-off-by: mlok <marty.lok@nokia.com>
- Why I did it
On Mellanox platforms, the system EEPROM is a soft link provided by hw-management. There is a chance that the config-setup service accesses the EEPROM before hw-management creates it, which causes errors. This PR aims to fix that.
- How I did it
Wait for EEPROM creation in the platform API for up to 10 seconds.
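A minimal sketch of the retry logic; the path and helper name are illustrative, not the actual platform API code:
```python
import os
import time

EEPROM_SYMLINK = "/var/run/hw-management/eeprom/vpd_info"  # illustrative path

def wait_for_eeprom(path=EEPROM_SYMLINK, timeout=10.0, interval=1.0):
    """Poll until hw-management has created the EEPROM symlink, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.islink(path) or os.path.isfile(path):
            return True
        time.sleep(interval)
    return False
```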
- How to verify it
Manual test
Why I did it
The sensors and sensord processes were reporting data on unused sensors.
This led to ALARM messages or erroneous values that could be misinterpreted.
How I did it
Ignore the affected sensors in sensors.conf; see the example below.
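For illustration, lm-sensors configuration supports ignoring individual inputs per chip; the chip name and inputs below are hypothetical, not the actual entries from this change:
```
# Hypothetical sensors.conf fragment: hide inputs that are not wired on this platform
chip "tmp75-i2c-*-4a"
    ignore temp2
    ignore temp3
```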
How to verify it
Check that there are no longer ALARM messages from sensord in the syslog or in the output of sensors
- Why I did it
Add support for systems 4600/4600C/2201, which use SONiC interface names aligned to 4 lanes instead of 8 (the maximum number of lanes per port).
Improve DB access calls; we now use Python library functions.
- How I did it
Use additional information taken from Config DB to create a map from SDK logical index to SONiC interface name, roughly as sketched below.
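A rough sketch of the Config DB side of the mapping only; the real calculator's derivation of the SDK logical index is not shown, and the ConfigDBConnector usage is illustrative:
```python
from swsscommon.swsscommon import ConfigDBConnector  # illustrative; actual DB access may differ

def build_lane_to_interface_map():
    """Map each port's first physical lane to its SONiC interface name (sketch only)."""
    db = ConfigDBConnector()
    db.connect()
    lane_to_port = {}
    for port_name, attrs in db.get_table("PORT").items():
        first_lane = int(attrs["lanes"].split(",")[0])
        lane_to_port[first_lane] = port_name
    return lane_to_port
```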
- How to verify it
Run ECMP calculator on 4600, 4600C and 2201 platforms.
- Why I did it
sfp_event.py gets a PMPE message when a cable event is available. The PMPE message does not contain the label port. The current sfp_event.py uses sx_api_port_device_get to get the attributes of 64 logical ports and finds the label port among those 64 attributes. However, if there are more than 64 ports, sfp_event.py might not find the label port and will drop the PMPE message.
- How I did it
Don't use a hardcoded 64; query the actual number of logical ports instead.
- How to verify it
Manual test
Why I did it
Support upgrading packages and do better cleanup after the build.
How I did it
Remove the unused version-control preference file after the build.
How to verify it
- Why I did it
Add PYTHON3_SWSSCOMMON as a build-time dependency of the Mellanox platform API to avoid issues like:
```
19:34:11 ImportError while loading conftest '/sonic/platform/mellanox/mlnx-platform-api/tests/conftest.py'.
19:34:11 tests/conftest.py:28: in <module>
19:34:11 from sonic_platform import utils
19:34:11 sonic_platform/__init__.py:18: in <module>
19:34:11 from sonic_platform import *
19:34:11 sonic_platform/platform.py:28: in <module>
19:34:11 raise ImportError(str(e) + "- required module not found")
19:34:11 E ImportError: No module named 'swsscommon'- required module not found- required module not found
19:34:11 [ FAIL LOG END ] [ target/python-wheels/bullseye/mlnx_platform_api-1.0-py3-none-any.whl ]
```
The issue only happens when calling the command below:
make target/python-wheels/bullseye/mlnx_platform_api-1.0-py3-none-any.whl
- How I did it
Add PYTHON3_SWSSCOMMON as a build-time dependency of the Mellanox platform API; conceptually it is the one-line change sketched below.
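For illustration only: the dependency is a one-line addition in the wheel's rule file; the variable names below are assumptions, not the actual rule contents.
```
# Hypothetical fragment of the Mellanox platform API wheel rule:
# make the swsscommon Python3 bindings available at build (test) time.
$(MLNX_PLATFORM_API_PY3)_DEPENDS += $(PYTHON3_SWSSCOMMON)
```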
- How to verify it
Run build
- Why I did it
Add non-upstream kernel patches for the Nvidia platforms.
These patches are not yet upstream but are needed for new technology.
A flow to upstream them is in progress; once they are approved, they will be moved officially to sonic-linux-kernel.
Until then, to include them in the build (optional), the build option INCLUDE_EXTERNAL_PATCH_TAR=y should be set.
- How I did it
Package all the patches into a tar.gz tarball.
- How to verify it
Manually test
Signed-off-by: Stephen Sun <stephens@nvidia.com>
- Why I did it
Add SONiC YANG model for DNS to provide the possibility to configure static DNS entries in Config DB.
- How I did it
Added the sonic-dns.yang file, which contains the YANG model for the static DNS configuration.
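For context, a hypothetical Config DB fragment that such a YANG model would validate; the table layout is assumed from the static DNS design, and the values are illustrative:
```json
{
    "DNS_NAMESERVER": {
        "1.1.1.1": {},
        "2001:4860:4860::8888": {}
    }
}
```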
- How to verify it
This PR extends the YANG model tests to cover the DNS configuration.
To run the tests, sonic_yang_models-1.0-py3-none-any.whl should be built.
#### Why I did it
Fix an issue where services do not start automatically on first boot and start only after hostcfgd enables them.
This is due to a bug in systemd-sonic-generator:
```
admin@arc-switch1004:~$ /usr/lib/systemd/system-generators/systemd-sonic-generator dir
Failed to open file /usr/lib/systemd/system/database.servcee
Error parsing targets for database.servcee
Error parsing database.servcee
Failed to open file /usr/lib/systemd/system/bgp.servcee
Error parsing targets for bgp.servcee
Error parsing bgp.servcee
Failed to open file /usr/lib/systemd/system/lldp.servcee
Error parsing targets for lldp.servcee
Error parsing lldp.servcee
Failed to open file /usr/lib/systemd/system/swss.servcee
Error parsing targets for swss.servcee
Error parsing swss.servcee
Failed to open file /usr/lib/systemd/system/teamd.servcee
Error parsing targets for teamd.servcee
Error parsing teamd.servcee
Failed to open file /usr/lib/systemd/system/syncd.servcee
Error parsing targets for syncd.servcee
Error parsing syncd.servcee
```
A wrong file name is generated (e.g. database.**servcee**).
#### How I did it
Fixed overlapping strings being passed to strcpy/strcat, which take restrict-qualified pointers (the source and destination strings must not overlap).
#### How to verify it
Perform a first boot and observe that services start immediately after boot.
add SEU reporting on chassis
fix fallback logic for Clearlake eeprom identification
fix fan speed reporting for a specific model
move pcie timeout configuration for Upperlake in platform code (deprecates hwsku-init)
Why I did it
Docker builds occasionally hang up.
They hang at different steps, so it looks like a bug in the Docker daemon.
How I did it
Start a daemon process that scans for processes running longer than one hour and kills them; see the sketch below.
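A minimal sketch of such a watchdog under stated assumptions; the process-matching pattern and threshold are illustrative, not the actual implementation:
```python
#!/usr/bin/env python3
"""Sketch: kill processes that have been running longer than a threshold."""
import os
import signal
import subprocess

MAX_ELAPSED_SECONDS = 3600        # one hour
MATCH = "docker build"            # illustrative pattern for the hung step

def kill_long_running():
    # ps "etimes" reports elapsed seconds since the process started
    out = subprocess.check_output(["ps", "-eo", "pid,etimes,args"], text=True)
    for line in out.splitlines()[1:]:
        parts = line.split(None, 2)
        if len(parts) < 3:
            continue
        pid, etimes, args = parts
        if MATCH in args and int(etimes) > MAX_ELAPSED_SECONDS:
            os.kill(int(pid), signal.SIGKILL)

if __name__ == "__main__":
    kill_long_running()
```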
How to verify it
- Why I did it
Currently, when building MFT, the source code can only be downloaded from the official download site, http://www.mellanox.com/downloads/MFT/; it is not possible to integrate an internal version that has not been officially released yet.
The intention of this PR is to make it possible to download the source code from any valid link.
- How I did it
Add a new parameter, MLNX_MFT_INTERNAL_SOURCE_BASE_URL: if a URL is given, the build downloads the source code from that URL; otherwise it downloads from the default official site.
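For illustration, how such a parameter might be supplied; the parameter name comes from this change, but where it is set and the URL shown are assumptions:
```
# Hypothetical: point the MFT build at an internal mirror instead of the official site
MLNX_MFT_INTERNAL_SOURCE_BASE_URL = https://internal.example.com/mft/
```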
- How to verify it
Specify a valid URL in the make file; the MFT debs should be built successfully.
Signed-off-by: Kebo Liu <kebol@nvidia.com>
Why I did it
Some products might experience an occasional IO failure in the communication between CPU and SSD.
Based on some research, it could be attributable to some devices not handling ATA NCQ (Native Command Queuing) well.
This issue currently affects 4 products:
DCS-7170-32C*
DCS-7170-64C
DCS-7060DX4-32
DCS-7260CX3-64
How I did it
This change disables NCQ on the affected drive for a small set of products.
How to verify it
When the fix is applied, these 2 patterns can be found in the dmesg output:
ata1.00: FORCE: horkage modified (noncq)
NCQ (not used)
Test results using: fio --direct=1 --rw=randrw --bs=64k --ioengine=libaio --iodepth=64 --runtime=120 --numjobs=4
with NCQ (ata1.00: 61865984 sectors, multi 1: LBA48 NCQ (depth 32), AA)
READ: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=4073MiB (4270MB), run=120078-120078msec
WRITE: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=4100MiB (4300MB), run=120078-120078msec
without NCQ (ata1.00: 61865984 sectors, multi 1: LBA48 NCQ (not used))
READ: bw=31.7MiB/s (33.3MB/s), 31.7MiB/s-31.7MiB/s (33.3MB/s-33.3MB/s), io=3808MiB (3993MB), run=120083-120083msec
WRITE: bw=31.9MiB/s (33.4MB/s), 31.9MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=3830MiB (4016MB), run=120083-120083msec
Which release branch to backport (provide reason below if selected)
Why I did it
This change specifies the tuning values for each lane of the B52 PHY chips. These values can differ per port. The values being set assume optical transceivers. This change depends on the change to sonic-swss: sonic-net/sonic-swss#2158.
How to verify it
We verified that the values are correctly set on the B52 chips of the Arista 7280cr3 by reading them from the debug CLI of the B52 driver.
Why I did it
An empty DHCPv6 relay config entry is not useful after the DHCPv6 relay config is deleted.
How I did it
Remove the dhcpv6_relay entry if it is empty, and do not check whether the entry exists when adding a DHCPv6 relay.
Why I did it
Fix an issue where all mirrors are commented out in sources.list in the slave image; this causes problems when installing more packages in the slave container.
Running the add-apt-repository command adds an additional space character after the '#'.
For example:
The original config in /etc/apt/sources.list:
```
#deb [arch=amd64] http://deb.debian.org/debian/ bullseye main contrib non-free
```
Run the following command:
```
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian bullseye stable"
```
Then the setting changed to (a new space character added after '#'):
```
# deb [arch=amd64] http://deb.debian.org/debian/ bullseye main contrib non-free
```
How I did it
Fix the regex to also match an optional space; with the fix, the line is handled correctly whether or not a space character is present, as illustrated below.
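A small illustration of the idea only; the pattern below is not the exact regex used in the build scripts. Allowing optional whitespace after '#' makes both forms match:
```python
import re

# Match a commented-out "deb" line with or without a space after '#'
pattern = re.compile(r"^#\s*deb\b")

print(bool(pattern.match("#deb [arch=amd64] http://deb.debian.org/debian/ bullseye main")))   # True
print(bool(pattern.match("# deb [arch=amd64] http://deb.debian.org/debian/ bullseye main")))  # True
```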
How to verify it
previously "make reset" was expecting user input from the terminal to do its job
setting UNATTENDED to any non-zero string will allow "make reset" to run without interactive confirmation
- Why I did it
When doing automated builds of SONiC images, we need to reset the working repositories between each build.
- How I did it
Add an environment variable that is read by Makefile.work.
- How to verify it
Running
UNATTENDED=1 make reset
should automatically reset all working directories.