Update sonic-sairedis submodule pointer to include the following:
* [202205][SAI] advance SAI version to 1.10 to pick up saiserver and syncd-rpc support ([#1115](https://github.com/sonic-net/sonic-sairedis/pull/1115))
Signed-off-by: dprital <drorp@nvidia.com>
* Pin version of bazelisk to v1.13.0
This tries to avoid build failures due to the latest version of
bazelisk changing and causing hash mismatches.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Why I did it
The bazelisk package with hash value 1227b24db77557d552701f6add122edc has been deleted from the GitHub release.
The reproducible build cached only the hash value; the package file itself was not cached, because the two are handled in different pipelines.
Use the latest package hash instead.
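For illustration only, a minimal sketch of the kind of hash check the reproducible build relies on; the file path argument and the use of MD5 (implied by the 32-character hash above) are assumptions, and this is not the actual build code:
```
# Illustrative sketch only, not the actual reproducible-build logic.
import hashlib
import sys

PINNED_HASH = "1227b24db77557d552701f6add122edc"  # hash recorded by the reproducible build

def md5sum(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    path = sys.argv[1]  # e.g. a downloaded bazelisk binary (path is an assumption)
    if md5sum(path) != PINNED_HASH:
        sys.exit("hash mismatch for {}: expected {}".format(path, PINNED_HASH))
```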
Why I did it
This PR is to update TC_TO_QUEUE_MAP|AZURE for SKU Arista-7050CX3-32S-D48C8 and Arista-7260CX3 T0.
The change only aligns the TC_TO_QUEUE_MAP for regular traffic and bounced traffic. It has no production impact because no traffic is mapped to TC2 or TC6.
How I did it
Update TC_TO_QUEUE_MAP|AZURE and the test cases as well.
How to verify it
Verified by running test case test_j2files.py
/sonic/src/sonic-config-engine$ python3 setup.py test -s tests/test_j2files.py
running test
......
----------------------------------------------------------------------
Ran 29 tests in 25.390s
OK
Why I did it
If the swss service is restarted, the MACsec service should also be restarted. Otherwise the data in wpa_supplicant and orchagent will not be consistent.
How I did it
Add dependency in docker-macsec.mk.
How to verify it
Manually check with 'sudo service swss restart'.
The MACsec container should be started after swss; the syslog will look like:
Sep 8 14:36:29.562953 sonic INFO swss.sh[9661]: Starting existing swss container with HWSKU Force10-S6000
Sep 8 14:36:30.024399 sonic DEBUG container: container_start: BEGIN
...
Sep 8 14:36:33.391706 sonic INFO systemd[1]: Starting macsec container...
Sep 8 14:36:33.392925 sonic INFO systemd[1]: Starting Management Framework container...
Signed-off-by: Ze Gan <ganze718@gmail.com>
* [mux] skip mux operations during warm shutdown
- Enhance the write_standby.py script to skip actions during warm shutdown (sketched below).
- Expand the support to the BGP service.
- MUX support was added by a previous PR.
- Don't skip actions during warm recovery.
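For illustration, a minimal sketch of the warm-shutdown check described above, assuming the STATE_DB "WARM_RESTART_ENABLE_TABLE|system" convention; this is not the actual write_standby.py code:
```
# Illustrative sketch only, not the actual write_standby.py logic.
# Assumes warm shutdown is signaled via STATE_DB "WARM_RESTART_ENABLE_TABLE|system".
from swsscommon.swsscommon import SonicV2Connector

def warm_shutdown_requested():
    db = SonicV2Connector(use_unix_socket_path=True)
    db.connect("STATE_DB")
    enable = db.get("STATE_DB", "WARM_RESTART_ENABLE_TABLE|system", "enable")
    return enable == "true"

def apply_mux_state(state):
    if warm_shutdown_requested():
        # Skip mux/BGP actions during warm shutdown; warm recovery is not skipped.
        return
    # ... normal active/standby handling would go here ...
```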
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
- Why I did it
Remove an extra empty line in the SN2201 pg_profile_lookup.ini to align it with other platforms.
The extra empty line could confuse test cases that need to parse this file.
Signed-off-by: Kebo Liu <kebol@nvidia.com>
After pinging any failed IPv6 neighbor entries, set the remaining failed/incomplete entries to a permanent INCOMPLETE state. This manual setting to INCOMPLETE prevents these entries from automatically transitioning to the FAILED state, and since they are now incomplete, any subsequent NA messages for these neighbors are able to resolve the entry in the cache.
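For illustration, a minimal sketch of this kind of neighbor-table manipulation using iproute2 via subprocess; the interface and address are placeholders and this is not the actual implementation:
```
# Illustrative sketch only: push a failed IPv6 neighbor entry into INCOMPLETE state
# so that a later unsolicited NA can resolve it. Address/interface are placeholders.
import subprocess

def set_neighbor_incomplete(ip6_addr, intf):
    subprocess.run(
        ["ip", "-6", "neigh", "replace", ip6_addr, "dev", intf, "nud", "incomplete"],
        check=True,
    )

# Example (placeholder values): set_neighbor_incomplete("fc02:1000::99", "Vlan1000")
```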
Signed-off-by: Lawrence Lee <lawlee@microsoft.com>
It can happen that a container has already crashed, but docker-wait-any
will wait forever for it to start. It should instead exit immediately
so that the service is restarted.
#### Why I did it
It is observed in some circumstances that the auto-restart mechanism does not work. Specifically for ```swss.service```, ```orchagent``` had crashed before ```docker-wait-any``` started in ```swss.sh```. This led ```docker-wait-any``` to wait forever for ```swss``` to be in the ```"Running"``` state, resulting in:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1abef1ecebff bcbca2b74df6 "/usr/local/bin/supe…" 22 hours ago Up 22 hours what-just-happened
3c924d405cd5 docker-lldp:latest "/usr/bin/docker-lld…" 22 hours ago Up 22 hours lldp
eb2b12a98c13 docker-router-advertiser:latest "/usr/bin/docker-ini…" 22 hours ago Up 22 hours radv
d6aac4a46974 docker-sonic-mgmt-framework:latest "/usr/local/bin/supe…" 22 hours ago Up 22 hours mgmt-framework
d880fd07aab9 docker-platform-monitor:latest "/usr/bin/docker_ini…" 22 hours ago Up 22 hours pmon
75f9e22d4fdd docker-snmp:latest "/usr/local/bin/supe…" 22 hours ago Up 22 hours snmp
76d570a4bd1c docker-sonic-telemetry:latest "/usr/local/bin/supe…" 22 hours ago Up 22 hours telemetry
ee49f50344b3 docker-syncd-mlnx:latest "/usr/local/bin/supe…" 22 hours ago Up 22 hours syncd
1f0b0bab3687 docker-teamd:latest "/usr/local/bin/supe…" 22 hours ago Up 22 hours teamd
917aeeaf9722 docker-orchagent:latest "/usr/bin/docker-ini…" 22 hours ago Exited (0) 22 hours ago swss
81a4d3e820e8 docker-fpm-frr:latest "/usr/bin/docker_ini…" 22 hours ago Up 22 hours bgp
f6eee8be282c docker-database:latest "/usr/local/bin/dock…" 22 hours ago Up 22 hours database
```
The check for the ```"Running"``` state is not needed because in the cold boot case we run ```start_peer_and_dependent_services```, and in the warm boot case the loop will retry waiting for the container if it is doing a warm boot:
d01a91a569/files/image_config/misc/docker-wait-any (L56)
#### How I did it
Removed the check for ```"Running"```.
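For illustration, a minimal sketch of the resulting wait logic using the docker Python SDK; names and structure are illustrative, and this is not the real files/image_config/misc/docker-wait-any script:
```
# Illustrative sketch only: block until any named container exits, without first
# requiring it to be in the "Running" state (wait() returns at once if it already exited).
import sys
import threading
import docker

def wait_any(names):
    client = docker.from_env()
    exited = threading.Event()

    def _wait(name):
        try:
            client.containers.get(name).wait()
        finally:
            exited.set()

    for name in names:
        threading.Thread(target=_wait, args=(name,), daemon=True).start()
    exited.wait()

if __name__ == "__main__":
    wait_any(sys.argv[1:])  # e.g. wait_any(["swss", "syncd", "teamd"])
```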
#### How to verify it
Kill swss before ```docker-wait-any``` is reached and verify that auto restart restarts the swss service.
Why I did it
The approve step needs special permission settings.
We have already added a permission setting to enable bypassing the PR merge.
So the approve step is not necessary.
For the RESTAPI/gNMI use cases, SONiC has to support a new table, EXTERNAL_CLIENT, of type CTRLPLANE at stage ingress.
This shall match on the 'src ip prefix' and dst port '8080'. caclmgrd must parse this from acl.json and install it as in the example below:
iptables -A INPUT -s 20.20.20.20/27 -p tcp --dport 8080 -j ACCEPT
or ip6tables if the 'src ip prefix' is IPv6.
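For illustration, a hedged sketch of the translation this implies; the function name and structure are hypothetical, not the actual caclmgrd code:
```
# Hypothetical sketch of the iptables/ip6tables rule generation described above;
# not the actual caclmgrd implementation.
import ipaddress

def external_client_rule(src_prefix, dst_port="8080"):
    cmd = "ip6tables" if ipaddress.ip_network(src_prefix, strict=False).version == 6 else "iptables"
    return [cmd, "-A", "INPUT", "-s", src_prefix,
            "-p", "tcp", "--dport", dst_port, "-j", "ACCEPT"]

# external_client_rule("20.20.20.20/27") ->
# ["iptables", "-A", "INPUT", "-s", "20.20.20.20/27", "-p", "tcp", "--dport", "8080", "-j", "ACCEPT"]
```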
This change for master branch is in PR sonic-net/sonic-host-services#9
Signed-off-by: Zhaohui Sun <zhaohuisun@microsoft.com>
Update sonic-swss submodule pointer to include the following:
[BFD]Clean up state_db BFD entries on swss restart (#2434)
Fix the Fec Mode Setting of gbsyncd (#2430)
[neighsyncd] Enabling ipv4 link local entries for non-dualtor (#2427)
tlm_teamd: Filter portchannel subinterface events from STATE_DB LAG_TABLE (#2408)
PFCWD recovery changes using DLR_INIT (#2316)
Dynamic port configuration - add port buffer cfg to the port ref counter (#2194)
Signed-off-by: dprital <drorp@nvidia.com>
Why I did it
After PFC interop testing between the 8102 and the 7050cx3, data packet losses were observed on the Rx ports of the 7050cx3 (inflow from the 8102) during testing. This was primarily due to the 8102's slower response time in reacting to PFC pause packets received from neighboring devices. To solve the packet drops, the 7050cx3 PG headroom size has to be increased to 160kB.
How I did it
Modified the xoff threshold value to 160kB in the pg_profile file so that the buffer manager reads that value when the image is built and configures the device accordingly.
How to verify it
Run "mmuconfig -l" once the image is built.
Signed-off-by: dojha <devojha@microsoft.com>
Why I did it:
The API get_device_runtime_metadata() added by #11795 uses the dict merge operator, but that is supported only for Python versions >= 3.9. This API will be used by scripts, e.g. hostcfgd, which are still built for Buster, which does not have Python 3.9 support.
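For reference, the incompatibility is the dict union operator introduced in Python 3.9; a Buster-compatible equivalent is shown below (illustrative, not the exact code from #11795):
```
# Illustrative only: the dict union operator requires Python >= 3.9,
# while Buster ships an older Python.
base_metadata = {"chassis_metadata": {"name": "example"}}  # placeholder data
runtime_metadata = {"ethernet_ports_present": True}        # placeholder data

# Python >= 3.9 only:
#   merged = base_metadata | runtime_metadata

# Works on the Python shipped with Buster:
merged = {**base_metadata, **runtime_metadata}
```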
As part of PR #11754, a change was added to use the variable SONIC_DB_NS_CLI for
the namespace, but that will not work since ./files/scripts/syncd_common.sh
uses SONIC_DB_CLI. So revert back to using SONIC_DB_CLI and define a new
variable, SONIC_GLOBAL_DB_CLI, for global/host DB CLI access.
Also fixed DB_CLI not working for the namespace.
Why I did it
Currently the CLI commands 'show interface status', 'show interface counters' and 'show interface description' display the Ethernet-IB and Ethernet-Rec ports in the output. These are internal ports and should only be displayed when the option -d all is used with the above-mentioned CLI commands.
How I did it
Add the port roles Inb and Rec when classifying a port as an internal port.
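A minimal sketch of the role-based classification; the role names come from this description, while the helper name and data layout are hypothetical:
```
# Illustrative sketch only, not the actual sonic-utilities code.
INTERNAL_PORT_ROLES = ("Inb", "Rec")  # roles treated as internal by this change

def is_internal_port(port_entry):
    # Ports with these roles are hidden unless "-d all" is passed.
    return port_entry.get("role") in INTERNAL_PORT_ROLES
```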
How to verify it
Verify that the CLI output of the command 'show interface status' does not display the Ethernet-IB and Ethernet-Rec ports when the -d all option is not present.
Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>
Why I did it
Fix an unstable build issue: #11620
The VS VM started successfully, but the build failed while waiting for the message "sonic login:".
55 builds failed because of this issue in the last 30 days.
AzurePipelineBuildLogs
| where startTime > ago(30d)
| where type =~ "task"
| where result =~ "failed"
| where name =~ "Build sonic image"
| where content contains "Timeout exceeded"
| where content contains "re.compile('sonic login:')"
| project-away content
| extend branchName=case(reason=~"pullRequest", tostring(todynamic(parameters)['system.pullRequest.targetBranch']),
replace("refs/heads/", "", sourceBranch))
| summarize FailedCount=dcount(buildId) by branchName
branchName FailedCount
master 37
202012 9
202106 4
202111 2
202205 1
201911 1
It is caused by the login prompt being mixed with the output of /etc/rc.local; one example is below (see the message "rc.local[307]: sonic+ onie_disco_subnet=255.255.255.0 login:").
check_install.py was waiting for the message "sonic login:", while the Linux console was waiting for the username input (the login prompt had already been printed to the console).
https://dev.azure.com/mssonic/build/_build/results?buildId=123294&view=logs&j=cef3d8a9-152e-5193-620b-567dc18af272&t=359769c4-8b5e-5976-a793-85da132e0a6f
2022-07-17T15:00:58.9198877Z [ 25.493855] rc.local[307]: + onie_disco_opt53=05
2022-07-17T15:00:58.9199330Z [ 25.595054] rc.local[307]: + onie_disco_router=10.0.2.2
2022-07-17T15:00:58.9199781Z [ 25.699409] rc.local[307]: + onie_disco_serverid=10.0.2.2
2022-07-17T15:00:58.9200252Z [ 25.789891] rc.local[307]: + onie_disco_siaddr=10.0.2.2
2022-07-17T15:00:58.9200622Z [ 25.880920]
2022-07-17T15:00:58.9200745Z
2022-07-17T15:00:58.9201019Z Debian GNU/Linux 10 sonic ttyS0
2022-07-17T15:00:58.9201201Z
2022-07-17T15:00:58.9201542Z rc.local[307]: sonic+ onie_disco_subnet=255.255.255.0 login:
2022-07-17T15:00:58.9202309Z [ 26.079767] rc.local[307]: + onie_exec_url=file://dev/vdb/onie-installer.bin
How I did it
Send a newline after the script /etc/rc.local has finished running.
After the newline is entered, the message "sonic login:" is prompted again.
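For illustration, a sketch of the console interaction, assuming a pexpect-style session as suggested by the re.compile('sonic login:') pattern above; the console command and credentials are placeholders, not the exact check_install.py code:
```
# Illustrative sketch only: after /etc/rc.local output has gone quiet, send an empty
# line so the "sonic login:" prompt is re-printed on its own line, then match it.
import pexpect

child = pexpect.spawn("telnet 127.0.0.1 7000", timeout=1200)  # console endpoint is a placeholder
child.sendline("")            # the newline forces the login prompt to be displayed again
child.expect("sonic login:")
child.sendline("admin")       # placeholder credentials
```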
Why I did it
Content of platform.json was outdated and some platform_tests/api of sonic-mgmt were failing.
How I did it
Added the necessary values to platform.json
How to verify it
Running platform_tests/api of sonic-mgmt should yield a 100% pass rate.
Port index 22 is already associated with phy23_config.json, so using the same port index 22 in phy24_config.json may cause a gearbox port creation error. Port Ethernet22 maps to index 23.
Update the SAI module with the following commits:
- 566d4a8ef2 2022-08-11 | [SAI-PTF] Enable saiserverv2 with syncd-rpc and fix saithriftv2 build (#1552) (#1533) (#1514) (#1492) (#1558) (#1557) [Richard.Yu]
- a1796a53cc 2022-08-11 | Add support of mdio IPC server class using sai switch api and unix socket
Signed-off-by: richardyu-ms <richard.yu@microsoft.com>
Added initial set of config files to allow for booting and partial traffic testing in SONiC on the 720DT-48S.
How to verify it
- Switch boots
- show interfaces status shows links up on interfaces Ethernet24-51
- Traffic flows with no errors on interfaces Ethernet24-51
* [SAIServer] support saiserver v2 in bullseye
Support building saiserverv2 in Bullseye.
Add dependencies for building saiserverv2
Upgrade libboost-atomic1.71 to libboost-atomic1.74
Test done:
Built locally with NOSTRETCH=y NOJESSIE=y NOBUSTER=y SAITHRIFT_V2=y make target/docker-saiserverv2-brcm.gz
* update libboost-atomic from 1.71 to 1.74 for bullseye
#### Why I did it
Fix the build failure caused by the installer image size being too small. The installer image is only used during the build; it does not impact the final images.
See https://dev.azure.com/mssonic/build/_build/results?buildId=139926&view=logs&j=cef3d8a9-152e-5193-620b-567dc18af272&t=359769c4-8b5e-5976-a793-85da132e0a6f
```
+ fallocate -l 2048M ./sonic-installer.img
+ mkfs.vfat ./sonic-installer.img
mkfs.fat 4.2 (2021-01-31)
++ mktemp -d
+ tmpdir=/tmp/tmp.TqdDSc00Cn
+ mount -o loop ./sonic-installer.img /tmp/tmp.TqdDSc00Cn
+ cp target/sonic-vs.bin /tmp/tmp.TqdDSc00Cn/onie-installer.bin
cp: error writing '/tmp/tmp.TqdDSc00Cn/onie-installer.bin': No space left on device
[ FAIL LOG END ] [ target/sonic-vs.img.gz ]
```
#### How I did it
Increase the size from 2048M to 4096M.
Why not increase it to 16G like the qcow2 image?
The qcow2 format supports sparse disks: even if a big disk size is allocated, it does not consume that much real disk space. fallocate does not create a sparse file, so we do not want to allocate a very big disk that would go mostly unused; it would require more space to build.
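For illustration, a small Python sketch of the sparse vs. fully allocated difference; the build itself uses qcow2 and fallocate, not this code:
```
# Illustrative only: a sparse file has a large apparent size but consumes little real
# disk space, while a fallocate-style file reserves its blocks up front.
import os
import tempfile

SIZE = 64 * 1024 * 1024  # small demo size; the installer image itself is 4096M

with tempfile.TemporaryDirectory() as d:
    sparse_path = os.path.join(d, "sparse.img")
    full_path = os.path.join(d, "full.img")

    with open(sparse_path, "wb") as f:
        f.truncate(SIZE)                          # sparse: blocks allocated lazily

    with open(full_path, "wb") as f:
        os.posix_fallocate(f.fileno(), 0, SIZE)   # allocated: blocks reserved now

    for path in (sparse_path, full_path):
        st = os.stat(path)
        print(path, "apparent:", st.st_size, "on-disk:", st.st_blocks * 512)
```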
* [sonic_py_common] Cache Static Information in device_info to speed up CLI response (#11696)
- Why I did it
Profiled the execution of the following command: intfutil -c status
- How I did it
Cached the following information:
1. get_sonic_version_info()
2. get_platform_info()
None of the APIs exposed to user libraries (e.g. sonic-utilities) have been modified.
These methods involve reading from text files or from Redis, so caching helped to improve the execution time.
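A minimal sketch of the caching pattern; the helper and data below are placeholders, and the exact mechanism used inside device_info may differ:
```
# Illustrative caching pattern only; not the exact device_info implementation.
from functools import lru_cache

@lru_cache(maxsize=1)
def get_platform_info_cached():
    # First call reads files/Redis; later calls in the same process reuse the result,
    # so the public API behavior is unchanged while repeated lookups become cheap.
    return _read_platform_info_from_sources()

def _read_platform_info_from_sources():
    # Hypothetical helper standing in for the real file/Redis reads.
    return {"platform": "example-platform", "hwsku": "example-hwsku"}
```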
- How to verify it
Added UTs.
Verified on the device
Signed-off-by: Vivek Reddy Karri <vkarri@nvidia.com>
* Removed UT since the libswsscommon dep is missing in <=202205
Signed-off-by: Vivek Reddy <vkarri@nvidia.com>
Why I did it
The directory /var/warmboot, the top-level directory for the warmboot feature, is also needed in the gbsyncd container. Some vendor SAIs might save data under it. Without it, SAI init/creation API failures have occurred on the PikeZ platform.
How I did it
Mount the host directory /host/warmboot as /var/warmboot in the gbsyncd container, the same as is done for the syncd container.
With the Broadcom syncd containers getting upgraded to Bullseye, the DNX
RPC container is no longer automatically built. Explicitly add a make
command to build it.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>