Commit Graph

1180 Commits

Author SHA1 Message Date
mssonicbld
36b6d5824c
[ci/build]: Upgrade SONiC package versions (#14812) 2023-04-23 20:52:29 +08:00
mssonicbld
7cc8c76f0f
Increase wait_for_tunnel() timeout to 90s (#14279) (#14733) 2023-04-20 05:47:12 +08:00
mssonicbld
20bb5daa6a [ci/build]: Upgrade SONiC package versions 2023-04-18 22:39:02 +08:00
mssonicbld
94ba969676
[write standby] force DB connections to use unix socket to connect (#14524) (#14553) 2023-04-18 17:11:59 +08:00
anamehra
0b30826e56 chassis-packet: resolve the missing static routes (#14593)
Why I did it
Fixes #14179
chassis-packet: missing arp entries for static routes causing high orchagent cpu usage

It is observed that some sonic-mgmt test cases call sonic-clear arp, which clears the static arp entries as well. Neither orchagent nor the arp_update process tries to resolve the missing arp entries after the clear.

How I did it
arp_update should resolve the missing arp/ndp static route
entries. Added code to check for missing entries and ping any that are
found missing to resolve them (see the sketch after this entry).

How to verify it
After boot or config reload, check ipv4 and ipv6 neigh entries to make sure all static route entries are present
manual validation:
Use sonic-clear arp and sonic-clear ndp to clear all neighbor entries
run arp_update
Check for neigh entries. All entries should be present.
Testing on T0 setup for route/test_static_route.py

The test sets the STATIC_ROUTE entry in config db without ifname:
sonic-db-cli CONFIG_DB hmset 'STATIC_ROUTE|2.2.2.0/24' nexthop 192.168.0.18,192.168.0.25,192.168.0.23

"STATIC_ROUTE": {
    "2.2.2.0/24": {
        "nexthop": "192.168.0.18,192.168.0.25,192.168.0.23"
    }
},
Validate that arp_update gets the proper ARP_UPDATE_VARS from config db using the arp_update_vars.j2 template and does not crash:

{ "switch_type": "", "interface": "", "pc_interface" : "PortChannel101 PortChannel102 PortChannel103 PortChannel104 ", "vlan_sub_interface": "", "vlan" : "Vlan1000", "static_route_nexthops": "192.168.0.18 192.168.0.25 192.168.0.23 ", "static_route_ifnames": "" }

Validate that the route/test_static_route.py testcase passes.
2023-04-18 14:34:49 +08:00
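A minimal sketch of the resolution step described in the entry above, assuming the nexthop list comes from CONFIG_DB as rendered by arp_update_vars.j2; the variable and the loop are illustrative, not the literal arp_update code (IPv4 shown; NDP entries would be pinged with ping -6 similarly):

```
#!/bin/bash
# Hypothetical sketch: ping every static-route nexthop that has no usable
# neighbor entry so the kernel re-resolves the ARP/NDP entry.
STATIC_ROUTE_NEXTHOPS="192.168.0.18 192.168.0.25 192.168.0.23"  # from arp_update_vars.j2

for nh in $STATIC_ROUTE_NEXTHOPS; do
    # Skip nexthops that already have a resolved neighbor entry
    if ! ip neigh show to "$nh" | grep -qE 'REACHABLE|PERMANENT|STALE'; then
        # A single ping is enough to trigger kernel neighbor resolution
        ping -c1 -W1 "$nh" > /dev/null 2>&1
    fi
done
```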
mssonicbld
c9d5e20923
[image_config] add rasdaemon.timer (#14300) (#14692) 2023-04-18 12:33:15 +08:00
xumia
1a9d6cdc5a
Support to add SONiC OS Version in device info (#14601) (#14624)
Why I did it
Support to add SONiC OS Version in device info.
It will be used to display the version info in the SONiC command "show version". The version is used to do the FIPS certification. We do not do the FIPS certification on a specific release, but on the SONiC OS Version.

SONiC Software Version: SONiC.master-13812.218661-7d94c0c28
SONiC OS Version: 11
Distribution: Debian 11.6
Kernel: 5.10.0-18-2-amd64
How I did it
2023-04-17 17:30:49 -07:00
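The entry above does not show how the value is derived; as a purely illustrative sketch (an assumption, not the actual device-info code), the "SONiC OS Version: 11" line correlates with the Debian base image major version, which could be read like this:

```
# Illustrative only: derive an OS-version-style value from the Debian base
# image. VERSION_ID comes from /etc/os-release (e.g. "11" on Debian 11.x).
. /etc/os-release
echo "SONiC OS Version: ${VERSION_ID%%.*}"
```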
mssonicbld
c919e758db
[ci/build]: Upgrade SONiC package versions (#14681) 2023-04-16 20:54:23 +08:00
mssonicbld
32b8764d3b
[ci/build]: Upgrade SONiC package versions (#14674) 2023-04-15 20:36:48 +08:00
mssonicbld
597d37d395
[ci/build]: Upgrade SONiC package versions (#14604) 2023-04-11 21:29:00 +08:00
Ye Jianquan
c55a5a94eb
Revert "chassis-packet: resolve the missing static routes (#14230)" (#14545)
This reverts commit a6d597a811.
2023-04-10 23:40:31 +08:00
mssonicbld
c62ed6800e
[ci/build]: Upgrade SONiC package versions (#14579) 2023-04-09 20:29:39 +08:00
mssonicbld
9680eb9e07
[ci/build]: Upgrade SONiC package versions (#14573) 2023-04-08 20:37:10 +08:00
Dev Ojha
82873c24ce
[202205][Buffer] Added cable length config to buffer config template for EdgeZoneAggregator (#14538)
Why I did it
SONiC currently does not identify the 'EdgeZoneAggregator' neighbor type. As a result, the buffer profile attached to those interfaces uses the default cable length, which could cause ingress packet drops due to insufficient headroom. Hence, there is a need to update the buffer templates to identify such neighbors and assign the same cable length as used by the T1.

How I did it
Modified the buffer template to identify EdgeZoneAggregator as a neighbor device type and assign it the same cable length as a T1/leaf router.

How to verify it
Unit tests pass, and manually checked on a 7260 to see that the changes take effect (a verification sketch follows this entry).
2023-04-06 07:52:43 -07:00
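A quick way to check the resulting cable length assignment in CONFIG_DB; the CABLE_LENGTH profile name "AZURE" and the port name are common defaults used here as assumptions and may differ on other setups:

```
# Inspect the cable length the buffer template assigned to a port that faces
# an EdgeZoneAggregator neighbor; it should match the T1-facing ports.
sonic-db-cli CONFIG_DB hget "CABLE_LENGTH|AZURE" Ethernet0
```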
mssonicbld
eab159f5e3
Delay mux/sflow/snmp timer after interface-config service (#14506) (#14523) 2023-04-05 15:28:41 +08:00
mssonicbld
8ca2d47aa9 [ci/build]: Upgrade SONiC package versions 2023-04-04 22:32:19 +08:00
Hua Liu
0a58f4f68b Improve sudo cat command for RO user. (#14428)
Improve sudo cat command for RO user.

#### Why I did it
The RO user could use the sudo cat command to show non-syslog files.

#### How I did it
Improve sudo cat command for RO user.

#### How to verify it
All UTs pass.
Manually checked that the fixed code works correctly (an illustrative sudoers sketch follows this entry).

#### Description for the changelog
Improve sudo cat command for RO user.
2023-04-03 16:34:36 +08:00
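The commit body does not show the actual rule; as a purely hypothetical illustration of how sudo cat can be limited to syslog paths for a read-only group (the group name, file name, and paths below are placeholders, not the SONiC rule from this change):

```
# Hypothetical illustration only: a sudoers drop-in that limits the RO group's
# sudo cat to syslog files. Not the actual rule shipped by this commit.
cat <<'EOF' | sudo tee /etc/sudoers.d/readonly-cat
Cmnd_Alias RO_CAT_CMDS = /bin/cat /var/log/syslog, /bin/cat /var/log/syslog.*
%readonly ALL=(ALL) NOPASSWD: RO_CAT_CMDS
EOF
sudo visudo -c   # sanity-check sudoers syntax
```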
anamehra
a6d597a811 chassis-packet: resolve the missing static routes (#14230)
arp_update should resolve the missing arp/ndp static route
entries. Added code to check for missing entries and ping them to
resolve the missing entries.

Why I did it
Fixes #14179

chassis-packet: missing arp entries for static routes causing high orchagent cpu usage

It is observed that some sonic-mgmt test cases call sonic-clear arp, which clears the static arp entries as well. Neither orchagent nor the arp_update process tries to resolve the missing arp entries after the clear.

How I did it
arp_update should resolve the missing arp/ndp static route
entries. Added code to check for missing entries and ping any that are
found missing to resolve them.

How to verify it
After boot or config reload, check ipv4 and ipv6 neigh entries to make sure all static route entries are present
manual validation:
Use sonic-clear arp and sonic-clear ndp to clear all neighbor entries
run arp_update
Check for neigh entries. All entries should be present.

Signed-off-by: anamehra <anamehra@cisco.com>
2023-04-03 16:34:21 +08:00
mssonicbld
2b3bea61de
[ci/build]: Upgrade SONiC package versions (#14489) 2023-04-02 20:39:29 +08:00
mssonicbld
40889d25ce
[ci/build]: Upgrade SONiC package versions (#14443) 2023-03-28 20:37:28 +08:00
mssonicbld
16bbbd776a
Set owner after restoring counters folder during warmboot (#13507) (#14435)
Why I did it
After warm reboot, show environment prints the following error:
failed to import plugin show.plugins.macsec: [Errno 13] Permission denied: '/tmp/cache/macsec'

How I did it
Set owner back to admin after restoring counters folder.

How to verify it
sudo warm-reboot, then ensure show environment does not print errors (a minimal sketch of the fix follows this entry).

Signed-off-by: Oleksandr Kolomeiets <oleksandrx.kolomeiets@intel.com>
Co-authored-by: oleksandrx-kolomeiets <oleksandrx.kolomeiets@intel.com>
2023-03-27 14:57:26 -07:00
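A minimal sketch of the fix described above, assuming the cache path from the error message; the actual change may adjust ownership differently:

```
# After the warm-reboot finalizer restores the counters cache, hand ownership
# back to admin so that 'show' plugins can read it again.
# (Path and owner taken from the error message above; exact code may differ.)
chown -R admin:admin /tmp/cache
```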
mssonicbld
5fb685ebb8
[ci/build]: Upgrade SONiC package versions (#14418) 2023-03-26 20:42:23 +08:00
mssonicbld
f76fbfca5b
[ci/build]: Upgrade SONiC package versions (#14415) 2023-03-25 21:02:51 +08:00
Stepan Blyshchak
e2ed36c764
[202205] Clear teamd-timer when finalizing fast-reboot (#14295)
* Clear teamd-timer when finalizing fast-reboot
* Move config save after finalizing fast/warm reboot
2023-03-22 12:07:58 -07:00
mssonicbld
22b7b68ff0 [ci/build]: Upgrade SONiC package versions 2023-03-21 20:46:04 +08:00
xumia
e5e8d46fe1
[Security] Fix some vulnerability issues in related python packages (#14269) (#14353)
Why I did it
Fix some vulnerability issues in related Python packages (#14269):
Pillow: [CVE-2021-27921]
Wheel: [CVE-2022-40898]
lxml: [CVE-2022-2309]

How I did it
2023-03-20 16:52:40 -07:00
mssonicbld
19a89aa6e2
[ci/build]: Upgrade SONiC package versions (#14344) 2023-03-19 20:19:28 +08:00
mssonicbld
3df60a15dd
[ci/build]: Upgrade SONiC package versions (#14314) 2023-03-19 00:04:48 +08:00
mssonicbld
51df8df2a8
[ci/build]: Upgrade SONiC package versions (#14299) 2023-03-18 02:31:30 +08:00
mssonicbld
cac2cf5d24
[storage_backend] Add backend acl service (#14229) (#14281) 2023-03-17 08:50:15 +08:00
Stepan Blyshchak
58937201f9 [swss/syncd] remove dependency on interfaces-config.service (#13084)
- Why I did it
Remove dependency on interfaces-config.service to speed up boot, because interfaces-config.service takes a lot of time on boot.

- How I did it
Changed service files for swss, syncd.

- How to verify it
Boot and check swss/syncd start time compared to interfaces-config (a verification sketch follows this entry).

Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
2023-03-16 14:37:17 +08:00
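A small verification sketch, assuming systemd timing data is available on the switch; it only inspects boot timing and unit ordering and is not part of the change itself:

```
# Compare per-unit start-up cost and confirm swss no longer orders after
# interfaces-config.service.
systemd-analyze blame | grep -E 'interfaces-config|swss|syncd'
systemctl show swss.service -p After,Requires,Wants
```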
Aryeh Feigin
4a3c5e42c2
[202205] Fast reboot finalizer 202205 (#14143)
* Finalize fast-reboot in warmboot finalizer

* update fast/warm-reboot finalizer

* support compatibility for fast-reboot from previous versions (prior to 202205)

* advance pointers: sairedis, utilities
2023-03-15 09:34:05 -07:00
Arvindsrinivasan Lakshmi Narasimhan
dddf1db1d3
[202205]Revert "Revert "[Chassis][Voq]update to add buffer_queue config on sy… (#14173)
* Revert "Revert "[Chassis][Voq]update to add buffer_queue config on system ports (#12156)" (#13421)"

This reverts commit 73c0deb810.

* update swss submodule

Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>

---------

Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>
2023-03-10 13:09:13 -08:00
anamehra
0c5ab622c4 Add support for platform syncd pre shutdown plugin (#13564)
Why I did it
A vendor platform may require running a platform-specific pre-shutdown routine before shutting down the syncd process, which runs the SAI and vendor SDK instance.

How I did it
Added a platform script hook that is executed if the plugin script is provided by the platform in device//plugins/ (see the sketch after this entry).
2023-03-09 04:33:42 +08:00
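A hedged sketch of such a hook; the plugin file name and install path below are assumptions for illustration, not the exact names used by the change:

```
# Hypothetical sketch: run a vendor-provided pre-shutdown plugin, if present,
# before stopping syncd so the SAI/SDK instance can clean up.
PLATFORM=$(sonic-cfggen -H -v DEVICE_METADATA.localhost.platform)
PLUGIN="/usr/share/sonic/device/${PLATFORM}/plugins/syncd_pre_shutdown"   # assumed name
if [ -x "$PLUGIN" ]; then
    "$PLUGIN"
fi
```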
Marty Y. Lok
432c4f9222 [Chassis][multiasic] Fix the sonic-db-cli core files issue on multiasic platform after the c++ implementation of sonic-db-cli (#13207)
Fixes #12047. After the C++ implementation of sonic-db-cli, the sonic-db-cli PING command tries to initialize the global database for all instances while the per-instance databases are starting. If the instance database-config.json files are not ready yet, it crashes and generates a core file. PR sonic-net/sonic-swss-common#701 only fixed the crash and the process abort.

Signed-off-by: mlok <marty.lok@nokia.com>
2023-03-07 14:39:25 +08:00
mssonicbld
cf5b888534
[ci/build]: Upgrade SONiC package versions (#14083) 2023-03-05 18:57:50 +08:00
mssonicbld
b40cccafa4
[ci/build]: Upgrade SONiC package versions (#14079) 2023-03-04 22:15:09 +08:00
mssonicbld
bab3243d93
[Arista] Disable SSD NCQ on Lodoga (#13964) (#14073) 2023-03-04 04:41:19 +08:00
mssonicbld
9d6457a2ff
[ci/build]: Upgrade SONiC package versions (#14046) 2023-03-02 05:37:16 +08:00
xumia
b8ef3c07df
Bump lxml from 4.6.5 to 4.9.1 in /src/sonic-config-engine (#14011)
Why I did it
This fixes the security alert CVE-2022-2309, see https://security-tracker.debian.org/tracker/CVE-2022-2309.
The fix has already been merged in master; see details in PR #11366.

How I did it
Upgrade version to 4.9.1

How to verify it
2023-03-01 08:21:57 +00:00
mssonicbld
73f572948f
[netlink] Increase netlink buffer size from 3MB to 16MB (#13965) (#14027) 2023-03-01 11:47:29 +08:00
mssonicbld
7f4afadd1a
[ci/build]: Upgrade SONiC package versions (#13995) 2023-02-26 20:02:25 +08:00
mssonicbld
3170ef9b50
[ci/build]: Upgrade SONiC package versions (#13991) 2023-02-25 20:14:59 +08:00
mssonicbld
668774db9f
[ci/build]: Upgrade SONiC package versions (#13975) 2023-02-24 17:32:54 +08:00
Stepan Blyshchak
70e2ea1e87
[202205][Mellanox] Place FW binaries under platform directory instead of squashfs (#13838)
Fixes #13568
Backport of #13837

Upgrading from an old image always requires mounting the squashfs to get the next image's FW binary. This can be avoided if we put the FW binary under the platform directory, which is easily accessible after installation:

admin@r-spider-05:~$ ls /host/image-fw-new-loc.0-dirty-20230208.193534/platform/fw-SPC.mfa
/host/image-fw-new-loc.0-dirty-20230208.193534/platform/fw-SPC.mfa
admin@r-spider-05:~$ ls -al /tmp/image-fw-new-loc.0-dirty-20230208.193534-fs/etc/mlnx/fw-SPC.mfa
lrwxrwxrwx 1 root root 66 Feb  8 17:57 /tmp/image-fw-new-loc.0-dirty-20230208.193534-fs/etc/mlnx/fw-SPC.mfa -> /host/image-fw-new-loc.0-dirty-20230208.193534/platform/fw-SPC.mfa

- Why I did it
202211 and above use a different squashfs compression type that the 201911 kernel cannot handle. Therefore, we avoid mounting the squashfs altogether with this change.

- How I did it
Place the FW binary under /host/image-/platform/mlnx/; soft links in /etc/mlnx are created to avoid breaking existing scripts/automation.
/etc/mlnx/fw-SPCX.mfa is a soft link that always points to the FW that should be used by the current image.
mlnx-fw-upgrade.sh is updated to prefer the /host/image-/platform/mlnx location and to fall back to /etc/mlnx in the squashfs in case the new location does not exist. This is necessary for image downgrade (a sketch of the lookup order follows this entry).

- How to verify it
Upgrade from 201911 to 202205
202205 to 201911 downgrade
202205 -> 202205 reboot
ONIE -> 202205 boot (First FW burn)

Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
2023-02-22 17:40:50 +02:00
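A sketch of the lookup order described above, assuming the next image's directory is already known; it is not the literal mlnx-fw-upgrade.sh code, and the version variable below is hypothetical:

```
# Prefer the per-image platform location for the FW binary; fall back to the
# legacy /etc/mlnx path (inside the image squashfs) to keep downgrades working.
IMAGE_DIR="/host/image-${SONIC_IMAGE_VERSION}"   # hypothetical variable
FW_FILE="${IMAGE_DIR}/platform/fw-SPC.mfa"

if [ ! -f "$FW_FILE" ]; then
    FW_FILE="/etc/mlnx/fw-SPC.mfa"               # legacy location / symlink
fi
echo "Using firmware binary: ${FW_FILE}"
```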
zhixzhu
8b5a42794d
set cable length of backplane ports to 1m (#13279)
* set cable length of backplane ports to 1m

Signed-off-by: Zhixin Zhu <zhixzhu@cisco.com>

* add UT for cable length

Signed-off-by: Zhixin Zhu <zhixzhu@cisco.com>

* correct argument format

---------

Signed-off-by: Zhixin Zhu <zhixzhu@cisco.com>
2023-02-21 22:14:53 +00:00
mssonicbld
9bf90a5f2e
Use tmpfs for /var/log on Arista 7050CX3-32S (#13805) (#13843)
This is to reduce writes to the SSD on the device (a quick check follows this entry).

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Co-authored-by: Saikrishna Arcot <sarcot@microsoft.com>
2023-02-21 13:25:35 -08:00
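A quick, read-only check that /var/log is tmpfs-backed after the change; the mount size and options are platform-specific:

```
# Confirm /var/log is mounted from tmpfs so log writes no longer hit the SSD.
findmnt -no SOURCE,FSTYPE,OPTIONS /var/log
```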
Samuel Angebault
638fdd0e93 [Arista] Disable ATA NCQ for a few products (#13739)
Why I did it
Some products might experience an occasional IO failure in the communication between the CPU and the SSD.
Based on some research, it could be attributable to some devices not handling ATA NCQ (Native Command Queuing).

This issue currently affects 4 products:

DCS-7170-32C*
DCS-7170-64C
DCS-7060DX4-32
DCS-7260CX3-64

How I did it
This change disables NCQ on the affected drive for a small set of products.

How to verify it
When the fix is applied, these 2 patterns can be found in dmesg (a verification sketch follows this entry):
ata1.00: FORCE: horkage modified (noncq)
NCQ (not used)

Test results using: fio --direct=1 --rw=randrw --bs=64k --ioengine=libaio --iodepth=64 --runtime=120 --numjobs=4

with NCQ (ata1.00: 61865984 sectors, multi 1: LBA48 NCQ (depth 32), AA)

   READ: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=4073MiB (4270MB), run=120078-120078msec
  WRITE: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=4100MiB (4300MB), run=120078-120078msec
without NCQ (ata1.00: 61865984 sectors, multi 1: LBA48 NCQ (not used))

   READ: bw=31.7MiB/s (33.3MB/s), 31.7MiB/s-31.7MiB/s (33.3MB/s-33.3MB/s), io=3808MiB (3993MB), run=120083-120083msec
  WRITE: bw=31.9MiB/s (33.4MB/s), 31.9MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=3830MiB (4016MB), run=120083-120083msec
2023-02-22 04:34:01 +08:00
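A read-only verification sketch; the exact mechanism that disables NCQ lives in the platform code and is not shown in the entry above, so these commands only confirm the effect (the device name sda is an assumption):

```
# Look for the libata "horkage" and NCQ messages quoted above.
dmesg | grep -iE 'FORCE: horkage|NCQ'
# Queue depth of 1 on the SSD indicates NCQ is effectively not used.
cat /sys/block/sda/device/queue_depth
```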
Stepan Blyshchak
b5be0da272 [dockerd] Force usage of cgo DNS resolver (#13649)
Go's runtime (and dockerd inherits this) uses its own DNS resolver implementation by default on Linux.
It has been observed that there are some DNS resolution issues when executing ```docker pull``` after first boot.

Consider the following script:

```
admin@r-boxer-sw01:~$ while :; do date; cat /etc/resolv.conf; ping -c 1 harbor.mellanox.com; docker pull harbor.mellanox.com/sonic/cpu-report:1.0.0 ; sleep 1; done
Fri 03 Feb 2023 10:06:22 AM UTC
nameserver 10.211.0.124
nameserver 10.211.0.121
nameserver 10.7.77.135
search mtr.labs.mlnx labs.mlnx mlnx lab.mtl.com mtl.com
PING harbor.mellanox.com (10.7.1.117) 56(84) bytes of data.
64 bytes from harbor.mtl.labs.mlnx (10.7.1.117): icmp_seq=1 ttl=53 time=5.99 ms

--- harbor.mellanox.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 5.989/5.989/5.989/0.000 ms
Error response from daemon: Get "https://harbor.mellanox.com/v2/": dial tcp: lookup harbor.mellanox.com on [::1]:53: read udp [::1]:57245->[::1]:53: read: connection refused
Fri 03 Feb 2023 10:06:23 AM UTC
nameserver 10.211.0.124
nameserver 10.211.0.121
nameserver 10.7.77.135
search mtr.labs.mlnx labs.mlnx mlnx lab.mtl.com mtl.com
PING harbor.mellanox.com (10.7.1.117) 56(84) bytes of data.
64 bytes from harbor.mtl.labs.mlnx (10.7.1.117): icmp_seq=1 ttl=53 time=5.56 ms

--- harbor.mellanox.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 5.561/5.561/5.561/0.000 ms
Error response from daemon: Get "https://harbor.mellanox.com/v2/": dial tcp: lookup harbor.mellanox.com on [::1]:53: read udp [::1]:53299->[::1]:53: read: connection refused
Fri 03 Feb 2023 10:06:24 AM UTC
nameserver 10.211.0.124
nameserver 10.211.0.121
nameserver 10.7.77.135
search mtr.labs.mlnx labs.mlnx mlnx lab.mtl.com mtl.com
PING harbor.mellanox.com (10.7.1.117) 56(84) bytes of data.
64 bytes from harbor.mtl.labs.mlnx (10.7.1.117): icmp_seq=1 ttl=53 time=5.78 ms

--- harbor.mellanox.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 5.783/5.783/5.783/0.000 ms
Error response from daemon: Get "https://harbor.mellanox.com/v2/": dial tcp: lookup harbor.mellanox.com on [::1]:53: read udp [::1]:55765->[::1]:53: read: connection refused
Fri 03 Feb 2023 10:06:25 AM UTC
nameserver 10.211.0.124
nameserver 10.211.0.121
nameserver 10.7.77.135
search mtr.labs.mlnx labs.mlnx mlnx lab.mtl.com mtl.com
PING harbor.mellanox.com (10.7.1.117) 56(84) bytes of data.
64 bytes from harbor.mtl.labs.mlnx (10.7.1.117): icmp_seq=1 ttl=53 time=7.17 ms

--- harbor.mellanox.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 7.171/7.171/7.171/0.000 ms
Error response from daemon: Get "https://harbor.mellanox.com/v2/": dial tcp: lookup harbor.mellanox.com on [::1]:53: read udp [::1]:44877->[::1]:53: read: connection refused
Fri 03 Feb 2023 10:06:26 AM UTC
nameserver 10.211.0.124
nameserver 10.211.0.121
nameserver 10.7.77.135
search mtr.labs.mlnx labs.mlnx mlnx lab.mtl.com mtl.com
PING harbor.mellanox.com (10.7.1.117) 56(84) bytes of data.
64 bytes from harbor.mtl.labs.mlnx (10.7.1.117): icmp_seq=1 ttl=53 time=5.66 ms

--- harbor.mellanox.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 5.656/5.656/5.656/0.000 ms
Error response from daemon: Get "https://harbor.mellanox.com/v2/": dial tcp: lookup harbor.mellanox.com on [::1]:53: read udp [::1]:54604->[::1]:53: read: connection refused
Fri 03 Feb 2023 10:06:27 AM UTC
nameserver 10.211.0.124
nameserver 10.211.0.121
nameserver 10.7.77.135
search mtr.labs.mlnx labs.mlnx mlnx lab.mtl.com mtl.com
PING harbor.mellanox.com (10.7.1.117) 56(84) bytes of data.
64 bytes from harbor.mtl.labs.mlnx (10.7.1.117): icmp_seq=1 ttl=53 time=8.22 ms

--- harbor.mellanox.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 8.223/8.223/8.223/0.000 ms
1.0.0: Pulling from sonic/cpu-report
004f1eed87df: Downloading [===================>                               ]   19.3MB/50.43MB
5d6f1e8117db: Download complete
48c2faf66abe: Download complete
234b70d0479d: Downloading [=========>                                         ]  9.363MB/51.84MB
6fa07a00e2f0: Downloading [==>                                                ]   9.51MB/192.4MB
04a31b4508b8: Waiting
e11ae5168189: Waiting
8861a99744cb: Waiting
d59580d95305: Waiting
12b1523494c1: Waiting
d1a4b09e9dbc: Waiting
99f41c3f014f: Waiting
```

While /etc/resolv.conf has the correct content and ping (and any other utility that uses libc's DNS resolution implementation) works correctly,
docker is unable to resolve the hostname and falls back to the default [::1]:53. This started to happen after PR https://github.com/sonic-net/sonic-buildimage/pull/13516 was merged.
As you can see from the log, dockerd picks up the correct /etc/resolv.conf only about 5 seconds after the first try. This seems to be related to the logic in Go's DNS resolver:
https://github.com/golang/go/blob/master/src/net/dnsclient_unix.go#L385.

There have been issues like that reported in docker like:
  - https://github.com/docker/cli/issues/2299
  - https://github.com/docker/cli/issues/2618
  - https://github.com/moby/moby/issues/22398

Since this starts to happen after the inclusion of the resolvconf package by the
above-mentioned PR, and since I can't see any problem with that package itself (ping,
nslookup, etc. work), the choice is made to force dockerd to use the cgo
(libc) resolver (a configuration sketch follows this entry).

Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
2023-02-22 04:33:44 +08:00
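Go exposes the resolver choice through the documented GODEBUG=netdns setting; below is a minimal sketch of applying it to dockerd via a systemd drop-in, assuming the actual commit may wire it in differently (for example directly in the shipped docker service unit):

```
# Force dockerd to resolve DNS through the cgo (libc) resolver instead of
# Go's built-in resolver.
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/dns-cgo.conf
[Service]
Environment=GODEBUG=netdns=cgo
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```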
mssonicbld
25ead73d10
[ci/build]: Upgrade SONiC package versions (#13893) 2023-02-21 19:42:54 +08:00