Why I did it
To add branch info for the submodules of the 202305 branch
Work item tracking
Microsoft ADO (21450860):
How I did it
To modify the .gitmodules file to specify the 202305 branch for the submodule repos
How to verify it
Pass PR build
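As a quick local check (illustrative only, not part of the PR), each submodule entry in .gitmodules should now carry a `branch = 202305` line; the short script below uses Python's configparser to list what each entry points at. The `EXPECTED_BRANCH` constant and the check itself are assumptions made for this sketch.
```python
# Hypothetical local sanity check, not part of the PR: list every submodule in
# .gitmodules and flag entries that do not carry the expected branch setting.
import configparser

EXPECTED_BRANCH = "202305"  # branch this PR points the submodules at

parser = configparser.ConfigParser(interpolation=None)
parser.read(".gitmodules")

for section in parser.sections():            # e.g. [submodule "src/sonic-swss"]
    path = parser.get(section, "path", fallback="<missing path>")
    branch = parser.get(section, "branch", fallback=None)
    status = "OK" if branch == EXPECTED_BRANCH else f"unexpected branch: {branch}"
    print(f"{path}: {status}")
```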
#### Why I did it
src/sonic-swss
```
* 87e0b08 - (HEAD -> master, origin/master, origin/HEAD) [portsorch]: Enhancing SWSS OA logs to capture host_tx_ready change events (#2822) (11 hours ago) [mihirpat1]
* c7e52a0 - [subinterface]: Fix admin state handling. (#2806) (34 hours ago) [Nazarii Hnydyn]
* ebfda13 - [aclorch] Fix TODO: use SAI object API to query capabilities (#2743) (2 days ago) [Stepan Blyshchak]
```
#### How I did it
#### How to verify it
#### Description for the changelog
#### Why I did it
src/sonic-gnmi
```
* a600dc9 - (HEAD -> master, origin/master, origin/HEAD) Fix threading issues in Event Client (#121) (9 hours ago) [Zain Budhwani]
```
#### How I did it
#### How to verify it
#### Description for the changelog
#### Why I did it
src/sonic-swss-common
```
* 2320ddc - (HEAD -> master, origin/master, origin/HEAD) Add ZMQ port for orchagent (#795) (19 hours ago) [Hua Liu]
```
#### How I did it
#### How to verify it
#### Description for the changelog
Why I did it
sonic-mgmt is failing tests due to invalid test data in platform.json
Fwutil is unhappy with the chassis name in the platform_component.json of the 7060CX-32S
How I did it
Fixed the aforementioned issues
* Re-add 127.0.0.1/8 when bringing down the interfaces
With #5353, 127.0.0.1/16 was added to the lo interface, and then
127.0.0.1/8 was removed. However, when bringing down the lo interface,
like during a config reload, 127.0.0.1/16 gets removed, but 127.0.0.1/8
isn't added back to the interface. This means that there's a period of
time where 127.0.0.1 is not available at all, and services that need to
connect to 127.0.0.1 (such as for the redis DB) will fail.
To fix this, when going down, add 127.0.0.1/8. Add this address before
the existing configuration gets removed, so that 127.0.0.1 is available
at all times.
Note that running `ifdown lo` doesn't actually bring down the loopback
interface; the interface always stays "physically" up.
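The ordering is the important part here. Purely as an illustration (the real change lives in the loopback interface configuration, not in Python), the sequence is sketched below; the function name is hypothetical.
```python
# Illustration only: when lo is being brought down, re-add the default /8 first,
# then remove the configured /16, so 127.0.0.1 never disappears and local
# services (e.g. the redis DB) keep working.
import subprocess

def bring_down_loopback_addresses():
    # Step 1: put 127.0.0.1/8 back. If it is already present, "ip address add"
    # simply reports an error, which is harmless here (hence check=False).
    subprocess.run(["ip", "address", "add", "127.0.0.1/8", "dev", "lo"], check=False)
    # Step 2: only now remove the 127.0.0.1/16 that the configuration added.
    subprocess.run(["ip", "address", "del", "127.0.0.1/16", "dev", "lo"], check=True)
```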
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
- Why I did it
SDK patches for iproute2 were added to SONiC tree as a temporary solution.
Now that an SDK with the patches is available, I have removed the patches from the SONiC tree, and we consume them from the SDK GitHub during compilation.
- How I did it
During the build we download the SDK iproute2 patches from the SDK GitHub (or from the URL provided by the user when compiling the SDK from sources) and apply them before compilation.
- How to verify it
Compile and load the image on a switch; verify the interface network devices are created successfully.
Verify LLDP shows connections to neighbors.
Verify ping between 2 hosts over 2 router ports is successful.
- Why I did it
Adjust the warning threshold implementation according to the latest algorithm update
- How I did it
Modify power warning and critical thresholds methods
- How to verify it
Unit test updated to cover the change
Signed-off-by: Stephen Sun <stephens@nvidia.com>
#### Why I did it
ASN 0 in BGP_MONITOR is invalid per the YANG definition. However, ASN 0 is found in BGP_MONITOR on many devices.
It was introduced by minigraph, where its value is set to 0.
To unblock the Config Updater test, the short-term fix is to accept ASN 0 in BGP_MONITOR.
We can revert this after the NGS team makes all the ASN changes in minigraph.
##### Work item tracking
- Microsoft ADO **(24186140)**:
#### How I did it
Change the range
#### How to verify it
Unit test.
Add a watchdog mechanism to the swss service and generate an alert when swss has an issue.
**Work item tracking**
Microsoft ADO (number only): 16578912
**What I did**
Add an orchagent watchdog to monitor orchagent and alert when it gets stuck.
**Why I did it**
Currently the SONiC monit system only checks whether the orchagent process exists. If the orchagent process is stuck and stops processing, the current monit cannot detect or report it.
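A minimal sketch of the heartbeat/timeout idea behind the watchdog (the 60-second threshold and the helper names are assumptions for illustration, not the actual SONiC implementation):
```python
# Sketch: orchagent periodically reports a heartbeat; the watchdog raises a
# syslog error if no heartbeat arrives within the timeout.
import syslog
import time

HEARTBEAT_TIMEOUT = 60  # seconds without a heartbeat before alerting (assumed value)

last_heartbeat = time.monotonic()

def on_heartbeat():
    """Record the arrival of a heartbeat message from the monitored process."""
    global last_heartbeat
    last_heartbeat = time.monotonic()

def check_stuck(process="orchagent", namespace="host"):
    """Alert if the monitored process has not sent a heartbeat within the timeout."""
    stalled = time.monotonic() - last_heartbeat
    if stalled > HEARTBEAT_TIMEOUT:
        syslog.syslog(syslog.LOG_ERR,
                      "Process '%s' is stuck in namespace '%s' (%.1f minutes)"
                      % (process, namespace, stalled / 60.0))
```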
**How I verified it**
Pass all UT.
Manually verified that process_monitoring/test_critical_process_monitoring.py passes.
Added a new UT, https://github.com/sonic-net/sonic-mgmt/pull/8306, to check that the watchdog works correctly.
Manual test: after pausing orchagent with 'kill -STOP <pid>', check that a warning message appears in the log:
Apr 28 23:36:41.504923 vlab-01 ERR swss#supervisor-proc-watchdog-listener: Process 'orchagent' is stuck in namespace 'host' (1.0 minutes).
**Details if related**
Heartbeat message PR: https://github.com/sonic-net/sonic-swss/pull/2737
UT PR: https://github.com/sonic-net/sonic-mgmt/pull/8306
For T2 systems using packet mode, the backplane interfaces (Ethernet-BP#) and the fabric card ethernet interfaces are not visible as neighbor interfaces.
In packet mode, these interfaces need QoS and buffer config as well.
This fix addresses that issue and adds the backplane interfaces to the PORTS_ACTIVE list.
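The actual change is in the SONiC config generation; the snippet below is only a hedged sketch of the intent, and the role names used to identify backplane/internal ports are assumptions for illustration.
```python
# Hedged sketch: ports with a visible neighbor stay active as before, and
# internal/backplane ports are now included as well so they receive QoS/buffer config.
INTERNAL_ROLES = {"Int", "Inb", "Dpc"}  # assumed role names for backplane/fabric-facing ports

def build_ports_active(port_table, device_neighbors):
    active = set(device_neighbors)               # ports that appear as neighbor interfaces
    for name, attrs in port_table.items():
        if attrs.get("role") in INTERNAL_ROLES:  # e.g. Ethernet-BP# backplane ports
            active.add(name)
    return sorted(active)
```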
Why I did it
After docker_inram is enabled, the docker folder's default max size is 1.5G.
It's not big enough for some tests which need to install additional docker images or install extra packages.
Work item tracking
Microsoft ADO 24199761:
How I did it
Add docker_inram to the cmdline_allowlist.
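A hedged sketch of the allowlist idea (the set contents and function name are illustrative): only kernel command-line options present in the allowlist are carried over, so the docker_inram options must be listed for an override in kernel-cmdline-append to survive.
```python
# Illustrative only: filter appended kernel command-line options against an allowlist.
CMDLINE_ALLOWLIST = {"docker_inram", "docker_inram_size"}  # assumed entries

def filter_cmdline(options):
    kept = []
    for option in options:
        name = option.split("=", 1)[0]
        if name in CMDLINE_ALLOWLIST:
            kept.append(option)
    return kept

print(filter_cmdline(["docker_inram_size=3000M", "quiet"]))  # -> ['docker_inram_size=3000M']
```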
How to verify it
sudo sh -c 'echo "docker_inram_size=3000M" >> kernel-cmdline-append'
sudo reboot and check the docker folder size
Why I did it
This adds support for specifying custom retry counts for LACP sessions, to make warmboot easier on low-storage and low-memory platforms by allowing more than 90 seconds of downtime.
How I did it
How to verify it
Tested manually with these cases:
- Verify that changing the retry count using `teamdctl PortChannel101 state item set runner.retry_count 5` takes effect.
- Verify that the retry count change actually affects when the LAG goes down by forcefully killing teamd on one side (i.e. setting the retry count to 5 causes the LAG to go down after 150 seconds; see the arithmetic sketch below).
- Verify that the retry count gets reset to 3 after the LAG goes down for whatever reason.
- Verify that the retry count gets reset to 3 after some period of time (30 seconds * retry count).
Test cases are in sonic-net/sonic-mgmt#7961 and sonic-net/sonic-mgmt#8152.
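The timing in the second case follows directly from the 30-second interval noted in the last item; a trivial sketch of the arithmetic:
```python
# LACP slow PDUs arrive every 30 seconds; the LAG drops after retry_count missed periods.
LACP_SLOW_PERIOD = 30  # seconds

def lag_down_after(retry_count):
    return retry_count * LACP_SLOW_PERIOD

assert lag_down_after(3) == 90    # default retry count: roughly 90 seconds of tolerance
assert lag_down_after(5) == 150   # retry count used in the manual test above
```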
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
This reverts commit 44427a2f6b.
The Docker image was not updated during PR validation, which caused PR check failures.
Force-merge this revert. After the cache is updated once this PR is merged, the issue should be fixed.
Why I did it
Work item tracking
Microsoft ADO (24182162):
How I did it
Update the config.bcm to set the default FEC to RS for the 100G linecard.
How to verify it
Tested on chassis.
- Why I did it
Add the patchwork link to the commit description for non-upstream patches if present
- How I did it
Parse the patchwork/<patch_name>.txt file from hw-mgmt
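A hedged sketch of that parsing step; the directory layout follows the description above, while the URL-extraction regex and function names are assumptions for illustration rather than the exact hw-mgmt format.
```python
# Illustrative only: look for a patchwork/<patch_name>.txt file shipped with hw-mgmt,
# pull the first patchwork URL out of it, and append it to the commit description.
import os
import re

def patchwork_link(hw_mgmt_root, patch_name):
    """Return the patchwork URL for a patch, or None when no metadata file exists."""
    meta = os.path.join(hw_mgmt_root, "patchwork", patch_name + ".txt")
    if not os.path.isfile(meta):
        return None
    with open(meta) as f:
        match = re.search(r"https?://patchwork\S+", f.read())
    return match.group(0) if match else None

def with_patchwork_link(description, link):
    return description if link is None else f"{description}\n\nPatchwork: {link}"
```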
Why I did it
Add marvell-arm64 platform build in PR checks to avoid build break.
Work item tracking
Microsoft ADO (number only): 17257160
How I did it
How to verify it
Why I did it
Fix a possible CPLD read race between the watchdog and the reboot-cause process.
How I did it
Use fcntl.flock to serialize access to the CPLD sysfs file.
How to verify it
It can be simulated and verified with the following Python script:
```python
import fcntl
import signal
import threading

exit_flag = False

def get_cpld_reg_value(getreg_path, register):
    file = open(getreg_path, 'w+')
    # Acquire an exclusive lock on the file
    fcntl.flock(file, fcntl.LOCK_EX)
    try:
        file.write(register + '\n')
        file.flush()
        # Seek to the beginning of the file
        file.seek(0)
        # Read the content of the file
        result = file.readline().strip()
    finally:
        # Release the lock and close the file
        fcntl.flock(file, fcntl.LOCK_UN)
        file.close()
    return result

def cpld_read(thread_num, cpld_reg, expect_val):
    while not exit_flag:
        val = get_cpld_reg_value("/sys/devices/platform/dx010_cpld/getreg", cpld_reg)
        #print(f"Thread {thread_num}: get cpld reg {cpld_reg}, value {val}")
        if val != expect_val:
            print(f"Thread {thread_num}: get cpld reg {cpld_reg}, value {val}, expect_val {expect_val}")

def signal_handler(sig, frame):
    global exit_flag
    print("Ctrl+C detected. Quitting...")
    exit_flag = True

if __name__ == '__main__':
    # Register the signal handler for Ctrl+C
    signal.signal(signal.SIGINT, signal_handler)
    t1 = threading.Thread(target=cpld_read, args=(1, '0x103', '0x11',))
    t2 = threading.Thread(target=cpld_read, args=(2, '0x141', '0x00',))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
```
What I did:
Workaround for the issue seen here: FRRouting/frr#13682
There appears to be a timing issue: when multiple recursive lookups are needed to resolve the nexthop of a route, the resolution may not complete correctly, leaving the route in an inactive state.
The issue is seen on chassis-packet, as two levels of recursive lookup are needed for a given eBGP-learnt route:
- Level 1: resolve the eBGP peer (connected route via BGP) over Loopback4096 (iBGP peering)
- Level 2: resolve Loopback4096 over the backend port-channel next-hops
For VOQ chassis there is no eBGP peer (connected route via BGP) resolution, since the route is added as a static route by orchagent over Ethernet-IB.
Also, as part of this change, remove the route-map policy from instance.conf.j2, since the same policy is defined in peer-group.j2.
Microsoft ADO: https://msazure.visualstudio.com/One/_workitems/edit/24198507
How I verified it:
Functional verification done manually.
Updated UT.
We will be adding a sanity check in sonic-mgmt to make sure none of the routes are left in an inactive state.
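A hedged sketch of what such a sanity check could look like, assuming FRR's `show ip route json` output and its `selected`/`installed` flags; the check itself is illustrative, not the actual sonic-mgmt test.
```python
# Illustrative only: ask FRR for the routing table in JSON and report any prefix
# whose entries are neither selected nor installed (i.e. stuck inactive).
import json
import subprocess

def inactive_routes():
    output = subprocess.check_output(["vtysh", "-c", "show ip route json"])
    routes = json.loads(output)
    stuck = []
    for prefix, entries in routes.items():
        if not any(entry.get("selected") or entry.get("installed") for entry in entries):
            stuck.append(prefix)
    return stuck

if __name__ == "__main__":
    print("Inactive routes:", inactive_routes() or "none")
```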
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>