Enable automated test suites to selectively run relevant tests (or skip tests) based on a new port_type identifier in hwsku.json.
How I did it
Modified the list of valid optional fields in the validity check for hwsku.json, per the recommendation from Joe in
https://github.com/Azure/sonic-mgmt/pull/2654/files
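A rough illustration of the idea (not the actual sonic-mgmt check; every field name below other than `port_type` is an assumption): the change amounts to accepting `port_type` as a valid optional per-interface key when hwsku.json is validated.
```
# Sketch only: validate hwsku.json, allowing the new optional "port_type" key.
import json

REQUIRED_FIELDS = {"default_brkout_mode"}          # assumed required key
OPTIONAL_FIELDS = {"port_type", "autoneg", "fec"}  # "port_type" is the new entry

def validate_hwsku(path="hwsku.json"):
    with open(path) as f:
        hwsku = json.load(f)
    for intf, attrs in hwsku.get("interfaces", {}).items():
        missing = REQUIRED_FIELDS - attrs.keys()
        unknown = attrs.keys() - REQUIRED_FIELDS - OPTIONAL_FIELDS
        if missing or unknown:
            raise ValueError(f"{intf}: missing={sorted(missing)}, unknown={sorted(unknown)}")

if __name__ == "__main__":
    validate_hwsku()
```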
Co-authored-by: Carl Keene <keene@nokia.com>
Signed-off-by: Rajkumar Pennadam Ramamoorthy rpennadamram@marvell.com
Why I did it
Install the SONiC image from ONIE. Once the system is up, execute the "config reload" command.
The root cause is that "determine-reboot-cause.service" was in a failed state.
```
root@sonic:/host/reboot-cause# systemctl list-units --failed
UNIT                             LOAD   ACTIVE SUB    DESCRIPTION
● determine-reboot-cause.service loaded failed failed Reboot cause determination service
```
How I did it
Fixed the issue by setting the default reboot cause to "REBOOT_CAUSE_UNKNOWN" instead of "None".
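A minimal sketch of the fix, assuming a simplified cause-lookup helper (the real determine-reboot-cause script handles more cause sources and file names):
```
# Sketch only: fall back to REBOOT_CAUSE_UNKNOWN instead of the string "None"
# when no previous reboot cause can be found, e.g. on the first boot after ONIE.
import os

REBOOT_CAUSE_UNKNOWN = "Unknown"
REBOOT_CAUSE_FILE = "/host/reboot-cause/reboot-cause.txt"

def find_software_reboot_cause():
    if os.path.isfile(REBOOT_CAUSE_FILE):
        with open(REBOOT_CAUSE_FILE) as f:
            cause = f.read().strip()
            if cause:
                return cause
    return REBOOT_CAUSE_UNKNOWN
```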
How to verify it
Check " determine-reboot-cause.service' loaded successfully post image installation from ONIE.
Verify "reboot-cause.txt" file is created and config reload succeeds.
#### Why I did it
hostcfgd starts at the same time the 'create_switch' method is called in the orchagent process.
This degrades the function's execution time, which eventually causes the fast-boot flow, and boot scenarios in general, to run slower (~6 seconds).
This change will delay the start time of this daemon.
90 seconds was determined as the maximum allowed downtime for the control plane to come back up in the fast-boot flow.
#### How I did it
Added a systemd timer for the hostcfgd service in order to delay its startup.
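A sketch of what such a timer unit could look like (the exact unit file shipped with the image may differ; the 90-second delay mirrors the figure above):
```
# hostcfgd.timer (sketch)
[Unit]
Description=Delays hostcfgd until SONiC has fully started
PartOf=hostcfgd.service

[Timer]
OnBootSec=1min 30s
Unit=hostcfgd.service

[Install]
WantedBy=timers.target
```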
#### How to verify it
Install an image with this change and observe that the daemon starts 90 seconds after system boot.
* 41dfaad 2021-08-02 | Bridge mac setting, fix statedb time format (#1844) (HEAD, origin/202012) [Prince Sunny]
Signed-off-by: Guohan Lu <lguohan@gmail.com>
* Update default cable len to 0m for TD2 (#8298)
* Update sonic-cfggen tests with the correct cable len
Signed-off-by: Neetha John <nejo@microsoft.com>
As part of the buffer reclamation efforts for TD2, set the default cable length to 0m, which means unused ports will have a cable length of 0m.
Why I did it
To align with the changes in Azure/sonic-swss#1830
How to verify it
- With the default cable length set to 0m and the associated changes in swss, the CABLE_LENGTH table had '0m' set for unused ports and, accordingly, more space was reserved for the shared pool (see the sketch after this list)
- Cfggen tests passed with the cable length update
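A rough sketch of how the reclaimed cable lengths can be spot-checked from CONFIG_DB on the switch (the profile/port layout follows the standard CABLE_LENGTH schema; this is not part of the change itself):
```
# Sketch: dump every CABLE_LENGTH entry; unused ports should report "0m".
from swsssdk import ConfigDBConnector

db = ConfigDBConnector()
db.connect()
for profile, ports in db.get_table("CABLE_LENGTH").items():
    for port, length in sorted(ports.items()):
        print(f"{profile} {port}: {length}")
```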
To include the following changes:
* d84a8cc 2021-08-05 | [fast-reboot] revert the change of disabling counter polling before fast-reboot (#1744) (HEAD -> 202012, github/202012) [Ying Xie]
* e900bc5 2021-08-04 | Add script null_route_helper (#1718) [bingwang-ms]
* 85f14e1 2021-08-02 | disk_check updates: (#1736) [Renuka Manavalan]
* d68ac1c 2021-05-27 | [console][show] Force refresh all lines status during show line (#1641) [Blueve]
* a0e417f 2021-04-25 | [console] Display success message after line cleared (#1579) [Blueve]
* 0c6bb27 2021-04-07 | [console] Include Flow Control status in show line result (#1549) [Blueve]
I have been seeing intermittent (~40%) build failures with the same error described in PR https://github.com/Azure/sonic-buildimage/pull/6592, even with that fix present
```
/usr/bin/ld: mibgroup/ip-forward-mib/ipCidrRouteTable/.libs/ipCidrRouteTable_interface.o: file not recognized: file truncated
...
libtool: error: 'mibgroup/ip-forward-mib/inetCidrRouteTable/inetCidrRouteTable_interface.lo' is not a valid libtool object
make[5]: *** [Makefile:1020: libnetsnmpmibs.la] Error 1
make[5]: *** Waiting for unfinished jobs....
```
#### How I did it
Use `-j1` for the libsnmp build regardless of the value of `$(MULTIARCH_QEMU_ENVIRON)`
#### How to verify it
Performed 10 builds of the libsnmp target (`target/debs/buster/libsnmp-base_5.7.3+dfsg-5_all.deb`) with and without this change. Without the change, I hit the error 40% of the time. With the change, I did not see the error at all.
Signed-off-by: Justin Sherman <jusherma@cisco.com>
Update submodule for swss
f54b7d0b [Dynamic Buffer Calc][202012]Bug fix: Don't create lossless buffer profile for active ports without speed configured (Azure/sonic-swss#1820)
ac7f5cff Td2: Reclaim buffer from unused ports (Azure/sonic-swss#1830)
04105a4b [debugcounterorch] check if counter type is supported before querying (Azure/sonic-swss#1789)
a67d8af6 [202012][portsorch] fix errors when moving port from one lag to another. (Azure/sonic-swss#1819)
This PR updates the submodule to include the following commits
a9606fb [show] fix show muxcable metrics <port> for sorted output (#1731)
7355016 [minigraph][port_config] Use imported config.main and add conditional patch (#1728)
cc1d6e4 [configlet] Python3 compatible syntax for extracting a key from the dict (#1721)
Signed-off-by: vaibhav-dahiya <vdahiya@microsoft.com>
It can happen that a service is not enabled but UnitFilePreset=enabled (the case for an Application Extension):
```
Loaded: loaded (/lib/systemd/system/cpu-report.service; disabled; vendor preset: enabled)
```
This makes the existing logic skip enabling the service.
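A hedged sketch of the intended behavior (not the literal hostcfgd code): decide from the unit's actual UnitFileState instead of the "vendor preset" field, so a disabled unit with an enabled preset still gets enabled.
```
# Sketch only: query the real enablement state and enable the unit if needed.
import subprocess

def unit_file_state(unit):
    # "systemctl show -p UnitFileState" reports e.g. "UnitFileState=disabled",
    # independent of the vendor preset shown in "systemctl status" output.
    out = subprocess.run(["systemctl", "show", "-p", "UnitFileState", unit],
                         capture_output=True, text=True).stdout.strip()
    return out.partition("=")[2]

if unit_file_state("cpu-report.service") != "enabled":
    subprocess.run(["systemctl", "enable", "cpu-report.service"], check=True)
```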
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
Backport #7944
#### Why I did it
The current logic generates the 'VLAN_SUB_INTERFACE' table if the device type is backend and the cluster name contains 'str'. This is not a reliable method to determine a storage backend device.
#### How I did it
Updated the logic to generate the 'VLAN_SUB_INTERFACE' table if any of the following conditions holds true:
1. device is of type backend and ResourceType attribute is None
2. device is of type backend and ResourceType attribute contains "Storage"
3. device is of type backend and graph contains "Subinterface" section
Also updated the logic to set "is_storage_device" to True (a sketch of the combined conditions follows the list):
1. for Backend, if any of the above conditions hold true
2. for Frontend, if ResourceType attribute contains "Storage"
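A simplified sketch of the combined conditions (the helper names are hypothetical; the actual change lives in the sonic-config-engine minigraph parsing code):
```
# Sketch only: encode the conditions listed above.
def is_storage_device(is_backend, resource_type, graph_has_subintf_section):
    if is_backend:
        return (resource_type is None
                or "Storage" in resource_type
                or graph_has_subintf_section)
    # Frontend devices count as storage only when explicitly tagged
    return resource_type is not None and "Storage" in resource_type

def generate_vlan_sub_interface(is_backend, resource_type, graph_has_subintf_section):
    # VLAN_SUB_INTERFACE is generated only for storage backend devices
    return is_backend and is_storage_device(is_backend, resource_type,
                                            graph_has_subintf_section)
```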
#### How to verify it
Added new tests to verify the code changes and built sonic_config_engine-1.0-py3-none-any.whl successfully
Update:
> 2ca493b 2021-07-13 create sniffer folder if not exist (Azure/sonic-utilities#1659)
> 1695104 2021-07-07 [show priority-group drop counters] Remove backup with cached PG drop counters after 'config reload' (Azure/sonic-utilities#1679)
> e99a3c5 2021-07-07 [show][config] support for interface alias for muxcable commands (Azure/sonic-utilities#1699)
Currently, hostcfgd is implemented such that each feature being enabled/disabled triggers execution of systemctl enable/unmask commands, which eventually trigger the 'systemctl daemon-reload' command.
Each such call costs 0.6s and overall adds an overhead of ~12 seconds of CPU time.
This change verifies the desired state of a feature against its current state in systemd and triggers a system call only when necessary.
What is changed: Check each feature's status in systemd before executing a system call to enable it and reload the systemd daemon.
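A minimal sketch of the approach (helper names are illustrative, not the literal hostcfgd code):
```
# Sketch only: skip systemctl enable/unmask (and the implied daemon-reload)
# when the unit is already in the desired state.
import subprocess

def unit_state(unit):
    res = subprocess.run(["systemctl", "is-enabled", unit],
                         capture_output=True, text=True)
    return res.stdout.strip()   # e.g. "enabled", "disabled", "masked"

def sync_feature(unit, desired_enabled):
    state = unit_state(unit)
    if desired_enabled and state != "enabled":
        subprocess.run(["systemctl", "unmask", unit], check=False)
        subprocess.run(["systemctl", "enable", unit], check=True)
    elif not desired_enabled and state == "enabled":
        subprocess.run(["systemctl", "disable", unit], check=True)
    # Otherwise the unit is already in the desired state; skip the ~0.6s call.
```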
How to verify: Build an image with this change and observe that fewer system calls are executed.
Why I did it
Static route configuration should not depend on BGP_ASN. Remove the dependency on BGP_ASN for StaticRouteMgr.
Fixes #8027
How I did it
Check whether the BGP_ASN field is present before configuring static route redistribution, and wait until BGP_ASN is available to enable static route redistribution.
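A simplified sketch of the flow (class and method names are illustrative, not the exact StaticRouteMgr code in bgpcfgd):
```
# Sketch only: defer enabling static-route redistribution until BGP_ASN shows
# up in DEVICE_METADATA|localhost, instead of assuming it is already there.
class StaticRouteMgr:
    def __init__(self, config_db):
        self.config_db = config_db
        self.redistribution_pending = False

    def on_static_route_change(self, key, data):
        self.apply_static_route(key, data)   # the route itself needs no ASN
        asn = self.config_db.get_entry("DEVICE_METADATA", "localhost").get("bgp_asn")
        if asn:
            self.enable_redistribution(asn)
        else:
            self.redistribution_pending = True   # retry once BGP_ASN appears

    def on_device_metadata_change(self, data):
        asn = data.get("bgp_asn")
        if asn and self.redistribution_pending:
            self.enable_redistribution(asn)
            self.redistribution_pending = False

    def apply_static_route(self, key, data):
        pass

    def enable_redistribution(self, asn):
        pass   # e.g. push "redistribute static" under "router bgp <asn>" in FRR
```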
How to verify it
Add unit test to cover the scenario and verify the functionality on a virtual switch.
Update submodule for sonic-utilities to include the following PR:
[202012] [pfcwd] Fix the return code in invalid case (#1698)
Signed-off-by: Dror Prital <drorp@nvidia.com>
Advance submodule head for sonic-swss on 202012
bb383be2 [Dynamic Buffer Calc][Mellanox] Bug fixes and enhancements for the lua plugins for buffer pool calculation and headroom checking (Azure/sonic-swss#1781)
f949dfe9 [Dynamic Buffer Calc] Avoid creating lossy PG for admin down ports during initialization (Azure/sonic-swss#1776)
def0a914 Fix config prompt question issue (Azure/sonic-swss#1799)
21f97506 [ci]: Merge azure pipelines from master to 202012 branch (Azure/sonic-swss#1764)
a83a2a42 [vstest]: add dvs_route fixture
849bdf9c [Mux] Add support for mux metrics to State DB (Azure/sonic-swss#1757)
386de717 [qosorch] Dot1p map list initialization fix (Azure/sonic-swss#1746)
f99abdca [sub intf] Port object reference count update (Azure/sonic-swss#1712)
4a00042d [vstest/nhg]: use dvs_route fixture to make test_nhg more robust
Signed-off-by: Stephen Sun <stephens@nvidia.com>
Update submodule for sonic-utilities to include the following PRs:
80b7b54 Pcieutil to load the platform api first instead of using common api (sonic-utilities#1672)
3d5e93d [Mellanox] Update mellanox dump generation to include SDK dumps (sonic-utilities#1640)
a805efc [ci]: Fix config prompt question issue (sonic-utilities#1693)
33c6d79 [vnet_route_check] Fix logic for getting VNET routes from ASIC DB (sonic-utilities#1653)
Update submodule for sonic-sairedis to include the following PRs:
74b5808 [Mellanox] Update mellanoxs dump generation to include SDK dumps (sonic-sairedis#833)
5ff9305 Fix azure-pipelines branch reference error, change master to 202012 (sonic-sairedis#834)
#### Why I did it
To ensure any environment variables which are configured in the build/test environment do not influence the behavior of sonic-py-common during unit tests. For example, variables which might be set by continuous integration pipelines.
#### How I did it
Add class-scoped pytest fixture to `TestDeviceInfo` class which stashes the current environment variables, clears them and yields. Once all the test cases in the class finish, the fixture will restore the original environment variables.
Also remove unnecessary unittest-style setup and teardown functions from interface_test.py
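A minimal sketch of such a class-scoped fixture (the real test class contains more setup; this only shows the stash/clear/restore shape):
```
import os
import pytest

class TestDeviceInfo:
    @pytest.fixture(scope="class", autouse=True)
    def clean_environment(self):
        saved = dict(os.environ)     # stash whatever the CI pipeline exported
        os.environ.clear()           # run this class's tests with a clean env
        yield
        os.environ.clear()
        os.environ.update(saved)     # restore the original variables afterwards

    def test_no_ci_leakage(self):
        # Hypothetical variable name, purely for illustration
        assert "SOME_CI_VARIABLE" not in os.environ
```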
A recent version of contextlib2 (https://pypi.org/project/contextlib2/21.6.0/#history) has broken Python2 compatibility,
so the version picked up by netaddr when using Python2 must be specified, or else builds fail
Co-authored-by: Tom Zhu <tom.zhu@metaswitch.com>
Update submodule for sonic-utilities to include the following PRs:
Make the soft-reboot available in the SONiC image on master (#1681)
[config]: Update environment file during config reload (#1673)
Enable pr checker form 202012 (#1649)
[config] support for configuring muxcable to manual mode of operation (#1642)
[config] Fix config int add incorrect ip (#1414)
Signed-off-by: Dror Prital <drorp@nvidia.com>
b0dad8c Add to check pcie configuration revision to get the right configuration. (#195)
f66ffc3 [eeprom_tlv_info] Optimize EEPROM data process by using visitor pattern (#193)
* Revert "fix"
This reverts commit 93585b0a0a.
* Revert "Version control git (#6562)"
This reverts commit 52b87753db.
* Revert "Revert "[files/build/versions]: support reproduceable build for git (#5774)""
This reverts commit 1cb8daf585.
* Revert "[files/build/versions]: support reproduceable build for git (#5774)"
This reverts commit 547aa9b2c7.
Add PG_DROP yang model and add a check for this field in the yang model unit tests
How to verify it
First, try to do DPB (2x50G) for the Ethernet0 port:
sudo config interface breakout Ethernet0 2x50G -f
After that, try to do DPB (1x100G[40G]) for the Ethernet0 port:
sudo config interface breakout Ethernet0 1x100G[40G] -f
Both commands should work correctly.
Signed-off-by: Mykola Gerasymenko <mykolax.gerasymenko@intel.com>
#### Why I did it
Recently, the build started failing with messages like
```
2021-06-16T16:55:02.8675603Z tests/hostcfgd/hostcfgd_test.py:5: in <module>
2021-06-16T16:55:02.8676208Z from parameterized import parameterized
2021-06-16T16:55:02.8677145Z E ModuleNotFoundError: No module named 'parameterized'
```
Unit tests for hostcfgd depend on the `parameterized` Python package, but it was never added as a dependency to the setup.py file. This dependency was added ~3 months ago. I'm not sure why we only started seeing this failure recently.
#### How I did it
Add the 'parameterized' package as a test dependency in setup.py for the sonic-host-services package
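A sketch of the kind of setup.py change described (the real sonic-host-services setup.py carries more metadata and dependencies):
```
from setuptools import setup, find_packages

setup(
    name="sonic-host-services",
    packages=find_packages(),
    setup_requires=["pytest-runner"],
    tests_require=[
        "parameterized",   # used by tests/hostcfgd/hostcfgd_test.py
        "pytest",
    ],
)
```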
Why I did it
The SONiC switches get their Docker images from a local repo, populated during install with container images pre-built into the SONiC FW. With the introduction of Kubernetes, new Docker images available in a remote repo could be deployed. This requires dockerd to be able to pull images from the remote repo.
Depending on the switch's network domain & config, it may or may not be able to reach the remote repo. In the case where the remote repo is unreachable, we could potentially make the Kubernetes server also act as an http-proxy.
How I did it
When the admin explicitly enables it, the kubernetes-server can be configured as the docker-proxy. But any update to the docker-proxy has to be made via a service-conf file environment variable, implying that a "service restart docker" is required. A restart of dockerd is very expensive, as it would restart all dockers, including the database docker.
To avoid a dockerd restart, pre-configure an http_proxy using an unused IP. When the k8s server is enabled to act as the http-proxy, an iptables entry is created to redirect all traffic destined to the configured unused proxy IP to the kubernetes-master IP. This way, any update to the Kubernetes master config is just a matter of manipulating iptables, which is transparent to all modules until dockerd needs to download from the remote repo.
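A hedged illustration of that redirection (the helper name, proxy port, and addresses are assumptions, not the literal ctrmgrd.py code):
```
# Sketch only: DNAT locally generated traffic headed for the unused proxy IP
# over to the kubernetes-master IP, so dockerd's proxy config never changes.
import subprocess

UNUSED_PROXY_IP = "172.16.1.1"   # pre-configured in dockerd's http_proxy.conf
PROXY_PORT = 3128                # assumed proxy port

def point_docker_proxy_at(master_ip):
    subprocess.run([
        "iptables", "-t", "nat", "-A", "OUTPUT",
        "-p", "tcp", "-d", UNUSED_PROXY_IP, "--dport", str(PROXY_PORT),
        "-j", "DNAT", "--to-destination", f"{master_ip}:{PROXY_PORT}",
    ], check=True)
```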
How to verify it
Configure a switch such that the image repo is unreachable.
Pre-configure dockerd with http_proxy.conf using an unused IP (e.g. 172.16.1.1).
Update ctrmgrd.service to invoke ctrmgrd.py with the "-p" option.
Configure a k8s server, and deploy an image for a feature with set_owner="kube".
Check whether the switch can successfully download the image.