Why I did it
Currently, the k8s master image is generated from a separate branch that we created ourselves, not from a release branch. We need to commit the k8s-master-related code to the master branch for a better way to build the k8s master image.
Work item tracking
Microsoft ADO (number only):
19998138
How I did it
Install k8s dashboard docker images
Install Geneva mds, mdsd and fluentd docker images and tag them as latest; tagging them as latest helps always create containers with the latest version.
Install azure-storage-blob and azure-identity; these help with etcd backup and restore.
Install the kubernetes python client packages; these help read worker and container state so we can send these metrics to Geneva.
Remove the mdm debian package; it will be replaced with the mdm docker image.
Add a k8s master entrance script; this script is called by the rc-local service at system startup. We have some master systemd services in the compute-move repo; when the VMM service creates a master VM, VMM copies all master service files into the VM, and the entrance script sets up all services according to the service files.
When the entrance script content changes, the PR build sets INCLUDE_KUBERNETES_MASTER=y to help validate the k8s-master-related code changes. The default value of INCLUDE_KUBERNETES_MASTER should always be n for the public master branch. We will generate the master image from the internal master branch.
How to verify it
Build with INCLUDE_KUBERNETES_MASTER = y
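A minimal sketch of that verification build, assuming the usual SONiC make flow (the platform and target here are illustrative):
```
# Configure for a platform once, then build with the k8s master bits enabled.
make configure PLATFORM=vs
make INCLUDE_KUBERNETES_MASTER=y target/sonic-vs.img.gz
```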
There is a redundant line in init_cfg.json.j2. It causes the pmon service to always have "delayed=False". However, PMON has a timer now, so I fix that here.
#### Why I did it
src/sonic-platform-daemons
```
* 76baca3 - (HEAD -> master, origin/master, origin/HEAD) Fixes for the issues uncovered by sonic-pcied unit tests (#389) (32 hours ago) [Ashwin Srinivasan]
```
#### How I did it
#### How to verify it
#### Description for the changelog
Why I did it
Fix an issue where some of the patches in the .patches folder are not applied.
The command "quilt applied" only lists the applied patches; if some of the patches have issues, those patches will not be applied when you run the build command again.
Work item tracking
Microsoft ADO (number only): 24410730
How I did it
Run the command to apply the patches without any conditions.
If it fails, check whether the failure reason is "series fully applied".
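A minimal sketch of that flow, assuming quilt's standard exit codes and messages:
```
# Always try to apply the whole series; tolerate only the
# "series fully applied" case, which means there is nothing left to do.
out=$(quilt push -a 2>&1) || {
    echo "$out" | grep -q "series fully applied" || { echo "$out" >&2; exit 1; }
}
```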
How to verify it
Why I did it
For the route registry service, in order to block hijacked routes, an IBGP session needs to be set up from the BGP sentinel service to SONiC, and the BGP sentinel service advertises the same route with a higher local-preference and the no-export community, so that SONiC takes the route from the BGP sentinel as the best path and does not advertise the route to EBGP peers.
In order to do that, new route-maps are needed. So this change adds a new set of templates, keeping BGPSentinel peers out of the other templates.
Work item tracking
Microsoft ADO (number only): 24451346
How I did it
Add sentinel_community in constants.yml; routes from BGPSentinel that do not match this community will be denied.
Add support to convert BGPSentinel-related configuration in the BGPPeerPassive element of the minigraph to a new BGP_SENTINELS table in CONFIG_DB.
Add a new set of "sentinels" templates to docker-fpm-frr
Add a new BGP peer manager to bgpcfgd, to add neighbors from the BGP_SENTINELS table using the "sentinels" templates
Add a test case for minigraph.py, making sure the BGPSentinel and BGPSentinelV6 elements create BGP_SENTINELS DB entries.
Add a set of test cases for the new sentinels templates in sonic-bgpcfgd tests.
Add sonic-bgp-sentinel.yang and a set of testcases for the yang file.
How to verify it
Newly added testcases and UT would pass.
Set up IPv4 and IPv6 BGPSentinel services in the minigraph and load the minigraph; show CONFIG_DB and "show runningconfiguration bgp", and the configuration would be loaded successfully (a spot-check sketch follows this list).
Using a t1-lag topo, set up an IBGP session from BGPSentinel to the SONiC loopback address; the IBGP session would come up.
Advertise a route from BGPSentinel to T1 with sentinel_community, a higher local-preference and the no-export community. In T1, show bgp route; the result is "Not advertised to any EBGP peer".
Withdraw the route in BGPSentinel; in T1, the route would then be advertised to EBGP peers.
Advertise a route from T1 that does not match sentinel_community; in T1, the route would not be seen in show bgp route.
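A quick spot-check on the DUT after loading the minigraph, assuming the standard SONiC CLI tools (table name as described above):
```
# Confirm the new CONFIG_DB table is populated and FRR picked up the peers.
sonic-db-cli CONFIG_DB keys "BGP_SENTINELS|*"
show runningconfiguration bgp | grep -i sentinel
```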
Why I did it
To fix sonic-net/sonic-mgmt#8786
How I did it
Modified the Fan API to check whether the retrieved data is valid and return accordingly.
How to verify it
Verify whether API 2.0 is loaded properly or not.
Execute CLIs like "show version", "show interface status", "show platform psustatus", etc.
Add support for a separate DEB_BUILD_PROFILES environment variable, to
be able to set build profiles. This may be used to specify whether
python 2 bindings/libraries should be built, or what configuration
options should be specified for a package.
This also makes it easier to append/remove build profiles from our rules
files, which will be needed for the sairedis build.
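An illustrative usage, assuming standard Debian profile names (the nopython2 and nocheck profiles here are examples, not necessarily the ones our rules files set):
```
# Build a package without Python 2 bindings and without running tests.
export DEB_BUILD_PROFILES="nopython2 nocheck"
dpkg-buildpackage -us -uc -b
```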
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
#### Why I did it
src/sonic-gnmi
```
* 610509b - (HEAD -> master, origin/master, origin/HEAD) Install necessary debs instead of entire artifact in azp (#137) (2 hours ago) [Zain Budhwani]
```
#### How I did it
#### How to verify it
#### Description for the changelog
Upgrade celery in python3 to 5.2.7.
Upgrade ipython to 8.12.2, since 5.4.1 requires prompt-toolkit<2.0.0,>=1.0.4,
but celery 5.2.7 relies on click-repl>=0.2.0, and click-repl>=0.2.0 relies on prompt-toolkit>=3.0.36.
So upgrade ipython to resolve the prompt-toolkit version incompatibility.
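The resulting pins, as a sketch; both versions accept prompt-toolkit 3.x, which resolves the conflict:
```
# celery 5.2.7 -> click-repl>=0.2.0 -> prompt-toolkit>=3.0.36
# ipython 8.12.2 accepts prompt-toolkit 3.x (unlike ipython 5.4.1)
pip3 install "celery==5.2.7" "ipython==8.12.2"
```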
#### Why I did it
src/sonic-swss
```
* cb1b3f40 - (HEAD -> master, origin/master, origin/HEAD) Remove system neighbor DEL operation in m_toSync if SET operation for (#2853) (7 hours ago) [Song Yuan]
```
#### How I did it
#### How to verify it
#### Description for the changelog
#### Why I did it
src/linkmgrd
```
* 6e5cfda - (HEAD -> master, origin/master, origin/HEAD) Change common_libs dependencies from buster to bullseye (#212) (2 days ago) [Ze Gan]
```
#### How I did it
#### How to verify it
#### Description for the changelog
Why I did it
Certain all-numeric device IDs of PCI devices in the pcie.yaml file are left unquoted, so YAML parses them as integers and they never match the string IDs read from the devices, leading to false mismatch flags in the pcie daemon and subsequently to log flooding. This PR fixes that issue.
Work item tracking
Microsoft ADO (number only): 24578930
How I did it
Added quotes around all-numeric PCI device IDs in the pcie.yaml files of the following platforms (see the sketch after this list):
x86_64-mlnx_msn2700-r0
x86_64-mlnx_msn4600c-r0
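A hypothetical one-liner illustrating the change (the id field name and quoting style are assumptions about the file layout):
```
# Quote any bare all-numeric id values, e.g. `id: 9170` -> `id: '9170'`.
sed -i -E "s/^(\s*id: )([0-9]+)\s*$/\1'\2'/" pcie.yaml
```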
How to verify it
Install latest image after the merge and verify that syslogs are not flooded with PCI device mismatch errors
Why I did it
Fix the armhf build failure.
How to reproduce the issue:
docker run -it debian:bullseye bash
apt-get update && apt-get install -y python3-pip
pip3 install PyYAML==5.4.1
Error message:
Collecting PyYAML==5.4.1
Installing build dependencies ... done
Getting requirements to build wheel ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 /tmp/tmp6xabslgb_in_process.py get_requires_for_build_wheel /tmp/tmp_er01ztl
....
raise AttributeError(attr)
AttributeError: cython_sources
----------------------------------------
WARNING: Discarding d63f2d7597/PyYAML-5.4.1.tar.gz (sha256)=607774cbba28732bfa802b54baa7484215f530991055bb562efbed5b2f20a45e (from https://pypi.org/simple/pyyaml/) (requires-python:>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*). Command errored out with exit status 1: /usr/bin/python3 /tmp/tmp6xabslgb_in_process.py get_requires_for_build_wheel /tmp/tmp_er01ztl Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement PyYAML==5.4.1
ERROR: No matching distribution found for PyYAML==5.4.1
root@fa2fa92edcfd:/#
But if the option --no-build-isolation is added, then it works; see the fix:
pip3 install "PyYAML==5.4.1" --no-build-isolation
The same error can be found in multiple builds.
Work item tracking
Microsoft ADO (number only): 24567457
How I did it
Add the build option --no-build-isolation.
#### Why I did it
Event yang models currently use int as the type for the usage leaf; it needs to be of type decimal64.
##### Work item tracking
- Microsoft ADO **(number only)**: 17747466
#### How I did it
Update yang models and UT
#### How to verify it
UT
#### Why I did it
src/sonic-swss
```
* 5b27c209 - (HEAD -> master, origin/master, origin/HEAD) Refactor Orch class to separate recorder implementation (#2837) (8 hours ago) [Vivek]
```
#### How I did it
#### How to verify it
#### Description for the changelog
#### Why I did it
Reduced root directory privileges
#### How I did it
During build_debian, called chroot to reduce the root directory and its subdirectories' permissions to 744.
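A minimal sketch of such a step, assuming the usual build_debian.sh chroot pattern (the variable name follows that script; the mode is as described above):
```
# Tighten /root inside the image filesystem being built.
sudo LANG=C chroot $FILESYSTEM_ROOT chmod -R 744 /root
```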
#### How to verify it
After image build and upgrade, check the /root permissions by calling "ls -al /root".
#### Description for the changelog
reduced /root directory privileges
#### Why I did it
src/sonic-platform-daemons
```
* 94242c2 - (HEAD -> master, origin/master, origin/HEAD) Use vendor customizable fan speed threshold checks (#378) (3 hours ago) [spilkey-cisco]
* db6e340 - Fix index out of range in the error log of invalid media lane mask received (#386) (8 hours ago) [MichaelWangSmci]
```
#### How I did it
#### How to verify it
#### Description for the changelog
- Why I did it
Adjust PSU power threshold logic in system health.
- How I did it
Update the description message in PSU power threshold checking:
"power of PSU x (xx W) exceeds threshold (xx W)" => "System power exceeds xx threshold (xx W)"
- How to verify it
Manual test and unit test
- Why I did it
To optimize Mellanox platform SAI build
- How I did it
SAI debs are now downloaded from the Spectrum-SDK-Drivers-SONiC-Bins release.
- How to verify it
Configure/build for Mellanox platform, check the image and ensure that correct SAI debs are included.
- Why I did it
The reset cause "reset_from_comex" has been removed by hw-management, hence removing it from platform API code
- How I did it
Remove reset_from_comex from reboot cause mapping
- How to verify it
Manual test
- Why I did it
BIOS on new-generation switches can come with a file type of cap or cab; support for these file types needs to be added.
Also, the ONIE version on new devices can have a suffix of 'dev'.
- How I did it
Added cap & cab as possible component extensions for ComponentBIOS.
Update the ONIE version regex to include dev signed versions.
- How to verify it
Update BIOS.
#### Why I did it
The testcases in sonic-mgmt need the protobuf and dashapi packages.
##### Work item tracking
- Microsoft ADO **(number only)**:
#### How I did it
Because the sonic-mgmt docker is based on Ubuntu 20.04, it cannot directly install the packages compiled by the slave due to dependency issues. Download the related packages directly from Azp.
#### How to verify it
Check azp stats.
Why I did it
When SONiC is managed by k8s, the sonic container is managed by a k8s daemonset, and the daemonset identifies its members by labels. Currently, when restarting a sonic service by systemctl, if the service's container is already managed by k8s, the systemd script stops the container by removing the feature label to make it leave the k8s daemonset, and then starts it by adding the label to make it join the k8s daemonset again.
This behavior causes a problem during k8s container upgrade. Containers in a daemonset are upgraded in a rolling fashion: the daemonset version is updated first, then the new version is rolled out to containers with precheck/postcheck one by one. However, if a sonic device joins a daemonset, k8s will directly deploy a pod with the current version of the daemonset; that is expected when a device joins the k8s cluster for the first time.
But for a device which has already joined the k8s cluster, re-joining the daemonset will cause the container to be upgraded to the new version without precheck. So if a systemd service is restarted during a daemonset upgrade, the container may be upgraded without precheck and break the rolling update policy. To fix it, we need to remove the logic that drops the k8s label in the systemd service stop script for kube mode.
Work item tracking
Microsoft ADO (number only): 24304563
How I did it
Don't drop the label in the systemd service stop script when the feature's set_owner is kube; only drop the label when the feature's set_owner is local (a sketch follows).
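A minimal sketch of the stop-script guard, assuming the FEATURE table lookup via sonic-db-cli and a label format mirroring the feature_enabled label mentioned below:
```
# Only local-owned features should leave the daemonset on stop.
owner=$(sonic-db-cli CONFIG_DB hget "FEATURE|$feature" set_owner)
if [ "$owner" = "local" ]; then
    kubectl label node "$(hostname)" "${feature}_enabled-"
fi
```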
How to verify it
The label feature_enabled should always be true if the feature's set_owner is kube.
This is primarily to fix a bug in scapy hitting an error when trying to
listen on multiple interfaces in a single `sniff` call. This also
upgrades it to the current latest version.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
#### Why I did it
src/sonic-utilities
```
* 51c7a43c - (HEAD -> master, origin/master, origin/HEAD) [show][muxcable] update `show mux config` to print out `soc_ipv6` as well (#2909) (6 hours ago) [Jing Zhang]
* fd497755 - [route_check][dualtor] Ignore vlan neighbor route miss (#2888) (18 hours ago) [Longxiang Lyu]
* 81c0ed4e - [show][muxcable] update `show mux tunnel-route` to check soc_ipv6 as well (33 hours ago) [Jing Zhang]
* 1ee73668 - [db_migrator] Migrate DNS configuratuion (#2893) (2 days ago) [ganglv]
* 553a3432 - [dualtor][route_check] filter out `soc_ipv6` (#2899) (2 days ago) [Jing Zhang]
```
#### How I did it
#### How to verify it
#### Description for the changelog
Why I did it
For some devices whose log folder size is larger than 200M (for example, 256M), LOG_FILE_ROTATE_SIZE_KB would be 16M, and
THRESHOLD_KB = USABLE_SPACE_KB - (NUM_LOGS_TO_ROTATE * LOG_FILE_ROTATE_SIZE_KB * 2)
             = (VAR_LOG_SIZE_KB * 90 / 100 - RESERVED_SPACE_KB) - (NUM_LOGS_TO_ROTATE * LOG_FILE_ROTATE_SIZE_KB * 2)
             = (262144 * 90 / 100 - 4096) - (8 * 16384 * 2)
             = 231833 - 262144
so the result would be a negative value.
Work item tracking
Microsoft ADO (number only):
24524827
How I did it
Add a case for 400M: if the log folder size is between 200M and 400M, set the log file size to 2M.
How to verify it
Do cmd "sudo logrotate -f /etc/logrotate.conf" on DUT which val/log folder size is 256M, and check the syslog.
Why I did it
When cleaning up container images, the current code has two bugs that need to be fixed. Also, some variable names may cause confusion, so rename those variables.
Work item tracking
Microsoft ADO (number only): 24502294
How I did it
We do clean up after tagging latest successfully. But currently the tag-latest function only returns 0 and 1: 0 means succeed and 1 means failed; when we get 1 we retry, and when we get 0 we do clean up. Actually the code 0 includes another case where we don't need to do clean up: when we are doing tag latest, the container image we want to tag may not be running, so we cannot tag it latest and don't need to clean up. We need to separate this case from 0, returning -1 now.
When local mode (v1) -> kube mode (v2) happens, one problem is how to handle the local image; there are two cases. One case is that there was a kube v1 container dry-run (because we don't replace the local image if the kube version equals the local version): we remove the kube v1 image, tag the local version with the ACR prefix and remove the local v1 tag. The other case is that there was no kube v1 container dry-run: we remove the local v1 image directly, because the local v1 image should not be the last desired version.
The docker_id variable may cause confusion; it is actually a docker image ID, so rename the variable. Also rename the two dicts and the list to be more readable.
How to verify it
Check tag latest and image clean up result.
Why I did it
During the upgrade process via k8s, the feature's systemd service restarts as well. Every feature systemd service has a restart limit, and the limit is too small: only three times. If fallback happens during an upgrade, the start count will be 2, so after just one more restart the systemd service will be down. We need to bypass this. The restart function is called for local -> kube, kube -> kube and kube -> local transitions; each time we call it, we really do need the restart to succeed, so do reset-failed every time we restart.
When we need to go back to local mode, we do the systemd restart immediately, without waiting for the default restart interval, so that we can reduce the container downtime.
Work item tracking
Microsoft ADO (number only):
24172368
How I did it
Before every restart for an upgrade, reset the feature's restart counter; the counter is reset to 0 to bypass the restart limit (see the sketch below).
When we need to go back to local mode, we do the systemd restart immediately.
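A minimal sketch with the standard systemd tools (the service name is illustrative):
```
# Clear the unit's start-limit counter, then restart without
# waiting out any restart backoff interval.
systemctl reset-failed snmp.service
systemctl restart snmp.service
```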
How to verify it
The feature's systemd service can always be restarted successfully during the upgrade process via k8s.
Why I did it
HLD implementation: Container Hardening (sonic-net/SONiC#1364)
Work item tracking
Microsoft ADO (number only): 14807420
How I did it
Reduce Linux capabilities in the privileged flag; retain the NET_ADMIN and SYS_ADMIN capabilities.
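An illustrative docker invocation of the idea (the flag set is assumed; the actual change lives in the container start configuration):
```
# Instead of --privileged: drop everything, then add back only
# the capabilities the bgp container needs.
docker run --cap-drop ALL --cap-add NET_ADMIN --cap-add SYS_ADMIN docker-fpm-frr
```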
How to verify it
Install new image to DUT, verify bgp container is up
Run bgp sonic-mgmt kvmtest
Why I did it
[Build] Change the build option from ENABLE_FIPS_FEATURE to INCLUDE_FIPS
Work item tracking
Microsoft ADO (number only): 24485797
How I did it
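A hedged example of the renamed option in a build invocation (the target name is illustrative):
```
# Previously: make ENABLE_FIPS_FEATURE=y ...
make INCLUDE_FIPS=y target/sonic-broadcom.bin
```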
#### Why I did it
src/sonic-platform-daemons
```
* d73808c - (HEAD -> master, origin/master, origin/HEAD) Added PCIe transaction check for all peripherals on the bus (#331) (9 hours ago) [Ashwin Srinivasan]
* 432602a - Update active application selected code in transceiver_info table aft… (#381) (13 hours ago) [Michael Wang - TW]
```
#### How I did it
#### How to verify it
#### Description for the changelog
#### Why I did it
src/sonic-swss
```
* c7e1308e - (HEAD -> master, origin/master, origin/HEAD) Remove redundant updateFabricPortState (#2850) (2 hours ago) [kenneth-arista]
```
#### How I did it
#### How to verify it
#### Description for the changelog
Why I did it
It is to fix the docker-ptf-sai build failure.
https://dev.azure.com/mssonic/build/_build/results?buildId=311315&view=logs&j=cef3d8a9-152e-5193-620b-567dc18af272&t=cf595088-5c84-5cf1-9d7e-03331f31d795
2023-07-09T13:53:19.9025355Z Traceback (most recent call last):
2023-07-09T13:53:19.9025715Z File "/root/ptf/.eggs/setuptools_scm-7.1.0-py3.7.egg/setuptools_scm/_entrypoints.py", line 74, in <module>
2023-07-09T13:53:19.9025933Z from importlib.metadata import entry_points # type: ignore
2023-07-09T13:53:19.9026167Z ModuleNotFoundError: No module named 'importlib.metadata'
Work item tracking
Microsoft ADO (number only): 24513583
How I did it
How to verify it