Why I did it
Downgrade the SymCrypt version to v103.0.1 for certification.
Work item tracking
Microsoft ADO (number only): 24222567
How I did it
How to verify it
- Why I did it
Since the prod signing tool is vendor specific, and each vendor may want to pass different arguments to the script, we need a way to inject those arguments into the script.
- How I did it
Add a compilation flag SECURE_UPGRADE_PROD_TOOL_ARGS which vendors can use to inject any flags they want into the prod signing script.
- How to verify it
Build SONiC using your own prod script
Why I did it
Fix #15000
isc-dhcp 4.4.1-2.3+deb11u1 is no longer available in the Debian repository.
How I did it
Update isc-dhcp to the new version 4.4.1-2.3+deb11u2.
- Why I did it
In order to reduce SONiC build time, there is an option to acquire the sonic slave docker(s) from an artifact server (reducing sonic make configure time).
Current implementation supports only convention of:
<REGISTRY_SERVER>:<REGISTRY_PORT>/<SLAVE_BASE_IMAGE>:<SLAVE_BASE_TAG>
In case the SLAVE_BASE_IMAGE resides in an internal path inside the server, the convention should be:
<REGISTRY_SERVER>:<REGISTRY_PORT><REGISTRY_SERVER_PATH>/<SLAVE_BASE_IMAGE>:<SLAVE_BASE_TAG>
REGISTRY_SERVER_PATH (set in rules/config) has to start with "/".
If REGISTRY_SERVER_PATH is not set, the behavior remains the same as it is today.
- How I did it
Add ability to set REGISTRY_SERVER_PATH and update the code for docker image tag and docker image pull accordingly
- How to verify it
Use a sonic slave docker image from an artifact server where the image is kept in an internal folder, and make sure the build consumes it.
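For illustration, with hypothetical values (REGISTRY_SERVER=myregistry.example.com, REGISTRY_PORT=443, REGISTRY_SERVER_PATH=/internal/sonic, and a hypothetical slave image name/tag) the resulting image reference would look like this:
```
docker pull myregistry.example.com:443/internal/sonic/sonic-slave-bullseye:latest
```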
- Why I did it
To be able to see how much time was consumed to build a specific target.
The newly added code does the following:
1. Print build start time for target
2. Print build end time for target
3. Print elapsed time for target
- How I did it
Add a macro to record the time
Add macros to print end time and elapsed time
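Roughly, the idea behind the macros, expressed in shell terms (a sketch only; the actual macro names in the build system are not shown here):
```
start=$SECONDS                              # record the build start time for the target
# ... build the target ...
echo "elapsed: $((SECONDS - start))s"       # print the elapsed time when the target finishes
```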
- How to verify it
Just build an image and check any *.log file
Signed-off-by: Yevhen Fastiuk <yfastiuk@nvidia.com>
#### Why I did it
Remove dbus when telemetry does not use it.
##### Work item tracking
- Microsoft ADO **(number only)**: 17852550
#### How I did it
Use INCLUDE_SYSTEM_GNMI to determine if telemetry needs dbus.
#### How to verify it
Build image and check telemetry container.
Depends on https://github.com/sonic-net/sonic-linux-kernel/pull/315
#### Why I did it
The name SECURE_UPGRADE_DEV_SIGNING_CERT is misleading; this flag is relevant to both dev and prod signing.
#### How I did it
Rename all mentions of SECURE_UPGRADE_DEV_SIGNING_CERT to SECURE_UPGRADE_SIGNING_CERT; this is also done with a PR in the sonic-linux-kernel repository.
#### How to verify it
Build SONiC using your own prod script
This is done because when there is a default value, we mount to this path, which creates the folder on the host.
#### Why I did it
Fix an issue where, when running without overriding SECURE_UPGRADE_DEV_SIGNING_KEY and SECURE_UPGRADE_DEV_SIGNING_CERT, dummy folders were created on the host.
#### How I did it
Removed the default assignment to SECURE_UPGRADE_DEV_SIGNING_KEY and SECURE_UPGRADE_DEV_SIGNING_CERT
#### How to verify it
Build SONiC using your own prod script
Why I did it
Support adding the SONiC OS Version to the device info.
It will be used to display the version info in the SONiC command "show version". The version is used for FIPS certification: we do not do the FIPS certification on a specific release, but on the SONiC OS Version.
SONiC Software Version: SONiC.master-13812.218661-7d94c0c28
SONiC OS Version: 11
Distribution: Debian 11.6
Kernel: 5.10.0-18-2-amd64
How I did it
- Why I did it
Currently, non upstream patches are applied only after upstream patches.
Depends on sonic-net/sonic-linux-kernel#313. Can be merged in any order, preferably together
- How I did it
Non-upstream patches that reside in the sonic repo will no longer be saved in a tar file but rather in a folder pointed to by EXTERNAL_KERNEL_PATCH_LOC. This makes changes to the non-upstream patches easily traceable.
The build variable name is also updated to INCLUDE_EXTERNAL_PATCHES
Files/folders expected under EXTERNAL_KERNEL_PATCH_LOC
EXTERNAL_KERNEL_PATCH_LOC/
├── patches/
│   ├── 0001-xxxxx.patch
│   ├── 0001-yyyyyyyy.patch
│   └── .............
└── series.patch
series.patch should contain a diff that is applied on the sonic-linux-kernel/patch/series file. The diff should include all the non-upstream patches.
How to verify it
Build the kernel and verify that all the patches are applied properly.
Signed-off-by: Vivek Reddy Karri <vkarri@nvidia.com>
#### Why I did it
When the CPU is busy, sonic_ax_impl may not be fast enough to handle the notification messages sent from Redis.
Thus, the messages keep accumulating in the memory space of sonic_ax_impl.
If this condition continues, the memory usage keeps increasing.
#### How I did it
Add a monit file to check whether the SNMP container, where sonic_ax_impl resides, uses more than 4 GB of memory.
If yes, restart the sonic_ax_impl process.
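A rough shell sketch of what the check amounts to (hypothetical commands and program name; the actual change is a monit configuration entry):
```
# Watch the SNMP container's memory usage against the 4 GB threshold from this PR
docker stats --no-stream --format '{{.Name}}: {{.MemUsage}}' snmp
# If it exceeds the threshold, restart the subagent process inside the container
# ("snmp-subagent" is an assumed supervisord program name for sonic_ax_impl)
docker exec snmp supervisorctl restart snmp-subagent
```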
#### How to verify it
Run this command repeatedly: `while true; do ret=$(redis-cli -n 0 set LLDP_ENTRY_TABLE:test1 test1); sleep 0.1; done;`
Check that the memory used by sonic_ax_impl keeps increasing.
After a period, make sure sonic_ax_impl is restarted when the memory usage reaches the 4 GB threshold.
Verify that the memory usage of sonic_ax_impl drops back down from 4 GB.
Change references to use bullseye instead of buster
Why I did it
Almost all daemons in 202211 and master use bullseye, and sflow was easy to migrate.
How I did it
Replaced the references, built and tested in 202211.
How to verify it
Build with the changes, enable sflow:
admin@sonic:~$ sudo config sflow collector add test 1.2.3.4
admin@sonic:~$ sudo config sflow collector enable
tcpdump on 1.2.3.4 and see that UDP sFlow packets are being sent.
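For example, on the collector side (assuming the default sFlow UDP port 6343 and interface eth0):
```
tcpdump -ni eth0 udp port 6343
```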
Signed-off-by: Christian Svensson <blue@cmd.nu>
Change references to use bullseye instead of buster
Why I did it
Almost all daemons in 202211 and master use bullseye, and NAT seems easy to migrate.
How I did it
Replaced the references, built with 202211 branch.
How to verify it
Not sure, it builds and tests pass as far as I can tell but I don't use the feature myself.
Signed-off-by: Christian Svensson <blue@cmd.nu>
* Upgrade docker-sonic-vs and docker-syncd-vs to Bullseye
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* iproute2: Force a new version and timestamp to be used for the package
There is an issue with Docker's overlay2 storage driver when not using
native diffs (and thus falling back to naive diff mode), which is the
case in the CI builds. The way the naive diff mode detects changes is by
comparing the file size and comparing the timestamps (specifically, I
believe it's the modification timestamp), and if there's a change there,
then it's considered a change that needs to be recorded as part of that
layer.
The problem is that with the code being added in the patch, the file
size remains the same, and the timestamp of binary files appear to be
the same timestamp as the changelog entry (likely for reproducible build
purposes). The file size remains the same likely due to extra padding
within the file introduced by relro. Because of this, Docker doesn't
detect this file has changed, and doesn't save the new file as part of
this layer.
To work around this, create a new changelog entry (with a new version as
well) with a new timestamp. This will result in the binary files having
a different timestamp, and thus will get saved by Docker as part of that
layer.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
---------
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Why I did it
Found a new bug on the kubelet side. The kubernetes-cni plug-in was removed in #12997; the reason is that the plug-in is auto-installed when installing kubeadm, and the build reports an error if we don't remove the install code. But after the removal, the auto-installed version is different from what we installed before. This affects kubelet behavior in some scenarios we had not seen before, so it needs to be installed another way.
How I did it
Install kubernetes-cni==0.8.7-00 before installing kubeadm.
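Roughly, the install order looks like this (sketch only; apt pins a deb version with '='):
```
apt-get install -y kubernetes-cni=0.8.7-00
apt-get install -y kubeadm
```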
How to verify it
The Flannel binary will be installed under the /opt/cni/bin/ folder.
- Why I did it
Add Secure Boot support to SONiC OS.
Secure Boot (SB) is a verification mechanism for ensuring that code launched by a computer's UEFI firmware is trusted. It is designed to protect a system against malicious code being loaded and executed early in the boot process before the operating system has been loaded.
- How I did it
Added a signing process to sign the following components:
shim, grub, the Linux kernel, and kernel modules during the build, when the feature is enabled at build time according to the HLD explanations (the feature is disabled by default).
- How to verify it
There are self-verifications of each boot component when building the image. In addition, there is an existing end-to-end test in the sonic-mgmt repo that checks that the boot succeeds when loading a secure system (details below).
How to build a SONiC image with the secure boot feature (more description in the HLD):
Use the following build flags from rules/config:
SECURE_UPGRADE_MODE="dev"
SECURE_UPGRADE_DEV_SIGNING_KEY="/path/to/private/key.pem"
SECURE_UPGRADE_DEV_SIGNING_CERT="/path/to/cert/key.pem"
After setting those flags, build sonic-buildimage.
Before installing the image, prepare the setup (switch device) as follows:
check that the device supports UEFI
store the public keys in the UEFI DB
enable the Secure Boot flag in UEFI
How to run a test that verifies the Secure Boot flow:
The existing test "test_upgrade_path" under "sonic-mgmt/tests/upgrade_path/test_upgrade_path" is enough to validate proper boot.
You need to specify the following arguments:
Base_image_list your_secure_image
Target_image_list your_second_secure_image
Upgrade_type cold
Then run the test. Basically, the test will install the base image given in the parameter, then upgrade to the target image by doing a cold reboot, and validate that all the services are up and working correctly.
Update sonic-swss-common submodule pointer to include the following:
565ad4b Fix common path issue (#751)
3352881 Prevent sonic-db-cli generate core dump (#749)
43cadec Add ProfileProvider class to support read profile config from PROFILE_DB. (#683)
8b09f90 Update path to sairedis tests (#747)
85f3776 Non recursive automake and Debian packaging changes (#700)
This is a reland of #13950, with the debug image build fix.
#### Why I did it
Add support for California-SB237 conformance.
https://github.com/sonic-net/SONiC/tree/master/doc/California-SB237
#### How I did it
Expire user passwords during build
#### How to verify it
Enable build flag and check if default user is prompted for a new password
Why I did it
[Security] Upgrade the openssl version to 1.1.1n-0+deb11u4+fips
f6df7303d8 Update expired certs.
84540b59c1 CVE-2022-2068
f763d8a93e Prepare 1.1.1n-0+deb11u2
576562cebe CVE-2022-1292
How I did it
Upgrade the OpenSSL version
Why I did it
[FIPS] Upgrade Open-SymCrypt version to 0.6
Improve the SymCrypt performance
Support downloading the debug packages from the storage account in version 0.6.
How I did it
Upgrade symcrypt-openssl from version 0.4 to version 0.6
Changes in https://github.com/sonic-net/sonic-fips:
0c29b23 Upgrade the submodules: SymCrypt and SymCrypt-OpenSSL #40
80022f3 Fix the ARM64 build failure
2e76a3d Disable the unsupported tests
Other changes will be added as well:
55b8e0a Merge pull request #35 from xumia/change-license
120c1a7 Upgrade SymCrypt and SymCrypt-OpenSSL
2f9c084 Merge pull request #39 from liuh-80/dev/liuh/update-openssh-version
a3be6c5 Revert openssh version
e02fa1e Update fips version
How to verify it
Why I did it
Add explicit dependency on sonic_platform_common in sonic-chassisd mk. This was needed because sonic-chassisd depends on sonic-platform-base which is present in sonic-platform-common wheel package.
How I did it
Add explicit dependency on sonic_platform_common in sonic-chassisd mk.
How to verify it
Verified by building all platforms broadcom, mellanox, marvel_arm
Why I did it
[Build] Support Debian snapshot mirror to improve build stability
It enhances the reproducible build by supporting the Debian snapshot mirror. It guarantees that all the docker images use the same Debian mirror snapshot, and it fixes the temporary build failures caused by remote Debian mirror indexes changing during the build. It also fixes the version conflict issue caused by some of the Debian packages not having fixed versions.
How I did it
Add a new feature to support the Debian snapshot mirror.
How to verify it
Why I did it
docker-sonic-mgmt build is failing.
How I did it
The stretch docker was disabled recently; update docker-sonic-mgmt to buster.
Migrate from sonictest to sonicbld, because Azure requires migrating the VMs from uswest2 to uswest3.
Fix a build issue when building the image.
How to verify it
Why I did it
We plan to pilot the k8s feature and need to fix several bugs, including enabling the telemetry feature and adding a platform label.
How I did it
Add the supported feature set; only enable telemetry container upgrade for now.
Add a platform label for scheduler usage.
Remove the CNI installation code; it is auto-installed when installing kubeadm.
How to verify it
After the SONiC device joins the k8s cluster, show the node labels to check whether the platform label is visible.
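For example, from the k8s master (standard kubectl command; the node name is a placeholder):
```
kubectl get nodes --show-labels | grep <sonic-node-name>
```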
Signed-off-by: Yun Li yunli1@microsoft.com
During the build process, a dsc file is retrieved from the URL:
http://deb.debian.org/debian/pool/main/i/isc-dhcp/isc-dhcp_4.4.1-2.3.dsc
Depending on the DNS resolution, the server reached may respond with an
HTTP 404 error code, which stops the build process.
In all cases, the URL http://deb.debian.org/debian/pool/main/i/isc-dhcp/
no longer lists this DSC file but one with a different name.
The suffix "+deb11u1" is now appended to identify the debian version.
- append this suffix to the make file rules of isc-dhcp
Signed-off-by: Guillaume Lambert <guillaume.lambert@orange.com>
- Why I did it
Support syslog rate limit configuration feature
- How I did it
Remove unused rsyslog.conf from containers
Modify docker startup script to generate rsyslog.conf from template files
Add metadata/init data for syslog rate limit configuration
- How to verify it
Manual test
New sonic-mgmt regression cases
Why I did it
It's possible to speed up some parts of a build using parallel compression/decompression.
This is especially important for build_debian.sh.
How I did it
pigz is a parallel implementation of gzip: https://zlib.net/pigz/
Some programs like docker and mkinitramfs can automatically detect and use it instead of gzip.
For tar we need to select it directly.
To enable this feature you need to set GZ_COMPRESS_PROGRAM=pigz
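A minimal usage sketch (assuming pigz is installed in the build environment; the target name is just an example, and the variable can be set in rules/config or passed on the make command line):
```
make GZ_COMPRESS_PROGRAM=pigz target/sonic-vs.img.gz
# tar does not auto-detect pigz, so it has to be selected explicitly, e.g.:
tar --use-compress-program=pigz -cf archive.tgz fsroot
```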
docker-sonic-vs doesn't have the infra needed for the syslog rate limit
configuration, so it's not going to be rendering jinja templates to
overwrite /etc/rsyslog.conf. This also means that syslog messages would
get logged twice (because both the default /etc/rsyslog.conf file and
/etc/rsyslog.d/50-default.conf are telling it to log to syslog).
Therefore, keep the custom static /etc/rsyslog.conf file for docker-sonic-vs.
Fixes sonic-net/sonic-swss#2570.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
- Why I did it
This optimization is needed for DPU SONiC. DPU SONiC runs a limited set of containers and teamd and radv containers are not part of them. Unlike the other containers, there was no possibility to disable teamd and radv containers compilation.
To reduce DPU SONiC compilation time and reduce the image size this commit adds the possibility to disable their compilation.
- How I did it
Two new configuration options are added to rules/config file:
INCLUDE_TEAMD
INCLUDE_ROUTER_ADVERTISER
By default, to preserve the existing behavior, both options are enabled. There are two ways to override them:
Change the option value to "n" in the rules/config file.
Override their value using the SONIC_OVERRIDE_BUILD_VARS env variable:
SONIC_OVERRIDE_BUILD_VARS="SONIC_INCLUDE_TEAMD=y SONIC_INCLUDE_ROUTER_ADVERTISER=n"
- How to verify it
The default behavior is preserved. To verify it, compile the image without overriding the new options. Install the image and verify that both teamd and radv containers are present and running.
To verify the new options, override them with the "n" value. Compile and install the image. Verify that those docker containers are not present. Verify that SWSS can start without errors.
This feature caches all the deb files during docker build and stores them
into the version cache.
It loads the cache file if one already exists in the version cache and copies the extracted
deb files from the cache file into the Debian cache path (/var/cache/apt/archives).
apt-install always installs the deb file from the cache if it exists, which
avoids unnecessary package downloads from the repo and speeds up the overall build.
The cache file is selected based on the SHA value of the version dependency
files.
Why I did it
How I did it
How to verify it
* 03.Version-cache - framework environment settings
It defines and passes the necessary version cache environment variables
to the caching framework.
It adds the utils script for shared cache file access.
It also adds the post-cleanup logic for cleaning the unwanted files from
the docker/image after the version cache creation.
* 04.Version cache - debug framework
Added DBGOPT Make variable to enable the cache framework
scripts in trace mode. This option takes the part name of the script to
enable the particular shell script in trace mode.
Multiple shell script names can also be given.
Eg: make DBGOPT="image|docker"
Added verbose mode to dump the version merge details during
build/dry-run mode.
Eg: scripts/versions_manager.py freeze -v \
'dryrun|cmod=docker-swss|cfile=versions-deb|cname=all|stage=sub|stage=add'
* 05.Version cache - docker dpkg caching support
This feature caches all the deb files during docker build and stores them
into the version cache.
It loads the cache file if one already exists in the version cache and copies the extracted
deb files from the cache file into the Debian cache path (/var/cache/apt/archives).
apt-install always installs the deb file from the cache if it exists, which
avoids unnecessary package downloads from the repo and speeds up the overall build.
The cache file is selected based on the SHA value of the version dependency
files.
Why I did it
Provide GNMI native write interface for configuration.
How I did it
Add configuration parameters for GNMI native write.
How to verify it
Check build pipeline.
- Why I did it
Upgrade the app-extension developer environments (sonic-sdk & sonic-sdk-bullseye) to bullseye
- How to verify it
Built an app-extension using these images and verified if it is up and running.
Signed-off-by: Vivek Reddy <vkarri@nvidia.com>
Make syncd rpc docker which supports sai-ptf v2
Locally build the target:
NOSTRETCH=y NOJESSIE=y make configure PLATFORM=vs
NOSTRETCH=y NOJESSIE=y NOBULLSEYE=y SAITHRIFT_V2=y make target/docker-ptf-sai.gz
NOSTRETCH=y NOJESSIE=y make configure PLATFORM=vs
NOSTRETCH=y NOJESSIE=y NOBULLSEYE=y make target/docker-ptf.gz
NOSTRETCH=y NOJESSIE=y make configure PLATFORM=broadcom
NOSTRETCH=y NOJESSIE=y ENABLE_SYNCD_RPC=y SAITHRIFT_V2=y make target/docker-syncd-brcm-rpcv2.gz
NOSTRETCH=y NOJESSIE=y ENABLE_SYNCD_RPC=y SAITHRIFT_V2=y make target/docker-saiserverv2-brcm.gz
Test done:
#12619
NOSTRETCH=y NOJESSIE=y make configure PLATFORM=broadcom
NOSTRETCH=y NOJESSIE=y ENABLE_SYNCD_RPC=y make target/docker-syncd-brcm-rpc.gz
NOSTRETCH=y NOJESSIE=y ENABLE_SYNCD_RPC=y make target/docker-saiserver-brcm.gz
Signed-off-by: richardyu-ms <richard.yu@microsoft.com>
Signed-off-by: richardyu-ms <richard.yu@microsoft.com>
Why I did it
A recent migration of SonicV2Connector from swsssdk to swsscommon.swsscommon broke phy-credo.
How I did it
Change the import path while keeping a fallback on the previous one for 202205
How to verify it
phy-credo.service no longer fails due to an import error
Why I did it
Stopping pmon after swss and syncd causes some ERROR logs in syslog. Also, this affects teamd downtime.
How I did it
Adjust warmboot shutdown order in make file
How to verify it
Build SONiC image, deploy to the target device and check /etc/sonic/warm-reboot_order content.
lldp mux nat radv sflow bgp pmon swss teamd syncd
#### Why I did it
Currently at the Azure build system, the P4RT container is disabled by default at the build time. Here the goal is to include the P4RT container at the build time while disabling it at the runtime. The user can enable/disable the p4rt app through the config based on the preference.
#### How I did it
Changed the config in rules/config and init-cfg.json.j2
* [openssh]: Restore behavior of ClientAliveCountMax=0
OpenSSH 8.2 changed the behavior of ClientAliveCountMax=0 such that
setting it to 0 disables connection-killing entirely when the connection
is idle. Revert that change.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* Remove build-dep command that should not be there
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Update openssh make file, add missing dependency to libnl.
#### Why I did it
Openssh indirectly depends on libnl.
Another PR, #12447, needs to add a new patch to openssh; after adding the new patch to openssh, the PR build failed with a libnl missing error.
#### How I did it
Update openssh make file, add missing dependency to libnl.
#### How to verify it
Pass all test cases
#### Description for the changelog
Update openssh make file, add missing dependency to libnl.
* Add smartmontools to pmon docker
* Set smartmontools to install version 7.2-1 in pmon to match host; clean up smartmontools build files
* Add comments on smartmontools version for both host and pmon
Why I did it
When sending a PR with only a CI change, the target target/python-wheels/buster/sonic_config_engine-1.0-py2-none-any.whl should, as expected, come from the cache, because the files it depends on were not changed, but it was rebuilt.
How I did it
Sort the files by name.
Build swss-common with libyang
#### Why I did it
The sonic-swss-common lib recently added a dependency on libyang, so the make file needs to be updated before updating the sonic-swss-common submodule.
#### How I did it
Add dependency to libyang in rules/swss-common.mk
#### How to verify it
Pass all E2E test cases.
#### Description for the changelog
Add new Redis database PROFILE_DB
Why I did it
Replace the configuration parameter for GNMI write; we will add other GNMI write features in the future.
How I did it
Update rules/config and other Makefile.
How to verify it
Build sonic image.
Why I did it
If the SWSS service was restarted, the MACsec service should also be restarted. Otherwise, the data in wpa_supplicant and orchagent will not be consistent.
How I did it
Add dependency in docker-macsec.mk.
How to verify it
Manually check with 'sudo service swss restart'.
The MACsec container should be started after swss; the syslog will look like:
Sep 8 14:36:29.562953 sonic INFO swss.sh[9661]: Starting existing swss container with HWSKU Force10-S6000
Sep 8 14:36:30.024399 sonic DEBUG container: container_start: BEGIN
...
Sep 8 14:36:33.391706 sonic INFO systemd[1]: Starting macsec container...
Sep 8 14:36:33.392925 sonic INFO systemd[1]: Starting Management Framework container...
Signed-off-by: Ze Gan <ganze718@gmail.com>
With this PR in, you can flap BGP and use events_tool to see the published events.
With telemetry PR #111 in and the corresponding submodule update done in buildimage, one can run gnmi_cli to capture BGP flap events.
#### Why I did it
To deprecate swsssdk, remove all dependencies on it.
#### How I did it
Remove swsssdk from rules and build image scripts.
#### How to verify it
Pass all UT and E2E test cases
#### Description for the changelog
Remove swsssdk from rules and build image scripts.
Why I did it
Migrate FRR to bullseye
How I did it
Makefile and docker config changes to refer to bullseye instead of buster.
How to verify it
Build bullseye frr docker.
Co-authored-by: Rajendra Dendukuri <rajendra.dendukuri@broadcom.com>
* [snmpd]: Update to 5.9+dfsg-4+deb11u1 to match Debian version
This brings in some security fixes.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* Update snmpd makefile
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* Remove binNMU for snmpd
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* Add k8s master feature
Signed-off-by: Yun Li <yunli1@microsoft.com>
* Update kubernetes version mistake and make variable passing clear
Signed-off-by: Yun Li <yunli1@microsoft.com>
* Add CRI-dockerd package
Signed-off-by: Yun Li <yunli1@microsoft.com>
* Update version variable passing logic
Signed-off-by: Yun Li <yunli1@microsoft.com>
* Upgrade the worker kubernetes version
Signed-off-by: Yun Li <yunli1@microsoft.com>
* Install xml file parse tool
Signed-off-by: Yun Li <yunli1@microsoft.com>
Signed-off-by: Yun Li <yunli1@microsoft.com>
Why I did it
Upgrade sonic fips packages to version 0.2
Upgrade openssl version from 1.1.1k-1+deb11u1+fips to 1.1.1n-0+deb11u3+fips
Upgrade openssh version from 8.4p1-5+fips to 8.4p1-5+deb11u1+fips
How I did it
Change the makefile.
* Ported Marvell armhf build on x86 for debian buster to use cross-compilation instead of qemu emulation
Current armhf Sonic build on amd64 host uses qemu emulation. Due to the
nature of the emulation it takes a very long time, about 22-24 hours to
complete the build. The change I made reduces the build time by
porting the Sonic armhf build on an amd64 host for the Marvell platform for Debian
Buster to use cross-compilation on an arm64 host for the armhf target. The
overall Sonic armhf build time using cross-compilation was reduced to
about 6 hours.
Signed-off-by: marvell <marvell@cpss-build3.marvell.com>
* Fixed final Sonic image build with dockers inside
* Update Dockerfile.j2
Fixed qemu-user-static:x86_64-aarch64-5.0.0-2 .
* Update cross-build-arm-python-reqirements.sh
Added support for both armhf and arm64 cross-build platform using $PY_PLAT environment variable.
* Update Makefile
Added TARGET=<cross-target> for armhf/arm64 cross-compilation.
* Reviewer's @qiluo-msft requests done
Signed-off-by: marvell <marvell@cpss-build3.marvell.com>
* Added new radius/pam patch for arm64 support
* Update slave.mk
Added missing back tick.
* Added libgtest-dev: libgmock-dev: to the buster Dockerfile.j2. Fixed arm perl version to be generic
* Added missing armhf/arm64 entries in /etc/apt/sources.list
* fix libc-bin core dump issue from xumia:fix-libc-bin-install-issue commit
* Removed unnecessary 'apt-get update' from sonic-slave-buster/Dockerfile.j2
* Fixed saiarcot895 reviewer's requests
* Fixed README and replaced 'sed/awk' with patches
* Fixed ntp build to use openssl
* Unuse sonic-slave-buster/cross-build-arm-python-reqirements.sh script (put all prebuilt python packages cross-compilation/install inside Dockerfile.j2). Fixed src/snmpd/Makefile to use -j1 in all cases
* Clean armhf cross-compilation build fixes
* Ported cross-compilation armhf build to bullseye
* Additional change for bullseye
* Set CROSS_BUILD_ENVIRON default value n
* Removed python2 references
* Fixes after merge with the upstream
* Deleted unused sonic-slave-buster/cross-build-arm-python-reqirements.sh file
* Fixed 2 @saiarcot895 requests
* Fixed @saiarcot895 reviewer's requests
* Removed use of prebuilt python wheels
* Incorporated saiarcot895 CC/CXX and other simplification/generalization changes
Signed-off-by: marvell <marvell@cpss-build3.marvell.com>
* Fixed saiarcot895 reviewer's additional requests
* src/libyang/patch/debian-packaging-files.patch
* Removed --no-deps option when installing wheels. Removed unnecessary lazy_object_proxy arm python3 package instalation
Co-authored-by: marvell <marvell@cpss-build3.marvell.com>
Co-authored-by: marvell <marvell@cpss-build2.marvell.com>
- Why I did it
To optimize fast-reboot. Teamd can be stopped after bgp is stopped and after swss is stopped, because the last LACP packet can still be sent since syncd is still running. This saves 15 sec on shutdown.
- How I did it
Defined in the manifest for teamd to be stopped after swss
- How to verify it
Run it on the switch.
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
Why I did it
src/dhcprelay is being split out to be its own submodule.
How I did it
Add existing dhcprelay commits into the new repo.
Clean up Makefile (sonic-net/sonic-dhcp-relay@772625f)
Add LGTM config (sonic-net/sonic-dhcp-relay@5cc0889)
Add Azure pipeline config (sonic-net/sonic-dhcp-relay@c79cdb7)
Add submodule reference, renaming most references of dhcp6relay to dhcprelay (to reflect that this will not just be for IPv6 in the future).
How to verify it
Successful run of LGTM is tested at sonic-net/sonic-dhcp-relay#4. Failure run of LGTM is tested at sonic-net/sonic-dhcp-relay#3.
Azure pipeline is run for each commit/PR, and will build for amd64, armhf, and arm64. UT/code coverage check is not yet done.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Why I did it
Fix the openssh build issue, upgrade from 8.4p1-5 to 8.4p1-5+deb11u1.
https://dev.azure.com/mssonic/build/_build/results?buildId=120209&view=logs&j=88ce9a53-729c-5fa9-7b6e-3d98f2488e3f&t=8d99be27-49d0-54d0-99b1-cfc0d47f0318
+ sudo dpkg --root=./fsroot-broadcom -i target/debs/bullseye/openssh-server_8.4p1-5_amd64.deb
dpkg: warning: downgrading openssh-server from 1:8.4p1-5+deb11u1 to 1:8.4p1-5
(Reading database ... 44818 files and directories currently installed.)
Preparing to unpack .../openssh-server_8.4p1-5_amd64.deb ...
Unpacking openssh-server (1:8.4p1-5) over (1:8.4p1-5+deb11u1) ...
dpkg: dependency problems prevent configuration of openssh-server:
openssh-server depends on openssh-client (= 1:8.4p1-5); however:
Version of openssh-client on system is 1:8.4p1-5+deb11u1.
dpkg: error processing package openssh-server (--install):
dependency problems - leaving unconfigured
Errors were encountered while processing:
openssh-server
+ clean_sys
How I did it
Upgrade openssh from 8.4p1-5 to 8.4p1-5+deb11u1.
Add support for reacting to speed change between 40G and 100G in CONFIG_DB
Fix a bug on optical bit setting.
Avoid the random error in shutdown for issue: aristanetworks/sonic#40
Avoid running on SmartsvilleBkMs, which depends on a different driver (credo-sai).
How I did it
How to verify it
Verified on the DUTs that the commands printed in the log match the expectation and that the interfaces are up.
- Why I did it
Implemented sonic-net/SONiC#1001
- How I did it
Install systemd-bootchart tool and provide default config for it.
- How to verify it
Run build and verify systemd-bootchart is installed.
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
#### Why I did it
Fix the build with updated sairedis
#### How I did it
Specify nopython2 for syncd and fix a copy-paste mistake for libsairedis
#### How to verify it
Run build with updated sairedis
* [sflow + dropmon] added INCLUDE_SFLOW_DROPMON flag, added patches for hsflowd
* Added the capability of monitoring dropped packets for the sFlow daemon in order to improve network monitoring, diagnostics, and troubleshooting. The drop monitor service allows the sFlow daemon to export another type of sample, dropped packets as Discard samples, alongside Counter samples and Packet Flow samples.
Signed-off-by: Vadym Hlushko <vadymh@nvidia.com>
- Why I did it
Need to execute mlxreg inside pmon docker
- How I did it
Add MFT package to pmon Makefile
- How to verify it
Install the image, go into pmon: docker exec -it pmon bash, exec mlxreg
Verify warm, fast and cold reboot while MFT is being called in pmon constantly
Signed-off-by: Andriy Yurkiv <ayurkiv@nvidia.com>
Why I did it
The docker storage driver vfs is not a good option for the build: it uses a "deep copy" when building a new layer, which leads to lower performance and more disk space used than other storage drivers.
A better docker storage driver is the default one, overlay2, which is a modern union filesystem.
To not try to build python2 bindings for sairedis for bullseye. The same solution was done for swss-common package.
Related changes: Azure/sonic-sairedis#1050
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
#### Why I did it
Switch py-common from swsssdk to swsscommon.
#### How I did it
Change code and make file to use swsscommon.
#### How to verify it
Pass all UT and E2E tests.
#### Why I did it
Fix sonic-db-cli high CPU usage on SONiC startup issue: https://github.com/Azure/sonic-buildimage/issues/10218
ETA of this issue will be 2022/05/31
#### How I did it
Re-write sonic-db-cli with C++ in sonic-swss-common: https://github.com/Azure/sonic-swss-common/pull/607
Modify swss-common rules and slave.mk to install c++ version sonic-db-cli.
#### How to verify it
Pass all E2E test scenarios.
#### Description for the changelog
Build and install c++ version sonic-db-cli from swss-common.
Currently, the build with ASAN_ENABLE=y reuses the packages built with
ASAN_ENABLE=n (and vice versa). To address this issue, ASAN_ENABLE is added to DEP_FLAGS for asan-enabled packages (docker-syncd-mlnx, syncd, docker-orchagent, swss).
- Why I did it
To make dpkg cache use/rebuild the packages for ASAN_ENABLE=y/n.
- How I did it
Added ASAN_ENABLE to the DEP_FLAGS for asan-enabled packages.
- How to verify it
Built with ASAN_ENABLE=y/n and checked the .flags .log files.
Signed-off-by: Yakiv Huryk <yhuryk@nvidia.com>
This is to improve the readability of ASAN reports. The debug package adds function names and source code references to the backtrace (currently, there are only binary addresses of functions)
Another way to address this issue is to build the image with "INSTALL_DEBUG_TOOLS=y". The downside of this approach is that the image size and compilation time are unnecessarily big. Also, the idea is to make the "ENABLE_ASAN" self-sufficient, which would not be the case for this approach.
- Why I did it
To improve the readability of asan logs.
- How I did it
Added SYNCD_DBG and SWSS_DBG to corresponding docker images for ASAN_ENABLE=y build
- How to verify it
Add artificial memory leak
Build with ASAN_ENABLE=y
Test the image and check the ASAN report
Signed-off-by: Yakiv Huryk <yhuryk@nvidia.com>
The official build fails, complaining about the missing targets below:
2022-05-25T10:50:38.0560306Z tar: target/debs/buster/libyang2-cpp1_2.0.112-6_amd64.deb: Cannot stat: No such file or directory
2022-05-25T10:50:38.0571392Z tar: target/debs/buster/libyang2-cpp-dev_2.0.112-6_amd64.deb: Cannot stat: No such file or directory
2022-05-25T10:50:38.0588893Z tar: target/debs/buster/libyang2-cpp1-dbgsym_2.0.112-6_amd64.deb: Cannot stat: No such file or directory
2022-05-25T10:50:38.0590887Z tar: target/debs/buster/yang-tools_2.0.112-6_all.deb: Cannot stat: No such file or directory
Why I did it
Upgrade FRR to version 8.2.2. Build libyang2 required by FRR.
How I did it
Update FRR version and tag.
How to verify it
Following tests were performed on sonic-vs:
BGP docker status check
BGP configuration and session establishment
Route redistribution and ping
Issued show commands to check the bgp neighbor and routes
Checked app-db to ensure bgp routes are installed with correct interface and nexthop.
Create VRF and check FRR knows the VRF
Check VRF routes are installed in app-db with correct Vrf name and next-hop
Establish BGP Evpn session and check if Evpn routes (multicast, mac, prefix) are exchanged and installed correctly in app-db.
Fixes #9279
- Why I did it
Part of larger effort to move all SONiC systems to bullseye
- How I did it
1. Update container makefiles with correct dependencies
2. Update container Dockerfile with correct base image
3. Update container Dockerfile with correct apt dependencies
4. Update any other makefiles with dependencies to remove python2 support
5. Minor changes to support bullseye / python3
- How to verify it
Run regression on the switch:
1. Verify PTF community tests work
2. Verify syncd runs and all ports come up / pass traffic
3. Verify all platform tests succeed
Python 2 support for sonic-pcied was removed, and the Python 2 version
of the variable no longer exists.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Currently, the build dockers are created as user dockers (docker-base-stretch-<user>, etc.) that are
specific to each user. But the sonic dockers (docker-database, docker-swss, etc.) are
created with a fixed docker name and are common to all the users.
docker-database:latest
docker-swss:latest
When multiple builds are triggered on the same build server, this creates a parallel-build issue because
all the build jobs are trying to create the same docker with the latest tag.
This happens only when sonic dockers are built using the native host dockerd for sonic docker image creation.
This patch creates all sonic dockers as user sonic dockers and then, while
saving and loading the user sonic dockers, renames the user sonic
dockers into the correct sonic dockers with the latest tag.
docker-database:latest <== SAVE/LOAD ==> docker-database-<user>:tag
The user sonic docker names are derived from the DOCKER_USERNAME and DOCKER_USERTAG make env
variables; using a Jinja template, the FROM docker name is replaced with the correct user sonic docker name for
loading and saving the docker image.
Why I did it
Migrate the ptftests scripts to Python 3. In order to do an incremental migration, first add a Python virtual environment and install all required Python packages in the virtual env as well.
Then migrate the ptftests scripts from Python 2 to Python 3 one by one, to avoid impacting unchanged scripts.
Signed-off-by: Zhaohui Sun zhaohuisun@microsoft.com
How I did it
Add python3 virtual environment for docker-ptf.
Add the submodule ptf-py3 and install the patched ptf 0.9.3 into the virtual environment as well; two ptf issues were reported here:
p4lang/ptf#173, p4lang/ptf#174
Signed-off-by: Zhaohui Sun <zhaohuisun@microsoft.com>
* [CG-Fix-CVE-2021-44906] Patching on thrift.0.14.1 for package minimist
Signed-off-by: richardyu-ms <richard.yu@microsoft.com>
* add more information in patch
Signed-off-by: richardyu-ms <richard.yu@microsoft.com>
* Update 0003-Remove-minimist-packages.patch
* change the thrift 0.14.1 to package download
Signed-off-by: richardyu-ms <richard.yu@microsoft.com>
* use the series file for patching
* fix a code defect
Why I did it
The dependency of macsecmgrd on swss was missing, so the MACsec feature could not be enabled.
How I did it
Add SWSS dependency in docker-macsec.mk
How to verify it
Check the Azp of sonic-mgmt
sign-off: Jing Zhang zhangjing@microsoft.com
#### Why I did it
As part of the process moving containers from buster to bullseye.
#### How I did it
1. change base image from buster to bullseye.
2. remove unused addition to orchagent run options
#### How to verify it
Tested building locally.
* [build]: Patch debootstrap to not unmount the host's /proc filesystem
Currently, when the final image is being built (sonic-vs.img.gz,
sonic-broadcom.bin, or similar), each invocation of sudo in the
build_debian.sh script takes 0.8 seconds to run and execute the actual
command. This is because the /proc filesystem in the slave container has
been unmounted somehow. This is happening when debootstrap is running,
and it incorrectly unmounts the host's (in our case, the slave
container's) /proc filesystem because in the new image being built,
/proc is a symlink to the host's (the slave container's) /proc. Because
of that, /proc is gone, and each invocation of sudo adds 0.8 seconds
overhead. As a side effect, docker exec into the slave container during
this time will fail, because /proc/self/fd doesn't exist anymore, and
docker exec assumes that that exists.
Debootstrap has fixed this in 1.0.124 and newer, so backport the patch
that fixes this into the version that Bullseye has.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* [build_debian.sh]: Use eatmydata to speed up deb package installations
During package installations, dpkg calls fsync multiples times (for each
package) to ensure that the files are written to disk, so that if
there's some system crash during package installation, then it is in at
least a somewhat recoverable state. For our use case though, we're
installing packages in a chroot in fsroot-* from a slave container and
then packaging it into an image. If there were a system crash (or even
if docker crashed), the fsroot-* directory would first be removed, and
the process would get restarted. This means that the fsync calls aren't
really needed for our use case.
The eatmydata package includes a library that will block/suppress the
use of fsync (and similar) system calls from applications and will
instead just return success, so that the application is not blocked on
disk writes, which can instead happen in the background instead as
necessary. If dpkg is run with this library, then the fsync calls that
it does will have no effect.
Therefore, install the eatmydata package at the beginning of
build_debian.sh and have dpkg be run under eatmydata for almost all
package installations/removals. At the end of the installation, remove
it, so that the final image uses dpkg as normal.
In my testing, this saves about 2-3 minutes from the image build time.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* Change ln syntax to use chroot
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Why I did it
To sign the SONiC kernel image and allow a secure-boot-based system to verify the SONiC image before loading it into the system.
How I did it
Pass the following parameters to rules/config.user
Ex:
SONIC_ENABLE_SECUREBOOT_SIGNATURE := y
SIGNING_KEY := /path/to/key/private.key
SIGNING_CERT := /path/to/public/public.cert
How to verify it
A secure-boot-enabled system, enrolled with the right public key of the image in the platform UEFI database, will be able to verify the image before load.
Alternatively, one can verify with the offline sbsign tool as below:
export SBSIGN_KEY=/abc/bcd/xyz/
sbverify --cert $SBSIGN_KEY/public_cert.cert fsroot-platform-XYZ/boot/vmlinuz-5.10.0-8-2-amd64
O/P:
Signature verification OK
Removed python2 support for sonic-platform-daemons that was causing unit
test errors in sonic_pcied.
* Removed config from docker supervisord jinja templates per VD review comment
* Removed space and python3 per QL comments
Why I did it
Running warm-reboot in a loop 500 times leads to this error on the 318th iteration:
Apr 2 15:56:27.346747 sonic INFO swss#/supervisord: restore_neighbors Traceback (most recent call last):
Apr 2 15:56:27.346747 sonic INFO swss#/supervisord: restore_neighbors File "/usr/bin/restore_neighbors.py", line 24, in <module>
Apr 2 15:56:27.346747 sonic INFO swss#/supervisord: restore_neighbors from scapy.all import conf, in6_getnsma, inet_pton, inet_ntop, in6_getnsmac, get_if_hwaddr, Ether, ARP, IPv6, ICMPv6ND_NS, ICMPv6NDOptSrcLLAddr
Apr 2 15:56:27.346795 sonic INFO swss#/supervisord: restore_neighbors File "/usr/local/lib/python3.7/dist-packages/scapy/all.py", line 25, in <module>
Apr 2 15:56:27.346956 sonic INFO swss#/supervisord: restore_neighbors from scapy.route import *
Apr 2 15:56:27.346995 sonic INFO swss#/supervisord: restore_neighbors File "/usr/local/lib/python3.7/dist-packages/scapy/route.py", line 205, in <module>
Apr 2 15:56:27.347089 sonic INFO swss#/supervisord: restore_neighbors conf.iface = get_working_if()
Apr 2 15:56:27.347129 sonic INFO swss#/supervisord: restore_neighbors File "/usr/local/lib/python3.7/dist-packages/scapy/arch/linux.py", line 128, in get_working_if
Apr 2 15:56:27.347213 sonic INFO swss#/supervisord: restore_neighbors ifflags = struct.unpack("16xH14x", get_if(i, SIOCGIFFLAGS))[0]
Apr 2 15:56:27.347250 sonic INFO swss#/supervisord: restore_neighbors File "/usr/local/lib/python3.7/dist-packages/scapy/arch/common.py", line 31, in get_if
Apr 2 15:56:27.347345 sonic INFO swss#/supervisord: restore_neighbors return ioctl(sck, cmd, struct.pack("16s16x", iff.encode("utf8")))
Apr 2 15:56:27.347365 sonic INFO swss#/supervisord: restore_neighbors OSError: [Errno 19] No such device
The issue was reported to the scapy devs in secdev/scapy#3369 and the fix is secdev/scapy#3371; however, there is no released scapy version with this fix right now, so we decided to build scapy v2.4.5 from sources and apply the fix in the form of a patch.
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
Change the base image from `docker-config-engine-buster` to
`docker-config-engine-bullseye`, and remove the hardcoded
`radvd` version from the Dockerfile.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
#### Why I did it
SONiC is migrating to bullseye. This will update the sonic-pins container to bullseye.
#### How I did it
The [sonic-pins code](https://github.com/Azure/sonic-buildimage/blob/master/rules/p4rt.mk) isn't dependent on any architecture so it will already build successfully for bullseye. This PR updates the docker to use bullseye.
#### How to verify it
Today we cannot build the docker-sonic-p4rt.gz target (e.g. Issue #9885). With this change the docker will build successfully. The P4RT executable will not run, because of a missing runtime library, libgmpxx, which I'll address in a followup PR.
#### Description for the changelog
Update docker-sonic-p4rt.gz target to build with Bullseye instead of Buster.
- Why I did it
Stopping swss and syncd causes some driver modules to be unloaded. Those driver modules are depended on by PMON. This could trigger ERROR logs in syslog.
- How I did it
Adjust warmboot shutdown order in make file
- How to verify it
Manual test
#### Why I did it
The current Redis version in SONiC is `6.0.6`, which contains many high-risk security issues (CVEs) that are fixed in the latest version. The Redis release notes also highly recommend upgrading, with SECURITY urgency.
```
================================================================================
Redis 6.0.16 Released Mon Oct 4 12:00:00 IDT 2021
================================================================================
Upgrade urgency: SECURITY, contains fixes to security issues.
Security Fixes:
* (CVE-2021-41099) Integer to heap buffer overflow handling certain string
commands and network payloads, when proto-max-bulk-len is manually configured
to a non-default, very large value [reported by yiyuaner].
* (CVE-2021-32762) Integer to heap buffer overflow issue in redis-cli and
redis-sentinel parsing large multi-bulk replies on some older and less common
platforms [reported by Microsoft Vulnerability Research].
* (CVE-2021-32687) Integer to heap buffer overflow with intsets, when
set-max-intset-entries is manually configured to a non-default, very large
value [reported by Pawel Wieczorkiewicz, AWS].
* (CVE-2021-32675) Denial Of Service when processing RESP request payloads with
a large number of elements on many connections.
* (CVE-2021-32672) Random heap reading issue with Lua Debugger [reported by
Meir Shpilraien].
* (CVE-2021-32628) Integer to heap buffer overflow handling ziplist-encoded
data types, when configuring a large, non-default value for
hash-max-ziplist-entries, hash-max-ziplist-value, zset-max-ziplist-entries
or zset-max-ziplist-value [reported by sundb].
* (CVE-2021-32627) Integer to heap buffer overflow issue with streams, when
configuring a non-default, large value for proto-max-bulk-len and
client-query-buffer-limit [reported by sundb].
* (CVE-2021-32626) Specially crafted Lua scripts may result with Heap buffer
overflow [reported by Meir Shpilraien].
Other bug fixes:
* Fix appendfsync to always guarantee fsync before reply, on MacOS and FreeBSD (kqueue) (#9416)
* Fix the wrong mis-detection of sync_file_range system call, affecting performance (#9371)
* Fix replication issues when repl-diskless-load is used (#9280)
```
#### How I did it
Edit `Dockerfile.j2` file
#### How to verify it
Check redis version
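For example:
```
redis-cli INFO server | grep redis_version
```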
#### Description for the changelog
This PR will upgrade redis-server version to `6.0.16`.
#### Why I did it
To bump the Thrift version to 0.14.1
- To avoid [CVE-2020-13949](https://nvd.nist.gov/vuln/detail/CVE-2020-13949)
- To fix some dependency issues
#### How I did it
- rename `src/thrfit_0_13_0` to `src/thrift_2` to remove the version number from the path (`src/thrift` contains rules to build thrift 0.11.0)
- Add thrift sources as submodule as there are no prepared debian packages for version >0.13.0 on [debian.org](https://packages.debian.org/search?searchon=sourcenames&keywords=thrift)
- Added patches with fixes for the original thrift debian rules (remove unneeded packages, fix multi-job build)
#### How to verify it
```
BLDENV=buster make -f Makefile.work target/debs/buster/libthrift-dev_0.14.1_amd64.deb
```
Implement infrastructure that allows enabling address sanitizer
for docker containers. Enable address sanitizer for SWSS container.
- Why I did it
To add a possibility to compile SONiC applications with address sanitizer (ASAN).
ASAN is a memory error detector for C/C++. It finds:
1. Use after free (dangling pointer dereference)
2. Heap buffer overflow
3. Stack buffer overflow
4. Global buffer overflow
5. Use after return
6. Use after the scope
7. Initialization order bugs
8. Memory leaks
- How I did it
By adding new ENABLE_ASAN configuration option.
- How to verify it
By default ASAN is disabled and the SONiC image is not affected.
When ASAN is enabled, it inspects all allocations, deallocations, and memory usage that the application does at run time. To verify whether the application has memory errors, tests that exercise the application's memory usage should be run. Ideally, the whole regression suite should be run. Memory leak reports will be placed in the /var/log/asan/ directory of the SONiC host OS.
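In compiler terms, enabling ASAN amounts to roughly the following (illustrative flags only, not the exact build-system wiring):
```
export CFLAGS="$CFLAGS -fsanitize=address -fno-omit-frame-pointer"
export LDFLAGS="$LDFLAGS -fsanitize=address"
# direct the reports to the directory mentioned above
export ASAN_OPTIONS="log_path=/var/log/asan/asan.log"
```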
Signed-off-by: Oleksandr Ivantsiv <oivantsiv@nvidia.com>
- Why I did it
Remove the obsolete parameter that enables the static VXLAN src port range.
Provide functionality to generate a JSON config file according to the appropriate parameter in config_db.
Done for
SN3800:
• Mellanox-SN3800-D28C50
• Mellanox-SN3800-C64
• Mellanox-SN3800-D28C49S1 (New 10G SKU)
SN2700:
• Mellanox-SN2700-D48C8
- How I did it
Remove SAI_VXLAN_SRCPORT_RANGE_ENABLE=1 from appropriate sai.profile files
Created a vxlan.json file and added a few params that depend on DEVICE_METADATA.localhost.vxlan_port_range
- How to verify it
The file /etc/swss/config.d/vxlan.json should be generated inside the swss docker when it restarts:
[
{
"SWITCH_TABLE:switch": {
"vxlan_src": "0xFF00",
"vxlan_mask": "8"
},
"OP": "SET"
}
]
Signed-off-by: Andriy Yurkiv <ayurkiv@nvidia.com>
Enable dbgsym package for dhcpmon.
Allow CFLAGS and LDFLAGS from environment variables to be used
in the dhcp6relay build. This makes sure that the -O2 flag from
dpkg-buildflags gets used.
Finally, enable all hardening flags in dpkg-buildflags for
dhcp6relay and dhcpmon. The change from the default set of flags is that
during linking, immediate binding of symbols is done instead of lazy
binding.
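Opting into all hardening flags is typically a one-line change in a package's debian/rules, shown here as a sketch; the net effect on linking compared to the defaults is immediate symbol binding (bindnow, i.e. -Wl,-z,now):
```
export DEB_BUILD_MAINT_OPTIONS=hardening=+all
```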
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* [y_cable] Support for initialization of new Daemon ycable to support
ycables
This PR also adds the commit in sonic-platform-daemons
94fa239 [y_cable] refactor y_cable to a seperate logic and new daemon from xcvrd (#219)
Why I did it
This PR separates the Y-Cable logic from xcvrd. Before this change we were utilizing the xcvrd daemon to control all aspects of Y-Cable, right from initialization to processing requests from other entities like orch and linkmgr.
Now we would have another daemon ycabled which will serve this purpose.
Logically everything still remains the same from the perspective of other daemons.
It also takes care of aspects like daemon init/delete from the Y-Cable perspective.
How I did it
To serve this purpose we build a new wheel, sonic_ycabled-1.0-py3-none-any.whl, and install it inside pmon.
We also initialize the daemon ycabled, which serves our purpose for the refactor inside pmon.
How to verify it
Ran the changes with an image for dualtor tests on a 7050cx3 platform
Signed-off-by: vaibhav-dahiya <vdahiya@microsoft.com>
Why I did it
Need to be able to run smartctl when pmon docker is not running.
How I did it
Removed the pmon dependency as well as the command wrapper, and added it to the debian-extension.
How to verify it
Stop pmon
Run smartctl from the host and verify it runs without error
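For example (standard smartmontools invocation; the device name depends on the platform):
```
sudo smartctl -a /dev/sda
```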
As part of this, update the isc-dhcp package to match the Bullseye
version (this fixes some compile errors related to BIND), clean up some
of the build dependencies and runtime dependencies for debian packaging,
and use the default Boost version to compile against instead of
explicitly specifying 1.74.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Improve the Linux kernel build cache hit rate.
Currently the hit rate is around 85.8% (based on the last 3 months: 3479 PR builds in total, 494 PR builds not hit).
We can improve the hit rate to 95% or better.
The Linux kernel build takes a really long time, and most of the PRs have nothing to do with kernel changes. The remaining cache options should be enough to detect the Linux kernel cache status (dirty or not).
Correct the thrift 0.13.0 dependent package name.
In the previous code, the build output target was named PYTHON3_THRIFT_0_13_0,
but when adding the package to LIBTHRIFT_0_13_0, it was mistyped as PYTHON_THRIFT_0_13_0.
- Add INCLUDE_PINS to config to enable/disable container
- Add Docker files and supporting resources
- Add sonic-pins submodule and associated make files
Submission containing materials of a third party:
Copyright Google LLC; Licensed under Apache 2.0
#### Why I did it
Adds P4RT container to SONiC for PINS
The P4RT app is covered by this HLD:
https://github.com/pins/SONiC/blob/master/doc/pins/p4rt_app_hld.md
#### How I did it
Followed the pattern and templates used for other SONiC applications
#### How to verify it
Build SONiC with INCLUDE_P4RT set to "y".
Verify that the resulting build has a container called "p4rt" running.
You can verify that the service is up by running the following command on the SONiC switch:
```bash
sudo netstat -lpnt | grep p4rt
```
You should see the service listening on TCP port 9559.
#### Which release branch to backport (provide reason below if selected)
None
#### Description for the changelog
Build P4RT container for PINS
Bring in the following commit:
405f1df Use build profiles instead of distro version for Python 2 binding build (#558)
This change requires a corresponding change in this repo to set a build
profile to not build the python 2 bindings on Bullseye.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
This pull request integrates audisp-tacplus into SONiC for per-command accounting.
#### Why I did it
To support TACACS per-command accounting, we integrate the audisp-tacplus project into SONiC.
#### How I did it
1. Add auditd service to SONiC
2. Port and patch audisp-tacplus to SONiC
#### How to verify it
UT with CUnit to cover all new code in usersecret-filter.c
Also pass all current UT.
#### Which release branch to backport (provide reason below if selected)
N/A
#### Description for the changelog
Add audisp-tacplus for per-command accounting.
* Add macsec-xpn-support iproute2 in syncd
Signed-off-by: Ze Gan <ganze718@gmail.com>
* Polish code
Signed-off-by: Ze Gan <ganze718@gmail.com>
* Remove useless files
Signed-off-by: Ze Gan <ganze718@gmail.com>
* Add self-compiled iproute2 to docker sonic vs
Signed-off-by: Ze Gan <ganze718@gmail.com>
* Enhance apt install for iproute2 dependencies
Signed-off-by: Ze Gan <ganze718@gmail.com>
HLD updated here: https://github.com/Azure/SONiC/pull/887
#### Why I did it
The command `monit summary -B` can no longer display the status of each critical process, so system-health should not depend on it and needs another way to monitor the status of critical processes. This PR addresses that. monit is still used by system-health to do the file system check as well as customized checks.
#### How I did it
1. Get container names from FEATURE table
2. For each container, collect critical process names from file critical_processes
3. Use “docker exec -it <container_name> bash -c ‘supervisorctl status’” to get the process status inside the container, parse the output, and check whether any critical process has exited (see the sketch below)
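The per-container check boils down to something like the following (illustrative shell only; the real implementation parses this output in Python inside system-health):
```
docker exec swss bash -c 'supervisorctl status' | awk '$2 != "RUNNING"'
```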
#### How to verify it
1. Add unit test case to cover it
2. Adjust sonic-mgmt cases to cover it
3. Manual test
#### Why I did it
Changes required for feature "Event Driven TechSupport Invocation & CoreDump Mgmt". [HLD](https://github.com/Azure/SONiC/pull/818 )
Requires: https://github.com/Azure/sonic-utilities/pull/1796.
Merging in any order would be fine.
Summary of the changes:
- Added the YANG Models for the new tables introduces as a part of this feature.
- Enhanced init_cfg.json with the default config required
- Added a compile Time flag which enables/disables the config required for this feature inside the init_cfg.json
- Enhanced the supervisor-proc-exit-listener script to populate `<feature>:<critical_proc> = <comm>:<pid>` info in the STATE_DB when it observes a proc exit notification for the critical processes running inside the docker.
This pull request adds a bash plugin for TACACS+ per-command authorization.
#### Why I did it
1. To support TACACS per-command authorization, we check the user command before executing it.
2. Fix the issue that libtacsupport.so can't parse tacplus_nss.conf correctly:
Support the debug=on setting.
Support putting the server address and secret in the same row.
3. Fix the issue that the parse_config_file method does not reset the server list before parsing the config file.
#### How I did it
The bash plugin will be called before every user command, and checks the user command with the remote TACACS+ server for per-command authorization.
#### How to verify it
UT with CUnit covers all code in this plugin.
Also pass all current UT.
#### Which release branch to backport (provide reason below if selected)
N/A
#### Description for the changelog
Add Bash TACACS+ plugin.
Debian actually did a binNMU for snmpd, so to match the package version
we're building with the version in the official repos, that version
needs to be manually specified in the changelog.
Buster still needs 5.7.3, because there's an ABI change between 5.7.3 and
5.9 for libsnmp, so for Buster, make sure that 5.7.3 is built, and for
Bullseye, make sure that 5.9 is built.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Remove Python 2 package installation from the base image. For container
builds, reference Python 2 packages only if we're not building for
Bullseye.
For libyang, don't build Python 2 bindings at all, since they don't seem
to be used.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
All docker containers will be built as Buster containers, from a Buster
slave. The base image and remaining packages that are installed onto the
host system will be built for Bullseye, from a Bullseye slave.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
The dhcp6relay rules file had a line overwriting a variable for
docker-dhcp-relay. Remove that line.
This line caused a limited impact where if some (many?) of the docker
containers were already built, except for dhcp-relay, and the build
failed or was interrupted, then dhcp-relay container would fail to build
because this variable was overwritten and the python3-swsscommon
wouldn't get installed into the slave container. Most builds would be
fine, though.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
- Why I did it
In case an app.ext requires a dependency syncd^1.0.0, the RPC version of syncd will not satisfy this constraint, since 1.0.0-rpc < 1.0.0. It is not correct to put 'rpc' as a prerelease identifier. Instead, put 'rpc' as build metadata in the version: 1.0.0+rpc, which satisfies the constraint ^1.0.0.
- How I did it
Changed the way the versions of the RPC and DBG images are constructed.
- How to verify it
Install app.ext with syncd^1.0.0 dependency on a switch with RPC syncd docker.
Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
This makes it possible to install the debug symbols if needed. Also install
the package into the debug version of sonic-dhcp-relay container.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>