Why I did it
The protoc-dev package isn't used by SONiC, but it was added to the derived package list.
Work item tracking
Microsoft ADO (number only): 17417902
How I did it
Remove protoc-dev from protobuf.mk
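A hedged sketch of what the rule change amounts to (the add_derived_package macro follows sonic-buildimage conventions; variable names are illustrative):
```
# protobuf.mk: keep only the derived packages SONiC actually consumes
$(eval $(call add_derived_package,$(PROTOBUF),$(PROTOBUF_DEV)))
# (removed) $(eval $(call add_derived_package,$(PROTOBUF),$(PROTOC_DEV)))
```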
Signed-off-by: Ze Gan <ganze718@gmail.com>
* Update sairedis submodule
This submodule update needs to be done manually due to build changes
made in the sairedis submodule. Specifically, Debian build profiles are
now used instead of dpkg build targets, and dbgsym packages are used
instead of dbg packages. Because of this, corresponding changes are
needed on the sonic-buildimage side.
This is a reland of #15720, which was reverted in #15995 due to the RPC
package build failing. That failure has since been fixed, and the
PR pipeline has been updated to build the RPC package so that this is
checked at the PR stage.
This submodule update brings in the following changes:
```
4dbdb21 Fix RPC package build failure due to shell syntax issue (#1268)
588d596 Make sure new binaries replace existing binaries in docker-sonic-vs (#1269)
ce8f642 [vs] Use boost join to concatenate switch types in config (#1266)
d6055a2 [vslib]: Temporaily map DPU switch type to NVDA_MBF2H536C (#1259)
e1cdb4d [CodeQL]: Use dependencies with relevant versions in azp template. (#1262)
c08f9a2 [CI]: Fix collect log error in azp template. (#1260)
eed856c [CodeQL]: Fix syncd compilation in azp template. (#1261)
a3f1f1a Reland 'Make changes to building and packaging sairedis (#1116)' (#1194)
```
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* Update sairedis submodule with the fix for the RPC package build
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
---------
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* Enhanced slave.mk to accept Python wheels as dependencies for a deb
target. Dependent wheel names should be specified through the new
{deb_name}_WHEEL_DEPENDS variable in the deb's make rules; each wheel
will be built and installed in the slave docker before the deb build
starts (see the sketch after this list).
* Added sonic_yang_models-1.0-py3-none-any.whl as a dependency for
sonic-mgmt-common.deb. This is required for using the SONiC YANG models in
UMF.
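A minimal sketch of the new mechanism; only the {deb_name}_WHEEL_DEPENDS variable comes from this change, the target names are illustrative:
```
# In the deb's .mk rules: declare that the wheel must be built and
# installed in the slave docker before this deb is built.
SONIC_MGMT_COMMON = sonic-mgmt-common_1.0.0_$(CONFIGURED_ARCH).deb
$(SONIC_MGMT_COMMON)_WHEEL_DEPENDS += $(SONIC_YANG_MODELS_PY3)
```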
Signed-off-by: Sachin Holla <sachin.holla@broadcom.com>
Why I did it
The protoc-dev library had the wrong declaration.
Work item tracking
Microsoft ADO (number only): 24707066
How I did it
Revise the wrong declaration from:
PROTOC = libprotoc_$(PROTOBUF_VERSION_FULL)_$(CONFIGURED_ARCH).deb
to:
PROTOC_DEV = libprotoc-dev_$(PROTOBUF_VERSION_FULL)_$(CONFIGURED_ARCH).deb
How to verify it
Check the Azp logs for errors.
This reverts commit e0927e28af.
Why I did it
Reverts #15720
It breaks the build for target/debs/bullseye/syncd_1.0.0_amd64.deb:
```
make[2]: Entering directory '/sonic/src/sonic-sairedis'
dh_install
# Note: escape with an extra symbol
if [ -f debian/syncd-rpc/usr/bin/syncd_init_common.sh ] ; then
/bin/sh: 1: Syntax error: end of file unexpected (expecting "fi")
make[2]: *** [debian/rules:65: override_dh_install] Error 2
make[2]: Leaving directory '/sonic/src/sonic-sairedis'
make[1]: *** [debian/rules:51: binary] Error 2
make[1]: Leaving directory '/sonic/src/sonic-sairedis'
dpkg-buildpackage: error: fakeroot debian/rules binary subprocess returned exit status 2
```
Work item tracking
Microsoft ADO (number only): 24691535
How I did it
How to verify it
#### Why I did it
The tacplus package was missing its cache configuration
#### How I did it
Defined the cache configuration for the tacplus package
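A hedged sketch of the kind of cache settings involved (rules/*.dep style; the exact dependency file list is illustrative):
```
# rules/tacplus.dep (sketch): enable the shared build cache for tacplus
$(TACPLUS)_CACHE_MODE := GIT_CONTENT_SHA
$(TACPLUS)_DEP_FLAGS  := $(SONIC_COMMON_FLAGS_LIST)
$(TACPLUS)_DEP_FILES  := rules/tacplus.mk rules/tacplus.dep
```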
#### How to verify it
Build image with cache enabled and make sure you don't see any warnings related to tacplus
Why I did it
sonic-host-services depends on sonic-utilities because of the FIPS feature.
Add the dependency to unblock the sonic-host-services submodule HEAD pointer update.
Work item tracking
Microsoft ADO (number only): 24671218
How I did it
This submodule update needs to be done manually due to build changes
made in the sairedis submodule. Specifically, Debian build profiles are
now used instead of dpkg build targets, and dbgsym packages are used
instead of dbg packages. Because of this, corresponding changes are
needed on the sonic-buildimage side.
This submodule update brings in the following changes:
ce8f642 [vs] Use boost join to concatenate switch types in config (#1266)
d6055a2 [vslib]: Temporaily map DPU switch type to NVDA_MBF2H536C (#1259)
e1cdb4d [CodeQL]: Use dependencies with relevant versions in azp template. (#1262)
c08f9a2 [CI]: Fix collect log error in azp template. (#1260)
eed856c [CodeQL]: Fix syncd compilation in azp template. (#1261)
a3f1f1a Reland 'Make changes to building and packaging sairedis (#1116)' (#1194)
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Why I did it
Currently, the k8s master image is generated from a separate branch that we created ourselves, not a release branch. We need to commit this k8s master related code to the master branch for a better k8s master image build process.
Work item tracking
Microsoft ADO (number only): 19998138
How I did it
Install k8s dashboard docker images.
Install the Geneva MDS, MDSD, and Fluentd docker images and tag them as latest; tagging them as latest helps create containers that always run the latest version.
Install azure-storage-blob and azure-identity; these support etcd backup and restore.
Install the Kubernetes Python client packages; these let us read worker and container state so we can send those metrics to Geneva.
Remove the MDM Debian package; it will be replaced with the MDM docker image.
Add the k8s master entrance script; it is called by the rc-local service at system startup. We have some master systemd services in the compute-move repo; when the VMM service creates a master VM, VMM copies all master service files into the VM, and the entrance script sets up all services according to those files.
When the entrance script content changes, the PR build sets include_kubernetes_master=y to help validate k8s-master-related code changes. The default value of include_kubernetes_master should always be n for the public master branch. We will generate the master image from the internal master branch.
How to verify it
Build with INCLUDE_KUBERNETES_MASTER = y
Add support for a separate DEB_BUILD_PROFILES environment variable, to
be able to set build profiles. This may be used to specify whether
python 2 bindings/libraries should be built, or what configuration
options should be specified for a package.
This also makes it easier to append/remove build profiles from our rules
files, which will be needed for the sairedis build.
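DEB_BUILD_PROFILES is the standard Debian mechanism; for example (profile names illustrative):
```
# Build a package without python2 bindings and without running tests
DEB_BUILD_PROFILES="nopython2 nocheck" dpkg-buildpackage -us -uc -b
```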
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
#### Why I did it
The test cases in sonic-mgmt need the protobuf and dashapi packages
##### Work item tracking
- Microsoft ADO **(number only)**:
#### How I did it
Because the sonic-mgmt docker is based on Ubuntu 20.04, it cannot directly install the packages compiled by the slave due to dependency issues. Download the related packages directly from Azp.
#### How to verify it
Check the Azp stats.
Why I did it
HLD implementation: Container Hardening (sonic-net/SONiC#1364)
Work item tracking
Microsoft ADO (number only): 14807420
How I did it
Reduce the Linux capabilities granted by the privileged flag, retaining only the NET_ADMIN and SYS_ADMIN capabilities
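Illustrative of the idea only (not the exact buildimage change): instead of a blanket --privileged flag, grant just the required capabilities; the container name and remaining flags are placeholders:
```
docker run -d --cap-add NET_ADMIN --cap-add SYS_ADMIN docker-fpm-frr:latest
```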
How to verify it
Install new image to DUT, verify bgp container is up
Run bgp sonic-mgmt kvmtest
Why I did it
[Build] Change the build option from ENABLE_FIPS_FEATURE to INCLUDE_FIPS
Work item tracking
Microsoft ADO (number only): 24485797
How I did it
#### Why I did it
Failed to build sonic-dhcp6relay_1.0.0-0_amd64.deb
#### How I did it
src/dhcprelay has a git submodule.
The dependency files listed by "git ls-files" do not include files in submodules.
Adding --recurse-submodules makes it work again.
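For reference, the flag in question (path taken from this change):
```
# Without --recurse-submodules, files tracked inside submodules are omitted
git ls-files --recurse-submodules -- src/dhcprelay
```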
#### How to verify it
make all
#### Why I did it
After k8s upgrades a container, k8s only knows the container is running; it doesn't know the status of the service inside the container. So we need a probe inside the container; k8s will call the probe to check whether the container is really ready.
##### Work item tracking
- Microsoft ADO **(number only)**: 22453004
#### How I did it
Add a health check probe inside the config engine container. The probe checks whether the start service exited normally (if the start service exists) and calls a Python script to do container-specific checks if the script is there. The Python script should be implemented by the feature owner if it's needed.
more details: [design doc](https://github.com/sonic-net/SONiC/blob/master/doc/kubernetes/health-check.md)
#### How to verify it
Check path /usr/bin/readiness_probe.sh inside container.
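A quick way to spot-check from the host (the container name is a placeholder):
```
docker exec <container> test -x /usr/bin/readiness_probe.sh && echo "probe present"
```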
#### Which release branch to backport (provide reason below if selected)
- [ ] 201811
- [ ] 201911
- [ ] 202006
- [ ] 202012
- [ ] 202106
- [ ] 202111
- [x] 202205
- [x] 202211
#### Tested branch (Please provide the tested image version)
- [x] 20220531.28
Why I did it
To reduce the container's dependency on the host system
Work item tracking
Microsoft ADO (number only): 17713469
How I did it
Move the k8s container startup script into the config engine container, rather than mounting it from the host.
How to verify it
Check file path(/usr/share/sonic/scripts/container_startup.py) inside config engine container.
Signed-off-by: Yun Li <yunli1@microsoft.com>
Co-authored-by: Qi Luo <qiluo-msft@users.noreply.github.com>
Why I did it
For the DASH scenario, the APP_DB will be optimized with protobuf messages for lower memory consumption.
How I did it
Download the Debian package of protobuf 3.21.12 and create a corresponding rule for building it.
Add a sonic-dash-api submodule and generate its Debian package, which includes the C++ and Python libraries.
How to verify it
Check the Azp artifacts: the protobuf-related and dash-api deb packages should be generated.
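An illustrative local spot-check of the built artifacts (path follows the usual target layout):
```
ls target/debs/bullseye/ | grep -E 'protobuf|libprotoc|dash-api'
```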
Signed-off-by: Ze Gan <ganze718@gmail.com>
#### Why I did it
To fix the timezone sync issue between the containers and the host. If a certain timezone has been configured on the host (SONIC) then the expectation is to reflect the same across all the containers.
This will fix [Issue:13046](https://github.com/sonic-net/sonic-buildimage/issues/13046).
For instance, if a PST timezone has been set on the host and the user checks the link flap logs (inside FRR), they show a UTC timestamp. Ideally, it should be PST.
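An illustrative check that host and container now agree (the container name is a placeholder):
```
date && docker exec bgp date   # both should print the same timezone
```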
- Why I did it
To fix hiredis compilation
- How I did it
Changed package version: 0.14.0-3~bpo9+1 -> 0.14.1-1
- How to verify it
make configure PLATFORM=mellanox
make target/sonic-mellanox.bin
Signed-off-by: Nazarii Hnydyn <nazariig@nvidia.com>
Why I did it
Downgrade the SymCrypt version; use SymCrypt v103.0.1 for certification.
Work item tracking
Microsoft ADO (number only): 24222567
How I did it
How to verify it
- Why I did it
Since the prod signing tool is vendor-specific, and each vendor may have different arguments they would like to use in the script, we need a way to inject those arguments into the script.
- How I did it
Add a build flag, SECURE_UPGRADE_PROD_TOOL_ARGS, which vendors can use to inject any flags they want into the prod signing script.
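A hypothetical rules/config usage; the tool path and the arguments are vendor-specific placeholders:
```
SECURE_UPGRADE_PROD_SIGNING_TOOL = /path/to/vendor_signing_tool.sh
SECURE_UPGRADE_PROD_TOOL_ARGS = --key-name prod-key-1 --hsm-slot 0
```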
- How to verify it
Build SONiC using your own prod script
Why I did it
Fixes #15000
isc-dhcp 4.4.1-2.3+deb11u1 is no longer available in the Debian repository
How I did it
Update isc-dhcp to the new version 4.4.1-2.3+deb11u2
- Why I did it
In order to reduce sonic build time, there is an option to acquire sonic slave docker(s) from an artifact server (reducing sonic make configure time).
The current implementation supports only the convention:
<REGISTRY_SERVER>:<REGISTRY_PORT>/<SLAVE_BASE_IMAGE>:<SLAVE_BASE_TAG>
In case the SLAVE_BASE_IMAGE appears under an internal path on the server, the convention should be:
<REGISTRY_SERVER>:<REGISTRY_PORT><REGISTRY_SERVER_PATH>/<SLAVE_BASE_IMAGE>:<SLAVE_BASE_TAG>
REGISTRY_SERVER_PATH (set in rules/config) must start with "/".
If REGISTRY_SERVER_PATH is not set, the behavior remains the same as it is today.
- How I did it
Add the ability to set REGISTRY_SERVER_PATH and update the code for docker image tagging and pulling accordingly.
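Illustrative rules/config values and the image reference they produce:
```
REGISTRY_SERVER = registry.example.com
REGISTRY_PORT = 443
REGISTRY_SERVER_PATH = /internal/sonic
# -> registry.example.com:443/internal/sonic/<SLAVE_BASE_IMAGE>:<SLAVE_BASE_TAG>
```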
- How to verify it
Use a sonic slave docker image from an artifact server where the image is kept in an internal folder, and make sure the build consumes it.
- Why I did it
To be able to see how much time it takes to build a specific target.
The newly added code does the following:
1. Prints the build start time for a target
2. Prints the build end time for a target
3. Prints the elapsed time for a target
- How I did it
Add a macro to record the time
Add macros to print end time and elapsed time
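A minimal sketch of the idea, assuming an illustrative macro name (not the exact macros added):
```
# Record when a target's build starts; expanded at the top of a recipe
# as $(RECORD_BUILD_START). End/elapsed-time macros follow the same shape.
define RECORD_BUILD_START
@echo "[$$(date +%H:%M:%S)] build of $@ started" >> $@.log
endef
```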
- How to verify it
Just build an image and check any *.log file
Signed-off-by: Yevhen Fastiuk <yfastiuk@nvidia.com>
#### Why I did it
Remove dbus when telemetry does not use it.
##### Work item tracking
- Microsoft ADO **(number only)**: 17852550
#### How I did it
Use INCLUDE_SYSTEM_GNMI to determine if telemetry needs dbus.
#### How to verify it
Build image and check telemetry container.
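An illustrative check on a built image:
```
docker exec telemetry dpkg -l | grep -i dbus || echo "dbus not installed"
```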
Depends on https://github.com/sonic-net/sonic-linux-kernel/pull/315
#### Why I did it
The name SECURE_UPGRADE_DEV_SIGNING_CERT is misleading; this flag is relevant to both dev and prod signing.
#### How I did it
Rename all mentions of SECURE_UPGRADE_DEV_SIGNING_CERT to SECURE_UPGRADE_SIGNING_CERT; this is also done via a PR in the sonic-linux-kernel repository.
#### How to verify it
Build SONiC using your own prod script
This is done because, when there is a default value, we mount that path, and this creates the folder on the host.
#### Why I did it
Fix an issue where, when running without overriding SECURE_UPGRADE_DEV_SIGNING_KEY and SECURE_UPGRADE_DEV_SIGNING_CERT, dummy folders are created on the host.
#### How I did it
Removed the default assignment to SECURE_UPGRADE_DEV_SIGNING_KEY and SECURE_UPGRADE_DEV_SIGNING_CERT
#### How to verify it
Build SONiC using your own prod script
Why I did it
Support adding the SONiC OS Version to the device info.
It will be used to display the version info in the SONiC command "show version". The version is used for the FIPS certification; we do not do the FIPS certification on a specific release, but on the SONiC OS Version.
```
SONiC Software Version: SONiC.master-13812.218661-7d94c0c28
SONiC OS Version: 11
Distribution: Debian 11.6
Kernel: 5.10.0-18-2-amd64
```
How I did it
- Why I did it
Currently, non-upstream patches are applied only after upstream patches.
Depends on sonic-net/sonic-linux-kernel#313. Can be merged in any order, preferably together.
- How I did it
Non-upstream patches that reside in the sonic repo will no longer be saved in a tar file but rather in a folder pointed to by EXTERNAL_KERNEL_PATCH_LOC. This makes changes to the non-upstream patches easily traceable.
The build variable name is also updated to INCLUDE_EXTERNAL_PATCHES.
Files/folders expected under EXTERNAL_KERNEL_PATCH_LOC
```
EXTERNAL_KERNEL_PATCH_LOC/
├── patches/
│   ├── 0001-xxxxx.patch
│   ├── 0001-yyyyyyyy.patch
│   └── ...
└── series.patch
```
series.patch should contain a diff that is applied to the sonic-linux-kernel/patch/series file. The diff should include all the non-upstream patches.
How to verify it
Build the kernel and verify that all the patches are applied properly
Signed-off-by: Vivek Reddy Karri <vkarri@nvidia.com>
#### Why I did it
When the CPU is busy, sonic_ax_impl may not be fast enough to handle the notification messages sent from REDIS.
Thus, messages keep accumulating in the memory space of sonic_ax_impl.
If the condition continues, the memory usage keeps increasing.
#### How I did it
Add a monit file to check whether the SNMP container, where sonic_ax_impl resides, uses more than 4 GB of memory.
If yes, restart the sonic_ax_impl process.
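A hedged sketch of such a monit entry (the program path, threshold encoding, and cycle counts are illustrative):
```
check program snmp_memory_usage with path "/usr/bin/memory_checker snmp 4294967296"
    if status == 3 for 3 times within 5 cycles then exec "/usr/bin/restart_service snmp"
```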
#### How to verify it
Run a lot of this command: `while true; do ret=$(redis-cli -n 0 set LLDP_ENTRY_TABLE:test1 test1); sleep 0.1; done;`
And check the memory used by sonic_ax_impl keeps increasing.
After a period, make sure the sonic_ax_impl is restarted when the memory usage reaches the 4GB threshold.
And verify the memory usage of sonic_ax_impl drops down from 4GB.
Change references to use bullseye instead of buster
Why I did it
Almost all daemons in 202211 and master use bullseye, and sflow was easy to migrate.
How I did it
Replaced the references, built and tested in 202211.
How to verify it
Build with the changes, enable sflow:
```
admin@sonic:~$ sudo config sflow collector add test 1.2.3.4
admin@sonic:~$ sudo config sflow collector enable
```
Run tcpdump on 1.2.3.4 and see that sFlow UDP datagrams are being sent.
Signed-off-by: Christian Svensson <blue@cmd.nu>
Change references to use bullseye instead of buster
Why I did it
Almost all daemons in 202211 and master use bullseye, and NAT seemed easy to migrate.
How I did it
Replaced the references, built with 202211 branch.
How to verify it
Not sure, it builds and tests pass as far as I can tell but I don't use the feature myself.
Signed-off-by: Christian Svensson <blue@cmd.nu>
* Upgrade docker-sonic-vs and docker-syncd-vs to Bullseye
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* iproute2: Force a new version and timestamp to be used for the package
There is an issue with Docker's overlay2 storage driver when not using
native diffs (and thus falling back to naive diff mode), which is the
case in the CI builds. The way the naive diff mode detects changes is by
comparing the file size and comparing the timestamps (specifically, I
believe it's the modification timestamp), and if there's a change there,
then it's considered a change that needs to be recorded as part of that
layer.
The problem is that with the code being added in the patch, the file
size remains the same, and the timestamp of binary files appear to be
the same timestamp as the changelog entry (likely for reproducible build
purposes). The file size remains the same likely due to extra padding
within the file introduced by relro. Because of this, Docker doesn't
detect this file has changed, and doesn't save the new file as part of
this layer.
To work around this, create a new changelog entry (with a new version as
well) with a new timestamp. This will result in the binary files having
a different timestamp, and thus will get saved by Docker as part of that
layer.
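Illustratively, that can be done with devscripts' dch (the version string and path are placeholders):
```
cd src/iproute2 && dch --newversion 5.10.0-4+sonic1 "Rebuild to refresh binary timestamps"
```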
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
---------
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Why I did it
Found a new bug on the kubelet side. The kubernetes-cni plug-in was removed in #12997 because the plug-in is auto-installed when installing kubeadm, and an error is reported if we don't remove the install code. But after the removal, the auto-installed version is different from what we installed before. This affects kubelet behavior in some scenarios we hadn't found before. We need to install it another way.
How I did it
Install kubernetes-cni==0.8.7-00 before installing kubeadm
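As a shell sketch (apt version-pinning syntax; the version comes from this change, kubeadm left unpinned here):
```
apt-get install -y kubernetes-cni=0.8.7-00
apt-get install -y kubeadm
```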
How to verify it
The flannel binary will be installed under the /opt/cni/bin/ folder
- Why I did it
Add Secure Boot support to SONiC OS.
Secure Boot (SB) is a verification mechanism for ensuring that code launched by a computer's UEFI firmware is trusted. It is designed to protect a system against malicious code being loaded and executed early in the boot process before the operating system has been loaded.
- How I did it
Added a signing process that signs the following components during the build:
shim, grub, the Linux kernel, and kernel modules, when the feature is enabled at build time according to the HLD explanations (the feature is disabled by default).
- How to verify it
There are self-verifications of each boot component when building the image, in addition, there is an existing end-to-end test in sonic-mgmt repo that checks that the boot succeeds when loading a secure system (details below).
How to build a sonic image with secure boot feature: (more description in HLD)
It is required to set the following build flags in rules/config:
```
SECURE_UPGRADE_MODE="dev"
SECURE_UPGRADE_DEV_SIGNING_KEY="/path/to/private/key.pem"
SECURE_UPGRADE_DEV_SIGNING_CERT="/path/to/cert/key.pem"
```
After setting those flags, build sonic-buildimage.
Before installing the image, prepare the setup (switch device) as follows:
- check that the device supports UEFI
- store the public keys in the UEFI DB
- enable the Secure Boot flag in UEFI
How to run a test that verifies the Secure Boot flow:
The existing test "test_upgrade_path" under "sonic-mgmt/tests/upgrade_path/test_upgrade_path" is enough to validate proper boot.
You need to specify the following arguments:
Base_image_list your_secure_image
Target_image_list your_second_secure_image
Upgrade_type cold
Then run the test: it installs the base image given in the parameter, upgrades to the target image via a cold reboot, and validates that all the services are up and working correctly.
Update sonic-swss-common submodule pointer to include the following:
565ad4b Fix common path issue (#751)
3352881 Prevent sonic-db-cli generate core dump (#749)
43cadec Add ProfileProvider class to support read profile config from PROFILE_DB. (#683)
8b09f90 Update path to sairedis tests (#747)
85f3776 Non recursive automake and Debian packaging changes (#700)
This is a reland of #13950, with the debug image build fix.
#### Why I did it
Add support for California-SB237 conformance.
https://github.com/sonic-net/SONiC/tree/master/doc/California-SB237
#### How I did it
Expire user passwords during build
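Illustrative of the mechanism (the user name is a placeholder): marking the password as expired forces a change at first login:
```
sudo chage -d 0 admin
```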
#### How to verify it
Enable the build flag and check that the default user is prompted for a new password
Why I did it
[Security] Upgrade the openssl version to 1.1.1n-0+deb11u4+fips
f6df7303d8 Update expired certs.
84540b59c1 CVE-2022-2068
f763d8a93e Prepare 1.1.1n-0+deb11u2
576562cebe CVE-2022-1292
How I did it
Upgrade the OpenSSL version
Why I did it
[FIPS] Upgrade Open-SymCrypt version to 0.6
Improve the SymCrypt performance.
Support downloading the debug packages from the storage account in version 0.6.
How I did it
Upgrade symcrypt-openssl from version 0.4 to version 0.6
Changes in https://github.com/sonic-net/sonic-fips:
0c29b23 Upgrade the submodules: SymCrypt and SymCrypt-OpenSSL #40
80022f3 Fix the ARM64 build failure
2e76a3d Disable the unsupported tests
Other changes will be added as well:
55b8e0a Merge pull request #35 from xumia/change-license
120c1a7 Upgrade SymCrypt and SymCrypt-OpenSSL
2f9c084 Merge pull request #39 from liuh-80/dev/liuh/update-openssh-version
a3be6c5 Revert openssh version
e02fa1e Update fips version
How to verify it
Why I did it
Add an explicit dependency on sonic_platform_common in the sonic-chassisd mk. This was needed because sonic-chassisd depends on sonic-platform-base, which is part of the sonic-platform-common wheel package.
How I did it
Add explicit dependency on sonic_platform_common in sonic-chassisd mk.
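An illustrative .mk dependency line (variable names may differ from the actual change):
```
$(SONIC_CHASSISD)_DEPENDS += $(SONIC_PLATFORM_COMMON_PY3)
```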
How to verify it
Verified by building for all platforms: broadcom, mellanox, marvell_arm
Why I did it
[Build] Support Debian snapshot mirror to improve build stability
It is intended to enhance reproducible builds by supporting the Debian snapshot mirror. It guarantees that all the docker images use the same Debian mirror snapshot and fixes temporary build failures caused by remote Debian mirror indexes changing during the build. It also fixes version conflict issues caused by some Debian packages not having fixed versions.
How I did it
Add a new feature to support the Debian snapshot mirror.
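Debian snapshot mirrors pin the archive to a fixed timestamp; an illustrative sources.list line (the timestamp is a placeholder):
```
deb http://snapshot.debian.org/archive/debian/20230601T000000Z bullseye main contrib non-free
```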
How to verify it
Why I did it
docker-sonic-mgmt build is failing.
How I did it
The stretch docker was disabled recently; update docker-sonic-mgmt to buster.
Migrate from sonictest to sonicbld, because Azure requires migrating VMs from uswest2 to uswest3.
Fix a build issue when building the image.
How to verify it
Why I did it
We plan to pilot the k8s feature and need to fix several bugs, including enabling the telemetry feature and adding a platform label.
How I did it
Add the supported feature set; only enable telemetry container upgrade for now
Add a platform label for scheduler usage
Remove the CNI installation code; it is auto-installed when installing kubeadm
How to verify it
After the sonic device joins the k8s cluster, show the node labels to check that the platform label is visible.
Signed-off-by: Yun Li <yunli1@microsoft.com>
During the build process, a dsc file is retrieved from the URL:
http://deb.debian.org/debian/pool/main/i/isc-dhcp/isc-dhcp_4.4.1-2.3.dsc
Depending on DNS resolution, the server reached may respond with an
HTTP 404 error code, which stops the build process.
In all cases, the URL http://deb.debian.org/debian/pool/main/i/isc-dhcp/
no longer lists this DSC file, but one with a different name format:
the suffix "+deb11u1" is now appended to identify the Debian revision.
- Append this suffix to the makefile rules of isc-dhcp
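For example, the versioned DSC can then be fetched as:
```
wget http://deb.debian.org/debian/pool/main/i/isc-dhcp/isc-dhcp_4.4.1-2.3+deb11u1.dsc
```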
Signed-off-by: Guillaume Lambert <guillaume.lambert@orange.com>