Why I did it
For security and consistency considerations, change the docker image from alpine to Debian in the Makefile.
Work item tracking
Microsoft ADO (number only): 23077660
How I did it
Change the docker image from alpine to Debian in the Makefile.
- Why I did it
Since the prod signing tool is vendor specific, and each vendor may have different arguments they would like to use in the script, we need a way to inject those arguments into the script.
- How I did it
Add a compilation flag SECURE_UPGRADE_PROD_TOOL_ARGS which vendors can use to inject any flags they want into the prod signing script.
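A minimal sketch of how a vendor might set this in rules/config; the flag name comes from this change, but the argument values are purely hypothetical:
```
# Hypothetical extra arguments forwarded to the vendor's prod signing script
SECURE_UPGRADE_PROD_TOOL_ARGS = "--key-id vendor-key-01 --hsm-slot 2"
```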
- How to verify it
Build SONiC using your own prod script
- Why I did it
In order to reduce sonic build time, there is an option to acquire sonic slave docker(s) from artifact server (reduce sonic make configure time).
Current implementation supports only convention of:
<REGISTRY_SERVER>:<REGISTRY_PORT>/<SLAVE_BASE_IMAGE>:<SLAVE_BASE_TAG>
In case the SLAVE_BASE_IMAGE appears under an internal path on the server, the convention should be:
<REGISTRY_SERVER>:<REGISTRY_PORT><REGISTRY_SERVER_PATH>/<SLAVE_BASE_IMAGE>:<SLAVE_BASE_TAG>
REGISTRY_SERVER_PATH (set in rules/config) has to start with "/".
If REGISTRY_SERVER_PATH is not set, the behavior remains the same as it is today.
- How I did it
Add the ability to set REGISTRY_SERVER_PATH and update the docker image tag and docker image pull code accordingly.
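For illustration, a possible rules/config setting; the server name, port, and path below are hypothetical:
```
REGISTRY_SERVER = registry.example.com
REGISTRY_PORT = 443
# Must start with "/" when the slave image lives under an internal path on the server
REGISTRY_SERVER_PATH = /internal/sonic
```
With these values the slave image would be pulled as registry.example.com:443/internal/sonic/<SLAVE_BASE_IMAGE>:<SLAVE_BASE_TAG>.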
- How to verify it
Use a sonic-slave docker image from an artifact server where the image is kept in an internal folder and make sure the build consumes it.
- Why I did it
Fix an issue with the signing tool not running because it was called with the path from the host rather than the path it is mounted on inside the slave docker.
- How I did it
Modified the path in the SECURE_UPGRADE_PROD_SIGNING_TOOL flag to the path where the tool is mounted inside the slave docker.
- How to verify it
Build SONiC using your own prod script
Depends on https://github.com/sonic-net/sonic-linux-kernel/pull/315
#### Why I did it
The name SECURE_UPGRADE_DEV_SIGNING_CERT is misleading; this flag is relevant to both dev and prod signing.
#### How I did it
Rename all mentions of SECURE_UPGRADE_DEV_SIGNING_CERT to SECURE_UPGRADE_SIGNING_CERT; this is also done with a PR in the sonic-linux-kernel repository.
#### How to verify it
Build SONiC using your own prod script
Why I did it
Support adding the SONiC OS Version to the device info.
It will be used to display the version info in the SONiC command "show version". The version is used for the FIPS certification: we do not do the FIPS certification on a specific release, but on the SONiC OS Version.
```
SONiC Software Version: SONiC.master-13812.218661-7d94c0c28
SONiC OS Version: 11
Distribution: Debian 11.6
Kernel: 5.10.0-18-2-amd64
```
How I did it
Why I did it
Optimize the version control for Debian packages.
Fix the sonic-slave-buster/sources.list.amd64 not found display issue; the file needs to be generated before running the shell command that evaluates the sonic image tag.
When using the snapshot mirror, it is not necessary to update the version file based on the base image. This reduces the version dependency issue when an image is not run while freezing the versions.
How I did it
Do not update the version file when the snapshot mirror is enabled.
How to verify it
Why I did it
Enable the reproducible build for PR build for master branch
Fix the reproducible build variable display error in the slave container.
The config below shows none, although the config is set and takes effect.
"SONIC_VERSION_CONTROL_COMPONENTS": "none"
How I did it
Pass the variable through the slave container command line.
The variable had been passed to the slave container and the other docker containers via a config file; it is only used to display the value during the build.
How to verify it
See https://dev.azure.com/mssonic/build/_build/results?buildId=247960&view=logs&j=88ce9a53-729c-5fa9-7b6e-3d98f2488e3f&t=88f376cf-c35d-5783-0a48-9ad83a873284
"SONIC_VERSION_CONTROL_COMPONENTS": "deb,py2,py3,web,git,docker"
Why I did it
We don't use snapshots of debian mirrors for sonic-slave containers even if MIRROR_SNAPSHOT is enabled.
How I did it
Export MIRROR_SNAPSHOT in Makefile.work to generate sources.list for the sonic-slave containers using the Debian snapshot mirror.
How to verify it
New docker versions use stderr instead of stdout to print info when building an image.
As a result we got empty log files.
The fix is to redirect stderr to stdout when building sonic-slave images.
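A minimal sketch of the kind of redirection meant here; the image and log file names are illustrative, not the actual Makefile variables:
```
# Newer docker prints build progress on stderr, so merge it into the log file
docker build --no-cache -t "$slave_image" sonic-slave-bullseye 2>&1 | tee "$slave_image.log"
```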
Why I did it
This is a fix for c63e9fe
SLAVE_TAG should include all dependencies used for SLAVE_BASE_TAG
How I did it
Take sources.list.* into account when calculating SLAVE_TAG.
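Conceptually, the tag is a hash over everything the slave image depends on; including the generated sources.list files makes the tag change whenever the mirrors change. A simplified sketch, not the exact recipe:
```
# Hash the slave image inputs, including generated sources.list.* files
SLAVE_TAG = $(shell cat $(SLAVE_DIR)/Dockerfile $(SLAVE_DIR)/sources.list.* \
                | sha1sum | awk '{print substr($$1,0,11)}')
```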
How to verify it
Why I did it
Found a new bug on the kubelet side. The kubernetes-cni plug-in was removed in #12997; the reason is that the plug-in is auto-installed when installing kubeadm, and it would report an error if we did not remove the install code. But after the removal, the auto-installed version is different from what we installed before. This affects kubelet behavior in some scenarios we had not found before. We need to install it another way.
How I did it
Install kubernetes-cni==0.8.7-00 before installing kubeadm.
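A sketch of the install ordering, assuming apt is used; the exact command sequence in the Dockerfile may differ:
```
# Pin the CNI package first so installing kubeadm does not pull a different version
apt-get install -y kubernetes-cni=0.8.7-00
apt-get install -y kubeadm
```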
How to verify it
The Flannel binary will be installed under the /opt/cni/bin/ folder.
- Why I did it
Add Secure Boot support to SONiC OS.
Secure Boot (SB) is a verification mechanism for ensuring that code launched by a computer's UEFI firmware is trusted. It is designed to protect a system against malicious code being loaded and executed early in the boot process before the operating system has been loaded.
- How I did it
Added a signing process to sign the following components:
shim, grub, the Linux kernel, and kernel modules during the build, when the feature is enabled at build time according to the HLD explanations (the feature is disabled by default).
- How to verify it
There are self-verifications of each boot component when building the image; in addition, there is an existing end-to-end test in the sonic-mgmt repo that checks that the boot succeeds when loading a secure system (details below).
How to build a sonic image with the secure boot feature (more description in the HLD):
It is required to use the following build flags from rules/config:
SECURE_UPGRADE_MODE="dev"
SECURE_UPGRADE_DEV_SIGNING_KEY="/path/to/private/key.pem"
SECURE_UPGRADE_DEV_SIGNING_CERT="/path/to/cert/key.pem"
After setting those flags, build sonic-buildimage.
Before installing the image, prepare the setup (switch device) as follows:
Check that the device supports UEFI
Store the public keys in the UEFI DB
Enable the Secure Boot flag in UEFI
How to run a test that verifies the Secure Boot flow:
The existing test "test_upgrade_path" under "sonic-mgmt/tests/upgrade_path/test_upgrade_path" is enough to validate proper boot.
You need to specify the following arguments:
Base_image_list your_secure_image
Target_image_list your_second_secure_image
Upgrade_type cold
Then run the test. The test will install the base image given in the parameter, then upgrade to the target image by doing a cold reboot, and validate that all the services are up and working correctly.
#### Why I did it
Add support of California-SB237 conformance.
https://github.com/sonic-net/SONiC/tree/master/doc/California-SB237
#### How I did it
Expire user passwords during build
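A minimal sketch of expiring the default user's password in the image filesystem during build; the chroot path and user name are assumptions, not the actual build hook:
```
# Force a password change on first login for the default user
sudo chroot ./fsroot chage -d 0 admin
```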
#### How to verify it
Enable build flag and check if default user is prompted for a new password
previously "make reset" was expecting user input from the terminal to do its job
setting UNATTENDED to any non-zero string will allow "make reset" to run without interactive confirmation
- Why I did it
When doing automated builds of SONiC images, we need to reset the working repositories between each build.
- How I did it
Adding an environment variable that is read by Makefile.work
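A sketch of the conceptual behavior; the prompt and the cleanup commands below are illustrative only, not the actual Makefile.work recipe:
```
# Skip the interactive confirmation when UNATTENDED is set to a non-empty value
if [ -z "$UNATTENDED" ]; then
    printf "Reset all working directories? [y/N] "
    read ans
    [ "$ans" = "y" ] || exit 1
fi
git clean -xfdf
git submodule foreach --recursive git clean -xfdf
```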
- How to verify it
Running
UNATTENDED=1 make reset
should perform an automatic reset of all working directories.
Why I did it
Change the mirror config file
Use files/build/versions/default/versions-mirror only when the reproducible build is enabled.
The config in files/build/versions is only for the reproducible build, while the snapshot mirror feature does not depend on the reproducible build.
How I did it
Skip the mirror config in files/build/versions/default/versions-mirror if the reproducible build is not enabled.
How to verify it
Why I did it
[Build] Support Debian snapshot mirror to improve build stability
It enhances the reproducible build by supporting the Debian snapshot mirror. It guarantees that all the docker images use the same Debian mirror snapshot and fixes the temporary build failures caused by remote Debian mirror indexes changing during the build. It also fixes the version conflict issue caused by some of the Debian packages not having fixed versions.
How I did it
Add a new feature to support the Debian snapshot mirror.
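For illustration, a sources.list entry pointing at a fixed Debian snapshot; the timestamp is hypothetical and the actual mirror URL used by the build may differ:
```
# Pin apt to one point-in-time snapshot so all dockers see identical package indexes
deb [check-valid-until=no] http://snapshot.debian.org/archive/debian/20230101T000000Z/ bullseye main contrib non-free
deb [check-valid-until=no] http://snapshot.debian.org/archive/debian-security/20230101T000000Z/ bullseye-security main contrib non-free
```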
How to verify it
Why I did it
We plan to pilot the k8s feature and need to fix several bugs, including enabling the telemetry feature and adding a platform label.
How I did it
Add support feature set, only enable telemetry container upgrade for now
Add platform label for scheduler usage
Remove CNI installation code; it will be auto-installed when installing kubeadm
How to verify it
After the sonic device joins the k8s cluster, show the node labels to check whether the platform label is visible.
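One way to check, assuming kubectl access to the cluster (the label key is not specified here, so the grep pattern is only illustrative):
```
kubectl get nodes --show-labels | grep platform
```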
Signed-off-by: Yun Li yunli1@microsoft.com
Why I did it
make clean is broken after #12000:
```
bash: -c: line 1: syntax error near unexpected token `;'
bash: -c: line 1: `make -f slave.mk PLATFORM= PLATFORM_ARCH=amd64 MULTIARCH_QEMU_ENVIRON=n
...
MIRROR_URLS= MIRROR_SECURITY_URLS= Q=@ clean; ; '
make[1]: *** [Makefile.work:531: clean] Error 2
```
How I did it
Remove a conditional for the clean command.
Signed-off-by: Konstantin Vasin <k.vasin@yadro.com>
Why I did it
It's possible to speed up some parts of a build using parallel compression/decompression.
This is especially important for build_debian.sh.
How I did it
pigz is a parallel implementation of gzip: https://zlib.net/pigz/
Some programs like docker and mkinitramfs can automatically detect and use it instead of gzip.
For tar we need to select it directly.
To enable this feature, set GZ_COMPRESS_PROGRAM=pigz.
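For example, selecting pigz explicitly for tar as described above; the archive name is illustrative:
```
# pigz produces standard gzip output, just compressed on all available cores
tar -I pigz -cf fsroot.tar.gz fsroot/
tar -I pigz -xf fsroot.tar.gz
```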
Why I did it
How I did it
How to verify it
* 03.Version-cache - framework environment settings
It defines and passes the necessary version cache environment variables
to the caching framework.
It adds the utils script for shared cache file access.
It also adds the post-cleanup logic for cleaning the unwanted files from
the docker/image after the version cache creation.
* 04.Version cache - debug framework
Added DBGOPT Make variable to enable the cache framework
scripts in trace mode. This option takes the part name of the script to
enable the particular shell script in trace mode.
Multiple shell script names can also be given.
Eg: make DBGOPT="image|docker"
Added verbose mode to dump the version merge details during
build/dry-run mode.
Eg: scripts/versions_manager.py freeze -v \
'dryrun|cmod=docker-swss|cfile=versions-deb|cname=all|stage=sub|stage=add'
* 05.Version cache - docker dpkg caching support
This feature caches all the deb files during docker build and stores them
into version cache.
It loads the cache file if it already exists in the version cache and copies the extracted
deb files from the cache file into the Debian cache path (/var/cache/apt/archives).
apt-install always installs the deb file from the cache if it exists; this
avoids unnecessary package downloads from the repo and speeds up the overall build.
The cache file is selected based on the SHA value of the version dependency
files.
During docker build, host files can be passed to the docker build through docker context files, but there is no straightforward way to transfer files from the docker build back to the host.
This feature provides a tricky way to pass the cache contents from the docker build to the host. It tars the cached content, encodes it in base64 format, and passes it through a log file with special tags, 'VCSTART and VCENT'.
slave.mk on the host extracts the cache contents from the log and stores them in the cache folder. Cache contents are encoded in base64 format for easy passing.
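A minimal sketch of the idea; the marker names come from the description above, while the file names and paths are hypothetical:
```
# Inside the docker build: emit the cached debs as base64 between marker tags
echo "VCSTART"
tar -C /var/cache/apt -czf - archives | base64 -w0
echo "VCENT"

# On the host (conceptually what slave.mk does): recover the payload from the build log
mkdir -p target/vcache
sed -n '/VCSTART/,/VCENT/p' docker-image.log | sed '1d;$d' \
    | base64 -d | tar -xzf - -C target/vcache/
```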
Why I did it
Provide GNMI native write interface for configuration.
How I did it
Add configuration parameters for GNMI native write.
How to verify it
Check build pipeline.
#### Why I did it
When building the sonic-slave-bash target, it cannot go to the shell; it fails in the step that builds sonic-build-hooks, with the error logs below. It may have an impact on some of the users and may be related to a different version of make.
```
$ QUIET=n BLDENV=bullseye make NOJESSIE=1 NOSTRETCH=1 sonic-slave-bash
+++ Making sonic-slave-bash +++
BLDENV=buster make -f Makefile.work sonic-slave-bash
make[1]: Entering directory `/builds2/stephens/wip/update-submodule/sonic-buildimage'
echo -n ""
pushd src/sonic-build-hooks; TRUSTED_GPG_URLS=https://packages.trafficmanager.net/debian/public_key.gpg,https://packages.microsoft.com/keys/microsoft.asc make all; popd
/builds2/stephens/wip/update-submodule/sonic-buildimage/src/sonic-build-hooks /builds2/stephens/wip/update-submodule/sonic-buildimage
make[2]: Entering directory `/builds2/stephens/wip/update-submodule/sonic-buildimage/src/sonic-build-hooks'
dpkg-deb: building package 'sonic-build-hooks' in 'buildinfo/sonic-build-hooks_1.0_all.deb'.
make[2]: Leaving directory `/builds2/stephens/wip/update-submodule/sonic-buildimage/src/sonic-build-hooks'
/builds2/stephens/wip/update-submodule/sonic-buildimage
mkdir -p sonic-slave-buster/buildinfo
cp src/sonic-build-hooks/buildinfo/sonic-build-hooks* sonic-slave-buster/buildinfo
[ "n" == y ] && scripts/build_mirror_config.sh sonic-slave-buster amd64 buster
make[1]: *** [sonic-build-hooks] Error 1
make[1]: Leaving directory `/builds2/stephens/wip/update-submodule/sonic-buildimage'
make: *** [sonic-slave-bash] Error 2
```
#### How I did it
Change the format as below:
```
[ xxx = yyy ] && do something
```
To
```
if [ xxx = yyy ]; then do something; fi
```
#### How to verify it
Verified by the person who found the issue; the issue was gone when the patch was applied.
Why I did it
After #12557, sources.list files are generated.
So we need to recalculate SLAVE_BASE_TAG if the mirrors were changed.
We also need to rebuild the DPKG cache in this case.
How I did it
Use generated sources.list for SLAVE_BASE_TAG
Add MIRROR_URLS and MIRROR_SECURITY_URLS to SONIC_COMMON_FLAGS_LIST
Why I did it
Unify the Debian mirror sources
Make it easy to upgrade to the next Debian release; no source URL code change required
Support customizing the Debian mirror sources during the build
Relative issue: #12523
This PR fixes the issue reported in PR #12367:
https://github.com/sonic-net/sonic-buildimage/pull/12367
The issue is that the exit code is always 0 for builds that failed.
The fix is added in Makefile.work to return the error code when the slave build fails with an error.
- Makefile.work has become complex and it is very difficult to manage the changes across branches.
- Restructured Makefile.work to make it more readable.
- Added a $(QUIET) option to turn on command echo mode through a command-line option.
- Exported the SONIC_BUILD_VARS variable, through which make options can be set dynamically.
Eg: make SONIC_BUILD_VARS='INCLUDE_NAT=y'
Why I did it
Replace the configuration parameter for gnmi write; we will add other gnmi write features in the future.
How I did it
Update rules/config and other Makefile.
How to verify it
Build sonic image.
- Pass TARGET_BOOTLOADER variable value to slave build infra
#### Why I did it
The TARGET_BOOTLOADER is always blank when referred to in the Makefiles which are executed inside the slave build container.
#### How I did it
Pass it on the make command invoking slave.mk explicitly similar to other environment variables.
#### How to verify it
The kdump-tools package is installed on the sonic-broadcom.bin image.
* Add k8s master feature
Signed-off-by: Yun Li <yunli1@microsoft.com>
* Update kubernetes version mistake and make variable passing clear
Signed-off-by: Yun Li <yunli1@microsoft.com>
* Add CRI-dockerd package
Signed-off-by: Yun Li <yunli1@microsoft.com>
* Update version variable passing logic
Signed-off-by: Yun Li <yunli1@microsoft.com>
* Upgrade the worker kubernetes version
Signed-off-by: Yun Li <yunli1@microsoft.com>
* Install xml file parse tool
Signed-off-by: Yun Li <yunli1@microsoft.com>
Signed-off-by: Yun Li <yunli1@microsoft.com>
Why I did it
Not all build environments have passwordless sudo enabled for all users
How I did it
Instead of using sudo to delete fsroot directories, mount them in a small, temporary docker container and delete them from there
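A minimal sketch of the approach; the container image and the directory pattern are illustrative:
```
# Delete root-owned fsroot directories without sudo by removing them inside a container
docker run --rm -v "$PWD":/sonic -w /sonic debian:bullseye \
    sh -c 'rm -rf fsroot*'
```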
How to verify it
Build in an environment where the build user does not have passwordless sudo enabled and confirm that no sudo password prompts are seen
* Ported Marvell armhf build on x86 for debian buster to use cross-compilation instead of qemu emulation
The current armhf Sonic build on an amd64 host uses qemu emulation. Due to the
nature of the emulation it takes a very long time, about 22-24 hours, to
complete the build. The change I made reduces the build time by
porting the Sonic armhf build on an amd64 host for the Marvell platform on Debian
Buster to use cross-compilation on an arm64 host for the armhf target. The
overall Sonic armhf build time using cross-compilation is reduced to
about 6 hours.
Signed-off-by: marvell <marvell@cpss-build3.marvell.com>
* Fixed final Sonic image build with dockers inside
* Update Dockerfile.j2
Fixed qemu-user-static:x86_64-aarch64-5.0.0-2 .
* Update cross-build-arm-python-reqirements.sh
Added support for both armhf and arm64 cross-build platform using $PY_PLAT environment variable.
* Update Makefile
Added TARGET=<cross-target> for armhf/arm64 cross-compilation.
* Reviewer's @qiluo-msft requests done
Signed-off-by: marvell <marvell@cpss-build3.marvell.com>
* Added new radius/pam patch for arm64 support
* Update slave.mk
Added missing back tick.
* Added libgtest-dev: libgmock-dev: to the buster Dockerfile.j2. Fixed arm perl version to be generic
* Added missing armhf/arm64 entries in /etc/apt/sources.list
* fix libc-bin core dump issue from xumia:fix-libc-bin-install-issue commit
* Removed unnecessary 'apt-get update' from sonic-slave-buster/Dockerfile.j2
* Fixed saiarcot895 reviewer's requests
* Fixed README and replaced 'sed/awk' with patches
* Fixed ntp build to use openssl
* Stopped using the sonic-slave-buster/cross-build-arm-python-reqirements.sh script (moved all prebuilt python package cross-compilation/installation inside Dockerfile.j2). Fixed src/snmpd/Makefile to use -j1 in all cases
* Clean armhf cross-compilation build fixes
* Ported cross-compilation armhf build to bullseye
* Additional change for bullseye
* Set CROSS_BUILD_ENVIRON default value n
* Removed python2 references
* Fixes after merge with the upstream
* Deleted unused sonic-slave-buster/cross-build-arm-python-reqirements.sh file
* Fixed 2 @saiarcot895 requests
* Fixed @saiarcot895 reviewer's requests
* Removed use of prebuilt python wheels
* Incorporated saiarcot895 CC/CXX and other simplification/generalization changes
Signed-off-by: marvell <marvell@cpss-build3.marvell.com>
* Fixed saiarcot895 reviewer's additional requests
* src/libyang/patch/debian-packaging-files.patch
* Removed the --no-deps option when installing wheels. Removed unnecessary lazy_object_proxy arm python3 package installation
Co-authored-by: marvell <marvell@cpss-build3.marvell.com>
Co-authored-by: marvell <marvell@cpss-build2.marvell.com>
Refactors the SONiC Installer to support greater flexibility in building for a given architecture and bootloader.
#### Why I did it
Currently the SONiC installer assumes that if a platform is ARM based it uses the `uboot` bootloader, and that it uses the `grub` bootloader otherwise. This is not a correct assumption to make, as ARM is not strictly tied to uboot and x86 is not strictly tied to grub.
#### How I did it
To implement this I introduce the following changes:
* Remove the different arch folders from the `installer/` directory
* Merge the generic components of the ARM and x86 installer into `installer/installer.sh`
* Refactor x86 + grub specific functions into `installer/default_platform.conf`
* Modify the installer to call the `default_platform.conf` file and then the `platform/[platform]/platform.conf` file to override it as needed
* Update references to the installer in the `build_image.sh` script
* Add `TARGET_BOOTLOADER` variable that defaults to `uboot` for ARM devices and `grub` for x86 unless overridden in `platform/[platform]/rules.mk` (see the sketch after this list)
* Update bootloader logic in `build_debian.sh` to be based on `TARGET_BOOTLOADER` instead of `TARGET_ARCH` and to reference the grub package in a generic manner
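A simplified sketch of the default bootloader selection described above; the exact variable handling and file where this lives are assumptions:
```
# Default bootloader per architecture; platform/[platform]/rules.mk may override it
ifeq ($(TARGET_ARCH),amd64)
TARGET_BOOTLOADER ?= grub
else
TARGET_BOOTLOADER ?= uboot
endif
```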
#### How to verify it
This has been tested on an ARM test platform as well as on Mellanox amd64 switches to ensure there was no impact.
#### Description for the changelog
[arm] Refactor installer and build to allow arm builds targeted at grub platforms
#### Link to config_db schema for YANG module changes
N/A
- Why I did it
Implemented sonic-net/SONiC#1001
- How I did it
Install systemd-bootchart tool and provide default config for it.
- How to verify it
Run build and verify systemd-bootchart is installed.
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
Why I did it
The docker storage driver vfs is not a good option for the build; it uses a "deep copy" when building a new layer, which leads to lower performance and more disk space used than other storage drivers.
A better docker storage driver is the default one, overlay2, which is a modern union filesystem.
Why I did it
make reset didn't clean up all fsroot folders.
How I did it
Remove all fsroot folders used during build.
How to verify it
Run local build and local make reset:
sudo mkdir fsroot-test
sudo touch fsroot-test/foo
make reset
(Without this change, make reset cannot remove fsroot-test; with the change, the repo becomes clean after make reset.)
Signed-off-by: Ying Xie ying.xie@microsoft.com
#### Why I did it
No logs currently exist for sonic-slave-X containers, which makes it difficult to debug.
#### How I did it
Altered Makefile.work to create logs in the sonic-slave-X folder while still displaying the log to the screen to prevent interfering with any existing tooling.
#### How to verify it
Do `make configure` and verify that logs show up in `sonic-slave-buster/` and `sonic-slave-bullseye/`
#### Description for the changelog
Add logging for slave container builds
Currently, the build dockers are created as user dockers (docker-base-stretch-<user>, etc.) that are
specific to each user, but the sonic dockers (docker-database, docker-swss, etc.) are
created with a fixed docker name and are common to all the users.
docker-database:latest
docker-swss:latest
When multiple builds are triggered on the same build server, that creates a parallel build issue because
all the build jobs are trying to create the same docker with the latest tag.
This happens only when sonic dockers are built using the native host dockerd for sonic docker image creation.
This patch creates all sonic dockers as user sonic dockers and then, while
saving and loading the user sonic dockers, renames the user sonic
dockers into the correct sonic dockers with the tag latest.
docker-database:latest <== SAVE/LOAD ==> docker-database-<user>:tag
The user sonic docker names are derived from the DOCKER_USERNAME and DOCKER_USERTAG make env
variables, and using a Jinja template, the FROM docker name is replaced with the correct user sonic docker name when
loading and saving the docker image.
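Conceptually, the save path looks like the sketch below; the image name and target path are illustrative:
```
# The build produced a per-user docker; rename it to the canonical name only when saving
docker tag "docker-database-$DOCKER_USERNAME:$DOCKER_USERTAG" docker-database:latest
docker save docker-database:latest | gzip -c > target/docker-database.gz
docker rmi docker-database:latest
```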
Why I did it
To sign SONiC kernel image and allow secure boot based system to verify SONiC image before loading into the system.
How I did it
Pass the following parameters to rules/config.user
Ex:
SONIC_ENABLE_SECUREBOOT_SIGNATURE := y
SIGNING_KEY := /path/to/key/private.key
SIGNING_CERT := /path/to/public/public.cert
How to verify it
A secure-boot-enabled system enrolled with the right public key of the image in the platform UEFI database will be able to verify the image before loading it.
Alternatively, one can verify with the offline sbsign tool as below.
export SBSIGN_KEY=/abc/bcd/xyz/
sbverify --cert $SBSIGN_KEY/public_cert.cert fsroot-platform-XYZ/boot/vmlinuz-5.10.0-8-2-amd64
O/P:
Signature verification OK
Implement infrastructure that allows enabling address sanitizer
for docker containers. Enable address sanitizer for SWSS container.
- Why I did it
To add a possibility to compile SONiC applications with address sanitizer (ASAN).
ASAN is a memory error detector for C/C++. It finds:
1. Use after free (dangling pointer dereference)
2. Heap buffer overflow
3. Stack buffer overflow
4. Global buffer overflow
5. Use after return
6. Use after the scope
7. Initialization order bugs
8. Memory leaks
- How I did it
By adding a new ENABLE_ASAN configuration option.
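For example, the option could be enabled like this; whether it is set in rules/config or rules/config.user is a build-environment detail:
```
# Build the SWSS container with address sanitizer instrumentation
ENABLE_ASAN = y
```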
- How to verify it
By default ASAN is disabled and the SONiC image is not affected.
When ASAN is enabled it inspects all allocations, deallocations, and memory usage that the application does at run time. To verify whether the application has memory errors, tests that trigger memory usage of the application should be run. Ideally, the whole regression suite should be run. Memory leak reports will be placed in the /var/log/asan/ directory of the SONiC host OS.
Signed-off-by: Oleksandr Ivantsiv <oivantsiv@nvidia.com>
Support saiserver v2 with python3 and thrift 0.13.0
Add variables to support saiserverv2
Build a different thrift in saithrift depending on the saiserver version
Build different versions of saiserver
Make the saiserver and saiserver docker with a version number
Test done:
Built two different versions of saiserver in the local build environment
Added saiserver to buster
Co-authored-by: richard.yu <richard.yu@microsoft.comwq>
* [arm64]: Fix registration of the qemu interpreters
The current code doesn't properly run the container that registers the
qemu interpreters. It checks to see if the container is "known" by
Docker, but that doesn't indicate whether it's been run or not.
Therefore, just always register the qemu interpreters in the kernel, to
make sure the binary that's in the slave images that we build is used.
* [build]: Reduce the number of python calls
Modify the BLDENV and PROJECT_ROOT variables in slave.mk to be
immediate execution instead of lazy execution. Neither of these
variables should be changing for the duration of the build in each slave
container, so just run it once instead of every time they're referenced.
When running `make configure` for broadcom arm64 (where all of the slave
images are already built) on an amd64 host, this reduces the time spent
in each slave container from 4.5-5 minutes to 2 minutes.
* [sonic-slave]: Upgrade the qemu used for Bullseye arm64 to 6.1.0
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>