This commit adds support for the Pensando ASIC called ELBA. ELBA is used in PCI-based cards and in smart switches.
#### Why I did it
This commit introduces the Pensando platform, which is based on the ELBA ASIC.
##### Work item tracking
- Microsoft ADO **(number only)**:
#### How I did it
Created platform/pensando folder and created makefiles specific to pensando.
This mainly creates the pensando docker (which OEMs need to download before building an image), which has all the userspace needed to initialize and use the DPU (ELBA ASIC).
The build process outputs two images, which can be used from ONIE and goldfw.
The recommendation is to use ONIE.
#### How to verify it
Load the SONiC image via ONIE or goldfw and make sure the interfaces are UP.
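After boot, a quick interface check could look like this (a minimal sketch using the standard SONiC CLI; interface names vary per platform):

```sh
# Verify the front-panel interfaces are oper UP
show interfaces status
# Or check link state directly from the kernel
ip -br link show
```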
##### Description for the changelog
Add pensando platform support.
Why I did it
Add platform support for Debian 12 (Bookworm) on Mellanox Platform
How I did it
Update hw-management to v7.0030.2008
Deprecate the sfp_count == module_count approach in favour of asic init completion
Ref: Mellanox/hw-mgmt@bf4f593
Add xxd package to base image which is required by hw-management scripts
Add the non-upstream flag into linux kernel cache options
Update the thermalctl logic based on new sysfs attributes
Fix the integrate-mlnx-hw-mgmt script to not populate the arm64 Kconfig
How to verify it
Build kernel and run platform tests
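For example (the standard sonic-buildimage flow; the target name follows the platform):

```sh
# Configure the workspace for the Mellanox platform and build the image,
# which builds the Bookworm-based kernel and rootfs along the way
make configure PLATFORM=mellanox
make target/sonic-mellanox.bin
```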
Signed-off-by: Vivek Reddy <vkarri@nvidia.com>
Co-authored-by: Junchao-Mellanox <junchao@nvidia.com>
Co-authored-by: Junchao-Mellanox <57339448+Junchao-Mellanox@users.noreply.github.com>
In Bookworm's version of setuptools, direct calls to setup.py are
deprecated and no longer guaranteed to work. One of the recommended
commands is to use the `build` python package to build packages, and
call it with `python -m build`. This, by default, builds the packages in
a virtualenv to ensure that only the specified dependencies in setup.py
are needed to build the package. This also extends to running tests,
where directly calling `setup.py test` may not work, and the recommended
alternatives are to either call `pytest` directly, or call `tox` or
`nox`. More details are available at [1].
For SONiC's use case, we cannot build all Python packages in a
virtualenv, since some of their dependencies are packages that we built
earlier in the build, and those packages are not pushed to PyPI or
any package registry. (There may be a cleaner approach to this, though,
but I'm not aware of it.) For this reason, the `-n` flag is added so
that the package is not built in a virtualenv.
For testing, `pytest` is now called instead of `setup.py test`.
[1] https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html
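Concretely, the per-package change is roughly the following (a sketch; package-specific options vary):

```sh
# Before (deprecated direct setup.py invocations):
python3 setup.py bdist_wheel
python3 setup.py test

# After: build via the `build` package, with -n/--no-isolation so that
# locally built dependencies already installed in the slave are visible,
# and run tests through pytest directly.
python3 -m build -n
python3 -m pytest
```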
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Changes from Bullseye slave container:
* Python 2 is no longer available at all
* Python 3.11 (instead of Python 3.9)
* GCC 12 (instead of GCC 10)
* Python ipaddr package is no longer available
* OpenJDK 17 (instead of OpenJDK 11)
* Remove doxygen armhf manual compilation (no longer needed)
* Disable FIPS, as the FIPS binaries are not yet available
* Install Python setuptools through Debian instead of pip
* Install Python wheel through Debian instead of pip
* Install Python nose through Debian instead of pip
* Install Python j2cli through Debian instead of pip
* Install Python pexpect through Debian instead of pip
* Install Python parameterized through Debian instead of pip
* Install Python pyyaml through Debian instead of pip
* Install Python pyfakefs through Debian instead of pip
* Install Python m2crypto through Debian instead of pip
* Python pympler 1.0 (instead of 0.8)
* Install Python build (as a replacement to setup.py)
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Why I did it
Share docker image to support gnmi container and telemetry container
Work item tracking
Microsoft ADO 25423918:
How I did it
Create telemetry image from gnmi docker image.
Enable gnmi container and disable telemetry container by default.
How to verify it
Run end to end test.
Why I did it
This is part of the Python3 migration project. This PR adds a new makefile flag: LEGACY_SONIC_MGMT_DOCKER
By default, LEGACY_SONIC_MGMT_DOCKER = y will build sonic-mgmt-docker with Python2 and Python3.
If LEGACY_SONIC_MGMT_DOCKER = n, it will build sonic-mgmt-docker with Python3 only.
Work item tracking
Microsoft ADO (number only): 25254349
How I did it
Add makefile flag: LEGACY_SONIC_MGMT_DOCKER
How to verify it
By default, the build will produce sonic-mgmt-docker with Python2 and Python3; no change compared to before.
Setting LEGACY_SONIC_MGMT_DOCKER=n will build sonic-mgmt-docker with Python3 only, as shown below.
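For example (the docker target name is an assumption):

```sh
# Default: legacy behavior, Python2 + Python3
make target/docker-sonic-mgmt.gz

# Python3-only sonic-mgmt docker
make LEGACY_SONIC_MGMT_DOCKER=n target/docker-sonic-mgmt.gz
```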
* Reduce SONiC image filesystem size
Add a build option to reduce the image size.
The image reduction process affects the build in two ways:
- change some packages that are installed in the rootfs
- apply a rootfs reduction script
The script itself will perform a few steps:
- remove file duplication by leveraging hardlinks:
  - under /usr/share/sonic, since the symlinks under the device folder are lost during the build
  - under /var/lib/docker, since the files there will only be mounted read-only
- remove some extra files (man pages, docs, licenses, ...)
- some image-specific space reduction (only for Aboot images currently)
The script can later be improved but for now it's reducing the rootfs
size by ~30%.
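The hardlink step could look roughly like this (a hypothetical sketch using util-linux's hardlink(1); the rootfs path is illustrative and the actual script may differ):

```sh
# Replace duplicate files with hardlinks to a single copy:
# - /usr/share/sonic: symlinks under the device folder were lost during build
# - /var/lib/docker: contents are only mounted read-only at runtime
hardlink /fsroot/usr/share/sonic
hardlink /fsroot/var/lib/docker
```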
* restore fully featured vim package
Why I did it
The RFS cache has issues which break the official build and the PR checker.
When reading from the cache, the fsroot-vs/lib/modules folder does not exist.
Work item tracking
Microsoft ADO (number only): 25481484
How I did it
Disable the read cache for now.
How to verify it
* Re-add missing dependency for derived debs.
My previous change removed the whole dependency on the main deb
existing, not just the installation of the main deb. Fix this by
re-adding a dependency on the main deb being built/pulled from cache.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* Add the kernel and initramfs as dependencies for RFS build
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
---------
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* Don't install dependencies of derived debs
When "building" a derived deb package, don't install the dependencies of
the package into the container. It's not needed at this stage.
* Re-add openssh-client and openssh-sftp-server as derived debs
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
---------
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
This adds optimization for the SONiC image build by splitting the final build step into two stages. It allows running the first stage in parallel, improving build time.
The optimization is enabled via new rules/config flag ENABLE_RFS_SPLIT_BUILD (disabled by default)
- Why I did it
To improve the build time.
- How I did it
Added logic to run build_debian.sh in two stages, transferring the progress via a new build artifact.
- How to verify it
make ENABLE_RFS_SPLIT_BUILD=y SONIC_BUILD_JOBS=32 target/<IMAGE_NAME>.bin
Signed-off-by: Yakiv Huryk <yhuryk@nvidia.com>
- Why I did it
To simplify usability and increase adoption of the sFlow + dropmon feature without rebuilding an image.
- How I did it
Remove the ENABLE_SFLOW_DROPMON compilation flag, and remove unnecessary patches.
- How to verify it
1. Configure the sFlow on the switch
2. Configure the Host (PTF)
3. Launch the sflowtool on Host (PTF)
4. Send the dropped packets from Host (PTF) to the switch via scapy
5. Check the L3 counters on the switch
6. Check the samples that were captured by the sflowtool on the Host (PTF)
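Steps 3 and 4 could look like this on the PTF host (a sketch; the address is illustrative):

```sh
# Step 3: capture the sFlow samples exported by the switch
# (6343 is the default sFlow collector port)
sflowtool -p 6343

# Step 4: send packets the switch will drop (e.g. TTL 0) via scapy
python3 -c "from scapy.all import IP, UDP, send; send(IP(dst='10.0.0.1', ttl=0)/UDP(), count=100)"
```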
Signed-off-by: vadymhlushko-mlnx <vadymh@nvidia.com>
Why I did it
Add dhcp_server ipv4 feature to SONiC.
HLD: sonic-net/SONiC#1282
How I did it
To clarify: this container is disabled by INCLUDE_DHCP_SERVER = n for now, which means the container is not built.
Add INCLUDE_DHCP_SERVER to indicate whether to build dhcp_server container
Add docker file for dhcp_server, build and install kea-dhcp4 inside container
Add template file for dhcp_server container services.
Add entry for dhcp_server to FEATURE table in config_db.
How to verify it
Build an image with INCLUDE_DHCP_SERVER = y to verify:
The image can be installed successfully without crashing.
dhcp_server can be enabled by running config feature state dhcp_server enabled (see the sketch below).
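For example (the image target name is a placeholder):

```sh
# Build an image with the dhcp_server container included
make INCLUDE_DHCP_SERVER=y target/sonic-<platform>.bin

# After installing, enable the feature
sudo config feature state dhcp_server enabled
```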
### Why I did it
Since directories are being removed, the `-r` flag is required.
Fixes #15922
##### Work item tracking
- Microsoft ADO **(number only)**: 24752770
* Enhanced slave.mk to accept python wheels as dependency for a deb
target. Dependent wheel names should be specified through the new
{deb_name}_WHEEL_DEPENDS variable in the deb's make rules. The wheel
will be built and installed in the slave docker before starting the
deb build.
* Added sonic_yang_models-1.0-py3-none-any.whl as dependency for
sonic-mgmt-common.deb. This is required for using the sonic yangs in
UMF
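A hypothetical rules-file usage combining the two points above (the variable names are illustrative):

```make
# Build and install the sonic_yang_models wheel in the slave docker
# before the sonic-mgmt-common deb build starts.
$(SONIC_MGMT_COMMON)_WHEEL_DEPENDS += $(SONIC_YANG_MODELS_PY3)
```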
Signed-off-by: Sachin Holla <sachin.holla@broadcom.com>
Why I did it
Currently, the k8s master image is generated from a separate branch which we created ourselves, not a release branch. We need to commit this k8s master related code to the master branch for a better way to do the k8s master image build-out.
Work item tracking
Microsoft ADO (number only):
19998138
How I did it
Install k8s dashboard docker images
Install the Geneva MDS, MDSD, and Fluentd docker images and tag them as latest; tagging latest helps always create the container with the latest version.
Install azure-storage-blob and azure-identity; these will help do etcd backup and restore.
Install the kubernetes python client packages; these will help read worker and container state, so we can send these metrics to Geneva.
Remove the mdm Debian package; it will be replaced with the mdm docker image.
Add the k8s master entrance script; this script will be called by the rc-local service at system startup. We have some master systemd services in the compute-move repo; when the VMM service creates a master VM, VMM copies all master service files inside the VM, and the entrance script sets up all services according to the service files.
When the entrance script content changes, the PR build will set include_kubernetes_master=y to help validate the k8s master related code change. The default value of include_kubernetes_master should always be n for the public master branch. We will generate the master image from the internal master branch.
How to verify it
Build with INCLUDE_KUBERNETES_MASTER = y
Why I did it
Fix an issue where some of the patches in the .patches folder are not applied.
The command "quilt applied" only lists the applied patches; if some of the patches have issues, then those patches will not be applied when you run the build command again.
Work item tracking
Microsoft ADO (number only): 24410730
How I did it
Run the command to apply the patches without any conditions.
If it fails, check whether the failure reason is "series fully applied", as sketched below.
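A sketch of that logic in shell (the exact message text may differ):

```sh
# Apply the patches unconditionally; treat "series fully applied"
# as success instead of a failure.
out=$(quilt push -a 2>&1) || {
    echo "$out" | grep -q "series fully applied" || { echo "$out"; exit 1; }
}
```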
How to verify it
Add support for a separate DEB_BUILD_PROFILES environment variable, to
be able to set build profiles. This may be used to specify whether
python 2 bindings/libraries should be built, or what configuration
options should be specified for a package.
This also makes it easier to append/remove build profiles from our rules
files, which will be needed for the sairedis build.
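For example ("nocheck" and "nopython2" are existing Debian build profile names; whether a package honors a profile depends on its rules file):

```sh
# Skip the test suite and Python 2 bindings where supported
export DEB_BUILD_PROFILES="nocheck nopython2"
dpkg-buildpackage -us -uc -b
```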
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
#### Why I did it
The testcases in sonic-mgmt need the packages of protobuf and dashapi
##### Work item tracking
- Microsoft ADO **(number only)**:
#### How I did it
Because the sonic-mgmt docker is based on Ubuntu 20.04, it cannot directly install the packages compiled by the slave due to dependency issues. Download the related packages directly from AzP.
#### How to verify it
Check azp stats.
Why I did it
[Build] Change the build option from ENABLE_FIPS_FEATURE to INCLUDE_FIPS
Work item tracking
Microsoft ADO (number only): 24485797
How I did it
- Why I did it
Since the prod signing tool is vendor specific, and each vendor may have different arguments they would like to use in the script, we would need a way to inject those arguments to the script.
- How I did it
Add a compilation flag SECURE_UPGRADE_PROD_TOOL_ARGS which vendors can use to inject any flag they would want to the prod signing script.
- How to verify it
Build SONiC using your own prod script
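For example (the tool path and injected arguments are illustrative):

```sh
make SECURE_UPGRADE_MODE=prod \
     SECURE_UPGRADE_PROD_SIGNING_TOOL=/path/to/vendor_signer.sh \
     SECURE_UPGRADE_PROD_TOOL_ARGS="--key-id mykey --endpoint https://signing.example.com" \
     target/sonic-<platform>.bin
```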
- Why I did it
To be able to cache, and then retrieve cached "copied" debs
- How I did it
Add missed caching and cache retrieval steps
- How to verify it
Build with cache and then clean and rebuild again. The targets added to SONIC_COPY_DEBS should be taken from a cache.
Signed-off-by: Yevhen Fastiuk <yfastiuk@nvidia.com>
- Why I did it
Fix an issue with the signing tool not running, due to it being called with the path from the host rather than the path it is mounted on inside the docker slave.
- How I did it
Modified the path on the SECURE_UPGRADE_PROD_SIGNING_TOOL flag to the path where it is mounted inside the slave docker
- How to verify it
Build SONiC using your own prod script
Depends on https://github.com/sonic-net/sonic-linux-kernel/pull/315
#### Why I did it
The name SECURE_UPGRADE_DEV_SIGNING_CERT is misleading, this flag is relevant to both to dev and prod signing.
#### How I did it
Rename all mentions of name SECURE_UPGRADE_DEV_SIGNING_CERT to SECURE_UPGRADE_SIGNING_CERT - this is also done with PR in sonic-linux-kernel repository
#### How to verify it
Build SONiC using your own prod script
Closes #14697
Why I did it
When using the dpkg cache feature, debians referenced under SONIC_ONLINE_DEBS always get downloaded, even if the expected Debian package already exists under target/. This runs contrary to the design of Makefiles (where the presence of the output file indicates it is already built).
This is also counter to the behavior of the SONiC build when dpkg cache is not enabled, causing further confusion.
This behavior also causes problems when doing local development, where we may want to modify the local debian files when evaluating which changes to push to the HTTP repository storing them (Artifactory). With the current behavior, our local changes are always overwritten.
Work item tracking
Microsoft ADO (number only):
How I did it
The SONIC_ONLINE_DEBS rule now skips downloading debians if they already exist under target/.
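A minimal sketch of the guard (variable names are illustrative; the real slave.mk rule differs in its details):

```sh
# For each online deb: download only when not already present
if [ ! -e "target/debs/$DEB_NAME" ]; then
    wget -O "target/debs/$DEB_NAME" "$DEB_URL"
fi
```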
How to verify it
Populate target/ with locally modified debian packages. Perform the build. Ensure the local modifications remain intact, and are not overwritten.
#### Why I did it
Implementing code changes for https://github.com/sonic-net/SONiC/pull/1203
#### How I did it
Removed the timers and delayed targets, since the delayed services now start based on an event-driven approach.
Cleared the port table during config reload and cold reboot scenarios.
Modified the YANG model and init_cfg.json to change has_timer to delayed.
#### How to verify it
Running regression
Why I did it
Add support for the SONiC OS Version in the device info.
It will be used to display the version info in the SONiC command "show version". The version is used to do the FIPS certification. We do not do the FIPS certification on a specific release, but on the SONiC OS Version.
SONiC Software Version: SONiC.master-13812.218661-7d94c0c28
SONiC OS Version: 11
Distribution: Debian 11.6
Kernel: 5.10.0-18-2-amd64
How I did it
Why I did it
Fix #14081
By default DOCKER_BUILDKIT is enabled after docker version 23.0.0
So we need to disable it explicitly if SONIC_USE_DOCKER_BUILDKIT is not set.
Otherwise it will produce larger installable images.
How I did it
set DOCKER_BUILDKIT=0 in slave.mk
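A sketch of the slave.mk change (conceptually):

```make
# Force the classic docker builder unless the user explicitly
# opted into BuildKit via SONIC_USE_DOCKER_BUILDKIT.
ifeq ($(SONIC_USE_DOCKER_BUILDKIT),)
export DOCKER_BUILDKIT := 0
endif
```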
How to verify it
Why I did it
It is to fix issue #13773.
It only has an impact on builds triggered manually inside the slave container. Developers can go into the slave container and do a build; it will print a skippable error message complaining that the variable is not found.
How I did it
Add the default value for variable SLAVE_DRI.
How to verify it
- Why I did it
Add Secure Boot support to SONiC OS.
Secure Boot (SB) is a verification mechanism for ensuring that code launched by a computer's UEFI firmware is trusted. It is designed to protect a system against malicious code being loaded and executed early in the boot process before the operating system has been loaded.
- How I did it
Added a signing process to sign the following components:
shim, grub, the Linux kernel, and kernel modules during the build, when the feature is enabled at build time, according to the HLD explanations (the feature is disabled by default).
- How to verify it
There are self-verifications of each boot component when building the image, in addition, there is an existing end-to-end test in sonic-mgmt repo that checks that the boot succeeds when loading a secure system (details below).
How to build a sonic image with secure boot feature: (more description in HLD)
Required to use the following build flags from rules/config:
SECURE_UPGRADE_MODE="dev"
SECURE_UPGRADE_DEV_SIGNING_KEY="/path/to/private/key.pem"
SECURE_UPGRADE_DEV_SIGNING_CERT="/path/to/cert/key.pem"
After setting those flags, build the sonic-buildimage.
Before installing the image, prepare the setup (switch device) as follows:
check that the device supports UEFI
store the public keys in the UEFI DB
enable the Secure Boot flag in UEFI
How to run a test that verifies the Secure Boot flow:
The existing test "test_upgrade_path" under "sonic-mgmt/tests/upgrade_path/test_upgrade_path" is enough to validate proper boot.
You need to specify the following arguments:
Base_image_list your_secure_image
Target_image_list your_second_secure_image
Upgrade_type cold
Then run the test; it will install the base image given in the parameter, upgrade to the target image by doing a cold reboot, and validate that all the services are up and working correctly.
#### Why I did it
Add support for California-SB237 conformance.
https://github.com/sonic-net/SONiC/tree/master/doc/California-SB237
#### How I did it
Expire user passwords during build
#### How to verify it
Enable build flag and check if default user is prompted for a new password
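Conceptually, the build-time expiry amounts to the following (a sketch using the standard chage tool; the user name is illustrative):

```sh
# Mark the password as expired so the first login forces a change
chage -d 0 admin
```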
Why I did it
If make fails, we can't rerun the make process, because the existing patches can't be applied again.
How I did it
Check if the patches are applied. If yes, don't apply the patches again.
How to verify it
Why I did it
[Build] Support Debian snapshot mirror to improve build stability
It enhances the reproducible build by supporting the Debian snapshot mirror. It guarantees that all the docker images use the same Debian mirror snapshot, and it fixes the temporary build failures caused by remote Debian mirror indexes changing during the build. It also fixes the version conflict issue caused by some of the Debian packages not having fixed versions.
How I did it
Add a new feature to support the Debian snapshot mirror.
How to verify it
- Why I did it
The followup to #12920 PR.
If a feature's compilation is disabled, its configuration should not be included in init_cfg.json.
- How I did it
Update init_cfg.json.j2 template to include teamd and radv features configuration only if their compilation is enabled.
- How to verify it
The default behavior is preserved. To verify the changes, compile the image without overriding the INCLUDE_TEAMD and INCLUDE_ROUTER_ADVERTISER options. The generated /etc/sonic/init_cfg.json should remain unchanged. Install the image and verify that both the teamd and radv containers are present and running. Verify that the feature state returned by the show feature status command is enabled.
Change the INCLUDE_TEAMD or INCLUDE_ROUTER_ADVERTISER value to "n". Compile and install the image. Verify that the feature configuration is not included in the generated /etc/sonic/init_cfg.json file. Verify that the show feature status output doesn't include the feature.
Why I did it
In PR check pipelines, there are too many duplicated warnings:
fatal: No names found, cannot describe anything.
SONIC_IMAGE_VERSION will not change within one build, so we don't need to recalculate it on every reference; we just need to calculate it once and record it.
In a Makefile, a variable assigned with '=' is recalculated every time it is referenced.
How I did it
Fix it in the Makefile, as sketched below.
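A sketch of the fix (the version-derivation command is illustrative):

```make
# '=' re-expands (and re-runs the shell command) on every reference;
# ':=' evaluates once at assignment time and records the result.
SONIC_IMAGE_VERSION := $(shell scripts/sonic_version.sh)
```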
How to verify it
Check this PR's check pipeline result.
Why I did it
It's possible to speed up some parts of a build using parallel compression/decompression.
This is especially important for build_debian.sh.
How I did it
pigz is a parallel implementation of gzip: https://zlib.net/pigz/
Some programs like docker and mkinitramfs can automatically detect and use it instead of gzip.
For tar we need to select it directly.
To enable this feature you need to set GZ_COMPRESS_PROGRAM=pigz
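For example, tar selects it through --use-compress-program (file names are illustrative):

```sh
# Compress and decompress with pigz instead of gzip
tar --use-compress-program=pigz -cf fsroot.tgz fsroot/
tar --use-compress-program=pigz -xf fsroot.tgz
```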
Why I did it
It is to fix the Broadcom build failure, which is caused by the built image docker-dhcp-relay:latest not being found.
2022-12-14T00:09:57.5464893Z [ FAIL LOG START ] [ target/docker-dhcp-relay.gz-load ]
2022-12-14T00:09:57.5466036Z Attempting docker image lock for docker-dhcp-relay load
2022-12-14T00:09:57.5467113Z Obtained docker image lock for docker-dhcp-relay load
2022-12-14T00:09:57.5468206Z Loading docker image target/docker-dhcp-relay.gz
2022-12-14T00:09:57.5469361Z Loaded image: docker-dhcp-relay:internal.65852159-11ad82a07a
2022-12-14T00:09:57.5470686Z Tagging docker image docker-dhcp-relay:latest as docker-dhcp-relay-sonic:latest
2022-12-14T00:09:57.5471997Z Error response from daemon: No such image: docker-dhcp-relay:latest
2022-12-14T00:09:57.5473122Z [ FAIL LOG END ] [ target/docker-dhcp-relay.gz-load ]
2022-12-14T00:09:57.5539792Z make: *** [slave.mk:1180: target/docker-dhcp-relay.gz-load] Error 1
2022-12-14T00:09:57.5540958Z make: *** Waiting for unfinished jobs....
The image build had succeeded:
2022-12-13T17:01:59.9046935Z [ finished ] [ target/docker-eventd.gz ]
2022-12-13T17:02:00.4947165Z [ building ] [ target/docker-dhcp-relay.gz ]
2022-12-13T17:02:00.6688627Z /sonic/dockers/docker-dhcp-relay/cli-plugin-tests /sonic
2022-12-13T17:02:41.1123955Z /sonic
2022-12-13T17:07:04.1786069Z [ finished ] [ target/docker-dhcp-relay.gz ]
But it was tagged with another value:
Obtained docker image lock for docker-dhcp-relay save
Tagging docker image docker-dhcp-relay-sonic:latest as docker-dhcp-relay:internal.65852159-11ad82a07a
Saving docker image docker-dhcp-relay:internal.65852159-11ad82a07a
Released docker image lock for docker-dhcp-relay save
Removing docker image docker-dhcp-relay-sonic:latest
Untagged: docker-dhcp-relay-sonic:latest
target/docker-dhcp-relay.gz
File /dpkg_cache/docker-dhcp-relay.gz-2ddfa01a109ca69b7621f1a-450bae36026d9dee62646f2.tgz saved in cache
[ CACHE::SAVED ] /dpkg_cache/docker-dhcp-relay.gz-2ddfa01a109ca69b7621f1a-450bae36026d9dee62646f2.tgz
How I did it
When the feature SONIC_CONFIG_USE_NATIVE_DOCKERD_FOR_BUILD is not enabled, always save the image with the latest tag instead of the specific version.
The version is dynamic; it changes when a new commit is checked in, but the docker-dhcp-relay image does not necessarily change.
- Why I did it
This optimization is needed for DPU SONiC. DPU SONiC runs a limited set of containers, and the teamd and radv containers are not part of them. Unlike the other containers, there was no way to disable compilation of the teamd and radv containers.
To reduce DPU SONiC compilation time and reduce the image size, this commit adds the possibility to disable their compilation.
- How I did it
Two new configuration options are added to rules/config file:
INCLUDE_TEAMD
INCLUDE_ROUTER_ADVERTISER
By default, to preserve the existing behavior, both options are enabled. There are two ways to override them:
Change the option value to "n" in the rules/config file.
Override their value using the SONIC_OVERRIDE_BUILD_VARS env variable:
SONIC_OVERRIDE_BUILD_VARS="SONIC_INCLUDE_TEAMD=y SONIC_INCLUDE_ROUTER_ADVERTISER=n"
- How to verify it
The default behavior is preserved. To verify it, compile the image without overriding the new options. Install the image and verify that both the teamd and radv containers are present and running.
To verify the new options, override them with the "n" value. Compile and install the image. Verify that the teamd and radv containers are not present. Verify that SWSS can start without errors.
This feature caches all the deb files during docker build and stores them
into the version cache.
It loads the cache file if it already exists in the version cache and copies the extracted
deb files from the cache file into the Debian cache path (/var/cache/apt/archives).
apt-install always installs the deb file from the cache if it exists; this
avoids unnecessary package downloads from the repo and speeds up the overall build.
The cache file is selected based on the SHA value of the version dependency
files.
Why I did it
How I did it
How to verify it
* 03.Version-cache - framework environment settings
It defines and passes the necessary version cache environment variables
to the caching framework.
It adds the utils script for shared cache file access.
It also adds the post-cleanup logic for cleaning the unwanted files from
the docker/image after the version cache creation.
* 04.Version cache - debug framework
Added DBGOPT Make variable to enable the cache framework
scripts in trace mode. This option takes the part name of the script to
enable the particular shell script in trace mode.
Multiple shell script names can also be given.
Eg: make DBGOPT="image|docker"
Added verbose mode to dump the version merge details during
build/dry-run mode.
Eg: scripts/versions_manager.py freeze -v \
'dryrun|cmod=docker-swss|cfile=versions-deb|cname=all|stage=sub|stage=add'
* 05.Version cache - docker dpkg caching support
PR #12829 modified the docker tagging scheme such that optional docker
containers would be tagged with the SONiC image version. However, the
docker-image-load macro wasn't updated for these changes. Update it
here.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Pre-compiled bazel does not work in an arm64 docker container:
shil@2f910d8d37b2:/sonic/src/sonic-p4rt/sonic-pins$ uname -a
Linux 2f910d8d37b2 5.4.0-132-generic #148-Ubuntu SMP Mon Oct 17 16:02:06 UTC 2022 aarch64 GNU/Linux
shil@2f910d8d37b2:/sonic/src/sonic-p4rt/sonic-pins$ bazel
Opening zip "/proc/self/exe": lseek(): Bad file descriptor
FATAL: Failed to open '/proc/self/exe' as a zip file: (error: 9): Bad file descriptor
shil@2f910d8d37b2:/sonic/src/sonic-p4rt/sonic-pins$
During docker build, host files can be passed to the docker build through
docker context files, but there is no straightforward way to transfer
files from the docker build back to the host.
This feature provides a tricky way to pass the cache contents from the docker
build to the host. It tars the cached content, encodes it in base64 format,
and passes it through a log file with the special tags 'VCSTART' and 'VCENT'.
slave.mk on the host extracts the cache contents from the log and stores them
in the cache folder. The cache contents are encoded in base64 for
easy passing.
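Conceptually (a sketch; the tag names follow the text and the paths are illustrative):

```sh
# Inside the docker build: tar + base64-encode the cache and emit it
# into the build log between the special tags
echo "VCSTART$(tar -C /vcache -czf - . | base64 -w0)VCENT"

# On the host (slave.mk side): recover the cache from the captured log
sed -n 's/.*VCSTART\(.*\)VCENT.*/\1/p' docker-build.log \
    | base64 -d | tar -xzf - -C cache/
```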
#### Why I did it
#### How I did it
#### How to verify it
Fixes: #11521
- Why I did it
When building SONiC dockers, the SONiC build system tags all of them with the latest tag. This is OK for all built-in dockers because we will also tag them with the image version tag in the sonic_debian_extension.j2 script. On the other hand, some of these dockers are SONiC packages and are installed by sonic-package-manager, which creates only one tag, which is recorded in the corresponding .gz file. This leads to these dockers being tagged only with the latest tag. This change saves the tag as an image version string in the .gz file, so that these dockers have version identification in their tag.
- How I did it
I modified slave.mk to save the version tag instead of the latest tag.
- How to verify it
I verified this change by running show version
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
Why I did it
Provide GNMI native write interface for configuration.
How I did it
Add configuration parameters for GNMI native write.
How to verify it
Check build pipeline.