Commit Graph

273 Commits

Author SHA1 Message Date
xumia
14a5ec7914
[Build] Fix the docker image docker-dhcp-relay:latest not found issue (#13048)
Why I did it
This fixes the Broadcom build failure, which is caused by the build image docker-dhcp-relay:latest not being found.

2022-12-14T00:09:57.5464893Z [ FAIL LOG START ] [ target/docker-dhcp-relay.gz-load ]
2022-12-14T00:09:57.5466036Z Attempting docker image lock for docker-dhcp-relay load
2022-12-14T00:09:57.5467113Z Obtained docker image lock for docker-dhcp-relay load
2022-12-14T00:09:57.5468206Z Loading docker image target/docker-dhcp-relay.gz
2022-12-14T00:09:57.5469361Z Loaded image: docker-dhcp-relay:internal.65852159-11ad82a07a
2022-12-14T00:09:57.5470686Z Tagging docker image docker-dhcp-relay:latest as docker-dhcp-relay-sonic:latest
2022-12-14T00:09:57.5471997Z Error response from daemon: No such image: docker-dhcp-relay:latest
2022-12-14T00:09:57.5473122Z [  FAIL LOG END  ] [ target/docker-dhcp-relay.gz-load ]
2022-12-14T00:09:57.5539792Z make: *** [slave.mk:1180: target/docker-dhcp-relay.gz-load] Error 1
2022-12-14T00:09:57.5540958Z make: *** Waiting for unfinished jobs....
The image had been built successfully:

2022-12-13T17:01:59.9046935Z [ finished ] [ target/docker-eventd.gz ] 
2022-12-13T17:02:00.4947165Z [ building ] [ target/docker-dhcp-relay.gz ] 
2022-12-13T17:02:00.6688627Z /sonic/dockers/docker-dhcp-relay/cli-plugin-tests /sonic
2022-12-13T17:02:41.1123955Z /sonic
2022-12-13T17:07:04.1786069Z [ finished ] [ target/docker-dhcp-relay.gz ] 
But it was tagged by another value:

Obtained docker image lock for docker-dhcp-relay save
Tagging docker image docker-dhcp-relay-sonic:latest as docker-dhcp-relay:internal.65852159-11ad82a07a
Saving docker image docker-dhcp-relay:internal.65852159-11ad82a07a
Released docker image lock for docker-dhcp-relay save
Removing docker image docker-dhcp-relay-sonic:latest
Untagged: docker-dhcp-relay-sonic:latest
target/docker-dhcp-relay.gz
File /dpkg_cache/docker-dhcp-relay.gz-2ddfa01a109ca69b7621f1a-450bae36026d9dee62646f2.tgz saved in cache 
[ CACHE::SAVED ] /dpkg_cache/docker-dhcp-relay.gz-2ddfa01a109ca69b7621f1a-450bae36026d9dee62646f2.tgz
How I did it
When the feature SONIC_CONFIG_USE_NATIVE_DOCKERD_FOR_BUILD is not enabled, always save the image with the latest tag rather than a specific version.
The version is dynamic; it changes whenever a new commit is checked in, but the docker-dhcp-relay image itself does not necessarily change.
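
A rough sketch of the tagging behavior described above, assuming simplified image names and the DOCKER_USERTAG variable mentioned elsewhere in this log; the real logic lives in slave.mk:

```bash
# Hypothetical sketch, not the actual slave.mk recipe: keep the plain :latest tag
# when the native-dockerd build feature is disabled, so later load targets that
# expect docker-dhcp-relay:latest can still find the image.
if [ "$SONIC_CONFIG_USE_NATIVE_DOCKERD_FOR_BUILD" != "y" ]; then
    docker tag docker-dhcp-relay-sonic:latest docker-dhcp-relay:latest
    docker save docker-dhcp-relay:latest | gzip -c > target/docker-dhcp-relay.gz
else
    docker tag docker-dhcp-relay-sonic:latest "docker-dhcp-relay:$DOCKER_USERTAG"
    docker save "docker-dhcp-relay:$DOCKER_USERTAG" | gzip -c > target/docker-dhcp-relay.gz
fi
```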
2022-12-15 21:03:21 +08:00
Oleksandr Ivantsiv
9988ff888b
[build] Add the possibility to disable compilation of teamd and radv containers. (#12920)
- Why I did it
This optimization is needed for DPU SONiC. DPU SONiC runs a limited set of containers, and the teamd and radv containers are not part of them. Unlike the other containers, there was no way to disable compilation of the teamd and radv containers.
To reduce DPU SONiC compilation time and image size, this commit adds the possibility to disable their compilation.

- How I did it
Two new configuration options are added to rules/config file:

INCLUDE_TEAMD
INCLUDE_ROUTER_ADVERTISER
By default, both options are enabled to preserve the existing behavior. There are two ways to override them:

Change the option value to "n" in the rules/config file.
Override their value using the SONIC_OVERRIDE_BUILD_VARS env variable:
SONIC_OVERRIDE_BUILD_VARS="SONIC_INCLUDE_TEAMD=y SONIC_INCLUDE_ROUTER_ADVERTISER=n"

- How to verify it
The default behavior is preserved. To verify it, compile the image without overriding the new options. Install the image and verify that both the teamd and radv containers are present and running.
To verify the new options, override them with the "n" value, then compile and install the image. Verify that the teamd and radv containers are not present. Verify that SWSS can start without errors.
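
For example, a quick check on a running switch that both containers exist and are up (container names are assumed to be teamd and radv):

```bash
# Verify the default build still ships both containers (names assumed):
docker ps --format '{{.Names}}\t{{.Status}}' | grep -E '^(teamd|radv)\b'
```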
2022-12-13 12:06:30 +02:00
Kalimuthu-Velappan
0dc22bd27c
05.Version cache - docker dpkg caching support (#12005)
This feature caches all the deb files during docker build and stores them
into version cache.

If a matching cache file already exists in the version cache, it is loaded and the extracted
deb files are copied into the Debian cache path (/var/cache/apt/archives).

apt-install then installs the deb files from the cache when they exist, which
avoids unnecessary package downloads from the repo and speeds up the overall build.

The cache file is selected based on the SHA value of version dependency
files.
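
A minimal sketch of the cache lookup described above; the file names, hash length, and target name are assumptions for illustration (see the /dpkg_cache file name in the log excerpt of the first commit above for the real format):

```bash
# Illustrative only: derive the cache file name from the SHA of the version
# dependency files, and on a cache hit restore the .deb files into the apt
# archive path so apt-install reuses them instead of downloading.
VERSION_SHA=$(sha1sum versions-deb | cut -c1-23)
CACHE_FILE="/dpkg_cache/docker-swss.gz-${VERSION_SHA}.tgz"
if [ -f "$CACHE_FILE" ]; then
    tar -xzf "$CACHE_FILE" -C /var/cache/apt/archives
fi
```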



* 03.Version-cache - framework environment settings

It defines and passes the necessary version cache environment variables
to the caching framework.

It adds the utils script for shared cache file access.

It also adds the post-cleanup logic for cleaning the unwanted files from
the docker/image after the version cache creation.

* 04.Version cache - debug framework

Added the DBGOPT make variable to enable the cache framework
scripts in trace mode. This option takes a partial script name and enables
trace mode for the matching shell script.

Multiple shell script names can also be given.

	Eg: make DBGOPT="image|docker"

Added verbose mode to dump the version merge details during
build/dry-run mode.
	Eg: scripts/versions_manager.py freeze -v \
		'dryrun|cmod=docker-swss|cfile=versions-deb|cname=all|stage=sub|stage=add'

* 05.Version cache - docker dpkg caching support

2022-12-12 09:20:56 +08:00
Saikrishna Arcot
61536028f8
[build]: Fix docker load image tag not being the expected tag (#12959)
PR #12829 modified the docker tagging scheme such that optional docker
containers would be tagged with the SONiC image version. However, the
docker-image-load macro wasn't updated for these changes. Update it
here.

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
2022-12-06 23:36:00 -08:00
LuiSzee
5281f6c3f6
[build][arm64] disable p4rt compile on arm64 for bazel not work (#12798)
The pre-compiled bazel does not work in an arm64 docker container:

shil@2f910d8d37b2:/sonic/src/sonic-p4rt/sonic-pins$ uname -a
Linux 2f910d8d37b2 5.4.0-132-generic #148-Ubuntu SMP Mon Oct 17 16:02:06 UTC 2022 aarch64 GNU/Linux
shil@2f910d8d37b2:/sonic/src/sonic-p4rt/sonic-pins$ bazel
Opening zip "/proc/self/exe": lseek(): Bad file descriptor
FATAL: Failed to open '/proc/self/exe' as a zip file: (error: 9): Bad file descriptor
shil@2f910d8d37b2:/sonic/src/sonic-p4rt/sonic-pins$
2022-12-03 23:07:12 -08:00
Kalimuthu-Velappan
aaeafa8411
02.Version cache - docker cache build framework (#12001)
During docker build, host files can be passed to the docker build through
docker context files. But there is no straightforward way to transfer
the files from docker build to host.

This feature provides a tricky way to pass the cache contents from the docker
build to the host. It tars the cached content, encodes it as base64, and passes it
through a log file between special tags ('VCSTART' and 'VCENT').

On the host, slave.mk extracts the cache contents from the log and stores them
in the cache folder. The cache contents are base64-encoded for
easy passing.
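
A minimal sketch of the pass-through described above; the marker names, paths, and log handling are simplified assumptions, not the framework's exact code:

```bash
# Inside the docker build: tar the cache folder and emit it, base64-encoded,
# into the build log between the special markers.
echo "VCSTART"
tar -czf - /vcache | base64
echo "VCEND"

# On the host: extract the payload between the markers from the captured build
# log, drop the marker lines, decode, and unpack into the cache folder.
mkdir -p target/vcache
sed -n '/VCSTART/,/VCEND/p' docker-build.log | sed '1d;$d' | base64 -d | tar -xzf - -C target/vcache/
```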

2022-12-02 08:28:45 +08:00
Stepan Blyshchak
d22cf46ceb
[dockers] save extension dockers with an image tag (#12829)
Fixes: #11521

- Why I did it
When building SONiC dockers, the SONiC build system tags all of them with the latest tag. This is OK for all built-in dockers because we also tag them with the image version tag in the sonic_debian_extension.j2 script. On the other hand, some of these dockers are SONiC packages and are installed by sonic-package-manager, which creates only one tag, which is recorded in the corresponding .gz file. This leads to these dockers being tagged only with the latest tag. This change saves the tag as an image version string in the .gz file, so that these dockers have version identification in their tag.

- How I did it
I modified slave.mk to save the version tag instead of the latest tag.

- How to verify it
I verified this change by running show version

Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
2022-11-30 19:49:34 +02:00
ganglv
c16b8dbcc5
[sonic-gnmi] Support GNMI native write (#10948)
Why I did it
Provide GNMI native write interface for configuration.

How I did it
Add configuration parameters for GNMI native write.

How to verify it
Check build pipeline.
2022-11-29 16:58:27 +08:00
xumia
ac5d89c6ac
[Build] Support j2 template for debian sources (#12557)
Why I did it
Unify the Debian mirror sources.
Make it easy to upgrade to the next Debian release; no source URL code changes required.
Support customizing the Debian mirror sources during the build.
Relative issue: #12523
2022-11-09 08:09:53 +08:00
xumia
61246b62c8
[Build] Fix the docker-sync not found issue (#12593)
Why I did it
[Build] Fix the docker-sync not found issue

How I did it
When SONIC_CONFIG_USE_NATIVE_DOCKERD_FOR_BUILD is not enabled, do not remove the docker-sync tag.
2022-11-07 10:04:43 +08:00
ntoorchi
45d174663a
Enable P4RT at build time and disable at startup (#10499)
#### Why I did it
Currently in the Azure build system, the P4RT container is disabled by default at build time. The goal here is to include the P4RT container at build time while disabling it at runtime. The user can enable/disable the p4rt app through the config based on their preference.

#### How I did it
Changed the config in rules/config and init-cfg.json.j2
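
For reference, a hedged example of the runtime toggle described above, assuming the app is exposed as a `p4rt` feature in CONFIG_DB:

```bash
# Enable the p4rt app at runtime and check its state (feature name assumed):
sudo config feature state p4rt enabled
show feature status p4rt
```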
2022-10-31 16:18:42 -07:00
Vivek
5d83d424b1
Added BUILD flags to provision for building the kernel with non-upstream patches (#12428)
* Added ENV vars for non-upstream patches

Signed-off-by: Vivek Reddy <vkarri@nvidia.com>

* Made MLNX_PATCH_LOC an absolute path

Signed-off-by: Vivek Reddy <vkarri@nvidia.com>

* Added non-upstream-patches dir

Signed-off-by: Vivek Reddy <vkarri@nvidia.com>

* Update README.md

* Addressed comments

* Env vars updated

Signed-off-by: Vivek Reddy <vkarri@nvidia.com>

* Readme updated

Signed-off-by: Vivek Reddy <vkarri@nvidia.com>

Signed-off-by: Vivek Reddy <vkarri@nvidia.com>
2022-10-31 12:16:05 -07:00
xumia
078608e7f0
Add the original docker tag without username (#12472)
Why I did it
Add the original docker tag without the username to fix the broken-build issue for docker files that have not changed.
The username suffix is only required when the native build feature is enabled; when it is not enabled, the docker files do not need to change and the build should succeed.
This is to support the Cisco 202205 build.
2022-10-25 15:31:18 +08:00
Hua Liu
257cc96d7c
Remove swsssdk from sonic OS image and docker container image (#12323)
Remove swsssdk from sonic OS image and docker image

#### Why I did it
swsssdk is deprecated, so it needs to be removed from the image.

#### How I did it
Update config file to remove swsssdk from image.

#### How to verify it
Pass all test cases.

2022-10-12 13:04:14 +08:00
Kalimuthu-Velappan
c691b73959
01.Version-cache - restructuring of Makefile.work (#12000)
- Makefile.work has become complex and it is very difficult to manage changes across branches.
- Restructured Makefile.work to make it more readable.
- Added a $(QUIET) option to turn on command echo mode through a command line option.
- Exported the SONIC_BUILD_VARS variable, through which make options can be set dynamically.
	Eg: make SONIC_BUILD_VARS='INCLUDE_NAT=y'
2022-10-04 14:13:40 -07:00
Zain Budhwani
fd6a1b0ce2
Add events to host and create rsyslog_plugin deb pkg (#12059)
Why I did it

Create rsyslog plugin deb for other containers/host to install
Add events for bgp and host events
2022-09-21 09:20:53 -07:00
ganglv
c1d2e88de9
Replace configuration parameter for gnmi write (#11780)
Why I did it
Replace configuration parameter for gnmi write, and we will add other gnmi write features in the future.

How I did it
Update rules/config and other Makefile.

How to verify it
Build sonic image.
2022-09-19 14:54:08 +08:00
Zain Budhwani
6a54bc439a
Streaming structured events implementation (#11848)
With this PR in, you can flap BGP and use events_tool to see the published events.
With telemetry PR #111 in and the corresponding submodule update done in buildimage, one can run gnmi_cli to capture BGP flap events.
2022-09-03 07:33:25 -07:00
Saikrishna Arcot
de54eece46
Fix error handling when failing to install a deb package (#11846)
The current error handling code for when a deb package fails to be
installed currently has a chain of commands linked together by && and
ends with `exit 1`. The assumption is that the commands would succeed,
and the last `exit 1` would end it with a non-zero return code, thus
fully failing the target and causing the build to stop because of bash's
-e flag.

However, if one of the commands prior to `exit 1` returns a non-zero
return code, then bash won't actually treat it as a terminating error.
From bash's man page:

```
-e      Exit immediately if a pipeline (which may consist of a single simple
	command), a list, or a compound command (see SHELL GRAMMAR above),
        exits with a non-zero status.  The shell does not exit if the
        command that fails is part of the  command  list  immediately
        following a while or until keyword, part of the test following the
        if or elif reserved words, part of any command executed in a && or
        || list except the command following the final && or ||, any
        command in a pipeline but the last, or if the command's return
        value is being inverted with !.  If a compound command other than a
        subshell returns a non-zero status because a command failed while
        -e was being ignored, the shell does not exit.
```

The part `part of any command executed in a && or || list except the
command following the final && or ||` says that if the failing command
is not the `exit 1` that we have at the end, then bash doesn't treat it
as an error and exit immediately. Additionally, since this is a compound
command, but isn't in a subshell (subshells are marked by `(` and `)`,
whereas `{` and `}` just tell bash to run the commands in the current
environment), bash doesn't exit. The result of this is that in the
deb-install target, if a package installation fails, it may be
infinitely stuck in that while-loop.

There are two fixes for this: change to using a subshell, or use `;`
instead of `&&`. Using a subshell would, I think, require exporting any
shell variables used in the subshell, so I chose to change the `&&` to
`;`. In addition, at the start of the subshell, `set +e` is added in,
which removes the exit-on-error handling of bash. This makes sure that
all commands are run (the output of which may help for debugging) and
that it still exits with 1, which will then fully fail the target.
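
A minimal sketch of the before/after pattern described above; the command names are placeholders, not the real deb-install recipe:

```bash
# Before: under 'set -e', a failure anywhere in a && chain (except the final
# command) does not terminate the shell, so the enclosing retry loop can spin forever.
dump_dpkg_status && print_apt_log && exit 1

# After: run the diagnostics with ';' inside a brace group, disable -e first,
# and finish with an unconditional 'exit 1' that reliably fails the target.
{ set +e; dump_dpkg_status; print_apt_log; exit 1; }
```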

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
2022-08-29 11:35:07 -07:00
lixiaoyuner
8d6431e754
Add k8s master feature (#11637)
* Add k8s master feature

Signed-off-by: Yun Li <yunli1@microsoft.com>

* Update kubernetes version mistake and make variable passing clear

Signed-off-by: Yun Li <yunli1@microsoft.com>

* Add CRI-dockerd package

Signed-off-by: Yun Li <yunli1@microsoft.com>

* Update version variable passing logic

Signed-off-by: Yun Li <yunli1@microsoft.com>

* Upgrade the worker kubernetes version

Signed-off-by: Yun Li <yunli1@microsoft.com>

* Install xml file parse tool

Signed-off-by: Yun Li <yunli1@microsoft.com>

Signed-off-by: Yun Li <yunli1@microsoft.com>
2022-08-13 23:01:35 +08:00
gregshpit
5df09490dc
Ported Marvell armhf build on amd64 host for debian buster to use cross-comp… (#8035)
* Ported Marvell armhf build on x86 for debian buster to use cross-compilation instead of qemu emulation

The current armhf SONiC build on an amd64 host uses qemu emulation. Due to the
nature of the emulation it takes a very long time, about 22-24 hours, to
complete the build. To reduce the build time, this change ports the SONiC armhf
build on an amd64 host for the Marvell platform on Debian buster to use
cross-compilation on an arm64 host for the armhf target. The overall SONiC armhf
build time using cross-compilation is reduced to about 6 hours.

Signed-off-by: marvell <marvell@cpss-build3.marvell.com>

* Fixed final Sonic image build with dockers inside

* Update Dockerfile.j2

Fixed qemu-user-static:x86_64-aarch64-5.0.0-2 .

* Update cross-build-arm-python-reqirements.sh

Added support for both armhf and arm64 cross-build platform using $PY_PLAT environment variable.

* Update Makefile

Added TARGET=<cross-target> for armhf/arm64 cross-compilation.

* Reviewer's @qiluo-msft requests done

Signed-off-by: marvell <marvell@cpss-build3.marvell.com>

* Added new radius/pam patch for arm64 support

* Update slave.mk

Added missing back tick.

* Added libgtest-dev: libgmock-dev: to the buster Dockerfile.j2. Fixed arm perl version to be generic

* Added missing armhf/arm64 entries in /etc/apt/sources.list

* fix libc-bin core dump issue from xumia:fix-libc-bin-install-issue commit

* Removed unnecessary 'apt-get update' from sonic-slave-buster/Dockerfile.j2

* Fixed saiarcot895 reviewer's requests

* Fixed README and replaced 'sed/awk' with patches

* Fixed ntp build to use openssl

* Unuse sonic-slave-buster/cross-build-arm-python-reqirements.sh script (put all prebuilt python packages cross-compilation/install inside Dockerfile.j2). Fixed src/snmpd/Makefile to use -j1 in all cases

* Clean armhf cross-compilation build fixes

* Ported cross-compilation armhf build to bullseye

* Additional change for bullseye

* Set CROSS_BUILD_ENVIRON default value n

* Removed python2 references

* Fixes after merge with the upstream

* Deleted unused sonic-slave-buster/cross-build-arm-python-reqirements.sh file

* Fixed 2 @saiarcot895 requests

* Fixed @saiarcot895 reviewer's requests

* Removed use of prebuilt python wheels

* Incorporated saiarcot895 CC/CXX and other simplification/generalization changes

Signed-off-by: marvell <marvell@cpss-build3.marvell.com>

* Fixed saiarcot895 reviewer's  additional requests

* src/libyang/patch/debian-packaging-files.patch

* Removed --no-deps option when installing wheels. Removed unnecessary lazy_object_proxy arm python3 package instalation

Co-authored-by: marvell <marvell@cpss-build3.marvell.com>
Co-authored-by: marvell <marvell@cpss-build2.marvell.com>
2022-07-21 14:15:16 -07:00
Alexander Allen
429254cb2d
[arm] Refactor installer and build to allow arm builds targeted at grub platforms (#11341)
Refactors the SONiC Installer to support greater flexibility in building for a given architecture and bootloader. 

#### Why I did it

Currently the SONiC installer assumes that if a platform is ARM based that it uses the `uboot` bootloader and uses the `grub` bootloader otherwise. This is not a correct assumption to make as ARM is not strictly tied to uboot and x86 is not strictly tied to grub. 

#### How I did it

To implement this I introduce the following changes:
* Remove the different arch folders from the `installer/` directory
* Merge the generic components of the ARM and x86 installer into `installer/installer.sh`
* Refactor x86 + grub specific functions into `installer/default_platform.conf` 
* Modify installer to call `default_platform.conf` file and also call `platform/[platform]/platform.conf` file as well to override as needed
* Update references to the installer in the `build_image.sh` script
* Add `TARGET_BOOTLOADER` variable that is by default `uboot` for ARM devices and `grub` for x86 unless overridden in `platform/[platform]/rules.mk`
* Update bootloader logic in `build_debian.sh` to be based on `TARGET_BOOTLOADER` instead of `TARGET_ARCH` and to reference the grub package in a generic manner

#### How to verify it

This has been tested on a ARM test platform as well as on Mellanox amd64 switches as well to ensure there was no impact. 

#### Description for the changelog
[arm] Refactor installer and build to allow arm builds targeted at grub platforms

#### Link to config_db schema for YANG module changes

N/A
2022-07-12 15:00:57 -07:00
Stepan Blyshchak
ef8675d7ab
[sonic_debian_extension] install systemd-bootchart (#11047)
- Why I did it
Implemented sonic-net/SONiC#1001

- How I did it
Install systemd-bootchart tool and provide default config for it.

- How to verify it
Run build and verify systemd-bootchart is installed.

Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
2022-07-06 14:03:31 +03:00
Vadym Hlushko
87425a5b2b
[sflow + dropmon] added the ENABLE_SFLOW_DROPMON build flag. Added patches for sflow repo. (#10370)
* [sflow + dropmon] added INCLUDE_SFLOW_DROPMON flag, added patches for hsflowd
* Added the capability of monitoring dropped packets for the sFlow daemon in order to improve network monitoring, diagnostics, and troubleshooting. The drop monitor service allows the sFlow daemon to export another type of sample, dropped packets, as Discard samples alongside Counter samples and Packet Flow samples.

Signed-off-by: Vadym Hlushko <vadymh@nvidia.com>
2022-06-20 17:07:02 -07:00
xumia
b6811a58cf
[Build] Improve docker build performance (#11111)
Why I did it
The docker storage driver vfs is not a good option for the build: it uses a "deep copy" when building a new layer, which leads to lower performance and more disk space used than other storage drivers.
A better docker storage driver is the default one, overlay2, which is a modern union filesystem.
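
A quick way to confirm which storage driver the build host's dockerd is using:

```bash
# Prints the active storage driver, e.g. "overlay2" or "vfs":
docker info --format '{{.Driver}}'
```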
2022-06-16 14:13:01 +08:00
vdahiya12
0b8e463db4
[sonic-platform-common][sonic-platform-daemons] submodule update; Remove python2 sonic-platform-common wheel (#10994)
* [sonic-platform-common][sonic-platform-daemons] submodule update

vdahiya@vdahiya-dev3:~/sonic-buildimage8/sonic-buildimage/src/sonic-platform-daemons$
git log --oneline 9ac12bf..master
0d90023 (HEAD -> master, origin/master, origin/HEAD, origin/202205) grpc
client implementation for active-active dualtor (#248)
6b8bf69 [ycabled] Fix some syntax warnings in ycabled (#263)
2bcf936 [ycabled] fix the posting for mux_cable_static_info per downlink
when ycabled is spawned; synchronizing executing Telemetry API (#257)
ce217c0 Include changes from xcvr_api in transceiver_info table (#253)
e0f8a35 Fix checkReplyType failed issue via recreating xcvr_table_helper
on forking subprocess (#255)

f575a40 (origin/master, origin/HEAD, origin/202205, master)
[Credo][Ycable] changes for synchronizing executing Telemetry API's when
mux toggle is inprogress (#280)
b043372 [sonic_ssd] Nokia-7215: "show platform ssdhealth" not showing
health percent (#279)
d62d3d6 [CMIS]Fix low-power to high power mode transition (#268)
f918125 [syseeprom] Enable display of vendor extension TLV content
(#270)
4e08440 [Credo][Ycable] improve logging for Server Powered off/Faulty
cables (#272)

Signed-off-by: vaibhav-dahiya <vdahiya@microsoft.com>

* remove python2 wheel for sonic-platform-common

Signed-off-by: vaibhav-dahiya <vdahiya@microsoft.com>

* remove python2 platform_common definitions

Signed-off-by: vaibhav-dahiya <vdahiya@microsoft.com>
2022-06-04 07:41:15 -07:00
Hua Liu
96954f0134
[swsscommon] Add c++ version sonic-db-cli from sonic-swss-common (#10825)
#### Why I did it
    Fix sonic-db-cli high CPU usage on SONiC startup issue: https://github.com/Azure/sonic-buildimage/issues/10218
    ETA of this issue will be 2022/05/31

#### How I did it
    Re-write sonic-cli with c++ in sonic-swss-common: https://github.com/Azure/sonic-swss-common/pull/607
    Modify swss-common rules and slave.mk to install c++ version sonic-db-cli.
    

#### How to verify it
    Pass all E2E test scenario.

#### Description for the changelog
    Build and install c++ version sonic-db-cli from swss-common.

2022-06-01 08:05:53 +08:00
Shilong Liu
b313563986
[ci] Add arm artifacts in common lib azure pipeline (#10817)
* [ci] Add arm artifacts in common lib azure pipeline
2022-05-23 17:52:56 +08:00
vmittal-msft
f7882b3885
Fix for libsaithrift build for BRCM image (#10852)
Updated libsaibcm to fix libsaithrift compile issue on BRCM image
2022-05-19 11:15:34 -07:00
Saikrishna Arcot
05c6488029
Fix setting the HTTPS proxy (#10739)
Some places were not correctly setting the HTTPS proxy, and were only
setting the HTTP proxy. This was fine until Docker 20.10.10, which then
started using `https_proxy` for HTTPS connections.

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
2022-05-05 22:48:58 -07:00
vmittal-msft
9ae17e66a3
[sonic-sairedis update] Support for SAI header v1.10.2 with BRCM SAI v7.1.0.0 and MLNX SAI v1.21.1.0 (#10583) 2022-05-05 20:27:29 -07:00
xumia
8ec8900d31
Support SONiC OpenSSL FIPS 140-3 based on SymCrypt engine (#9573)
Why I did it
Support OpenSSL FIPS 140-3, see design doc: https://github.com/Azure/SONiC/blob/master/doc/fips/SONiC-OpenSSL-FIPS-140-3.md.

How I did it
Install the fips packages.
To build the fips packages, see https://github.com/Azure/sonic-fips
Azure pipelines: https://dev.azure.com/mssonic/build/_build?definitionId=412

How to verify it
Validate the SymCrypt engine:

admin@sonic:~$ dpkg-query -W | grep openssl
openssl 1.1.1k-1+deb11u1+fips
symcrypt-openssl        0.1

admin@sonic:~$ openssl engine -v | grep -i symcrypt
(symcrypt) SCOSSL (SymCrypt engine for OpenSSL)
admin@sonic:~$
2022-05-06 07:21:30 +08:00
shlomibitton
1d84e0d7df
[Fastboot] Delay LLDP service for better fastboot performance (#10568)
- Why I did it
Profiling the system state on init after fast-reboot during create_switch function execution, it is possible to see a few python scripts running at the same time.
This parallel execution consumes CPU time and makes the duration of create_switch longer than it should be.
Following this finding, and the motivation to ensure these services will not interfere in the future, LLDP is delayed by 90 seconds until the system finishes the init flow after fastboot.

- How I did it
Add a timer for LLDP service.
Copy the timer file to the host bin image.

- How to verify it
Run fast-reboot on MLNX platform and observe faster create_switch execution time.
This PR is dependent on PR: #10567
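
One way to inspect the resulting delay on a running switch (the unit name lldp.timer is an assumption):

```bash
# Show the LLDP timer unit and when it will fire after boot (unit name assumed):
systemctl cat lldp.timer
systemctl list-timers | grep -i lldp
```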
2022-04-28 10:35:14 +03:00
Kalimuthu-Velappan
bc30528341
Parallel building of sonic dockers using native dockerd(dood). (#10352)
Currently, the build dockers are created as user dockers (docker-base-stretch-<user>, etc.) that are
specific to each user. But the sonic dockers (docker-database, docker-swss, etc.) are
created with a fixed docker name and are common to all users.

    docker-database:latest
    docker-swss:latest

When multiple builds are triggered on the same build server, this creates a parallel-build issue because
all the build jobs try to create the same docker with the latest tag.
This happens only when sonic dockers are built using the native host dockerd for sonic docker image creation.

This patch creates all sonic dockers as user sonic dockers and then, while
saving and loading the user sonic dockers, renames them into the correct sonic
dockers with the latest tag.

	docker-database:latest <== SAVE/LOAD ==> docker-database-<user>:tag

The user sonic docker names are derived from the DOCKER_USERNAME and DOCKER_USERTAG make env
variables; using a Jinja template, the FROM docker name is replaced with the correct user sonic docker name for
loading and saving the docker image.
2022-04-28 08:39:37 +08:00
Saikrishna Arcot
330777e795
Image build time improvements (#10104)
* [build]: Patch debootstrap to not unmount the host's /proc filesystem

Currently, when the final image is being built (sonic-vs.img.gz,
sonic-broadcom.bin, or similar), each invocation of sudo in the
build_debian.sh script takes 0.8 seconds to run and execute the actual
command. This is because the /proc filesystem in the slave container has
been unmounted somehow. This is happening when debootstrap is running,
and it incorrectly unmounts the host's (in our case, the slave
container's) /proc filesystem because in the new image being built,
/proc is a symlink to the host's (the slave container's) /proc. Because
of that, /proc is gone, and each invocation of sudo adds 0.8 seconds
overhead. As a side effect, docker exec into the slave container during
this time will fail, because /proc/self/fd doesn't exist anymore, and
docker exec assumes that that exists.

Debootstrap has fixed this in 1.0.124 and newer, so backport the patch
that fixes this into the version that Bullseye has.

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>

* [build_debian.sh]: Use eatmydata to speed up deb package installations

During package installations, dpkg calls fsync multiples times (for each
package) to ensure that the files are written to disk, so that if
there's some system crash during package installation, then it is in at
least a somewhat recoverable state. For our use case though, we're
installing packages in a chroot in fsroot-* from a slave container and
then packaging it into an image. If there were a system crash (or even
if docker crashed), the fsroot-* directory would first be removed, and
the process would get restarted. This means that the fsync calls aren't
really needed for our use case.

The eatmydata package includes a library that will block/suppress the
use of fsync (and similar) system calls from applications and will
instead just return success, so that the application is not blocked on
disk writes, which can instead happen in the background instead as
necessary. If dpkg is run with this library, then the fsync calls that
it does will have no effect.

Therefore, install the eatmydata package at the beginning of
build_debian.sh and have dpkg be run under eatmydata for almost all
package installations/removals. At the end of the installation, remove
it, so that the final image uses dpkg as normal.

In my testing, this saves about 2-3 minutes from the image build time.
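
For illustration, the general eatmydata usage pattern described above; the package path is a placeholder, and the actual change wires this into build_debian.sh rather than running it by hand:

```bash
# Install the wrapper once, then run dpkg under it so its fsync calls become no-ops:
sudo apt-get install -y eatmydata
sudo eatmydata dpkg -i target/debs/bullseye/some-package.deb
```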

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>

* Change ln syntax to use chroot

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
2022-04-19 09:22:16 -07:00
Sachin Naik
598ab99469
secureboot: Enable signing SONiC kernel (#10557)
Why I did it
To sign SONiC kernel image and allow secure boot based system to verify SONiC image before loading into the system.

How I did it
Pass following parameter to rules/config.user
Ex:
SONIC_ENABLE_SECUREBOOT_SIGNATURE := y
SIGNING_KEY := /path/to/key/private.key
SIGNING_CERT := /path/to/public/public.cert

How to verify it
A secure-boot-enabled system, enrolled with the right public key of the image in the platform UEFI database, will be able to verify the image before load.

Alternatively one can verify with offline sbsign tool as below.

export SBSIGN_KEY=/abc/bcd/xyz/
sbverify --cert $SBSIGN_KEY/public_cert.cert fsroot-platform-XYZ/boot/vmlinuz-5.10.0-8-2-amd64 mage

O/P:
Signature verification OK
2022-04-19 13:23:15 +08:00
xumia
551dbfdcd2
[Security]: Enable hardening build options (#10369)
[Security]: Enable hardening build options
2022-04-02 23:07:10 +08:00
Shilong Liu
f8e11042b7
[build] Fix docker image cached tag issue. (#10297)
before: [ finished ] target/docker-base-buster.gz
after: [ cached ] target/docker-base-buster.gz
2022-03-22 15:39:01 +08:00
Shilong Liu
3fa627f290
Add a config variable to override default container registry instead of dockerhub. (#10166)
* Add variable to reset default docker registry
* fix bug in docker version control
2022-03-14 18:09:20 +08:00
Kebo Liu
fe0a7693f4
[smartmontools] Install smartmontools with apt-get and upgrade it to 7.2-1 (#10087)
Why I did it
Smartmontools 6.6 has an issue with reading SMART info of nvme SSD
Smartmontools can be installed with apt-get, no need to build and install

How I did it
Use apt-get to install smartmontools 7.2-1
Remove previous make files for smartmontools 6.6

How to verify it
verify with "smartctl" can read out correct SMART info on NVME ssd.
verify "show platform ssdhealth" can still work

Signed-off-by: Kebo Liu <kebol@nvidia.com>
2022-03-07 09:39:33 -08:00
Myron Sosyak
bec35267cb
[BFN] syncd-rpc build with thrift 0.14.1 (#9884) 2022-02-18 01:44:42 -08:00
xumia
4d203fbf91
[Build]: Fix hundreds of thousands lines of logs printed in marvell-armhf (#9980)
[Build]: Fix hundreds of thousands lines of logs printed in marvell-armhf
It is caused by the bad format of the marvell sai package mrvllibsai_armhf_1.7.1-6.deb; increase the waiting time to reduce the logs and reduce the wasted CPU.
2022-02-14 07:21:10 +08:00
Oleksandr Ivantsiv
25a0ce5eb1
[asan] Add address sanitizer support. (#9857)
Implement infrastructure that allows enabling address sanitizer
for docker containers. Enable address sanitizer for SWSS container.

- Why I did it
To add a possibility to compile SONiC applications with address sanitizer (ASAN).
ASAN is a memory error detector for C/C++. It finds:
1. Use after free (dangling pointer dereference)
2. Heap buffer overflow
3. Stack buffer overflow
4. Global buffer overflow
5. Use after return
6. Use after the scope
7. Initialization order bugs
8. Memory leaks

- How I did it
By adding new ENABLE_ASAN configuration option.

- How to verify it
By default ASAN is disabled and the SONiC image is not affected.
When ASAN is enabled it inspects all allocation, deallocation, and memory usage that the application does in run time. To verify whether the application has memory errors tests that trigger memory usage of the application should be run. Ideally, the whole regression tests should be run. Memory leaks reports will be placed in /var/log/asan/ directory of SONiC host OS.
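
A hedged example of enabling the new option for a local build; passing it on the make command line is an assumption here, and it may equally be set in rules/config:

```bash
# Build the SWSS container with address sanitizer instrumentation enabled (assumed usage):
make ENABLE_ASAN=y target/docker-swss.gz
```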

Signed-off-by: Oleksandr Ivantsiv <oivantsiv@nvidia.com>
2022-02-09 13:29:18 +02:00
Richard.Yu
49382d773e
[SAIServerV2] Build SAI Serverv2 docker (#9509)
Support saiserver v2 with python3 and thrift 0.13.0

add variables to support the saiserver v2
build different thrift versions in saithrift depending on the saiserver version
build different versions of saiserver
make the saiserver and saiserver docker with a version number

test done:
built two different versions of saiserver in the local build environment

add saiserver to buster

Co-authored-by: richard.yu <richard.yu@microsoft.comwq>
2022-02-08 02:56:34 -08:00
Alexander Allen
5f596aef63
[pmon] Move smartctl from pmon to host (#9607)
Why I did it
Need to be able to run smartctl when pmon docker is not running.

How I did it
Removed the pmon dependency for smartctl as well as the command wrapper, and added it to the debian-extension.

How to verify it
Stop pmon
Run smartctl from the host and verify it runs without error
2022-01-19 10:53:10 -08:00
Saikrishna Arcot
4acdc2a81e
Arm64 fixes and optimizations (#9274)
* [arm64]: Fix registration of the qemu interpreters

The current code doesn't properly run the container that registers the
qemu interpreters. It checks to see if the container is "known" by
Docker, but that doesn't indicate whether it's been run or not.
Therefore, just always register the qemu interpreters in the kernel, to
make sure the binary that's in the slave images that we build is used.

* [build]: Reduce the number of python calls

Modify the BLDENV and PROJECT_ROOT variables in slave.mk to be
immediate execution instead of lazy execution. Neither of these
variables should be changing for the duration of the build in each slave
container, so just run it once instead of every time they're referenced.

When running `make configure` for broadcom arm64 (where all of the slave
images are already built) on an amd64 host, this reduces the time spent
in each slave container from 4.5-5 minutes to 2 minutes.

* [sonic-slave]: Upgrade the qemu used for Bullseye arm64 to 6.1.0

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
2021-12-13 18:20:39 -08:00
xumia
49dd5db94d
[Build]: Fix docker images built multiple times issue (#9253)
The same docker image is built multiple times after upgrading to bullseye; the build time increased to about 15 hours from 6 hours.
See log: https://dev.azure.com/mssonic/be1b070f-be15-4154-aade-b1d3bfb17054/_apis/build/builds/50390/logs/9
Line 1437: 2021-11-11T11:15:02.7094923Z [ building ] [ target/docker-sonic-telemetry.gz ]
Line 1446: 2021-11-11T11:37:41.1073304Z [ finished ] [ target/docker-sonic-telemetry.gz ]
Line 1459: 2021-11-11T11:38:20.6293007Z [ building ] [ target/docker-sonic-telemetry.gz-load ]
Line 1462: 2021-11-11T11:38:28.1250201Z [ finished ] [ target/docker-sonic-telemetry.gz-load ]
Line 2906: 2021-11-11T18:57:42.8207365Z [ building ] [ target/docker-sonic-telemetry.gz ]
Line 2917: 2021-11-11T19:43:47.1860961Z [ finished ] [ target/docker-sonic-telemetry.gz ]
Line 3997: 2021-11-11T22:49:35.0196252Z [ building ] [ target/docker-sonic-telemetry.gz ]
Line 4002: 2021-11-11T23:14:00.4127728Z [ finished ] [ target/docker-sonic-telemetry.gz ]

How I did it
Place the python wheels in another folder relative to the build distribution.

Co-authored-by: Ubuntu <xumia@xumia-vm1.jqzc3g5pdlluxln0vevsg3s20h.xx.internal.cloudapp.net>
2021-12-09 15:42:44 -08:00
Brian O'Connor
46bcda359c
[PINS] Build P4RT container for PINS (#9083)
- Add INCLUDE_PINS to config to enable/disable container
- Add Docker files and supporting resources
- Add sonic-pins submodule and associated make files

Submission containing materials of a third party:
    Copyright Google LLC; Licensed under Apache 2.0

#### Why I did it

Adds P4RT container to SONiC for PINS

The P4RT app is covered by this HLD:
https://github.com/pins/SONiC/blob/master/doc/pins/p4rt_app_hld.md

#### How I did it

Followed the pattern and templates used for other SONiC applications

#### How to verify it

Build SONiC with INCLUDE_P4RT set to "y".
Verify that the resulting build has a container called "p4rt" running.
You can verify that the service is up by running the following command on the SONiC switch:
```bash
sudo netstat -lpnt | grep p4rt
```
You should see the service listening on TCP port 9559.

#### Which release branch to backport (provide reason below if selected)

None

#### Description for the changelog

Build P4RT container for PINS
2021-12-07 11:11:25 -08:00
Stepan Blyshchak
d77757d607
[slave.mk] fix error: recursive variable references itself. (#9385)
- Why I did it
To fix the above error when running make slave.mk with PLATFORM=vs.

- How I did it
Instead of:
export BUILD_MULTIASIC_KVM=$(BUILD_MULTIASIC_KVM)
do just the export:
export BUILD_MULTIASIC_KVM

BUILD_MULTIASIC_KVM is already defined to be either empty, or from rules/config or from the environment - from Makefile.work. No need to dereference the variable in the export statement.

- How to verify it
PLATFORM=vs make -f slave.mk list # verify no error and BUILD_MULTIASIC_KVM is empty in the output
PLATFORM=vs BUILD_MULTIASIC_KVM=y make -f slave.mk list # verify no error and BUILD_MULTIASIC_KVM is set to y in the output

Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
2021-12-07 17:32:56 +02:00
Nazarii Hnydyn
e95ba9a408
[build]: Fix SAE to support debug flavor build. (#9435)
setup PACKAGE_NAME, PATH, MACHINE, VERSION info for debug docker

Signed-off-by: Nazarii Hnydyn <nazariig@nvidia.com>
2021-12-02 21:27:05 -08:00