Support saiserver v2 with python3 and thrift 0.13.0
Add variables to support saiserver v2
Build a different thrift version in saithrift, depending on the saiserver version
Build different versions of saiserver
Make the saiserver binary and the saiserver docker with a version number
Tests done:
Built two different versions of saiserver in the local build environment
Added saiserver to buster
Co-authored-by: Yang Wang <yangwang1@microsoft.com>
[Build]: Fix hundreds of thousands lines of logs printed in marvell-armhf
The issue is caused by the bad format of the Marvell SAI package mrvllibsai_armhf_1.7.1-6.deb. Increase the waiting time to reduce the amount of logging and the waste of CPU.
Why I did it
Need to be able to run smartctl when pmon docker is not running.
How I did it
Removed the pmon dependency for smartctl as well as the command wrapper, and added it to the debian-extension.
How to verify it
Stop pmon
Run smartctl from the host and verify it runs without error
* [arm64]: Fix registration of the qemu interpreters
The current code doesn't properly run the container that registers the
qemu interpreters. It checks to see if the container is "known" by
Docker, but that doesn't indicate whether it's been run or not.
Therefore, just always register the qemu interpreters in the kernel, to
make sure the binary that's in the slave images that we build is used.
* [build]: Reduce the number of python calls
Modify the BLDENV and PROJECT_ROOT variables in slave.mk to use
immediate expansion instead of lazy expansion. Neither of these
variables should change for the duration of the build in each slave
container, so evaluate them once instead of every time they're
referenced (see the sketch after this item).
When running `make configure` for broadcom arm64 (where all of the slave
images are already built) on an amd64 host, this reduces the time spent
in each slave container from 4.5-5 minutes to 2 minutes.
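For illustration, a minimal sketch of the difference in GNU Make; the shell command shown is a hypothetical stand-in for the real definitions:
```make
# Lazy (deferred) expansion: the shell command re-runs on every reference.
BLDENV = $(shell scripts/detect-bldenv.sh)

# Immediate expansion: the shell command runs once, when slave.mk is parsed.
BLDENV := $(shell scripts/detect-bldenv.sh)
```
Since the value cannot change during the build, `:=` avoids the repeated subprocess calls.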
* [sonic-slave]: Upgrade the qemu used for Bullseye arm64 to 6.1.0
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
After upgrading to Bullseye, the same docker image is built multiple times, which increased the build time from about 6 hours to about 15 hours.
See log: https://dev.azure.com/mssonic/be1b070f-be15-4154-aade-b1d3bfb17054/_apis/build/builds/50390/logs/9
Line 1437: 2021-11-11T11:15:02.7094923Z [ building ] [ target/docker-sonic-telemetry.gz ]
Line 1446: 2021-11-11T11:37:41.1073304Z [ finished ] [ target/docker-sonic-telemetry.gz ]
Line 1459: 2021-11-11T11:38:20.6293007Z [ building ] [ target/docker-sonic-telemetry.gz-load ]
Line 1462: 2021-11-11T11:38:28.1250201Z [ finished ] [ target/docker-sonic-telemetry.gz-load ]
Line 2906: 2021-11-11T18:57:42.8207365Z [ building ] [ target/docker-sonic-telemetry.gz ]
Line 2917: 2021-11-11T19:43:47.1860961Z [ finished ] [ target/docker-sonic-telemetry.gz ]
Line 3997: 2021-11-11T22:49:35.0196252Z [ building ] [ target/docker-sonic-telemetry.gz ]
Line 4002: 2021-11-11T23:14:00.4127728Z [ finished ] [ target/docker-sonic-telemetry.gz ]
How I did it
Place the python wheels in another folder relative to the build distribution.
Co-authored-by: Ubuntu <xumia@xumia-vm1.jqzc3g5pdlluxln0vevsg3s20h.xx.internal.cloudapp.net>
- Why I did it
To fix the above error when running make -f slave.mk with PLATFORM=vs.
- How I did it
Instead of:
export BUILD_MULTIASIC_KVM=$(BUILD_MULTIASIC_KVM)
do just the export:
export BUILD_MULTIASIC_KVM
BUILD_MULTIASIC_KVM is already defined to be either empty, or set from rules/config or from the environment, via Makefile.work. There is no need to dereference the variable in the export statement.
- How to verify it
PLATFORM=vs make -f slave.mk list # verify no error and BUILD_MULTIASIC_KVM is empty in the output
PLATFORM=vs BUILD_MULTIASIC_KVM=y make -f slave.mk list # verify no error and BUILD_MULTIASIC_KVM is set to y in the output
Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
- Add INCLUDE_PINS to config to enable/disable container
- Add Docker files and supporting resources
- Add sonic-pins submodule and associated make files
Submission containing materials of a third party:
Copyright Google LLC; Licensed under Apache 2.0
#### Why I did it
Adds P4RT container to SONiC for PINS
The P4RT app is covered by this HLD:
https://github.com/pins/SONiC/blob/master/doc/pins/p4rt_app_hld.md
#### How I did it
Followed the pattern and templates used for other SONiC applications
#### How to verify it
Build SONiC with INCLUDE_P4RT set to "y".
Verify that the resulting build has a container called "p4rt" running.
You can verify that the service is up by running the following command on the SONiC switch:
```bash
sudo netstat -lpnt | grep p4rt
```
You should see the service listening on TCP port 9559.
#### Which release branch to backport (provide reason below if selected)
None
#### Description for the changelog
Build P4RT container for PINS
This pull request integrates audisp-tacplus into SONiC for per-command accounting.
#### Why I did it
To support TACACS per-command accounting, we integrated the audisp-tacplus project into SONiC.
#### How I did it
1. Add auditd service to SONiC
2. Port and patch audisp-tacplus to SONiC
#### How to verify it
UT with CUnit to cover all new code in usersecret-filter.c
Also pass all current UT.
#### Which release branch to backport (provide reason below if selected)
N/A
#### Description for the changelog
Add audisp-tacplus for per-command accounting.
#### A picture of a cute animal (not mandatory but encouraged)
#### Why I did it
Changes required for feature "Event Driven TechSupport Invocation & CoreDump Mgmt". [HLD](https://github.com/Azure/SONiC/pull/818)
Requires: https://github.com/Azure/sonic-utilities/pull/1796.
Merging in any order would be fine.
Summary of the changes:
- Added the YANG models for the new tables introduced as a part of this feature.
- Enhanced init_cfg.json with the default config required.
- Added a compile-time flag which enables/disables the config required for this feature inside init_cfg.json.
- Enhanced the supervisor-proc-exit-listener script to populate `<feature>:<critical_proc> = <comm>:<pid>` info in the STATE_DB when it observes a proc exit notification for the critical processes running inside the docker (see the sketch below).
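A minimal sketch of the assumed shape of that STATE_DB entry, written with redis-cli; the table name and values are illustrative assumptions, and STATE_DB is redis database 6 in the default SONiC layout:
```bash
# Hypothetical example: feature swss, critical process orchagent, pid 1234.
redis-cli -n 6 HSET "CRITICAL_PROCESS_EXIT_INFO" "swss:orchagent" "orchagent:1234"
```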
This pull request adds a bash plugin for TACACS+ per-command authorization.
#### Why I did it
1. To support TACACS per-command authorization, we check the user's command before executing it.
2. Fix the issue where libtacsupport.so can't parse tacplus_nss.conf correctly:
Support the debug=on setting.
Support putting the server address and secret in the same row.
3. Fix the issue where the parse_config_file method does not reset the server list before parsing the config file.
#### How I did it
The bash plugin will be called before every user command, and checks the command with the remote TACACS+ server for per-command authorization.
#### How to verify it
UT with CUnit covers all code in this plugin.
Also pass all current UT.
#### Which release branch to backport (provide reason below if selected)
N/A
#### Description for the changelog
Add Bash TACACS+ plugin.
#### A picture of a cute animal (not mandatory but encouraged)
Remove Python 2 package installation from the base image. For container
builds, reference Python 2 packages only if we're not building for
Bullseye.
For libyang, don't build Python 2 bindings at all, since they don't seem
to be used.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
All docker containers will be built as Buster containers, from a Buster
slave. The base image and remaining packages that are installed onto the
host system will be built for Bullseye, from a Bullseye slave.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
- Why I did it
In case an app.ext requires a dependency syncd^1.0.0, the RPC version of syncd will not satisfy this constraint, since 1.0.0-rpc < 1.0.0. It is not correct to put 'rpc' as a pre-release identifier. Instead, put 'rpc' as build metadata in the version: 1.0.0+rpc, which satisfies the constraint ^1.0.0.
- How I did it
Changed the way the versions of the RPC and DBG images are constructed.
- How to verify it
Install app.ext with syncd^1.0.0 dependency on a switch with RPC syncd docker.
Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
- Why I did it
docker-orchagent was missing the libsairedis version label.
E.g. Currently only swsscommon is recorded in the labels:
admin@arc-switch1038:~$ docker inspect docker-orchagent | grep versions
"com.azure.sonic.versions.libswsscommon": "1.0.0"
With this change libsairedis is also recorded:
admin@arc-switch1038:~$ docker inspect docker-orchagent | grep versions
"com.azure.sonic.versions.libswsscommon": "1.0.0"
"com.azure.sonic.versions.libsairedis": "1.0.0"
- How I did it
By expanding the list of dependencies.
- How to verify it
Build and verify the label for libsairedis exists in docker-orchagent.
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
#### Why I did it
Fix a recent build error introduced by a pre-release version of redis-py. This is a general issue because `python setup.py install` (i.e. `easy_install`) does not ignore pre-release versions. The fix is suggested by https://github.com/pypa/setuptools/issues/855#issuecomment-583803959
Linkmgrd monitors link status, mux status, and link state. Should
the link become unhealthy, linkmgrd will trigger a mux switchover
on a standby ToR, ensuring uninterrupted service to servers/blades.
This PR is the initial implementation of linkmgrd.
Also, the docker-mux container holds packages related to maintaining and managing
the mux cable. It currently runs the linkmgrd binary that monitors and switches
the mux if needed.
This PR also introduces the mux container and starts linkmgrd at startup when
the build is configured with INCLUDE_MUX=y.
Edit: linkmgrd PR will follow.
Signed-off-by: Tamer Ahmed <tamer.ahmed@microsoft.com>
Related work items: #2315, #3146150
This pull request adds a plugin support library to bash.
A TACACS+ plugin for bash will be created in another PR, which will bring the per-command authorization feature to bash.
Why I did it
To support TACACS per-command authorization, we check the user's command before executing it.
How I did it
Add plugin support to bash.
How to verify it
UT with CUnit under the bash project covers all new code in plugin.c.
Also pass all current UT.
Which release branch to backport (provide reason below if selected)
N/A
Description for the changelog
Add plugin support to bash.
Why I did it
Pre-requisite: #8269
To be able to generate a multi-asic KVM image.
To provide the flexibility to generate a single-asic image only, or both single-asic and multi-asic images.
How I did it
Add a new build parameter, BUILD_MULTIASIC_KVM; if set to "y", the multi-asic VS target KVM images will be generated. If not, only the single-asic VS image will be generated.
Make changes to build_image.sh to generate 4-asic and 6-asic KVM images if the BUILD_MULTIASIC_KVM parameter is set to y.
How to verify it
Generate single-asic VS as currently done, no change in build steps:
make configure PLATFORM=vs
make target/sonic-vs.img.gz - will generate only the single-asic KVM image.
make BUILD_MULTIASIC_KVM=y target/sonic-vs.img.gz - will generate both the single-asic and the multi-asic KVM images.
should generate:
sonic-vs.bin
sonic-vs.img.gz
sonic-4asic-vs.img.gz
sonic-6asic-vs.img.gz
With a Bullseye base image, and PTF being based on Stretch, if there are
autogenerated files that are reused from the Bullseye build for Stretch,
then autoconf-based packages will fail. This is because the default set
of CFLAGS that dpkg passes in includes `-ffile-prefix-map=`, which is
not a supported flag with the GCC in Stretch. Then, when running `make
clean` in a Stretch environment, make will try to regenerate the
autoconf-based files with the cached set of CFLAGS from Bullseye. This
causes an error, since that flag is unknown.
To work around this, after each build, clean up all built objects and
autogenerated files. This makes sure that it is cleaned up in the same
environment as the build environment.
Note that this issue affects just autoconf packages. For apps using just
regular Makefiles, they will probably be fine (they'll still be cleaned
up as well).
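A rough sketch of the workaround's shape for an autoconf-based package; the path and exact commands are illustrative, not the actual rules:
```bash
# Build the package, then clean immediately in the same (Bullseye) slave,
# so no cached autoconf output with Bullseye-only CFLAGS survives for Stretch.
cd src/some-autoconf-pkg
dpkg-buildpackage -b -us -uc
fakeroot debian/rules clean
```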
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Why I did it
Support building armhf/arm64 platforms on an ARM-based system without the QEMU emulator.
When building armhf/arm64 on an ARM-based system, it is not necessary to use the QEMU emulator.
How I did it
Build armhf on an armhf system, or arm64 on an arm64 system; by default, the QEMU emulator will not be used.
When building armhf on arm64 with armhf docker enabled, images will be built without the emulator automatically. This is based on how the docker service is run (see the sketch below).
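A sketch of the underlying decision, not the actual build logic; the variable names are illustrative:
```bash
# Skip the QEMU emulator when the host already runs the target architecture.
host_arch=$(dpkg --print-architecture)   # e.g. arm64 on an arm64 host
target_arch=arm64                        # illustrative target platform
if [ "$host_arch" = "$target_arch" ]; then
    echo "Native build: qemu-user-static is not needed"
fi
```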
Docker base image change:
For amd64, change from debian: to amd64/debian:
For arm64, change from multiarch/debian-debootstrap:arm64- to arm64v8/debian:
For armhf, change from multiarch/debian-debootstrap:armhf- to arm32v7/debian:
See https://github.com/docker-library/official-images#architectures-other-than-amd64
The mapping relations:
arm32v6 --- armel
arm32v7 --- armhf
arm64v8 --- arm64
The armhf docker image is deprecated (see https://hub.docker.com/r/armhf/debian); use arm32v7 instead.
- Why I did it
Make the DHCP relay docker an extension. DHCP relay now carries the DHCP relay commands CLI plugin and has a complete manifest.
It is installed as an extension if INCLUDE_DHCP_RELAY is set to y.
DEPENDS on #5939
- How I did it
Modify the DHCP relay docker makefile and dockerfile. Make changes to sonic_debian_extension.j2 to install sonic packages.
I moved the DHCP-related CLI tests from sonic-utilities to the DHCP relay docker.
This PR introduces a way to write a plugin as part of a docker image and run the tests from the cli-plugin-tests directory under the docker directory.
The test result is available in target/docker-dhcp-relay.gz.log:
[ REASON ] : target/docker-dhcp-relay.gz does not exist NON-EXISTENT PREREQUISITES: docker-start target/docker-config-engine-buster.gz-load target/python-wheels/sonic_utilities-1.2-py3-none-any.whl-install target/debs/buster/python3-swsscommon_1.0.0_amd64.deb-install
[ FLAGS FILE ] : []
[ FLAGS DEPENDS ] : []
[ FLAGS DIFF ] : []
============================= test session starts ==============================
platform linux -- Python 3.7.3, pytest-3.10.1, py-1.7.0, pluggy-0.8.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /sonic/dockers/docker-dhcp-relay/cli-plugin-tests, inifile:
plugins: cov-2.6.0
collecting ... collected 10 items
test_config_dhcp_relay.py::TestConfigVlanDhcpRelay::test_plugin_registration PASSED [ 10%]
test_config_dhcp_relay.py::TestConfigVlanDhcpRelay::test_config_vlan_add_dhcp_relay_with_nonexist_vlanid PASSED [ 20%]
test_config_dhcp_relay.py::TestConfigVlanDhcpRelay::test_config_vlan_add_dhcp_relay_with_invalid_vlanid PASSED [ 30%]
test_config_dhcp_relay.py::TestConfigVlanDhcpRelay::test_config_vlan_add_dhcp_relay_with_invalid_ip PASSED [ 40%]
test_config_dhcp_relay.py::TestConfigVlanDhcpRelay::test_config_vlan_add_dhcp_relay_with_exist_ip PASSED [ 50%]
test_config_dhcp_relay.py::TestConfigVlanDhcpRelay::test_config_vlan_add_del_dhcp_relay_dest PASSED [ 60%]
test_config_dhcp_relay.py::TestConfigVlanDhcpRelay::test_config_vlan_remove_nonexist_dhcp_relay_dest PASSED [ 70%]
test_config_dhcp_relay.py::TestConfigVlanDhcpRelay::test_config_vlan_remove_dhcp_relay_dest_with_nonexist_vlanid PASSED [ 80%]
test_show_dhcp_relay.py::TestVlanDhcpRelay::test_plugin_registration PASSED [ 90%]
test_show_dhcp_relay.py::TestVlanDhcpRelay::test_dhcp_relay_column_output PASSED [100%]
=============================== warnings summary ===============================
/usr/local/lib/python3.7/dist-packages/tabulate.py:7
/usr/local/lib/python3.7/dist-packages/tabulate.py:7: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import namedtuple, Iterable
-- Docs: https://docs.pytest.org/en/latest/warnings.html
==================== 10 passed, 1 warnings in 0.35 seconds =====================
This adds the Makefile changes to use the Bullseye slave image, but
doesn't use it by default. There should be no functional changes with
this change (Buster will still be used for now).
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Signed-off-by: Stepan Blyschak <stepanb@mellanox.com>
Why I did it
To support building the DHCP relay as an extension and installing it at build time.
How I did it
Created infrastructure. Users need to define their packages in rules/sonic-packages.mk
How to verify it
Together with #6531
Introduce new sonic-buildimage images for Broadcom DNX ASIC family.
sonic-broadcom-dnx.bin
sonic-aboot-broadcom-dnx.swi
How I did it
NO CHANGE to existing make commands
make init; make configure PLATFORM=broadcom; make target/sonic-aboot-broadcom.swi; make target/sonic-broadcom.bin
The difference now is that it will result in new broadcom images for DNX asic family as well.
sonic-broadcom.bin, sonic-broadcom-dnx.bin
sonic-aboot-broadcom.swi, sonic-aboot-broadcom-dnx.swi
Note: This PR also adds support for Broadcom SAI 5.0 (based on SAI 1.8) for DNX-based platforms, plus changes in the platform x86_64-arista_7280cr3_32p4 bcm config files and platform_env.conf files.
#### Why I did it
Provide the possibility to specify curl options, as the present curl support provided in Azure/sonic does not offer options like --user, which some of the Cisco artifacts require.
#### How I did it
Add extensions to the slave.mk file to include curl options as follows:
$($*_CURL_OPTIONS)
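For illustration, a package rule file could then define per-package options; the package and file names below are hypothetical:
```make
# rules/foo.mk (hypothetical): options picked up via $($*_CURL_OPTIONS)
# when slave.mk downloads the matching target.
FOO = foo_1.0_amd64.deb
$(FOO)_CURL_OPTIONS = --user foo:'bar!'
```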
#### How to verify it
Option 1) Use curl -u and environment variables.
Pass --user <user:password> via the curl options. Ex: --user foo:'bar!'
curl -u ${BASIC_AUTH_HEADER} https://foo.bar
This works to obscure a password/credential in a terminal session that someone else might see directly or via screen share.
Option 2) Use curl -n.
If you run Linux, create a ~/.netrc file, insert your creds there, and use curl -n.
chmod the file to 400. curl knows how to extract your creds from the file silently; you never have to type creds on the command line again.
If you run Windows and use curl, you must name the file _netrc. As on *nix, the file should be in your home directory and should have appropriate permissions.
For administrative APIs, my .netrc file looks like this:
machine foobar-linux
login foo
password bar
- Why I did it
To give SONiC Application Extension developers an environment to run and develop their apps.
- How I did it
Created sonic-sdk and sonic-sdk-buildenv dockers and their dbg versions.
- How to verify it
Build:
$ make -f slave.mk target/sonic-sdk.gz target/sonic-sdk-buildenv.gz
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
This PR is part of SONiC Application Extension
Depends on #5938
- Why I did it
To provide an infrastructure change in order to support SONiC Application Extension feature.
- How I did it
Label every installable SONiC Docker with a minimal required manifest and auto-generate packages.json file based on
installed SONiC images.
- How to verify it
Build an image, execute the following command:
admin@sonic:~$ docker inspect docker-snmp:1.0.0 | jq '.[0].Config.Labels["com.azure.sonic.manifest"]' -r | jq
cat the /var/lib/sonic-package-manager/packages.json file to verify that all dockers are listed there.
apt package handling: these are part of the exported variables for .j2 files and are needed for Debian and its derivatives.
How I did it
Add support to slave.mk files to export APT_PACKAGES and DBG_APT_PACKAGES
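A minimal sketch of the assumed form of the export in slave.mk:
```make
# Export the apt package lists so the .j2 templates can consume them
# when rendering Debian-based docker build files.
export APT_PACKAGES
export DBG_APT_PACKAGES
```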
How to verify it
The apt package provides the apt management tool, a high-level command-line interface for better interactive usage. APT also includes command-line programs for dealing with packages, which use the library. Three such programs are apt, apt-get and apt-cache; they can be verified for their existence.
#### Why I did it
To build flashrom properly with dependency tracking.
#### How I did it
Moved the flashrom code from the platform/broadcom/sonic-platform-modules-dell/tools directory to the src/flashrom directory.
In the end, the flashrom_0.9.7_amd64.deb package is built, which will be installed on the devices.
There was an existing omission in the build jobs: SONIC_CONFIG_MAKE_JOBS wasn't being passed through to the underlying make command in SONIC_MAKE_DEBS, meaning we weren't fully utilizing all of the CPUs during builds.
Co-authored-by: Joe Tricklebank-Owens <Joseph.Tricklebank-Owens@metaswitch.com>
- Support compiling the SONiC ARM image on an ARM server. If ARM image compilation is executed on an ARM server instead of using QEMU mode on an x86 server, compile time can be reduced significantly.
- Add the kernel argument systemd.unified_cgroup_hierarchy=0 for upgrading systemd to version 247, according to #7228
- Rename the multiarch docker to sonic-slave-${distro}-march-${arch}
Co-authored-by: Xianghong Gu <xgu@centecnetworks.com>
Co-authored-by: Shi Lei <shil@centecnetworks.com>
The lack of no_proxy support in the SONiC build environment forces enterprise companies to put in internal hacks to ensure the proxies don't point at some intranet site for artifactory downloads, etc. Using no_proxy is familiar proxy-settings terminology and excludes traffic destined to certain hosts.
Most Web clients hence support connection to proxy servers via environment variables:
http_proxy / HTTP_PROXY
https_proxy / HTTPS_PROXY
no_proxy / NO_PROXY
These variables tell the client what URL should be used to access the proxy servers and which exceptions should be made.
How to verify it
Simply set up the variable in the bash shell at build time.
export no_proxy="internal.example.com,internal2.example.com"
Usage is:
no_proxy is a comma- or space-separated list of machine or domain names, with an optional :port part. If no :port part is present, it applies to all ports on that domain.
- Why I did it
I made the docker_img_ctl.j2 applicable for more dockers (including application extensions dockers) by adding an option not to mount tmpfs on /tmp/ and /var/tmp/. In some applications /tmp/ is a different docker volume which can't be tmpfs.
Also, I added the ability to pass REPO[:TAG]|[@digest]/IMAGE_ID instead of just the REPO name.
- How I did it
Modified docker_img_ctl.j2 and docker makefiles.
- How to verify it
Run it on the switch.
- Why I did it
To move 'sonic-host-service', which is currently built as a separate package, into the 'sonic-host-services' package.
- How I did it
- Moved 'sonic-host-server' to 'src/sonic-host-services' and included it as part of the python3 wheel.
- Other files were moved to 'src/sonic-host-services-data' and included as part of the deb package.
- Changed the build option 'INCLUDE_HOST_SERVICE' to 'ENABLE_HOST_SERVICE_ON_START' for enabling sonic-hostservice at boot-up by default.
Some commands used during the build prompt the user interactively, but this is not expected during a build. Since most output is collected into a log file, the user cannot see the prompt and may think the build process hangs.
- How I did it
Use the mv command in non-interactive mode.
Redirect stdin to /dev/null if the command output is collected into a log file.
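For illustration (the command and paths are placeholders):
```bash
# mv in non-interactive mode: -f never prompts, even if the target exists.
mv -f build/output.bin target/output.bin

# With output collected into a log, redirect stdin from /dev/null so an
# unexpected prompt fails immediately instead of silently hanging the build.
some_build_step < /dev/null >> target/build.log 2>&1
```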
Fix #119
When parallel build is enabled, multiple dpkg-buildpackage
instances run at the same time. /var/lib/dpkg is shared
by all instances, and /var/lib/dpkg/updates could be corrupted
and cause a build failure.
The fix is to use an overlay fs to mount a separate /var/lib/dpkg
for each dpkg-buildpackage instance so that the instances do not
affect each other.
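A sketch of the overlay-mount idea; the paths are illustrative, and the real change wires this into each build instance:
```bash
# Give one dpkg-buildpackage instance its own writable view of /var/lib/dpkg.
inst=/tmp/dpkg-inst1
mkdir -p "$inst/upper" "$inst/work"
sudo mount -t overlay overlay \
    -o lowerdir=/var/lib/dpkg,upperdir=$inst/upper,workdir=$inst/work \
    /var/lib/dpkg
```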
Signed-off-by: Guohan Lu <lguohan@gmail.com>
* Enable telemetry for ARM64 by default
* [Centec] Upgrade the Centec syncd docker to Buster; libjemalloc2 has been installed in docker-base-buster, so remove libjemalloc1 from docker-syncd-centec's Dockerfile.j2
Co-authored-by: Gu Xianghong <xgu@centecnetworks.com>
- Make PDDF code compliant with both Python 2 and Python 3
- Align code with PEP8 standards using autopep8
- Build and install both Python 2 and Python 3 PDDF packages