<!--
Please make sure you've read and understood our contributing guidelines:
https://github.com/Azure/SONiC/blob/gh-pages/CONTRIBUTING.md
** Make sure all your commits include a signature generated with `git commit -s` **
If this is a bug fix, make sure your description includes "fixes #xxxx", or
"closes #xxxx" or "resolves #xxxx"
Please provide the following information:
-->
#### Why I did it
1. Fix the auditd log file path, because of a known issue: https://github.com/Azure/sonic-buildimage/issues/9548
2. When SONiC changed its base to Bullseye, auditd was upgraded from 2.8.4 to 3.0.2. In auditd 3.0.2 the plugin config file path changed to /etc/audit/plugins.d, but the upstream audisp-tacplus project has not followed this change and still installs its plugin config file to /etc/audit/audisp.d, so the plugin cannot be launched correctly. The code change in src/tacacs/audisp/patches/0001-Porting-to-sonic.patch fixes this issue.
#### How I did it
Fix tacacs plugin config file path.
Create /var/log/audit folder for auditd.
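A minimal sketch of the two fixes (the real change is carried by src/tacacs/audisp/patches/0001-Porting-to-sonic.patch and the build scripts; the plugin config file name below is an assumption):
```
# Sketch only: install the plugin config where auditd 3.x actually looks for it
# (auditd 3.x reads /etc/audit/plugins.d instead of /etc/audit/audisp.d).
sudo mkdir -p /etc/audit/plugins.d
sudo cp audisp-tacplus.conf /etc/audit/plugins.d/audisp-tacplus.conf

# auditd needs its log directory to exist before it can write audit.log.
sudo mkdir -p /var/log/audit
```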
#### How to verify it
Pass all UTs; also run the per-command accounting UT to validate that the plugin is loaded.
#### Which release branch to backport (provide reason below if selected)
<!--
- Note we only backport fixes to a release branch, *not* features!
- Please also provide a reason for the backporting below.
- e.g.
- [x] 202006
-->
- [ ] 201811
- [ ] 201911
- [ ] 202006
- [ ] 202012
- [ ] 202106
#### Description for the changelog
<!--
Write a short (one line) summary that describes the changes in this
pull request for inclusion in the changelog:
-->
Fix tacacs plugin config file path.
Create /var/log/audit folder for auditd.
#### A picture of a cute animal (not mandatory but encouraged)
This pull request integrates audisp-tacplus into SONiC for per-command accounting.
#### Why I did it
To support TACACS per-command accounting, we integrate the audisp-tacplus project into SONiC.
#### How I did it
1. Add auditd service to SONiC
2. Port and patch audisp-tacplus to SONiC
#### How to verify it
UT with CUnit to cover all new code in usersecret-filter.c
Also pass all current UT.
#### Which release branch to backport (provide reason below if selected)
N/A
#### Description for the changelog
Add audisp-tacplus for per-command accounting.
#### A picture of a cute animal (not mandatory but encouraged)
Python 2 is no longer available, so remove those packages, and remove
the pip2 commands. For picocom and systemd, just install from the
regular repo, since there are no backports yet.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
#### Why I did it
The Nokia IXR7250E platform requires the grpcio and grpcio-tools Python libraries, as well as the libprotobuf-dev and libgrpc++ libraries.
#### How I did it
Modified build_debian.sh to install libprotobuf-dev and libgrpc++ to support the Nokia NDK.
Modified sonic_debian_extension.j2 to install grpcio and grpcio-tools on the host.
Modified docker-platform-monitor/Dockerfile.j2 to install grpcio and grpcio-tools for the pmon container.
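A rough sketch of the kind of install steps this adds (package and file names are taken from the description; the exact commands in build_debian.sh, sonic_debian_extension.j2, and the pmon Dockerfile may differ):
```
# Host image build (build_debian.sh): native protobuf/gRPC libraries.
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install libprotobuf-dev libgrpc++-dev

# Host (sonic_debian_extension.j2) and pmon container (Dockerfile): Python bindings.
sudo LANG=C chroot $FILESYSTEM_ROOT pip3 install grpcio grpcio-tools
```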
#### How to verify it
Image runs successfully.
The branch refers to the branch name that the commit is on,
for example master, 202012, 201911, etc.
If there is no branch, the name will be HEAD.
The release is encoded in the /etc/sonic/sonic_release file;
the file is only available on a release branch.
It is not available on the master branch.
Example for the master branch:
```
build_version: 'master.602-6efc0a88'
debian_version: '10.7'
kernel_version: '4.19.0-9-2-amd64'
asic_type: vs
commit_id: '6efc0a88'
branch: 'master'
release: 'none'
build_date: Tue Dec 29 06:54:02 UTC 2020
build_number: 602
built_by: johnar@jenkins-worker-23
```
Example for the 202012 release branch:
```
build_version: '202012.602-6efc0a88'
debian_version: '10.7'
kernel_version: '4.19.0-9-2-amd64'
asic_type: vs
commit_id: '6efc0a88'
branch: '202012'
release: '202012'
build_date: Tue Dec 29 06:54:02 UTC 2020
build_number: 602
built_by: johnar@jenkins-worker-23
```
Signed-off-by: Guohan Lu <lguohan@gmail.com>
#### Why I did it
- To build flashrom properly with dependency tracking.
#### How I did it
- Moved flashrom code from platform/broadcom/sonic-platform-modules-dell/tools directory to src/flashrom directory.
- At the end, the flashrom_0.9.7_amd64.deb package is built, which will be installed on the devices.
- Currently flashrom builds only for Dell S6100 platforms.
#### Why I did it
To allow SSH connections from IPv6 addresses
Resolves https://github.com/Azure/sonic-buildimage/issues/7668
#### How I did it
In build_debian.sh, modify sshd_config file so as to enable listening for IPv6 connections
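A minimal sketch of the kind of sshd_config change this refers to (illustrative only; the actual edit in build_debian.sh may differ):
```
# Drop the IPv4-only bind so sshd listens on all addresses (IPv4 and IPv6);
# alternatively, an explicit "ListenAddress ::" line could be added instead.
sudo sed -i '/^ListenAddress 0\.0\.0\.0/d' $FILESYSTEM_ROOT/etc/ssh/sshd_config
```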
Why I did it
The SONiC switches get their docker images from local repo, populated during install with container images pre-built into SONiC FW. With the introduction of kubernetes, new docker images available in remote repo could be deployed. This requires dockerd to be able to pull images from remote repo.
Depending on the Switch network domain & config, it may or may not be able to reach the remote repo. In the case where remote repo is unreachable, we could potentially make Kubernetes server to also act as http-proxy.
How I did it
When the admin explicitly enables it, the kubernetes-server can be configured as the docker-proxy. But any update to the docker-proxy has to go via a service-conf file environment variable, implying a "service restart docker" is required. Restarting dockerd is very expensive, as it would restart all dockers, including the database docker.
To avoid a dockerd restart, pre-configure an http_proxy using an unused IP. When the k8s server is enabled to act as the http-proxy, an iptables entry is created to direct all traffic destined for the configured unused proxy IP to the kubernetes-master IP. This way, any update to the Kubernetes master config is just a matter of manipulating iptables, which is transparent to all modules until dockerd needs to download from the remote repo.
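A rough sketch of the redirection idea (IPs are placeholders; the real handling lives in the container manager scripts):
```
# dockerd is pre-configured once with an unused proxy IP (e.g. 172.16.1.1).
# When the k8s master is enabled as the proxy, simply DNAT that unused IP to
# the master; dockerd itself never needs to be restarted.
KUBE_MASTER_IP=10.1.2.3   # placeholder
sudo iptables -t nat -A OUTPUT -p tcp -d 172.16.1.1 -j DNAT --to-destination "$KUBE_MASTER_IP"
```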
How to verify it
1. Configure a switch such that the image repo is unreachable.
2. Pre-configure dockerd with http_proxy.conf using an unused IP (e.g. 172.16.1.1); see the drop-in sketch after this list.
3. Update ctrmgrd.service to invoke ctrmgrd.py with the "-p" option.
4. Configure a k8s server, and deploy an image for a feature with set_owner="kube".
5. Check whether the switch can successfully download the image.
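For the http_proxy.conf step above, the usual dockerd pre-configuration is a systemd drop-in like the following (the port and file name are assumptions):
```
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http_proxy.conf
[Service]
Environment="HTTP_PROXY=http://172.16.1.1:3128"
EOF
# One-time restart so dockerd picks up the (unused) proxy; later proxy changes
# are done purely via iptables.
sudo systemctl daemon-reload && sudo systemctl restart docker
```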
- Why I did it
To give SONiC Application Extension developers an environment to run and develop their apps.
- How I did it
Created sonic-sdk and sonic-sdk-buildenv dockers and their dbg versions.
- How to verify it
Build:
$ make -f slave target/sonic-sdk.gz target/sonic-sdk-buildenv.gz
PR# 7249 introduced a new bit of logic _after_ the point where the qemu based
build environment for ARM is removed. Hence the new logic fails when building
for ARM. Builds for AMD64 were not affected.
This commit moves the new logic introduced by PR# 7249 to just _before_ the
point where the qemu based build environment for ARM is removed. A comment is
added to reduce the likelihood of this sort of ARM build break from happening
again.
#### Why I did it
To build flashrom properly with dependency tracking.
#### How I did it
Moved flashrom code from platform/broadcom/sonic-platform-modules-dell/tools directory to src/flashrom directory.
At the end, the flashrom_0.9.7_amd64.deb package is built, which will be installed on the devices.
- Support compiling the SONiC ARM image on an ARM server. If ARM image compilation is executed on an ARM server instead of using QEMU mode on an x86 server, compile time can be reduced significantly.
- Add the kernel argument systemd.unified_cgroup_hierarchy=0 for upgrading systemd to version 247, according to #7228
- Rename the multiarch docker to sonic-slave-${distro}-march-${arch}
Co-authored-by: Xianghong Gu <xgu@centecnetworks.com>
Co-authored-by: Shi Lei <shil@centecnetworks.com>
Problem:
By default, groupadd for redis takes GID 1000. This forces the subsequently created admin group to get GID 1001.
As all TACACS users are created with 1000 as their GID, they end up in the redis group.
Fix:
Create the redis group *after* the admin group is created (see the sketch below)
Add a check that the admin group ID is 1000
Fixes #7180
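A minimal sketch of the ordering and the check (illustrative; not the literal build_debian.sh diff):
```
# Create the admin group first so it gets GID 1000, the GID TACACS users are created with.
sudo LANG=C chroot $FILESYSTEM_ROOT groupadd admin
# Fail the build if admin did not end up with GID 1000.
[ "$(sudo LANG=C chroot $FILESYSTEM_ROOT getent group admin | cut -d: -f3)" = "1000" ] || exit 1
# Only now create the redis group, which therefore gets a different GID.
sudo LANG=C chroot $FILESYSTEM_ROOT groupadd redis
```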
Update systemd to v247 in order to pick up the fix for "core: coldplug possible nop_job" systemd/systemd#13124.
Install systemd and systemd-sysv from buster-backports. Pass "systemd.unified_cgroup_hierarchy=0" as a kernel argument to force systemd not to use the unified cgroup hierarchy, otherwise dockerd won't start (moby/moby#16238).
Also, chown $FILESYSTEM_ROOT to root, otherwise the apt systemd installation complains; see the similar issue at https://unix.stackexchange.com/questions/593529/can-not-configure-systemd-inside-a-chrooted-environment
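A sketch of the three pieces described above (commands follow the usual build_debian.sh chroot pattern; details may differ):
```
# apt/systemd postinst complains if the chroot is not owned by root.
sudo chown root:root $FILESYSTEM_ROOT

# Install systemd v247 from buster-backports inside the image chroot.
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y -t buster-backports install systemd systemd-sysv

# The kernel argument below is appended to the image's boot command line so
# systemd keeps the legacy cgroup hierarchy and dockerd can still start:
#   systemd.unified_cgroup_hierarchy=0
```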
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
As of the merging of PR #6799, we are now installing a newer version of scapy via pip, therefore there is no longer a need to install the older Debian package.
- Why I did it
To move ‘sonic-host-service’, which is currently built as a separate package, into the ‘sonic-host-services’ package.
- How I did it
- Moved 'sonic-host-server' to 'src/sonic-host-services' and included it as part of the python3 wheel.
- Other files were moved to 'src/sonic-host-services-data' and included as part of the deb package.
- Changed build option ‘INCLUDE_HOST_SERVICE’ to ‘ENABLE_HOST_SERVICE_ON_START’ for enabling sonic-hostservice at boot-up by default.
**- Why I did it**
As per https://pypi.org/project/pip/, pip 21.0 no longer supports Python 2 as of Jan 2021. Most places in the codebase have already been pinned, but this one was missed.
**- How I did it**
Pin pip2 < version 21 in build_debian.sh
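A minimal sketch of the pin (the exact invocation in build_debian.sh may differ):
```
# Keep Python 2 on the last pip release line that still supports it.
sudo LANG=C chroot $FILESYSTEM_ROOT python2 -m pip install 'pip<21'
```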
The following changes were done for ebtables:
- Support for multi-ASIC platforms: ebtables filters are installed in each namespace for multi-ASIC and not on the host; on single-ASIC they are installed on the host.
- For multi-ASIC platforms we don't want to install the filters on the host, otherwise namespace-to-namespace communication does not happen, since ARP requests are not forwarded.
- Updated to use a text file to restore ebtables rules instead of the binary format. Rules are restored as part of the database docker init instead of rc.local (see the sketch below).
- Removed the ebtables service files for Buster, as they are not needed since the filters are restored/installed as part of the database docker init.
All the pre-installed ebtables* binaries are the same as ebtables-legacy-*.
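A minimal sketch of the text-format restore performed at database docker init (the rules file path is an assumption):
```
# Restore ebtables rules from a text dump (ebtables-save format) instead of
# the old binary atomic-file blob; runs once when the database docker starts.
ebtables-restore < /etc/ebtables.filter.cfg
```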
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
- Make PDDF code compliant with both Python 2 and Python 3
- Align code with PEP8 standards using autopep8
- Build and install both Python 2 and Python 3 PDDF packages
Depending on the performance characteristics of a given hardware platform, it's possible to exceed the default 120 second kernel timeout during I/O intensive operations like image installation. This can cause a kernel panic like so:
kernel:[ 852.441781] Kernel panic - not syncing: hung_task: blocked tasks
If this happens during image installation, it's possible for the install to become corrupted and leave the device in an unreachable state that requires a power cycle to resolve. This risk increases as image size continues to increase. So, we need to increase the timeout so that we don't encounter kernel panics on devices with lower disk throughput.
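A minimal sketch of such a timeout increase (the file name and the 300-second value are assumptions):
```
# Give slower disks more headroom before the hung-task watchdog declares a
# blocked task (the kernel default is 120 seconds).
echo 'kernel.hung_task_timeout_secs = 300' | sudo tee /etc/sysctl.d/hung_task_timeout.conf
sudo sysctl -p /etc/sysctl.d/hung_task_timeout.conf
```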
Signed-off-by: Danny Allen <daall@microsoft.com>
- Why I did it
scripts/collect_host_image_version_files.sh fails with below error:
scripts/collect_host_image_version_files.sh target ./fsroot
/usr/sbin/chroot: failed to run command 'post_run_buildinfo': No such file or directory
/bin/cp: cannot stat './fsroot/usr/local/share/buildinfo/post-versions': No such file or directory
- How I did it
The issue is that qemu-arm-static is removed before this step, so I moved the cleanup step to the end.
Signed-off-by: Sabareesh Kumar Anandan <sanandan@marvell.com>
Certain platform-specific packages (sonic-platform-xyz) install files onto the rootfs, which are placed on the read-write mount path /host/image-name/rw/...
When ntpd starts, it tries to read /usr/bin, /usr/sbin/ and /usr/local/bin, which in turn also link to the read-write mount path.
ntpd then gets the AppArmor warning messages below.
Log:
audit: type=1400 audit(1606226503.240:21): apparmor="DENIED" operation="open" profile="/usr/sbin/ntpd" name="/image-HEAD-dirty-20201111.173951/rw/usr/local/bin/" pid=3733 comm="ntpd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
audit: type=1400 audit(1606226503.240:22): apparmor="DENIED" operation="open" profile="/usr/sbin/ntpd" name="/image-HEAD-dirty-20201111.173951/rw/usr/sbin/" pid=3733 comm="ntpd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
audit: type=1400 audit(1606226503.240:23): apparmor="DENIED" operation="open" profile="/usr/sbin/ntpd" name="/image-HEAD-dirty-20201111.173951/rw/usr/bin/" pid=3733 comm="ntpd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Fix:
Add the rw/.. mount path, similar to the root path access already provided for ntpd, in /etc/apparmor.d/usr.sbin.ntpd (see the sketch below).
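A sketch of the kind of rules this adds inside the ntpd profile (the globs are assumptions; the exact lines in the fix may differ):
```
# Rules of this shape go inside the profile block of /etc/apparmor.d/usr.sbin.ntpd:
#   /image-*/rw/usr/bin/        r,
#   /image-*/rw/usr/sbin/       r,
#   /image-*/rw/usr/local/bin/  r,
# Reload the profile afterwards so the new rules take effect.
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.ntpd
```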
Signed-off-by: Antony Rheneus <arheneus@marvell.com>
Install the 'wheel' package in the host OS (along with python3 and python3-distutils, which are also needed for building some Python packages) to eliminate error messages like the following:
```
Running setup.py bdist_wheel for watchdog: started
Running setup.py bdist_wheel for watchdog: finished with status 'error'
Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-Qd3K08/watchdog/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-0AHpMe --python-tag cp27:
usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: -c --help [cmd1 cmd2 ...]
or: -c --help-commands
or: -c cmd --help
error: invalid command 'bdist_wheel'
----------------------------------------
Failed building wheel for watchdog
```
These error messages appear to have no impact on the image build, because the Python package still seems to get installed successfully afterward; only the building of a wheel package fails. Therefore, this is more of a cosmetic fix than an actual bug fix.
This is an addendum to https://github.com/Azure/sonic-buildimage/pull/6182.
Also upgrade pip and install more recent version of setuptools package via PyPI.
Create a new file under "sysctl.d" with the desired panic conditions.
It will trigger a vmcore dump using kdump-tools in these situations.
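A sketch of such a sysctl.d file (the file name and the exact set of panic conditions are assumptions):
```
# Make the kernel panic on serious faults so kdump-tools can capture a vmcore.
cat <<'EOF' | sudo tee /etc/sysctl.d/kdump.conf
kernel.panic_on_oops = 1
kernel.panic_on_io_nmi = 1
kernel.panic_on_unrecovered_nmi = 1
EOF
sudo sysctl --system   # apply without a reboot
```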
Signed-off-by: Shlomi Bitton <shlomibi@nvidia.com>
Originally this line was used to mark all previously installed (debootstrap-installed) packages as auto, so that later, if no other package depends on any of them, they would be auto-removed. It seems we gained little from this line, so let's remove it.
This change introduces PDDF which is described here: https://github.com/Azure/SONiC/pull/536
Most of the platform bring-up effort goes into developing the platform device drivers and SONiC platform APIs and validating them. Typically each platform vendor writes their own drivers and platform APIs, which are very tailor-made to that platform. This involves writing code, building, installing it on the target platform devices, and testing. Many of the details of the platform are hard-coded into these drivers, from the HW spec. They go through this cycle repeatedly until everything works fine and is validated, before upstreaming the code.
PDDF aims to make this platform driver and platform API development process much simpler by providing a data-driven development framework. This is enabled by:
- JSON descriptor files for platform data
- Generic data-driven drivers for various devices
- Generic SONiC platform APIs
- Vendor-specific extensions for customisation and extensibility
Signed-off-by: Fuzail Khan <fuzail.khan@broadcom.com>
1. Update SSL CA certificates for secure download [arm specific]
2. Use redis-tools from the sonic-storage blob for docker-base-stretch
Signed-off-by: Sabareesh Kumar Anandan <sanandan@marvell.com>
- Convert script to Python 3
- Need to open file in binary mode before hashing due to new string data type in Python 3 being unicode by default. This should probably have been done regardless.
- Reorganize imports alphabetically
- When running the script, don't explicitly call `python`. Instead let the program loader use the interpreter specified in the shebang (which is now `python3`).
It should no longer be necessary to explicitly install the 'wheel' package, as SONiC packages built as wheels should specify 'wheel' as a dependency in their setup.py files. Therefore, pip[3] should check for the presence of 'wheel' and install it if it isn't present before attempting to call 'setup.py bdist_wheel' to install the package.
We are moving toward building all Python packages for SONiC as wheel packages rather than Debian packages. This will also allow us to more easily transition to Python 3.
Python files are now packaged in the "sonic-utilities" Python wheel. Data files are now packaged in the "sonic-utilities-data" Debian package.
**- How I did it**
- Build and install sonic-utilities as a Python package
- Remove explicit installation of wheel dependencies, as these will now get installed implicitly by pip when installing sonic-utilities as a wheel
- Build and install new sonic-utilities-data package to install data files required by sonic-utilities applications
- Update all references to sonic-utilities scripts/entrypoints to either reference the new /usr/local/bin/ location or remove absolute path entirely where applicable
Submodule updates:
* src/sonic-utilities aa27dd9...2244d7b (5):
> Support building sonic-utilities as a Python wheel package instead of a Debian package (#1122)
> [consutil] Display remote device name in show command (#1120)
> [vrf] fix check state_db error when vrf moving (#1119)
> [consutil] Fix issue where the ConfigDBConnector's reference is missing (#1117)
> Update to make config load/reload backward compatible. (#1115)
* src/sonic-ztp dd025bc...911d622 (1):
> Update paths to reflect new sonic-utilities install location, /usr/local/bin/ (#19)
This PR limits the number of calls to sonic-cfggen to one call per iteration instead of the current 3 calls per iteration.
The PR also installs jq on the host for future scripts if needed.
Signed-off-by: Tamer Ahmed <tamer.ahmed@microsoft.com>