Cherry-pick PR #12592
Why I did it
Nameserver and domain entries from the build system's fsroot get into the SONiC image.
How I did it
Clear /etc/resolv.conf before building the image.
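A minimal sketch of the idea, assuming the $FILESYSTEM_ROOT variable that build_debian.sh uses for the target rootfs (the actual change may differ):
```
# Truncate resolv.conf in the target rootfs so build-host entries don't leak in.
sudo cp /dev/null $FILESYSTEM_ROOT/etc/resolv.conf
```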
How to verify it
Built an image with this change and verified after installation that /etc/resolv.conf is empty.
Co-authored-by: Devesh Pathak <54966909+devpatha@users.noreply.github.com>
Why I did it
Cherry-pick commits from master to support the snapshot-based mirror, and resolve the code conflicts:
ad162ae [Build] Optimize the version control for Debian packages (#14557)
38c5d7f [Build] Support j2 template for debian sources for docker ptf (#13198)
5e4826e [Ci] Support to use the same snapshot for all platform builds (#13913)
8206925 [Build] Change the default mirror version config file (#13786)
5e4a866 [Build] Support Debian snapshot mirror to improve build stability (#13097)
ac5d89c [Build] Support j2 template for debian sources (#12557)
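For illustration, a snapshot-based mirror pins package sources to a fixed point-in-time archive so every platform build sees identical packages. A rendered sources.list entry might look like the following hedged sketch (the timestamp and suites are placeholders, not the templates' real output):
```
# Hypothetical output of a sources.list j2 template pointing at a snapshot mirror.
deb http://snapshot.debian.org/archive/debian/20230301T000000Z bullseye main contrib non-free
deb http://snapshot.debian.org/archive/debian-security/20230301T000000Z bullseye-security main
```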
Work item tracking
Microsoft ADO (number only): 18018114
How I did it
How to verify it
* Remove SSH host keys after installing the custom version of sshd
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* Use an override for sshd instead of overwriting the service file
Don't overwrite upstream's .service file; instead, use an override file to make sure the host key(s) are generated.
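A minimal sketch of such a drop-in, assuming a path like /etc/systemd/system/ssh.service.d/override.conf and ssh-keygen -A to generate any missing host keys (the PR's actual override may differ):
```
# Runs before sshd starts; upstream's ssh.service itself is left untouched.
[Service]
ExecStartPre=/usr/bin/ssh-keygen -A
```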
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Why I did it
To sign the SONiC kernel image and allow a secure-boot-based system to verify the SONiC image before loading it.
How I did it
Pass the following parameters in rules/config.user, for example:
SONIC_ENABLE_SECUREBOOT_SIGNATURE := y
SIGNING_KEY := /path/to/key/private.key
SIGNING_CERT := /path/to/public/public.cert
How to verify it
A secure-boot-enabled system, enrolled with the right public key of the image in the platform UEFI database, will be able to verify the image before loading it.
Alternatively, one can verify offline with the sbverify tool as below.
export SBSIGN_KEY=/abc/bcd/xyz/
sbverify --cert $SBSIGN_KEY/public_cert.cert fsroot-platform-XYZ/boot/vmlinuz-5.10.0-8-2-amd64
Output:
Signature verification OK
Why I did it
The u-boot env get and set commands fw_printenv/fw_setenv are not available in the Bullseye SONiC image. Some platforms using them were failing, e.g. sonic-installer commands on marvell-armhf.
In the case of Buster, u-boot-tools provided these commands.
How I did it
Added libubootenv-tool, which provides these commands along with other u-boot tools, in build_debian.sh.
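A hedged sketch of what the build_debian.sh addition could look like, following the way other target packages are installed into the rootfs (the exact invocation may differ):
```
# Install the package providing fw_printenv/fw_setenv into the target rootfs.
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT \
    apt-get -y install libubootenv-tool
```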
How to verify it
root@localhost:# fw_printenv serverip
serverip=10.4.50.39
root@localhost:# fw_setenv serverip 10.4.50.38
root@localhost:~# fw_printenv serverip
serverip=10.4.50.38
Change-Id: I558f8737f41d83d3e8527ce340391ae8f978b6d8
Signed-off-by: Pavan Naregundi <pnaregundi@marvell.com>
PR #9481 changed auditd's log directory to be /var/log instead of
/var/log/audit, because SONiC mounts a disk image at /var/log during
runtime, and so the /var/log/audit directory might not exist (it would
have been created during package installation, and mounting another
partition at /var/log hides it). However, for security reasons, auditd
sets the log directory to 0750 permissions, so that unprivileged users
cannot discover or read the audit logs.
To fix this, revert the change to auditd's log directory, and tell
systemd to create the audit log directory at runtime if it doesn't
exist. Because the disk image gets mounted during initramfs (before
systemd starts), systemd will make sure that the /var/log/audit
directory will exist.
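One standard way to have systemd create such a directory is a tmpfiles.d entry; below is a hedged sketch (the PR may instead use a unit-level directive, and the exact mode may differ):
```
# Illustrative /usr/lib/tmpfiles.d/auditd.conf entry: create /var/log/audit
# at boot with restrictive permissions if it does not exist.
d /var/log/audit 0750 root root -
```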
Fixes #9548 and #10015
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
#### Why I did it
1. Fix the auditd log file path, to address the known issue: https://github.com/Azure/sonic-buildimage/issues/9548
2. When SONiC moved to a Bullseye base, auditd was upgraded from 2.8.4 to 3.0.2. In auditd 3.0.2 the plugin file path changed to /etc/audit/plugins.d, but the upstream audisp-tacplus project has not followed this change and still installs its plugin config file to /etc/audit/audisp.d, so the plugin cannot be launched correctly. The code change in src/tacacs/audisp/patches/0001-Porting-to-sonic.patch fixes this issue.
#### How I did it
Fix tacacs plugin config file path.
Create /var/log/audit folder for auditd.
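Illustratively, the plugin path portion of the patch amounts to a change like this hedged, schematic diff (the file names and install rule are placeholders, not the actual patch hunk):
```
-install -m 0644 audisp-tac_plus.conf $(DESTDIR)/etc/audit/audisp.d/
+install -m 0644 audisp-tac_plus.conf $(DESTDIR)/etc/audit/plugins.d/
```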
#### How to verify it
Pass all UT; also run the per-command accounting UT to validate that the plugin loads.
#### Description for the changelog
Fix tacacs plugin config file path.
Create /var/log/audit folder for auditd.
This pull request integrates audisp-tacplus into SONiC for per-command accounting.
#### Why I did it
To support TACACS per-command accounting, we integrate the audisp-tacplus project into SONiC.
#### How I did it
1. Add auditd service to SONiC
2. Port and patch audisp-tacplus to SONiC
#### How to verify it
UT with CUnit to cover all new code in usersecret-filter.c
Also pass all current UT.
#### Which release branch to backport (provide reason below if selected)
N/A
#### Description for the changelog
Add audisp-tacplus for per-command accounting.
Python 2 is no longer available, so remove those packages and remove the pip2 commands. For picocom and systemd, just install from the regular repo, since there are no backports yet.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
#### Why I did it
The Nokia IXR7250E platform requires the grpcio and grpcio-tools Python libraries, and the libprotobuf-dev and libgrpc++ libraries.
#### How I did it
Modified build_debian.sh to install libprotobuf-dev and libgrpc++ to support the Nokia NDK.
Modified sonic_debian_extension.j2 to install grpcio and grpcio-tools in the host.
Modified docker-platform-monitor/Dockerfile.j2 to install grpcio and grpcio-tools for the pmon container.
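As a hedged sketch, the host-side install in sonic_debian_extension.j2 could look like the following (the proxy handling and lack of version pins are illustrative):
```
# Install the gRPC Python bindings into the host image.
sudo https_proxy=$https_proxy LANG=C chroot $FILESYSTEM_ROOT \
    pip3 install grpcio grpcio-tools
```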
#### How to verify it
Image runs successfully.
The branch refers to the branch name that the commit is on, for example master, 202012, 201911, etc. If there is no branch, the name will be HEAD.
The release is encoded in the /etc/sonic/sonic_release file. The file is only available on a release branch; it is not available on the master branch.
Example for the master branch:
```
build_version: 'master.602-6efc0a88'
debian_version: '10.7'
kernel_version: '4.19.0-9-2-amd64'
asic_type: vs
commit_id: '6efc0a88'
branch: 'master'
release: 'none'
build_date: Tue Dec 29 06:54:02 UTC 2020
build_number: 602
built_by: johnar@jenkins-worker-23
```
Example for the 202012 release branch:
```
build_version: '202012.602-6efc0a88'
debian_version: '10.7'
kernel_version: '4.19.0-9-2-amd64'
asic_type: vs
commit_id: '6efc0a88'
branch: '202012'
release: '202012'
build_date: Tue Dec 29 06:54:02 UTC 2020
build_number: 602
built_by: johnar@jenkins-worker-23
```
Signed-off-by: Guohan Lu <lguohan@gmail.com>
#### Why I did it
- To build flashrom properly with dependency tracking.
#### How I did it
- Moved flashrom code from platform/broadcom/sonic-platform-modules-dell/tools directory to src/flashrom directory.
- At the end, a flashrom_0.9.7_amd64.deb package is built, which will be installed on the devices.
- Currently flashrom builds only for Dell S6100 platforms.
#### Why I did it
To allow SSH connections from IPv6 addresses
Resolves https://github.com/Azure/sonic-buildimage/issues/7668
#### How I did it
In build_debian.sh, modify the sshd_config file to enable listening for IPv6 connections.
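A hedged sketch of the resulting sshd_config lines; whether the change uses ListenAddress entries or AddressFamily is not shown in this PR text, so treat this as illustrative:
```
# Listen on both IPv4 and IPv6.
ListenAddress 0.0.0.0
ListenAddress ::
```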
Why I did it
The SONiC switches get their docker images from a local repo, populated during install with container images pre-built into the SONiC FW. With the introduction of Kubernetes, new docker images available in a remote repo could be deployed. This requires dockerd to be able to pull images from the remote repo.
Depending on the switch's network domain & config, it may or may not be able to reach the remote repo. In the case where the remote repo is unreachable, we could potentially make the Kubernetes server also act as an http-proxy.
How I did it
When the admin explicitly enables it, the kubernetes-server can be configured as the docker proxy. But any update to the docker proxy has to go via a service-conf file environment variable, implying a "service restart docker" is required. A restart of dockerd is very expensive, as it restarts all dockers, including the database docker.
To avoid a dockerd restart, pre-configure an http_proxy using an unused IP. When the k8s server is enabled to act as http-proxy, an iptables entry is created to direct all traffic destined to the configured unused proxy IP to the Kubernetes master IP. This way any update to the Kubernetes master config is just iptables manipulation, which is transparent to all modules until dockerd needs to download from the remote repo.
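Conceptually, the redirect described above is a single NAT rule; here is a hedged sketch (the unused proxy IP and master IP are placeholders, and the actual chain and rule managed by ctrmgrd may differ):
```
# Send traffic aimed at the configured unused proxy IP to the k8s master instead.
iptables -t nat -A OUTPUT -p tcp -d 172.16.1.1 -j DNAT --to-destination 10.1.2.3
```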
How to verify it
1. Configure a switch such that the image repo is unreachable.
2. Pre-configure dockerd with http_proxy.conf using an unused IP (e.g. 172.16.1.1).
3. Update ctrmgrd.service to invoke ctrmgrd.py with the "-p" option.
4. Configure a k8s server, and deploy an image for a feature with set_owner="kube".
5. Check whether the switch can successfully download the image.
- Why I did it
To give SONiC Application Extension developers an environment to run and develop their apps.
- How I did it
Created sonic-sdk and sonic-sdk-buildenv dockers and their dbg versions.
- How to verify it
Build:
$ make -f slave target/sonic-sdk.gz target/sonic-sdk-buildenv.gz
PR# 7249 introduced a new bit of logic _after_ the point where the qemu based
build environment for ARM is removed. Hence the new logic fails when building
for ARM. Builds for AMD64 were not affected.
This commit moves the new logic introduced by PR# 7249 to just _before_ the
point where the qemu based build environment for ARM is removed. A comment is
added to reduce the likelihood of this sort of ARM build break from happening
again.
- Support compiling the SONiC ARM image on an ARM server. If ARM image compilation is executed on an ARM server instead of using QEMU mode on an x86 server, compile time can be reduced significantly.
- Add the kernel argument systemd.unified_cgroup_hierarchy=0 for the upgrade of systemd to version 247, per #7228.
- Rename the multiarch docker to sonic-slave-${distro}-march-${arch}.
Co-authored-by: Xianghong Gu <xgu@centecnetworks.com>
Co-authored-by: Shi Lei <shil@centecnetworks.com>
Problem:
By default, groupadd for redis takes GID 1000. This forces the subsequently created admin group to get 1001. As all TACACS users are created with 1000 as their GID, they end up in the redis group.
Fix:
Create the redis group *after* the admin group is created.
Add a check that the admin group id is 1000 (sketched below).
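A hedged sketch of such a check, assuming it runs while assembling the rootfs in build_debian.sh (the actual code may differ):
```
# Fail the build early if the admin group did not get GID 1000.
admin_gid=$(sudo chroot $FILESYSTEM_ROOT getent group admin | cut -d: -f3)
if [ "$admin_gid" != "1000" ]; then
    echo "Error: admin group has GID $admin_gid, expected 1000" >&2
    exit 1
fi
```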
Fixes #7180
Update systemd to v247 in order to pick up the fix for "core: coldplug possible nop_job" (systemd/systemd#13124).
Install systemd and systemd-sysv from buster-backports. Pass "systemd.unified_cgroup_hierarchy=0" as a kernel argument to force systemd not to use the unified cgroup hierarchy, otherwise dockerd won't start (moby/moby#16238).
Also, chown $FILESYSTEM_ROOT for root, otherwise the apt systemd installation complains; see this similar issue: https://unix.stackexchange.com/questions/593529/can-not-configure-systemd-inside-a-chrooted-environment
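A hedged sketch of the backports install into the rootfs (the invocation is illustrative):
```
# Pull systemd v247 from buster-backports rather than the default buster repo.
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT \
    apt-get -y -t buster-backports install systemd systemd-sysv
```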
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
As of the merging of PR #6799, we are now installing a newer version of scapy via pip, therefore there is no longer a need to install the older Debian package.
- Why I did it
To move 'sonic-host-service', which is currently built as a separate package, into the 'sonic-host-services' package.
- How I did it
- Moved 'sonic-host-server' to 'src/sonic-host-services' and included it as part of the python3 wheel.
- Other files were moved to 'src/sonic-host-services-data' and included as part of the deb package.
- Changed the build option 'INCLUDE_HOST_SERVICE' to 'ENABLE_HOST_SERVICE_ON_START' for enabling sonic-hostservice at boot-up by default.
**- Why I did it**
As per https://pypi.org/project/pip/, pip 21.0 dropped support for Python 2 in January 2021. Most places in the codebase have already been pinned, but this one was missed.
**- How I did it**
Pin pip2 < version 21 in build_debian.sh
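Illustratively, the pin amounts to something like the following hedged sketch (the exact line in build_debian.sh may differ):
```
# Keep pip for Python 2 below 21, the last major series supporting Python 2.
sudo https_proxy=$https_proxy LANG=C chroot $FILESYSTEM_ROOT pip2 install 'pip<21'
```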
The following changes were made for ebtables:
- Support for multi-ASIC platforms: ebtables filters are installed in each namespace for multi-ASIC platforms, not on the host. On single-ASIC platforms they are installed on the host.
- For multi-ASIC platforms we don't want to install them on the host, otherwise namespace-to-namespace communication does not happen, since ARP requests are not forwarded.
- Updated to use a text file to restore ebtables rules rather than the binary format. Rules are restored as part of database docker init instead of rc.local.
- Removed the ebtables service files for Buster, as they are not needed since filters are restored/installed as part of database docker init.
All the pre-installed ebtables* binaries are the same as ebtables-legacy-*.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
- Make PDDF code compliant with both Python 2 and Python 3
- Align code with PEP8 standards using autopep8
- Build and install both Python 2 and Python 3 PDDF packages
Depending on the performance characteristics of a given hardware platform, it's possible to exceed the default 120 second kernel timeout during I/O intensive operations like image installation. This can cause a kernel panic like so:
kernel:[ 852.441781] Kernel panic - not syncing: hung_task: blocked tasks
If this happens during image installation, it's possible for the install to become corrupted and leave the device in an unreachable state that requires a power cycle to resolve. This risk increases as image size continues to increase. So, we need to increase the timeout so that we don't encounter kernel panics on devices with lower disk throughput.
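A hedged sketch of raising the timeout via sysctl (the actual value and mechanism chosen by the change may differ):
```
# Illustrative /etc/sysctl.d entry: raise the hung-task timeout above the
# 120-second default so slow disks do not trigger a panic during installs.
kernel.hung_task_timeout_secs = 300
```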
Signed-off-by: Danny Allen <daall@microsoft.com>
- Why I did it
scripts/collect_host_image_version_files.sh fails with the error below:
scripts/collect_host_image_version_files.sh target ./fsroot
/usr/sbin/chroot: failed to run command 'post_run_buildinfo': No such file or directory
/bin/cp: cannot stat './fsroot/usr/local/share/buildinfo/post-versions': No such file or directory
- How I did it
The issue is that qemu-arm-static is removed before this step, so I moved the cleanup step to the end.
Signed-off-by: Sabareesh Kumar Anandan <sanandan@marvell.com>
Certain platform-specific packages (sonic-platform-xyz) install files onto the rootfs, which are placed on the read-write mount path under /host/image-name/rw/...
When ntpd starts, it tries read access on /usr/bin, /usr/sbin, and /usr/local/bin, which in turn link further into the read-write mount path as well.
ntpd then gets the AppArmor warning messages below:
Log:
audit: type=1400 audit(1606226503.240:21): apparmor="DENIED" operation="open" profile="/usr/sbin/ntpd" name="/image-HEAD-dirty-20201111.173951/rw/usr/local/bin/" pid=3733 comm="ntpd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
audit: type=1400 audit(1606226503.240:22): apparmor="DENIED" operation="open" profile="/usr/sbin/ntpd" name="/image-HEAD-dirty-20201111.173951/rw/usr/sbin/" pid=3733 comm="ntpd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
audit: type=1400 audit(1606226503.240:23): apparmor="DENIED" operation="open" profile="/usr/sbin/ntpd" name="/image-HEAD-dirty-20201111.173951/rw/usr/bin/" pid=3733 comm="ntpd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Fix:
Add the rw/.. mount path, similar to the root path access already provided for ntpd, in /etc/apparmor.d/usr.sbin.ntpd (sketched below).
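A hedged sketch of the kind of AppArmor rules involved (the paths and globbing are illustrative; the PR's actual rules may differ):
```
# Additions to /etc/apparmor.d/usr.sbin.ntpd allowing read access to the
# read-write overlay paths that the standard directories link into.
/image-*/rw/usr/bin/ r,
/image-*/rw/usr/sbin/ r,
/image-*/rw/usr/local/bin/ r,
```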
Signed-off-by: Antony Rheneus <arheneus@marvell.com>
Install the 'wheel' package in the host OS (along with python3 and python3-distutils, which are also needed for building some Python packages) to eliminate error messages like the following:
```
Running setup.py bdist_wheel for watchdog: started
Running setup.py bdist_wheel for watchdog: finished with status 'error'
Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-Qd3K08/watchdog/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-0AHpMe --python-tag cp27:
usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: -c --help [cmd1 cmd2 ...]
or: -c --help-commands
or: -c cmd --help
error: invalid command 'bdist_wheel'
----------------------------------------
Failed building wheel for watchdog
```
These error messages appear to have no impact on the image build: the Python package seems to still get installed successfully afterward; only the building of a wheel package fails. Therefore, this is more of a cosmetic fix than an actual bug fix.
This is an addendum to https://github.com/Azure/sonic-buildimage/pull/6182.
Also upgrade pip and install a more recent version of the setuptools package via PyPI.
Create a new file under "sysctl.d" with the desired panic conditions.
It will trigger a vmcore dump via kdump-tools in these situations.
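A hedged sketch of such a sysctl.d file, assuming typical panic triggers (the file name, conditions, and values actually chosen may differ):
```
# Illustrative /etc/sysctl.d/kdump.conf: panic on these conditions so that
# kdump-tools captures a vmcore.
kernel.panic_on_oops = 1
kernel.hung_task_panic = 1
kernel.softlockup_panic = 1
```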
Signed-off-by: Shlomi Bitton <shlomibi@nvidia.com>