sonic-buildimage/build_debian.sh

#!/bin/bash
## This script automates the preparation of a Debian file system, which will be used for
## an ONIE installer image.
##
## USAGE:
## USERNAME=username PASSWORD=password ./build_debian.sh
## ENVIRONMENT:
## USERNAME
## The name of the default admin user
## PASSWORD
## The password, in the plain-text form expected by the chpasswd command
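##
## Example invocation (hypothetical credentials; passing them via the
## environment rather than on the command line keeps them out of the
## world-readable /proc/<pid>/cmdline):
## USERNAME=admin PASSWORD='YourPaSsWoRd' ./build_debian.sh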
## Default user
[ -n "$USERNAME" ] || {
echo "Error: no or empty USERNAME"
exit 1
}
## Password for the default user
[ -n "$PASSWORD" ] || {
echo "Error: no or empty PASSWORD"
exit 1
}
## Include common functions
. functions.sh
## Enable debug output for script
set -x -e
CONFIGURED_ARCH=$([ -f .arch ] && cat .arch || echo amd64)
## docker engine version (with platform)
DOCKER_VERSION=5:24.0.2-1~debian.12~$IMAGE_DISTRO
CONTAINERD_IO_VERSION=1.6.21-1
LINUX_KERNEL_VERSION=6.1.0-11-2
## Working directory to prepare the file system
FILESYSTEM_ROOT=./fsroot
PLATFORM_DIR=platform
## Hostname for the linux image
HOSTNAME=sonic
DEFAULT_USERINFO="Default admin user,,,"
BUILD_TOOL_PATH=src/sonic-build-hooks/buildinfo
TRUSTED_GPG_DIR=$BUILD_TOOL_PATH/trusted.gpg.d
## Read ONIE image related config file
. ./onie-image.conf
[ -n "$ONIE_IMAGE_PART_SIZE" ] || {
echo "Error: Invalid ONIE_IMAGE_PART_SIZE in onie image config file"
exit 1
}
[ -n "$INSTALLER_PAYLOAD" ] || {
echo "Error: Invalid INSTALLER_PAYLOAD in onie image config file"
2016-03-08 13:42:20 -06:00
exit 1
}
[ -n "$FILESYSTEM_SQUASHFS" ] || {
echo "Error: Invalid FILESYSTEM_SQUASHFS in onie image config file"
exit 1
}
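## For reference, onie-image.conf is expected to define values along these
## lines (illustrative only; the authoritative values live in that file):
## ONIE_IMAGE_PART_SIZE=32768 # installer partition size, in MB
## INSTALLER_PAYLOAD=fs.zip
## FILESYSTEM_SQUASHFS=fs.squashfs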
if [ "$IMAGE_TYPE" = "aboot" ]; then
TARGET_BOOTLOADER="aboot"
fi
## Check whether this is not the last stage of the RFS build
if [[ $RFS_SPLIT_LAST_STAGE != y ]]; then
## Prepare the file system directory
if [[ -d $FILESYSTEM_ROOT ]]; then
sudo rm -rf $FILESYSTEM_ROOT || die "Failed to clean chroot directory"
fi
mkdir -p $FILESYSTEM_ROOT
mkdir -p $FILESYSTEM_ROOT/$PLATFORM_DIR
touch $FILESYSTEM_ROOT/$PLATFORM_DIR/firsttime
bootloader_packages=""
if [ "$TARGET_BOOTLOADER" != "aboot" ]; then
mkdir -p $FILESYSTEM_ROOT/$PLATFORM_DIR/grub
bootloader_packages="grub2-common"
fi
## ensure proc is mounted
sudo mount proc /proc -t proc || true
## Build the host debian base system
echo '[INFO] Build host debian base system...'
TARGET_PATH=$TARGET_PATH scripts/build_debian_base_system.sh $CONFIGURED_ARCH $IMAGE_DISTRO $FILESYSTEM_ROOT
# Prepare buildinfo
sudo SONIC_VERSION_CACHE=${SONIC_VERSION_CACHE} \
DBGOPT="${DBGOPT}" \
scripts/prepare_debian_image_buildinfo.sh $CONFIGURED_ARCH $IMAGE_DISTRO $FILESYSTEM_ROOT $http_proxy
sudo chown root:root $FILESYSTEM_ROOT
## Config hostname and hosts, otherwise 'sudo ...' will complain 'sudo: unable to resolve host ...'
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "echo '$HOSTNAME' > /etc/hostname"
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "echo '127.0.0.1 $HOSTNAME' >> /etc/hosts"
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "echo '127.0.0.1 localhost' >> /etc/hosts"
## Config basic fstab
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c 'echo "proc /proc proc defaults 0 0" >> /etc/fstab'
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c 'echo "sysfs /sys sysfs defaults 0 0" >> /etc/fstab'
## Setup proxy
[ -n "$http_proxy" ] && sudo /bin/bash -c "echo 'Acquire::http::Proxy \"$http_proxy\";' > $FILESYSTEM_ROOT/etc/apt/apt.conf.d/01proxy"
trap_push 'sudo LANG=C chroot $FILESYSTEM_ROOT umount /proc || true'
sudo LANG=C chroot $FILESYSTEM_ROOT mount proc /proc -t proc
## Note: mounting is necessary to run MAKEDEV and install the linux image
echo '[INFO] Mount all'
## Output all the mounted devices for troubleshooting
sudo LANG=C chroot $FILESYSTEM_ROOT mount
## Install the trusted gpg public keys
[ -d $TRUSTED_GPG_DIR ] && [ ! -z "$(ls $TRUSTED_GPG_DIR)" ] && sudo cp $TRUSTED_GPG_DIR/* ${FILESYSTEM_ROOT}/etc/apt/trusted.gpg.d/
## Point apt to the public apt mirrors and get the latest packages, needed for the latest security updates
scripts/build_mirror_config.sh files/apt $CONFIGURED_ARCH $IMAGE_DISTRO
sudo cp files/apt/sources.list.$CONFIGURED_ARCH $FILESYSTEM_ROOT/etc/apt/sources.list
sudo cp files/apt/apt-retries-count $FILESYSTEM_ROOT/etc/apt/apt.conf.d/
sudo cp files/apt/apt.conf.d/{81norecommends,apt-{clean,gzip-indexes,no-languages},no-check-valid-until} $FILESYSTEM_ROOT/etc/apt/apt.conf.d/
## Note: set lang to prevent locale warnings in your chroot
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y update
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y upgrade
echo '[INFO] Install and setup eatmydata'
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install eatmydata
sudo LANG=C chroot $FILESYSTEM_ROOT ln -s /usr/bin/eatmydata /usr/local/bin/dpkg
echo 'Dir::Bin::dpkg "/usr/local/bin/dpkg";' | sudo tee $FILESYSTEM_ROOT/etc/apt/apt.conf.d/00image-install-eatmydata > /dev/null
## Note: dpkg hook conflict with eatmydata
sudo LANG=C chroot $FILESYSTEM_ROOT rm /usr/local/sbin/dpkg -f
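## Optional sanity check (a sketch, not part of the build): confirm that dpkg
## is now diverted through eatmydata inside the chroot
# sudo LANG=C chroot $FILESYSTEM_ROOT readlink /usr/local/bin/dpkg
# sudo LANG=C chroot $FILESYSTEM_ROOT apt-config dump | grep -i 'dir::bin::dpkg'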
echo '[INFO] Install packages for building image'
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install makedev psmisc
if [[ $CROSS_BUILD_ENVIRON == y ]]; then
sudo LANG=C chroot $FILESYSTEM_ROOT dpkg --add-architecture $CONFIGURED_ARCH
fi
## Create device files
echo '[INFO] MAKEDEV'
if [[ $CONFIGURED_ARCH == armhf || $CONFIGURED_ARCH == arm64 ]]; then
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c 'cd /dev && MAKEDEV generic-arm'
else
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c 'cd /dev && MAKEDEV generic'
fi
## docker and mkinitramfs on target system will use pigz/unpigz automatically
if [[ $GZ_COMPRESS_PROGRAM == pigz ]]; then
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install pigz
fi
## Install initramfs-tools and linux kernel
## Note: initramfs-tools only declares busybox as a Recommends dependency, and we really want busybox for
## 1. commands such as touch
## 2. mount support for squashfs
## However, 'dpkg -i' plus 'apt-get install -f' will ignore the recommended dependency, so
## we install busybox explicitly
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install busybox linux-base
echo '[INFO] Install SONiC linux kernel image'
## Note: the duplicated apt-get command ensures every line returns zero
sudo dpkg --root=$FILESYSTEM_ROOT -i $debs_path/initramfs-tools-core_*.deb || \
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get -y install -f
sudo dpkg --root=$FILESYSTEM_ROOT -i $debs_path/initramfs-tools_*.deb || \
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get -y install -f
sudo dpkg --root=$FILESYSTEM_ROOT -i $debs_path/linux-image-${LINUX_KERNEL_VERSION}-*_${CONFIGURED_ARCH}.deb || \
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get -y install -f
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get -y install acl
if [[ $CONFIGURED_ARCH == amd64 ]]; then
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get -y install dmidecode hdparm
fi
## Sign the Linux kernel
# Note: when the SONIC_ENABLE_SECUREBOOT_SIGNATURE flag is enabled, the Secure Upgrade flags should be disabled (no_sign) to avoid a conflict between the features.
if [ "$SONIC_ENABLE_SECUREBOOT_SIGNATURE" = "y" ] && [ "$SECURE_UPGRADE_MODE" != 'dev' ] && [ "$SECURE_UPGRADE_MODE" != "prod" ]; then
if [ ! -f $SIGNING_KEY ]; then
echo "Error: SONiC linux kernel signing key missing"
exit 1
fi
if [ ! -f $SIGNING_CERT ]; then
echo "Error: SONiC linux kernel signing certificate missing"
exit 1
fi
echo '[INFO] Signing SONiC linux kernel image'
K=$FILESYSTEM_ROOT/boot/vmlinuz-${LINUX_KERNEL_VERSION}-${CONFIGURED_ARCH}
sbsign --key $SIGNING_KEY --cert $SIGNING_CERT --output /tmp/${K##*/} ${K}
sudo cp -f /tmp/${K##*/} ${K}
fi
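## Optional verification sketch: sbverify (shipped alongside sbsign in the
## sbsigntool package) can confirm the signature before the image is packed
# sbverify --cert $SIGNING_CERT ${K} || { echo "Error: kernel signature verification failed"; exit 1; }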
## Update initramfs for booting with squashfs+overlay
cat files/initramfs-tools/modules | sudo tee -a $FILESYSTEM_ROOT/etc/initramfs-tools/modules > /dev/null
## Hook into initramfs: change fs type from vfat to ext4 on arista switches
sudo mkdir -p $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-premount/
sudo cp files/initramfs-tools/arista-convertfs $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-premount/arista-convertfs
sudo chmod +x $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-premount/arista-convertfs
sudo cp files/initramfs-tools/arista-hook $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-premount/arista-hook
sudo chmod +x $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-premount/arista-hook
sudo cp files/initramfs-tools/mke2fs $FILESYSTEM_ROOT/etc/initramfs-tools/hooks/mke2fs
sudo chmod +x $FILESYSTEM_ROOT/etc/initramfs-tools/hooks/mke2fs
sudo cp files/initramfs-tools/setfacl $FILESYSTEM_ROOT/etc/initramfs-tools/hooks/setfacl
sudo chmod +x $FILESYSTEM_ROOT/etc/initramfs-tools/hooks/setfacl
# Hook into initramfs: rename the management interfaces on arista switches
sudo cp files/initramfs-tools/arista-net $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-premount/arista-net
sudo chmod +x $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-premount/arista-net
# Hook into initramfs: resize root partition after migration from another NOS to SONiC on Dell switches
sudo cp files/initramfs-tools/resize-rootfs $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-premount/resize-rootfs
sudo chmod +x $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-premount/resize-rootfs
# Hook into initramfs: upgrade SSD from initramfs
sudo cp files/initramfs-tools/ssd-upgrade $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-premount/ssd-upgrade
sudo chmod +x $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-premount/ssd-upgrade
# Hook into initramfs: run fsck to repair a non-clean filesystem before it is mounted
sudo cp files/initramfs-tools/fsck-rootfs $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-premount/fsck-rootfs
sudo chmod +x $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-premount/fsck-rootfs
## Hook into initramfs: after partition mount and loop file mount
## 1. Prepare layered file system
## 2. Bind-mount docker working directory (docker overlay storage cannot work over overlay rootfs)
sudo cp files/initramfs-tools/union-mount $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-bottom/union-mount
sudo chmod +x $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-bottom/union-mount
sudo cp files/initramfs-tools/varlog $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-bottom/varlog
sudo chmod +x $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-bottom/varlog
# Management interface (eth0) dhcp can be optionally turned off (during a migration from another NOS to SONiC)
#sudo cp files/initramfs-tools/mgmt-intf-dhcp $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-bottom/mgmt-intf-dhcp
#sudo chmod +x $FILESYSTEM_ROOT/etc/initramfs-tools/scripts/init-bottom/mgmt-intf-dhcp
sudo cp files/initramfs-tools/union-fsck $FILESYSTEM_ROOT/etc/initramfs-tools/hooks/union-fsck
sudo chmod +x $FILESYSTEM_ROOT/etc/initramfs-tools/hooks/union-fsck
pushd $FILESYSTEM_ROOT/usr/share/initramfs-tools/scripts/init-bottom && sudo patch -p1 < $OLDPWD/files/initramfs-tools/udev.patch; popd
if [[ $CONFIGURED_ARCH == armhf || $CONFIGURED_ARCH == arm64 ]]; then
sudo cp files/initramfs-tools/uboot-utils $FILESYSTEM_ROOT/etc/initramfs-tools/hooks/uboot-utils
sudo chmod +x $FILESYSTEM_ROOT/etc/initramfs-tools/hooks/uboot-utils
cat files/initramfs-tools/modules.arm | sudo tee -a $FILESYSTEM_ROOT/etc/initramfs-tools/modules > /dev/null
fi
# Update initramfs to load platform-specific modules
if [ -f platform/$CONFIGURED_PLATFORM/modules ]; then
cat platform/$CONFIGURED_PLATFORM/modules | sudo tee -a $FILESYSTEM_ROOT/etc/initramfs-tools/modules > /dev/null
fi
## Add mtd and uboot firmware tools package
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install u-boot-tools libubootenv-tool mtd-utils device-tree-compiler
## Install docker
echo '[INFO] Install docker'
## Install apparmor utils since they're missing and apparmor is enabled in the kernel
## Otherwise Docker will fail to start
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install apparmor
sudo cp files/image_config/ntp/ntp-apparmor $FILESYSTEM_ROOT/etc/apparmor.d/local/usr.sbin.ntpd
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install apt-transport-https \
ca-certificates \
curl
if [[ $CONFIGURED_ARCH == armhf ]]; then
# update ssl ca certificates for secure pem
sudo https_proxy=$https_proxy LANG=C chroot $FILESYSTEM_ROOT c_rehash
fi
sudo https_proxy=$https_proxy LANG=C chroot $FILESYSTEM_ROOT curl -o /tmp/docker.asc -fsSL https://download.docker.com/linux/debian/gpg
sudo LANG=C chroot $FILESYSTEM_ROOT mv /tmp/docker.asc /etc/apt/trusted.gpg.d/
sudo tee $FILESYSTEM_ROOT/etc/apt/sources.list.d/docker.list >/dev/null <<EOF
deb [arch=$CONFIGURED_ARCH] https://download.docker.com/linux/debian $IMAGE_DISTRO stable
EOF
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get update
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install docker-ce=${DOCKER_VERSION} docker-ce-cli=${DOCKER_VERSION} containerd.io=${CONTAINERD_IO_VERSION}
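## Optional sanity check (illustrative): confirm apt resolved the pinned versions
# sudo LANG=C chroot $FILESYSTEM_ROOT apt-cache policy docker-ce containerd.io
# sudo LANG=C chroot $FILESYSTEM_ROOT dpkg -l docker-ce docker-ce-cli containerd.io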
install_kubernetes () {
local ver="$1"
sudo https_proxy=$https_proxy LANG=C chroot $FILESYSTEM_ROOT curl -fsSL \
https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
sudo LANG=C chroot $FILESYSTEM_ROOT apt-key add -
## Check that the sources list update matches the current Debian version
sudo cp files/image_config/kubernetes/kubernetes.list $FILESYSTEM_ROOT/etc/apt/sources.list.d/
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get update
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install kubernetes-cni=${KUBERNETES_CNI_VERSION}
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install kubelet=${ver}
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install kubectl=${ver}
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install kubeadm=${ver}
}
if [ "$INCLUDE_KUBERNETES" == "y" ]
then
## Install Kubernetes
echo '[INFO] Install kubernetes'
install_kubernetes ${KUBERNETES_VERSION}
else
echo '[INFO] Skipping Install kubernetes'
fi
if [ "$INCLUDE_KUBERNETES_MASTER" == "y" ]
then
## Install Kubernetes master
echo '[INFO] Install kubernetes master'
install_kubernetes ${MASTER_KUBERNETES_VERSION}
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get update
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install hyperv-daemons gnupg xmlstarlet parted netcat
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y remove gnupg
sudo https_proxy=$https_proxy LANG=C chroot $FILESYSTEM_ROOT curl -o /tmp/cri-dockerd.deb -fsSL \
https://github.com/Mirantis/cri-dockerd/releases/download/v${MASTER_CRI_DOCKERD}/cri-dockerd_${MASTER_CRI_DOCKERD}.3-0.debian-${IMAGE_DISTRO}_amd64.deb
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install -f /tmp/cri-dockerd.deb
sudo LANG=C chroot $FILESYSTEM_ROOT rm -f /tmp/cri-dockerd.deb
else
echo '[INFO] Skipping Install kubernetes master'
fi
## Add docker config drop-in to specify dockerd command line
sudo mkdir -p $FILESYSTEM_ROOT/etc/systemd/system/docker.service.d/
## Note: $_ expands to the last argument of the previous command
sudo cp files/docker/docker.service.conf $_
## Create default user
## Note: user should be in the group with the same name, and also in sudo/docker/redis groups
sudo LANG=C chroot $FILESYSTEM_ROOT useradd -G sudo,docker $USERNAME -c "$DEFAULT_USERINFO" -m -s /bin/bash
2016-03-08 13:42:20 -06:00
## Create password for the default user
echo "$USERNAME:$PASSWORD" | sudo LANG=C chroot $FILESYSTEM_ROOT chpasswd
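## A quick way to sanity-check the hash chpasswd produced (illustrative
## command, not part of the build):
##   sudo LANG=C chroot $FILESYSTEM_ROOT getent shadow $USERNAME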
## Create redis group
sudo LANG=C chroot $FILESYSTEM_ROOT groupadd -f redis
sudo LANG=C chroot $FILESYSTEM_ROOT usermod -aG redis $USERNAME
if [[ $CONFIGURED_ARCH == amd64 ]]; then
## Pre-install hardware drivers
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y install \
firmware-linux-nonfree
fi
## Pre-install the fundamental packages
## Note: gdisk is needed for sgdisk in install.sh
## Note: parted is needed for partprobe in install.sh
## Note: ca-certificates is needed for easy_install
## Note: don't install python-apt via pip; it is older than the one in the Debian repo
## Note: fdisk and gpg are needed by fwutil
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get -y install \
file \
ifmetric \
iproute2 \
bridge-utils \
isc-dhcp-client \
sudo \
vim \
tcpdump \
dbus \
ntpstat \
openssh-server \
python3-apt \
traceroute \
iputils-ping \
arping \
net-tools \
bsdmainutils \
ca-certificates \
i2c-tools \
efibootmgr \
usbutils \
pciutils \
iptables-persistent \
ebtables \
logrotate \
curl \
kexec-tools \
less \
unzip \
gdisk \
sysfsutils \
squashfs-tools \
$bootloader_packages \
rsyslog \
screen \
hping3 \
tcptraceroute \
mtr-tiny \
locales \
cgroup-tools \
ipmitool \
ndisc6 \
makedumpfile \
conntrack \
python3 \
python3-distutils \
python3-pip \
python-is-python3 \
cron \
libprotobuf32 \
libgrpc++1 \
libgrpc29 \
haveged \
fdisk \
gpg \
jq \
auditd \
linux-perf \
resolvconf \
lsof \
sysstat \
xxd \
zstd
# Have systemd create the auditd log directory
sudo mkdir -p ${FILESYSTEM_ROOT}/etc/systemd/system/auditd.service.d
sudo tee ${FILESYSTEM_ROOT}/etc/systemd/system/auditd.service.d/log-directory.conf >/dev/null <<EOF
[Service]
LogsDirectory=audit
LogsDirectoryMode=0750
EOF
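# With this drop-in, systemd creates /var/log/audit (LogsDirectory= resolves
# under /var/log) with mode 0750 before auditd starts; it can be inspected in
# the running image with "systemctl cat auditd".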
# The latest tcpdump controls resource access with AppArmor.
# Override the tcpdump profile to allow tcpdump to access the TACACS config file.
sudo cp files/apparmor/usr.bin.tcpdump $FILESYSTEM_ROOT/etc/apparmor.d/local/usr.bin.tcpdump
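# Files under /etc/apparmor.d/local/ are pulled in by the stock profile's
# local include, so this override should survive tcpdump package upgrades.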
if [[ $CONFIGURED_ARCH == amd64 ]]; then
## Pre-install the fundamental packages for amd64 (x86)
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get -y install \
rasdaemon
fi
## Set /etc/shadow permissions to -rw-------.
sudo LANG=C chroot $FILESYSTEM_ROOT chmod 600 /etc/shadow
## Set /etc/passwd, /etc/group permissions to -rw-r--r--.
sudo LANG=C chroot $FILESYSTEM_ROOT chmod 644 /etc/passwd
sudo LANG=C chroot $FILESYSTEM_ROOT chmod 644 /etc/group
# Needed to install kdump-tools
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "mkdir -p /etc/initramfs-tools/conf.d"
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "echo 'MODULES=most' >> /etc/initramfs-tools/conf.d/driver-policy"
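# Note: MODULES=most makes initramfs-tools bundle most kernel modules, rather
# than only those detected on the build host (MODULES=dep), so the generated
# initrd is usable across target hardware.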
# Copy vmcore-sysctl.conf to add more vmcore dump flags to kernel
sudo cp files/image_config/kdump/vmcore-sysctl.conf $FILESYSTEM_ROOT/etc/sysctl.d/
# Add a locale to the Debian system in non-interactive mode
sudo sed -i '/^#.* en_US.* /s/^#//' $FILESYSTEM_ROOT/etc/locale.gen && \
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT locale-gen "en_US.UTF-8"
sudo LANG=en_US.UTF-8 DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT update-locale "LANG=en_US.UTF-8"
sudo LANG=C chroot $FILESYSTEM_ROOT bash -c "find /usr/share/i18n/locales/ ! -name 'en_US' -type f -exec rm -f {} +"
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get -y install \
picocom \
systemd \
systemd-sysv \
ntp
if [[ $TARGET_BOOTLOADER == grub ]]; then
if [[ $CONFIGURED_ARCH == amd64 ]]; then
GRUB_PKG=grub-pc-bin
elif [[ $CONFIGURED_ARCH == arm64 ]]; then
GRUB_PKG=grub-efi-arm64-bin
fi
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get install -d -o dir::cache=/var/cache/apt \
$GRUB_PKG
sudo cp $FILESYSTEM_ROOT/var/cache/apt/archives/grub*.deb $FILESYSTEM_ROOT/$PLATFORM_DIR/grub
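# Note: the grub packages were only downloaded (-d), not installed, into the
# chroot; the .debs staged under $PLATFORM_DIR/grub are presumably consumed by
# the installer when it sets up the bootloader on the target device.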
fi
## Disable the kexec-based reboot support that kexec-tools enables by default
sudo sed -i 's/LOAD_KEXEC=true/LOAD_KEXEC=false/' $FILESYSTEM_ROOT/etc/default/kexec
# Ensure that 'logrotate-config.service' is set as a dependency to start before 'logrotate.service'.
sudo mkdir $FILESYSTEM_ROOT/etc/systemd/system/logrotate.service.d
sudo cp files/image_config/logrotate/logrotateOverride.conf $FILESYSTEM_ROOT/etc/systemd/system/logrotate.service.d/logrotateOverride.conf
## Remove sshd host keys; they will be regenerated on first sshd start
sudo rm -f $FILESYSTEM_ROOT/etc/ssh/ssh_host_*_key*
sudo cp files/sshd/host-ssh-keygen.sh $FILESYSTEM_ROOT/usr/local/bin/
sudo mkdir $FILESYSTEM_ROOT/etc/systemd/system/ssh.service.d
sudo cp files/sshd/override.conf $FILESYSTEM_ROOT/etc/systemd/system/ssh.service.d/override.conf
# Config sshd
# 1. Set 'UseDNS' to 'no'
# 2. Configure sshd to close all SSH connections after 15 minutes of inactivity
sudo augtool -r $FILESYSTEM_ROOT <<'EOF'
touch /files/etc/ssh/sshd_config/EmptyLineHack
rename /files/etc/ssh/sshd_config/EmptyLineHack ""
set /files/etc/ssh/sshd_config/UseDNS no
ins #comment before /files/etc/ssh/sshd_config/UseDNS
set /files/etc/ssh/sshd_config/#comment[following-sibling::*[1][self::UseDNS]] "Disable hostname lookups"
rm /files/etc/ssh/sshd_config/ClientAliveInterval
rm /files/etc/ssh/sshd_config/ClientAliveCountMax
touch /files/etc/ssh/sshd_config/EmptyLineHack
rename /files/etc/ssh/sshd_config/EmptyLineHack ""
set /files/etc/ssh/sshd_config/ClientAliveInterval 900
set /files/etc/ssh/sshd_config/ClientAliveCountMax 0
ins #comment before /files/etc/ssh/sshd_config/ClientAliveInterval
set /files/etc/ssh/sshd_config/#comment[following-sibling::*[1][self::ClientAliveInterval]] "Close inactive client sessions after 15 minutes"
rm /files/etc/ssh/sshd_config/LogLevel
set /files/etc/ssh/sshd_config/LogLevel VERBOSE
save
quit
EOF
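# The augtool transaction above should leave a fragment like this in
# /etc/ssh/sshd_config (illustrative):
#   # Disable hostname lookups
#   UseDNS no
#   # Close inactive client sessions after 15 minutes
#   ClientAliveInterval 900
#   ClientAliveCountMax 0
#   LogLevel VERBOSE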
# Configure sshd to listen for v4 and v6 connections
sudo sed -i 's/^#ListenAddress 0.0.0.0/ListenAddress 0.0.0.0/' $FILESYSTEM_ROOT/etc/ssh/sshd_config
sudo sed -i 's/^#ListenAddress ::/ListenAddress ::/' $FILESYSTEM_ROOT/etc/ssh/sshd_config
## Config rsyslog
sudo augtool -r $FILESYSTEM_ROOT --autosave "
rm /files/lib/systemd/system/rsyslog.service/Service/ExecStart/arguments
set /files/lib/systemd/system/rsyslog.service/Service/ExecStart/arguments/1 -n
"
sudo mkdir -p $FILESYSTEM_ROOT/var/core
# Config sysctl
sudo augtool --autosave "
set /files/etc/sysctl.conf/kernel.core_pattern '|/usr/local/bin/coredump-compress %e %t %p %P'
set /files/etc/sysctl.conf/kernel.softlockup_panic 1
set /files/etc/sysctl.conf/kernel.panic 10
set /files/etc/sysctl.conf/kernel.hung_task_timeout_secs 300
set /files/etc/sysctl.conf/vm.panic_on_oom 2
set /files/etc/sysctl.conf/fs.suid_dumpable 2
" -r $FILESYSTEM_ROOT
sysctl_net_cmd_string=""
while read -r line; do
    # Skip comment lines
    [[ "$line" =~ ^#.*$ ]] && continue
    sysctl_net_conf_key="$(echo "$line" | awk -F '=' '{print $1}')"
    sysctl_net_conf_value="$(echo "$line" | awk -F '=' '{print $2}')"
    sysctl_net_cmd_string=$sysctl_net_cmd_string"set /files/etc/sysctl.conf/$sysctl_net_conf_key $sysctl_net_conf_value"$'\n'
done < files/image_config/sysctl/sysctl-net.conf
sudo augtool --autosave "$sysctl_net_cmd_string" -r $FILESYSTEM_ROOT
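# Illustrative mapping performed by the loop above: a sysctl-net.conf line
# "net.ipv4.ip_forward=1" would become the augtool command
#   set /files/etc/sysctl.conf/net.ipv4.ip_forward 1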
# Specify that we want to explicitly install Python packages into the system environment, and risk breakages
sudo cp files/image_config/pip/pip.conf $FILESYSTEM_ROOT/etc/pip.conf
# For building Python packages
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get -y install python3-setuptools python3-wheel
# docker Python API package is needed by Ansible docker module as well as some SONiC applications
sudo https_proxy=$https_proxy LANG=C chroot $FILESYSTEM_ROOT pip3 install 'docker==6.1.1'
# Install scapy
sudo https_proxy=$https_proxy LANG=C chroot $FILESYSTEM_ROOT pip3 install 'scapy==2.4.4'
# The option --no-build-isolation can be removed when upgrading PyYAML to 6.0.1
sudo https_proxy=$https_proxy LANG=C chroot $FILESYSTEM_ROOT pip3 install 'PyYAML==5.4.1' --no-build-isolation
## Note: keep pip installed for maintenance purposes
# Install GCC, needed for building/installing some Python packages
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get -y install gcc
## Create /var/run/redis folder for docker-database to mount
sudo mkdir -p $FILESYSTEM_ROOT/var/run/redis
## Config DHCP for eth0
sudo tee -a $FILESYSTEM_ROOT/etc/network/interfaces > /dev/null <<EOF
auto eth0
allow-hotplug eth0
iface eth0 inet dhcp
EOF
sudo cp files/dhcp/rfc3442-classless-routes $FILESYSTEM_ROOT/etc/dhcp/dhclient-exit-hooks.d
sudo cp files/dhcp/sethostname $FILESYSTEM_ROOT/etc/dhcp/dhclient-exit-hooks.d/
sudo cp files/dhcp/sethostname6 $FILESYSTEM_ROOT/etc/dhcp/dhclient-exit-hooks.d/
sudo cp files/dhcp/graphserviceurl $FILESYSTEM_ROOT/etc/dhcp/dhclient-exit-hooks.d/
sudo cp files/dhcp/snmpcommunity $FILESYSTEM_ROOT/etc/dhcp/dhclient-exit-hooks.d/
sudo cp files/dhcp/vrf $FILESYSTEM_ROOT/etc/dhcp/dhclient-exit-hooks.d/
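## Scripts in /etc/dhcp/dhclient-exit-hooks.d run after every dhclient lease
## event; as their names suggest, the hooks above apply hostname, graph
## service URL, SNMP community and VRF settings learned from DHCP options.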
if [ -f files/image_config/ntp/ntpsec ]; then
sudo cp ./files/image_config/ntp/ntpsec $FILESYSTEM_ROOT/etc/init.d/
fi
if [ -f files/image_config/ntp/ntp-systemd-wrapper ]; then
sudo cp ./files/image_config/ntp/ntp-systemd-wrapper $FILESYSTEM_ROOT/usr/libexec/ntpsec/
fi
## Version file part 1
sudo mkdir -p $FILESYSTEM_ROOT/etc/sonic
if [ -f files/image_config/sonic_release ]; then
sudo cp files/image_config/sonic_release $FILESYSTEM_ROOT/etc/sonic/
fi
# Default users info
export password_expire="$( [[ "$CHANGE_DEFAULT_PASSWORD" == "y" ]] && echo true || echo false )"
export username="${USERNAME}"
export password="$(sudo grep ^${USERNAME} $FILESYSTEM_ROOT/etc/shadow | cut -d: -f2)"
j2 files/build_templates/default_users.json.j2 | sudo tee $FILESYSTEM_ROOT/etc/sonic/default_users.json
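# j2 (j2cli) renders the template from environment variables when no data file
# is supplied, so the exports above (username, password, password_expire) feed
# default_users.json.j2 directly.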
sudo LANG=C chroot $FILESYSTEM_ROOT chmod 600 /etc/sonic/default_users.json
sudo LANG=C chroot $FILESYSTEM_ROOT chown root:shadow /etc/sonic/default_users.json
## Copy over clean-up script
sudo cp ./files/scripts/core_cleanup.py $FILESYSTEM_ROOT/usr/bin/core_cleanup.py
## Copy ASIC config checksum
sudo chmod 755 files/build_scripts/generate_asic_config_checksum.py
./files/build_scripts/generate_asic_config_checksum.py
if [[ ! -f './asic_config_checksum' ]]; then
echo 'asic_config_checksum not found'
exit 1
fi
sudo cp ./asic_config_checksum $FILESYSTEM_ROOT/etc/sonic/asic_config_checksum
## End of steps that run only when this is not the last stage of an RFS split build
fi
if [[ $RFS_SPLIT_FIRST_STAGE == y ]]; then
echo '[INFO] Finished with RFS first stage'
echo '[INFO] Umount all'
## Display details of all processes accessing /proc
sudo LANG=C chroot $FILESYSTEM_ROOT fuser -vm /proc
## Kill the processes
sudo LANG=C chroot $FILESYSTEM_ROOT fuser -km /proc || true
## Wait for fuser to finish killing the processes; /proc can stay busy briefly,
## so retry the umount for up to 15 seconds instead of failing outright
sudo timeout 15s bash -c 'until LANG=C chroot $0 umount /proc; do sleep 1; done' $FILESYSTEM_ROOT || true
sudo rm -f $TARGET_PATH/$RFS_SQUASHFS_NAME
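## Compression level 1 trades squashfs size for build speed; this image is an
## intermediate artifact that the second RFS stage below simply unpacks.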
sudo mksquashfs $FILESYSTEM_ROOT $TARGET_PATH/$RFS_SQUASHFS_NAME -Xcompression-level 1
exit 0
fi
if [[ $RFS_SPLIT_LAST_STAGE == y ]]; then
echo '[INFO] RFS build: second stage'
## ensure proc is mounted
sudo mount proc /proc -t proc || true
sudo fuser -vm $FILESYSTEM_ROOT || true
sudo rm -rf $FILESYSTEM_ROOT
sudo unsquashfs -d $FILESYSTEM_ROOT $TARGET_PATH/$RFS_SQUASHFS_NAME
## Make / a mountpoint in the chroot env; needed by dockerd
pushd $FILESYSTEM_ROOT
sudo mount --bind . .
popd
trap_push 'sudo LANG=C chroot $FILESYSTEM_ROOT umount /proc || true'
sudo LANG=C chroot $FILESYSTEM_ROOT mount proc /proc -t proc
fi
## Version file part 2
export build_version="${SONIC_IMAGE_VERSION}"
export debian_version="$(cat $FILESYSTEM_ROOT/etc/debian_version)"
export kernel_version="${kversion}"
export asic_type="${sonic_asic_platform}"
export asic_subtype="${TARGET_MACHINE}"
export commit_id="$(git rev-parse --short HEAD)"
export branch="$(git rev-parse --abbrev-ref HEAD)"
export release="$(if [ -f $FILESYSTEM_ROOT/etc/sonic/sonic_release ]; then cat $FILESYSTEM_ROOT/etc/sonic/sonic_release; fi)"
export build_date="$(date -u)"
export build_number="${BUILD_NUMBER:-0}"
export built_by="$USER@$BUILD_HOSTNAME"
export sonic_os_version="${SONIC_OS_VERSION}"
j2 files/build_templates/sonic_version.yml.j2 | sudo tee $FILESYSTEM_ROOT/etc/sonic/sonic_version.yml
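## The rendered sonic_version.yml is expected to carry the fields exported
## above (build_version, debian_version, kernel_version, asic_type, commit_id,
## branch, release, build_date, built_by, ...); the authoritative list lives
## in sonic_version.yml.j2.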
if [ -f sonic_debian_extension.sh ]; then
./sonic_debian_extension.sh $FILESYSTEM_ROOT $PLATFORM_DIR $IMAGE_DISTRO
fi
## Organization specific extensions such as Configuration & Scripts for features like AAA, ZTP...
if [ "${enable_organization_extensions}" = "y" ]; then
if [ -f files/build_templates/organization_extensions.sh ]; then
sudo chmod 755 files/build_templates/organization_extensions.sh
./files/build_templates/organization_extensions.sh -f $FILESYSTEM_ROOT -h $HOSTNAME
fi
fi
## Set up ebtables rules (rule file in text format)
sudo cp files/image_config/ebtables/ebtables.filter.cfg ${FILESYSTEM_ROOT}/etc
## Debug Image specific changes
## Update motd for debug image
if [ "$DEBUG_IMG" == "y" ]
then
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "echo '**************' >> /etc/motd"
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "echo 'Running DEBUG image' >> /etc/motd"
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "echo '**************' >> /etc/motd"
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "echo '/src has the sources' >> /etc/motd"
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "echo '/src is mounted in each docker' >> /etc/motd"
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "echo '/debug is created for core files or temp files' >> /etc/motd"
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "echo 'Create a subdir under /debug to upload your files' >> /etc/motd"
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "echo '/debug is mounted in each docker' >> /etc/motd"
sudo mkdir -p $FILESYSTEM_ROOT/src
sudo cp $DEBUG_SRC_ARCHIVE_FILE $FILESYSTEM_ROOT/src/
sudo mkdir -p $FILESYSTEM_ROOT/debug
fi
## Set FIPS runtime default option
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "mkdir -p /etc/fips"
sudo LANG=C chroot $FILESYSTEM_ROOT /bin/bash -c "echo 0 > /etc/fips/fips_enable"
# #################
# secure boot
# #################
if [[ $SECURE_UPGRADE_MODE == 'dev' || ( $SECURE_UPGRADE_MODE == "prod" && $SONIC_ENABLE_SECUREBOOT_SIGNATURE != 'y' ) ]]; then
# Note: SONIC_ENABLE_SECUREBOOT_SIGNATURE signs just the kernel, while
# SECURE_UPGRADE_MODE signs all the boot components, including the kernel.
# Do not enable both features together, to avoid conflicts.
echo "Secure Boot support build stage: Starting..."
# Debian Secure Boot dependencies
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get -y install \
shim-unsigned \
grub-efi
if [ ! -f $SECURE_UPGRADE_SIGNING_CERT ]; then
echo "Error: SONiC SECURE_UPGRADE_SIGNING_CERT=$SECURE_UPGRADE_SIGNING_CERT certificate missing"
exit 1
fi
if [[ $SECURE_UPGRADE_MODE == 'dev' ]]; then
# development signing & verification
if [ ! -f $SECURE_UPGRADE_DEV_SIGNING_KEY ]; then
echo "Error: SONiC SECURE_UPGRADE_DEV_SIGNING_KEY=$SECURE_UPGRADE_DEV_SIGNING_KEY key missing"
exit 1
fi
sudo ./scripts/signing_secure_boot_dev.sh -a $CONFIGURED_ARCH \
-r $FILESYSTEM_ROOT \
-l $LINUX_KERNEL_VERSION \
-c $SECURE_UPGRADE_SIGNING_CERT \
-p $SECURE_UPGRADE_DEV_SIGNING_KEY
elif [[ $SECURE_UPGRADE_MODE == "prod" ]]; then
# Vendor-specific production signing is invoked here
OUTPUT_SEC_BOOT_DIR=$FILESYSTEM_ROOT/boot
if [ ! -f $sonic_su_prod_signing_tool ]; then
echo "Error: SONiC sonic_su_prod_signing_tool=$sonic_su_prod_signing_tool script missing"
exit 1
fi
sudo $sonic_su_prod_signing_tool -a $CONFIGURED_ARCH \
-r $FILESYSTEM_ROOT \
-l $LINUX_KERNEL_VERSION \
-o $OUTPUT_SEC_BOOT_DIR \
$SECURE_UPGRADE_PROD_TOOL_ARGS
# Verify all EFI files and kernel modules in $OUTPUT_SEC_BOOT_DIR
sudo ./scripts/secure_boot_signature_verification.sh -e $OUTPUT_SEC_BOOT_DIR \
-c $SECURE_UPGRADE_SIGNING_CERT \
-k $FILESYSTEM_ROOT
# Verify the vmlinuz file
sudo ./scripts/secure_boot_signature_verification.sh -e $FILESYSTEM_ROOT/boot/vmlinuz-${LINUX_KERNEL_VERSION}-${CONFIGURED_ARCH} \
-c $SECURE_UPGRADE_SIGNING_CERT \
-k $FILESYSTEM_ROOT
fi
echo "Secure Boot support build stage: END."
fi
## Update initramfs
sudo chroot $FILESYSTEM_ROOT update-initramfs -u
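## Illustrative only (commented out): lsinitramfs from initramfs-tools can be
## used to confirm the regenerated initramfs contains the expected contents.
# sudo LANG=C chroot $FILESYSTEM_ROOT lsinitramfs /boot/initrd.img-${LINUX_KERNEL_VERSION}-${CONFIGURED_ARCH}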
## Convert initrd image to u-boot format
if [[ $TARGET_BOOTLOADER == uboot ]]; then
INITRD_FILE=initrd.img-${LINUX_KERNEL_VERSION}-${CONFIGURED_ARCH}
KERNEL_FILE=vmlinuz-${LINUX_KERNEL_VERSION}-${CONFIGURED_ARCH}
if [[ $CONFIGURED_ARCH == armhf ]]; then
INITRD_FILE=initrd.img-${LINUX_KERNEL_VERSION}-armmp
sudo LANG=C chroot $FILESYSTEM_ROOT mkimage -A arm -O linux -T ramdisk -C gzip -d /boot/$INITRD_FILE /boot/u${INITRD_FILE}
## Overwriting the initrd image with uInitrd
sudo LANG=C chroot $FILESYSTEM_ROOT mv /boot/u${INITRD_FILE} /boot/$INITRD_FILE
elif [[ $CONFIGURED_ARCH == arm64 ]]; then
if [[ $CONFIGURED_PLATFORM == pensando ]]; then
## Copy the device tree blob into /boot (XXX: need to compile dtb from dts)
sudo cp -v $PLATFORM_DIR/pensando/elba-asic-psci.dtb $FILESYSTEM_ROOT/boot/
## Compress the kernel image with gzip
sudo LANG=C chroot $FILESYSTEM_ROOT gzip /boot/${KERNEL_FILE}
sudo LANG=C chroot $FILESYSTEM_ROOT mv /boot/${KERNEL_FILE}.gz /boot/${KERNEL_FILE}
## Convert initrd image to u-boot format
sudo LANG=C chroot $FILESYSTEM_ROOT mkimage -A arm64 -O linux -T ramdisk -C gzip -d /boot/$INITRD_FILE /boot/u${INITRD_FILE}
## Overwriting the initrd image with uInitrd
sudo LANG=C chroot $FILESYSTEM_ROOT mv /boot/u${INITRD_FILE} /boot/$INITRD_FILE
else
sudo cp -v $PLATFORM_DIR/${sonic_asic_platform}-${CONFIGURED_ARCH}/sonic_fit.its $FILESYSTEM_ROOT/boot/
sudo LANG=C chroot $FILESYSTEM_ROOT mkimage -f /boot/sonic_fit.its /boot/sonic_${CONFIGURED_ARCH}.fit
fi
fi
fi
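## Illustrative only (commented out): mkimage -l prints the u-boot image header
## (image type, load address, compression) of a converted initrd, e.g.:
# sudo LANG=C chroot $FILESYSTEM_ROOT mkimage -l /boot/$INITRD_FILE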
# Collect host image version files before cleanup
SONIC_VERSION_CACHE=${SONIC_VERSION_CACHE} \
DBGOPT="${DBGOPT}" \
scripts/collect_host_image_version_files.sh $CONFIGURED_ARCH $IMAGE_DISTRO $TARGET_PATH $FILESYSTEM_ROOT
# Remove GCC
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get -y remove gcc
# Remove eatmydata
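## eatmydata was installed earlier in the build to wrap dpkg and suppress its
## fsync() calls, speeding up package installation in the chroot; remove the
## dpkg wrapper and the package so the final image uses the real dpkg.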
sudo rm $FILESYSTEM_ROOT/etc/apt/apt.conf.d/00image-install-eatmydata $FILESYSTEM_ROOT/usr/local/bin/dpkg
sudo LANG=C DEBIAN_FRONTEND=noninteractive chroot $FILESYSTEM_ROOT apt-get -y remove eatmydata
## Clean up apt
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get -y autoremove
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get autoclean
sudo LANG=C chroot $FILESYSTEM_ROOT apt-get clean
sudo LANG=C chroot $FILESYSTEM_ROOT bash -c 'rm -rf /usr/share/doc/* /usr/share/locale/* /var/lib/apt/lists/* /tmp/*'
## Clean up proxy
[ -n "$http_proxy" ] && sudo rm -f $FILESYSTEM_ROOT/etc/apt/apt.conf.d/01proxy
## Clean up pip cache
sudo LANG=C chroot $FILESYSTEM_ROOT pip3 cache purge
## Umount all
echo '[INFO] Umount all'
## Display details of all processes accessing /proc
sudo LANG=C chroot $FILESYSTEM_ROOT fuser -vm /proc
## Kill the processes
sudo LANG=C chroot $FILESYSTEM_ROOT fuser -km /proc || true
## Wait (up to 15 seconds) for fuser to finish killing the processes so /proc can be unmounted
sudo timeout 15s bash -c 'until LANG=C chroot $0 umount /proc; do sleep 1; done' $FILESYSTEM_ROOT || true
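## Note: $FILESYSTEM_ROOT is passed to the bash -c command above as $0 so it
## can be referenced inside the single-quoted script without being expanded
## by the outer shell.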
## Prepare empty directory to trigger mount move in initramfs-tools/mount_loop_root, implemented by patching
sudo mkdir $FILESYSTEM_ROOT/host
if [[ "$CHANGE_DEFAULT_PASSWORD" == "y" ]]; then
## Expire the default password for existing users that can log in
default_users=$(grep "/home" $FILESYSTEM_ROOT/etc/passwd | grep ":/bin/bash\|:/bin/sh" | awk -F ":" '{print $1}' 2> /dev/null)
for user in $default_users
do
sudo LANG=C chroot $FILESYSTEM_ROOT passwd -e ${user}
done
fi
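## Illustrative only (commented out): chage -l in the chroot would report
## "password must be changed" for an expired account; "admin" is just an
## example username here.
# sudo LANG=C chroot $FILESYSTEM_ROOT chage -l admin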
## Compress most file system into squashfs file
sudo rm -f $INSTALLER_PAYLOAD $FILESYSTEM_SQUASHFS
## Output the total file system size for diagnostic purposes
## Note: -x to skip directories on different file systems, such as /proc
sudo du -hsx $FILESYSTEM_ROOT
sudo mkdir -p $FILESYSTEM_ROOT/var/lib/docker
## Clear DNS configuration inherited from the build server
sudo rm -f $FILESYSTEM_ROOT/etc/resolvconf/resolv.conf.d/original
sudo cp files/image_config/resolv-config/resolv.conf.head $FILESYSTEM_ROOT/etc/resolvconf/resolv.conf.d/head
## Optimize filesystem size
if [ "$BUILD_REDUCE_IMAGE_SIZE" = "y" ]; then
sudo scripts/build-optimize-fs-size.py "$FILESYSTEM_ROOT" \
--image-type "$IMAGE_TYPE" \
--hardlinks var/lib/docker \
--hardlinks usr/share/sonic/device \
--remove-docs \
--remove-mans \
--remove-licenses
fi
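## Note: the squashfs is built with zstd compression and 1MiB blocks; /boot,
## /var/lib/docker and $PLATFORM_DIR are excluded (-e) since they are packaged
## separately into the installer payload below.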
sudo mksquashfs $FILESYSTEM_ROOT $FILESYSTEM_SQUASHFS -comp zstd -b 1M -e boot -e var/lib/docker -e $PLATFORM_DIR
## Reduce /boot permission
sudo chmod -R go-wx $FILESYSTEM_ROOT/boot
# Ensure admin gid is 1000
gid_user=$(sudo LANG=C chroot $FILESYSTEM_ROOT id -g $USERNAME) || gid_user="none"
if [ "${gid_user}" != "1000" ]; then
die "expect gid 1000. current:${gid_user}"
fi
# ALERT: This bit of logic tears down the qemu-based build environment used to
# perform builds for the ARM architecture. It must be the last step in this
# script before creating the SONiC installer payload zip file.
if [[ $MULTIARCH_QEMU_ENVIRON == y || $CROSS_BUILD_ENVIRON == y ]]; then
# Remove qemu arm bin executable used for cross-building
sudo rm -f $FILESYSTEM_ROOT/usr/bin/qemu*static || true
DOCKERFS_PATH=../dockerfs/
fi
## Compress docker files
pushd $FILESYSTEM_ROOT && sudo tar -I $GZ_COMPRESS_PROGRAM -cf $OLDPWD/$FILESYSTEM_DOCKERFS -C ${DOCKERFS_PATH}var/lib/docker .; popd
## Package /boot and $PLATFORM_DIR together with the squashfs and dockerfs into the installer payload zip file
pushd $FILESYSTEM_ROOT && sudo tar -I $GZ_COMPRESS_PROGRAM -cf platform.tar.gz -C $PLATFORM_DIR . && sudo zip -n .gz $OLDPWD/$INSTALLER_PAYLOAD -r boot/ platform.tar.gz; popd
sudo zip -g -n .squashfs:.gz $INSTALLER_PAYLOAD $FILESYSTEM_SQUASHFS $FILESYSTEM_DOCKERFS
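## Note: `zip -g` grows the existing archive in place, and `-n .squashfs:.gz`
## stores members with those suffixes as-is instead of deflating them again,
## since they are already compressed.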