Why I did it
[Build] Support using the snapshot mirror for the Debian base image
How I did it
If MIRROR_SNAPSHOT=n, use the default mirror http://deb.debian.org/debian.
If MIRROR_SNAPSHOT=y, use the snapshot mirror, for instance http://packages.trafficmanager.net/snapshot/debian/20230330T000330Z/.
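A minimal sketch of the selection logic (MIRROR_URL and SNAPSHOT_TIMESTAMP are illustrative names, not necessarily the variables used in the build):
```
# Sketch only: pick the Debian mirror based on MIRROR_SNAPSHOT.
MIRROR_SNAPSHOT=${MIRROR_SNAPSHOT:-n}
SNAPSHOT_TIMESTAMP=${SNAPSHOT_TIMESTAMP:-20230330T000330Z}

if [ "$MIRROR_SNAPSHOT" = "y" ]; then
    MIRROR_URL="http://packages.trafficmanager.net/snapshot/debian/${SNAPSHOT_TIMESTAMP}/"
else
    MIRROR_URL="http://deb.debian.org/debian"
fi

# Bootstrap the base filesystem from the selected mirror.
debootstrap --arch amd64 bullseye ./fsroot "$MIRROR_URL"
```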
How to verify it
+ scripts/build_debian_base_system.sh amd64 bullseye ./fsroot-vs
I: Target architecture can be executed
I: Retrieving InRelease
I: Checking Release signature
I: Valid Release signature (key id A4285295FC7B1A81600062A9605C66F00D6C9793)
I: Retrieving Packages
I: Validating Packages
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Checking component main on http://packages.trafficmanager.net/snapshot/debian/20230331T000125Z...
I: Retrieving libacl1 2.2.53-10
Co-authored-by: xumia <59720581+xumia@users.noreply.github.com>
Why I did it
Optimize the version control for Debian packages.
Fix the sonic-slave-buster/sources.list.amd64 not found display issue; the file needs to be generated before running the shell command that evaluates the SONiC image tag.
When the snapshot mirror is used, it is not necessary to update the version files based on the base image. This reduces version dependency issues when an image is not built at the time the versions are frozen.
How I did it
Do not update the version files when the snapshot mirror is enabled.
How to verify it
Why I did it
Change the mirror config file.
Use files/build/versions/default/versions-mirror only when the reproducible build is enabled.
The config in files/build/versions is only for the reproducible build, while the snapshot mirror feature does not depend on the reproducible build.
How I did it
Skip the mirror config in files/build/versions/default/versions-mirror if the reproducible build is not enabled.
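A hedged sketch of the skip, assuming a simple y/n flag for the reproducible build (REPRODUCIBLE_BUILD and MERGED_VERSIONS_DIR are placeholder names):
```
# Illustrative only: merge the pinned mirror config only when the
# reproducible build is enabled.
VERSIONS_MIRROR=files/build/versions/default/versions-mirror
MERGED_VERSIONS_DIR=${MERGED_VERSIONS_DIR:-/tmp/versions}
mkdir -p "$MERGED_VERSIONS_DIR"

if [ "${REPRODUCIBLE_BUILD}" = "y" ] && [ -f "$VERSIONS_MIRROR" ]; then
    cat "$VERSIONS_MIRROR" >> "$MERGED_VERSIONS_DIR/versions-mirror"
fi
```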
How to verify it
Co-authored-by: xumia <59720581+xumia@users.noreply.github.com>
Why I did it
Cherry pick from #13097
[Build] Support Debian snapshot mirror to improve build stability
It enhances the reproducible build by supporting the Debian snapshot mirror. It guarantees that all the docker images use the same Debian mirror snapshot and fixes the intermittent build failures caused by remote Debian mirror indexes changing during the build. It also fixes the version conflict issue caused by some of the Debian packages having no fixed versions.
How I did it
Add a new feature to support the Debian snapshot mirror.
How to verify it
Why I did it
Unify the Debian mirror sources.
Make it easy to upgrade to the next Debian release, with no source URL code changes required. Support customizing the Debian mirror sources during the build.
Relative issue: #12523
How I did it
How to verify it
Why I did it
The Makefile fetches some dependencies from the Internet, so it can fail on network-related issues.
Retries will fix most of these issues.
How I did it
Add retries when running commands that may depend on the network.
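A minimal sketch of the kind of retry wrapper described (function name, attempt count, and the example URL are illustrative, not the exact change):
```
# Retry a network-related command a few times before giving up.
retry() {
    local attempts=3 delay=10 i
    for i in $(seq 1 "$attempts"); do
        "$@" && return 0
        echo "Attempt $i/$attempts failed: $*" >&2
        [ "$i" -lt "$attempts" ] && sleep "$delay"
    done
    return 1
}

# Example: wrap a download that may hit a transient network error
# (the URL below is a placeholder).
retry wget -O /tmp/some-package.deb "http://deb.debian.org/debian/pool/..."
```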
How to verify it
Why I did it
Fix the unstable build issue caused by the KVM port 9000 not being ready to use within 2 seconds.
2022-09-02T10:57:30.8122304Z + /usr/bin/kvm -m 8192 -name onie -boot order=cd,once=d -cdrom target/files/bullseye/onie-recovery-x86_64-kvm_x86_64_4_asic-r0.iso -device e1000,netdev=onienet -netdev user,id=onienet,hostfwd=:0.0.0.0:3041-:22 -vnc 0.0.0.0:0 -vga std -drive file=target/sonic-6asic-vs.img,media=disk,if=virtio,index=0 -drive file=./sonic-installer.img,if=virtio,index=1 -serial telnet:127.0.0.1:9000,server
2022-09-02T10:57:30.8123378Z + sleep 2.0
2022-09-02T10:57:30.8123889Z + '[' -d /proc/284923 ']'
2022-09-02T10:57:30.8124528Z + echo 'to kill kvm: sudo kill 284923'
2022-09-02T10:57:30.8124994Z to kill kvm: sudo kill 284923
2022-09-02T10:57:30.8125362Z + ./install_sonic.py
2022-09-02T10:57:30.8125720Z Trying 127.0.0.1...
2022-09-02T10:57:30.8126041Z telnet: Unable to connect to remote host: Connection refused
How I did it
Wait longer until TCP port 9000 is ready, up to 60 seconds at maximum.
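A sketch of the wait loop, assuming a simple port probe (the helper name and the use of nc are assumptions, not the exact script):
```
# Poll the KVM serial telnet port until it accepts connections,
# up to 60 seconds.
wait_kvm_serial() {
    local port=9000 timeout=60 waited=0
    until nc -z 127.0.0.1 "$port"; do
        waited=$((waited + 1))
        if [ "$waited" -ge "$timeout" ]; then
            echo "KVM serial port $port not ready after ${timeout}s" >&2
            return 1
        fi
        sleep 1
    done
}
```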
#### Why I did it
Fix the build failure caused by the installer image size being too small. The installer image is only used during the build and does not impact the final images.
See https://dev.azure.com/mssonic/build/_build/results?buildId=139926&view=logs&j=cef3d8a9-152e-5193-620b-567dc18af272&t=359769c4-8b5e-5976-a793-85da132e0a6f
```
+ fallocate -l 2048M ./sonic-installer.img
+ mkfs.vfat ./sonic-installer.img
mkfs.fat 4.2 (2021-01-31)
++ mktemp -d
+ tmpdir=/tmp/tmp.TqdDSc00Cn
+ mount -o loop ./sonic-installer.img /tmp/tmp.TqdDSc00Cn
+ cp target/sonic-vs.bin /tmp/tmp.TqdDSc00Cn/onie-installer.bin
cp: error writing '/tmp/tmp.TqdDSc00Cn/onie-installer.bin': No space left on device
[ FAIL LOG END ] [ target/sonic-vs.img.gz ]
```
#### How I did it
Increase the size from 2048M to 4096M.
Why not increase it to 16G like the qcow2 image?
The qcow2 format supports sparse disks: even though a big disk size is allocated, it does not consume the real disk space. fallocate does not create a sparse disk, and we do not want to allocate a very big disk that is mostly unused; it would require more space to build.
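For comparison, a hedged illustration of the two allocation styles (commands shown for explanation, not copied from the build scripts):
```
# The installer image is a plain FAT image; fallocate reserves the full size,
# so 4096M of real disk space is consumed during the build.
fallocate -l 4096M ./sonic-installer.img
mkfs.vfat ./sonic-installer.img

# A qcow2 disk, by contrast, is sparse: a 16G virtual size only consumes
# space as data is actually written.
qemu-img create -f qcow2 ./sonic-vs.qcow2 16G
```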
Why I did it
Cherry pick PR: #11072
[Bug]: fix the version file name issue
Why I did it
[Bug]: fix the version file name issue
Fix the build failure: https://dev.azure.com/mssonic/build/_build/results?buildId=107211&view=results
+ scripts/build_debian_base_system.sh amd64 bullseye ./fsroot-centec
sed: can't read /tmp/tmp.glTzJefV24/version-deb: No such file or directory
Not found host-base-image packages, please check the version files in files/build/versions/host-base-image
How I did it
Change version-deb to versions-deb.
Also add an improvement for the host base image build: if the version path does not exist, skip the version control for the base image.
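A hedged sketch of the skip logic (only the path and message wording come from the error above; the copy step and temp directory are illustrative):
```
# Illustrative only: skip the base image version control when the versions
# file is missing, instead of failing the build.
VERSIONS_DEB=files/build/versions/host-base-image/versions-deb
TMP_DIR=$(mktemp -d)

if [ -f "$VERSIONS_DEB" ]; then
    cp "$VERSIONS_DEB" "$TMP_DIR/versions-deb"
else
    echo "Not found host-base-image packages, skipping version control for base image" >&2
fi
```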
How to verify it
https://dev.azure.com/mssonic/build/_build/results?buildId=107587&view=results
Why I did it
Fix the host image Debian package version issue.
The package dependencies may break when some Debian packages of the base image are upgraded. For example, libc is installed in the base image, but if the mirror has a newer version, running "apt-get upgrade" upgrades the package unexpectedly. To avoid such issues, the versions need to be added when building the host image.
How I did it
The package versions of the host image should include those of the host base image.
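A minimal sketch of the intent (the merged file name and the pinned version in the comment are placeholders):
```
# Merge the host-base-image pins into the host-image pins so that
# "apt-get upgrade" cannot silently move a base package forward.
cat files/build/versions/host-base-image/versions-deb \
    files/build/versions/host-image/versions-deb | sort -u > versions-deb-merged

# Installing with an explicit version keeps a base package such as libc fixed:
# apt-get install -y libc6=<pinned-version>
```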
#### Why I did it
Info: Attempting file://dev/vdb/onie-installer ...
Info: Attempting file://dev/vdb/onie-installer.bin ...
cp: write error: No space left on device
Failure: local_fs_run():/dev/vdb Unable to copy /tmp/tmp.CPY0ad/onie-installer.bin to tmpfs
The vs image build is failing. Increase the KVM device space.
When a package name contains special characters, such as +, the name may be URL-encoded (for example, + becomes %2b), and the package URL will not be found when the reproducible build is enabled.
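An illustration of the symptom and one possible normalization (the URL is an example, and the decode one-liner assumes bash):
```
# apt may report an encoded file name such as "g%2b%2b-10_..._amd64.deb",
# while the mirror stores "g++-10_..._amd64.deb"; decoding the URL before the
# lookup avoids the missing-package error.
url="http://deb.debian.org/debian/pool/main/g/gcc-10/g%2b%2b-10_10.2.1-6_amd64.deb"
decoded=$(printf '%b' "${url//%/\\x}")
echo "$decoded"
```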
* Remove the rw folder from the image after installing in KVM
When the image is installed from within KVM and then loaded, some files
(such as timer stamp files) are created as part of that bootup that then
get into the final image. This can cause some side effects, such as
systemd thinking that some persistent timers need to run because the
last trigger time got missed.
Therefore, at the end of the check_install.py script, remove the rw
folder so that it doesn't exist in the image, and that when this image
is started up in a KVM setup for the first time, it starts with a truly
clean slate.
Without this change, the issue seen was that for fstrim.timer (and other
timers that are marked as persistent), a stamp file would be present in
/var/lib/systemd/timers. This would then cause fstrim.service to get
started immediately when starting a QEMU setup if the timer for that
service had missed a trigger, instead of waiting 10 minutes after bootup.
In the case of fstrim.timer, that would happen whenever the image was
started in QEMU after the following Monday, since that timer is scheduled
to trigger weekly.
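A hedged sketch of the cleanup step (the image directory pattern under /host is an assumption, not taken from check_install.py):
```
# Illustrative cleanup: wipe the rw overlay directory from the freshly
# installed image so the first real boot starts from a clean slate.
sudo rm -rf /host/image-*/rw
sync
```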
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* Split installation of SONiC and test bootup into two separate scripts
Just removing the rw directory causes other issues, because the first-boot
tasks no longer run when that file isn't present. Also, just recreating
that file doesn't completely help, because there are some files that are
moved from the /host folder into the base filesystem layer, and so are
no longer available.
Instead, split the installation of SONiC and doing the test bootup into
two separate scripts and two separate KVM instances. The first KVM
instance is the one currently being run, while the second one has the
`-snapshot` flag added in, which means any changes to the disk image
don't take effect.
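A sketch of the second KVM invocation, abbreviated from the flags in the earlier log, to show where `-snapshot` fits:
```
# Test bootup run: same disk image as the install step, but with -snapshot so
# nothing written during this boot is persisted into target/sonic-vs.img.
/usr/bin/kvm -m 8192 -name onie -snapshot \
    -drive file=target/sonic-vs.img,media=disk,if=virtio,index=0 \
    -serial telnet:127.0.0.1:9000,server \
    -vnc 0.0.0.0:0 -vga std
```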
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Fix the "no space left on device" issue in tmpfs.
2021-12-01T06:30:40.1651742Z cp: write error: No space left on device
2021-12-01T06:30:40.1652225Z Failure: local_fs_run():/dev/vdb Unable to copy /tmp/tmp.gl4Sgp/onie-installer.bin to tmpfs
Why I did it
Fix the issue of some version files not being used.
For example, the version file version-py3-all-armhf: when building marvell-armhf, the version is expected to be used, but it is not.
1. Fix the build for armhf and arm64
2. Upgrade Centec TsingMa BSP support to the 5.10 kernel
3. Modify the Centec platform driver for Linux 5.10
Co-authored-by: Shi Lei <shil@centecnetworks.com>
The build failed on an Ubuntu 20.04 system with a KVM kernel, which does not have /proc/sys/vm/compact_memory.
We should check whether compact_memory is writable before writing to it.
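A minimal sketch of the guard (the actual change may differ):
```
# Only trigger memory compaction when the kernel exposes a writable knob
# (some kernels, e.g. Ubuntu 20.04 KVM kernels, do not have this file).
if [ -w /proc/sys/vm/compact_memory ]; then
    echo 1 | sudo tee /proc/sys/vm/compact_memory >/dev/null
fi
```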
Signed-off-by: Chris Ward <tjcw@uk.ibm.com>
Why I did it
Multiple builds failed in the 202012 branch.
It is caused by the unstable ordering of the package URLs retrieved from the command "apt-get download --print-uris".
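An illustrative way to make the ordering deterministic (curl and wget stand in for the packages being downloaded; this is not the exact fix):
```
# Sort the printed URLs so the list is stable across runs.
apt-get download --print-uris curl wget | awk '{print $1}' | tr -d "'" | sort
```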
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
This PR is part of SONiC Application Extension
Depends on #5938
- Why I did it
To provide an infrastructure change in order to support the SONiC Application Extension feature.
- How I did it
Label every installable SONiC Docker with a minimal required manifest and auto-generate packages.json file based on
installed SONiC images.
- How to verify it
Build an image, execute the following command:
admin@sonic:~$ docker inspect docker-snmp:1.0.0 | jq '.[0].Config.Labels["com.azure.sonic.manifest"]' -r | jq
cat the /var/lib/sonic-package-manager/packages.json file to verify that all dockers are listed there.
- Support compiling the SONiC ARM image on an ARM server. If the ARM image is compiled on an ARM server instead of using QEMU mode on an x86 server, compile time can be reduced significantly.
- Add the kernel argument systemd.unified_cgroup_hierarchy=0 for upgrading systemd to version 247, according to #7228
- Rename the multiarch docker to sonic-slave-${distro}-march-${arch}
Co-authored-by: Xianghong Gu <xgu@centecnetworks.com>
Co-authored-by: Shi Lei <shil@centecnetworks.com>
py2/py3/deb package names are case insensitive, and the versions map
key should be the same for packages whose names can appear with different cases.
For example, in files/build/versions/default/versions-py3, package
"click==7.1.2" is pinned; and in
files/build/versions/dockers/docker-sonic-vs/versions-py3, package
"Click==7.0" is pinned.
Without this fix, the aggregated versions-py3 file used for building
docker-sonic-vs looks like below:
...
click==7.1.2
Click==7.0
...
However, we actually want "click==7.0" to overwrite "click==7.1.2" for the
docker-sonic-vs build.
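A minimal sketch of the intended merge behavior, treating the package name case-insensitively so the later file wins (paths taken from the example above; the awk merge itself is illustrative):
```
# Lowercase the package name as the map key; entries from later files
# override earlier ones, so Click==7.0 replaces click==7.1.2.
awk -F'==' '{ ver[tolower($1)] = $2 } END { for (p in ver) print p "==" ver[p] }' \
    files/build/versions/default/versions-py3 \
    files/build/versions/dockers/docker-sonic-vs/versions-py3 \
    | sort
```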
* Added debug symbols to many debug dockers.
* For debug images *only*:
1) Archive source files into debug image
2) Archived source is copied into /src
3) Created an empty dir /debug
4) Mount both /src as ro & /debug as rw into every docker
5) Login banner will give some details on /src & /debug
6) Devs can copy core file into /debug and view it from inside a container.
7) Dev may create all gdb logs and other data directly into /debug.
* Dropped redundant REDIS_TOOLS per review comments.
* Added debug symbols to frr package and hence FRR based BGP docker.
* 1) Moved dbg_files.sh to scripts/
2) Src directories to archive are now collected from individual Makefiles.
3) Added few more debug symbols
4) Added few more debug dockers.
After this, there are no more changes except per review comments.
To debug:
Install the required version of the debug image on the switch or VM.
Copy core file into /debug of host
Get into Docker
gdb /usr/bin/<daemon> -c /debug/<your core file>
set directory /src/... <-- inside gdb to get the source
For non-in-depth debugging:
Download corresponding debug Docker image (docker-...-dbg.gz) to your VM
Load the image
Run image with entrypoint as 'bash' with dir containing core mapped in.
Run gdb on the core.
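The in-container flow above, written out as commands (the container, daemon, and core file names are placeholders):
```
# On the switch or VM, with the matching debug image installed:
cp core.1234 /debug/
docker exec -it bgp bash

# Inside the container, open the core against the daemon and point gdb at the
# archived sources mounted under /src:
gdb /usr/bin/zebra -c /debug/core.1234 -ex 'directory /src' -ex 'bt'
```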
* [build]: wait 60 seconds for docker engine to start
On some platforms, it can take more than 1 second for the docker
engine to start.
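A sketch of the wait (the polling command is an assumption about the shape of the check):
```
# Poll "docker info" for up to 60 seconds instead of assuming the engine is
# up after a fixed short sleep.
for i in $(seq 1 60); do
    docker info >/dev/null 2>&1 && break
    sleep 1
done
```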
Signed-off-by: Guohan Lu <gulv@microsoft.com>