#### Why I did it
Fix the build failure caused by the installer image size being too small. The installer image is only used during the build; it does not impact the final images.
See https://dev.azure.com/mssonic/build/_build/results?buildId=139926&view=logs&j=cef3d8a9-152e-5193-620b-567dc18af272&t=359769c4-8b5e-5976-a793-85da132e0a6f
```
+ fallocate -l 2048M ./sonic-installer.img
+ mkfs.vfat ./sonic-installer.img
mkfs.fat 4.2 (2021-01-31)
++ mktemp -d
+ tmpdir=/tmp/tmp.TqdDSc00Cn
+ mount -o loop ./sonic-installer.img /tmp/tmp.TqdDSc00Cn
+ cp target/sonic-vs.bin /tmp/tmp.TqdDSc00Cn/onie-installer.bin
cp: error writing '/tmp/tmp.TqdDSc00Cn/onie-installer.bin': No space left on device
[ FAIL LOG END ] [ target/sonic-vs.img.gz ]
```
#### How I did it
Increase the size from 2048M to 4096M.
Why not increase it to 16G like the qcow2 image?
The qcow2 format supports sparse disks: even though a large disk size is allocated, it does not consume the real disk space. fallocate does not create a sparse file, so allocating a very large disk that is mostly unused would require more space to build.
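As a quick illustration of the difference (file names are just examples), a qcow2 image is sparse while a fallocate'd file consumes its full size immediately:
```
# qcow2 is sparse: the file starts near 0 bytes on disk even with a 16G virtual size
qemu-img create -f qcow2 sonic-vs.qcow2 16G
# fallocate preallocates real blocks: this consumes 4 GiB of disk immediately
fallocate -l 4096M sonic-installer.img
# compare apparent size vs. actual disk usage
du -h --apparent-size sonic-vs.qcow2 sonic-installer.img
du -h sonic-vs.qcow2 sonic-installer.img
```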
* Ported Marvell armhf build on x86 for debian buster to use cross-compilation instead of qemu emulation
The current armhf SONiC build on an amd64 host uses qemu emulation. Due to the
nature of the emulation it takes a very long time, about 22-24 hours, to
complete the build. This change reduces the build time by porting the SONiC
armhf build on the amd64 host for the Marvell platform on Debian buster to use
cross-compilation on an arm64 host for the armhf target. The overall SONiC
armhf build time using cross-compilation is reduced to about 6 hours.
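A hypothetical build invocation under this change might look like the following (CROSS_BUILD_ENVIRON is added by this PR; the target name is illustrative):
```
# configure for the Marvell armhf platform, then build with cross-compilation
# enabled instead of qemu emulation
make configure PLATFORM=marvell-armhf PLATFORM_ARCH=armhf
make CROSS_BUILD_ENVIRON=y target/sonic-marvell-armhf.bin
```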
Signed-off-by: marvell <marvell@cpss-build3.marvell.com>
* Fixed final Sonic image build with dockers inside
* Update Dockerfile.j2
Fixed qemu-user-static:x86_64-aarch64-5.0.0-2.
* Update cross-build-arm-python-reqirements.sh
Added support for both armhf and arm64 cross-build platform using $PY_PLAT environment variable.
* Update Makefile
Added TARGET=<cross-target> for armhf/arm64 cross-compilation.
* Addressed reviewer @qiluo-msft's requests
Signed-off-by: marvell <marvell@cpss-build3.marvell.com>
* Added new radius/pam patch for arm64 support
* Update slave.mk
Added missing back tick.
* Added libgtest-dev and libgmock-dev to the buster Dockerfile.j2. Fixed the arm perl version to be generic
* Added missing armhf/arm64 entries in /etc/apt/sources.list
* Fixed the libc-bin core dump issue from the xumia:fix-libc-bin-install-issue commit
* Removed unnecessary 'apt-get update' from sonic-slave-buster/Dockerfile.j2
* Addressed reviewer @saiarcot895's requests
* Fixed README and replaced 'sed/awk' with patches
* Fixed ntp build to use openssl
* Stopped using the sonic-slave-buster/cross-build-arm-python-reqirements.sh script (moved all prebuilt Python package cross-compilation/installation inside Dockerfile.j2). Fixed src/snmpd/Makefile to use -j1 in all cases
* Clean armhf cross-compilation build fixes
* Ported cross-compilation armhf build to bullseye
* Additional change for bullseye
* Set the CROSS_BUILD_ENVIRON default value to n
* Removed python2 references
* Fixes after merge with the upstream
* Deleted unused sonic-slave-buster/cross-build-arm-python-reqirements.sh file
* Addressed 2 of @saiarcot895's requests
* Addressed @saiarcot895's review requests
* Removed use of prebuilt python wheels
* Incorporated saiarcot895 CC/CXX and other simplification/generalization changes
Signed-off-by: marvell <marvell@cpss-build3.marvell.com>
* Addressed reviewer @saiarcot895's additional requests
* src/libyang/patch/debian-packaging-files.patch
* Removed the --no-deps option when installing wheels. Removed the unnecessary lazy_object_proxy arm python3 package installation
Co-authored-by: marvell <marvell@cpss-build3.marvell.com>
Co-authored-by: marvell <marvell@cpss-build2.marvell.com>
#### Why I did it
Fix a host image Debian package version issue.
The package dependencies may break when some Debian packages in the base image are upgraded. For example, libc is installed in the base image, but if the mirror has a newer version, running "apt-get upgrade" will upgrade the package unexpectedly. To avoid this, the package versions need to be pinned when building the host image.
#### How I did it
The package versions of host-image should include those of host-base-image.
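A minimal sketch of the idea, assuming the versions are recorded in the same `package==version` form used by the other versions files (the output file name is illustrative):
```
# record the exact package versions present in the base image so the host
# image build can pin them instead of silently taking newer mirror versions
dpkg-query -W -f '${Package}==${Version}\n' | sort > versions-deb-host-base-image
```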
#### Why I did it
The vs image build is failing. Increase the KVM device space.
```
Info: Attempting file://dev/vdb/onie-installer ...
Info: Attempting file://dev/vdb/onie-installer.bin ...
cp: write error: No space left on device
Failure: local_fs_run():/dev/vdb Unable to copy /tmp/tmp.CPY0ad/onie-installer.bin to tmpfs
```
When a package name contains special characters, such as +, the name may be URL-encoded (for example, + becomes %2b), and the package URL will not be found when reproducible build is enabled.
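A minimal sketch of the decoding problem (the package name and the handling are illustrative; the real fix lives in the reproducible-build scripts):
```
# a '+' in a package name may arrive URL-encoded as %2b; decode it before
# looking the package up, otherwise the URL lookup fails
pkg='g%2b%2b-10'
decoded=$(printf '%s' "$pkg" | sed 's/%2[bB]/+/g')
echo "$decoded"   # prints: g++-10
```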
* Remove the rw folder from the image after installing in KVM
When the image is installed from within KVM and then loaded, some files
(such as timer stamp files) are created as part of that bootup that then
get into the final image. This can cause some side effects, such as
systemd thinking that some persistent timers need to run because the
last trigger time got missed.
Therefore, at the end of the check_install.py script, remove the rw
folder so that it doesn't exist in the image, and that when this image
is started up in a KVM setup for the first time, it starts with a truly
clean slate.
Without this change, the issue seen was that for fstrim.timer, a stamp
file would be present in /var/lib/systemd/timers (and for other timers
that are marked as persistent). This would then cause fstrim.service to
get started immediately when starting a QEMU setup, rather than waiting
10 minutes after bootup, if the timer for that service had missed a
trigger. In the case of fstrim.timer, that would happen if the image was
started in QEMU after the following Monday, since that timer is
scheduled to trigger weekly.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
* Split installation of SONiC and test bootup into two separate scripts
Just removing the rw directory causes other issues, since the first-boot
tasks no longer run because that file isn't present. Also, just recreating
that file doesn't completely help, because some files are moved from the
/host folder into the base filesystem layer, and so are no longer
available.
Instead, split the installation of SONiC and doing the test bootup into
two separate scripts and two separate KVM instances. The first KVM
instance is the one currently being run, while the second one has the
`-snapshot` flag added in, which means any changes to the disk image
don't take effect.
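A sketch of the idea, assuming a plain qemu-system invocation (the flags besides `-snapshot` are illustrative):
```
# second KVM instance: -snapshot makes all disk writes go to a temporary
# overlay, so the test bootup cannot dirty the installed image
qemu-system-x86_64 -m 2048 -nographic -snapshot \
    -drive file=target/sonic-vs.img,format=raw
```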
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Fix the "no space left on device" issue in tmpfs.
```
2021-12-01T06:30:40.1651742Z cp: write error: No space left on device
2021-12-01T06:30:40.1652225Z Failure: local_fs_run():/dev/vdb Unable to copy /tmp/tmp.gl4Sgp/onie-installer.bin to tmpfs
```
#### Why I did it
Fix an issue where some version files are not used.
For example, when building marvell-armhf, the version file version-py3-all-armhf is expected to be used, but it is not.
1. Fix the build for armhf and arm64
2. Upgrade Centec TsingMa BSP support to the 5.10 kernel
3. Modify the Centec platform driver for Linux 5.10
Co-authored-by: Shi Lei <shil@centecnetworks.com>
The build failed on an Ubuntu 20.04 system with a KVM kernel, which does not have /proc/sys/vm/compact_memory.
We should check whether compact_memory is writable before writing to it.
Signed-off-by: Chris Ward <tjcw@uk.ibm.com>
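A minimal sketch of the guard described above (the surrounding script context is assumed):
```
# only trigger memory compaction when the sysctl file exists and is writable;
# some kernels (e.g. Ubuntu 20.04 KVM kernels) do not provide it
if [ -w /proc/sys/vm/compact_memory ]; then
    echo 1 > /proc/sys/vm/compact_memory
fi
```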
#### Why I did it
Multiple builds failed in the 202012 branch.
The failures are caused by the unstable ordering of the package URLs retrieved from the command "apt-get download --print-urls".
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
This PR is part of SONiC Application Extension
Depends on #5938
- Why I did it
To provide an infrastructure change in order to support the SONiC Application Extension feature.
- How I did it
Label every installable SONiC Docker with a minimal required manifest, and auto-generate the packages.json file based on the
installed SONiC Docker images.
- How to verify it
Build an image and execute the following command:
```
admin@sonic:~$ docker inspect docker-snmp:1.0.0 | jq '.[0].Config.Labels["com.azure.sonic.manifest"]' -r | jq
```
Cat the /var/lib/sonic-package-manager/packages.json file to verify that all Dockers are listed there.
- Support compiling the SONiC ARM image on an ARM server. If the ARM image is compiled on an ARM server instead of using qemu emulation on an x86 server, significant compile time can be saved.
- Add the kernel argument systemd.unified_cgroup_hierarchy=0 for the systemd upgrade to version 247, according to #7228
- Rename the multiarch docker to sonic-slave-${distro}-march-${arch}
Co-authored-by: Xianghong Gu <xgu@centecnetworks.com>
Co-authored-by: Shi Lei <shil@centecnetworks.com>
py2/py3/deb package names are case-insensitive, and the versions map
key should be the same for packages whose names differ only in case.
For example, in files/build/versions/default/versions-py3, package
"click==7.1.2" is pinned; and in
files/build/versions/dockers/docker-sonic-vs/versions-py3, package
"Click==7.0" is pinned.
Without this fix, the aggregated versions-py3 file used for building
docker-sonic-vs looks like below:
```
...
click==7.1.2
Click==7.0
...
```
However, we actually want "click==7.0" to overwrite "click==7.1.2" for the
docker-sonic-vs build.
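A minimal sketch of the intended aggregation, assuming later files win (the lower-casing is illustrative, not the exact implementation):
```
# normalize names to lower case while merging, so "Click==7.0" from the
# docker-specific file overwrites "click==7.1.2" from the default file
cat files/build/versions/default/versions-py3 \
    files/build/versions/dockers/docker-sonic-vs/versions-py3 \
  | awk -F'==' '{ ver[tolower($1)] = $2 } END { for (p in ver) print p "==" ver[p] }'
```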
* Added debug symbols to many debug dockers.
* For debug images *only*:
1) Archive source files into the debug image.
2) The archived source is copied into /src.
3) Created an empty dir /debug.
4) Mount both /src as ro and /debug as rw into every docker.
5) The login banner gives some details on /src and /debug.
6) Devs can copy a core file into /debug and view it from inside a container.
7) Devs may create all gdb logs and other data directly in /debug.
* Dropped redundant REDIS_TOOLS per review comments.
* Added debug symbols to frr package and hence FRR based BGP docker.
* 1) Moved dbg_files.sh to scripts/.
2) Src directories to archive are now collected from individual Makefiles.
3) Added a few more debug symbols.
4) Added a few more debug dockers.
Hereafter no more changes except per review comments.
To debug:
1. Install the required version of the debug image on the switch or VM.
2. Copy the core file into /debug on the host.
3. Get into the docker and run:
```
gdb /usr/bin/<daemon> -c /debug/<your core file>
```
4. Inside gdb, use `set directory /src/...` to get the source.
For non-in-depth debugging:
1. Download the corresponding debug Docker image (docker-...-dbg.gz) to your VM.
2. Load the image.
3. Run the image with the entrypoint set to 'bash', with the dir containing the core mapped in.
4. Run gdb on the core.
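For example (the image name and core directory are illustrative):
```
# run the debug docker with bash as the entrypoint and the directory
# containing the core file mapped in as /debug
docker run -it --entrypoint bash -v /var/core:/debug docker-fpm-frr-dbg:latest
```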
* [build]: wait 60 seconds for docker engine to start
On some platforms, it can take more than 1 second for the docker
engine to start.
Signed-off-by: Guohan Lu <gulv@microsoft.com>
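A minimal sketch of the wait loop (the actual build script may differ):
```
# poll for up to 60 seconds until the docker engine answers
for i in $(seq 1 60); do
    docker info > /dev/null 2>&1 && break
    sleep 1
done
```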