Why I did it
Set build options in the pipeline UI.
Support setting the reproducible build option to py2,py3 in release branches and to none in the master branch.
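A minimal sketch of how the UI-settable option could look in an azp template; the parameter name and build target are illustrative, not the exact pipeline code:
```yaml
# Hypothetical parameter exposed in the pipeline UI; master keeps the
# default 'none', release branches pick 'py2,py3'.
parameters:
  - name: version_control_components
    displayName: Reproducible build components
    type: string
    default: none          # master branch default
    values:
      - none
      - py2,py3            # release branch default

steps:
  - script: |
      make SONIC_VERSION_CONTROL_COMPONENTS=${{ parameters.version_control_components }} \
           target/sonic-vs.img.gz
    displayName: Build with the selected components pinned
```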
Work item tracking
Microsoft ADO (number only): 22335854
How I did it
How to verify it
Why I did it
Enable the reproducible build for PR builds on the master branch.
Fix the reproducible build variable display error in the slave container.
The config below displays as none, although the option is set and takes effect:
"SONIC_VERSION_CONTROL_COMPONENTS": "none"
How I did it
Pass the variable through the slave container command line.
The variable is already passed to the slave container and the other Docker containers via a config file; the command-line copy is only used to display the value during the build.
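A sketch of the idea, assuming a plain docker run wrapper (image and target names are illustrative); the command-line copy only makes the value visible in the build log:
```yaml
steps:
  - script: |
      # Export the value on the slave container command line purely so the
      # build log can display it; the config file remains the real source.
      docker run --rm \
        -e SONIC_VERSION_CONTROL_COMPONENTS="$SONIC_VERSION_CONTROL_COMPONENTS" \
        sonic-slave-bullseye make target/sonic-vs.img.gz
    env:
      SONIC_VERSION_CONTROL_COMPONENTS: deb,py2,py3,web,git,docker
```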
How to verify it
See https://dev.azure.com/mssonic/build/_build/results?buildId=247960&view=logs&j=88ce9a53-729c-5fa9-7b6e-3d98f2488e3f&t=88f376cf-c35d-5783-0a48-9ad83a873284
"SONIC_VERSION_CONTROL_COMPONENTS": "deb,py2,py3,web,git,docker"
Why I did it
Cherry-pick the following commits from master to support the snapshot-based mirror, and fix the code conflicts (a pick sequence is sketched after the list):
ad162ae [Build] Optimize the version control for Debian packages (#14557)
38c5d7f [Build] Support j2 template for debian sources for docker ptf (#13198)
5e4826e [Ci] Support to use the same snapshot for all platform builds (#13913)
8206925 [Build] Change the default mirror version config file (#13786)
5e4a866 [Build] Support Debian snapshot mirror to improve build stability (#13097)
ac5d89c [Build] Support j2 template for debian sources (#12557)
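For reference, the equivalent pick sequence would look roughly like this (oldest first; each conflicting pick is resolved by hand as it is applied):
```yaml
steps:
  - script: |
      # Apply oldest-first so later fixes land on top of earlier ones;
      # -x records the original commit id in each new commit message.
      git cherry-pick -x ac5d89c 5e4a866 8206925 5e4826e 38c5d7f ad162ae
```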
Work item tracking
Microsoft ADO (number only): 18018114
How I did it
How to verify it
Why I did it
The sonic-slave-stretch build failed because mmh3 was updated to version 3.10 on Mar 24.
How I did it
Enable the reproducible build for the vhdx image, so that the previously recorded package versions are installed instead of the latest upstream release.
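A sketch of the fix, assuming the vhdx image is built like the other targets (the make target is an assumption); with versions pinned, a new upstream mmh3 release can no longer break the build:
```yaml
steps:
  - script: |
      # Build the vhdx image with component versions pinned to the recorded
      # snapshot (target name is an assumption).
      make SONIC_VERSION_CONTROL_COMPONENTS=deb,py2,py3,web,git,docker \
           target/sonic-vs.vhdx
```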
How to verify it
Why I did it
The Docker build hangs at a low rate, and at different steps each time, so it looks like a bug in the Docker daemon.
How I did it
Start a daemon process that scans for processes running longer than 1 hour and kills them.
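A minimal sketch of such a watchdog (the one-hour threshold is from this change; the process match and the kill policy are assumptions):
```yaml
steps:
  - script: |
      # Background watchdog: every minute, kill docker processes whose
      # elapsed running time (etimes, in seconds) exceeds one hour.
      (
        while true; do
          for pid in $(ps -eo pid,etimes,comm | awk '$2 > 3600 && $3 == "docker" {print $1}'); do
            echo "Killing hung docker process $pid"
            kill -9 "$pid" || true
          done
          sleep 60
        done
      ) &
    displayName: Start docker hang watchdog
```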
How to verify it
Why I did it
The Makefile needs some dependencies from the Internet, so it can fail on transient network issues.
Retries fix most of these failures.
How I did it
Add retries when running commands that may depend on the network.
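A minimal retry helper of the kind described (attempt count, delay, and the wrapped command are illustrative):
```yaml
steps:
  - script: |
      # Retry a flaky, network-dependent command up to 3 times.
      retry() {
        local n=1
        until "$@"; do
          [ "$n" -ge 3 ] && return 1
          echo "Attempt $n failed, retrying: $*"
          n=$((n + 1))
          sleep 5
        done
      }
      retry wget -q https://deb.debian.org/debian/dists/bullseye/Release
```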
How to verify it
Why I did it
Fix the issue where the official build was not triggered correctly, caused by the azp template path not existing.
How I did it
Change the azp template path.
Why I did it
It is not necessary to trigger the publish pipeline when the build has failed.
How I did it
Remove the condition on the azp task and use a template condition instead.
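A sketch of the contrast, with assumed parameter and script names; the template condition decides at template-expansion time whether the publish step is rendered at all:
```yaml
parameters:
  - name: publish
    type: boolean
    default: false

steps:
  # Runtime style (removed): the task is always rendered and only skipped:
  #   condition: and(succeeded(), eq(variables.publish, 'true'))
  # Template style: the step does not exist at all unless requested.
  - ${{ if eq(parameters.publish, true) }}:
      - script: ./publish.sh
        displayName: Publish artifacts
```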
Why I did it
Support triggering a pipeline that downloads artifacts and publishes them to storage and a container registry.
Support specifying patterns for which Docker images to upload.
How I did it
Pass the pipeline information and the artifact information as pipeline parameters to the pipeline being triggered. This decouples artifact generation from the publish logic: how and where the artifacts/Docker images are published depends on the triggered pipeline.
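One way the hand-off could look, using the Azure DevOps CLI (the pipeline name and parameter names are assumptions; the actual task may differ):
```yaml
steps:
  - script: |
      # Hand over the source build and the docker image patterns to the
      # publish pipeline; it decides how and where to publish.
      az pipelines run --name publish-artifacts \
        --parameters sourceBuildId=$(Build.BuildId) \
                     artifactName=sonic-buildimage.vs \
                     dockerImagePatterns="target/docker-sonic-vs*.gz"
```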
How to verify it
Why I did it
Exclude the innovium build from the version-upgrade build; those builds currently always fail, so exclude them temporarily.
Increase the broadcom build timeout.
Fix the version-file generation failure caused by the artifacts folder change.
After switching to the same template for PR builds, official builds and package version upgrades, the artifacts folder gained an extra "target" subfolder, so the version upgrade task has to be changed accordingly.
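The corresponding azp changes might look like this (job names, the timeout value, and the paths are assumptions):
```yaml
jobs:
  # The innovium job is simply left out of the job list for now.
  - job: broadcom
    timeoutInMinutes: 1440    # increased build timeout
    steps:
      - script: |
          # The version upgrade task now looks under the extra "target"
          # subfolder introduced by the shared template.
          find ./artifacts/target -name 'versions-*' -type f
```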
Why I did it
Docker Hub limits the pull rate.
Use ACR instead to pull the Debian-related Docker images.
How I did it
Set DEFAULT_CONTAINER_REGISTRY in the pipeline.
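For example (the registry host is an assumption), the pipeline variable redirects where the base images are pulled from:
```yaml
variables:
  # Pull the debian-related base images through ACR instead of Docker Hub
  # to avoid Docker Hub's pull-rate limit (registry host is illustrative).
  DEFAULT_CONTAINER_REGISTRY: publicmirror.azurecr.io
```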
Why I did it
[Build]: Enable marvell-armhf PR check
Improve the azp stage dependencies so that the Test stage depends only on the BuildVS stage. The Test stage is then triggered as soon as BuildVS finishes, reducing the waiting time.
Fix the "can't read nodesource.list" issue, which is caused by the full path not being used:
```
2021-12-03T06:59:26.0019306Z Removing intermediate container 77cfe980cd36
2021-12-03T06:59:26.0020872Z ---> 528fd40e60f6
2021-12-03T06:59:26.0021457Z Step 81/81 : RUN post_run_buildinfo
2021-12-03T06:59:26.0841136Z ---> Running in d804bd7e1b06
2021-12-03T06:59:29.1626594Z [91mDEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality.
2021-12-03T06:59:34.2960105Z [0m[91m/usr/bin/sed: can't read nodesource.list: No such file or directory
2021-12-03T06:59:34.5094880Z [0mThe command '/bin/sh -c post_run_buildinfo' returned a non-zero code: 2
```
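A sketch of the fix (the sed expression is illustrative): refer to the file by its absolute path so the command works regardless of the current directory:
```yaml
steps:
  - script: |
      # Before (broken): sed ran against "nodesource.list" relative to the
      # current directory. After: use the absolute path under
      # /etc/apt/sources.list.d/. The sed expression here is illustrative.
      sed -i 's/buster/bullseye/g' /etc/apt/sources.list.d/nodesource.list
```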
Co-authored-by: Ubuntu <xumia@xumia-vm1.jqzc3g5pdlluxln0vevsg3s20h.xx.internal.cloudapp.net>
With a Bullseye upgrade, a change that requires everything to get
rebuilt (including the slave containers) takes about 12 hours (the vs
job that builds the virtual switch image as well as the PTF container
sometimes times out towards the end). Part of this is because the kernel
is now built after all of the sonic containers (kernel is built in a
Bullseye slave, the docker containers are built in a Buster slave).
Another part is because during the ptf container build, for some reason,
all of the docker containers are rebuilt.
Therefore, to make sure PRs don't time out after Bullseye gets merged
in, bump up the timeout from 12 hours to 15 hours. This should be enough
for the builds to complete.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Why I did it
To be able to run VS tests on the official multi-asic VS image.
How I did it
Add a new script to build a multi-asic VS image by passing the NUM_ASIC build parameter.
Run multi-asic t1-lag test cases with the built image.
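A sketch of how this could be wired up (the script name, asic count, and test entry point are assumptions; NUM_ASIC is from this change):
```yaml
steps:
  - script: |
      # Build a multi-asic VS image (script name is hypothetical).
      NUM_ASIC=6 ./platform/vs/build_multi_asic_vs_image.sh
  - script: |
      # Run the multi-asic t1-lag test cases against the built image
      # (test entry point is illustrative).
      ./run-tests.sh --topology t1-lag --image target/sonic-vs.img.gz
```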