Why I did it
K8S_OPTIONS may be empty, which causes a shell syntax error. This needs to be fixed.
Work item tracking
Microsoft ADO (number only): 25495020
How I did it
Add double quotes around K8S_OPTIONS to avoid the exception.
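A minimal sketch of the failure mode and the fix, assuming K8S_OPTIONS is used in a bash test (the surrounding condition is illustrative, not the exact script):

# Broken when K8S_OPTIONS is empty: [ $K8S_OPTIONS = "enabled" ] expands to
# [ = "enabled" ], and bash reports "unary operator expected".
# Quoted, the empty value stays a single (empty) test argument:
if [ "$K8S_OPTIONS" = "enabled" ]; then
    echo "kubernetes options enabled"
fi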
How to verify it
No more exceptions are thrown in the PR build validation pipeline.
Why I did it
Line 7 exits the script when the k8s files haven't changed.
Use 'System.PullRequest.TargetBranchName' instead of 'System.PullRequest.TargetBranch', because the git server in Azure DevOps does not support 'System.PullRequest.TargetBranch'.
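A hedged sketch of the corrected usage in a pipeline bash step (the git diff command and path are illustrative; only the variable rename is from this PR):

# Macro-expanded by Azure Pipelines before the script runs, e.g. "master".
# TargetBranchName is a plain branch name the AzDevOps git server can resolve;
# TargetBranch ("refs/heads/master") is not.
TARGET_BRANCH=$(System.PullRequest.TargetBranchName)
if git diff --quiet origin/$TARGET_BRANCH -- files/build/kubernetes; then
    echo "k8s files unchanged"
    exit 0
fi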
Work item tracking
Microsoft ADO (number only): 24636791
How I did it
How to verify it
Why I did it
Currently, the k8s master image is generated from a separate branch that we created ourselves, not from a release branch. We need to commit this k8s-master-related code to the master branch as a better way to build the k8s master image.
Work item tracking
Microsoft ADO (number only): 19998138
How I did it
Install the k8s dashboard docker images.
Install the Geneva mds, mdsd, and fluentd docker images and tag them as latest; tagging them as latest helps containers always be created from the latest version.
Install azure-storage-blob and azure-identity; these support etcd backup and restore.
Install the kubernetes Python client packages; these let us read worker and container state so we can send those metrics to Geneva.
Remove the mdm Debian package; it will be replaced with the mdm docker image.
Add the k8s master entrance script, which is called by the rc-local service at system startup. We have some master systemd services in the compute-move repo; when the VMM service creates a master VM, VMM copies all master service files into the VM, and the entrance script sets up all services according to those service files (see the sketch after this list).
When the entrance script content changes, the PR build sets include_kubernetes_master=y to help validate the k8s-master-related code changes. The default value of include_kubernetes_master should always be n for the public master branch; we will generate the master image from the internal master branch.
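A minimal sketch of such an entrance script, assuming hypothetical paths and unit names (the real service files come from the compute-move repo):

#!/bin/bash
# Called by the rc-local service at system startup.
SERVICE_DIR=/etc/sonic/k8s-master-services    # hypothetical drop location used by VMM
for unit in "$SERVICE_DIR"/*.service; do
    [ -e "$unit" ] || continue
    cp "$unit" /etc/systemd/system/           # register the unit file
    systemctl enable "$(basename "$unit")"    # start on every boot
    systemctl start "$(basename "$unit")"
done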
How to verify it
Build with INCLUDE_KUBERNETES_MASTER=y.
Why I did it
Refine PR test template format.
How I did it
Refine PR test template format.
How to verify it
PR test executed normally.
Signed-off-by: Chun'ang Li <chunangli@microsoft.com>
Why I did it
For the DASH scenario, APP_DB will be optimized with protobuf messages for lower memory consumption.
How I did it
Download the Debian package of protobuf 3.21.12 and create a corresponding rule for building it.
Add sonic-dash-api as a submodule and generate its Debian package, which includes the C++ and Python libraries.
How to verify it
Check the Azp artifacts: the protobuf-related and dash-api deb packages should be generated.
Signed-off-by: Ze Gan <ganze718@gmail.com>
Why I did it
Set build options in the pipeline UI.
Support setting the reproducible build options to py2,py3 in release branches and to none in the master branch.
Work item tracking
Microsoft ADO (number only): 22335854
How I did it
How to verify it
Why I did it
The libsaithriftv2 build fails, and nobody is maintaining the saiserverv2 build.
Remove them from the official build.
Work item tracking
Microsoft ADO (number only): 23764652
How I did it
How to verify it
Why I did it
Run kvmtest when updating package versions to avoid test breakage.
Work item tracking
Microsoft ADO (number only): 22335854
How I did it
How to verify it
Why I did it
Provide a method to override build options in the web UI.
This lets us stop Python dependency version upgrades.
Work item tracking
Microsoft ADO (number only): 22335854
How I did it
Add a variable at the tail of the make command.
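A minimal sketch, assuming BUILD_OPTIONS is a pipeline variable set from the web UI (the variable name and target are illustrative):

# Appending the variable to the make command lets the UI override build options,
# e.g. BUILD_OPTIONS="SONIC_VERSION_CONTROL_COMPONENTS=deb,py2,py3" pins the
# Python dependency versions.
make target/sonic-vs.img.gz $BUILD_OPTIONS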
Why I did it
Enable the reproducible build for the PR build on the master branch.
Fix the reproducible build variable display error in the slave container.
The config below shows none, although the config is set and takes effect:
"SONIC_VERSION_CONTROL_COMPONENTS": "none"
How I did it
Pass the variable through the slave container command line.
The variable had been passed to the slave container and the other docker containers by a config file; it is only used to display the value during the build.
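A hedged sketch of the idea (the docker invocation is illustrative, not the exact slave-container command line):

# Passing the value on the container command line makes it visible inside the
# slave container, instead of only through the config file used by the build.
docker run --rm -e SONIC_VERSION_CONTROL_COMPONENTS="deb,py2,py3,web,git,docker" \
    sonic-slave-bullseye env | grep SONIC_VERSION_CONTROL_COMPONENTS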
How to verify it
See https://dev.azure.com/mssonic/build/_build/results?buildId=247960&view=logs&j=88ce9a53-729c-5fa9-7b6e-3d98f2488e3f&t=88f376cf-c35d-5783-0a48-9ad83a873284
"SONIC_VERSION_CONTROL_COMPONENTS": "deb,py2,py3,web,git,docker"
Why I did it
The sonic-slave-stretch build failed because of the mmh3 version update to 3.10 on Mar 24.
How I did it
Enable the reproducible build for the vhdx image.
How to verify it
Why I did it
[CI] Support the SONiC reproducible build in Azure Pipelines.
Enable the reproducible build on the master branch.
Enable the mirror-snapshot-based build on 202205+ branches, which support snapshot builds.
How I did it
Enable the build flag on Azure Pipelines.
How to verify it
Why I did it
Docker build hangs at a low rate.
It hangs at different steps, so it looks like a bug in the docker daemon.
How I did it
Start a daemon process that scans for build processes running for more than 1 hour and kills them.
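A minimal sketch of such a watchdog, assuming the hung processes can be matched by name (the pattern, threshold, and scan interval are illustrative):

# Kill docker build processes that have been running for more than 1 hour.
while true; do
    for pid in $(pgrep -f "docker build"); do
        etimes=$(ps -o etimes= -p "$pid" 2>/dev/null)   # elapsed seconds
        [ -n "$etimes" ] && [ "$etimes" -gt 3600 ] && kill -9 "$pid"
    done
    sleep 60
done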
How to verify it
Why I did it
The new docker release v23.0 uses BUILDKIT by default.
It leads to an OOM issue in the pipeline build:
##[error]Exit code 137 returned from process: file name '/agent/externals/node16/bin/node',
How I did it
Disable BUILDKIT when building the sonic-slave-* images.
Keep checking whether issues remain when building docker images inside sonic-slave-*.
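For reference, the legacy builder can be selected per invocation with the DOCKER_BUILDKIT environment variable (the image name below is illustrative):

# Build the slave image with the classic builder instead of BUILDKIT.
DOCKER_BUILDKIT=0 docker build -t sonic-slave-bullseye .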
How to verify it
Check the docker build logs.
With BUILDKIT disabled, the log looks like:
Step 1/80 : FROM publicmirror.azurecr.io/debian:buster
---> ff5db168d4c5
Why I did it
The docker-sonic-mgmt build is failing.
How I did it
The stretch docker was disabled recently; update docker-sonic-mgmt to buster.
Migrate from sonictest to sonicbld, because Azure requires migrating the VMs from uswest2 to uswest3.
Fix a build issue when building the image.
How to verify it
Why I did it
The docker-sonic-slave pipeline uses a different tag than the PR build.
This makes ENABLE_DOCKER_BASE_PULL=y not work.
How I did it
Set the reproducible build option in bash.
How to verify it
Why I did it
The Azure pipeline display is not accurate now. For example, when the Run test step fails, the step itself shows as successful while the Kvmdump step shows as failed, even though Kvmdump did not actually fail. This PR improves the Azure pipeline display so that each step reports its own success or failure.
How I did it
1. Each step has its own indication of success or failure.
2. Use the chain of responsibility pattern to manage all statuses.
3. Modify the expected-state in each step.
Why I did it
Enable testing the SAI API on the bfn container with a lightweight container (saiserver).
How I did it
Enable the saiserver container on the Barefoot platform.
Add docker-saiserver-bfn.mk for building the saiserver container.
In platform/barefoot/docker-saiserver-bfn, add the necessary files needed in the saiserver container.
How to verify it
Tested on the Intel platform ec9516.
Signed-off-by: richardyu-ms <richard.yu@microsoft.com>
Why I did it
Currently the sonic-slave-* tags are confusing. Set correct tags on the sonic-slave-* images.
Fix the job names to fit the build.
How I did it
build amd64 image in amd64:
sonic-slave-bullseye:cfe29bff67c
sonic-slave-bullseye:latest
sonic-slave-bullseye:master
build armhf image in amd64:
sonic-slave-bullseye-march-armhf:33614806dc3
sonic-slave-bullseye-march-armhf:latest
sonic-slave-bullseye-march-armhf:master
build arm64 image in amd64:
sonic-slave-bullseye-march-arm64:f3b1b16c801
sonic-slave-bullseye-march-arm64:latest
sonic-slave-bullseye-march-arm64:master
build arm64 image in arm64:
sonic-slave-bullseye:75cb326c9a7
sonic-slave-bullseye-arm64:latest
sonic-slave-bullseye:master
build armhf image in armhf:
sonic-slave-bullseye:64d178951fc
sonic-slave-bullseye-armhf:latest
sonic-slave-bullseye:master
How to verify it
Why I did it
Tag the sonic-slave-* images with the branch.
Only keep the sonic-slave-* latest tag when it is the master branch and the amd64 arch.
How I did it
How to verify it
Why I did it
The Makefile needs some dependencies from the Internet, and it will fail on network-related issues.
Retries will fix most of these issues.
How I did it
Add retries when running commands that may be network-related.
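A minimal sketch of the retry idea (the command, retry count, and delay are illustrative):

# Retry a network-dependent command a few times before failing the build.
for attempt in 1 2 3; do
    wget -q -O /dev/null https://deb.debian.org/debian/dists/bullseye/Release && break
    echo "attempt $attempt failed, retrying..."
    sleep 10
done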
How to verify it
Why I did it
Previously, we set a timeout in each step, such as Lock testbed, Prepare testbed, Run test, and KVM dump. When an issue such as a retry happens in one step, it causes a timeout error, even though the step only needs more time to succeed. In this PR, we remove the timeout limit on each step and control the timeout at the job level; when a job runs for more than four hours, it is cancelled.
How I did it
Remove the timeout parameter from each step, and control the timeout at the job level.
How to verify it
Set the timeout of one job to 4 hours; when the timeout happens, Azure Pipelines cancels the job.
Why I did it
Previously, we hard-coded the min and max numbers of instances in a plan. In this PR, we support passing the instance numbers of a test plan.
How I did it
Use a variable to set the instance number.
* [SAI PTF] SAI PTF docker support sai-ptf v2
Publish the sai-ptf docker.
Take part of the change from the previous PR #11610 (already reverted because of a cache issue).
In #11610, two new targets were added: one is sai-ptf, and the other is syncd-rpc with sai-ptf v2. To make the upgrade a clearer target, this PR takes only the sai-ptf one.
Test one:
NOSTRETCH=y NOJESSIE=y make configure PLATFORM=vs
NOSTRETCH=y NOJESSIE=y NOBULLSEYE=y SAITHRIFT_V2=y make target/docker-ptf-sai.gz
Signed-off-by: richardyu-ms <richard.yu@microsoft.com>
* remove useless change
Signed-off-by: richardyu-ms <richard.yu@microsoft.com>
* remove useless parameters
Signed-off-by: richardyu-ms <richard.yu@microsoft.com>
* remove useless change
Signed-off-by: richardyu-ms <richard.yu@microsoft.com>
* Update azure-pipelines-build.yml
remove a useless option
Signed-off-by: richardyu-ms <richard.yu@microsoft.com>
Why I did it
Migrate multi-asic test jobs to TestbedV2.
How I did it
Add one parameter, num_asic, to create the test plan.
Modify azure-pipelines.yml to run multi-asic on TestbedV2.
Signed-off-by: Yutong Zhang <yutongzhang@microsoft.com>
Why I did it
Migrate t0-sonic test jobs to TestbedV2.
How I did it
Add two parameters to create the test plan.
Modify azure-pipelines.yml to run t0-sonic on TestbedV2.
Signed-off-by: Yutong Zhang <yutongzhang@microsoft.com>
Why I did it
Add dualtor test using TestbedV2 in buildimage repo.
How I did it
Add dualtor test using TestbedV2 in buildimage repo.
Signed-off-by: Yutong Zhang <yutongzhang@microsoft.com>
Why I did it
Fix the issue that the test plan can't be cancelled by the KVM dump stage.
How I did it
Fix the issue that the test plan can't be cancelled by the KVM dump stage.