Why I did it
1. Upgrade centec-arm64 platform to Bookworm.
2. Fix the docker-syncd-centec-rpc.gz compile error on the Centec platform.
How I did it
1. Modified the platform driver for compatibility with the Bookworm kernel.
2. Upgraded the SONiC package versions for the Centec platform.
How to verify it
1. Compile the centec-arm64 platform to generate sonic-centec-arm64.bin.
2. Compile the centec platform to generate docker-syncd-centec-rpc.gz.
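The two builds above, as a minimal command sketch (assuming a standard sonic-buildimage checkout; exact target names can vary by branch):
```
# centec-arm64: build the Bookworm image
make init
make configure PLATFORM=centec-arm64 PLATFORM_ARCH=arm64
make target/sonic-centec-arm64.bin

# centec (x86): build the RPC syncd docker
make configure PLATFORM=centec
make target/docker-syncd-centec-rpc.gz ENABLE_SYNCD_RPC=y
```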
Signed-off-by: centecqianj <qianj@centec.com>
How I did it
Modified the platform driver for compatibility with the Bookworm kernel.
Modified the Python build commands used to build the wheel (.whl) packages.
How to verify it
Verify that all the platform Bookworm debs are built:
make target/debs/bookworm/platform-modules-v682-48y8c-d_1.0_amd64.deb
Load the platform Debian package onto the device and install it on the Bookworm image.
Verify the platform-related CLI and functionality (a rough sketch follows below).
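An on-device sketch of that install and check, using the deb from the make target above; the CLI commands are just example checks:
```
sudo dpkg -i platform-modules-v682-48y8c-d_1.0_amd64.deb
show platform summary      # platform, HwSKU and ASIC should be reported
show platform syseeprom    # EEPROM contents should be readable
```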
Signed-off-by: centecqianj <qianj@centec.com>
1. Upgrade Centec SAI debian package version to v1.13, in order to match syncd's requirement.
2. Fix the syncd compile failure caused by the missing sai_query_api_version function in the vendor SAI.
Signed-off-by: Xianghong Gu <xgu@centec.com>
What I did
Add new platform arm64-ragile_ra-b6010-48gt4x-r0 (Centec)
ASIC Vendor: Centec
Switch ASIC: Centec
Port Config: 48x1G+4x10G
Why I did it
Add new platform RA-B6010-48GT4X
How I did it
Add new platform RA-B6010-48GT4X
Signed-off-by: pettershao-ragilenetworks <pettershao@ragilenetworks.com>
Why I did it
Upgrade both Centec X86 and ARM64 platform containers (syncd/saiserver/syncd-rpc) to Bullseye
Optimize the Centec X86 platform makefile, renaming sdk.mk to sai.mk
How I did it
Modify the Makefiles and Dockerfiles to use Bullseye
Rename sdk.mk to sai.mk, and optimize and update the related files
How to verify it
For the Centec X86 platform, compile the code with: a) make configure PLATFORM=centec; b) make all
For the Centec ARM64 platform, compile the code with: a) make configure PLATFORM=centec-arm64 PLATFORM_ARCH=arm64; b) make all
Verify sonic-centec.bin and sonic-centec-arm64.bin on Centec chip based boards.
- Why I did it
Support syslog rate limit configuration feature
- How I did it
Remove unused rsyslog.conf from containers
Modify docker startup script to generate rsyslog.conf from template files
Add metadata/init data for syslog rate limit configuration
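As a rough illustration of the template approach (not the exact template shipped in this change), a container rsyslog.conf.j2 could expose the rate-limit settings like this; the Jinja variable names and where their values come from are hypothetical placeholders:
```
# rsyslog.conf.j2 (sketch): rate limiting for messages received on the local socket
module(load="imuxsock"
       SysSock.RateLimit.Interval="{{ rate_limit_interval }}"
       SysSock.RateLimit.Burst="{{ rate_limit_burst }}")
```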
- How to verify it
Manual test
New sonic-mgmt regression cases
Adding platform support for FS s5800-48t4s and s5800-48t8s-mars8p.
Both s5800-48t4s and s5800-48t8s-mars8p have 48 * 10/100/1000 Base-T ports and 4 * 10GE SFP+ ports on the Centec TsingMa.
s5800-48t4s differs from s5800-48t8s-mars8p in that:
The PHY chip used by s5800-48t4s is the Marvell 88e1680;
The PHY chip used by s5800-48t8s-mars8p is the Centec ctc21108.
Why I did it
Support multi-platform device trees, since the default dtb may not be suitable for all vendor hardware designs.
How I did it
Use the onie_platform variable to select and load the device tree blob.
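A simplified sketch of the idea, assuming the platform identity comes from ONIE's machine.conf and a per-platform dtb ships alongside the default one (paths and file names here are illustrative):
```
# read onie_platform from the ONIE machine configuration
. /host/machine.conf
if [ -f "/boot/${onie_platform}.dtb" ]; then
    dtb="/boot/${onie_platform}.dtb"     # platform-specific device tree
else
    dtb="/boot/default.dtb"              # fall back to the default dtb
fi
# the boot loader is then pointed at ${dtb}
```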
#### Why I did it
Fix some bugs in the Centec TsingMa BSP and the V682 sonic_platform package.
#### How I did it
1. Add a module license for the Centec Mars PHY driver
2. Fix the I2C functionality setting for the TsingMa SoC I2C controller
3. Fix an EEPROM read error in the V682 sonic_platform SFP module
#### How to verify it
Build the SONiC image and verify it on Centec E530-48T4X and V682-48Y8C boards.
Why I did it
Fix the Centec ARM64 compile error caused by an incorrect Centec SAI dev package reference
How I did it
Modify sai.mk of the Centec ARM64 platform
How to verify it
Build the Centec amd64 and arm64 SONiC images
Currently, the build dockers are created as user dockers (docker-base-stretch-<user>, etc.) that are
specific to each user. But the sonic dockers (docker-database, docker-swss, etc.) are
created with a fixed docker name that is common to all users.
docker-database:latest
docker-swss:latest
When multiple builds are triggered on the same build server, this creates a parallel-build issue because
all the build jobs try to create the same docker with the latest tag.
This happens only when the sonic dockers are built using the native host dockerd for sonic docker image creation.
This patch creates all sonic dockers as user sonic dockers and then, while
saving and loading the user sonic dockers, renames the user sonic
dockers into the correct sonic docker names with the latest tag.
docker-database:latest <== SAVE/LOAD ==> docker-database-<user>:tag
The user sonic docker names are derived from the DOCKER_USERNAME and DOCKER_USERTAG make environment
variables, and a Jinja template replaces the FROM docker name with the correct user sonic docker name when
loading and saving the docker image (sketched below).
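Conceptually, the templated FROM line and the save/load renaming look roughly like this (the docker names and variable wiring are illustrative, not the exact templates in this patch):
```
# Dockerfile.j2 (sketch): the base image name is expanded per user and tag
FROM docker-config-engine-buster-{{ DOCKER_USERNAME }}:{{ DOCKER_USERTAG }}

# at save/load time (sketch), the user-specific name maps back to the fixed name:
#   docker-database-<user>:<tag>  <== SAVE/LOAD ==>  docker-database:latest
```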
Add docker-syncd-centec-rpc and docker-saiserver-centec to the Buster docker image list. These two docker images are based on docker-config-engine-buster.
Co-authored-by: Xianghong Gu <xgu@centecnetworks.com>
Upgrade the Centec ARM64 SAI to v1.9.1 and fix the following syncd compile error:
```
2021-11-21T19:58:09.0294367Z /usr/bin/ld: libSyncd.a(libSyncd_a-VendorSai.o): in function `syncd::VendorSai::queryStatsCapability(unsigned long, _sai_object_type_t, _sai_stat_capability_list_t*)':
2021-11-21T19:58:09.0297367Z ./syncd/VendorSai.cpp:439: undefined reference to `sai_query_stats_capability'
2021-11-21T19:58:09.0298900Z collect2: error: ld returned 1 exit status
```
Co-authored-by: shil <shil@centecnetworks.com>
1. Fix the build for armhf and arm64
2. Upgrade the Centec TsingMa BSP support to the 5.10 kernel
3. Modify the Centec platform drivers for Linux 5.10
Co-authored-by: Shi Lei <shil@centecnetworks.com>
- Why I did it
In case an app.ext requires a dependency syncd^1.0.0, the RPC version of syncd will not satisfy this constraint, since 1.0.0-rpc < 1.0.0. It is not correct to put 'rpc' as a prerelease identifier. Instead, put 'rpc' as build metadata in the version: 1.0.0+rpc, which satisfies the constraint ^1.0.0.
- How I did it
Changed the way the versions of the RPC and DBG images are constructed.
- How to verify it
Install an app.ext with a syncd^1.0.0 dependency on a switch running the RPC syncd docker.
Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
This PR is part of SONiC Application Extension
Depends on #5938
- Why I did it
To provide an infrastructure change in order to support SONiC Application Extension feature.
- How I did it
Label every installable SONiC Docker with a minimal required manifest and auto-generate packages.json file based on
installed SONiC images.
- How to verify it
Build an image, execute the following command:
admin@sonic:~$ docker inspect docker-snmp:1.0.0 | jq '.[0].Config.Labels["com.azure.sonic.manifest"]' -r | jq
cat the /var/lib/sonic-package-manager/packages.json file to verify that all dockers are listed there.
- Support compiling the SONiC ARM image on an ARM server. If ARM image compilation is executed on an ARM server instead of in QEMU mode on an x86 server, compile time can be reduced significantly.
- Add the kernel argument systemd.unified_cgroup_hierarchy=0 for the upgrade of systemd to version 247, per #7228
- Rename the multiarch docker to sonic-slave-${distro}-march-${arch}
Co-authored-by: Xianghong Gu <xgu@centecnetworks.com>
Co-authored-by: Shi Lei <shil@centecnetworks.com>
When building syncd-rpc, libthrift has a dependency on libboost-atomic1.71.0;
however, the Debian packager installs version 1.67 instead. This PR
preinstalls libboost-atomic v1.71 to avoid falling back to v1.67.
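A minimal sketch of the preinstall step (the exact Dockerfile and pinning in the syncd-rpc image may differ):
```
# Dockerfile.j2 (sketch): install the expected boost atomic runtime before libthrift
RUN apt-get update && apt-get install -y libboost-atomic1.71.0
```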
Signed-off-by: Tamer Ahmed <tamer.ahmed@microsoft.com>
- Why I did it
Fix issue: ptf_nn_agent is not able to start in the syncd-rpc docker on Buster.
- How I did it
The issue is fixed by explicitly installing python-dev, cffi and nnpy for Python 2 (see the sketch below).
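A minimal Dockerfile-style sketch of those explicit installs (the actual syncd-rpc Dockerfile.j2 may differ, for example in how nnpy's build dependencies are provided):
```
# Dockerfile.j2 (sketch): Python 2 dependencies needed by ptf_nn_agent
RUN apt-get update && apt-get install -y python-dev
RUN python2 -m pip install cffi nnpy
```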
- How to verify it
Run the COPP test on the RPC image.
- combine docker-ptf-saithrift into docker-ptf docker
- build docker-ptf under platform vs
- remove docker-ptf for other platforms
Signed-off-by: Guohan Lu <lguohan@gmail.com>
- Why I did it
Initially, we used Monit to monitor critical processes in each container. If one of the critical processes was not running
or crashed for some reason, Monit would periodically write an alerting message into syslog. If we add a new process
to a container, the corresponding Monit configuration file also needs to be updated, which makes maintenance a little hard.
Now we employ the event listener of Supervisord to do this monitoring. Since the processes in each container are managed by
Supervisord, we only need to focus on the monitoring logic.
- How I did it
We borrowed the event listener mechanism of Supervisord to monitor critical processes in containers. The event listener takes the
following steps if it is notified that one of the critical processes exited unexpectedly:
The event listener first checks whether the auto-restart mechanism is enabled for this container. If auto-restart is enabled, the event listener kills the Supervisord process, which should cause the container to exit and subsequently get restarted.
If auto-restart is not enabled for this container, the event listener enters a loop which first sleeps 1 minute and then checks whether the process is running. If yes, the event listener exits. If no, an alerting message is written into syslog.
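In a container's supervisord.conf, such a listener is wired in roughly as follows; the program name and options shown are illustrative rather than the exact SONiC configuration:
```
[eventlistener:supervisor-proc-exit-listener]
command=/usr/bin/supervisor-proc-exit-listener --container-name swss
events=PROCESS_STATE_EXITED
autostart=true
autorestart=unexpected
```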
- How to verify it
First, check whether the auto-restart mechanism of a container is enabled by running the command show feature status. If it is enabled, select one critical process and kill it manually, then check whether the container gets restarted.
Second, disable the auto-restart mechanism (if it was enabled at step 1) by running the command sudo config feature autorestart <container_name> disabled. Then select and kill one critical process. After that, an alerting message should appear in the syslog every minute.
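A condensed sketch of these two checks, using swss and orchagent purely as examples:
```
show feature status                            # step 0: check the auto-restart setting

# step 1: with auto-restart enabled, kill a critical process and watch the container restart
docker exec swss pkill -9 orchagent
docker ps                                      # swss should come back up shortly

# step 2: disable auto-restart, kill a critical process again and watch syslog
sudo config feature autorestart swss disabled
docker exec swss pkill -9 orchagent
tail -f /var/log/syslog                        # an alerting message should appear every minute
```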
- Which release branch to backport (provide reason below if selected)
201811
201911
[x] 202006
Centec syncd has been upgraded to Buster; docker-syncd-centec-rpc no longer needs to generate a Stretch-based docker.
Co-authored-by: Xianghong Gu <xgu@centecnetworks.com>
* Enable telemetry for ARM64 by default
* [Centec] Upgrade the Centec syncd docker to Buster; libjemalloc2 is already installed in docker-base-buster, so remove libjemalloc1 from docker-syncd-centec's Dockerfile.j2
Co-authored-by: Gu Xianghong <xgu@centecnetworks.com>
Some syncd containers are still based on Debian Stretch, and thus do not have Python 3 available. For these containers, we must still rely on Python 2 to run supervisord_dependent_startup and supervisor-proc-exit-listener.
**- Why I did it**
We were building a custom version of Supervisor because I had added patches to prevent hangs and crashes if the system clock ever rolled backward. Those changes were merged into the upstream Supervisor repo as of version 3.4.0 (http://supervisord.org/changes.html#id9); therefore, we should be able to simply install the vanilla package via pip. This will also allow us to easily move to Python 3, as Python 3 support was added in version 4.0.0.
**- How I did it**
- Remove Makefiles and patches for building supervisor package from source
- Install Python 3 supervisor package version 4.2.1 in Buster base container
- Also install Python 3 version of supervisord-dependent-startup in Buster base container
- The Debian package installed the binary in `/usr/bin/`, but the pip package installs it in `/usr/local/bin/`. Rather than update all absolute paths, I changed all references to simply call `supervisord` and let the system PATH find the executable. This avoids the need for future changes: if we ever need to switch back to building a Debian package, we won't have to modify these references again.
- Install Python 2 supervisor package >= 3.4.0 in Stretch and Jessie base containers
* LIBSAIREDIS does not depend on CENTEC_SAI; remove this dependency
* Build dependencies were optimized in PRs #4880 and #5039. Merge these optimizations into the Centec ARM64 platform.
When stopping the swss, pmon or bgp containers, log messages like the following can be seen:
```
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,061 ERRO pool dependent-startup event buffer overflowed, discarding event 34
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,063 ERRO pool dependent-startup event buffer overflowed, discarding event 35
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,064 ERRO pool dependent-startup event buffer overflowed, discarding event 36
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,066 ERRO pool dependent-startup event buffer overflowed, discarding event 37
```
This is due to the number of programs in the container managed by supervisor, all generating events at the same time. The default event queue buffer size in supervisor is 10. This patch increases that value in all containers in order to eliminate these errors. As more programs are added to the containers, we may need to further adjust these values. I increased all buffer sizes to 25 except for containers with more programs or templated supervisor.conf files which allow for a variable number of programs. In these cases I increased the buffer size to 50. One final exception is the swss container, where the buffer fills up to ~50, so I increased this buffer to 100.
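In supervisord terms, the change is just the eventlistener buffer size, for example (the section and command shown are illustrative):
```
[eventlistener:dependent-startup]
command=supervisord-dependent-startup   ; illustrative command line
events=PROCESS_STATE
buffer_size=50                          ; raised from the default of 10; 100 in the swss container
```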
Resolves https://github.com/Azure/sonic-buildimage/issues/5241
Summary of the E530 platform:
- CPU: CTC5236, arm64
- LAN switch chipset: Centec CTC7132 (TsingMa). TsingMa is a purpose-built device that addresses the challenges of recent network evolution such as cloud computing. The CTC7132 provides 440 Gbps I/O bandwidth and 400 G core bandwidth, and the CTC7132 family combines a feature-rich switch core with an embedded ARM A53 CPU core running at 800 MHz/1.2 GHz. The CTC7132 supports a variety of port configurations, such as QSGMII and USXGMII-M, providing full-rate port capability from 100M to 100G.
- device E530-48T4X: 48 * 10/100/1000 Base-T Ports, 4 * 10GE SFP+ Ports.
- device E530-24X2C: 24 * 10 GE SFP+ Ports, 2 * 100GE QSFP28 Ports.
Add new files in three directories:
device/centec/arm64-centec_e530_24x2c-r0
device/centec/arm64-centec_e530_48t4x_p-r0
platform/centec-arm64
Co-authored-by: taocy <taocy2@centecnetworks.com>
Co-authored-by: Gu Xianghong <gxh2001757@163.com>
Co-authored-by: shil <shil@centecnetworks.com>