Notable changes:
* Use j2cli from Debian repos instead of pip
* Use setuptools from Debian repos instead of pip
* Use wheel from Debian repos instead of pip
* Update the grpcio and grpcio-tools Python packages to match the versions in Bookworm
* Use m2crypto from Debian repos instead of pip
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
This fixes 3 issues:
* Specify test dependencies under extras_require
* Update the PAM configuration for Bookworm
* Break a cyclical dependency between sonic-host-services and
sonic-buildimage by moving the contents of
src/sonic-host-services-data into sonic-host-services submodule
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
This fixes 4 issues:
* Update tabulate to 0.9.0 and deepdiff to 6.2.2
* Specify test dependencies under extras_require
* Add check_output parameter to the setup function due to the patch
* Fix an error about having a mutable default for the `headers` field in a dataclass
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Update test_cfggen_from_yang.py and test_yang_data.json to the current
config_db format, and allow tests for sonic-config-engine to run for
Bookworm.
Also update pyangbind to 0.8.2 for Bookworm to fix an issue with some
classes being moved into a different package.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
The help text printed for sonic-yang-mgmt has slight differences
depending on the package versions. Loosen this check so that only the
options themselves are verified, rather than the surrounding text.
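A minimal sketch of the loosened assertion (the option names below are placeholders, not the actual sonic-yang-mgmt options): check only that each expected option string appears in the help output instead of comparing the full text verbatim.
```python
# Placeholder option names -- the real list comes from the sonic-yang-mgmt CLI.
EXPECTED_OPTIONS = ["-h", "--help", "-v", "--version"]

def check_help_output(help_text):
    # Pass as long as every expected option appears somewhere in the help text,
    # regardless of how the surrounding wording differs between package versions.
    missing = [opt for opt in EXPECTED_OPTIONS if opt not in help_text]
    assert not missing, "missing options in help text: {}".format(missing)
```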
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
This ordering dependency causes FRR to get built for Bookworm, which we
don't currently need. Skip this by making the dependency not apply to Bookworm.
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Newer versions of pip/setuptools don't support tests_require, and the
current standard is to specify any extra dependencies (such as those
required for testing) under extras_require.
Therefore, specify the testing dependencies under extras_require. These
can be installed via pip using `pip install '.[testing]'`.
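A minimal setup.py sketch of this layout (package and dependency names are illustrative, not taken from a specific SONiC package):
```python
from setuptools import setup, find_packages

setup(
    name='example-sonic-package',          # illustrative name
    version='1.0.0',
    packages=find_packages(),
    install_requires=[
        'pyyaml',                           # example runtime dependency
    ],
    extras_require={
        # Test-only dependencies, installable with `pip install '.[testing]'`
        'testing': [
            'pytest',
            'pyfakefs',
        ],
    },
)
```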
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
In Bookworm's version of setuptools, direct calls to setup.py are
deprecated and no longer guaranteed to work. One of the recommended
approaches is to use the `build` Python package to build packages,
invoked with `python -m build`. By default, this builds the packages in
a virtualenv to ensure that only the dependencies specified in setup.py
are needed to build the package. This also extends to running tests,
where directly calling `setup.py test` may not work; the recommended
alternatives are to either call `pytest` directly, or to use `tox` or
`nox`. More details are available at [1].
For SONiC's use case, we cannot build all Python packages in a
virtualenv, since some of the dependencies are packages we built earlier
in the build, and those packages are not pushed to PyPI or any other
package registry. (There may be a cleaner approach to this, but I'm not
aware of it.) For this reason, the `-n` flag is added so that the
package is not built in a virtualenv.
For testing, `pytest` is now called instead of `setup.py test`.
[1] https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html
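A hedged sketch of the resulting flow (the wrapper below is illustrative; the actual change lives in the build makefiles): build without an isolated environment using `python -m build -n`, then run the tests with `pytest` instead of `setup.py test`.
```python
import subprocess
import sys

def build_and_test(package_dir="."):
    # Build sdist/wheel without an isolated virtualenv (-n / --no-isolation),
    # so locally built, non-PyPI dependencies stay visible to the build.
    subprocess.run([sys.executable, "-m", "build", "-n", package_dir], check=True)
    # Run the test suite directly with pytest instead of `setup.py test`.
    subprocess.run([sys.executable, "-m", "pytest", package_dir], check=True)

if __name__ == "__main__":
    build_and_test()
```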
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Changes from Bullseye slave container:
* Python 2 is no longer available at all
* Python 3.11 (instead of Python 3.9)
* GCC 12 (instead of GCC 10)
* Python ipaddr package is no longer available
* OpenJDK 17 (instead of OpenJDK 11)
* Remove doxygen armhf manual compilation (no longer needed)
* Disable FIPS, as the FIPS binaries are not yet available
* Install Python setuptools through Debian instead of pip
* Install Python wheel through Debian instead of pip
* Install Python nose through Debian instead of pip
* Install Python j2cli through Debian instead of pip
* Install Python pexpect through Debian instead of pip
* Install Python parameterized through Debian instead of pip
* Install Python pyyaml through Debian instead of pip
* Install Python pyfakefs through Debian instead of pip
* Install Python m2crypto through Debian instead of pip
* Python pympler 1.0 (instead of 0.8)
* Install the Python build package (as a replacement for calling setup.py directly)
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Fix bug: #17161 (comment)
On multi-asic platforms, execution never reaches the else branch because DATABASE_TYPE is always ""
Microsoft ADO (number only): 25072889
Move the NAMESPACE_ID == "" check back
Signed-off-by: Ze Gan <ganze718@gmail.com>
#### Why I did it
src/sonic-host-services
```
* 50db9d3 - (HEAD -> master, origin/master, origin/HEAD) Move sonic-host-services-data from sonic-buildimage into this repo (3 hours ago) [Saikrishna Arcot]
* 1a9442f - Replace libpam-cracklib with libpam-pwquality (3 hours ago) [Saikrishna Arcot]
* 31590a1 - Fix diff output in test for Python 3 (3 hours ago) [Saikrishna Arcot]
* cc3e330 - Specify test dependencies under extra_requires (3 hours ago) [Saikrishna Arcot]
```
#### How I did it
#### How to verify it
#### Description for the changelog
- Why I did it
Fix an issue with configuration generation from the minigraph:
- How I did it
Change the default breakout mode for internal ports to the mode that corresponds to the platform.json configuration (see the sketch below).
- How to verify it
1. Deploy minigraph
2. Run the `config load_minigraph -y` command
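A hedged sketch of the breakout-mode lookup described above (the JSON key names here are assumptions about the platform.json layout, not verified against the actual schema):
```python
import json

def default_breakout_mode(platform_json_path, port_name):
    # Load platform.json and return the configured default breakout mode for
    # the given port; key names are assumed, adjust them to the real schema.
    with open(platform_json_path) as f:
        platform = json.load(f)
    port = platform.get("interfaces", {}).get(port_name, {})
    return port.get("default_brkout_mode")
```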
Why I did it
HLD implementation: Container Hardening (sonic-net/SONiC#1364)
Work item tracking
Microsoft ADO (number only): 14807420
How I did it
Reduce Linux capabilities by removing the privileged flag
How to verify it
Run restapi sonic-mgmt tests on sn4600c
Check the container's settings: Privileged is false, and the container has only the default Linux capabilities, with no extended capabilities.
* [Marvell-arm64] Add platform support for rd98DX35xx
This change adds the following two variants of the rd98DX35xx board to
the arm64 build.
Board with CPU integrated into the 98DX35xx switching chip:
Platform: arm64-marvell_rd98DX35xx-r0
HwSKU: rd98DX35xx
ASIC: marvell
Port Config: 32x1G + 16x2.5G + 6x25G
Board with external CN9131 CPU connected over PCI to 98DX35xx
switching chip:
Platform: arm64-marvell_rd98DX35xx_cn9131-r0
HwSKU: rd98DX35xx_cn9131
ASIC: marvell
Port Config: 32x1G + 16x2.5G + 6x25G
Change-Id: I21dc9fe972417daaabb20a5bddf7779d72b7972e
Signed-off-by: Pavan Naregundi <pnaregundi@marvell.com>
* Add HWSKU for rd98DX35xx and rd98DX35xx_cn9131
This patch adds new HwSKUs for the Marvell arm64 platforms rd98DX35xx
and rd98DX35xx_cn9131.
Change-Id: Id7c14f49f0e304335cc4ca73dcae52362c49d231
Signed-off-by: Pavan Naregundi <pnaregundi@marvell.com>
---------
Signed-off-by: Pavan Naregundi <pnaregundi@marvell.com>
What I did:
In chassis TSA mode, the Loopback0 IPs of each LC should be advertised through the eBGP peers of each remote LC.
How I did:
- Route-map policy to advertise its own Loopback0 IP to other internal iBGP peers with the community internal_community, as defined in constants.yml
- Route-map policy to match on the above internal_community when a route is received from internal iBGP peers, set an internal tag as defined in constants.yml, and delete the internal_community so it is not sent to any eBGP peers
- In TSA, a new route-map matches on the above internal tag, permits the route (the Loopback0 IPs of remote LCs), and sets the community to traffic_shift_community
- In TSB, the above new route-map is deleted.
How I verify:
Manual Verification
UT updated.
sonic-mgmt PR: sonic-net/sonic-mgmt#10239
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
- Why I did it
Added YANG model as part of Generic Hash feature development
- How I did it
Added YANG model
- How to verify it
1. Added UT
2. Verified manually during feature qualification
Signed-off-by: Nazarii Hnydyn <nazariig@nvidia.com>
- Why I did it
To fix BIOS firmware update after a fresh image installation from ONIE
- How I did it
Initialized an empty GRUB environment file after ONIE installation
- How to verify it
1. Install the image from ONIE
2. Run BIOS firmware upgrade
Signed-off-by: Nazarii Hnydyn <nazariig@nvidia.com>
- Why I did it
Support running hw-management service on MSN4700 emulation platform.
- How I did it
Use physical EEPROM instead of the fake one
Do not skip PSUd, PCId, thermal control daemon
Adjust PCIe and thermal configuration files
Adjust platform.json for different chassis names and thermals
Remove a patch to hw-management in order to enable it
- How to verify it
Run Nvidia simulation on SN4700 (ASIC and Platform)
Signed-off-by: Stephen Sun <stephens@nvidia.com>
- Why I did it
In order to activate the FW after it has been upgraded, a reboot needs to be performed.
If the reboot wasn't performed and the user needs to upgrade to another SONiC image, that upgrade will fail.
The reason is that during a SONiC upgrade a new FW should be installed, but the installation fails because the previously installed FW wasn't activated.
To allow a second FW upgrade without a reboot in between, the FW image needs to be reactivated.
This change handles that flow.
Example of the issue scenario:
The user installed a SONiC image on the switch.
Then, for some reason, the FW was upgraded by the user or a script, but a reboot was not performed to activate it.
After that, an upgrade to a new SONiC image fails, because the new image needs to install FW, which fails since the previous FW wasn't activated.
- How I did it
In "mlnx-fw-upgrade" script check if FW upgrade failed with the error that FW was already installed but reboot was not performed.
If so then perform FW image reactivation and try to upgrade FW again.
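A minimal control-flow sketch of this retry logic (the real implementation is the mlnx-fw-upgrade shell script; the error code and helper names below are hypothetical):
```python
# Placeholder error code for "FW already installed but not activated".
ERR_FW_PENDING_ACTIVATION = 1

def upgrade_firmware(burn_fw, reactivate_fw):
    # First upgrade attempt.
    rc = burn_fw()
    if rc == ERR_FW_PENDING_ACTIVATION:
        # A previously installed FW was never activated by a reboot:
        # reactivate that image, then retry the upgrade once.
        reactivate_fw()
        rc = burn_fw()
    return rc
```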
- How to verify it
Install a SONiC image on the switch.
Then upgrade the FW but don't perform a reboot.
After that, upgrade to a new SONiC image and check that the upgrade was successful.
Signed-off-by: Volodymyr Samotiy <volodymyrs@nvidia.com>
Why I did it
- Convert hw-dump into generate-dump plugins
- Enable DRAM scrubber on some products
- Fix xcvr driver active low register bit logic
- Improve cooling algorithm (now considers xcvrs and modules)
- Add linecard graceful shutdown (disabled by default)
The scrubber was enabled for the following products:
- DCS-7050QX-32S
- DCS-7050CX3-32S
- DCS-7060CX-32S
What I did:
Revert the GTSM feature for VOQ iBGP sessions that was done as part of #16777.
Why I did:
On a VOQ chassis, BGP packets go over the recycle port and then through the ingress pipeline for routing, which makes the TTL 254 and fails the single-hop check.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
Sub PRs:
sonic-net/sonic-host-services#84
#17191
Why I did it
According to the design, the DPU database instances will be kept on the NPU host.
Microsoft ADO (number only): 25072889
How I did it
To follow the multi-ASIC design, I assume a new platform environment variable, NUM_DPU, will be defined in /usr/share/sonic/device/$PLATFORM/platform_env.conf. Based on this number, the NPU host will launch a corresponding number of DPU database instances.
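A hedged sketch of how NUM_DPU could be read and mapped to instances (assuming platform_env.conf uses simple KEY=value lines; the instance naming below is illustrative, not the actual convention):
```python
def read_num_dpu(conf_path):
    # Parse KEY=value lines and return NUM_DPU as an integer (0 if absent).
    with open(conf_path) as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            if key.strip() == "NUM_DPU":
                return int(value.strip())
    return 0

def dpu_database_instances(num_dpu):
    # Illustrative instance names, one DPU database instance per DPU.
    return ["databasedpu{}".format(i) for i in range(num_dpu)]
```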
Signed-off-by: Ze Gan <ganze718@gmail.com>
SAI 9.x requires SYNCD_SHM_SIZE to be specified; otherwise it defaults to 64 MB, which is insufficient for syncd.
Examples of failures seen when insufficient shared memory was set:
ha_init: The file: warmboot_data_0 is of size=762[MB] and is beyond the directory: /dev/shm available storage of size=64[MB]#015
syncd.sh[26074]: Cannot get SYNCD_SHM_SIZE for chip: [869] in /usr/share/sonic/device/x86_64-broadcom_common/syncd_shm.ini. Skip set SYNCD_SHM_SIZE.
Syncd hangs here:
syncd#syncd: [none] SAI_API_SWITCH:_brcm_sai_shr_ha_section_resize:536 start=0x7f6e641b4000, end=0x7f6e645b4000, len=302276608, free=0x7f6e641b4000
Broadcom recommended using 1 GB for DNX devices.
Since we currently don't use SAI 9.x on master and 202305, this change won't fix anything until the SAI is upgraded on those branches.
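A hedged sketch of the per-chip lookup that syncd.sh performs (the ini layout is assumed to be simple `chip_id=size` lines; the default and return values below are examples):
```python
SYNCD_SHM_INI = "/usr/share/sonic/device/x86_64-broadcom_common/syncd_shm.ini"

def get_syncd_shm_size(chip_id, default="64m"):
    # Look up the shared-memory size for the given chip id; fall back to the
    # 64 MB default (insufficient for DNX) when no entry is found.
    try:
        with open(SYNCD_SHM_INI) as f:
            for line in f:
                key, _, value = line.strip().partition("=")
                if key.strip() == chip_id and value:
                    return value.strip()
    except FileNotFoundError:
        pass
    return default
```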
#### Why I did it
src/sonic-dbsyncd
```
* e294eb0 - (HEAD -> master, origin/master, origin/HEAD) Update the code coverage rate to 80% (#63) (16 hours ago) [xumia]
```
#### How I did it
#### How to verify it
#### Description for the changelog
#### Why I did it
src/sonic-platform-daemons
```
* 55a6828 - (HEAD -> master, origin/master, origin/HEAD) Update the code coverage rate to 80% (#406) (16 hours ago) [xumia]
```
#### How I did it
#### How to verify it
#### Description for the changelog
Why I did it
Work item tracking
Microsoft ADO (number only): 25858445
How I did it
The sonic-mgmt-docker image with both Python 2 and Python 3 is tagged latest.
The sonic-mgmt-docker image with Python 3 only is tagged py3only.
How to verify it
Why I did it
Add config_db monitoring and customized-option support for dhcpservd. HLD: sonic-net/SONiC#1282
Work item tracking
Microsoft ADO (number only): 25600859
How I did it
Add support for customizing unassigned DHCP options. Currently supported types: binary, boolean, ipv4-address, string, uint8, uint16, uint32 (see the sketch below)
Add a DB config change monitor for dhcpservd
How to verify it
All unit tests in sonic-dhcp-server passed
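A hedged sketch of how the supported option value types could be serialized (the function name and exact encodings are illustrative, not dhcpservd's actual API):
```python
import socket
import struct

def encode_dhcp_option_value(value, value_type):
    # Serialize a configured option value according to its declared type.
    if value_type == "binary":
        return bytes.fromhex(value)
    if value_type == "boolean":
        return b"\x01" if str(value).lower() in ("1", "true") else b"\x00"
    if value_type == "ipv4-address":
        return socket.inet_aton(value)
    if value_type == "string":
        return value.encode("utf-8")
    if value_type == "uint8":
        return struct.pack("!B", int(value))
    if value_type == "uint16":
        return struct.pack("!H", int(value))
    if value_type == "uint32":
        return struct.pack("!I", int(value))
    raise ValueError("unsupported DHCP option type: {}".format(value_type))
```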
Why I did it
HLD implementation: Container Hardening (sonic-net/SONiC#1364)
Work item tracking
Microsoft ADO (number only): 14807420
How I did it
Reduce Linux capabilities by removing the privileged flag
How to verify it
Run snmp sonic-mgmt tests
Check the container's settings: Privileged is false, and the container has only the default Linux capabilities, with no extended capabilities.
admin@vlab-01:~$ docker inspect snmp | grep Privi
"Privileged": false,
admin@vlab-01:~$ docker exec -it snmp bash
root@vlab-01:/# capsh --print
Current: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_mknod,cap_audit_write,cap_setfcap=ep
Bounding set =cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_se
#### Why I did it
src/sonic-mgmt-common
```
* faa2a51 - (HEAD -> master, origin/master, origin/HEAD) Go Code format checker and formatter (#112) (8 hours ago) [faraazbrcm]
* faaa9f5 - PathInfo optimizations (#115) (22 hours ago) [Sachin Holla]
```
#### How I did it
#### How to verify it
#### Description for the changelog
#### Why I did it
src/sonic-platform-common
```
* 30fb0ce - (HEAD -> master, origin/master, origin/HEAD) Implement is_copper for SFP (#414) (12 hours ago) [Junchao-Mellanox]
```
#### How I did it
#### How to verify it
#### Description for the changelog
Fix #16204
Microsoft ADO (number only): 25746782
How I did it
The multiarch/debian-debootstrap:arm64-bullseye image is too old; it needs some GPG keys added before running 'apt-get update'.
In the Ubuntu environment, the Debian archive key isn't installed by default, so we get the following error in the AzP pipeline:
gpg: WARNING: no command supplied. Trying to guess what you mean ...
gpg: Signature made Sun Apr 9 06:25:32 2023 UTC
gpg: using RSA key 7D887DC8BA7BBBA7B835E3BADCE310E7864CC8BF
gpg: Can't check signature: No public key
gpg: can't create `/home/vsts/.gnupg/random_seed': No such file or directory
Validation FAILED!!
Signed-off-by: Ze Gan <ganze718@gmail.com>