Why I did it
The ip command cannot update the packet number if the cipher suite is XPN.
How I did it
Specify the SSCI when updating the packet number, and ignore the SSCI value when the action is an update.
Signed-off-by: Ze Gan <ganze718@gmail.com>
* Ported Marvell armhf build on x86 for debian buster to use cross-compilation instead of qemu emulation
The current armhf SONiC build on an amd64 host uses QEMU emulation. Due to the
nature of the emulation it takes a very long time, about 22-24 hours, to
complete the build. To reduce the build time, this change ports the SONiC
armhf build for the Marvell platform on Debian Buster to use cross-compilation
on the amd64 host for the armhf target. The overall SONiC armhf build time
with cross-compilation is reduced to about 6 hours.
Signed-off-by: marvell <marvell@cpss-build3.marvell.com>
* Fixed final Sonic image build with dockers inside
* Update Dockerfile.j2
Fixed qemu-user-static:x86_64-aarch64-5.0.0-2.
* Update cross-build-arm-python-reqirements.sh
Added support for both armhf and arm64 cross-build platforms using the $PY_PLAT environment variable.
* Update Makefile
Added TARGET=<cross-target> for armhf/arm64 cross-compilation.
* Reviewer @qiluo-msft's requests done
Signed-off-by: marvell <marvell@cpss-build3.marvell.com>
* Added new radius/pam patch for arm64 support
* Update slave.mk
Added a missing backtick.
* Added libgtest-dev and libgmock-dev to the buster Dockerfile.j2. Fixed the arm perl version to be generic
* Added missing armhf/arm64 entries in /etc/apt/sources.list
* Fixed the libc-bin core dump issue from the xumia:fix-libc-bin-install-issue commit
* Removed unnecessary 'apt-get update' from sonic-slave-buster/Dockerfile.j2
* Fixed saiarcot895 reviewer's requests
* Fixed README and replaced 'sed/awk' with patches
* Fixed ntp build to use openssl
* Stopped using the sonic-slave-buster/cross-build-arm-python-reqirements.sh script (moved all prebuilt Python package cross-compilation/installation into Dockerfile.j2). Fixed src/snmpd/Makefile to use -j1 in all cases
* Clean armhf cross-compilation build fixes
* Ported cross-compilation armhf build to bullseye
* Additional change for bullseye
* Set the CROSS_BUILD_ENVIRON default value to n
* Removed python2 references
* Fixes after merge with the upstream
* Deleted unused sonic-slave-buster/cross-build-arm-python-reqirements.sh file
* Fixed 2 @saiarcot895 requests
* Fixed @saiarcot895 reviewer's requests
* Removed use of prebuilt python wheels
* Incorporated saiarcot895 CC/CXX and other simplification/generalization changes
Signed-off-by: marvell <marvell@cpss-build3.marvell.com>
* Fixed saiarcot895 reviewer's additional requests
* src/libyang/patch/debian-packaging-files.patch
* Removed the --no-deps option when installing wheels. Removed unnecessary lazy_object_proxy arm python3 package installation
Co-authored-by: marvell <marvell@cpss-build3.marvell.com>
Co-authored-by: marvell <marvell@cpss-build2.marvell.com>
- Why I did it
To implement Syslog Source IP feature
In order to include the following commit: 8e5d478 [ssip]: Add CLI (#2191)
- How I did it
Updated syslog config template
Advanced submodule sonic-utilities
ea11b22 [sonic-bootchart] add sonic-bootchart (#2195)
8e5d478 [ssip]: Add CLI (#2191)
1dacb7f Replace pyswsssdk with swsscommon (#2251)
- How to verify it
make configure PLATFORM=mellanox
make target/sonic-mellanox.bin
Signed-off-by: Nazarii Hnydyn <nazariig@nvidia.com>
- Why I did it
Support Mellanox-SN4600C-C64 as T1 switch in dual-ToR scenario
This is to port #11032 and #11299 from 202012 to master.
Support additional queue and PG in buffer templates, including both traditional and dynamic model
Support mapping DSCP 2/6 to lossless traffic in the QoS template.
Add macros to generate additional lossless PG in the dynamic model
Adjust the order in which the generic/dedicated (with additional lossless queues) macros are checked and called to generate buffer tables in common template buffers_config.j2
Buffer tables are rendered using macros.
Both generic and dedicated macros are defined on our platform. Previously, the generic macro was called as long as it was defined, so it was always the one used on our platform. To avoid this, the dedicated macro is now checked and called first, and the generic one only as a fallback (sketched below).
Support MAP_PFC_PRIORITY_TO_PRIORITY_GROUP on ports with additional lossless queues.
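As a rough illustration of that ordering change, here is a minimal Python analogy; the real logic lives in the Jinja2 macros of buffers_config.j2, and the macro names used below are hypothetical.
```python
def render_port_buffers(port, macros):
    """Python analogy of the buffers_config.j2 macro dispatch.

    A platform may define both a generic macro and a dedicated one (for
    ports with additional lossless queues). The dedicated macro must be
    checked first; otherwise the generic macro always wins whenever both
    are defined.
    """
    dedicated = macros.get("port_buffers_extra_lossless")  # hypothetical name
    generic = macros.get("port_buffers")                   # hypothetical name
    renderer = dedicated if dedicated is not None else generic
    return renderer(port) if renderer is not None else ""
```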
On Mellanox-SN4600C-C64, the buffer configuration for T1 is calculated as:
40 * 100G downlink ports with 4 lossless PGs/queues, 1 lossy PG, and 3 lossy queues
16 * 100G uplink ports with 2 lossless PGs/queues, 1 lossy PG, and 5 lossy queues
Signed-off-by: Stephen Sun <stephens@nvidia.com>
#### Why I did it
Submodule update for sonic-swss-common with the following changes:
597b022 Add SonicDBConfig.getInstanceList() API (#639)
7073dc6 add table name for vlan_stacking (#646)
7e39f31 Add SonicV2Connector::set method for int value. (#648)
154cc9c Improve hset and hdel performance with RedisPipeline. (#647)
Fix an issue with the front panel port LED introduced in a previous PR
Implement status LED for linecards
Implement full power cycle for linecards
Improve reboot cause reporting for UCD devices
Add fan support for PikeZ
Miscellaneous fixes and improvements
* Why I did it
Followup to #10656. This change adds the remaining configs for the 720DT-48S platform.
* How I did it
Adds the following:
gearbox_config.json and other gearbox-related config files, to enable traffic on external PHY ports (Ethernet0-23)
sensors.conf
pcie.yaml
Also add missing facts in platform.json
* How to verify it
show interfaces status shows links up on interfaces Ethernet0-23
traffic flows with no errors on interfaces Ethernet0-23
Note: the above testing depends on "Add gbsyncd container for broncos" (#11154) and "[orchagent]: Enhance initSaiPhyApi" (sonic-swss#2367), as well as having the appropriate PAI driver.
Co-authored-by: Samuel Angebault <staphylo@arista.com>
Why I did it
[multi-asic]: Add build param to build multi-asic KVM image.
The sonic-4asic-vs.img.gz image is required to run multi-asic tests.
BUILD_MULTIASIC_KVM=y has to be added to the build parameters to generate the multi-asic KVM image, which can eventually be used in a multi-asic VS testbed.
This is the pipeline we currently use to generate the multi-asic KVM image:
https://github.com/Azure/sonic-buildimage/blob/master/.azure-pipelines/official-build-multi-asic.yml
How I did it
Add BUILD_MULTIASIC_KVM=y to the build parameters in the pipeline script to generate the multi-asic KVM image.
How to verify it
The build param is already used in official-build-multi-asic.yml pipeline.
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
Improve throughput and latency for 7260 deployments
How I did it
Update the dynamic threshold to 0 and the ECN settings to 2mb/10mb/5%
How to verify it
Updated unit tests to use the modified values for the 7260 ECN settings.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
Why I did it
This PR aims to check in commit f2b11e4, which was introduced by the update related to the gNMI Python client.
How I did it
I changed the Dockerfile.j2 so that the updated gNMI client script is checked in when the PTF Docker image is built.
How to verify it
A PTF container image was built and then loaded on a testbed. I verified that the updated gNMI client script was included.
What I did:
Added support for deployment_id parsing in the device ASIC metadata.
Why I did:-
The deployment ID is used in the BGP docker for FRR template generation. On multi-asic platforms running in a namespace, FRR template generation fails when deployment_id is not present as a key in DEVICE_METADATA. This change is needed after #10154, where the DEVICE_METADATA dictionary is not updated if deployment_id is None.
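A minimal sketch of the intended behavior, assuming a helper that builds per-ASIC DEVICE_METADATA during minigraph parsing; the function and key names below are illustrative, not the actual sonic-cfggen code.
```python
def build_asic_device_metadata(asic_hostname, asic_type, deployment_id):
    """Build DEVICE_METADATA for one ASIC namespace.

    Carry deployment_id through when it is known, because the BGP
    container's FRR template rendering on multi-asic platforms fails if
    the key is missing; skip it only when it is genuinely None.
    """
    metadata = {"hostname": asic_hostname, "type": asic_type}
    if deployment_id is not None:
        metadata["deployment_id"] = deployment_id
    return {"localhost": metadata}
```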
How I verify:-
Added unit-test.
Why I did it
This PR adds a flag to control whether the PORT_QOS_MAP|global entry is generated.
For some HWSKUs, such as BackEndToRRouter and BackEndLeafRouter, there is no DSCP_TO_TC_MAP defined.
Hence, if the PORT_QOS_MAP|global entry is generated, orchagent reports errors because the DSCP_TO_TC_MAP entry AZURE cannot be found:
Jul 14 00:24:40.286767 str2-7050qx-32s-acs-03 ERR swss#orchagent: :- saiObjectTypeQuery: invalid object id oid:0x7fddb43605d0
Jul 14 00:24:40.286767 str2-7050qx-32s-acs-03 ERR swss#orchagent: :- meta_generic_validation_objlist: SAI_SWITCH_ATTR_QOS_DSCP_TO_TC_MAP:SAI_ATTR_VALUE_TYPE_OBJECT_ID object on list [0] oid 0x7fddb43605d0 is not valid, returned null object id
Jul 14 00:24:40.286767 str2-7050qx-32s-acs-03 ERR swss#orchagent: :- applyDscpToTcMapToSwitch: Failed to apply DSCP_TO_TC QoS map to switch rv:-5
Jul 14 00:24:40.286767 str2-7050qx-32s-acs-03 ERR swss#orchagent: :- doTask: Failed to process QOS task, drop it
This PR is to address the issue.
How I did it
Add a flag require_global_dscp_to_tc_map to control whether to generate the PORT_QOS_MAP|global entry. The default value of require_global_dscp_to_tc_map is true; if the device type is a storage backend, the value is changed to false, and the PORT_QOS_MAP|global entry is not generated.
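A minimal Python sketch of the decision logic; the actual change is in the QoS Jinja2 template, so the helper below is only an illustration of the flag's behavior.
```python
def should_generate_global_port_qos_map(device_type):
    """Decide whether to emit the PORT_QOS_MAP|global entry.

    require_global_dscp_to_tc_map defaults to true; for storage backend
    roles such as BackEndToRRouter and BackEndLeafRouter there is no
    DSCP_TO_TC_MAP named AZURE, so the global entry is skipped to avoid
    the orchagent errors shown above.
    """
    require_global_dscp_to_tc_map = True
    if device_type in ("BackEndToRRouter", "BackEndLeafRouter"):
        require_global_dscp_to_tc_map = False
    return require_global_dscp_to_tc_map
```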
How to verify it
Update the current test_qos_dscp_remapping_render_template to cover storage backend.
- Why I did it
Support get_port_or_cage_type for RJ45 ports
- How I did it
Implement the new platform API get_port_or_cage_type (a sketch follows below)
Fix the issue: unable to import SFP when the chassis object is destructed
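A minimal sketch of a vendor-side implementation, assuming a class derived from sonic_platform_base's SfpBase; the port-type constants and the is_rj45 flag below are assumptions for illustration, not verified API names.
```python
from sonic_platform_base.sfp_base import SfpBase

class Sfp(SfpBase):
    """Illustrative vendor SFP object; only the new API is sketched."""

    def __init__(self, index, is_rj45=False):
        super().__init__()
        self.index = index
        self.is_rj45 = is_rj45  # assumption: derived from platform/hwsku data

    def get_port_or_cage_type(self):
        # The constant names below are assumptions; the real base class
        # defines the supported port/cage type values.
        if self.is_rj45:
            return SfpBase.SFP_PORT_TYPE_BIT_RJ45
        return SfpBase.SFP_PORT_TYPE_BIT_SFP
```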
- How to verify it
Manually test and regression test
Signed-off-by: Stephen Sun <stephens@nvidia.com>
Why I did it
Enable UT code coverage in sonic-buildimage repo submodule and enable LGTM
How I did it
Create a separate repo for sonic-host-services in sonic-net, and update the submodule in sonic-buildimage
How to verify it
Build image
- Why I did it
To optimize fast-reboot. Teamd can be stopped after bgp and swss are stopped, because the last LACP packet can still be sent while syncd is still running. This saves 15 seconds on shutdown.
- How I did it
Defined in the manifest that teamd is to be stopped after swss
- How to verify it
Run it on the switch.
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
- Why I did it
To implement Syslog Source IP feature
- How I did it
Added the relevant yang doc
- How to verify it
N/A
Signed-off-by: Nazarii Hnydyn <nazariig@nvidia.com>
- Why I did it
Fix bug: pmon reports an error on startup because some SKUs do not have hwsku.json
- How I did it
If hwsku.json does not exist, do not extract RJ45 port information
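A minimal sketch of the guard, with hypothetical helper, path, and field names (the real fix lives in the platform API code):
```python
import json
import os

def load_rj45_ports(hwsku_dir):
    """Return the RJ45 ports declared in hwsku.json, if the file exists.

    Some SKUs ship without hwsku.json; report no RJ45 ports in that case
    instead of raising and crashing pmon at startup. The 'interfaces' and
    'port_type' field names are assumptions about the hwsku.json schema.
    """
    hwsku_json = os.path.join(hwsku_dir, "hwsku.json")
    if not os.path.isfile(hwsku_json):
        return []
    with open(hwsku_json) as f:
        hwsku = json.load(f)
    return [name for name, attrs in hwsku.get("interfaces", {}).items()
            if attrs.get("port_type") == "RJ45"]
```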
- How to verify it
Manual test.
Unit test.
Updating sonic-utilities submodule with the following commits
d6b8869 [Auto-Techsupport] Fix the coredump_gen_handler Exception when the History table is empty
b41da8f Fix README to reflect sonic-utilities being built in Bullseye
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
There is a need to select different MMU profiles based on deployment type
How I did it
There will be separate subfolders (RDMA-CENTRIC, TCP-CENTRIC, BALANCED) in each HWSKU folder, containing deployment-specific MMU and QoS settings. The SonicQosProfile attribute in the minigraph is used to determine which settings to use. If that attribute is not present, the default settings in the HWSKU folder are used.
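A minimal sketch of the selection logic, assuming a helper that resolves the settings directory; the profile names and the SonicQosProfile attribute come from the description above, everything else is illustrative.
```python
import os

KNOWN_PROFILES = ("RDMA-CENTRIC", "TCP-CENTRIC", "BALANCED")

def resolve_qos_dir(hwsku_dir, sonic_qos_profile=None):
    """Pick the deployment-specific MMU/QoS folder under the HWSKU folder.

    If the minigraph's SonicQosProfile attribute names one of the known
    subfolders, use it; otherwise fall back to the default settings that
    live directly in the HWSKU folder.
    """
    if sonic_qos_profile and sonic_qos_profile.upper() in KNOWN_PROFILES:
        candidate = os.path.join(hwsku_dir, sonic_qos_profile.upper())
        if os.path.isdir(candidate):
            return candidate
    return hwsku_dir
```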
Why I did it
Add infrastructure to support adding feature-specific ACLs.
If feature-specific ACLs have to be added:
if feature_name in self.feature_present and self.feature_present.get(feature_name):
    add_feature_specific_acls()
How I did it
Add a function to get the features present in the FEATURE table.
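A minimal sketch of how the feature lookup and the conditional hook fit together; how the FEATURE table contents are obtained (e.g. from CONFIG_DB) is outside the sketch, and the names below are illustrative.
```python
def get_features_present(feature_table):
    """Build a lookup of features present in the FEATURE table.

    feature_table is a dict keyed by feature name (the parsed FEATURE
    table contents); disabled entries are treated as not present.
    """
    return {name: entry.get("state") != "disabled"
            for name, entry in feature_table.items()}

def add_acls_for_feature(feature_present, feature_name, add_feature_specific_acls):
    # Mirrors the snippet above: only add the feature-specific ACL tables
    # when the feature is present (and not disabled).
    if feature_name in feature_present and feature_present.get(feature_name):
        add_feature_specific_acls()
```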
How to verify it
unit-test passes.
- Why I did it
Add support for building the Mellanox platform for the arm64 target architecture.
- How I did it
Contains the following changes:
1. Change instances of hard-coded amd64 to $(CONFIGURED_ARCH)
2. Add logic to download correct binary for MFT package
3. Add TARGET_BOOTLOADER=grub definition to rules.mk to override default arm64 bootloader
- How to verify it
Build mellanox platform with TARGET_ARCH set as arm64
Refactors the SONiC Installer to support greater flexibility in building for a given architecture and bootloader.
#### Why I did it
Currently the SONiC installer assumes that if a platform is ARM based, it uses the `uboot` bootloader, and that it uses the `grub` bootloader otherwise. This is not a correct assumption to make, as ARM is not strictly tied to uboot and x86 is not strictly tied to grub.
#### How I did it
To implement this I introduce the following changes:
* Remove the different arch folders from the `installer/` directory
* Merge the generic components of the ARM and x86 installer into `installer/installer.sh`
* Refactor x86 + grub specific functions into `installer/default_platform.conf`
* Modify the installer to call the `default_platform.conf` file and also the `platform/[platform]/platform.conf` file to override defaults as needed
* Update references to the installer in the `build_image.sh` script
* Add `TARGET_BOOTLOADER` variable that is by default `uboot` for ARM devices and `grub` for x86 unless overridden in `platform/[platform]/rules.mk`
* Update bootloader logic in `build_debian.sh` to be based on `TARGET_BOOTLOADER` instead of `TARGET_ARCH` and to reference the grub package in a generic manner
#### How to verify it
This has been tested on an ARM test platform as well as on Mellanox amd64 switches to ensure there was no impact.
#### Description for the changelog
[arm] Refactor installer and build to allow arm builds targeted at grub platforms
#### Link to config_db schema for YANG module changes
N/A
Why I did it
Currently interfaces.j2 hardcodes eth0 even when there are multiple interfaces in MGMT_INTERFACE. This change adds support for generating /etc/network/interfaces (/e/n/i) when there are multiple interfaces in MGMT_INTERFACE.
How I did it
By removing hardcoded eth0 when looping through MGMT_INTERFACE.
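A minimal Python sketch of the intended behavior; the real change is in the interfaces.j2 template, and the (interface name, IP prefix) key shape follows the usual MGMT_INTERFACE layout in config_db.
```python
def render_mgmt_stanzas(mgmt_interface):
    """Emit a simplified /e/n/i stanza per (interface, prefix) pair.

    mgmt_interface mirrors the MGMT_INTERFACE table, keyed by
    (interface_name, ip_prefix); eth0 is no longer assumed.
    """
    lines = []
    for ifname, prefix in sorted(mgmt_interface.keys()):
        family = "inet6" if ":" in prefix else "inet"
        lines += [
            "auto {}".format(ifname),
            "iface {} {} static".format(ifname, family),
            "    address {}".format(prefix),
            "",
        ]
    return "\n".join(lines)
```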
How to verify it
Verified through unit test.
Why I did it
This was an ask by Microsoft to provide:
A 7260 config.bcm file for the hardware SKU Arista-7260CX3-D92C16 (named Arista-7260CX3-D96C16).
There are 16 100G uplinks:
Ethernet13-20/1
Ethernet45-52/1
All other ports are broken out into 2x50G ports.
How I did it
Copied existing Arista-7260CX3-D108C8 HWSKU and altered the bcm.config and port_config.ini files.
How to verify it
The new 100G ports do come up with a 201811 image using this HWSKU.
Co-authored-by: Zhi Yuan (Carl) Zhao <zyzhao@arista.com>