Add missing aggregate port_config.ini needed by sonic-mgmt
Concatenate the ASIC-specific port_config.ini files from device/arista/x86_64-arista_7800r3a_36d2_lc/Arista-7800R3A-36D2-C36/[01] to create the aggregate file.
This change disables the PCIe firmware check done by Broadcom SAI. The change is needed for the Arista platform x86_64-arista_7050cx3_32s; otherwise, the check fails and blocks initialization.
A PCIe firmware check was added to the Broadcom SDK, and certain Arista hardware is not compliant with it, so disable_pcie_firmware_check was originally added for x86_64-arista_7060dx4_32. x86_64-arista_7050cx3_32s used to pass the check, but a firmware change made in August caused it to fail.
Why I did it
Added to allow test_crm_route to pass; the test tries to add a /126 IPv6 route, and this change is required for the count of available routes to be updated correctly.
Why I did it
TX FIR tuning should be done based on the type of inserted transceiver
How I did it
Add media_settings.json, which contains the tuning data for 100G and 400G optics.
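For reference, a minimal sketch of the media_settings.json layout as consumed by xcvrd, assuming the usual GLOBAL_MEDIA_SETTINGS structure; the port range, media key, attribute names, and lane values below are illustrative placeholders rather than the actual tuning data:

    {
        "GLOBAL_MEDIA_SETTINGS": {
            "0-35": {
                "PLACEHOLDER-MEDIA-KEY": {
                    "preemphasis": {"lane0": "0x164509", "lane1": "0x164509"},
                    "idriver": {"lane0": "0x4", "lane1": "0x4"}
                },
                "Default": {
                    "preemphasis": {"lane0": "0x144307", "lane1": "0x144307"}
                }
            }
        }
    }

xcvrd looks up the entry matching the inserted transceiver type and applies the per-lane values to the port, which is what lets the TX FIR tuning follow the optic.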
How to verify it
Tested against x86_64-arista_7800r3a_36d2_lc
Why I did it
This PR applies separate DSCP_TO_TC_MAP and TC_TO_QUEUE_MAP entries to the uplink ports on dual ToR setups.
The traffic with DSCP 2 and DSCP 6 from T1 is treated as lossless traffic.
DSCP 2 -> TC 2 -> Queue 2
DSCP 6 -> TC 6 -> Queue 6
Traffic with DSCP 2 or DSCP 6 from downlink is still treated as lossy traffic as before.
How I did it
Define DSCP_TO_TC_MAP|AZURE_UPLINK and TC_TO_QUEUE_MAP|AZURE_UPLINK.
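Restricted to the two entries called out above (all other DSCP and TC values omitted), the rendered config_db.json fragment looks roughly like this:

    {
        "DSCP_TO_TC_MAP": {
            "AZURE_UPLINK": {
                "2": "2",
                "6": "6"
            }
        },
        "TC_TO_QUEUE_MAP": {
            "AZURE_UPLINK": {
                "2": "2",
                "6": "6"
            }
        }
    }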
How to verify it
Verified by unit tests.
Verified by copying the new template to a testbed and rendering a config_db.json.
Why I did it
Some sonic-mgmt platform_tests/api were failing on the 7060CX-32S
How I did it
Added the missing metadata in platform.json and platform_components.json
This is purely test data and does not impact our API implementation.
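For illustration, a minimal platform_components.json skeleton of the kind these tests consume; the chassis and component names below are placeholders, not the actual metadata added:

    {
        "chassis": {
            "PLACEHOLDER_CHASSIS_NAME": {
                "components": {
                    "PLACEHOLDER_COMPONENT": {}
                }
            }
        }
    }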
How to verify it
Run platform_tests/api and expect a 100% pass rate.
Why I did it
Some sonic-mgmt platform_tests/api were failing on the 7260CX3-64
How I did it
Added the missing metadata in platform.json and platform_components.json
This is purely test data and does not impact our API implementation.
How to verify it
Run platform_tests/api and expect a 100% pass rate.
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
Separate profiles are needed for compute and storage, and this infra update will help achieve that.
How I did it
Moved the buffer pool/profile and QoS definitions for TD2 to a common folder; all TD2 HWSKUs will reference that folder.
Why I did it
Fixes #12614
How I did it
In container_checker, database_chassis is added to the expected containers if the device is a supervisor.
To detect that the device is a supervisor, add supervisor=1 to the platform_env.conf of the 7808 supervisor platform.
How to verify it
Run the container_checker monit check.
Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>
Why I did it
Add components data for sonic-mgmt testing
How I did it
Update platform.json and add platform_components.json
How to verify it
Ran sonic-mgmt tests (test_chassis and test_component)
Why I did it
Some Arista products do not have an SSD but use an eMMC instead.
The SsdUtil plugin is therefore extended to support both.
How I did it
Implemented the ssd_util.py platform plugin loaded by ssdutil.
This plugin falls back to the default SONiC implementation if the Arista one can't be found.
How to verify it
Run show platform ssdhealth on a product with an eMMC
Multi-asic Docker instances are created behind Docker's default bridge, which doesn't allow them to talk to other Docker instances that are in the host network (like database-chassis).
On linecards, we configure midplane interfaces to let per-asic docker
containers talk to CHASSIS_DB on the supervisor through internal chassis
network.
On the supervisor we don't need the chassis internal network, but we still need a similar setup to allow fabric containers to talk to database-chassis.
Why I did it
This PR updates TC_TO_QUEUE_MAP|AZURE for the SKUs Arista-7050CX3-32S-D48C8 and Arista-7260CX3 T0.
The change only aligns the TC_TO_QUEUE_MAP for regular traffic and bounced traffic. It has no production impact because no traffic is currently mapped to TC 2 or TC 6.
How I did it
Update TC_TO_QUEUE_MAP|AZURE and the test cases as well.
How to verify it
Verified by running test case test_j2files.py
/sonic/src/sonic-config-engine$ python3 setup.py test -s tests/test_j2files.py
running test
......
----------------------------------------------------------------------
Ran 29 tests in 25.390s
OK
Why I did it
After PFC interop testing between the 8102 and the 7050cx3, data packet losses were observed on the Rx ports of the 7050cx3 (inflow from the 8102). This was primarily due to the 8102's slower response time in reacting to PFC pause frames received from neighboring devices. To solve the packet drops, the 7050cx3 PG headroom size has to be increased to 160 KB.
How I did it
Modified the xoff threshold value to 160 KB in the pg_profile file so that the buffer manager reads that value when building the image and configuring the device.
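As a sanity check of the end result, the generated lossless PG profile in config_db should carry the larger headroom. A hedged sketch, where the profile name, pool reference, and the remaining fields are placeholders and only the xoff value (160 KB = 163840 bytes) reflects this change:

    {
        "BUFFER_PROFILE": {
            "pg_lossless_100000_300m_profile": {
                "pool": "ingress_lossless_pool",
                "size": "19456",
                "xon": "19456",
                "xoff": "163840",
                "dynamic_th": "-2"
            }
        }
    }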
How to verify it
run "mmuconfig -l" once image is built
Signed-off-by: dojha <devojha@microsoft.com>
Why I did it
Content of platform.json was outdated and some platform_tests/api of sonic-mgmt were failing.
How I did it
Added the necessary values to platform.json
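For reference, the rough shape of the platform.json metadata that the platform_tests/api cases consult; the names and the set of entries below are placeholders rather than the values actually added:

    {
        "chassis": {
            "name": "PLACEHOLDER_CHASSIS_NAME",
            "components": [
                { "name": "PLACEHOLDER_COMPONENT" }
            ],
            "fans": [
                { "name": "fan1" }
            ],
            "psus": [
                { "name": "psu1" }
            ],
            "thermals": [
                { "name": "PLACEHOLDER_THERMAL" }
            ],
            "sfps": [
                { "name": "sfp1" }
            ]
        }
    }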
How to verify it
Running platform_tests/api of sonic-mgmt should yield a 100% pass rate.
Port index 22 is associated with phy23_config.json; reusing the same port index 22 in phy24_config.json may cause a gearbox port creation error. Port Ethernet22 maps to index 23.
* Why I did it
Follow-up to #10656. This change adds the remaining configs for the 720DT-48S platform.
* How I did it
Adds the following:
- gearbox_config.json and other gearbox-related config files, to enable traffic on external PHY ports (Ethernet0-23); a rough sketch of the layout follows below
- sensors.conf
- pcie.yaml
Also adds missing facts in platform.json
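A rough sketch of the gearbox_config.json shape, assuming the usual gearsyncd schema with "phys" and "interfaces" sections; the names, address, file paths, and lane numbers are illustrative placeholders:

    {
        "phys": [
            {
                "phy_id": 1,
                "name": "PLACEHOLDER_PHY",
                "address": "0x0",
                "lib_name": "PLACEHOLDER_PAI_LIB.so",
                "firmware_path": "/path/to/firmware.bin",
                "config_file": "/usr/share/sonic/hwsku/phy1_config.json",
                "phy_access": "mdio",
                "bus_id": 0
            }
        ],
        "interfaces": [
            {
                "name": "Ethernet0",
                "index": 1,
                "phy_id": 1,
                "system_lanes": [200, 201],
                "line_lanes": [300, 301]
            }
        ]
    }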
* How to verify it
- show interfaces status shows links up on interfaces Ethernet0-23
- traffic flows with no errors on interfaces Ethernet0-23
Note: the above testing depends on "Add gbsyncd container for broncos" (#11154) and "[orchagent]: Enhance initSaiPhyApi" (sonic-swss#2367), as well as having the appropriate PAI driver.
Co-authored-by: Samuel Angebault <staphylo@arista.com>
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
Improve throughput and latency for 7260 deployments
How I did it
Update the dynamic threshold to 0 and the ECN settings to 2 MB / 10 MB / 5%.
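Expressed as the resulting config_db entries, the intent is roughly the following; the buffer profile name is a placeholder, AZURE_LOSSLESS is used as the conventional WRED profile name in SONiC QoS templates, and this assumes the 2 MB / 10 MB / 5% values map to the green WRED min threshold, max threshold, and drop probability (thresholds in bytes):

    {
        "BUFFER_PROFILE": {
            "PLACEHOLDER_LOSSLESS_PROFILE": {
                "dynamic_th": "0"
            }
        },
        "WRED_PROFILE": {
            "AZURE_LOSSLESS": {
                "wred_green_enable": "true",
                "green_min_threshold": "2097152",
                "green_max_threshold": "10485760",
                "green_drop_probability": "5",
                "ecn": "ecn_all"
            }
        }
    }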
How to verify it
Updated unit tests to use the modified values for the 7260 ECN settings.
Why I did it
This was an ask by Microsoft to provide:
A 7260 config.bcm file for the hardware SKU Arista-7260CX3-D92C16 (named Arista-7260CX3-D96C16).
There are 16 100G uplinks:
Ethernet13-20/1
Ethernet45-52/1
All other ports are broken out into 2x50G ports.
How I did it
Copied the existing Arista-7260CX3-D108C8 HWSKU and altered the config.bcm and port_config.ini files.
How to verify it
The new 100G ports do come up with a 201811 image using this HWSKU.
Co-authored-by: Zhi Yuan (Carl) Zhao <zyzhao@arista.com>
* [device]: Add SAI checksum verify to TD3 config
* A new config option was added to control the value of IPV4_INCR_CHECKSUM_ORIGINAL_VALUE_VERIFY in the EGR_FLEX_CONFIG control register (this prevents checksums of 0xffff from being propagated to other devices)
For 7800 LCs, set the LAG mode to support 1024 16-member system LAGs.
Why I did it
The SOC property changes are necessary to match #10519, which increases the number of system LAG IDs to 1024.
Description for the changelog
For 7800 LCs, set the LAG mode to support 1024 16-member system LAGs.
Why I did it
This linecard runs in multi-asic mode and therefore needs the use_pci_id_chassis file to work properly.
The default_sku file was also missing, which would break the boot when no minigraph is provided.
Description for the changelog
Add missing default_sku and use_pci_id_chassis configs for 7800R3A-36D2
Added an initial set of config files to allow booting and partial traffic testing in SONiC on the 720DT-48S.
How to verify it
- Switch boots
- show interfaces status shows links up on interfaces Ethernet24-51
- Traffic flows with no errors on interfaces Ethernet24-51
Signed-off-by: bingwang <wang.bing@microsoft.com>
Why I did it
This PR brings two changes.
1. Add lossy PG profiles for PG 2 and PG 6 on T1 for ports between T1 and T2.
After PR "Update qos config to clear queues for bounced back traffic" (#10176), the DSCP_TO_TC_MAP and TC_TO_PG_MAP are updated when remapping is enabled:
DSCP_TO_TC_MAP (before -> after):
- "2": "1" -> "2": "2" (change on leaf routers only: map DSCP 2 to TC 2, as TC 2 will be used as a lossless TC)
- "6": "1" -> "6": "6" (change on leaf routers only: map DSCP 6 to TC 6, as TC 6 will be used as a lossless TC)
TC_TO_PRIORITY_GROUP_MAP (before -> after):
- "2": "0" -> "2": "2" (change on leaf routers only: map TC 2 to PG 2, as PG 2 will be used as a lossless PG)
- "6": "0" -> "6": "6" (change on leaf routers only: map TC 6 to PG 6, as PG 6 will be used as a lossless PG)
So we have two new lossy PGs (2 and 6) for the T2-facing ports on T1, and two new lossless PGs (2 and 6) for the T0-facing ports on T1.
However, there is no lossy PG profile for the T2-facing ports on T1. The lossless PGs for ports between T1 and T0 are already handled by buffermgrd. Therefore, we need to add lossy PG profiles for the T2-facing ports on T1.
We don't have this issue on T0 because PG 2 and PG 6 are lossless PGs there, and no lossy traffic is mapped to PG 2 or PG 6.
2. Map port-level TC 7 to PG 0.
Before the PCBB change: DSCP 48 -> TC 6 -> PG 0.
After the PCBB change: DSCP 48 -> TC 7 -> PG 7.
We can instead map TC 7 to PG 0 to save a lossy PG.
How I did it
Update the QoS and buffer templates.
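Putting the two changes together, the rendered maps and the added lossy PG entries on a T1 uplink would look roughly like the fragment below; the port name is a placeholder, other TC entries are omitted, and ingress_lossy_profile stands in for whatever lossy profile the template actually references:

    {
        "TC_TO_PRIORITY_GROUP_MAP": {
            "AZURE": {
                "2": "2",
                "6": "6",
                "7": "0"
            }
        },
        "BUFFER_PG": {
            "Ethernet0|2": {
                "profile": "ingress_lossy_profile"
            },
            "Ethernet0|6": {
                "profile": "ingress_lossy_profile"
            }
        }
    }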
How to verify it
Verified by unit tests.