Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
There is a need to have separate buffer profiles on compute and storage, and this infra update will help achieve that
How I did it
Moved buffer pool/profile and QoS definitions on TD2 to a common folder, which all TD2 HWSKUs will reference
Why I did it
TX FIR tuning should be done based on the type of inserted transceiver
How I did it
Add media_settings.json, which contains the tuning data for 100G and 400G optics (see the sketch below).
How to verify it
Tested against x86_64-arista_7800r3a_36d2_lc
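A minimal sketch of the media_settings.json layout; the port range, media key, attribute names, and per-lane values are all illustrative rather than the qualified tuning data shipped for this linecard:
```json
{
    "GLOBAL_MEDIA_SETTINGS": {
        "0-35": {
            "QSFP28-100G-LR4": {
                "main": {"lane0": "0x0000000C", "lane1": "0x0000000C"},
                "pre1": {"lane0": "0x00000002", "lane1": "0x00000002"},
                "post1": {"lane0": "0x00000008", "lane1": "0x00000008"}
            }
        }
    }
}
```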
- Why I did it
A new SKU for MSN4700 Platform i.e. Mellanox-SN4700-V16A96
Requirements:
Breakout:
Port 1-24: 4x25G(4)[10G,1G]
Port 25-28: 2x100G[200G,50G,40G,25G,10G,1G]
Port 29-32: 2x200G[100G,50G,40G,25G,10G,1G]
Downlinks: 96 (1-24) + 4 (25-28)
Uplinks: 4 (29-32)
Shared Headroom: Enabled
Over Subscribe Ratio: 1:4
Default Topology: T0
Default Cable Length for T1: 5m
VxLAN source port range set: No
Static Policy Based Hashing Supported: No
Additional Details:
QoS params: The default ones defined in qos_config.j2 will be applied
Small Packet Percentage: 50% is used for the traditional buffer model. Note: for the dynamic model, the value defined in LOSSLESS_TRAFFIC_PATTERN|AZURE|small_packet_percentage is used (see the config_db sketch after this list).
The SKU was drafted under the assumption that the downlink ports use transceivers that only support the first 4 lanes of the physical port they are connected to. Hence, for ports 1-24, the last four lanes are not used.
Cable Lengths used for generating buffer_defaults_{t0,t1}.j2 values
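A minimal config_db.json sketch of the lossless traffic pattern entry referenced above (field values illustrative):
```json
{
    "LOSSLESS_TRAFFIC_PATTERN": {
        "AZURE": {
            "small_packet_percentage": "50",
            "mtu": "1024"
        }
    }
}
```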
Signed-off-by: Vivek Reddy Karri <vkarri@nvidia.com>
A new SKU for MSN4700 Platform i.e. Mellanox-SN4700-V48C32
Requirements:
Breakout:
Port 1-24: 2x200G
Port 25-32: 4x100G
Downlinks: 48 (1-24)
Uplinks: 32 (25-32)
Shared Headroom: Enabled
Over Subscribe Ratio: 1:8
Default Topology: T1
Default Cable Length for T1: 300m
VxLAN source port range set: No
Static Policy Based Hashing Supported: No
Additional Details:
QoS params: The default ones defined in qos_config.j2 will be applied
Small Packet Percentage: 50% is used for the traditional buffer model. Note: for the dynamic model, the value defined in LOSSLESS_TRAFFIC_PATTERN|AZURE|small_packet_percentage is used.
Cable Lengths used for generating buffer_defaults_{t0,t1}.j2 values
Signed-off-by: Vivek Reddy Karri <vkarri@nvidia.com>
- Why I did it
Add new sensor conf files to support respun platforms (SN3700/SN3700C/SN4600C)
- How I did it
Add new sensor conf
Update the get_sensors_conf_path scripts to apply the sensor conf according to the HW respin version info
- How to verify it
Run platform tests (including the sensor test)
Signed-off-by: Kebo Liu <kebol@nvidia.com>
Why I did it
Fixes #12614
How I did it
In container_checker, database_chassis is added to the expected containers if the device is a supervisor.
To detect that the device is a supervisor, add supervisor=1 to the platform_env.conf of the 7808 supervisor platform (see the sketch below).
How to verify it
run container_checker monit check
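A one-line sketch of the platform_env.conf addition (any other keys already in that file are omitted):
```
supervisor=1
```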
Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>
#### Why I did it
To add a new SKU, Mellanox-SN2700-D44C10, with the following requirements:
| Port configuration | Value |
| ------ |--------- |
| Breakout mode for each port |**Defined in port mapping** |
| Speed of the port | **Defined in Port mapping** |
| Auto-negotiation enable/disable | **No setting required** |
| FEC mode | **No setting required** |
|Type of transceiver used | **Not needed**|
| Buffer configuration | Value |
| ------ |--------- |
| Shared headroom | **Enabled** |
| Shared headroom pool factor | **2** |
| Dynamic Buffer | **Disabled** |
| In static buffer scenario how many uplinks and downlinks? | **44x50G + 2x100G downlinks, 8x100G uplinks** |
| 2km cable support required? | **No** |

| Switch configuration | Value |
| ------ |--------- |
| Warmboot enabled? | **Yes** |
| Should warmboot be added to SAI profile when enabled? | **Yes** |
| Is VxLAN source port range set? | **No** |
| Should VxLAN source port range be added to SAI profile when set? | **No** |
| Is Static Policy Based Hashing enabled? | **No** |
Port Mapping
| Ports | Mode |
| ------ |--------- |
| 1,2 | 1x100G |
| 3-6 | 2x50G |
| 7-10 | 1x100G |
| 11-22 | 2x50G |
| 23-26 | 1x100G |
| 27-32 | 2x50G |
Number of Uplinks / Downlinks:
T0 topology: **44x50G + 2x100G downlinks, 8x100G uplinks**.
#### How I did it
Defined the SKU as per requirements
#### How to verify it
Load the SKU and verify if all links come up and traffic passes.
Changed the platform device name under the nokia directory; we now need to specify marvell armhf/arm64 to provide a more accurate platform identity. Otherwise, ONIE discovery won't recognize the ASIC being installed.
Why I did it
When we load images using ONIE discovery, the process was failing because of a Marvell ASIC mismatch.
How I did it
Replace the platform ASIC with marvell-armhf under 7215 (see the sketch below)
How to verify it
Load a new image using an HTTP server and verify that the image can be loaded successfully.
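A sketch of the resulting one-line platform_asic file; the exact 7215 device directory path is an assumption:
```
marvell-armhf
```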
Why I did it
Add components data for sonic-mgmt testing
How I did it
Update platform.json and add platform_components.json (see the sketch below)
How to verify it
Ran sonic-mgmt tests (test_chassis and test_component)
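A minimal sketch of the platform_components.json layout; the platform and component names here are illustrative:
```json
{
    "chassis": {
        "Nokia-7215": {
            "component": {
                "System-CPLD": {},
                "U-Boot": {}
            }
        }
    }
}
```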
Why I did it
Some Arista products do not have an SSD but use an eMMC instead.
The SsdUtil plugin is therefore extended to support both.
How I did it
Implemented an ssd_util.py platform plugin that is loaded by ssdutil.
This plugin falls back to the default SONiC implementation if the Arista one can't be found (see the sketch below).
How to verify it
Run show platform ssdhealth on a product with an eMMC
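A minimal sketch of the fallback pattern (the vendor module path is hypothetical; the generic import reflects the default SONiC SsdUtil):
```python
# ssd_util.py -- platform plugin picked up by ssdutil.
# Prefer the vendor implementation, which also handles eMMC devices;
# fall back to the default SONiC one when it is absent.
try:
    from arista.utils.sonic_ssd import SsdUtil  # hypothetical vendor module path
except ImportError:
    from sonic_platform_base.sonic_ssd.ssd_generic import SsdUtil
```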
Signed-off-by: maipbui <maibui@microsoft.com>
Why I did it
The xml.etree.ElementTree module is not secure against maliciously constructed data.
How I did it
Remove xml. Use the lxml XML parser package, which can be configured to prevent potentially malicious operations (see the sketch below).
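A minimal sketch of the safer parsing pattern (the input file name is illustrative):
```python
from lxml import etree

# Disable entity resolution and network access so crafted documents
# (XXE, external DTD fetches) are rejected rather than expanded.
parser = etree.XMLParser(resolve_entities=False, no_network=True)
tree = etree.parse("minigraph.xml", parser)  # file name illustrative
root = tree.getroot()
```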
Multi-asic Docker instances are created behind Docker's default bridge
which doesn't allow talking to other Docker instances that are in the
host network (like database-chassis).
On linecards, we configure midplane interfaces to let per-asic docker
containers talk to CHASSIS_DB on the supervisor through internal chassis
network.
On the supervisor we don't need the chassis internal network, but we still need a similar setup to allow fabric containers to talk to database-chassis.
Why I did it
This PR updates TC_TO_QUEUE_MAP|AZURE for the SKUs Arista-7050CX3-32S-D48C8 and Arista-7260CX3 T0.
The change only aligns the TC_TO_QUEUE_MAP for regular traffic and bounced traffic. It has no production impact because no traffic is currently mapped to TC2 or TC6.
How I did it
Updated TC_TO_QUEUE_MAP|AZURE and the test cases as well (see the sketch below).
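For illustration only, the shape of an aligned map; the identity values below are an assumption, not necessarily the SKU's exact rendered entries:
```json
{
    "TC_TO_QUEUE_MAP": {
        "AZURE": {
            "0": "0", "1": "1", "2": "2", "3": "3",
            "4": "4", "5": "5", "6": "6", "7": "7"
        }
    }
}
```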
How to verify it
Verified by running test case test_j2files.py
/sonic/src/sonic-config-engine$ python3 setup.py test -s tests/test_j2files.py
running test
......
----------------------------------------------------------------------
Ran 29 tests in 25.390s
OK
- Why I did it
Remove extra empty lines in the SN2201 pg_profile_lookup.ini to make it aligned with other platforms.
These extra empty lines could confuse some test cases that need to parse this file.
Signed-off-by: Kebo Liu <kebol@nvidia.com>
Why I did it
After PFC interop testing between the 8102 and the 7050cx3, data packet losses were observed on the Rx ports of the 7050cx3 (inflow from the 8102). This was primarily due to the 8102's slower response time in reacting to PFC pause packets received from neighboring devices. To solve the packet drops, the 7050cx3 PG headroom size has to be increased to 160kB.
How I did it
Modified the xoff threshold value to 160kB in the pg_profile file so that the buffer manager reads that value when building the image and configuring the device (see the sketch below).
How to verify it
run "mmuconfig -l" once image is built
Signed-off-by: dojha <devojha@microsoft.com>
Why I did it
Content of platform.json was outdated and some platform_tests/api of sonic-mgmt were failing.
How I did it
Added the necessary values to platform.json
How to verify it
Running platform_tests/api of sonic-mgmt should yield a 100% pass rate.
Port index 22 is associated with phy23_config.json; reusing the same port index 22 in phy24_config.json may cause a gearbox port creation error. Port Ethernet22 maps to index 23.
Added initial set of config files to allow for booting and partial traffic testing in SONiC on the 720DT-48S.
How to verify it
- Switch boots
- show interfaces status shows links up on interfaces Ethernet24-51
- Traffic flows with no errors on interfaces Ethernet24-51
- Why I did it
This new breakout mode is required when a QSFP cable is used on a QSFP-DD capable 4700 port. Since QSFP uses only the first 4 lanes, this mode is required to restrict the child ports to the first four lanes of the physical port.
- How I did it
Updated the platform.json file with the extended data (see the sketch after the test output below)
- How to verify it
Tested on one port:
root@msn-4700:/home/admin# show int status
Interface Lanes Speed MTU FEC Alias Vlan Oper Admin Type Asym PFC
----------- ------------------------------- ------- ----- ----- ------- ------ ------ ------- ----------------------------------------------- ----------
Ethernet0 0 25G 9100 N/A etp1a routed up up QSFP28 or later N/A
Ethernet1 1 25G 9100 N/A etp1b routed down up N/A N/A
Ethernet2 2 25G 9100 N/A etp1c routed down up N/A N/A
Ethernet3 3 25G 9100 N/A etp1d routed down up N/A N/A
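A sketch of the shape of a platform.json interface entry with breakout modes; lanes, aliases, and mode strings here are illustrative:
```json
{
    "interfaces": {
        "Ethernet0": {
            "index": "1,1,1,1,1,1,1,1",
            "lanes": "0,1,2,3,4,5,6,7",
            "breakout_modes": {
                "1x400G[200G,100G,50G,40G,25G,10G]": ["etp1"],
                "4x25G[10G,1G]": ["etp1a", "etp1b", "etp1c", "etp1d"]
            }
        }
    }
}
```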
Signed-off-by: Vivek Reddy Karri <vkarri@nvidia.com>
Update the bcm config file system_ref_core_clock_khz param to handle systems with J2cplus linecards.
We need system_ref_core_clock_khz to be set to 1600000 to support J2 and J2cplus linecards on the same chassis (see the sketch below).
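The resulting SOC property line in the bcm config file (one-line sketch; surrounding properties omitted):
```
system_ref_core_clock_khz=1600000
```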
- Why I did it
SN5600 has an additional service interface with different parameters than the other interfaces.
- How I did it
Added the etp65 interface with the correct parameters.
- How to verify it
Run platform test on SN5600 platform.
Check the service port can startup correctly.
- Why I did it
New SKU for MSN-4700 Platform i.e. Mellanox-SN4700-C128
Requirements:
* Breakout: Port 1-32: 4x100G
* Downlinks: 120 (1-30)
* Uplinks: 8 (31-32)
* Shared Headroom: Enabled
* Over Subscribe Ratio: 1:8
* Default Topology: T2
* Default Cable Length for T2: 1500m
* QoS params: The default ones defined in qos_config.j2 will be applied
* Small Packet Percentage: 50% is used for the traditional buffer model. Note: for the dynamic model, the value defined in LOSSLESS_TRAFFIC_PATTERN|AZURE|small_packet_percentage is used
Additional Details:
Switch Type has to be programmed as SpineRouter through config_db.json in the DEVICE_METADATA|localhost|type field (see the sketch below) for the buffer values & cable lengths defined in buffers_defaults_t2.j2 to apply on the device
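A minimal config_db.json sketch of that setting (all other DEVICE_METADATA fields omitted):
```json
{
    "DEVICE_METADATA": {
        "localhost": {
            "type": "SpineRouter"
        }
    }
}
```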
Cable Lengths Used for generating buffer_defaults_{t0,t1,t2}.j2 values
Signed-off-by: Vivek Reddy Karri <vkarri@nvidia.com>
For 7800 LCs, set the LAG mode to support 1024 16-member system LAGs.
Why I did it
The SOC property changes are necessary to match #10519 which increases the number of system LAG IDs to 1024.
Description for the changelog
For 7800 LCs, set the LAG mode to support 1024 16-member system LAGs.
Why I did it
This linecard runs in multi-asic mode and therefore needs the use_pci_id_chassis file to work properly.
The default_sku file was also missing, which would break the boot when no minigraph is provided (see the sketch below).
Description for the changelog
Add missing default_sku and use_pci_id_chassis configs for 7800R3A-36D2
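A one-line sketch of the default_sku format, i.e. an HWSKU name followed by the default config preset; the values shown are illustrative, not the actual ones added:
```
Arista-7800R3A-36D2-D36 t1
```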
Why I did it:
To fix the hlx platform SFP+ module TX disable issue
How I did it:
Fix the SFP+ TX disable function according to the SFF-8472 specification (see the sketch below)
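A minimal sketch of soft TX disable per SFF-8472 (the byte/bit addresses come from the spec; the EEPROM path layout assumes an optoe-style sysfs file, and the helper is illustrative, not the actual platform API):
```python
# SFF-8472: the A2h (DOM) page carries a status/control byte at offset 110;
# bit 6 is "Soft TX Disable Select".
A2H_BASE = 256          # optoe-style sysfs eeprom: A2h follows the 256-byte A0h page
TX_DISABLE_BYTE = 110   # status/control byte within A2h
TX_DISABLE_BIT = 0x40   # bit 6: soft TX disable select

def set_tx_disable(eeprom_path, disable):
    """Set or clear soft TX disable on an SFP+ DOM page (illustrative helper)."""
    with open(eeprom_path, "r+b") as f:
        f.seek(A2H_BASE + TX_DISABLE_BYTE)
        ctrl = f.read(1)[0]
        ctrl = (ctrl | TX_DISABLE_BIT) if disable else (ctrl & ~TX_DISABLE_BIT)
        f.seek(A2H_BASE + TX_DISABLE_BYTE)
        f.write(bytes([ctrl]))
```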
Co-authored-by: Eric Zhu <erzhu@celestica.com>
- Why I did it
Support Mellanox-SN4600C-C64 as T1 switch in dual-ToR scenario
This is to port #11032 and #11299 from 202012 to master.
Support additional queue and PG in buffer templates, including both traditional and dynamic model
Support mapping DSCP 2/6 to lossless traffic in the QoS template (see the sketch after this list).
Add macros to generate additional lossless PG in the dynamic model
Adjust the order in which the generic/dedicated (with additional lossless queues) macros are checked and called to generate buffer tables in common template buffers_config.j2
Buffer tables are rendered via macros.
Both generic and dedicated macros are defined on our platform. Previously, the generic macro was called whenever it was defined, so it was always the one invoked on our platform. To avoid this, the dedicated macro is now checked and called first, and only then the generic ones.
Support MAP_PFC_PRIORITY_TO_PRIORITY_GROUP on ports with additional lossless queues.
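For the DSCP 2/6 mapping noted in the list above, a minimal config_db sketch of the added rows, assuming the usual identity mapping of DSCP 2/6 to TC/PG 2/6 (other rows of the AZURE maps omitted):
```json
{
    "DSCP_TO_TC_MAP": {
        "AZURE": {
            "2": "2",
            "6": "6"
        }
    },
    "TC_TO_PRIORITY_GROUP_MAP": {
        "AZURE": {
            "2": "2",
            "6": "6"
        }
    }
}
```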
On Mellanox-SN4600C-C64, buffer configuration for t1 is calculated as:
40 * 100G downlink ports with 4 lossless PGs/queues, 1 lossy PG, and 3 lossy queues
16 * 100G uplink ports with 2 lossless PGs/queues, 1 lossy PG, and 5 lossy queues
Signed-off-by: Stephen Sun <stephens@nvidia.com>