Backport #14372 to 202012
Why I did it
For better accounting purposes, update the ingress lossy traffic profile to use a static threshold. This change is only intended for TH devices using RDMA-CENTRIC profiles.
How I did it
Update the buffer templates for TH devices in the RDMA-CENTRIC folder to use the correct threshold.
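As a rough sketch of what the rendered profile looks like with a static threshold (the profile/pool names, size, and threshold value below are illustrative assumptions, and the exact pool reference syntax depends on the branch), the dynamic_th field is replaced with static_th:

```json
{
    "BUFFER_PROFILE": {
        "ingress_lossy_profile": {
            "pool": "ingress_lossy_pool",
            "size": "0",
            "static_th": "67108864"
        }
    }
}
```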
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
Update dynamic threshold to -1 to get optimal performance for RDMA traffic
How I did it
Modified pg_profile_lookup.ini to reflect the correct value
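For context, pg_profile_lookup.ini maps port speed and cable length to lossless PG buffer settings, with the dynamic threshold (alpha) in the last column. A sketch of an updated entry (the speed, cable length, and buffer sizes are illustrative, not the actual values; only the threshold column of -1 reflects this change):

```
# PG lossless profiles.
# speed    cable    size     xon      xoff     threshold
  100000   40m      48128    19456    28672    -1
```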
Signed-off-by: Neetha John <nejo@microsoft.com>
This PR is to backport #14138 to 202012.
- Why I did it
In the sfplpm API, the number of logical ports is hardcoded as 64. When a system contains more ports than this, the SDK APIs fail with a traceback like the one below:
Enabling low-power mode for port Ethernet0... Traceback (most recent call last):
File "/usr/share/sonic/platform/plugins/sfplpmset.py", line 167, in
set_lpmode(handle, cmd, sfp_module)
File "/usr/share/sonic/platform/plugins/sfplpmset.py", line 128, in set_lpmode
SX_MGMT_PHY_MOD_PWR_ATTR_PWR_MODE_E, SX_MGMT_PHY_MOD_PWR_MODE_LOW_E)
File "/usr/share/sonic/platform/plugins/sfplpmset.py", line 115, in pwr_attr_set
mgmt_phy_mod_pwr_attr_set(handle, module_id, attr_type, power_mode)
File "/usr/share/sonic/platform/plugins/sfplpmset.py", line 84, in mgmt_phy_mod_pwr_attr_set
assert SX_STATUS_SUCCESS == rc, "sx_mgmt_phy_mod_pwr_attr_set failed"
AssertionError: sx_mgmt_phy_mod_pwr_attr_set failed
Error! Unable to set LPM for 1, rc = 1, err msg: [+] opening sdk
Mar 07 03:25:28 INFO LOG: Initializing SX log with STDOUT as output file.
Mar 07 03:25:28 ERROR SX_API_PORT: sx_mgmt_phy_mod_pwr_attr_get: This API is deprecated and will be removed in the future. Please use sx_mgmt_phy_module_pwr_attr_get in its place.
Mar 07 03:25:28 ERROR SX_API_PORT: sx_mgmt_phy_mod_pwr_attr_set: This API is deprecated and will be removed in the future. Please use sx_mgmt_phy_module_pwr_attr_set in its place.
- How I did it
Removed the hardcoded value of 64 and obtained the number of logical ports from the SDK instead.
- How to verify it
Manual testing
Why I did it
Platform test cases test_tx_disable, test_tx_disable_channel, and test_power_override failed on DX010.
How I did it
Add an i2c access algorithm for the CPLD i2c adapters.
How to verify it
Verify it with platform_tests/api/test_sfp.py::TestSfpApi test cases.
Why I did it
Seastone does not have PSU fan status LEDs, so this needs to be reflected in platform.json.
How I did it
Set the PSU fan status LED availability to false in platform.json.
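A sketch of the kind of platform.json fragment this implies for each PSU fan (the field name is assumed to be the status_led controllable flag used by the platform API tests; other fan attributes are omitted):

```json
"fans": [
    {
        "name": "PSU 1 Fan 1",
        "status_led": {
            "controllable": false
        }
    }
]
```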
How to verify it
Verify it with platform_tests/api/test_psu_fans.py::TestPsuFans::test_set_fans_led case.
Why I did it
1. Fix the chassis test_set_fans_led case
2. Fix the chassis get_name case-mismatch issue
3. Fix the fan_drawer test_set_fans_speed case
4. Fix the component test_components test case
How I did it
Add the corresponding configuration to the chassis JSON file.
How to verify it
Run the platform test cases to verify these previously failing cases.
#### Why I did it
Some tests under platform_tests/api were failing on the 7050CX3 due to outdated facts in platform.json
#### How I did it
Updated platform.json facts with appropriate values
#### How to verify it
Run tests under platform_tests/api to verify no failures
Why I did it
Fix DX010 fan drawer and watchdog platform test case issues.
How I did it
1. Add fan_drawer get_maximum_consumed_power support
2. Adjust maximum watchdog timeout value check
How to verify it
Run test_fan_drawer and test_watchdog test cases.
Why I did it
Fix the following issues for Seastone platform:
- system-health issues: "show system-health detail will not complete" (#9530) and "Celestica Seastone DX010-C32: show system-health detail fails with 'Chassis' object has no attribute 'initizalize_system_led'" (#11322)
- show platform firmware updates issue: "Celestica Seastone DX010-C32: show platform firmware updates" (#11317)
- other platform optimizations
How I did it
Modify and optimize the platform implementation.
How to verify it
Manually run the test commands described in these issues.
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
ECN parameters need to be updated for storage backend
How I did it
Included a check for storage backend devices to update the QoS configs.
How to verify it
Verified that the new ECN settings are applied on storage backend devices.
Verified that the old ECN settings are applied on storage frontend and non-storage frontend/backend devices.
Signed-off-by: Neetha John <nejo@microsoft.com>
This is to backport #12626
Why I did it
There is a need to have separate profiles for compute and storage, and this infra update will help achieve that.
How I did it
Moved the buffer pool/profile and QoS definitions for TD2 to a common folder; all TD2 HWSKUs will reference that folder.
Co-authored-by: Zhi Yuan (Carl) Zhao <zyzhao@arista.com>
Why I did it
The Arista 7060 platform has a rare and unreproducible PCIe timeout that could possibly be solved by increasing the switch PCIe timeout value. To do this, a script is called for this platform on boot-up to increase the PCIe timeout.
No issues would be expected from the setpci command. From the PCIe spec:
"Software is permitted to change the value in this field at any
time. For Requests already pending when the Completion
Timeout Value is changed, hardware is permitted to use either
the new or the old value for the outstanding Requests, and is
permitted to base the start time for each Request either on when
this value was changed or on when each request was issued. "
How I did it
Add "platform-init" support in swss docker similar to how "hwsku-init" is called, only this would be for any device belonging to a platform. Then the script would reside in device data folder.
Additionally, add the pciutils dependency to docker-orchagent so it can run setpci commands.
How to verify it
On bootup of an Arista 7060, execute the following to check that the timeout has changed:
lspci -vv -s 01:00.0 | grep -i "devctl2"
New SKU for MSN-4700 Platform i.e. Mellanox-SN4700-C128
Requirements:
* Breakout: Port 1-32: 4x100G
* Downlinks: 120 (1-30)
* Uplinks: 8 (31-32)
* Shared Headroom: Enabled
* Over Subscribe Ratio: 1:8
* Default Topology: T2
* Default Cable Length for T2: 1500m
* QoS params: The default ones defined in qos_config.j2 will be applied
* Small Packet Percentage: 50% is used for the traditional buffer model. Note: for the dynamic model, the value defined in LOSSLESS_TRAFFIC_PATTERN|AZURE|small_packet_percentage is used
Additional Details:
The switch type has to be programmed as SpineRouter in the DEVICE_METADATA|localhost|type field of config_db.json for the buffer values and cable lengths defined in buffers_defaults_t2.j2 to apply on the device (see the config_db fragment below).
Cable Lengths Used for generating buffer_defaults_{t0,t1,t2}.j2
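For example, the relevant config_db.json fragment would look like the following (all other DEVICE_METADATA fields omitted):

```json
{
    "DEVICE_METADATA": {
        "localhost": {
            "type": "SpineRouter"
        }
    }
}
```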
Signed-off-by: Vivek Reddy Karri <vkarri@nvidia.com>
A new SKU for MSN4700 Platform i.e. Mellanox-SN4700-A96C8V8
Requirements:
Breakout:
Port 1-24: 4x25G(4)[10G,1G]
Port 25-28: 2x100G[200G,50G,40G,25G,10G,1G]
Port 29-32: 2x200G[100G,50G,40G,25G,10G,1G]
Downlinks: 96 (1-24) + 4 (25-28)
Uplinks: 4 (29-32)
Shared Headroom: Enabled
Over Subscribe Ratio: 1:4
Default Topology: T0
Default Cable Length for T1: 5m
VxLAN source port range set: No
Static Policy Based Hashing Supported: No
Additional Details:
QoS params: The default ones defined in qos_config.j2 will be applied
Small Packet Percentage: 50% is used for the traditional buffer model. Note: for the dynamic model, the value defined in LOSSLESS_TRAFFIC_PATTERN|AZURE|small_packet_percentage is used
The SKU was drafted under the assumption that the downlink ports use transceivers that support only the first 4 lanes of the physical port they are connected to. Hence, for ports 1-24, the last four lanes are not used.
Cable lengths used for generating buffer_defaults_{t0,t1}.j2
Signed-off-by: Vivek Reddy <vkarri@nvidia.com>
Why I did it
This PR is to update TC_TO_QUEUE_MAP|AZURE for the SKUs Arista-7050CX3-32S-D48C8 and Arista-7260CX3 T0.
The change only aligns the TC_TO_QUEUE_MAP for regular traffic and bounced traffic. It has no production impact because no traffic is currently mapped to TC2 or TC6.
How I did it
Update TC_TO_QUEUE_MAP|AZURE and the corresponding test cases.
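As an illustration of the alignment (only the TC2/TC6 rows are shown; the remaining TC-to-queue entries are unchanged and omitted, and the exact queue indices here are an assumption for illustration), the map for regular traffic ends up matching the bounced-traffic mapping:

```json
"TC_TO_QUEUE_MAP": {
    "AZURE": {
        "2": "2",
        "6": "6"
    }
}
```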
How to verify it
Verified by running test case test_j2files.py
/sonic/src/sonic-config-engine$ python3 setup.py test -s tests/test_j2files.py
running test
......
----------------------------------------------------------------------
Ran 29 tests in 25.390s
OK
- Why I did it
New SKU for MSN-4700 Platform i.e. Mellanox-SN4700-C128
Requirements:
* Breakout: Port 1-32: 4x100G
* Downlinks: 120 (1-30)
* Uplinks: 8 (31-32)
* Shared Headroom: Enabled
* Over Subscribe Ratio: 1:8
* Default Topology: T2
* Default Cable Length for T2: 1500m
* QoS params: The default ones defined in qos_config.j2 will be applied
* Small Packet Percentage: 50% is used for the traditional buffer model. Note: for the dynamic model, the value defined in LOSSLESS_TRAFFIC_PATTERN|AZURE|small_packet_percentage is used
Additional Details:
The switch type has to be programmed as SpineRouter in the DEVICE_METADATA|localhost|type field of config_db.json for the buffer values and cable lengths defined in buffers_defaults_t2.j2 to apply on the device.
Cable Lengths Used for generating buffer_defaults_{t0,t1,t2}.j2 values
Signed-off-by: Vivek Reddy <vkarri@nvidia.com>
Why I did it
During PFC interop testing between the 8102 and the 7050cx3, data packet losses were observed on the Rx ports of the 7050cx3 (inflow from the 8102). This was primarily due to the 8102's slower response time in reacting to PFC pause frames received from neighboring devices. To resolve the packet drops, the 7050cx3 PG headroom size has to be increased to 160 kB.
How I did it
Modified the xoff threshold value to 160 kB in the pg_profile lookup file so that the buffer manager reads that value when the image is built and configures the device accordingly.
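A sketch of the kind of pg_profile lookup entry being changed (the speed, cable length, and remaining columns are illustrative assumptions; the point is the xoff column raised to roughly 160 kB, with the exact byte value subject to cell-size rounding):

```
# speed    cable    size    xon     xoff      threshold
  100000   40m      1248    2288    160000    -3
```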
How to verify it
run "mmuconfig -l" once image is built
Signed-off-by: dojha <devojha@microsoft.com>
Why I did it:
To fix the hlx platform SFP+ module tx disable issue.
How I did it:
Fix the SFP+ tx disable function according to the SFF-8472 specification.
Co-authored-by: Eric Zhu <erzhu@celestica.com>
Why I did it
This PR is to backport #11569 into 202012 branch.
This PR is to apply a different DSCP_TO_TC_MAP to downlink and uplink ports on T1 in the dualtor deployment.
For T1 downlink ports (To T0)
The DSCP_TO_TC_MAP is not changed. DSCP2 and DSCP6 are mapped to TC2 and TC6 respectively.
For T1 uplink ports (To T2)
A new DSCP_TO_TC_MAP|AZURE_UPLINK is defined and applied. DSCP2 and DSCP6 are mapped to TC1 to avoid mixing up lossy and lossless traffic from T2.
The extra lossy PG2 and PG6 added in PR #11157 are reverted as well because no traffic from T2 is mapped to PG2 or PG6 now.
How I did it
Define a new map DSCP_TO_TC_MAP|AZURE_UPLINK for 7260 T1.
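For illustration, the uplink map differs from the default map only in the DSCP 2 and 6 rows (all other DSCP entries are omitted here and follow the existing AZURE map):

```json
"DSCP_TO_TC_MAP": {
    "AZURE": {
        "2": "2",
        "6": "6"
    },
    "AZURE_UPLINK": {
        "2": "1",
        "6": "1"
    }
}
```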
How to verify it
Verified by test case in test_j2files.py.
- Why I did it
The new SN4410 A1 system has a different sensor layout from the A0 system and needs a new sensor conf file to support it.
- How I did it
Since the SN4410 A1 system uses exactly the same sensor layout as the SN4700 A1 system, add a symbolic link to the SN4700 A1 sensor conf file to reuse it.
- How to verify it
Run the sensor test against the SN4410 A1 system;
Run platform-related regression tests against the SN4410 A1 system
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
This SKU was missed in the previous PR #11398.
How I did it
Update the dynamic threshold to 0 and the ECN settings to 2 MB / 10 MB / 5%.
How to verify it
Updated the unit tests to use the modified values for the 7260 ECN settings.
- Why I did it
MSN3700/3700C/4600C have been re-spun; the new HW versions of these platforms have different sensors, so the correct sensor.conf needs to be applied for them.
- How I did it
Add new sensor.conf files for the new re-spined platforms.
Enhance the logic of "get_sensors_conf_path" for the related platforms in order to load the correct sensor.conf for each version of platforms.
- How to verify it
run sensors test on different versions of platforms
Signed-off-by: Kebo Liu <kebol@nvidia.com>
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
Improve throughput and latency for 7260 deployments
How I did it
Update the dynamic threshold to 0 and the ECN settings to 2 MB / 10 MB / 5%.
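For reference, these ECN settings land in the WRED_PROFILE table of config_db; a sketch with the values from this change (the profile name and the yellow/red fields are assumptions/omitted, and the thresholds are shown in bytes):

```json
"WRED_PROFILE": {
    "AZURE_LOSSLESS": {
        "wred_green_enable": "true",
        "green_min_threshold": "2000000",
        "green_max_threshold": "10000000",
        "green_drop_probability": "5",
        "ecn": "ecn_all"
    }
}
```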
How to verify it
Updated the unit tests to use the modified values for the 7260 ECN settings.
- Why I did it
Configure a different DSCP_TO_TC_MAP between uplink and downlink on the T1 switch in the dual-ToR scenario.
On the T1 uplink, DSCP 2/6 are both mapped to TC 1 to avoid such traffic occupying lossless buffers.
On the T1 downlink, they are mapped to TC 2/6 respectively (unchanged).
- How I did it
Vendors who want to configure a different DSCP_TO_TC_MAP between uplinks and downlinks on T1 should:
- Define the generate_dscp_to_tc_map macro in the SKU's qos.json.j2 file
- Define map AZURE for downlink and AZURE_UPLINK for uplink
- Define the jinja2 variable different_dscp_to_tc_map as True
Signed-off-by: Stephen Sun <stephens@nvidia.com>
Why I did it
Support Mellanox-SN4600C-C64 as T1 switch in dual-ToR scenario
1. Support additional queue and PG in buffer templates, including both traditional and dynamic model
2. Support mapping DSCP 2/6 to lossless traffic in the QoS template.
3. Add macros to generate additional lossless PG in the dynamic model
4. Adjust the order in which the generic/dedicated (with additional lossless queues) macros are checked and called to generate buffer tables in common template buffers_config.j2
- Buffer tables are rendered via using macros.
- Both generic and dedicated macros are defined on our platform. Currently, the generic one is called as long as it is defined, which causes the generic one to always be called on our platform. To avoid this, the dedicated macro is checked and called first, and then the generic ones.
5. Support MAP_PFC_PRIORITY_TO_PRIORITY_GROUP on ports with additional lossless queues.
On Mellanox-SN4600C-C64, the buffer configuration for T1 is calculated as (a sketch of the rendered entries follows):
40 * 100G downlink ports with 4 lossless PGs/queues, 1 lossy PG, and 3 lossy queues
16 * 100G uplink ports with 2 lossless PGs/queues, 1 lossy PG, and 5 lossy queues
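A rough sketch of the rendered BUFFER_PG entries for one downlink port (the port name, profile names, exact PG indices, and reference syntax are assumptions for illustration; the real indices come from the SKU's TC-to-PG mapping):

```json
"BUFFER_PG": {
    "Ethernet0|3-4": { "profile": "pg_lossless_100000_5m_profile" },
    "Ethernet0|2": { "profile": "pg_lossless_100000_5m_profile" },
    "Ethernet0|6": { "profile": "pg_lossless_100000_5m_profile" },
    "Ethernet0|0": { "profile": "ingress_lossy_profile" }
}
```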
Signed-off-by: Stephen Sun <stephens@nvidia.com>
How to verify it
Run regression test.