8f4a1b7b85
- Why I did it

  Support Mellanox-SN4600C-C64 as a T1 switch in the dual-ToR scenario. This ports #11032 and #11299 from 202012 to master.

  - Support an additional queue and PG in the buffer templates, in both the traditional and the dynamic buffer model.
  - Support mapping DSCP 2/6 to lossless traffic in the QoS template.
  - Add macros to generate the additional lossless PGs in the dynamic model.
  - Adjust the order in which the generic and the dedicated (with additional lossless queues) macros are checked and called to generate buffer tables in the common template buffers_config.j2. Buffer tables are rendered via macros, and both the generic and the dedicated macros are defined on our platform. Currently the generic macro is called whenever it is defined, so it is always the one used on our platform. To avoid this, the dedicated macro is now checked and called first, and the generic one only as a fallback; see the sketch below.
  - Support MAP_PFC_PRIORITY_TO_PRIORITY_GROUP on ports with additional lossless queues.

  On Mellanox-SN4600C-C64, the T1 buffer configuration is calculated for:

  - 40 * 100G downlink ports with 4 lossless PGs/queues, 1 lossy PG, and 3 lossy queues
  - 16 * 100G uplink ports with 2 lossless PGs/queues, 1 lossy PG, and 5 lossy queues

  Signed-off-by: Stephen Sun <stephens@nvidia.com>
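The macro-ordering change can be pictured with a minimal Jinja2 sketch. This is illustrative only: the macro names `generate_buffer_pool_and_profiles_with_extra_lossless_queues` and `generate_buffer_pool_and_profiles`, as well as the `defs` import alias, are assumptions standing in for the actual identifiers in buffers_config.j2, and the macro arguments are omitted.

```jinja
{#- Illustrative sketch: probe for the platform's dedicated macro (aware of the
    additional lossless PGs/queues) before falling back to the generic one.
    Previously the generic macro was checked first, so it always won on
    platforms that define both. -#}
{%- if defs.generate_buffer_pool_and_profiles_with_extra_lossless_queues is defined %}
{{ defs.generate_buffer_pool_and_profiles_with_extra_lossless_queues() }}
{%- elif defs.generate_buffer_pool_and_profiles is defined %}
{{ defs.generate_buffer_pool_and_profiles() }}
{%- endif %}
```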
| Platform |
|---|
| .. |
| x86_64-mlnx_lssn2700-r0 |
| x86_64-mlnx_msn2010-r0 |
| x86_64-mlnx_msn2100-r0 |
| x86_64-mlnx_msn2410-r0 |
| x86_64-mlnx_msn2700_simx-r0 |
| x86_64-mlnx_msn2700-r0 |
| x86_64-mlnx_msn2740-r0 |
| x86_64-mlnx_msn3420-r0 |
| x86_64-mlnx_msn3700_simx-r0 |
| x86_64-mlnx_msn3700-r0 |
| x86_64-mlnx_msn3700c-r0 |
| x86_64-mlnx_msn3800-r0 |
| x86_64-mlnx_msn4410-r0 |
| x86_64-mlnx_msn4600-r0 |
| x86_64-mlnx_msn4600c-r0 |
| x86_64-mlnx_msn4700_simx-r0 |
| x86_64-mlnx_msn4700-r0 |
| x86_64-mlnx_x86-r5.0.1400 |
| x86_64-nvidia_sn2201-r0 |
| x86_64-nvidia_sn4800_simx-r0 |
| x86_64-nvidia_sn4800-r0 |
| x86_64-nvidia_sn5600_simx-r0 |