Why I did it
This PR cherry-picks #11448 to the 202012 branch after resolving conflicts.
There are conflicts in:
- files/build_templates/qos_config.j2
- src/sonic-config-engine/tests/test_j2files.py
- Why I did it
Configure a different DSCP_TO_TC_MAP between uplink and downlink on T1 switches in the dual-ToR scenario:
On the T1 uplink, DSCP 2 and 6 are both mapped to TC 1 to prevent such traffic from occupying lossless buffers.
On the T1 downlink, they remain mapped to TC 2 and 6 respectively (unchanged).
- How I did it
Vendors who want to configure different DSCP_TO_TC_MAPs between uplinks and downlinks on T1 should (sketched below):
Define the generate_dscp_to_tc_map macro in the SKU's qos.json.j2 file
Define the map AZURE for downlink and AZURE_UPLINK for uplink
Set the jinja2 variable different_dscp_to_tc_map to True
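A minimal sketch of such a SKU-level qos.json.j2, assuming illustrative map contents (only the DSCP 2/6 rows follow the behavior described above):

```jinja2
{#- Hypothetical SKU-level qos.json.j2; only the DSCP 2/6 rows follow the
    description above, and all other rows are omitted for brevity. -#}
{% set different_dscp_to_tc_map = true %}
{% macro generate_dscp_to_tc_map() %}
    "DSCP_TO_TC_MAP": {
        "AZURE": {
            "2": "2",
            "6": "6"
        },
        "AZURE_UPLINK": {
            "2": "1",
            "6": "1"
        }
    },
{% endmacro %}
{% include 'qos_config.j2' %}
```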
Signed-off-by: Stephen Sun <stephens@nvidia.com>
Why I did it
Support Mellanox-SN4600C-C64 as a T1 switch in the dual-ToR scenario
1. Support an additional queue and PG in the buffer templates, covering both the traditional and dynamic models
2. Support mapping DSCP 2/6 to lossless traffic in the QoS template
3. Add macros to generate the additional lossless PG in the dynamic model
4. Adjust the order in which the generic/dedicated (with additional lossless queues) macros are checked and called to generate buffer tables in the common template buffers_config.j2
- Buffer tables are rendered using macros.
- Both generic and dedicated macros are defined on our platform. Previously, the generic macro was called whenever it was defined, so it was always the one invoked on our platform. To avoid this, the dedicated macro is now checked and called first, with the generic one as the fallback (see the sketch after this list).
5. Support MAP_PFC_PRIORITY_TO_PRIORITY_GROUP on ports with additional lossless queues.
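A sketch of the reordered dispatch in buffers_config.j2; the macro names here are illustrative stand-ins, not the exact names in the template:

```jinja2
{#- Probe the dedicated macro (defined only by platforms with additional
    lossless queues) before falling back to the generic one; the names
    are illustrative. -#}
{% if generate_buffer_tables_dedicated is defined %}
{{ generate_buffer_tables_dedicated() }}
{% elif generate_buffer_tables_generic is defined %}
{{ generate_buffer_tables_generic() }}
{% endif %}
```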
On Mellanox-SN4600C-C64, the buffer configuration for T1 is calculated as:
40 * 100G downlink ports, each with 4 lossless PGs/queues, 1 lossy PG, and 3 lossy queues
16 * 100G uplink ports, each with 2 lossless PGs/queues, 1 lossy PG, and 5 lossy queues
Signed-off-by: Stephen Sun <stephens@nvidia.com>
How to verify it
Run regression tests.
Map priority 0 to TC 1 and priority 1 to TC 0.
Send traffic on priorities 0 and 1 and verify that it gets mapped correctly in hardware.
Signed-off-by: Neetha John <nejo@microsoft.com>
Treat devices that are ToRRouters (ToRRouters and BackEndToRRouters) the same when rendering templates, except for BackEndToRRouters belonging to a storage cluster, since those devices have extra sub-interfaces created.
Treat devices that are LeafRouters (LeafRouters and BackEndLeafRouters) the same when rendering templates.
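A hedged sketch of the device-type check this implies; the exact condition in the templates may differ:

```jinja2
{#- Treat BackEndToRRouter the same as ToRRouter when rendering;
    storage-cluster BackEndToRRouters additionally get config for their
    sub-interfaces. The exact template condition may differ. -#}
{% set device_type = DEVICE_METADATA['localhost']['type'] %}
{% if device_type in ['ToRRouter', 'BackEndToRRouter'] %}
    {#- ToR-specific rendering goes here -#}
{% endif %}
```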
Signed-off-by: Lawrence Lee <lawlee@microsoft.com>
Dynamic threshold setting changed to 0 and WRED profile green min threshold set to 250000 for Tomahawk devices
Changed the dynamic threshold settings in pg_profile_lookup.ini
Added a macro for WRED profiles in qos.json.j2 for Tomahawk devices
Made the necessary changes in qos_config.j2 to use the macro when present (sketched below)
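A sketch of the macro hook, assuming placeholder profile fields; only the green min threshold of 250000 comes from the description above:

```jinja2
{#- In the Tomahawk SKU's qos.json.j2: define the override macro. Only
    green_min_threshold reflects the change above; the other fields are
    placeholders. -#}
{% macro generate_wred_profiles() %}
    "WRED_PROFILE": {
        "AZURE_LOSSLESS": {
            "wred_green_enable": "true",
            "green_min_threshold": "250000",
            "green_max_threshold": "2097152",
            "green_drop_probability": "5"
        }
    },
{% endmacro %}
{% include 'qos_config.j2' %}

{#- In qos_config.j2: call the macro only when the SKU defines it. -#}
{% if generate_wred_profiles is defined %}
{{ generate_wred_profiles() }}
{% endif %}
```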
Signed-off-by: Neetha John <nejo@microsoft.com>
* Use dot1p to tc mapping for backend switches
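For illustration, a backend switch would carry a DOT1P_TO_TC_MAP such as the following; the identity mapping is an assumption, not the exact table from this change:

```jinja2
    "DOT1P_TO_TC_MAP": {
        "AZURE": {
            "0": "0",
            "1": "1",
            "2": "2",
            "3": "3",
            "4": "4",
            "5": "5",
            "6": "6",
            "7": "7"
        }
    },
```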
Signed-off-by: Wenda Ni <wenni@microsoft.com>
* Do not write DSCP to TC mapping into CONFIG_DB or config_db.json for
storage switches
Signed-off-by: Wenda Ni <wenni@microsoft.com>
Lossy traffic does not need to be mapped to different ingress PGs; it can all share the same ingress PG (illustrated below).
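An illustrative TC_TO_PRIORITY_GROUP_MAP after this change, assuming TCs 3/4 are the lossless classes (the exact rows are an assumption): lossless TCs keep dedicated PGs while every lossy TC shares PG 0:

```jinja2
    "TC_TO_PRIORITY_GROUP_MAP": {
        "AZURE": {
            "0": "0",
            "1": "0",
            "2": "0",
            "3": "3",
            "4": "4",
            "5": "0",
            "6": "0",
            "7": "0"
        }
    },
```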
Signed-off-by: Wenda Ni <wenni@microsoft.com>
* QoS config change: 1) DSCP mapping; 2) link pg/queue 6 to lossy buffer;
3) redistribute scheduler
Signed-off-by: Wenda <wenni@microsoft.com>
* Add scheduling weight to queue 2
Signed-off-by: Wenda <wenni@microsoft.com>
* Link pg/queue 2 to lossy buffer
Signed-off-by: Wenda <wenni@microsoft.com>
* Update the pg headroom for a7060-D48C8 50G
Signed-off-by: Wenda <wenni@microsoft.com>
* Update config gen test for qos
Signed-off-by: Wenda <wenni@microsoft.com>
* Update pg headroom size, and update egress lossy pool size accordingly
Signed-off-by: Wenda <wenni@microsoft.com>
* Update headroom pool size; Update ingress service pool and egress lossy
pool sizes accordingly;
Signed-off-by: Wenda <wenni@microsoft.com>
* a7260: update headroom pool size; Update ingress service pool and egress lossy pool sizes accordingly;
Signed-off-by: Wenda <wenni@microsoft.com>
* Update config gen test for buffer
Signed-off-by: Wenda <wenni@microsoft.com>
* 1) Map DSCP 46 to TC 5; 2) ECN config for lossless traffic; 3) ECN on by default; 4) equal DWRR weights
Signed-off-by: Wenda <wenni@microsoft.com>
* 1) Link PG & queue 5 to the lossy buffer profile; 2) set ingress lossless alpha to 1/8
Signed-off-by: Wenda <wenni@microsoft.com>
* Update the test case for qos & buffer json template
Signed-off-by: Wenda <wenni@microsoft.com>
* Migrate a7050-qx32 and s6000 to the pg_profile_lookup architecture
Signed-off-by: Wenda <wenni@microsoft.com>
* Update PG headroom and egress service pool for a7050-qx-32s, a7050-qx32, and s6000
Signed-off-by: Wenda <wenni@microsoft.com>
* Link queue 5 to lossy profile
Signed-off-by: Wenda <wenni@microsoft.com>
* Unify qos config with qos_config.j2 template
Signed-off-by: Wenda <wenni@microsoft.com>
* Change 7050 to use qos config template
Signed-off-by: Wenda <wenni@microsoft.com>
modified: device/arista/x86_64-arista_7050_qx32/Arista-7050-QX32/qos.json.j2
modified: device/arista/x86_64-arista_7050_qx32s/Arista-7050-QX-32S/qos.json.j2
* Change a7060, a7260, s6000, s6100, z9100 to use qos config template
Signed-off-by: Wenda <wenni@microsoft.com>
* Change mlnx devices to use qos config template
Signed-off-by: Wenda <wenni@microsoft.com>
modified: ../../../mellanox/x86_64-mlnx_msn2100-r0/ACS-MSN2100/qos.json.j2
modified: ../../../mellanox/x86_64-mlnx_msn2410-r0/ACS-MSN2410/qos.json.j2
modified: ../../../mellanox/x86_64-mlnx_msn2700-r0/ACS-MSN2700/qos.json.j2
modified: ../../../mellanox/x86_64-mlnx_msn2700-r0/Mellanox-SN2700-D48C8/qos.json.j2
* Change barefoot devices to use qos config template
Signed-off-by: Wenda <wenni@microsoft.com>
modified: barefoot/x86_64-accton_wedge100bf_32x-r0/montara/qos.json.j2
modified: barefoot/x86_64-accton_wedge100bf_65x-r0/mavericks/qos.json.j2
* Change accton as7212 to use qos config template
Signed-off-by: Wenda <wenni@microsoft.com>
modified: accton/x86_64-accton_as7212_54x-r0/AS7212-54x/qos.json.j2
* Apply PORT_QOS_MAP to active ports only
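A sketch of scoping PORT_QOS_MAP to active ports; PORT_ACTIVE stands in for the template's list of ports with configured neighbors, and the per-port fields are illustrative:

```jinja2
    "PORT_QOS_MAP": {
{% for port in PORT_ACTIVE %}
        "{{ port }}": {
            "dscp_to_tc_map": "AZURE",
            "tc_to_queue_map": "AZURE",
            "pfc_enable": "3,4"
        }{{ "," if not loop.last }}
{% endfor %}
    },
```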
Signed-off-by: Wenda <wenni@microsoft.com>
* Update qos config test with qos_config.j2 template
Signed-off-by: Wenda <wenni@microsoft.com>
* Update sample output of qos-dell6100.json
Signed-off-by: Wenda <wenni@microsoft.com>
* Stop generating the default port name and index list, i.e., remove the generate_port_lists macro, since PORT is always defined (sketched below)
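A sketch of building the port list directly from PORT, under the assumption that the surrounding template consumes a comma-separated port_names variable (the append-into-list idiom is common in these templates):

```jinja2
{% set port_names_list = [] %}
{% for port in PORT %}
{%- if port_names_list.append(port) %}{% endif %}
{% endfor %}
{% set port_names = port_names_list | join(',') %}
```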
Signed-off-by: Wenda <wenni@microsoft.com>
* Include pfc_to_pg_map according to the platform ASIC type obtained from /etc/sonic/sonic_version.yml rather than specifying it per HwSKU (sketched below)
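A sketch of the ASIC-type gate; asic_type is read from /etc/sonic/sonic_version.yml and supplied to the template by sonic-cfggen, and the ASIC list here is illustrative:

```jinja2
{% if asic_type in ['mellanox', 'barefoot'] %}
    {#- emit PFC_PRIORITY_TO_PRIORITY_GROUP_MAP here -#}
{% endif %}
```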
Signed-off-by: Wenda Ni <wenni@microsoft.com>
* Customize TC_TO_PRIORITY_GROUP_MAP and
PFC_PRIORITY_TO_PRIORITY_GROUP_MAP for barefoot
Signed-off-by: Wenda <wenni@microsoft.com>
* Unify PFC_PRIORITY_TO_PRIORITY_GROUP_MAP: remove "0":"0" and "1":"1", as these two PGs do not generate PFC frames (illustrated below).
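The unified map after dropping those rows would look like the following; treating PGs 3/4 as the PFC-generating (lossless) ones follows common SONiC defaults and is an assumption of this sketch:

```jinja2
    "PFC_PRIORITY_TO_PRIORITY_GROUP_MAP": {
        "AZURE": {
            "3": "3",
            "4": "4"
        }
    },
```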
Signed-off-by: Wenda <wenni@microsoft.com>