Signed-off-by: bingwang <wang.bing@microsoft.com>
Why I did it
This PR brings two changes:
1. Add a lossy PG profile for PG 2 and PG 6 on T1 for ports between T1 and T2.
After PR #10176 (Update qos config to clear queues for bounced back traffic), the DSCP_TO_TC_MAP and TC_TO_PG_MAP are updated when remapping is enabled.
DSCP_TO_TC_MAP

| Before | After | Why do this change |
|--------|-------|--------------------|
| "2" : "1" | "2" : "2" | Only change for leaf router to map DSCP 2 to TC 2, as TC 2 will be used as a lossless TC |
| "6" : "1" | "6" : "6" | Only change for leaf router to map DSCP 6 to TC 6, as TC 6 will be used as a lossless TC |

TC_TO_PRIORITY_GROUP_MAP

| Before | After | Why do this change |
|--------|-------|--------------------|
| "2" : "0" | "2" : "2" | Only change for leaf router to map TC 2 to PG 2, as PG 2 will be used as a lossless PG |
| "6" : "0" | "6" : "6" | Only change for leaf router to map TC 6 to PG 6, as PG 6 will be used as a lossless PG |
So, we have two new lossy PGs (2 and 6) for the T2-facing ports on T1, and two new lossless PGs (2 and 6) for the T0-facing ports on T1.
However, there is no lossy PG profile for the T2-facing ports on T1. The lossless PGs for ports between T1 and T0 are already handled by buffermgrd. Therefore, we need to add lossy PG profiles for the T2-facing ports on T1.
We don't have this issue on T0 because PG 2 and PG 6 are lossless PGs there, and no lossy traffic is mapped to PG 2 and PG 6.
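For illustration, a minimal sketch of the resulting lossy PG entries for a T2-facing T1 port in CONFIG_DB (the port name and profile name are placeholders; the actual entries are generated by the buffer template):
"BUFFER_PG": {
    "Ethernet0|2": {
        "profile": "ingress_lossy_profile"
    },
    "Ethernet0|6": {
        "profile": "ingress_lossy_profile"
    }
},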
2. Map port-level TC 7 to PG 0.
Before the PCBB change, DSCP 48 -> TC 6 -> PG 0.
After the PCBB change, DSCP 48 -> TC 7 -> PG 7.
We can instead map TC 7 to PG 0 to save a lossy PG.
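As a sketch, this means the port-level TC_TO_PRIORITY_GROUP_MAP gains an explicit TC 7 -> PG 0 entry (the map name here is illustrative; other entries are omitted):
"TC_TO_PRIORITY_GROUP_MAP": {
    "AZURE": {
        "7": "0"
    }
},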
How I did it
Update the QoS and buffer templates.
How to verify it
Verified by UT.
Why I did it
As part of the PCBB changes, we need to enable 2 extra lossless queues. The changes in this PR adjust only the reserved sizes on Th2 for the additional 2 lossless queues.
Calculations are based on 40 downlinks for T1 and 16 uplinks for dual ToR.
How to verify it
Verified that the rendering works fine on a Th2 DUT.
Unit tests have been updated to reflect the modified buffer sizes when PCBB is enabled; existing test cases cover the original buffer sizes when PCBB is disabled. With these changes, the sonic-config-engine wheel builds successfully.
Signed-off-by: Neetha John <nejo@microsoft.com>
Signed-off-by: bingwang <bingwang@microsoft.com>
Why I did it
This PR adds two extra lossless queues for bounced-back traffic.
HLD sonic-net/SONiC#950
SKUs include
Arista-7050CX3-32S-C32
Arista-7050CX3-32S-D48C8
Arista-7260CX3-D108C8
Arista-7260CX3-C64
Arista-7260CX3-Q64
How I did it
Update the buffers.json.j2 template and buffers_config.j2 template to generate the new BUFFER_QUEUE table.
For T1 devices, queue 2 and queue 6 are set as lossless queues on T0-facing ports.
For T0 devices, queue 2 and queue 6 are set as lossless queues on T1-facing ports.
Queue 7 is added as a new lossy queue because DSCP 48 is mapped to TC 7, which is then mapped to queue 7.
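As an illustrative sketch, the generated BUFFER_QUEUE entries for a single T0-facing port on a T1 device would look roughly like this (port and profile names are assumptions; actual values come from the templates):
"BUFFER_QUEUE": {
    "Ethernet0|2": {
        "profile": "egress_lossless_profile"
    },
    "Ethernet0|6": {
        "profile": "egress_lossless_profile"
    },
    "Ethernet0|7": {
        "profile": "egress_lossy_profile"
    }
},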
How to verify it
Verified by UT.
Verified by copying the new templates and generating the buffer config with sonic-cfggen.
* [Tunnel PFC] Add property for tunnel PFC
Replace the config.bcm file with a j2 template file:
- Add the 'sai_remap_prio_on_tnl_egress=1' property when the device metadata localhost subtype is 'dualtor'
- Change sai.profile for the new config.bcm.j2
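The 'dualtor' condition is read from the device metadata in CONFIG_DB; a minimal sketch of the relevant entry (other fields omitted, value casing shown as an assumption):
"DEVICE_METADATA": {
    "localhost": {
        "subtype": "DualToR"
    }
},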
* [device config] Adding configuration for default route fallback
* Set the sai_tunnel_underlay_route_mode attribute to fall back to the default route if a more specific route is unavailable.
Depends on Azure/sonic-utilities#1626
Depends on Azure/sonic-swss#1754
QoS tables in config DB used the ABNF format, i.e. "[TABLE_NAME|name]", for field values that refer to other QoS tables.
Example:
Config DB:
"Ethernet92|3": {
"scheduler": "[SCHEDULER|scheduler.1]",
"wred_profile": "[WRED_PROFILE|AZURE_LOSSLESS]"
},
"Ethernet0|0": {
"profile": "[BUFFER_PROFILE|ingress_lossy_profile]"
},
"Ethernet0": {
"dscp_to_tc_map": "[DSCP_TO_TC_MAP|AZURE]",
"pfc_enable": "3,4",
"pfc_to_queue_map": "[MAP_PFC_PRIORITY_TO_QUEUE|AZURE]",
"tc_to_pg_map": "[TC_TO_PRIORITY_GROUP_MAP|AZURE]",
"tc_to_queue_map": "[TC_TO_QUEUE_MAP|AZURE]"
},
This format is not consistent with the other DB schemas used in SONiC.
Also, this reference format in the DB is not required; it is handled by the YANG "leafref".
Removed this format from all platform files to be consistent with the rest of the SONiC DB schema.
Example:
"Ethernet92|3": {
"scheduler": "scheduler.1",
"wred_profile": "AZURE_LOSSLESS"
},
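The port-level references are reduced to plain object names in the same way, e.g.:
"Ethernet0": {
    "dscp_to_tc_map": "AZURE",
    "pfc_enable": "3,4",
    "pfc_to_queue_map": "AZURE",
    "tc_to_pg_map": "AZURE",
    "tc_to_queue_map": "AZURE"
},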
Dependent pull requests:
#7752 - To modify platform files
#7281 - YANG model
Azure/sonic-utilities#1626 - DB migration
Azure/sonic-swss#1754 - swss change to remove ABNF format
What I did:
Updated the 7260 MMU profile based on the latest MSFT Tier 1 Tomahawk2_MMU_Setting_48x100G_40m_16x100G_300m_v1.0 and TH2_PGHdrm_MSFT.
How I verify:
Made sure the image comes up, traffic is flowing, and the MMU dump looks fine.
The SAI QoS test will need to be updated to support this SKU.
* 7260cx3 DualToR config.bcm support based on DualToR setting in device metadata at boot time.
For HWSKU Arista-7260CX3-C64, the T0/T1 MMU SOC settings are also combined into the config.bcm.j2 logic, so a single config file is used and the delta is applied based on the switch role.
However, in SAI 3.7 the default behaviour changed to 128 groups and 128 members each.
This change makes sure we use the same ECMP group / members-per-group values for 3.7 as well, so that the behaviour is consistent.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
as needed by SAI 3.7 and above. Without this change, warmboot from 3.5 to 3.7 fails because the Broadcom data structure gets corrupted after the warm boot.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* [Device] Rename SKU Arista-7260CX3-64 to Arista-7260CX3-C64
Renaming to add the speed indication: C64 means 64 100G ports.
renamed: Arista-7260CX3-64/port_config.ini -> Arista-7260CX3-C64/port_config.ini
renamed: Arista-7260CX3-64/sai.profile -> Arista-7260CX3-C64/sai.profile
* [Arista-7260cx3] Fix 64x100G port_config.ini
Reorder the port lanes to match the front panel port numbering.
* [Arista-7260CX3] add hwSKU Arista-7260CX3-D108C8
This hwSKU has 108x50G ports and 8x100G ports (2 x 100G ports are unused)