* Re-add 127.0.0.1/8 when bringing down the interfaces
With #5353, 127.0.0.1/16 was added to the lo interface, and then
127.0.0.1/8 was removed. However, when bringing down the lo interface,
like during a config reload, 127.0.0.1/16 gets removed, but 127.0.0.1/8
isn't added back to the interface. This means that there's a period of
time where 127.0.0.1 is not available at all, and services that need to
connect to 127.0.0.1 (such as the redis DB) will fail.
To fix this, when going down, add 127.0.0.1/8. Add this address before
the existing configuration gets removed, so that 127.0.0.1 is available
at all times.
Note that running `ifdown lo` doesn't actually bring down the loopback
interface; the interface always stays "physically" up.
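A minimal sketch of the ordering idea in Python, driving the ip CLI via subprocess (the actual change lives in the interfaces configuration; this command sequence is illustrative):

    import subprocess

    def reload_lo_config():
        # Re-add the default /8 first so 127.0.0.1 never disappears,
        # even while the narrower /16 from #5353 is being torn down.
        subprocess.run(["ip", "addr", "add", "127.0.0.1/8", "dev", "lo"],
                       check=False)  # ignore "address already assigned"
        # Only now is it safe to remove the existing configuration.
        subprocess.run(["ip", "addr", "del", "127.0.0.1/16", "dev", "lo"],
                       check=False)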
Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
Why I did it
Update cable length for uplink/downlink ports for chassis and update PG/pool headroom size accordingly.
Work item tracking
17880812
How I did it
Updated cable length as well as buffer config in HWSKU files.
Why I did it
To improve readability of config.bcm, fixed the alignment of soc properties
How to verify it
Build sonic_config_engine-1.0-py3-none-any.whl successfully
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
Update soc properties for certain roles that need to use pfcwd dlr init based recovery mechanism
How to verify it
Updated the templates on a 7050cx3 dual tor and 7260 T1 which satisfies these conditions and validated pfcwd recovery which uses DLR_INIT based mechanism. Also validated that this mechanism is not used on 7050cx3 single tor with the updated templates
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
We need to store information of the power shelf in config_db for SONiC MX switch. The current minigraph parser cannot parse the rack_mgmt_map field.
Work item tracking
Microsoft ADO (number only): 22179645
How I did it
Add support for parsing rack_mgmt_map.
- Why I did it
There are chassis-packet and single-asic platforms which support this 400G to 100G/40G speed change via config.
Enabling this feature for all platforms which can support this. Keeping it enabled for all does not affect the platforms
which do not support this feature yet.
Work item tracking
Microsoft ADO (number only):
17952356
- How I did it
Removed switch_type and role type check.
- How to verify it
Loaded router with default 400G config. Loaded minigraph to convert 400G to 100G speed.
Signed-off-by: anamehra <anamehra@cisco.com>
* Support ACL interface type BmcData in minigraph parser
* add unittest
* Add a global dict for storing the definition of custom acl tables
* yang mode support for neighbor metadata
* add description in leaf node
* modify description
Co-authored-by: jcaiMR <111116206+jcaiMR@users.noreply.github.com>
* [minigraph] add support for changing T1 ports speed from 400G to 100G and vice-versa (#14505)
On the SONiC T1 Cisco 8101 HwSku, speed changes from 400G to 100G need to be supported on 400G ports.
To enable this, the port lanes need to be changed along with the speed. This PR has the changes to update the port lanes when such a speed change happens.
Basically, if Bandwidth in minigraph.xml intends to enable 100G speed on a 400G port, then the appropriate lane change and speed change need to be invoked in the minigraph parser.
For example, if port_config.ini dictates the speed to be 400G and the minigraph has 100G speed, then this change needs to be accommodated:
Ethernet96 1536,1537,1538,1539,1540,1541,1542,1543 etp12 12 400000 0
<DeviceLinkBase>
<ElementType>DeviceInterfaceLink</ElementType>
<EndDevice>ARISTA01T2</EndDevice>
<EndPort>Ethernet1</EndPort>
<StartDevice>Device-8101-01</StartDevice>
<StartPort>etp12</StartPort>
<Bandwidth>100000</Bandwidth>
</DeviceLinkBase>
These platforms today have 400G ports with 8 serdes lanes, and 100G operates with 4 serdes lanes. When the port speed changes from 400G to 100G, the first 4 lanes are used for the 100G port.
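A rough Python sketch of the lane adjustment described above (function and variable names are illustrative, not the parser's actual code):

    def adjust_lanes_for_speed(lanes, target_speed):
        # lanes: comma-separated serdes lanes from port_config.ini,
        # e.g. "1536,1537,1538,1539,1540,1541,1542,1543" for a 400G port.
        lane_list = lanes.split(",")
        if target_speed == 100000 and len(lane_list) == 8:
            # 100G uses only the first 4 of the 8 serdes lanes.
            lane_list = lane_list[:4]
        return ",".join(lane_list)

For the Ethernet96 example above, adjust_lanes_for_speed("1536,1537,1538,1539,1540,1541,1542,1543", 100000) yields "1536,1537,1538,1539".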
Signed-off-by: vaibhav-dahiya <vdahiya@microsoft.com>
* add all
Signed-off-by: vaibhav-dahiya <vdahiya@microsoft.com>
* fix unit
Signed-off-by: vaibhav-dahiya <vdahiya@microsoft.com>
---------
Signed-off-by: vaibhav-dahiya <vdahiya@microsoft.com>
Why I did it
SONiC currently does not identify the 'EdgeZoneAggregator' neighbor type. As a result, the buffer profile attached to those interfaces uses the default cable length, which could cause ingress packet drops due to insufficient headroom. Hence, there is a need to update the buffer templates to identify such neighbors and assign the same cable length as used by the T1.
How I did it
Modified the buffer template to identify EdgeZoneAggregator as a neighbor device type and assign it the same cable length as a T1/leaf router.
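A sketch of the selection logic the template encodes (device-type strings follow the minigraph; the cable-length values here are placeholders, not the real numbers):

    def cable_length_for_neighbor(neighbor_device_type):
        # EdgeZoneAggregator now gets the same treatment as a T1/leaf.
        if neighbor_device_type in ("LeafRouter", "EdgeZoneAggregator"):
            return "40m"   # placeholder for the T1 cable length
        return "5m"        # placeholder default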
How to verify it
Unit tests pass, and manually checked on a 7260 to see the changes take effect.
Backport #14372 to 202205
Why I did it
For better accounting purposes, updating the ingress lossy traffic profile to use static threshold. This change is only intended for Th devices using RDMA-CENTRIC profiles
How I did it
Update the buffer templates for Th devices in RDMA-CENTRIC folder to use the correct threshold
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
After the renaming of the asic_port_name in the port_config.ini file (PR #13053), the asic_ifname in port_config.ini changed from '<port>-ASIC<asic_id>' to just the port name, e.g. 'Eth0-ASIC0' to 'Eth0'.
However, with this change a config_db generated via config load_minigraph would cause the EVERFLOW and EVERFLOWV6 tables under ACL_TABLE to not have any non-LAG front panel interfaces. This was causing the EVERFLOW suite to fail.
How I did it
In parse_asic_external_neighbors in minigraph.py there was a check that asic_name.lower() (like asic0) is present in the port_alias_asic_map. However, with '-ASIC' removed from the asic_ifname, the port_alias_asic_map would not have the asic_name, and thus any non-LAG neighbor would not be included.
The fix is to drop the asic name check, as the port_alias_asic_map already only contains ports from the same asic as asic_name.
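The shape of the fix, sketched in Python (names patterned after minigraph.py; the snippet itself is illustrative):

    def neighbor_port_included(port_alias, port_alias_asic_map):
        # port_alias_asic_map is built per asic, so it already contains only
        # ports belonging to this asic; no need to also match the asic name
        # inside the alias (that check broke once '-ASIC<id>' was dropped).
        return port_alias in port_alias_asic_map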
How to verify it
Execute "config load_minigraph" with the mingraph which is generated by sonic-mgmt gen-minigraph script. And confirm ono-lag interface are present in the Everfloe table in the config_dbs.
Signed-off-by: mlok <marty.lok@nokia.com>
Fixes #11873.
When loading from minigraph, for port channels, don't create the members@ array in config_db in the PORTCHANNEL table. This is no longer needed or used.
In addition, when adding a port channel member from the CLI, that member doesn't get added into the members@ array, resulting in a bit of inconsistency. This gets rid of that inconsistency.
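Illustrative before/after of the relevant config_db content as Python dicts (keys abbreviated; the exact field set is not shown):

    # Before: membership duplicated inside the PORTCHANNEL entry itself.
    before = {
        "PORTCHANNEL": {
            "PortChannel0001": {
                "admin_status": "up",
                "members@": "Ethernet0,Ethernet4",  # no longer written
            }
        }
    }
    # After: membership lives only in PORTCHANNEL_MEMBER keys.
    after = {
        "PORTCHANNEL": {"PortChannel0001": {"admin_status": "up"}},
        "PORTCHANNEL_MEMBER": {
            "PortChannel0001|Ethernet0": {},
            "PortChannel0001|Ethernet4": {},
        },
    }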
- Why I did it
Support DSCP remapping in dual ToR topo on T0 switch for SKU Mellanox-SN4600c-C64, Mellanox-SN4600c-D48C40, Mellanox-SN2700, Mellanox-SN2700-D48C8.
- How I did it
Regarding buffer settings, originally, there are two lossless PGs and queues 3, 4. In dual ToR scenario, the lossless traffic from the leaf switch to the uplink of the ToR switch can be bounced back.
To avoid PFC deadlock, we need to map the bounce-back lossless traffic to different PGs and queues. Therefore, 2 additional lossless PGs and queues are allocated on uplink ports on ToR switches.
On uplink ports, map DSCP 2/6 to TC 2/6 respectively
On downlink ports, both DSCP 2/6 are still mapped to TC 1
Buffers are adjusted according to the port information:
Mellanox-SN4600c-C64:
56 downlinks 50G + 8 uplinks 100G
Mellanox-SN4600c-D48C40, Mellanox-SN2700, Mellanox-SN2700-D48C8:
24 downlinks 50G + 8 uplinks 100G
- How to verify it
Unit test.
Signed-off-by: Stephen Sun <stephens@nvidia.com>
Co-authored-by: Stephen Sun <5379172+stephenxs@users.noreply.github.com>
Why I did it
This PR is to update minigraph.py to support both port alias and port name as input of AttachTo attribute of ACL table.
Before this change, only port alias is supported.
How I did it
Add a global variable to store port names
Search both port names and port aliases when parsing the value of AttachTo.
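A minimal sketch of the lookup, with names patterned after minigraph.py (the real code differs in detail):

    port_names = set()       # filled in while parsing ports (sketch)
    port_alias_map = {}      # alias -> port name, as used elsewhere

    def resolve_attach_to(attach_to):
        # Accept a port name directly, else fall back to alias lookup.
        if attach_to in port_names:
            return attach_to
        return port_alias_map.get(attach_to)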
How to verify it
Verified by a new unit test case test_minigraph_acl_attach_to_ports
Verified by copying the new minigraph.py to a testbed and running config load_minigraph.
Why I did it
In a VOQ chassis, the buffer_queue configuration needs to be applied to the system_port instead of the SONiC port.
This PR has the change to do this.
How I did it
Modify buffer_config.j2 to generate the buffer_queue configuration on system_ports if the device is a VOQ chassis.
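A sketch of the key difference, in Python rather than Jinja2 (the system-port key format here is an assumption for illustration):

    def buffer_queue_key(hostname, asic, port, queues, is_voq_chassis):
        if is_voq_chassis:
            # Key by system port on a VOQ chassis.
            return "BUFFER_QUEUE|{}|{}|{}|{}".format(hostname, asic, port, queues)
        # Otherwise key by the regular SONiC port.
        return "BUFFER_QUEUE|{}|{}".format(port, queues)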
How to verify it
Verify the buffer_queue configuration is generated properly using sonic-cfggen
Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>
Why I did it
To ensure that, after BGP startup, a dualtor T0 receives BGP updates before sending out its own BGP updates.
Please refer to sonic-net/SONiC#1161 for more details.
How I did it
add coalesce-time 10000 to the frr bgp startup config.
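A sketch of the resulting frr.conf fragment (the ASN is illustrative; coalesce-time is a standard FRR bgpd option):

    router bgp 65100
     coalesce-time 10000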
Signed-off-by: Longxiang Lyu <lolv@microsoft.com>
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
ECN parameters need to be updated for storage backend
How I did it
Included the check for storage backend devices to update qos configs
How to verify it
Verified that the new ecn settings are applied on storage backend device.
Verified that the old ecn settings are applied for storage frontend, non storage frontend/backend devices
Why I did it
The PR is to apply separated DSCP_TO_TC_MAP and TC_TO_QUEUE_MAP to uplink ports on dualtor.
The traffic with DSCP 2 and DSCP 6 from T1 is treated as lossless traffic.
DSCP TC Queue
2 2 2
6 6 6
Traffic with DSCP 2 or DSCP 6 from downlink is still treated as lossy traffic as before.
How I did it
Define DSCP_TO_TC_MAP|AZURE_UPLINK and TC_TO_QUEUE_MAP|AZURE_UPLINK.
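Illustrative content of the new uplink maps as Python dicts, matching the table above (only DSCP 2 and 6 differ from the default AZURE maps; the remaining entries keep their existing values and are omitted here):

    dscp_to_tc_map_uplink = {"2": "2", "6": "6"}   # remaining DSCPs as in AZURE
    tc_to_queue_map_uplink = {"2": "2", "6": "6"}  # remaining TCs as in AZURE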
How to verify it
Verified by UT
Verified by copying the new template to a testbed and rendering a config_db.json.
Cherry-pick #12306 to 202205 branch
Why I did it
Add yang model definition for VOQ_INBAND_INTERFACE defined and implemented for VOQ chassis. HLD for voq-inband-interface is included in https://github.com/sonic-net/SONiC/blob/master/doc/voq/voq_hld.md
How I did it
Added yang model definition, unit tests, sample config and documentation for the table
How to verify it
Validated config tree generation using "pyang -Vf tree -p /usr/local/share/yang/modules/ietf ./yang-models/sonic-voq-inband-interface.yang"
Built the below python-wheels to validate unit tests and other changes
target/python-wheels/bullseye/sonic_yang_mgmt-1.0-py3-none-any.whl
target/python-wheels/bullseye/sonic_yang_models-1.0-py3-none-any.whl
target/python-wheels/bullseye/sonic_config_engine-1.0-py3-none-any.whl
- Why I did it
Fixes #11431
- How I did it
dhcp6relay binds to IPv6 addresses configured on these VLAN interfaces, so check that they are ready before launching dhcp6relay.
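A rough Python sketch of the readiness check (interface names and polling details are illustrative):

    import subprocess
    import time

    def wait_for_vlan_ipv6(vlan_ifaces, timeout=60):
        # Block until every VLAN interface has a global IPv6 address,
        # so dhcp6relay has something to bind to.
        deadline = time.time() + timeout
        pending = set(vlan_ifaces)
        while pending and time.time() < deadline:
            for vlan in list(pending):
                out = subprocess.run(
                    ["ip", "-6", "addr", "show", "dev", vlan, "scope", "global"],
                    capture_output=True, text=True).stdout
                if "inet6" in out:
                    pending.discard(vlan)
            if pending:
                time.sleep(1)
        return not pending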
- How to verify it
Unit Tests
Tested on a live device
Signed-off-by: Vivek Reddy Karri <vkarri@nvidia.com>
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
slb and bgp mon peers are not needed for storage backend. These neighbors are present in the minigraph.
How I did it
After minigraph parsing, remove these neighbors if it is a storage backend device
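The post-parsing cleanup, sketched in Python (table names follow config_db conventions; the storage-backend test itself is simplified):

    def remove_unneeded_peers(config, is_storage_backend):
        # On storage backend devices, drop the BGP monitor peers and the
        # SLB passive peer range parsed from the minigraph.
        if is_storage_backend:
            config.pop("BGP_MONITORS", None)    # bgp mon peers
            config.pop("BGP_PEER_RANGE", None)  # slb peers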
How to verify it
Unit tests
Verified on the device that once these tables are removed, these peers don't show up in "show runningconfiguration bgp" output
Why I did it
This PR is to update TC_TO_QUEUE_MAP|AZURE for SKU Arista-7050CX3-32S-D48C8 and Arista-7260CX3 T0.
The change is only to align the TC_TO_QUEUE_MAP for regular traffic and bounced traffic. It has no impact on business because we have no traffic being mapped to TC2 or TC6.
How I did it
Update TC_TO_QUEUE_MAP|AZURE, and test cases as well.
How to verify it
Verified by running test case test_j2files.py
/sonic/src/sonic-config-engine$ python3 setup.py test -s tests/test_j2files.py
running test
......
----------------------------------------------------------------------
Ran 29 tests in 25.390s
OK
Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>
Why I did it
Generate the port configuration required for 400G ZR ports from the minigraph.
How I did it
Add parse logic to get tx_power and laser_freq from LinkMetadata section of the minigraph.
Add UT for packet-chassis and voq chassis
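A sketch of the parsing step (the LinkMetadata property names here are assumptions for illustration, not the exact minigraph keys):

    def parse_zr_port_config(link_metadata):
        # Map 400G ZR tuning from a parsed LinkMetadata entry onto
        # PORT table fields.
        port_cfg = {}
        if "TxPower" in link_metadata:
            port_cfg["tx_power"] = link_metadata["TxPower"]
        if "LaserFreq" in link_metadata:
            port_cfg["laser_freq"] = link_metadata["LaserFreq"]
        return port_cfg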
How to verify it
UT
Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>
Why I did it
This PR is to backport PR #11056 and PR #11045 into master branch.
This PR is to enable tunnel_qos_remap on T1 and T0 in DualToR deployment.
On T1, we check the property DownstreamRedundancyTypes. On T0, we check the property RedundancyType.
tunnel_qos_remap is set to enabled if gemini is in DownstreamRedundancyTypes (on T1) or RedundancyType (on T0).
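The decision, sketched in Python (argument names are illustrative):

    def tunnel_qos_remap_enabled(device_type, redundancy_type,
                                 downstream_redundancy_types):
        if device_type == "LeafRouter":   # T1: check the downstream ToRs
            return "gemini" in (downstream_redundancy_types or "").lower()
        if device_type == "ToRRouter":    # T0: check its own redundancy type
            return "gemini" in (redundancy_type or "").lower()
        return False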
How I did it
The change is implemented in minigraph.py.
How to verify it
Verified by test_minigraph_case.py and test_j2files.py.
What I did:
Added support for deployment_id parsing for Device Asic metadata.
Why I did it:
Deployment id is used in the BGP docker for FRR template generation. For multi-asic platforms running in a namespace without deployment id as a key in DEVICE_METADATA, FRR template generation fails. This change is needed after #10154, where if deployment_id is none we don't update the DEVICE_METADATA dictionary.
How I verified it:
Added unit tests.
- Why I did it
Support Mellanox-SN4600C-C64 as T1 switch in dual-ToR scenario
This is to port #11032 and #11299 from 202012 to master.
Support additional queue and PG in buffer templates, including both traditional and dynamic model
Support mapping DSCP 2/6 to lossless traffic in the QoS template.
Add macros to generate additional lossless PG in the dynamic model
Adjust the order in which the generic/dedicated (with additional lossless queues) macros are checked and called to generate buffer tables in common template buffers_config.j2
Buffer tables are rendered via using macros.
Both generic and dedicated macros are defined on our platform. Currently, the generic one is called whenever it is defined, which means the generic one is always called on our platform. To avoid this, the dedicated macro is checked and called first, and then the generic one.
Support MAP_PFC_PRIORITY_TO_PRIORITY_GROUP on ports with additional lossless queues.
On Mellanox-SN4600C-C64, the buffer configuration for T1 is calculated as:
40 * 100G downlink ports with 4 lossless PGs/queues, 1 lossy PG, and 3 lossy queues
16 * 100G uplink ports with 2 lossless PGs/queues, 1 lossy PG, and 5 lossy queues
Signed-off-by: Stephen Sun <stephens@nvidia.com>
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
Improve throughput and latency for 7260 deployments
How I did it
Update the dynamic threshold to 0 and ECN settings as 2mb/10mb/5%
How to verify it
Updated unit tests to use the modified values for 7260 ecn settings.
Why I did it
This PR is to add a flag to control whether to generate PORT_QOS_MAP|global entry or not.
This is because for some device types, such as BackEndToRRouter and BackEndLeafRouter, there is no DSCP_TO_TC_MAP defined.
Hence, if the PORT_QOS_MAP|global entry is generated, OA will report errors because the DSCP_TO_TC_MAP map AZURE cannot be found.
Jul 14 00:24:40.286767 str2-7050qx-32s-acs-03 ERR swss#orchagent: :- saiObjectTypeQuery: invalid object id oid:0x7fddb43605d0
Jul 14 00:24:40.286767 str2-7050qx-32s-acs-03 ERR swss#orchagent: :- meta_generic_validation_objlist: SAI_SWITCH_ATTR_QOS_DSCP_TO_TC_MAP:SAI_ATTR_VALUE_TYPE_OBJECT_ID object on list [0] oid 0x7fddb43605d0 is not valid, returned null object id
Jul 14 00:24:40.286767 str2-7050qx-32s-acs-03 ERR swss#orchagent: :- applyDscpToTcMapToSwitch: Failed to apply DSCP_TO_TC QoS map to switch rv:-5
Jul 14 00:24:40.286767 str2-7050qx-32s-acs-03 ERR swss#orchagent: :- doTask: Failed to process QOS task, drop it
This PR is to address the issue.
How I did it
Add a flag require_global_dscp_to_tc_map to control whether to generate the PORT_QOS_MAP|global entry. The default value for require_global_dscp_to_tc_map is true. If the device type is storage backend, the value is changed to false. Then the PORT_QOS_MAP|global entry is not generated.
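The gating logic the template change implements, sketched in Python rather than Jinja2:

    def should_render_global_port_qos_map(device_type):
        # Default true; storage backend devices have no DSCP_TO_TC_MAP,
        # so PORT_QOS_MAP|global is skipped for them.
        require_global_dscp_to_tc_map = True
        if device_type in ("BackEndToRRouter", "BackEndLeafRouter"):
            require_global_dscp_to_tc_map = False
        return require_global_dscp_to_tc_map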
How to verify it
Update the current test_qos_dscp_remapping_render_template to cover storage backend.
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
There is a need to select different mmu profiles based on deployment type
How I did it
There will be separate subfolders (RDMA-CENTRIC, TCP-CENTRIC, BALANCED) in each hwsku folder which contains deployment specific mmu and qos settings. SonicQosProfile attribute in the minigraph will be used to determine which settings to use. If that attribute is not present, the default settings that exist in the hwsku folder will be used
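A sketch of the folder selection (directory layout as described above; the profile-to-folder mapping is an assumption for illustration):

    import os

    def qos_settings_dir(hwsku_dir, sonic_qos_profile):
        # Prefer a deployment-specific subfolder (e.g. RDMA-CENTRIC) when
        # the minigraph carries SonicQosProfile; else use the hwsku defaults.
        if sonic_qos_profile:
            candidate = os.path.join(hwsku_dir, sonic_qos_profile.upper())
            if os.path.isdir(candidate):
                return candidate
        return hwsku_dir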
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
For storage backend, certain rules will be applied to the DATAACL table to allow only vlan tagged packets and drop untagged packets.
How I did it
Create DATAACL table if the device is a storage backend device
To avoid ACL resource issues, remove EVERFLOW related tables if the device is a storage backend device
How to verify it
Added the following unit tests
- verify that EVERFLOW acl tables is removed and DATAACL table is added for storage backend tor
- verify that no DATAACL tables are created and EVERFLOW tables exist for storage backend leaf
Why I did it
Storage backend has all vlan members tagged. If untagged packets are received on those links, they are accounted as RX_DROPS which can lead to false alarms in monitoring tools. Using this acl to hide these drops.
How I did it
Created an ACL template which will be loaded during minigraph load for backend devices. This template allows tagged VLAN packets and drops untagged packets.
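Illustrative ACL_RULE entries for the intent described above, as Python dicts (field values are assumptions, not the exact template output):

    acl_rules = {
        "DATAACL|ALLOW_VLAN_TAGGED": {
            "PRIORITY": "9999",
            "ETHER_TYPE": "0x8100",    # 802.1Q tagged frames
            "PACKET_ACTION": "FORWARD",
        },
        "DATAACL|DEFAULT_DENY": {
            "PRIORITY": "1",
            "PACKET_ACTION": "DROP",   # untagged frames end up here
        },
    }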
How to verify it
Unit tests
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
To further support parsing soc_ipv4 and soc_ipv6 out of Dpg:
<DeviceDataPlaneInfo>
<IPSecTunnels />
<LoopbackIPInterfaces xmlns:a="http://schemas.datacontract.org/2004/07/Microsoft.Search.Autopilot.Evolution">
<a:LoopbackIPInterface>
<ElementType>LoopbackInterface</ElementType>
<Name>HostIP</Name>
<AttachTo>Loopback0</AttachTo>
<a:Prefix xmlns:b="Microsoft.Search.Autopilot.NetMux">
<b:IPPrefix>10.10.10.2/32</b:IPPrefix>
</a:Prefix>
<a:PrefixStr>10.10.10.2/32</a:PrefixStr>
</a:LoopbackIPInterface>
<a:LoopbackIPInterface>
<ElementType>LoopbackInterface</ElementType>
<Name>HostIP1</Name>
<AttachTo>Loopback0</AttachTo>
<a:Prefix xmlns:b="Microsoft.Search.Autopilot.NetMux">
<b:IPPrefix>fe80::0002/128</b:IPPrefix>
</a:Prefix>
<a:PrefixStr>fe80::0002/128</a:PrefixStr>
</a:LoopbackIPInterface>
<a:LoopbackIPInterface>
<ElementType>LoopbackInterface</ElementType>
<Name>SoCHostIP0</Name>
<AttachTo>server2SOC</AttachTo>
<a:Prefix xmlns:b="Microsoft.Search.Autopilot.NetMux">
<b:IPPrefix>10.10.10.3/32</b:IPPrefix>
</a:Prefix>
<a:PrefixStr>10.10.10.3/32</a:PrefixStr>
</a:LoopbackIPInterface>
<a:LoopbackIPInterface>
<ElementType>LoopbackInterface</ElementType>
<Name>SoCHostIP1</Name>
<AttachTo>server2SOC</AttachTo>
<a:Prefix xmlns:b="Microsoft.Search.Autopilot.NetMux">
<b:IPPrefix>fe80::0003/128</b:IPPrefix>
</a:Prefix>
<a:PrefixStr>fe80::0003/128</a:PrefixStr>
</a:LoopbackIPInterface>
</LoopbackIPInterfaces>
</DeviceDataPlaneInfo>
Signed-off-by: Longxiang Lyu <lolv@microsoft.com>
How I did it
For server loopback definitions in Dpg, if they contain a LoopbackIPInterface whose AttachTo tag has a value of the form <server_name>SOC, the address will be regarded as a SoC IP, and sonic-cfggen will treat the port connected to the server as active-active if the redundancy_type is either Libra or Mixed.
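A minimal sketch of the classification, using the server2SOC example above (helper name and output shape are illustrative):

    def classify_loopback(attach_to, prefix, soc_addrs):
        # AttachTo values like "server2SOC" mark the prefix as a SoC IP.
        if attach_to.endswith("SOC"):
            server = attach_to[:-len("SOC")]
            key = "soc_ipv6" if ":" in prefix else "soc_ipv4"
            soc_addrs.setdefault(server, {})[key] = prefix
            return True
        return False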
How to verify it
Pass the unittest.
Signed-off-by: Longxiang Lyu <lolv@microsoft.com>