Commit Graph

29 Commits

Author SHA1 Message Date
Stephen Sun
fe6be5da92
[202012] Configure different map between uplink and downlink on t1 switch in dual ToR scenario (#11299)
- Why I did it
Configure different DSCP_TO_TC_MAP between uplink and downlink on T1 switch in dual ToR scenario
On the T1 uplink, both DSCP 2 and 6 are mapped to TC 1 to prevent such traffic from occupying lossless buffers.
On the T1 downlink, they are mapped to TC 2 and 6 respectively (unchanged).

- How I did it
For vendors who want to configure different DSCP_TO_TC_MAP between uplinks and downlinks on T1, they should:
- Define the generate_dscp_to_tc_map macro in the SKU's qos.json.j2 file
- Define the map AZURE for downlink and AZURE_UPLINK for uplink
- Define the jinja2 variable different_dscp_to_tc_map as True
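
A minimal sketch of such a per-SKU qos.json.j2 override (the macro, map names, and variable are those listed above; the exact DSCP-to-TC values are illustrative):

```jinja
{# Sketch only: uplink map sends DSCP 2/6 to TC 1; downlink map keeps 2/6 #}
{%- set different_dscp_to_tc_map = true %}
{%- macro generate_dscp_to_tc_map() %}
    "DSCP_TO_TC_MAP": {
        "AZURE": {
            "2": "2",
            "6": "6"
        },
        "AZURE_UPLINK": {
            "2": "1",
            "6": "1"
        }
    },
{%- endmacro %}
```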

Signed-off-by: Stephen Sun <stephens@nvidia.com>
2022-07-03 15:58:06 +03:00
Stephen Sun
307d0e2aca
[Mellanox][202012] Support Mellanox-SN4600C-C64 as T1 switch in dual-ToR scenario (#11032)
Why I did it
Support Mellanox-SN4600C-C64 as T1 switch in dual-ToR scenario

1. Support additional queue and PG in buffer templates, including both traditional and dynamic model
2. Support mapping DSCP 2/6 to lossless traffic in the QoS template.
3. Add macros to generate additional lossless PG in the dynamic model
4. Adjust the order in which the generic/dedicated (with additional lossless queues) macros are checked and called to generate buffer tables in the common template buffers_config.j2 (see the sketch after this list)
  - Buffer tables are rendered using macros.
  - Both generic and dedicated macros are defined on our platform, and the generic one used to be called whenever it was defined, so it was always the one called on our platform. To avoid this, the dedicated macro is now checked and called first, and only then the generic one.
5. Support MAP_PFC_PRIORITY_TO_PRIORITY_GROUP on ports with additional lossless queues.
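
A sketch of the adjusted check order in buffers_config.j2; the macro names here are illustrative, not the actual ones:

```jinja
{# Check the dedicated macro (with additional lossless queues) first, so a
   platform defining both variants gets the dedicated one; fall back to the
   generic macro otherwise. #}
{%- if generate_buffer_tables_with_extra_lossless_queues is defined %}
{{ generate_buffer_tables_with_extra_lossless_queues(port_names) }}
{%- elif generate_buffer_tables is defined %}
{{ generate_buffer_tables(port_names) }}
{%- endif %}
```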

On Mellanox-SN4600C-C64, buffer configuration for t1 is calculated as:
40 * 100G downlink ports with 4 lossless PGs/queues, 1 lossy PG, and 3 lossy queues
16 * 100G uplink ports with 2 lossless PGs/queues, 1 lossy PG, and 5 lossy queues

Signed-off-by: Stephen Sun <stephens@nvidia.com>

How to verify it
Run regression test.
2022-06-21 10:04:49 -07:00
Junchao-Mellanox
0f983c5796 [Mellanox] Fix issue: SN4600C has 4 CPU core temperature sensors (#9930)
- Why I did it
platform.json of the 4600C has only 2 CPU core thermal sensors, but there are actually 4.

- How I did it
Added thermal sensors for CPU core 2 and core 3.

- How to verify it
Build.
2022-02-09 19:27:49 +00:00
Kebo Liu
75bd97e176 [Mellanox] Add sensors conf for MSN4600C A1 platform (#9706)
- Why I did it
Add sensor conf for MSN4600C A1 platform

- How I did it
Add a new sensor conf file and relevant scripts to support two different versions of the platform

- How to verify it
Run "sensors" cmd to check the output on the A1 platform to see whether it's as expected.
Signed-off-by: Kebo Liu <kebol@nvidia.com>
2022-01-13 07:01:26 +00:00
Stephen Sun
b36ee67bc7 Fix typo and missing files in SN3800 and SN4600C's buffer templates (#9537)
Why I did it
Fix typo and missing files in SN3800 and SN4600C's buffer templates

How I did it
Renamed ingress_lossless_xoff_size => ingress_lossless_pool_xoff; added the missing files for SN4600C-D100C12S2

How to verify it
Deploy the fix and verify whether the device can be up.

Signed-off-by: Stephen Sun <stephens@nvidia.com>
2021-12-23 03:28:43 +00:00
Stephen Sun
8836b6bcd2 [Mellanox] Adjust buffer parameters with 2km cable supported for 4600C non-generic SKUs (#9215)
- Why I did it
Adjust buffer parameters with 2km cable support for 4600C non-generic SKUs. All parameters were also recalculated with the latest algorithm, taking per-speed peer response time into account.

- How I did it
Detailed information of each SKU:

C64:
t0: 32 100G downlinks and 32 100G uplinks
t1: 56 100G downlinks and 8 100G uplinks with 2km-cable supported
D112C8: 112 50G downlinks and 8 100G uplinks.
D48C40: 48 50G downlinks, 32 100G downlinks, and 8 100G uplinks
D100C12S2: 4 100G downlinks, 2 10G downlinks, 100 50G downlinks, and 8 100G uplinks
2km cable is supported for C64 on t1 only

- How to verify it
Run regression test (QoS)

Signed-off-by: Stephen Sun <stephens@nvidia.com>
2021-12-12 01:36:53 +00:00
Stephen Sun
acac848858
[Reclaim buffer][202012] Reclaim unused buffers by applying zero buffer profiles (#9063)
- Why I did it
Support zero buffer profiles

1. Add buffer profiles and pool definition for zero buffer profiles
2. Support applying zero profiles on INACTIVE PORTS
3. Enable dynamic buffer manager to load zero pools and profiles from a JSON file

- How I did it
Add buffer profiles and pool definition for zero buffer profiles

If the buffer model is static:
 * Apply normal buffer profiles to admin-up ports
 * Apply zero buffer profiles to admin-down ports
If the buffer model is dynamic:
 * Apply normal buffer profiles to all ports
 * The buffer manager will take care of it when a port is shut down
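
A sketch of that selection logic; the macro names and the admin_status check are assumptions for illustration:

```jinja
{# Static model: zero profiles on admin-down ports. Dynamic model: normal
   profiles on all ports; reclaiming on shutdown is left to buffermgrd. #}
{%- for port in PORT %}
{%- if DEVICE_METADATA.localhost.buffer_model == 'dynamic' or PORT[port].admin_status == 'up' %}
{{ generate_normal_buffer_objects(port) }}
{%- else %}
{{ generate_zero_buffer_objects(port) }}
{%- endif %}
{%- endfor %}
```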

Update buffers_config.j2 to support INACTIVE PORTS by extending the existing macros to generate the various buffer objects, including PGs, queues, ingress/egress profile lists

Originally, all the macros generating the above buffer objects took only the active ports as an argument.
Now that buffer items need to be generated for inactive ports as well, an extra argument representing the inactive ports needs to be added.
To remain backward compatible, a new series of macros is introduced that takes both active and inactive ports as arguments.
The original version (with active ports only) is checked first; if it is not defined, the extended version is called.
Only vendors who support zero profiles need to change their buffer templates.
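
For instance, an extended macro would take the inactive ports as a second argument; the name, signature, and profile names below are illustrative:

```jinja
{# Extended variant: the usual queues for active ports, plus zero-profile
   queues for inactive ports. Called only when the original single-argument
   macro is not defined. #}
{%- macro generate_queue_buffers_with_inactive_ports(port_names_active, port_names_inactive) %}
    "BUFFER_QUEUE": {
        "{{ port_names_active }}|3-4": { "profile" : "egress_lossless_profile" },
        "{{ port_names_inactive }}|3-4": { "profile" : "egress_lossless_zero_profile" }
    }
{%- endmacro %}
```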
Enable the buffer manager to load zero pools and profiles from a JSON file:

The JSON file is provided on a per-platform basis.
It is copied from the platform/<vendor> folder to the /usr/share/sonic/templates folder at compile time and rendered when the swss container is being created.
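
A fragment of what such a per-platform file might contain once rendered; the table, pool, and field names are illustrative rather than the actual Mellanox content:

```json
[
    {
        "BUFFER_POOL_TABLE:ingress_zero_pool": {
            "type": "ingress",
            "mode": "static",
            "size": "0"
        },
        "OP": "SET"
    }
]
```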
To keep the code clean and reduce redundancy, common macros are extracted from buffer_defaults_t{0,1}.j2 of all SKUs into two common files:
One in Mellanox-SN2700-D48C8 for single ingress pool mode
The other in ACS-MSN2700 for double ingress pool mode
The corresponding files of all other SKUs are symbolic links to the above files

Update sonic-cfggen test accordingly:
 * Adjust example output file of JSON template for unit test
 * Add unit tests for Mellanox's new buffer templates.

- How to verify it
Run sonic-cfggen unit tests, the regression test, and manual tests.

Signed-off-by: stephens <stephens@nvidia.com>
2021-12-09 17:34:56 +02:00
Alexander Allen
d5149889fc
Add Mellanox-SN4600C-D100C12S2 SKU (#8754)
* [mellanox] Add D100C12S2 SKU to 4600C
2021-09-16 13:31:30 -07:00
Vivek Reddy
1eaa951966 [Mellanox] [SKU] Fix the shared headroom for 4600C-C64 SKU (#8242)
Removed ingress_lossy_pool from the BUFFER_POOL list
Fixed the egress_lossless_pool_size value

Signed-off-by: Vivek Reddy Karri <vkarri@nvidia.com>
2021-08-03 09:58:40 +00:00
Vivek Reddy
97460c06e5
SonicName Changes (#8154)
Edited the port_config.ini files for all the 4600C SKUs to use a difference of 4 in port numbering.
Co-authored-by: Vivek Reddy Karri <vkarri@nvidia.com>
2021-07-12 10:43:50 -07:00
Vivek Reddy
cb2ffa324f
[Mellanox] [202012] Added D48C40 SKU for 4600C platform (#8133)
* Added new SKU for SN4600C Platform: Mellanox-SN4600C-D48C40
Co-authored-by: Vivek Reddy Karri <vkarri@nvidia.com>
2021-07-08 18:52:45 -07:00
madhanmellanox
c068369b16
[202012]Removing hwsku.json file from Mellanox-SN4600C-C64 SKU (#8009)
Removed the hwsku.json file from Mellanox-SN4600C-C64
Co-authored-by: Madhan Babu <madhan@l-csi-0241l.mtl.labs.mlnx>
2021-07-01 15:33:42 -07:00
DavidZagury
b70a46cd3e
[Mellanox] Update SKUs to enable SDK dumps (#7708) (#7978)
- Why I did it
To create an SDK dump on Mellanox devices when an SDK event occurs.

- How I did it
Set the SKU keys needed to initialize the feature in SAI.

- How to verify it
Simulate an SDK event and check that the dump is created in the expected path.
2021-06-27 10:05:14 +03:00
madhanmellanox
44625215d8 Adding new SKU Mellanox-SN4600C-C4 (#7815)
Add a new SKU for the SN4600C switch: Mellanox-SN4600C-C64

Co-authored-by: Madhan Babu <madhan@r-build-sonic06.mtr.labs.mlnx>
2021-06-21 09:55:43 +00:00
Kebo Liu
100c14007f
[Mellanox] [202012] Enhance the platform.json with adding more platform device facts. (#7496)
- Why I did it
The current platform.json lacks some peripheral-device-related facts, like chassis/fan/psu/drawer/thermal/component names, numbers, etc.

- How I did it
Add platform device facts to the platform.json file

- How to verify it
Run sonic-mgmt platform API tests which depend on these facts.

Signed-off-by: Kebo Liu <kebol@nvidia.com>
2021-05-09 10:45:36 +03:00
noaOrMlnx
aee4892ca4
[Mellanox] Align Mellanox-SN4600C-D112C8 SKU with SKU definition (#7057)
- Why I did it
The Mellanox-SN4600C-D112C8 SKU is not configured properly.
It should have 112 50G interfaces and 8 100G interfaces, as described in this PR.

- How I did it
Changed port_config.ini & sai profile.

- How to verify it
Apply this HwSKU to a MSN4600C Mellanox platform.
2021-03-25 08:50:59 +02:00
Nazarii Hnydyn
bb2014a0af [Mellanox]: Fix PCIEd config for SN4600c (#6892)
Signed-off-by: Nazarii Hnydyn <nazariig@nvidia.com>
2021-03-04 21:23:05 +00:00
Stephen Sun
97e6b4d15c Support shared headroom pool for Microsoft SKUs (#6366)
- Why I did it
Support shared headroom pool

Signed-off-by: Stephen Sun <stephens@nvidia.com>

- How I did it
Port configurations for SKUs based on the 2700/3800 platforms from 201911
For the SN3800 platform:
C64: 32 100G downlinks and 32 100G uplinks.
D112C8: 112 50G downlinks and 8 100G uplinks.
D24C52: 24 50G downlinks, 20 100G downlinks, and 32 100G uplinks.
D28C50: 28 50G downlinks, 18 100G downlinks, and 32 100G uplinks.
For the SN2700 platform:
D48C8: 48 50G downlinks and 8 100G uplinks
C32: 16 100G downlinks and 16 100G uplinks
Add configuration for Mellanox-SN4600C-D112C8:
112 50G downlinks and 8 100G uplinks.

- How to verify it
Run regression test.
2021-02-23 23:56:01 +00:00
Stephen Sun
e010d83fc3
[Dynamic buffer calc] Support dynamic buffer calculation (#6194)
**- Why I did it**
To support dynamic buffer calculation.
This PR also depends on the following PRs for sub modules
- [sonic-swss: [buffermgr/bufferorch] Support dynamic buffer calculation #1338](https://github.com/Azure/sonic-swss/pull/1338)
- [sonic-swss-common: Dynamic buffer calculation #361](https://github.com/Azure/sonic-swss-common/pull/361)
- [sonic-utilities: Support dynamic buffer calculation #973](https://github.com/Azure/sonic-utilities/pull/973)

**- How I did it**
1. Introduce field `buffer_model` in `DEVICE_METADATA|localhost` to represent which buffer model is currently running in the system (see the sketch after this list):
    - `dynamic` for the dynamic buffer calculation model
    - `traditional` for the traditional model in which the `pg_profile_lookup.ini` is used
2. Add the tables required for the feature:
   - ASIC_TABLE in platform/<vendor>/asic_table.j2
   - PERIPHERAL_TABLE in platform/<vendor>/peripheral_table.j2
   - PORT_PERIPHERAL_TABLE on a per-platform basis in device/<vendor>/<platform>/port_peripheral_config.j2 for each platform with a gearbox installed.
   - DEFAULT_LOSSLESS_BUFFER_PARAMETER and LOSSLESS_TRAFFIC_PATTERN in files/build_templates/buffers_config.j2
   - Add lossless PGs (3-4) for each port in files/build_templates/buffers_config.j2
3. Copy the newly introduced j2 files into the image and render them when the system starts
4. Update the CLI options for buffermgrd so that it can start in dynamic mode
5. Fetch the ASIC vendor name in orchagent:
   - fetch the vendor name when creating the docker container and pass it in as a docker environment variable
   - `buffermgrd` can use this passed-in variable
6. Clear buffer-related tables from STATE_DB when the swss docker starts
7. Update src/sonic-config-engine/tests/sample_output/buffers-dell6100.json according to buffers_config.j2
8. Remove buffer pool sizes for ingress pools and egress_lossy_pool
   Update the buffer settings for dynamic buffer calculation
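
A sketch of how a template can key off the new `buffer_model` field when emitting dynamic-model content (the field is from this PR; the table contents shown are illustrative):

```jinja
{# Emit dynamic-model defaults and lossless PGs 3-4 per port; values illustrative #}
{%- if DEVICE_METADATA.localhost.buffer_model == 'dynamic' %}
    "DEFAULT_LOSSLESS_BUFFER_PARAMETER": {
        "AZURE": { "default_dynamic_th": "0" }
    },
    "BUFFER_PG": {
{%- for port in PORT %}
        "{{ port }}|3-4": { "profile": "NULL" }{{ "," if not loop.last }}
{%- endfor %}
    },
{%- endif %}
```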
2020-12-13 11:35:39 -08:00
Kebo Liu
1158701edc
add pcied config files for mellanox platform (#5669)
This PR depends on a community change to move PCIe config files from the $PLATFORM/plugin folder to the $PLATFORM/ folder
- Why I did it
To support the PCIed daemon on Mellanox platforms
- How I did it
Add PCIed config yaml files for all Mellanox platforms
Update pmon daemon config files for SimX platforms
2020-11-02 19:45:36 -08:00
Nazarii Hnydyn
5486f87afc
[Mellanox] Update platform components config files. (#5685)
Signed-off-by: Nazarii Hnydyn <nazariig@nvidia.com>
2020-10-25 19:44:37 +02:00
Nazarii Hnydyn
e2b4afc438
[Mellanox] Update platform components config files. (#5360)
Signed-off-by: Nazarii Hnydyn <nazariig@nvidia.com>
2020-09-13 10:23:22 +03:00
Stephen Sun
54fcdbb380
Support single ingress pool for MSFT SKUs and optimize headroom calculation (#4686)
Calculate the t1 pool size as 24 * downlink ports + 8 * uplink ports (see the worked sketch below)

- Take both port and peer MTU into account when calculating headroom
- Worst case factor is decreased to 50%
- Mellanox-SN2700-C28D8 t0, assume 48 * 50G/5m + 8 * 100G/40m ports
- Mellanox-SN2700 (C32)
  - t0: 16 * 100G/5m + 16 * 100G/40m
  - t1: 16 * 100G/40m + 16 * 100G/300m
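
A worked instance of the rule above, assuming the C32 at t1 has 16 downlinks and 16 uplinks as listed; the per-port constants are from this commit, the surrounding template is illustrative:

```jinja
{# t1 pool size = 24 per downlink port + 8 per uplink port #}
{%- set downlinks = 16 %}
{%- set uplinks = 16 %}
{{ 24 * downlinks + 8 * uplinks }}  {# renders 512 #}
```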

Signed-off-by: Stephen Sun <stephens@mellanox.com>

Co-authored-by: Stephen Sun <stephens@mellanox.com>
2020-08-13 13:12:09 +03:00
Junchao-Mellanox
e1f7fb135b
[Mellanox] Add system health configuration file for Mellanox platforms (#4834)
The new system health feature supports a platform-based configuration file. Add configuration files for all Mellanox platforms.

Add a configuration file for SN2700; other platforms will use a soft link to it.
2020-07-13 10:20:22 -07:00
shlomibitton
e666bf8490 [Mellanox] Add a new SKU Mellanox-SN4600C-D112C8 (#4833)
Add related files to the device folder:

buffer config templates
pg lookup profile
port_config.ini
sai profile
sensor conf
plugins

Co-authored-by: Stephen Sun <stephens@mellanox.com>
2020-07-12 18:08:52 +00:00
Junchao-Mellanox
563a0fd21e
[Mellanox] Change port index in port_config.ini to 1-based (#4781)
* Change port index in port_config.ini to 1-based
* Add default port index to port_config.ini, change platform plugins to accept 1-based port index
* Fix port index in sfp_event.py
2020-06-23 17:21:36 -07:00
shlomibitton
ea63f3e4b2
[mellanox]: Fix for MSN4600C sensors (#4754)
Signed-off-by: Shlomi Bitton <shlomibi@mellanox.com>
2020-06-12 16:08:27 -07:00
Junchao-Mellanox
5e6c20481d
[Mellanox] Enhancement for fan led management (#4437)
2020-05-13 10:01:32 -07:00
shlomibitton
b6291372d9
[Mellanox] Add a new Mellanox platform x86_64-mlnx_msn4600c and new SKU ACS-MSN4600C (#4483)
* New SKU support for MSN4600C

Signed-off-by: Shlomi Bitton <shlomibi@mellanox.com>
2020-04-30 00:30:11 -07:00