C64: 32 100G downlinks and 32 100G uplinks.
D112C8: 112 50G downlinks and 8 100G uplinks.
D24C52: 24 50G downlinks, 20 100G downlinks, and 32 100G uplinks.
D28C50: 28 50G downlinks, 18 100G downlinks, and 32 100G uplinks.
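As a sanity check of the four layouts above, here is a minimal sketch (the dict encoding is hypothetical, not the format of the actual SKU files) verifying that each layout maps onto the 64 physical ports of an SN3800, assuming every 50G link is one half of a 2x50G port split:

```python
# Hypothetical encoding of the SKU layouts above; illustration only.
SKU_LAYOUTS = {
    "C64":    {"down": [(32, 100)],           "up": [(32, 100)]},
    "D112C8": {"down": [(112, 50)],           "up": [(8, 100)]},
    "D24C52": {"down": [(24, 50), (20, 100)], "up": [(32, 100)]},
    "D28C50": {"down": [(28, 50), (18, 100)], "up": [(32, 100)]},
}

def physical_ports(layout):
    # Assumption: a 50G link is half of a 100G port split into 2x50G,
    # so two 50G links consume one physical port.
    links = layout["down"] + layout["up"]
    return sum(count // 2 if speed == 50 else count for count, speed in links)

for sku, layout in SKU_LAYOUTS.items():
    assert physical_ports(layout) == 64, sku  # SN3800 has 64 physical ports
```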
Signed-off-by: Stephen Sun <stephens@nvidia.com>
This backports #4886 to the 201911 branch.
Calculate the t1 pool size as 24 * downlink ports + 8 * uplink ports (see the sketch after this list)
- Take both the port MTU and the peer MTU into account when calculating headroom
- The worst-case factor is decreased to 50%
- Mellanox-SN2700-C28D8 t0: assume 48 * 50G/5m + 8 * 100G/40m ports
- Mellanox-SN2700 (C32)
  - t0: 16 * 100G/5m + 16 * 100G/40m
  - t1: 16 * 100G/40m + 16 * 100G/300m
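A minimal sketch of the pool-size arithmetic described above. The 24/8 port counts and the 50% worst-case factor come from this commit message; the buffer and headroom values, and the way the worst-case factor is applied, are assumptions for illustration only:

```python
# All constants below are assumed for illustration; the real values live in
# the Mellanox buffer templates.
TOTAL_BUFFER_BYTES = 13619 * 1024   # assumed total ASIC buffer
DOWNLINK_HEADROOM = 19 * 1024       # assumed per-downlink-port headroom
UPLINK_HEADROOM = 44 * 1024         # assumed per-uplink-port headroom
WORST_CASE_FACTOR = 0.5             # "decreased to 50%" per this commit

# t1 reserves headroom for 24 downlink ports and 8 uplink ports; applying
# the worst-case factor to the aggregate is an assumption of this sketch.
reserved = (24 * DOWNLINK_HEADROOM + 8 * UPLINK_HEADROOM) * WORST_CASE_FACTOR
pool_size = TOTAL_BUFFER_BYTES - reserved
print(f"t1 shared pool: {pool_size:.0f} bytes")
```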
Signed-off-by: Stephen Sun <stephens@nvidia.com>
* Update the buffer sizes based on the latest Excel sheet
Signed-off-by: Stephen Sun <stephens@mellanox.com>
* Align the buffer configuration with the latest formula:
- remove the redundant "*2" from the formula
- use the port MTU for the PFC frame sent by the local port, and the peer lossless MTU for lossless traffic sent by the peer (see the sketch below)
The buffer pool sizes are updated accordingly.
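A rough before/after sketch of the MTU terms in the headroom formula, assuming the redundant "*2" applied to the MTU term; the function and variable names are hypothetical:

```python
def headroom_old(prop_bytes, mtu):
    # Old form (assumed): a single MTU counted twice via the redundant "*2".
    return prop_bytes + 2 * mtu

def headroom_new(prop_bytes, port_mtu, peer_lossless_mtu):
    # New form per this commit: the port MTU covers a frame the local port
    # may have started sending when it emits the PFC frame, and the peer
    # lossless MTU covers a frame the peer may have started sending when
    # the PFC frame arrives.
    return prop_bytes + port_mtu + peer_lossless_mtu

# e.g. with an assumed 25 KB of in-flight bytes and 9216-byte MTUs:
print(headroom_new(25 * 1024, 9216, 9216))
```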
Signed-off-by: Stephen Sun <stephens@mellanox.com>
* Change the port index in port_config.ini to 1-based
* Add a default port index to port_config.ini and change the platform plugins to accept 1-based port indices (see the sketch after this list)
* Fix the port index in sfp_event.py
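A minimal sketch of the new indexing convention in a plugin, with hypothetical class, method, and helper names (not the actual plugin code):

```python
class SfpUtilSketch:
    """Hypothetical plugin fragment illustrating 1-based port indexing."""
    PORT_START = 1    # port_config.ini indices are now 1-based
    PORT_END = 32     # assumed port count, for illustration

    def get_presence(self, port_num):
        # Validate against the 1-based range that port_config.ini now uses.
        if port_num < self.PORT_START or port_num > self.PORT_END:
            return False
        sdk_index = port_num - 1  # translate to the 0-based index used internally
        return self._read_presence(sdk_index)

    def _read_presence(self, sdk_index):
        # Stub: a real plugin would query the SDK or sysfs here.
        return 0 <= sdk_index < self.PORT_END
```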
sai_xml contains info about port splits; previously it simply linked to the MSN3800 sai xml, which does not describe splits. The new version describes splits and speeds according to the Mellanox-SN3800-D112-C8 SKU.
In practice, the stale description could cause port recreation on SAI init.
Signed-off-by: Mykola Faryma <mykolaf@mellanox.com>