#### Why I did it
1. Update pg_profile_lookup.ini to add support for 2000m cables (an illustrative entry format is sketched after this list).
2. Update the buffer configuration for t1 topologies that use 2000m uplink cables:
   - For the SN3800 platform:
     - C64:
       - t0: 32 100G downlinks and 32 100G uplinks.
       - t1: 56 100G downlinks and 8 100G uplinks with 2 km cables.
     - D112C8: 112 50G downlinks and 8 100G uplinks.
     - D24C52: 24 50G downlinks, 20 100G downlinks, and 32 100G uplinks.
     - D28C50: 28 50G downlinks, 18 100G downlinks, and 32 100G uplinks.
   - For the SN2700 platform:
     - D48C8: 48 50G downlinks and 8 100G uplinks.
     - C32:
       - t0: 16 100G downlinks and 16 100G uplinks.
       - t1: 24 100G downlinks and 8 100G uplinks with 2 km cables.
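
For context, pg_profile_lookup.ini maps a port speed and cable length to the lossless priority-group buffer settings (headroom size, xon/xoff, dynamic threshold) that the buffer templates apply to each port based on its configured cable length. The entry below only illustrates the layout of the file; the exact column set varies slightly between platforms, and the values are placeholders rather than the numbers introduced by this change.

```
# PG lookup for lossless traffic
# speed   cable    size    xon    xoff   threshold
  100000  2000m   <size>  <xon>  <xoff>  <dynamic_th>
```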
#### How I did it
This backports #4886 to 201911, which reworks the buffer calculation as follows (a rough sketch of the arithmetic follows the list):

- Calculate the t1 pool size as 24 * downlink port + 8 * uplink port.
- Take both the port MTU and the peer MTU into account when calculating headroom.
- Decrease the worst-case factor to 50%.
- Mellanox-SN2700-C28D8 t0: assume 48 * 50G/5m + 8 * 100G/40m ports.
- Mellanox-SN2700 (C32):
  - t0: 16 * 100G/5m + 16 * 100G/40m
  - t1: 16 * 100G/40m + 16 * 100G/300m
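
A minimal sketch of that arithmetic, assuming a 5 ns/m cable propagation delay, 9216-byte MTUs, and a simplified placement of the 50% worst-case factor; the helper names and constants are illustrative only and do not come from the actual buffer templates:

```python
"""Illustrative sketch only: rough lossless-PG headroom and t1 pool math.
Constants, helper names, and the exact placement of the worst-case factor
are assumptions, not the values used by the real buffer templates."""

PROPAGATION_DELAY_NS_PER_M = 5   # assumed signal propagation delay in the cable
WORST_CASE_FACTOR = 0.5          # per this change: decreased to 50%


def pg_headroom_bytes(speed_gbps, cable_m, port_mtu=9216, peer_mtu=9216):
    """Headroom must absorb the bytes in flight on the cable (round trip)
    plus one MTU for the local port and one for the peer port."""
    round_trip_ns = 2 * cable_m * PROPAGATION_DELAY_NS_PER_M
    in_flight_bytes = round_trip_ns * speed_gbps / 8   # Gbit/s * ns -> bits, /8 -> bytes
    return int(in_flight_bytes + port_mtu + peer_mtu)


def t1_pool_bytes(downlink_headroom, uplink_headroom, downlinks=24, uplinks=8):
    """Pool sized from 24 downlink ports + 8 uplink ports, scaled by the
    worst-case factor (simplified here: applied to the whole sum)."""
    total = downlinks * downlink_headroom + uplinks * uplink_headroom
    return int(total * WORST_CASE_FACTOR)


if __name__ == "__main__":
    down = pg_headroom_bytes(speed_gbps=100, cable_m=40)     # e.g. a 100G/40m downlink
    up = pg_headroom_bytes(speed_gbps=100, cable_m=2000)     # e.g. a 100G/2km uplink
    print(f"downlink PG headroom ~= {down} bytes")
    print(f"uplink PG headroom   ~= {up} bytes")
    print(f"t1 pool contribution ~= {t1_pool_bytes(down, up)} bytes")
```

With these placeholder numbers, a 100G port on a 2 km cable needs roughly ten times the headroom of one on a 40 m cable, which is why the 2000m pg_profile_lookup.ini entries and the t1 pool size are updated together.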

#### How to verify it
Run the QoS regression test.

Signed-off-by: Stephen Sun <stephens@nvidia.com>