Why I did it
SONiC switches get their docker images from a local repo, populated during install with container images pre-built into the SONiC FW. With the introduction of Kubernetes, new docker images available in a remote repo can be deployed. This requires dockerd to be able to pull images from the remote repo.
Depending on the switch's network domain & config, it may or may not be able to reach the remote repo. In the case where the remote repo is unreachable, we could potentially make the Kubernetes server also act as an HTTP proxy.
How I did it
When the admin explicitly enables it, the kubernetes-server can be configured as the docker proxy. But any update to the docker proxy has to go through a service-conf-file environment variable, implying that a "service restart docker" is required. Restarting dockerd is very expensive, as it restarts all containers, including the database container.
To avoid a dockerd restart, pre-configure an http_proxy using an unused IP. When the k8s server is enabled to act as http-proxy, an iptables entry is created to redirect all traffic destined to the configured (unused) proxy IP to the kubernetes-master IP. This way, any update to the Kubernetes master config is just an iptables manipulation, which is transparent to all modules until dockerd needs to download from the remote repo.
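A minimal sketch of that redirect idea, assuming a DNAT rule in the iptables nat OUTPUT chain and a hypothetical proxy port; this is illustrative only, not the actual ctrmgrd.py code:
```
import subprocess

PROXY_IP = "172.16.1.1"   # the pre-configured, unused proxy IP from http_proxy.conf
PROXY_PORT = "3128"       # assumed proxy port, for illustration only

def point_proxy_at_kube_master(kube_master_ip, enable=True):
    """Add (or remove) a DNAT rule so traffic to the dummy proxy IP
    actually reaches the kubernetes master acting as http-proxy."""
    action = "-A" if enable else "-D"
    subprocess.check_call([
        "iptables", "-t", "nat", action, "OUTPUT",
        "-p", "tcp", "-d", PROXY_IP, "--dport", PROXY_PORT,
        "-j", "DNAT", "--to-destination", "{}:{}".format(kube_master_ip, PROXY_PORT),
    ])
```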
How to verify it
Configure a switch such that the image repo is unreachable
Pre-configure dockerd with http_proxy.conf using an unused IP (e.g. 172.16.1.1)
Update ctrmgrd.service to invoke ctrmgrd.py with the "-p" option.
Configure a k8s server, and deploy an image for a feature with set_owner="kube"
Check whether the switch can successfully download the image.
What I did:
Updated 7260 MMU Profile based on latest MSFT Tier 1 Tomahawk2_MMU_Setting_48x100G_40m_16x100G_300m_v1.0 and
TH2_PGHdrm_MSFT.
How I verify:
Made sure the image is up, traffic is flowing, and the MMU dump looks fine.
The SAI QoS test will need to be updated to support this SKU.
#### Why I did it
The PR checkers do not re-run the sonic-config-engine test cases because some of the config file changes are not detected.
https://sonic-jenkins.westus2.cloudapp.azure.com/job/mellanox/job/buildimage-mlnx-all/660/console
…
07:13:24 ======================================================================
07:13:24 ERROR: test_bgpd_quagga (tests.test_j2files.TestJ2Files)
07:13:24 ----------------------------------------------------------------------
…
07:13:24 ======================================================================
07:13:24 ERROR: test_zebra_quagga (tests.test_j2files.TestJ2Files)
07:13:24 ----------------------------------------------------------------------
…
07:13:24 error: Test failed: <unittest.runner.TextTestResult run=161 errors=2 failures=0>
07:13:24 [ FAIL LOG END ] [ target/python-wheels/sonic_config_engine-1.0-py2-none-any.whl ]
07:13:24 make: *** [slave.mk:603: target/python-wheels/sonic_config_engine-1.0-py2-none-any.whl] Error 1
07:13:24 Makefile.work:292: recipe for target 'target/sonic-mellanox.bin' failed
07:13:24 make[1]: *** [target/sonic-mellanox.bin] Error 2
07:13:24 make[1]: Leaving directory '/data2/johnar/workspace/mellanox/buildimage-mlnx-all'
07:13:24 Makefile:7: recipe for target 'target/sonic-mellanox.bin' failed
07:13:24 make: *** [target/sonic-mellanox.bin] Error 2
See PR: https://github.com/Azure/sonic-buildimage/pull/7476
#### How I did it
Add the dependent files.
See src/sonic-config-engine/tests/test_j2files.py
Signed-off-by: Mykola Gerasymenko <mykolax.gerasymenko@intel.com>
Why I did it
Dynamic Port Breakout fails because the PG_DROP yang model is missing
How I did it
Add the PG_DROP yang model and add a check for this field in the yang model unit tests
How to verify it
First, try to do DPB (2x50G) for the Ethernet0 port:
sudo config interface breakout Ethernet0 2x50G -f
After that, try to do DPB (1x100G[40G]) for the Ethernet0 port:
sudo config interface breakout Ethernet0 1x100G[40G] -f
Both commands should work correctly.
- Why I did it
Enhance Python 3 support for the platform API. Originally, some platform APIs called SDK APIs which didn't support Python 3. Now that the Python 3 APIs are supported in SDK 4.4.3XXX, Python 3 is completely supported by the platform API.
- How I did it
Start all platform daemons with Python 3
1. Remove the #!/usr/bin/env python shebang at the beginning of each platform API file, as the platform APIs are not started as daemons but are imported by other daemons.
2. Adjust SDK API calls accordingly
- How to verify it
Manually test and run the platform regression tests
Signed-off-by: Stephen Sun <stephens@nvidia.com>
- Why I did it
Adjust the Makefile for SDK/python-SDK-API to support both python2 and python3
- How to verify it
Build the image and check whether python2 and python3 are both supported by SDK API.
Signed-off-by: Stephen Sun <stephens@nvidia.com>
Added support for BRCM SAI 5.0.0.1.
Major changes here:
CS00012019568 Link Training (all 100G ASICs - TH families and TD3)
CS00012184310 [attribute_capability] for port SAI_PORT_ATTR_TPID returns CREATE_IMP=false|SET_IMP=true|GET_IMP=true
CS00012182145 [IPinIP][Tunnel Delete] If IPinIP tunnel delete is performed observed following SYNCd error: ERR syncd#syncd: [none] SAI_API_TUNNEL:brcm_sai_tnl_mp_remove_tunnel_term_table_entry:4026 _brcm_sai_mptnl_sip_tnl_lookup failed with error -7.
CS00012182148 Rate Limit Parity error message to syncd/sonic.
CS00012178692 ACL drops counted as interface drops
CS00012183901 [WARMBOOT] WARMReboot with active traffic causes port flap reported during warm reboot
CS00012023263 TD3/TH2 : Support 4 lossless queues(2 SW PFCWD and 2 HW PFCWD)
CS00012019578 Pre FEC bit-error rate (BER) - DNX and XGS (TD and TH 50/100G)
Fix the determine-reboot-cause service, which was failing due to a NotImplementedError thrown from get_reboot_cause when the reboot was done with `sudo reboot` (cold reboot)
Signed-off-by: Volodymyr Boyko <volodymyrx.boiko@intel.com>
Why I did it
This PR adds changes in sonic-config-engine to consume configuration data in SONiC Yang schema and generate config_db entries
How I did it
Add a new file, sonic_yang_cfg_generator.
This file has the functions to
parse the yang data JSON and convert it into config_db JSON format.
Validate the converted config_db entries to make sure all dependencies and constraints are met.
Add a new option -Y to the sonic-cfggen command for this purpose
Add unit tests
This capability is supported only in the sonic-config-engine Python 3 package.
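A rough, hedged sketch of the conversion idea only; the input layout assumed here is the usual `module:container -> TABLE -> TABLE_LIST` shape of SONiC yang data JSON, and the function and key handling are illustrative, not the actual sonic_yang_cfg_generator code:
```
def yang_json_to_config_db(yang_data):
    """Flatten yang data JSON into config_db style {TABLE: {key: fields}}."""
    config_db = {}
    for container in yang_data.values():                 # e.g. "sonic-port:sonic-port"
        for table_name, table in container.items():
            table_name = table_name.split(":")[-1]       # strip module prefix -> "PORT"
            for list_name, entries in table.items():
                if not list_name.endswith("_LIST"):
                    continue                              # only list nodes carry entries
                for entry in list(entries):
                    entry = dict(entry)
                    # assume a single leaf named "name" is the table key
                    key = entry.pop("name", None)
                    config_db.setdefault(table_name, {})[key] = entry
    return config_db
```
The validation step and the new -Y option to sonic-cfggen described above sit on top of a conversion like this.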
Why I did it
* For SAI - Upgrade to Version 1.19.0
- Add support for VxLAN encap TTL uniform model on SPC2/3
- Add ACL entry actions set VRF, set do no learn, add VLAN ID, add VLAN priority
- Add ACL field has VLAN tag
- Bulk counters (improve port statistics performance)
- Create async dump extra as part of debug generate dump
- Create irisc dump on severe health event
- Support 0 port systems (modify get switch mac to work accordingly)
- Set interface vlan up state for ping tool in SONiC
- Support attributes SAI_PORT_ATTR_QOS_SCHEDULER_PROFILE_ID, SAI_PORT_ATTR_QOS_INGRESS_BUFFER_PROFILE_LIST,
SAI_PORT_ATTR_QOS_EGRESS_BUFFER_PROFILE_LIST, SAI_PORT_ATTR_POLICER_ID as part of port create
* For SDK\FW - Upgrade to Version SDK 4.4.3106, FW 2008_3110
Added Features:
- Increased ACL table
- Enhanced PSAMPLE support
- Added support for Finisar SR4 module in SN3700 systems
- Added support for Python 3.0 in examples.
Fixed bugs:
- On LR4 transceivers 00YD278, the firmware incorrectly identified the transceiver
- Reduce memory consumption for virtual LAG
- Fixed PSAMPLE listeners cleanup on SDK drivers unloading.
- On Spectrum-2 and Spectrum-3 systems, slow reaction time to Rx pause packets may lead to buffer overflow on servers.
- BER may be experienced when using 5m DAC cables between SN4700 and SN2700 in 100GbE speed.
- On very rare occasion, when connecting DR4 PAM4 transceiver to 100GbE DR1 NRZ, low BER may be experienced.
- Unexpected packet drops on the port ingress buffer may be experienced when working in 400GbE mode.
Note: When performing ISSU from an older version, this fix won't be applied. For the fix to apply, a non-ISSU reset is required.
- Fix SN3800 specific warm boot scenario: Disable interface, Warm Boot, Enable Interface --> link will remain down.
Signed-off-by: Dror Prital <drorp@nvidia.com>
Process pcied failed on Arista-7170-32CD-C32
```
root@sonic:/# supervisorctl
chassis_db_init EXITED Jun 03 08:48 AM
dependent-startup EXITED Jun 03 08:48 AM
ledd RUNNING pid 28, uptime 3:07:49
lm-sensors EXITED Jun 03 08:48 AM
pcied FATAL Exited too quickly (process log may have details)
```
Signed-off-by: Andriy Kokhan <andriyx.kokhan@intel.com>
This is due to the fact that we use SONIC_OVERRIDE_BUILD_VARS internally
in our build jobs, and this is not accounted for in the caching framework.
So we add MLNX_SDK_DEB_VERSION to force a rebuild if we change it via
SONIC_OVERRIDE_BUILD_VARS.
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
#### Why I did it
Use case:
export DOCKER_EXTRA_OPTS="--registry-mirror=https://some.host" - to avoid DockerHub pull rate limiting.
#### How I did it
Added DOCKER_EXTRA_OPTS
#### How to verify it
export DOCKER_EXTRA_OPTS="--registry-mirror=https://some.host"
make target/sonic-mellanox.bin
Why I did it
Quagga is no longer being used. Remove quagga-related code (e.g., docker-fpm-quagga, sonic-quagga, etc.).
How I did it
Remove quagga-related code.
Why I did it
portconfig.py gets the PORT table from config_db if it is present. If not, port_config.ini files are parsed.
For multi-ASIC platforms, if a namespace is passed to get_port_config(), the config_db connection was always made to the host namespace and not to the ASIC-specific namespace.
Provides fix for: #7161
How I did it
Modify the DB connection function to connect to the namespace-specific config_db.
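A hedged sketch of the shape of the fix, assuming the `ConfigDBConnector` namespace/unix-socket keyword arguments available on multi-ASIC images; this is not the literal portconfig.py change:
```
from swsssdk import ConfigDBConnector

def get_port_table(namespace=None):
    """Return the PORT table from the config_db of the given namespace,
    or from the host namespace when none is given."""
    if namespace:
        # connect inside the ASIC-specific namespace instead of the host one
        config_db = ConfigDBConnector(use_unix_socket_path=True, namespace=namespace)
    else:
        config_db = ConfigDBConnector()
    config_db.connect()
    return config_db.get_table("PORT")
```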
Why I did it
The 7050 S4Q31 MMU configuration is missing ALPM configurations, causing not enough memory to be reserved for routes. Orchagent crashes on a nightly testbed with 6400 route entries.
How I did it
Add the missing ALPM configurations.
How to verify it
Loaded the configuration on a testbed and verified that the new configuration exists and there are no more crashes.
Signed-off-by: Ying Xie ying.xie@microsoft.com
#### Why I did it
- After [sonic-linux-kernel#177](https://github.com/Azure/sonic-linux-kernel/pull/177) changes, the I2C mux channels of Baseboard and Switchboard CPLDs are moved from i2c-4 and i2c-5 to i2c-36 and i2c-37 respectively.
- This caused QSFP driver initialization of i2c-36 to i2c-41 to fail, causing the ports from Ethernet208 to Ethernet248 to fail.
#### How I did it
- The fix to this problem is to change the order of QSFP driver initialization on the I2C mux channels.
- Instead of the order i2c-10 to i2c-41, the order i2c-4 to i2c-35 is utilized (see the sketch after this list).
- Also, the i2c-mux-channel numbers for the Baseboard CPLD and Switchboard CPLD need to be changed in the scripts that access them.
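A small illustrative sketch of the new bus numbering; the helper name and the linear port-to-bus math are assumptions, not the platform plugin code:
```
QSFP_I2C_START_OLD = 10   # old mapping: i2c-10 .. i2c-41
QSFP_I2C_START_NEW = 4    # new mapping: i2c-4 .. i2c-35

def qsfp_i2c_bus(port_index, start=QSFP_I2C_START_NEW):
    """Return the I2C bus number used for a 0-based QSFP port index,
    keeping clear of i2c-36/i2c-37 now used by the CPLD mux channels."""
    return start + port_index
```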
Signed-off-by: Yong Zhao yozhao@microsoft.com
Why I did it
Currently we leverage Supervisor to monitor the running status of critical processes in each container, which is more reliable and flexible than doing the monitoring with Monit. So we removed the functionality of monitoring the critical processes with Monit.
How I did it
I removed the script process_checker and the corresponding Monit configuration entries for critical processes.
How to verify it
I verified this on the device str-7260cx3-acs-1.
#### Why I did it
On our platforms, syncd must be up while sonic_platform is used.
The issue is that the warm-reboot script first disables syncd and then instantiates Chassis, which tries to connect to syncd in __init__.
#### How I did it
Refactor Chassis to lazily initialize components.
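An illustrative sketch of the lazy-initialization pattern; the `Thermal` placeholder and the component count are assumptions, not the actual sonic_platform code:
```
NUM_THERMALS = 4  # placeholder count

class Thermal:
    """Placeholder component; the real one may need syncd/SAI to be up."""
    def __init__(self, index):
        self.index = index

class Chassis:
    def __init__(self):
        # no component objects are created here, so Chassis() can be
        # instantiated even while syncd is down (e.g. during warm-reboot)
        self._thermals = None

    def get_all_thermals(self):
        # build components lazily, on first access
        if self._thermals is None:
            self._thermals = [Thermal(i) for i in range(NUM_THERMALS)]
        return self._thermals
```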
Signed-off-by: Volodymyr Boyko <volodymyrx.boiko@intel.com>
#### Why I did it
Provide the possibility to specify curl options, as the present curl support in Azure/sonic does not extend capability for options like --user, which some of the Cisco artifacts require.
#### How I did it
Add extensions to the slave.mk file to include curl options as follows:
$($*_CURL_OPTIONS)
#### How to verify it
Option 1) Use curl -u and environment variables.
Pass it with the --user <user:password> curl options. Ex: --user foo:'bar!'
curl -u ${BASIC_AUTH_HEADER} https://foo.bar
This works to obscure the password/credential in a terminal session that someone else might see directly or via screen share.
Option 2) Use curl -n
If you run linux, create a ~/.netrc file and insert your creds there, and use curl -n.
chmod the file to 400. curl knows how to extract your creds from the file silently. You never have to type creds on the command line again.
If you run Windows and use curl, you must name the file _netrc. As on *nix, the file should be in your home directory and should have appropriate permissions.
For Administrative APIs, my .netrc file looks like this:
machine foobar-linux
login foo
password bar
Why I did it
In upgrade scenarios where config_db.json is not carried forward to the new image, it could be left without TACACS credentials.
Added a service to trigger 5 minutes after boot and restore TACACS, if /etc/sonic/old_config/tacacs.json is present.
How I did it
By adding a service that fires 5 minutes after boot.
This service applies the TACACS configuration if available.
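A minimal sketch of what such a restore step could look like; the paths and the sonic-cfggen/config invocations here are illustrative assumptions, not the actual service script:
```
import os
import subprocess

TACACS_JSON = "/etc/sonic/old_config/tacacs.json"

def restore_tacacs():
    """Apply saved TACACS credentials to the running config, if present."""
    if not os.path.exists(TACACS_JSON):
        return  # nothing to restore
    # push the saved TACACS tables into CONFIG_DB
    subprocess.check_call(["sonic-cfggen", "-j", TACACS_JSON, "--write-to-db"])
    # persist so the credentials survive the next reboot
    subprocess.check_call(["config", "save", "-y"])

if __name__ == "__main__":
    restore_tacacs()
```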
How to verify it
Upgrade and watch the status of tacacs.timer & tacacs.service.
You may create /etc/sonic/old_config/tacacs.json with updated credentials
(within 5 minutes after boot) and see that it appears in the config and is persisted too.
- Why I did it
migrate to python3 support
add dependent packages for Klish
allow login as non-root user
- How I did it
update sonic-cli script to start Klish with user name, system name and timeout
update the Dockerfile.j2 to resolve dependent packages
add python3-dev for Klish use
- How to verify it
Incremental buster build with Azure/sonic-mgmt-framework#76 and verify the sonic-cli
- Description for the changelog
Migrate to python3.7 support, update sonic-cli script and resolve package dependencies
Why I did it
Update sonic-sairedis submodule to include below commits:
0e2105a [vs]: Start syncd by passing context configuration file and global context index. (#832)
f931ae4 [VS] Add support for context and multiple switches (#830)
59208de [submodule] Update SAI submodule (#829)
77d44f5 [Mellanox] Update mellanoxs dump generation to include SDK dumps (#833)
4fb571b Generalizing config.bcm support for BRCM silicons (#693)
Signed-off-by: Suvarna Meenakshi <sumeenak@microsoft.com>
Why I did it
To enable syncd-rpc for Barefoot build
How I did it
Set the flag
How to verify it
ENABLE_SYNCD_RPC=y make configure PLATFORM=barefoot
ENABLE_SYNCD_RPC=y make all
Why I did it
For multi-ASIC platforms, the orchagent process in the swss docker is started by passing device_ids (or asic_ids).
Each swss docker starts orchagent with a different device_id. This device_id is passed as the hardware info to syncd. For syncd to start with the right hwinfo, context_config.json is passed as an argument; the context_config.json file is looked up to get the hwinfo information.
sonic-sairedis PRs required for this diff to be used to bring up multi-asic VS:
Azure/sonic-sairedis#830, Azure/sonic-sairedis#832
How I did it
Add context_config.json for each asic in the same structure as provided here: https://github.com/Azure/sonic-sairedis/blob/master/lib/src/context_config.json
Each ASIC's context_config.json will have a different hwinfo string.
The hwinfo string will be the same as the device id retrieved from the asic.conf file.
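A hedged sketch of how per-ASIC files could be generated; the asic.conf key names and the context_config.json key layout are assumptions based on the referenced master file, not the actual build logic:
```
import json

def read_asic_conf(path):
    """Parse asic.conf KEY=VALUE lines, e.g. NUM_ASIC=6, DEV_ID_ASIC_0=06:00.0."""
    conf = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                conf[key] = value
    return conf

def write_context_configs(asic_conf_path, template_path):
    """Emit one context_config<N>.json per ASIC, differing only in hwinfo."""
    asic_conf = read_asic_conf(asic_conf_path)
    with open(template_path) as f:
        template = json.load(f)   # structure per the referenced master context_config.json
    for asic_id in range(int(asic_conf["NUM_ASIC"])):
        ctx = json.loads(json.dumps(template))   # cheap deep copy
        # hwinfo is the device id of this ASIC from asic.conf (assumed key layout)
        ctx["CONTEXTS"][0]["switches"][0]["hwinfo"] = asic_conf["DEV_ID_ASIC_%d" % asic_id]
        with open("context_config%d.json" % asic_id, "w") as out:
            json.dump(ctx, out, indent=4)
```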
Signed-off-by: Suvarna Meenakshi <sumeenak@microsoft.com>
#### Why I did it
These methods were added to make some convenient platform and chassis information methods accessible through sonic-py-common. These methods were refactored from sonic-utilities and are used in the `show platform summary` and `show version` commands.
#### How I did it
There are two methods. One is `get_platform_info()`, which simply calls local methods to collect useful platform information into a dictionary format; this came directly from sonic-utilities.
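A short usage sketch, assuming the methods are exposed via `sonic_py_common.device_info` as the refactor suggests (exact dictionary keys depend on the implementation):
```
from sonic_py_common import device_info

# Collect platform summary fields into a dict, roughly what
# `show platform summary` consumes.
info = device_info.get_platform_info()
for field in ("platform", "hwsku", "asic_type"):
    print("{}: {}".format(field, info.get(field)))
```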