Why I did it
Need to enable DHCPv6 copp rule
How I did it
Add a separate DHCPv6 copp rule config file and load it during cold reboot.
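A minimal sketch of how such a separate rule file could be loaded, assuming it is applied with swssconfig the same way the base CoPP configuration is pushed into APPL_DB; the file path below is illustrative, not necessarily the one added by this change:

```python
#!/usr/bin/env python
# Sketch only: apply an extra CoPP rule file if the platform ships one.
import os
import subprocess

# Hypothetical location for the separate DHCPv6 CoPP rule file.
DHCPV6_COPP_FILE = "/etc/swss/config.d/dhcpv6.copp.config.json"

def load_dhcpv6_copp_rules():
    if not os.path.isfile(DHCPV6_COPP_FILE):
        return
    # swssconfig pushes the JSON entries into APPL_DB, mirroring how the
    # base CoPP configuration is loaded at startup.
    subprocess.check_call(["swssconfig", DHCPV6_COPP_FILE])

if __name__ == "__main__":
    load_dhcpv6_copp_rules()
```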
How to verify it
Cold reboot, then verify that the config is loaded and the DHCPv6 rules are installed.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
* Invoke disk check periodically. (#7374)
Why I did it
Adds a periodic scan of the disk for read-only (RO) state.
If RO state is found, the script applies a transient fix and raises an error message.
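A rough sketch of what such a periodic check could look like (the actual script and its remediation details may differ): detect a read-only root filesystem, attempt a remount as the transient fix, and log an error.

```python
#!/usr/bin/env python
# Sketch only: periodic read-only rootfs check with a transient remount fix.
import subprocess
import syslog

def rootfs_is_readonly():
    # /proc/mounts lists mount options; look for "ro" on the root filesystem.
    with open("/proc/mounts") as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 4 and fields[1] == "/":
                return "ro" in fields[3].split(",")
    return False

def check_and_fix():
    if not rootfs_is_readonly():
        return
    syslog.syslog(syslog.LOG_ERR, "Root filesystem is read-only; attempting remount")
    # Transient fix: remount read-write. The underlying disk fault still
    # needs operator attention, hence the error log above.
    subprocess.call(["mount", "-o", "remount,rw", "/"])

if __name__ == "__main__":
    check_and_fix()
```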
Why I did it
201811 branch image build has been failing due to the certificate expiring: https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021. This issue so far only affects the Jessie docker because it is using openssl 1.0.
How I did it
Remove the expired cert and refresh the certs bundle.
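One common way to do this on a Debian-based image is to deselect the expired DST Root CA X3 entry and rebuild the bundle; a hedged sketch (the build may do this differently, e.g. directly in the Dockerfile):

```python
#!/usr/bin/env python
# Sketch only: drop the expired DST Root CA X3 cert and rebuild the CA bundle.
import subprocess

CONF = "/etc/ca-certificates.conf"
EXPIRED = "mozilla/DST_Root_CA_X3.crt"

def refresh_ca_bundle():
    with open(CONF) as f:
        lines = f.readlines()
    # Prefixing an entry with '!' tells update-ca-certificates to exclude it.
    with open(CONF, "w") as f:
        for line in lines:
            if line.strip() == EXPIRED:
                line = "!" + line
            f.write(line)
    # Rebuild /etc/ssl/certs from scratch so the stale cert is removed.
    subprocess.check_call(["update-ca-certificates", "--fresh"])

if __name__ == "__main__":
    refresh_ca_bundle()
```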
How to verify it
Build image.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
Why I did it
The serial-getty service exited randomly on Dell S6100 devices.
How I did it
Added serial-getty to monit services.
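For illustration, roughly the check that the monit entry performs, expressed in Python; the unit instance name depends on the console device and is only an example here:

```python
#!/usr/bin/env python
# Sketch only: restart serial-getty if it is not running.
import subprocess

UNIT = "serial-getty@ttyS0.service"  # illustrative instance name

def ensure_serial_getty_running():
    # systemctl is-active exits non-zero when the unit is not running.
    if subprocess.call(["systemctl", "is-active", "--quiet", UNIT]) != 0:
        subprocess.check_call(["systemctl", "restart", UNIT])

if __name__ == "__main__":
    ensure_serial_getty_running()
```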
How to verify it
Stop serial-getty in an SSH session and check whether the service restarts.
Why I did it
In upgrade scenarios where config_db.json is not carried forward to the new image, the device could be left without TACACS credentials.
Added a service that triggers 5 minutes after boot and restores TACACS if /etc/sonic/old_config/tacacs.json is present.
How I did it
By adding a service that fires 5 minutes after boot.
This service applies the TACACS config if available.
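A sketch of what the service could do when it fires, assuming the saved file is applied and persisted with the standard config CLI; the real service script may differ:

```python
#!/usr/bin/env python
# Sketch only: restore saved TACACS config if the old image left one behind.
import os
import subprocess

TACACS_JSON = "/etc/sonic/old_config/tacacs.json"

def restore_tacacs():
    if not os.path.isfile(TACACS_JSON):
        return
    # Load the saved TACACS credentials into CONFIG_DB non-interactively.
    subprocess.check_call(["config", "load", "-y", TACACS_JSON])
    # Persist so the credentials survive the next reboot.
    subprocess.check_call(["config", "save", "-y"])

if __name__ == "__main__":
    restore_tacacs()
```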
How to verify it
Upgrade and watch the status of tacacs.timer & tacacs.service.
You may create /etc/sonic/old_config/tacacs.json with updated credentials
(before 5 minutes after boot) and see that it appears in the config and is persisted too.
Why I did it
The 7050 S4Q31 MMU configuration is missing ALPM configurations, so not enough memory is reserved for routes. Orchagent crashes on a nightly testbed with 6400 route entries.
How I did it
Add the missing ALPM configurations.
How to verify it
Loaded the configuration on a testbed and verified that the new configuration exists and there are no more crashes.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
This PR contains the following changes:
The original Arista-7050-QX-32S SKU (32x40G ports) has been renamed to Arista-7050QX32S-Q32.
Arista-7050-QX-32S is now symlinked to Arista-7050QX-32S-S4Q31 (4x10G, 31x40G ports).
Signed-off-by: Neetha John <nejo@microsoft.com>
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
Need proper MMU and QoS settings for Arista-7050QX-32S-S4Q31
How I did it
Updated the settings based on Arista-7050-QX-32S
Why I did it
Backport #7744
How to verify it
Ran sonic-cfggen on a minigraph and verified that an interface of type DeviceMgmtLink has its speed set in the PORT table from the bandwidth attribute in the minigraph.
Signed-off-by: Neetha John <nejo@microsoft.com>
Why I did it
PG profile settings need to be aligned with Arista-7050-QX-32S
How I did it
Copy over the current settings from Arista-7050-QX-32S and define params for 10G and 1G speeds as well
Signed-off-by: Neetha John <nejo@microsoft.com>
* Support readonly vtysh for sudoers (#7383)
Why I did it
Support a read-only version of the vtysh command.
How I did it
Check that the command starts with "show" and verify that the script contains only a single command.
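For illustration, the kind of read-only check described above; this is not the actual wrapper shipped in the image:

```python
#!/usr/bin/env python
# Sketch only: accept a vtysh invocation only if it runs a single 'show' command.

def is_readonly_vtysh_command(args):
    # Collect every value passed with -c/--command.
    commands = []
    i = 0
    while i < len(args):
        if args[i] in ("-c", "--command") and i + 1 < len(args):
            commands.append(args[i + 1].strip())
            i += 2
        else:
            i += 1
    # Exactly one command, and it must be a 'show' command.
    return len(commands) == 1 and commands[0].startswith("show")

# Example: allowed
assert is_readonly_vtysh_command(["-c", "show ip bgp summary"])
# Example: rejected (two commands, one of which is not a show command)
assert not is_readonly_vtysh_command(["-c", "show run", "-c", "configure terminal"])
```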
* Fix the type issue in rvtysh
Recent changes introduced the L2 VLAN concept; such VLANs have no DHCP
clients behind them, so DHCP relay is not required. Also,
dhcpmon fails to launch on those VLANs because their interfaces
lack IP addresses. This PR backports #6527, which limits the launch
of both DHCP relay and dhcpmon to L3 VLANs only.
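A rough illustration of the L3-VLAN gate: only start dhcrelay/dhcpmon for a VLAN that has an IP address configured, i.e. one that appears in VLAN_INTERFACE with a prefix. The sketch assumes swsssdk's ConfigDBConnector; the actual backport applies the check where the relay and dhcpmon processes are launched.

```python
#!/usr/bin/env python
# Sketch only: decide whether a VLAN is L3 (has an IP prefix configured).
from swsssdk import ConfigDBConnector

def is_l3_vlan(vlan_name):
    config_db = ConfigDBConnector()
    config_db.connect()
    vlan_intfs = config_db.get_table("VLAN_INTERFACE")
    # Keys with an IP prefix look like ("Vlan1000", "192.0.2.1/24").
    for key in vlan_intfs:
        if isinstance(key, tuple) and key[0] == vlan_name:
            return True
    return False

if __name__ == "__main__":
    print(is_l3_vlan("Vlan1000"))
```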
original-pr: #6527
Signed-off-by: Tamer Ahmed <tamer.ahmed@microsoft.com>
When submitting a new official build for broadcom or vs, an error message says the job is not defined.
It was caused by the default option "[]", which is not empty and is used as the jobGroups parameter.
Why I did it
Improve the version of the Pull Request build by changing the local branch name.
How I did it
Change the default local branch name "merge" to [target_branch_name]-[pullrequestid].
How to verify it
For an official build, the version is not changed.
For a pull request build, the version reflects the [target_branch_name]-[pullrequestid] branch name.
Why I did it
Fix the boolean value case-sensitivity issue in Azure Pipelines.
When passing parameters to a template, "true" or "false" values hit a case-sensitivity issue; it appears to be a type-casting issue.
To fix it, we change true/false to yes/no to escape the trap.
Support overriding the job groups in the template, so the PR build has a chance to use different build parameters and only build simple targets. For example, for broadcom we only build target/sonic-broadcom.bin; the other images, such as the swi and debug bin, will not be built.
Why I did it
The centec-arm64 build failed because there was no space left in the docker data root.
How I did it
Change the docker data root to a folder under /data.
See the details about DOCKER_DATA_ROOT_FOR_MULTIARCH in the file Makefile.work.
How to verify it
Set the environment variable, then /data is used as the docker data root.
Why I did it
Fix the wrong build options and improve display name for some tasks
1. Fix the wrong build architecture for arm64 and armhf
2. Fix the build timeout parameter
Why I did it
To monitor the SSD health condition on the DellEMC S6100 platform post upgrade.
A daemon is introduced to monitor the SSD every hour (see the sketch after this entry).
It also checks the SSD status at boot time and at the time of cold reboot.
All these changes are supported only for newer SSD firmware.
Porting changes from 201911 branch
Added a platform_reboot_pre_check script to prevent cold-reboot based on SSD status.
Depends on Azure/sonic-utilities#1557
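A hedged sketch of an hourly SSD health poll; the device path, interval handling, and health criteria are illustrative, and the platform daemon in the PR applies its own firmware-specific checks:

```python
#!/usr/bin/env python
# Sketch only: hourly SSD health poll via smartctl.
import subprocess
import syslog
import time

SSD_DEVICE = "/dev/sda"   # illustrative device node
POLL_INTERVAL = 3600      # one hour, matching the description above

def ssd_is_healthy():
    # 'smartctl -H' reports the drive's overall SMART health assessment;
    # it exits non-zero when the assessment fails, hence the except branch.
    try:
        output = subprocess.check_output(["smartctl", "-H", SSD_DEVICE])
    except subprocess.CalledProcessError as err:
        output = err.output or b""
    return b"PASSED" in output or b"OK" in output

def main():
    while True:
        if not ssd_is_healthy():
            syslog.syslog(syslog.LOG_ERR,
                          "SSD health check failed on %s" % SSD_DEVICE)
        time.sleep(POLL_INTERVAL)

if __name__ == "__main__":
    main()
```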
#### Why I did it
- xcvrd crash was seen in the latest 201811 images.
- For Dell S6100, API 2.0 uses poll mode while 1.0 was still using interrupt mode.
#### How I did it
- Modified get_transceiver_change_event in 1.0 to use poll mode in all the related branches.
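A sketch of what a poll-mode get_transceiver_change_event can look like in the 1.0 plugin model: periodically compare cached SFP presence with the current state and report the ports that changed. get_presence() and the port range here are placeholders for the platform plugin's own helpers.

```python
#!/usr/bin/env python
# Sketch only: poll-based transceiver change detection.
import time

class SfpUtilPollSketch(object):
    PORT_START = 0
    PORT_END = 31

    def __init__(self):
        self._presence = {p: False for p in range(self.PORT_START, self.PORT_END + 1)}

    def get_presence(self, port):
        # Placeholder: the real plugin reads presence from CPLD/sysfs.
        raise NotImplementedError

    def get_transceiver_change_event(self, timeout=0):
        poll_interval = 1.0  # seconds between presence scans
        deadline = time.time() + (timeout / 1000.0 if timeout else float("inf"))
        while time.time() < deadline:
            changed = {}
            for port in range(self.PORT_START, self.PORT_END + 1):
                present = self.get_presence(port)
                if present != self._presence[port]:
                    self._presence[port] = present
                    changed[str(port)] = "1" if present else "0"
            if changed:
                return True, changed
            time.sleep(poll_interval)
        return True, {}  # timeout expired with no change
```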
Why I did it
sonic snmp subagent build was failing recently.
How I did it
Install python-arptable and psutil in the sonic slave docker to address the sonic snmp subagent build issue.
How to verify it
Build the 201811 image locally.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
Why I did it
For the 201811 image build, the swsssdk wheel package is not getting built due to a redis==2.10.6 requirement issue. Noticed that after upgrading pip to 20.3.3, we are experiencing this issue.
The issue:
21:10:41 /sonic/src/sonic-py-swsssdk /sonic
21:10:41 running test
21:10:41 Searching for redis==2.10.6
21:10:41 Reading https://pypi.python.org/simple/redis/
21:10:41 Couldn't find index page for 'redis' (maybe misspelled?)
21:10:41 Scanning index of all packages (this may take a while)
21:10:41 Reading https://pypi.python.org/simple/
21:10:41 No local packages or download links found for redis==2.10.6
21:10:41 error: Could not find suitable distribution for Requirement.parse('redis==2.10.6')
21:10:41 [ FAIL LOG END ] [ target/python-wheels/swsssdk-2.0.1-py3-none-any.whl ]
21:10:41 slave.mk:422: recipe for target 'target/python-wheels/swsssdk-2.0.1-py3-none-any.whl' failed
21:10:41 make: *** [target/python-wheels/swsssdk-2.0.1-py3-none-any.whl] Error 1
21:10:43 Makefile.work:132: recipe for target 'target/sonic-aboot-broadcom.swi' failed
21:10:43 make[1]: *** [target/sonic-aboot-broadcom.swi] Error 2
21:10:43 make[1]: Leaving directory '/data/johnar/workspace/broadcom/buildimage-brcm-201811'
21:10:43 Makefile:6: recipe for target 'target/sonic-aboot-broadcom.swi' failed
From what I have understood so far: if pip v20.3.3 produces a wheel that is not installable, for example because it raises pip._vendor.packaging.version.InvalidVersion or a similar error (resource -> pypa/pip#9206), it just raises an exception when building the wheel.
In short, this issue occurs because we pinned pip to 20.3.3.
As of now, there are two possible solutions:
1. Pin pip to 20.3.3 (which it is) and explicitly install the packages.
2. Pin pip to 20.3.4 (pip wheel now verifies that the built wheel contains valid metadata and can be installed by a subsequent pip install; resource -> https://pip.pypa.io/en/stable/news/) -- not tried yet.
How I did it
Install nose explicitly for mockredispy
Install redis==2.10.6 for swsssdk tests.
How to verify it
Run local build after removing all previously built dockers.
* [build] Fix the snmp docker build error. (#7452)
The issue is that get-pip.py has moved to pip 21.1 (https://github.com/pypa/get-pip/commits/main), which is not compatible with Python 3.6.
The pip issue itself is fixed as part of 21.1.1 in the pip community (pypa/pip#9835).
However, get-pip.py is still not updated to the latest pip. Also, get-pip.py does not explicitly support Python 3.6 (pypa/get-pip#88).
Step 15/29 : RUN curl https://bootstrap.pypa.io/get-pip.py | python3.6
---> Running in bece31f49267
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 1891k 100 1891k 0 0 9564k 0 --:--:-- --:--:-- --:--:-- 9600k
Traceback (most recent call last):
File "<stdin>", line 24298, in <module>
File "<stdin>", line 139, in main
File "<stdin>", line 115, in bootstrap
File "<stdin>", line 96, in monkeypatch_for_cert
File "/tmp/tmp5fnxrz0a/pip.zip/pip/_internal/commands/__init__.py", line 9, in <module>
File "/tmp/tmp5fnxrz0a/pip.zip/pip/_internal/cli/base_command.py", line 12, in <module>
File "/tmp/tmp5fnxrz0a/pip.zip/pip/_internal/cli/cmdoptions.py", line 30, in <module>
File "/tmp/tmp5fnxrz0a/pip.zip/pip/_internal/utils/hashes.py", line 2, in <module>
ImportError: cannot import name 'NoReturn'
The command '/bin/sh -c curl https://bootstrap.pypa.io/get-pip.py | python3.6' returned a non-zero code: 1
How I did it:
Got the file from https://github.com/pypa/get-pip/tree/21.0 and added it to the buildimage.
Pinned pip to the previous release 21.0.1. (Similar is done in other public repos, e.g. grpc/grpc-java#8115.)
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
Why I did it
The 201811 build was failing because a newly added bcm config file keyword was not on the permitted list.
How I did it
Add phy_an_lt_msft to the permitted list.
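For illustration, the general shape of such a permitted-keyword check over a Broadcom config file; the allowlist contents and file path here are examples only, not the actual build check:

```python
#!/usr/bin/env python
# Sketch only: verify every 'key=value' keyword in a config.bcm is allowed.

PERMITTED_KEYWORDS = {
    "phy_an_lt_msft",   # the keyword added by this change
    # ... the real list contains the full set of allowed SOC properties
}

def check_bcm_config(path):
    unknown = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key = line.split("=", 1)[0].strip()
            if key not in PERMITTED_KEYWORDS:
                unknown.append(key)
    return unknown

if __name__ == "__main__":
    print(check_bcm_config("config.bcm"))  # example invocation
```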
How to verify it
Build device package.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>