Why I did it
This PR adds changes to sonic-config-engine to consume configuration data in the SONiC YANG schema and generate config_db entries
How I did it
Add a new file, sonic_yang_cfg_generator.
This file has the functions to:
parse YANG data JSON and convert it into config_db JSON format (see the sketch below);
validate the converted config_db entries to make sure all the dependencies and constraints are met.
Add a new option -Y to the sonic-cfggen command for this purpose
Add unit tests
This capability is supported only in the sonic-config-engine Python 3 package.
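For illustration, a simplified sketch of the kind of transformation involved (this is not the actual sonic_yang_cfg_generator API; the module, table and key names are only examples):
```
# Hedged sketch: convert YANG-style JSON (module/container/*_LIST wrappers) into
# config_db-style JSON (table -> key -> field/value). Not the real implementation.
def yang_to_config_db(yang_data):
    config_db = {}
    for module in yang_data.values():                       # e.g. "sonic-vlan:sonic-vlan"
        for table_name, table in module.items():            # e.g. "VLAN"
            entries = table.get(table_name + "_LIST", [])   # YANG lists use a *_LIST node
            config_db.setdefault(table_name, {})
            for entry in entries:
                key = entry.pop("name")                      # assumes "name" is the list key
                config_db[table_name][key] = entry
    return config_db

sample = {
    "sonic-vlan:sonic-vlan": {
        "VLAN": {"VLAN_LIST": [{"name": "Vlan100", "vlanid": "100"}]}
    }
}
print(yang_to_config_db(sample))   # {'VLAN': {'Vlan100': {'vlanid': '100'}}}
```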
Signed-off-by: Yong Zhao yozhao@microsoft.com
Why I did it
Currently we leverage Supervisor to monitor the running status of critical processes in each container, and it is more reliable and flexible than doing the monitoring with Monit. So we removed the functionality of monitoring critical processes with Monit.
How I did it
I removed the script process_checker and the corresponding Monit configuration entries for critical processes.
How to verify it
I verified this on the device str-7260cx3-acs-1.
Why I did it
In upgrade scenarios where config_db.json is not carried forward to the new image, the device could be left without TACACS credentials.
Added a service that triggers 5 minutes after boot and restores TACACS if /etc/sonic/old_config/tacacs.json is present.
How I did it
By adding a service that fires 5 minutes after boot.
This service applies the TACACS configuration if available.
How to verify it
Upgrade and watch the status of tacacs.timer and tacacs.service.
You may create /etc/sonic/old_config/tacacs.json with updated credentials
(within 5 minutes of boot) and see that it appears in the config and is persisted too.
Signed-off-by: Yong Zhao yozhao@microsoft.com
Why I did it
This PR aims to monitor the memory usage of the streaming telemetry container and restart it if the memory usage is larger than the pre-defined threshold.
How I did it
I used the system tool Monit to run a script, memory_checker, which periodically checks the memory usage of the streaming telemetry container. If the memory usage of the telemetry container exceeds the pre-defined threshold 10 times within 20 cycles, an alert message is written to syslog and Monit runs the restart_service script to restart the streaming telemetry container.
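A minimal sketch of the kind of check such a script performs (the container name, threshold value and cgroup v1 counter path are illustrative assumptions, not the real memory_checker defaults):
```
#!/usr/bin/env python3
# Hedged sketch: exit non-zero if the telemetry container exceeds a memory threshold,
# so that Monit can count failures ("10 times within 20 cycles") and run restart_service.
import subprocess
import sys

THRESHOLD_BYTES = 400 * 1024 * 1024   # example threshold only

def container_memory_bytes(name):
    container_id = subprocess.check_output(
        ["docker", "inspect", "--format", "{{.Id}}", name], text=True).strip()
    path = "/sys/fs/cgroup/memory/docker/{}/memory.usage_in_bytes".format(container_id)
    with open(path) as f:                # assumes cgroup v1 memory accounting
        return int(f.read())

if __name__ == "__main__":
    usage = container_memory_bytes("telemetry")
    if usage > THRESHOLD_BYTES:
        print("telemetry container memory usage {} exceeds threshold".format(usage))
        sys.exit(3)                      # failing status for Monit to count
    sys.exit(0)
```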
How to verify it
I verified this implementation on device str-7260cx3-acs-1.
Signed-off-by: Stepan Blyschak stepanb@nvidia.com
This PR is part of SONiC Application Extension
Depends on #5938
- Why I did it
To provide an infrastructure change in order to support SONiC Application Extension feature.
- How I did it
Label every installable SONiC Docker with a minimal required manifest and auto-generate packages.json file based on
installed SONiC images.
- How to verify it
Build an image, execute the following command:
admin@sonic:~$ docker inspect docker-snmp:1.0.0 | jq '.[0].Config.Labels["com.azure.sonic.manifest"]' -r | jq
cat the /var/lib/sonic-package-manager/packages.json file to verify all dockers are listed there.
Fixes #7364
99-default.link was always present in SONiC, but previous systemd versions (<247) had an issue and it did not take effect due to systemd/systemd#3374. With systemd 247 it now works.
However, this policy overrides the teamd-provided MAC address, which causes the teamd netdev to use a random MAC
address. Therefore, it needs to be disabled.
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
#### Why I did it
To build flashrom properly with dependency tracking.
#### How I did it
Moved flashrom code from platform/broadcom/sonic-platform-modules-dell/tools directory to src/flashrom directory.
At the end, the flashrom_0.9.7_amd64.deb package is built, which will be installed on the devices.
- Support compiling the SONiC ARM image on an ARM server. If ARM image compilation is executed on an ARM server instead of using QEMU mode on an x86 server, compile time can be reduced significantly.
- Add kernel argument systemd.unified_cgroup_hierarchy=0 for the upgrade of systemd to version 247, according to #7228
- rename multiarch docker to sonic-slave-${distro}-march-${arch}
Co-authored-by: Xianghong Gu <xgu@centecnetworks.com>
Co-authored-by: Shi Lei <shil@centecnetworks.com>
- Why I did it
To allow SONiC Package Migration during SONiC-to-SONiC upgrade, we need to start the Docker daemon in a chroot-ed environment in the new SONiC filesystem.
Later this script will be used to start dockerd in a chroot environment on SONiC.
- How I did it
Install a docker service script into /usr/lib/docker/ in the SONiC filesystem.
- How to verify it
Install SONiC image on the switch, mount squashfs to some directory, mount overlay rw layer over squashfs, mount procfs and sysfs, mount docker library. Start the docker using:
root@sonic:~$ /usr/lib/docker/docker.sh start
Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
Why I did it
We skip installation of the CNI plugin, as we don't need it. But this leaves the node in a "not ready" state upon joining the master.
To fix this, we copy a dummy .conf file into /etc/cni/net.d.
How I did it
Keep this file in /usr/share/sonic/templates and copy it to /etc/cni/net.d upon joining the k8s master.
How to verify it
Upon configuring the master IP and enabling join, watch the node join and move to the ready state.
You may verify using the kubectl get nodes command.
SONiC Package Manager will need to auto-generate the start script using that template. For that, we need this template to be recorded in the SONiC filesystem.
Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
- Why I did it
Group all SONiC services together so that they can be managed together. This will be used in the config reload command as a much simpler and more generic way to restart services.
- How I did it
Add services to sonic.target
- How to verify it
Together with Azure/sonic-utilities#1199
config reload -y
Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
- Why I did it
The pcie configuration file is located under the plugin directory, not under the platform directory.
#6437
- How I did it
Move all pcie.yaml configuration files from the plugin directory to the platform directory.
Remove unnecessary timer to start pcie-check.service
Move pcie-check.service to sonic-host-services
- How to verify it
Verify on the device
Fix marvell-armhf build break
The azure-storage package depends on the cryptography package. Newer
versions of cryptography require the rust compiler, the correct version
for which is not readily available in buster. Hence we pre-install an
older version here to satisfy the azure-storage dependency.
Note: This is not a problem for other architectures as pre-built versions
of cryptography are available for those. This sequence can be removed
after upgrading to debian bullseye.
- Why I did it
To move 'sonic-host-service', which is currently built as a separate package, into the 'sonic-host-services' package.
- How I did it
- Moved 'sonic-host-server' to 'src/sonic-host-services' and included it as part of the python3 wheel.
- Other files were moved to 'src/sonic-host-services-data' and included as part of the deb package.
- Changed build option ‘INCLUDE_HOST_SERVICE’ to ‘ENABLE_HOST_SERVICE_ON_START’ for enabling sonic-hostservice at boot-up by default.
**- Why I did it**
This PR aims to monitor the running status of each container. Currently the auto-restart feature is enabled. If a critical process exits unexpectedly, the container will be restarted. If the container is restarted 3 times during 20 minutes, then it will not run anymore unless we clear the flag using the command `sudo systemctl reset-failed <container_name>` manually.
**- How I did it**
We will employ Monit to monitor a script. This script will generate the list of containers expected to be running and compare it with the currently running containers. If there are containers which were expected to run but are not running, an alert message will be written to syslog.
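A minimal sketch of the comparison such a script performs (single-ASIC case only; the use of swsssdk's ConfigDBConnector and the helper names are illustrative, not the script's real structure):
```
#!/usr/bin/env python3
# Hedged sketch: compare containers expected to run (FEATURE table) with the
# containers actually running, and log the difference to syslog.
import subprocess
import syslog
from swsssdk import ConfigDBConnector

def expected_containers():
    config_db = ConfigDBConnector()
    config_db.connect()
    features = config_db.get_table("FEATURE")
    return {name for name, attrs in features.items() if attrs.get("state") == "enabled"}

def running_containers():
    out = subprocess.check_output(["docker", "ps", "--format", "{{.Names}}"], text=True)
    return set(out.split())

if __name__ == "__main__":
    missing = expected_containers() - running_containers()
    if missing:
        syslog.syslog(syslog.LOG_ERR,
                      "Expected containers not running: {}".format(", ".join(sorted(missing))))
```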
**- How to verify it**
I tested this feature on the lab device `str-a7050-acs-3`, which has a single ASIC, and `str2-n3164-acs-3`, which is a multi-ASIC device. First I manually stopped a container by running the command `sudo systemctl stop <container_name>`, then I checked whether there was an alert message in the syslog.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
- Make PDDF code compliant with both Python 2 and Python 3
- Align code with PEP8 standards using autopep8
- Build and install both Python 2 and Python 3 PDDF packages
* First cut image update for kubernetes support.
With this,
1) dockers dhcp_relay, lldp, pmon, radv, snmp, telemetry are enabled
for kube management;
init_cfg.json configures set_owner as kube for these.
2) Each docker's start.sh is updated to call container_startup.py to register going up.
As part of this call, it registers the current owner as local/kube and its version.
The images are built with their version embedded into the image during build.
3) Update all dockers' bash scripts to call 'container start/stop/wait' instead of 'docker start/stop/wait'.
For all locally managed containers, it calls docker commands, hence no change for locally managed containers.
4) Introduced a new ctrmgrd service that helps with the transition between kube and local owners and carries over any label updates from STATE_DB to the API server.
5) hostcfgd updated to handle owner change.
6) Reboot scripts are updated to tag kube-managed running images as local, so upon reboot they run the same image.
7) Added kube_commands.py to handle all updates with the Kubernetes API server -- dedicated to k8s interaction only.
Added source interface support for NTP.
Also made NTP start on Mgmt-VRF by default when configured.
**- How I did it**
1) Updated hostcfgd to listen to the global config NTP and NTP_SERVER tables and restart ntp whenever the configuration changes (see the sketch after this list). The NTP table includes the source interface configuration.
2) The ntp script is updated to start on Mgmt-VRF by default when configured.
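A rough sketch of the ConfigDB subscription pattern used for this (the handler body and restart commands are simplified for illustration):
```
# Hedged sketch: subscribe to the NTP-related ConfigDB tables and restart ntp on change.
import subprocess
from swsssdk import ConfigDBConnector

def ntp_handler(table, key, data):
    # data is None on delete; any change to NTP or NTP_SERVER triggers a restart here
    subprocess.call(["systemctl", "restart", "ntp-config"])
    subprocess.call(["systemctl", "restart", "ntp"])

config_db = ConfigDBConnector()
config_db.connect(wait_for_init=True, retry_on=True)
for table in ("NTP", "NTP_SERVER"):
    config_db.subscribe(table, ntp_handler)
config_db.listen()   # blocks, dispatching table changes to the registered handlers
```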
Signed-off-by: Prabhu Sreenivasan <prabhu.sreenivasan@broadcom>
Install the 'wheel' package in host OS (along with python3 and python3-distutils which are also needed for building some Python packages) to eliminate error messages like the following:
```
Running setup.py bdist_wheel for watchdog: started
Running setup.py bdist_wheel for watchdog: finished with status 'error'
Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-Qd3K08/watchdog/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-0AHpMe --python-tag cp27:
usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: -c --help [cmd1 cmd2 ...]
or: -c --help-commands
or: -c cmd --help
error: invalid command 'bdist_wheel'
----------------------------------------
Failed building wheel for watchdog
```
These error messages appear to have no impact on the image build, because the Python package seems to still get installed successfully afterward, just the building of a wheel package fails. Therefore, this is more of a cosmetic fix than an actual bug.
This is an addendum to https://github.com/Azure/sonic-buildimage/pull/6182.
Also upgrade pip and install more recent version of setuptools package via PyPI.
libxslt-dev and libz-dev are dependencies for lxml==4.6.1 which is required for pyangbind==0.8.1
lxml-4.6.2-cp37-cp37m-manylinux1_x86_64.whl is downloaded directly on amd64, whereas on arm it is built from lxml-4.6.2.tar.gz.
Signed-off-by: Sabareesh Kumar Anandan <sanandan@marvell.com>
**- Why I did it**
To support dynamic buffer calculation.
This PR also depends on the following PRs for sub modules
- [sonic-swss: [buffermgr/bufferorch] Support dynamic buffer calculation #1338](https://github.com/Azure/sonic-swss/pull/1338)
- [sonic-swss-common: Dynamic buffer calculation #361](https://github.com/Azure/sonic-swss-common/pull/361)
- [sonic-utilities: Support dynamic buffer calculation #973](https://github.com/Azure/sonic-utilities/pull/973)
**- How I did it**
1. Introduce field `buffer_model` in `DEVICE_METADATA|localhost` to represent which buffer model is currently running in the system (see the sketch after this list):
- `dynamic` for the dynamic buffer calculation model
- `traditional` for the traditional model in which the `pg_profile_lookup.ini` is used
2. Add the tables required for the feature:
- ASIC_TABLE in platform/\<vendor\>/asic_table.j2
- PERIPHERAL_TABLE in platform/\<vendor\>/peripheral_table.j2
- PORT_PERIPHERAL_TABLE on a per-platform basis in device/\<vendor\>/\<platform\>/port_peripheral_config.j2 for each platform with gearbox installed.
- DEFAULT_LOSSLESS_BUFFER_PARAMETER and LOSSLESS_TRAFFIC_PATTERN in files/build_templates/buffers_config.j2
- Add lossless PGs (3-4) for each port in files/build_templates/buffers_config.j2
3. Copy the newly introduced j2 files into the image and render them when the system starts
4. Update the CLI options for buffermgrd so that it can start with dynamic mode
5. Fetch the ASIC vendor name in orchagent:
- fetch the vendor name when creating the docker container and pass it as a docker environment variable
- `buffermgrd` can use this passed-in variable
6. Clear buffer related tables from STATE_DB when swss docker starts
7. Update the src/sonic-config-engine/tests/sample_output/buffers-dell6100.json according to the buffer_config.j2
8. Remove buffer pool sizes for ingress pools and egress_lossy_pool
Update the buffer settings for dynamic buffer calculation
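For illustration, a small sketch of how the new field can be read from CONFIG_DB (the access pattern only, not buffermgrd's actual code):
```
# Hedged sketch: read the buffer model selector introduced in DEVICE_METADATA|localhost.
from swsssdk import ConfigDBConnector

config_db = ConfigDBConnector()
config_db.connect()
metadata = config_db.get_entry("DEVICE_METADATA", "localhost")
buffer_model = metadata.get("buffer_model", "traditional")

if buffer_model == "dynamic":
    print("dynamic buffer calculation model is active")
else:
    print("traditional model: pg_profile_lookup.ini is used")
```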
Python 2 is end-of-life and SONiC is moving to Python 3. This PR adds:
1. Python 3 support for the Mellanox SONiC platform API
2. Installation of both the Python 2 and Python 3 versions of the Mellanox SONiC platform API on the pmon and host side
The ntp-systemd-wrapper file from files/image_config/ntp was not getting picked up. Added a line in sonic_debian_extension.j2 to copy over the file from files/image_config/ntp after installing the debian package.
Signed-off-by: Prabhu Sreenivasan <prabhu.sreenivasan@broadcom.com>
- Allow platform specific reboot script to be called after crash kernel has
finished copying the kernel vmcore
- Disable pcie advanced features when running crash kernel. This improves
reliability of the crash kernel to successfully create a vmcore and also
reboot
- Allow crash kernel to reboot if a panic is seen while it is generating a
vmcore
- Fix crash kernel to use the SONiC specific /usr/local/bin/reboot script
instead of the Linux reboot command /sbin/reboot
- Use sonic_platform as the kernel command line parameter to pass platform identifier string
Signed-off-by: Rajendra Dendukuri <rajendra.dendukuri@broadcom.com>
Barefoot platform vendors' sonic_platform packages import the Python 'thrift' library. Previously, our custom-built package was being installed in the PMon container and host OS. However, we are only building a Python 2 version of that package, which was only intended for use with saithrift.
Fixes #6077
Submodule updates include the following commits:
* src/sonic-utilities 9dc58ea...f9eb739 (18):
> Remove unnecessary calls to str.encode() now that the package is Python 3; Fix deprecation warning (#1260)
> [generate_dump] Ignoring file/directory not found Errors (#1201)
> Fixed porstat rate and util issues (#1140)
> fix error: interface counters is mismatch after warm-reboot (#1099)
> Remove unnecessary calls to str.decode() now that the package is Python 3 (#1255)
> [acl-loader] Make list sorting compliant with Python 3 (#1257)
> Replace hard-coded fast-reboot with variable. And some typo corrections (#1254)
> [configlet][portconfig] Remove calls to dict.has_key() which is not available in Python 3 (#1247)
> Remove unnecessary conversions to list() and calls to dict.keys() (#1243)
> Clean up LGTM alerts (#1239)
> Add 'requests' as install dependency in setup.py (#1240)
> Convert to Python 3 (#1128)
> Fix mock SonicV2Connector in python3: use decode_responses mode so caller code will be the same as python2 (#1238)
> [tests] Do not trim from PATH if we did not append to it; Clean up/fix shebangs in scripts (#1233)
> Updates to bgp config and show commands with BGP_INTERNAL_NEIGHBOR table (#1224)
> [cli]: NAT show commands newline issue after migrated to Python3 (#1204)
> [doc]: Update Command-Reference.md (#1231)
> Added 'import sys' in feature.py file (#1232)
* src/sonic-py-swsssdk 9d9f0c6...1664be9 (2):
> Fix: no need to decode() after redis client scan, so it will work for both python2 and python3 (#96)
> FieldValueMap `contains`(`in`) will also work when migrated to libswsscommon(C++ with SWIG wrapper) (#94)
- Also fix Python 3-related issues:
- Use integer (floor) division in config_samples.py (sonic-config-engine)
- Replace print statement with print function in eeprom.py plugin for x86_64-kvm_x86_64-r0 platform
- Update all platform plugins to be compatible with both Python 2 and Python 3
- Remove shebangs from plugins files which are not intended to be executable
- Replace tabs with spaces in Python plugin files and fix alignment, because Python 3 is more strict
- Remove trailing whitespace from plugins files
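As a small illustration of the Python 3 differences addressed above (generic examples, not the actual plugin code):
```
# Hedged examples of the Python 2 -> 3 changes described above.
# Integer (floor) division: in Python 3, "/" returns a float, so "//" is needed
# wherever an integer result is expected.
half = 32 // 2                     # 16 (int); 32 / 2 would be 16.0 in Python 3

# print is a function in Python 3, not a statement.
print("eeprom decoded")            # Python 2's `print "..."` is a SyntaxError in Python 3
```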
- Why I did it
Add reboot history to STATE_DB so that it can be used by the telemetry service
- How I did it
Split the process-reboot-cause service into determine-reboot-cause and process-reboot-cause:
determine-reboot-cause determines the reboot cause
process-reboot-cause parses the reboot cause files and puts the reboot history into STATE_DB
Moved to the sonic-host-service* packages
- How to verify it
Performed unit test and tested on DUT
Summary: Move teamd functions to a new service script
Motivation: To segregate teamd functions in one common place. fast-reboot script calls teamd functions that should ideally be replaced by a simple call to a service script.
Changes: New teamd service script and path modification from /usr/bin/teamd.sh to /usr/local/bin/teamd.sh
fast-reboot script (in sonic-utilities) modification (to use new teamd.sh to stop teamd) should follow soon after this change.
Verification: VS image tests.
Signed-off-by: Vaibhav Hemant Dixit <vaibhav.dixit@microsoft.com>
Co-authored-by: heidi.ou@alibaba-inc.com <heidi.ou@alibaba-inc.com>
Co-authored-by: Ying Xie <ying.xie@microsoft.com>
This change introduces PDDF which is described here: https://github.com/Azure/SONiC/pull/536
Most of the platform bring-up effort goes into developing the platform device drivers and SONiC platform APIs, and validating them. Typically each platform vendor writes their own drivers and platform APIs, which are very much tailor-made for that platform. This involves writing code, building, installing it on the target platform devices and testing. Many of the details of the platform are hard-coded into these drivers, from the HW spec. They go through this cycle repetitively till everything works fine and is validated before upstreaming the code.
PDDF aims to make this platform driver and platform APIs development process much simpler by providing a data driven development framework. This is enabled by:
JSON descriptor files for platform data
Generic data-driven drivers for various devices
Generic SONiC platform APIs
Vendor specific extensions for customisation and extensibility
Signed-off-by: Fuzail Khan <fuzail.khan@broadcom.com>
To consolidate host services and install via packages instead of file-by-file, also as part of migrating all of SONiC to Python 3, as Python 2 is no longer supported.
- Convert core_uploader.py script to Python 3
- Use logger from sonic-py-common for uniform logging
- Reorganize imports alphabetically per PEP8 standard
- Two blank lines precede functions per PEP8 standard
- Remove unnecessary global variable declarations
* Build and install openssh from source
* Copy openssh deb package to dest folder
* Update make rule
* Update sonic debian extension
* Append empty line before EOF
* Update openssh patch
* Add openssh-server to base image dependency
* Fix indent type
* Fix comments
* Use commit id instead of tag id and add comment
Signed-off-by: Jing Kan jika@microsoft.com
As part of the transition from Python 2 to Python 3, we are installing both pip2 and pip3 in the slave and config-engine containers. This PR replaces calls to `pip` in these containers with an explicit call to `pip2` to ensure the proper version of pip is executed, no matter which version of pip is aliased to `pip`, as we no longer rely on that alias.
Also some other pip-related cleanup
To consolidate host services and install via packages instead of file-by-file, also as part of migrating all of SONiC to Python 3, as Python 2 is no longer supported, convert caclmgrd to Python 3 and add to sonic-host-services package
**- Why I did it**
Install all host services and their data files in package format rather than file-by-file
**- How I did it**
- Create sonic-host-services Python wheel package, currently including procdockerstatsd
- Also add the framework for unit tests by adding one simple procdockerstatsd test case
- Create sonic-host-services-data Debian package which is responsible for installing the related systemd unit files to control the services in the Python wheel. This package will also be responsible for installing any Jinja2 templates and other data files needed by the host services.
bring up chassisdb service on sonic switch according to the design in
Distributed Forwarding in VoQ Arch HLD
Signed-off-by: Honggang Xu <hxu@arista.com>
**- Why I did it**
To bring up new ChassisDB service in sonic as designed in ['Distributed forwarding in a VOQ architecture HLD' ](90c1289eaf/doc/chassis/architecture.md).
**- How I did it**
Implement the section 2.3.1 Global DB Organization of the VOQ architecture HLD.
**- How to verify it**
ChassisDB service won't start without chassisdb.conf file on the existing platforms.
ChassisDB service is accessible with the global.conf file in the distributed architecture.
Signed-off-by: Honggang Xu <hxu@arista.com>
We were building our own python-click package because we needed features/bug fixes available as of version 7.0.0, but the most recent version available from Debian was in the 6.x range.
"Click" is needed for building/testing and installing sonic-utilities. Now that we are building sonic-utilities as a wheel, with Click specified as a dependency in the setup.py file, setuptools will install a more recent version of Click in the sonic-slave-buster container when building the package, and pip will install a more recent version of Click in the host OS of SONiC when installing the sonic-utilities package. Also, we don't need to worry about installing the Python 2 or 3 version of the package, as the proper one will be installed as necessary.
* system health first commit
* system health daemon first commit
* Finish healthd
* Changes due to lower layer logic change
* Get ASIC temperature from TEMPERATURE_INFO table
* Add system health make rule and service files
* fix bugs found during manual test
* Change make file to install system-health library to host
* Set system LED to blink on bootup time
* Caught exceptions in system health checker to make it more robust
* fix issue that fan/psu presence will always be true
* fix issue for external checker
* move system-health service to right after rc-local service
* Set system-health service start after database service
* Get system up time via /proc/uptime
* Provide more information in stat for CLI to use
* fix typo
* Set default category to External for external checker
* If external checker reported OK, save it to stat too
* Trim string for external checker output
* fix issue: PSU voltage check always return OK
* Add unit test cases for system health library
* Fix LGTM warnings
* fix demo comments: 1. get boot up timeout from monit configuration file; 2. set system led in library instead of daemon
* Remove boot_timeout configuration because it will get from monit config file
* Fix argument miss
* fix unit test failure
* fix issue: summary status is not correct
* Fix format issues found in code review
* rename th to threshold to make it clearer
* Fix review comment: 1. add a .dep file for system health; 2. deprecated daemon_base and uses sonic-py-common instead
* Fix unit test failure
* Fix LGTM alert
* Fix LGTM alert
* Fix review comments
* Fix review comment
* 1. Add relevant comments for system health; 2. rename external_checker to user_define_checker
* Ignore check for unknown service type
* Fix unit test issue
* Rename user define checker to user defined checker
* Rename user_define_checkers to user_defined_checkers for configuration file
* Renmae file user_define_checker.py -> user_defined_checker.py
* Fix typo
* Adjust import order for config.py
Co-authored-by: Joe LeVeque <jleveque@users.noreply.github.com>
* Adjust import order for src/system-health/health_checker/hardware_checker.py
Co-authored-by: Joe LeVeque <jleveque@users.noreply.github.com>
* Adjust import order for src/system-health/scripts/healthd
Co-authored-by: Joe LeVeque <jleveque@users.noreply.github.com>
* Adjust import orders in src/system-health/tests/test_system_health.py
* Fix typo
* Add new line after import
* If system health configuration file not exist, healthd should exit
* Fix indent and enable pytest coverage
* Fix typo
* Fix typo
* Remove global logger and use log functions inherited from super class
* Change info level logger to notice level
Co-authored-by: Joe LeVeque <jleveque@users.noreply.github.com>
* buildimage: Add gearbox phy device files and a new physyncd docker to support VS gearbox phy feature
* scripts and configuration needed to support a second syncd docker (physyncd)
* physyncd supports gearbox device and phy SAI APIs and runs multiple instances of syncd, one per phy in the device
* support for VS target (sonic-sairedis vslib has been extended to support a virtual BCM81724 gearbox PHY).
HLD is located at b817a12fd8/doc/gearbox/gearbox_mgr_design.md
**- Why I did it**
This work is part of the gearbox phy joint effort between Microsoft and Broadcom, and is based
on multi-switch support in sonic-sairedis.
**- How I did it**
Overall feature was implemented across several projects. The collective pull requests (some in late stages of review at this point):
https://github.com/Azure/sonic-utilities/pull/931 - CLI (merged)
https://github.com/Azure/sonic-swss-common/pull/347 - Minor changes (merged)
https://github.com/Azure/sonic-swss/pull/1321 - gearsyncd, config parsers, changes to orchargent to create gearbox phy on supported systems
https://github.com/Azure/sonic-sairedis/pull/624 - physyncd, virtual BCM81724 gearbox phy added to vslib
**- How to verify it**
In a vslib build:
root@sonic:/home/admin# show gearbox interfaces status
PHY Id Interface MAC Lanes MAC Lane Speed PHY Lanes PHY Lane Speed Line Lanes Line Lane Speed Oper Admin
-------- ----------- --------------- ---------------- --------------- ---------------- ------------ ----------------- ------ -------
1 Ethernet48 121,122,123,124 25G 200,201,202,203 25G 204,205 50G down down
1 Ethernet49 125,126,127,128 25G 206,207,208,209 25G 210,211 50G down down
1 Ethernet50 69,70,71,72 25G 212,213,214,215 25G 216 100G down down
In addition, docker ps | grep phy should show a physyncd docker running.
Signed-off-by: syd.logan@broadcom.com
We want to let Monit unmonitor the processes in containers which are disabled in the `FEATURE` table so that
Monit will not generate false alert messages in the syslog.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
We are moving toward building all Python packages for SONiC as wheel packages rather than Debian packages. This will also allow us to more easily transition to Python 3.
Python files are now packaged in the "sonic-utilities" Python wheel. Data files are now packaged in the "sonic-utilities-data" Debian package.
**- How I did it**
- Build and install sonic-utilities as a Python package
- Remove explicit installation of wheel dependencies, as these will now get installed implicitly by pip when installing sonic-utilities as a wheel
- Build and install new sonic-utilities-data package to install data files required by sonic-utilities applications
- Update all references to sonic-utilities scripts/entrypoints to either reference the new /usr/local/bin/ location or remove absolute path entirely where applicable
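For reference, a minimal sketch of how a wheel declares its runtime dependencies so pip installs them implicitly (package metadata below is illustrative, not the actual sonic-utilities setup.py):
```
# Hedged sketch of a setup.py for a Python wheel with install-time dependencies.
from setuptools import setup, find_packages

setup(
    name="example-utilities",          # illustrative name
    version="1.0",
    packages=find_packages(),
    install_requires=[
        "click>=7.0",                  # pip resolves and installs suitable versions
        "requests",
    ],
    entry_points={
        "console_scripts": [
            "example-cli = example_utilities.main:cli",   # hypothetical entry point
        ]
    },
)
```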
Submodule updates:
* src/sonic-utilities aa27dd9...2244d7b (5):
> Support building sonic-utilities as a Python wheel package instead of a Debian package (#1122)
> [consutil] Display remote device name in show command (#1120)
> [vrf] fix check state_db error when vrf moving (#1119)
> [consutil] Fix issue where the ConfigDBConnector's reference is missing (#1117)
> Update to make config load/reload backward compatible. (#1115)
* src/sonic-ztp dd025bc...911d622 (1):
> Update paths to reflect new sonic-utilities install location, /usr/local/bin/ (#19)
Introduced a new build parameter 'SONIC_IMAGE_VERSION' that allows build
system users to build SONiC image with a specific version string. If
'SONIC_IMAGE_VERSION' was not passed by the user, SONIC_IMAGE_VERSION will be
set to the output of functions.sh:sonic_get_version function.
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
The management framework provides management plane services like REST and
CLI which are not needed right after boot; by delaying this
service we give some more CPU to data plane and control plane services
on fast/warm boot.
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
- Why I did it
When SONiC is configured with the management framework and/or telemetry services, the applications running inside those containers need to access some functionality on the host system. The following is a non-exhaustive list of such functionality:
Image management
Configuration save and load
ZTP enable/disable and status
Show tech support
- How I did it
The host service is a Python process that listens for requests via D-Bus. It will then service those requests and send a response back to the requestor.
This PR only introduces the host service infrastructure. Applications that need access to the host services must add applets that will register on D-Bus endpoints to service the appropriate functionality.
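A rough sketch of what such a D-Bus applet could look like (the bus name, object path and method are placeholders, not the actual sonic-host-services endpoints):
```
# Hedged sketch: a host-side D-Bus service exposing one method to container clients.
import dbus
import dbus.service
import dbus.mainloop.glib
from gi.repository import GLib

BUS_NAME = "org.example.sonic.HostService"      # placeholder bus name
OBJ_PATH = "/org/example/sonic/HostService"     # placeholder object path

class HostServiceApplet(dbus.service.Object):
    def __init__(self, bus_name):
        super().__init__(bus_name, OBJ_PATH)

    @dbus.service.method(BUS_NAME, in_signature="s", out_signature="s")
    def Echo(self, text):
        # a real applet would run the requested host-side action and return its result
        return "host received: " + text

if __name__ == "__main__":
    dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
    name = dbus.service.BusName(BUS_NAME, dbus.SystemBus())
    HostServiceApplet(name)
    GLib.MainLoop().run()
```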
- How to verify it
- Description for the changelog
Add SONiC Host Service for container to execute select commands in host
Signed-off-by: Nirenjan Krishnan <Nirenjan.Krishnan@dell.com>
* Support for Control Plane ACL's for Multi-asic Platforms.
Following changes were done:
1) Moved from using blocking listen() on Config DB to the select() model
via python-swsscommon, since we have to wait on events from multiple
config DBs.
2) Since python-swsscommon is not available on the host, added libswsscommon and python-swsscommon
and dependent packages to the base image (host environment).
3) Made iptables rules programmed in all namespaces using ip netns exec.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Address Review Comments
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Fix Review Comments
* Fix Comments
* Added Change for Multi-asic to have iptables
rules to accept internal docker tcp/udp traffic
needed for syslog and redis-tcp connection.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Fix Review Comments
* Added more comments on logic.
* Fixed all warning/errors reported by http://pep8online.com/
other than line > 80 characters.
* Fix Comment
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Verified with swsscommon package. Fix issue for single asic platforms.
* Moved to new python package
* Address Review Comments.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Address Review Comments.
Removes installation of kube-proxy (117 MB) and flannel (53 MB) images from Kubernetes-enabled devices. These images are tested to be unnecessary for our use case, as we do not rely on ClusterIPs for Kubernetes Services or a CNI for pod networking.
1. remove container feature table
2. do not generate feature entry if the feature is not included
in the image
3. rename ENABLE_* to INCLUDE_* for better clarity
4. rename feature status to feature state
5. [submodule]: update sonic-utilities
* 9700e45 2020-08-03 | [show/config]: combine feature and container feature cli (#1015) (HEAD, origin/master, origin/HEAD) [lguohan]
* c9d3550 2020-08-03 | [tests]: fix drops_group_test failure on second run (#1023) [lguohan]
* dfaae69 2020-08-03 | [lldpshow]: Fix input device is not a TTY error (#1016) [Arun Saravanan Balachandran]
* 216688e 2020-08-02 | [tests]: rename sonic-utilitie-tests to tests (#1022) [lguohan]
Signed-off-by: Guohan Lu <lguohan@gmail.com>
As part of consolidating all common Python-based functionality into the new sonic-py-common package, this pull request:
1. Redirects all Python applications/scripts in sonic-buildimage repo which previously imported sonic_device_util or sonic_daemon_base to instead import sonic-py-common, which was added in https://github.com/Azure/sonic-buildimage/pull/5003
2. Replaces all calls to `sonic_device_util.get_platform_info()` with calls to `sonic_py_common.get_platform()` and removes any calls to `sonic_device_util.get_machine_info()` which are no longer necessary (i.e., those which were only used to pass the results to `sonic_device_util.get_platform_info()`).
3. Removes unused imports to the now-deprecated sonic-daemon-base package and sonic_device_util.py module
This is the next step toward resolving https://github.com/Azure/sonic-buildimage/issues/4999
Also reverted my previous change in which device_info.get_platform() would first try obtaining the platform ID string from Config DB and fall back to gathering it from machine.conf upon failure because this function is called by sonic-cfggen before the data is in the DB, in which case, the db_connect() call will hang indefinitely, which was not the behavior I expected. As of now, the function will always reference machine.conf.
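For reference, the kind of call-site change this involves (a representative example, not a specific file from the repo):
```
# Hedged before/after sketch of the import and call changes described above.

# Before (deprecated helpers):
#   import sonic_device_util
#   platform = sonic_device_util.get_platform_info(sonic_device_util.get_machine_info())

# After (sonic-py-common):
from sonic_py_common import device_info

platform = device_info.get_platform()   # now always read from machine.conf
print(platform)
```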
Consolidate common SONiC Python-language functionality into one shared package (sonic-py-common) and eliminate duplicate code.
The package currently includes three modules:
- daemon_base
- device_info
- logger
* [sonic-buildimage] Move BGP warm reboot scripts into BGP service /usr/local/bin
* Revert "[sonic-buildimage] Move BGP warm reboot scripts into BGP service /usr/local/bin"
This reverts commit d16d163fc4.
* [sonic-buildimage] Move BGP warm reboot script to BGP service
* [sonic-buildimage] Move BGP warm reboot script to BGP service
* [sonic-buildimage] Move BGP warm reboot script to BGP service
- access DB correctly
* Address code review comments, also change file mode of bgp.sh (+x)
* Address code review comments, also change the file mode of bgp.sh (+x)
* BGP warm reboot script to service, also handle fast boot as indicated by flag saved in StateDB
* BGP warm reboot script to service, code review comments on space alignment
* BGP warm reboot script to service: remove uncesseary space
* BGP warm reboot script to service: replace tab with space
* Code review comments: -) use new multi-db api -) add ignore error from zebra in case it's not configured
* Integrate with multi-ASIC changes committed recently
Co-authored-by: heidi.ou@alibaba-inc.com <heidi.ou@alibaba-inc.com>
* PCIe Monitor service
* Add rescan to pcie-mon.service when it fails to get all pcie devices
* space
* Clean up
* review comments
* update the pcie status in state db
* update the failed pcie status once at the end
* Update the pcie_status in STATE_DB and rename the service
* Add log to exit the service if the configuration file doesn't exist.
* fix the build failure
* Redo the pcie rescan for pcie-check failed case.
* review comments
* review comments
* review comments
Add changes for syslog support for containers running in namespaces on multi ASIC platforms.
On multi-ASIC platforms:
The rsyslog service is only running on the host. There is no rsyslog service running in each namespace.
The rsyslog service on the host will be listening on the docker0 IP address instead of the loopback address.
The rsyslog.conf in the containers is modified to set the omfwd target IP to the docker0 IP address instead of the loopback IP.
Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>
* [systemd-generator]: Fix the code to make sure that dependencies
of host services are generated correctly for multi-asic platforms.
Add code to make sure that systemd timer files are also modified
to add the correct service dependency for multi-asic platforms.
Signed-off-by: SuvarnaMeenakshi <sumeenak@microsoft.com>
* [systemd-generator]: Minor fix, remove debug code and
remove unused variable.
When building the SONiC image, use systemd to mask all services which are set to "disabled" in init_cfg.json.
This PR depends on https://github.com/Azure/sonic-utilities/pull/944, otherwise `config load_minigraph` will fail when trying to restart disabled services.
* Fix the build on 201911 (Stretch) where the directory
/usr/lib/systemd/system/ does not exist, so create it
manually. The change should not harm master (Buster), where
the directory is created by Linux.
* Fix as per review comments
**- What I did**
#### wheel package Makefiles
- wheel package Makefiles for sonic-yang-mgmt package.
#### libyang Python APIs:
- python APIs based on libyang
- functions to load/merge yang models and Yang data files
- function to validate data trees based on Yang models
- functions to merge yang data files/trees
- add/set/delete node in schema and data trees
- find data/schema nodes from xpath from the Yang data/schema tree in memory
- find dependencies
- dump the data tree in json/xml
#### Extension of libyang Python APIs:
-- Cropping input config based on Yang Model.
-- Translate input config based on Yang Model.
-- rev Translate input config based on Yang Model.
-- Find xpath of port, portleaf and a yang list.
-- Find if node is key of a list while deletion if yes, then delete the parent.
Signed-off-by: Praveen Chaudhary pchaudhary@linkedin.com
Signed-off-by: Ping Mao pmao@linkedin.com
do mount/umount in the chroot environment
install cron explicitly
install rasdaemon as a replacement for mcelog
switch python package docker-py to docker
[sonic-yang-models]: First version of yang models for Port, VLan, Interface, PortChannel, loopback and ACL.
YANG models as per Guidelines.
Guideline doc: https://github.com/Azure/SONiC/blob/master/doc/mgmt/SONiC_YANG_Model_Guidelines.md
[sonic-yang-models/tests]: YANG model test code and JSON input for testing.
[sonic-yang-models/setup.py]: Build infra for yang models.
**- What I did**
Created Yang model for Sonic.
Tables: PORT, VLAN, VLAN_INTERFACE, VLAN_MEMBER, ACL_RULE, ACL_TABLE, INTERFACE.
Created build infra files with which a new package (sonic-yang-models) can be built and deployed on SONiC switches. YANG models will be part of this new package.
**- How I did it**
Wrote yang models based on Guideline doc:
https://github.com/Azure/SONiC/blob/master/doc/mgmt/SONiC_YANG_Model_Guidelines.md
and
https://github.com/Azure/SONiC/wiki/Configuration.
Wrote Python wheel package infra which runs tests for these YANG models using JSON files containing configuration as per the YANG models. These configs are for negative tests, which means we want to test that the 'must' conditions, patterns and 'when' conditions work as expected.
**- How to verify it**
Build Logs and testing:
———————————————————————————————————
```
/sonic/src/sonic-yang-models /sonic
running test
running egg_info
writing top-level names to sonic_yang_models.egg-info/top_level.txt
writing dependency_links to sonic_yang_models.egg-info/dependency_links.txt
writing sonic_yang_models.egg-info/PKG-INFO
reading manifest file 'sonic_yang_models.egg-info/SOURCES.txt'
writing manifest file 'sonic_yang_models.egg-info/SOURCES.txt'
running build_ext
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
running bdist_wheel
running build
running build_py
(Reading database ... 155852 files and directories currently installed.)
Preparing to unpack .../libyang_1.0.73_amd64.deb ...
Unpacking libyang (1.0.73) over (1.0.73) ...
Setting up libyang (1.0.73) ...
Processing triggers for libc-bin (2.24-11+deb9u4) ...
Processing triggers for man-db (2.7.6.1-2) ...
(Reading database ... 155852 files and directories currently installed.)
Preparing to unpack .../libyang-cpp_1.0.73_amd64.deb ...
Unpacking libyang-cpp (1.0.73) over (1.0.73) ...
Setting up libyang-cpp (1.0.73) ...
Processing triggers for libc-bin (2.24-11+deb9u4) ...
(Reading database ... 155852 files and directories currently installed.)
Preparing to unpack .../python3-yang_1.0.73_amd64.deb ...
Unpacking python3-yang (1.0.73) over (1.0.73) ...
Setting up python3-yang (1.0.73) ...
INFO:YANG-TEST:module: sonic-vlan is loaded successfully
ERROR:YANG-TEST:Could not get module: sonic-head
INFO:YANG-TEST:module: sonic-portchannel is loaded successfully
INFO:YANG-TEST:module: sonic-acl is loaded successfully
INFO:YANG-TEST:module: sonic-loopback-interface is loaded successfully
ERROR:YANG-TEST:Could not get module: sonic-port
INFO:YANG-TEST:module: sonic-interface is loaded successfully
INFO:YANG-TEST:
------------------- Test 1: Configure a member port in VLAN_MEMBER table which does not exist.---------------------
libyang[0]: Leafref "/sonic-port:sonic-port/sonic-port:PORT/sonic-port:PORT_LIST/sonic-port:port_name" of value "Ethernet156" points to a non
-existing leaf. (path: /sonic-vlan:sonic-vlan/VLAN_MEMBER/VLAN_MEMBER_LIST[vlan_name='Vlan100'][port='Ethernet156']/port)
INFO:YANG-TEST:Configure a member port in VLAN_MEMBER table which does not exist. Passed
INFO:YANG-TEST:
------------------- Test 2: Configure non-existing ACL_TABLE in ACL_RULE.---------------------
libyang[0]: Leafref "/sonic-acl:sonic-acl/sonic-acl:ACL_TABLE/sonic-acl:ACL_TABLE_LIST/sonic-acl:ACL_TABLE_NAME" of value "NOT-EXIST" points
to a non-existing leaf. (path: /sonic-acl:sonic-acl/ACL_RULE/ACL_RULE_LIST[ACL_TABLE_NAME='NOT-EXIST'][RULE_NAME='Rule_20']/ACL_TABLE_NAME)
INFO:YANG-TEST:Configure non-existing ACL_TABLE in ACL_RULE. Passed
INFO:YANG-TEST:
------------------- Test 3: Configure IP_TYPE as ARP and ICMPV6_CODE in ACL_RULE.---------------------
libyang[0]: When condition "boolean(IP_TYPE[.='ANY' or .='IP' or .='IPV6' or .='IPv6ANY'])" not satisfied. (path: /sonic-acl:sonic-acl/ACL_RU
LE/ACL_RULE_LIST[ACL_TABLE_NAME='NO-NSW-PACL-V4'][RULE_NAME='Rule_40']/ICMPV6_CODE)
INFO:YANG-TEST:Configure IP_TYPE as ARP and ICMPV6_CODE in ACL_RULE. Passed
INFO:YANG-TEST:
INFO:YANG-TEST:
------------------- Test 4: Configure IP_TYPE as ipv4any and SRC_IPV6 in ACL_RULE.---------------------
libyang[0]: When condition "boolean(IP_TYPE[.='ANY' or .='IP' or .='IPV6' or .='IPv6ANY'])" not satisfied. (path: /sonic-acl:sonic-acl/ACL_RU
LE/ACL_RULE_LIST[ACL_TABLE_NAME='NO-NSW-PACL-V4'][RULE_NAME='Rule_20']/SRC_IPV6)
INFO:YANG-TEST:Configure IP_TYPE as ipv4any and SRC_IPV6 in ACL_RULE. Passed
------------------- Test 5: Configure l4_src_port_range as 99999-99999 in ACL_RULE---------------------
libyang[0]: Value "99999-99999" does not satisfy the constraint "([0-9]{1,4}|[0-5][0-9]{4}|[6][0-4][0-9]{3}|[6][5][0-2][0-9]{2}|[6][5][3][0-5]{2}|[6][5][3][6][0-5])-([0-9]{1,4}|[0-5][0-9]{4}|[6][0-4][0-9]{3}|[6][5][0-2][0-9]{2}|[6][5][3][0-5]{2}|[6][5][3][6][0-5])" (range, length, or pattern). (path: /sonic-acl:sonic-acl/ACL_RULE/ACL_RULE_LIST[ACL_TABLE_NAME='NO-NSW-PACL-V6'][RULE_NAME='Rule_20']/L4_SRC_PORT_RANGE)
INFO:YANG-TEST:Configure l4_src_port_range as 99999-99999 in ACL_RULE Passed
INFO:YANG-TEST:
------------------- Test 6: Configure empty string as ip-prefix in INTERFACE table.---------------------
libyang[0]: Invalid value "" in "ip-prefix" element. (path: /sonic-interface:sonic-interface/INTERFACE/INTERFACE_LIST[interface='Ethernet8'][ip-prefix='']/ip-prefix)
INFO:YANG-TEST:Configure empty string as ip-prefix in INTERFACE table. Passed
INFO:YANG-TEST:
------------------- Test 7: Configure Wrong family with ip-prefix for VLAN_Interface Table---------------------
libyang[0]: Must condition "(contains(../ip-prefix, ':') and current()='IPv6') or (contains(../ip-prefix, '.') and current()='IPv4')" not satisfied. (path: /sonic-vlan:sonic-vlan/VLAN_INTERFACE/VLAN_INTERFACE_LIST[vlanid='100'][ip-prefix='2a04:5555:66:7777::1/64']/family)
INFO:YANG-TEST:Configure Wrong family with ip-prefix for VLAN_Interface Table Passed
INFO:YANG-TEST:
------------------- Test 8: Configure IP_TYPE as ARP and DST_IPV6 in ACL_RULE.---------------------
libyang[0]: When condition "boolean(IP_TYPE[.='ANY' or .='IP' or .='IPV6' or .='IPV6ANY'])" not satisfied. (path: /sonic-acl:sonic-acl/ACL_RULE/ACL_RULE_LIST[ACL_TABLE_NAME='NO-NS
W-PACL-V6'][RULE_NAME='Rule_20']/DST_IPV6)
INFO:YANG-TEST:Configure IP_TYPE as ARP and DST_IPV6 in ACL_RULE. Passed
INFO:YANG-TEST:
------------------- Test 9: Configure INNER_ETHER_TYPE as 0x080C in ACL_RULE.---------------------
libyang[0]: Value "0x080C" does not satisfy the constraint "(0x88CC|0x8100|0x8915|0x0806|0x0800|0x86DD|0x8847)" (range, length, or pattern). (path: /sonic-acl:sonic-acl/ACL_RULE/ACL_RULE_LIST[ACL_TABLE_NAME='NO-NSW-PACL-V4'][RULE_NAME='Rule_40']/INNER_ETHER_TYPE)
INFO:YANG-TEST:Configure INNER_ETHER_TYPE as 0x080C in ACL_RULE. Passed
INFO:YANG-TEST:
------------------- Test 10: Add dhcp_server which is not in correct ip-prefix format.---------------------
libyang[0]: Invalid value "10.186.72.566" in "dhcp_servers" element. (path: /sonic-vlan:sonic-vlan/VLAN/VLAN_LIST/dhcp_servers[.='10.186.72.566'])
INFO:YANG-TEST:Add dhcp_server which is not in correct ip-prefix format. Passed
INFO:YANG-TEST:
------------------- Test 11: Configure undefined acl_table_type in ACL_TABLE table.---------------------
libyang[0]: Invalid value "LAYER3V4" in "type" element. (path: /sonic-acl:sonic-acl/ACL_TABLE/ACL_TABLE_LIST[ACL_TABLE_NAME='NO-NSW-PACL-V6']/type)
INFO:YANG-TEST:Configure undefined acl_table_type in ACL_TABLE table. Passed
INFO:YANG-TEST:
------------------- Test 12: Configure undefined packet_action in ACL_RULE table.---------------------
libyang[0]: Invalid value "SEND" in "PACKET_ACTION" element. (path: /sonic-acl:sonic-acl/ACL_RULE/ACL_RULE_LIST/PACKET_ACTION)
INFO:YANG-TEST:Configure undefined packet_action in ACL_RULE table. Passed
INFO:YANG-TEST:
------------------- Test 13: Configure wrong value for tagging_mode.---------------------
libyang[0]: Invalid value "non-tagged" in "tagging_mode" element. (path: /sonic-vlan:sonic-vlan/VLAN_MEMBER/VLAN_MEMBER_LIST/tagging_mode)
INFO:YANG-TEST:Configure wrong value for tagging_mode. Passed
INFO:YANG-TEST:
------------------- Test 14: Configure vlan-id in VLAN_MEMBER table which does not exist in VLAN table.---------------------
libyang[0]: Leafref "../../../VLAN/VLAN_LIST/vlanid" of value "200" points to a non-existing leaf. (path: /sonic-vlan:sonic-vlan/VLAN_MEMBER/VLAN_MEMBER_LIST[vlanid='200'][port='Ethernet0']/vlanid)
libyang[0]: Leafref "../../../VLAN/VLAN_LIST/vlanid" of value "200" points to a non-existing leaf. (path: /sonic-vlan:sonic-vlan/VLAN_MEMBER/VLAN_MEMBER_LIST[vlanid='200'][port='Ethernet0']/vlanid)
INFO:YANG-TEST:Configure vlan-id in VLAN_MEMBER table which does not exist in VLAN table. Passed
INFO:YANG-TEST:All Test Passed
../../target/debs/stretch/libyang0.16_0.16.105-1_amd64.deb installtion failed
../../target/debs/stretch/libyang-cpp0.16_0.16.105-1_amd64.deb installtion failed
../../target/debs/stretch/python2-yang_0.16.105-1_amd64.deb installtion failed
YANG Tests passed
Passed: pyang -f tree ./yang-models/*.yang > ./yang-models/sonic_yang_tree
copying tests/yangModelTesting.py -> build/lib/tests
copying tests/test_sonic_yang_models.py -> build/lib/tests
copying tests/__init__.py -> build/lib/tests
running egg_info
writing top-level names to sonic_yang_models.egg-info/top_level.txt
writing dependency_links to sonic_yang_models.egg-info/dependency_links.txt
writing sonic_yang_models.egg-info/PKG-INFO
reading manifest file 'sonic_yang_models.egg-info/SOURCES.txt'
writing manifest file 'sonic_yang_models.egg-info/SOURCES.txt'
installing to build/bdist.linux-x86_64/wheel
running install
running install_lib
creating build/bdist.linux-x86_64/wheel
creating build/bdist.linux-x86_64/wheel/tests
copying build/lib/tests/yangModelTesting.py -> build/bdist.linux-x86_64/wheel/tests
copying build/lib/tests/test_sonic_yang_models.py -> build/bdist.linux-x86_64/wheel/tests
copying build/lib/tests/__init__.py -> build/bdist.linux-x86_64/wheel/tests
running install_data
creating build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data
creating build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data
creating build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-head.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-acl.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-interface.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-loopback-interface.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-port.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-portchannel.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-vlan.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
```
* Install kubernetes worker node packages, if enabled.
* Minor updates
* Added some comments
* Updates per review comments.
Built a private image to test to work fine.
* Remove the removed file.
* Update per comments
Make a fix, as kubeadm now demands a higher version of kubelet & kubectl.
As kubeadm auto-installs kubectl & kubelet, removing the explicit install is an easier/more robust fix.
* Changes per review comments.
* Updates per comments.
1) Dropped helper & pod scripts
2) Made install verbose
* Drop creation of pods subdir, as this PR does not use them.
* From comments to 'n' per review comments.
* 1) kubeadm.conf is created as part of kubeadm package install. Hence dropped explicit copy.
* Changes in sonic-buildimage for the NAT feature
- Docker for NAT
- installing the required tools iptables and conntrack for nat
Signed-off-by: kiran.kella@broadcom.com
* Add redis-tools dependencies in the docker nat compilation
* Addressed review comments
* add natsyncd to warm-boot finalizer list
* addressed review comments
* using swsscommon.DBConnector instead of swsssdk.SonicV2Connector
* Enable NAT application in docker-sonic-vs
As the current kdump installation searches for the GRUB path, and ARM architectures (marvell-armhf) depend on U-Boot, these changes have to be addressed. For now, kdump installation is skipped on ARM.
Co-authored-by: lguohan <lguohan@gmail.com>
- move single instance services into their own folder
- generate Systemd templates for any multi-instance service files in slave.mk
- detect single or multi-instance platform in systemd-sonic-generator based on asic.conf platform specific file.
- update container hostname after creation instead of during creation (docker_image_ctl)
- run Docker containers in a network namespace if specified
- add a service to create a simulated multi-ASIC topology on the virtual switch platform
Signed-off-by: Lawrence Lee <t-lale@microsoft.com>
Signed-off-by: Suvarna Meenakshi <Suvarna.Meenaksh@microsoft.com>
* Updates per review comments
1) core_uploader service waits for syslog.service
2) core_uploader service enabled for restart on failure
3) Use mtime instead of file size + ample time to be robust.
* Avoid reloading already uploaded file, by marking the names with a prefix.
* Updated failing path.
1) If the rc file is missing or required data is missing, it periodically logs an error in a forever loop.
2) If the upload fails, it retries every hour with an error log, forever.
* Fix few bugs
* The binary update_json.py will come from sonic-utilities.
Delay CPU intensive services at boot
- How I did it
Made snmp.timer work and added telemetry.timer.
But this is not enough because it breaks the existing snmp dependency on swss.
So, in this solution the snmp timer is wanted by the swss service, but since the OnBootSec timer expires only once it will not trigger the snmp service, so I added the line "OnUnitActiveSec=0 sec" which will start the snmp service based on the last time it was active. On boot only OnBootSec will expire; on swss start/restarts only the second timer will expire immediately and trigger the snmp service.
However, the snmp service will not stop after "systemctl stop snmp" because of the second timer, which will always expire when the snmp service becomes unavailable.
So there is a conflict, which will be handled by systemd if we add a "Conflicts=" line to both snmp.service and snmp.timer.
So during boot:
snmp does not start by default
swss starts and starts snmp timer
OnUnitActiveSec=0 does not expire since there is no snmp active
OnBootSec expires and starts snmp service and snmp timer gets stopped
During "systemctl restart swss"
snmp stops because of Requisite on swss
snmp unblocks snmp timer from running
swss starts and starts snmp timer
OnUnitActiveSec=0 expires immediately and starts snmp, which stops the snmp timer
During "systemctl stop snmp"
stop of snmp service unblocks snmp timer but no one starts the timer so it is not started by "OnUnitActiveSec=0"
* Corefile uploader service
1) A service is added to watch /var/core and upload to Azure storage
2) The service is disabled on boot. One may enable explicitly.
3) The .rc file to be updated with acct credentials and http proxy to use.
4) If service is enabled with no credentials, it would sleep, with periodic log messages
5) For any update in .rc, the service has to be restarted to take effect.
* Remove rw permission for .rc file for group & others.
* Changes per review comments.
Re-ordered .rc file per JSON.dump order.
Added a script to enable partial update of .rc, which HWProxy would use to add acct key.
* Azure storage upload requires python module futures, hence added it to install list.
* Removed trailing spaces.
* A mistake in name corrected.
Copy the .rc updater script to /usr/bin.
* ZTP infrastructure changes to support DHCP discovery provisioning data
- Dynamically generate DHCP client configuration based on current ZTP state
- Added support to request and process hostname when using DHCPv6
- Do not process graphservice url dhcp option if ZTP is enabled, ZTP service
will process it
- Generate /e/n/i file with all active interfaces seeking address assignment
via DHCP. Only interfaces that are created in Linux will be added to /e/n/i.
Also DHCP is started only on linked up in-band interfaces.
Signed-off-by: Rajendra Dendukuri <rajendra.dendukuri@broadcom.com>
* Create a SONiC configuration management service
* Perform config db migration after loading config_db.json to redis DB
* Migrate config-setup post migration hooks on image upgrade
config-setup post-migration hooks help the user migrate configurations from the
old image to the new image. If the installed hooks are user-defined, they will not
be part of the newly installed image. So these hooks have to be migrated to the
new image, and only then can they be executed when the new image is booting.
The changes in this fix migrate config-setup post-migration hooks and ensure
that any hooks with the same filename in newly installed image are not
overwritten.
It is expected that users install new hooks as per their requirement and
not edit existing hooks. Any changes to existing hooks need to be done as
part of new image and not post bootup.
* Build sonic-ztp package
- Add changes in make rules to conditionally include sonic-ztp package
Signed-off-by: Rajendra Dendukuri <rajendra.dendukuri@broadcom.com>
Add the same mechanism I developed for the SwSS service in #2845 to the syncd service. However, in order to cause the SwSS service to also exit and restart in this situation, I developed a docker-wait-any program which the SwSS service uses to wait for either the swss or syncd containers to exit.
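A minimal sketch of the wait-for-any idea (not the actual docker-wait-any implementation; it simply races one blocking `docker wait` per container):
```
#!/usr/bin/env python3
# Hedged sketch: exit as soon as ANY of the named containers stops, so the
# supervising service (e.g. swss) can restart the whole group together.
import subprocess
import sys
import threading

def wait_for(name, done_event):
    subprocess.call(["docker", "wait", name])   # blocks until the container exits
    done_event.set()

if __name__ == "__main__":
    containers = sys.argv[1:] or ["swss", "syncd"]
    done = threading.Event()
    for name in containers:
        threading.Thread(target=wait_for, args=(name, done), daemon=True).start()
    done.wait()      # returns when the first container exits
    sys.exit(0)
```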
* In the event of a kernel crash, we need to gather as much information
as possible to understand and identify the root cause of the crash.
Currently, the kernel does not provide much information, which makes
kernel crash investigation difficult and time consuming.
Fortunately, there is a way in the kernel to provide more information
in the case of a kernel crash. kdump is a feature of the Linux kernel
that creates crash dumps in the event of a kernel crash. This PR
will add kernel kdump support.
An extension to the CLI utilities config and show is provided to
configure and manage kdump:
- enable / disable kdump functionality
- configure kdump (how many kernel crash logs can be saved, memory
allocated for capture kernel)
- view kernel crash logs
* Rename asn/deployment_id_asn_map.yaml to constants/constants.yaml
* Fix bgp templates
* Add community for loopback when bgpd is isolated
* Use correct community value
slave.mk: add SONIC_PLATFORM_API_PY2 as dependency of host
sonic_debian_extension.j2: install sonic_daemon_base and Mellanox-specific sonic_platform on host
mlnx-platform-api.mk: export mlnx_platform_api_py2_wheel_path for sonic_debian_extension.j2
sonic-daemon-base.mk: export daemon_base_py2_wheel_path for sonic_debian_extension.j2
daemon_base.py: hide the unnecessary dependency on swss_common on the host
Introduce a new "sflow" container (if ENABLE_SFLOW is set). The new docker will include:
hsflowd : the host-sflow based daemon acting as the sFlow agent
psample : built from the libpsample repository; useful for debugging sampled packets/groups
sflowtool : locally dump sFlow samples (e.g. with an in-unit collector)
In case of SONiC-VS, enable psample & act_sample kernel modules.
VS' syncd needs iproute2=4.20.0-2~bpo9+1 & libcap2-bin=1:2.25-1 to support tc-sample
tc-syncd is provided as a convenience tool for debugging (e.g. tc-syncd filter show ...)
- What I did
Move the enabling of Systemd services from sonic_debian_extension to a new systemd generator
- How I did it
Create a new systemd generator to manually create symlinks to enable systemd services
Add rules/Makefile to build generator
Add services to be enabled to /etc/sonic/generated_services.conf, to be read by the generator at boot time (see the sketch below)
Signed-off-by: Lawrence Lee <t-lale@microsoft.com>
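A minimal sketch of the generator logic, in Python for illustration (the shipped generator may be implemented differently and handle more cases):
```python
#!/usr/bin/env python3
# Read the list of services from /etc/sonic/generated_services.conf and
# create "enabled" symlinks in the generator output directory that systemd
# passes as the first argument. Illustrative sketch only.
import os
import sys

CONF = "/etc/sonic/generated_services.conf"
UNIT_DIR = "/usr/lib/systemd/system"

def main(out_dir):
    wants = os.path.join(out_dir, "multi-user.target.wants")
    os.makedirs(wants, exist_ok=True)
    with open(CONF) as f:
        for line in f:
            unit = line.strip()
            if not unit or unit.startswith("#"):
                continue
            src = os.path.join(UNIT_DIR, unit)
            dst = os.path.join(wants, unit)
            if not os.path.exists(dst):
                os.symlink(src, dst)

if __name__ == "__main__":
    # systemd invokes generators with three output directories; the first
    # (normal priority) is used here.
    main(sys.argv[1])
```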
ARM Architecture support in SONIC
make configure platform=[ASIC_VENDOR_ARCH] PLATFORM_ARCH=[ARM_ARCH]
SONIC_ARCH (default amd64):
armhf - 32-bit ARM
arm64 - 64-bit ARM
Signed-off-by: Antony Rheneus <arheneus@marvell.com>
* Upgrade ifupdown2 to version 1.2.8
Required by ZTP to support ZTP over IPv6 transport
Signed-off-by: Rajendra Dendukuri <rajendra.dendukuri@broadcom.com>
* Updated Makefile infrastructure to build debug images.
As a sample, platform/broadcom/docker-orchagent-brcm.mk is updated to add a docker-orchagent-brcm-dbg.gz target.
Now "BLDENV=stretch make target/docker-orchagent-brcm-dbg.gz" will build the debug image.
NOTE: If you don't specify NOSTRETCH=1, it implicitly calls "make stretch", which builds all stretch targets, and that would include debug dockers too.
This debug image can be used on any Linux box to inspect a core file. If your module's external dependencies can be suitably mocked, you may even run it manually inside:
"docker run -it --entrypoint=/bin/bash e47a8fb8ed38"
You may map the core file path to this docker run.
* Dropped the regular binary using DBG_PACKAGES and a small name change to help readability.
* Tweaked the changes to retain the existing behavior w.r.t INSTALL_DEBUG_TOOLS=y.
When this change ('building debug docker images transparently') is extended to all dockers, this flag would become redundant. Yet, there can be some test-based use cases that rely on this flag.
Until all the dockers get their debug images by default and we switch all use cases of this flag to the newly built debug images, we need to maintain the existing behavior.
* 1) slave.mk - Dropped unused Docker build args
2) Debug template builder: renamed build_dbg_j2.sh to build_debug_docker_j2.sh
3) Dropped the insignificant CMD statement from the debug Dockerfile, as the base docker already has an ENTRYPOINT.
* Reverted some changes, per review comments.
"User, uid, guid, frr-uid & frr-guid" are required for all docker images, with exception of debug images.
* Get in sync with the new update that filters the dockers to be built (SONIC_STRETCH_DOCKERS_FOR_INSTALLERS), and build debug dockers only for those that are to be built and have a debug target available.
* Make a template for each target that can be shared by all platforms.
Where needed, a platform entry can override the template.
This avoids duplication and is hence easier to maintain.
* A small change that fits better with the other targets too:
just take the platform code and do the rest in the template.
* Extended debug to all stretch based docker images
* 1) Combined all orchagent makefiles into one platform-independent makefile under rules/docker-orchagent.mk
2) Extended the debug image to all stretch dockers
* Changes per review comments:
1) Dropped LIBSAIREDIS_DBG from the _DBG_DEPENDS list in the database, teamd, router-advertiser, telemetry, and platform-monitor docker*.mk files
2) W.r.t. the docker make for syncd, moved DEPENDS from the template to the specific makefile and kept in the template only what is applicable to all.
* 1) Corrected a copy/paste mistake
* Fixed a copy/paste bug
* The base syncd dockers follow a template, which defines the base docker as DOCKER_SYNCD_BASE instead of DOCKER_SYNCD_<platform code>. Fix the docker-syncd-<mlnx, bfn>.mk to use the new one.
[Yet to be tested locally]
* Fixed spelling mistake
* Enable build of dbg-sonic-broadcom.bin, which uses debug dockers in place of regular dockers for dockers that build a debug version. For dockers that do not build a debug version, it uses the regular docker.
This debug bin is installable and usable in a DUT, just like a regular bin.
* Per review comments:
1) Share a single rule for final image for normal & debug flavors (e.g. sonic-broadcom.bin & sonic-broadcom-dbg.bin)
2) Put dbg as suffix in final image name.
3) Compared target/sonic-broadcom.bin.logs with & w/o fix to verify integrity of sonic-broadcom.bin
4) Compared target/sonic-broadcom.bin.logs with sonic-broadcom-dbg.bin.log for verification
This fix takes care of the ONIE image only. The next PR will cover the rest.
The next PR will also make the debug image conditional on a flag.
* Updated per comments.
Now that debug dockers are available, we no longer need a way to install debug symbols in regular dockers.
With this commit, when INSTALL_DEBUG_TOOLS=y is set, it builds debug dockers (for dockers that enable debug build) and the final image uses debug dockers. For dockers that do not enable debug build, regular dockers get used in the final image.
Note:
The debug dockers are explicitly named <docker name>-dbg.gz, but there is no "-dbg" suffix for the image.
Hence if you make two runs, with and without INSTALL_DEBUG_TOOLS=y, you get the complete set of regular dockers plus debug dockers, but the final image gets overwritten.
If both regular and debug images are needed, make two runs (one with INSTALL_DEBUG_TOOLS=y and one without), and make sure to copy/rename the final image before the second run (see the example below).
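For example, a two-pass workflow along these lines produces both flavours (illustrative; INSTALL_DEBUG_TOOLS may also be set in rules/config instead of on the command line):
```
make target/sonic-broadcom.bin                                # final image built from regular dockers
cp target/sonic-broadcom.bin target/sonic-broadcom.orig.bin   # keep a copy before the second run
make INSTALL_DEBUG_TOOLS=y target/sonic-broadcom.bin          # same file name, now using debug dockers
```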
- use supervisord to manage processes in the frr docker
- introduce a separate configuration mode for frr
- bring the quagga configuration templates over to frr.
Signed-off-by: Guohan Lu <gulv@microsoft.com>
This weekly service lets the SSD firmware perform garbage collection
after the file system has deleted files. It avoids slowness or even
read-only errors caused by the SSD being unable to free pages, even
though the file system believes there is plenty of space left.
Signed-off-by: Zhenggen Xu <zxu@linkedin.com>
After warm reboot is done, we need to disable the warm reboot flag and
tear down anything that was set up for warm reboot and persisted across it.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
* Install the ipaddress Python package, which deprecates the current ipaddr package; ipaddress has a backport to Python 2.7 (small example below).
* Install the Python ipaddress module as required by the route_check.py SONiC utility.
* Revert the old change per review comments.
Signed-off-by: Renuka Manavalan <remanava@microsoft.com>
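For reference, a small example of the replacement module (ipaddress is in the Python 3 standard library and available as a pip backport for Python 2.7):
```python
# The ipaddress module replaces the deprecated ipaddr module.
import ipaddress

net = ipaddress.ip_network(u"10.0.0.0/24")
print(ipaddress.ip_address(u"10.0.0.1") in net)   # True
```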
- Why it is required
Since SONiC master switches the ifupdown package to the new implementation (ifupdown2), the configuration of the platform-specific interface for the wedge100bf_32x and wedge100bf_65x platforms needs to change (because ifupdown2 does not support auto mode for the inet6 protocol).
We also need to do some refactoring and remove the "if platform == something then ..." logic from the system-level scripts.
- What I did
removed customization of /usr/bin/interfaces-config.sh
explicitly created directory /etc/network/interfaces.d
added "source" to the /etc/network/interfaces generation template (to include platform-specific interfaces processing)
added platform-specific interfaces config itself (for wedge100bf_32x and wedge100bf_65x)
fixed testcase in sonic-config-engine
- How to verify it
build image for wedge100bf_32x
perform sudo config reload -y on new installation
check the correct configuration of usb0 interface
- Description for the changelog
Allow configuration of platform-specific interfaces
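An illustrative fragment of the resulting /etc/network/interfaces (the exact generated content may differ):
```
# generated by the interfaces template
source /etc/network/interfaces.d/*
```
Platform-specific snippets, such as the usb0 configuration for wedge100bf_32x and wedge100bf_65x, are then placed under /etc/network/interfaces.d/ and picked up via the source line.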
* [build]: put stretch debian packages under target/debs/stretch/
* in stretch build phase, all debian packages built in that stage are placed under target/debs/stretch directory.
* for python-based debian packages, since they are really the same for jessie and stretch, they are placed under target/python-debs directory.
Signed-off-by: Guohan Lu <gulv@microsoft.com>
init_interfaces is meant to be the SONiC initial interfaces configuration file.
However, it needs to be copied to the right file name to take effect.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
There was a fix to speed up initialization when networking used init.d,
but it did not carry over to the systemd networking.service. This fix
applies the same change to the systemd service.
The result is much less time spent blocked in networking.service.
This driver should be loaded by the SONiC service. If the kernel tries to load
it, the driver is loaded with default parameters, which is not right for SONiC.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
* Fix redis-py version
Signed-off-by: Qi Luo <qiluo-msft@users.noreply.github.com>
* Update submodule sonic-py-swsssdk: Fix redis-py version to 2.10.6
Signed-off-by: Qi Luo <qiluo-msft@users.noreply.github.com>
* Unify qos config with qos_config.j2 template
Signed-off-by: Wenda <wenni@microsoft.com>
* Change 7050 to use qos config template
Signed-off-by: Wenda <wenni@microsoft.com>
modified: device/arista/x86_64-arista_7050_qx32/Arista-7050-QX32/qos.json.j2
modified: device/arista/x86_64-arista_7050_qx32s/Arista-7050-QX-32S/qos.json.j2
* Change a7060, a7260, s6000, s6100, z9100 to use qos config template
Signed-off-by: Wenda <wenni@microsoft.com>
* Change mlnx devices to use qos config template
Signed-off-by: Wenda <wenni@microsoft.com>
modified: ../../../mellanox/x86_64-mlnx_msn2100-r0/ACS-MSN2100/qos.json.j2
modified: ../../../mellanox/x86_64-mlnx_msn2410-r0/ACS-MSN2410/qos.json.j2
modified: ../../../mellanox/x86_64-mlnx_msn2700-r0/ACS-MSN2700/qos.json.j2
modified: ../../../mellanox/x86_64-mlnx_msn2700-r0/Mellanox-SN2700-D48C8/qos.json.j2
* Change barefoot devices to use qos config template
Signed-off-by: Wenda <wenni@microsoft.com>
modified: barefoot/x86_64-accton_wedge100bf_32x-r0/montara/qos.json.j2
modified: barefoot/x86_64-accton_wedge100bf_65x-r0/mavericks/qos.json.j2
* Change accton as7212 to use qos config template
Signed-off-by: Wenda <wenni@microsoft.com>
modified: accton/x86_64-accton_as7212_54x-r0/AS7212-54x/qos.json.j2
* Apply PORT_QOS_MAP to active ports only
Signed-off-by: Wenda <wenni@microsoft.com>
* Update qos config test with qos_config.j2 template
Signed-off-by: Wenda <wenni@microsoft.com>
* Update sample output of qos-dell6100.json
Signed-off-by: Wenda <wenni@microsoft.com>
* Remove generating the default port name and index list, i.e., remove the generate_port_lists macro, because PORT is always defined
Signed-off-by: Wenda <wenni@microsoft.com>
* Include pfc_to_pg_map according to platform asic type obtained from
/etc/sonic/sonic_version.yml rather than specifying per hwsku
Signed-off-by: Wenda Ni <wenni@microsoft.com>
* Customize TC_TO_PRIORITY_GROUP_MAP and
PFC_PRIORITY_TO_PRIORITY_GROUP_MAP for barefoot
Signed-off-by: Wenda <wenni@microsoft.com>
* Unify PFC_PRIORITY_TO_PRIORITY_GROUP_MAP: remove "0":"0", "1":"1" as
these two pgs do not generate PFC frames.
Signed-off-by: Wenda <wenni@microsoft.com>
* [swss.sh] refactor swss service script code
- Move checks and waits to helper functions.
- Remove early returns from code stream
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
* [swss.sh] Add debug log for service state changes
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
* [syncd] Separate out syncd service from swss service
Still make them start/stop/restart synchronously so existing scripts
continue working.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
* Remove extra 'After' in swss service and remove syncd docker warm boot code
Syncd warm boot needs more thought; we can put it back once the workflow
has been defined and is ready for coding/testing.
* [syncd] syncd start/stop/restart shouldn't affect swss state
Semi-detach syncd service state changes from swss:
- swss state changes still cause the syncd service to follow, except during warm boot
- syncd state changes affect only syncd itself.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
* add missing '{'