#### Why I did it
Following the discussion in another PR (https://github.com/Azure/sonic-buildimage/pull/7708#discussion_r642933510), since there will be multiple subfolders under **/var/log/mellanox**, we agreed to mount only this folder; the subfolders will be created afterward on demand.
#### How I did it
During syncd docker creation, mount only the folder **/var/log/mellanox**.
#### How to verify it
Build a Mellanox image and verify the related folder on both the host and the docker side.
#### Why I did it
Create a target for delayed service timers. A few services in SONiC are delayed to speed up bring-up of the system and essential services. However, there is no way to track when they start. This is a problem when executing config reload, as config reload expects all services to be up. Hence, all the timers that trigger the delayed services are grouped under one target so that they can be tracked by the 'config reload' command.
#### How I did it
Created a delay.target unit and added the corresponding dependency on the delayed service timers.
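A quick way to confirm the grouping on a running device, assuming the target name used in this description, is to ask systemd which units hang off the new target:
```
# Sketch: inspect the new target and the delayed-service timers grouped under it
systemctl list-dependencies delay.target
systemctl status delay.target
```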
Why I did it
SONiC switches get their docker images from a local repo, populated during install with container images pre-built into the SONiC FW. With the introduction of Kubernetes, new docker images available in a remote repo could be deployed. This requires dockerd to be able to pull images from the remote repo.
Depending on the switch's network domain & config, it may or may not be able to reach the remote repo. In the case where the remote repo is unreachable, we could potentially make the Kubernetes server also act as an http-proxy.
How I did it
When the admin explicitly enables it, the kubernetes-server can be configured as the docker-proxy. But any update to the docker-proxy has to go via a service-conf file environment variable, implying a "service restart docker" is required. A restart of dockerd is very expensive, as it would restart all dockers, including the database docker.
To avoid a dockerd restart, pre-configure an http_proxy using an unused IP. When the k8s server is enabled to act as the http-proxy, an iptables entry is created to redirect all traffic destined to the configured unused proxy IP to the kubernetes-master IP. This way any update to the Kubernetes master config is just a matter of manipulating iptables, which is transparent to all modules until dockerd needs to download from the remote repo.
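A minimal sketch of the kind of redirect rule described above, using the unused proxy IP from the verification steps below and a placeholder master address (illustrative only, not the exact rule the code installs):
```
MASTER_IP=10.0.0.10   # hypothetical kubernetes-master IP
# Redirect locally generated traffic aimed at the unused proxy IP to the k8s master
iptables -t nat -A OUTPUT -p tcp -d 172.16.1.1 -j DNAT --to-destination ${MASTER_IP}
```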
How to verify it
Configure a switch such that image repo is unreachable
Pre-configure dockerd with http_proxy.conf using an unused IP (e.g. 172.16.1.1)
Update ctrmgrd.service to invoke ctrmgrd.py with "-p" option.
Configure a k8s server, and deploy an image for feature with set_owner="kube"
Check whether the switch can successfully download the image.
Why I did it
This PR adds changes in sonic-config-engine to consume configuration data in the SONiC YANG schema and generate config_db entries.
How I did it
Add a new file, sonic_yang_cfg_generator, with functions to:
- parse YANG data JSON and convert it into config_db JSON format;
- validate the converted config_db entries to make sure all the dependencies and constraints are met.
Add a new option -Y to the sonic-cfggen command for this purpose.
Add unit tests.
This capability is supported only in the sonic-config-engine Python3 package.
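A hypothetical invocation of the new option (the input path is illustrative; output handling follows existing sonic-cfggen usage):
```
# Consume a YANG-schema config file, validate it, and print the resulting config_db data
sonic-cfggen -Y /etc/sonic/yang_config.json --print-data
```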
Signed-off-by: Yong Zhao yozhao@microsoft.com
Why I did it
Currently we leverage Supervisor to monitor the running status of critical processes in each container, and it is more reliable and flexible than doing this monitoring with Monit. So we removed the Monit-based monitoring of critical processes.
How I did it
I removed the script process_checker and corresponding Monit configuration entries of critical processes.
How to verify it
I verified this on the device str-7260cx3-acs-1.
Why I did it
In upgrade scenarios where config_db.json is not carried forward to the new image, the device could be left without TACACS credentials.
Added a service that triggers 5 minutes after boot and restores TACACS if /etc/sonic/old_config/tacacs.json is present.
How I did it
By adding a service that fires 5 minutes after boot.
This service applies the TACACS configuration if available.
How to verify it
Upgrade and watch the status of tacacs.timer & tacacs.service.
You may create /etc/sonic/old_config/tacacs.json with updated credentials
(within 5 minutes of boot) and see that it appears in the config and is persisted too.
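A short verification sketch based on the steps above (unit names per this description):
```
# Watch the delayed restore fire ~5 minutes after boot
systemctl list-timers | grep tacacs
systemctl status tacacs.timer tacacs.service
# Optionally drop updated credentials in place before the timer fires
ls -l /etc/sonic/old_config/tacacs.json
```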
Signed-off-by: Yong Zhao yozhao@microsoft.com
Why I did it
This PR aims to monitor the memory usage of the streaming telemetry container and restart it if its memory usage is larger than a pre-defined threshold.
How I did it
I use the system tool Monit to run a script, memory_checker, which periodically checks the memory usage of the streaming telemetry container. If the memory usage of the telemetry container is larger than the pre-defined threshold 10 times within 20 cycles, an alerting message is written into syslog and Monit runs the restart_service script to restart the streaming telemetry container.
How to verify it
I verified this implementation on device str-7260cx3-acs-1.
- Why I did it
To give SONiC Application Extension developers an environment to run and develop their apps.
- How I did it
Created sonic-sdk and sonic-sdk-buildenv dockers and their dbg versions.
- How to verify it
Build:
$ make -f slave target/sonic-sdk.gz target/sonic-sdk-buildenv.gz
Map priority 0 to TC 1 and priority 1 to TC 0
Sent traffic on priorities 0 and 1 and verified that it gets mapped correctly in hardware
Signed-off-by: Neetha John <nejo@microsoft.com>
Signed-off-by: Stepan Blyschak stepanb@nvidia.com
This PR is part of SONiC Application Extension
Depends on #5938
- Why I did it
To provide an infrastructure change in order to support SONiC Application Extension feature.
- How I did it
Label every installable SONiC Docker with a minimal required manifest and auto-generate packages.json file based on
installed SONiC images.
- How to verify it
Build an image, execute the following command:
admin@sonic:~$ docker inspect docker-snmp:1.0.0 | jq '.[0].Config.Labels["com.azure.sonic.manifest"]' -r | jq
cat the /var/lib/sonic-package-manager/packages.json file to verify all dockers are listed there.
Fixes #7364
99-default.link was always present in SONiC, but previous systemd versions (<247) had an issue and it did not take effect due to systemd/systemd#3374. With systemd 247 it now works.
However, this policy overrides the teamd-provided MAC address, which causes the teamd netdev to use a random MAC
address. Therefore, it needs to be disabled.
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
#### Why I did it
To build flashrom properly with dependency tracking.
#### How I did it
Moved flashrom code from platform/broadcom/sonic-platform-modules-dell/tools directory to src/flashrom directory.
At the end, the flashrom_0.9.7_amd64.deb package is built, which will be installed on the devices.
- Support compiling the SONiC ARM image on an ARM server. If the ARM image is compiled on an ARM server instead of using QEMU mode on an x86 server, compile time can be reduced significantly.
- Add kernel argument systemd.unified_cgroup_hierarchy=0 for upgrade systemd to version 247, according to #7228
- rename multiarch docker to sonic-slave-${distro}-march-${arch}
Co-authored-by: Xianghong Gu <xgu@centecnetworks.com>
Co-authored-by: Shi Lei <shil@centecnetworks.com>
Signed-off-by: vedganes <vedavinayagam.ganesan@nokia.com>
Changes for setting platform-specific LAG ID boundaries in the chassis
app DB. The platform-specific LAG ID boundaries are supplied via
chassisdb.conf. The lag_id_start and lag_id_end boundary values sourced
from this file are set in the chassis app DB, which will be used by the LAG ID
allocator to allocate unique LAG IDs in an atomic fashion.
- Why I did it
I made docker_img_ctl.j2 applicable to more dockers (including application extension dockers) by adding an option not to mount tmpfs on /tmp/ and /var/tmp/. In some applications /tmp/ is a different docker volume which can't be tmpfs.
Also, I added the ability to pass REPO[:TAG]|[@digest]/IMAGE_ID instead of just the REPO name.
- How I did it
Modified docker_img_ctl.j2 and docker makefiles.
- How to verify it
Run it on the switch.
- Why I did it
To allow SONiC Package Migration during a SONiC-to-SONiC upgrade, we need to start the docker daemon in a chroot-ed environment in the new SONiC filesystem.
Later this script will also be used to start dockerd in a chroot environment on SONiC.
- How I did it
Install a docker service script into /usr/lib/docker/ in SONiC filesystem.
- How to verify it
Install a SONiC image on the switch, mount the squashfs to some directory, mount an overlay rw layer over the squashfs, mount procfs and sysfs, and mount the docker library. Start docker using:
root@sonic:~$ /usr/lib/docker/docker.sh start
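A rough sketch of the verification steps above; all mount points and the image path are placeholders:
```
# Mount the new image's squashfs with an overlay rw layer, plus proc/sys and the docker library,
# then start dockerd inside the chroot via the installed script
mkdir -p /mnt/squash /mnt/rw /mnt/work /mnt/newroot
mount -t squashfs -o loop /host/image-<new-version>/fs.squashfs /mnt/squash
mount -t overlay overlay -o lowerdir=/mnt/squash,upperdir=/mnt/rw,workdir=/mnt/work /mnt/newroot
mount -t proc proc /mnt/newroot/proc
mount -t sysfs sysfs /mnt/newroot/sys
mount --bind /var/lib/docker /mnt/newroot/var/lib/docker
chroot /mnt/newroot /usr/lib/docker/docker.sh start
```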
Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
Why I did it
We skip the install of the CNI plugin, as we don't need it. But this leaves the node in a "not ready" state upon joining the master.
To fix this, we copy a dummy .conf file into /etc/cni/net.d.
How I did it
Keep this file in /usr/share/sonic/templates and copy to /etc/cni/net.d upon joining k8s master.
How to verify it
Upon configuring the master IP and enabling join, watch the node join and move to the ready state.
You may verify using the kubectl get nodes command.
SONiC Package Manager will need to auto-generate the start script using that template. For that, we need this template to be recorded in the SONiC filesystem.
Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
- Why I did it
Group all SONiC services together so they can be managed together. This will be used by the config reload command as a much simpler and more generic way to restart services.
- How I did it
Add services to sonic.target
- How to verify it
Together with Azure/sonic-utilities#1199
config reload -y
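Once the image is up, the grouping can also be inspected with standard systemd tooling (sketch):
```
# List the SONiC services now hanging off the common target
systemctl list-dependencies sonic.target
```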
Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
- Why I did it
The pcie configuration file is located under the plugin directory, not under the platform directory.
#6437
- How I did it
Move all pcie.yaml configuration files from the plugin to the platform directory.
Remove unnecessary timer to start pcie-check.service
Move pcie-check.service to sonic-host-services
- How to verify it
Verify on the device
* Add *MUX_CABLE_TABLE* to set of tables to clear on SWSS start, which
will clear HW_MUX_CABLE_TABLE and MUX_CABLE_TABLE
* Order swss to start before pmon to ensure that DBs are cleared before
xcvrd (running inside pmon) starts and re-populates the tables
Signed-off-by: Lawrence Lee <lawlee@microsoft.com>
Fix marvell-armhf build break
The azure-storage package depends on the cryptography package. Newer
versions of cryptography require the rust compiler, the correct version
for which is not readily available in buster. Hence we pre-install an
older version here to satisfy the azure-storage dependency.
Note: This is not a problem for other architectures as pre-built versions
of cryptography are available for those. This sequence can be removed
after upgrading to debian bullseye.
**- Why I did it**
To support FW upgrade on init.
**- How I did it**
Change timeout value
**- How to verify it**
I manually changed ASIC and Gearbox FW followed by hard reset in order for FW upgrade to take place on init.
Signed-off-by: liora <liora@nvidia.com>
- Why I did it
To move ‘sonic-host-service’ which is currently built as a separate package to ‘sonic-host-services' package.
- How I did it
- Moved 'sonic-host-server' to 'src/sonic-host-services' and included it as part of the python3 wheel.
- Other files were moved to 'src/sonic-host-services-data' and included as part of the deb package.
- Changed build option ‘INCLUDE_HOST_SERVICE’ to ‘ENABLE_HOST_SERVICE_ON_START’ for enabling sonic-hostservice at boot-up by default.
[multi_asic][vs]: Add dependency in teamd service to start after topology service.
- Why I did it
In multi-asic VS, topology service is run after database service to set up the internal asic topology.
swss and syncd have a dependency to start after topology service is run so that the interfaces are moved to right namespace and created in the right namespace. In case of multi-asic vs, during the initial boot up, when there is no configuration added, teamd service starts and swss/syncd do not start as topology service does not start. Upon loading configuration using config_db or minigraph, swss and sycnd start up , but teamd is not restarted as swss is not stopped and started. This causes teamd to be in a bad state and requires a reload of config.
- How I did it
Add a dependency in the teamd service so that it starts after the topology service has completed.
- How to verify it
No change in single-asic VS or on hardware platforms.
No change in the multi-asic regular image.
Change only in multi-asic VS. Bring up a multi-asic VS image without any configuration; the teamd service will fail to start due to a dependency failure. Load minigraph, start the topology service, load the configuration, and ensure all services come up.
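For illustration, the ordering described above can be expressed as a systemd drop-in like the one below; the unit and target names are assumptions here, and the real change edits the packaged teamd unit rather than adding a drop-in:
```
mkdir -p /etc/systemd/system/teamd.service.d
cat <<'EOF' > /etc/systemd/system/teamd.service.d/after-topology.conf
[Unit]
After=topology.service
EOF
systemctl daemon-reload
```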
Signed-off-by: SuvarnaMeenakshi <sumeenak@microsoft.com>
The following changes were done for ebtables:
- Support for multi-asic platforms. ebtables filters are installed in each namespace for multi-asic, not on the host. On single-asic they are installed on the host.
- For multi-asic platforms we don't want to install them on the host, otherwise namespace-to-namespace communication does not happen since ARP requests are not forwarded.
- Updated to use a text file to restore the ebtables rules rather than the binary format. Rules are restored as part of database docker init instead of rc.local.
- Removed the ebtables service files for buster, as they are not needed since the filters are restored/installed as part of database docker init.
All the pre-installed ebtables* binaries are the same as ebtables-legacy-*.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
The PortChannels were not getting cleaned up, as the cleanup activity was taking more than 10 seconds, which is the default docker timeout after which a SIGKILL is sent.
Fixes #6199
To check if it works out for this issue in 201911? #6503
This issue is seen significantly more in the master branch than in 201911 because the PortChannel cleanup takes more time in master. Tested on a DUT with 8 PortChannels.
master
admin@str-s6000-acs-8:~$ time sudo systemctl stop teamd
real 0m15.599s
user 0m0.061s
sys 0m0.038s
Sonic 201911.v58
admin@str-s6000-acs-8:~$ time sudo systemctl stop teamd
real 0m5.541s
user 0m0.020s
sys 0m0.028s
**- Why I did it**
This PR aims to monitor the running status of each container. Currently the auto-restart feature is enabled: if a critical process exits unexpectedly, the container will be restarted. If the container is restarted 3 times within 20 minutes, it will not run anymore unless we clear the flag manually using the command `sudo systemctl reset-failed <container_name>`.
**- How I did it**
We employ Monit to run a script. This script generates the expected running container list and compares it with the currently running containers. If there are containers which were expected to run but are not running, an alerting message is written into syslog.
**- How to verify it**
I tested this feature on a lab device `str-a7050-acs-3`, which has a single ASIC, and `str2-n3164-acs-3`, which is a multi-ASIC device. First I manually stopped a container by running the command `sudo systemctl stop <container_name>`, then I checked whether there was an alerting message in the syslog.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
- Make PDDF code compliant with both Python 2 and Python 3
- Align code with PEP8 standards using autopep8
- Build and install both Python 2 and Python 3 PDDF packages
The swss/teamd/syncd services were changed to always enabled
in commit fad481edc1 as a workaround
to prevent hostcfgd from starting services during the bootup process.
Commit 317a4b3410 introduced
waiting until full system bootup before updating feature states in hostcfgd.
Thus, the workaround introduced in commit fad481ed can be removed.
Signed-off-by: Guohan Lu <lguohan@gmail.com>
Signed-off-by: Prabhu Sreenivasan prabhu.sreenivasan@broadcom
What I did
Added support for snat, dnat and ipmc resources under CRM module.
How I did it
The new NAT feature adds new resources, snat_entry and dnat_entry, that need to be monitored. ipmc_entry tracks IP multicast resources used by the switch.
How to verify it
sonic-utilities tests and crm spytest
* First-cut image update for kubernetes support.
With this,
1) dockers dhcp_relay, lldp, pmon, radv, snmp, telemetry are enabled
for kube management
init_cfg.json configures set_owner as kube for these
2) Each docker's start.sh is updated to call container_startup.py to register going up
As part of this call, it registers the current owner as local/kube and its version
The images are built with their version ingrained into the image during build
3) Update all dockers' bash scripts to call 'container start/stop/wait' instead of 'docker start/stop/wait'.
For all locally managed containers, it calls docker commands, hence no change for locally managed ones.
4) Introduced a new ctrmgrd service that helps with the transition of ownership between kube & local and carries over any label updates from STATE-DB to the API server
5) hostcfgd updated to handle owner change
6) Reboot scripts are updated to tag kube-running images as local, so upon reboot they run the same image.
7) Added kube_commands.py to handle all updates with the Kubernetes API server -- dedicated to k8s interaction only.
Added source interface support for NTP.
Also made NTP start on Mgmt-VRF by default when configured.
**- How I did it**
1) Updated hostcfgd to listen to the global config NTP and NTP_SERVER tables and restart ntp whenever the configuration changes. The NTP table includes the source interface configuration.
2) The ntp script is updated to start on Mgmt-VRF by default when configured.
Signed-off-by: Prabhu Sreenivasan <prabhu.sreenivasan@broadcom>
Fixes #5663
- Why I did it
It's currently possible for the SNMP timer to conflict with config reload (specifically if the timer triggers while config reload is stopping the SWSS service). config reload triggers SWSS to shut down, which causes SNMP to shut down, which then conflicts with the SNMP timer causing SNMP to start up. See the linked issue for more details.
- How I did it
Including the After ordering dependency forces the SNMP timer to wait until SWSS finishes stopping, preventing the conflict. If there is an ordering dependency between two units (e.g. one unit is ordered After another), and one unit is shutting down while the other is starting up, the shutdown will always be ordered before the startup. In this case, that means the SNMP timer is forced to wait for the SWSS shutdown to complete. Only then can the SNMP timer proceed. See here for more details.
It's important to note that the After dependency will not cause SWSS to be started when the SNMP timer fires (assuming that SWSS has not yet been started). The existing Requisite dependency in the SNMP service will also not cause SWSS to be started, instead it will cause the SNMP service to fail if SWSS is not active.
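A quick way to confirm the ordering relationship on a device, assuming the dependency was added to snmp.timer against the swss service as described:
```
# The After= list of the timer should now include swss
systemctl show snmp.timer -p After | tr ' ' '\n' | grep -i swss
```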
Signed-off-by: Lawrence Lee <lawlee@microsoft.com>
Install the 'wheel' package in the host OS (along with python3 and python3-distutils, which are also needed for building some Python packages) to eliminate error messages like the following:
```
Running setup.py bdist_wheel for watchdog: started
Running setup.py bdist_wheel for watchdog: finished with status 'error'
Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-Qd3K08/watchdog/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-0AHpMe --python-tag cp27:
usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: -c --help [cmd1 cmd2 ...]
or: -c --help-commands
or: -c cmd --help
error: invalid command 'bdist_wheel'
----------------------------------------
Failed building wheel for watchdog
```
These error messages appear to have no impact on the image build, because the Python package seems to still get installed successfully afterward, just the building of a wheel package fails. Therefore, this is more of a cosmetic fix than an actual bug.
This is an addendum to https://github.com/Azure/sonic-buildimage/pull/6182.
Also upgrade pip and install more recent version of setuptools package via PyPI.
libxslt-dev and libz-dev are dependencies of lxml==4.6.1, which is required for pyangbind==0.8.1.
lxml-4.6.2-cp37-cp37m-manylinux1_x86_64.whl is downloaded directly on amd64, whereas on arm it is built from lxml-4.6.2.tar.gz.
Signed-off-by: Sabareesh Kumar Anandan <sanandan@marvell.com>
**- Why I did it**
To support dynamic buffer calculation.
This PR also depends on the following PRs for sub modules
- [sonic-swss: [buffermgr/bufferorch] Support dynamic buffer calculation #1338](https://github.com/Azure/sonic-swss/pull/1338)
- [sonic-swss-common: Dynamic buffer calculation #361](https://github.com/Azure/sonic-swss-common/pull/361)
- [sonic-utilities: Support dynamic buffer calculation #973](https://github.com/Azure/sonic-utilities/pull/973)
**- How I did it**
1. Introduce field `buffer_model` in `DEVICE_METADATA|localhost` to represent which buffer model is running in the system currently:
- `dynamic` for the dynamic buffer calculation model
- `traditional` for the traditional model in which the `pg_profile_lookup.ini` is used
2. Add the tables required for the feature:
- ASIC_TABLE in platform/\<vendor\>/asic_table.j2
- PERIPHERAL_TABLE in platform/\<vendor\>/peripheral_table.j2
- PORT_PERIPHERAL_TABLE on a per-platform basis in device/\<vendor\>/\<platform\>/port_peripheral_config.j2 for each platform with gearbox installed.
- DEFAULT_LOSSLESS_BUFFER_PARAMETER and LOSSLESS_TRAFFIC_PATTERN in files/build_templates/buffers_config.j2
- Add lossless PGs (3-4) for each port in files/build_templates/buffers_config.j2
3. Copy the newly introduced j2 files into the image and render them when the system starts
4. Update the CLI options for buffermgrd so that it can start in dynamic mode
5. Fetch the ASIC vendor name in orchagent:
- fetch the vendor name when the docker is created and pass it as a docker environment variable
- `buffermgrd` can use this passed-in variable
6. Clear buffer related tables from STATE_DB when swss docker starts
7. Update the src/sonic-config-engine/tests/sample_output/buffers-dell6100.json according to the buffer_config.j2
8. Remove buffer pool sizes for ingress pools and egress_lossy_pool
Update the buffer settings for dynamic buffer calculation
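For illustration, the new `buffer_model` field can be read or written with the usual config tooling; this is only a sketch of interacting with the field introduced in item 1, not a statement about supported runtime switching:
```
# Read the current buffer model from DEVICE_METADATA|localhost
sonic-cfggen -d -v "DEVICE_METADATA['localhost']['buffer_model']"
# Illustrative write of the field (values per the description above)
sonic-cfggen -a '{"DEVICE_METADATA": {"localhost": {"buffer_model": "dynamic"}}}' --write-to-db
```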
python2 is end-of-life and SONiC is moving to python3. This PR adds support for:
1. Mellanox SONiC platform API python3 support
2. Installing both the python2 and python3 versions of the Mellanox SONiC platform API for pmon and the host side
The ntp-systemd-wrapper file from files/image_config/ntp was not getting picked up. Added a line in sonic_debian_extension.j2 to copy over the file from files/image_config/ntp after installing the debian package.
Signed-off-by: Prabhu Sreenivasan <prabhu.sreenivasan@broadcom.com>
- Allow platform specific reboot script to be called after crash kernel has
finished copying the kernel vmcore
- Disable pcie advanced features when running crash kernel. This improves
reliability of the crash kernel to successfully create a vmcore and also
reboot
- Allow crash kernel to reboot if a panic is seen while it is generating a
vmcore
- Fix crash kernel to use the SONiC specific /usr/local/bin/reboot script
instead of the Linux reboot command /sbin/reboot
- Use sonic_platform as the kernel command line parameter to pass platform identifier string
Signed-off-by: Rajendra Dendukuri <rajendra.dendukuri@broadcom.com>
The service crashes when the platform boots due to missing waits.
/usr/bin/database.sh tries to operate on a missing socket and fails.
We now wait for the chassis database to be ready the same way we do for the database service.
Barefoot platform vendors' sonic_platform packages import the Python 'thrift' library. Previously, our custom-built package was being installed in the PMon container and host OS. However, we are only building a Python 2 version of that package, which was only intended for use with saithrift.
Fixes #6077
Under certain conditions, the sFlow service can start before
interface configurations are successfully applied. This will
cause hsflowd to get a socket error.
This fix ensures all interface configurations are successfully
applied before the sFlow service (hsflowd) starts.
During testing we saw the following error from hsflowd if the interface configs were not successfully applied before hsflowd started:
ERR sflow#hsflowd: socket sendto error: Network is unreachable
and no FLOW samples can be seen. This can be consistently reproduced if you force the sFlow service to start before interface-config.service.
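One way to confirm the ordering on a built image (a sketch; the sFlow unit name is assumed to be sflow.service):
```
# interfaces-config should appear among the units sflow is ordered after
systemctl list-dependencies --after sflow.service | grep -i interface
```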
Signed-off-by: Garrick He <garrick_he@dell.com>
Submodule updates include the following commits:
* src/sonic-utilities 9dc58ea...f9eb739 (18):
> Remove unnecessary calls to str.encode() now that the package is Python 3; Fix deprecation warning (#1260)
> [generate_dump] Ignoring file/directory not found Errors (#1201)
> Fixed porstat rate and util issues (#1140)
> fix error: interface counters is mismatch after warm-reboot (#1099)
> Remove unnecessary calls to str.decode() now that the package is Python 3 (#1255)
> [acl-loader] Make list sorting compliant with Python 3 (#1257)
> Replace hard-coded fast-reboot with variable. And some typo corrections (#1254)
> [configlet][portconfig] Remove calls to dict.has_key() which is not available in Python 3 (#1247)
> Remove unnecessary conversions to list() and calls to dict.keys() (#1243)
> Clean up LGTM alerts (#1239)
> Add 'requests' as install dependency in setup.py (#1240)
> Convert to Python 3 (#1128)
> Fix mock SonicV2Connector in python3: use decode_responses mode so caller code will be the same as python2 (#1238)
> [tests] Do not trim from PATH if we did not append to it; Clean up/fix shebangs in scripts (#1233)
> Updates to bgp config and show commands with BGP_INTERNAL_NEIGHBOR table (#1224)
> [cli]: NAT show commands newline issue after migrated to Python3 (#1204)
> [doc]: Update Command-Reference.md (#1231)
> Added 'import sys' in feature.py file (#1232)
* src/sonic-py-swsssdk 9d9f0c6...1664be9 (2):
> Fix: no need to decode() after redis client scan, so it will work for both python2 and python3 (#96)
> FieldValueMap `contains`(`in`) will also work when migrated to libswsscommon(C++ with SWIG wrapper) (#94)
- Also fix Python 3-related issues:
- Use integer (floor) division in config_samples.py (sonic-config-engine)
- Replace print statement with print function in eeprom.py plugin for x86_64-kvm_x86_64-r0 platform
- Update all platform plugins to be compatible with both Python 2 and Python 3
- Remove shebangs from plugins files which are not intended to be executable
- Replace tabs with spaces in Python plugin files and fix alignment, because Python 3 is more strict
- Remove trailing whitespace from plugins files
Added a new flag value 'always_enabled' for the state and auto-restart fields of the feature table.
init_cfg.json is updated to initialize the state field of the database/swss/syncd/teamd features and the auto-restart field of the database feature
as always_enabled.
Once the state/auto-restart value is initialized as "always_enabled" it is immutable and cannot be changed via feature config commands (config feature ...). PR Azure/sonic-utilities#1271
hostcfgd will not take any action if the state field value is 'always_enabled'.
Since we now have the always_enabled value for auto-restart, supervisor-proc-exit-listener is updated
to not have a special check for database and to always rely on the value from the Feature table.
- Why I did it
Add reboot history to the State DB so that it can be used by the telemetry service.
- How I did it
Split the process-reboot-cause service into determine-reboot-cause and process-reboot-cause:
determine-reboot-cause determines the reboot cause
process-reboot-cause parses the reboot cause files and puts the reboot history into the state DB
Moved to sonic-host-service* packages
- How to verify it
Performed unit test and tested on DUT
Summary: Move teamd functions to a new service script
Motivation: To segregate teamd functions in one common place. fast-reboot script calls teamd functions that should ideally be replaced by a simple call to a service script.
Changes: New teamd service script and path modification from /usr/bin/teamd.sh to /usr/local/bin/teamd.sh
fast-reboot script (in sonic-utilities) modification (to use new teamd.sh to stop teamd) should follow soon after this change.
Verification: VS image tests.
Signed-off-by: Vaibhav Hemant Dixit <vaibhav.dixit@microsoft.com>
Co-authored-by: heidi.ou@alibaba-inc.com <heidi.ou@alibaba-inc.com>
Co-authored-by: Ying Xie <ying.xie@microsoft.com>
This change introduces PDDF which is described here: https://github.com/Azure/SONiC/pull/536
Most of the platform bring up effort goes in developing the platform device drivers, SONiC platform APIs and validating them. Typically each platform vendor writes their own drivers and platform APIs which is very tailor made to that platform. This involves writing code, building, installing it on the target platform devices and testing. Many of the details of the platform are hard coded into these drivers, from the HW spec. They go through this cycle repetitively till everything works fine, and is validated before upstreaming the code.
PDDF aims to make this platform driver and platform APIs development process much simpler by providing a data driven development framework. This is enabled by:
- JSON descriptor files for platform data
- Generic data-driven drivers for various devices
- Generic SONiC platform APIs
- Vendor-specific extensions for customisation and extensibility
Signed-off-by: Fuzail Khan <fuzail.khan@broadcom.com>
* Remove 'backend' from device type strings so that backend devices ('BackEndToRRouter' and 'BackEndLeafRouter') are given the same cable lengths as regular device types.
Signed-off-by: Lawrence Lee <lawlee@microsoft.com>
Treat devices that are ToRRouters (ToRRouters and BackEndToRRouters) the same when rendering templates
Except for BackEndToRRouters belonging to a storage cluster, since these devices have extra sub-interfaces created
Treat devices that are LeafRouters (LeafRouters and BackEndLeafRouters) the same when rendering templates
Signed-off-by: Lawrence Lee <lawlee@microsoft.com>
The getMount() API was not updated to handle multi-asic platforms.
Updated getMount() to return abspath() for the Docker mount point
and use that for the mount point comparison.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
To consolidate host services and install via packages instead of file-by-file, also as part of migrating all of SONiC to Python 3, as Python 2 is no longer supported.
- Convert core_uploader.py script to Python 3
- Use logger from sonic-py-common for uniform logging
- Reorganize imports alphabetically per PEP8 standard
- Two blank lines precede functions per PEP8 standard
- Remove unnecessary global variable declarations
* Build and install openssh from source
* Copy openssh deb package to dest folder
* Update make rule
* Update sonic debian extension
* Append empty line before EOF
* Update openssh patch
* Add openssh-server to base image dependency
* Fix indent type
* Fix comments
* Use commit id instead of tag id and add comment
Signed-off-by: Jing Kan jika@microsoft.com
As part of the transition from Python 2 to Python 3, we are installing both pip2 and pip3 in the slave and config-engine containers. This PR replaces calls to `pip` in these containers with an explicit call to `pip2` to ensure the proper version of pip is executed, no matter which version of pip is aliased to `pip`, as we no longer rely on that alias.
Also some other pip-related cleanup
To consolidate host services and install via packages instead of file-by-file, also as part of migrating all of SONiC to Python 3, as Python 2 is no longer supported, convert caclmgrd to Python 3 and add to sonic-host-services package
Remove syncd from the critical process list because
the gbsyncd process will exit on platforms without a
gearbox.
Closes #5623
Signed-off-by: Guohan Lu <lguohan@gmail.com>
The orchagent and syncd need to have the same default synchronous mode configuration. This PR adds a template file to translate the default value in CONFIG_DB (empty field) to an explicit mode so that the orchagent and syncd could have the same default mode.
**- Why I did it**
Install all host services and their data files in package format rather than file-by-file
**- How I did it**
- Create sonic-host-services Python wheel package, currently including procdockerstatsd
- Also add the framework for unit tests by adding one simple procdockerstatsd test case
- Create sonic-host-services-data Debian package which is responsible for installing the related systemd unit files to control the services in the Python wheel. This package will also be responsible for installing any Jinja2 templates and other data files needed by the host services.
Use the correct chassisdb.conf path while bringing up the chassis_db service on a VoQ modular switch.
Resolves #5631
Signed-off-by: Honggang Xu <hxu@arista.com>
Bring up the chassisdb service on the SONiC switch according to the design in the
Distributed Forwarding in VoQ Arch HLD.
Signed-off-by: Honggang Xu <hxu@arista.com>
**- Why I did it**
To bring up new ChassisDB service in sonic as designed in ['Distributed forwarding in a VOQ architecture HLD' ](90c1289eaf/doc/chassis/architecture.md).
**- How I did it**
Implement the section 2.3.1 Global DB Organization of the VOQ architecture HLD.
**- How to verify it**
The ChassisDB service won't start without the chassisdb.conf file on existing platforms.
The ChassisDB service is accessible with the global.conf file in the distributed architecture.
Signed-off-by: Honggang Xu <hxu@arista.com>
We were building our own python-click package because we needed features/bug fixes available as of version 7.0.0, but the most recent version available from Debian was in the 6.x range.
"Click" is needed for building/testing and installing sonic-utilities. Now that we are building sonic-utilities as a wheel, with Click specified as a dependency in the setup.py file, setuptools will install a more recent version of Click in the sonic-slave-buster container when building the package, and pip will install a more recent version of Click in the host OS of SONiC when installing the sonic-utilities package. Also, we don't need to worry about installing the Python 2 or 3 version of the package, as the proper one will be installed as necessary.
* system health first commit
* system health daemon first commit
* Finish healthd
* Changes due to lower layer logic change
* Get ASIC temperature from TEMPERATURE_INFO table
* Add system health make rule and service files
* fix bugs found during manual test
* Change make file to install system-health library to host
* Set system LED to blink on bootup time
* Caught exceptions in system health checker to make it more robust
* fix issue that fan/psu presence will always be true
* fix issue for external checker
* move system-health service to right after rc-local service
* Set system-health service start after database service
* Get system up time via /proc/uptime
* Provide more information in stat for CLI to use
* fix typo
* Set default category to External for external checker
* If external checker reported OK, save it to stat too
* Trim string for external checker output
* fix issue: PSU voltage check always return OK
* Add unit test cases for system health library
* Fix LGTM warnings
* fix demo comments: 1. get boot up timeout from monit configuration file; 2. set system led in library instead of daemon
* Remove boot_timeout configuration because it will get from monit config file
* Fix argument miss
* fix unit test failure
* fix issue: summary status is not correct
* Fix format issues found in code review
* rename th to threshold to make it clearer
* Fix review comment: 1. add a .dep file for system health; 2. deprecated daemon_base and uses sonic-py-common instead
* Fix unit test failure
* Fix LGTM alert
* Fix LGTM alert
* Fix review comments
* Fix review comment
* 1. Add relevant comments for system health; 2. rename external_checker to user_define_checker
* Ignore check for unknown service type
* Fix unit test issue
* Rename user define checker to user defined checker
* Rename user_define_checkers to user_defined_checkers for configuration file
* Rename file user_define_checker.py -> user_defined_checker.py
* Fix typo
* Adjust import order for config.py
Co-authored-by: Joe LeVeque <jleveque@users.noreply.github.com>
* Adjust import order for src/system-health/health_checker/hardware_checker.py
Co-authored-by: Joe LeVeque <jleveque@users.noreply.github.com>
* Adjust import order for src/system-health/scripts/healthd
Co-authored-by: Joe LeVeque <jleveque@users.noreply.github.com>
* Adjust import orders in src/system-health/tests/test_system_health.py
* Fix typo
* Add new line after import
* If system health configuration file not exist, healthd should exit
* Fix indent and enable pytest coverage
* Fix typo
* Fix typo
* Remove global logger and use log functions inherited from super class
* Change info level logger to notice level
Co-authored-by: Joe LeVeque <jleveque@users.noreply.github.com>
* [Multi-Asic] Forward SNMP requests destined to loopback IP, and coming in through the front panel interface
present in the network namespace, to SNMP agent running in the linux host.
* Updates based on comments
* Further updates in docker_image_ctl.j2 and caclmgrd
* Change the variable for net config file.
* Updated the comments in the code.
* No need to clean up the existing NAT rules if present, which could be created by some other process.
* Delete our rule first and add it back, to take care of caclmgrd restart.
Another benefit is that we delete only our rules, rather than earlier approach of "iptables -F" which cleans up all rules.
* Keeping the original logic to clean the NAT entries, to revisit when the NAT feature is added in the namespace.
* Missing updates to log_info call.
* buildimage: Add gearbox phy device files and a new physyncd docker to support VS gearbox phy feature
* scripts and configuration needed to support a second syncd docker (physyncd)
* physyncd supports gearbox device and phy SAI APIs and runs multiple instances of syncd, one per phy in the device
* support for VS target (sonic-sairedis vslib has been extended to support a virtual BCM81724 gearbox PHY).
HLD is located at b817a12fd8/doc/gearbox/gearbox_mgr_design.md
**- Why I did it**
This work is part of the gearbox phy joint effort between Microsoft and Broadcom, and is based
on multi-switch support in sonic-sairedis.
**- How I did it**
Overall feature was implemented across several projects. The collective pull requests (some in late stages of review at this point):
https://github.com/Azure/sonic-utilities/pull/931 - CLI (merged)
https://github.com/Azure/sonic-swss-common/pull/347 - Minor changes (merged)
https://github.com/Azure/sonic-swss/pull/1321 - gearsyncd, config parsers, changes to orchagent to create gearbox phy on supported systems
https://github.com/Azure/sonic-sairedis/pull/624 - physyncd, virtual BCM81724 gearbox phy added to vslib
**- How to verify it**
In a vslib build:
root@sonic:/home/admin# show gearbox interfaces status
PHY Id Interface MAC Lanes MAC Lane Speed PHY Lanes PHY Lane Speed Line Lanes Line Lane Speed Oper Admin
-------- ----------- --------------- ---------------- --------------- ---------------- ------------ ----------------- ------ -------
1 Ethernet48 121,122,123,124 25G 200,201,202,203 25G 204,205 50G down down
1 Ethernet49 125,126,127,128 25G 206,207,208,209 25G 210,211 50G down down
1 Ethernet50 69,70,71,72 25G 212,213,214,215 25G 216 100G down down
In addition, docker ps | grep phy should show a physyncd docker running.
Signed-off-by: syd.logan@broadcom.com
We want to let Monit unmonitor the processes in containers which are disabled in the `FEATURE` table so that
Monit will not generate false alerting messages into the syslog.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
buffer_default_*.j2, because of which the internal cable length never gets
defined, causing a failure in the test case test_multinpu_cfggen.py.
Signed-off-by: Abhishek Dosi <abdosi@abdosi-ubuntu-vm0.nwp1qucpfg5ejooejenqshkj3e.cx.internal.cloudapp.net>
Co-authored-by: Abhishek Dosi <abdosi@abdosi-ubuntu-vm0.nwp1qucpfg5ejooejenqshkj3e.cx.internal.cloudapp.net>
* Enhanced Feature Table state enable/disable for multi-asic platforms.
On multi-asic, some features have a service per asic, so we need to
get the list of all services.
Also updated the logic to return if any one of the systemctl commands returns a failure,
and to make sure the syslog message about a feature getting enabled/disabled only comes when
all commands are successful.
Moved the service list get api from sonic-util to sonic-py-common
Signed-off-by: Abhishek Dosi <abdosi@abdosi-ubuntu-vm0.nwp1qucpfg5ejooejenqshkj3e.cx.internal.cloudapp.net>
* Make sure to return None for both service lists in case of error.
Signed-off-by: Abhishek Dosi <abdosi@abdosi-ubuntu-vm0.nwp1qucpfg5ejooejenqshkj3e.cx.internal.cloudapp.net>
* Return empty list as fail condition
Signed-off-by: Abhishek Dosi <abdosi@abdosi-ubuntu-vm0.nwp1qucpfg5ejooejenqshkj3e.cx.internal.cloudapp.net>
* Address Review Comments.
Made init_cfg.json.j2 knowledgeable of whether a Feature
service is global scope or per-asic scope.
Signed-off-by: Abhishek Dosi <abdosi@abdosi-ubuntu-vm0.nwp1qucpfg5ejooejenqshkj3e.cx.internal.cloudapp.net>
* Fix merge conflict
* Address Review Comment.
Signed-off-by: Abhishek Dosi <abdosi@abdosi-ubuntu-vm0.nwp1qucpfg5ejooejenqshkj3e.cx.internal.cloudapp.net>
Co-authored-by: Abhishek Dosi <abdosi@abdosi-ubuntu-vm0.nwp1qucpfg5ejooejenqshkj3e.cx.internal.cloudapp.net>
We are moving toward building all Python packages for SONiC as wheel packages rather than Debian packages. This will also allow us to more easily transition to Python 3.
Python files are now packaged in the "sonic-utilities" Python wheel. Data files are now packaged in the "sonic-utilities-data" Debian package.
**- How I did it**
- Build and install sonic-utilities as a Python package
- Remove explicit installation of wheel dependencies, as these will now get installed implicitly by pip when installing sonic-utilities as a wheel
- Build and install new sonic-utilities-data package to install data files required by sonic-utilities applications
- Update all references to sonic-utilities scripts/entrypoints to either reference the new /usr/local/bin/ location or remove absolute path entirely where applicable
Submodule updates:
* src/sonic-utilities aa27dd9...2244d7b (5):
> Support building sonic-utilities as a Python wheel package instead of a Debian package (#1122)
> [consutil] Display remote device name in show command (#1120)
> [vrf] fix check state_db error when vrf moving (#1119)
> [consutil] Fix issue where the ConfigDBConnector's reference is missing (#1117)
> Update to make config load/reload backward compatible. (#1115)
* src/sonic-ztp dd025bc...911d622 (1):
> Update paths to reflect new sonic-utilities install location, /usr/local/bin/ (#19)
This PR limits the number of calls to sonic-cfggen to one call
per iteration instead of the current 3 calls per iteration.
The PR also installs jq on the host for future scripts, if needed.
signed-off-by: Tamer Ahmed <tamer.ahmed@microsoft.com>
Introduced a new build parameter 'SONIC_IMAGE_VERSION' that allows build
system users to build SONiC image with a specific version string. If
'SONIC_IMAGE_VERSION' was not passed by the user, SONIC_IMAGE_VERSION will be
set to the output of functions.sh:sonic_get_version function.
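Example build invocation (the image target is illustrative):
```
# Build with an explicit version string; if omitted, functions.sh:sonic_get_version supplies the default
make SONIC_IMAGE_VERSION=SONiC.my-test-build.123 target/sonic-mellanox.bin
```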
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
sonic-cfggen is now using a Unix domain socket for the Redis DB. The socket
is created using the root account. Subsequently, services that are started
as admin fail to start. This PR creates a redis group and adds the admin
user to the redis group. It also grants read/write access on redis.sock
to redis group members.
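A sketch of the resulting setup; the group name follows the description, while the socket path and the exact commands are assumptions for illustration:
```
groupadd -f redis                      # create the redis group
usermod -aG redis admin                # add the admin user to it
chgrp redis /var/run/redis/redis.sock  # assumed socket location
chmod g+rw /var/run/redis/redis.sock   # grant read/write to group members
```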
signed-off-by: Tamer Ahmed <tamer.ahmed@microsoft.com>
The management framework provides management-plane services like REST and
CLI which are not needed right after boot; instead, by delaying this
service we give some more CPU to data plane and control plane services
during fast/warm boot.
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
Add a master switch so that the sync/async mode can be configured.
Example usage of the switch:
1. Configure mode while building an image
`make ENABLE_SYNCHRONOUS_MODE=y <target>`
2. Configure when the device is running
Change CONFIG_DB with `sonic-cfggen -a '{"DEVICE_METADATA":{"localhost": {"synchronous_mode": "enable"}}}' --write-to-db`
Restart swss with `systemctl restart swss`
- Why I did it
When SONiC is configured with the management framework and/or telemetry services, the applications running inside those containers need to access some functionality on the host system. The following is a non-exhaustive list of such functionality:
- Image management
- Configuration save and load
- ZTP enable/disable and status
- Show tech support
- How I did it
The host service is a Python process that listens for requests via D-Bus. It will then service those requests and send a response back to the requestor.
This PR only introduces the host service infrastructure. Applications that need access to the host services must add applets that will register on D-Bus endpoints to service the appropriate functionality.
- How to verify it
- Description for the changelog
Add SONiC Host Service for container to execute select commands in host
Signed-off-by: Nirenjan Krishnan <Nirenjan.Krishnan@dell.com>
startup when doing a redis PING, since database_config.json, which is
generated from the jinja2 template, is still not ready.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Support for Control Plane ACL's for Multi-asic Platforms.
The following changes were done:
1) Moved from using a blocking listen() on Config DB to the select() model
via python-swsscommon, since we have to wait on events from multiple
config DBs
2) Since python-swsscommon is not available on the host, added libswsscommon and python-swsscommon
and dependent packages to the base image (host environment)
3) Made iptables rules get programmed in all namespaces using ip netns exec
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Address Review Comments
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Fix Review Comments
* Fix Comments
* Added Change for Multi-asic to have iptables
rules to accept internal docker tcp/udp traffic
needed for syslog and redis-tcp connection.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Fix Review Comments
* Added more comments on logic.
* Fixed all warning/errors reported by http://pep8online.com/
other than line > 80 characters.
* Fix Comment
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Verified with swsscommon package. Fix issue for single asic platforms.
* Moved to new python package
* Address Review Comments.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Address Review Comments.
SNMP and Telemetry services are not critical to switch startup.
They also cause fast-reboot not to meet timing requirements.
In order to delay their start, those services are associated with systemd
timer units; however, when hostcfgd initiates a service start, it starts
the service and not the timer. This PR fixes the issue by
starting the timer associated with the systemd unit.
signed-off-by: Tamer Ahmed <tamer.ahmed@microsoft.com>
Removes installation of kube-proxy (117 MB) and flannel (53 MB) images from Kubernetes-enabled devices. These images are tested to be unnecessary for our use case, as we do not rely on ClusterIPs for Kubernetes Services or a CNI for pod networking.
1. remove container feature table
2. do not generate feature entry if the feature is not included
in the image
3. rename ENABLE_* to INCLUDE_* for better clarity
4. rename feature status to feature state
5. [submodule]: update sonic-utilities
* 9700e45 2020-08-03 | [show/config]: combine feature and container feature cli (#1015) (HEAD, origin/master, origin/HEAD) [lguohan]
* c9d3550 2020-08-03 | [tests]: fix drops_group_test failure on second run (#1023) [lguohan]
* dfaae69 2020-08-03 | [lldpshow]: Fix input device is not a TTY error (#1016) [Arun Saravanan Balachandran]
* 216688e 2020-08-02 | [tests]: rename sonic-utilitie-tests to tests (#1022) [lguohan]
Signed-off-by: Guohan Lu <lguohan@gmail.com>
As part of consolidating all common Python-based functionality into the new sonic-py-common package, this pull request:
1. Redirects all Python applications/scripts in sonic-buildimage repo which previously imported sonic_device_util or sonic_daemon_base to instead import sonic-py-common, which was added in https://github.com/Azure/sonic-buildimage/pull/5003
2. Replaces all calls to `sonic_device_util.get_platform_info()` to instead call `sonic_py_common.get_platform()` and removes any calls to `sonic_device_util.get_machine_info()` which are no longer necessary (i.e., those which were only used to pass the results to `sonic_device_util.get_platform_info()`).
3. Removes unused imports to the now-deprecated sonic-daemon-base package and sonic_device_util.py module
This is the next step toward resolving https://github.com/Azure/sonic-buildimage/issues/4999
Also reverted my previous change in which device_info.get_platform() would first try obtaining the platform ID string from Config DB and fall back to gathering it from machine.conf upon failure because this function is called by sonic-cfggen before the data is in the DB, in which case, the db_connect() call will hang indefinitely, which was not the behavior I expected. As of now, the function will always reference machine.conf.
* [platform] Add Support For Environment Variable
This PR adds the ability to read an environment file from /etc/sonic.
The file contains immutable SONiC config attributes such as platform,
hwsku, version, device_type. The aim is to minimize calls being made
into sonic-cfggen during boot time.
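A sketch of how such a file could be consumed, assuming shell-style KEY=VALUE contents; the file name and variable names below are illustrative, not taken from this change:
```
# Source the environment file instead of invoking sonic-cfggen at boot time
. /etc/sonic/sonic-environment   # file name assumed for illustration
echo "platform=${PLATFORM} hwsku=${HWSKU}"
```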
signed-off-by: Tamer Ahmed <tamer.ahmed@microsoft.com>
Consolidate common SONiC Python-language functionality into one shared package (sonic-py-common) and eliminate duplicate code.
The package currently includes three modules:
- daemon_base
- device_info
- logger
Otherwise, it may cause issues for warm restart and warm reboot.
A warm restart of swss will start nat, which is not expected for warm
restart. It is also observed that during warm-reboot script execution
the nat container gets started after it was killed. This causes removal of
the nat dump generated by nat previously.
A check [ -f /host/warmboot/nat/nat_entries.dump ] || echo "NAT dump
does not exists" was added right before kexec:
```
Fri Jul 17 10:47:16 UTC 2020 Prepare MLNX ASIC to fastfast-reboot:
install new FW if required
Fri Jul 17 10:47:18 UTC 2020 Pausing orchagent ...
Fri Jul 17 10:47:18 UTC 2020 Stopping nat ...
Fri Jul 17 10:47:18 UTC 2020 Stopped nat ...
Fri Jul 17 10:47:18 UTC 2020 Stopping radv ...
Fri Jul 17 10:47:19 UTC 2020 Stopping bgp ...
Fri Jul 17 10:47:19 UTC 2020 Stopped bgp ...
Fri Jul 17 10:47:21 UTC 2020 Initialize pre-shutdown ...
Fri Jul 17 10:47:21 UTC 2020 Requesting pre-shutdown ...
Fri Jul 17 10:47:22 UTC 2020 Waiting for pre-shutdown ...
Fri Jul 17 10:47:24 UTC 2020 Pre-shutdown succeeded ...
Fri Jul 17 10:47:24 UTC 2020 Backing up database ...
Fri Jul 17 10:47:25 UTC 2020 Stopping teamd ...
Fri Jul 17 10:47:25 UTC 2020 Stopped teamd ...
Fri Jul 17 10:47:25 UTC 2020 Stopping syncd ...
Fri Jul 17 10:47:35 UTC 2020 Stopped syncd ...
Fri Jul 17 10:47:35 UTC 2020 Stopping all remaining containers ...
Warning: Stopping telemetry.service, but it can still be activated by:
telemetry.timer
Fri Jul 17 10:47:37 UTC 2020 Stopped all remaining containers ...
NAT dump does not exists
Fri Jul 17 10:47:39 UTC 2020 Rebooting with /sbin/kexec -e to
SONiC-OS-201911.140-08245093 ...
```
With this change, I executed warm-reboot 10 times without hitting this
issue, while without this change the issue is easily reproducible on almost
every warm-reboot run.
Signed-off-by: Stepan Blyschak <stepanb@mellanox.com>
* [sonic-buildimage] Move BGP warm reboot scripts into BGP service /usr/local/bin
* Revert "[sonic-buildimage] Move BGP warm reboot scripts into BGP service /usr/local/bin"
This reverts commit d16d163fc4.
* [sonic-buildimage] Move BGP warm reboot script to BGP service
* [sonic-buildimage] Move BGP warm reboot script to BGP service
* [sonic-buildimage] Move BGP warm reboot script to BGP service
- access DB correctly
* Address code review comments, also change file mode of bgp.sh (+x)
* Address code review comments, also change the file mode of bgp.sh (+x)
* BGP warm reboot script to service, also handle fast boot as indicated by flag saved in StateDB
* BGP warm reboot script to service, code review comments on space alignment
* BGP warm reboot script to service: remove uncesseary space
* BGP warm reboot script to service: replace tab with space
* Code review comments: -) use new multi-db api -) add ignore error from zebra in case it's not configured
* Integrate with multi-ASIC changes committed recently
Co-authored-by: heidi.ou@alibaba-inc.com <heidi.ou@alibaba-inc.com>
* PCIe Monitor service
* Add rescan to pcie-mon.service when it fails to get all pcie devices
* space
* Clean up
* review comments
* update the pcie status in state db
* update the failed pcie status once at the end
* Update the pcie_status in STATE_DB and rename the service
* Add log to exit the service if the configuration file doesn't exist.
* fix the build failure
* Redo the pcie rescan for pcie-check failed case.
* review comments
* review comments
* review comments
Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>
This PR has changes to support accessing the bcmsh and bcmcmd utilities on multi-ASIC devices.
Changes done
- move the link of /var/run/sswsyncd from docker-syncd-brcm.mk to docker_image_ctl.j2
- update the bcmsh and bcmcmd scripts to take -n [ASIC_ID] as an argument on multi ASIC platforms
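Example usage on a multi-ASIC platform after this change (the ASIC index and the Broadcom shell command are illustrative):
```
# Run a Broadcom shell command against ASIC instance 1
bcmcmd -n 1 "ps"
```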
* [sonic-buildimage] Changes to make network-specific sysctl
common for both host and docker namespaces (in multi-npu).
This change was triggered by an issue found on multi-npu platforms
where in the docker namespace
net.ipv6.conf.all.forwarding was 0 (should be 1), because of
which RS/RA messages were triggered and a link-local router was learnt.
Besides this, there were some other sysctl.net.ipv6* params whose value
in the docker namespace was not the same as in the host namespace.
So, to make sure the host and docker namespaces are always in sync,
created a common file that lists all sysctl.net.* params and is used
both by the host and docker namespaces. Any change will get applied
to both namespaces.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Address Review Comments and made sure to invoke augtool
only once and do string concatenation of all set commands
* Address Review Comments.
Add changes for syslog support for containers running in namespaces on multi ASIC platforms.
On Multi ASIC platforms
Rsyslog service is only running on the host. There is no rsyslog service running in each namespace.
On multi ASIC platforms the rsyslog service on the host will be listening on the docker0 ip address instead of loopback address.
The rsyslog.conf in the containers is modified to have the omfwd target IP be the docker0 IP address instead of the loopback IP.
Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>
* Changes to make default route programming
correct on multi-asic platforms where frr is not running
in the host namespace. The change is to set the correct administrative distance.
Also make the NAMESPACE* environment variables available to all dockers
so that they can be used when needed.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Fix review comments
* Review comment: add the default route
only if a default route exists and the delete is successful.
* [systemd-generator]: Fix the code to make sure that dependencies
of host services are generated correctly for multi-asic platforms.
Add code to make sure that systemd timer files are also modified
to add the correct service dependency for multi-asic platforms.
Signed-off-by: SuvarnaMeenakshi <sumeenak@microsoft.com>
* [systemd-generator]: Minor fix, remove debug code and
remove unused variable.
When building the SONiC image, use systemd to mask all services which are set to "disabled" in init_cfg.json.
This PR depends on https://github.com/Azure/sonic-utilities/pull/944, otherwise `config load_minigraph` will fail when trying to restart disabled services.
* Fix the build on 201911 (Stretch), where the directory
/usr/lib/systemd/system/ does not exist, so it is created
manually. The change should not harm master (Buster), where
the directory is created by Linux.
* Fix as per review comments
**- Why I did it**
To ensure telemetry service is enabled by default after installing a fresh SONiC image
**- How I did it**
Set telemetry feature status to "enabled" when generating init_cfg.json file
**- What I did**
#### wheel package Makefiles
- wheel package Makefiles for sonic-yang-mgmt package.
#### libyang Python APIs:
- python APIs based on libyang
- functions to load/merge yang models and Yang data files
- function to validate data trees based on Yang models
- functions to merge yang data files/trees
- add/set/delete node in schema and data trees
- find data/schema nodes from xpath from the Yang data/schema tree in memory
- find dependencies
- dump the data tree in json/xml
#### Extension of libyang Python APIs:
- Cropping input config based on the Yang model.
- Translating input config based on the Yang model.
- Reverse-translating input config based on the Yang model.
- Finding the xpath of a port, a port leaf, and a yang list.
- On deletion, finding whether a node is a key of a list; if yes, deleting the parent.
Signed-off-by: Praveen Chaudhary pchaudhary@linkedin.com
Signed-off-by: Ping Mao pmao@linkedin.com
- Bug fix: fixed an issue where the nps ko file was not loaded due to a wrong service file name
- Optimize the code to reduce changes due to the kernel upgrade
- Remove the nephos ko file load from swss.service.j2 because it is already loaded in syncd.service.j2
* Changes for LLDP on multi-NPU platforms:
a) Enable LLDP in the host namespace for the management port
b) Make sure the management IP is available in the per-asic namespace,
needed for the LLDP chassis configuration
c) Make sure the chassis mac-address is correct in the per-asic namespace
d) Do not run lldp on eth0 of the per-asic namespace and avoid chassis
configuration for the same
e) Use the Linux hostname instead of the one from Device Metadata for the lldp chassis
configuration, since on multi-npu platforms the device metadata hostname
will be different
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Address review comment with the following changes:
a) Use the Device Metadata hostname even in the per-namespace container.
Updated the minigraph parsing accordingly to use it as the system
hostname, and added a new key for the asic name
b) Minigraph changes to have the MGMT_INTERFACE key in the per asic/namespace
config as well, as needed by LLDP for setting the chassis management IP.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* Address Review Comments
Dynamic threshold setting changed to 0 and WRED profile green min threshold set to 250000 for Tomahawk devices
Changed the dynamic threshold settings in pg_profile_lookup.ini
Added a macro for WRED profiles in qos.json.j2 for Tomahawk devices
Necessary changes made in qos.config.j2 to use the macro if present
Signed-off-by: Neetha John <nejo@microsoft.com>
* Multi DB with namespace support: introduce the database_global.json file
to support accessing DBs in other namespaces for services running on the
Linux host
* Updates based on comments
* Adding the j2 templates for database_config and database_global files.
* Updating to retrieve the redis DIR's to be mounted from database_global.json file.
* Additional check to see if asic.conf file exists before sourcing it.
* Updates based on PR comments discussion.
* Review comments update
* Updates to the argument "-n" for namespace used in both context of parsing minigraph and multi DB access.
* Update with the attribute "persistence_for_warm_boot" that was added to database_config.json file earlier.
* Removing the database_config.json file to avoid confusion in the future.
We use the database_config.json.j2 file to generate database_config.json files dynamically.
* Update the comments for sudo usage in docker_image_ctl.j2
* Update with the new logic in the PING-PONG tests using sonic-db-cli. With this we wait until the
PONG response is received when the redis server is up (see the sketch below).
* Similar changes in the swss and syncd scripts for the PING tests with sonic-db-cli
* Added a missing comma in the database_config.json.j2 file; do pip install of j2cli in docker-base-buster.
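The ping-pong wait boils down to something like the following, assuming sonic-db-cli prints PONG once the server is reachable:
```
# Block until the redis server answers PING with PONG.
until [[ "$(sonic-db-cli PING 2> /dev/null)" == "PONG" ]]; do
    sleep 1
done
```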
Currently, the vs docker always creates 32 front panel ports.
With this change, when the vs docker starts, it first detects the peer links
in the namespace and then sets up an equal number of front panel
interfaces, one per peer link, as sketched below.
Signed-off-by: Guohan Lu <lguohan@gmail.com>
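Conceptually, the detection amounts to counting the eth&lt;N&gt; peer links already injected into the container's namespace (interface naming here is an assumption for illustration):
```
# eth0 is the management interface; everything else injected into the
# namespace is treated as a front panel peer link.
NUM_PORTS=$(ip -o link show | grep -c -E ': eth[1-9][0-9]*[@:]')
echo "detected ${NUM_PORTS} peer links; creating that many front panel ports"
```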
do mount/umount in the chroot environment
install cron explicitly
install rasdaemon as a replacement for mcelog
switch python package docker-py to docker
[sonic-yang-models]: First version of yang models for Port, VLan, Interface, PortChannel, loopback and ACL.
YANG models as per Guidelines.
Guideline doc: https://github.com/Azure/SONiC/blob/master/doc/mgmt/SONiC_YANG_Model_Guidelines.md
[sonic-yang-models/tests]: YANG model test code and JSON input for testing.
[sonic-yang-models/setup.py]: Build infra for yang models.
**- What I did**
Created Yang model for Sonic.
Tables: PORT, VLAN, VLAN_INTERFACE, VLAN_MEMBER, ACL_RULE, ACL_TABLE, INTERFACE.
Created build infra files using which a new package (sonic-yang-models) can be built and deployed on sonic switches. Yang models will be part of this new package.
**- How I did it**
Wrote yang models based on Guideline doc:
https://github.com/Azure/SONiC/blob/master/doc/mgmt/SONiC_YANG_Model_Guidelines.md
and
https://github.com/Azure/SONiC/wiki/Configuration.
Wrote python wheel package infra which runs tests for these Yang models using JSON files containing configuration as per the yang models. These configs are for negative tests, which means we want to verify that the must conditions, patterns and when conditions work as expected. An equivalent manual check is sketched below.
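The test harness itself is Python-based; as a quick manual cross-check, the yanglint tool that ships with libyang can validate a config already expressed as YANG data JSON against the models. File names below are illustrative:
```
# -p adds the directory to the module search path; the .yang files are loaded
# as schemas and the JSON file is validated against them.
yanglint -p ./yang-models ./yang-models/sonic-port.yang ./yang-models/sonic-vlan.yang sample_vlan_config.json
```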
**- How to verify it**
Build Logs and testing:
———————————————————————————————————
```
/sonic/src/sonic-yang-models /sonic
running test
running egg_info
writing top-level names to sonic_yang_models.egg-info/top_level.txt
writing dependency_links to sonic_yang_models.egg-info/dependency_links.txt
writing sonic_yang_models.egg-info/PKG-INFO
reading manifest file 'sonic_yang_models.egg-info/SOURCES.txt'
writing manifest file 'sonic_yang_models.egg-info/SOURCES.txt'
running build_ext
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
running bdist_wheel
running build
running build_py
(Reading database ... 155852 files and directories currently installed.)
Preparing to unpack .../libyang_1.0.73_amd64.deb ...
Unpacking libyang (1.0.73) over (1.0.73) ...
Setting up libyang (1.0.73) ...
Processing triggers for libc-bin (2.24-11+deb9u4) ...
Processing triggers for man-db (2.7.6.1-2) ...
(Reading database ... 155852 files and directories currently installed.)
Preparing to unpack .../libyang-cpp_1.0.73_amd64.deb ...
Unpacking libyang-cpp (1.0.73) over (1.0.73) ...
Setting up libyang-cpp (1.0.73) ...
Processing triggers for libc-bin (2.24-11+deb9u4) ...
(Reading database ... 155852 files and directories currently installed.)
Preparing to unpack .../python3-yang_1.0.73_amd64.deb ...
Unpacking python3-yang (1.0.73) over (1.0.73) ...
Setting up python3-yang (1.0.73) ...
INFO:YANG-TEST:module: sonic-vlan is loaded successfully
ERROR:YANG-TEST:Could not get module: sonic-head
INFO:YANG-TEST:module: sonic-portchannel is loaded successfully
INFO:YANG-TEST:module: sonic-acl is loaded successfully
INFO:YANG-TEST:module: sonic-loopback-interface is loaded successfully
ERROR:YANG-TEST:Could not get module: sonic-port
INFO:YANG-TEST:module: sonic-interface is loaded successfully
INFO:YANG-TEST:
------------------- Test 1: Configure a member port in VLAN_MEMBER table which does not exist.---------------------
libyang[0]: Leafref "/sonic-port:sonic-port/sonic-port:PORT/sonic-port:PORT_LIST/sonic-port:port_name" of value "Ethernet156" points to a non
-existing leaf. (path: /sonic-vlan:sonic-vlan/VLAN_MEMBER/VLAN_MEMBER_LIST[vlan_name='Vlan100'][port='Ethernet156']/port)
INFO:YANG-TEST:Configure a member port in VLAN_MEMBER table which does not exist. Passed
INFO:YANG-TEST:
------------------- Test 2: Configure non-existing ACL_TABLE in ACL_RULE.---------------------
libyang[0]: Leafref "/sonic-acl:sonic-acl/sonic-acl:ACL_TABLE/sonic-acl:ACL_TABLE_LIST/sonic-acl:ACL_TABLE_NAME" of value "NOT-EXIST" points
to a non-existing leaf. (path: /sonic-acl:sonic-acl/ACL_RULE/ACL_RULE_LIST[ACL_TABLE_NAME='NOT-EXIST'][RULE_NAME='Rule_20']/ACL_TABLE_NAME)
INFO:YANG-TEST:Configure non-existing ACL_TABLE in ACL_RULE. Passed
INFO:YANG-TEST:
------------------- Test 3: Configure IP_TYPE as ARP and ICMPV6_CODE in ACL_RULE.---------------------
libyang[0]: When condition "boolean(IP_TYPE[.='ANY' or .='IP' or .='IPV6' or .='IPv6ANY'])" not satisfied. (path: /sonic-acl:sonic-acl/ACL_RU
LE/ACL_RULE_LIST[ACL_TABLE_NAME='NO-NSW-PACL-V4'][RULE_NAME='Rule_40']/ICMPV6_CODE)
INFO:YANG-TEST:Configure IP_TYPE as ARP and ICMPV6_CODE in ACL_RULE. Passed
INFO:YANG-TEST:
INFO:YANG-TEST:
------------------- Test 4: Configure IP_TYPE as ipv4any and SRC_IPV6 in ACL_RULE.---------------------
libyang[0]: When condition "boolean(IP_TYPE[.='ANY' or .='IP' or .='IPV6' or .='IPv6ANY'])" not satisfied. (path: /sonic-acl:sonic-acl/ACL_RU
LE/ACL_RULE_LIST[ACL_TABLE_NAME='NO-NSW-PACL-V4'][RULE_NAME='Rule_20']/SRC_IPV6)
INFO:YANG-TEST:Configure IP_TYPE as ipv4any and SRC_IPV6 in ACL_RULE. Passed
------------------- Test 5: Configure l4_src_port_range as 99999-99999 in ACL_RULE---------------------
libyang[0]: Value "99999-99999" does not satisfy the constraint "([0-9]{1,4}|[0-5][0-9]{4}|[6][0-4][0-9]{3}|[6][5][0-2][0-9]{2}|[6][5][3][0-5]{2}|[6][5][3][6][0-5])-([0-9]{1,4}|[0-5][0-9]{4}|[6][0-4][0-9]{3}|[6][5][0-2][0-9]{2}|[6][5][3][0-5]{2}|[6][5][3][6][0-5])" (range, length, or pattern). (path: /sonic-acl:sonic-acl/ACL_RULE/ACL_RULE_LIST[ACL_TABLE_NAME='NO-NSW-PACL-V6'][RULE_NAME='Rule_20']/L4_SRC_PORT_RANGE)
INFO:YANG-TEST:Configure l4_src_port_range as 99999-99999 in ACL_RULE Passed
INFO:YANG-TEST:
------------------- Test 6: Configure empty string as ip-prefix in INTERFACE table.---------------------
libyang[0]: Invalid value "" in "ip-prefix" element. (path: /sonic-interface:sonic-interface/INTERFACE/INTERFACE_LIST[interface='Ethernet8'][ip-prefix='']/ip-prefix)
INFO:YANG-TEST:Configure empty string as ip-prefix in INTERFACE table. Passed
INFO:YANG-TEST:
------------------- Test 7: Configure Wrong family with ip-prefix for VLAN_Interface Table---------------------
libyang[0]: Must condition "(contains(../ip-prefix, ':') and current()='IPv6') or (contains(../ip-prefix, '.') and current()='IPv4')" not satisfied. (path: /sonic-vlan:sonic-vlan/VLAN_INTERFACE/VLAN_INTERFACE_LIST[vlanid='100'][ip-prefix='2a04:5555:66:7777::1/64']/family)
INFO:YANG-TEST:Configure Wrong family with ip-prefix for VLAN_Interface Table Passed
INFO:YANG-TEST:
------------------- Test 8: Configure IP_TYPE as ARP and DST_IPV6 in ACL_RULE.---------------------
libyang[0]: When condition "boolean(IP_TYPE[.='ANY' or .='IP' or .='IPV6' or .='IPV6ANY'])" not satisfied. (path: /sonic-acl:sonic-acl/ACL_RULE/ACL_RULE_LIST[ACL_TABLE_NAME='NO-NS
W-PACL-V6'][RULE_NAME='Rule_20']/DST_IPV6)
INFO:YANG-TEST:Configure IP_TYPE as ARP and DST_IPV6 in ACL_RULE. Passed
INFO:YANG-TEST:
------------------- Test 9: Configure INNER_ETHER_TYPE as 0x080C in ACL_RULE.---------------------
libyang[0]: Value "0x080C" does not satisfy the constraint "(0x88CC|0x8100|0x8915|0x0806|0x0800|0x86DD|0x8847)" (range, length, or pattern). (path: /sonic-acl:sonic-acl/ACL_RULE/ACL_RULE_LIST[ACL_TABLE_NAME='NO-NSW-PACL-V4'][RULE_NAME='Rule_40']/INNER_ETHER_TYPE)
INFO:YANG-TEST:Configure INNER_ETHER_TYPE as 0x080C in ACL_RULE. Passed
INFO:YANG-TEST:
------------------- Test 10: Add dhcp_server which is not in correct ip-prefix format.---------------------
libyang[0]: Invalid value "10.186.72.566" in "dhcp_servers" element. (path: /sonic-vlan:sonic-vlan/VLAN/VLAN_LIST/dhcp_servers[.='10.186.72.566'])
INFO:YANG-TEST:Add dhcp_server which is not in correct ip-prefix format. Passed
INFO:YANG-TEST:
------------------- Test 11: Configure undefined acl_table_type in ACL_TABLE table.---------------------
libyang[0]: Invalid value "LAYER3V4" in "type" element. (path: /sonic-acl:sonic-acl/ACL_TABLE/ACL_TABLE_LIST[ACL_TABLE_NAME='NO-NSW-PACL-V6']/type)
INFO:YANG-TEST:Configure undefined acl_table_type in ACL_TABLE table. Passed
INFO:YANG-TEST:
------------------- Test 12: Configure undefined packet_action in ACL_RULE table.---------------------
libyang[0]: Invalid value "SEND" in "PACKET_ACTION" element. (path: /sonic-acl:sonic-acl/ACL_RULE/ACL_RULE_LIST/PACKET_ACTION)
INFO:YANG-TEST:Configure undefined packet_action in ACL_RULE table. Passed
INFO:YANG-TEST:
------------------- Test 13: Configure wrong value for tagging_mode.---------------------
libyang[0]: Invalid value "non-tagged" in "tagging_mode" element. (path: /sonic-vlan:sonic-vlan/VLAN_MEMBER/VLAN_MEMBER_LIST/tagging_mode)
INFO:YANG-TEST:Configure wrong value for tagging_mode. Passed
INFO:YANG-TEST:
------------------- Test 14: Configure vlan-id in VLAN_MEMBER table which does not exist in VLAN table.---------------------
libyang[0]: Leafref "../../../VLAN/VLAN_LIST/vlanid" of value "200" points to a non-existing leaf. (path: /sonic-vlan:sonic-vlan/VLAN_MEMBER/VLAN_MEMBER_LIST[vlanid='200'][port='Ethernet0']/vlanid)
libyang[0]: Leafref "../../../VLAN/VLAN_LIST/vlanid" of value "200" points to a non-existing leaf. (path: /sonic-vlan:sonic-vlan/VLAN_MEMBER/VLAN_MEMBER_LIST[vlanid='200'][port='Ethernet0']/vlanid)
INFO:YANG-TEST:Configure vlan-id in VLAN_MEMBER table which does not exist in VLAN table. Passed
INFO:YANG-TEST:All Test Passed
../../target/debs/stretch/libyang0.16_0.16.105-1_amd64.deb installtion failed
../../target/debs/stretch/libyang-cpp0.16_0.16.105-1_amd64.deb installtion failed
../../target/debs/stretch/python2-yang_0.16.105-1_amd64.deb installtion failed
YANG Tests passed
Passed: pyang -f tree ./yang-models/*.yang > ./yang-models/sonic_yang_tree
copying tests/yangModelTesting.py -> build/lib/tests
copying tests/test_sonic_yang_models.py -> build/lib/tests
copying tests/__init__.py -> build/lib/tests
running egg_info
writing top-level names to sonic_yang_models.egg-info/top_level.txt
writing dependency_links to sonic_yang_models.egg-info/dependency_links.txt
writing sonic_yang_models.egg-info/PKG-INFO
reading manifest file 'sonic_yang_models.egg-info/SOURCES.txt'
writing manifest file 'sonic_yang_models.egg-info/SOURCES.txt'
installing to build/bdist.linux-x86_64/wheel
running install
running install_lib
creating build/bdist.linux-x86_64/wheel
creating build/bdist.linux-x86_64/wheel/tests
copying build/lib/tests/yangModelTesting.py -> build/bdist.linux-x86_64/wheel/tests
copying build/lib/tests/test_sonic_yang_models.py -> build/bdist.linux-x86_64/wheel/tests
copying build/lib/tests/__init__.py -> build/bdist.linux-x86_64/wheel/tests
running install_data
creating build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data
creating build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data
creating build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-head.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-acl.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-interface.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-loopback-interface.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-port.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-portchannel.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
copying ./yang-models/sonic-vlan.yang -> build/bdist.linux-x86_64/wheel/sonic_yang_models-1.0.data/data/yang-models
```
* Install kubernetes worker node packages, if enabled.
* Minor updates
* Added some comments
* Updates per review comments.
Built a private image to test to work fine.
* Remove the removed file.
* Update per comments
Make a fix, as kubeadm now demands a higher version of kubelet & kubectl.
As kubeadm auto-installs kubectl & kubelet, removing the explicit install is an easier/more robust fix.
* Changes per review comments.
* Updates per comments.
1) Dropped helper & pod scripts
2) Made install verbose
* Drop creation of pods subdir, as this PR does not use them.
* From comments to 'n' per review comments.
* 1) kubeadm.conf is created as part of kubeadm package install. Hence dropped explicit copy.
Take advantage of an SDK environment variable to customize the location where sdk_socket lives.
In the latest SDK, sdk_socket has been moved from /tmp to /var/run, which is a better place for this kind of file.
However, this prevents the subdirectories under /var/run from being mapped to different volumes. To resolve this, we take advantage of an SDK variable to designate the location of sdk_socket.
This requires every process that needs to access sdk_socket to have this environment variable defined. Defining the environment variable for each process does not scale, so we take advantage of the docker-scope environment variable to avoid that.
It depends on PR 4227
Instead of updating the hostname manually on a Config DB hostname change,
simply share the container's UTS namespace with the host OS.
Ideally, instead of setting `--uts=host` for every container in SONiC,
this setting could be set per container if a feature requires it.
One behaviour change is introduced in this commit: when `--privileged`
or `--cap-add=CAP_SYS_ADMIN` is combined with `--uts=host`, the container
has the privilege to change the host OS hostname, and thereby every other container's hostname.
Such privilege should be constrained by limiting container capabilities. A minimal example follows the sign-off below.
Signed-off-by: Stepan Blyschak <stepanb@mellanox.com>
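A minimal example of the shared UTS namespace (image name is illustrative):
```
# The container reports the host's hostname because no separate UTS
# namespace is created for it.
docker run --rm --uts=host debian:buster hostname
```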
This patch upgrades the kernel from version
4.9.0-9-2 (4.9.168-1+deb9u3) to 4.9.0-11-2 (4.9.189-3+deb9u2)
Co-authored-by: rajendra-dendukuri <47423477+rajendra-dendukuri@users.noreply.github.com>
* [database] Implement the auto-restart feature for database container.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* [database] Remove the duplicate dependency in service files. Since we
already have updategraph ---> config_setup ---> database, we do not need
explicitly add database.service in all other container service files.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* [event listener] Reorganize the line 73 in event listener script.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* [database] update the file sflow.service.j2 to remove the duplicate
dependency.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* [event listener] Add comments in event listener.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* [event listener] Update the comments in line 56.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* [event listener] Add parentheses for if statement in line 76 in event listener.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* [init_cfg.json] Add a new table CONTAINER_FEATURE.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* [init_cfg.json] Update the content of table CONTAINER_FEATURE.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* [init_cfg.json] Use the template to generate the table
CONTAINER_FEATURE.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* [init_cfg.json] Add a new table FEATURE.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* [init_cfg.json] Change the order of container names according to
alphabetical order.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* [init_cfg.json] Change the dhcp_relay container name and add rest-api.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* Changes in sonic-buildimage for the NAT feature
- Docker for NAT
- Installing the required tools, iptables and conntrack, for NAT (quick sanity checks are sketched after this list)
Signed-off-by: kiran.kella@broadcom.com
* Add redis-tools dependencies in the docker nat compilation
* Addressed review comments
* add natsyncd to warm-boot finalizer list
* addressed review comments
* using swsscommon.DBConnector instead of swsssdk.SonicV2Connector
* Enable NAT application in docker-sonic-vs
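Once the NAT docker is up, the installed tools allow quick sanity checks, for example:
```
conntrack -L               # dump the kernel's connection-tracking/NAT entries
iptables -t nat -L -n -v   # show NAT table rules with packet/byte counters
```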
As the current kdump installation searches for the grub path, and ARM architectures (marvell-armhf) depend on u-boot, these changes have to be addressed. For now, skip kdump installation on ARM.
Co-authored-by: lguohan <lguohan@gmail.com>
- move single instance services into their own folder
- generate systemd templates for any multi-instance service files in slave.mk
- detect a single- or multi-instance platform in systemd-sonic-generator based on the platform-specific asic.conf file (a conceptual sketch of the per-ASIC instancing follows this list)
- update container hostname after creation instead of during creation (docker_image_ctl)
- run Docker containers in a network namespace if specified
- add a service to create a simulated multi-ASIC topology on the virtual switch platform
Signed-off-by: Lawrence Lee <t-lale@microsoft.com>
Signed-off-by: Suvarna Meenakshi <Suvarna.Meenaksh@microsoft.com>
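Conceptually, on a multi-ASIC platform each such service becomes a systemd template instantiated once per ASIC. A simplified sketch; the real generator is a compiled binary, and the PLATFORM placeholder and service name are assumptions:
```
# asic.conf provides NUM_ASIC for the platform (PLATFORM is a placeholder).
. /usr/share/sonic/device/"${PLATFORM}"/asic.conf

for ((asic = 0; asic < NUM_ASIC; asic++)); do
    systemctl enable "swss@${asic}.service"   # swss@0, swss@1, ...
done
```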
* [MultiDB] (except ./src and ./dockers dirs): replace redis-cli with sonic-db-cli and use new DBConnector
* update comment for a potential bug
* update comment
* add a TODO marker per review requirement
When the database service is down, the psud daemon throws an error because of a DB connection reset; this is because the pmon service has no dependency on the database service.
To resolve this issue, added a database service dependency to the pmon service.
Also increased the net.core.somaxconn value to 512 to solve connection failures on scaled setups. An illustrative sketch follows below.
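In unit-file and sysctl terms, the change amounts to roughly the following; the drop-in form and directive placement are illustrative, the PR edits the unit directly:
```
# Order pmon after the database service so redis is reachable when psud starts.
mkdir -p /etc/systemd/system/pmon.service.d
cat <<'EOF' > /etc/systemd/system/pmon.service.d/database.conf
[Unit]
Requires=database.service
After=database.service
EOF
systemctl daemon-reload

# Larger listen backlog for the scaled setup.
sysctl -w net.core.somaxconn=512
```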
* Updates per review comments
1) core_uploader service waits for syslog.service
2) core_uploader service enabled for restart on failure
3) Use mtime instead of file size + ample time to be robust.
* Avoid reloading already uploaded file, by marking the names with a prefix.
* Updated failing path.
1) If the rc file is missing or required data is missing, it periodically logs an error in a forever loop.
2) If an upload fails, retry every hour with an error log, forever.
* Fix few bugs
* The binary update_json.py will come from sonic-utilities.
* Added sonic-mgmt-framework as submodule / docker
* fix build issues
* update sonic-mgmt-framework submodule branch to master
* Merged changes 70007e6d2ba3a4c0b371cd693ccc63e0a8906e77..00d4fcfed6a759e40d7b92120ea0ee1f08300fc6
00d4fcfed6a759e40d7b92120ea0ee1f08300fc6 Modified environemnt variables
* Changes to build sonic-mgmt-framework docker
* bumped up sonic-mgmt-framework commit-id
* version bump for sonic-mgmt-framework commit-it
* bumped up sonic-mgmt-framework commit-id
* Add python packages to docker
* Build fix for docker with python packages
* added libyang as dependent package
* Allow building images on NFS-mounted clones
Prior to this change, `build_debian.sh` would generate a Debian
filesystem in `./fsroot`. This needs root permissions, and one of the
tests that is performed is whether the user can create a character
special file in the filesystem (using mknod).
On most NFS deployments, `root` is the least privileged user, and cannot
run mknod. Also, attempting to run commands like rm or mv as root would
fail due to permission errors, since the root user gets mapped to an
unprivileged user like `nobody`.
This commit changes the location of the Debian filesystem to `/fsroot`,
which is a tmpfs mount within the slave Docker. The default squashfs,
docker tarball and zip files are also created within /tmp, before being
copied back to /sonic as the regular user.
The side effect of this change is that the contents of `/fsroot` are no
longer available once the slave container exits, however they are
available within the squashfs image.
Signed-off-by: Nirenjan Krishnan <Nirenjan.Krishnan@dell.com>
* bumped up sonic-mgmt-framework commit to include PR #18
* The REST Server startup script is enhanced to read the settings from
ConfigDB. The table below provides the mapping of DB fields to command line
argument names (example reads are sketched after the table).
============================================================
ConfigDB entry key Field name REST Server argument
============================================================
REST_SERVER|default port -port
REST_SERVER|default client_auth -client_auth
REST_SERVER|default log_level -v
DEVICE_METADATA|x509 server_crt -cert
DEVICE_METADATA|x509 server_key -key
DEVICE_METADATA|x509 ca_crt -cacert
============================================================
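On a running switch the same fields can be inspected with sonic-db-cli (available on recent images); the values returned are whatever is configured:
```
sonic-db-cli CONFIG_DB HGET 'REST_SERVER|default' port
sonic-db-cli CONFIG_DB HGET 'REST_SERVER|default' client_auth
sonic-db-cli CONFIG_DB HGET 'DEVICE_METADATA|x509' server_crt
```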
* Replace src/telemetry as submodule to sonic-telemetry
* Update telemetry commit HEAD
* Update sonic-telemetry commit HEAD
* libyang env path update
* Add libyang dependency to telemetry
* Add scripts to create JSON files for CLI backend
Scripts to create /var/platform/syseeprom and /var/platform/system, which are back-end
files for CLI, for system EEPROM and system information.
Signed-off-by: Howard Persh <Howard_Persh@dell.com>
* In startup script, create directory where CLI back-end files live
Signed-off-by: Howard Persh <Howard_Persh@dell.com>
* build dependency pkgs added to docker for build failure fix
* Changes to fix build issue for mgmt framework
* Fix exec path issue with telemetry
* s5232[device] PSU detection and default led state support
* Processing of first boot in rc.local should not have premature exit
Signed-off-by: Howard Persh <Howard_Persh@dell.com>
* docker mount options added for platform, system features
* bumped up sonic-mgmt-framework commit id to pick 23rd July 2019 changes
* Added mount options for telemetry docker to get access for system and platform info.
* Update commit for sonic-utilities
* [dell]: Corrected dport map and renamed config files for S5232F
* Fix telemetry submodule commit
* added support for sonic-cli console
* [Dell S5232F, Z9264F] Harden FPGA driver kernel module
For Dell S5232F and Z9264F platforms, be more strict when checking state
in ISR of FPGA driver, to harden against spurious interrupts.
Signed-off-by: Howard Persh <Howard_Persh@dell.com>
* update mgmt-framework submodule to 27th Aug commit.
* remove changes not related to mgmt-framework and sonic-telemetry
* Revert "Replace src/telemetry as submodule to sonic-telemetry"
This reverts commit 11c3192975.
* Revert "Replace src/telemetry as submodule to sonic-telemetry"
This reverts commit 11c3192975.
* make submodule changes and remove a change not related to PR
* more changes
* Update .gitmodules
* Update Dockerfile.j2
* Update .gitmodules
* Update .gitmodules
* Update .gitmodules
reverting experimental change
* Removed syspoll for release_1.0
Signed-off-by: Jeff Yin <29264773+jeff-yin@users.noreply.github.com>
* Update docker-sonic-mgmt-framework.mk
* Update sonic-mgmt-framework.mk
* Update sonic-mgmt-framework.mk
* Update docker-sonic-mgmt-framework.mk
* Update docker-sonic-mgmt-framework.mk
* Revert "Processing of first boot in rc.local should not have premature exit"
This reverts commit e99a91ffc2.
* Remove old telemetry directory
* Update docker-sonic-mgmt-framework.mk
* Resolving merge conflict with Azure
* Reverting the wrong merge
* Use CVL_SCHEMA_PATH instead of changing directory for telemetry startup
* Add missing export
* Add python mmh3 to slave dockerfile
* Remove sonic-mgmt-framework build dep for telemetry, fix dialout startup issues
* Provided flag to disable compiling mgmt-framework
* Update sonic-utilites point latest commit id
* Point sonic-utilities to Azure accepted SHA
* Updating mgmt framework to right sha
* Add sonic-telemetry submodule
* Update the mgmt-framework commit id
Co-authored-by: jghalam <joe.ghalam@gmail.com>
Co-authored-by: Partha Dutta <51353699+dutta-partha@users.noreply.github.com>
Co-authored-by: srideepDell <srideep_devireddy@dell.com>
Co-authored-by: nirenjan <nirenjan@users.noreply.github.com>
Co-authored-by: Sachin Holla <51310506+sachinholla@users.noreply.github.com>
Co-authored-by: Eric Seifert <seiferteric@gmail.com>
Co-authored-by: Howard Persh <hpersh@yahoo.com>
Co-authored-by: Jeff Yin <29264773+jeff-yin@users.noreply.github.com>
Co-authored-by: Arunsundar Kannan <31632515+arunsundark@users.noreply.github.com>
Co-authored-by: rvasanthm <51932293+rvasanthm@users.noreply.github.com>
Co-authored-by: Ashok Daparthi-Dell <Ashok_Daparthi@Dell.com>
Co-authored-by: anand-kumar-subramanian <51383315+anand-kumar-subramanian@users.noreply.github.com>
Delay CPU intensive services at boot
- How I did it
Made snmp.timer work and added telemetry.timer.
But this is not enough because it breaks the existing snmp dependency on swss.
So, in this solution the snmp timer is wanted by the swss service; but since the OnBootSec timer expires only once, it would not retrigger the snmp service, so I added the line "OnUnitActiveSec=0 sec", which starts the snmp service based on the last time it was active. On boot only OnBootSec will expire; on swss start/restart only the second timer will expire immediately and trigger the snmp service (a condensed sketch of the timer appears at the end of this description).
However, the snmp service would not stay stopped after "systemctl stop snmp" because of the second timer, which would always expire when the snmp service becomes unavailable.
So there is a conflict, which is handled by systemd if we add a "Conflicts=" line to both snmp.service and snmp.timer.
So during boot:
snmp does not start by default
swss starts and starts snmp timer
OnUnitActiveSec=0 does not expire since there is no snmp active
OnBootSec expires and starts snmp service and snmp timer gets stopped
During "systemctl restart swss"
snmp stops because of Requisite on swss
snmp unblocks snmp timer from running
swss starts and starts snmp timer
OnUnitActiveSec=0 expires immediately and starts snmp, which stops the snmp timer
During "systemctl stop snmp"
stopping the snmp service unblocks the snmp timer, but nothing starts the timer, so it is not started by "OnUnitActiveSec=0"
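A condensed sketch of the timer described above; the delay value and the exact wiring to swss are illustrative, not the final unit contents:
```
cat <<'EOF' > /etc/systemd/system/snmp.timer
[Unit]
# Stopping/starting snmp.service stops/releases the timer and vice versa.
Conflicts=snmp.service

[Timer]
# Fires once after boot ...
OnBootSec=3min
# ... and, once snmp has been active before, immediately whenever swss
# (re)starts this timer.
OnUnitActiveSec=0 sec
Unit=snmp.service
EOF
```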
* Corefile uploader service
1) A service is added to watch /var/core and upload to Azure storage
2) The service is disabled on boot. One may enable explicitly.
3) The .rc file to be updated with acct credentials and http proxy to use.
4) If service is enabled with no credentials, it would sleep, with periodic log messages
5) For any update in .rc, the service has to be restarted to take effect.
* Remove rw permission for .rc file for group & others.
* Changes per review comments.
Re-ordered .rc file per JSON.dump order.
Added a script to enable partial update of .rc, which HWProxy would use to add acct key.
* Azure storage upload requires python module futures, hence added it to install list.
* Removed trailing spaces.
* A mistake in name corrected.
Copy the .rc updater script to /usr/bin.
* ZTP infrastructure changes to support DHCP discovery provisioning data
- Dynamically generate DHCP client configuration based on current ZTP state
- Added support to request and process hostname when using DHCPv6
- Do not process graphservice url dhcp option if ZTP is enabled, ZTP service
will process it
- Generate /e/n/i file with all active interfaces seeking address assignment
via DHCP. Only interfaces that are created in Linux will be added to /e/n/i.
Also DHCP is started only on linked up in-band interfaces.
Signed-off-by: Rajendra Dendukuri <rajendra.dendukuri@broadcom.com>
Put a flag for fast-reboot into the DB using the redis EXPIRE feature. This flag is used in other parts of SONiC to start in fast-reboot mode. If we reload a config, the state in the DB will be removed. A hypothetical illustration is sketched below.
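A hypothetical illustration of such an expiring flag; the key name, DB and TTL here are assumptions, not the actual implementation:
```
# Set a self-expiring fast-reboot marker and read it back.
sonic-db-cli STATE_DB SET 'FAST_REBOOT|system' '1' EX 180
sonic-db-cli STATE_DB GET 'FAST_REBOOT|system'
```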
* Create a SONiC configuration management service
* Perform config db migration after loading config_db.json to redis DB
* Migrate config-setup post migration hooks on image upgrade
config-setup post-migration hooks help the user migrate configurations from
the old image to the new image. If the installed hooks are user defined, they will not
be part of the newly installed image. So these hooks have to be migrated to the
new image, and only then can they be executed when the new image is booting.
The changes in this fix migrate the config-setup post-migration hooks and ensure
that any hooks with the same filename in the newly installed image are not
overwritten.
It is expected that users install new hooks as per their requirements and do
not edit existing hooks. Any changes to existing hooks need to be done as
part of the new image and not post bootup.
* Build sonic-ztp package
- Add changes in make rules to conditionally include sonic-ztp package
Signed-off-by: Rajendra Dendukuri <rajendra.dendukuri@broadcom.com>
The sflow service should not start unless the swss service is started. However, if the swss service is not started, the sflow service should not attempt to start it; instead it should simply fail to start. Using Requisite=, we achieve this behavior, whereas using Requires= would cause the required service to be started. An illustrative drop-in is shown below.
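In unit-file terms the distinction looks like this, shown as an illustrative drop-in rather than the actual unit contents:
```
mkdir -p /etc/systemd/system/sflow.service.d
cat <<'EOF' > /etc/systemd/system/sflow.service.d/ordering.conf
[Unit]
# Requisite= fails sflow immediately if swss is not already active;
# Requires= would instead pull swss up, which is not desired here.
Requisite=swss.service
After=swss.service
EOF
```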
Add the same mechanism I developed for the SwSS service in #2845 to the syncd service. However, in order to cause the SwSS service to also exit and restart in this situation, I developed a docker-wait-any program which the SwSS service uses to wait for either the swss or syncd containers to exit. The idea is sketched in shell form below.
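The docker-wait-any idea can be mimicked in shell with backgrounded `docker wait` calls and `wait -n` (bash 4.3+); the real tool is a dedicated program:
```
# Unblock as soon as either container exits; systemd/scripts can then
# restart the swss service.
docker wait swss  &
docker wait syncd &
wait -n
echo "swss or syncd exited; triggering restart"
```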
* In the event of a kernel crash, we need to gather as much information
as possible to understand and identify the root cause of the crash.
Currently, the kernel does not provide much information, which makes
kernel crash investigation difficult and time consuming.
Fortunately, there is a way for the kernel to provide more information
in the case of a kernel crash. kdump is a feature of the Linux kernel
that creates crash dumps in the event of a kernel crash. This PR
adds kernel kdump support.
An extension to the CLI utilities config and show is provided to
configure and manage kdump:
- enable / disable kdump functionality
- configure kdump (how many kernel crash logs can be saved, memory
allocated for the capture kernel)
- view kernel crash logs (hypothetical command forms are sketched after this list)
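The new commands look roughly like the following; the exact subcommand spellings and the memory string are assumptions and may differ from the final utilities:
```
sudo config kdump enable
sudo config kdump memory 0M-2G:256M,2G-4G:320M,4G-8G:384M,8G-:448M
sudo config kdump num_dumps 3
show kdump status
```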
* Rename asn/deployment_id_asn_map.yaml to constants/constants.yaml
* Fix bgp templates
* Add community for loopback when bgpd is isolated
* Use correct community value
While doing CLI changes for SNMP configuration, a few changes were made in the backend to handle the modified CLI.
**Changes**
- "community" for "snmp trap" is also made as "configurable". snmpd_conf.j2 is modified to handle the same.
- Changed the snmp.yml file generation from postStartAction to preStartAction in docker_image_ctl.j2 specific to SNMP docker, to ensure that the snmp.yml is generated before sonic-cfggen generates the snmpd.conf.
- Changed to make the code common for management vrf and default vrf. Users can configure snmp trap and snmp listening IP for both management vrf and default vrf.
Issue Overview
shutdown flow
For any shutdown flow, which means all dockers are stopped in order, the pmon docker stops after the syncd docker has stopped, causing the pmon docker to fail to release sx_core resources and leaving sx_core in a bad state. The related logs look like the following:
INFO syncd.sh[23597]: modprobe: FATAL: Module sx_core is in use.
INFO syncd.sh[23597]: Unloading sx_core[FAILED]
INFO syncd.sh[23597]: rmmod: ERROR: Module sx_core is in use
config reload & service swss restart
In flows like "config reload" and "service swss restart", the failure causes further consequences:
sx_core initialization error with error message like "sx_core: create EMAD sdq 0 failed. err: -16"
syncd fails to execute the create switch api with error message "syncd_main: Runtime error: :- processEvent: failed to execute api: create, key: SAI_OBJECT_TYPE_SWITCH:oid:0x21000000000000, status: SAI_STATUS_FAILURE"
swss fails to call the SAI API "SAI_SWITCH_ATTR_INIT_SWITCH", which causes orchagent to restart. This introduces an extra 1 or 2 minutes for the system to become available, failing related test cases.
reboot, warm-reboot & fast-reboot
In the reboot flows, including "reboot", "fast-reboot" and "warm-reboot", this failure doesn't have further negative effects since the system has already rebooted. In addition, "warm-reboot" requires the system to be shut down as soon as possible to meet the GR time restrictions of both BGP and LACP. "fast-reboot" also needs to meet the GR time restriction of BGP, which is longer than LACP's. In this sense, any unnecessary steps should be avoided, so it is better to keep those flows untouched.
summary
To summarize, we have to come up with a way to ensure:
shut down the pmon docker ahead of syncd for the "config reload" and "service swss restart" flows;
don't shut down the pmon docker ahead of syncd for the "fast-reboot" and "warm-reboot" flows, in order to save time.
for "reboot" flow, either order is acceptable.
Solution
To solve the issue, pmon should be stopped before syncd is stopped for all flows except warm-reboot.
- How I did it
Stop pmon before syncd is stopped. This is done in /usr/local/bin/syncd.sh::stop() for all shutdown sequences.
Now that pmon stops ahead of syncd, there must be a way for pmon to start after syncd has started. Another point to take into consideration is that pmon's start should be deferred so that services which have graceful-restart logic in fast-reboot and warm-reboot have sufficient CPU cycles to meet their deadlines.
This is done by adding "syncd.service" as "After=" to pmon.service and starting pmon in /usr/local/bin/syncd.sh::wait(),
so that pmon starts automatically after syncd has started. A sketch of the stop-side ordering follows below.
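In script form, the stop-side ordering amounts to something like this; the warm-shutdown detection is elided and the variable name is illustrative:
```
stop() {
    # Stop pmon first for every shutdown flow except warm-reboot, where we
    # want the fastest possible shutdown.
    if [[ "${SHUTDOWN_TYPE}" != "warm" ]]; then
        systemctl stop pmon
    fi
    # ... existing syncd teardown continues here ...
}
```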
slave.mk: add SONIC_PLATFORM_API_PY2 as dependency of host
sonic_debian_extension.j2: install sonic_daemon_base and Mellanox-specific sonic_platform on host
mlnx-platform-api.mk: export mlnx_platform_api_py2_wheel_path for sonic_debian_extension.j2
sonic-daemon-base.mk: export daemon_base_py2_wheel_path for sonic_debian_extension.j2
daemon_base.py: hide the unnecessary dependency on swss_common on the host
* [SNMP] management VRF SNMP support
This commit adds SNMP support for Management VRF using l3mdev.
The patch included provides VRF support, there is no single
"listendevice" configuration, rather multiple agentaddress
config options can each have their own "interface" to bind to
using "ip%interface". The snmpd.conf file is accordingly
generated using the snmp.yml file and redis database info.
Adding below the comments of SNMP patch 1376
--------------------------------------------
Since the Linux kernel added support for Virtual Routing
and Forwarding (VRF) in version 4.3
(Note: these won't compile on non-linux platforms)
https://www.kernel.org/doc/Documentation/networking/vrf.txt
Linux users could not use snmpd in its current form to
bind specific listening IP addresses to specific VRF
devices. A simplified description of a VRF interface
is an interface that is a master (a container of sorts)
that collects a set of physical interfaces to form a
routing table.
The two patches in this set (one for the V5-7-patches and one
for the V5-8-patches branches) are almost identical. There is no
single "listendevice" configuration. Rather, multiple
agentAddress config options can each have their own
"interface" to bind to using the <ip>%<interface>
syntax.
-------------------------------------------
Signed-off-by: Harish Venkatraman <harish_venkatraman@dell.com>
Introduce a new "sflow" container (if ENABLE_SFLOW is set). The new docker will include:
hsflowd : host-sflow based daemon is the sFlow agent
psample : Built from libpsample repository. Useful in debugging sampled packets/groups.
sflowtool : Locally dump sflow samples (e.g. with an in-unit collector)
In the case of SONiC-VS, enable the psample & act_sample kernel modules.
VS' syncd needs iproute2=4.20.0-2~bpo9+1 & libcap2-bin=1:2.25-1 to support tc-sample
tc-syncd is provided as a convenience tool for debugging (e.g. tc-syncd filter show ...); an example tc-sample setup is sketched below.
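On the VS platform the sampling path can be exercised directly with the tc "sample" action once the kernel modules are loaded; the interface name and rate below are illustrative:
```
modprobe psample
modprobe act_sample

# Attach a matchall sampler on ingress of eth1: one packet out of 1000 is
# delivered to psample group 1, where hsflowd/sflowtool can pick it up.
tc qdisc add dev eth1 clsact
tc filter add dev eth1 ingress matchall action sample rate 1000 group 1
```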
* Use dot1p to tc mapping for backend switches
Signed-off-by: Wenda Ni <wenni@microsoft.com>
* Do not write DSCP to TC mapping into CONFIG_DB or config_db.json for
storage switches
Signed-off-by: Wenda Ni <wenni@microsoft.com>
This is the first step to moving different database tables into different database instances.
In this PR, we only handle creation of multiple database instances based on user configuration at /etc/sonic/database_config.json.
We keep the current method of creating a single database instance if no extra/new DATABASE configuration exists in the database_config.json file.
If the user configures more db instances in database_config.json, we create those new db instances along with the original db instance that exists today.
The configuration is as below; later we can add more db-related information if needed:
{
...
"DATABASE": {
"redis-db-01" : {
"port" : "6380",
"database": ["APPL_DB", "STATE_DB"]
},
"redis-db-02" : {
"port" : "6381",
"database":["ASIC_DB"]
},
}
...
}
The detail description is at design doc at Azure/SONiC#271
The main idea is: when database.sh starts, we check the configuration and generate corresponding scripts.
The rc.local service handles the old_config copy when loading a new image; there is no dependency between rc.local and the database service today. For safety, and to make sure the copy operation is done before the database tries to read it, we make the database service run after rc.local.
Then, once the database docker has started, we check the configuration and generate the corresponding scripts/.conf in the database docker as well.
Based on those conf files, we create database instances as required.
Finally, we ping-pong check that the databases are up and continue.
Signed-off-by: Dong Zhang d.zhang@alibaba-inc.com
radv should be left alone during warm restart of swss. Otherwise it will
announce departure and cause hosts to lose default gateway.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
* [service dependent] describe non-warm-reboot dependency outside systemctl
When the dependency was described with systemctl, it would kick in all the time,
including under warm reboot/restart scenarios. This is not what we always
want. Components that are capable of warm reboot/restart need to
describe the dependency in service files.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
* [service] teamd service should not require swss service
Adding require swss will cause teamd to be killed by systemctl when swss
stops. This is not what we want in warm reboot.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
* refactoring code
* rename functions to match other functions in the file
- What I did
Move the enabling of Systemd services from sonic_debian_extension to a new systemd generator
- How I did it
Create a new systemd generator to manually create the symlinks that enable systemd services (a conceptual sketch follows the sign-off below)
Add rules/Makefile to build the generator
Add the services to be enabled to /etc/sonic/generated_services.conf, to be read by the generator at boot time
Signed-off-by: Lawrence Lee <t-lale@microsoft.com>
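The essence of the generator, sketched in shell; the real generator is a compiled binary that systemd invokes with its output directory as the first argument:
```
GENERATOR_DIR="${1:-/run/systemd/generator}"
mkdir -p "${GENERATOR_DIR}/multi-user.target.wants"

# Recreate the 'systemctl enable' symlinks for every unit in the list.
while read -r unit; do
    ln -sf "/usr/lib/systemd/system/${unit}" \
           "${GENERATOR_DIR}/multi-user.target.wants/${unit}"
done < /etc/sonic/generated_services.conf
```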
ARM Architecture support in SONIC
make configure platform=[ASIC_VENDOR_ARCH] PLATFORM_ARCH=[ARM_ARCH]
SONIC_ARCH: default amd64
armhf - arm32bit
arm64 - arm64bit
Signed-off-by: Antony Rheneus <arheneus@marvell.com>
* Upgrade ifupdown2 to version 1.2.8
Required by ZTP to support ZTP over IPv6 transport
Signed-off-by: Rajendra Dendukuri <rajendra.dendukuri@broadcom.com>
- Make sure that migrated DB contents persisted for next boot
- Make sure that the db is saved after warm reboot.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
* Added debug symbols to many debug dockers.
* For debug images *only*:
1) Archive source files into debug image
2) Archived source is copied into /src
3) Created an empty dir /debug
4) Mount both /src as ro & /debug as rw into every docker
5) Login banner will give some details on /src & /debug
6) Devs can copy core file into /debug and view it from inside a container.
7) Dev may create all gdb logs and other data directly into /debug.
* Dropped redundant REDIS_TOOLS per review comments.
* Added debug symbols to frr package and hence FRR based BGP docker.
* 1) Moved dbg_files.sh to scripts/
2) Src directories to archive are now collected from individual Makefiles.
3) Added few more debug symbols
4) Added few more debug dockers.
Here after no more changes except per review comments.
To debug:
Install required version of debug image in Switch or VM.
Copy core file into /debug of host
Get into Docker
gdb /usr/bin/<daemon> -c /debug/<your core file>
set directory /src/... <-- inside gdb to get the source
For non-in-depth debugging:
Download corresponding debug Docker image (docker-...-dbg.gz) to your VM
Load the image
Run image with entrypoint as 'bash' with dir containing core mapped in.
Run gdb on the core.
* fix fast reboot compatibility
We should handle both cases for backward compatibility with 201803 (a shell sketch of the shared check follows the notes below):
- fast-reboot
- SONIC_BOOT_TYPE=fast-reboot
* handle review comments
* add a comment that getBootType code snippet is shared between two files
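A shell sketch of the shared check, handling both spellings of the kernel command-line token; variable names are illustrative:
```
# Accept both the 201803-era "fast-reboot" token and the newer
# "SONIC_BOOT_TYPE=fast-reboot" form.
case "$(cat /proc/cmdline)" in
    *SONIC_BOOT_TYPE=fast-reboot*|*fast-reboot*)
        BOOT_TYPE=fast
        ;;
    *)
        BOOT_TYPE=cold
        ;;
esac
echo "boot type: ${BOOT_TYPE}"
```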
* [submodule] update sonic-linux-kernel
* update linux kernel version
* Fix many version strings
* update mellanox components (built with new kernel)
* [mlnx] add make files for SDK WJH libs
* Update arista driver submodule (#8)
Make the debian packaging point to a newer kernel version.
* Updated Makefile infrastructure to build debug images.
As a sample, platform/broadcom/docker-orchagent-brcm.mk is updated to add a docker-orchagent-brcm-dbg.gz target.
Now "BLDENV=stretch make target/docker-orchagent-brcm-dbg.gz" will build the debug image.
NOTE: If you don't specify NOSTRETCH=1, it implicitly calls "make stretch", which builds all stretch targets, and that would include debug dockers too.
This debug image can be used on any Linux box to inspect a core file. If your module's external dependency can be suitably mocked, you may even run it manually inside.
"docker run -it --entrypoint=/bin/bash e47a8fb8ed38"
You may map the core file path to this docker run.
* Dropped the regular binary using DBG_PACKAGES and a small name change to help readability.
* Tweaked the changes to retain the existing behavior w.r.t INSTALL_DEBUG_TOOLS=y.
When this change ('building debug docker image transparently') is extended to all dockers, this flag would become redundant. Yet, there can be some test based use cases that rely on this flag.
Until after all the dockers gets their debug images by default and we switch all use cases of this flag to use the newly built debug images, we need to maintain the existing behavior.
* 1) slave.mk - Dropped unused Docker build args
2) Debug template builder: renamed build_dbg_j2.sh to build_debug_docker_j2.sh
3) Dropped the insignificant CMD statement from the debug Dockerfile, as the base docker has an Entrypoint.
* Reverted some changes, per review comments.
"User, uid, guid, frr-uid & frr-guid" are required for all docker images, with exception of debug images.
* Get in sync with the new update that filters out dockers to be built (SONIC_STRETCH_DOCKERS_FOR_INSTALLERS) and build debug-dockers only for those to be built and debug target is available.
* Make a template for each target that can be shared by all platforms.
Where needed, a platform entry can override the template.
This avoids duplication and is hence easier to maintain.
* A small change, that can fit better with other targets too.
Just take the platform code and do the rest in template.
* Extended debug to all stretch based docker images
* 1) Combined all orchagent makefiles into one platform independent make under rules/docker-orchagent.mk
2) Extended the debug image to all stretch dockers
* Changes per review comments:
1) Dropped LIBSAIREDIS_DBG from database, teamd, router-advertiser, telemetry, and platform-monitor docker*.mk files from _DBG_DEPENDS list
2) W.r.t docker make for syncd, moved DEPENDS from template to specific makefile and let the template has stuff that is applicable to all.
* 1) Corrected a copy/paste mistake
* Fixed a copy/paste bug
* The base syncd dockers follow a template, which defines the base docker as DOCKER_SYNCD_BASE instead of DOCKER_SYNCD_<platform code>. Fix the docker-syncd-<mlnx, bfn>.mk to use the new one.
[Yet to be tested locally]
* Fixed spelling mistake
* Enable build of dbg-sonic-broadcom.bin, which uses dbg-dockers in place of regular dockers, for dockers that build debug version. For dockers that do not build debug version, it uses the regular docker.
This debug bin is installable and usable in a DUT, just like a regular bin.
* Per review comments:
1) Share a single rule for final image for normal & debug flavors (e.g. sonic-broadcom.bin & sonic-broadcom-dbg.bin)
2) Put dbg as suffix in final image name.
3) Compared target/sonic-broadcom.bin.logs with & w/o fix to verify integrity of sonic-broadcom.bin
4) Compared target/sonic-broadcom.bin.logs with sonic-broadcom-dbg.bin.log for verification
This fix takes care of ONIE image only. The next PR will cover the rest.
The next PR, will also make debug image conditional with flag.
* Updated per comments.
Now that debug dockers are available, do not need a way to install debug symbols in regular dockers.
With this commit, when INSTALL_DEBUG_TOOLS=y is set, it builds debug dockers (for dockers that enable debug build) and the final image uses debug dockers. For dockers that do not enable debug build, regular dockers get used in the final image.
Note:
The debug dockers are explicitly named as <docker name>-dbg.gz. But there is no "-dbg" suffix for image.
Hence if you make two runs with and w/o INSTALL_DEBUG_TOOLS=y, you have complete set of regular dockers + debug dockers. But the image gets overwritten.
Hence if both regular & debug images are needed, make two runs, as one with INSTALL_DEBUG_TOOLS=y and one w/o. Make sure to copy/rename the final image, before making the second run.
- use supervisord to manage processes in the frr docker (an abridged example config follows the sign-off below)
- introduce a separate configuration mode for frr
- bring the quagga configuration template to frr.
Signed-off-by: Guohan Lu <gulv@microsoft.com>
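An abridged example of supervisord program entries for the FRR daemons; options are trimmed and the real config carries priorities and logging settings:
```
cat <<'EOF' > /etc/supervisor/conf.d/frr.conf
[program:zebra]
command=/usr/lib/frr/zebra -A 127.0.0.1
autostart=true
autorestart=false

[program:bgpd]
command=/usr/lib/frr/bgpd -A 127.0.0.1
autostart=true
autorestart=false
EOF
```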
* [service] Restart SwSS Docker container if orchagent exits unexpectedly
* Configure systemd to stop restarting swss if it attempts to restart more than 3 times in 20 minutes (see the illustrative drop-in after this list)
* Move supervisor-proc-exit-listener script
* [docker-dhcp-relay] Enhance wait_for_intf.sh.j2 to utilize STATEDB
* Ensure dependent services stop/start/restart with SwSS
* Change 'StartLimitInterval' to 'StartLimitIntervalSec', as Stretch installs systemd 232 (>= v230)
* Also update journald.conf options
* Remove 'PartOf' option from unit files
* Add '$(SUPERVISOR_PROC_EXIT_LISTENER_SCRIPT)' to new shared docker-orchagent makefile
* Make supervisor-proc-exit-listener script read from 'critical_processes' file inside container
* Update critical_processes file for swss container
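An illustrative drop-in capturing the "3 restarts in 20 minutes" policy; the actual unit files set these directives directly:
```
mkdir -p /etc/systemd/system/swss.service.d
cat <<'EOF' > /etc/systemd/system/swss.service.d/restart-limit.conf
[Unit]
# systemd >= 230 expects the *Sec suffix (StartLimitIntervalSec), hence the
# rename away from StartLimitInterval noted above.
StartLimitIntervalSec=1200
StartLimitBurst=3
EOF
```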
SWSS clears DB tables; if teamd is not started after swss, there is a
race condition in which swss might clear vital teamd information.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
This (weekly) service lets the SSD firmware do garbage collection
after the file system has deleted files. It avoids slowness or
even READ-ONLY errors caused by the SSD not being able to free pages
even though the file system thinks there is a lot of space left (a conceptually equivalent timer is sketched below).
Signed-off-by: Zhenggen Xu <zxu@linkedin.com>
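Conceptually this is the same as a weekly fstrim timer, e.g. the following, assuming a matching fstrim.service exists (util-linux ships one):
```
cat <<'EOF' > /etc/systemd/system/fstrim.timer
[Unit]
Description=Discard unused filesystem blocks once a week

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
EOF

systemctl enable --now fstrim.timer
```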
After warm reboot is done, we need to disable warm reboot flag and
tear down anything setup for warm reboot and persisted across.
Signed-off-by: Ying Xie <ying.xie@microsoft.com>
* Install the ipaddress python package, which deprecates the current ipaddr package. ipaddress has a backport to python2.7
* Install the python ipaddress module as required by the route_check.py sonic utility. BTW, ipaddress deprecates ipaddr, and ipaddress has a python2 backport
* Revert the old change per review comments.
Signed-off-by: Renuka Manavalan <remanava@microsoft.com>
- Why it is required
Since SONiC master switched the ifupdown package to the new implementation (ifupdown2), it is required to change the configuration of platform-specific interfaces for the wedge100bf_32x and wedge100bf_65x platforms (because ifupdown2 doesn't support auto mode for the inet6 protocol).
Also, we need to do some refactoring and remove "if platform == smth then ..." from the system-level scripts.
- What I did
Removed the customization of /usr/bin/interfaces-config.sh
Explicitly created the directory /etc/network/interfaces.d
Added a "source" line to the /etc/network/interfaces generation template (to include platform-specific interfaces processing), as sketched below
Added the platform-specific interfaces config itself (for wedge100bf_32x and wedge100bf_65x)
Fixed a testcase in sonic-config-engine
- How to verify it
Build an image for wedge100bf_32x
Perform sudo config reload -y on a new installation
Check the correct configuration of the usb0 interface
- Description for the changelog
Allow configuration of platform-specific interfaces
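An illustrative end result of the template change: the generated /etc/network/interfaces sources any platform-specific fragments dropped into interfaces.d:
```
mkdir -p /etc/network/interfaces.d

# The generation template emits a line like this near the top of the file.
cat <<'EOF' >> /etc/network/interfaces
source /etc/network/interfaces.d/*
EOF
```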