Commit Graph

404 Commits

Stepan Blyshchak
9ce7c6d9fe
[hostcfgd] Configure service auto-restart in hostcfgd. (#5744)
Before this change, a process running inside every SONiC container read the FEATURE table 'auto_restart' field and, depending on its value, decided whether the container had to be killed.
If killed, the service auto-restart mechanism restarted the container.
This change moves the logic from the container to the host daemon - hostcfgd.
The 'auto_restart' handling is kept in supervisor-proc-exit-listener, but it is no longer required for a container that wants to support the auto-restart feature.

hostcfgd refactoring - move feature handling into a separate class.
Override the systemd service Restart= setting from hostcfgd.
Remove the default systemd Restart=always.
Signed-off-by: Stepan Blyshchak stepanb@nvidia.com

- Why I did it

Remove the need to deal with container orchestration logic from the container itself. Leave this logic to the orchestrator - the host OS.

- How I did it

hostcfgd configures the 'Restart=' value for the feature's systemd service.
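As a rough illustration of that step (not the exact hostcfgd code; the drop-in path and helper name are made up for this sketch), the FEATURE table's auto_restart field could be translated into a systemd override like this:

```
import os
import subprocess

def apply_auto_restart(feature, auto_restart):
    """Write a systemd drop-in that overrides Restart= for the feature's service."""
    # 'enabled' -> let systemd restart the service when it exits; otherwise never.
    restart_value = "always" if auto_restart == "enabled" else "no"
    dropin_dir = "/etc/systemd/system/{}.service.d".format(feature)
    os.makedirs(dropin_dir, exist_ok=True)
    with open(os.path.join(dropin_dir, "auto_restart.conf"), "w") as f:
        f.write("[Service]\nRestart={}\n".format(restart_value))
    # Reload unit files so the new Restart= value takes effect.
    subprocess.run(["systemctl", "daemon-reload"], check=True)

# Example mirroring "config feature autorestart lldp enabled":
# apply_auto_restart("lldp", "enabled")
```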

- How to verify it

root@r-tigon-11:/home/admin# sudo config feature autorestart lldp enabled
root@r-tigon-11:/home/admin# show feature status | grep lldp
lldp            enabled   enabled
root@r-tigon-11:/home/admin# docker exec -it lldp pkill -9 lldpd
root@r-tigon-11:/home/admin# docker ps -a | grep lldp
65058396277c        docker-lldp:latest                   "/usr/bin/docker-lld…"   2 days ago          Exited (0) 20 seconds ago                       lldp
root@r-tigon-11:/home/admin# docker ps -a | grep lldp
65058396277c        docker-lldp:latest                   "/usr/bin/docker-lld…"   2 days ago          Up 5 seconds                            lldp
root@r-tigon-11:/home/admin# sudo config feature autorestart lldp disabled
root@r-tigon-11:/home/admin# docker exec -it lldp pkill -9 lldpd
root@r-tigon-11:/home/admin# docker ps -a | grep lldp
65058396277c        docker-lldp:latest                   "/usr/bin/docker-lld…"   2 days ago          Up 35 seconds                           lldp
root@r-tigon-11:/home/admin# docker ps -a | grep lldp
65058396277c        docker-lldp:latest                   "/usr/bin/docker-lld…"   2 days ago          Exited (0) 3 seconds ago                       lldp
root@r-tigon-11:/home/admin# docker ps -a | grep lldp
65058396277c        docker-lldp:latest                   "/usr/bin/docker-lld…"   2 days ago          Exited (0) 39 seconds ago                       lldp
root@r-tigon-11:/home/admin#
2021-06-29 09:06:21 -07:00
Santhosh Kumar T
f8eb5b0958
Flashrom refactoring for broadcom platforms (#7693)
#### Why I did it
- To build flashrom properly with dependency tracking.

#### How I did it
- Moved flashrom code from platform/broadcom/sonic-platform-modules-dell/tools directory to src/flashrom directory.
- At the end, the flashrom_0.9.7_amd64.deb package is built, which will be installed on the devices.
- Currently flashrom is built only for Dell S6100 platforms.
2021-06-22 15:29:21 -07:00
judyjoseph
3ad830eb49
New sonic-buildimage images for Broadcom DNX ASIC family. (#7598)
Introduce new sonic-buildimage images for Broadcom DNX ASIC family.

sonic-broadcom-dnx.bin
sonic-aboot-broadcom-dnx.swi

How I did it

NO CHANGE to existing make commands

make init; make configure PLATFORM=broadcom;  make target/sonic-aboot-broadcom.swi; make  target/sonic-broadcom.bin

The difference now is that it will also produce new Broadcom images for the DNX ASIC family.

sonic-broadcom.bin, sonic-broadcom-dnx.bin
sonic-aboot-broadcom.swi, sonic-aboot-broadcom-dnx.swi

Note: This PR also adds support for Broadcom SAI 5.0 (based on SAI 1.8) for DNX-based platforms, plus changes to the platform x86_64-arista_7280cr3_32p4 bcm config files and platform_env.conf files.
2021-06-22 11:12:22 -07:00
Kebo Liu
078e0e0410
mount 'mellanox' folder only instead of create each sub folder (#7830)
#### Why I did it

Following the discussion in another PR https://github.com/Azure/sonic-buildimage/pull/7708#discussion_r642933510 : since there will be multiple subfolders under **/var/log/mellanox**, we agreed to mount only this folder; the subfolders will be created afterward on demand.

#### How I did it

During syncd docker creation, mount only the folder **/var/log/mellanox**.

#### How to verify it

Build a Mellanox image and verify the related folder on the host and docker side.
2021-06-22 06:27:56 -07:00
Sudharsan Dhamal Gopalarathnam
c88c3c7ba5
Grouping delayed services under a target for config reload checks (#7846)
#### Why I did it
Create a target for the delayed service timers. A few services in SONiC are delayed to speed up the bring-up of the system and essential services, but there is no way to track when they start. This is a problem when executing config reload, since config reload expects all services to be up. Hence, all the timers that trigger the delayed services are grouped under one target so that they can be tracked by the 'config reload' command.

#### How I did it
Created a delay.target unit and made the delayed service timers dependencies of it.
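A minimal sketch of how a caller such as config reload could check that the delayed services have started, assuming only that the new delay.target exists (the check itself is illustrative):

```
import subprocess

def delayed_services_started(target="delay.target"):
    """Return True once the target grouping the delayed service timers is active."""
    result = subprocess.run(
        ["systemctl", "show", "-p", "ActiveState", "--value", target],
        capture_output=True, text=True)
    return result.stdout.strip() == "active"

if not delayed_services_started():
    print("delayed services have not started yet")
```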
2021-06-21 11:55:02 -07:00
Ann Pokora
3d629233bf
[MPLS][libnl3] libnl patches for supporting MPLS
* New accessors in libnl3 for MPLS attributes
* Contains patch files for bug fixes in libnl3 for MPLS attribute parsing
2021-06-16 15:08:23 -07:00
Renuka Manavalan
f7ed82f44a
[Kubernetes]: The kube server could be used as http-proxy for docker (#7469)
Why I did it
The SONiC switches get their docker images from a local repo, populated during install with container images pre-built into the SONiC FW. With the introduction of kubernetes, new docker images available in a remote repo could be deployed. This requires dockerd to be able to pull images from the remote repo.

Depending on the switch's network domain & config, it may or may not be able to reach the remote repo. In the case where the remote repo is unreachable, we could potentially make the Kubernetes server also act as an http-proxy.

How I did it
When the admin explicitly enables it, the kubernetes-server can be configured as the docker proxy. But any update to the docker proxy has to be made via a service-conf file environment variable, implying that a "service restart docker" is required. A restart of dockerd is very expensive, as it restarts all dockers, including the database docker.

To avoid a dockerd restart, pre-configure http_proxy with an unused IP. When the k8s server is enabled to act as the http-proxy, an iptables entry is created to redirect all traffic destined to the configured unused proxy IP to the kubernetes-master IP. This way any update to the Kubernetes master config is just a matter of manipulating iptables, which is transparent to all modules until dockerd needs to download from the remote repo.
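A hedged sketch of the iptables manipulation described above; the chain, port and addresses are placeholders, not the exact rule ctrmgrd installs:

```
import subprocess

def redirect_proxy_to_master(proxy_ip, master_ip, port=3128):
    """DNAT traffic sent to the unused proxy IP so it reaches the k8s master instead."""
    # dockerd keeps pointing at proxy_ip from its http_proxy.conf; only this
    # iptables rule changes when the master IP is (re)configured.
    subprocess.run([
        "iptables", "-t", "nat", "-A", "OUTPUT",
        "-d", proxy_ip, "-p", "tcp", "--dport", str(port),
        "-j", "DNAT", "--to-destination", "{}:{}".format(master_ip, port),
    ], check=True)

# Example with the unused IP from the verification steps and a hypothetical master IP:
# redirect_proxy_to_master("172.16.1.1", "10.10.10.10")
```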

How to verify it
Configure a switch such that the image repo is unreachable
Pre-configure dockerd with http_proxy.conf using an unused IP (e.g. 172.16.1.1)
Update ctrmgrd.service to invoke ctrmgrd.py with the "-p" option.
Configure a k8s server, and deploy an image for a feature with set_owner="kube"
Check whether the switch can successfully download the image.
2021-06-16 07:46:01 -07:00
arlakshm
4d07bbbec6
[Yang][cfggen] update sonic-cfggen to generate config_db from Yang data (#7712)
Why I did it
This PR adds changes in sonic-config-engine to consume configuration data in SONiC Yang schema and generate config_db entries

How I did it
Add a new file, sonic_yang_cfg_generator.
This file has functions to:

parse Yang data JSON and convert it into config_db JSON format;
validate the converted config_db entries to make sure all the dependencies and constraints are met.
Add a new option -Y to the sonic-cfggen command for this purpose

Add unit tests

This capability is supported only in the sonic-config-engine Python3 package.
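A toy illustration of the kind of transformation sonic_yang_cfg_generator performs, assuming Yang lists keyed by a 'name' leaf; the real generator resolves key leaves from the Yang models and validates the result against their constraints:

```
import json

def yang_to_config_db(yang_json):
    """Toy conversion of SONiC Yang data JSON into config_db-style {table: {key: fields}}."""
    config_db = {}
    for module in json.loads(yang_json).values():      # e.g. "sonic-vlan:sonic-vlan"
        for table, container in module.items():        # e.g. "VLAN"
            rows = container.get(table + "_LIST", [])
            config_db[table] = {}
            for row in rows:
                key = row.pop("name")                  # assumes a 'name' key leaf
                config_db[table][key] = row
    return config_db

yang_data = '{"sonic-vlan:sonic-vlan": {"VLAN": {"VLAN_LIST": [{"name": "Vlan100", "vlanid": "100"}]}}}'
print(yang_to_config_db(yang_data))                    # {'VLAN': {'Vlan100': {'vlanid': '100'}}}
```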
2021-06-10 12:03:33 -07:00
yozhao101
1a3cab43ac
[Monit] Deprecate the feature of monitoring the critical processes by Monit (#7676)
Signed-off-by: Yong Zhao yozhao@microsoft.com

Why I did it
We now leverage Supervisor to monitor the running status of critical processes in each container, which is more reliable and flexible than doing the monitoring with Monit. So we removed the functionality of monitoring the critical processes with Monit.

How I did it
I removed the script process_checker and the corresponding Monit configuration entries for critical processes.

How to verify it
I verified this on the device str-7260cx3-acs-1.
2021-06-04 10:16:53 -07:00
Renuka Manavalan
73447efc31
Add service to restore TACACS from old config (#7560)
Why I did it
In upgrade scenarios where config_db.json is not carried forward to the new image, the switch could be left without TACACS credentials.
Added a service that triggers 5 minutes after boot and restores TACACS if /etc/sonic/old_config/tacacs.json is present.

How I did it
By adding a service that fires 5 minutes after boot.
This service applies the TACACS configuration if available.

How to verify it
Upgrade and watch the status of tacacs.timer & tacacs.service.
You may create /etc/sonic/old_config/tacacs.json with updated credentials
(within 5 minutes of boot) and see that it appears in the config and is persisted too.

Which release branch to backport (provide reason below if selected)
 201911
 202006
 202012
2021-06-03 20:07:17 -07:00
yozhao101
37863ac854
[Monit] Restart telemetry container if memory usage is beyond the threshold (#7645)
Signed-off-by: Yong Zhao yozhao@microsoft.com

Why I did it
This PR aims to monitor the memory usage of streaming telemetry container and restart streaming telemetry container if memory usage is larger than the pre-defined threshold.

How I did it
I used the system tool Monit to run a script, memory_checker, which periodically checks the memory usage of the streaming telemetry container. If the memory usage of the telemetry container exceeds the pre-defined threshold 10 times during 20 cycles, an alerting message is written into syslog and Monit runs the script restart_service to restart the streaming telemetry container.
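A simplified sketch in the spirit of the memory_checker script: it reads the container's current memory usage via the docker CLI and exits non-zero when the threshold is exceeded, which is how Monit-driven checks conventionally signal a failure. The threshold and parsing here are illustrative, not the values used by the real script:

```
import subprocess
import sys

THRESHOLD_BYTES = 400 * 1024 * 1024   # illustrative threshold

def container_memory_usage(name):
    """Return the current memory usage (bytes) of a running container via 'docker stats'."""
    out = subprocess.run(
        ["docker", "stats", "--no-stream", "--format", "{{.MemUsage}}", name],
        capture_output=True, text=True, check=True).stdout.strip()
    used = out.split("/")[0].strip()                   # e.g. "123.4MiB"
    for suffix, factor in (("GiB", 1024 ** 3), ("MiB", 1024 ** 2), ("KiB", 1024), ("B", 1)):
        if used.endswith(suffix):
            return int(float(used[:-len(suffix)]) * factor)
    raise ValueError("unexpected docker stats output: " + out)

if __name__ == "__main__":
    if container_memory_usage("telemetry") > THRESHOLD_BYTES:
        print("telemetry container exceeds memory threshold", file=sys.stderr)
        sys.exit(1)   # non-zero exit lets Monit count a failure and eventually act on it
```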

How to verify it
I verified this implementation on device str-7260cx3-acs-1.
2021-05-28 11:13:44 -07:00
Stepan Blyshchak
d7b96dfdf1
[sonic-sdk] add sonic sdk and sonic sdk buildenv (#6712)
- Why I did it

To give SONiC Application Extension developers an environment to run and develop their apps.

- How I did it
Created sonic-sdk and sonic-sdk-buildenv dockers and their dbg versions.

- How to verify it
Build:

$ make -f slave target/sonic-sdk.gz target/sonic-sdk-buildenv.gz
2021-05-28 10:16:02 -07:00
Lawrence Lee
79914f5336
[swss.service]: Remove ordering with pmon (#7614)
Signed-off-by: Lawrence Lee <lawlee@microsoft.com>
2021-05-26 09:12:54 -07:00
Neetha John
3b06f44555
[qos]: modify dot1p to tc mapping (#7661)
Map priority 0 to TC 1 and priority 1 to TC 0

Sent traffic on priorities 0 and 1 and verified that it gets mapped correctly in hardware.

Signed-off-by: Neetha John <nejo@microsoft.com>
2021-05-20 10:36:39 -07:00
Ze Gan
8f883fee67
[macsec]: Bind macsec service to sonic.target (#7642)
MACsec service cannot be enabled by "sudo config feature state macsec enabled"

Signed-off-by: Ze Gan <ganze718@gmail.com>
2021-05-18 11:44:21 -07:00
Nazarii Hnydyn
6e264d8ac9
[swss_vars]: Add 'resource_type' attribute. (#7526)
Signed-off-by: Nazarii Hnydyn <nazariig@nvidia.com>
2021-05-06 12:14:21 -07:00
Stepan Blyshchak
cd2c86eab6
[dockers] label SONiC Docker with manifest (#5939)
Signed-off-by: Stepan Blyschak stepanb@nvidia.com

This PR is part of SONiC Application Extension

Depends on #5938

- Why I did it
To provide an infrastructure change in order to support SONiC Application Extension feature.

- How I did it
Label every installable SONiC Docker with a minimal required manifest and auto-generate packages.json file based on
installed SONiC images.

- How to verify it
Build an image, execute the following command:

admin@sonic:~$ docker inspect docker-snmp:1.0.0 | jq '.[0].Config.Labels["com.azure.sonic.manifest"]' -r | jq
cat the /var/lib/sonic-package-manager/packages.json file to verify all dockers are listed there.
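A rough sketch of how a packages.json-style listing could be assembled from those manifest labels using the docker Python SDK; the label key comes from the verification command above, while the output layout is an assumption:

```
import json
import docker                      # docker SDK for Python

LABEL = "com.azure.sonic.manifest"

def collect_manifests():
    """Gather the manifest label from every locally installed SONiC Docker image."""
    client = docker.from_env()
    packages = {}
    for image in client.images.list():
        manifest_json = image.labels.get(LABEL)
        if not manifest_json:
            continue               # image was not built with a SONiC manifest label
        manifest = json.loads(manifest_json)
        name = image.tags[0] if image.tags else image.short_id
        packages[name] = manifest
    return packages

if __name__ == "__main__":
    print(json.dumps(collect_manifests(), indent=4))
```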
2021-04-26 13:51:50 -07:00
Guohan Lu
27a635a15a Revert "Flashrom refactoring (#6922)"
This reverts commit 7dd9d1f3f2.
2021-04-25 11:51:35 -07:00
a-barboza
ec9101f9c5
RADIUS Management User Authentication Feature (#7284)
Why I did it
HLD: https://github.com/Azure/SONiC/blob/master/doc/aaa/radius_authentication.md
CLI: In a separate PR.

How I did it
How to verify it
UT: src/sonic-host-services/tests/hostcfgd/hostcfgd_radius_test.py
2021-04-23 19:09:41 -07:00
Stepan Blyshchak
ae339c95d2
[systemd] disable default systemd udev rules for interfaces (#7369)
Fix #7364

99-default.link was always present in SONiC, but previous systemd (<247) had an issue and it did not take effect due to systemd/systemd#3374. With systemd 247 it now works.

However, this policy overrides the teamd-provided MAC address, which causes the teamd netdev to use a random MAC address. Therefore, it needs to be disabled.

Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
2021-04-21 17:50:00 -07:00
Santhosh Kumar T
7dd9d1f3f2
Flashrom refactoring (#6922)
#### Why I did it
To build flashrom properly with dependency tracking.

#### How I did it
Moved flashrom code from platform/broadcom/sonic-platform-modules-dell/tools directory to src/flashrom directory.
At the end, the flashrom_0.9.7_amd64.deb package is built, which will be installed on the devices.
2021-04-20 15:24:44 -07:00
guxianghong
6fe6d7394d
[arm] support compile sonic arm image on arm server (#7285)
- Support compiling the SONiC arm image on an arm server. If arm image compilation is executed on an arm server instead of in qemu mode on an x86 server, compile time can be reduced significantly.
- Add the kernel argument systemd.unified_cgroup_hierarchy=0 for the systemd upgrade to version 247, per #7228
- rename multiarch docker to sonic-slave-${distro}-march-${arch}

Co-authored-by: Xianghong Gu <xgu@centecnetworks.com>
Co-authored-by: Shi Lei <shil@centecnetworks.com>
2021-04-18 08:17:57 -07:00
Stepan Blyshchak
4369361894
[sonic_debian_extension.j2] fix systemd version not from buster-backports (#7322)
Install systemd explicitly from buster-backports, and install the libsystemd* packages from backports.

Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
2021-04-18 08:07:02 -07:00
vganesan-nokia
b313d4d092
[systemlag] Lag id boundary set for system lag (#6488)
Signed-off-by: vedganes <vedavinayagam.ganesan@nokia.com>

Changes for setting the platform-specific LAG ID boundaries in the chassis
app DB. The platform-specific LAG ID boundaries are supplied via
chassisdb.conf. The lag_id_start and lag_id_end boundary values sourced
from this file are set in the chassis app DB, which will be used by the LAG ID
allocator to allocate unique LAG IDs in an atomic fashion.
2021-03-30 23:21:53 -07:00
Stepan Blyshchak
1f7d9e2698
[docker_img_ctl.j2] make tmpfs mounts optional and add ability to run container by image id (#6439)
- Why I did it
I made docker_img_ctl.j2 applicable to more dockers (including application extension dockers) by adding an option not to mount tmpfs on /tmp/ and /var/tmp/. In some applications /tmp/ is a different docker volume which can't be tmpfs.
Also, I added the ability to pass REPO[:TAG]|[@digest]/IMAGE_ID instead of just the REPO name.

- How I did it
Modified docker_img_ctl.j2 and docker makefiles.

- How to verify it
Run it on the switch.
2021-03-16 17:03:12 +02:00
Stepan Blyshchak
2b8941e716
[sonic_debian_extension] add docker script to SONiC filesystem (#5935)
- Why I did it
To allow SONiC Package Migration during SONiC-2-SONiC upgrade we need to start the docker daemon in a chroot-ed environment in the new SONiC filesystem.
Later this script will be used to start dockerd in a chroot environment on SONiC.

- How I did it
Install a docker service script into /usr/lib/docker/ in SONiC filesystem.

- How to verify it
Install the SONiC image on the switch, mount the squashfs to some directory, mount an overlay rw layer over the squashfs, mount procfs and sysfs, and mount the docker library. Start docker using:
root@sonic:~$ /usr/lib/docker/docker.sh start

Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
2021-03-14 14:15:42 +02:00
Renuka Manavalan
6f7cd8d772
Copy dummy flannel.conf to get around absence of CNI Network (#6985)
Why I did it
We skip installation of a CNI plugin, as we don't need one. But this leaves the node in a "not ready" state upon joining the master.
To fix this, we copy a dummy .conf file into /etc/cni/net.d.

How I did it
Keep this file in /usr/share/sonic/templates and copy it to /etc/cni/net.d upon joining the k8s master.

How to verify it
Upon configuring the master IP and enabling join, watch the node join and move to the ready state.
You may verify this using the kubectl get nodes command.
2021-03-09 19:49:54 -08:00
Stepan Blyshchak
12c03c4f25
[sonic_debian_exntesion] install docker_image_ctl.j2 template in the image templates (#5937)
SONiC Package Manager will need to auto-generate the start script using that template. For that, we need this template to be included in the SONiC filesystem.

Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
2021-02-25 09:11:12 -08:00
Stepan Blyshchak
e179ec2fae
[services] introduce sonic.target (#5705)
- Why I did it
Group all SONiC services together so they can be managed together. This will be used by the config reload command as a much simpler and more generic way to restart services.

- How I did it
Add services to sonic.target

- How to verify it
Together with Azure/sonic-utilities#1199
config reload -y

Signed-off-by: Stepan Blyshchak <stepanb@nvidia.com>
2021-02-25 14:26:24 +02:00
Ze Gan
4068944202
[MACsec]: Set MACsec feature to be auto-start (#6678)
1. Add supervisord as the entrypoint of docker-macsec
2. Add wpa_supplicant conf into docker-macsec
3. Set the macsecmgrd as the critical_process
4. Configure supervisor to monitor macsecmgrd
5. Set macsec in the features list
6. Add config variable `INCLUDE_MACSEC`
7. Add macsec.service

**- How to verify it**

Change the `/etc/sonic/config_db.json` as follows:
```
{
    "PORT": {
        "Ethernet0": {
            ...
            "macsec": "test"
         }
    }
    ...
    "MACSEC_PROFILE": {
        "test": {
            "priority": 64,
            "cipher_suite": "GCM-AES-128",
            "primary_cak": "0123456789ABCDEF0123456789ABCDEF",
            "primary_ckn": "6162636465666768696A6B6C6D6E6F707172737475767778797A303132333435",
            "policy": "security"
        }
    }
}
```
After executing `sudo config reload -y`, we should find the following new items inserted in the APP_DB of redis:
```
127.0.0.1:6379> keys *MAC*
1) "MACSEC_EGRESS_SC_TABLE:Ethernet0:72152375678227538"
2) "MACSEC_PORT_TABLE:Ethernet0"
127.0.0.1:6379> hgetall "MACSEC_EGRESS_SC_TABLE:Ethernet0:72152375678227538"
1) "ssci"
2) ""
3) "encoding_an"
4) "0"
127.0.0.1:6379> hgetall "MACSEC_PORT_TABLE:Ethernet0"
 1) "enable"
 2) "false"
 3) "cipher_suite"
 4) "GCM-AES-128"
 5) "enable_protect"
 6) "true"
 7) "enable_encrypt"
 8) "true"
 9) "enable_replay_protect"
10) "false"
11) "replay_window"
12) "0"
```

Signed-off-by: Ze Gan <ganze718@gmail.com>
2021-02-23 13:22:45 -08:00
Sujin Kang
d5238ae8dd
[pcie.yaml] Move pcie configuration file path to platform directory (#6475)
- Why I did it
The pcie configuration file location is under the plugin directory, not under the platform directory.
#6437

- How I did it

Move all pcie.yaml configuration files from the plugin to the platform directory.
Remove the unnecessary timer used to start pcie-check.service.
Move pcie-check.service to sonic-host-services.
- How to verify it
Verify on the device
2021-02-21 08:27:37 -08:00
Lawrence Lee
97c605f1f7
[swss]: Clear MUX-related state DB tables on start (#6759)
* Add *MUX_CABLE_TABLE* to set of tables to clear on SWSS start, which
will clear HW_MUX_CABLE_TABLE and MUX_CABLE_TABLE
* Order swss to start before pmon to ensure that DBs are cleared before
xcvrd (running inside pmon) starts and re-populates the tables
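
A standalone sketch of that clearing step using redis-py; in the image it happens as part of SWSS start, and the STATE_DB index used here is an assumption for this sketch:

```
import redis

# STATE_DB is assumed to be logical redis DB 6 on the local instance for this sketch.
state_db = redis.Redis(host="127.0.0.1", port=6379, db=6)

def clear_mux_tables():
    """Delete every HW_MUX_CABLE_TABLE / MUX_CABLE_TABLE key before xcvrd repopulates them."""
    removed = 0
    for key in state_db.scan_iter(match="*MUX_CABLE_TABLE*"):
        state_db.delete(key)
        removed += 1
    return removed

print("cleared {} MUX-related keys".format(clear_mux_tables()))
```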

Signed-off-by: Lawrence Lee <lawlee@microsoft.com>
2021-02-14 12:43:49 -08:00
dflynn-Nokia
88961f1339
[armhf build] Fix azure-storage dependency on cryptography package (#6780)
Fix marvell-armhf build break

The azure-storage package depends on the cryptography package. Newer
versions of cryptography require the rust compiler, the correct version
for which is not readily available in buster. Hence we pre-install an
older version here to satisfy the azure-storage dependency.
Note: This is not a problem for other architectures as pre-built versions
of cryptography are available for those. This sequence can be removed
after upgrading to debian bullseye.
2021-02-14 10:36:04 -08:00
Lior Avramov
6f8c31554f
[systemd] Increase syncd startup script timeout to support FW upgrade on init. (#6709)
**- Why I did it**
To support FW upgrade on init.

**- How I did it**
Change timeout value

**- How to verify it**
I manually changed ASIC and Gearbox FW followed by hard reset in order for FW upgrade to take place on init.

Signed-off-by: liora <liora@nvidia.com>
2021-02-11 12:53:36 +02:00
Arun Saravanan Balachandran
3015de1dd0
[sonic-host-service] Move to sonic-host-services package (#6273)
- Why I did it

To move 'sonic-host-service', which is currently built as a separate package, into the 'sonic-host-services' package.

- How I did it

- Moved 'sonic-host-server' to 'src/sonic-host-services' and included it as part of the python3 wheel.
- Other files were moved to 'src/sonic-host-services-data' and included as part of the deb package.
- Changed build option ‘INCLUDE_HOST_SERVICE’ to ‘ENABLE_HOST_SERVICE_ON_START’ for enabling sonic-hostservice at boot-up by default.
2021-02-08 19:35:08 -08:00
SuvarnaMeenakshi
62a599a5b3
[multi_asic][vs]: Add dependency in teamd service to start after topology service(#6594)
[multi_asic][vs]: Add dependency in teamd service to start after topology service.
- Why I did it
In multi-asic VS, topology service is run after database service to set up the internal asic topology.
swss and syncd have a dependency to start after topology service is run so that the interfaces are moved to right namespace and created in the right namespace. In case of multi-asic vs, during the initial boot up, when there is no configuration added, teamd service starts and swss/syncd do not start as topology service does not start. Upon loading configuration using config_db or minigraph, swss and sycnd start up , but teamd is not restarted as swss is not stopped and started. This causes teamd to be in a bad state and requires a reload of config.

- How I did it
Add a dependency so the teamd service starts after the topology service has completed.

- How to verify it
No change in single asic vs or platform.
No change in multi-asic regular image.
Change only in multi-asic VS. Bring up a multi-asic VS image without any configuration; the teamd service will fail to start due to a dependency failure. Load the minigraph, start the topology service, load the configuration, and ensure all services come up.
Signed-off-by: SuvarnaMeenakshi <sumeenak@microsoft.com>
2021-02-04 14:10:56 -08:00
abdosi
cfa8fbbf1a
[baseimage]: Updates for Ebtables and support for multi-asic (#6542)
The following changes were made for ebtables:

- Support for multi-asic platforms. Ebtables filters are installed in each namespace for multi-asic, not on the host. On single-asic they are installed on the host.

- For multi-asic platforms we don't want to install them on the host, otherwise namespace-to-namespace communication does not happen since ARP requests are not forwarded.

- Updated to restore ebtables rules from a text file rather than the binary format. Rules are restored as part of database docker init instead of rc.local.

- Removed the ebtables service files for buster, as they are not needed since filters are restored/installed as part of database docker init.
   All the pre-installed ebtables* binaries are the same as ebtables-legacy-*.

Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
2021-01-27 08:36:10 -08:00
judyjoseph
46b3bd5503
[teamd]: Increase wait timeout for teamd docker stop to clean Port channels. (#6537)
The port channels were not getting cleaned up because the cleanup activity was taking more than 10 seconds, which is the default docker timeout after which a SIGKILL is sent.
Fixes #6199
To check whether it works out for this issue in 201911: #6503

This issue is seen significantly more in the master branch compared to 201911 because the port channel cleanup takes more time in master. Tested on a DUT with 8 port channels.

master

    admin@str-s6000-acs-8:~$ time sudo systemctl stop teamd
    real    0m15.599s
    user    0m0.061s
    sys     0m0.038s
Sonic 201911.v58

    admin@str-s6000-acs-8:~$ time sudo systemctl stop teamd
    real    0m5.541s
    user    0m0.020s
    sys     0m0.028s
2021-01-23 20:57:52 -08:00
yozhao101
04cd1d61e8
[Monit] Monitoring the running status of containers. (#6251)
**- Why I did it**
This PR aims to monitor the running status of each container. Currently the auto-restart feature is enabled: if a critical process exits unexpectedly, the container is restarted. If the container is restarted 3 times within 20 minutes, it will not run anymore unless we clear the flag manually using the command `sudo systemctl reset-failed <container_name>`.

**- How I did it**
We employ Monit to run a script. This script generates the expected running container list and compares it with the currently running containers. If there are containers that were expected to run but are not running, an alerting message is written into syslog.
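A simplified sketch of such a check; the expected set here is hard-coded for illustration, whereas the real script derives it from the FEATURE table and the device's ASIC count:

```
import subprocess
import syslog

# Illustrative expected set; the real script builds this from the FEATURE table.
EXPECTED_CONTAINERS = {"swss", "syncd", "teamd", "bgp", "pmon", "lldp", "snmp"}

def running_containers():
    out = subprocess.run(["docker", "ps", "--format", "{{.Names}}"],
                         capture_output=True, text=True, check=True).stdout
    return set(out.split())

missing = EXPECTED_CONTAINERS - running_containers()
if missing:
    syslog.syslog(syslog.LOG_ERR,
                  "Expected containers not running: " + ", ".join(sorted(missing)))
```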

**- How to verify it**
I tested this feature on the lab device `str-a7050-acs-3`, which has a single ASIC, and `str2-n3164-acs-3`, which is multi-ASIC. First I manually stopped a container by running the command `sudo systemctl stop <container_name>`, then I checked whether there was an alerting message in the syslog.

Signed-off-by: Yong Zhao <yozhao@microsoft.com>
2021-01-07 19:52:22 -08:00
Joe LeVeque
e52581e919
[PDDF] Build and install Python 3 package (#6286)
- Make PDDF code compliant with both Python 2 and Python 3
- Align code with PEP8 standards using autopep8
- Build and install both Python 2 and Python 3 PDDF packages
2021-01-07 10:03:29 -08:00
Joe LeVeque
566ea4f601
[system-health] Convert to Python 3 (#5886)
- Convert system-health scripts to Python 3
- Build and install system-health as a Python 3 wheel
- Also convert newlines from DOS to UNIX
2020-12-29 14:04:09 -08:00
Joe LeVeque
62662acbd5
No longer install some unnecessary Python 2 packages in host (#6301)
- No longer install Python 2 packages in host:
    - libpython2.7-dev
    - docker
    - ipaddress
    - netifaces
    - azure-storage
    - watchdog
    - futures

- Install Python 3 versions of the following packages in host:
    - docker
    - azure-storage
    - watchdog
    - redis
    - swsssdk (install unconditionally)
2020-12-29 13:02:11 -08:00
lguohan
162f0fdfe1
[init_cfg]: allow enable/disable swss/teamd/syncd services (#6291)
swss/teamd/syncd services were changed to always be enabled
in commit fad481edc1 as a workaround
to prevent hostcfgd from starting services during the bootup process.

Commit 317a4b3410 introduced
waiting until full system bootup before updating feature states in hostcfgd.

Thus, the workaround introduced in commit fad481ed can be removed.

Signed-off-by: Guohan Lu <lguohan@gmail.com>
2020-12-28 10:33:46 -08:00
Prabhu Sreenivasan
df13245b9f
[CRM] Add support for snat, dnat and ipmc crm resources (#6012)
Signed-off-by: Prabhu Sreenivasan prabhu.sreenivasan@broadcom

What I did
Added support for snat, dnat and ipmc resources under CRM module.

How I did it
The new NAT feature adds new resources snat_entry and dnat_entry that need to be monitored. ipmc_entry tracks IP multicast resources used by the switch.

How to verify it
sonic-utilities tests and crm spytest
2020-12-23 06:15:53 -08:00
lguohan
aa1cc848e2
[sonic-yang-mgmt-py2]: remove sonic-yang-mgmt py2 (#6262)
No longer needed, as sonic-utilities has been moved to python3.

Signed-off-by: Guohan Lu <lguohan@gmail.com>
2020-12-22 21:05:33 -08:00
Renuka Manavalan
ba02209141
First cut image update for kubernetes support. (#5421)
* First cut image update for kubernetes support.
With this,
    1)  dockers dhcp_relay, lldp, pmon, radv, snmp, telemetry are enabled
        for kube management
        init_cfg.json configures set_owner as kube for these

    2)  Each docker's start.sh updated to call container_startup.py to register going up
          As part of this call, it registers the current owner as local/kube and its version
          The images are built with its version ingrained into image during build

    3)  Updated each docker's bash script to call 'container start/stop/wait' instead of 'docker start/stop/wait'.
         For all locally managed containers, it calls docker commands, hence no change for locally managed.
        
    4)  Introduced a new ctrmgrd service that helps with the transition between owners (kube & local) and carries over any label updates from STATE-DB to the API server

    5)  hostcfgd updated to handle owner change

    6) Reboot scripts are updated to tag kube-running images as local, so upon reboot they run the same image.

   7) Added kube_commands.py to handle all updates with the Kubernetes API server -- dedicated to k8s interaction only.
2020-12-22 08:01:33 -08:00
Prabhu Sreenivasan
df2a4ded98
[ntp]: Source interface support for NTP (#6033)
Added source interface support for NTP.
Also made NTP start on Mgmt-VRF by default when configured.

**- How I did it**
1) Updated hostcfgd to listen to the global config NTP and NTP_SERVER tables and restart ntp whenever the configuration changes (see the sketch after this list). The NTP table includes the source interface configuration.
2) The ntp script was updated to start in Mgmt-VRF by default when configured.
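
A condensed sketch of that hostcfgd wiring, assuming the ConfigDBConnector subscribe/listen pattern hostcfgd uses; the restart handler here is simplified and does not re-render ntp.conf:

```
import subprocess
from swsssdk import ConfigDBConnector

def ntp_config_changed(table, key, data):
    """On any NTP / NTP_SERVER change, regenerate ntp.conf (omitted here) and restart ntp."""
    subprocess.run(["systemctl", "restart", "ntp"], check=False)

config_db = ConfigDBConnector()
config_db.connect(wait_for_init=True, retry_on=True)
# NTP holds global options such as the source interface; NTP_SERVER holds the server list.
config_db.subscribe("NTP", ntp_config_changed)
config_db.subscribe("NTP_SERVER", ntp_config_changed)
config_db.listen()
```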

Signed-off-by: Prabhu Sreenivasan <prabhu.sreenivasan@broadcom>
2020-12-21 05:34:13 -08:00
Lawrence Lee
03ad30d2ab
[build_templates]: Start SNMP timer after SWSS service (#6195)
Fixes #5663

- Why I did it
It's currently possible for the SNMP timer to conflict with config reload (specifically if the timer triggers while config reload is stopping the SWSS service). config reload triggers SWSS to shut down, which causes SNMP to shut down, which conflicts with the SNMP timer causing SNMP to start up. See the linked issue for more details.

- How I did it
Including the After ordering dependency forces the SNMP timer to wait until SWSS finishes stopping, preventing the conflict. If there is an ordering dependency between two units (e.g. one unit is ordered After another) and one unit is shutting down while the other is starting up, the shutdown will always be ordered before the startup. In this case, that means the SNMP timer is forced to wait for the SWSS shutdown to complete. Only then can the SNMP timer proceed. See here for more details.

It's important to note that the After dependency will not cause SWSS to be started when the SNMP timer fires (assuming that SWSS has not yet been started). The existing Requisite dependency in the SNMP service will also not cause SWSS to be started, instead it will cause the SNMP service to fail if SWSS is not active.

Signed-off-by: Lawrence Lee <lawlee@microsoft.com>
2020-12-16 16:39:14 -08:00
Joe LeVeque
c829e6914a
Install 'wheel' package in host OS; upgrade pip and setuptools (#6187)
Install the 'wheel' package in host OS (along with python3 and python3-distutils which are also needed for building some Python packages) to eliminate error messages like the following:

```
  Running setup.py bdist_wheel for watchdog: started
  Running setup.py bdist_wheel for watchdog: finished with status 'error'
  Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-Qd3K08/watchdog/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-0AHpMe --python-tag cp27:
  usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
     or: -c --help [cmd1 cmd2 ...]
     or: -c --help-commands
     or: -c cmd --help
  
  error: invalid command 'bdist_wheel'
  
  ----------------------------------------
  Failed building wheel for watchdog

```

These error messages appear to have no impact on the image build, because the Python package seems to still get installed successfully afterward, just the building of a wheel package fails. Therefore, this is more of a cosmetic fix than an actual bug.

This is an addendum to https://github.com/Azure/sonic-buildimage/pull/6182.

Also upgrade pip and install more recent version of setuptools package via PyPI.
2020-12-16 16:38:15 -08:00
Sabareesh-Kumar-Anandan
9f4ca01388
[sonic-config-engine] Adding dependent pkgs needed for arm compilation (#6186)
libxslt-dev and libz-dev are dependencies for lxml==4.6.1 which is required for pyangbind==0.8.1

lxml-4.6.2-cp37-cp37m-manylinux1_x86_64.whl is directly downloaded in amd64 whereas in arm this is built from lxml-4.6.2.tar.gz

Signed-off-by: Sabareesh Kumar Anandan <sanandan@marvell.com>
2020-12-15 08:44:46 -08:00