- Why I did it

Initially, we used Monit to monitor critical processes in each container. If one of the critical processes was not running or crashed for some reason, Monit would periodically write an alerting message into syslog. However, whenever a new process is added to a container, the corresponding Monit configuration file also has to be updated, which makes maintenance harder. Now we employ the event listener of Supervisord to do this monitoring. Since the processes in each container are already managed by Supervisord, we only need to focus on the monitoring logic.

- How I did it

We use the event listener mechanism of Supervisord to monitor critical processes in containers. The event listener takes the following steps when it is notified that one of the critical processes exited unexpectedly:

1. It first checks whether the auto-restart mechanism is enabled for this container.
2. If auto-restart is enabled, the event listener kills the Supervisord process, which causes the container to exit and subsequently be restarted.
3. If auto-restart is not enabled for this container, the event listener enters a loop which first sleeps 1 minute and then checks whether the process is running again. If it is, the event listener exits; if not, an alerting message is written into syslog and the loop continues.

- How to verify it

First, check whether the auto-restart mechanism of a container is enabled by running the command `show feature status`. If it is enabled, select one critical process and kill it manually, then check whether the container is restarted. Second, disable the auto-restart mechanism (if it was enabled in step 1) by running the command `sudo config feature autorestart <container_name> disabled`, then select and kill one critical process again. After that, an alerting message should appear in syslog every 1 minute. A sample verification session is sketched below.

- Which release branch to backport (provide reason below if selected)

  - [ ] 201811
  - [ ] 201911
  - [x] 202006
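The console session below is only an illustration of the verification flow described above. It uses the `swss` container and its `orchagent` process purely as an example of a container with a critical process; any other container and critical process can be substituted, and the exact syslog message text depends on the listener implementation.

```
# Step 1: check whether auto-restart is enabled for the container under test.
show feature status

# With auto-restart enabled: kill one critical process inside the container
# and confirm the container exits and is restarted (its "Up" time in
# "docker ps" starts over).
docker exec swss pkill orchagent
docker ps | grep swss

# Step 2: disable auto-restart, kill a critical process again, and confirm
# that an alerting message shows up in syslog every minute instead.
sudo config feature autorestart swss disabled
docker exec swss pkill orchagent
sudo tail -f /var/log/syslog | grep -i orchagent
```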
HOWTO Use Virtual Switch (VM)
1. Install libvirt, kvm, qemu
sudo apt-get install libvirt-clients qemu-kvm libvirt-bin
2. Create SONiC VM for single ASIC HWSKU
$ sudo virsh
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh #
virsh # create sonic.xml
Domain sonic created from sonic.xml
virsh #
3. Create SONiC VM for multi-ASIC HWSKU
Update sonic_multiasic.xml with the external interfaces required for the HWSKU.
$ sudo virsh
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh #
virsh # create sonic_multiasic.xml
Domain sonic created from sonic_multiasic.xml
virsh #
Once the VM has booted up, create a file "asic.conf" under
/usr/share/sonic/device/x86_64-kvm_x86_64-r0/ with the content:
NUM_ASIC=<Number of asics>
Also, create a "topology.sh" script under
/usr/share/sonic/device/x86_64-kvm_x86_64-r0/<HWSKU>
which simulates the internal ASIC connectivity of the hardware. The HWSKU
directory also holds the required per-ASIC files such as port_config.ini.
Having done this, a new service "topology.service" will be started during
boot-up, which invokes the topology.sh script. An example of both files is
sketched below.
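For illustration only, here is a minimal sketch of the two files for a hypothetical HWSKU with 6 ASICs. The ASIC count, the veth names, and the wiring in topology.sh are made-up placeholders and must be replaced with the real internal connectivity of the HWSKU being simulated.

```
# /usr/share/sonic/device/x86_64-kvm_x86_64-r0/asic.conf
NUM_ASIC=6
```

```
#!/bin/bash
# /usr/share/sonic/device/x86_64-kvm_x86_64-r0/<HWSKU>/topology.sh
# Invoked by topology.service at boot-up. The single veth pair below is just
# a placeholder standing in for the internal ASIC-to-ASIC links.
ip link add veth-asic0 type veth peer name veth-asic1
ip link set veth-asic0 up
ip link set veth-asic1 up
```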
4. Access virtual switch:
1. Connect SONiC VM via console
```
$ telnet 127.0.0.1 7000
```
OR
2. Connect SONiC VM via SSH
1. Connect via console (see 4.1 above)
2. Request a new DHCP address
```
sudo dhclient -v
```
3. Connect via SSH
```
$ ssh -p 3040 admin@127.0.0.1
```