**- Why I did it**

Initially, the critical_processes file contained either the name of a critical process or the name of a group. For example, the critical_processes file in the dhcp_relay container contains a single group name, `isc-dhcp-relay`. When testing the autorestart feature of each container, we need to get all the critical processes and test whether a container can be restarted correctly if one of its critical processes is killed. However, with the old syntax it was difficult to tell whether a name in the critical_processes file referred to a critical process or to a group. Changing the syntax of this file separates individual processes from groups and makes the distinction clear to the user.

Now the critical_processes file contains two different kinds of entries. One is "program:xxx", which indicates a critical process. The other is "group:xxx", which indicates a group of critical processes managed by supervisord under the name "xxx". I also updated the logic that parses the critical_processes file in the supervisor-proc-event-listener script.

**- How to verify it**

First, enable the autorestart feature of a specific container, for example `dhcp_relay`, by running the command `sudo config container feature autorestart dhcp_relay enabled` on the DUT. Then select a critical process from the output of `docker top dhcp_relay` and run `sudo kill -SIGKILL <pid>` to kill it. The final step is to check whether the container is restarted correctly.
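Under the new syntax, the critical_processes file for the dhcp_relay container would contain the group entry described above; the `program:` line below is a hypothetical example of the other entry type, not taken from an actual file:

```
group:isc-dhcp-relay
program:some-daemon
```

The verification steps can be condensed into a short shell session, where `<pid>` stands for the PID of the critical process chosen from the `docker top` output, and the final check is just one way to confirm the restart:

```
sudo config container feature autorestart dhcp_relay enabled
docker top dhcp_relay          # pick the PID of a critical process
sudo kill -SIGKILL <pid>
docker ps | grep dhcp_relay    # the container should show a fresh uptime
```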
HOWTO Use Virtual Switch (VM)
1. Install libvirt, kvm, qemu

```
sudo apt-get install libvirt-clients qemu-kvm libvirt-bin
```
2. Create SONiC VM for single ASIC HWSKU

```
$ sudo virsh

Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh #
virsh # create sonic.xml
Domain sonic created from sonic.xml

virsh #
```
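To confirm the domain came up, you can list the running domains from the same virsh prompt (standard virsh usage; the Id column and exact formatting will vary):

```
virsh # list
 Id   Name    State
 ----------------------
 1    sonic   running
```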
3. Create SONiC VM for multi-ASIC HWSKU

Update sonic_multiasic.xml with the external interfaces required for the HWSKU.

```
$ sudo virsh

Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh #
virsh # create sonic_multiasic.xml
Domain sonic created from sonic_multiasic.xml

virsh #
```
Once booted up, create a file "asic.conf" under /usr/share/sonic/device/x86_64-kvm_x86_64-r0/ with the content:

```
NUM_ASIC=<number of ASICs>
```
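For example, a hypothetical three-ASIC HWSKU would use:

```
NUM_ASIC=3
```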
Also, create a "topology.sh" file under /usr/share/sonic/device/x86_64-kvm_x86_64-r0/<HWSKU> which will simulate the internal ASIC connectivity of the hardware. The HWSKU directory will have the required files, like port_config.ini, for each ASIC.
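A minimal sketch of what such a topology.sh might look like, assuming the ASICs run in network namespaces named asic0, asic1, ... that are wired together with veth pairs; the interface and namespace names here are illustrative assumptions, not the actual SONiC script:

```
#!/bin/bash
# Sketch only: wire two ASIC namespaces back-to-back with a veth pair.
case "$1" in
  start)
    ip link add Eth16-ASIC0 type veth peer name Eth16-ASIC1
    ip link set Eth16-ASIC0 netns asic0    # assumed namespace names
    ip link set Eth16-ASIC1 netns asic1
    ip netns exec asic0 ip link set Eth16-ASIC0 up
    ip netns exec asic1 ip link set Eth16-ASIC1 up
    ;;
  stop)
    # Deleting one end of a veth pair removes its peer as well.
    ip netns exec asic0 ip link del Eth16-ASIC0
    ;;
esac
```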
Having done this, a new service, "topology.service", will be started during bootup, which will invoke the topology.sh script.
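As a rough illustration, such a service could be a oneshot systemd unit along the following lines; the paths and option choices are assumptions, not the shipped unit file:

```
[Unit]
Description=SONiC internal topology setup (illustrative sketch)

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/share/sonic/device/x86_64-kvm_x86_64-r0/<HWSKU>/topology.sh start
ExecStop=/usr/share/sonic/device/x86_64-kvm_x86_64-r0/<HWSKU>/topology.sh stop

[Install]
WantedBy=multi-user.target
```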
4. Access virtual switch:
1. Connect SONiC VM via console
```
$ telnet 127.0.0.1 7000
```
OR
2. Connect SONiC VM via SSH
1. Connect via console (see 4.1 above)
2. Request a new DHCP address
```
sudo dhclient -v
```
3. Connect via SSH
```
$ ssh -p 3040 admin@127.0.0.1
```