sonic-buildimage/platform/vs
Latest commit c651a9ade4 by Joe LeVeque:
[dockers][supervisor] Increase event buffer size for process exit listener; Set all event buffer sizes to 1024 (#7083)
To prevent error [messages](https://dev.azure.com/mssonic/build/_build/results?buildId=2254&view=logs&j=9a13fbcd-e92d-583c-2f89-d81f90cac1fd&t=739db6ba-1b35-5485-5697-de102068d650&l=802) like the following from being logged:

```
Mar 17 02:33:48.523153 vlab-01 INFO swss#supervisord 2021-03-17 02:33:48,518 ERRO pool supervisor-proc-exit-listener event buffer overflowed, discarding event 46
```

This is basically an addendum to https://github.com/Azure/sonic-buildimage/pull/5247, which increased the event buffer size for dependent-startup. While supervisor-proc-exit-listener doesn't subscribe to as many events as dependent-startup, there is still a chance some containers (like swss, as in the example above) have enough processes running to cause an overflow of the default buffer size of 10.

This is especially important for preventing erroneous log_analyzer failures in the sonic-mgmt repo regression tests, which have started occasionally causing PR check builds to fail. Example [here](https://dev.azure.com/mssonic/build/_build/results?buildId=2254&view=logs&j=9a13fbcd-e92d-583c-2f89-d81f90cac1fd&t=739db6ba-1b35-5485-5697-de102068d650&l=802).

I set all supervisor-proc-exit-listener event buffer sizes to 1024, and updated all dependent-startup event buffer sizes to 1024 as well, to keep things simple and unified and to leave enough headroom that we should not need to adjust these values again, if at all.
Committed: 2021-03-27 21:14:24 -07:00
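
For context, the fix amounts to raising supervisord's `buffer_size` option in each `[eventlistener:...]` section from its default of 10. A representative stanza (illustrative, not copied verbatim from the PR; the listener command line follows SONiC's usual per-container supervisord.conf layout) would look like:

```
[eventlistener:supervisor-proc-exit-listener]
command=/usr/bin/supervisor-proc-exit-listener --container-name swss
events=PROCESS_STATE_EXITED
autostart=true
autorestart=unexpected
buffer_size=1024
```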

| File | Last commit | Date |
| --- | --- | --- |
| docker-gbsyncd-vs | [dockers][supervisor] Increase event buffer size for process exit listener; Set all event buffer sizes to 1024 (#7083) | 2021-03-27 21:14:24 -07:00 |
| docker-sonic-vs | [DPB\|master] Update Dynamic Port Breakout Logic for flexible alias support a… (#6831) | 2021-02-26 00:13:33 -08:00 |
| docker-syncd-vs | [dockers][supervisor] Increase event buffer size for process exit listener; Set all event buffer sizes to 1024 (#7083) | 2021-03-27 21:14:24 -07:00 |
| sonic-version | [vs]: build sonic vs kvm image (#2269) | 2018-11-20 22:32:40 -08:00 |
| tests | [vstest]: add default vs test | 2021-01-24 22:25:11 -08:00 |
| create_vnet.sh | [vs]: dynamically create front panel ports in vs docker (#4499) | 2020-04-30 12:50:59 -07:00 |
| docker-gbsyncd-vs.dep | Add gearbox phy device files and a new physyncd docker to support VS gearbox phy feature (#4851) | 2020-09-25 08:32:44 -07:00 |
| docker-gbsyncd-vs.mk | Add gearbox phy device files and a new physyncd docker to support VS gearbox phy feature (#4851) | 2020-09-25 08:32:44 -07:00 |
| docker-ptf.dep | [docker-ptf]: build docker ptf | 2021-01-27 08:28:21 -08:00 |
| docker-ptf.mk | [docker-ptf]: build docker ptf | 2021-01-27 08:28:21 -08:00 |
| docker-sonic-vs.dep | [build]: support for DPKG local caching (#4117) | 2020-03-11 20:04:52 -07:00 |
| docker-sonic-vs.mk | [docker-sonic-vs] Install sonic-platform-common package (#6587) | 2021-01-28 09:44:43 -08:00 |
| docker-syncd-vs.dep | [build]: support for DPKG local caching (#4117) | 2020-03-11 20:04:52 -07:00 |
| docker-syncd-vs.mk | [docker-{sonic,syncd}-vs]: upgrade {sonic,syncd}-vs docker to stretch (#2865) | 2019-05-06 07:19:36 -07:00 |
| gbsyncd-vs.mk | Add gearbox phy device files and a new physyncd docker to support VS gearbox phy feature (#4851) | 2020-09-25 08:32:44 -07:00 |
| kvm-image.dep | [build]: support for DPKG local caching (#4117) | 2020-03-11 20:04:52 -07:00 |
| kvm-image.mk | [build]: Move Systemd service start to systemd generator (#3172) | 2019-07-29 15:52:15 -07:00 |
| libsaithrift-dev.dep | [docker-ptf]: build docker ptf | 2021-01-27 08:28:21 -08:00 |
| libsaithrift-dev.mk | [build]: add _BUILD_ENV to specify env for dpkg-buildpackage | 2021-01-27 08:28:21 -08:00 |
| one-image.dep | [build]: support for DPKG local caching (#4117) | 2020-03-11 20:04:52 -07:00 |
| one-image.mk | [vsimage]: install systemd generator into one image | 2020-04-17 04:51:51 +00:00 |
| onie.dep | [build]: support for DPKG local caching (#4117) | 2020-03-11 20:04:52 -07:00 |
| onie.mk | [vs]: build sonic vs kvm image (#2269) | 2018-11-20 22:32:40 -08:00 |
| platform.conf | [vs]: build sonic vs kvm image (#2269) | 2018-11-20 22:32:40 -08:00 |
| raw-image.dep | [vsraw]: build sonic-vs.raw image | 2020-04-17 04:51:51 +00:00 |
| raw-image.mk | [vsraw]: build sonic-vs.raw image | 2020-04-17 04:51:51 +00:00 |
| README.gns3.md | [vsimage]: Support for the creation of a GNS3 appliance file (#3553) | 2019-10-07 07:16:11 -07:00 |
| README.vsdocker.md | [vs]: dynamically create front panel ports in vs docker (#4499) | 2020-04-30 12:50:59 -07:00 |
| README.vsvm.md | [multi-asic][vs]: Update readme file to create multi-asic vs (#6867) | 2021-03-05 12:46:07 -08:00 |
| rules.dep | [docker-ptf]: build docker ptf | 2021-01-27 08:28:21 -08:00 |
| rules.mk | [docker-ptf]: build docker ptf | 2021-01-27 08:28:21 -08:00 |
| sonic_multiasic.xml | Multi-ASIC implementation (#3888) | 2020-03-31 10:06:19 -07:00 |
| sonic-gns3a.sh | [kvmimae]: Update sonic-gns3a.sh (#4694) | 2020-06-04 13:29:36 -07:00 |
| sonic-version.dep | [build]: support for DPKG local caching (#4117) | 2020-03-11 20:04:52 -07:00 |
| sonic-version.mk | [vs]: build sonic vs kvm image (#2269) | 2018-11-20 22:32:40 -08:00 |
| sonic.xml | [vs]: sync changes to disk and add e1000 driver to sonic vm (#2288) | 2018-11-22 12:09:21 -08:00 |
| syncd-vs.dep | [build]: support for DPKG local caching (#4117) | 2020-03-11 20:04:52 -07:00 |
| syncd-vs.mk | [docker-orchagent]: make build depends only on sairedis package (#4880) | 2020-07-12 18:08:51 +00:00 |

HOWTO Use Virtual Switch (VM)

  1. Install libvirt, kvm, qemu:

```
sudo apt-get install libvirt-clients qemu-kvm libvirt-bin
```
  2. Create SONiC VM for single-ASIC HWSKU:

```
$ sudo virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh #
virsh # create sonic.xml
Domain sonic created from sonic.xml

virsh #
```
  3. Create SONiC VM for multi-ASIC HWSKU:
  • Based on the number of ASICs in the HWSKU, update device/x86_64-kvm_x86_64-r0/asic.conf:

```
NUM_ASIC=<n>
DEV_ID_ASIC_0=0
DEV_ID_ASIC_1=1
DEV_ID_ASIC_2=2
DEV_ID_ASIC_3=3
..
DEV_ID_ASIC_<n-1>=<n-1>
```

For example, the asic.conf for a four-ASIC VS will be:

```
NUM_ASIC=4
DEV_ID_ASIC_0=0
DEV_ID_ASIC_1=1
DEV_ID_ASIC_2=2
DEV_ID_ASIC_3=3
```
  • Create a topology.sh script that sets up the internal ASIC topology for the specific HWSKU; for example, for msft_multi_asic_vs: https://github.com/Azure/sonic-buildimage/blob/master/device/virtual/x86_64-kvm_x86_64-r0/msft_multi_asic_vs/topology.sh (a simplified sketch of the idea is shown below).
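
A minimal sketch of what such a script does, assuming the per-ASIC network namespaces are named asic0, asic1, ... (the interface names and the single asic0/asic1 link here are purely illustrative, not taken from the real msft_multi_asic_vs script):

```
#!/bin/bash
# Illustrative only: wire one internal link between two ASIC namespaces
# with a veth pair; a real topology.sh repeats this for every internal
# link the HWSKU defines.
ip link add veth-asic0 type veth peer name veth-asic1   # create the pair
ip link set veth-asic0 netns asic0                      # hand one end to asic0
ip link set veth-asic1 netns asic1                      # and the other to asic1
ip netns exec asic0 ip link set veth-asic0 up
ip netns exec asic1 ip link set veth-asic1 up
```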

  • With the updated asic.conf and topology.sh in place, build sonic-vs.img, which can then be used to bring up the multi-ASIC virtual switch (a typical build invocation is shown below).
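
The image is built with the usual sonic-buildimage targets; assuming a prepared build environment, the standard invocation for the VS platform is roughly:

```
# from the sonic-buildimage repository root
make init                      # fetch submodules (first build only)
make configure PLATFORM=vs     # select the virtual switch platform
make target/sonic-vs.img.gz    # build the VS disk image
```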

  • Update platform/vs/sonic_multiasic.xml with more memory and vCPUs as required:

    • For the 4-ASIC VS platform (msft_four_asic_vs hwsku): 8 GB memory and 10 vCPUs.
    • For the 7-ASIC VS platform (msft_multi_asic_vs hwsku): 8 GB memory and 16 vCPUs.
  • Update the number of front-panel interfaces in sonic_multiasic.xml:

    • For the 4-ASIC VS platform: 8 front-panel interfaces.
    • For the 6-ASIC VS platform: 64 front-panel interfaces.
  • With the multi-ASIC sonic-vs.img and the sonic_multiasic.xml file, bring up the multi-ASIC VS as follows:

```
$ sudo virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh #
virsh # create sonic_multiasic.xml
Domain sonic created from sonic_multiasic.xml

virsh #
```
  • Steps to convert a prebuilt single-ASIC sonic-vs.img (the on-box commands are collected below):

    • Use the updated sonic_multiasic.xml file and bring up the virtual switch.
    • Update /usr/share/sonic/device/x86_64-kvm_x86_64-r0/asic.conf as above.
    • Add topology.sh to /usr/share/sonic/device/x86_64-kvm_x86_64-r0/.
    • Stop the database service and remove the database docker container, so that when the VS is rebooted, database_global.json is created with the right namespaces:
      • systemctl stop database
      • docker rm database
    • sudo reboot
    • Once rebooted, the VS should come up as a multi-ASIC VS.
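
Collected for convenience, the on-box portion of the conversion looks roughly like this (the paths are the ones from the steps above; the editor and the location you copy topology.sh from are up to you):

```
# on the running VS VM, as root
vi /usr/share/sonic/device/x86_64-kvm_x86_64-r0/asic.conf     # set NUM_ASIC / DEV_ID_ASIC_* as above
cp topology.sh /usr/share/sonic/device/x86_64-kvm_x86_64-r0/  # install the hwsku's topology.sh
systemctl stop database                                       # stop the database service
docker rm database                                            # remove the database container
reboot                                                        # come back up as a multi-ASIC VS
```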
  • Start the topology service:

```
sudo systemctl start topology
```
  • Load the configuration using a minigraph or config_db files (examples below).
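
Assuming the standard SONiC CLI on the VS, loading configuration typically looks like one of the following (paths shown are the SONiC defaults):

```
# from a minigraph placed at /etc/sonic/minigraph.xml
sudo config load_minigraph -y

# or from existing config_db.json file(s)
sudo config reload -y
```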
  4. Access the virtual switch:

    1. Connect to the SONiC VM via console:

```
$ telnet 127.0.0.1 7000
```

    OR

    2. Connect to the SONiC VM via SSH:

      1. Connect via console (see 4.1 above).

      2. Request a new DHCP address:

```
sudo dhclient -v
```

      3. Connect via SSH:

```
$ ssh -p 3040 admin@127.0.0.1
```