This PR limits the number of calls to sonic-cfggen to one call per iteration instead of the current three calls per iteration.
The PR also installs jq on the host for use by future scripts if needed.
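A minimal Python sketch of the idea (the actual change is in shell-based container start scripts that parse the output with jq; the DEVICE_METADATA fields shown are just examples of values a script might need in one iteration):
```
import json
import subprocess

# Fetch several DEVICE_METADATA fields with a single sonic-cfggen call
# instead of invoking sonic-cfggen once per field.
output = subprocess.check_output(
    ["sonic-cfggen", "-d", "--var-json", "DEVICE_METADATA"])
metadata = json.loads(output)["localhost"]

platform = metadata.get("platform")
hwsku = metadata.get("hwsku")
mac = metadata.get("mac")
```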
Signed-off-by: Tamer Ahmed <tamer.ahmed@microsoft.com>
When stopping the swss, pmon or bgp containers, log messages like the following can be seen:
```
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,061 ERRO pool dependent-startup event buffer overflowed, discarding event 34
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,063 ERRO pool dependent-startup event buffer overflowed, discarding event 35
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,064 ERRO pool dependent-startup event buffer overflowed, discarding event 36
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,066 ERRO pool dependent-startup event buffer overflowed, discarding event 37
```
This is due to the number of programs in the container managed by supervisor, all generating events at the same time. The default event queue buffer size in supervisor is 10. This patch increases that value in all containers in order to eliminate these errors. As more programs are added to the containers, we may need to further adjust these values. I increased all buffer sizes to 25 except for containers with more programs or templated supervisor.conf files which allow for a variable number of programs. In these cases I increased the buffer size to 50. One final exception is the swss container, where the buffer fills up to ~50, so I increased this buffer to 100.
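For reference, the relevant knob is the `buffer_size` option of the dependent-startup event listener section in each container's supervisord.conf; a minimal illustrative stanza (other options omitted, values per container as described above):
```
[eventlistener:dependent-startup]
; ... other listener options unchanged ...
events=PROCESS_STATE
buffer_size=50   ; supervisord default is 10
```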
Resolves https://github.com/Azure/sonic-buildimage/issues/5241
Applications running in the host OS can read the platform identifier from /host/machine.conf. When loading configuration, sonic-config-engine *needs* to read the platform identifier from machine.conf, as it is responsible for populating the value in Config DB.
When an application is running inside a Docker container, the machine.conf file is not accessible, as the /host directory is not mounted. So we need to retrieve the platform identifier from Config DB if get_platform() is called from inside a Docker container. However, we can't simply check whether we're running in a Docker container, because the host OS of the SONiC virtual switch itself runs inside a Docker container. So I refactored `get_platform()` to (see the sketch after the notes below):
1. Read from the `PLATFORM` environment variable if it exists (which is defined in a virtual switch Docker container)
2. Read from machine.conf if possible (works in the host OS of a standard SONiC image, critical for sonic-config-engine at boot)
3. Read the value from Config DB (needed for Docker containers running in SONiC, as machine.conf is not accessible to them)
- Also fix a typo in daemon_base.py
- Also align `get_hwsku()` with `get_platform()`
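A condensed sketch of that lookup order (not the exact sonic-py-common code; the machine.conf keys and Config DB table shown follow the usual SONiC conventions):
```
import os

MACHINE_CONF_PATH = "/host/machine.conf"

def get_platform():
    # 1. Virtual switch Docker container: platform is passed via environment
    platform = os.getenv("PLATFORM")
    if platform:
        return platform

    # 2. Host OS: parse the platform identifier out of machine.conf
    if os.path.isfile(MACHINE_CONF_PATH):
        with open(MACHINE_CONF_PATH) as machine_conf:
            for line in machine_conf:
                key, _, value = line.rstrip("\n").partition("=")
                if key in ("onie_platform", "aboot_platform"):
                    return value.strip()

    # 3. Docker container on SONiC: machine.conf is not mounted, so fall
    #    back to the value sonic-config-engine populated in Config DB
    from swsssdk import ConfigDBConnector
    config_db = ConfigDBConnector()
    config_db.connect()
    metadata = config_db.get_table("DEVICE_METADATA").get("localhost", {})
    return metadata.get("platform")
```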
Consolidate common SONiC Python-language functionality into one shared package (sonic-py-common) and eliminate duplicate code.
The package currently includes four modules:
- daemon_base
- device_info
- logger
- task_base
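Illustrative usage after the consolidation (assuming the module layout above):
```
from sonic_py_common import device_info
from sonic_py_common.logger import Logger

logger = Logger()
logger.log_info("Platform: {}".format(device_info.get_platform()))
```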
NOTE: This is a combination of all changes from https://github.com/Azure/sonic-buildimage/pull/5003, https://github.com/Azure/sonic-buildimage/pull/5049 and some changes from https://github.com/Azure/sonic-buildimage/pull/5043 backported to align with the 201911 branch. As part of the 201911 port, I am not installing the Python 3 package in the base image or in the VS container, because we do not have pip3 installed, and we do not intend to migrate to Python 3 in 201911.
- Move single-instance services into their own folder
- Generate systemd templates for any multi-instance service files in slave.mk
- Detect whether the platform is single- or multi-ASIC in systemd-sonic-generator, based on the platform-specific asic.conf file (see the sketch after this list)
- Update the container hostname after creation instead of during creation (docker_image_ctl)
- Run Docker containers in a network namespace if specified
- Add a service to create a simulated multi-ASIC topology on the virtual switch platform
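A rough Python sketch of the asic.conf-based detection (the real logic lives in systemd-sonic-generator, which is written in C; NUM_ASIC is the key conventionally used in platform asic.conf files):
```
import os

def get_num_asics(asic_conf_path):
    """Return the ASIC count from a platform asic.conf, defaulting to 1."""
    if not os.path.isfile(asic_conf_path):
        return 1
    with open(asic_conf_path) as asic_conf:
        for line in asic_conf:
            key, _, value = line.strip().partition("=")
            if key == "NUM_ASIC":
                return int(value)
    return 1

def is_multi_asic(asic_conf_path):
    # Multi-instance services (e.g. swss@0, swss@1, ...) are only generated
    # when the platform reports more than one ASIC.
    return get_num_asics(asic_conf_path) > 1
```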
Signed-off-by: Lawrence Lee <t-lale@microsoft.com>
Signed-off-by: Suvarna Meenakshi <Suvarna.Meenaksh@microsoft.com>
* Changes in sonic-buildimage for the NAT feature
- Docker for NAT
- Install the required tools (iptables and conntrack) for NAT
Signed-off-by: kiran.kella@broadcom.com
* Add redis-tools dependencies in the docker nat compilation
* Addressed review comments
* add natsyncd to warm-boot finalizer list
* addressed review comments
* using swsscommon.DBConnector instead of swsssdk.SonicV2Connector
* Enable NAT application in docker-sonic-vs
The script sonic-gns3a.sh creates a GNS3 appliance file that points to a sonic-vs.img (SONiC virtual switch) image.
The appliance file (and the sonic-vs.img file) can subsequently be imported into a GNS3 simulation environment.
Introduce a new "sflow" container (built when ENABLE_SFLOW is set). The new docker includes:
- hsflowd: the sFlow agent daemon, based on host-sflow
- psample: built from the libpsample repository; useful for debugging sampled packets/groups
- sflowtool: dumps sflow samples locally (e.g. with an in-unit collector)
In the case of SONiC-VS, enable the psample & act_sample kernel modules.
The VS syncd needs iproute2=4.20.0-2~bpo9+1 & libcap2-bin=1:2.25-1 to support tc-sample.
tc-syncd is provided as a convenience tool for debugging (e.g. `tc-syncd filter show ...`)
* Use dot1p to tc mapping for backend switches
Signed-off-by: Wenda Ni <wenni@microsoft.com>
* Do not write DSCP to TC mapping into CONFIG_DB or config_db.json for
storage switches
Signed-off-by: Wenda Ni <wenni@microsoft.com>
- What I did
Move the enabling of Systemd services from sonic_debian_extension to a new systemd generator
- How I did it
Create a new systemd generator that manually creates symlinks to enable systemd services
Add rules/Makefile to build the generator
Add the services to be enabled to /etc/sonic/generated_services.conf, which the generator reads at boot time
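A simplified Python sketch of what the generator does at boot (the actual systemd-sonic-generator is a small C program; the unit and output directories shown here are assumptions for illustration):
```
import os

GENERATED_SERVICES_CONF = "/etc/sonic/generated_services.conf"
UNIT_DIR = "/lib/systemd/system"
# The real generator receives its output directory from systemd;
# this path is illustrative only.
WANTS_DIR = "/run/systemd/generator/multi-user.target.wants"

def generate_symlinks():
    os.makedirs(WANTS_DIR, exist_ok=True)
    with open(GENERATED_SERVICES_CONF) as conf:
        for line in conf:
            unit = line.strip()
            if not unit:
                continue
            # Creating this symlink is equivalent to "enabling" the unit
            target = os.path.join(UNIT_DIR, unit)
            link = os.path.join(WANTS_DIR, unit)
            if not os.path.exists(link):
                os.symlink(target, link)
```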
Signed-off-by: Lawrence Lee <t-lale@microsoft.com>
* Update sonic-quagga submodule
* Port some patches from sonic-quagga
* Fix Makefile
* Another patch
* Uncomment bgp test
* Downport Nikos's patch
* Add a patch to alleviate the vendor issue
* use patch instead of stg
Since we moved to FRR, we need to connect FRR to fpmsyncd via FPM.
Support for adding static routes is also required.
Signed-off-by: Shu0T1an ChenG <shuche@microsoft.com>
* [frr]: change frr as default sonic routing stack
* fix quagga configuration
* [vstest]: fix bgp test for frr
* [vstest]: skip bgp/test_invalid_nexthop.py for frr
Signed-off-by: Guohan Lu <gulv@microsoft.com>
* Add bridge-utils to orchagent image
- Add vxlanmgrd to supervisorctl in docker-orchagent
Signed-off-by: Ze Gan <zegan@microsoft.com>
* Update submodule pointer for swss to include Vxlanmgrd changes
Overall goal: Build debug images for every stretch docker.
An earlier PR (#2789) made the first cut by transforming broadcom/orchagent to build target/docker-orchagent-dbg.gz.
Changes in this PR:
1) Made the docker-orchagent build platform independent.
1.1) Created rules/docker_orchagent.mk
1.2) Removed `platform/*/docker-orchagent-*.mk`
1.3) Removed the corresponding entry from `platform/*/rules.mk`
2) Extended the debug docker image build to stretch-based syncd dockers.
2.1) For now, only mellanox & barefoot are stretch based.
2.2) All the common variable definitions are put in one place: platform/template/docker-syncd-base.mk
2.3) platform/[mellanox, bfn]/docker-syncd-[mlnx, bfn].mk are updated as detailed below.
2.3.1) Set the platform code and include the template base file
2.3.2) Add the dependencies & debug dependencies, plus any updates over what the base template offers
3) Extended all stretch-based non-platform dockers to build debug dockers too.
3.1) Affected are:
docker-database.mk,
docker-platform-monitor.mk,
docker-router-advertiser.mk,
docker-teamd.mk,
docker-telemetry.mk
Next: build debug flavors of the final images, with regular dockers replaced by debug dockers where available.
* [build]: put stretch debian packages under target/debs/stretch/
* In the stretch build phase, all Debian packages built in that stage are placed under the target/debs/stretch directory.
* Python-based Debian packages, which are identical for jessie and stretch, are placed under the target/python-debs directory.
Signed-off-by: Guohan Lu <gulv@microsoft.com>
* syncd changes to disk and add e1000 driver to sonic vm
* add pg_profile_lookup.ini
Signed-off-by: Guohan Lu <gulv@microsoft.com>
* update swss and sairedis
sairedis:
* d146572 2018-11-22 | Fix interface name used on link message using lane map (#386)
swss:
* c74dc60 2018-11-22 | [vstest]: use eth1~32 as physical interface name in vs docker (#700) (HEAD -> master, origin/master, origin/HEAD) [lguohan]
* 6007e7f 2018-11-22 | [portmgrd]: Fix setting default port admin status and MTU (#691) [stepanblyschak]
* 6c70f6d 2018-11-22 | [portsorch] Fix port queue index init bug (#505) [yangbashuang]
* 70ac79b 2018-11-21 | [gitignore]: Update all binary names in the ignore list (#698) [Shuotian Cheng]
* 2a3626c 2018-11-21 | [test]: Remove duplicate legacy ACL tests (#699) [Shuotian Cheng]
* 8099811 2018-11-20 | [aclorch]: Remove unnecessary warning message (#696) [Shuotian Cheng]
* 63d8ebc 2018-11-18 | [portsorch]: Remove duplicate local variables - port (#690) [Shuotian Cheng]
* 28dc042 2018-11-18 | Remove default docker name value of swss. (#692) [Jipan Yang]
Signed-off-by: Guohan Lu <gulv@microsoft.com>
To suppress the error message:
INFO #supervisord: teammgrd Can't write to the lacp directory
'/var/warmboot/teamd/': No such file or directory
Signed-off-by: Shu0T1an ChenG <shuche@microsoft.com>
* Restore neighbor table to kernel during system warm-reboot
Added a "restore_neighbors" service to restore the neighbor table into the
kernel during system warm reboot. The service is started by supervisord
in the swss docker when the docker is started.
If system warm reboot is enabled, it tries to restore the neighbor
table from appDB into the kernel through netlink API calls and refreshes the
entries by sending arp/ns requests to all neighbors, then it sets the
stateDB flag for neighsyncd to continue the reconciliation process.
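A heavily simplified sketch of the restore flow (the real restore_neighbors.py lives in the sonic-swss submodule and also sends the ARP/NS probes via scapy; the NEIGH_TABLE key/field layout shown follows the usual APPL_DB schema):
```
from pyroute2 import IPRoute
from pyroute2.netlink.rtnl import ndmsg
from swsssdk import SonicV2Connector

def restore_neighbors_to_kernel():
    db = SonicV2Connector()
    db.connect(db.APPL_DB)
    ipr = IPRoute()

    # APPL_DB keys look like "NEIGH_TABLE:Ethernet4:10.0.0.1"
    for key in db.keys(db.APPL_DB, "NEIGH_TABLE:*") or []:
        _, ifname, ip_addr = key.split(":", 2)
        entry = db.get_all(db.APPL_DB, key)
        mac = entry.get("neigh")
        if not mac:
            continue
        ifindex = ipr.link_lookup(ifname=ifname)[0]
        # Program the neighbor as STALE; the subsequent ARP/NS refresh
        # (not shown) confirms reachability before neighsyncd reconciles.
        ipr.neigh("replace", dst=ip_addr, lladdr=mac, ifindex=ifindex,
                  state=ndmsg.states["stale"])
```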
- Added the tcpdump and python-scapy Debian packages to the orchagent and vs dockers.
- Added the pyroute2 and netifaces Python modules to the orchagent and vs dockers.
- Worked around a tcpdump issue in the vs docker.
Signed-off-by: Zhenggen Xu <zxu@linkedin.com>
* Move restore_neighbors.py to the sonic-swss submodule
Made changes to the makefiles accordingly
Made dockerfile.j2 changes and supervisord config changes
Added the Python monotonic lib for time access
Signed-off-by: Zhenggen Xu <zxu@linkedin.com>
* Added PYTHON_SWSSCOMMON as a swss runtime dependency
Signed-off-by: Zhenggen Xu <zxu@linkedin.com>
The teamd configuration files are no longer used in SONiC. The
configuration is dynamically generated, stored in the configuration
database, and picked up by teammgrd.
To create new portchannels and their members, the commands
`config portchannel add <portchannel_name>`
`config portchannel member add <portchannel_name> <port_name>`
are used.
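For reference, the CLI above ends up writing entries such as the following into the configuration database, which teammgrd then consumes (a minimal sketch using swsssdk; the field values are illustrative defaults):
```
from swsssdk import ConfigDBConnector

config_db = ConfigDBConnector()
config_db.connect()

# "config portchannel add PortChannel0001"
config_db.set_entry("PORTCHANNEL", "PortChannel0001",
                    {"admin_status": "up", "mtu": "9100"})

# "config portchannel member add PortChannel0001 Ethernet0"
config_db.set_entry("PORTCHANNEL_MEMBER", ("PortChannel0001", "Ethernet0"),
                    {"NULL": "NULL"})
```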
Signed-off-by: Shu0T1an ChenG <shuche@microsoft.com>