Make the swss build depend only on libsairedis instead of syncd. This allows building swss without depending
on the vendor SAI library.
Currently, building libsairedis also builds syncd, which requires the vendor SAI library. This makes it difficult to build
the swss docker on buster while still keeping the syncd docker on stretch, since swss requires libsairedis, which in turn
builds syncd and requires the vendor to provide SAI for buster. As the swss docker does not actually contain the syncd
binary, it is not necessary to build syncd for the swss docker.
* [submodule]: update sonic-sairedis
* ccbb3bc 2020-06-28 | add option to build without syncd (HEAD, origin/master, origin/HEAD) [Guohan Lu]
* 4247481 2020-06-28 | install saidiscovery into syncd package [Guohan Lu]
* 61b8e8e 2020-06-26 | Revert "sonic-sairedis: Add support to sonic-sairedis for gearbox phys (#624)" (#630) [Danny Allen]
* 85e543c 2020-06-26 | add a README to tests directory to describe how to run 'make check' (#629) [Syd Logan]
* 2772f15 2020-06-26 | sonic-sairedis: Add support to sonic-sairedis for gearbox phys (#624) [Syd Logan]
Signed-off-by: Guohan Lu <lguohan@gmail.com>
**- Why I did it**
Initially, the critical_processes file contained either the name of a critical process or the name of a group.
For example, the critical_processes file in the dhcp_relay container contains a single group name
`isc-dhcp-relay`. When testing the autorestart feature of each container, we need to get all the critical
processes and test whether a container can be restarted correctly if one of its critical processes is
killed. However, it is difficult to differentiate whether the names in the critical_processes file are
critical processes or group names. Changing the syntax in this file separates the individual processes from the groups and also makes the format clear to the user.
Now the critical_processes file contains two different kinds of entries. One is "program:xxx", which indicates a critical process. The other is "group:xxx", which indicates a group of critical processes
managed by supervisord under the name "xxx". I also updated the logic that parses the
critical_processes file in the supervisor-proc-event-listener script.
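For example, with the new syntax the critical_processes file of the dhcp_relay container might look like this (the group name is from the source above; the `program:` entry is illustrative):

```
group:isc-dhcp-relay
program:dhcpmon
```

The prefix before the colon tells the listener whether to treat the name as a single critical process or to expand it into the processes of a supervisord group.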
**- How to verify it**
We can first enable the autorestart feature of a specific container, for example `dhcp_relay`, by running the command `sudo config container feature autorestart dhcp_relay enabled` on the DUT. Then we can select a critical process from the output of `docker top dhcp_relay` and use the command `sudo kill -SIGKILL <pid>` to kill that critical process. The final step is to check whether the container is restarted correctly.
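The same steps as a shell sketch (the final `docker ps` check is an illustrative way to confirm the restart):

```sh
sudo config container feature autorestart dhcp_relay enabled
docker top dhcp_relay                  # pick the PID of a critical process
sudo kill -SIGKILL <pid>               # kill the chosen process
docker ps --filter name=dhcp_relay     # confirm the container came back up
```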
Cleanup description string
The first port (the management port) is excluded from the general port naming scheme.
before:
|on GNS3 |in SONiC |
|---------|---------|
|Ethernet0|eth0 |
|Ethernet1|Ethernet0|
|Ethernet2|Ethernet4|
|Ethernet3|Ethernet8|
after:
|on GNS3 |in SONiC |
|---------|---------|
|eth0 |eth0 |
|Ethernet0|Ethernet0|
|Ethernet1|Ethernet4|
|Ethernet2|Ethernet8|
Signed-off-by: Masaru OKI <masaru.oki@gmail.com>
* Update sonic-sairedis (sairedis with SAI 1.6 headers)
* Update SAIBCM to 3.7.4.2, which is built upon SAI 1.6 headers
* missed updating BRCM_SAI variable, fixed it
* Update SAIBCM to 3.7.4.2, updated link to libsaibcm
* [Mellanox] Update SAI (release:v1.16.3; API:v1.6)
Signed-off-by: Volodymyr Samotiy <volodymyrs@mellanox.com>
* Update sonic-sairedis pointer to include SAI 1.6 headers
* [Mellanox] Update SDK to 4.4.0914 and FW to xx.2007.1112 to match SAI 1.16.3 (API:v1.6)
Signed-off-by: Volodymyr Samotiy <volodymyrs@mellanox.com>
* ensure the veth link is up in docker VS container
* ensure the veth link is up in docker VS container
* [Mellanox] Update SAI (release:v1.16.3.2; API:v1.6)
Signed-off-by: Volodymyr Samotiy <volodymyrs@mellanox.com>
* use 'config interface startup' instead of the ifconfig command; also undid the previous change
Co-authored-by: Volodymyr Samotiy <volodymyrs@mellanox.com>
Currently, the VS docker always creates 32 front panel ports.
With this change, when the VS docker starts, it first detects the peer links
in the namespace and then sets up an equal number of front panel
interfaces as peer links.
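A minimal sketch of the detection step (the exact command used by the start script is an assumption):

```sh
# Count the eth<N> peer links already present in the container's network
# namespace (eth0 is the management interface, so it is excluded) and
# create one front panel port per peer link instead of a fixed 32.
NUM_PORTS=$(ip -o link show | grep -cE ': eth[1-9][0-9]*')
echo "detected $NUM_PORTS front panel ports"
```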
Signed-off-by: Guohan Lu <lguohan@gmail.com>
libpython2.7, libdaemon0, libdbus-1-3, and libjansson4 are common
across different containers. Move them into docker-base-stretch.
Signed-off-by: Guohan Lu <lguohan@gmail.com>
The DPKG caching framework provides the infrastructure to cache the SONiC module/target .deb files into a local cache by tracking the target dependency files. The SONiC build infrastructure is designed as a plugin framework where any new source code can easily be integrated into SONiC as a module that produces a .deb file as output. The source code compilation of a module is completely independent of other modules' compilation. Inter-module dependencies are resolved through build artifacts such as header files, libraries, and binaries in the form of Debian packages. For example, if module A depends on module B, then while module A is being built, B's .deb file is installed into the build docker.
The DPKG caching framework caches a module's deb package and restores it back to the build directory if its dependency files are not modified. When a module is compiled for the first time, the generated deb package is stored at the DPKG cache location. On a subsequent build, the framework first checks whether the module's dependency files have been modified. If none of the dependency files has changed, it copies the deb package from the cache location; otherwise, it compiles the module locally and generates the deb package. Modified files should be checked in for a newer cached deb package to be generated.
This provides a huge improvement in build time and also supports a true incremental build by tracking the dependency files.
- How I did it
The framework takes two global arguments to enable DPKG caching: the first selects the caching method and the second gives the location of the cache.
SONIC_DPKG_CACHE_METHOD=cache
SONIC_DPKG_CACHE_SOURCE=
where SONIC_DPKG_CACHE_METHOD selects the deb package caching method (default: 'cache'):
none: no caching
cache: cache from a local directory
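Both variables can be passed on the make command line; for example (the cache path and the build target here are illustrative):

```sh
make SONIC_DPKG_CACHE_METHOD=cache \
     SONIC_DPKG_CACHE_SOURCE=/var/cache/sonic \
     target/sonic-vs.img.gz
```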
Dependency file tracking:
Dependency files are tracked for each target at two levels:
1. Common make infrastructure files: rules/config, rules/functions, slave.mk, etc.
2. Per-module files: files specific to the module, such as its Makefile, debian/rules, and patch files.
For example: dependency files for Linux Kernel - src/sonic-linux-kernel,
SPATH := $($(LINUX_HEADERS_COMMON)_SRC_PATH)
DEP_FILES := $(SONIC_COMMON_FILES_LIST) rules/linux-kernel.mk rules/linux-kernel.dep
DEP_FILES += $(SONIC_COMMON_BASE_FILES_LIST)
SMDEP_FILES := $(addprefix $(SPATH)/,$(shell cd $(SPATH) && git ls-files))
DEP_FLAGS := $(SONIC_COMMON_FLAGS_LIST) \
$(KERNEL_PROCURE_METHOD) $(KERNEL_CACHE_PATH)
$(LINUX_HEADERS_COMMON)_CACHE_MODE := GIT_CONTENT_SHA
$(LINUX_HEADERS_COMMON)_DEP_FLAGS := $(DEP_FLAGS)
$(LINUX_HEADERS_COMMON)_DEP_FILES := $(DEP_FILES)
$(LINUX_HEADERS_COMMON)_SMDEP_FILES := $(SMDEP_FILES)
$(LINUX_HEADERS_COMMON)_SMDEP_PATHS := $(SPATH)
Cache file tracking:
The cache file is a compressed tarball of a module's target DEB file and its derived-target DEB files.
The cache filename has the following format:
FORMAT:
<module deb filename>.<24-byte DEP SHA hash>-<24-byte MOD SHA hash>.tgz
E.g.:
linux-headers-4.9.0-9-2-common_4.9.168-1+deb9u3_all.deb-23658712fd21bb776fa16f47-c0b63ef593d4a32643bca228.tgz
<24-byte DEP SHA value> - the SHA value derived from all the dependent packages.
<24-byte MOD SHA value> - the SHA value derived from either of the following:
GIT_COMMIT_SHA - the SHA value of the last git commit ID, if the module is a submodule
GIT_CONTENT_SHA - the SHA value generated from the content of the target dependency files
Target Specific rules:
Caching can be enabled/disabled on a global level and also on the per-target level.
$(addprefix $(DEBS_PATH)/, $(SONIC_DPKG_DEBS)) : $(DEBS_PATH)/% : .platform $$(addsuffix -install,$$(addprefix $(DEBS_PATH)/,$$($$*_DEPENDS))) \
$(call dpkg_depend,$(DEBS_PATH)/%.dep )
$(HEADER)
# Load the target deb from DPKG cache
$(call LOAD_CACHE,$*,$@)
# Skip building the target if it is already loaded from cache
if [ -z '$($*_CACHE_LOADED)' ] ; then
.....
# Rules for Generating the target DEB file.
.....
# Save the target deb into DPKG cache
$(call SAVE_CACHE,$*,$@)
fi
$(FOOTER)
The make rule '$(call dpkg_depend,$(DEBS_PATH)/%.dep)' checks for target dependency file modification. If a dependency file is newer than the target, the target is regenerated.
Two main macros, 'LOAD_CACHE' and 'SAVE_CACHE', are used for loading and storing the cache contents.
The 'LOAD_CACHE' macro loads the cache file from cache storage and extracts it into the target folder. This is done only if the target dependency files are unmodified (verified via the git file status); otherwise, cache loading is skipped and a full compilation is performed.
It also updates a target-specific variable to indicate whether the cache was loaded.
The 'SAVE_CACHE' macro generates the compressed tarball of the cache file and saves it into cache storage. Saving into cache storage is protected with a lock.
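At the shell level, the two macros roughly amount to the following (a sketch only; variable names such as DEB_NAME and MODULE_SRC_PATH are illustrative, not the actual macro internals):

```sh
# LOAD_CACHE (sketch): reuse the cached tarball only when the module's
# dependency files are unmodified in git and a matching cache file exists.
CACHE_FILE="$SONIC_DPKG_CACHE_SOURCE/${DEB_NAME}.${DEP_SHA}-${MOD_SHA}.tgz"
if git status -s -- "$MODULE_SRC_PATH" | grep -q . ; then
    echo "dependency files modified; falling back to full compilation"
elif [ -e "$CACHE_FILE" ]; then
    tar -xzf "$CACHE_FILE" -C target/debs && CACHE_LOADED=yes
fi

# SAVE_CACHE (sketch): after a local build, store the target deb (and its
# derived debs) as a compressed tarball; the store is serialized by a lock.
flock "$SONIC_DPKG_CACHE_SOURCE/cache.lock" \
    tar -czf "$CACHE_FILE" -C target/debs "$DEB_NAME"
```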
- How to verify it
The caching functionality was verified by enabling it for the Linux kernel submodule.
It uses 'target/cache' as the cache directory: the Linux kernel cache file is stored there during the first build and is picked up from the cache location during subsequent clean builds.
- Description for the changelog
The DPKG caching framework provides the infrastructure to cache module-specific deb files by tracking the module's dependency files.
If the module's dependency files are unchanged, it restores the module's deb files from cache storage.
DOCUMENT PR:
https://github.com/Azure/SONiC/pull/559
* Changes in sonic-buildimage for the NAT feature
- Docker for NAT
- Install the required tools (iptables and conntrack) for NAT
Signed-off-by: kiran.kella@broadcom.com
* Add redis-tools dependencies in the docker nat compilation
* Addressed review comments
* add natsyncd to warm-boot finalizer list
* addressed review comments
* using swsscommon.DBConnector instead of swsssdk.SonicV2Connector
* Enable NAT application in docker-sonic-vs
- Move single-instance services into their own folder
- Generate systemd templates for any multi-instance service files in slave.mk
- Detect a single- or multi-instance platform in systemd-sonic-generator based on the platform-specific asic.conf file (see the example after this list)
- Update the container hostname after creation instead of during creation (docker_image_ctl)
- Run Docker containers in a network namespace if specified
- Add a service to create a simulated multi-ASIC topology on the virtual switch platform
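A hypothetical asic.conf for a three-ASIC platform (the variable names follow SONiC's asic.conf convention, but the values are illustrative):

```sh
NUM_ASIC=3
DEV_ID_ASIC_0=0
DEV_ID_ASIC_1=1
DEV_ID_ASIC_2=2
```

A NUM_ASIC value greater than 1 marks the platform as multi-instance, so each multi-instance service template is instantiated once per ASIC.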
Signed-off-by: Lawrence Lee <t-lale@microsoft.com>
Signed-off-by: Suvarna Meenakshi <Suvarna.Meenaksh@microsoft.com>
The script sonic-gns3a.sh creates a GNS3 appliance file that points to a sonic-vs.img (SONiC Virtual Switch).
The appliance file (and the sonic-vs.img file) can subsequently be imported into a GNS3 simulation environment.
Introduce a new "sflow" container (if ENABLE_SFLOW is set). The new docker will include:
hsflowd: the host-sflow based daemon that acts as the sFlow agent
psample: built from the libpsample repository; useful for debugging sampled packets/groups
sflowtool: dumps sFlow samples locally (e.g. with an in-unit collector)
In the case of SONiC-VS, enable the psample & act_sample kernel modules.
VS' syncd needs iproute2=4.20.0-2~bpo9+1 & libcap2-bin=1:2.25-1 to support tc-sample.
tc-syncd is provided as a convenience tool for debugging (e.g. tc-syncd filter show ...)
* Use dot1p to tc mapping for backend switches
Signed-off-by: Wenda Ni <wenni@microsoft.com>
* Do not write DSCP to TC mapping into CONFIG_DB or config_db.json for
storage switches
Signed-off-by: Wenda Ni <wenni@microsoft.com>
- What I did
Move the enabling of systemd services from sonic_debian_extension to a new systemd generator
- How I did it
Create a new systemd generator that manually creates symlinks to enable systemd services
Add rules/Makefile to build the generator
Add the services to be enabled to /etc/sonic/generated_services.conf, which is read by the generator at boot time (a sketch of the generator logic follows)
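A minimal sketch of the generator logic (shell is used here for brevity; the generator's actual language and the unit-file path are assumptions):

```sh
#!/bin/sh
# systemd invokes generators with the output directory as the first
# argument. Enable every unit listed in generated_services.conf by
# symlinking it into multi-user.target.wants, as 'systemctl enable' would.
GENDIR="$1"
mkdir -p "$GENDIR/multi-user.target.wants"
while read -r unit; do
    [ -n "$unit" ] || continue
    ln -sf "/lib/systemd/system/$unit" "$GENDIR/multi-user.target.wants/$unit"
done < /etc/sonic/generated_services.conf
```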
Signed-off-by: Lawrence Lee <t-lale@microsoft.com>
* Update sonic-quagga submodule
* Port some patches from sonic-quagga
* Fix Makefile
* Another patch
* Uncomment bgp test
* Downport Nikos's patch
* Add a patch to alleviate the vendor issue
* use patch instead of stg
Since we moved to FRR, we need to connect FRR with fpmsyncd via FPM.
Adding static routes is also required.
Signed-off-by: Shu0T1an ChenG <shuche@microsoft.com>
* [frr]: change frr as default sonic routing stack
* fix quagga configuration
* [vstest]: fix bgp test for frr
* [vstest]: skip bgp/test_invalid_nexthop.py for frr
Signed-off-by: Guohan Lu <gulv@microsoft.com>
* Add bridge-utils to orchagent image
- Add vxlanmgrd to supervisorctl in docker-orchagent
Signed-off-by: Ze Gan <zegan@microsoft.com>
* Update submodule pointer for swss to include Vxlanmgrd changes
Overall goal: Build debug images for every stretch docker.
An earlier PR (#2789) made the first cut by transforming broadcom/orchagent to build target/docker-orchagent-dbg.gz.
Changes in this PR:
Made the docker-orchagent build platform independent.
1.1) Created rules/docker_orchagent.mk
1.2) Removed platform/*/docker-orchagent-*.mk
1.3) Removed the corresponding entry from platform/*/rules.mk
Extended the debug docker image build to stretch-based syncd dockers.
2.1) For now, only mellanox & barefoot are stretch-based.
2.2) All the common variable definitions are put in one place: platform/template/docker-syncd-base.mk
2.3) platform/[mellanox, bfn]/docker-syncd-[mlnx, bfn].mk are updated as detailed below.
2.3.1) Set the platform code and include the template base file (a sketch follows the list below)
2.3.2) Add the dependencies & debug dependencies, plus any updates over what the base template offers.
Extended all stretch-based non-platform dockers to build debug dockers too.
3.1) Affected are:
docker-database.mk,
docker-platform-monitor.mk,
docker-router-advertiser.mk,
docker-teamd.mk,
docker-telemetry.mk
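For 2.3.1/2.3.2, a platform syncd rules file then reduces to roughly the following (variable names are assumptions based on the description above, not the exact file contents):

```makefile
# platform/mellanox/docker-syncd-mlnx.mk (illustrative)
DOCKER_SYNCD_PLATFORM_CODE = mlnx
include $(PLATFORM_PATH)/../template/docker-syncd-base.mk

# platform-specific additions over what the base template offers
$(DOCKER_SYNCD_BASE)_DEPENDS += $(SYNCD)
$(DOCKER_SYNCD_BASE)_DBG_DEPENDS += $(SYNCD_DBG)
```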
Next: Build debug flavor of final images with regular dockers replaced with debug dockers where available.
* [build]: put stretch debian packages under target/debs/stretch/
* In the stretch build phase, all debian packages built in that stage are placed under the target/debs/stretch directory.
* For python-based debian packages, since they are identical for jessie and stretch, they are placed under the target/python-debs directory.
Signed-off-by: Guohan Lu <gulv@microsoft.com>