The DPKG caching framework provides the infrastructure to cache SONiC module/target .deb files into a local cache by tracking each target's dependency files. The SONiC build infrastructure is designed as a plugin framework: any new source code can be integrated into SONiC as a module that produces a .deb file as its output. Compilation of one module is completely independent of the compilation of other modules; inter-module dependencies are resolved through build artifacts (header files, libraries, and binaries) packaged as Debian packages. For example, if module A depends on module B, then while module A is being built, B's .deb file is installed into the build docker.
The DPKG caching framework caches a module's .deb package and restores it to the build directory when the module's dependency files have not been modified. When a module is compiled for the first time, the generated .deb package is stored at the DPKG cache location. On a subsequent build, the framework first checks the module's dependency files for modification: if none of them has changed, it copies the .deb package from the cache location; otherwise, it compiles locally and generates a new .deb package. Modified files must be checked in for the newer .deb package to be cached.
This yields a large improvement in build time and, by tracking dependency files, supports a true incremental build.
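Conceptually, the per-target flow looks like the following sketch (illustrative shell only; dep_files_unchanged and build_module are hypothetical helpers, and the real logic lives in the Makefile macros described below):
CACHE_FILE=$SONIC_DPKG_CACHE_SOURCE/${MODULE}.tgz
if dep_files_unchanged && [ -e "$CACHE_FILE" ]; then
    tar -xzvf "$CACHE_FILE" -C target/debs/              # cache hit: restore the .deb
else
    build_module                                         # cache miss: full compilation
    tar -czvf "$CACHE_FILE" target/debs/${MODULE}*.deb   # save for the next build
fi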
- How I did it
DPKG caching is enabled through two global variables: the first selects the caching method and the second gives the location of the cache.
SONIC_DPKG_CACHE_METHOD=cache
SONIC_DPKG_CACHE_SOURCE=
where SONIC_DPKG_CACHE_METHOD selects the deb package caching method (default: 'cache'):
none: no caching
cache: cache from a local directory
and SONIC_DPKG_CACHE_SOURCE points to the cache location.
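For example, a cached build could be invoked as follows (the cache directory here is illustrative):
make SONIC_DPKG_CACHE_METHOD=cache SONIC_DPKG_CACHE_SOURCE=/var/cache/sonic target/sonic-broadcom.bin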
Dependency file tracking:
Dependency files are tracked for each target at two levels:
1. Common make infrastructure files - rules/config, rules/functions, slave.mk, etc.
2. Per-module files - files specific to the module: Makefile, debian/rules, patch files, etc.
For example, the dependency files for the Linux kernel (src/sonic-linux-kernel):
SPATH := $($(LINUX_HEADERS_COMMON)_SRC_PATH)
DEP_FILES := $(SONIC_COMMON_FILES_LIST) rules/linux-kernel.mk rules/linux-kernel.dep
DEP_FILES += $(SONIC_COMMON_BASE_FILES_LIST)
SMDEP_FILES := $(addprefix $(SPATH)/,$(shell cd $(SPATH) && git ls-files))
DEP_FLAGS := $(SONIC_COMMON_FLAGS_LIST) \
             $(KERNEL_PROCURE_METHOD) $(KERNEL_CACHE_PATH)
$(LINUX_HEADERS_COMMON)_CACHE_MODE := GIT_CONTENT_SHA
$(LINUX_HEADERS_COMMON)_DEP_FLAGS := $(DEP_FLAGS)
$(LINUX_HEADERS_COMMON)_DEP_FILES := $(DEP_FILES)
$(LINUX_HEADERS_COMMON)_SMDEP_FILES := $(SMDEP_FILES)
$(LINUX_HEADERS_COMMON)_SMDEP_PATHS := $(SPATH)
Cache file tracking:
The cache file is a compressed tarball of a module's target .deb file and its derived-target .deb files.
The cache filename has the following format
FORMAT:
<module .deb filename>-<24-character DEP SHA hash>-<24-character MOD SHA hash>.tgz
E.g.:
linux-headers-4.9.0-9-2-common_4.9.168-1+deb9u3_all.deb-23658712fd21bb776fa16f47-c0b63ef593d4a32643bca228.tgz
<24-character DEP SHA value> - the SHA value derived from all the dependent packages.
<24-character MOD SHA value> - the SHA value derived from one of the following:
GIT_COMMIT_SHA - the SHA value of the last git commit ID, if the module is a submodule
GIT_CONTENT_SHA - a SHA value generated from the content of the target dependency files.
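The two hash values could be derived roughly as follows (an illustrative shell sketch; the input variable names are hypothetical, and the 24-character truncation matches the filename format above):
DEP_SHA=$(sha1sum $DEPENDENT_DEBS | sha1sum | cut -c1-24)      # from all dependent packages
MOD_SHA=$(cat $DEP_FILES $SMDEP_FILES | sha1sum | cut -c1-24)  # GIT_CONTENT_SHA mode
# GIT_COMMIT_SHA mode would instead use: git -C $SPATH rev-parse HEAD | cut -c1-24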
Target-specific rules:
Caching can be enabled/disabled at the global level and also at the per-target level.
$(addprefix $(DEBS_PATH)/, $(SONIC_DPKG_DEBS)) : $(DEBS_PATH)/% : .platform $$(addsuffix -install,$$(addprefix $(DEBS_PATH)/,$$($$*_DEPENDS))) \
$(call dpkg_depend,$(DEBS_PATH)/%.dep)
$(HEADER)
# Load the target deb from DPKG cache
$(call LOAD_CACHE,$*,$@)
# Skip building the target if it is already loaded from cache
if [ -z '$($*_CACHE_LOADED)' ] ; then
.....
# Rules for Generating the target DEB file.
.....
# Save the target deb into DPKG cache
$(call SAVE_CACHE,$*,$@)
fi
$(FOOTER)
The make rule '$(call dpkg_depend,$(DEBS_PATH)/%.dep)' checks the target's dependency files for modification. If any of them is newer than the target, the target is regenerated.
Two main macros, 'LOAD_CACHE' and 'SAVE_CACHE', load and store the cache contents.
The 'LOAD_CACHE' macro loads the cache file from cache storage and extracts it into the target folder. This is done only if the target's dependency files are unmodified (verified via the git file status); otherwise, cache loading is skipped and a full compilation is performed. It also sets a target-specific variable to indicate whether the cache was loaded.
The 'SAVE_CACHE' macro generates the compressed tarball of the cache file and saves it into cache storage. Saving into cache storage is protected with a lock.
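A minimal sketch of what the two macros do (illustrative shell; the real macros are make functions, and the lock-file path is an assumption):
# LOAD_CACHE: restore only if the tracked files are clean in git
if [ -z "$(git status -s $SMDEP_PATHS)" ] && [ -e "$CACHE_FILE" ]; then
    tar -xzvf "$CACHE_FILE" -C target/debs/ && CACHE_LOADED=1
fi
# SAVE_CACHE: writers are serialized with a lock
( flock 9; tar -czvf "$CACHE_FILE" $TARGET_DEB $DERIVED_DEBS ) 9>"$SONIC_DPKG_CACHE_SOURCE/.lock"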
- How to verify it
The caching functionality is verified by enabling it in the Linux kernel submodule.
It uses 'target/cache' as the cache directory: the Linux kernel cache file is stored there on the first build and picked up from that location during subsequent clean builds.
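Using the kernel package named in the example above, the behavior can be observed as follows (a sketch; the exact output path of the .deb depends on the build configuration):
make target/debs/linux-headers-4.9.0-9-2-common_4.9.168-1+deb9u3_all.deb   # first build: compiles and fills target/cache
ls target/cache/                                                           # the .tgz cache file appears here
make clean
make target/debs/linux-headers-4.9.0-9-2-common_4.9.168-1+deb9u3_all.deb   # clean rebuild: restored from the cache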
- Description for the changelog
The DPKG caching framework provides the infrastructure to cache a module-specific .deb file by tracking the module's dependency files.
If the module's dependency files are unchanged, it restores the module's .deb files from cache storage.
DOCUMENT PR:
https://github.com/Azure/SONiC/pull/559
* Changes in sonic-buildimage for the NAT feature
- Docker for NAT
- install the required tools, iptables and conntrack, for NAT (as shown below)
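The tool installation amounts to something like the following in the NAT docker's Dockerfile.j2 (a sketch; the exact file is an assumption):
RUN apt-get update && apt-get install -y iptables conntrack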
Signed-off-by: kiran.kella@broadcom.com
* Add redis-tools dependencies in the docker nat compilation
* Addressed review comments
* add natsyncd to warm-boot finalizer list
* addressed review comments
* using swsscommon.DBConnector instead of swsssdk.SonicV2Connector
* Enable NAT application in docker-sonic-vs
- move single-instance services into their own folder
- generate systemd templates for any multi-instance service files in slave.mk
- detect single- or multi-instance platforms in systemd-sonic-generator based on the platform-specific asic.conf file (see the sketch after this list)
- update the container hostname after creation instead of during creation (docker_image_ctl)
- run Docker containers in a network namespace if specified
- add a service to create a simulated multi-ASIC topology on the virtual switch platform
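A minimal sketch of the multi-instance pattern (unit and path names are illustrative; asic.conf carries a NUM_ASIC value on multi-ASIC platforms):
NUM_ASIC=$(sed -n 's/^NUM_ASIC=//p' /usr/share/sonic/device/$PLATFORM/asic.conf)
for i in $(seq 0 $((NUM_ASIC - 1))); do
    # enable one instance of a templated unit, e.g. swss@.service, per ASIC
    ln -s /lib/systemd/system/swss@.service \
          /etc/systemd/system/multi-user.target.wants/swss@$i.service
done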
Signed-off-by: Lawrence Lee <t-lale@microsoft.com>
Signed-off-by: Suvarna Meenakshi <Suvarna.Meenaksh@microsoft.com>
The script sonic-gns3a.sh creates a GNS3 appliance file that points to a sonic-vs.img (SONiC virtual switch) image.
The appliance file (and the sonic-vs.img file) can subsequently be imported into a GNS3 simulation environment.
Introduce a new "sflow" container (if ENABLE_SFLOW is set). The new docker will include:
hsflowd : the sFlow agent daemon, based on host-sflow
psample : built from the libpsample repository; useful for debugging sampled packets/groups
sflowtool : dumps sFlow samples locally (e.g. with an in-unit collector)
In the case of SONiC-VS, enable the psample & act_sample kernel modules.
The VS syncd needs iproute2=4.20.0-2~bpo9+1 & libcap2-bin=1:2.25-1 to support tc-sample.
tc-syncd is provided as a convenience tool for debugging (e.g. tc-syncd filter show ...); an illustrative flow follows.
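An illustrative sampled-packet debugging flow on VS (interface name and sample rate are examples; tc syntax per tc-sample):
modprobe psample act_sample
tc qdisc add dev eth1 clsact
tc filter add dev eth1 ingress protocol all matchall action sample rate 1000 group 1
sflowtool        # dump the resulting sFlow samples locally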
* Use dot1p to tc mapping for backend switches
Signed-off-by: Wenda Ni <wenni@microsoft.com>
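In CONFIG_DB terms, a backend switch gets a dot1p-to-TC map along these lines (table name per the SONiC QoS schema; the mapping values are illustrative):
redis-cli -n 4 hset "DOT1P_TO_TC_MAP|AZURE" 0 0 1 1 2 2 3 3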
* Do not write the DSCP-to-TC mapping into CONFIG_DB or config_db.json for
storage switches
Signed-off-by: Wenda Ni <wenni@microsoft.com>
- What I did
Move the enabling of Systemd services from sonic_debian_extension to a new systemd generator
- How I did it
Create a new systemd generator to manually create symlinks to enable systemd services
Add rules/Makefile to build the generator
Add the services to be enabled to /etc/sonic/generated_services.conf, which the generator reads at boot time
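At boot the generator effectively does the following (an illustrative sketch; the conf file path is as above, the unit directory is an assumption):
while read -r svc; do
    ln -sf /lib/systemd/system/$svc /etc/systemd/system/multi-user.target.wants/$svc
done < /etc/sonic/generated_services.conf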
Signed-off-by: Lawrence Lee <t-lale@microsoft.com>
* Update sonic-quagga submodule
* Port some patches from sonic-quagga
* Fix Makefile
* Another patch
* Uncomment bgp test
* Downport Nikos's patch
* Add a patch to alleviate the vendor issue
* use patch instead of stg
Since we moved to FRR, we need to connect FRR to fpmsyncd via FPM.
Adding static routes is also required.
Signed-off-by: Shu0T1an ChenG <shuche@microsoft.com>
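The FPM hookup itself is a standard FRR mechanism: zebra is started with the FPM module, which streams route updates to fpmsyncd's TCP listener (2620 is the default FPM port; the exact daemon options line used in SONiC is an assumption):
zebra -M fpm    # push RIB updates over FPM to fpmsyncd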
* [frr]: change frr as default sonic routing stack
* fix quagga configuration
* [vstest]: fix bgp test for frr
* [vstest]: skip bgp/test_invalid_nexthop.py for frr
Signed-off-by: Guohan Lu <gulv@microsoft.com>
* Add bridge-utils to orchagent image
- Add vxlanmgrd to supervisorctl in docker-orchagent
Signed-off-by: Ze Gan <zegan@microsoft.com>
* Update submodule pointer for swss to include Vxlanmgrd changes
Overall goal: Build debug images for every stretch docker.
An earlier PR (#2789) made the first cut by transforming broadcom/orchagent to build target/docker-orchagent-dbg.gz.
Changes in this PR:
Made docker-orchagent build to be platform independent.
1.1) Created rules/docker_orchagent.mk
1.2) Removed platform/*/docker-orchagent-*.mk
1.3) Removed the corresponding entry from platform/*/rules.mk
Extended the debug docker image build to stretch based syncd dockers.
2.1) For now, only mellanox & barefoot are stretch based.
2.2) All the common variable definitions are put in one place platform/template/docker-syncd-base.mk
2.3) platform/[mellanox, bfn]/docker-syncd-[mlnx, bfn].mk are updated as detailed below.
2.3.1) Set platform code and include template base file
2.3.2) Add the dependencies & debug dependencies and any update over what base template offers.
Extended all stretch based non-platform dockers to build debug dockers too.
3.1) Affected are:
docker-database.mk,
docker-platform-monitor.mk,
docker-router-advertiser.mk,
docker-teamd.mk,
docker-telemetry.mk
Next: Build debug flavor of final images with regular dockers replaced with debug dockers where available.
* [build]: put stretch debian packages under target/debs/stretch/
* In the stretch build phase, all debian packages built in that stage are placed under the target/debs/stretch directory.
* For python-based debian packages, which are identical for jessie and stretch, the packages are placed under the target/python-debs directory.
Signed-off-by: Guohan Lu <gulv@microsoft.com>
* syncd changes to disk and add e1000 driver to sonic vm
* add pg_profile_lookup.ini
Signed-off-by: Guohan Lu <gulv@microsoft.com>
* update swss and sairedis
sairedis:
* d146572 2018-11-22 | Fix interface name used on link message using lane map (#386)
swss:
* c74dc60 2018-11-22 | [vstest]: use eth1~32 as physical interface name in vs docker (#700) (HEAD -> master, origin/master, origin/HEAD) [lguohan]
* 6007e7f 2018-11-22 | [portmgrd]: Fix setting default port admin status and MTU (#691) [stepanblyschak]
* 6c70f6d 2018-11-22 | [portsorch] Fix port queue index init bug (#505) [yangbashuang]
* 70ac79b 2018-11-21 | [gitignore]: Update all binary names in the ignore list (#698) [Shuotian Cheng]
* 2a3626c 2018-11-21 | [test]: Remove duplicate legacy ACL tests (#699) [Shuotian Cheng]
* 8099811 2018-11-20 | [aclorch]: Remove unnecessary warning message (#696) [Shuotian Cheng]
* 63d8ebc 2018-11-18 | [portsorch]: Remove duplicate local variables - port (#690) [Shuotian Cheng]
* 28dc042 2018-11-18 | Remove default docker name value of swss. (#692) [Jipan Yang]
Signed-off-by: Guohan Lu <gulv@microsoft.com>
To suppress the error message:
INFO #supervisord: teammgrd Can't write to the lacp directory
'/var/warmboot/teamd/': No such file or directory
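The straightforward remedy implied here is to ensure the directory exists before teammgrd starts (sketch):
mkdir -p /var/warmboot/teamd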
Signed-off-by: Shu0T1an ChenG <shuche@microsoft.com>
* Restore neighbor table to kernel during system warm-reboot
Added a "restore_neighbors" service to restore the neighbor table into the kernel during system warm reboot. The service is started by supervisord in the swss docker when the docker starts.
If system warm reboot is enabled, it restores the neighbor table from appDB into the kernel through netlink API calls and refreshes the neighbor table by sending ARP/NS requests to all neighbor entries; it then sets the stateDB flag so that neighsyncd can continue the reconciliation process.
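Per neighbor entry read from appDB, the restore is roughly equivalent to the following iproute2/iputils commands (addresses are illustrative; the service itself uses netlink API calls):
ip neigh replace 192.168.0.2 lladdr 00:11:22:33:44:55 dev Ethernet0 nud stale
arping -c 1 -I Ethernet0 192.168.0.2   # refresh so the kernel revalidates the entry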
-- Added the tcpdump and python-scapy debian packages to the orchagent and vs dockers.
-- Added the pyroute2 and netifaces python modules to the orchagent and vs dockers.
-- Worked around a tcpdump issue in the vs docker.
Signed-off-by: Zhenggen Xu <zxu@linkedin.com>
* Move the restore_neighbors.py to sonic-swss submodule
Made changes to the makefiles accordingly.
Made dockerfile.j2 changes and supervisord config changes.
Added the python monotonic lib for time access.
Signed-off-by: Zhenggen Xu <zxu@linkedin.com>
* Added PYTHON_SWSSCOMMON as a swss runtime dependency
Signed-off-by: Zhenggen Xu <zxu@linkedin.com>
The static teamd configuration is no longer used in SONiC. The
configuration is dynamically generated, stored in the configuration
database, and picked up by teammgrd.
To create new port channels and their members, the commands:
config portchannel add
config portchannel member add
are used (see the example below).
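For example (port channel and member names are illustrative):
config portchannel add PortChannel0001
config portchannel member add PortChannel0001 Ethernet0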
Signed-off-by: Shu0T1an ChenG <shuche@microsoft.com>
Remove the teamd.j2 templates used for starting teamd. Add
teammgrd instead to manage all port-channel-related configuration
changes. Remove the front panel port related configurations from the
interfaces.j2 template as well.
Remove the teamd.sh script and use teammgrd to start all the teamd
processes. Remove all the logic in the start.sh script as well.
Update the sonic-swss submodule.
Signed-off-by: Shu0T1an ChenG <shuche@microsoft.com>
* [docker-orchagent]: Add vrfmgrd to supervisorctl
Signed-off-by: Marian Pritsak <marianp@mellanox.com>
* [sonic-vs]: Add vrfmgrd to supervisorctl
Signed-off-by: Marian Pritsak <marianp@mellanox.com>
- Move the front panel port and port channel MTU and IP configurations out of
the current /etc/network/interfaces file and store them in the configuration
database.
- The MTU for both front panel ports and port channels is set via the
minigraph and defaults to 9100 (see the example after this list).
- Introduce portmgrd, which picks up MTU configurations from the
configuration database.
- The updated intfmgrd picks up IP address changes from the configuration
database.
- Update the sonic-swss submodule.
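The resulting CONFIG_DB entries look roughly like this (keys per the SONiC PORT/PORTCHANNEL schema; values are illustrative):
redis-cli -n 4 hset "PORT|Ethernet0" mtu 9100
redis-cli -n 4 hset "PORTCHANNEL|PortChannel0001" mtu 9100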
Signed-off-by: Shu0T1an ChenG <shuche@microsoft.com>