It should no longer be necessary to explicitly install the 'wheel' package, as SONiC packages built as wheels should specify 'wheel' as a dependency in their setup.py files. Therefore, pip[3] should check for the presence of 'wheel' and install it if it isn't present before attempting to call 'setup.py bdist_wheel' to install the package.
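A minimal sketch of the pattern, with an illustrative package name; declaring 'wheel' in setup_requires lets setuptools fetch it before bdist_wheel runs:

```python
from setuptools import setup

setup(
    name='example-sonic-package',  # illustrative name, not a real package
    version='1.0',
    setup_requires=['wheel'],      # fetched before 'setup.py bdist_wheel' runs
)
```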
Bring up the chassisdb service on the SONiC switch according to the design in the Distributed Forwarding in VoQ Architecture HLD.
Signed-off-by: Honggang Xu <hxu@arista.com>
**- Why I did it**
To bring up the new ChassisDB service in SONiC as designed in the ['Distributed forwarding in a VOQ architecture HLD'](90c1289eaf/doc/chassis/architecture.md).
**- How I did it**
Implement section 2.3.1, "Global DB Organization", of the VOQ architecture HLD.
**- How to verify it**
The ChassisDB service won't start without a chassisdb.conf file on existing platforms.
The ChassisDB service is accessible with a global.conf file in the distributed architecture.
Signed-off-by: Honggang Xu <hxu@arista.com>
We were building our own python-click package because we needed features/bug fixes available as of version 7.0.0, but the most recent version available from Debian was in the 6.x range.
"Click" is needed for building/testing and installing sonic-utilities. Now that we are building sonic-utilities as a wheel, with Click specified as a dependency in the setup.py file, setuptools will install a more recent version of Click in the sonic-slave-buster container when building the package, and pip will install a more recent version of Click in the host OS of SONiC when installing the sonic-utilities package. Also, we don't need to worry about installing the Python 2 or 3 version of the package, as the proper one will be installed as necessary.
* buildimage: Add gearbox phy device files and a new physyncd docker to support VS gearbox phy feature
* scripts and configuration needed to support a second syncd docker (physyncd)
* physyncd supports gearbox device and phy SAI APIs and runs multiple instances of syncd, one per phy in the device
* support for VS target (sonic-sairedis vslib has been extended to support a virtual BCM81724 gearbox PHY).
HLD is located at b817a12fd8/doc/gearbox/gearbox_mgr_design.md
**- Why I did it**
This work is part of the gearbox phy joint effort between Microsoft and Broadcom, and is based
on multi-switch support in sonic-sairedis.
**- How I did it**
Overall feature was implemented across several projects. The collective pull requests (some in late stages of review at this point):
https://github.com/Azure/sonic-utilities/pull/931 - CLI (merged)
https://github.com/Azure/sonic-swss-common/pull/347 - Minor changes (merged)
https://github.com/Azure/sonic-swss/pull/1321 - gearsyncd, config parsers, changes to orchagent to create gearbox phy on supported systems
https://github.com/Azure/sonic-sairedis/pull/624 - physyncd, virtual BCM81724 gearbox phy added to vslib
**- How to verify it**
In a vslib build:
```
root@sonic:/home/admin# show gearbox interfaces status
PHY Id    Interface    MAC Lanes        MAC Lane Speed    PHY Lanes        PHY Lane Speed    Line Lanes    Line Lane Speed    Oper    Admin
--------  -----------  ---------------  ----------------  ---------------  ----------------  ------------  -----------------  ------  -------
1         Ethernet48   121,122,123,124  25G               200,201,202,203  25G               204,205       50G                down    down
1         Ethernet49   125,126,127,128  25G               206,207,208,209  25G               210,211       50G                down    down
1         Ethernet50   69,70,71,72      25G               212,213,214,215  25G               216           100G               down    down
```
In addition, `docker ps | grep phy` should show a physyncd docker running.
Signed-off-by: syd.logan@broadcom.com
libzmq5 is a TCP/IPC communication library needed by sairedis/syncd to communicate. It provides a new way (compared to the current Redis channel) to exchange messages which is faster than the Redis library and allows a synchronous mode.
This library is required in the sairedis, swss, sonic-buildimage, and sonic-build-tools repos to make it work.
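A generic ZeroMQ request/reply sketch in pyzmq, only to illustrate the kind of synchronous TCP/IPC channel described; this is not the actual sairedis/syncd protocol:

```python
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
sock.connect('ipc:///tmp/example.ipc')  # illustrative endpoint
sock.send(b'ping')   # REQ/REP enforces a strict synchronous exchange
reply = sock.recv()  # blocks until the peer answers
```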
We are moving toward building all Python packages for SONiC as wheel packages rather than Debian packages. This will also allow us to more easily transition to Python 3.
Python files are now packaged in the "sonic-utilities" Python wheel. Data files are now packaged in the "sonic-utilities-data" Debian package.
**- How I did it**
- Build and install sonic-utilities as a Python package
- Remove explicit installation of wheel dependencies, as these will now get installed implicitly by pip when installing sonic-utilities as a wheel
- Build and install new sonic-utilities-data package to install data files required by sonic-utilities applications
- Update all references to sonic-utilities scripts/entrypoints to either reference the new /usr/local/bin/ location or remove absolute path entirely where applicable
Submodule updates:
* src/sonic-utilities aa27dd9...2244d7b (5):
> Support building sonic-utilities as a Python wheel package instead of a Debian package (#1122)
> [consutil] Display remote device name in show command (#1120)
> [vrf] fix check state_db error when vrf moving (#1119)
> [consutil] Fix issue where the ConfigDBConnector's reference is missing (#1117)
> Update to make config load/reload backward compatible. (#1115)
* src/sonic-ztp dd025bc...911d622 (1):
> Update paths to reflect new sonic-utilities install location, /usr/local/bin/ (#19)
This PR limits the number of calls to sonic-cfggen to one call
per iteration instead of the current three calls per iteration.
The PR also installs jq on host for future scripts if needed.
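A hedged sketch of the idea: dump the configuration once and reuse the parsed result instead of invoking sonic-cfggen per value (using sonic-cfggen's -d/--print-data flags):

```python
import json
import subprocess

# One call to dump the whole running config as JSON...
cfg = json.loads(subprocess.check_output(
    ['sonic-cfggen', '-d', '--print-data']))

# ...then read as many values as needed without re-invoking the tool
metadata = cfg['DEVICE_METADATA']['localhost']
hostname = metadata.get('hostname')
platform = metadata.get('platform')
```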
Signed-off-by: Tamer Ahmed <tamer.ahmed@microsoft.com>
* [redis] Use redis-server and redis-tools in blob storage to prevent
upstream link broken
* Use curl instead of wget
* Explicitly install dependencies
Copying the platform.json file into an empty /usr/share/sonic/platform directory does not mimic an actual device. A more correct approach is to create a /usr/share/sonic/platform symlink which links to the actual platform directory; this is closer to what is done inside SONiC containers. Then, we only need to copy the platform.json file into the actual platform directory; the symlink takes care of the alternative path, and also exposes all the other files in the platform directory.
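A minimal sketch of the symlink approach, assuming an illustrative platform identifier under /usr/share/sonic/device:

```python
import os
import shutil

platform = 'x86_64-kvm_x86_64-r0'  # illustrative platform identifier
platform_dir = os.path.join('/usr/share/sonic/device', platform)

# Link the generic path to the real platform directory, as SONiC containers do
os.symlink(platform_dir, '/usr/share/sonic/platform')

# Copy platform.json into the real directory; the symlink exposes it at the
# alternative path along with every other file in the platform directory
shutil.copy('platform.json', platform_dir)
```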
The sonic-py-common package relies on the `PLATFORM` environment variable being set at runtime in the SONiC VS container. Exporting the variable in the start.sh script makes it available only to the shell running start.sh and any subshells it spawns; once the script exits, the variable is lost. This was causing failures in tests run in the VS container, as they call applications which in turn call sonic-py-common functions that rely on `PLATFORM` being set.
Setting the environment variables in the Dockerfile allows them to persist through the entire runtime of the container.
Add a master switch so that the sync/async mode can be configured.
Example usage of the switch:
1. Configure mode while building an image
`make ENABLE_SYNCHRONOUS_MODE=y <target>`
2. Configure when the device is running
Change CONFIG_DB with `sonic-cfggen -a '{"DEVICE_METADATA":{"localhost": {"synchronous_mode": "enable"}}}' --write-to-db`
Restart swss with `systemctl restart swss`
- Reverts commit 457674c
- Creates "platform.json" for vs docker
- Adds test case for port breakout CLI
- Explicitly sets admin status of all the VS interfaces to down to be compatible with SWSS test cases, specifically vnet tests and sflow tests
Signed-off-by: Sangita Maity <sangitamaity0211@gmail.com>
Swap order of orchagent and portsyncd in start.sh and fix priorities
Many docker virtual switch tests are failing at the moment because orchagent never finishes initializing. After doing some searching I figured out that Ethernet24 is never published to State DB, which is reminiscent of #4821
Signed-off-by: Danny Allen <daall@microsoft.com>
Applications running in the host OS can read the platform identifier from /host/machine.conf. When loading configuration, sonic-config-engine *needs* to read the platform identifier from machine.conf, as it is responsible for populating the value in Config DB.
When an application is running inside a Docker container, the machine.conf file is not accessible, as the /host directory is not mounted. So we need to retrieve the platform identifier from Config DB if get_platform() is called from inside a Docker
container. However, we can't simply check that we're running in a Docker container because the host OS of the SONiC virtual switch is running inside a Docker container. So I refactored `get_platform()` to:
1. Read from the `PLATFORM` environment variable if it exists (which is defined in a virtual switch Docker container)
2. Read from machine.conf if possible (works in the host OS of a standard SONiC image, critical for sonic-config-engine at boot)
3. Read the value from Config DB (needed for Docker containers running in SONiC, as machine.conf is not accessible to them)
- Also fix typo in daemon_base.py
- Also aligned `get_hwsku()` with `get_platform()` (the lookup order above is sketched below)
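A minimal sketch of the three-step lookup order described above, assuming illustrative machine.conf keys and the swsssdk `ConfigDBConnector` API for step 3; this is not the actual sonic-py-common implementation:

```python
import os

def get_platform():
    # 1. Virtual switch containers define PLATFORM in their environment
    platform = os.environ.get('PLATFORM')
    if platform:
        return platform

    # 2. Host OS: parse /host/machine.conf (works at boot, before the DB is up)
    try:
        with open('/host/machine.conf') as f:
            for line in f:
                key, _, value = line.partition('=')
                if key.strip() in ('onie_platform', 'aboot_platform'):
                    return value.strip()
    except IOError:
        pass

    # 3. Containers: machine.conf is not mounted, so fall back to Config DB
    from swsssdk import ConfigDBConnector
    db = ConfigDBConnector()
    db.connect()
    return db.get_entry('DEVICE_METADATA', 'localhost').get('platform')
```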
- Created the VS setup to test DPB functionality
- Created "platform.json" for VS docker
- Added test case for Breakout CLI
Signed-off-by: Sangita Maity <sangitamaity0211@gmail.com>
As part of consolidating all common Python-based functionality into the new sonic-py-common package, this pull request:
1. Redirects all Python applications/scripts in sonic-buildimage repo which previously imported sonic_device_util or sonic_daemon_base to instead import sonic-py-common, which was added in https://github.com/Azure/sonic-buildimage/pull/5003
2. Replaces all calls to `sonic_device_util.get_platform_info()` with calls to `sonic_py_common.get_platform()`, and removes any calls to `sonic_device_util.get_machine_info()` which are no longer necessary (i.e., those which were only used to pass the results to `sonic_device_util.get_platform_info()`); see the sketch after this list
3. Removes unused imports to the now-deprecated sonic-daemon-base package and sonic_device_util.py module
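A hedged before/after sketch of the migration described in items 1 and 2 (call sites vary; `device_info.get_platform()` is shown as the consolidated entry point):

```python
# Before: deprecated module, two calls
# from sonic_device_util import get_machine_info, get_platform_info
# platform = get_platform_info(get_machine_info())

# After: consolidated package, one call
from sonic_py_common import device_info

platform = device_info.get_platform()
```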
This is the next step toward resolving https://github.com/Azure/sonic-buildimage/issues/4999
Also reverted my previous change in which device_info.get_platform() would first try obtaining the platform ID string from Config DB and fall back to gathering it from machine.conf upon failure. That function is called by sonic-cfggen before the data is in the DB, in which case the db_connect() call hangs indefinitely, which was not the behavior I expected. As of now, the function will always reference machine.conf.
The virtual-chassis test uses multiple vs instances to simulate a
modular switch, and a redis-chassis service is required to run on
the vs instance that represents a supervisor card.
This change allows the vs docker to start the redis-chassis service
according to an external config file.
**- Why I did it**
To support virtual-chassis setup, so that we can test distributed forwarding feature in virtual sonic environment, see `Distributed forwarding in a VOQ architecture HLD` pull request at https://github.com/Azure/SONiC/pull/622
**- How I did it**
The sonic-vs start.sh is enhanced to start the new redis_chassis service if an external chassis config file is found. The config file doesn't exist in the current vs environment, so start.sh behaves as before.
**- How to verify it**
The swss tests still pass. The chassis_db service is verified in the virtual-chassis topology and tests, which are in the following PRs.
Signed-off-by: Honggang Xu <hxu@arista.com>
(cherry picked from commit c1d45cf81ce3238be2dcbccae98c0780944981ce)
Co-authored-by: Honggang Xu <hxu@arista.com>
Added the required packages to enable the YANG dependency check for Dynamic Port Breakout in the VS container.
[sonic-utilities PR #766](https://github.com/Azure/sonic-utilities/pull/766) has a dependency on it.
Getting error like the following without this fix: `ImportError: No module named yang - required module not found`
Signed-off-by: Sangita Maity <sangitamaity0211@gmail.com>
Added xmltodict and jsondiff packages needed to run vs test cases successfully for DPB.
sonic-utilities PR #766 has a dependency on these packages.
Signed-off-by: Sangita Maity <sangitamaity0211@gmail.com>
Currently, the vs docker always creates 32 front panel ports.
With this change, when the vs docker starts, it first detects the
peer links in the namespace and then sets up an equal number of
front panel interfaces (a sketch of the detection idea follows below).
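A minimal sketch of the detection idea, assuming peer links appear as ethN devices in the namespace; the real logic lives in the vs docker's start.sh:

```python
import os
import re

# Count ethN peer links in the namespace, excluding eth0 (management)
peer_links = [intf for intf in os.listdir('/sys/class/net')
              if re.fullmatch(r'eth[1-9][0-9]*', intf)]
num_front_panel_ports = len(peer_links)
```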
Signed-off-by: Guohan Lu <lguohan@gmail.com>
libpython2.7, libdaemon0, libdbus-1-3, and libjansson4 are common
across different containers. Move them into docker-base-stretch.
Signed-off-by: Guohan Lu <lguohan@gmail.com>
* Changes in sonic-buildimage for the NAT feature
- Docker for NAT
- Installing the required tools (iptables and conntrack) for NAT
Signed-off-by: kiran.kella@broadcom.com
* Add redis-tools dependencies in the docker nat compilation
* Addressed review comments
* add natsyncd to warm-boot finalizer list
* addressed review comments
* using swsscommon.DBConnector instead of swsssdk.SonicV2Connector
* Enable NAT application in docker-sonic-vs
* Use dot1p to tc mapping for backend switches
Signed-off-by: Wenda Ni <wenni@microsoft.com>
* Do not write DSCP to TC mapping into CONFIG_DB or config_db.json for
storage switches
Signed-off-by: Wenda Ni <wenni@microsoft.com>
Since we moved to FRR, we need to connect FRR with fpmsyncd via FPM.
Adding static routes is also required.
Signed-off-by: Shu0T1an ChenG <shuche@microsoft.com>
* [frr]: change frr as default sonic routing stack
* fix quagga configuration
* [vstest]: fix bgp test for frr
* [vstest]: skip bgp/test_invalid_nexthop.py for frr
Signed-off-by: Guohan Lu <gulv@microsoft.com>
* Add bridge-utils to orchagent image
- Add vxlanmgrd to supervisorctl in docker-orchagent
Signed-off-by: Ze Gan <zegan@microsoft.com>
* Update submodule pointer for swss to include Vxlanmgrd changes
* [build]: put stretch debian packages under target/debs/stretch/
* In the stretch build phase, all Debian packages built in that phase are placed under the target/debs/stretch directory.
* Python-based Debian packages are really the same for jessie and stretch, so they are placed under the target/python-debs directory.
Signed-off-by: Guohan Lu <gulv@microsoft.com>
To suppress the error message:
`INFO #supervisord: teammgrd Can't write to the lacp directory '/var/warmboot/teamd/': No such file or directory`
Signed-off-by: Shu0T1an ChenG <shuche@microsoft.com>
* Restore neighbor table to kernel during system warm-reboot
Added a service, "restore_neighbors", to restore the neighbor table into
the kernel during system warm reboot. The service is started by supervisord
in the swss docker when the docker is started.
In case system warm reboot is enabled, it tries to restore the neighbor
table from APP DB into the kernel through netlink API calls, updates the
neighbor entries by sending ARP/NS requests to all of them, and then sets the
STATE DB flag for neighsyncd to continue the reconciliation process (a sketch
of the restore step follows below).
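A minimal sketch of the kernel-restore step using the pyroute2 netlink API mentioned above; the interface, address, and neighbor state are illustrative, and the actual logic lives in restore_neighbors.py:

```python
from pyroute2 import IPRoute
from pyroute2.netlink.rtnl import ndmsg

def restore_neighbor(ifname, ip_addr, mac):
    ipr = IPRoute()
    idx = ipr.link_lookup(ifname=ifname)[0]
    # Install the entry into the kernel neighbor table; the ARP/NS probe
    # described above then confirms reachability
    ipr.neigh('replace', ifindex=idx, dst=ip_addr, lladdr=mac,
              state=ndmsg.states['stale'])

restore_neighbor('Vlan1000', '192.168.0.2', '00:11:22:33:44:55')
```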
-- Added the tcpdump and python-scapy Debian packages to the orchagent and vs dockers.
-- Added the pyroute2 and netifaces Python modules to the orchagent and vs dockers.
-- Worked around a tcpdump issue in the vs docker.
Signed-off-by: Zhenggen Xu <zxu@linkedin.com>
* Move the restore_neighbors.py to sonic-swss submodule
Made changes to makefiles accordingly
Make dockerfile.j2 changes and supervisord config changes
Add python monotonic lib for time access
Signed-off-by: Zhenggen Xu <zxu@linkedin.com>
* Added PYTHON_SWSSCOMMON as swss runtime dependency
Signed-off-by: Zhenggen Xu <zxu@linkedin.com>
Remove the teamd.j2 templates used for starting teamd. Add
teammgrd instead to manage all port channel related configuration
changes. Remove front panel port related configurations in the
interfaces.j2 templates as well.
Remove the teamd.sh script and use teammgrd to start all the teamd
processes. Remove all the logic in the start.sh script as well.
Update the sonic-swss submodule.
Signed-off-by: Shu0T1an ChenG <shuche@microsoft.com>
* [docker-orchagent]: Add vrfmgrd to supervisorctl
Signed-off-by: Marian Pritsak <marianp@mellanox.com>
* [sonic-vs]: Add vrfmgrd to supervisorctl
Signed-off-by: Marian Pritsak <marianp@mellanox.com>
- Move front panel ports and port channels MTU and IP configurations out of
the current /etc/network/interfaces file and store them in the configuration
database.
- The default MTU for both front panel ports and port channels is 9100;
the value is set via the minigraph, falling back to 9100 by default
(see the sketch after this list).
- Introduce portmgrd which will pick up the MTU configurations from the
configuration database.
- The updated intfmgrd will pick up IP address changes from the configuration
database.
- Update sonic-swss submodule
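A minimal sketch of the CONFIG_DB layout implied above; the port name, alias, and IP prefix are illustrative:

```python
config_db = {
    "PORT": {
        "Ethernet0": {"alias": "fortyGigE0/0", "mtu": "9100"}
    },
    "PORTCHANNEL": {
        "PortChannel01": {"mtu": "9100"}
    },
    "INTERFACE": {
        "Ethernet0|10.0.0.0/31": {}  # IP configuration moved out of /etc/network/interfaces
    }
}
```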
Signed-off-by: Shu0T1an ChenG <shuche@microsoft.com>
* Fix potential blackholing/looping traffic and refresh ipv6 neighbor to avoid CPU hit
In case IPv6 global addresses were configured on L3 interfaces and used for peering,
and the routing protocol was using link-local addresses on the same interfaces as preferred nexthops,
the link-local addresses could be aged out after a while due to no activity towards the link-local
addresses themselves. When we then receive new routes with the link-local nexthops, SONiC won't insert
them into the hardware, causing looping or blackholing of traffic.
Global IPv6 addresses on L3 interfaces between switches are refreshed by BGP keepalive and other messages.
On the server-facing side, traffic may hit the forwarding plane only, with no regular refresh of the IPv6 neighbor entries.
This can age out the Linux kernel IPv6 neighbor entries, causing the hardware neighbor table entries to be removed;
traffic going to those neighbors then hits the CPU, causing traffic drops and temporary high CPU load.
Also, if link-local addresses were not learned, we may not get them at all later.
This change is intended to fix all of the above issues.
Changes:
Add the ndisc6 package to the swss docker and use it for IPv6 NDP ping to update the neighbors' state on Vlan interfaces
Change the default IPv6 neighbor reachable timer to 30 minutes
Add periodic IPv6 multicast ping to ff02::11 to get/refresh link-local neighbor info (a sketch of the refresh loop follows below)
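A minimal sketch of such a periodic refresh loop, shelling out to ndisc6 as described; the neighbor list and interval are illustrative, and the actual implementation lives in swss:

```python
import subprocess
import time

NEIGHBORS = [('fe80::1', 'Vlan1000')]  # illustrative address/interface pairs

while True:
    for addr, intf in NEIGHBORS:
        # ndisc6 sends an ICMPv6 Neighbor Solicitation and reports the reply,
        # refreshing the kernel's neighbor state for that address
        subprocess.call(['ndisc6', addr, intf])
    time.sleep(300)  # illustrative refresh interval
```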
* Fix review comments:
Add PORTCHANNEL_INTERFACE interfaces to the IPv6 multicast ping
Fix a formatting issue
* Combine regular L3 interface and portchannel interface for looping
* Add ndisc6 package to vs docker
* [docker-base] Instruct apt-get to NOT install 'recommended' or 'suggested' packages
* Modify docker-fpm-quagga, docker-snmp-sv2 and docker-sonic-vs Dockerfile templates in order to properly install .deb dependencies
* REDIS_SERVER depends on REDIS_TOOLS; ensure REDIS_TOOLS is always installed before REDIS_SERVER
* [vs]: add zebra/quagga/fpmsyncd in supervisord.conf
* setup the hostname for vs docker
* do not save to the disk for redis db
* install ipaddress module in vs docker
* update sonic-sairedis submodule