- Why I did it
Update the routine is_bgp_session_internal() to check the BGP_INTERNAL_NEIGHBOR table.
Additionally, to address the review comment in #5520 (comment):
Add timer settings as well in the internal session templates, and keep them minimal, as these sessions will always be up.
Update the internal test data and add all of it to the template tests.
- How I did it
Updated the APIs and the template files.
- How to verify it
Verified that the internal BGP sessions are displayed correctly by the show commands that use this is_bgp_session_internal() API.
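For illustration, a minimal sketch of the table lookup (assuming the swsssdk `ConfigDBConnector` API; the actual routine in sonic-py-common also handles per-ASIC namespaces):

```python
# Minimal sketch of the check, assuming the swsssdk ConfigDBConnector API.
# The actual routine additionally iterates the per-ASIC namespaces on
# multi-ASIC platforms.
from swsssdk import ConfigDBConnector

def is_bgp_session_internal(bgp_neigh_ip):
    """Return True if the neighbor IP belongs to an internal (inter-ASIC) BGP session."""
    config_db = ConfigDBConnector()
    config_db.connect()
    # BGP_INTERNAL_NEIGHBOR is keyed by the neighbor IP address
    internal_neighbors = config_db.get_table('BGP_INTERNAL_NEIGHBOR')
    return bgp_neigh_ip in internal_neighbors
```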
Added a new multi-ASIC util method "get_back_end_interface_set()" to speed up back-end interface checks by letting the caller cache the back-end interfaces in a set. The caller can then use this set for all subsequent back-end interface checks instead of reading from the Redis DB each time, which becomes a scaling issue in cases such as checking thousands of next-hop routes for filtering purposes.
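A hedged sketch of the intended caching pattern (the route data and helper below are illustrative, not from the PR):

```python
# Sketch of the caching pattern: read the back-end interfaces from the DB
# once, then do cheap set-membership checks for every route.
from sonic_py_common import multi_asic

back_end_intf_set = multi_asic.get_back_end_interface_set()  # one-time DB read

def uses_back_end_interface(next_hop_intf):
    # No Redis round trip here; just an in-memory membership test
    return next_hop_intf in back_end_intf_set

# Illustrative filtering of next-hop routes (example data, not from the PR)
routes = {'10.0.0.0/24': 'PortChannel4001', '192.168.0.0/24': 'Ethernet0'}
internal_routes = {prefix: intf for prefix, intf in routes.items()
                   if uses_back_end_interface(intf)}
```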
**- Why I did it**
On teamd docker restart, swss and syncd need to be restarted as there are dependent resources present.
**- How I did it**
Added teamd as a dependent service of swss.
Updated the docker-wait script to handle the service and its dependent services separately.
Handled the warm-restart case for the dependent services.
**- How to verify it**
Verified the following scenarios with the following testbed:
VM1 ----------------------------[DUT 6100] -----------------------VM2, ping traffic continuous between VMs
1. Stop teamd docker alone
> swss and syncd dockers seen going away
> LAG reference count error messages seen for a while till the swss docker stops.
> Dockers back up.
2. Enable WR mode for teamd. Stop teamd docker alone
> swss, syncd dockers not removed.
> The LAG reference count error messages not seen
> Repeated the teamd docker stop test - same result, no effect on swss/syncd.
3. Stop swss docker.
> swss, teamd, syncd go off - dockers come back correctly, interfaces up
4. Enable WR mode for swss. Stop swss docker
> swss goes off without affecting the syncd/teamd dockers.
5. Config reload
> no reference counter errors seen, dockers come back correctly, with interfaces up
6. Warm reboot, observations below
> swss docker goes off first
> teamd + syncd go off at the end of the WR process.
> dockers come back up fine.
> ping traffic between VMs was NOT HIT
7. Fast reboot, observations below
> teamd goes off first (**confirmed swss does not exit here**)
> swss goes off next
> syncd goes away at the end of the FR process
> dockers come back up fine.
> there is a traffic HIT as per fast-reboot
8. Verified on a multi-ASIC platform the tests above, other than the WR/FB scenarios
With Python 2.7, importing the yaml module was resulting in a huge heap memory allocation per process. As an interim fix, move the yaml import into the function that actually uses the module. This helps reduce the memory footprint of the pmon docker, as it does not use the APIs which need yaml processing.
This issue is not seen when importing yaml with Python 3 and needs to be analyzed further, hence this fix goes into 201911, where we continue to use Python 2.7.
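A sketch of the deferred-import pattern (the function name below is a placeholder, not the actual call site):

```python
# Instead of "import yaml" at module scope, import it lazily inside the one
# function that parses YAML, so processes that never call it (e.g. pmon's
# daemons) avoid the Python 2.7 heap cost of loading the module.
def load_yaml_file(path):          # placeholder name for the real call site
    import yaml                    # deferred: only paid when YAML is needed
    with open(path) as f:
        return yaml.safe_load(f)
```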
* Add sonic_interface.py in sonic-py-common for SONiC interface utilities, keeping the SONiC interface prefix naming convention in one place in py-common so that all modules/applications use the functions defined there.
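A hypothetical illustration of what centralizing the prefix convention looks like (the constant and function names below are assumptions, not necessarily the actual sonic_interface.py contents):

```python
# Hypothetical sketch: keep the interface-name prefixes and the helpers that
# interpret them in one module so callers never hard-code the strings.
FRONT_PANEL_PREFIX = "Ethernet"        # assumed constant names
PORTCHANNEL_PREFIX = "PortChannel"

def front_panel_prefix():
    # Single source of truth for the front-panel port name prefix
    return FRONT_PANEL_PREFIX

def is_portchannel_name(intf_name):
    return intf_name.startswith(PORTCHANNEL_PREFIX)
```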
* [201911] skip_thermalctld for the VS platform as it is not supported
root@vlab-01:/# supervisorctl status
dependent-startup EXITED Aug 18 06:22 AM
rsyslogd RUNNING pid 18, uptime 0:12:26
start EXITED Aug 18 06:22 AM
supervisor-proc-exit-listener RUNNING pid 13, uptime 0:12:27
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
* [201911] After PR https://github.com/Azure/sonic-buildimage/pull/5204, this function is no longer needed, so clean it up.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
Rename get_npu_device_id() to get_asic_device_id() and move it from device_info.py to the new file multi_asic.py in sonic-py-common so that it references the correct valid constants.
a) We should use get_platform() from the new sonic-py-common package.
b) In 201911, the DB connector still uses the db-id-based constructor, as the following PR https://github.com/Azure/sonic-buildimage/pull/4549 has not been cherry-picked yet. So revert the change to use the db id instead of db_name for now.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
As part of consolidating all common Python-based functionality into the new sonic-py-common package, this pull request:
1. Redirects all Python applications/scripts in the sonic-buildimage repo which previously imported sonic_device_util or sonic_daemon_base to instead import sonic-py-common, which was added to the 201911 branch in https://github.com/Azure/sonic-buildimage/pull/5063
2. Replaces all calls to `sonic_device_util.get_platform_info()` with calls to `sonic_py_common.get_platform()`, and removes any calls to `sonic_device_util.get_machine_info()` which are no longer necessary (i.e., those which were only used to pass their results to `sonic_device_util.get_platform_info()`); see the migration sketch after this list.
3. Removes unused imports to the now-deprecated sonic-daemon-base package and sonic_device_util.py module
This is a step toward resolving https://github.com/Azure/sonic-buildimage/issues/4999
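For illustration, the call migration in point 2 looks roughly like this (the exact import style varies per application):

```python
# Before: callers had to fetch machine info and pass it along
#   import sonic_device_util
#   machine_info = sonic_device_util.get_machine_info()
#   platform = sonic_device_util.get_platform_info(machine_info)

# After: a single call into the consolidated package
from sonic_py_common import device_info

platform = device_info.get_platform()
```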
Applications running in the host OS can read the platform identifier from /host/machine.conf. When loading configuration, sonic-config-engine *needs* to read the platform identifier from machine.conf, as it is responsible for populating the value in Config DB.
When an application is running inside a Docker container, the machine.conf file is not accessible, as the /host directory is not mounted, so we need to retrieve the platform identifier from Config DB if get_platform() is called from inside a Docker container. However, we can't simply check whether we're running in a Docker container, because the host OS of the SONiC virtual switch itself runs inside a Docker container. So I refactored `get_platform()` to use the following lookup order (a sketch appears below):
1. Read from the `PLATFORM` environment variable if it exists (which is defined in a virtual switch Docker container)
2. Read from machine.conf if possible (works in the host OS of a standard SONiC image, critical for sonic-config-engine at boot)
3. Read the value from Config DB (needed for Docker containers running in SONiC, as machine.conf is not accessible to them)
- Also fix a typo in daemon_base.py
- Also change `get_hwsku()` to align with `get_platform()`
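A simplified sketch of the lookup order (the two `_read_*` helpers are placeholders for the machine.conf and Config DB reads described above):

```python
import os

def get_platform():
    # 1. Virtual switch Docker containers define PLATFORM in the environment
    platform = os.environ.get("PLATFORM")
    if platform:
        return platform

    # 2. Host OS of a standard image: read /host/machine.conf
    #    (works at boot, before Config DB is populated)
    platform = _read_platform_from_machine_conf()   # placeholder helper
    if platform:
        return platform

    # 3. Inside other Docker containers /host is not mounted, so fall back
    #    to the value stored in Config DB
    return _read_platform_from_config_db()          # placeholder helper
```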
Consolidate common SONiC Python-language functionality into one shared package (sonic-py-common) and eliminate duplicate code.
The package currently includes four modules:
- daemon_base
- device_info
- logger
- task_base
NOTE: This is a combination of all changes from https://github.com/Azure/sonic-buildimage/pull/5003, https://github.com/Azure/sonic-buildimage/pull/5049 and some changes from https://github.com/Azure/sonic-buildimage/pull/5043 backported to align with the 201911 branch. As part of the 201911 port, I am not installing the Python 3 package in the base image or in the VS container, because we do not have pip3 installed, and we do not intend to migrate to Python 3 in 201911.
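A minimal usage sketch of the consolidated package (module names are from the list above; the `Logger` call shown is an assumption about its API):

```python
from sonic_py_common import device_info
from sonic_py_common.logger import Logger

log = Logger()
log.log_info("Running on platform {}".format(device_info.get_platform()))
```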