Update the sonic-sairedis submodule. The following are the commits in the submodule.
0bf336a [syncd_init_common.sh] Use template file to retrieve vars (#683)
4d21a26 [syncd/FlexCounter]: Fix last remove bug (#679)
Remove syncd from the critical process list because the gbsyncd process will exit on platforms without a gearbox.
Closes #5623
Signed-off-by: Guohan Lu <lguohan@gmail.com>
a659219 [SONIC_SFP] adding abstract methods for reading and writing the eeprom address space within platform api (#126)
848f4a6 Add third-party licenses (#138)
c2ecd9a Add license file (#137)
403747a [sonic-platform-common] Add new platform API for SONiC Physical MIB Extension feature (#134)
19b8545 [sonic_y_cable] fix the unpacking (#135)
Signed-off-by: vaibhav-dahiya <vdahiya@microsoft.com>
The psutil library used in process_checker creates a cache for each
process when calling process_iter. So there is a possibility that a
process exists when process_iter is called but no longer exists when
cmdline is called, which raises a NoSuchProcess exception. This commit
fixes the issue.
Signed-off-by: bingwang <bingwang@microsoft.com>
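For reference, a minimal sketch of the defensive pattern (the function name and matching logic are illustrative, not the exact process_checker code):
```python
import psutil

def find_process_by_cmdline(expected_cmd):
    """Return the first process whose command line contains expected_cmd, tolerating races."""
    for proc in psutil.process_iter():
        try:
            # The process may have exited between process_iter() and cmdline();
            # psutil then raises NoSuchProcess, which we skip instead of crashing.
            if expected_cmd in ' '.join(proc.cmdline()):
                return proc
        except psutil.NoSuchProcess:
            continue
    return None
```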
* Fix for LLDP advertisements being sent with wrong information.
Since lldpd starts before lldpmgrd, some advertisement packets might be sent with default values, e.g. the MAC address as the Port ID.
This fix holds the packets from being sent by lldpd until all interfaces are configured by lldpmgrd.
Signed-off-by: Shlomi Bitton <shlomibi@nvidia.com>
* Fix comments
* Fix unit-test output that caused a failure during the build
* Add a 'run_cmd' function and use it (see the sketch after this list)
* Resume lldpd even if the port init timeout is reached
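A minimal sketch of what such a `run_cmd` helper could look like (names and error handling are illustrative, not the exact lldpmgrd code):
```python
import subprocess
import syslog

def run_cmd(cmd, raise_on_error=False):
    """Run a shell command, log a failure, and optionally raise on a non-zero exit code."""
    proc = subprocess.Popen(cmd, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = proc.communicate()
    if proc.returncode != 0:
        syslog.syslog(syslog.LOG_ERR, "Command '{}' failed (rc={}): {}".format(
            cmd, proc.returncode, stderr))
        if raise_on_error:
            raise RuntimeError("Command failed: {}".format(cmd))
    return stdout
```
The daemon could then use a helper like this to resume lldpd's advertisements once all ports are configured (assuming lldpcli exposes such a command).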
Advance sonic-swss-common submodule by adding the following commits
3ec30ef Deprecate RedisClient and remove unused header file (#399)
165a679 Schema update for BGP internal neighbor table (#389)
262e330 Fix SonicV2Connector interfaces (#396)
Advance sonic-sairedis submodule by adding the following commits
bc3e044 [Sai]: Change Sai::set log to level INFO (#680)
b16bc8b Clean code: remove unused header file (#678)
40439b4 [syncd] Remove depreacated dependency on swss::RedisClient (#681)
1b6fc2e [syncd] Add supports of bulk api in syncd (#656)
a9f69c1 [syncd] Add to handle FDB MOVE notification (#670)
c7ef5e9 [gbsyncd] exit with zero when platform has no gearbox (#676)
57228fd [gbsyncd]: add missing python dependency (#675)
02a57a6 [vs] Add CRM SAI attributes to virtual switch interface (#673)
609445a fix boot type for fast boot (#674)
1325cdf Add support for saiplayer bulk API and add performance timers (#666)
1d84b90 Add ZeroMQ communication channel between sairedis and syncd (#659)
017056a Support System ports config (#657)
0f3668f Enable fabric counter for syncd's FlexCounter (#669)
Find LogicalLinks in the minigraph and parse the port information. A new field called `mux_cable` is added to each port's entry in the PORT table in CONFIG_DB:
```
PORT|Ethernet0: {
"alias": "Ethernet4/1"
...
"mux_cable": "true"
}
```
If a mux cable is present on a port, the value for `mux_cable` will be `"true"`. If no mux cable is present, the attribute will either be omitted (default behavior) or set to `"false"`.
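As an illustration, a consumer could read this field from CONFIG_DB along these lines (a sketch using swsssdk's ConfigDBConnector, not part of the minigraph parser itself):
```python
from swsssdk import ConfigDBConnector

def ports_with_mux_cable():
    """Return the names of ports whose PORT table entry marks a mux cable as present."""
    config_db = ConfigDBConnector()
    config_db.connect()
    port_table = config_db.get_table('PORT')
    # A missing attribute means no mux cable, matching the default behavior above.
    return [name for name, attrs in port_table.items()
            if attrs.get('mux_cable', 'false') == 'true']
```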
The current implementation of the logger class is based on the standard Python syslog library.
Thus, the logger class can be instantiated in different places and shares the same context across the entire process.
This means that reducing the log severity level will affect other modules that use the logging facility.
**- Why I did it**
* To fix syslog implicit min priority override
**- How I did it**
* Added per instance log severity check
**- How to verify it**
1. Run code snippet
```
from sonic_py_common import logger
log1 = logger.Logger(log_identifier='myApp1')
log1.set_min_log_priority_debug()
log1.log_error("=> this is error")
log1.log_warning("=> this is warning")
log1.log_notice("=> this is notice")
log1.log_info("=> this is info")
log1.log_debug("=> this is debug")
log2 = logger.Logger(
    log_identifier='myApp2',
    log_facility=logger.Logger.LOG_FACILITY_DAEMON,
    log_option=(logger.Logger.LOG_OPTION_NDELAY | logger.Logger.LOG_OPTION_PID)
)
log2.log_error("=> this is error")
log2.log_warning("=> this is warning")
log2.log_notice("=> this is notice")
log2.log_info("=> this is info")
log2.log_debug("=> this is debug")
```
2. Sample output:
```
Oct 23 15:08:30.447301 sonic ERR myApp1: => this is error
Oct 23 15:08:30.447908 sonic WARNING myApp1: => this is warning
Oct 23 15:08:30.448305 sonic NOTICE myApp1: => this is notice
Oct 23 15:08:30.448696 sonic INFO myApp1: => this is info
Oct 23 15:08:30.449063 sonic DEBUG myApp1: => this is debug
Oct 23 15:08:30.449442 sonic ERR myApp2[19178]: => this is error
Oct 23 15:08:30.449819 sonic WARNING myApp2[19178]: => this is warning
Oct 23 15:08:30.450183 sonic NOTICE myApp2[19178]: => this is notice
```
Signed-off-by: Nazarii Hnydyn <nazariig@nvidia.com>
The orchagent and syncd need to have the same default synchronous mode configuration. This PR adds a template file to translate the default value in CONFIG_DB (an empty field) into an explicit mode so that orchagent and syncd have the same default mode.
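Conceptually, the template maps an unset or empty field to an explicit default. A rough Python/Jinja2 illustration of that mapping (the field name and template text here are assumptions, not the actual template shipped with syncd):
```python
from jinja2 import Template

# Hypothetical template: an empty/unset synchronous_mode resolves to an explicit "disable".
SYNC_MODE_TEMPLATE = Template(
    "{% if DEVICE_METADATA and DEVICE_METADATA['localhost'].get('synchronous_mode') == 'enable' %}"
    "enable{% else %}disable{% endif %}"
)

def resolve_sync_mode(config_db_dump):
    """Translate the CONFIG_DB value (possibly empty) into an explicit mode string."""
    return SYNC_MODE_TEMPLATE.render(**config_db_dump)

print(resolve_sync_mode({"DEVICE_METADATA": {"localhost": {}}}))  # prints "disable"
```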
When detecting a new SFP insertion, read its SFP type and DOM capability from EEPROM again.
An SFP object is initialized to a certain type even if no SFP is present. A case could be:
1. An SFP object is initialized to the QSFP type by default when there is no SFP present.
2. The user inserts an SFP with an adapter into this QSFP port.
3. The SFP object fails to read the EEPROM because it still treats itself as a QSFP.
This PR fixes this issue.
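A simplified sketch of the idea behind the fix (class and method names are illustrative, not the actual vendor platform API code):
```python
class Sfp:
    SFP_TYPE = 'SFP'
    QSFP_TYPE = 'QSFP'

    def __init__(self, index):
        self.index = index
        self.sfp_type = self.QSFP_TYPE   # default guess while no module is present
        self.dom_capability = None

    def _read_type_and_dom_from_eeprom(self):
        # Vendor-specific EEPROM access goes here; returns (sfp_type, dom_capability).
        raise NotImplementedError

    def reinit(self):
        """Re-read type and DOM capability on insertion so an SFP behind a QSFP adapter is detected."""
        self.sfp_type, self.dom_capability = self._read_type_and_dom_from_eeprom()
```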
- Enable thermalctld support for our platforms
- Fix Chassis.get_num_sfp, which had an off-by-one error
- Implement read_eeprom and write_eeprom in SfpBase
- Refactor Psus and PsuSlots; PSUs are now detected and their metadata reported
- Improvements to modular support
Co-authored-by: Zhi Yuan Carl Zhao <zyzhao@arista.com>
**- Why I did it**
Install all host services and their data files in package format rather than file-by-file
**- How I did it**
- Create sonic-host-services Python wheel package, currently including procdockerstatsd
- Also add the framework for unit tests by adding one simple procdockerstatsd test case
- Create sonic-host-services-data Debian package which is responsible for installing the related systemd unit files to control the services in the Python wheel. This package will also be responsible for installing any Jinja2 templates and other data files needed by the host services.
**- Why I did it**
On teamd docker restart, swss and syncd need to be restarted as there are dependent resources present.
**- How I did it**
Add teamd as a dependent service of swss.
Update the docker-wait script to handle the main service and its dependent services separately.
Handle the warm-restart case for the dependent services.
**- How to verify it**
Verified the following scenarios with the testbed below:
VM1 ----------------------------[DUT 6100]----------------------------- VM2, with continuous ping traffic between the VMs
1. Stop the teamd docker alone
> swss and syncd dockers are seen going away.
> LAG reference count error messages are seen for a while until the swss docker stops.
> Dockers come back up.
2. Enable WR mode for teamd. Stop the teamd docker alone.
> swss and syncd dockers are not removed.
> LAG reference count error messages are not seen.
> Repeated the stop-teamd-docker test - same result, no effect on swss/syncd.
3. Stop the swss docker.
> swss, teamd and syncd go down - dockers come back correctly, interfaces up.
4. Enable WR mode for swss. Stop the swss docker.
> swss goes down without affecting the syncd/teamd dockers.
5. Config reload
> No reference counter errors seen; dockers come back correctly, with interfaces up.
6. Warm reboot, observations below
> swss docker goes down first.
> teamd + syncd go down at the end of the WR process.
> Dockers come back up fine.
> Ping traffic between the VMs was NOT HIT.
7. Fast reboot, observations below
> teamd goes down first (**confirmed swss does not exit here**)
> swss goes down next.
> syncd goes away at the end of the FR process.
> Dockers come back up fine.
> There is a traffic HIT, as expected for fast-reboot.
8. Verified the tests above (other than the WR/FB scenarios) on a multi-asic platform.
**- Why I did it**
If we run the CLI commands `sudo config feature autorestart snmp disabled/enabled` or `sudo config feature autorestart swss disabled/enabled`, the SNMP container will be stopped and started. This behavior was not expected, since we updated only the `auto_restart` field, not the `state` field, in the `FEATURE` table. The reason behind this issue is that whenever either the `state` field or the `auto_restart` field was updated, the function `update_feature_state(...)` was invoked, which then starts the snmp.timer service.
The snmp.timer service will first stop snmp.service and later start snmp.service.
In order to solve this issue, the function `update_feature_state(...)` is now invoked only if the `state` field in the `FEATURE` table was updated.
**- How I did it**
When the daemon `hostcfgd` is activated, the value of the `state` field in the `FEATURE` table for each container is cached. Each time the function `feature_state_handler(...)` is invoked, it determines whether the `state` field of a container has changed. If it has, the function `update_feature_state(...)` is invoked and the cached value is updated. Otherwise, nothing is done.
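A minimal sketch of that caching check (class and argument names are illustrative, not the exact hostcfgd code):
```python
class FeatureStateCache:
    """Invoke update_feature_state() only when a feature's 'state' field actually changes."""

    def __init__(self, config_db):
        # Cache the initial 'state' value of every feature when hostcfgd starts.
        self.cached_state = {name: attrs.get('state', '')
                             for name, attrs in config_db.get_table('FEATURE').items()}

    def feature_state_handler(self, feature_name, attrs, update_feature_state):
        new_state = attrs.get('state', '')
        if self.cached_state.get(feature_name) == new_state:
            # Only auto_restart (or another field) changed; do not touch the service.
            return
        self.cached_state[feature_name] = new_state
        update_feature_state(feature_name, new_state)
```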
**- How to verify it**
We can run the CLI commands `sudo config feature autorestart snmp disabled/enabled` or `sudo config feature autorestart swss disabled/enabled` to check whether the SNMP container is stopped and started. We can also run the CLI commands `sudo config feature state snmp disabled/enabled` or `sudo config feature state swss disabled/enabled` to check whether the container is stopped and restarted.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
**- Why I did it**
To introduce dynamic support of BBR functionality into bgpcfgd.
BBR adds `neighbor PEER_GROUP allowas-in 1` for all BGP peer-groups that point to a T0.
Now we can add and remove this configuration based on a CONFIG_DB entry.
**- How I did it**
I introduced a new CONFIG_DB entry:
- table name: "BGP_BBR"
- key value: "all". Currently only "all" is supported, which means that all peer-groups that point to T0s will be updated
- data value: a dictionary: {"status": "status_value"}, where status_value can be either "enabled" or "disabled"
Initially, when bgpcfgd starts, it reads the initial BBR status value from [constants.yml](https://github.com/Azure/sonic-buildimage/pull/5626/files#diff-e6f2fe13a6c276dc2f3b27a5bef79886f9c103194be4fcb28ce57375edf2c23cR34). Then you can control the BBR status by changing the "BGP_BBR" table in the CONFIG_DB (see the examples below).
bgpcfgd knows which peer-groups to change from [constants.yml](https://github.com/Azure/sonic-buildimage/pull/5626/files#diff-e6f2fe13a6c276dc2f3b27a5bef79886f9c103194be4fcb28ce57375edf2c23cR39). The dictionary contains peer-group names as keys and a list of address families as values. So when bgpcfgd gets a request to change the BBR state, it changes the state only for the peer-groups listed in the constants.yml dictionary (and only for the address families from the peer-group value).
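A simplified sketch of how the BBR state could be rendered into FRR configuration for those peer-groups (illustrative only; the real managers_bbr.py differs):
```python
def build_bbr_commands(status, bbr_peer_groups):
    """Render allowas-in config for the peer-groups defined in constants.yml.

    bbr_peer_groups is assumed to look like {"PEER_V4": ["ipv4"], "PEER_V6": ["ipv6"]}.
    """
    no_prefix = "" if status == "enabled" else "no "
    commands = []
    for peer_group, address_families in bbr_peer_groups.items():
        for af in address_families:
            commands.append("address-family %s" % af)
            commands.append(" %sneighbor %s allowas-in 1" % (no_prefix, peer_group))
            commands.append("exit-address-family")
    return commands
```
After pushing the rendered configuration through vtysh, bgpcfgd issues `clear bgp peer-group <name> soft in` for each affected peer-group, as the logs in the verification section below show.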
**- How to verify it**
Initially, when we start SONiC, FRR has BBR enabled for PEER_V4 and PEER_V6:
```
admin@str-s6100-acs-1:~$ vtysh -c 'show run' | egrep 'PEER_V.? allowas'
neighbor PEER_V4 allowas-in 1
neighbor PEER_V6 allowas-in 1
```
Then we apply the following configuration to the DB:
```
admin@str-s6100-acs-1:~$ cat disable.json
{
"BGP_BBR": {
"all": {
"status": "disabled"
}
}
}
admin@str-s6100-acs-1:~$ sonic-cfggen -j disable.json -w
```
The log output is:
```
Oct 14 18:40:22.450322 str-s6100-acs-1 DEBUG bgp#bgpcfgd: Received message : '('all', 'SET', (('status', 'disabled'),))'
Oct 14 18:40:22.450620 str-s6100-acs-1 DEBUG bgp#bgpcfgd: execute command '['vtysh', '-f', '/tmp/tmpmWTiuq']'.
Oct 14 18:40:22.681084 str-s6100-acs-1 DEBUG bgp#bgpcfgd: execute command '['vtysh', '-c', 'clear bgp peer-group PEER_V4 soft in']'.
Oct 14 18:40:22.904626 str-s6100-acs-1 DEBUG bgp#bgpcfgd: execute command '['vtysh', '-c', 'clear bgp peer-group PEER_V6 soft in']'.
```
Check the FRR configuration and see that no allowas parameters are present:
```
admin@str-s6100-acs-1:~$ vtysh -c 'show run' | egrep 'PEER_V.? allowas'
admin@str-s6100-acs-1:~$
```
Then we apply the enabling configuration back:
```
admin@str-s6100-acs-1:~$ cat enable.json
{
"BGP_BBR": {
"all": {
"status": "enabled"
}
}
}
admin@str-s6100-acs-1:~$ sonic-cfggen -j enable.json -w
```
The log output:
```
Oct 14 18:40:41.074720 str-s6100-acs-1 DEBUG bgp#bgpcfgd: Received message : '('all', 'SET', (('status', 'enabled'),))'
Oct 14 18:40:41.074720 str-s6100-acs-1 DEBUG bgp#bgpcfgd: execute command '['vtysh', '-f', '/tmp/tmpDD6SKv']'.
Oct 14 18:40:41.587257 str-s6100-acs-1 DEBUG bgp#bgpcfgd: execute command '['vtysh', '-c', 'clear bgp peer-group PEER_V4 soft in']'.
Oct 14 18:40:42.042967 str-s6100-acs-1 DEBUG bgp#bgpcfgd: execute command '['vtysh', '-c', 'clear bgp peer-group PEER_V6 soft in']'.
```
Check the FRR configuration and see that the BBR configuration is back:
```
admin@str-s6100-acs-1:~$ vtysh -c 'show run' | egrep 'PEER_V.? allowas'
neighbor PEER_V4 allowas-in 1
neighbor PEER_V6 allowas-in 1
```
*** The test coverage ***
Below is the test coverage
```
---------- coverage: platform linux2, python 2.7.12-final-0 ----------
Name Stmts Miss Cover
----------------------------------------------------
bgpcfgd/__init__.py 0 0 100%
bgpcfgd/__main__.py 3 3 0%
bgpcfgd/config.py 78 41 47%
bgpcfgd/directory.py 63 34 46%
bgpcfgd/log.py 15 3 80%
bgpcfgd/main.py 51 51 0%
bgpcfgd/manager.py 41 23 44%
bgpcfgd/managers_allow_list.py 385 21 95%
bgpcfgd/managers_bbr.py 76 0 100%
bgpcfgd/managers_bgp.py 193 193 0%
bgpcfgd/managers_db.py 9 9 0%
bgpcfgd/managers_intf.py 33 33 0%
bgpcfgd/managers_setsrc.py 45 45 0%
bgpcfgd/runner.py 39 39 0%
bgpcfgd/template.py 64 11 83%
bgpcfgd/utils.py 32 24 25%
bgpcfgd/vars.py 1 0 100%
----------------------------------------------------
TOTAL 1128 530 53%
```
**- Which release branch to backport (provide reason below if selected)**
- [ ] 201811
- [x] 201911
- [x] 202006
The issue was that we were relying on the port_alias_asic_map dictionary,
but that dictionary can no longer be used because the alias name format has changed.
Fix the port alias mapping as needed.
Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
Use the correct chassisdb.conf path while bringing up the chassis_db service on a VoQ modular switch.
Resolves #5631
Signed-off-by: Honggang Xu <hxu@arista.com>
We now read the base MAC and product name from the EEPROM data. The data read from the EEPROM contains multiple "\0" characters at the end; they need to be trimmed so the string is clean and displays correctly.
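The trimming itself amounts to stripping the trailing NUL padding before displaying the value; a small sketch (not the exact EEPROM-decoding code):
```python
def trim_eeprom_string(raw):
    """Strip trailing '\0' padding (and surrounding whitespace) from an EEPROM field."""
    if isinstance(raw, bytes):
        raw = raw.decode('ascii', errors='ignore')
    return raw.rstrip('\x00').strip()

print(trim_eeprom_string(b"PRODUCT-NAME\x00\x00\x00"))  # prints "PRODUCT-NAME"
```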
There is currently a bug where messages from swss with priority lower than the current log level are still being counted against the syslog rate-limiting threshold. This leads to rate limiting in syslog when the rate-limiting conditions have not been met, which causes several sonic-mgmt tests to fail since they depend on LogAnalyzer. It also omits potentially useful information from the syslog. Rate-limiting only messages of level INFO and lower allows these tests to pass successfully.
Signed-off-by: Lawrence Lee <lawlee@microsoft.com>
**- Why I did it**
I was asked to change the "Allow list" prefix-list generation rule.
Previously we generated the rules using the following method:
```
For each {prefix}/{masklen} we would generate the prefix-rule
permit {prefix}/{masklen} ge {masklen}+1
Example:
Prefix 1.2.3.4/24 would have following prefix-list entry generated
permit 1.2.3.4/24 ge 25
```
But we discovered that the old rule doesn't work for all the cases we have.
So we introduced the new rule:
```
For an ipv4 entry:
For mask < 32, we will add 'le 32' to cover all prefix masks to be sent by the T0
For mask = 32, we will not add any 'le mask'
For an ipv6 entry:
For mask < 128, we will add 'le 128' to cover all prefix masks to be sent by the T0
For mask = 128, we will not add any 'le mask'
```
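Expressed as a function, the new rule can be sketched as follows (illustrative; the real bgpcfgd allow-list manager also handles sequence numbers and deny rules):
```python
import ipaddress

def allow_prefix_rule(prefix):
    """Return the permit clause for one allow-list prefix, per the new rule above."""
    network = ipaddress.ip_network(prefix)
    max_len = 32 if network.version == 4 else 128
    if network.prefixlen == max_len:
        return "permit %s" % prefix                    # full-length mask: no 'le' clause
    return "permit %s le %d" % (prefix, max_len)       # cover all longer masks sent by the T0

assert allow_prefix_rule("10.20.0.0/16") == "permit 10.20.0.0/16 le 32"
assert allow_prefix_rule("fc01:10::/64") == "permit fc01:10::/64 le 128"
```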
**- How I did it**
I changed the prefix-list entry generation function and introduced a test for the changed function.
**- How to verify it**
1. Build an image and put it on your DUT.
2. Create a file test_schema.conf with the test configuration:
```
{
"BGP_ALLOWED_PREFIXES": {
"DEPLOYMENT_ID|0|1010:1010": {
"prefixes_v4": [
"10.20.0.0/16",
"10.50.1.0/29"
],
"prefixes_v6": [
"fc01:10::/64",
"fc02:20::/64"
]
},
"DEPLOYMENT_ID|0": {
"prefixes_v4": [
"10.20.0.0/16",
"10.50.1.0/29"
],
"prefixes_v6": [
"fc01:10::/64",
"fc02:20::/64"
]
}
}
}
```
3. Apply the configuration with the command:
```
sonic-cfggen -j test_schema.conf --write-to-db
```
4. Check that your BGP configuration has the following prefix-list entries:
```
admin@str-s6100-acs-1:~$ show runningconfiguration bgp | grep PL_ALLOW
ip prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_1010:1010_V4 seq 10 deny 0.0.0.0/0 le 17
ip prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_1010:1010_V4 seq 20 permit 127.0.0.1/32
ip prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_1010:1010_V4 seq 30 permit 10.20.0.0/16 le 32
ip prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_1010:1010_V4 seq 40 permit 10.50.1.0/29 le 32
ip prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_empty_V4 seq 10 deny 0.0.0.0/0 le 17
ip prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_empty_V4 seq 20 permit 127.0.0.1/32
ip prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_empty_V4 seq 30 permit 10.20.0.0/16 le 32
ip prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_empty_V4 seq 40 permit 10.50.1.0/29 le 32
ipv6 prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_1010:1010_V6 seq 10 deny ::/0 le 59
ipv6 prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_1010:1010_V6 seq 20 deny ::/0 ge 65
ipv6 prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_1010:1010_V6 seq 30 permit fc01:10::/64 le 128
ipv6 prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_1010:1010_V6 seq 40 permit fc02:20::/64 le 128
ipv6 prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_empty_V6 seq 10 deny ::/0 le 59
ipv6 prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_empty_V6 seq 20 deny ::/0 ge 65
ipv6 prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_empty_V6 seq 30 permit fc01:10::/64 le 128
ipv6 prefix-list PL_ALLOW_LIST_DEPLOYMENT_ID_0_COMMUNITY_empty_V6 seq 40 permit fc02:20::/64 le 128
```
Co-authored-by: Pavel Shirshov <pavel.contrib@gmail.com>
- Fixes the dependency issue in the DPKG dependent SHA calculation.
- The dependent SHA value of a package is derived from the content of all its dependent packages.
SHA_HASH => an SHA value derived from a module/package's dependent files (.flags, .sha and .smsha files).
SHA_VALUE => an SHA value derived from a module/package's dependent packages (.deb, .whl, etc.).
E.g., for the SNMP docker, the SNMP and SNMPD packages are the dependencies:
$(DOCKER_SNMP)_DEPENDS += $(SNMP) $(SNMPD)
So the SHA value calculation of the SNMP docker includes the SHA values of the SNMP and SNMPD packages as well, so that any change in those packages triggers a rebuild of the docker.
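Conceptually, the calculation combines a package's own hash with the hashes of its dependencies; a rough Python illustration (the actual implementation lives in the Makefile-based DPKG cache framework and may differ):
```python
import hashlib

def dependent_sha(own_sha, dependency_shas):
    """Combine a package's own SHA (SHA_HASH) with its dependencies' SHAs (SHA_VALUE)."""
    combined = hashlib.sha1()
    combined.update(own_sha.encode())
    for sha in sorted(dependency_shas):   # sort so the result does not depend on ordering
        combined.update(sha.encode())
    return combined.hexdigest()

# Any change in the SNMP or SNMPD package SHA changes the docker's dependent SHA,
# which triggers a rebuild of the SNMP docker.
```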
When a large number of changes occur to the ACL table of Config DB, caclmgrd will get flooded with notifications, and previously, it would regenerate and apply the iptables rules for each change, which is unnecessary, as the iptables rules should only get applied once after the last change notification is received. If the ACL table contains a large number of control plane ACL rules, this could cause a large delay in caclmgrd getting the rules applied.
This patch causes caclmgrd to delay updating the iptables rules until it has not received a change notification for at least 0.5 seconds.
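A minimal sketch of that debounce pattern (class and constant names are illustrative, not the exact caclmgrd code):
```python
import threading

UPDATE_DELAY_SECS = 0.5

class IptablesUpdater:
    """Coalesce bursts of ACL change notifications into a single iptables update."""

    def __init__(self, apply_rules):
        self.apply_rules = apply_rules   # callable that regenerates and applies iptables rules
        self.timer = None
        self.lock = threading.Lock()

    def notify_change(self):
        # Restart the timer on every notification; the rules are applied only after
        # UPDATE_DELAY_SECS have elapsed with no further notifications.
        with self.lock:
            if self.timer is not None:
                self.timer.cancel()
            self.timer = threading.Timer(UPDATE_DELAY_SECS, self.apply_rules)
            self.timer.start()
```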
The `get_serial_number()` method in the ChassisBase and ModuleBase classes was redundant, as the `get_serial()` method is inherited from the DeviceBase class. This method was removed from the base classes in sonic-platform-common and the submodule was updated in https://github.com/Azure/sonic-buildimage/pull/5625.
This PR aligns the existing vendor platform API implementations to remove the `get_serial_number()` methods and ensure the `get_serial()` methods are implemented, if they weren't previously.
Note that this PR does not modify the Dell platform API implementations, as this will be handled as part of https://github.com/Azure/sonic-buildimage/pull/5609
Example of a syslog message from Mellanox SAI:
"Oct 7 15:39:11.482315 arc-switch1025 INFO syncd#supervisord: syncd Oct 07 15:39:11 NOTICE SAI_BUFFER: mlnx_sai_buffer.c[3893]- mlnx_clear_buffer_pool_stats: Clear pool stats pool id:1"
The message is logged as INFO by supervisord even though it actually contains NOTICE and
the date again. This confusion happens because, if SAI is not built to log
to syslog, it logs everything to stdout in the format "[date] [level]
[message]", and supervisord forwards it to syslog at level INFO.
New logs look like:
"Oct 7 15:40:21.488055 arc-switch1025 NOTICE syncd#SDK [SAI_BUFFER]: mlnx_sai_buffer.c[3893]- mlnx_clear_buffer_pool_stats: Clear pool stats pool id:17"
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
- Make DellEMC platform modules Python3 compliant.
- Change return type of PSU Platform APIs in DellEMC Z9264, S5232 and Thermal Platform APIs in S5232 to 'float'.
- Remove multiple copies of pcisysfs.py.
- PEP8 style changes for utility scripts.
- Build and install Python3 version of sonic_platform package.
- Fix minor Platform API issues.
- The issue is that the SAI package content is changed without changing its version. The DPKG cache then holds the wrong version of the SAI package.
- The fix is to include the SAI package content header in the SHA calculation. This detects any change in the SAI package.
It should no longer be necessary to explicitly install the 'wheel' package, as SONiC packages built as wheels should specify 'wheel' as a dependency in their setup.py files. Therefore, pip[3] should check for the presence of 'wheel' and install it if it isn't present before attempting to call 'setup.py bdist_wheel' to install the package.
[swss]
[acl] Replace IP_PROTOCOL with NEXT_HEADER for IPv6 ACL tables (#1458)
[acl] Refactor port OID retrieval into aclorch (#1462)
Fix issue #5157 by identifying the dependency among objects and avoiding releasing an object still being referenced (#1440)
[mock tests] Update MockDBConnector to match new swsscommon interface (#1465)
[swss-common]
netlink: Setting nl_socket buffer size to 3M from 2M (#391)
Added support in Swig file to cast Selectable object to Subscriber Table object (#394)
[warm reboot] Warm Reboot Support for EVPN VXLAN (#350)
Implement DBInterface/SonicV2Connector in C++ (#387)
Fix memory leak if a RedisCommand object were to be reused (#392)
Signed-off-by: Danny Allen <daall@microsoft.com>
This PR makes two changes:
- Store Jinja2 cache in LOGLEVEL DB instead of STATE DB
- Store bytecode cache encoded in base64
Tested with the following command: "redis-dump -d 3 -k JINJA2_CACHE"
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
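As an illustration of the approach, a Jinja2 bytecode cache that stores base64-encoded bytecode in a Redis hash could look roughly like this (a sketch; the key layout and DB used by the real cache may differ):
```python
import base64
from jinja2 import BytecodeCache

class RedisBytecodeCache(BytecodeCache):
    """Store Jinja2 template bytecode base64-encoded under a JINJA2_CACHE hash."""

    def __init__(self, redis_client, hash_name='JINJA2_CACHE'):
        self.redis = redis_client
        self.hash_name = hash_name

    def load_bytecode(self, bucket):
        encoded = self.redis.hget(self.hash_name, bucket.key)
        if encoded is not None:
            bucket.bytecode_from_string(base64.b64decode(encoded))

    def dump_bytecode(self, bucket):
        encoded = base64.b64encode(bucket.bytecode_to_string())
        self.redis.hset(self.hash_name, bucket.key, encoded)
```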