We now read the base MAC and product name from EEPROM data. The data read from the EEPROM contains multiple trailing "\0" characters, which need to be trimmed so the strings are clean and display correctly.
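A minimal sketch of the kind of trimming involved (the field decoding here is illustrative, not the exact vendor code):

```
def clean_eeprom_field(raw: bytes) -> str:
    """Decode an EEPROM field and strip trailing NUL padding."""
    # EEPROM fields are often fixed-width and padded with '\0' bytes;
    # rstrip('\x00') removes that padding so the string displays cleanly.
    return raw.decode("ascii", errors="ignore").rstrip("\x00").strip()

# Example: a product-name field padded with NUL bytes
raw_product_name = b"MSN2700\x00\x00\x00\x00\x00\x00\x00\x00\x00"
print(clean_eeprom_field(raw_product_name))  # -> "MSN2700"
```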
The `get_serial_number()` method in the ChassisBase and ModuleBase classes was redundant, as the `get_serial()` method is inherited from the DeviceBase class. This method was removed from the base classes in sonic-platform-common and the submodule was updated in https://github.com/Azure/sonic-buildimage/pull/5625.
This PR aligns the existing vendor platform API implementations to remove the `get_serial_number()` methods and ensure the `get_serial()` methods are implemented, if they weren't previously.
Note that this PR does not modify the Dell platform API implementations, as this will be handled as part of https://github.com/Azure/sonic-buildimage/pull/5609
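As a hedged illustration (not any specific vendor's code), an implementation that previously exposed `get_serial_number()` might now override `get_serial()` roughly like this:

```
from sonic_platform_base.chassis_base import ChassisBase

class Chassis(ChassisBase):
    """Sketch only; the EEPROM attribute and its accessor are hypothetical."""

    def __init__(self):
        super(Chassis, self).__init__()
        self._eeprom = None  # a vendor-specific EEPROM object would be created here

    def get_serial(self):
        # get_serial() is inherited from DeviceBase; vendors override it
        # instead of the removed get_serial_number().
        return self._eeprom.get_serial() if self._eeprom else "N/A"
```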
Example of syslog message from Mellanox SAI:
"Oct 7 15:39:11.482315 arc-switch1025 INFO syncd#supervisord: syncd Oct 07 15:39:11 NOTICE SAI_BUFFER: mlnx_sai_buffer.c[3893]- mlnx_clear_buffer_pool_stats: Clear pool stats pool id:1"
There is an INFO log from supervisord which actually contains NOTICE and
the date again. This confusion happens because if SAI is not built to log
to syslog, it logs everything to stdout in the format "[date] [level]
[message]", so supervisord forwards it to syslog with level INFO.
New logs look like:
"Oct 7 15:40:21.488055 arc-switch1025 NOTICE syncd#SDK [SAI_BUFFER]: mlnx_sai_buffer.c[3893]- mlnx_clear_buffer_pool_stats: Clear pool stats pool id:17"
Signed-off-by: Stepan Blyschak <stepanb@nvidia.com>
- Make DellEMC platform modules Python3 compliant.
- Change return type of PSU Platform APIs in DellEMC Z9264, S5232 and Thermal Platform APIs in S5232 to 'float'.
- Remove multiple copies of pcisysfs.py.
- PEP8 style changes for utility scripts.
- Build and install Python3 version of sonic_platform package.
- Fix minor Platform API issues.
- The issue is that the SAI package content is changed without changing its version, so DPKG caches the wrong version of the SAI package.
- The fix is to include the SAI package content header in the SHA calculation. This detects any change in the SAI package.
It should no longer be necessary to explicitly install the 'wheel' package, as SONiC packages built as wheels should specify 'wheel' as a dependency in their setup.py files. Therefore, pip[3] should check for the presence of 'wheel' and install it if it isn't present before attempting to call 'setup.py bdist_wheel' to install the package.
Bring up the chassisdb service on a SONiC switch according to the design in the
Distributed Forwarding in VoQ Arch HLD.
Signed-off-by: Honggang Xu <hxu@arista.com>
**- Why I did it**
To bring up new ChassisDB service in sonic as designed in ['Distributed forwarding in a VOQ architecture HLD' ](90c1289eaf/doc/chassis/architecture.md).
**- How I did it**
Implement the section 2.3.1 Global DB Organization of the VOQ architecture HLD.
**- How to verify it**
ChassisDB service won't start without the chassisdb.conf file on existing platforms.
ChassisDB service is accessible with the global.conf file in the distributed architecture.
Signed-off-by: Honggang Xu <hxu@arista.com>
We were building our own python-click package because we needed features/bug fixes available as of version 7.0.0, but the most recent version available from Debian was in the 6.x range.
"Click" is needed for building/testing and installing sonic-utilities. Now that we are building sonic-utilities as a wheel, with Click specified as a dependency in the setup.py file, setuptools will install a more recent version of Click in the sonic-slave-buster container when building the package, and pip will install a more recent version of Click in the host OS of SONiC when installing the sonic-utilities package. Also, we don't need to worry about installing the Python 2 or 3 version of the package, as the proper one will be installed as necessary.
- SN3800 vs Cisco9236 - no link copper or optics - start sending IDLE before PHY_UP for specific OPNs
Signed-off-by: Nazarii Hnydyn <nazariig@nvidia.com>
During platform deinitialization, dell_ich is not removed properly, so when the S6100 platform is initialized again, the ICH driver sysfs attributes are not attached. Because of this, get_transceiver_change_event returns an error, which leads xcvrd to crash.
Make Python3-swsscommon available in image.
```
admin@str-s6000-acs-13:~$ sudo apt-cache search swss
libswsscommon - This package contains Switch State Service common library.
python-swsscommon - This package contains Switch State Service common Python2 library.
python3-swsscommon - This package contains Switch State Service common Python3 library.
admin@str-s6000-acs-13:~$
```
Some platforms don't leverage the Broadcom LED coprocessor.
However, ledinit will try to load a nonexistent file and exit with an
error code.
This change is mostly a cosmetic fix.
- How to verify it
Boot a platform without the configuration and verify in the syslog that the exit status of ledinit is 0
Boot a platform with the configuration and verify in the syslog that the exit status of ledinit is 0 and the leds are working.
Verified by adding a dumb led_proc_init.soc on an Arista platform which usually doesn't use it.
**- Why I did it**
- The platform API implementation uses sonic-cfggen to get the platform name and SKU name, which fails when the database is not available.
- The chassis name is not correctly assigned; it should be assigned from the EEPROM TLV "Product Name" instead of the SKU name.
- The chassis model is not implemented; it should be assigned from the EEPROM TLV "Part Number".
**- How I did it**
1. Chassis
> - Get platform name from /host/machine.conf
> - Remove get SKU name with sonic-cfggen
> - Get Chassis name and model from EEPROM TLV "Product Name" and "Part Number"
> - Add function to return model
2. EEPROM
> - Add function to return product name and part number
3. Platform
> - Init the EEPROM on the host side, so the chassis name and model can also be read from the EEPROM on the host side (a sketch follows below)
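A minimal sketch of this lookup, assuming the EEPROM helper exposes the two TLV fields (the helper here is a stub, not the actual vendor implementation):

```
from sonic_platform_base.chassis_base import ChassisBase

class Eeprom(object):
    """Stub standing in for the vendor EEPROM helper (hypothetical)."""
    def get_product_name(self):
        return "Example-Product"

    def get_part_number(self):
        return "PN-0000"

class Chassis(ChassisBase):
    def __init__(self):
        super(Chassis, self).__init__()
        self._eeprom = Eeprom()

    def get_name(self):
        # Chassis name comes from the EEPROM TLV "Product Name"
        return self._eeprom.get_product_name()

    def get_model(self):
        # Chassis model comes from the EEPROM TLV "Part Number"
        return self._eeprom.get_part_number()
```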
* buildimage: Add gearbox phy device files and a new physyncd docker to support VS gearbox phy feature
* scripts and configuration needed to support a second syncd docker (physyncd)
* physyncd supports gearbox device and phy SAI APIs and runs multiple instances of syncd, one per phy in the device
* support for VS target (sonic-sairedis vslib has been extended to support a virtual BCM81724 gearbox PHY).
HLD is located at b817a12fd8/doc/gearbox/gearbox_mgr_design.md
**- Why I did it**
This work is part of the gearbox phy joint effort between Microsoft and Broadcom, and is based
on multi-switch support in sonic-sairedis.
**- How I did it**
Overall feature was implemented across several projects. The collective pull requests (some in late stages of review at this point):
https://github.com/Azure/sonic-utilities/pull/931 - CLI (merged)
https://github.com/Azure/sonic-swss-common/pull/347 - Minor changes (merged)
https://github.com/Azure/sonic-swss/pull/1321 - gearsyncd, config parsers, changes to orchagent to create gearbox phy on supported systems
https://github.com/Azure/sonic-sairedis/pull/624 - physyncd, virtual BCM81724 gearbox phy added to vslib
**- How to verify it**
In a vslib build:
root@sonic:/home/admin# show gearbox interfaces status
PHY Id Interface MAC Lanes MAC Lane Speed PHY Lanes PHY Lane Speed Line Lanes Line Lane Speed Oper Admin
-------- ----------- --------------- ---------------- --------------- ---------------- ------------ ----------------- ------ -------
1 Ethernet48 121,122,123,124 25G 200,201,202,203 25G 204,205 50G down down
1 Ethernet49 125,126,127,128 25G 206,207,208,209 25G 210,211 50G down down
1 Ethernet50 69,70,71,72 25G 212,213,214,215 25G 216 100G down down
In addition, docker ps | grep phy should show a physyncd docker running.
Signed-off-by: syd.logan@broadcom.com
We want to let Monit unmonitor the processes in containers which are disabled in the `FEATURE` table, so that
Monit will not generate false alerting messages in the syslog.
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
libzmq5 is a TCP/IPC communication library needed by sairedis/syncd to communicate. It provides a new way (compared to the current Redis channel) to exchange messages which is faster than the Redis library and allows synchronous mode.
This library is required in the sairedis repo, swss repo, sonic-buildimage and sonic-build-tools to make it work.
- Why I did it
To fix the PCA MUX attachment issue on the Dell S6100 platform.
- How I did it
Wait until the IOM MUX is powered up properly, then start I2C enumeration (a hedged sketch of this kind of wait loop follows).
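For illustration only, the wait-and-retry pattern might look like this; the sysfs path and timings below are hypothetical, not the actual S6100 code:

```
import os
import time

IOM_MUX_READY = "/sys/devices/platform/example_iom/mux_ready"  # hypothetical path

def wait_for_iom_mux(timeout_s=30, poll_interval_s=1):
    """Poll a readiness attribute until the IOM MUX reports it is powered up."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if os.path.exists(IOM_MUX_READY):
            with open(IOM_MUX_READY) as f:
                if f.read().strip() == "1":
                    return True
        time.sleep(poll_interval_s)
    return False

if wait_for_iom_mux():
    pass  # only now start I2C device enumeration
```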
We are moving toward building all Python packages for SONiC as wheel packages rather than Debian packages. This will also allow us to more easily transition to Python 3.
Python files are now packaged in the "sonic-utilities" Python wheel. Data files are now packaged in the "sonic-utilities-data" Debian package.
**- How I did it**
- Build and install sonic-utilities as a Python package
- Remove explicit installation of wheel dependencies, as these will now get installed implicitly by pip when installing sonic-utilities as a wheel
- Build and install new sonic-utilities-data package to install data files required by sonic-utilities applications
- Update all references to sonic-utilities scripts/entrypoints to either reference the new /usr/local/bin/ location or remove absolute path entirely where applicable
Submodule updates:
* src/sonic-utilities aa27dd9...2244d7b (5):
> Support building sonic-utilities as a Python wheel package instead of a Debian package (#1122)
> [consutil] Display remote device name in show command (#1120)
> [vrf] fix check state_db error when vrf moving (#1119)
> [consutil] Fix issue where the ConfigDBConnector's reference is missing (#1117)
> Update to make config load/reload backward compatible. (#1115)
* src/sonic-ztp dd025bc...911d622 (1):
> Update paths to reflect new sonic-utilities install location, /usr/local/bin/ (#19)
This PR limits the number of calls to sonic-cfggen to one call
per iteration instead of the current three calls per iteration.
The PR also installs jq on host for future scripts if needed.
signed-off-by: Tamer Ahmed <tamer.ahmed@microsoft.com>
Refactor the SFP reset and low power mode get/set APIs, and the plugins, to use the new SDK SX APIs. Previously they were calling SDK SXD APIs, which have a glibc dependency because of shared memory usage.
Remove the implementations of "set_power_override", "tx_disable_channel" and "tx_disable", which use SXD APIs; once the related SDK SX APIs are available, they will be added back based on the new SDK SX APIs.
- Merge chassis codebase upstream
- Add support for Otterlake supervisor
- Add support for NorthFace and Camp chassis
- Add support for Eldridge, Dragonfly and Brooks fabrics
- Add support for Clearwater2 and Clearwater2Ms linecards
- Add new arista Cli to power on/off cards
- Add new arista show Cli to inspect supervisor, chassis, fabrics and linecards
- Finish the refactor of the smbus backend (huge performance enhancement)
- Split scd-hwmon kernel module in multiple source files
- Add new scd-uart driver to manage linecard consoles within a chassis
- Bootstrap of thermal platform API implementation
- Fix psud led error message
- Fix fan led color on Smartsville
When stopping the swss, pmon or bgp containers, log messages like the following can be seen:
```
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,061 ERRO pool dependent-startup event buffer overflowed, discarding event 34
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,063 ERRO pool dependent-startup event buffer overflowed, discarding event 35
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,064 ERRO pool dependent-startup event buffer overflowed, discarding event 36
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,066 ERRO pool dependent-startup event buffer overflowed, discarding event 37
```
This is due to the number of programs in the container managed by supervisor, all generating events at the same time. The default event queue buffer size in supervisor is 10. This patch increases that value in all containers in order to eliminate these errors. As more programs are added to the containers, we may need to further adjust these values. I increased all buffer sizes to 25 except for containers with more programs or templated supervisor.conf files which allow for a variable number of programs. In these cases I increased the buffer size to 50. One final exception is the swss container, where the buffer fills up to ~50, so I increased this buffer to 100.
Resolves https://github.com/Azure/sonic-buildimage/issues/5241
* [redis] Use redis-server and redis-tools in blob storage to prevent
upstream link broken
* Use curl instead of wget
* Explicitly install dependencies
Copying platform.json file into an empty /usr/share/sonic/platform directory does not mimic an actual device. A more correct approach is to create a /usr/share/sonic/platform symlink which links to the actual platform directory; this is more like what is done inside SONiC containers. Then, we only need to copy the platform.json file into the actual platform directory; the symlink takes care of the alternative path, and also exposes all the other files in the platform directory.
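For example, the symlink-plus-copy approach could look like this (the platform identifier and file locations below follow the description above and are illustrative):

```
import os
import shutil

platform = "x86_64-kvm_x86_64-r0"  # example platform identifier
platform_dir = os.path.join("/usr/share/sonic/device", platform)
symlink_path = "/usr/share/sonic/platform"

# Link /usr/share/sonic/platform to the real platform directory,
# mimicking what is done inside SONiC containers.
if not os.path.lexists(symlink_path):
    os.symlink(platform_dir, symlink_path)

# Copy platform.json only into the real directory; the symlink exposes it
# (and every other platform file) at the alternative path as well.
shutil.copy("platform.json", platform_dir)
```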
sonic-py-common package relies on the `PLATFORM` environment variable to be set at runtime in the SONiC VS container. Exporting the variables in the start.sh script causes the variables to only be available to the shell running start.sh and any subshells it spawns. However, once the script exits, the variable is lost. This is resulting in the failure of tests which are run in the VS container, as they call applications which in turn call sonic-py-common functions which rely on PLATFORM to be set.
Setting the environment variables in the Dockerfile allows them to persist through the entire runtime of the container.
Add a master switch so that the sync/async mode can be configured.
Example usage of the switch:
1. Configure mode while building an image
`make ENABLE_SYNCHRONOUS_MODE=y <target>`
2. Configure when the device is running
Change CONFIG_DB with `sonic-cfggen -a '{"DEVICE_METADATA":{"localhost": {"synchronous_mode": "enable"}}}' --write-to-db`
Restart swss with `systemctl restart swss`
- Reverts commit 457674c
- Creates "platform.json" for vs docker
- Adds test case for port breakout CLI
- Explicitly sets admin status of all the VS interfaces to down to be compatible with SWSS test cases, specifically vnet tests and sflow tests
Signed-off-by: Sangita Maity <sangitamaity0211@gmail.com>
As part of migrating all Python-based package installers to wheel format rather than Debian packages. Also to allow for easily building a Python 3 version of the package in the near future.
- Also remove some references to sonic-daemon-base which I previously missed and add missing sonic-py-common dependency for sonic-pcied.
The original conversion from the register value is wrong; it makes the fan speed appear much higher than it actually is. Change the way the fan speed is calculated from the CPLD.
Signed-off-by: roy_lee <roy_lee@accton.com>
The hardware sets the QSFP ports to reset by default, so software needs to set them to normal at boot.
1. Modify the CPLD driver to invert the reset offset value.
2. Set the ports to normal at boot.
Swap order of orchagent and portsyncd in start.sh and fix priorities
Many docker virtual switch tests are failing at the moment because orchagent never finishes initializing. After doing some searching I figured out that Ethernet24 is never published to State DB, which is reminiscent of #4821
Signed-off-by: Danny Allen <daall@microsoft.com>
Summary of the E530 platform:
- CPU: CTC5236, arm64
- LAN switch chipset: Centec CTC7132 (TsingMa). TsingMa is a purpose-built device to address the challenges of recent network evolution such as cloud computing. The CTC7132 provides 440Gbps I/O bandwidth and 400G core bandwidth, and combines a feature-rich switch core with an embedded ARM A53 CPU core running at 800MHz/1.2GHz. The CTC7132 supports a variety of port configurations, such as QSGMII and USXGMII-M, providing full-rate port capability from 100M to 100G.
- device E530-48T4X: 48 * 10/100/1000 Base-T Ports, 4 * 10GE SFP+ Ports.
- device E530-24X2C: 24 * 10 GE SFP+ Ports, 2 * 100GE QSFP28 Ports.
add new files in three directories:
device/centec/arm64-centec_e530_24x2c-r0
device/centec/arm64-centec_e530_48t4x_p-r0
platform/centec-arm64
Co-authored-by: taocy <taocy2@centecnetworks.com>
Co-authored-by: Gu Xianghong <gxh2001757@163.com>
Co-authored-by: shil <shil@centecnetworks.com>
1. remove container feature table
2. do not generate feature entry if the feature is not included
in the image
3. rename ENABLE_* to INCLUDE_* for better clarity
4. rename feature status to feature state
5. [submodule]: update sonic-utilities
* 9700e45 2020-08-03 | [show/config]: combine feature and container feature cli (#1015) (HEAD, origin/master, origin/HEAD) [lguohan]
* c9d3550 2020-08-03 | [tests]: fix drops_group_test failure on second run (#1023) [lguohan]
* dfaae69 2020-08-03 | [lldpshow]: Fix input device is not a TTY error (#1016) [Arun Saravanan Balachandran]
* 216688e 2020-08-02 | [tests]: rename sonic-utilitie-tests to tests (#1022) [lguohan]
Signed-off-by: Guohan Lu <lguohan@gmail.com>
* Moving BRCM SAI from 3.7.5.1-1 to 3.7.5.1-2 to pick up the stubbed SAI APIs that were present in the header but missing from the SAI implementation, which caused link errors during build.
In SAI header 1.5.2 there are SAI APIs such as "sai_query_attribute_capability()" and a few others that were missing stub code in the BRCM 3.7.5.1 SAI code delivery. This caused linking issues during build when we implemented the common handler to support "sai_query_attribute_capability()", which other platforms support. The expectation is that all SAI vendors implement every SAI API defined in the header, stubbing those they don't actually support to return "SAI_STATUS_NOT_IMPLEMENTED". This allows common code to be developed for all platforms without causing link errors during build. BRCM has provided the patch code, and this PR moves the BRCM SAI from 3.7.5.1-1 to 3.7.5.1-2 to pick up those missing stubbed SAI APIs.
See BRCM case CS00010790550 for more details
Applications running in the host OS can read the platform identifier from /host/machine.conf. When loading configuration, sonic-config-engine *needs* to read the platform identifier from machine.conf, as it is responsible for populating the value in Config DB.
When an application is running inside a Docker container, the machine.conf file is not accessible, as the /host directory is not mounted. So we need to retrieve the platform identifier from Config DB if get_platform() is called from inside a Docker container. However, we can't simply check whether we're running in a Docker container, because the host OS of the SONiC virtual switch itself runs inside a Docker container. So I refactored `get_platform()` to:
1. Read from the `PLATFORM` environment variable if it exists (which is defined in a virtual switch Docker container)
2. Read from machine.conf if possible (works in the host OS of a standard SONiC image, critical for sonic-config-engine at boot)
3. Read the value from Config DB (needed for Docker containers running in SONiC, as machine.conf is not accessible to them)
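A hedged sketch of this lookup order (the machine.conf key names and the Config DB helper are assumptions for illustration, not the exact sonic-py-common code):

```
import os

def get_platform():
    # 1. Virtual switch containers define PLATFORM in the environment
    platform = os.environ.get("PLATFORM")
    if platform:
        return platform

    # 2. Host OS of a standard SONiC image: read machine.conf
    #    (the onie_platform/aboot_platform key names are assumptions here)
    try:
        with open("/host/machine.conf") as f:
            for line in f:
                key, _, value = line.partition("=")
                if key.strip() in ("onie_platform", "aboot_platform"):
                    return value.strip()
    except IOError:
        pass

    # 3. Fall back to Config DB for containers where /host is not mounted
    return get_platform_from_config_db()  # hypothetical helper

def get_platform_from_config_db():
    return None  # placeholder so the sketch stays self-contained
```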
- Also fix typo in daemon_base.py
- Also changes to align `get_hwsku()` with `get_platform()`
- Created the VS setup to test DPB functionality
- Created "platform.json" for VS docker
- Added test case for Breakout CLI
Signed-off-by: Sangita Maity <sangitamaity0211@gmail.com>
As part of consolidating all common Python-based functionality into the new sonic-py-common package, this pull request:
1. Redirects all Python applications/scripts in sonic-buildimage repo which previously imported sonic_device_util or sonic_daemon_base to instead import sonic-py-common, which was added in https://github.com/Azure/sonic-buildimage/pull/5003
2. Replaces all calls to `sonic_device_util.get_platform_info()` with calls to `sonic_py_common.get_platform()` and removes any calls to `sonic_device_util.get_machine_info()` which are no longer necessary (i.e., those which were only used to pass the results to `sonic_device_util.get_platform_info()`).
3. Removes unused imports to the now-deprecated sonic-daemon-base package and sonic_device_util.py module
This is the next step toward resolving https://github.com/Azure/sonic-buildimage/issues/4999
Also reverted my previous change in which device_info.get_platform() would first try obtaining the platform ID string from Config DB and fall back to gathering it from machine.conf upon failure because this function is called by sonic-cfggen before the data is in the DB, in which case, the db_connect() call will hang indefinitely, which was not the behavior I expected. As of now, the function will always reference machine.conf.
* [platform] Add Support For Environment Variable
This PR adds the ability to read environment file from /etc/sonic.
the file contains immutable SONiC config attributes such as platform,
hwsku, version, device_type. The aim is to minimize calls being made
into sonic-cfggen during boot time.
Signed-off-by: Tamer Ahmed <tamer.ahmed@microsoft.com>
The virtual-chassis test uses multiple vs instances to simulate a
modular switch, and a redis-chassis service is required to run on
the vs instance that represents a supervisor card.
This change allows the vs docker to start the redis-chassis service according
to an external config file.
**- Why I did it**
To support virtual-chassis setup, so that we can test distributed forwarding feature in virtual sonic environment, see `Distributed forwarding in a VOQ architecture HLD` pull request at https://github.com/Azure/SONiC/pull/622
**- How I did it**
The sonic-vs start.sh is enhanced to start the new redis_chassis service if an external chassis config file is found. The config file doesn't exist in the current vs environment, so start.sh will behave as before.
**- How to verify it**
The swss/test still pass. The chassis_db service is verified in virtual-chassis topology and tests which are in following PRs.
Signed-off-by: Honggang Xu <hxu@arista.com>
(cherry picked from commit c1d45cf81ce3238be2dcbccae98c0780944981ce)
Co-authored-by: Honggang Xu <hxu@arista.com>
When parallel build is enabled, docker-fpm-frr and docker-syncd-brcm
are built at the same time. docker-fpm-frr requires swss, which requires
installing libsaivs-dev. docker-syncd-brcm requires the syncd package, which requires
installing libsaibcm-dev.
Since libsaivs-dev and libsaibcm-dev install the SAI headers in the same
location, these two packages cannot be installed at the same time. Therefore,
we need to serialize the build between these two packages. Simply uninstalling
the conflicting package is not enough to solve this issue. The correct solution
is to have one package wait for the other package to be uninstalled.
For example, if syncd is built first, then it will install libsaibcm-dev.
Meanwhile, if the swss build job starts and tries to install libsaivs-dev,
it will first try to query if libsaibcm-dev is installed or not. if it is
installed, then it will wait until libsaibcm-dev is uninstalled. After syncd
job is finished, it will uninstall libsaibcm-dev and swss build job will be
unblocked.
To solve this issue, _UNINSTALLS is introduced to uninstall a package that
is no longer needed and to allow blocked job to continue.
Signed-off-by: Guohan Lu <lguohan@gmail.com>
Consolidate common SONiC Python-language functionality into one shared package (sonic-py-common) and eliminate duplicate code.
The package currently includes three modules (a usage example follows the list):
- daemon_base
- device_info
- logger
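Typical usage after the consolidation looks roughly like this (treat the exact function signatures as assumptions based on the descriptions elsewhere in these notes):

```
from sonic_py_common import device_info, logger

log = logger.Logger()                  # shared logging helper
platform = device_info.get_platform()  # platform identifier lookup
hwsku = device_info.get_hwsku()        # HwSKU lookup

log.log_info("Running on platform %s, HwSKU %s" % (platform, hwsku))
```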
Added required packages to enabled YANG dependency check for Dynamic Port Breakout in VS container.
[sonic-utilities PR #766](https://github.com/Azure/sonic-utilities/pull/766) has a dependency on it.
Getting error like the following without this fix: `ImportError: No module named yang - required module not found`
Signed-off-by: Sangita Maity <sangitamaity0211@gmail.com>
The command kills all exabgp processes on the host.
Since the namespaces are newly added, there should
be no prior exabgp processes.
If it is an existing namespace, it is also the dvs
framework's job to clean up all prior processes.
Align SFP key names with the new standard defined in https://github.com/Azure/sonic-platform-common/pull/97 (an example mapping follows the list below):
- hardwarerev -> hardware_rev
- serialnum -> serial
- manufacturename -> manufacturer
- modelname -> model
- Connector -> connector
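For example, a plugin that previously built its transceiver info dict with the old keys would now emit the new ones (values are illustrative only):

```
# Old-style keys (before the alignment)
old_info = {
    "hardwarerev": "A1",
    "serialnum": "SN12345678",
    "manufacturename": "ExampleVendor",
    "modelname": "EX-QSFP28",
    "Connector": "LC",
}

# New standardized keys per sonic-platform-common PR #97
key_map = {
    "hardwarerev": "hardware_rev",
    "serialnum": "serial",
    "manufacturename": "manufacturer",
    "modelname": "model",
    "Connector": "connector",
}

new_info = {key_map[k]: v for k, v in old_info.items()}
print(new_info)
```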
Added xmltodict and jsondiff packages needed to run vs test cases successfully for DPB.
sonic-utilities PR #766 has a dependency on these packages.
Signed-off-by: Sangita Maity <sangitamaity0211@gmail.com>
1) Fixing the issues while applying platform TxCTLE settings in sfputil.py for QFX5200
2) Adding the support for transceiver dom threshold info in sfputil.py for both QFX5210 & QFX5200 platforms
3) Updating the sfputil.py for QFX5210 & QFX5200 platforms
4) Adding a new platform specific command 'show_thresholds' to display the FAN dutycycle percentage for various temperature ranges (for both AFI & AFO QFX5200 systems).
Signed-off-by: Ciju Rajan K <crajank@juniper.net>
System health feature needs to set/get the system LED status.
- Add an LED object in the chassis class and initialize it when the API is called on the host side
- Read/write the system LED sysfs file to get/set the status (a sketch follows)
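A minimal sketch of the sysfs-backed getter/setter; the sysfs path and accepted values below are hypothetical placeholders, not the actual platform paths:

```
SYSTEM_LED_PATH = "/var/run/hw-management/led/led_status"  # hypothetical path

class SystemLed(object):
    """Illustrative LED object created lazily by the Chassis class."""

    def set_status(self, color):
        # Write the requested color to the LED sysfs node
        try:
            with open(SYSTEM_LED_PATH, "w") as f:
                f.write(color)
            return True
        except IOError:
            return False

    def get_status(self):
        # Read the current color back from the LED sysfs node
        try:
            with open(SYSTEM_LED_PATH) as f:
                return f.read().strip()
        except IOError:
            return "off"
```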
Signed-off-by: Arvindsrinivasan Lakshmi Narasimhan <arlakshm@microsoft.com>
This PR has changes to support accessing the bcmsh and bcmcmd utilities on multi ASIC devices
Changes done
- move the link of /var/run/sswsyncd from docker-syncd-brcm.mk to docker_image_ctl.j2
- update the bcmsh and bcmcmd scripts to take -n [ASIC_ID] as an argument on multi ASIC platforms
- Xilinx/Pericom peripherals are not actively used in the DellEMC S6100 switch.
- These peripherals are throwing PCIe corrected-error messages in some of the units and filling the syslog.
- Since they are not usable, disable them at startup.
Make the swss build depend only on libsairedis instead of syncd. This allows building swss without depending
on a vendor SAI library.
Currently, the libsairedis build also builds syncd, which requires the vendor SAI lib. This makes it difficult to build
the swss docker on buster while still keeping the syncd docker on stretch, as swss requires libsairedis, which also
builds syncd and requires the vendor to provide SAI for buster. As the swss docker does not really contain the syncd
binary, it is not necessary to build syncd for the swss docker.
* [submodule]: update sonic-sairedis
* ccbb3bc 2020-06-28 | add option to build without syncd (HEAD, origin/master, origin/HEAD) [Guohan Lu]
* 4247481 2020-06-28 | install saidiscovery into syncd package [Guohan Lu]
* 61b8e8e 2020-06-26 | Revert "sonic-sairedis: Add support to sonic-sairedis for gearbox phys (#624)" (#630) [Danny Allen]
* 85e543c 2020-06-26 | add a README to tests directory to describe how to run 'make check' (#629) [Syd Logan]
* 2772f15 2020-06-26 | sonic-sairedis: Add support to sonic-sairedis for gearbox phys (#624) [Syd Logan]
Signed-off-by: Guohan Lu <lguohan@gmail.com>
1. Upgrade SAI headers to v1.6.3
2. Fix traffic lost during FFB related to buffer config + optimize buffer config timing for FB
3. Add ACL fields BTH, IP flags
4. Add ACL infrastructure of different fields per ASIC type
**- Why I did it**
System health feature requires to read ASIC temperature and threshold from platform API
**- How I did it**
Implement Chassis.get_asic_temperature and Chassis.get_asic_temperature_threshold by getting value from system fs.
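For illustration, the two methods might look roughly like this; the sysfs paths are placeholders, not the actual platform files:

```
ASIC_TEMP_PATH = "/run/hw-management/thermal/asic"                 # hypothetical
ASIC_TEMP_THRESHOLD_PATH = "/run/hw-management/thermal/asic_max"   # hypothetical

def _read_millidegrees(path):
    # Assume the thermal node reports millidegrees Celsius; convert to degrees
    with open(path) as f:
        return float(f.read().strip()) / 1000.0

class Chassis(object):
    def get_asic_temperature(self):
        return _read_millidegrees(ASIC_TEMP_PATH)

    def get_asic_temperature_threshold(self):
        return _read_millidegrees(ASIC_TEMP_THRESHOLD_PATH)
```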
This patch addresses the following issues:
1) Platform drivers were not loading in the latest images. Fixed
the initialization script to make sure that all the drivers are
loaded.
2) Getting rid of "pstore: crypto_comp_decompress failed, ret = -22!"
messages during kernel boot after moving to the 4.19 kernel. The
solution is to remove the files under the '/sys/fs/pstore' directory.
Signed-off-by: Ciju Rajan K <crajank@juniper.net>
**- Why I did it**
Initially, the critical_processes file contains either the name of critical process or the name of group.
For example, the critical_processes file in the dhcp_relay container contains a single group name
`isc-dhcp-relay`. When testing the autorestart feature of each container, we need to get all the critical
processes and test whether a container can be restarted correctly if one of its critical processes is
killed. However, it was difficult to differentiate whether the names in the critical_processes file were
critical processes or group names. At the same time, changing the syntax in this file separates the individual processes from the groups and also makes it clear to the user.
Right now the critical_processes file contains two different kinds of entries. One is "program:xxx", which indicates a critical process. The other is "group:xxx", which indicates a group of critical processes
managed by supervisord under the name "xxx". At the same time, I also updated the logic that
parses the critical_processes file in the supervisor-proc-event-listener script (a parser sketch follows).
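A small sketch of how the new syntax might be parsed; the function name and file path are illustrative, not the exact listener code:

```
def parse_critical_processes(path="/etc/supervisor/critical_processes"):
    """Return (programs, groups) listed with the program:/group: syntax."""
    programs, groups = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            kind, _, name = line.partition(":")
            if kind == "program":
                programs.append(name.strip())
            elif kind == "group":
                groups.append(name.strip())
    return programs, groups
```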
**- How to verify it**
We can first enable the autorestart feature of a specified container, for example `dhcp_relay`, by running the command `sudo config container feature autorestart dhcp_relay enabled` on the DUT. Then we can select a critical process from the command `docker top dhcp_relay` and use the command `sudo kill -SIGKILL <pid>` to kill that critical process. The final step is to check whether the container is restarted correctly or not.
**- Why I did it**
After discussing with Joe, we use the string "/usr/bin/syncd\s" in the Monit configuration file to monitor the
syncd process on Broadcom and Mellanox. Due to my carelessness, I did not find this bug during the
previous testing. If we use the string "/usr/bin/syncd" in the Monit configuration file to monitor the
syncd process, Monit will not detect whether the syncd process is running or not.
If we run the command `sudo monit procmatch "/usr/bin/syncd"` on Broadcom, there are three
processes in the syncd container which match "/usr/bin/syncd": `/bin/bash /usr/bin/syncd.sh wait`,
`/usr/bin/dsserve /usr/bin/syncd --diag -u -p /etc/sai.d/sai.profile` and `/usr/bin/syncd --diag -u -p /etc/sai.d/sai.profile`.
Monit selects the process with the highest uptime (here `/bin/bash /usr/bin/syncd.sh wait`) to match
and does not select `/usr/bin/syncd --diag -u -p /etc/sai.d/sai.profile`.
Similarly, on Mellanox, Monit also selects the process with the highest uptime (here
`/bin/bash /usr/bin/syncd.sh wait`) and does not select `/usr/bin/syncd --diag -u -p /etc/sai.d/sai.profile`.
That is why Monit is unable to detect whether the syncd process is running or not if we use the string "/usr/bin/syncd" in the Monit configuration file. If we use the string "/usr/bin/syncd\s", Monit can filter out the process `/bin/bash /usr/bin/syncd.sh wait` and thus can correctly monitor the syncd process.
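The difference in matching can be illustrated with a quick regex check (Python's `re` here stands in for Monit's process matching; the command lines are the ones listed above):

```
import re

cmdlines = [
    "/bin/bash /usr/bin/syncd.sh wait",
    "/usr/bin/dsserve /usr/bin/syncd --diag -u -p /etc/sai.d/sai.profile",
    "/usr/bin/syncd --diag -u -p /etc/sai.d/sai.profile",
]

loose = re.compile(r"/usr/bin/syncd")     # also matches the syncd.sh wrapper
strict = re.compile(r"/usr/bin/syncd\s")  # requires whitespace right after "syncd"

for cmd in cmdlines:
    print(cmd)
    print("  loose match: ", bool(loose.search(cmd)))
    print("  strict match:", bool(strict.search(cmd)))
```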
**- How I did it**
**- How to verify it**
Signed-off-by: Yong Zhao <yozhao@microsoft.com>
* Change port index in port_config.ini to 1-based
* Add default port index to port_config.ini, change platform plugins to accept 1-based port index
* fix port index in sfp_event.py
**- Why I did it**
The tx_disable function doesn't work on the accton_as5835-54x device.
**- How I did it**
Fix the incorrect SFP node path inside the util file.
**- How to verify it**
Test with
"sudo accton_as5835_54x_util.py show"
"sudo accton_as5835_54x_util.py set sfp"
You should see correct values for module_present and module_tx_disable, and should be able to set them.
Signed-off-by: kuanyu_chen <kuanyu_chen@edge-core.com>
The `-sv2` suffix was used to differentiate SNMP Dockers when we transitioned from "SONiCv1" to "SONiCv2", about four years ago. The old Docker materials were removed long ago; there is no need to keep this suffix. Removing it aligns the name with all the other Dockers.
Also edit Monit configuration to detect proper snmp-subagent command line in Buster, and make snmpd command line matching more robust.
- Add .gitignore files in each subdirectory of src/, so as to reduce the size of the .gitignore file in the project root and also make it easier to maintain (i.e., if a directory in src/ is removed, there will not be outdated entries in the root .gitignore file).
- Also add missing .gitignore entries and remove outdated entries and duplicates.
The -sv2 suffix was used to differentiate SNMP Dockers when we transitioned from "SONiCv1" to "SONiCv2", about four years ago. The old Docker materials were removed long ago; there is no need to keep this suffix. Removing it aligns the name with all the other Dockers.
**- Why I did it**
For decoding system EEPROM of S6000 based on Dell offset format and S6000-ON’s system EEPROM in ONIE TLV format.
**- How I did it**
- Differentiate between S6000 and S6000-ON using the product name available in ‘dmi’ ( “/sys/class/dmi/id/product_name” )
- For decoding S6000 system EEPROM in Dell offset format and updating the redis DB with the EEPROM contents, added a new class ‘EepromS6000’ in eeprom.py,
- Renamed certain methods in both Eeprom, EepromS6000 classes to accommodate the plugin-specific methods.
**- How to verify it**
- Use 'decode-syseeprom' command to list the system EEPROM details.
- Wrote a python script to load chassis class and call the appropriate methods.
UT Logs: [S6000_eeprom_logs.txt](https://github.com/Azure/sonic-buildimage/files/4735515/S6000_eeprom_logs.txt), [S6000-ON_eeprom_logs.txt](https://github.com/Azure/sonic-buildimage/files/4735461/S6000-ON_eeprom_logs.txt)
Test script: [eeprom_test_py.txt](https://github.com/Azure/sonic-buildimage/files/4735509/eeprom_test_py.txt)
- Sensor and Fan information added to primary platforms for thermal API.
- Refactors involving better abstractions, code reuse and dead code removal.
- Improvements to the diag capabilities
- Pylintrc added to improve code quality. Will become fatal at a later time.
Co-authored-by: Baptiste Covolato <baptiste@arista.com>
Clean up the description string.
The first port (the management port) is excluded from the general port naming scheme.
before:
|on GNS3 |in SONiC |
|---------|---------|
|Ethernet0|eth0 |
|Ethernet1|Ethernet0|
|Ethernet2|Ethernet4|
|Ethernet3|Ethernet8|
after:
|on GNS3 |in SONiC |
|---------|---------|
|eth0 |eth0 |
|Ethernet0|Ethernet0|
|Ethernet1|Ethernet4|
|Ethernet2|Ethernet8|
Signed-off-by: Masaru OKI <masaru.oki@gmail.com>
* Update sonic-sairedis (sairedis with SAI 1.6 headers)
* Update SAIBCM to 3.7.4.2, which is built upon SAI1.6 headers
* missed updating BRCM_SAI variable, fixed it
* Update SAIBCM to 3.7.4.2, updated link to libsaibcm
* [Mellanox] Update SAI (release:v1.16.3; API:v1.6)
Signed-off-by: Volodymyr Samotiy <volodymyrs@mellanox.com>
* Update sonic-sairedis pointer to include SAI1.6 headers
* [Mellanox] Update SDK to 4.4.0914 and FW to xx.2007.1112 to match SAI 1.16.3 (API:v1.6)
Signed-off-by: Volodymyr Samotiy <volodymyrs@mellanox.com>
* ensure the veth link is up in docker VS container
* ensure the veth link is up in docker VS container
* [Mellanox] Update SAI (release:v1.16.3.2; API:v1.6)
Signed-off-by: Volodymyr Samotiy <volodymyrs@mellanox.com>
* use 'config interface startup' instead of the ifconfig command; also undid the previous change
Co-authored-by: Volodymyr Samotiy <volodymyrs@mellanox.com>
- Skip thermalctld in DellEMC S6000, S6100, Z9100 and Z9264 platforms.
- Change the return type of thermal Platform APIs in DellEMC S6000, S6100, Z9100 and Z9264 platforms to 'float'.
* [platform]: Add a new supported platform, Delta-agc032
Switch Vendor: Delta
Switch SKU: Delta-agc032
CPU: BROADWELL-DE
ASIC Vendor: Broadcom
Switch ASIC: Tomahawk3, BCM56980
Port Configuration: 32x400G + 2x10G
- What I did
Add a new Delta platform Delta-agc032.
- How I did it
Add files by following SONiC Porting Guide.
- How to verify it
1. decode-syseeprom
2. sensors
3. psuutil
4. sfputil
5. show interface status
6. bcmcmd
Signed-off-by: zoe-kuan <ZOE.KUAN@deltaww.com>
The 3Y Power YPEB-1200AM PSU doesn't support reading fan_dir from the PMBus register.
Checked with the vendor: this PSU type only supports F2B fan direction. So show "F2B"
when reading the psu_fan_dir sysfs.
* Trigger thermal action log only if thermal condition changes
* test file existence before read file content
* fix error for set psu fan speed
* Remove logs because it print too frequently
- Bug fix: fixed an issue in which the NPS ko file was not loaded due to a wrong service file name
- Optimize the code to reduce changes due to the kernel upgrade
- Remove the Nephos ko file load from swss.service.j2 because it is already loaded in syncd.service.j2
For detecting transceiver change events through xcvrd in DellEMC S6000, S6100 and Z9100 platforms.
- In S6000, rename 'get_transceiver_change_event' in chassis.py to 'get_change_event' and return appropriate values.
- In S6100, implement 'get_change_event' using a polling method (poll interval = 1 second) in chassis.py, since transceiver insertion/removal does not generate interrupts due to a CPLD bug (see the polling sketch after this list)
- In Z9100, implement 'get_change_event' through interrupt method using select.epoll().
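A hedged sketch of the polling-based variant described above; the presence-detection details and port count are placeholders, not the actual DellEMC code:

```
import time

NUM_PORTS = 64  # illustrative port count

class Chassis(object):
    def __init__(self):
        # last known presence state per port (the real code reads it from hardware)
        self._sfp_presence = {port: False for port in range(NUM_PORTS)}

    def _read_presence(self, port):
        # Placeholder: the real implementation reads a CPLD/sysfs presence bit
        return False

    def get_change_event(self, timeout=0):
        """Poll transceiver presence and report ports whose state changed.

        timeout is in milliseconds; 0 means block until an event occurs.
        """
        start = time.time()
        while True:
            changed = {}
            for port in range(NUM_PORTS):
                present = self._read_presence(port)
                if self._sfp_presence[port] != present:
                    changed[str(port)] = "1" if present else "0"
                    self._sfp_presence[port] = present
            if changed:
                return True, {"sfp": changed}
            if timeout and (time.time() - start) * 1000 >= timeout:
                return True, {"sfp": {}}
            time.sleep(1)  # poll interval = 1 second, as described above
```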
This patchset implements the following:
- Setting the fan frequency
- Corrections to the EM policy with respect to the platform-defined temperature / fan values
- Updates to the platform monitoring script logging
- Fixes to the platform initialization script
Signed-off-by: Ciju Rajan K <crajank@juniper.net>
Update SAI/SDK/FW and MSN4700 device files to support 8 lanes 400G
Update SAI to 1.16.3
Update SDK to 4.4.0914
Update FW to *.2007.1112
Update MSN4700 device files to support 8 lanes 400G
Currently, the vs docker always creates 32 front panel ports.
With this change, when the vs docker starts, it first detects the peer links
in the namespace and then sets up as many front panel
interfaces as there are peer links.
Signed-off-by: Guohan Lu <lguohan@gmail.com>
- Add Makefile rules to build debug containers for SWI images
- Fix some platform API implementation for xcvrd and thermalctld
- Improvements to arista diag command
- Miscellaneous refactors
Co-authored-by: Maxime Lorrillere <mlorrillere@arista.com>
* New SKU support for MSN3420
Signed-off-by: Shlomi Bitton <shlomibi@mellanox.com>
Conflicts:
device/mellanox/x86_64-mlnx_msn2700-r0/plugins/sfputil.py
* Add CPLDs
* Symlink fixes and semantics
* Adding new platform at end of lines
This patch set implements the following:
- Fixes the conflicts in chassis.py / platform.py in sonic_platform
- Consolidates the common library files in sonic_platform
- Moves QFX5210-specific drivers to qfx5210/modules
- Moves Juniper common FPGA drivers to common/modules
- Cleans up the platform driver files
- Bug fixes in the QFX5210 platform monitor & initialization script
- Fixes the bugs in QFX5210 EEPROM parsing
Signed-off-by: Ciju Rajan K <crajank@juniper.net>
[baseimage]: upgrade base image to debian buster
bring up the base image to debian buster 4.19 kernel.
using the merge commits to preserve the individual commits to better track the history.
Currently we port SONiC to buster in a way that the base image is on buster and the
other dockers are based on stretch. The benefit is that tasks can be carried out
simultaneously.
The build procedure can be treated as 2 stages.
The first stage is to build the stretch-based debs and dockers and the second
stage is to build the buster-based ones.
One thing we have to pay attention to is that debs which depend on the kernel should not
be built in the stretch stage because the kernel isn't available at that time.
The idea is to move that kind of debs out of SONIC_STRETCH_DEBS. Meanwhile,
any explicit dependency of the stretch-based dockers on the kernel should be
removed.
1. undefine led_classdev_register as it is defined in leds.h
2. header file location change
a. linux/i2c/pmbus.h -> linux/pmbus.h
b. linux/i2c-mux-gpio.h -> linux/platform_data/i2c-mux-gpio.h
c. linux/i2c/pca954x.h -> linux/platform_data/pca954x.h
- build SONIC_STRETCH_DOCKERS in sonic-slave-stretch docker
- build image related module in sonic-slave-buster docker.
This includes all kernels modules and some packages
Signed-off-by: Guohan Lu <lguohan@gmail.com>
This is a 1RU switch with 32 QSFP28 (40G/100G) ports on
the Broadcom Tomahawk I chipset. The CPU used in the QFX5200-32C-S
is Intel Ivy Bridge. The machine has redundant and
hot-swappable power supplies (1+1) and also has redundant
and hot-swappable fans (5).
Signed-off-by: Ciju Rajan K <crajank@juniper.net>
Signed-off-by: Ashish Bhensdadia <bashish@juniper.net>
- FPGA driver crash fix for a stale buffer in i2c transfer
- LED firmware load issue fix
- 10G port swap fix
- PSU/SFP bug fixes to report correct states/status of the hardware
libpython2.7, libdaemon0, libdbus-1-3 and libjansson4 are common
across different containers. Move them into docker-base-stretch.
Signed-off-by: Guohan Lu <lguohan@gmail.com>
* [thermal control] Fix pmon docker stop issue on 3800
* [thermal fix] Fix QA test issue
* [thermal fix] change psu._get_power_available_status to psu.get_power_available_status
* [thermal fix] adjust log for PSU absence and power absence
* [thermal fix] add unit test for loading thermal policy file with duplicate conditions in different policies
* [thermal] fix fan.get_presence for non-removable SKU
* [thermal fix] fix issue: fan direction is based on drawer
* Fix issue: when fan is not present, should not read fan direction from sysfs but directly return N/A
* [thermal fix] add unit test for get_direction for absent FAN
* Unplugable PSU has no FAN, no need add a FAN object for this PSU
* Update submodules
Co-authored-by: Stephen Sun <5379172+stephenxs@users.noreply.github.com>
* add MSN4700 device files
* update ACS-MSN4700 sai profile
* update buffer pool size, headroom, sensor conf, port config and reboot scripts
* fix ident
* update sensor conf and buffer pool
* [sn4700] add sku 4700 to chassis.py
* [Mellanox-4700] Add 4700 info to psu and thermal platform API
* update buffer config file template to the latest.
update SAI profile to use 100G X 4lanes for now
update port_config.ini according to the SAI profile
* [Mellanox]Update the buffer configurations for 4700
* fix alignment in pg_profile_lookup.ini
* add platform components file for new sku
* Update device/mellanox/x86_64-mlnx_msn4700-r0/ACS-MSN4700/pg_profile_lookup.ini
Co-Authored-By: Nazarii Hnydyn <nazariig@mellanox.com>
* remove redundant line
* [Mellanox]Correct type, buffer size
Co-authored-by: Nazarii Hnydyn <nazariig@mellanox.com>
Co-authored-by: junchao <junchao@mellanox.com>
Co-authored-by: Stephen Sun <stephens@mellanox.com>
In the file "files/build_templates/docker_image_ctl.j2", it adds the option
"--net" to the docker create command through the commit
"abe7ef7e2e2e1215c97cee19a83aab0b130cebe5" (#3856).
Remove the "--net" option in "docker-syncd-brcm-rpc.mk" to avoid
specifying duplicated "--net" options.
Signed-off-by: charlie_chen <charlie_chen@edge-core.com>
The DPKG caching framework provides the infrastructure to cache the SONiC module/target .deb files into a local cache by tracking the target dependency files. The SONiC build infrastructure is designed as a plugin framework where any new source code can be easily integrated into SONiC as a module that generates its output as a .deb file. The source code compilation of a module is completely independent of other modules' compilation. Inter-module dependencies are resolved through build artifacts like header files, libraries, and binaries in the form of Debian packages. For example, module A depends on module B: while module A is being built, it uses B's .deb file and installs it in the build docker.
The DPKG caching framework provides an infrastructure that caches a module's deb package and restores it to the build directory if its dependency files are not modified. When a module is compiled for the first time, the generated deb package is stored at the DPKG cache location. On a subsequent build, it first checks the module's dependency files for modification. If none of the dependent files has changed, it copies the deb package from the cache location; otherwise, it compiles locally and generates the deb package. The modified files should be checked in to get a newer cached deb package.
This provides a huge improvement in build time and also supports a true incremental build by tracking the dependency files.
- How I did it
It takes two global arguments to enable the DPKG caching, the first one indicates the caching method and the second one describes the location of the cache.
SONIC_DPKG_CACHE_METHOD=cache
SONIC_DPKG_CACHE_SOURCE=
where SONIC_DPKG_CACHE_METHOD - Default method is 'cache' for deb package caching
none: no caching
cache: cache from local directory
Dependency file tracking:
Dependency files are tracked for each target in two levels.
1. Common make infrastructure files - rules/config, rules/functions, slave.mk etc.
2. Per module files - files which are specific to modules, Makefile, debian/rules, patch files, etc.
For example: dependency files for Linux Kernel - src/sonic-linux-kernel,
SPATH := $($(LINUX_HEADERS_COMMON)_SRC_PATH)
DEP_FILES := $(SONIC_COMMON_FILES_LIST) rules/linux-kernel.mk rules/linux-kernel.dep
DEP_FILES += $(SONIC_COMMON_BASE_FILES_LIST)
SMDEP_FILES := $(addprefix $(SPATH)/,$(shell cd $(SPATH) && git ls-files))
DEP_FLAGS := $(SONIC_COMMON_FLAGS_LIST) \
$(KERNEL_PROCURE_METHOD) $(KERNEL_CACHE_PATH)
$(LINUX_HEADERS_COMMON)_CACHE_MODE := GIT_CONTENT_SHA
$(LINUX_HEADERS_COMMON)_DEP_FLAGS := $(DEP_FLAGS)
$(LINUX_HEADERS_COMMON)_DEP_FILES := $(DEP_FILES)
$(LINUX_HEADERS_COMMON)_SMDEP_FILES := $(SMDEP_FILES)
$(LINUX_HEADERS_COMMON)_SMDEP_PATHS := $(SPATH)
Cache file tracking:
The Cache file is a compressed TAR ball of a module's target DEB file and its derived-target DEB files.
The cache filename is formed with the following format
FORMAT:
<module deb filename>.<24-byte DEP SHA hash>-<24-byte MOD SHA hash>.tgz
Eg:
linux-headers-4.9.0-9-2-common_4.9.168-1+deb9u3_all.deb-23658712fd21bb776fa16f47-c0b63ef593d4a32643bca228.tgz
< 24-byte DEP SHA value > - the SHA value is derived from all the dependent packages.
< 24-byte MOD SHA value > - the SHA value is derived from either of the following.
GIT_COMMIT_SHA - SHA value of the last git commit ID if it is a submodule
GIT_CONTENT_SHA - SHA value is generated from the content of the target dependency files.
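As a rough illustration of how such content-derived SHA values could be computed and combined into a cache file name (this is a sketch of the idea, not the actual make macros):

```
import hashlib

def content_sha(paths):
    """GIT_CONTENT_SHA-style value: hash the content of the dependency files."""
    h = hashlib.sha1()
    for path in sorted(paths):
        with open(path, "rb") as f:
            h.update(f.read())
    return h.hexdigest()[:24]  # 24 hex characters, as in the cache file name

def cache_file_name(deb_name, dep_files, mod_files):
    dep_sha = content_sha(dep_files)  # derived from the common/dependency files
    mod_sha = content_sha(mod_files)  # derived from the module's own files
    # e.g. linux-headers-..._all.deb-<DEP SHA>-<MOD SHA>.tgz
    return "%s-%s-%s.tgz" % (deb_name, dep_sha, mod_sha)
```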
Target Specific rules:
Caching can be enabled/disabled on a global level and also on the per-target level.
$(addprefix $(DEBS_PATH)/, $(SONIC_DPKG_DEBS)) : $(DEBS_PATH)/% : .platform $$(addsuffix -install,$$(addprefix $(DEBS_PATH)/,$$($$*_DEPENDS))) \
$(call dpkg_depend,$(DEBS_PATH)/%.dep )
$(HEADER)
# Load the target deb from DPKG cache
$(call LOAD_CACHE,$*,$@)
# Skip building the target if it is already loaded from cache
if [ -z '$($*_CACHE_LOADED)' ] ; then
.....
# Rules for Generating the target DEB file.
.....
# Save the target deb into DPKG cache
$(call SAVE_CACHE,$*,$@)
fi
$(FOOTER)
The make rule-'$(call dpkg_depend,$(DEBS_PATH)/%.dep )' checks for target dependency file modification. If it is newer than the target, it will go for re-generation of that target.
Two main macros 'LOAD_CACHE' and 'SAVE_CACHE' are used for loading and storing the cache contents.
The 'LOAD_CACHE' macro is used to load the cache file from cache storage and extracts them into the target folder. It is done only if target dependency files are not modified by checking the GIT file status, otherwise, cache loading is skipped and full compilation is performed.
It also updates the target-specific variable to indicate the cache is loaded or not.
The 'SAVE_CACHE' macro generates the compressed tarball of the cache file and saves them into cache storage. Saving into the cache storage is protected with a lock.
- How to verify it
The caching functionality is verified by enabling it for the Linux kernel submodule.
It uses 'target/cache' as the cache directory, where the Linux kernel cache file gets stored on the first build and from which it is picked up during subsequent clean builds.
- Description for the changelog
The DPKG caching framework provides the infrastructure to save the module-specific deb file to be cached by tracking the module's dependency files.
If the module's dependency files are not changed, it restores the module deb files from the cache storage.
DOCUMENT PR:
https://github.com/Azure/SONiC/pull/559
Take advantage of an SDK environment variable to customize the location of sdk_socket.
In the latest SDK, sdk_socket has been moved from /tmp to /var/run, which is a better place for this kind of file.
However, this prevents the subdirectories under /var/run from being mapped to different volumes. To resolve this, we take advantage of an SDK variable to designate the location of sdk_socket.
This requires every process that needs to access sdk_socket to have this environment variable defined. However, defining the environment variable for each process is less scalable, so we take advantage of the docker-scope environment variable to avoid that.
It depends on PR 4227
* [Mellanox]Integrate hw-mgmt 7.0000.3012
* [sonic-linux-kernel]Advance the submodule head
Advance the sonic-linux-kernel
[sFlow]: Patch to fix skb_over_panic in psample driver (#120)
Added support in the kernel for fullcone 3-tuple unique nat. (#100)
Adding support to compile ARM architecture (#102)
[ixgbe] Support bcm54616s external phy in ixgbe (#122)
Fix i2c ISMT DMA buffer alignment issue (#123)
[mellanox]: Add SN4700 patches. (#126)
* Add new CIG device CS6436-54P and CS5435-54P, also update code for CS6436-56P
* security kernel update to 4.9.189 for CIG devices
* security kernel update to 4.9.189 for CIG devices
* Update rules
Update rule file
update FW to xx_2000_3298
update SAI to 1.16.0
update Spectrum-1 and Spectrum-2 buffer pool size according to the new SDK default config change.
modified: ../../device/mellanox/x86_64-mlnx_msn2700-r0/ACS-MSN2700/buffers_defaults_t0.j2
modified: ../../device/mellanox/x86_64-mlnx_msn2700-r0/ACS-MSN2700/buffers_defaults_t1.j2
modified: ../../device/mellanox/x86_64-mlnx_msn3700-r0/ACS-MSN3700/buffers_defaults_t0.j2
modified: ../../device/mellanox/x86_64-mlnx_msn3700-r0/ACS-MSN3700/buffers_defaults_t1.j2
modified: fw.mk
modified: mlnx-sai.mk
modified: mlnx-sai/SAI-Implementation
modified: sdk-src/sx-kernel/Switch-SDK-drivers
modified: sdk.mk
Signed-off-by: kebol@mellanox.com