[supervisord]
logfile_maxbytes=1MB
logfile_backups=2
nodaemon=true

[eventlistener:dependent-startup]
command=python2 -m supervisord_dependent_startup
autostart=true
autorestart=unexpected
startretries=0
exitcodes=0,3
events=PROCESS_STATE

[dockers][supervisor] Increase event buffer size for dependent-startup (#5247)
When stopping the swss, pmon or bgp containers, log messages like the following can be seen:
```
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,061 ERRO pool dependent-startup event buffer overflowed, discarding event 34
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,063 ERRO pool dependent-startup event buffer overflowed, discarding event 35
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,064 ERRO pool dependent-startup event buffer overflowed, discarding event 36
Aug 23 22:50:43.789760 sonic-dut INFO swss#supervisord 2020-08-23 22:50:10,066 ERRO pool dependent-startup event buffer overflowed, discarding event 37
```
This is due to the number of programs in the container managed by supervisor, all generating events at the same time. The default event queue buffer size in supervisor is 10. This patch increases that value in all containers in order to eliminate these errors. As more programs are added to the containers, we may need to further adjust these values. I increased all buffer sizes to 25 except for containers with more programs or templated supervisor.conf files which allow for a variable number of programs. In these cases I increased the buffer size to 50. One final exception is the swss container, where the buffer fills up to ~50, so I increased this buffer to 100.
Resolves https://github.com/Azure/sonic-buildimage/issues/5241
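For context, supervisord delivers events to a listener one at a time: the listener announces READY, receives one header plus payload, and must acknowledge with RESULT before the next event is dispatched; anything that arrives in between is held in a per-listener buffer of buffer_size entries (default 10), and the oldest entries are discarded on overflow, which is what the ERRO lines above report. Below is a minimal, purely illustrative sketch of that protocol; it is not the actual supervisord_dependent_startup listener, and handle_event is a hypothetical placeholder.
```python
# Minimal supervisord event listener sketch (illustrative only).
import sys


def handle_event(eventname, payload):
    # Hypothetical placeholder: a real listener reacts to PROCESS_STATE
    # transitions here. Log to stderr, since stdout carries the protocol.
    sys.stderr.write('got %s: %s\n' % (eventname, payload))


def main():
    while True:
        # Tell supervisord we are ready to receive the next event.
        sys.stdout.write('READY\n')
        sys.stdout.flush()

        # Header example:
        # ver:3.0 server:supervisor serial:21 pool:dependent-startup
        # poolserial:10 eventname:PROCESS_STATE_RUNNING len:84
        header = dict(token.split(':', 1) for token in sys.stdin.readline().split())
        payload = sys.stdin.read(int(header['len']))

        handle_event(header['eventname'], payload)

        # Acknowledge; only now will supervisord dispatch the next buffered event.
        sys.stdout.write('RESULT 2\nOK')
        sys.stdout.flush()


if __name__ == '__main__':
    main()
```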
buffer_size=25

[eventlistener:supervisor-proc-exit-listener]
command=python2 /usr/bin/supervisor-proc-exit-listener --container-name syncd

[supervisord] Monitoring the critical processes with supervisord. (#6242)
- Why I did it
Initially, we used Monit to monitor critical processes in each container. If one of the critical processes was not running
or crashed for some reason, Monit would periodically write an alerting message into syslog. If we add a new process
to a container, the corresponding Monit configuration file also needs to be updated, which makes maintenance harder.
Now we employ the event listener of Supervisord to do this monitoring. Since the processes in each container are managed by
Supervisord, we only need to focus on the monitoring logic.
- How I did it
We use Supervisord's event listener to monitor critical processes in containers. The event listener takes the
following steps when it is notified that one of the critical processes exited unexpectedly (a sketch of this logic follows below):
The event listener first checks whether the auto-restart mechanism is enabled for this container. If it is, the event listener kills the Supervisord process, which should cause the container to exit and subsequently get restarted.
If auto-restart is not enabled for this container, the event listener enters a loop which first sleeps for 1 minute and then checks whether the process is running. If it is, the event listener exits the loop. If not, an alerting message is written into syslog.
- How to verify it
First, check whether the auto-restart mechanism of a container is enabled by running the command show feature status. If it is enabled, select one critical process and kill it manually, then check whether the container gets restarted.
Second, disable the auto-restart mechanism (if it was enabled in step 1) by running the command sudo config feature autorestart <container_name> disabled. Then select and kill one critical process. After that, the alerting message should appear in syslog every minute.
- Which release branch to backport (provide reason below if selected)
201811
201911
[x] 202006
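A rough sketch of the logic described above, under stated assumptions: is_auto_restart_enabled() and is_process_running() are hypothetical stubs, and supervisord is assumed to run as PID 1 inside the container. The real implementation is the /usr/bin/supervisor-proc-exit-listener script invoked above.
```python
# Illustrative sketch only -- not the real /usr/bin/supervisor-proc-exit-listener.
import os
import signal
import syslog
import time

ALERT_INTERVAL_SECS = 60


def is_auto_restart_enabled(container_name):
    # Hypothetical stub: the real listener consults the container's
    # auto-restart setting (see "show feature status" on the host).
    return False


def is_process_running(process_name):
    # Hypothetical stub: the real listener checks the process state
    # reported by supervisord.
    return False


def handle_critical_process_exit(process_name, container_name):
    if is_auto_restart_enabled(container_name):
        # Kill supervisord (assumed to be PID 1 in the container) so the
        # container exits and is brought back up by the auto-restart mechanism.
        os.kill(1, signal.SIGTERM)
        return

    # Auto-restart disabled: alert every minute until the process is back.
    while True:
        time.sleep(ALERT_INTERVAL_SECS)
        if is_process_running(process_name):
            return
        syslog.syslog(syslog.LOG_ERR,
                      "Process '{}' in container '{}' is not running"
                      .format(process_name, container_name))
```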
events=PROCESS_STATE_EXITED,PROCESS_STATE_RUNNING
autostart=true
autorestart=unexpected

[program:rsyslogd]
command=/usr/sbin/rsyslogd -n -iNONE
priority=1
autostart=false
autorestart=false
stdout_logfile=syslog
stderr_logfile=syslog
dependent_startup=true

[program:syncd]
command=/usr/bin/syncd_start.sh
priority=2
autostart=false
autorestart=false
stdout_logfile=syslog
stderr_logfile=syslog
dependent_startup=true
dependent_startup_wait_for=rsyslogd:running