Werk #5538: Improved performance when processing a large amount of piggyback data
Component | Core & setup | ||||||||||||||
Title | Improved performance when processing a large amount of piggyback data | ||||||||||||||
Date | Nov 22, 2017 | ||||||||||||||
Level | Trivial Change | ||||||||||||||
Class | New Feature | ||||||||||||||
Compatibility | Compatible - no manual interaction needed | ||||||||||||||
Checkmk versions & editions |
|
When Check_MK needs to handle a large amount of piggyback data (many piggybacked hosts fed by many piggyback source hosts, several hundreds to thousands), the performance of Check_MK could decrease during regular monitoring. This was caused by overly expensive housekeeping logic that was executed too often.
The mechanism has now been changed to work like this:
- During regular monitoring, piggyback data is no longer removed from disk.
- New piggyback data is written to disk when communicating with the source host.
- When monitoring piggybacked hosts, outdated piggyback data available on disk is filtered out.
- A dedicated housekeeping cron job, executed daily at 00:10 via the site's crontab, removes outdated piggyback data. This job mainly frees up tmpfs space; the outdated stored data is not read by monitoring anymore (see the sketch after this list).
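The following sketch illustrates the idea of filtering by file age during monitoring and leaving deletion to the daily housekeeping job. It is not the actual Check_MK implementation; the directory layout, the maximum cache age and all function names are assumptions chosen purely for illustration.

```python
"""Illustrative sketch only -- not the actual Check_MK code.

The paths, the MAX_CACHE_AGE value and the function names below are
assumptions made to illustrate the mechanism described above.
"""
import os
import time

PIGGYBACK_DIR = "/omd/sites/mysite/tmp/check_mk/piggyback"  # assumed layout
MAX_CACHE_AGE = 3600  # seconds after which piggyback data counts as outdated


def get_piggyback_files(piggybacked_hostname):
    """Return only the piggyback files that are still fresh.

    Outdated files are merely skipped (filtered) here, not deleted --
    deletion is left to the daily housekeeping job.
    """
    host_dir = os.path.join(PIGGYBACK_DIR, piggybacked_hostname)
    if not os.path.isdir(host_dir):
        return []

    now = time.time()
    fresh_files = []
    for source_host in os.listdir(host_dir):
        file_path = os.path.join(host_dir, source_host)
        try:
            file_age = now - os.stat(file_path).st_mtime
        except OSError:
            continue  # file vanished in the meantime
        if file_age <= MAX_CACHE_AGE:
            fresh_files.append(file_path)
    return fresh_files


def cleanup_piggyback_files():
    """What the daily housekeeping cron job conceptually does:
    remove piggyback files that are older than the maximum cache age."""
    if not os.path.isdir(PIGGYBACK_DIR):
        return
    now = time.time()
    for piggybacked_host in os.listdir(PIGGYBACK_DIR):
        host_dir = os.path.join(PIGGYBACK_DIR, piggybacked_host)
        if not os.path.isdir(host_dir):
            continue
        for source_host in os.listdir(host_dir):
            file_path = os.path.join(host_dir, source_host)
            try:
                if now - os.stat(file_path).st_mtime > MAX_CACHE_AGE:
                    os.remove(file_path)
            except OSError:
                continue
```

In this scheme the regular check cycle only performs the cheap read-and-filter step, while deletions happen just once per day, which is what avoids the performance degradation described above.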