Resolved: Alert "average(cpu usage average) is higher 75% for 10 minutes" triggered by backup job

carini

Basic Pleskian
Server operating system version: Ubuntu 20.04 x86_64
Plesk version and microupdate number: Plesk Obsidian 18.0.44.2
When Plesk builds a full backup to S3-compatible storage, the monitoring sends an alert (and an SMS is consumed).

Is it possible to take backup processes into account (or any other processes running at high niceness) and send the alarm messages only if the backup runs for more than <x> hours?

[Attachment: Screenshot 2022-06-19 at 00.29.53.png]
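
A workaround worth trying while there is no built-in option: a small watchdog cron script that only raises the alarm once a backup has been running longer than a chosen number of hours. This is a minimal sketch; the "pleskbackup" process name and the notify_admin() hook are assumptions to adapt to your setup, not Plesk APIs.

```python
import time
import psutil

BACKUP_NAME_HINT = "pleskbackup"  # assumed process name; adjust for your server
MAX_HOURS = 3                     # only alert if a backup runs longer than this

def long_running_backups(max_hours=MAX_HOURS):
    """Yield (process, runtime in hours) for backups exceeding max_hours."""
    now = time.time()
    for proc in psutil.process_iter(["name", "create_time"]):
        try:
            if BACKUP_NAME_HINT in (proc.info["name"] or ""):
                runtime_h = (now - proc.info["create_time"]) / 3600
                if runtime_h > max_hours:
                    yield proc, runtime_h
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue

def notify_admin(message):
    """Hypothetical notification hook; wire it to mail, SMS, or an API."""
    print(message)

if __name__ == "__main__":
    for proc, hours in long_running_backups():
        notify_admin(f"Backup PID {proc.pid} has been running for {hours:.1f} h")
```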
 
Hi,

What are your backup settings? You can lower the priority of the backup processes so they don't take up 100% of the CPU.
 
The backup process uses two threads and the default priorities for CPU (10) and I/O (7).

As shown on the graph, the CPU usage is almost entirely due to niced processes.
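
You can check this yourself: the first line of /proc/stat splits CPU time into user, nice, system, idle and so on, so the share consumed by niced processes can be sampled directly. A quick sketch (Linux only, standard library):

```python
import time

def cpu_jiffies():
    """Aggregate CPU counters from /proc/stat:
    user, nice, system, idle, iowait, irq, softirq, steal, ..."""
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

def nice_share(interval=5.0):
    """Fraction of busy (non-idle) CPU time spent in niced processes."""
    before = cpu_jiffies()
    time.sleep(interval)
    after = cpu_jiffies()
    delta = [b - a for a, b in zip(before, after)]
    nice, idle, iowait = delta[1], delta[3], delta[4]
    busy = sum(delta) - idle - iowait
    return nice / busy if busy else 0.0

print(f"niced share of busy CPU: {nice_share():.0%}")
```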
 
We had the same issue and ended up using these settings:

Run scheduled backup processes with low priority:
Priority: 19
IOPriority: 7

Run all backup processes with low priority:
Priority: 19
IOPriority: 7

Compression level: No compression
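
For reference, these values map onto standard Linux scheduling: Priority 19 is the lowest CPU niceness, and IOPriority 7 is the lowest level of the best-effort I/O class. Here is a sketch of applying the same values to an already-running process with psutil; the "pleskbackup" name match is an assumption, not something Plesk documents:

```python
import psutil

def deprioritize(pid):
    """Give a process the lowest CPU priority (nice 19) and the lowest
    best-effort I/O priority (class BE, level 7), as in the settings above."""
    p = psutil.Process(pid)
    p.nice(19)
    p.ionice(psutil.IOPRIO_CLASS_BE, value=7)  # Linux only

# Example: deprioritize anything that looks like a Plesk backup process.
for proc in psutil.process_iter(["name"]):
    if "pleskbackup" in (proc.info["name"] or ""):
        try:
            deprioritize(proc.pid)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
```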

The compression level was the biggest cause of the high CPU usage. As soon as we switched to lower or no compression, the CPU graphs never reached 50% again.
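
You can also gauge the trade-off offline before changing the backup settings. A minimal benchmark sketch using Python's zlib as a stand-in for whatever compressor the backup actually uses (which may behave differently):

```python
import time
import zlib

# Synthetic, compressible payload; substitute a representative chunk of
# real backup data (web content, logs, database dumps) for a better picture.
data = b"typical web content, logs and SQL dumps compress quite well " * 200_000

for level in (0, 1, 6, 9):  # 0 = store only ... 9 = best compression
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(out) / len(data):6.1%} of original, {elapsed:.3f} s")
```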
 
Thanks @maartenv.

Two questions:

  1. How much space do you waste by using no compression?
  2. Do old backups remain recoverable if you change the compression level?

Thanks in advance.
 
The size of "no compression" backups is not as much as I expected, but you should just try it. If you are using the "fast" level, try "fastest" and see if that solves the CPU problem. If the size of the backups is acceptable, try "no compression".

It also helps to use an incremental backup scheme like weekly, which gives a full backup once a week and incremental backups on the other days. It works perfectly fine if you want to recover files from the backup.
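
The reason recovery keeps working: restoring a given day only needs the most recent full backup plus the incrementals made after it. A toy illustration with a hypothetical backup catalogue:

```python
from datetime import date

# Hypothetical catalogue of (date, kind), sorted by date.
backups = [
    (date(2022, 6, 13), "full"),
    (date(2022, 6, 14), "incremental"),
    (date(2022, 6, 15), "incremental"),
    (date(2022, 6, 20), "full"),
    (date(2022, 6, 21), "incremental"),
]

def restore_chain(target):
    """Backups needed to restore the state as of `target`:
    the last full backup on or before it, plus all later incrementals."""
    eligible = [b for b in backups if b[0] <= target]
    last_full = max(i for i, (_, kind) in enumerate(eligible) if kind == "full")
    return eligible[last_full:]

# Restoring 2022-06-21 needs only the 2022-06-20 full + 2022-06-21 incremental.
print(restore_chain(date(2022, 6, 21)))
```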

The old backups are still recoverable if you change the level of compression.
 
The compression level was the biggest cause of the high CPU usage. As soon as we switched to lower or no compression, the CPU graphs never reached 50% again.
That's because the compressor is the only part that can actually use multithreading.
Also I don't see how this is a problem. Why have a powerful CPU if you only use a few % of it?
 
That's because the compressor is the only part that can actually use multithreading.
Also I don't see how this is a problem. Why have a powerful CPU if you only use a few % of it?
The only problem with using 100% CPU for a long time is that it triggers unnecessary notifications from platform360.io.
 
The size of "no compression" backups is not as much as I expected, but you should just try it. If you are using the "fast" level, try "fastest" and see if that solves the CPU problem. If the size of the backups is acceptable, try "no compression".

It also helps to use an incremental backup scheme like weekly, which gives a full backup once a week and incremental backups on the other days. It works perfectly fine if you want to recover files from the backup.

The old backups are still recoverable if you change the level of compression.

I tried setting the priority to 19 with normal compression, and it seems to work without losing too much space. Please mark this thread as solved.
 