
Issue: cgroups problems

Oshikuru

New Pleskian
Hi,

With the help of Plesk support, I now have the system resource controller running.
I set up an 8 MB memory limit (4 MB soft), just for testing purposes.
How can I test whether it works? I tried to consume memory as the user these limits are set for, and I can consume more than 8 MB.
How do I find out whether the values were set correctly by Plesk?
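One way to exercise such a limit is to allocate more memory than allowed from inside the user's slice and watch for the OOM killer. A rough sketch, assuming cgroup v1, systemd-managed user slices as in the output below, and that UID 10165 is the test user (`<limited-user>` is a placeholder for its login name):

```shell
# Sketch: verify an 8 MB cgroup v1 memory limit for user 10165.
# Assumes root access and systemd-managed slices, as in the output below.

# 1. Check the configured hard limit; an applied 8 MB limit reads
#    8388608 bytes (8 * 1024 * 1024), not the huge "unlimited" default:
cat /sys/fs/cgroup/memory/user.slice/user-10165.slice/memory.limit_in_bytes

# 2. As the limited user (so the process lands in the user's slice),
#    buffer ~16 MB in memory with tail; with the limit active, the
#    process should be OOM-killed:
su - <limited-user> -c 'head -c 16M /dev/zero | tail >/dev/null'

# 3. memory.failcnt increments each time the limit was hit:
cat /sys/fs/cgroup/memory/user.slice/user-10165.slice/memory.failcnt
```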

Code:
# lscgroup 
cpuset:/
cpu,cpuacct:/
cpu,cpuacct:/user.slice
cpu,cpuacct:/user.slice/user-0.slice
cpu,cpuacct:/user.slice/user-10159.slice
cpu,cpuacct:/user.slice/user-10165.slice
cpu,cpuacct:/system.slice
blkio:/
blkio:/user.slice
blkio:/user.slice/user-0.slice
blkio:/user.slice/user-10159.slice
blkio:/user.slice/user-10165.slice
blkio:/system.slice
memory:/
memory:/user.slice
memory:/user.slice/user-0.slice
memory:/user.slice/user-10159.slice
memory:/user.slice/user-10165.slice
memory:/system.slice
devices:/
freezer:/
net_cls,net_prio:/
perf_event:/
pids:/

Code:
# cgget -g memory:/user.slice/user-10165.slice
/user.slice/user-10165.slice:
memory.use_hierarchy: 1
memory.kmem.tcp.max_usage_in_bytes: 0
memory.kmem.slabinfo: slabinfo - version: 2.1
    # name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
    kmalloc-64             0      0     64   64    1 : tunables  120   60    8 : slabdata      0      0      0
    kmalloc-1024           2      8   1024    4    1 : tunables   54   27    8 : slabdata      2      2      0
    kmalloc-192            1     21    192   21    1 : tunables  120   60    8 : slabdata      1      1      0
    shmem_inode_cache      2     11    688   11    2 : tunables   54   27    8 : slabdata      1      1      0
    pid                    3     32    128   32    1 : tunables  120   60    8 : slabdata      1      1      0
    mm_struct              4      8   1024    4    1 : tunables   54   27    8 : slabdata      2      2      0
    signal_cache           3      7   1088    7    2 : tunables   24   12    8 : slabdata      1      1      0
    sighand_cache          3      3   2112    3    2 : tunables   24   12    8 : slabdata      1      1      0
    fs_cache               3     63     64   63    1 : tunables  120   60    8 : slabdata      1      1      0
    files_cache            3     11    704   11    2 : tunables   54   27    8 : slabdata      1      1      0
    task_struct            3      4   3392    2    2 : tunables   24   12    8 : slabdata      2      2      0
    kmalloc-512            0      0    512    8    1 : tunables   54   27    8 : slabdata      0      0      0
    kmalloc-256            0      0    256   16    1 : tunables  120   60    8 : slabdata      0      0      0
    proc_inode_cache      14     24    640    6    1 : tunables   54   27    8 : slabdata      4      4      0
    kmalloc-32             2    124     32  124    1 : tunables  120   60    8 : slabdata      1      1      0
    inode_cache            2     14    584    7    1 : tunables   54   27    8 : slabdata      2      2      0
    sock_inode_cache       8     12    640    6    1 : tunables   54   27    8 : slabdata      2      2      0
    anon_vma             261    392     72   56    1 : tunables  120   60    8 : slabdata      7      7      0
    anon_vma_chain       507    704     64   64    1 : tunables  120   60    8 : slabdata     11     11      0
    vm_area_struct       526    600    200   20    1 : tunables  120   60    8 : slabdata     30     30      0
    dentry                32     84    192   21    1 : tunables  120   60    8 : slabdata      4      4      0
    cred_jar              10     84    192   21    1 : tunables  120   60    8 : slabdata      4      4      0
memory.kmem.tcp.usage_in_bytes: 0
memory.kmem.failcnt: 0
memory.force_empty: 
memory.max_usage_in_bytes: 3694592
memory.swappiness: 60
memory.limit_in_bytes: 1098412116148224
memory.kmem.usage_in_bytes: 1122304
memory.pressure_level: 
memory.kmem.max_usage_in_bytes: 1310720
memory.kmem.tcp.limit_in_bytes: 9223372036854771712
memory.stat: cache 0
    rss 2129920
    rss_huge 0
    mapped_file 0
    dirty 0
    writeback 0
    pgpgin 707
    pgpgout 187
    pgfault 1260
    pgmajfault 0
    inactive_anon 0
    active_anon 2129920
    inactive_file 0
    active_file 0
    unevictable 0
    hierarchical_memory_limit 1098412116148224
    total_cache 0
    total_rss 2129920
    total_rss_huge 0
    total_mapped_file 0
    total_dirty 0
    total_writeback 0
    total_pgpgin 707
    total_pgpgout 187
    total_pgfault 1260
    total_pgmajfault 0
    total_inactive_anon 0
    total_active_anon 2129920
    total_inactive_file 0
    total_active_file 0
    total_unevictable 0
memory.numa_stat: total=520 N0=520
    file=0 N0=0
    anon=520 N0=520
    unevictable=0 N0=0
    hierarchical_total=520 N0=520
    hierarchical_file=0 N0=0
    hierarchical_anon=520 N0=520
    hierarchical_unevictable=0 N0=0
memory.kmem.tcp.failcnt: 0
memory.oom_control: oom_kill_disable 0
    under_oom 0
memory.kmem.limit_in_bytes: 9223372036854771712
memory.soft_limit_in_bytes: 9223372036854771712
memory.failcnt: 0
memory.usage_in_bytes: 3252224
memory.move_charge_at_immigrate: 0

To me it looks like there is no limit, since "memory.limit_in_bytes" is 1098412116148224 rather than 8388608 (8 MB), and "memory.soft_limit_in_bytes" is still 9223372036854771712, which is the kernel's "unlimited" default.
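If the limit never reached the controller, it could be re-applied by hand through systemd and then verified. A sketch, assuming the slice name from the lscgroup output above (on cgroup v1 the property is MemoryLimit=; newer systemd versions use MemoryMax= for cgroup v2):

```shell
# Sketch: apply an 8 MB hard limit to the user's slice by hand
# (assumes systemd-managed cgroup v1, slice name from lscgroup above).
systemctl set-property user-10165.slice MemoryLimit=8M

# Verify that the value landed in the memory controller:
cgget -r memory.limit_in_bytes /user.slice/user-10165.slice
# An applied limit reads 8388608 instead of the "unlimited" default,
# which is 2^63 - 1 rounded down to the 4 KB page size.
```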

Can you please help me to make cgroups work?
 