
tmpfs problem

ProWebS

Regular Pleskian
Hello,

I ran df -h on the server and suddenly noticed this strange output:

Filesystem Size Used Avail Use% Mounted on
/dev/md2 688G 270G 383G 42% /
/dev/md1 2.0G 86M 1.9G 5% /boot
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 8.9M 3.9G 1% /usr/local/psa/handlers/info
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 8.9M 3.9G 1% /usr/local/psa/handlers/info
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/spool

As you can see, the tmpfs partitions have been mounted twice (it wasn't like that from the beginning).
Any ideas on how to fix that?
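
For what it's worth, a quick way to confirm that the mounts really are stacked (and not just a df display quirk) is to read the kernel's mount table directly; a minimal check using only the paths already shown above:

# list every mount under the Plesk handlers directory; stacked duplicates show up as repeated lines
cat /proc/mounts | grep /usr/local/psa/handlers
# equivalently:
mount | grep /usr/local/psa/handlers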
 
Igor,

I tried your suggestion and here is the output:

[root@~]# /usr/lib64/plesk-9.0/handlers-tmpfs stop

[root@~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 688G 151G 502G 24% /
/dev/md1 2.0G 86M 1.9G 5% /boot
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 8.9M 3.9G 1% /usr/local/psa/handlers/info



[root@ ~]# /usr/lib64/plesk-9.0/handlers-tmpfs start

[root@ ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 688G 151G 502G 24% /
/dev/md1 2.0G 86M 1.9G 5% /boot
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 8.9M 3.9G 1% /usr/local/psa/handlers/info
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 8.9M 3.9G 1% /usr/local/psa/handlers/info
tmpfs 3.9G 24K 3.9G 1% /usr/local/psa/handlers/spool


Even when I ran /usr/lib64/plesk-9.0/handlers-tmpfs stop twice in a row,
I could still see:

[root@~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 688G 151G 502G 24% /
/dev/md1 2.0G 86M 1.9G 5% /boot
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 8.9M 3.9G 1% /usr/local/psa/handlers/info


Any suggestions?
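
If the stop script only removes one layer of the stacked mounts, unmounting the handlers directories by hand until nothing is left should clear the duplicates. A rough sketch, run as root, with the directory names taken from the df output above:

# one umount removes one layer of a stacked mount, so repeat the set a few times
for i in 1 2 3; do
    for d in before-local before-queue before-remote info spool; do
        umount /usr/local/psa/handlers/$d 2>/dev/null
    done
done
df -h | grep handlers                      # should now print nothing
/usr/lib64/plesk-9.0/handlers-tmpfs start  # then recreate a single, clean set of mounts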
 
Unfortunately I don't; there is nothing about tmpfs in /etc/fstab:

[root@~]# cat /etc/fstab
proc /proc proc defaults 0 0
none /dev/pts devpts gid=5,mode=620 0 0
/dev/md0 none swap sw 0 0
/dev/md1 /boot ext3 defaults 0 0
/dev/md2 / ext3 defaults,usrquota 0 0
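
Since nothing in fstab creates these mounts, they are presumably set up by Plesk's own startup scripts. A generic way to find out which script calls handlers-tmpfs (the locations below are a guess, not confirmed):

# find startup scripts that reference the handlers-tmpfs helper
grep -rl handlers-tmpfs /etc/init.d/ /etc/rc.d/ 2>/dev/null
# then check whether that script could be running more than once at startup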

I rebooted the server and now the mounts appear correctly:

[root@~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 688G 152G 502G 24% /
/dev/md1 2.0G 86M 1.9G 5% /boot
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 4.9M 3.9G 1% /usr/local/psa/handlers/info
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/spool

The problem now is that:

[root@~]# cat /proc/mdstat
Personalities : [raid1] [raid10] [raid0] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1] sda1[0]
4198976 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
2104448 blocks [2/2] [UU]

md2 : active raid1 sdb3[1]
726266432 blocks [2/1] [_U]

unused devices: <none>

One partition has dropped out of the RAID array (md2 shows [_U]), and I don't know whether the tmpfs problem caused this or the other way around.
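
Judging from the mdstat output, md2 is running on sdb3 alone, so sda3 is the member that dropped out. A typical recovery sketch, assuming the sda disk itself is healthy (check it first; the device names are read from the output above, not verified):

# check the health of the disk whose partition dropped out
smartctl -a /dev/sda

# if the disk looks fine, re-add the missing member and let the array resync
mdadm /dev/md2 --add /dev/sda3

# watch the rebuild progress
watch cat /proc/mdstat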
 