1and1 have been regularly criticized for the unusual partitioning on default installations of their dedicated servers – but this is the first time I have been affected. One of our dedicated servers started producing Postfix SMTP errors, caused by low disk space.
Upon investigation, the default partitions on my 1&1 Plesk 10.4 server came set at 4GB, and my ‘var’ partition was full. The used space is all genuine files, so my only option was to increase the partition. On Windows this would be quite a complex procedure, requiring additional applications – however, following a simple guide made this a 5 minute job. Here are the steps that I took:
SSH onto your server. Once logged in, type df -h to display the partition and logical volume sizes, including the used disk space. In my case, the var logical volume is 4GB.
df > disk free space command; displays disk usage information.
-h > displays sizes in human-readable units (KB, MB or GB).
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              3.7G  673M  3.1G  18% /
/dev/mapper/vg00-usr  4.0G  1.3G  2.7G  33% /usr
/dev/mapper/vg00-var  4.0G  3.6G  456M  89% /var
/dev/mapper/vg00-home 4.0G  4.2M  4.0G   1% /home
none                  2.0G   10M  2.0G   1% /tmp
Next, type fdisk -l to view the total hard disk(s) size and partitions on the disk, in my case it seems that my RAID mirror shows the drives as individual.
# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         487     3911796   fd  Linux raid autodetect
/dev/sda2             488         731     1959930   82  Linux swap / Solaris
/dev/sda3             732      121601   970888275   fd  Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         487     3911796   fd  Linux raid autodetect
/dev/sdb2             488         731     1959930   82  Linux swap / Solaris
/dev/sdb3             732      121601   970888275   fd  Linux raid autodetect

Disk /dev/md1: 4005 MB, 4005560320 bytes
2 heads, 4 sectors/track, 977920 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Type the pvs command and press Enter.
pvs > Physical Volume Show command.
PV > Physical Volume path
VG > Volume Group name.
Fmt > LVM Format
Attr > Physical volume attributes. The a attribute means that the physical volume is allocatable and not read-only.
PSize > Physical Size of the physical volume.
PFree > Physical Free space left on the physical volume.
# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/md3   vg00 lvm2 a-   925.91G 913.91G
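As a side note, if you ever want to grab the PFree column in a script, something like the sketch below works on output in the format shown above. The field position assumes the default pvs column order; adjust it if your pvs output differs.

```shell
# Print the free-space column (6th field) from pvs-style output.
pvs_free() {
  awk '{ print $6 }'
}

# Fed with the sample line from this guide; on a live server you
# would pipe `pvs --noheadings` into it instead.
echo "/dev/md3 vg00 lvm2 a- 925.91G 913.91G" | pvs_free   # -> 913.91G
```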
Since the logical volume assigned to /var is only 4GB, I will be increasing this to 10GB using the lvextend command. The command below is to be used as reference only as the parameters will be different depending on your scenario.
lvextend > This is the logical volume extend command used to make a logical volume larger.
-L +6G > The Logical volume size option specifies how much larger to make the volume. In this scenario, 6 gigabytes is added to the current 4 gigabyte volume, resulting in a 10 gigabyte volume.
/dev/mapper/vg00-var > The path to the logical volume is specified last. The path to the volume to be extended was taken from the output from the second step in this guide.
# lvextend -L +6G /dev/mapper/vg00-var
  Extending logical volume var to 10.00 GB
  Logical volume var successfully resized
df -h to display the disk free space once again. The lvextend operation finished successfully in the last step; however, /dev/mapper/vg00-var is still only showing 4.0G. This is because, while the logical volume was increased successfully, the file system needs to be extended to take advantage of the full space of the logical volume.
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              3.7G  673M  3.1G  18% /
/dev/mapper/vg00-usr  4.0G  1.3G  2.7G  33% /usr
/dev/mapper/vg00-var  4.0G  3.6G  461M  89% /var
/dev/mapper/vg00-home 4.0G  4.2M  4.0G   1% /home
none                  2.0G   10M  2.0G   1% /tmp
lvs to show the logical volume information once again. Here, we can confirm that the logical volume has successfully been extended to 10 gigabytes. In the next steps, we will increase the file system to match the logical volume size.
# lvs
  LV   VG   Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  home vg00 -wi-ao  4.00G
  usr  vg00 -wi-ao  4.00G
  var  vg00 -wi-ao 10.00G
mount to display the mounted file systems. From the output, we find that the /dev/mapper/vg00-var logical volume is using xfs.
# mount
/dev/md1 on / type ext3 (rw)
none on /proc type proc (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/vg00-usr on /usr type xfs (rw)
/dev/mapper/vg00-var on /var type xfs (rw,usrquota)
/dev/mapper/vg00-home on /home type xfs (rw,usrquota)
none on /tmp type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
To increase the file system to match that of the logical volume, we will use the xfs_growfs command. Running xfs_growfs /var will extend the file system to the 10 gigabyte limit of the logical volume.
# xfs_growfs /var
meta-data=/dev/vg00/var          isize=256    agcount=4, agsize=262144 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=1048576, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 1048576 to 2621440
df -h to display the disk free space to confirm that the file system has been extended.
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              3.7G  673M  3.1G  18% /
/dev/mapper/vg00-usr  4.0G  1.3G  2.7G  33% /usr
/dev/mapper/vg00-var   10G  3.6G  6.5G  36% /var
/dev/mapper/vg00-home 4.0G  4.2M  4.0G   1% /home
none                  2.0G   10M  2.0G   1% /tmp
Hopefully this will buy me a bit more time (and space!)…
21 replies on “1and1 default Plesk partition – resizing guide”
If you have an ext4 file system (CentOS 6) you need to use the resize2fs command. The syntax is the same: resize2fs /dev/mapper/vg00-var. Thanks for the article!
Thank you so much for clearing this up, as this had me puzzled: all of 1and1’s help pointed towards different file systems and therefore the wrong commands. Jared King was the biggest life saver, though, as mine uses the CentOS 6 ext4 file system, as do all new 1and1 servers. Bear that in mind if you’re reading this, and change the command to resize2fs /dev/mapper/vg00-var
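Building on the ext4 tip above, here is a small sketch that picks the right grow command from the filesystem type, since xfs_growfs takes the mount point while resize2fs takes the device. The device and mount point values are just the ones from this guide; check yours with mount or df -T first.

```shell
# Choose the grow command for an already-extended logical volume,
# based on the filesystem type reported by `mount` or `df -T`.
grow_cmd() {
  fstype="$1"    # e.g. xfs or ext4
  device="$2"    # e.g. /dev/mapper/vg00-var
  mountpt="$3"   # e.g. /var
  case "$fstype" in
    xfs)            echo "xfs_growfs $mountpt" ;;   # XFS grows by mount point
    ext2|ext3|ext4) echo "resize2fs $device"  ;;    # ext* grows by device
    *)              echo "unhandled filesystem: $fstype" >&2; return 1 ;;
  esac
}

grow_cmd xfs  /dev/mapper/vg00-var /var   # prints: xfs_growfs /var
grow_cmd ext4 /dev/mapper/vg00-var /var   # prints: resize2fs /dev/mapper/vg00-var
```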
Hi there, thanks for the help, really appreciate it.
However, I have another question that I need help with.
How do I allocate more space to the /tmp directory under the /vg00-home partition?
I have run out of space in the temp directory a few times recently, which is killing MySQL operations, and I need to keep manually cleaning it out.
I would imagine it would be the same as above, but with /dev/mapper/vg00-home/tmp,
but I can’t find any references anywhere and don’t want to kill my server.
This is my directory structure, and I need to allocate more space to the /tmp dir:
      60G  6.0M   60G   1% /home
none  2.0G  876K  2.0G   1% /tmp
Any help is gratefully received.
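For what it’s worth, the mount output earlier in the article shows /tmp as a tmpfs (RAM-backed), not part of vg00-home, so LVM commands won’t grow it. A tmpfs is usually resized at mount time instead. The sketch below is only a reference: the 4G size is an example, and your /etc/fstab entry may look different.

```shell
# /tmp here is a tmpfs, so it is resized via its mount options,
# not with lvextend. Resize it in place (lasts until next reboot):
mount -o remount,size=4G /tmp

# To make it permanent, set the size in the /etc/fstab entry, e.g.:
#   none  /tmp  tmpfs  size=4G  0 0
```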
Wow, what a great guide for exactly what I was looking for. Clear, good explanations, and screen shots to boot. Very much appreciated.
Thanks, thanks, thanks.
I have 100GB and suddenly I was not able to upload files. Then I realized that only 4GB were available. This is a major problem with 1and1. It is one thing not to administer the systems; it is another not to give correct information about what they are selling.
I’m paying for 100GB and no one told me that the default size is 4GB.
Once again thanks very much.
Thank you for this neat, complete and useful explanation. It worked like a charm.
Thanks for your help again.
Glad it helped!
Hope others will find it useful too…
Hello Mik. I have dropped you an e-mail about the same issue. I am having issues while resizing the logical volume. Can you please reply to my e-mail with a solution?
I was sent an email that shows your drive tables to be as below:
/dev/mapper/VolGroup-lv_root  50GB Total  80.00% Used
/dev/mapper/VolGroup-lv_home 410GB Total   0.05% Used
I would suggest looking to first reduce the size of your home drive and then increase the root drive. If all free space is allocated, this will be the reason for the insufficient space error when running the
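In outline, that reduce-then-extend sequence looks like the sketch below. This is a reference only, under the assumption that /home is ext4 and can be unmounted: shrinking a filesystem can destroy data if the steps are run out of order or the target size is below the used space, so back up first and adjust the sizes and device names to your own system.

```shell
# 1. Unmount /home and force-check the filesystem (shrinking requires this).
umount /home
e2fsck -f /dev/mapper/VolGroup-lv_home

# 2. Shrink the FILESYSTEM first, then the logical volume, to an example
#    size of 300G (safely above the used space). The order matters.
resize2fs /dev/mapper/VolGroup-lv_home 300G
lvreduce -L 300G /dev/mapper/VolGroup-lv_home
mount /home

# 3. Give the freed space to root and grow its filesystem online.
lvextend -L +100G /dev/mapper/VolGroup-lv_root
resize2fs /dev/mapper/VolGroup-lv_root
```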
Thank you for your suggestions, but as you can see in my e-mail, I am totally unfamiliar with Linux command line environments, and that’s why I don’t know how I can reduce the space of the /home volume. Can you please help me in this matter? I can provide you TeamViewer access to my desktop so that you could do the needful procedure using the SSH shell from my computer. I am sure it is not more than a 10 minute job for you to reduce the /home volume and increase the /root volume.
Awaiting your earliest and positive answer on this matter.
Thanks for this post … really helped me out.
I tried to increase /dev/mapper/VolGroup-lv_root by 100 GB and was successful. Now the lvs command shows that /dev/mapper/VolGroup-lv_root is 150 GB, where it was 50 GB previously; I have added 100 GB to it. But the command xfs_growfs / is not working, as the SSH shell is giving me a bad command error. Please note that /dev/mapper/VolGroup-lv_root is mounted on /. I would be very grateful if you could help me out of this situation, because the remaining disk space is getting lower day by day.
Awaiting your reply.
I want to update you on one more thing. When I type mount in the SSH command shell, it shows me the following output.
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
Does this mean that the file system is ext4 and not xfs, and that’s why the xfs command is not working? Can you please type the command I need to put in the SSH shell to increase /dev/mapper/VolGroup-lv_root to a total of 150 GB, given that it is mounted on /?
Awaiting your reply.
I am very excited, as I have given the command resize2fs /dev/mapper/VolGroup-lv_root and it has done the job for me. I was very much afraid before running this command, but I am very glad that it finally worked for me.
How do I increase the size of /dev/md1, mounted on /?
This is at 100% now, with 3.7G.
It looks like this is not a logical volume. 1and1 is not helping in any way.
Thank you very much for this – I ran into an issue where I could not open phpMyAdmin, with Plesk saying it could not open the database (errno 38). It turns out this was the reason. I only have 3 sites on the server. Typical 1&1!
Thank you. Not a badly written post at all.
Very useful article, saved my day. Thanks a lot 😉
I’ve been baffled for a while as to why I was running out of space. This was a great article and helped me increase the space on the server – annoying that 1and1 customers have to do lots of the hard work for them. Thanks for this article; it is definitely bookmarked for future reference.
I hope someone is still picking up replies. I had the same issue with 1and1 back in 2012 and solved it in the way described in this article.
Now I have just started renting a 4TB server from 1and1, and they have set it up differently. There is an additional partition; see below:
PV         VG   Fmt  Attr PSize   PFree
/dev/sda3  vg00 lvm2 a--  <54.00g <3.00g
/dev/sdb1  hdd  lvm2 a--  <3.58t  <3.58
I have increased the LVM on sda3 to 100%, which contains my root var folder. This space is going to run out very soon.
1and1, I believe, deliberately placed all the free space on a separate partition! When I try to increase the LVM further, I get an error saying there is no free space available.
The partitions have been setup in two separate volume groups.
Can someone point me in the right direction to rectify this please?
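Not the author, but since the free space lives in a separate volume group (hdd), lvextend on vg00 cannot reach it directly. One non-destructive approach is to carve a logical volume out of the hdd group and mount it over the directory that is filling up. The sketch below uses assumed names and sizes; pick your own and copy the existing contents across before switching the mount over.

```shell
# Create an example 500G logical volume in the separate "hdd" volume group.
lvcreate -L 500G -n bigdata hdd

# Put a filesystem on it and mount it (add an /etc/fstab entry to persist).
mkfs.ext4 /dev/hdd/bigdata
mkdir -p /mnt/bigdata
mount /dev/hdd/bigdata /mnt/bigdata
```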
Many thanks, very good article.