Noticed that a thin volume had weird data usage according to LVM: 94% of the data space was in use, but inside the VM it said 25%. Learned of a command called fstrim, which walks a mounted filesystem and sends “discard” commands to the underlying storage for every block the filesystem isn’t actually using. Didn’t work at first. Not sure if the remount was sufficient, or if it was that MariaDB was shut down, or maybe that I removed the noatime setting for the FS, but after some tinkering it worked.
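Worth checking first: if the virtual disk itself doesn’t advertise discard support, fstrim can’t accomplish anything no matter what the filesystem says. lsblk can show this from inside the guest (here /dev/sdb is the disk holding the MariaDB data):

root@cluster1:~# lsblk --discard /dev/sdb

If the DISC-GRAN and DISC-MAX columns are 0, discards aren’t supported by the device at all. It won’t necessarily catch a hypervisor that accepts discards and quietly drops them (which, as it turns out, is what happened here), but it rules out the basic case. Here’s the whole session, tinkering included: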
root@cluster1:~# fstrim -av
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/sdb
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/: 74.8 MiB (78364672 bytes) trimmed on /dev/mapper/ubuntu--vg-ubuntu--lv
root@cluster1:~# fsck
fsck fsck.btrfs fsck.cramfs fsck.ext2 fsck.ext3 fsck.ext4 fsck.fat fsck.minix fsck.msdos fsck.vfat fsck.xfs
root@cluster1:~# fsck.ext4 /var/lib/mysql/
e2fsck 1.45.5 (07-Jan-2020)
fsck.ext4: Is a directory while trying to open /var/lib/mysql/
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
root@cluster1:~# fsck.ext4 /dev/sdb
e2fsck 1.45.5 (07-Jan-2020)
/dev/sdb is mounted.
e2fsck: Cannot continue, aborting.
root@cluster1:~# systemctl stop mysql
root@cluster1:~# umount /var/lib/mysql
root@cluster1:~# fsck.ext4 /dev/sdb
e2fsck 1.45.5 (07-Jan-2020)
/dev/sdb: clean, 481/1966080 files, 1872010/7864320 blocks
root@cluster1:~# vim /etc/fstab
root@cluster1:~# mount /var/lib/mysql/
root@cluster1:~# df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               415M     0  415M   0% /dev
tmpfs                              170M  1.1M  169M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   20G  7.4G   12G  40% /
tmpfs                              459M     0  459M   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              459M     0  459M   0% /sys/fs/cgroup
/dev/loop0                          56M   56M     0 100% /snap/core18/1997
/dev/sda2                          976M  199M  711M  22% /boot
/dev/loop5                          71M   71M     0 100% /snap/lxd/19647
/dev/loop4                          68M   68M     0 100% /snap/lxd/20326
/dev/loop7                          56M   56M     0 100% /snap/core18/2066
/dev/loop1                          33M   33M     0 100% /snap/snapd/12057
/dev/loop6                          33M   33M     0 100% /snap/snapd/12159
tmpfs                              102M     0  102M   0% /run/user/1000
/dev/sdb                            30G  6.6G   22G  24% /var/lib/mysql
root@cluster1:~# fstrim -v /var/lib/mysql
/var/lib/mysql: 22.9 GiB (24544501760 bytes) trimmed
root@cluster1:~# systemctl restart mysql
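(The /etc/fstab edit was just dropping the noatime option for /var/lib/mysql; the resulting line would have been something like this sketch, not copied from the machine:

/dev/sdb  /var/lib/mysql  ext4  defaults  0  2

ext4 also has a discard mount option that issues discards continuously as blocks are freed, but batched trimming with fstrim is generally cheaper.)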
But still no change in lvdisplay on the Proxmox host:
root@pve1:~# lvdisplay pve/vm-109-disk-1
  --- Logical volume ---
  LV Path                /dev/pve/vm-109-disk-1
  LV Name                vm-109-disk-1
  VG Name                pve
  LV UUID                ATDHHH-DXGG-KqtR-sLA6-XeUW-xa51-HLXxId
  LV Write Access        read/write
  LV Creation host, time pve1, 2020-09-10 20:25:44 +0200
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                30.00 GiB
  Mapped size            93.87%
  Current LE             7680
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:11
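For a quicker read than the full lvdisplay dump, lvs shows the same mapped percentage in its Data% column, and pointing it at the pool shows how full the thin pool itself is:

root@pve1:~# lvs pve/vm-109-disk-1
root@pve1:~# lvs pve/data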
Damn it… This took a while. Apparently the big issue was that the VM definition in Proxmox had “discard=0” configured for that hard drive, so the guest’s discards were never passed down to the thin pool. I had to change that, and then fstrim -av worked a lot better. It ran way slower, of course, since it actually ended up doing work.
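The fix is the Discard checkbox on the disk under the VM’s Hardware tab, or via the CLI something along these lines (the scsi0 slot and local-lvm storage name are assumptions; qm config 109 shows the real drive line, and qm set replaces all of its options, so copy the rest of them over too):

root@pve1:~# qm set 109 --scsi0 local-lvm:vm-109-disk-1,discard=on

The VM needs a full power-off and start (not just a reboot from inside the guest) before the changed drive option takes effect. After that: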
root@cluster1:~# fstrim -av
/var/lib/mysql: 22.9 GiB (24514723840 bytes) trimmed on /dev/sdb
/boot: 676.1 MiB (708972544 bytes) trimmed on /dev/sda2
/: 11.9 GiB (12730437632 bytes) trimmed on /dev/mapper/ubuntu--vg-ubuntu--lv
And the result:
root@pve1:~# lvdisplay pve/vm-109-disk-1
  --- Logical volume ---
  LV Path                /dev/pve/vm-109-disk-1
  LV Name                vm-109-disk-1
  VG Name                pve
  LV UUID                ATDHHH-DXGG-KqtR-sLA6-XeUW-xa51-HLXxId
  LV Write Access        read/write
  LV Creation host, time pve1, 2020-09-10 20:25:44 +0200
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                30.00 GiB
  Mapped size            23.40%
  Current LE             7680
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:11
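To keep the mapped size from creeping back up, the trim can be made periodic. Ubuntu 20.04 ships a weekly fstrim.timer with util-linux (it may already be enabled):

root@cluster1:~# systemctl enable --now fstrim.timer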