Hello everyone,
I recently installed Alpine on a dedicated server, using LVM for the
root and swap partitions. After some benchmarks, I noticed that the hard
disk speed of my new Alpine install was noticeably slower than on other
distros (Arch Linux and Debian in this case).
I then ran more benchmarks using `fio`. Here is my methodology, along
with the benchmark results:
# Benchmark commands:
Read:
fio --name=alpine-read --ioengine=posixaio --rw=read --bs=4k
--numjobs=1 --size=4g --iodepth=1 --runtime=60 --time_based
--end_fsync=1
Write:
fio --name=alpine-write --ioengine=posixaio --rw=write --bs=4k
--numjobs=1 --size=4g --iodepth=1 --runtime=60 --time_based
--end_fsync=1
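As a side note on methodology: on repeated runs, previously cached data
can inflate the read numbers, so I drop the page cache between runs.
A minimal sketch (root required; this is my own habit, not something the
fio commands above do for you):

```shell
# Flush dirty pages to disk, then drop the page cache, dentries and
# inodes, so the read benchmark hits the disk rather than RAM.
sync
echo 3 > /proc/sys/vm/drop_caches
```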
# Benchmark setups:
## Server:
Hardware specs:
- GPT on a BIOS-only motherboard, with an `lvmsys`
Alpine install
- Hardware RAID1 - HP Smart Array P222
Alpine - LVM:
READ: bw=154MiB/s (162MB/s)
WRITE: bw=86.3MiB/s (90.5MB/s)
Archlinux - LVM (Alpine LVM partitions mounted on archlinux liveCD):
READ: bw=236MiB/s (247MB/s)
WRITE: bw=126MiB/s (132MB/s)
## VM (VirtualBox):
Hardware specs:
- Basic `lvmsys` Alpine install on a laptop with an NVMe SSD
Alpine - LVM:
READ: bw=96.2MiB/s (101MB/s)
WRITE: bw=93.5MiB/s (98.0MB/s)
Archlinux - LVM (Alpine LVM partitions mounted on archlinux liveCD):
READ: bw=354MiB/s (371MB/s)
WRITE: bw=237MiB/s (248MB/s)
As you can see, Alpine's performance is quite poor. Is this expected
behavior?
If it isn't, where should I look for a fix? The kernel config? LVM built
against musl?
--
Sevan
Hello,
On Wednesday, December 2, 2020 11:21:30 AM MST Sevan wrote:
> Hello everyone,
>
> I recently installed Alpine on a dedicated server, using LVM for the
> root and swap partitions. After some benchmarks, I noticed that the hard
> disk speed of my new Alpine install was noticeably slower than on other
> distros (Arch Linux and Debian in this case).
>
> I then ran more benchmarks using `fio`. Here is my methodology, along
> with the benchmark results:
>
> # Benchmark commands:
> Read:
> fio --name=alpine-read --ioengine=posixaio --rw=read --bs=4k
> --numjobs=1 --size=4g --iodepth=1 --runtime=60 --time_based
> --end_fsync=1
> Write:
> fio --name=alpine-write --ioengine=posixaio --rw=write --bs=4k
> --numjobs=1 --size=4g --iodepth=1 --runtime=60 --time_based
> --end_fsync=1
>
> # Benchmark setups:
>
> ## Server:
> Hardware specs:
> - GPT on a BIOS-only motherboard, with an `lvmsys`
> Alpine install
> - Hardware RAID1 - HP Smart Array P222
>
> Alpine - LVM:
> READ: bw=154MiB/s (162MB/s)
> WRITE: bw=86.3MiB/s (90.5MB/s)
>
> Archlinux - LVM (Alpine LVM partitions mounted on archlinux liveCD):
> READ: bw=236MiB/s (247MB/s)
> WRITE: bw=126MiB/s (132MB/s)
>
> ## VM (VirtualBox):
> Hardware specs:
> - Basic `lvmsys` Alpine install on a laptop with an NVMe SSD
>
> Alpine - LVM:
> READ: bw=96.2MiB/s (101MB/s)
> WRITE: bw=93.5MiB/s (98.0MB/s)
>
> Archlinux - LVM (Alpine LVM partitions mounted on archlinux liveCD):
> READ: bw=354MiB/s (371MB/s)
> WRITE: bw=237MiB/s (248MB/s)
>
> As you can see, Alpine's performance is quite poor. Is this expected
> behavior?
>
> If it isn't, where should I look for a fix? The kernel config? LVM built
> against musl?
Two things you may want to try.
First, make sure the write cache is enabled: sdparm -s WCE=1 /dev/sda
(or whatever your RAID volume is).
Secondly, Alpine uses the mq-deadline scheduler by default on rotational
media, which trades linear read/write throughput for better seek
latencies. You might try the kyber scheduler instead:

echo kyber > /sys/block/sda/queue/scheduler
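Note that the echo above only lasts until reboot. A minimal sketch of
making it persistent, assuming the device is sda and using an OpenRC
local.d script (the file name is my choice; a udev/mdev rule would work
just as well):

```shell
#!/bin/sh
# /etc/local.d/iosched.start -- hypothetical boot script; enable the
# "local" service with: rc-update add local default

# The active scheduler is shown in brackets, e.g. "[mq-deadline] kyber".
cat /sys/block/sda/queue/scheduler

# Switch the queue to kyber (takes effect immediately; needs root).
echo kyber > /sys/block/sda/queue/scheduler
```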
Ariadne