To list the current storage devices and partitions, run:
lsblk -o NAME,MAJ:MIN,RM,SIZE,TYPE,FSTYPE,MOUNTPOINT
NAME          MAJ:MIN RM   SIZE TYPE  FSTYPE          MOUNTPOINT
sda             8:0    0 232.9G disk  isw_raid_member
└─md126         9:126  0 465.8G raid0
  ├─md126p1   259:0    0   120G md    xfs             /
  ├─md126p2   259:1    0   1.9G md    ext4
  └─md126p3   259:2    0 343.9G md    xfs             /home/VMsRepository
sdb             8:16   0 232.9G disk  isw_raid_member
└─md126         9:126  0 465.8G raid0
  ├─md126p1   259:0    0   120G md    xfs             /
  ├─md126p2   259:1    0   1.9G md    ext4
  └─md126p3   259:2    0 343.9G md    xfs             /home/VMsRepository
sdc             8:32   0 465.8G disk  isw_raid_member
└─md124         9:124  0 931.5G raid0
  └─md124p1   259:3    0 931.5G md    bcache
    └─bcache0 252:0    0 931.5G disk  xfs             /home
sdd             8:48   0 111.8G disk
├─sdd1          8:49   0   500M part  vfat            /boot/efi
├─sdd2          8:50   0     1G part  ext4            /boot
├─sdd3          8:51   0   7.9G part  swap            [SWAP]
└─sdd4          8:52   0 102.5G part  bcache
  └─bcache0   252:0    0 931.5G disk  xfs             /home
sde             8:64   0 465.8G disk  isw_raid_member
└─md124         9:124  0 931.5G raid0
  └─md124p1   259:3    0 931.5G md    bcache
    └─bcache0 252:0    0 931.5G disk  xfs             /home
sdk             8:160  0 931.5G disk
└─sdk1          8:161  0 931.5G part  ext4            /run/media/Backup01
sr0            11:0    1  1024M rom
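In summary: the two SSDs (sda, sdb) form the md126 RAID0 that holds the system, while the two SATA disks (sdc, sde) form the md124 RAID0; its partition md124p1 is the bcache backing device, cached by the SSD partition sdd4 and exposed as bcache0 mounted on /home. The state and cache mode of the bcache device can be checked through its standard sysfs interface (paths assume the bcache0 device shown above):

# cat /sys/block/bcache0/bcache/state
# cat /sys/block/bcache0/bcache/cache_mode

The first file reports something like "clean" or "dirty"; the second shows the available modes with the active one in brackets, e.g. "writethrough [writeback] writearound none".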
RAID0 configuration with the SATA disks:
# mdadm --detail /dev/md124
/dev/md124:
         Container : /dev/md/imsm0, member 0
        Raid Level : raid0
        Array Size : 976768000 (931.52 GiB 1000.21 GB)
      Raid Devices : 2
     Total Devices : 2

             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 128K

Consistency Policy : none

              UUID : 29a34795:f040ca44:6d80b65e:22c429a8
    Number   Major   Minor   RaidDevice State
       1       8       64        0      active sync   /dev/sde
       0       8       32        1      active sync   /dev/sdc

# mdadm --detail /dev/md125
/dev/md125:
           Version : imsm
        Raid Level : container
     Total Devices : 2
   Working Devices : 2

              UUID : c0bdc8e8:5bba49c4:0467a19c:d0a09f7a
     Member Arrays : /dev/md/Volume0_0

    Number   Major   Minor   RaidDevice
       -       8       64        -        /dev/sde
       -       8       32        -        /dev/sdc
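Both arrays use Intel's IMSM firmware-RAID metadata, which is why each raid0 volume lives inside a container device (md125 here). The per-disk metadata can also be inspected directly; generic commands, output not reproduced here:

# mdadm --examine /dev/sdc
# mdadm --detail --scan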
And the RAID0 configuration with the SSD disks:
# mdadm --detail /dev/md126
/dev/md126:
         Container : /dev/md/imsm1, member 0
        Raid Level : raid0
        Array Size : 488391680 (465.77 GiB 500.11 GB)
      Raid Devices : 2
     Total Devices : 2

             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 128K

Consistency Policy : none

              UUID : dc00ef36:c7b058c3:a6598f75:06f6e016
    Number   Major   Minor   RaidDevice State
       1       8        0        0      active sync   /dev/sda
       0       8       16        1      active sync   /dev/sdb

# mdadm --detail /dev/md127
/dev/md127:
           Version : imsm
        Raid Level : container
     Total Devices : 2
   Working Devices : 2

              UUID : 0125bb79:4caae511:15023fbb:4006b23c
     Member Arrays : /dev/md/SystemRAID0

    Number   Major   Minor   RaidDevice
       -       8        0        -        /dev/sda
       -       8       16        -        /dev/sdb
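To keep the array names stable across reboots, the scan output can be recorded in mdadm's configuration file; a minimal sketch, assuming a Fedora/RHEL layout (on Debian/Ubuntu the path is /etc/mdadm/mdadm.conf instead):

# mdadm --detail --scan >> /etc/mdadm.conf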
Comparing the two RAID0 arrays:
First, on the SSD RAID:
# sudo fio --name=randread --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=0 --size=512M --numjobs=4 --runtime=240 --group_reporting
randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=16
...
fio-3.7
Starting 4 processes
randread: Laying out IO file (1 file / 512MiB)
randread: Laying out IO file (1 file / 512MiB)
randread: Laying out IO file (1 file / 512MiB)
randread: Laying out IO file (1 file / 512MiB)
Jobs: 4 (f=4): [r(4)][100.0%][r=153MiB/s,w=0KiB/s][r=39.1k,w=0 IOPS][eta 00m:00s]
randread: (groupid=0, jobs=4): err= 0: pid=19959: Sun May 26 20:38:56 2019
   read: IOPS=40.6k, BW=159MiB/s (166MB/s)(2048MiB/12902msec)
    slat (usec): min=73, max=8454, avg=94.76, stdev=28.83
    clat (usec): min=2, max=10144, avg=1477.35, stdev=157.81
     lat (usec): min=87, max=10244, avg=1572.52, stdev=164.40
    clat percentiles (usec):
     |  1.00th=[ 1270],  5.00th=[ 1303], 10.00th=[ 1336], 20.00th=[ 1385],
     | 30.00th=[ 1401], 40.00th=[ 1434], 50.00th=[ 1467], 60.00th=[ 1483],
     | 70.00th=[ 1516], 80.00th=[ 1565], 90.00th=[ 1631], 95.00th=[ 1696],
     | 99.00th=[ 1860], 99.50th=[ 1958], 99.90th=[ 2278], 99.95th=[ 2966],
     | 99.99th=[ 9765]
   bw (  KiB/s): min=38320, max=42080, per=25.03%, avg=40680.80, stdev=871.14, samples=100
   iops        : min= 9580, max=10520, avg=10170.18, stdev=217.76, samples=100
  lat (usec)   : 4=0.01%, 10=0.01%, 20=0.01%, 100=0.01%, 250=0.01%
  lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=99.59%, 4=0.38%, 10=0.01%, 20=0.01%
  cpu          : usr=4.53%, sys=14.80%, ctx=524767, majf=0, minf=111
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=524288,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=159MiB/s (166MB/s), 159MiB/s-159MiB/s (166MB/s-166MB/s), io=2048MiB (2147MB), run=12902-12902msec

Disk stats (read/write):
    md126: ios=520885/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=262144/0, aggrmerge=0/0, aggrticks=21902/0, aggrin_queue=27, aggrutil=99.02%
  sdb: ios=262144/0, merge=0/0, ticks=22610/0, in_queue=44, util=99.02%
  sda: ios=262144/0, merge=0/0, ticks=21195/0, in_queue=10, util=97.82%
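Note that the job runs with --direct=0, so reads can be served from the Linux page cache. To keep the two runs comparable, the page cache can be dropped between them; a standard step, not shown in the original session:

# sync
# echo 3 > /proc/sys/vm/drop_caches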
Now on the SATA RAID with bcache:
# sudo fio --name=randread --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=0 --size=512M --numjobs=4 --runtime=240 --group_reporting
randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=16
...
fio-3.7
Starting 4 processes
randread: Laying out IO file (1 file / 512MiB)
randread: Laying out IO file (1 file / 512MiB)
randread: Laying out IO file (1 file / 512MiB)
randread: Laying out IO file (1 file / 512MiB)
Jobs: 4 (f=4): [r(4)][100.0%][r=1164KiB/s,w=0KiB/s][r=291,w=0 IOPS][eta 00m:00s]
randread: (groupid=0, jobs=4): err= 0: pid=20549: Sun May 26 20:44:54 2019
   read: IOPS=291, BW=1168KiB/s (1196kB/s)(274MiB/240041msec)
    slat (usec): min=80, max=194852, avg=13690.49, stdev=10673.08
    clat (usec): min=5, max=610985, avg=205391.13, stdev=44617.88
     lat (msec): min=27, max=635, avg=219.08, stdev=46.30
    clat percentiles (msec):
     |  1.00th=[  124],  5.00th=[  142], 10.00th=[  155], 20.00th=[  169],
     | 30.00th=[  180], 40.00th=[  190], 50.00th=[  201], 60.00th=[  211],
     | 70.00th=[  224], 80.00th=[  241], 90.00th=[  264], 95.00th=[  284],
     | 99.00th=[  338], 99.50th=[  355], 99.90th=[  401], 99.95th=[  430],
     | 99.99th=[  575]
   bw (  KiB/s): min=   80, max=  448, per=24.99%, avg=291.67, stdev=44.48, samples=1920
   iops        : min=   20, max=  112, avg=72.88, stdev=11.12, samples=1920
  lat (usec)   : 10=0.01%
  lat (msec)   : 50=0.01%, 100=0.07%, 250=85.32%, 500=14.56%, 750=0.02%
  cpu          : usr=0.08%, sys=0.39%, ctx=70109, majf=0, minf=109
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=70076,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=1168KiB/s (1196kB/s), 1168KiB/s-1168KiB/s (1196kB/s-1196kB/s), io=274MiB (287MB), run=240041-240041msec

Disk stats (read/write):
  bcache0: ios=70048/212, merge=0/0, ticks=954259/668, in_queue=954927, util=28.96%, aggrios=35128/266, aggrmerge=0/6, aggrticks=206/200, aggrin_queue=303, aggrutil=0.21%
    md124: ios=70028/331, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=35014/155, aggrmerge=0/14, aggrticks=477297/315, aggrin_queue=460121, aggrutil=21.90%
  sde: ios=33713/152, merge=0/13, ticks=448824/272, in_queue=432278, util=20.81%
  sdc: ios=36316/158, merge=0/15, ticks=505771/359, in_queue=487965, util=21.90%
  sdd: ios=229/202, merge=0/13, ticks=412/401, in_queue=606, util=0.21%
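The gap is stark: roughly 40.6k IOPS on the SSD array versus 291 IOPS here. The disk stats show why: almost all reads went to the SATA spindles (sde, sdc), while the caching SSD partition (sdd) served only 229 reads, so this random-read workload is largely missing the cache. How much bcache is absorbing can be checked through its sysfs statistics (standard bcache interface):

# cat /sys/block/bcache0/bcache/stats_total/cache_hits
# cat /sys/block/bcache0/bcache/stats_total/cache_misses
# cat /sys/block/bcache0/bcache/stats_total/cache_hit_ratio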
The same comparison can be made with the gnome-disks benchmarking utility.
First, with the cache empty:
After running the previous test, the cache already holds data and the access time drops dramatically:
Now we compare the same device with the cache disabled:
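To run this comparison from the command line, one option, assuming the standard bcache sysfs interface and the bcache-tools package, is to detach the cache set from the backing device, test, and re-attach it afterwards by its UUID (reported by bcache-super-show as cset.uuid; <cset-uuid> below is a placeholder for that value):

# echo 1 > /sys/block/bcache0/bcache/detach
# bcache-super-show /dev/sdd4 | grep cset.uuid
# echo <cset-uuid> > /sys/block/bcache0/bcache/attach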
Regarding boot time:
With the current configuration of HDDs in RAID0 with bcache on an SSD, the boot time is:
# systemd-analyze
Startup finished in 1.824s (kernel) + 1.994s (initrd) + 58.781s (userspace) = 1min 2.600s
graphical.target reached after 58.764s in userspace
A second boot confirms similar results:
# systemd-analyze
Startup finished in 1.793s (kernel) + 1.874s (initrd) + 59.652s (userspace) = 1min 3.320s
graphical.target reached after 59.584s in userspace
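Almost all of that minute is spent in userspace. To see which units are responsible, systemd-analyze offers further views:

# systemd-analyze blame
# systemd-analyze critical-chain

blame lists units sorted by initialization time; critical-chain shows the dependency chain that delayed the default target.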
To get the result in graphical form, you can run:
# systemd-analyze plot > Pictures/Bootcharts/plot-20190603-1.svg
or alternatively:
/usr/lib/systemd/systemd-bootchart, which creates an SVG file in /run/log.
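To sample an entire boot, systemd-bootchart is typically started as PID 1 by adding it to the kernel command line; a sketch for Fedora-like systems using grubby, assuming the systemd-bootchart package is installed:

# grubby --update-kernel=ALL --args="init=/usr/lib/systemd/systemd-bootchart"

It then hands control to systemd and writes a bootchart-*.svg under /run/log on the next boot.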
This article describes several options for reducing the boot time.