
Michael's Blog


Storm VPS Lustre Benchmarks

2013-04-11 13:56:00 by Michael
Tags: linux sysadmin lustre storage

After reading about various cluster file systems, I decided to set up a small Lustre cluster on Storm VPS instances. All nodes have the same hardware configuration and use a 50 GB SAN volume, connected through iSCSI, as the Lustre block device. Specs are as follows.

Node configuration:

OS: CentOS 6.3 x86_64
Kernel: 2.6.32-279.19.1.el6_lustre.x86_64
RAM: 3556 MB (Storm 4 GB)
Primary Disk: 300 GB virtual disk
Secondary Disk (iSCSI): 50 GB SAN volume
CPU: 2 cores, Intel(R) Xeon(R) E3-1220 V2 @ 3.10 GHz

Lustre configuration: 1 management server (MGS), 1 metadata server (MDS), 1 object storage server (OSS). LNET was configured to use a private network interface.
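The three server roles were formatted and mounted roughly as follows. This is a sketch, not a copy-paste recipe: the hostname `mgs1`, the device paths, and the mount points are assumptions specific to my setup, and your MGS NID will differ.

```shell
# Restrict LNET to the private interface (assumed to be eth1) on every node:
echo 'options lnet networks=tcp0(eth1)' > /etc/modprobe.d/lustre.conf

# On the MGS node: format and mount the management target (MGT).
mkfs.lustre --mgs /dev/sdb
mkdir -p /mnt/mgt && mount -t lustre /dev/sdb /mnt/mgt

# On the MDS node: format the metadata target (MDT), pointing at the MGS NID.
mkfs.lustre --mdt --fsname=lustre --mgsnode=mgs1@tcp0 --index=0 /dev/sdb
mkdir -p /mnt/mdt && mount -t lustre /dev/sdb /mnt/mdt

# On the OSS node: format the first object storage target (OST)
# on the iSCSI-attached SAN volume.
mkfs.lustre --ost --fsname=lustre --mgsnode=mgs1@tcp0 --index=0 /dev/sdb
mkdir -p /mnt/ost0 && mount -t lustre /dev/sdb /mnt/ost0
```

Mounting a target with `mount -t lustre` is what actually starts the corresponding Lustre service on that node.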

Disk performance was tested with the sgpdd-survey script from the Lustre IOKit. Write speed averages around 35-40 MB/s.
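The survey below was produced with an invocation along these lines. sgpdd-survey is driven by environment variables rather than flags; the parameter values here are reconstructed from the output header (8388608K total, rsz 1024, crg 1-8, thr 1-128), so treat them as approximate.

```shell
# WARNING: sgpdd-survey writes directly to the raw device and
# destroys any data on it. size is in MB (8192 MB = 8388608K).
size=8192 rszlo=1024 rszhi=1024 \
    crglo=1 crghi=8 thrlo=1 thrhi=128 \
    scsidevs=/dev/sda ./sgpdd-survey
```

The script sweeps the crg (concurrent regions) and thr (threads) combinations and prints one summary line per combination, as shown below.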

Wed Apr 10 10:29:39 EDT 2013 sgpdd-survey on /dev/sda from oss1.watters.ws
total_size  8388608K rsz 1024 crg     1 thr     1 write   49.32 MB/s     1 x  49.32 =   49.32 MB/s read   68.15 MB/s     1 x  68.15 =   68.15 MB/s
total_size  8388608K rsz 1024 crg     1 thr     2 write   77.15 MB/s     1 x  77.15 =   77.15 MB/s read   92.85 MB/s     1 x  92.85 =   92.85 MB/s
total_size  8388608K rsz 1024 crg     1 thr     8 write   36.15 MB/s     1 x  36.14 =   36.14 MB/s read   94.08 MB/s     1 x  94.09 =   94.09 MB/s
total_size  8388608K rsz 1024 crg     1 thr    16 write   35.84 MB/s     1 x  35.85 =   35.85 MB/s read  101.59 MB/s     1 x 101.59 =  101.59 MB/s
total_size  8388608K rsz 1024 crg     2 thr     2 write   35.34 MB/s     2 x  17.67 =   35.34 MB/s read   67.38 MB/s     2 x  33.69 =   67.39 MB/s
total_size  8388608K rsz 1024 crg     2 thr     4 write   39.09 MB/s     2 x  19.55 =   39.10 MB/s read   79.20 MB/s     2 x  39.60 =   79.19 MB/s
total_size  8388608K rsz 1024 crg     2 thr     8 write   40.40 MB/s     2 x  20.20 =   40.40 MB/s read   98.16 MB/s     2 x  49.09 =   98.17 MB/s
total_size  8388608K rsz 1024 crg     2 thr    16 write   37.73 MB/s     2 x  18.86 =   37.73 MB/s read   99.31 MB/s     2 x  49.66 =   99.32 MB/s
total_size  8388608K rsz 1024 crg     2 thr    32 write   38.08 MB/s     2 x  19.04 =   38.07 MB/s read   97.30 MB/s     2 x  48.66 =   97.31 MB/s
total_size  8388608K rsz 1024 crg     4 thr     4 write   38.38 MB/s     4 x   9.59 =   38.38 MB/s read   98.17 MB/s     4 x  24.55 =   98.19 MB/s
total_size  8388608K rsz 1024 crg     4 thr     8 write   38.25 MB/s     4 x   9.57 =   38.26 MB/s read  100.06 MB/s     4 x  25.01 =  100.06 MB/s
total_size  8388608K rsz 1024 crg     4 thr    16 write   39.42 MB/s     4 x   9.85 =   39.41 MB/s read   99.96 MB/s     4 x  25.00 =   99.98 MB/s
total_size  8388608K rsz 1024 crg     4 thr    32 write   39.43 MB/s     4 x   9.86 =   39.44 MB/s read   99.93 MB/s     4 x  24.99 =   99.95 MB/s
total_size  8388608K rsz 1024 crg     4 thr    64 write   38.22 MB/s     4 x   9.56 =   38.22 MB/s read   97.80 MB/s     4 x  24.45 =   97.81 MB/s
total_size  8388608K rsz 1024 crg     8 thr     8 write   38.73 MB/s     8 x   4.84 =   38.76 MB/s read   87.71 MB/s     8 x  10.97 =   87.74 MB/s
total_size  8388608K rsz 1024 crg     8 thr    16 write   39.70 MB/s     8 x   4.96 =   39.67 MB/s read   81.09 MB/s     8 x  10.14 =   81.10 MB/s
total_size  8388608K rsz 1024 crg     8 thr    32 write   43.40 MB/s     8 x   5.43 =   43.41 MB/s read   81.21 MB/s     8 x  10.16 =   81.25 MB/s
total_size  8388608K rsz 1024 crg     8 thr    64 write   38.88 MB/s     8 x   4.86 =   38.91 MB/s read   67.10 MB/s     8 x   8.39 =   67.14 MB/s
total_size  8388608K rsz 1024 crg     8 thr   128 write   42.19 MB/s     8 x   5.27 =   42.19 MB/s read   65.92 MB/s     8 x   8.24 =   65.92 MB/s

IOPS performance was tested with iozone; the results are below.
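Before running iozone, the file system has to be mounted on a client. A sketch of the client side, reusing the assumed MGS NID `mgs1@tcp0` from my setup (the iozone command line matches the one recorded in the output header below):

```shell
# Mount the Lustre file system on the client node.
mkdir -p /mnt/lustre
mount -t lustre mgs1@tcp0:/lustre /mnt/lustre

# Run the throughput test from inside the mount:
# -l 32       32 processes
# -O          report results in operations per second
# -i 0 -i 1 -i 2   write/rewrite, read/reread, and random read/write tests
# -e          include fsync in write timing
# -+n         no retests
# -r 4K -s 4G 4 KB records, 4 GB file per process
cd /mnt/lustre
iozone -l 32 -O -i 0 -i 1 -i 2 -e -+n -r 4K -s 4G
```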

	OPS Mode. Output is in operations per second.
	Include fsync in write timing
	No retest option selected
        Record Size 4 KB
        File size set to 4194304 KB
        Command line used: iozone -l 32 -O -i 0 -i 1 -i 2 -e -+n -r 4K -s 4G
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Min process = 32
        Max process = 32
        Throughput test with 32 processes
        Each process writes a 4194304 Kbyte file in 4 Kbyte records

        Children see throughput for 32 initial writers  =   27764.87 ops/sec
        Parent sees throughput for 32 initial writers   =   26692.16 ops/sec
        Min throughput per process                      =     840.07 ops/sec
        Max throughput per process                      =     903.35 ops/sec
        Avg throughput per process                      =     867.65 ops/sec
        Min xfer                                        =  975918.00 ops

        Children see throughput for 32 readers          =   26758.37 ops/sec
        Parent sees throughput for 32 readers           =   26755.12 ops/sec
        Min throughput per process                      =     448.79 ops/sec
        Max throughput per process                      =    1372.74 ops/sec
        Avg throughput per process                      =     836.20 ops/sec
        Min xfer                                        =  342845.00 ops

As you can see, Lustre is a relatively high-performance file system that scales easily to petabytes of storage. Adding more space is as simple as building a new object storage server, formatting an OST with mkfs.lustre, and mounting it.
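Expanding the file system can be sketched like this; again, `mgs1@tcp0` and the device path are assumptions from my setup, and `--index=1` assumes the existing OST is index 0.

```shell
# On the new OSS node: format the extra volume as the next OST
# and mount it to bring it into service.
mkfs.lustre --ost --fsname=lustre --mgsnode=mgs1@tcp0 --index=1 /dev/sdb
mkdir -p /mnt/ost1 && mount -t lustre /dev/sdb /mnt/ost1

# On a client, the added capacity shows up immediately:
lfs df -h
```

New files are striped across the available OSTs, so the extra space is usable without any client-side reconfiguration.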