No, and also no benchmarks.

Well, good on him for branching out. I don't suppose there are any test IPs floating around?

I'm going to stress test it.

Eh, those will pop up eventually. I was more interested in the traceroutes.
root@vps:/# traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 50.2.186.19 (50.2.186.19) 0.094 ms 0.038 ms 0.033 ms
2 23.90.60.57 (23.90.60.57) 0.382 ms 0.369 ms 0.359 ms
3 23.90.60.17 (23.90.60.17) 0.393 ms 0.332 ms 0.244 ms
4 lag-7-864.ear1.Dallas1.Level3.net (4.31.141.237) 0.689 ms 0.657 ms 0.818 ms
5 * * *
6 Google-level3-3x10G.Dallas.Level3.net (4.68.70.166) 47.819 ms 51.984 ms 51.769 ms
7 * * 72.14.233.67 (72.14.233.67) 13.745 ms
8 72.14.237.215 (72.14.237.215) 2.083 ms 2.022 ms 72.14.237.221 (72.14.237.221) 22.949 ms
9 216.239.47.121 (216.239.47.121) 10.313 ms 8.695 ms 209.85.243.178 (209.85.243.178) 8.679 ms
10 216.239.46.59 (216.239.46.59) 19.286 ms 19.166 ms 216.239.46.63 (216.239.46.63) 7.841 ms
11 * * *
12 google-public-dns-a.google.com (8.8.8.8) 8.049 ms 8.372 ms 8.174 ms
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 84.2906 s, 12.7 MB/s
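For reference on what that dd figure measures: `conv=fdatasync` forces a single flush at the end, so the result reflects real disk writeback rather than just the page cache. A quick sketch of the difference, shrunk to 64 MB so it finishes fast (GNU coreutils dd assumed; `ddtest` is just a scratch filename):

```shell
# Same write three ways; only the summary line is kept.
dd if=/dev/zero of=ddtest bs=64k count=1k 2>&1 | tail -n 1                 # no sync: mostly page cache, inflated
dd if=/dev/zero of=ddtest bs=64k count=1k conv=fdatasync 2>&1 | tail -n 1  # one flush at the end (as in the review)
dd if=/dev/zero of=ddtest bs=64k count=256 oflag=dsync 2>&1 | tail -n 1    # flush after every block: pessimistic
rm -f ddtest
```

The first number is usually much higher than the other two, which is why a dd test without `conv=fdatasync` or `oflag` is meaningless on a VPS.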
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 50.2.186.19 (50.2.186.19) 0.080 ms 0.035 ms 0.031 ms
2 23.90.60.57 (23.90.60.57) 0.409 ms 0.420 ms 0.413 ms
3 23.90.60.17 (23.90.60.17) 0.439 ms 0.462 ms 0.381 ms
4 lag-7-864.ear1.Dallas1.Level3.net (4.31.141.237) 1.091 ms 1.037 ms 1.177 ms
5 ae-1-60.edge2.Dallas1.Level3.net (4.69.145.11) 1.141 ms 0.961 ms *
6 Google-level3-3x10G.Dallas.Level3.net (4.68.70.166) 2.435 ms 1.492 ms 1.411 ms
7 72.14.233.67 (72.14.233.67) 1.331 ms 72.14.233.65 (72.14.233.65) 1.164 ms 72.14.233.67 (72.14.233.67) 4.858 ms
8 72.14.237.219 (72.14.237.219) 1.379 ms 72.14.237.221 (72.14.237.221) 1.448 ms 1.296 ms
9 216.239.47.121 (216.239.47.121) 8.101 ms 8.106 ms 209.85.243.178 (209.85.243.178) 7.991 ms
10 216.239.46.39 (216.239.46.39) 8.839 ms 216.239.46.63 (216.239.46.63) 8.845 ms 8.777 ms
11 * * *
12 google-public-dns-a.google.com (8.8.8.8) 9.138 ms 9.311 ms 9.413 ms
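To compare the two runs above more easily, here is a small helper (plain sh plus awk, nothing extra assumed; `avg_rtt` and `trace.txt` are made-up names) that averages the RTT samples on each hop of saved traceroute output:

```shell
# Average the per-hop RTT samples in a saved traceroute; hops that only
# returned "* * *" are skipped because they contain no "ms" samples.
avg_rtt() {
  awk '/ ms/ {
    n = 0; sum = 0
    for (i = 1; i < NF; i++)
      if ($(i + 1) == "ms") { sum += $i; n++ }
    if (n > 0) printf "hop %s: avg %.3f ms over %d samples\n", $1, sum / n, n
  }' "$1"
}

# Usage: traceroute 8.8.8.8 > trace.txt && avg_rtt trace.txt
```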
Ahh those speeds suck...

Quote:
I/O speed : 23.4 MB/s
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 84.2906 s, 12.7 MB/s
Did you mean this command, or something else?

Quote: Ahh those speeds suck...

How is the network?
wget -O /dev/null http://cachefly.cachefly.net/100mb.test
--2014-01-28 23:29:50-- http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)... 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'
100%[======================================>] 104,857,600 79.0M/s in 1.3s
2014-01-28 23:29:51 (79.0 MB/s) - `/dev/null' saved [104857600/104857600]
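Those wget numbers are self-consistent: 104,857,600 bytes over the reported 1.3 s is about 77 MiB/s, in the same ballpark as the 79.0M/s wget printed (wget rounds the elapsed time, hence the small gap). Checking with awk as a calculator:

```shell
# 100 MiB / 1.3 s, expressed in MiB/s.
awk 'BEGIN { printf "%.1f MiB/s\n", 104857600 / 1.3 / 1048576 }'   # -> 76.9 MiB/s
```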
Your VPS has 2GB RAM/2GB vRAM. The advertised specs are 2GB "Guaranteed" RAM/4GB vRAM.

Quote:
CPU model : Genuine Intel(R) CPU @ 2.00GHz
Number of cores : 4
CPU frequency : 2000.054 MHz
Total amount of ram : 2024 MB
Total amount of swap : 2024 MB
I/O speed : 23.4 MB/s
Code:
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 84.2906 s, 12.7 MB/s
Let me know if there are any other kinds of benchmarks/speed tests to try.
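One cheap test to add, since everything above is write-only: a read pass over the same kind of scratch file. This is only a sketch (64 MB, and the page cache will serve most of it since the file was just written, so treat the number as an upper bound):

```shell
# Write a scratch file, read it back, keep only the summary lines.
dd if=/dev/zero of=ddtest bs=64k count=1k conv=fdatasync 2>&1 | tail -n 1
dd if=ddtest of=/dev/null bs=64k 2>&1 | tail -n 1
rm -f ddtest
```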
I noticed the vSwap RAM discrepancy too, opened a ticket about it, and am currently waiting for a response. Doesn't seem like I'm the only one with low I/O either, judging from the comments on the LET post.

Quote: Your VPS has 2GB RAM/2GB vRAM. The advertised specs are 2GB "Guaranteed" RAM/4GB vRAM.
As for the disk speeds...that's a new node with "LSI HARDWARE RAID-10 SSD Cached Disk Space"? That server has problems...
I bet he doesn't know how Solus handles setting up vSwap in the package creation. He couldn't quite figure out how to set 100TB to 100TB instead of 97TB either.

Quote: I noticed the vSwap RAM discrepancy too, opened a ticket about it and currently waiting for a response. Doesn't seem like I'm the only one with low I/O either judging from the comments on the LET post.
Same time zone *high five*

Quote: Times in GMT+11/AEDT.
It's SSD cached, not "pure" SSD, but it's still really off for what a new, almost empty node with HW RAID-10 should be.

Quote: I am not a huge believer in the 'dd test' as a gold standard, but it seems really off for a box with RAID-10 SSDs.