# Scaleway Cloud Launches x86-64 C2 Servers: Dedicated Avoton C2550 and C2750



## DomainBop

Scaleway added x86_64 dedicated Avotons to their cloud mix today.  All offerings are dedicated bare metal servers with unlimited transfer and LSSD storage (the €23.99 offering also includes 250GB Direct SSD storage).


New offerings:


€11.99/month
Avoton C2550
4 Dedicated x64 Cores
8GB Memory
50GB SSD Disk
1 Flexible public IPv4
300Mbit/s Internet bandwidth
2.5Gbit/s Internal bandwidth


€17.99/month
Avoton C2750
8 Dedicated x64 Cores
16GB Memory
50GB SSD Disk
1 Flexible public IPv4
500Mbit/s Internet bandwidth
5Gbit/s Internal bandwidth


€23.99/month
Avoton C2750
8 Dedicated x64 Cores
32GB Memory
50GB SSD Disk
250GB Direct SSD Disk
1 Flexible public IPv4
800Mbit/s Internet bandwidth
5Gbit/s Internal bandwidth


Blog announcement: https://blog.scaleway.com/2016/03/08/c2-insanely-affordable-x64-servers/


----------



## willie

I'm surprised it turns out to be an Avoton and that it's an x86 at all... I guess it's an attractive offer in line with Kimsufi and the cheap Online.net and Hetzner dedis, especially with the hourly billing, but if they're going x86 I'd have hoped for some faster cpus like the new low power Xeons.  And the VPS offer isn't even that great, compared with OVH.  Oh well.


Added: FWIW, I think the 32GB config is the most attractive one, because of the local disk on top of the extra RAM.  I also saw something about IPv6 being available for these servers, but haven't confirmed it.


----------



## DomainBop

willie said:


> And the VPS offer isn't even that great, compared with OVH.



For the €2.99 offerings: Scaleway's 2 (Avoton) cores are dedicated, OVH's 1 (E5) core is shared.  UnixBench is about the same (1500-1600), both are KVM.  Scaleway comes with a 50GB network drive and 200 Mbps; OVH with a 10GB local drive and 100 Mbps.


CPU power on this new VPS offering is about double that of their existing 4-core ARM server.


I was hoping to see a 64-bit ARM offering, but their new Avoton offers are attractive.


----------



## willie

I don't see any claim in the Scaleway blog post that the VPS cores are dedicated, and that doesn't fit the hardware picture we've seen.  The vps is on a c2750 which has 8 cores, so if the cores are dedicated there are 4 vps per box, and since the vps has 2gb of ram that means there's 8gb of ram in the box.  But the c2750 boxes we've heard about have 16gb or 32gb of ram, and in fact the ones with LSSD available as bare metal all have 32gb of ram.


If the VPS are on the same servers (c2750, 32gb ram, 200gb LSSD) and the RAM is not oversold, then there are 15 or 16 vps per box, making the cpu and disk definitely oversold (note the Avoton vps is supposed to have local rather than network storage).  The blog post also says the VPS are for light usage, testing etc.  Certainly, 1x OVH E5 core is faster than 2x Avoton cores when sharing is not taken into account.  I'd expect the typical situation is that you can get close to 100% cpu on either box, but only for limited bursts of computation.  That works ok for many things.  If you need more, get a dedi.
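As a rough sketch of that RAM arithmetic (my assumption: 2GB per VPS, with a little RAM reserved for the host explaining "15 or 16" rather than exactly 16):

```shell
# Max guests per box if RAM is not oversold, at 2GB per VPS.
# Host overhead would shave one off the 32GB figure in practice.
echo "$(( 32 / 2 )) VPS on a 32GB box"   # 16
echo "$(( 16 / 2 )) VPS on a 16GB box"   # 8
```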


As for total throughput: if you get 2x Avoton cores at 1/16 load, that's 1/8 of an Avoton core (though you can probably use more than that if your neighbors don't compute much).  That's about 1/32 of an E3 core (maybe 1/20th of the cheaper OVH E5 core).  By comparison the four ARM cores in the C1 are comparable to maybe 1/2 of an E3 core, so you get much more compute power from a C1 than the C2 VPS if your load parallelizes and is 24/7.


Interestingly, the C1, C2M dedi, and midrange Hetzner auction servers (i7-3770) are all on a comparable footing in CPU per dollar, with Scaleway's hourly billing being nicer for short term uses.  I plan to snag a C2M or C2L (if any are still available) and run some benchmarks soon.


----------



## drmike

Still invite only?


Where did the ARM units go?  Oh, I see them, nested on an interior page...


I like the new faster NICs with much more throughput.  Good for those of us with multiple servers working together in the same facility.


I see they have monkeyed with their upstreams; is the network still not so good at Scaleway?


----------



## graeme

I agree with @willie that the €24/32GB config is the most attractive. More for your money than the nearest OVH VPSs (Cloud VPS3, SP-30 or EG-7), *except* that the OVH cloud VPSs all give you triple-redundant Ceph storage. Isn't OVH also supposed to have a better network?


Incidentally, is there an appropriate forum here to ask for opinions on how hosts compare, or is that more of a WHT thing?


----------



## willie

Scaleway internet seems reasonably ok to me.  If there's a C1 benchmark you want, I can run it, I've had a C1 for a few months.  I might get a C2 to test but probably won't hold onto it since the last thing I need is another x86 box.  I can request an invite code for you using the member page if you want, but I don't know if that's better than using the public request form.


----------



## Amney

Hello!


Can anyone share an invite?


----------



## willie

Amney, per Google Translate I think you're asking for an invite.  I can't send out any directly but I can ask Scaleway to send you one.  PM me your email address and I'll put it into their request form.  I don't know how long it takes to actually get an invite.  I think the LET thread said they might re-open it to the public in April.


----------



## drmike

Anyone tried these new Scaleway boxes yet?


----------



## Amney

willie said:


> Amney, per Google Translate I think you're asking for an invite.  I can't send out any directly but I can ask Scaleway to send you one.  PM me your email address and I'll put it into their request form.  I don't know how long it takes to actually get an invite.  I think the LET thread said they might re-open it to the public in April.



I heard that this will probably be in April.
I've registered your current mailboxes. )))


----------



## willie

drmike said:


> Anyone tried these new Scaleway boxes yet?



There are some benchmarks over on LET.  The C2750 scores around 3800 on cpubenchmark.net's Passmark, so a bit less than half the speed of a current E3 or i7, or comparable to a dual-core i3; not too bad but not a cpu monster.  The dual 2.5Gbps network interfaces might make cluster computing with it better.  It will be interesting to see if they use a really fast cpu in the c3.  Is there something in particular you'd like to try on the c2?


----------



## mikeyur

drmike said:


> Still invite only?



I have invites if you want one, just PM me your email.



drmike said:


> Anyone tried these new Scaleway boxes yet?



Just played with the VPS, and spun up a C2L server to try it out. It's an Atom C2750, so not much surprise there speed-wise.


I think the VPS is the best deal: 2x dedicated C2750 cores + 50GB SSD + 200Mbps unmetered (on a gigabit port) for $3.25/mo, plus $1.10 per additional 50GB, up to 150GB in a single volume.


----------



## willie

I don't think the vps cores are dedicated--see my post from 4:56pm Tuesday.  It is indeed a good deal though, if you just want a cheap x86 vps with good memory and network resources.  I'm mostly after raw cpu power when it comes to products like this, but maybe I'm not typical.  So I prefer the c1 dedi to the c2 VPS, which costs the same although the c1 is ARM and its disk is networked rather than local.  I think the C1 disk is always in the same rack as the cpu though.


How did you get invite codes?  I don't need one (already have a Scaleway account) but I only remember seeing a web form to ask for a code to be sent to someone in FIFO order.


----------



## mikeyur

willie said:


> I don't think the vps cores are dedicated--see my post from 4:56pm Tuesday.  It is indeed a good deal though, if you just want a cheap x86 vps with good memory and network resources.  I'm mostly after raw cpu power when it comes to products like this, but maybe I'm not typical.  So I prefer the c1 dedi to the c2 VPS, which costs the same although the c1 is ARM and its disk is networked rather than local.  I think the C1 disk is always in the same rack as the cpu though.
> 
> 
> How did you get invite codes?  I don't need one (already have a Scaleway account) but I only remember seeing a web form to ask for a code to be sent to someone in FIFO order.



Ah, I just assumed they were running with the 'dedicated core' thing. Also it's not local storage, it's the network drive. LSSD is confusing as it can be interpreted as 'local', but it's their network storage; the local disks are "Direct SSD". Perhaps they have a bunch of the C2750's racked with 8-10GB ram?


I don't have invite codes, just the 'request' form which I thought was direct invites but just re-read the description.


----------



## willie

Oh I got confused and thought LSSD was local SSD.  Thanks for clarifying.  If it's the networked drive, that suggests they're using the 16GB boxes so the amount of CPU overcommit isn't as much.  It would be weird for them to build a different server config with less ram for the sake of those VPS's.


Also my cpu calculation in the earlier post was wrong: if there are 16 vps on the 8-core box then you get 2 cores at 1/4 load, not 1/16, so equivalent to 50% of a core.  If there are 8 vps on the box then each vps gets the equivalent of a full core, which is maybe 2x the compute power of a C1; not bad.  There's a trick someone mentioned on the Scaleway forum: if you spin up a C1 instance through the API instead of the web interface, you can get a logical disk as small as 1GB(?) instead of 50GB.  So if you don't need an internet connection (i.e. you access it through another C1 over the Scaleway LAN) and you don't need much disk space, you can possibly get a C1 compute board for just over 1 euro per month.  Maybe that works with the VPS as well.
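As a quick sanity check of that corrected arithmetic (worst case, assuming every guest is fully busy):

```shell
cores=8
for guests in 16 8; do
  # with 2 vCPUs per guest on an 8-core box, each guest's share is cores/guests
  echo "$guests guests: $(( cores * 100 / guests ))% of a physical core each"
done
```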


----------



## Omar

Amney said:


> Hello!
> 
> 
> Can anyone share an invite?



Hi!


What is your email address? (Hopefully non-English content is allowed here!)


----------



## willie

If you post in English, more of us will be able to read it.


If you want someone's email it's preferable to ask them by PM.


----------



## fm7

**Scaleway VPS** (hourly billing)


3€/month KVM, 2 vCPU, 2GB RAM, 50GB LSSD, 1 IPv4, 200Mbit unmetered


   BYTE UNIX Benchmarks (Version 5.1.3)


   CPU 0: Intel(R) Atom(TM) CPU C2750 @ 2.40GHz
   CPU 1: Intel(R) Atom(TM) CPU C2750 @ 2.40GHz


Benchmark Run: Tue Mar 08 2016 20:11:04 - 20:31:43
2 CPUs in system; running 1 parallel copy of tests
System Benchmarks Index Score (Partial Only) 627.4


Benchmark Run: Tue Mar 08 2016 20:31:43 - 20:52:25
2 CPUs in system; running 2 parallel copies of tests
System Benchmarks Index Score (Partial Only) 1207.7


:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 6.39253 s, 168 MB/s


----------



## fm7

**OVH VPS 2016 SSD 1** (flat-rate)


3€ VPS KVM OpenStack, 1 vCPU, 2GB RAM, 10GB SSD, 100Mbit unmetered


   BYTE UNIX Benchmarks (Version 5.1.3)


   CPU 0: Intel Xeon E312xx (Sandy Bridge) (4788.9 bogomips)


Benchmark Run: Thu Feb 18 2016 17:23:52 - 17:44:44
1 CPU in system; running 1 parallel copy of tests
System Benchmarks Index Score (Partial Only) 529.6


:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 6.06531 s, 177 MB/s


----------



## fm7

**Scaleway** dedicated **C2 S** (hourly billing)


12€/month, 4 cores, 8GB RAM, 50GB LSSD, 300Mbit unmetered


1€/month for each additional 50GB LSSD


1€/month for each additional IPv4

   BYTE UNIX Benchmarks (Version 5.1.3)


   CPU 0: Intel(R) Atom(TM) CPU C2550 @ 2.40GHz (4787.8 bogomips)
   CPU 1: Intel(R) Atom(TM) CPU C2550 @ 2.40GHz (4787.8 bogomips)
   CPU 2: Intel(R) Atom(TM) CPU C2550 @ 2.40GHz (4787.8 bogomips)
   CPU 3: Intel(R) Atom(TM) CPU C2550 @ 2.40GHz (4787.8 bogomips)


Benchmark Run: Sun Mar 13 2016 21:41:29 - 22:02:09
4 CPUs in system; running 1 parallel copy of tests
System Benchmarks Index Score (Partial Only) 804.3


Benchmark Run: Sun Mar 13 2016 22:02:09 - 22:22:55
4 CPUs in system; running 4 parallel copies of tests
System Benchmarks Index Score (Partial Only) 2337.2


:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 5.47228 s, 196 MB/s


----------



## DomainBop

fm7 said:


> **OVH VPS 2016 SSD 1** (flat-rate)
> 
> 
> 3€ VPS KVM OpenStack, 1 vCPU, 2GB RAM, 10GB SSD, 100Mbit unmetered
> 
> 
> BYTE UNIX Benchmarks (Version 5.1.3)
> 
> 
> CPU 0: Intel Xeon E312xx (Sandy Bridge) (4788.9 bogomips)
> 
> 
> Benchmark Run: Thu Feb 18 2016 17:23:52 - 17:44:44
> 1 CPU in system; running 1 parallel copy of tests
> System Benchmarks Index Score (Partial Only) 529.6
> 
> 
> :~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
> 16384+0 records in
> 16384+0 records out
> 1073741824 bytes (1.1 GB) copied, 6.06531 s, 177 MB/s



These are the results for a 2GB Public Cloud VPS-SSD 1 instance in Gravelines GRA1 (€2.99 monthly or €0.008 /hour)



> BYTE UNIX Benchmarks (Version 5.1.3)
> 
> 
> System: bordeaux.pig.bz: GNU/Linux
> OS: GNU/Linux -- 4.3.0-0.bpo.1-amd64 -- #1 SMP Debian 4.3.5-1~bpo8+1 (2016-02-23)
> Machine: x86_64 (unknown)
> Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
> CPU 0: Intel Xeon E312xx (Sandy Bridge) (4788.9 bogomips)
> x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET, Intel virtualization
> 20:57:44 up 10:28,  1 user,  load average: 0.00, 0.03, 0.05; runlevel 5
> 
> 
> ------------------------------------------------------------------------
> Benchmark Run: Sat Mar 05 2016 20:57:44 - 21:25:57
> 1 CPU in system; running 1 parallel copy of tests
> 
> 
> Dhrystone 2 using register variables       28816487.9 lps   (10.0 s, 7 samples)
> Double-Precision Whetstone                     3698.5 MWIPS (9.8 s, 7 samples)
> Execl Throughput                               4860.8 lps   (29.9 s, 2 samples)
> File Copy 1024 bufsize 2000 maxblocks       1239251.7 KBps  (30.0 s, 2 samples)
> File Copy 256 bufsize 500 maxblocks          343182.8 KBps  (30.0 s, 2 samples)
> File Copy 4096 bufsize 8000 maxblocks       2681223.4 KBps  (30.0 s, 2 samples)
> Pipe Throughput                             2622817.2 lps   (10.0 s, 7 samples)
> Pipe-based Context Switching                 407802.5 lps   (10.0 s, 7 samples)
> Process Creation                              12052.8 lps   (30.0 s, 2 samples)
> Shell Scripts (1 concurrent)                   6296.5 lpm   (60.0 s, 2 samples)
> Shell Scripts (8 concurrent)                    843.9 lpm   (60.0 s, 2 samples)
> System Call Overhead                        3975227.2 lps   (10.0 s, 7 samples)
> 
> 
> System Benchmarks Index Values               BASELINE       RESULT    INDEX
> Dhrystone 2 using register variables         116700.0   28816487.9   2469.3
> Double-Precision Whetstone                       55.0       3698.5    672.5
> Execl Throughput                                 43.0       4860.8   1130.4
> File Copy 1024 bufsize 2000 maxblocks          3960.0    1239251.7   3129.4
> File Copy 256 bufsize 500 maxblocks            1655.0     343182.8   2073.6
> File Copy 4096 bufsize 8000 maxblocks          5800.0    2681223.4   4622.8
> Pipe Throughput                               12440.0    2622817.2   2108.4
> Pipe-based Context Switching                   4000.0     407802.5   1019.5
> Process Creation                                126.0      12052.8    956.6
> Shell Scripts (1 concurrent)                     42.4       6296.5   1485.0
> Shell Scripts (8 concurrent)                      6.0        843.9   1406.5
> System Call Overhead                          15000.0    3975227.2   2650.2
> ========
> System Benchmarks Index Score                                        1713.6





> dd VPS:
> dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
> 16384+0 records in
> 16384+0 records out
> 1073741824 bytes (1.1 GB) copied, 2.49473 s, 430 MB/s
> 
> 
> dd Additional Disk (High Speed Volume, triple replication, €0.08 /month/GB of storage):
> dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
> 16384+0 records in
> 16384+0 records out
> 1073741824 bytes (1.1 GB) copied, 2.86435 s, 375 MB/s


----------



## fm7

willie said:


> Oh I got confused and thought LSSD was local SSD.



LSSD (which stands for _local SSD_) is local but not directly attached (except on the **C2 L**, which includes 250GB of directly attached SSD).


BTW, LSSD space is "temporary": each time you stop/start a VM, the content of each attached volume is copied to/from storage located elsewhere. Depending on how many attached disks you have and how much space is used, stopping and starting a VM can take an awful lot of time.


From the FAQ


> Each server has access to a pool of local drives. These drives are exported to servers via the NBD protocol, which effectively makes them network drives. However, the network between these drives and Cx nodes consists of dedicated PCB tracks, which ensures minimal latency and avoids network congestion.
> 
> 
> There is no redundancy on these volumes, you need to handle redundancy on your side! They are archived to permanent storage when you start and stop your server.


----------



## fm7

DomainBop said:


> These are the results for a 2GB Public Cloud VPS-SSD 1 instance in Gravelines GRA1 (€2.99 monthly or €0.008 /hour)



Thanks!


The results I posted before refer to **OVH-BHS** (Montreal).


_BTW, that VPS recently had ~9h20m of downtime caused by a power outage (short circuit; 4h40m to repair the electrical damage) followed by network outages/issues. No redundancy at all; no way comparable to Scaleway's Iliad DC-3 2N facility._


Same data center (OVH-BHS), another instance, few months ago:


BYTE UNIX Benchmarks (Version 5.1.3)


Benchmark Run: Mon Nov 23 2015 17:57:02 - 18:17:56
1 CPU in system; running 1 parallel copy of tests
System Benchmarks Index Score (Partial Only) 1545.8


:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 2.71918 s, 395 MB/s


Strasbourg SBG1:


BYTE UNIX Benchmarks (Version 5.1.3)


CPU 0: AMD Opteron(tm) Processor 6386 SE


Benchmark Run: Fri Jun 05 2015 06:06:56 - 06:28:23
1 CPU in system; running 1 parallel copy of tests
System Benchmarks Index Score (Partial Only) 338.7


:/# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 5.56097 s, 193 MB/s


----------



## fm7

100% designed by Scaleway R&D teams


**The C2 chassis has 18 servers per 3U**


(photo: **Scaleway C2 @ Online.net / Iliad DC-3**)


----------



## DomainBop

> Scaleway C2 offers
> 
> 
> €11.99/month
> Avoton C2550
> 4 Dedicated x64 Cores
> 8GB Memory
> 50GB SSD Disk
> 
> 
> €17.99/month
> Avoton C2750
> 8 Dedicated x64 Cores
> 16GB Memory
> 50GB SSD Disk



Online.net released their new Personal Range dedicated servers today.  €20.00 setup fee again, and:


1. €8.99 Dedibox SC: Avoton C2350 (2 cores), 4GB RAM, 500GB HDD or 120GB SSD (replaces the €5.99 2GB RAM Nano U2250)


2. €15.99 Dedibox XC: Avoton C2750 (8 cores), 16GB RAM, 1TB HDD or 250GB SSD (double the RAM of the old offering, and SSD increased from 120GB to 250GB)


----------



## willie

That's interesting, comparing the new online.net personal range to the new Scaleways.  The monthly charge is a little lower than the C2S/C2M and you get direct disk, but there's the setup fee.  The 750GB FTP backup available with those servers for €5/mo is also nice.


----------



## graeme

Could anyone run networking benchmarks and/or ioping on scaleway and/or online.net personal and/or OVH cloud?


----------



## fm7

Scaleway dashing.io (Twitter post, March 25, 2016)


----------



## willie

graeme said:


> Could anyone run networking benchmarks and/or ioping on scaleway and/or online.net personal and/or OVH cloud?



Scaleway C1 (ARM) ioping:


10 requests completed in 9.02 s, 900 iops, 3.52 MiB/s
min/avg/max/mdev = 684 us / 1.11 ms / 1.50 ms / 285 us


 dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 10.2818 s, 104 MB/s


Do you have a specific network benchmark you want me to run?


----------



## DomainBop

graeme said:


> Could anyone run networking benchmarks and/or ioping on scaleway and/or online.net personal and/or OVH cloud?



Disk:
*Scaleway C1 (Armv7)*
10 requests completed in 9.0 s, 875 iops, 3.4 MiB/s
min/avg/max/mdev = 886 us / 1.1 ms / 1.4 ms / 186 us


 dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 9.654 s, 111 MB/s


*Online.net Dedibox SC (Avoton C2350)*
--- /tmp (ext4 /dev/sda2) ioping statistics ---
10 requests completed in 9.0 s, 7.4 k iops, 28.9 MiB/s
min/avg/max/mdev = 114 us / 135 us / 153 us / 14 us


dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 12.1061 s, 88.7 MB/s


*Online.net Dedibox SC (VIA Nano U2250)*
--- /tmp (ext4 /dev/sda2) ioping statistics ---
10 requests completed in 9.0 s, 2.2 k iops, 8.7 MiB/s
min/avg/max/mdev = 347 us / 449 us / 545 us / 60 us


dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 14.7896 s, 72.6 MB/s


*OVH VPS-SSD  (Public Cloud OpenStack version):*
--- /tmp (ext4 /dev/vda1) ioping statistics ---
10 requests completed in 9.0 s, 2.0 k iops, 7.9 MiB/s
min/avg/max/mdev = 355 us / 492 us / 570 us / 70 us


 dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 2.21723 s, 484 MB/s


*Additional Disk (OVH Public Cloud):*
--- /opt (ext4 /dev/vdb) ioping statistics ---
10 requests completed in 9.0 s, 686 iops, 2.7 MiB/s
min/avg/max/mdev = 914 us / 1.5 ms / 2.9 ms / 494 us


dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.07906 s, 349 MB/s


================================================


Network:


*OVH VPS-SSD  (Public Cloud OpenStack version):*


Location        Provider    Speed
CDN            Cachefly    11.3MB/s


Atlanta, GA, US        Coloat        1.31MB/s 
Dallas, TX, US        Softlayer    7.93MB/s 
Seattle, WA, US        Softlayer    8.41MB/s 
San Jose, CA, US    Softlayer    7.38MB/s 
Washington, DC, US    Softlayer     7.44MB/s 


Tokyo, Japan        Linode        5.76MB/s 
Singapore         Softlayer    5.66MB/s 


Rotterdam, Netherlands    id3.net        11.8MB/s
Haarlem, Netherlands    Leaseweb    11.9MB/s 


Hosted by Orange (Paris) [1.71 km]: 9.007 ms
Testing download speed........................................
Download: 100.06 Mbit/s
Testing upload speed..................................................
Upload: 95.62 Mbit/s
 


*Scaleway C1 (Armv7)*


Location        Provider    Speed
CDN            Cachefly    99.4MB/s


Atlanta, GA, US        Coloat        1.20MB/s 
Dallas, TX, US        Softlayer    10.1MB/s 
Seattle, WA, US        Softlayer    7.49MB/s 
San Jose, CA, US    Softlayer    7.62MB/s 
Washington, DC, US    Softlayer     11.9MB/s 


Tokyo, Japan        Linode        6.39MB/s 
Singapore         Softlayer    2.42MB/s 


Rotterdam, Netherlands    id3.net        45.8MB/s
Haarlem, Netherlands    Leaseweb    49.9MB/s 


Hosted by NEOTELECOMS (Paris) [1.59 km]: 4.18 ms
Testing download speed........................................
Download: 918.68 Mbit/s
Testing upload speed..................................................
Upload: 177.68 Mbit/s
 


*Online.net DC2 (Dual Xeon E5620) *


Location        Provider    Speed
CDN            Cachefly    83.9MB/s


Atlanta, GA, US        Coloat        16.1MB/s 
Dallas, TX, US        Softlayer    11.5MB/s 
Seattle, WA, US        Softlayer    8.03MB/s 
San Jose, CA, US    Softlayer    9.29MB/s 
Washington, DC, US    Softlayer     17.4MB/s 


Tokyo, Japan        Linode        10.3MB/s 
Singapore         Softlayer    6.56MB/s 


Rotterdam, Netherlands    id3.net        62.1MB/s
Haarlem, Netherlands    Leaseweb    101MB/s 


Hosted by NEOTELECOMS (Paris) [1.59 km]: 1.882 ms
Testing download speed........................................
Download: 904.85 Mbit/s
Testing upload speed..................................................
Upload: 338.54 Mbit/s
 


*Online.net DC3 (Xeon E3-1220)*


Location        Provider    Speed
CDN            Cachefly    98.5MB/s


Atlanta, GA, US        Coloat        15.5MB/s 
Dallas, TX, US        Softlayer    15.2MB/s 
Seattle, WA, US        Softlayer    10.9MB/s 
San Jose, CA, US    Softlayer    11.7MB/s 
Washington, DC, US    Softlayer     19.1MB/s 


Tokyo, Japan        Linode        7.92MB/s 


Singapore         Softlayer    8.10MB/s 


Rotterdam, Netherlands    id3.net        63.1MB/s
Haarlem, Netherlands    Leaseweb    58.8MB/s


Hosted by NEOTELECOMS (Paris) [1.30 km]: 1.796 ms
Testing download speed........................................
Download: 891.31 Mbit/s
Testing upload speed..................................................
Upload: 408.21 Mbit/s


----------



## willie

Scaleway C2M ioping:


10 requests completed in 9.00 s, 2.98 k iops, 11.6 MiB/s
min/avg/max/mdev = 259 us / 335 us / 515 us / 73 us


I tried to get a C2L but they were out of stock.  I've been running a few other benchmarks on the C2M but want to release it tonight or tomorrow, so let me know if there's any particular tests anyone wants.


----------






## graeme

Thanks @willie and @DomainBop for running those tests.


I have been testing two other providers (Bytemark and Upcloud) recently for a project of mine, and everyone seems to provide bandwidth reasonably close to the limit within Europe. They are all a lot slower to Asia or the US. Upcloud has very fast storage.


I did some download testing on an OVH dedi I have access to, and OVH is easily the slowest to the US (quite good to Asia though). I also got a rather unimpressive reply to a question, although that may be because they have an issue at the moment and are handling a lot of enquiries. Among other things they said the Ceph storage is mounted locally, which confuses me.

One thing I did find out is that you cannot seamlessly move from a cloud VPS to a public cloud instance.


On the whole I am not that keen on using OVH for anything that matters. How good is online.net customer service, and does anyone have any idea how reliable those cheap dedis are likely to be?


----------



## willie

I don't think Scaleway is that solid right now.  It's a cool product, but it feels like something in beta test.  There will probably also always be hardware shortages.  That said, I'm getting more and more paranoid about how to do HA, so I'd want to spread critical services across multiple providers, and Scaleway can be one of them.  I've gotten reasonable replies from their support, though.  I don't know if having a developer account (3 euro/month) mattered for that.


----------



## drmike

Just wanted to say thanks to everyone on this thread.  Nice to see numbers and experiences.  I like what online.net is doing, even though there are some rough spots and bumps along the way.


Saw someone with one of the Avotons pushing a ton of data, like 30TB in 24 hours.  Pretty impressed that they let someone push packets like that.


----------



## DomainBop

willie said:


> I don't think Scaleway is that solid right now.  It's a cool product but it feels like something in beta test.



The Online-compiled Debian ARM kernels have had some bugs.  A couple of kernel upgrades caused iptables to spew out _"can't initialize iptables table `filter': Table does not exist (do you need to insmod?)"_ error messages.  I last encountered this problem today after changing to the 4.5 docker kernel on Debian Jessie; it was easily solved with a quick `mv /lib/modules /lib/modules.old` followed by `bash -x /usr/local/sbin/oc-sync-kernel-modules`.  iptables errors on other kernel versions have also been easily solved by adding things like module-init-tools that were missing from earlier versions of Online's Debian templates (the missing module-init-tools bug was fixed in September).



> Saw someone with one of the Avotons pushing a ton of data - like 30TB in 24 hours.  Pretty impressed that they let someone push packets like that



Online will tolerate short-term bursts of data usage like that for a few days, but if someone does that every day they're going to run into problems like this guy did: http://www.webhostingtalk.com/showthread.php?t=1481943



> does anyone have any idea how reliable those cheap dedis are likely to be?



The hardware and network are very reliable on those sub-$10 dedis at Online and Kimsufi, and you rarely, if ever, will need to open a ticket (especially at Kimsufi/OVH, because their automated monitoring system is excellent and automatically dispatches technicians when it detects a problem).


----------



## willie

One of the ingredients of Scaleway was supposed to be an S3-like object store at 0.02/GB/month, triple replication, and no bandwidth charges.  It got oversubscribed instantly (people seemed to be using it as a CDN), so it's been unavailable to new customers for the past 5 months or so while they scale it out.  There are some indications that they might reopen it soon, but who knows.  Its unavailability is one of the reasons Scaleway doesn't seem completely together yet.


----------



## fm7

IMO "mission critical" is incompatible with non-ECC RAM, non-redundant local storage,  shared chassis, ...


Online.net offers low-priced (as low as 20€/month) A/B-powered enterprise-grade HP/Dell servers with IPMI, hardware RAID, SAS HDDs, ECC RAM, plus serious SLAs and inexpensive true SAN-HA storage.


Scaleway's bare metal server is just Arnaud's "cloud alternative": Online.net's CEO/CTO thinks the regular VPS/VM brings the worst of both worlds, combining what is bad in dedicated servers with what is bad in shared hosting.


The thing is, this site is about VPS and public cloud, and I think it is delusional to talk about "mission critical" applications running on a VPS.


----------



## fm7

willie said:


> One of the ingredients of Scaleway was supposed to be an S3-like object store at .02/GB/month, triple replication, and no bandwidth charges.  It got oversubscribed instantly (people seemed to be using it as a CDN) and so it's been unavailable to new customers for the past 5 months or so while they scale it out.  There are some indications that they might reopen it soon, but who knows.  Its unavailability is one of the reasons Scaleway doesn't seem completely together yet.



I guess part of the problem is that the content of all volumes attached to a Scaleway server is copied to permanent storage each time the server is stopped.


Recently, Online.net's CEO posted a short video on his Twitter account showing a row of racks filled with storage hardware, but didn't comment on the intended use. At the time he was asked about the "buckets":


HADJEDJ Vincent ‏@VincentHadjedj Mar 12


@online_fr bientôt le retour des Buckets chez scaleway ?


Translated from French by Bing


@online_fr soon the return of the Buckets in scaleway?


Online.net - Arnaud ‏@online_fr Mar 12


@VincentHadjedj oui ! ("yes!")


----------



## DomainBop

fm7 said:


> I think it is delusional to talk about "mission critical" applications running on a VPS.



That depends on who controls the hypervisor. If you control it and use virtualization strictly to run your own company's apps, then it is a reliable way to stretch your resources and save money, and you can achieve uptime comparable or equal to a dedicated server.  If you're using a typical VPS company**, however, where you have no control over the environment (node setup, your neighbors, provider errors that result in downtime/data loss, maintenance scheduling, etc.), then a VPS probably isn't the best place for anything mission critical.


_**I'm defining 'typical VPS company' as one using off-the-shelf software (which usually contains obfuscated code that requires putting in a ticket with the developer when something goes wrong) and, more often than not, rented servers and rented IP space... and did I mention overselling?_


----------



## drmike

DomainBop said:


> I think it is delusional to talk "mission critical" applications running on VPS.



Broken iPB quote... so....


I am on the fence about running anything important at all on a VPS. I do it, and every year I am reminded with horror why I shouldn't. I've had downed-instance issues, been abuse-slapped when something went afoul another time, and had providers changing this and that. It really adds up to 3-6 events a year at the average provider. Providers without events usually just aren't building anything (i.e. coasting).


I've had issues with dedis at about the same annual frequency: the infamous DC power failure, a DDoS of their network, etc.


So I am in the cheap-dedicated boat at this point. ARM is fine, cheap is fine. I think this is the current evolution of the market, at least for those of us long into VPS.


----------



## willie

Certainly people build businesses around AWS all the time, where EC2 basically amounts to overpriced VPS. I think the most important thing is to eliminate SPOFs. Multiple servers from multiple vendors in multiple locations, etc.


----------



## drmike

willie said:


> Certainly people build businesses around AWS all the time, where EC2 basically amounts to overpriced VPS. I think the most important thing is to eliminate SPOFs. Multiple servers from multiple vendors in multiple locations, etc.



No doubt AWS and Google and some other monoliths offer things more business reliable.  But you are going to pay heavily for it.  Amount people spend on services like that is pretty insane often.  At the numbers I see from folks, I'd be dealing with dedis from better DC suppliers directly myself.


Definitely the many vendor route in many locations is one that appeals more to me than monolith worship.  Quite a big niche AWS and others have carved out though.


----------



## fm7

willie said:


> Certainly people build businesses around AWS all the time, where EC2 basically amounts to overpriced VPS. I think the most important thing is to eliminate SPOFs. Multiple servers from multiple vendors in multiple locations, etc.



SPOF would be the catastrophic impediment but how about the more mundane noisy neighbor? Or vastly different performance characteristics of AWS instances? 


BTW (Wikipedia):


A mission critical system is typically an online banking system, a railway or aircraft operation and control system, an electric power system, or any other computer system that will seriously affect business and society if it goes down.


----------



## drmike

fm7 said:


> SPOF would be the catastrophic impediment but how about the more mundane noisy neighbor? Or vastly different performance characteristics of AWS instances?



Anyone here using / tried / aware of AWS platform and what they are actually running to make that all work?  What is the virtualization based on?


I haven't heard anything that I recall about noisy neighbors with AWS or similar large competitors.  Remains sort of ahh magical in some ways.. Clearly there have been complaints like total service on smaller instances = really slow.


I am betting they invested heavily in setting proper resource limits to contain things from getting too ugly.  Seems to be where the real shops differ from run of the mill Solus and pray brands.


----------



## DomainBop

drmike said:


> Anyone here using / tried / aware of AWS platform and what they are actually running to make that all work? What is the virtualization based on?



AWS is Xen, same as Rackspace, Oracle, Aliyun, and Verizon Enterprise.


http://www.xenproject.org/help/presentations-and-videos/video/amazon-the-art-of-using-xen-at-scale.html


----------



## drmike

DomainBop said:


> AWS is Xen, same as Rackspace, Oracle, Aliyun, and Verizon Enterprise.
> 
> 
> http://www.xenproject.org/help/presentations-and-videos/video/amazon-the-art-of-using-xen-at-scale.html



But but but I thought Xen was dying?


----------



## fm7

drmike said:


> Anyone here using / tried / aware of AWS platform and what they are actually running to make that all work?  What is the virtualization based on?
> 
> 
> I haven't heard anything that I recall about noisy neighbors with AWS or similar large competitors.  Remains sort of ahh magical in some ways.. Clearly there have been complaints like total service on smaller instances = really slow.
> 
> 
> I am betting they invested heavily in setting proper resource limits to contain things from getting too ugly.  Seems to be where the real shops differ from run of the mill Solus and pray brands.





*Google Compute Engine and Predictable Performance*



 



> Tim Freeman
> 
> 
> *July 1, 2012*
> 
> I raised my eyebrows at one statement Google is making about Google Compute Engine:
> 
> 
> 
> Deploy your applications on an infrastructure that provides consistent performance. Benefit from a system designed from the ground up to provide strong isolation of users’ actions. Use our consistently fast and dependable core technologies, such as our persistent block device, to store and host your data.
> 
> 
> 
> 
> While many talk about how one IaaS solution will give you better performance than another, one of the more bothersome issues in clouds is whether or not an instance will give you _consistent_ performance. This is especially true with I/O.
> 
> 
> A lot of this performance consistency problem is due to the “noisy neighbor” issue. IaaS solutions typically have some kind of multi-tenant support, multiple isolated containers (VM instances, zones, etc.) on each physical server. The underlying kernel/hypervisor is responsible for cutting each tenant off at the proper times to make sure the raw resources are shared correctly (according to whatever policy is appropriate).
> 
> 
> AWS, while nailing many things, has struggled with this. I’ve heard from many users that they’re running performance tests on every EC2 instance they create in order to see if the neighbor situation looks good. This only gets you so far, of course: a particularly greedy neighbor could be provisioned to the same physical node at a later time.
> 
> 
> Taking the concept further, I’ve been in a few conversations where the suggestion is to play “whack-a-mole” and constantly monitor the relative performance, steal time, etc., and move things around whenever it’s necessary. (That sounds like a great CS paper, but stepping back… that’s just kind of weird and crazy to me if this is the best we can do.)
> 
> 
> The best approach on most clouds (except Joyent who claims to have a better situation) is to therefore use the biggest instances, if you can afford them. These will take up either half or all of the typical ~64-70GB RAM in the servers underlying the VM: no neighbors, no problems. Though other kinds of “neighbors” are still an issue, like if you’re using a centralized, network-based disk.
> 
> 
> So how serious is Google in the opening quote above? What different technology is being used on GCE?
> 
> 
> A Google employee (who does not work on the GCE team but who I assume is fairly reporting from the Google I/O conference) tweeted the following:
> 
> 
> 
> Google compute is based on KVM Linux VMs. Storage: local ephemeral, network block, google storage #io12
> 
> 
> 
> 
> KVM.
> 
> 
> Years ago, we investigated various techniques we could use in the Nimbus IaaS stack to guarantee that guests only used a given amount of CPU percentage and network bandwidth _while also allowing colocated guests to enjoy their own quota_. Pure CPU workloads fared well against “hostile” CPU based workloads. But once you introduced networking, the situation was very bad.
> 
> 
> The key to these investigations is introducing pathologically hostile neighbors and seeing what you can actually _guarantee_ to other guests, including all of the overhead that goes into accounting and enforcement.
> 
> 
> That was on Xen, and it’s not even something the Xen community was ignoring, it’s just a hard problem. And since then I’ve seen that the techniques and Xen guest schedulers have improved.
> 
> 
> But I haven’t seen much attention to this in KVM (though I admit I haven’t had the focus on this area that I had in the past).
> 
> 
> So we have this situation:
> 
> 
> AWS uses Xen.
> 
> AWS and Xen historically have issues with noisy neighbors.
> 
> Google uses KVM, not historically known for strong resource isolation.
> 
> Google is claiming consistent performance as a strong selling point.
> 
> 
> Do they have their own branch, a new technique? Are they actually running SmartOS zones + KVM? I’m really curious what is happening here. Surely they’ve seen this has been an issue for people on AWS for years and would not make such a bold claim without testing the hell out of it, right?
> 
> 
> Another thing they’re claiming is a “consistently fast and dependable” network block device. Given the a priori failure mode problems of these solutions, I’m doubly curious.
> 
> 
> UPDATE: This talk from Joe Beda has some new information, slide 14: Linux cgroups – I also heard via @lusis that they worked with RedHat on this.
> 
> 
> UPDATE: comment from Joe Beda:
> 
> 
> “We are obviously worried about cascading failures and failure modes in general. Our industry, as a whole, has more work to do here. This is an enormously difficult problem and I’m not going to start throwing rocks.
> 
> 
> That being said, I can tell you that our architecture for our block store is fundamentally different from what we can guess others are doing and, I think, provides benefits in these situations. We can take advantage of some of the great storage infrastructure at Google (BigTable, Colossus) and build on that. Our datacenters are really built to fit these software systems well.”
> 
> 
> http://www.peakscale.com/noisyneighbors/


 
 


*BTW* DigitalOcean is Xen; Vultr, Atlantic.net, and Profitbricks are KVM; and Linode replaced Xen with KVM.


 



Linode: goodbye Xen and welcome KVM!


 



> June 16, 2015 12:01 pm
> 
> *Happy 12th birthday to us!*
> 
> Welp, time keeps on slippin’ into the future, and we find ourselves turning 12 years old today. To celebrate, we’re kicking off the next phase of Linode’s transition from Xen to KVM by making KVM Linodes generally available, starting today.
> 
> *Better performance, versatility, and faster booting*
> 
> Using identical hardware, KVM Linodes are much faster compared to Xen. For example, in our UnixBench testing a KVM Linode scored 3x better than a Xen Linode. During a kernel compile, a KVM Linode completed 28% faster compared to a Xen Linode. KVM has much less overhead than Xen, so now you will get the most out of our investment in high-end processors.
> 
> KVM Linodes are, by default, paravirtualized, supporting the Virtio disk and network drivers. However, we also now support fully virtualized guests – which means you can run alternative operating systems like FreeBSD, BSD, Plan 9, or even Windows – using emulated hardware (PIIX IDE and e1000). We’re also working on a graphical console (GISH?) which should be out in the next few weeks.
> 
> In a recent study of VM creation and SSH accessibility times performed by Cloud 66, Linode did well. The average Linode ‘create, boot, and SSH availability’ time was 57 seconds. KVM Linodes boot much faster – we’re seeing them take just a few seconds.
> 
> *How do I upgrade a Linode from Xen to KVM?*
> 
> On a Xen Linode’s dashboard, you will see an “Upgrade to KVM” link on the right sidebar. It’s a one-click migration to upgrade your Linode to KVM from there. Essentially, our KVM upgrade means you get a much faster Linode just by clicking a button.
> 
> *How do I set my account to default to KVM for new stuff?*
> 
> In your Account Settings you can set ‘Hypervisor Preference’ to KVM. After that, any new Linodes you create will be KVM.
> 
> *What will happen to Xen Linodes?*
> 
> New customers and new Linodes will, by default, still get Xen. Xen will cease being the default in the next few weeks. Eventually we will transition all Xen Linodes over to KVM, however this is likely to take quite a while. Don’t sweat it.
> 
> On behalf of the entire Linode team, thank you for the past 12 years and here’s to another 12! Enjoy!
> 
> -Chris
> 
> 
> 
> 
> 
> 
> 
> 
> https://blog.linode.com/2015/06/16/linode-turns-12-heres-some-kvm/


----------



## willie

EC2 used to have bad NN problems unless you used very big instances, in which case it was merely incredibly expensive.  I don't know if it's better now, but they've introduced a cpu-time accounting system where you get a certain amount of CPU credit for each hour you pay for on the instance, up to some maximum.  So e.g. if you idle for 2 hours you're then allowed to use 100% cpu for 10 minutes before getting throttled, that sort of thing, parameters depend on the instance type.
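A rough token-bucket sketch of that credit behavior in Python; the accrual rate, cap, and burn rate below are invented illustration values, not Amazon's actual parameters for any instance type.

```python
# Token-bucket sketch of burstable-CPU credits as described above. The
# accrual rate, cap, and burn rate are invented illustration values, not
# real parameters for any EC2 instance type.

CREDITS_PER_HOUR = 6.0        # credits earned per instance-hour (assumed)
CREDIT_CAP = 144.0            # maximum bankable credits (assumed)
BURN_PER_CPU_MINUTE = 1.0     # credits spent per minute at 100% CPU (assumed)

def simulate(minutes_idle, minutes_burst, credits=0.0):
    """Idle to accrue credits, then burst at 100% CPU until throttled."""
    credits = min(CREDIT_CAP, credits + CREDITS_PER_HOUR * minutes_idle / 60)
    full_speed = min(minutes_burst, credits / BURN_PER_CPU_MINUTE)
    credits -= full_speed * BURN_PER_CPU_MINUTE
    throttled = minutes_burst - full_speed
    return credits, throttled

# idle 2 hours, then try to run flat out for 15 minutes:
# 12 credits banked -> 12 minutes at full speed, last 3 minutes throttled
print(simulate(120, 15))  # (0.0, 3.0)
```

With these made-up numbers you get the same shape willie describes: idle long enough and you can burst at 100% for a while before the throttle kicks in.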


I hate doing anything computation intensive on VPS's these days.  I love my cheap-ass Hetzner dedicated server, or my Scaleways since that's what we're talking about here.  I remember starting a 9 hour computation on the Hetzner one night, 100% cpu on all 4 cores, then checking the result in the morning.  It had worked properly but had misformatted the output, printed 2 columns in the wrong order or something like that.  I could have spent 15 minutes whipping up a script to re-order the output file, but instead I spent 30 seconds fixing the relevant print statement in the original program, then restarted the 9 hour computation and left for work, so I had correct output waiting when I got home.  It was incredibly satisfying to be able to do that.


For really big-time cheap single-box CPU, this was near unbelievable (out of stock now but might return): https://www.wholesaleinternet.net/out-of-stock/?id=277 : a dual E5-2670 (i.e. 16 cores, 32 threads) with 32GB RAM and 240GB SSD for $49/month. A bit underprovisioned in RAM and disk for typical server uses, but amazing if all you wanted was to compute. They have some bigger setups in stock right now at still very attractive prices, though I have no idea how good their network etc. is.


----------



## drmike

willie said:


> fm7 said:
> 
> 
> 
> 
> AWS uses Xen.
> 
> AWS and Xen historically have issues with noisy neighbors.
> 
> Google uses KVM, not historically known for strong resource isolation.
> 
> Google is claiming consistent performance as a strong selling point.
> 
> 
> 
> 
> 
> 
> EC2 used to have bad NN problems unless you used very big instances, in which case it was merely incredibly expensive.


So basically buy out all the tenants on a box for isolation. A dedi wins unless their software / panel is truly that awesome, or the API advantage in your world is that developed.


The Xen vs. KVM situation in these big farms is interesting. Xen hasn't been getting much love for years. Performance on it hasn't been keeping up (as indicated above, too).



willie said:


> I love my cheap-ass Hetzner dedicated server, or my Scaleways since that's what we're talking about here



Count me in.  This is how to roll.  Scaleway is making it affordable.  Really compelling offers, moreso than most cheapie hosts even.  



willie said:


> For really big-time cheap single-box CPU, this was near unbelievable (out of stock now but might return): https://www.wholesaleinternet.net/out-of-stock/?id=277  a dual E5-2670 (i.e. 16 cores, 32 threads) with 32GB ram and 240GB SSD for $49/month



Those boxes are mutations that are going to go wrong sideways.  Believe that's all the same as their infamous 96GB boxes.  Those are computation boards and not meant for this stuff to random consumers.  Yeah decent deal, can buy these at like $150~ outright.


WSI is alright, even though I took to jabbing Aaron for his intimate dealings, having his hands and feet in other pockets while publicly maintaining they are all different companies. The guy sure wrongly endorses / shills, so that's how I feel about that. Their network there is alright though, and with the new DC on the other side of town things should be better all around. I remember when staff had to get in a vehicle and drive across town to do support. It's a hobby location; on a limited budget I wouldn't put my stuff there without live redundancy at a second location elsewhere.


----------



## willie

Scaleway, Hetzner, and that WSI E5 cost roughly the same per passmark; the difference is that Scaleway bills hourly, making it tempting to spin up ten Scaleways for a short period (a few days or whatever) rather than running your task a lot longer on 3-4 monthly-billed E3's or 1-2 E5's. The trouble is Scaleway has constant hardware shortages, so it's not at all clear that you can spin up five of them whenever you want.
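A quick back-of-envelope on why hourly billing is tempting for burst work. The price is the Scaleway C2550 list price from this thread; the 10-servers-for-72-hours job is an arbitrary illustration.

```python
# Hourly vs. monthly billing for a fixed batch job. Scaleway bills hourly
# at roughly monthly_price / 730 hours; the job size below is arbitrary.

HOURS_PER_MONTH = 730

def hourly_cost(n_servers, hours, monthly_price):
    """Cost of running n hourly-billed servers for `hours` hours."""
    return n_servers * hours * monthly_price / HOURS_PER_MONTH

def monthly_cost(n_servers, monthly_price):
    """Cost of keeping n monthly-billed servers for a whole month."""
    return n_servers * monthly_price

# ten C2550s for three days is about the same number of server-hours as
# one C2550 for a month: similar cost, ~10x less wall-clock time
print(round(hourly_cost(10, 72, 11.99), 2))   # 11.83
print(monthly_cost(1, 11.99))                 # 11.99
```

The math only works out, of course, if the provider actually has ten boxes available when you want them, which is exactly the shortage problem above.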


I don't understand the issue with those E5 servers or the 96GB ones?  I saw an LET thread saying they have no KVM, but Hetzner doesn't either and I haven't needed it (Hetzner rescue system was enough).  The E5's let you boot an ISO image so you can always reinstall, other than that you need good frequent backups.  Yeah I'm not sure what WSI's situation is in other regards.  I was surprised to see some overlap with Joe's Datacenter, where I've sometimes thought of parking a box.


What do you mean about being able to buy those E5-2670's for $150 outright?  I've never seen anything like that.  L5520's or whatever, maybe, but those are much much slower.  WSI now has E5 configs with two SSD's instead of one, making them more useful (RAIDable), fwiw.


----------



## drmike

willie said:


> Scaleway, Hetzner, and that WSI E5 cost roughly the same per passmark, the difference is that Scaleway bills hourly, making it tempting to spin up ten Scaleways instead of 3-4 E3's or 1-2 E5's for a short period (few days or whatever) instead of running your task a lot longer on fewer monthly billed boxes.  The trouble is Scaleway has constant hardware shortages so it's not at all clear that you can spin up five of them whenever you want.
> 
> 
> I don't understand the issue with those E5 servers or the 96GB ones?  I saw an LET thread saying they have no KVM, but Hetzner doesn't either and I haven't needed it (Hetzner rescue system was enough).  The E5's let you boot an ISO image so you can always reinstall, other than that you need good frequent backups.  Yeah I'm not sure what WSI's situation is in other regards.  I was surprised to see some overlap with Joe's Datacenter, where I've sometimes thought of parking a box.
> 
> 
> What do you mean about being able to buy those E5-2670's for $150 outright?  I've never seen anything like that.  L5520's or whatever, maybe, but those are much much slower.  WSI now has E5 configs with two SSD's instead of one, making them more useful (RAIDable), fwiw.



Scaleway remains killer, and thus, forget about seeing inventory steadily available any time soon. Not going to happen.


I believe WSI is using these Quanta Windmill systems, ignore price on this:


http://www.ebay.com/itm/QUANTA-WINDMILL-SYTEM-2-NODES-4x-XEON-8-CORE-E5-2660-2-2GHz-16GB-RAM-2x-250GB-/131639736542


A rack full of them:
http://www.ebay.com/itm/52X-QUANTA-WINDMILL-OPEN-COMPUTE-NODES-4x-E5-2660-2-2GHZ-16GB-2x-250GB-WITH-RACK-/201426399499


Not the prices I said, but I know the base boards are available super cheap... even loaded nodes are like $500 per populated unit at full sale price there.


These were made by Quanta for Facebook and are specced like a computation-farm build.


The Delimiter guys are familiar with the boards.  I think they acquired some at some point and decided against using those.


No traditional ports. Just a NIC which doubles as the IPMI interface and is allegedly insecure.


@mikeyur


----------



## fm7

drmike said:


> So buy tenants off a box basically for isolation.  Dedi wins unless their software / panel is just truly that awesome or API advance in your world is that developed.



I think dedi always wins.


Google's *Predictable Performance* angle was used to attract number-crunching users, in particular consultants and engineering firms hired by big corporations to solve complex problems. Instead of those firms spending CAPEX to build their own (scientific/engineering) clusters, Google's siren song promised no upfront costs, no cancellation fees, and pay only for what you use. Considering numerical methods usually take a lot of CPU, memory, and I/O, the "predictable performance" pitch is sort of a marketing ploy, because you want/need one VM per server.



willie said:


> Scaleway, Hetzner, and that WSI E5 cost roughly the same per passmark,



If you are using a cluster of servers to run solvers, you will want to check LINPACK or benchmarks like it. Or, as a rough proxy, Byte's Double-Precision Whetstone.


----------



## willie

Holy cow Drmike, thanks for those ebay links, it's tempting to buy some of those and colo them.  Where's the security issue with the pseudo-IPMI if you're on a routed ethernet port? 


Passmark has been a very good estimate of actual performance for the stuff I've been doing, basically distilling database dumps in a way that parallelizes well.  So far I do it semi-manually with a few python scripts but if I had the inclination and access to a ton of machines, I could put some more serious orchestration together.  I had been thinking of doing that with 100 or so Scaleway C1's last year, before meeting the reality that it's not possible to get that many on demand.


One annoying misfeature of the WSI offers is that all network traffic counts against your 33TB monthly allocation, even to another server in the same data center.  That makes it hard to use separate servers for computation and storage, because of the high traffic between them.


----------



## fm7

E3-1240v3


Double-Precision Whetstone  *3113* MWIPS


8 CPUs in system; running 8 parallel copies of tests
Double-Precision Whetstone 35840 MWIPS


-------


Dedibox Kidéchire (VIA Nano U2250) (2€)


1 CPU in system; running 1 parallel copy of tests


Double-Precision Whetstone *1654* MWIPS


-----


Scaleway C1 (ARMv7 *32-bit*) (3€)


Double-Precision Whetstone  *553* MWIPS


4 CPUs in system; running 4 parallel copies of tests
Double-Precision Whetstone 2221 MWIPS


-----



Scaleway C2 S (C2550) (9€)


Double-Precision Whetstone  *1997* MWIPS


4 CPUs in system; running 4 parallel copies of tests
Double-Precision Whetstone  7985 MWIPS


-----


Scaleway VPS (C2750) (3€)


Double-Precision Whetstone   *1989* MWIPS


2 CPUs in system; running 2 parallel copies of tests
Double-Precision Whetstone   3978 MWIPS


----------



## willie

Thanks FM7.  The E3 numbers are interesting because the E3-1240 is a 4-core machine with 8 threads, so I'd expect the parallel benchmark result to be at best about 5x the single threaded result.  But instead it is over 11x.  My current stuff is all integer but the floating benchmarks are nice to have.  I think though that people doing heavy duty numerics these days tend to use GPUs.


----------



## fm7

willie said:


> Thanks FM7.  The E3 numbers are interesting because the E3-1240 is a 4-core machine with 8 threads, so I'd expect the parallel benchmark result to be at best about 5x the single threaded result.  But instead it is over 11x.





Not that interesting 


GOVERNOR is set to ondemand


# of Cores: 4
# of Threads: 8
Processor Base Frequency: 3.4 GHz
Max Turbo Frequency: 3.8 GHz




:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 60
model name      : Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
stepping        : 3
microcode       : 0x17
cpu MHz         : *2800.000*
cache size      : 8192 KB


processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 60
model name      : Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
stepping        : 3
microcode       : 0x17
cpu MHz         : *800.000*
 


processor       : 2
vendor_id       : GenuineIntel
cpu family      : 6
model           : 60
model name      : Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
stepping        : 3
microcode       : 0x17
cpu MHz         : *800.000*


=================


# of Cores: 4
# of Threads: 8
Processor Base Frequency: 3.3 GHz
Max Turbo Frequency: 3.7 GHz




:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 58
model name      : Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz
stepping        : 9
microcode       : 0x15
cpu MHz         : *1600.000*


*E3-1230 V2*


8 CPUs in system; running 1 parallel copy of tests
Double-Precision Whetstone                     *3499* MWIPS


8 CPUs in system; running 8 parallel copies of tests
Double-Precision Whetstone                    33143 MWIPS


9.5x


I posted that E3 result as reference.
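A quick sanity check on those ratios, using the MWIPS figures from this thread. The Hyper-Threading gain factor is a rough rule-of-thumb assumption, not a measurement.

```python
# Whetstone MWIPS figures from this thread. On a 4-core/8-thread chip,
# ideal scaling is ~4x plus maybe ~30% from Hyper-Threading; the ht_gain
# factor is an assumed rule of thumb, not a measurement.

def scaling_ratio(single_mwips, parallel_mwips):
    """Observed parallel-to-single speedup."""
    return parallel_mwips / single_mwips

def expected_ceiling(cores, ht_gain=1.3):
    """Rough upper bound for honest scaling on one socket."""
    return cores * ht_gain

e3_1240v3 = scaling_ratio(3113, 35840)
e3_1230v2 = scaling_ratio(3499, 33143)

# both observed ratios blow past the ~5.2x ceiling, which points at the
# single-copy run being clocked down by the ondemand governor rather than
# at superlinear scaling
print(round(e3_1240v3, 1), round(e3_1230v2, 1), expected_ceiling(4))  # 11.5 9.5 5.2
```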


----------



## DomainBop

Scaleway added two more VPS offers to their lineup today:


€5.99/Month
4 x86 64bit Cores
4GB Memory
100GB SSD Disk
1 Flexible public IPv4
200Mbit/s Unmetered bandwidth


€9.99/Month
6 x86 64bit Cores
8GB Memory
200GB SSD Disk
1 Flexible public IPv4
200Mbit/s Unmetered bandwidth


compare Scaleway's new VPS offerings to the competition they're trying to kill (OVH Public Cloud Instances):


1 vCore
2.4 GHz
4 GB RAM
20 GB SSD
100 Mbps best effort
€5.99 /month


2 vCores
2.4 GHz
8 GB RAM
40 GB SSD
100 Mbps best effort
€11.99 /month


----------



## willie

Heh, http://instantcloud.io used to give you 30 minutes of a C1, but now it's 20 minutes of a 2-core VPS.


Edit: no idea how to undo the screwed up formatting, sorry.


----------



## fm7

Date: Fri, 8 Apr 2016 16:57:53 +0000
From: Scaleway <[email protected]>


...



The VC1 preview is now finished; the new VC1 cloud servers have now reached General Availability and can be used to scale out to thousands of servers. Check out the full announcement on the blog!
For our users waiting for C2 General Availability, we recommend starting with the VC1L server and scaling up to the C2 when it reaches General Availability.
We want to thank you for your trust; we've recorded over 500,000 server startup requests!


----------



## willie

By the way, the new Scaleway VPS don't look especially more attractive than the corresponding OVH offers, other than the availability of hourly billing without a deposit. An Avoton core is around 20% the speed of an E3 core, so maybe 25-30% of the speed of OVH's E5 cores. Given that you're on a shared machine either way, you can't use 100% CPU all the time on either one; comparing the OVH and Scaleway 4GB plans (4 Avoton cores vs. one E5 core, some of the time), you get about equivalent total CPU and much better single-threaded performance on OVH. The OVH 2GB and 8GB plans seem to be better CPU-wise than Scaleway, plus they're available in Canada as well as in France.
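A back-of-envelope sketch of that 4GB-plan comparison; the 0.275 relative-speed factor is the rough 25-30% guess from this post (midpoint), not a benchmark.

```python
# Back-of-envelope for the 4GB plans: 4 dedicated Avoton cores vs. 1
# shared E5 vCore. The 0.275 factor is the 25-30% guess from this post
# (midpoint), not a measured number.

E5_CORE = 1.0          # one OVH E5 vCore, used as the unit
AVOTON_CORE = 0.275    # assumed Avoton speed relative to an E5 core

scaleway_4gb = 4 * AVOTON_CORE   # Scaleway: 4 dedicated Avoton cores
ovh_4gb = 1 * E5_CORE            # OVH: 1 shared E5 vCore

# aggregate throughput is comparable, single-thread speed is not
print(scaleway_4gb)                       # 1.1
print(round(E5_CORE / AVOTON_CORE, 1))    # 3.6 (OVH single-thread advantage)
```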


The Scaleway VPS's do seem nice for testing purposes and stuff like that--spin up, use for a while, and spin down when done.  But I still find the dedicateds to be a more interesting proposition.


Meanwhile, WSI has expanded the offerings of those E5 boxes (they have a 32GB, 2x240GB SSD one for $59/mo etc.) so that also seems like a useful and so-far unique resource, despite its shortcomings.


----------



## fm7

willie said:


> By the way, the new Scaleway VPS don't look especially more attractive than the corresponding OVH other than the availability of hourly billing without a deposit.



In France, I think Scaleway VPS is more attractive because its data center infrastructure is much better than OVH's facilities.


Also it should be noted OVH may host over 80 VPS per node.



> FS#12307 — FS#16729 — VPS CLOUD 2016 - SBG host560095.sbg1
> 
> Details
> We have detected an incident on host560095.sbg1, apparently a failure on the cooling system.
> 
> Impacted VPS:
> vps195294
> vps204862
> vps207809
> vps214245
> vps232955
> vps234613
> vps234767
> vps235270
> vps235314
> vps235324
> vps235329
> vps235341
> vps235345
> vps235358
> vps235385
> vps235434
> vps235435
> vps235436
> vps235438
> vps235449
> vps235450
> vps235452
> vps235453
> vps235479
> vps235530
> vps235534
> vps235535
> vps235572
> vps235637
> vps235639
> vps235656
> vps235667
> vps235669
> vps235761
> vps235813
> vps235848
> vps235886
> vps235946
> vps235952
> vps236106
> vps236107
> vps236120
> vps236184
> vps236226
> vps236229
> vps236583
> vps238129
> vps247938
> vps248724
> vps249091
> vps249401
> vps249409
> vps249460
> vps249503
> vps249553
> vps249554
> vps249617
> vps249624
> vps249655
> vps249657
> vps249662
> vps249668
> vps249674
> vps249685
> vps249748
> vps249752
> vps249753
> vps249761
> vps249765
> vps249767
> vps249768
> vps249769
> vps249818
> vps249819
> vps249820
> vps249829
> vps249859
> vps249864
> vps249868
> 
> Our technicians are working on the issue.
> 
> Date: Tuesday, 23 February 2016, 19:56PM
> Reason for closing: Done
> Additional comments about closing: All impacted VPS are up and running


----------



## willie

I don't know that 80 VPS per node is so terrible. I'll guess it's a 16-core E5 server with 128GB of RAM and 2GB VPSes, and if they're like the ones here, most of the VPSes will be idle most of the time, so you should be able to get reasonable CPU bursts. That's about 1/5 of an E5 core per VPS, where Scaleway is maybe equivalent to 1/2 an E5 core, but big VPS nodes are usually not CPU bound. Hard to tell. I'd still say get a dedi if CPU really matters.


I defer to your knowledge about the OVH vs. Online.net networks in France. I might try one of the OVH ones (2GB, Canada) for a month and can run some benchmarks if I do. I was a bit disappointed with the Scaleway C2M (dedi) on a compilation test (compiling ffmpeg): it took about 11 minutes on 8 cores, vs. about 3 minutes on an i7-3770 with 4 cores, or just under 19 minutes on a C1's 4 cores. It spent a lot of time at the end on a single yasm process, so single-threaded performance is significant. I'd like to try it on an OVH high-cpu cloud instance sometime, but I don't want to pay the $40 deposit for that.
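That single-threaded yasm tail is a classic Amdahl's-law situation: the serial fraction caps the speedup no matter how many slow cores you add. A quick sketch; the 90% parallel fraction is an assumed illustration, not measured from the actual build.

```python
# Amdahl's law: with a serial tail (like the lone yasm process), speedup
# is 1 / ((1 - p) + p / n). The 90% parallel fraction is an assumed
# illustration, not measured from the ffmpeg build.

def amdahl_speedup(parallel_fraction, n_cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# 8 slow cores only get ~4.7x even with 90% of the work parallel, so the
# gap to 4 fast cores is much smaller than the core count suggests
print(round(amdahl_speedup(0.9, 8), 2))   # 4.71
print(round(amdahl_speedup(0.9, 4), 2))   # 3.08
```

Which is why per-core speed still matters so much on these Avoton boxes.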


----------



## DomainBop

willie said:


> By the way, the new Scaleway VPS don't look especially more attractive than the corresponding OVH other than the availability of hourly billing without a deposit.  An Avoton core is around 20% the speed of an E3 core so maybe 25-30% of the speed of OVH's E5 cores.  Given that you're on a shared machine either way, you can't use 100% cpu all the time on either one, you get about equivalent total CPU and much better single threaded performance comparing the OVH and Scaleway 4GB plans (4 Avoton cores vs one E5 core, some of the time).  The OVH 2GB and 8GB plans seem to be better cpu-wise than Scaleway, plus they're available in Canada as well as in France.



OVH is more attractive if you need CPU power but not if you need disk space.


*Cost for a VPS with 50GB storage, smallest 2GB RAM offer:*


Scaleway 2.99 includes 50GB


OVH public cloud VPS SSD 4.59-6.19 ( 2.99 VPS w/10GB disk + 40GB extra disk: 1.60 for 200 iops or 3.20 for 800 iops)


OVH VPS SSD 7.99 (2.99 VPS w/10GB disk + 50GB extra disk for 5.00)
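The arithmetic behind those totals, for anyone comparing; prices are the euro figures quoted above.

```python
# The 50GB-at-2GB-RAM totals above, as arithmetic. Prices are the euro
# figures quoted in this post.

def ovh_public_cloud(base=2.99, extra_200iops=1.60, extra_800iops=3.20):
    """Base VPS plus a 40GB extra disk, at either IOPS tier."""
    return base + extra_200iops, base + extra_800iops

def ovh_vps_ssd(base=2.99, extra_50gb=5.00):
    """Base VPS plus a 50GB extra disk."""
    return base + extra_50gb

scaleway = 2.99  # 50GB already included
low_iops, high_iops = ovh_public_cloud()
print(scaleway, round(low_iops, 2), round(high_iops, 2), round(ovh_vps_ssd(), 2))
# 2.99 4.59 6.19 7.99
```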



> I don't know that the 80 vps per node is so terrible.



The user experience usually depends more on how well the provider manages their nodes than on how many VPS are on the node (except in cases of extreme overselling, though a case could be made that overloading a node with customers is itself an example of poor node management).


----------



## willie

True about the disk space, though keep in mind it's non-RAID and there's no convenient backup method currently being offered to new users.  SIS was intended for that, but it had scaling problems so it's limited to old users for now.  They claim it will be back soon, we'll see.


----------



## fm7

willie said:


> I don't know that the 80 vps per node is so terrible.



Resource contention.


----------



## DomainBop

Host                                Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 163-172-25-1.rev.poneytelecom.eu  0.0%    11    0.8   0.9   0.7   1.7   0.0
 2. 195.154.1.240                     0.0%    10    1.1   1.2   0.9   2.6   0.3
 *3. bb2-dc3-bb1.ams1.poneytelecom.eu  0.0%    10   16.6  16.5  16.4  17.3   0.0*
 4. 195.154.1.185                     0.0%    10   17.5  17.4  17.3  17.5   0.0
 5. 163-172-209-20.rev.poneytelecom.  0.0%    10   15.6  15.6  15.5  15.6   0.0


AMS1 is a new location (Evoswitch DC). Online Dedibox XC and SC lines, and OneProvider C2350 and C2750 offers, are available for the AMS1 launch. Scaleway will add the AMS1 location sometime in the future.


----------



## DomainBop

This announcement will no doubt get some people excited, because people have been begging for it since Scaleway launched:
 



> CentOS General Availability
> 
> 
> We've been working with the CentOS community to support CentOS on ARM for several months, and we're proud to announce that we're now able to provide CentOS on both ARM and x86!


----------

