I don't know where they're physically located, and I don't have numbers, but it certainly 'feels' like local SSD storage in normal use. I was just pointing out that, since they use the Ethernet interface, and since (as you pointed out) they're denser than the Vias (12 per 2U chassis, though in this video they say 252 per rack, which would mean a full 42U of them with no other gear), it's unlikely that the Scaleways have that same network bottleneck.
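For what it's worth, the two density figures quoted above are consistent with each other; a quick sanity check (all numbers taken from the claims above, nothing measured by me):

```python
# Sanity-check the quoted density: 12 servers per 2U chassis,
# 252 servers per rack, as claimed in the video.
servers_per_chassis = 12
chassis_height_u = 2
servers_per_rack = 252

chassis_per_rack = servers_per_rack // servers_per_chassis  # 252 / 12 = 21 chassis
rack_units_used = chassis_per_rack * chassis_height_u       # 21 * 2U = 42U

print(chassis_per_rack, rack_units_used)  # -> 21 42
```

So 252 per rack does indeed fill a standard 42U rack with nothing but these chassis.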
It certainly seems to me that the Scaleway hardware is an evolution of the idea behind the Via gear. That was either custom or highly specialised Dell hardware with an unusual x86 processor. As far as I can guess, the XS11-VX8 was built (non-exclusively) for them by Dell DCS, who do custom data-centre hardware: basically the only mentions of it on the web, in English at least, are DCS press releases, articles written about DCS press releases, and material about Online, and Online do have pictures of Dell prototype hardware. But the big downside is needing a 2.5" disk for each server - with that constraint they're pretty much at the upper limit of the density you can pack in, given that heat isn't really the limiting factor.
Which is why the Scaleways are different: storage moved off the individual nodes so they can be packed in tighter; ARM processors run cool enough to cope with the higher density; and (my speculation) better networking, since the remote storage puts extra demand on the network, more bandwidth was needed per server in ~2014 than in ~2009, and with more servers crammed in, uplink problems would be more likely to impact customers.