
Meet the new NEC DX1000, up to 46 servers in a 2U Chassis!

TheLinuxBug

New Member
Hello all!

I do not post very often, but I thought this was worth sharing!

Not sure how many of you got the opportunity to visit the WorldHostingDays USA Convention this week, but it was a blast!

There were many vendors at the show, but one that stood out and caught my attention was this new offering from NEC.

Meet the DX1000 Micro Modular Server:

[Attached image: dx1000.png]

Download the full spec sheet: here

I honestly thought I had some pictures of the modules from the conference on my phone, but it seems I somehow deleted or misplaced them.  If I can find some better pics I will try to update later.  I am also going to try to get a better scan of the spec sheet, which I will just update in place when I get a second; I couldn't find a copy on their website at the time of writing.

More interesting, though, is this document on using these for OpenStack clusters: here

The above guide also has some better pictures of the unit and shows the individual modules.

To be honest, I am not sure whether this counts as 'Industry news' or not, but since I wasn't sure of the best category for this, it seemed the most fitting.

Supposedly you can fit up to 16 chassis, or 736 servers, in a single 42U rack, with each chassis using only two 1,600W 80 PLUS Platinum certified hot-plug power supplies for its 46 servers (or drive storage modules).

Each server is an Intel Atom C2750 (2.4GHz/8-core/4MB cache) or Intel Atom C2730 (1.7GHz/8-core/4MB cache) with:

  • Up to 32GB of DDR3-1600 ECC LV SO-DIMM memory
  • 128GB-1TB mSATA SSD
  • 1 x PCIe x8 Gen 2 slot
  • 2 x 2.5GbE links to switch module
  • Embedded BMC with IPMI 2.0
  • 2 x 40GbE QSFP+ uplinks (on the chassis switch module)
  • 1 x 1000BASE-T for management
  • Can operate in temps 50-104F (10-40C)
Other interesting things about these chassis:
  • You can also exchange a compute module for a drive module, which fits additional 2.5-inch SATA 500GB or 1TB drives.
  • Each compute module starts at $800.00 and you have to purchase a minimum of 16 compute modules with each chassis.
  • These start at around $16K USD, but at that cost per core these are pretty awesome chassis!
  • Can operate at higher ambient temperatures than standard server boards
With all this power in such a small amount of rack space, one would think these types of server setups will become more and more popular, since they let you run more servers in less space.  Also, with the low power consumption of these units and their high temperature tolerance, there are even bigger savings to be had, because they don't require the same amount of cooling and power as a standard setup.
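To put those numbers in perspective, here is a rough back-of-the-envelope sketch in Python using only the figures quoted above (46 modules per chassis, 16 chassis per rack, 8 cores per module, $800 per module, a 16-module minimum). It is just arithmetic for illustration, not anything official from NEC:

# Rough density/cost math for the DX1000, using only the figures quoted
# in this post. None of this is an official NEC quote.
SERVERS_PER_CHASSIS = 46   # compute modules per 2U chassis
CORES_PER_SERVER = 8       # Atom C2750 / C2730 are 8-core parts
CHASSIS_PER_RACK = 16      # per the 42U rack figure above
MODULE_PRICE_USD = 800     # quoted starting price per compute module
MIN_MODULES = 16           # minimum compute modules per chassis

servers_per_rack = SERVERS_PER_CHASSIS * CHASSIS_PER_RACK      # 736
cores_per_rack = servers_per_rack * CORES_PER_SERVER           # 5,888

min_module_spend = MODULE_PRICE_USD * MIN_MODULES              # $12,800, before the chassis itself
full_chassis_modules = MODULE_PRICE_USD * SERVERS_PER_CHASSIS  # $36,800
cost_per_core = full_chassis_modules / (SERVERS_PER_CHASSIS * CORES_PER_SERVER)

print(f"Servers per 42U rack: {servers_per_rack}")
print(f"Cores per 42U rack: {cores_per_rack}")
print(f"Minimum module spend per chassis: ${min_module_spend:,}")
print(f"Module cost per core, fully populated: ~${cost_per_core:.0f}")  # ~$100/core

That works out to roughly $100 per core on the module price alone, which is where the cost-per-core appeal comes from.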
What do you guys think, are these types of setups desirable? Would you use these in production (as a provider)?  Have you looked at any other similar server setups?

Look forward to hearing what you guys think!

Edit: fixed my typo on the number of servers that can fit in a single rack.

Cheers!
 

MannDude

Just a dude
vpsBoard Founder
Moderator

TheLinuxBug

New Member
Well, I figure most making an initial investment in this would likely only purchase the original 16 servers and then upgrade as needed.  This means the other 30 slots can be used for up to 1TB 2.5" drives, so there is the ability to get a fair amount of storage.  But yeah, if you filled the whole thing with compute modules and were limited to only the onboard mSATA drives, you would be a little limited on space.  The company I work for already uses some of the SuperMicro MicroClouds, but this still offers more density in a smaller space.  If you are a startup and don't want to pay a ton for your colo, starting out with one of these and upgrading as you go could be quite useful.  Plus they have quite low power usage and you can run them in a warmer environment than the SuperMicro Clouds, so this could be useful in environments with less than stellar cooling (countries where cooling is at a premium would be who this caters to, I would think: you can afford to run less cooling and still run the boards optimally).

Cheers!
 

DomainBop

Dormant VPSB Pathogen
That's pretty awesome. I think they would be pretty desirable for specific applications, though I can't see your average hosting or VPS provider jumping on board, simply because of the CPU and storage limits per node.
The deployment guide is targeted at private cloud deployments, not vanilla hosting provider setups.

For hosting providers that offer private clouds (and enterprises looking for a cost-effective hardware solution), this could be a great cost saver. (Really off-topic note: private clouds and SaaS are where the real profits are in the cloud, not public cloud offerings.)
 

devonblzx

New Member
Verified Provider
It's neat, but that is about all, from my perspective.  I'd imagine each one would use around 10A, so you'd exhaust the allowed power density of a cabinet in any standard datacenter long before filling it with these.  It would probably be cheaper and just as useful to have a quarter of the server density, or a larger chassis, for most applications.  Most datacenters only allow so much power per cabinet due to cooling and other restrictions; in my experience this is usually 60 to 80A, which means you'd only get about 6 of these in a cabinet.
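Quick sketch of that math in Python; the ~10A per loaded chassis is just my estimate, not a measured or vendor-published figure:

# Power-density sanity check. The ~10A per loaded chassis is an estimate,
# not a measured or vendor-published number.
AMPS_PER_CHASSIS = 10
SERVERS_PER_CHASSIS = 46

for cabinet_amps in (60, 80):
    chassis_fit = cabinet_amps // AMPS_PER_CHASSIS
    print(f"{cabinet_amps}A cabinet: ~{chassis_fit} chassis "
          f"({chassis_fit * SERVERS_PER_CHASSIS} servers), "
          f"far short of the 16 chassis that physically fit in a rack")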

I'm sure there are applications for these among enterprise customers that manage their own small datacenters with large clusters.  I don't think 46 Atoms would do much good for most virtualization; I think you'd be better off with 12 high-end Xeons or something similar.
 

drmike

100% Tier-1 Gogent
Looks nice... hyper density isn't going to work with end companies renting cheap DC space though.  As mentioned, you hit the power ceiling real quick.  Will require extra drops, which are not cheap.

Unsure how NEC is moving equipment.  Not a company you see much of these days due to price mainly.

Clearly this play is all about clusters and clouds.  Saw that recently with the new Xeon whatever which is a power sipper and intended for such form factor with fast interconnects.

Interesting gear, but changes whole model for anyone now in VPS looking at these.

The Atom CPU is the buzzkill on this.
 

TheLinuxBug

New Member
One of the other things I thought about when looking at this was the fact that you don't HAVE to use all of them for cloud; potentially you could use just as many as you need and sell the other modules as dedicated servers.  Also, the network fabric on these is nice: the ability to provide 2 x 2.5GbE to each module is pretty handy, especially if you get a customer with a high traffic load, since you can easily bond those links and have up to 5GbE at your fingertips on a single server.  I did consider the QSFP+ uplinks to be a bit of a hassle in the same way though, because not many DCs provide 40GbE QSFP+ links to customers without a specific request, and I am sure those would be quite expensive, even if your commit was only a gigabit.

I guess it is something nice to dream about, but you're probably right: when it comes down to the small VPS hosts in this industry, it is probably more than they would want to mess with.  Plus, the entry price tag for this hardware is still a bit steep. As @DomainBop said, it would probably end up being used more in enterprise situations with custom offerings.

Thanks for your feedback guys, keep it coming! :)

Cheers!
 

ndelaespada

Member
Verified Provider
More CPU power also means more power consumption though; they're trying to keep it under a certain limit energy-wise.
 

DomainBop

Dormant VPSB Pathogen
Same design as HP's Moonshot, both not very powerful CPU-wise.
Similar, but the Moonshot is 45 server cartridges in a 4.3U form factor case... the NEC is 46 in a 2U.
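To put that density difference in numbers, a quick Python sketch using only the counts mentioned in this thread:

# Servers per rack unit, Moonshot vs DX1000, using the counts above
systems = {
    "HP Moonshot (45 cartridges / 4.3U)": (45, 4.3),
    "NEC DX1000 (46 modules / 2U)": (46, 2.0),
}
for name, (servers, rack_units) in systems.items():
    print(f"{name}: ~{servers / rack_units:.1f} servers per U")
# Moonshot: ~10.5 servers per U; DX1000: ~23 servers per U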

These Moonshot case studies (http://www8.hp.com/us/en/products/servers/moonshot/index.html#case) are probably a pretty good example of the types of companies the NEC DX1000 will attract (mainly enterprise: 20th Century Fox, PayPal, a few universities).  The only hosting company I've seen using the Moonshot is Webtropia/myLoc.

edited to add a link to the myLoc Moonshot case study because it details the cost savings they've achieved: http://www8.hp.com/h20195/V2/GetDocument.aspx?docname=4AA5-3549ENW&cc=us&lc=en
 