Options for isolating sites: VMs vs containers vs different users


Active Member
I have multiple web sites (and a few other processes) I want to run in an environment that is flexible and as low maintenance as possible. This includes my sites and customer sites, production and development. Most share a common platform (Python, Django, Postgres, Linux (mostly Debian)). I need to be able to give users access to their own sites.

At the moment they are all running in separate VPSs, and some on shared hosting. The problem with multiple unmanaged VPSs is that it is a lot of stuff to manage.

I have been experimenting with running the sites on a single VPS with multiple users. It is a "cloud" one, so it can be scaled up as needed, and there is only one OS and one set of shared libraries to upgrade. The problem is that relying on permissions to separate sites from each other, and to give users access to sites, is quite fiddly, particularly as I am paranoid enough to run app servers as a different user from the one that owns the code they execute. I have not ruled it out as a solution, but it is not as straightforward as expected.
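As a concrete sketch of the "app server as a different user from the code" setup, here is a hypothetical systemd unit for one site. All names (the site, users, and paths) are illustrative, and it assumes gunicorn serving a Django WSGI app: the code is owned by a deploy user, while the service runs as an unprivileged per-site user with read-only access to it.

```ini
# /etc/systemd/system/examplesite.service  (illustrative sketch)
# Code under /srv/examplesite is owned by user "deploy"; the app
# server runs as "examplesite-run", which can read the code but
# not modify it.
[Unit]
Description=Gunicorn app server for examplesite (sketch)
After=network.target

[Service]
User=examplesite-run
Group=examplesite-run
WorkingDirectory=/srv/examplesite/current
ExecStart=/srv/examplesite/venv/bin/gunicorn examplesite.wsgi:application --bind 127.0.0.1:8001
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The fiddly part the post describes is exactly this: every site needs its own pair of users, its own port, and carefully set group permissions, which is what containers bundle up for you.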

I thought of running my own VPSs on a dedi, which is cost effective, but adds one more component to manage. It gives me a lot of isolation.

I think some sort of container or jail solution would give me the best of both worlds, but I am not familiar enough with the options to pick one. I am willing to consider any *nix OS, although Debian Linux is what I am most familiar with.

Resource isolation is not an issue: it will not be running anything I expect to cause problems. Easy admin and security are.

Any suggestions?


Active Member
1. I dislike control panels, but Cloudlinux looks distinctly interesting. I was thinking of something like Alpine Linux (or something else with grsecurity to provide real jails) or a BSD and jails, but this looks easier.
2. Why did I not think of that!?  The more I think about it, the more I think it is the best approach. Thanks.


The Irrational One
Retired Staff
Or you can use any of the container virtualization suites to package up and deploy each site, such as LXC and Docker.  


Active Member
Containers are along the lines I was thinking, but:

  1. Docker means a container per application, which complicates things for sites that require more than one application. Would something else work better?
  2. It seems a lot of work for single deployments. I can see the advantages if you are deploying lots of instances of an app, but it seems like significant extra work when each application is only deployed once.
  3. I know how to poke around inside a VPS or server. I can ssh in, do just about anything, and see anything when debugging issues I cannot immediately reproduce locally. Do containers make this more difficult?
  4. The whole development and maintenance workflow seems a lot more complex.

A lot of these objections may be due to my ignorance of containers, so please correct me if I am wrong :)


100% Tier-1 Gogent
I am a n00b still with Docker.  The groupies of Docker will bitch at what I say no doubt.

You @graeme would do good under Docker.  But but but, have to learn the intricacies of it and get comfortable with it.  The other thing is, you can't go running Docker on OpenVZ really.  So looking at KVM virtualization or your own dedicated server to run things.  Yes, some providers have Docker bastardizations under OpenVZ.

Docker, while a container system, is mega light. Essentially you have the OS base image that hangs out, and the containers are diffs on top of that: changes and whatnot, your own data, code, whatever. The change size can be quite minimal. So say you run Debian as the base and big bulk; each container does not carry the overhead and disk consumption of the whole Debian OS, just a tiny fraction of it.
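The base-image-plus-diffs idea can be sketched as a Dockerfile for one of the Django sites in question. This is illustrative, not a recommended production setup: the project name, paths, and the assumption that gunicorn is in requirements.txt are all hypothetical.

```dockerfile
# Hypothetical Dockerfile for one Django site.
# The FROM line is the shared base layer: every site image built on
# debian:bookworm-slim reuses that layer on disk, and each image
# only stores what its own RUN/COPY steps add on top.
FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-venv libpq5 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Copy requirements first so this layer is cached until they change.
COPY requirements.txt .
RUN python3 -m venv /venv && /venv/bin/pip install -r requirements.txt

# The site's own code is the final, smallest layer.
COPY . .

# Assumes gunicorn is listed in requirements.txt.
CMD ["/venv/bin/gunicorn", "mysite.wsgi:application", "--bind", "0.0.0.0:8000"]
```

Ordering the layers from least- to most-frequently-changed is what keeps each site's incremental disk cost small.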

These containers are otherwise the same experience as having VPS instances, aside from being on the same VPS or dedi underneath. Think of it much like OpenVZ, but without a public IP for each Docker instance like you often get with OpenVZ.

Now the PITA part of Dockerland (and there still are lots of those) is that you have to plumb the security right or you are going to be prone to self-harm. A front-side firewall is mandatory to start. You are likely going to want to reverse proxy only what you want exposed.

To me, Docker and related are like a cloud in a single account (be it VPS or dedi).  Good for multiplexing what you have and doing more with less baremetal.

You could do similar on baremetal with KVM or OpenVZ or Proxmox.  But that is going the dedi route.

MYTH: Docker means a container per application. It doesn't have to.  That's just how lots of people are using Docker.
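As a sketch of the multi-container alternative, a site that needs more than one process can be described as a group of containers with Docker Compose. Everything here (service names, the postgres image tag, ports) is illustrative, assuming the Django/Postgres stack mentioned earlier in the thread.

```yaml
# Hypothetical docker-compose.yml: one site as a small group of
# related containers, rather than one container per deployment unit
# or everything crammed into a single container.
services:
  web:
    build: .
    command: gunicorn mysite.wsgi:application --bind 0.0.0.0:8000
    ports:
      - "127.0.0.1:8001:8000"   # published on localhost only; a
                                # reverse proxy exposes it publicly
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me   # placeholder, set via secrets
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

A single `docker compose up -d` then brings up or tears down the whole site as a unit, which addresses the "more than one application per site" objection.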

TRUTH: The whole development and maintenance workflow seems a lot more complex.

That's how I feel about the abstraction of Docker and what I consider shit documentation and materials. Those that live on bleeding-edge fashion are always oblivious to such things and just say RTFM. Same folks I'd love to hand a shovel and some working tools to, tell them to just get to work, and watch them stand in confusion.


The Irrational One
Retired Staff
You can run docker under OpenVZ if it's a newer kernel.  However, if you want to make your life easier then just use a KVM to run the containers.

One way you can use a container is to simply map its port to a local port (e.g. for one Python site, etc.) and then use a reverse proxy in nginx or Apache to expose that port to the web.
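That port-mapping-plus-reverse-proxy arrangement can be sketched as an nginx server block. The domain and port are hypothetical; it assumes the container published its app on 127.0.0.1:8001 so only this vhost, not the container itself, faces the web.

```nginx
# Hypothetical vhost: proxy example.com to a container listening
# on localhost only.
server {
    listen 80;
    server_name example.com;   # illustrative domain

    location / {
        proxy_pass http://127.0.0.1:8001;
        # Pass the original host and client details through to the app.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Binding each container to a different localhost port and fanning them out from one nginx keeps every site private by default, which matches the firewall advice earlier in the thread.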

The micro-services trend is what created the "one container = one application" pattern, mostly because Docker provides automation tools for service deployment and scaling. If you containerize every single service/application, then you can set up scaling easily without having to redeploy the rest of your stack.

You can use a Docker container like a regular VPS if you want; it really depends on how you want to go about it. But if you just set everything up inside one big container, it doesn't make much sense to even use Docker, right? You might as well use the bare KVM/OpenVZ/metal system straight up.