# Per-customer /64s?



## Francisco (Jul 11, 2013)

Hi everyone,

I was hoping for some feedback on what you feel would be the best way to handle per customer /64's.

Since IRC is a fairly popular thing around our parts, we need to start assigning /64s to clients, as most IRC networks limit the number of connections from the same /96 or /64 to stop spam bots, etc.

Actually assigning the subnets isn't hard; the question is more "how should we handle a migration path?". Originally, I was thinking we should add a button on a separate page that would:

- Assign a /64 per location a user has a service in

- Revoke all V6s they currently have assigned

- Enable the option for them to assign V6s themselves on their /vserver/*/ipaddresses/ page

Now, this is all well and good, but I feel the 'remove all' step is a little rough. The next thing I was thinking was:

- Assign a /64 per location a user has a service in

- Convert all current entries into their /64 (we'd replace the first half of the IP with their /64 prefix). RDNS/etc. would be transferred over.

- Enable the option for them to assign V6s themselves on their /vserver/*/ipaddresses/ page

The other option I was thinking of was allowing side-by-side usage with the /128 allocations we do now. I don't really like this option, as it would flood our DB with IPs and be all-around quite messy.

I'm open to whatever feedback I can get. Migrating won't be required, but it would give those who want designated subnets the option.

There's also the option of assigning a /64 to each VPS, but I felt that was a little on the wasteful side.

Feedback?

Francisco


----------



## trewq (Jul 11, 2013)

The way I would do it is allocate all customers a /64 but let them keep their /128 IPs for a few days to a week. That would give everyone time to migrate over to the new range.


----------



## D. Strout (Jul 11, 2013)

While I know it would be messy, side-by-side makes the most sense for existing customers if they request a /64. New customers could be given a choice at signup. @trewq has a good idea, but that might be a bit hard to coordinate. I like the idea of converting the addresses, but I'm not sure I understand what you're getting at. Could you go over it with example addresses to make it a bit clearer?


----------



## Francisco (Jul 11, 2013)

D. Strout said:


> While I know it would be messy, side-by-side makes the most sense for existing customers if they request a /64. New customers could be given a choice at signup. *@trewq* has a good idea, but that might be a bit hard to coordinate. I like the idea of converting the addresses, but I'm not sure I understand what you're getting at. Could you go over it with example addresses to make it a bit clearer?


So let's say you have the following single IPs on a VM:

2605:6400:0002:fed5:0022:0000:426e:7b23

2605:6400:0002:fed5:0022:0000:6a48:0180

2605:6400:0002:fed5:0022:0000:9c04:df63

And you were assigned the /64:

2605:6400:0003:1234::/64

Your IPs would be swapped to:

2605:6400:0003:1234:0022:0000:426e:7b23

2605:6400:0003:1234:0022:0000:6a48:0180

2605:6400:0003:1234:0022:0000:9c04:df63
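The swap above boils down to keeping the low 64 bits of each address (the interface ID) and substituting the new /64 prefix. A minimal sketch with Python's `ipaddress` module, using the example addresses from this post (a hypothetical illustration, not the panel's actual code):

```python
import ipaddress

def swap_prefix(addr: str, new_prefix: str) -> str:
    """Move an address into a new /64, keeping the low 64 bits (interface ID)."""
    ip = ipaddress.IPv6Address(addr)
    net = ipaddress.IPv6Network(new_prefix)
    host_bits = int(ip) & ((1 << 64) - 1)  # second half of the 128-bit address
    return str(ipaddress.IPv6Address(int(net.network_address) | host_bits))

swap_prefix("2605:6400:2:fed5:22:0:426e:7b23", "2605:6400:3:1234::/64")
# → '2605:6400:3:1234:22:0:426e:7b23'
```

RDNS migration would then just be re-pointing each old PTR record at the output of this mapping.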

I thought about the per-VM /64, but I don't really see a reason for it unless someone is trying to get it statically routed.

Francisco


----------



## D. Strout (Jul 11, 2013)

That's what I figured you meant. In some ways a good idea, but really the user could do that themselves. Nothing beats keeping the same IPs. On that note, couldn't you do the conversion process you mentioned, but then keep a separate database of the old IPs that reroutes them to the new, equivalent ones? Still, as you say, somewhat messy on your end, but users wouldn't have to manage two sets of IPs.

Side note: I'm curious, how will you set up automatic addition of addresses for KVM? Don't you have to be able to run commands on the VM, which you can't under the more isolated KVM?


----------



## Francisco (Jul 11, 2013)

D. Strout said:


> Side note: I'm curious, how will you set up automatic addition of addresses for KVM? Don't you have to be able to run commands on the VM, which you can't under the more isolated KVM?


Right, and that's why we couldn't automate the enabling/disabling for V6. The only thing we'll be doing is the IP locks.

Francisco


----------



## D. Strout (Jul 11, 2013)

So what about the rest of my post?  Feasible?


----------



## wlanboy (Jul 11, 2013)

First thought:

No one is typing in IPv6 addresses.

So just drop them all and reassign new /64s.

Second thought:

What a mess with the AAAA entries...



Francisco said:


> - Assign a /64 per location a user has a service in
> 
> - Convert all current entries into their /64 (we'd replace the first half of the IP with their /64 prefix). RDNS/etc. would be transferred over.
> 
> - Enable the option for them to assign V6s themselves on their /vserver/*/ipaddresses/ page


It is the best solution:

- No need to re-enter all the rDNS

- Easier to find the corresponding AAAA entries

- Just have to change the prefix
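On that last point: the PTR names used for rDNS shift just as mechanically. In the `ip6.arpa` name the nibbles appear reversed, so only the trailing 16 nibbles (the /64 prefix) change, while the leading 16 (the interface ID) stay put. A quick sketch with Python's `ipaddress` module, using example addresses from this thread:

```python
import ipaddress

# Same interface ID under the old and the (hypothetical) new /64 prefix.
old = ipaddress.IPv6Address("2605:6400:2:fed5:22:0:426e:7b23")
new = ipaddress.IPv6Address("2605:6400:3:1234:22:0:426e:7b23")

print(old.reverse_pointer)
print(new.reverse_pointer)
# The two ip6.arpa names share their first 16 nibbles (the interface ID);
# only the last 16 nibbles differ, because only the /64 prefix changed.
```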


----------



## rds100 (Jul 11, 2013)

Add the new addresses but don't remove the old ones. Don't create unnecessary work for the people that don't care about IRC.

Or even give people the option to decide whether they want the new addresses or not.


----------



## Francisco (Jul 11, 2013)

rds100 said:


> Add the new addresses but don't remove the old ones. Don't create unnecessary work for the people that don't care about IRC.
> 
> Or even give the people an option to decide whether they want the new addresses or not.


No one will be forced into it but everyone will be assigned a /64 in each location they have a VPS.

Francisco


----------



## Francisco (Jul 11, 2013)

wlanboy said:


> First thought:
> 
> Noone is typeing in IPv6 addresses.
> 
> ...


Right, users who decide to migrate would have their RDNS updated automatically, etc.

Francisco


----------



## willie (Jul 12, 2013)

IRC networks are operating on a presumption of a /64 per person?  Sheesh.  I wonder how long it will take for IPv6 address exhaustion to become an issue like IPv4 has now. 

Is a /64 independently routable?  It will be interesting having something like that on $15/year VPSes. 

If the existing addresses you've given to clients are all from the same /64 or from just a few of them, it seems simplest to just let them keep the existing handful of addresses while also getting a /64 if they want/need it or automatically.


----------



## Ruchirablog (Jul 12, 2013)

willie said:


> If the existing addresses you've given to clients are all from the same /64 or from just a few of them, it seems simplest to just let them keep the existing handful of addresses while also getting a /64 if they want/need it or automatically.


This +1 

/64 for every vps is just jeeez!   :wacko:


----------



## Francisco (Jul 12, 2013)

willie said:


> IRC networks are operating on a presumption of a /64 per person?  Sheesh.  I wonder how long it will take for ipv6 address exhaustion to become an issue like ipv4 has now.
> 
> Is a /64 independently routable?  It will be interesting having something like that on $15/year vps's.
> 
> If the existing addresses you've given to clients are all from the same /64 or from just a few of them, it seems simplest to just let them keep the existing handful of addresses while also getting a /64 if they want/need it or automatically.


We'll be using soft boundaries for the subnets so things don't turn into a clusterfuck on our routers. Things will still use a /48 netmask; we'll just segment a /64 within that for each user.

Most IRC networks expect a /64 or /96 ban to take out 'a server or client' that might be abusive. All brokers on the market assign either /64s or /48s for free, so they have every reason to use those guidelines.
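The /48-per-site scheme with per-user /64 segments can be sketched with Python's `ipaddress` module (hypothetical prefix for illustration, not our actual allocation):

```python
import ipaddress

# Hypothetical site prefix; each customer gets one /64 carved out of it.
site = ipaddress.IPv6Network("2605:6400:3::/48")

per_customer = site.subnets(new_prefix=64)  # generator over the /64s in the /48
first = next(per_customer)
second = next(per_customer)

print(first)   # 2605:6400:3::/64
print(second)  # 2605:6400:3:1::/64
print(site.num_addresses // first.num_addresses)  # 65536 /64s per /48
```

So a single /48 per location comfortably covers one /64 per customer, while the routers keep seeing one on-link /48.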

Francisco


----------



## texteditor (Jul 12, 2013)

tbqh cutting subnets smaller than /64 on a regular basis would cause a bigger headache


----------



## D. Strout (Jul 12, 2013)

Francisco said:


> We'll be using soft boundaries for the subnets so things don't turn into a clusterfuck on our routers  Things will still use a /48 netmask just we'll segment a /64 within that for each user.


I believe that's how Crissic Solutions does it too. It does keep things much cleaner.


----------



## acd (Jul 12, 2013)

I would appreciate more than a window of 48 hours to renumber my VMs. This is what I suggest for a transition strategy:

1. All current ipv6 allocations are marked legacy. If a user clicks disable in S2, notify them that if they do this, the IP will be deallocated and a new one assigned.

2. Set a hard deadline 3-6 months out for customer renumbering, at which point all legacy ipv6 allocations will be forcibly removed.

3. All new ipv6 allocations are made from your /64s per user per site.

4. Keep a soft cap of ipv6s per VM, but allow that to be lifted as necessary. Show this soft cap somewhere in S2.

Feature requests:

- Add an interface that gives a per-customer overview of assigned IPs, stating which VM they are on and their primary/secondary/disabled status.

- Allow manual ipv6 selection (for example, if we want to do prefix translation).

- Allow moving non-primary ipv4s around through the panel.

- Allow a "route all in netmask via X" option for KVMs.


----------



## lbft (Jul 12, 2013)

One thing about the per-customer-per-location /64: it leaks the information that a pair of IPv6 addresses on different VMs belong to the same customer. I'd imagine there'd be some customers who'd like to keep their projects separate (different/conflicting communities, targeting multiple niches within the same overall market, people doing hosting for multiple different organisations, etc.).

Your existing policy means an account belongs to one person only, and each person has only one account. That means some people will have VMs on their account for different purposes. I bet some subset of them would prefer that not be public.


----------



## wlanboy (Jul 12, 2013)

lbft said:


> Your existing policy means an account belongs to a person only and only one account per person. That means some people will have VMs on their account for different purposes. I bet some subset of them would prefer that not be public.


Good point. Did not think that far.

I thought he was talking about per node, not per customer.


----------



## willie (Jul 12, 2013)

Oh yeah, I didn't take in the "per location" thing.  That's not good.  If I have multiple VPSes I'd want them to have unrelated addresses.  I had always thought a standard user allocation was a /112, and I'd be fine with that, but it should be a separate /112 (or /96, or whatever) for each VPS, preferably assigned randomly (independently) from whatever bigger pool they come from.


----------



## SkylarM (Jul 12, 2013)

D. Strout said:


> I believe that's how Crissic Solutions does it too. It does keep things much cleaner.


That would be correct.

Are you planning to automate that, or would it just take a support request to get per-container /64s? I'd say having the ability to request /64s per container is a good idea for those who wish to keep projects separate; generally speaking, though, I don't think many users would need it.


----------



## VPSCorey (Jul 13, 2013)

It's annoying that panels don't support /64 assignments.  Though some ISPs are doing /128s to customers as well.  Our idea was /48s per datacenter, with the /64s based off those.


----------

