amuck-landowner

Hostress, LLC has acquired GreenValueHost v2 Clients and domain.

drmike

100% Tier-1 Gogent
Ahh, the weekend ended their time like 15+ hours ago... and yesterday being Sunday, I think things there were fine.

Sounds like this shituation happened overnight Sunday into Monday, but could be something else going on.

It's brutal either way. I feel for Tom. I'd be looking at every log, system access, etc. I am not a fan of inheriting things standing in place from other shops unless the prior owners have a financial carrot to behave. People get pissed; sometimes it's the workers, and sometimes a random person who helped way back whenever uses the situation to slide back in through the open door. I condone none of this sort of behavior, and if someone really did this, shame on them and enjoy the karmaburger.

The poor customers, the ones that know nothing of the GVH history, the handoff, etc., and are just being hacky-sacked around.
 

Tyler

Active Member
I think now would be a great time to remind everyone to check the integrity of their backups and their backup software.
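A quick sanity check along those lines (a minimal sketch only; the archive name, paths, and checksum file here are made up, and it assumes plain tar.gz dumps):

Code:
# Does the archive still match the checksum recorded when it was written?
sha256sum -c /backups/checksums.sha256
# Can tar still read every member of the archive without errors?
tar -tzf /backups/vps101-2015-06-28.tar.gz > /dev/null
# Nothing beats an actual test restore into a scratch directory
mkdir -p /tmp/restore-test
tar -xzf /backups/vps101-2015-06-28.tar.gz -C /tmp/restore-test

None of that proves the data inside is what you actually want, which is why a periodic full restore onto a spare box is the only test that really counts.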
 

drmike

100% Tier-1 Gogent
This shituation happened 12-1:30PM Eastern Time on Sunday.

More than a dozen nodes wiped.

IPMI exploited. Looks like lots of firmware upgrades are due, plus some front-side protection for IPMI.

Someone entered the box and literally rm -rf'd the files. Dropped the partitions too.

Those boxes are totally gone.  Already brought back up with fresh OS and empty containers.

Customers should claim the account credits they are likely due per the Terms of Service. Over here: http://greenvaluehost.com/termsofservice.html

5. SERVICE LEVEL AGREEMENT

GreenValueHost agrees to maintain at least a 99.9% server and network uptime per any given calendar month. For each 30 minutes of downtime experienced beyond 99.9%, you shall be eligible for a 10% refund of your purchase to your GreenValueHost account credit balance. Refunds may not exceed 100% at any given billing period.
So get your 100% free month for your 100% empty container.
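For anyone doing the math on that clause (a rough sketch, assuming a 30-day month and reading the SLA literally):

Code:
# A 30-day month is 43,200 minutes; 99.9% uptime permits about 43 of them as downtime.
# Each further 30 minutes earns a 10% credit, capped at 100%, so roughly
# 43.2 + 10*30 = ~343 minutes (about 5.7 hours) of downtime already maxes out the credit.
echo "43200*0.001 + 10*30" | bc

A node that sits empty until it's reinstalled blows past that by a mile, hence the full month's credit.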

Someone needs to pull that GVH site offline... As is, it is bullshit and misleading....

Code:
Contact GreenValueHost
Green Value Hosting, Inc.
PO Box 972
Normal, Illinois 61761
United States
Code:
13. JURISDICTION AND CHOICE OF LAW
13a. Provisions of this Agreement are made under all respects of the laws of the United States and the state of Illinois.
13b. In the event in which litigation as permitted under this Agreement is initiated, jurisdiction shall be solely within Cook County, Illinois.
Code:
12. NETWORK & SERVICES ABUSE REPORTS
Abuse reports may be directed towards our designated network compliance officer at [email protected].
 
Last edited by a moderator:

HBAndrei

Active Member
Verified Provider
These poor souls (gvh clients) have been passed around like a bong at a frat party... and been screwed multiple times like a chick at a frat party... now their walk of shame back home, empty handed, dirty and abused =/
 
Last edited by a moderator:

DomainBop

Dormant VPSB Pathogen
This shituation happened 12-1:30PM Eastern Time on Sunday.


More than a dozen nodes wiped.


IPMI exploited. Looks like lots of firmware upgrades are due, plus some front-side protection for IPMI.


Someone entered the box and literally rm -rf'd the files. Dropped the partitions too.
...and yet no mention of the hack when he sent an email later in the day to customers and posted on LET. Instead he lied and called it "data corruption" and tried to pin some of the blame for the data loss on the "previous owners of your services," Jonny and Duke. Somebody needs to clue him in that 46 states have data breach notification laws, and that by lying to customers and trying to cover up the fact that it was a breach, not "data corruption," he opens himself up to a lot of potential liability if any of those customers decide to take further action.


the email he sent to customers:

Recently we’ve discovered that there has been data corruption on some of our VPS Nodes where your VPS may have been hosted. Unfortunately, none of the previous owners of your services had any backup systems in place.


We would like to remind you as stated with your previous providers that they informed you that backups were not in place so this was to be expected if there was data loss such as this issue now.


Going forward we have corrected that issue and all containers will be backed up weekly to an offsite location starting this week. We feel that backups should always be in place no matter what. This was one of the issues on our list to fix when we acquired the companies you were originally with before. However, only acquiring the companies less than a week ago we didn’t have enough time to do that before this happened.


If you have an affected container on one of these nodes it will be recreated and a new welcome email will be sent to you with the information. The current ETA is approximately 24 to 48 hours.


I apologize for any inconvenience this may have caused you. We look forward to continuing our relationship with you as a client. If you have any questions do not hesitate to create a ticket in our support portal.


If you have multiple VPS with our company we suggest trying to access each one, not every VPS was affected from this.


LET post:

I'm writing one response to this because I'm busy trying to get everyone back online.


I did apologize in the email. Nobody had backups in place prior to this event, customers were told no backups.


I had plans to start doing backups next week free of charge but I didn't get to do it in time.


Backups will start this week after all the containers have been fixed.


I sent the email to anyone that was active in WHMCS so everyone knew backups would be in place in the future. There is really nothing more to say.

Liability was removed from having backups prior to this due to it being disclosed to the customer.


Anyone that has active services is more than welcome to make a support ticket. Ticket answering times are delayed at this time due to all hands on deck fixing the issue.


I'm trying to make this a better service in the future by taking backups going forward. There has been alot of progress on GVH this far and it won't stop now or in the future. This only affected a portion of clients not everyone.


I appreciate everyone's concern on the matter and it will be resolved soon.

Somebody tell him his liability for the data breach wasn't removed just because backups weren't included in the plans.  :rolleyes:


TL;DR: strike one UGVPS, strike two DigTheMine, strike three Hostress v1, strike four Hostress v2. 4 for 4 on customers getting screwed over the past 2 1/2 years.
 

Munzy

Active Member
So the IPMI exploit that I think was used has been floating around for some time (~1 year). That means data could have been compromised for at least that same period.

Also, how do you rm -rf / --no-preserve-root a server if you don't have the password to the server? You would still need to get past the login prompt. (Yes, I understand you could do it with a RAID rebuild or a live ISO.) The posts I read implied that it was done on the host node?
 

DomainBop

Dormant VPSB Pathogen
So the IPMI exploit that I think was used has been floating around for some time (~1 year). That means data could have been compromised for at least that same period.
They rent from HudsonValleyHost and I'd be willing to bet GVH/Hostress servers aren't the only ColoCrossing servers with unpatched IPMIs...
 

drmike

100% Tier-1 Gogent
So the IPMI exploit that I think was used has been floating around for some time (~1 year). That means data could have been compromised for at least that same period.

Also, how do you rm -rf / --no-preserve-root a server if you don't have the password to the server? You would still need to get past the login prompt. (Yes, I understand you could do it with a RAID rebuild or a live ISO.) The posts I read implied that it was done on the host node?
It's unclear again how the rm -rf'ing technically happened. There is back-pedaling off the IPMI entry point. I know the firmware on the IPMIs was old and 'ploitable. IPMI might have been on public IPs with no ACL, VPN, etc. required to access it... If so, that's a self-imposed gun in the mouth.
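Rough idea of what that front-side protection looks like (a sketch only; the BMC subnet and allowed VPN address below are hypothetical, and the rules have to live on a gateway in front of the management network since the BMC can't filter itself):

Code:
# Allow only the management VPN host to reach the IPMI/BMC subnet, drop everyone else
iptables -A FORWARD -s 203.0.113.10 -d 10.10.10.0/24 -j ACCEPT
iptables -A FORWARD -d 10.10.10.0/24 -p udp --dport 623 -j DROP   # IPMI/RMCP
iptables -A FORWARD -d 10.10.10.0/24 -j DROP                      # web UI, KVM console, everything else

Better still is keeping the BMCs on a management VLAN that never touches public IP space at all.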

Unsure how the files got dropped while no forensic data was preserved and nothing log-wise was found. Either a pretty comprehensive job or incompetence. Unsure which, or if it's both.

Sad that in the end customers indeed got FUCKED. People need to get their heads out of their literal asses about customer-centered business. Backups are necessary and so is practical security as a company policy. I'd put good money on a wager that this whole rm'ing was preventable.
 

Munzy

Active Member
Bought two e6520 nodes, threw some Proxmox on them, and moved nearly all my low-end VPS to them. Nice not having to be a part of this drama shitstorm anymore.
 

drmike

100% Tier-1 Gogent
Bought two e6520 nodes, threw some Proxmox on them, and moved nearly all my low-end VPS to them. Nice not having to be a part of this drama shitstorm anymore.
Frontrange  / TSS / Colo@?

This is what I did a while back... Mostly.   I keep a few VPS instances for location or features (filtering).  Can count them now on one hand versus needing a spreadsheet.
 
Last edited by a moderator:

AnthonySmith

New Member
Verified Provider
Also, how do you rm -rf / --no-preserve-root a server if you don't have the password to the server? You would still need to get past the login prompt. (Yes, I understand you could do it with a RAID rebuild or a live ISO.) The posts I read implied that it was done on the host node?

You reboot the server from the IPMI, catch/edit grub, go into single user mode and you're done; the server is your plaything. Once you have IPMI you can do whatever you want, really, as if you had physical access.
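The defensive counterpart, as a sketch for GRUB2 (package names and paths vary by distro, and the hash placeholder obviously needs replacing with real output):

Code:
# Generate a PBKDF2 hash for the GRUB superuser
grub-mkpasswd-pbkdf2
# Require that password before anyone can edit entries or drop to single user from the console
cat >> /etc/grub.d/40_custom <<'EOF'
set superusers="root"
password_pbkdf2 root grub.pbkdf2.sha512.10000.<paste-hash-here>
EOF
update-grub   # on RHEL/CentOS: grub2-mkconfig -o /boot/grub2/grub.cfg

That only closes the edit-grub/single-user trick, and depending on the distro you may need --unrestricted on the normal entries so unattended reboots still work. Anyone holding the IPMI can still boot a rescue ISO, so locking down the IPMI itself is the real fix.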
 

DomainBop

Dormant VPSB Pathogen
RFO emailed to customers about the hack/data loss:


As you are aware on June 28th 2015 our VPS nodes had complete data loss. I am sending the final report on what surrounded the issues and the final events.

On June 28th 2015 between 12:00pm -1:15pm GMT -4:00 SolusVM started reporting VPS Nodes offline. Upon further review we realized there was data loss on these nodes. Shortly after investigating the issue we came to the conclusion that all data was lost on the nodes. We have tried to recover the data however the data was lost in a way that we could not get it back.

On June 28th 2015 Between 3:00pm – 10:00pm GMT -4:00 We started to reinstall the affected VPS Nodes and started recreating the VPS containers with a fresh install of the OS you last had installed.

On June 28th 2015 around 8:00pm GMT -4:00 I sent out an email stating Data Corruption. We were still investigating the root cause and I felt this was the best way to inform you without any facts besides the Data has been lost.

Between June 28th and July 1st 2015 we have been investigating the issue. We have had other people from the industry help us investigate the root cause on how the data loss happened. The truth is the GVH and TacVPS infrastructure had so many entry points / open exploits and without the logs we cannot come to a definitive answer on how this happened.

July 1st 2015: Today we have started taking FTP backups on ALL VPS NODES. This is free of charge to you and will happen weekly. It will take a few days for all nodes to back-up to the FTP server. We have secured all of the exploits that were on the infrastructure to ensure this type of loss does not happen in the future.

I would like to remind everyone while this did happen while you were under the care of Hostress, LLC. I only have had access to this infrastructure for a period of less than 10 days. It would be impossible for me to know exactly what was open and what was running on each individual VPS Node. I have been busy at work since day one trying to get all of these services online and working the way they should be. GVH came with a lot of issues, a lot of abused nodes, tons of servers that some people didn’t even know what they were for at the time.

I can promise you from this minute forward that as a hosting provider Hostress, LLC will do everything in its power to keep your data and information safe. We can safely say that only the VPS nodes were exposed due to issues within the infrastructure prior to receiving the servers during the purchase of the assets of GVH/TacVPS.

Anyone that wants to speak directly to me regarding these series of events can open a support ticket and ask to speak to me. Due to the massive amounts of tickets, response times to support and billing tickets are delayed at this time. Please do not bump your existing tickets as it will take longer to respond to you. Please try to refrain from opening multiple tickets at this time as this will make our support responses slower for other customers.

Thank you,

The Hostress Team
 

joepie91

New Member
So, about that RFO... 

the data was lost in a way that we could not get it back.
Weasel wording... sounds like they're trying to hide the reason. What was the "way" in question?

The truth is the GVH and TacVPS infrastructure had so many entry points / open exploits and without the logs we cannot come to a definitive answer on how this happened.
"Open exploits"? Incredibly vague term, doesn't really describe what's going on. "Without the logs"? Why weren't there any logs?

Today we have started taking FTP backups on ALL VPS NODES. This is free of charge to you and will happen weekly. It will take a few days for all nodes to back-up to the FTP server.
Okay, fair.

We have secured all of the exploits that were on the infrastructure to ensure this type of loss does not happen in the future.
And what 'exploits' would those be? Why is there no full disclosure on this, now that they have supposedly been fixed?

I only have had access to this infrastructure for a period of less than 10 days. It would be impossible for me to know exactly what was open and what was running on each individual VPS Node. I have been busy at work since day one trying to get all of these services online and working the way they should be. GVH came with a lot of issues, a lot of abused nodes, tons of servers that some people didn’t even know what they were for at the time.
This should have been sorted out before the sale/handover occurred.
 

RLT

Active Member
Fools rush in where angels fear to tread.


I think I would have found a good management company and worked out a contract for the servers to be checked and updated fast. It's silly to try to do that with a small team. Knowing GVH's history, and the fact that Duke didn't have it long enough to get the mess fixed, I would have expected these problems.
 

drmike

100% Tier-1 Gogent
Congrats! You understood 50% of the thread! Time for an offer soon, eh?
INb4 offer... Then again, there are 5 posts showing on the public side of the account view and 2 of them are sales jobs already.

iFi Host isn't going to find much good taking this power sales bulldozer approach.
 