Major incident at WHC.ca regarding hosting

MapleDots said:
Server reimaging - does that not mean initiating a restore from a previous server image?

If that were the case, the server should be restored to an earlier date, right?

Not sure if this provides any answers. Was it malicious intent, or was it an accident?

WHC needs to be fully transparent about this and then state how an event like this can be prevented in the future. Anything else will cause uncertainty and probably an exodus of clients.

In plain English: what happened, and how can it be prevented from ever happening again?

Well, considering they said the individual was locked out, that tells me it was malicious. And it leaves a lot of questions on the table.

1. Did this individual hack the server? If so, how did they do it?

2. If the servers were being copied, who has this information now?

3. What personal information got exposed?

Sorry, but this is not a great update, nor does it provide any major insight.

Don't forget that in May we got an email saying their systems had been compromised. So since May they didn't do anything to secure their systems? That was a sign to bail at that time…
 



https://www.itworldcanada.com/article/web-hosting-canada-reveals-cause-of-outage/457684
 
It's interesting that WHC is singling out Sibername accounts as having recoverable backups. I don't get it. Was WHC not doing backups?
 
Lesson for everyone here: if you leave your backups in the webhost's hands, you are in trouble.
It only takes a minute to generate a full account backup and then download it.

When was the last time you did one?
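For anyone who wants to make this a daily habit rather than a chore, here is a minimal sketch that queues a full account backup through cPanel's UAPI (the Backup::fullbackup_to_homedir call). The hostname, username, and API token are placeholders, and some hosts restrict API access, so treat this as an illustration rather than a guaranteed recipe.

    # Minimal sketch: queue a cPanel full account backup via UAPI.
    # HOST, USER and TOKEN are placeholders; create an API token in
    # cPanel under Security > Manage API Tokens.
    import requests

    HOST = "server.example-host.com"   # placeholder cPanel server
    USER = "myaccount"                 # placeholder cPanel username
    TOKEN = "API_TOKEN_HERE"           # placeholder API token

    resp = requests.get(
        f"https://{HOST}:2083/execute/Backup/fullbackup_to_homedir",
        headers={"Authorization": f"cpanel {USER}:{TOKEN}"},
        timeout=60,
    )
    resp.raise_for_status()
    data = resp.json()
    # UAPI responses carry a status flag plus any error messages.
    if data.get("status") == 1:
        print("Backup queued; download the tarball from your home directory.")
    else:
        print("Backup request failed:", data.get("errors"))

Run something like this from a daily cron job, pair it with a download step, and the "when was the last time" question answers itself.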
 
You know, WHC is going to lose some business over this, I am sure. But to be honest, this is not something that occurs often, and even the web hosts that say they do backups cannot guarantee them for you. Their TOS covers their butt. So it really leaves the onus on the customer to do their own backups.

I had my own server, and I backed up offsite daily. That's just how a business should operate. If you're running a website and email, you have to pay to back them up. It's a cost you have to accept. I hope WHC can recover smoothly from this.
 
Received an update which specifically says this was NOT a ransomware attack


Dear WHC Client,

As some of you may already know, WHC suffered a major incident Saturday morning, impacting a subset of clients with web hosting, email hosting and reseller hosting accounts. We want to write to you today to provide an update.

First off, we wanted to reassure all our clients that the security situation was addressed and contained early on Saturday morning. If your website and emails are up and you are not hosted on one of the impacted servers, you are not affected. Any affected account holders will be receiving further communications shortly and can already consult our Incident FAQ.

As a small business ourselves, we deeply empathize with our affected clients during this stressful time and our team is working diligently to restore websites and emails. We are and will continue to be there for you.

We have confirmed this was not a ransomware attack and there is no indication that data of any kind was ever downloaded, exported, shared, or exposed. The authorities have also been notified and an investigation is ongoing.

Our other regular operations continue to function normally, but our support and systems teams are prioritizing service recovery requests so other requests may take longer to respond to. While our focus has been on addressing the data disruption, we are committed to frequent communication with our affected clients on our Incident Update page.

We sincerely appreciate the outpouring of support we have received over the past few days as our teams continue to work tirelessly around-the-clock to get impacted accounts back online.

Sincerely,

WHC team
 
Sometimes if you catch something early enough, it doesn't get to the ransomware stage. That doesn't mean it wasn't one… and you can't conclude it's not without a proper investigation. That doesn't happen in a week.
 
I mean at that size, you'd expect some better backups/DR capabilities to be in place? Yeah, it's up to customers to back up their sites, yada yada, but still, this was pretty big.

Good on WHC, though, for recovering as much as they did.
 
Groot said:
I mean at that size, you'd expect some better backups/DR capabilities to be in place?

The key failing was not following industry-standard procedure.

Their production system, backups, and (supposedly) offsite backups were all on the same systems, in the same location, and under the same access credentials, thereby negating the entire benefit of offsite backups: physical distance between the two plus a lack of operator access.

I would imagine it was a cost-saving measure, but WHC really should have contracted with a 3rd-party cloud provider and uploaded daily backups of customer data that were NOT accessible even to people inside WHC. That's what offsite is: storage that is far, far away from the production environment in case of fire, flooding, intrusion, theft, employee malfeasance, etc.

These offline data companies (like Backblaze) simply store data; there is no external access to delete it, and if someone were to contact them and say "I'm from WHC, please delete all our data", even with the correct credentials, they'd tell them to FU. Their job is to protect data, not let any Tom, Dick or Harry run rampant deleting offsite backups.

WHC staff or operator access should not have control over deleting or changing WHC's offsite backups, only production and local backups. There needs to be a big wall between production/local and offsite, or it doesn't work.

By having production, backup, and (cough) offsite backup data effectively on the same systems, under the same user access, and in the same basic location, there were effectively no backups in case of disaster, as everyone involved at WHC quickly found out.
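To make that "big wall" concrete: one common pattern for delete-proof offsite copies (an illustration, not necessarily what WHC or any particular host does) is an S3-compatible bucket with Object Lock in compliance mode, where even the credentials that upload the backup cannot delete it or shorten its retention. A minimal sketch with boto3; the endpoint, bucket, key names, and credentials are all placeholders:

    import datetime
    import boto3

    # Placeholder endpoint/credentials; works with any S3-compatible
    # provider that supports Object Lock (Backblaze B2, AWS S3, etc.).
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.us-west-004.backblazeb2.com",
        aws_access_key_id="UPLOAD_ONLY_KEY_ID",
        aws_secret_access_key="UPLOAD_ONLY_SECRET",
    )

    # Compliance-mode lock: nobody, not even the uploader, can delete
    # or overwrite this object until the retention date passes.
    retain_until = (datetime.datetime.now(datetime.timezone.utc)
                    + datetime.timedelta(days=90))

    with open("backup-2021-08-14.tar.gz", "rb") as f:
        s3.put_object(
            Bucket="offsite-backups",     # bucket created with Object Lock enabled
            Key="daily/backup-2021-08-14.tar.gz",
            Body=f,
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=retain_until,
        )

With a setup like this, a rogue operator with the upload keys can add backups but cannot destroy the ones already written, which is exactly the wall the post above describes.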
 
For DN.ca I use cPanel to do a full account backup EVERY single day, and the backups download to both my Google Drive and my local computer. From there they are stored for 90 days, with the oldest one deleted when the new one comes in (see the rotation sketch below).

Since all our pictures are stored on ImagePost.org, the entire backup with database is under 25 MB. Our forum software is so slim and streamlined that a full restore takes under 5 minutes and we are up and running again.

So if we get hacked I can run a mirror site, fix the hack and bring the site back live very fast.

It is the responsibility of a good webmaster to do this and not depend on the webhost. It is easy to lay blame, but really, anyone who depends on a website and blindly leaves backups up to the webhost is living in a dream world.

Had DN.ca been hosted on WHC, we would have been up and running as soon as WHC moved us to a new server. Worst case, there would have been a few hours' propagation delay, and the most we can ever lose is one day's worth of posts if we have to restore from the previous day.

So as much as one would blame WHC, the responsibility also lies heavily on the lazy webmasters who have all the tools but refuse to take the time to use them.
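For what it's worth, the 90-day rotation described above is trivial to automate. A minimal sketch; the directory name and filename pattern are assumptions, so point them at wherever your daily tarballs actually land:

    import pathlib
    import time

    BACKUP_DIR = pathlib.Path("~/backups/dn.ca").expanduser()  # placeholder path
    RETENTION_DAYS = 90

    # Delete any archive older than the retention window; run this
    # right after each day's download so the oldest copy rolls off.
    cutoff = time.time() - RETENTION_DAYS * 86400
    if BACKUP_DIR.is_dir():
        for archive in sorted(BACKUP_DIR.glob("backup-*.tar.gz")):
            if archive.stat().st_mtime < cutoff:
                print("pruning", archive.name)
                archive.unlink()

Run the same script against both the local copy and the synced cloud folder and the two stores stay in lockstep.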
 
MapleDots said:
It is the responsibility of a good webmaster to do this and not depend on the webhost. It is easy to lay blame, but really, anyone who depends on a website and blindly leaves backups up to the webhost is living in a dream world.

Sorry, but this is wrong. They expressly advertised "Offsite Backups" in ads and other areas of the site, and although most people had local backups, some did not work (probably WHC screwed up the backup) and others were corrupted. Without offline backups, there was no way to try different ones.

Go read some of the comments on their FB page and you'll see plenty of people with local backups that simply would not work. I think WHC had problems well before this.

Or what if the customer's backup HD blew up, or the system got stolen or damaged by a power surge, fire, or flooding? Crap happens; that's why you pay for a host that states they use "offline backups".

Myself, I had newer backups downloaded, but the first few didn't work when uploaded, and as the process took a week to work through, I was forced to go to an older "known good" backup just to get up and running. It was truly a nightmare, and there is no way to 100% prepare for something like this when the host's issues run so deep.

What if all your recent backups are corrupted/unusable at the source, and then your host melts down totally with no offline backups? What emergency plan do you have?
 
DomainRecap said:
Sorry, but this is wrong. They expressly advertised "Offsite Backups" in ads and other areas of the site, and although most people had local backups, some did not work (probably WHC screwed up the backup) and others were corrupted. Without offline backups, there was no way to try different ones.

Back in the Lunarpages days, I too believed my webhost when they said they did the backups. That was a mistake, and I no longer trust any company to do the backups for me. In fact, even if a company did do them for sure, I would still do my own, just in case the company somehow got hit by a ransomware attack or something.

For me, having my own backup copies stored in the cloud and on a local computer makes sure I never get into a compromised position.
 
MapleDots said:
Back in the Lunarpages days, I too believed my webhost when they said they did the backups. That was a mistake, and I no longer trust any company to do the backups for me. In fact, even if a company did do them for sure, I would still do my own, just in case the company somehow got hit by a ransomware attack or something.

For me, having my own backup copies stored in the cloud and on a local computer makes sure I never get into a compromised position.

To be honest, I wouldn't trust anyone with my backups either...
 
MapleDots said:
For me, having my own backup copies stored in the cloud and on a local computer makes sure I never get into a compromised position.

Once again:

What if all your recent local backups on your HD and in the cloud are corrupted/unusable at the source, and then your host melts down totally with no offline backups? What emergency plan do you have?

This is what happened at WHC to many people, who held locally stored backups that were corrupt. With nothing offline at WHC and their local servers and backups toasted, a lot of people who used your "infallible" strategy were totally screwed.

Offsite backups by the host should not be the end user's only solution (keep locally stored backups, multiple storage locations, etc.), but they are one more layer of protection and flexibility should the crap really hit the fan. Not understanding their importance in MIS is insane.

There were more problems at WHC than just the meltdown. Once again, as I posted in that same message, go to their FB "meltdown" post and see how many people are having trouble restoring local backups they had stored, often because the backup itself was corrupted at the source.

I experienced this myself and was forced to use a "known good" (from testing) backup just to get back online.
 
rlm said:
To be honest, I wouldn't trust anyone with my backups either...

Sure, if you run your own server you have direct access to the backup generation tools and process and can test each and every backup in a development environment, but I don't know of any commercial host that allows this kind of access.

With companies like WHC, you download the JetBackup file and trust that it's not corrupted or otherwise unusable.
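One cheap sanity check you can run yourself after each download: force the whole archive through decompression so corruption shows up now rather than during an emergency restore. A minimal sketch, assuming a gzipped tarball; the filename is a placeholder:

    import hashlib
    import tarfile

    path = "backup-myaccount.tar.gz"  # placeholder filename

    # Reading every member decompresses the entire file, so truncation
    # or bit rot raises an error here instead of on restore day.
    with tarfile.open(path, "r:gz") as tar:
        for member in tar:
            if member.isfile():
                tar.extractfile(member).read()

    # Keep a checksum so copies on other storage can be compared later.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    print("archive readable; sha256 =", digest)

A readable archive can still contain bad data, of course, so an occasional test restore into a scratch environment is the only real proof.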
 
And guys, I apologize if I'm being blunt here, but I am a WHC customer and I was right there on the front lines experiencing it.

And it was a nightmare; as other customers noted, they had never seen a hosting disaster like this, ever.

I had recent backups and I scheduled a restore. Several days later (yes, it was a hosting holocaust and everything took forever) it would fail, I would try the next-newest backup, and a day or two later it too would fail; rinse and repeat. By then it was impossible to get through to support (phones were off and email was clogged), but finally the theory was that the most recent backups were corrupted. Many others experienced the same issue.

This was an absolute nightmare, every request took days to complete, and I did everything right. I had backups stored locally, on a network server, and in the cloud. And I still got totally hosed and lost a pile of data, so for a couple of Monday Morning Quarterbacks on non-WHC hosting to tell me "what they would have done" while absolving WHC of a very significant lapse in procedure is a tad irritating.

This will be my final post on this matter.
 
