
The New Face of Web Site Disaster - Botnets

  • Writer: Athens Kastome
  • Dec 1, 2019
  • 2 min read

I remember a few years back hearing about the blackouts in California (oh yes, the good ol' Enron days). It was quite shocking to hear that major dot-coms were down for hours. Even the "365 Main" facility in San Francisco, with its earthquake-proof infrastructure, lost power - proving that no matter how well equipped it is, no single location can withstand a big disaster.


Nowadays this is less and less of a real issue for web sites - hurricanes and power failures are no longer an excuse to stop providing service: Amazon and Google have shown that you can reach close to 100% reliability (barring software bugs) by eliminating all physical single points of failure. Today, in the 'cloud computing' age, every web site can get Amazon-like reliability without worrying about a power failure at its office in Mountain View or a natural disaster at its co-location farm - and all this for just hundreds of dollars a month.


But as the local disaster problem is solved, a new one is emerging that may shape the way we think about disaster recovery. Register.com got hit by a massive Distributed Denial of Service (DDoS) attack on its Domain Name System (DNS) servers. This attack will have many casualties - not just Register.com's customers, whose web sites may be unavailable if they rely on Register.com's DNS service, but also everyone hit by the collateral damage. We don't yet have any technical information on how the attack was carried out, but a DDoS attack is typically 'logical' rather than geographical - if your site is somehow 'logically' connected to a site under attack, you will be DDoS'ed along with it, and that won't be nice.


When Blue Security was DDoS'ed a few years ago, the attackers decided to take down Blue Security's providers along with everything else hosted there, across all of the providers' geographical locations.


A DDoS attacks your servers wherever in the world they may be. Even if you span your service across multiple physical locations, the attack will hit all of them. No matter how distributed your servers are, there is always a limit to the number of transactions you can handle in a single second, and once the attacking botnet (a network of software robots, or bots, that run autonomously and automatically) passes this limit, your service is effectively denied. You will then have nothing to do but lean back in your chair, wait for the attack to end, and count the lost visitors, revenue, and reputation with every minute that passes.
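
To make that arithmetic concrete, here is a toy back-of-the-envelope calculation in Python. Every number in it (server count, per-server capacity, bot request rate) is a made-up illustration, not a measurement - the point is just that once the botnet's total request rate exceeds your aggregate capacity, the share of legitimate requests that still gets through collapses, however many locations your servers span:

    # Toy model: what fraction of legitimate requests survives a flood?
    # Every number below is an illustrative assumption.
    servers = 200                  # spread across many physical locations
    capacity_per_server = 5_000    # requests/second each one can handle
    total_capacity = servers * capacity_per_server   # 1,000,000 req/s

    legit_rate = 50_000            # legitimate requests/second
    for bots in (10_000, 100_000, 1_000_000):
        bot_rate = bots * 10       # assume each bot sends 10 req/s
        demand = legit_rate + bot_rate
        # Past capacity, requests get dropped indiscriminately, so
        # legitimate traffic keeps only its proportional share.
        served = min(1.0, total_capacity / demand)
        print(f"{bots:>9,} bots: {served:6.1%} of legitimate requests served")

Doubling the server count in this model only moves the cliff; it does not remove it.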


While cloud computing can save you from the next Hurricane Katrina, anyone who decides to DDoS a site - even facebook.com - only needs to pay a botnet-for-hire fee; there is nothing Facebook - even with its massive server infrastructure - can do to stop them.


We simply don't know how to stop a DDoS attack in progress (snake-oil solutions aside). The only real remedy is to raise security awareness among administrators so that they run sufficient security tests on their servers and eliminate any botnet code hiding on the hundreds of thousands (some say millions) of servers out there. This will shrink the botnets and make DDoS attacks less practical (or at least more expensive).
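
To give a flavor of the kind of quick check an administrator might script - a minimal sketch only, which assumes the third-party psutil package and a made-up port whitelist, and is no substitute for a real malware scan - the snippet below lists processes holding established outbound connections to unexpected ports, one crude way botnet traffic can surface:

    # Minimal sketch: flag established outbound connections to ports
    # outside a whitelist. Assumes 'pip install psutil'; the whitelist
    # is illustrative only. May need elevated privileges on some systems.
    import psutil

    EXPECTED_PORTS = {25, 53, 80, 443}   # made-up whitelist for this example

    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        if conn.raddr.port in EXPECTED_PORTS:
            continue
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "?"
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            name = "?"
        print(f"pid={conn.pid} ({name}) -> {conn.raddr.ip}:{conn.raddr.port}")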

