David Richards is CEO and co-founder of US and Sheffield-based WANdisco, a public software company specialising in distributed computing. In this article, he explores how outages, such as that experienced by Twitter this week, can cost tech companies dearly.
Whenever a website goes down, it only takes a matter of minutes before the news reaches social media. Earlier this week, however, Twitter was offline for over two hours due to what it claimed were ‘technical difficulties’.
With over 300 million users worldwide, downtime is instantly noticed, no matter how brief or isolated the incident. Most news outlets in the US have reporters whose entire beat is dedicated to covering the movements and developments of the world’s biggest internet businesses, from Facebook and Google to eBay and Salesforce. Indeed, reporters in the UK media had covered the story within 30 minutes, compounding the reputational damage of an overloaded server.
For several years now, CIOs have been taking the issue of data security seriously, using firewalls and, to a lesser extent, encryption to protect against hackers. But as Twitter’s failings this week demonstrate, there is clearly more that can be done to prevent outages.
In an increasingly online world, the dangers of server downtime, including reputational and financial damage, should not be underestimated. Blackouts are most often caused by a collapse in the systems designed to handle the large volumes of data being exchanged.
Some of the world’s largest firms and financial institutions have been left exposed by major deficiencies in their processing and storage of data, running up costs that have spiralled into the millions. Unless the right steps are taken, the catastrophic results of inefficient management will become ever more frequent as the world grows increasingly reliant on data.
To counter this, it is imperative that businesses learn from the mistakes of their larger counterparts. Amazon, Google and Target have all experienced server downtime in recent months, with catastrophic effects on profit margins and customer trust. More recently, BT, TalkTalk and Vodafone brought the dangers of server downtime to the forefront of the UK’s security agenda once again.
Outages are not always accidental, with around half planned to allow for system updates and maintenance. However, potential financial losses are so large that business leaders should be increasingly concerned about the threats posed by data blackouts, not to mention the risks involved with inefficient data management.
Being able to continue functioning from another server, one that is completely up to date, affords large organisations the protection they need, preventing a downward spiral in productivity and, most importantly, customer trust.
Consider this: when Amazon’s US site goes down, the online retailer loses $1,000 a minute. Similarly, when Google shuts down, global internet traffic falls by 40%.
Such an extraordinary impact demonstrates just how catastrophic server downtime can be. It affects everyone in a business: the ability to stay online is the backbone of any international organisation in the 21st century.
Unplanned downtime, when the sheer volume of online traffic causes servers to experience ‘technical difficulties’, is the most pressing problem facing businesses operating online.
CIOs need to be more proactive in their approach to online integrity, and this goes above and beyond simply using a firewall to deflect attacks intended to bring servers down.
I strongly believe businesses will have to guarantee their ability to stay online in order to stay competitive in a world where the internet may well define our future.
Read more on the topic of cyber security in the next Tech City News magazine, due out in February. Subscribe here to receive your free copy.