Thursday 3 September 2020

The DRaaS Data Protection Dilemma

Written by Sarah Doherty, Product Marketing Manager at iland

Around the world, IT teams are struggling to balance less critical, but still important, tasks against innovative projects that help transform the business. Both are necessary and need to be actioned, but should your team do all of it? Have you considered letting someone else guide you through the routine work while your internal team stays focused on transforming the business?

The DRaaS data protection dilemma: outsourcing or self-managing?
Disaster recovery can take a lot of time to implement properly, so it may be the right time to consider a third-party provider who can help with some of the more routine and technical aspects of your disaster recovery planning. This can free up some of your staff’s valuable time while also safeguarding your vital data.

Outsourcing your data protection functions vs. managing them yourself
Within information technology, there has long been debate about how data protection should be handled. Some experts favour the Disaster Recovery as a Service (DRaaS) approach. They believe that data protection, although necessary, has very little to do with core business functionality. Organisations commonly outsource non-business services, which has driven many to consider employing third parties for other business initiatives as well. This has led some companies to believe that all IT services should be outsourced, enabling the IT team to focus solely on core business functions and transformational growth.

Other groups challenge this concept, believing that outsourcing data protection is foolish. An organisation’s ability to quickly and completely recover from a disaster - such as data loss or an organisational breach - can be the determining factor in whether the organisation remains in business. To them, outsourcing something as critical as data protection, and putting your organisation’s destiny into the hands of a third party, is a risky strategy. The basic philosophy behind this type of thinking can best be described as: “If you want something done right, do it yourself.”

Clearly, both sides have some compelling arguments. On one hand, by moving your data protection solution to the cloud, your organisation becomes increasingly agile and scalable. Storing and managing data in the cloud may also lower storage and maintenance costs. On the other hand, managing data protection in-house gives the organisation complete control. Therefore, a balance of the two approaches is needed in order to be sure that data protection is executed correctly and securely.

The answer might be somewhere in the middle
Is it better to outsource all of your organisation’s data protection functions, or is it better to manage them yourself? The best approach may be a mix of the two, using both DRaaS and Backup as a Service (BaaS). While choosing a cloud provider for a fully managed recovery solution is also a possibility, many companies are moving away from ‘do-it-yourself’ disaster recovery solutions and exploring cloud-based options for several reasons.

Firstly, purchasing the infrastructure for the recovery environment requires a significant capital expenditure (CAPEX) outlay. Therefore, making the transition from CAPEX to a subscription-based operating expenditure (OPEX) model makes for easier cost control, especially for those companies with tight budgets.
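To make that trade-off concrete, here is a minimal back-of-the-envelope comparison in Python. Every figure is an assumption invented for illustration, not a real quote or iland pricing:

```python
# Illustrative CAPEX vs. OPEX comparison for disaster recovery.
# All figures below are assumptions for the sake of the example.
capex_upfront = 250_000        # assumed: DR site hardware and licences
capex_yearly_upkeep = 40_000   # assumed: power, support, maintenance
draas_monthly = 6_000          # assumed: DRaaS subscription fee

years = 3
diy_total = capex_upfront + capex_yearly_upkeep * years
draas_total = draas_monthly * 12 * years

print(f"DIY DR over {years} years: ${diy_total:,}")   # $370,000
print(f"DRaaS over {years} years:  ${draas_total:,}")  # $216,000
```

Whatever the actual numbers, the subscription model spreads the cost evenly across budget periods rather than demanding it all up front, which is often the real attraction for companies with tight budgets.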

Secondly, cloud disaster recovery allows IT workloads to be replicated from virtual or physical environments. Outsourcing disaster recovery management ensures that your key workloads are protected and that the disaster recovery process is tuned to your business priorities and compliance needs, while also freeing up your IT resources.

Finally, cloud disaster recovery is flexible and scalable; it allows an organisation to replicate business-critical information to the cloud environment either as a primary point of execution or as a backup for physical server systems. Furthermore, the time and expense to recover an organisation’s data is minimised, resulting in reduced business disruption.

By contrast, a key disadvantage of local backups is that they can be targeted by malicious software, which proactively searches for backup applications and database backup files and fully encrypts the data. Additionally, local backups are prone to unacceptable Recovery Point Objectives (RPOs), especially when organisations try to recover quickly.
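To illustrate what an “unacceptable RPO” looks like in practice, here is a minimal sketch; the timestamps are invented for the example. The RPO you actually achieve is simply the gap between the last good backup and the moment of failure:

```python
from datetime import datetime, timezone

def achieved_rpo_hours(last_good_backup: datetime, failure: datetime) -> float:
    """Hours of data lost if you restore from the last good backup."""
    return (failure - last_good_backup).total_seconds() / 3600

# With nightly backups, a mid-afternoon ransomware hit loses most of a day:
last_backup = datetime(2020, 9, 3, 1, 0, tzinfo=timezone.utc)   # 01:00 backup
failure = datetime(2020, 9, 3, 16, 30, tzinfo=timezone.utc)     # 16:30 incident
print(f"Achieved RPO: {achieved_rpo_hours(last_backup, failure):.1f} hours")  # 15.5
```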

What to look for when evaluating your cloud provider

When it comes to your online backups, it is essential to strike a balance between micromanaging the operations and relinquishing responsibility altogether. After all, it’s important to know what’s going on with your backups. Given the critical nature of backing up and recovering your data, it is essential to do your homework before simply handing over backup operations to a cloud provider. There are a number of things that you should look for when evaluating a provider.
  • Service-level agreements that meet your needs.
  • Frequent reporting, and management visibility through an online portal.
  • All-inclusive pricing.
  • Failover assistance at a moment’s notice.
  • Do-it-yourself testing.
  • Flexible network layer choices.
  • Support for legacy systems.
  • Strong security and compliance standards.
These capabilities can go a long way towards allowing an organisation to check on its data recovery and backups on an as-needed basis, while also instilling confidence that the provider is protecting the data according to your needs. The right provider should also give you the flexibility to spend as much or as little time on data protection as your requirements demand.

Ultimately, cloud backups and DRaaS offer the flexibility and scalability to protect business-critical information, whether the cloud serves as the primary point of execution or as a backup for physical server systems. In most cases, the right disaster recovery provider will offer better recovery time objectives than your company could achieve on its own, in-house. Therefore, as you review your options, cloud DR could be the perfect solution: flexible enough to deal with an uncertain economic and business landscape.

Wednesday 2 September 2020

Top Five Most Infamous DDoS Attacks

Guest article by Adrian Taylor, Regional VP of Sales for A10 Networks 

Distributed Denial of Service (DDoS) attacks are now everyday occurrences. Whether you’re a small non-profit or a huge multinational conglomerate, your online services—email, websites, anything that faces the internet—can be slowed or completely stopped by a DDoS attack. Moreover, DDoS attacks are sometimes used to distract your cybersecurity operations while other criminal activity, such as data theft or network infiltration, is underway. 
Why are DDoS attacks bigger and more frequent than ever?
The first known Distributed Denial of Service attack occurred in 1996 when Panix, now one of the oldest internet service providers, was knocked offline for several days by a SYN flood, a technique that has become a classic DDoS attack. Over the following years, DDoS attacks became commonplace, and Cisco predicts that the total number of DDoS attacks will double from the 7.9 million seen in 2018 to over 15 million by 2023.
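As an aside, the signature of a SYN flood is a pile-up of half-open TCP connections: the server has replied SYN-ACK, but the (usually spoofed) client never completes the handshake. A minimal, purely illustrative detection sketch in Python follows; the threshold is an assumption that would need tuning to your environment, and the third-party psutil library may need elevated privileges on some platforms:

```python
import psutil  # third-party: pip install psutil

# Connections stuck in SYN_RECV are half-open handshakes. A sudden spike
# in their count is one classic symptom of a SYN flood in progress.
half_open = [c for c in psutil.net_connections(kind="tcp")
             if c.status == psutil.CONN_SYN_RECV]

THRESHOLD = 500  # assumed baseline; tune to what is normal for your server
if len(half_open) > THRESHOLD:
    print(f"Possible SYN flood: {len(half_open)} half-open connections")
```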

But it’s not just the number of DDoS attacks that is increasing; the bad guys are also creating ever-bigger botnets – the term for the armies of hacked devices used to generate DDoS traffic. As the botnets get bigger, the scale of DDoS attacks is also increasing. A Distributed Denial of Service attack of one gigabit per second is enough to knock most organisations off the internet, but we’re now seeing peak attack sizes in excess of one terabit per second, generated by hundreds of thousands, or even millions, of suborned devices. Given that IT services downtime costs companies anywhere from $300,000 to over $1,000,000 per hour, you can see that the financial hit from even a short DDoS attack could seriously damage your bottom line.

So we’re going to take a look at some of the most notable DDoS attacks to date. Some of our choices are famous for their sheer scale, while others earn their place because of their impact and consequences.

1. The AWS DDoS Attack in 2020
Amazon Web Services, the 800-pound gorilla of everything cloud computing, was hit by a gigantic DDoS attack in February 2020 – at the time, the largest DDoS attack ever publicly reported. It targeted an unidentified AWS customer using a technique called Connectionless Lightweight Directory Access Protocol (CLDAP) Reflection, which relies on vulnerable third-party CLDAP servers and amplifies the amount of data sent to the victim’s IP address by 56 to 70 times. The attack lasted for three days and peaked at an astounding 2.3 terabits per second. While the disruption caused by the AWS DDoS attack was far less severe than it could have been, the sheer scale of the attack, and the implications for AWS hosting customers in terms of lost revenue and brand damage, is significant.
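To put that amplification factor into perspective, a rough back-of-the-envelope calculation (illustrative only) shows how little bandwidth the attackers themselves needed to reach the reported 2.3 Tbps peak:

```python
# Reflection attacks multiply the attacker's own traffic: the victim
# receives roughly (spoofed request traffic) x (amplification factor).
PEAK_BPS = 2.3e12  # the 2.3 Tbps peak reported for the AWS attack

for factor in (56, 70):  # the CLDAP amplification range quoted above
    needed_gbps = PEAK_BPS / factor / 1e9
    print(f"{factor}x amplification -> ~{needed_gbps:.0f} Gbps of spoofed requests")
# Roughly 33-41 Gbps of requests is enough to generate a 2.3 Tbps flood.
```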

2. The Mirai Krebs and OVH DDoS Attacks in 2016
On September 20, 2016, the blog of cybersecurity expert Brian Krebs was assaulted by a DDoS attack in excess of 620 Gbps, which, at the time, was the largest attack ever seen. Krebs had recorded 269 DDoS attacks since July 2012, but this attack was almost three times bigger than anything his site, or, for that matter, the internet had seen before.

The source of the attack was the Mirai botnet, which, at its peak later that year, consisted of more than 600,000 compromised Internet of Things (IoT) devices such as IP cameras, home routers, and video players. Mirai had been discovered in August that same year, but the attack on Krebs’ blog was its first big outing.

On September 19, Mirai also struck one of the largest European hosting providers, OVH, which hosts roughly 18 million applications for over one million clients. This attack, on a single undisclosed OVH customer, was driven by an estimated 145,000 bots, generated a traffic load of up to 1.1 terabits per second, and lasted about seven days. The Mirai botnet was a significant step up in how powerful a DDoS attack could be; the size and sophistication of the Mirai network were unprecedented, as was the scale and focus of its attacks.

3. The Mirai Dyn DDoS Attack in 2016
Before we discuss the third notable Mirai DDoS attack of 2016, there’s one related event that should be mentioned: On September 30, someone claiming to be the author of the Mirai software released the source code on various hacker forums and the Mirai DDoS platform has been replicated and mutated scores of times since.

On October 21, 2016, Dyn, a major Domain Name Service (DNS) provider, was assaulted by a one terabit per second traffic flood that set a new record for a DDoS attack; there is some evidence it may actually have reached 1.5 terabits per second. The traffic tsunami knocked Dyn’s services offline, rendering a number of high-profile websites, including GitHub, HBO, Twitter, Reddit, PayPal, Netflix, and Airbnb, inaccessible. Kyle York, Dyn’s chief strategy officer, reported: “We observed 10s of millions of discrete IP addresses associated with the Mirai botnet that were part of the attack.”

Mirai supports complex, multi-vector attacks that make mitigation difficult. Even though Mirai was responsible for the biggest assaults up to that time, the most notable thing about the 2016 Mirai attacks was the release of the Mirai source code enabling anyone with modest information technology skills to create a botnet and mount a Distributed Denial of Service attack without much effort.

4. The Six Banks DDoS Attack in 2012
On March 12, 2012, six U.S. banks were targeted by a wave of DDoS attacks—Bank of America, JPMorgan Chase, U.S. Bank, Citigroup, Wells Fargo, and PNC Bank. The attacks were carried out by hundreds of hijacked servers from a botnet called Brobot, with each attack generating over 60 gigabits per second of DDoS traffic.

At the time, these attacks were unique in their persistence: rather than trying to execute one attack and then backing down, the perpetrators barraged their targets with a multitude of attack methods in order to find one that worked. So, even if a bank was equipped to deal with a few types of DDoS attack, it was helpless against others.

The most remarkable aspect of the 2012 bank attacks was that they were allegedly carried out by a group calling itself the Izz ad-Din al-Qassam Cyber Fighters, a name borrowed from the military wing of the Palestinian Hamas organisation. Moreover, the attacks had a huge impact on the affected banks in terms of revenue, mitigation expenses, customer service issues, and the banks’ branding and image.

5. The GitHub Attack in 2018
On Feb. 28, 2018, GitHub—a platform for software developers—was hit with a DDoS attack that clocked in at 1.35 terabits per second and lasted for roughly 20 minutes. According to GitHub, the traffic was traced back to “over a thousand different autonomous systems (ASNs) across tens of thousands of unique endpoints.”

Even though GitHub was well prepared for a DDoS attack, their defences were overwhelmed—they simply had no way of knowing that an attack of this scale would be launched.

The GitHub DDoS attack was notable for its scale and the fact that it was staged by exploiting a standard command of Memcached, a database caching system for speeding up websites and networks. The Memcached DDoS attack technique is particularly effective as it provides an amplification factor – the ratio of the DDoS attack traffic generated to the size of the attacker’s request – of up to a staggering 51,200 times.
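For a sense of where a factor like 51,200 comes from, here is the arithmetic, using the commonly cited Memcached figures of a roughly 15-byte request triggering a cached response of up to about 750 KiB (illustrative, not measurements from the GitHub incident itself):

```python
# Amplification factor = response size / request size.
request_bytes = 15            # a tiny spoofed request (commonly cited figure)
response_bytes = 750 * 1024   # ~750 KiB cached payload (commonly cited figure)
print(f"Amplification: {response_bytes / request_bytes:,.0f}x")  # 51,200x
```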

And that concludes our top five line up – it is a sobering insight into just how powerful, persistent and disruptive DDoS attacks have become.

Tuesday 1 September 2020

Cyber Security Roundup for September 2020

A roundup of UK focused Cyber and Information Security News, Blog Posts, Reports and general Threat Intelligence from the previous calendar month, August 2020.

Taking security training courses and passing certification exams are common ingredients in the makeup of the vast majority of accomplished cybersecurity and information security professionals. As such, two security incidents last month raised more than a few eyebrows within the UK security industry.

The first involved the renowned and well-respected United States security training company, the SANS Institute, announcing that a successful email phishing attack against one of its employees resulted in 28,000 personal records being stolen. SANS classified this compromise as "consent phishing", namely where an employee is tricked into granting malicious Microsoft Office 365 OAuth applications access to their O365 account. In June 2020, Microsoft warned that 'consent phishing' scams were targeting remote workers and their cloud services.

The second incident involved British cybersecurity firm NCC Group, after The Register reported that NCC-marked CREST penetration testing certification exam 'cheat sheets' had been posted on GitHub. El Reg stated the leaked NCC-marked documents "offered step-by-step guides and walkthroughs of information about the Crest exams", with those who posted them claiming the documents contained a clone of the Crest CRT exam app that helped users pass the CRT exam on the first attempt. CREST, a globally recognised provider of penetration testing accreditations, conducted its own investigation into the GitHub post and then suspended its Certified Infrastructure Tester (CCT Inf) and Certified Web Application Tester (CCT App) exams.

Reuters reported that former British trade minister Liam Fox's email account was compromised by Russian hackers through a spear-phishing attack. This led to leaks of sensitive US-UK trade documents in a disinformation campaign designed to influence the outcome of the UK general election in late 2019.

UK foreign exchange firm Travelex is still reeling from the double 2020 whammy of a major ransomware outbreak followed by the impact of COVID-19, and has managed to stay in business thanks to a bailout arranged by its administrators, PwC.

Uber's former Chief Security Officer has been charged with obstruction of justice in the United States, accused of covering up a massive 57 million record data breach in 2016. Uber eventually admitted paying a hacking group a $100,000 (£75,000) ransom to delete the data they had stolen.

The British Dental Association advised its dentist members that their bank account details and correspondence with the association had been stolen by hackers. A BDA spokeswoman told BBC News it was possible that information about patients was also exposed, but remained vague about the potential extent. The breach was likely caused by a hack of the BDA website, given that the site was taken offline for a considerable amount of time after the breach was reported.

It seems that every month I report a huge cloud misconfiguration data breach, typically found by researchers looking for publicity and caused by businesses not adequately securing their cloud services. This month it was the turn of cosmetics giant Avon, after researchers at SafetyDetectives found 19 million records accessible online due to the misconfiguration of a cloud server. Separately, Accurics reported that misconfigured cloud services accounted for 93% of the 200 breaches it has seen in the past two years, exposing more than 30 billion records. Accurics also predicts that cloud services data breaches are likely to increase in both velocity and scale; I am inclined to agree.
Crime Dot Com: From Viruses to Vote Rigging, How Hacking Went Global
Finally, I was invited to review a pre-release of Geoff White’s new book, “Crime Dot Com: From Viruses to Vote Rigging, How Hacking Went Global”. I posted a book review upon its release in August; I thoroughly recommend it. The book is superbly researched and written, and the author’s investigative-journalist storytelling style not only lifts the lid on the murky underground world of cybercrime but shines a light on the ingenuity, persistence and ever-increasing global scale of sophisticated cybercriminal enterprises. While the book is an easily digestible read for non-experts, it also provides cybersecurity professionals working on the frontline in defending organisations and citizens against cyber-attacks with valuable insights and lessons about their cyber adversaries and techniques, particularly in understanding the motivations behind today's common cyberattacks.

Stay safe and secure.
