DDoS Defense Archive

The need for bot management is fueled by the rise in automated attacks. In the early days, bots were limited to small scraping attempts or spamming. Today, things are vastly different: bots are being used to take over user accounts, perform DDoS attacks, abuse APIs, scrape unique content and pricing information, and more. In its “Hype Cycle for Application Security 2018,” Gartner placed bot management at the Peak of Inflated Expectations, under the high-benefit category.

Despite serious threats, are enterprise businesses adopting bot management solutions? The answer is no. Many are still in denial. These businesses are trying to restrain bots using in-house resources and solutions, putting user security at risk. In a recent study, Development of In-house Bot Management Solutions and their Pitfalls, security researchers from ShieldSquare found that managing bots through in-house resources does more harm than good.

Against 22.39% of actual bad bot traffic, advanced in-house bot management solutions detected only 11.54% of bad bots. Not only did these solutions miss most of the bad bots, but nearly half of the 11.54% they did flag were false positives.

Figure 1: Bots Detected by In-house Bot Management Solutions vs. Actual Bad Bot Percentage

So why do in-house bot management solutions fail? Before we dive into the reasons, let’s look at a few critical factors.

More Than Half of Bad Bots Originate From the U.S.

As figure 2 (below) shows, 56.4% of bad bots originated from the U.S. in Q1 2019. Bot herders know that the U.S. is the epicenter of business, and appearing to originate from the U.S. helps them escape geography-based traffic filtering. For example, many organizations that leverage in-house resources to restrain bots simply block the countries where they have no business. Or they block countries such as Russia, suspecting that’s where most bad bots originate. In fact, the opposite is true: Only 2.6% of total bad bots originated from Russia in Q1 2019.

Figure 2: Origin of Bad Bots by Country

Cyber attackers now leverage advanced technologies to cycle through thousands of IPs and evade geography-based traffic filtering. When bots emanate from diverse geographical locations, IP- and geography-based filtering heuristics become useless. Detection requires understanding the intent of your visitors in order to identify the suspicious ones.

One-Third of Bad Bots Can Mimic Human Behavior

In Q1 2019 alone, 37% of bad bots were human-like. These bots can mimic human behavior (such as mouse movements and keystrokes) to evade existing security systems (Generation 3 and Generation 4 bad bots, as shown in figure 3).

Figure 3: Bad Bot Traffic by Generation

Sophisticated bots are distributed across thousands of IP addresses or device IDs and can connect through random IPs to evade detection. Their evasive tactics don’t stop there: these bots are programmed with an understanding of the measures you can take to stop them. Beyond rotating random IP addresses, they exploit geographical location and cycle through different combinations of user agents to evade in-house security measures.

In-house solutions lack visibility into the different types of bots, and that’s where they fail. They work on data collected from internal resources and lack global threat intelligence. Bot management is a niche space that requires comprehensive understanding and continuous research to keep up with notorious cybercriminals. Organizations across various industries deploy in-house measures as their first mitigation step when facing bad bots; to their dismay, these solutions often fail to recognize sophisticated bot patterns.

Recommendations

Deploy Challenge-Response Authentication

Challenge-response authentication helps you filter first-generation bots. There are different types of challenge-response authentications, CAPTCHAs being the most widely used. However, challenge-response authentication can only help in filtering outdated user agents/browsers and basic automated scripts and can’t stop sophisticated bots that can mimic human behavior.
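
As a minimal illustration, a challenge-response exchange can be sketched as a server-issued nonce that the client must sign with a shared secret. All names here are hypothetical, and real CAPTCHAs work differently, but the issue/answer/verify shape is the same:

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> str:
    # Server generates a fresh random nonce for each attempt.
    return secrets.token_hex(16)

def answer_challenge(challenge: str, shared_secret: bytes) -> str:
    # Client signs the nonce with a secret it holds, without sending the secret itself.
    return hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify_response(challenge: str, response: str, shared_secret: bytes) -> bool:
    # Server recomputes the expected answer and compares in constant time.
    expected = answer_challenge(challenge, shared_secret)
    return hmac.compare_digest(expected, response)
```

A first-generation bot replaying a fixed request fails this check because every challenge is fresh; as noted above, it does nothing against sophisticated bots that can drive a real browser.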

Implement Strict Authentication Mechanisms on APIs

With the widespread adoption of APIs, bot attacks on poorly protected APIs are increasing. APIs typically only verify the authentication status, but not the authenticity of the user. Attackers exploit these flaws in various ways (including session hijacking and account aggregation) to imitate genuine API calls. Implementing strict authentication mechanisms on APIs can help to prevent security breaches.

Monitor Failed Login Attempts and Sudden Spikes in Traffic

Cyber attackers deploy bad bots to perform credential stuffing and credential cracking attacks on login pages. Because these approaches involve trying many different combinations of user IDs and passwords, they increase the number of failed login attempts. The presence of bad bots on your website also causes sudden spikes in traffic. Monitoring failed login attempts and sudden traffic spikes can help you take pre-emptive measures before bad bots penetrate your web applications.

Deploy a Dedicated Bot Management Solution

In-house measures, such as the practices mentioned above, provide basic protection but do not ensure the safety of your business-critical content, user accounts and other sensitive data. Sophisticated third- and fourth-generation bots, which now account for 37% of bad-bot traffic, can be distributed over thousands of IP addresses and can attack your business in multiple ways. They can execute low and slow attacks or make large-scale distributed attacks that can result in downtime. A dedicated bot management solution facilitates real-time detection and mitigation of such sophisticated, automated activities.

Source: https://securityboulevard.com/2019/11/why-organizations-are-failing-to-deal-with-rising-bot-attacks/

Tens of thousands of Wi-Fi routers are potentially vulnerable to an updated form of malware that takes advantage of known vulnerabilities to rope these devices into a botnet for the purpose of selling distributed denial of service (DDoS) attack capabilities to cyber criminals.

A new variant of Gafgyt malware – which first emerged in 2014 – targets small office and home routers from well-known brands, gaining access to the devices via known vulnerabilities.

Now the authors of Gafgyt – also known as Bashlite – have updated the malware and are directing it at vulnerabilities in three wireless router models. The Huawei HG532 and Realtek RTL81XX were targeted by previous versions of Gafgyt, but now it’s also targeting the Zyxel P660HN-T1A.

In all cases, the malware is using a scanner function to find units facing the open internet before taking advantage of vulnerabilities to compromise them.

The new attacks have been detailed by cybersecurity researchers at Palo Alto Networks. The Gafgyt botnet appears to be directly competing with another botnet – JenX – which also targets the Huawei and Realtek routers, but not Zyxel units. Ultimately, the attackers behind Gafgyt want to kill off their competition by replacing JenX with their own malware.

“The authors of this malware want to make sure their strain is the only one controlling a compromised device and maximizing the device’s resources when launching attacks,” Asher Davila, security researcher at the Palo Alto Networks Unit 42 research division told ZDNet.

“As a result, it is programmed to kill other botnet malware it finds, like JenX, on a given device so that it has the device’s full resources dedicated to its attack”.

Control of the botnet allows its gang to launch DDoS attacks against targets in order to cause disruption and outages.

While the malware could be used to launch denial of service campaigns against any online service, the current incarnation of Gafgyt appears to focus on game servers, particularly those running Valve Source Engine games, including popular titles Counter-Strike and Team Fortress 2. Often the targeted servers aren’t hosted by Valve, but rather are private servers hosted by players.

The most common reason for attacks is plain sabotage of other users: some young game players want to take revenge against opponents or rivals.

Those interested in these malicious services don’t even need to visit underground forums to find them – Unit 42 researchers note that botnet-for-hire services have been advertised using fake profiles on Instagram and can cost as little as $8 to hire. Researchers have alerted Instagram to the accounts advertising malicious botnet services.

“There’s clearly a younger demographic that they can reach through that platform, which can launch these attacks with little to no skill. It is available to everyone and is easier to access than underground sites,” said Davila.

As more IoT products connect to the internet, it’s going to become easier for attackers to rope devices into botnets and other malicious activity if those devices aren’t kept up to date.

The routers being targeted by the new version of Gafgyt are all old – some have been on the market for more than five years. Researchers recommend upgrading your router to a newer model and regularly applying software updates to ensure the device is as protected as possible against attacks.

“In general, users can stay safe against botnets by getting in the habit of updating their routers, installing the latest patches and implementing strong, unguessable passwords,” Davila explained.

“The more frequent the better, but perhaps for simplicity, considering timing router updates around daylight savings so at least you’re updating twice a year,” he added.

Source: https://www.zdnet.com/article/this-aggressive-iot-malware-is-forcing-wi-fi-routers-to-join-its-botnet-army/

There’s been a massive decrease in the number of server attacks on Rainbow Six Siege since Ubisoft initiated a strategy to combat denial-of-service and distributed denial-of-service (DoS/DDoS) attacks. A number of measures, including hosting fewer matches on each server and monitoring network traffic, have yielded considerable results, making the shooter much more stable.

In a report from Ubisoft, DoS/DDoS attacks are down 93% since many of the outlined precautions were taken. Ban waves have been introduced to detect perpetrators, servers now take on fewer than three matches each, the punishment for quitting too many matches – a side-effect of players caught in an attack, known as the escalating abandon sanction – has been disabled, and network traffic monitoring has been heightened.

Legal action against a number of offenders, and people hosting and offering the services behind these attacks, is being pursued. While anyone caught has been banned, the report states that “prominent” attackers and cheat-makers are the ones facing legal threat. Finally, Ubisoft are working with the Microsoft Azure team to develop broader solutions that will provide “a substantial impact on DDoS, DoS, Soft Booting, and server stressing.”

This plan was revealed back in September, when hacks had become regular enough to necessitate game-wide action. Cheating players were slowing matches down via manufactured lag in order to force opponents to quit. Such behavior spiked around the start of the Operation Ember Rise season.

The BBC interviewed one of the purveyors of these cheats a while back, who claimed top-ranked players are among his customers. He made £1,500 a week from selling the hacks and, at the time, said his work wasn’t detected by the game – odds are his methods have since been stamped out.

Source: https://www.pcgamesn.com/rainbow-six-siege/protections

A number of South African internet service providers (ISPs) are limping away from a widespread distributed denial of service (DDoS) attack that struck on Sunday.

According to a MyBroadband report, ISPs Afrihost, Axxess, and WebAfrica are all currently affected.

As of Monday morning, both Afrihost and Axxess are still struggling with intermittent connectivity and poor network performance.

WebAfrica failed to provide an update.

It’s not yet clear when their services will be restored.

DDoS attacks in a nutshell

A DDoS attack inundates the target server with too many requests, slowing it down to a crawl and in some cases bringing it to a complete halt.

More famous attacks in the past include 2016’s DynDNS attack, which left a vast swathe of the internet inaccessible.

In the same year, the SABC was also a victim of an attack.

Reddit, the PlayStation Network and the now defunct Mt. Gox bitcoin exchange have all suffered similar attacks in the past.

The DDoS on these ISPs comes just days after Sabric (the SA Banking Risk Info Centre) announced that South Africa’s banks were hit by DDoS attacks of their own.

Source: https://memeburn.com/2019/10/ddos-attack-afrihost-axxess-south-africa/

Outages lasted for a full working day as the Route 53 DNS system was disrupted

Businesses were unable to service their customers for approximately eight hours yesterday after Amazon Web Services (AWS) servers were struck by a distributed denial-of-service (DDoS) attack.

After initially flagging DNS resolution errors, customers were informed that the Route 53 domain name system (DNS) was in the midst of an attack, according to statements from AWS Support circulating on social media.

From 6:30pm BST on Tuesday, a handful of customers suffered an outage to services while the attack persisted, lasting until approximately 2:30am on Wednesday morning, when services to the Route 53 DNS were restored. This was the equivalent of a full working day in some parts of the US.

“We are investigating reports of occasional DNS resolution errors. The AWS DNS servers are currently under a DDoS attack,” said a statement from AWS Support, circulated to customers and published across social media.

“Our DDoS mitigations are absorbing the vast majority of this traffic, but these mitigations are also flagging some legitimate customer queries at this time. We are actively working on additional mitigations, as well as tracking down the source of the attack to shut it down.”

The Route 53 system is a scalable DNS service that AWS uses to give developers and businesses a way to route end users to internet applications by translating domain names into numeric IP addresses. This effectively connects users to infrastructure running in AWS, such as EC2 instances and S3 buckets.

During the attack, AWS advised customers to update the configuration of clients accessing S3 buckets to specify the bucket’s region when making requests, to mitigate the impact of the attack. SDK users were also asked to specify the region as part of the S3 configuration to ensure the endpoint name is region-specific.
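
For illustration, a region-specific virtual-hosted-style S3 endpoint can be derived from the bucket name and region. The endpoint format below is AWS’s documented one, but the bucket and region are made-up examples, and real code would normally let the SDK build this from a configured region rather than hand-rolling URLs:

```python
def regional_s3_endpoint(bucket: str, region: str) -> str:
    # Virtual-hosted-style regional endpoint: requests go straight to the
    # bucket's home region instead of relying on global DNS resolution.
    return f"https://{bucket}.s3.{region}.amazonaws.com"

# Example with a hypothetical bucket:
# regional_s3_endpoint("reports-bucket", "eu-west-2")
```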

Rather than infiltrating targeted software or devices, or exploiting vulnerabilities, a typical DDoS attack hinges on attackers bombarding a website or server with an excessive volume of access requests. This causes it to undergo service difficulties or go offline altogether.

All AWS services had been fully restored at the time of writing. However, the attack struck during a separate outage affecting Google Cloud Platform (GCP), although there’s no indication the two outages are connected.

From 12:30am GMT, GCP’s cloud networking system began experiencing issues in its US West region. Engineers then learned the issue had also affected a swathe of Google Cloud services, including Google Compute Engine, Cloud Memorystore, the Kubernetes Engine, Cloud Bigtable and Google Cloud Storage. All services were gradually repaired until they were fully restored by 4:30am GMT.

While outages on public cloud platforms are fairly common, they are rarely caused by DDoS attacks. Microsoft’s Azure and Office 365 services, for example, suffered a set of routine outages towards the end of last year and the beginning of 2019.

One instance was a global incident in which US government services and LinkedIn sustained an authentication outage towards the end of January this year.

Source: https://www.cloudpro.co.uk/cloud-essentials/public-cloud/8276/aws-servers-hit-by-sustained-ddos-attack