Blocking DDoS Archive

  • Cybersecurity company Recorded Future conducted a research study on the history of Iran’s hacker culture, its ties to the country’s government, and the mistakes the loosely knit community has made along the way.
  • Forums started in 2002 have provided a launch point for a series of sophisticated attacks against world governments and companies throughout the past two decades, according to the report.

Iranian hackers have congregated since at least 2002 in online forums to share tips on the best ways to create successful cyberattacks.

Those conversations have given birth to some of the most significant global cybersecurity incidents, including devastating attacks on Saudi Aramco, attacks against the public-facing websites of large banks and espionage campaigns on a wide range of Western targets, according to new research by cybersecurity intelligence firm Recorded Future.

Among the findings in the report:

  • A forum called “Ashiyane,” created by a cybersecurity company called the Ashiyane Digital Security Team, served as a medium for Iranian contractors to show off their talents for executing successful online offensive campaigns.
  • The forum was one of Iran’s most popular, with around 20,000 users, and had direct ties to Iran’s Islamic Revolutionary Guard Corps.
  • Many of the hackers on the forum considered themselves “gray hats,” a term for hackers who participate in both legitimate and criminal cyber activity. It’s a blend of “white hat,” which refers to ethical hackers, and “black hat,” which refers to hackers who take part in malicious or illegal activities.
  • During the Iranian Green Movement of 2009, the forum was one of only a few that remained in use as Iran’s government cracked down on hacking websites.
  • The forum’s archives show participants sharing information on how to execute distributed denial of service (DDoS) attacks, which are meant to push websites out of service by flooding them with traffic, as well as Android exploits and commonly used cyberattack techniques.
  • The forum was shut down in 2018. Though the reason for the shutdown is not clearly known, Recorded Future cites sources as saying the forum became involved in online gambling, an endeavor explicitly prohibited in the Islamic Republic.

Source: https://www.cnbc.com/2019/01/16/new-research-offers-a-glimpse-inside-the-online-forums-where-iranian-hackers-congregate.html

The breathtaking pace at which everyone and everything is becoming connected is having a profound effect on digital business, from delivering exceptional experiences to ensuring the security of your customers, applications, and workforce.

Consider this: There are over 20 billion connected devices and more than 2 billion smartphones in use today. Gartner predicts that by 2022, $2.5 million will be spent every minute in the IoT and 1 million new IoT devices will be sold every hour.

No longer can you secure the perimeter or a centralized core and trust that nothing will get in or out. Effective security depends on a defense-in-depth strategy, from the core to the edge, that enables you to protect your most valuable assets by implementing proactive protection closer to the threats and far away from your end users.

The Evolution of a Digital Topology

Centralized computing systems were never an extraordinarily efficient or cost-effective way to process huge volumes of transactional data for throngs of online users concurrently. The search for more engaging experiences at digital touchpoints paved the way for cloud and distributed computing to exploit parallel processing technology in the marketplace.

This worked for a while, until streaming video and other rich media became the norm across the Internet and users had very little tolerance for glitches or latency. The problem is, dragging every experience back and forth to a centralized cloud doesn’t resolve the critical issues of capacity and traffic pileups.

It’s one of the great misconceptions of the Internet that “the last mile” is the bottleneck. The issue instead lies with the cloud data centers and backbone providers, which typically have only a few hundred Tbps of capacity, not enough to deliver the kind of experiences or security your customers expect.

The demand for more real-time business moments between things and people at digital touchpoints is pushing us all toward the edge. Which is a good thing. It’s already expanding business opportunities, and fundamentally changing how we live, interact, shop, and work.

It’s forcing businesses to adapt, either by pushing faster development, becoming more agile in their processes, favoring faster features over perfect features, or all three. The problem is, security teams aren’t currently set up to handle this kind of disruption on top of the need to monitor, develop insights, and adapt processes based on soak time they simply don’t have anymore.

All the while, attacks continue to grow in scale and target with more precision. Trust based on a single network location is no longer enough.

Enter Security at the Edge

Security at the edge is an approach to defending your business, your customers, and all of your users from security threats by deploying defense-in-depth measures closer to the point of attack and as far away from your assets (your people, applications, or infrastructure) as possible. Security at the edge allows InfoSec pros to address three critical security imperatives.

1. Scale

We live in a time when attackers hold unprecedented power, and there’s simply no way to summon the capacity you need to defend yourself in a data center. Even the largest cloud data centers can be overwhelmed by the attacks we’re seeing. And even if it were physically possible to equip a cloud data center with enough capacity, the cost would be prohibitive.

This is becoming an even more widespread problem with the rise of IoT. There are now billions of devices connected at the last mile, with powerful CPUs and little or no security.

The only way to prevent this is by intercepting the enormous volumes of attack traffic at the edge, where there is the capacity to mount a viable defense and stop attacks from reaching and swamping your data centers.
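To make that idea concrete, here is a minimal, illustrative Python sketch of the kind of per-client filtering an edge node can apply before traffic ever reaches the origin: a sliding-window rate limiter that drops requests from sources exceeding a threshold. The window size and limit are assumptions chosen for illustration; real edge platforms are far more elaborate.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 1.0   # illustrative window
MAX_REQUESTS = 100     # illustrative per-client limit

_history = defaultdict(deque)  # client IP -> timestamps of recent requests

def allow(client_ip, now=None):
    """Return True if the request should be forwarded to the origin."""
    now = time.monotonic() if now is None else now
    window = _history[client_ip]
    # Discard timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # dropped at the edge; the origin never sees it
    window.append(now)
    return True

# Example: the 101st request in the same window from one source is dropped.
results = [allow("203.0.113.9", now=0.5) for _ in range(101)]
print(results[-1])  # False
```

The point is architectural: the check runs at the edge node, so flood traffic consumes edge capacity rather than the origin’s.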

2. Intelligence

It’s now imperative that you protect applications and APIs deployed anywhere, whether in your data centers or in the public cloud, with DDoS protection, a web application firewall, and bot management. An intelligent defense strategy has become more important as more people than ever access your apps through APIs from mobile devices. What’s more, the millions of bots deployed by malicious actors are becoming extremely sophisticated at evading traditional defenses.

But protecting your apps, APIs, and users is about more than just capacity; it requires cutting-edge threat intelligence. Threat intelligence should leverage a multilayered approach of machine learning and human intelligence, in which both data scientists and algorithms perform statistical, trend, and pattern analysis of structured and unstructured data to identify and mitigate new attack vectors before anybody else. The key is that this all happens at the edge, closer to the attack point and farther away from you and your end users.

3. Expertise

Nothing tops human expertise. Not only do you need the network capacity that the ever-growing threat of volumetric DDoS attacks demands, but you also need the expertise to understand what the data, the patterns, and the anomalies are telling you.

Along with sophisticated technology and a security-at-the-edge approach, industry experts can help you make sense of the threats you face every day. And as you know, attackers never sleep. The only response: always-on, 24x7x365 monitoring, scrubbing, and DDoS mitigation services.

Connecting to the Future

At the end of the day, it’s all about connecting to your customers and employees, your apps and data, and the countless IoT devices out there. Simply put: You need to be everywhere your customers are. When it comes to performance, it has to be fast. And when it comes to security, it needs to be proactive and layered in depth.

As nearly everyone and everything gets connected, the data required to function in the digital world risks not only being congested in the core but, even worse, caught up in large-scale cyberattacks. And cloud data centers are struggling to keep up.

Delivering engaging, glitch-free digital business moments securely is the heart and the backbone of everything your digital business stands for. And in spite of how remarkably the Internet has grown and evolved over the past 20 years, we believe the most dramatic digital experiences are yet to come.

As a result, the world is now realizing just how important a security-at-the-edge strategy can be – one that brings users closer to the digital experiences and knocks down attacks where they’re generated. One that breeds trust and puts the confidence and control back in your hands.

Source: https://securityboulevard.com/2019/01/from-the-core-to-the-edge-3-security-imperatives-and-the-evolving-digital-topology/

2018 brought massive, hardware-level security vulnerabilities to the forefront. Here are the five biggest vulnerabilities of the year and how you can address them.

2018 was a year full of headaches for IT professionals, as security vulnerabilities became larger and more difficult to patch, since software mitigations for hardware vulnerabilities require some level of compromise. Here are the five biggest security vulnerabilities of 2018 and what, if anything, you can do to address them in your organization.

1. Spectre and Meltdown dominated security decisions all year

On January 4, the Spectre and Meltdown vulnerabilities, which allow applications to read kernel memory, were disclosed. They posed security problems for IT professionals all year, as the duo represent largely hardware-level flaws that can be mitigated, but not outright patched, through software. Though Intel processors (except for Atom processors before 2013 and the Itanium series) are the most vulnerable, microcode patches were also necessary for AMD, OpenPOWER, and CPUs based on Arm designs. Other software mitigations do exist, though some require vendors to recompile their programs with protections in place.
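As a quick way to audit a given host’s exposure, Linux kernels from 4.15 onward publish per-vulnerability mitigation status under sysfs. A minimal sketch (Linux-only; the exact entry names vary by kernel version):

```python
from pathlib import Path

# Kernels 4.15+ expose one file per known CPU flaw in this directory, each
# containing a short status line such as "Mitigation: PTI" or "Vulnerable".
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

if not vuln_dir.is_dir():
    print("This kernel does not expose sysfs vulnerability reporting")
else:
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
```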

The disclosure of these vulnerabilities sparked a renewed interest in side-channel attacks that manipulate speculative execution. Months later, the BranchScope vulnerability, which targets the shared branch target predictor, was disclosed. The researchers behind that disclosure indicated that BranchScope provides the ability to read data that should be protected by the SGX secure enclave, as well as to defeat ASLR.

Between the initial disclosure, Spectre-NG, Spectre 1.2, and SpectreRSB, a total of eight variants were discovered, in addition to related work like SgxPectre.

2. Record-breaking DDoS attacks with memcached

Malicious actors staged amplification attacks using flaws in memcached, reaching heights of 1.7 Tbps. The attack is initiated by spoofing the origin IP address of a request, specifying the attack target’s address as the origin, and sending a 15-byte request packet, which a vulnerable memcached server answers with responses ranging from 134 KB to 750 KB. The size disparity between the request and the response, as much as 51,200 times (a 750 KB response is roughly 768,000 bytes, and 768,000 / 15 = 51,200), made this attack particularly potent.

Proof-of-concept code that can be easily adapted for attacks was published by various researchers. Among the examples is “Memcrashed.py,” which integrates with the Shodan search engine to find vulnerable servers from which attacks can be launched.

Fortunately, it is possible to stop memcached DDoS attacks, though users of memcached should change the defaults to prevent their systems from being abused. If UDP is not used in your deployment, you can disable it with the switch -U 0. Otherwise, limiting access to localhost with the switch -l 127.0.0.1 is advisable.
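A memcached instance that answers the “stats” command over UDP is abusable in the way described above, so one way to audit a server you administer is to probe it directly. Below is a hedged Python sketch; the address is a placeholder, and probing hosts you do not own is not appropriate.

```python
import socket

HOST, PORT = "192.0.2.10", 11211  # placeholder address; memcached's default port

# memcached's UDP frame header is 8 bytes: request ID, sequence number,
# datagram count, and a reserved field, followed by the ASCII command.
probe = b"\x00\x01\x00\x00\x00\x01\x00\x00" + b"stats\r\n"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
try:
    sock.sendto(probe, (HOST, PORT))
    data, _ = sock.recvfrom(4096)
    print(f"UDP reply received ({len(data)} bytes): disable UDP with -U 0")
except socket.timeout:
    print("No UDP reply: the instance does not appear exposed over UDP")
finally:
    sock.close()
```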

3. Drupal CMS vulnerability allows attackers to commandeer your site

A failure to sanitize inputs resulted in the announcement of emergency patches for 1.1 million Drupal-powered websites in late March. The vulnerability relates to a conflict between how PHP handles arrays in URL parameters and Drupal’s use of a hash (#) at the beginning of array keys to signify special keys that typically result in further computation, allowing attackers to inject code arbitrarily. The attack was nicknamed “Drupalgeddon 2: Electric Hashaloo” by Paragon Initiative’s Scott Arciszewski.

In April, the same core issue was patched a second time, after the handling of GET parameters in URLs was found not to be sanitized to remove the # symbol, creating a remote code execution vulnerability.
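To illustrate the class of fix involved, here is a conceptual Python sketch of the sanitization idea (Drupal’s actual patch is PHP, in its request-sanitization layer): recursively strip user-supplied keys that begin with #, so request input can never masquerade as internal render-array properties.

```python
def strip_hash_keys(value):
    """Recursively drop user-supplied keys that begin with '#'."""
    if isinstance(value, dict):
        return {k: strip_hash_keys(v)
                for k, v in value.items()
                if not str(k).startswith("#")}
    if isinstance(value, list):
        return [strip_hash_keys(v) for v in value]
    return value

# Hypothetical request parameters, including a hostile render-array key.
params = {"name": "alice", "#post_render": ["system"], "nested": {"#markup": "x"}}
print(strip_hash_keys(params))  # {'name': 'alice', 'nested': {}}
```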

Despite the highly publicized nature of the vulnerability, over 115,000 Drupal websites were still vulnerable, and various botnets were actively leveraging the flaw to deploy cryptojacking malware.

ZDNet’s Catalin Cimpanu broke a story in November detailing a new type of attack which leverages Drupalgeddon 2 and Dirty COW to install cryptojacking malware, which can proliferate due to the number of unpatched Drupal installations in the wild.

4. BGP attacks intercept DNS servers for address hijacking

Border Gateway Protocol (BGP), the glue used to determine the most efficient path between two systems on the internet, is primed to become a target of malicious actors going forward, as the protocol was designed in large part before malicious network activity was a serious consideration. There is no central authority for BGP routes, and routes are accepted at the ISP level, placing BGP outside the reach of typical enterprise deployments and far outside the reach of consumers.

In April, a BGP attack was waged against Amazon Route 53, the DNS service component of AWS. According to Oracle’s Internet Intelligence group, the attack originated from hardware located in a facility operated by eNet (AS10297) of Columbus, Ohio. The attackers redirected requests for MyEtherWallet.com to a server in Russia, which used a phishing-site clone to harvest account information by reading existing cookies. The hackers gained 215 Ether from the attack, equating to approximately $160,000.

BGP has also been abused by governments in certain circumstances. In November 2018, reports indicated that the Iranian government used BGP attacks in an attempt to intercept Telegram traffic, and China has allegedly used BGP attacks through points of presence in North America, Europe, and Asia.

Work on securing BGP against these attacks is ongoing, with NIST and the DHS Science and Technology Directorate collaborating on Secure Inter-Domain Routing (SIDR), which aims to implement “BGP Route Origin Validation, using Resource Public Key Infrastructure, [which] can address and resolve the erroneous exchange of network routes.”
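The classification logic at the heart of Route Origin Validation is simple enough to sketch: a ROA (Route Origin Authorization) binds a prefix and a maximum length to the AS number authorized to originate it, and each BGP announcement is judged against the ROA set. The ROAs and AS numbers below are invented for illustration.

```python
import ipaddress

# Hypothetical ROA set: (authorized prefix, max length, authorized origin AS).
ROAS = [
    (ipaddress.ip_network("203.0.113.0/24"), 24, 64500),
]

def validate(prefix_str, origin_as):
    """Classify an announcement per RFC 6483 semantics."""
    prefix = ipaddress.ip_network(prefix_str)
    covered = False
    for roa_prefix, max_len, roa_as in ROAS:
        if prefix.version == roa_prefix.version and prefix.subnet_of(roa_prefix):
            covered = True
            if origin_as == roa_as and prefix.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

print(validate("203.0.113.0/24", 64500))   # valid
print(validate("203.0.113.0/24", 64511))   # invalid: wrong origin AS
print(validate("198.51.100.0/24", 64500))  # not-found: no covering ROA
```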

5. Australia’s Assistance and Access Bill undermines security

In Australia, the “Assistance and Access Bill 2018,” which provides the government “frameworks for voluntary and mandatory industry assistance to law enforcement and intelligence agencies,” essentially grants the government access to the contents of encrypted communications. It is the definition of a self-inflicted wound, as the powers it provides stand to undermine confidence in Australian products, as well as in the Australian outposts of technology companies.

The bill was hastily passed on December 7 and touted as necessary in the interest of “safeguarding national security,” though, subtracting perpetrators, Australia has seen a grand total of seven deaths related to terrorist activity since 2000. Additionally, the bill permits demands to be issued in relation to “the interests of Australia’s foreign relations or the interests of Australia’s national economic well-being.”

While the bill appears not to give government agencies a full firehose of unencrypted user data, it does permit the government to compel companies to provide content from specific communications, though it forbids companies from disclosing the demands made of them. Stilgherrian provides a balanced view of the final bill in his guide on ZDNet.

Source: https://www.techrepublic.com/article/5-biggest-security-vulnerabilities-of-2018/

The October 2016 cyberattack on Dyn should have been an object lesson on how to build Domain Name System infrastructure that would resist a distributed denial of service attack. Unfortunately, I believe we have yet to incorporate the fundamental lesson from this attack.

The DDoS attack on Dyn occurred just over two years ago. The Mirai botnet, a botnet that consisted of hundreds of thousands of compromised “internet of things” devices, was used to send an enormous amount of traffic at Dyn’s authoritative DNS servers, which rendered them incapable of responding to legitimate queries. Major organizations that relied on Dyn for their authoritative DNS service, including Twitter, CNN, Netflix and The New York Times, were unreachable for hours.

I believe one of the central takeaways from the Dyn attack was — as simple as it seems — that you shouldn’t put all your eggs in one basket. In DNS terms, this means that you shouldn’t rely exclusively on a single DNS provider to host your internet-facing DNS data. Organizations that relied on Dyn were unreachable for hours during the attack, whereas organizations that hedged their bets by taking the precaution of using multiple providers weathered the attack with minimal downtime.

I gave a talk in London the month after the attack in which I reminded listeners of the “Multiple Egg-Basket” rule — something I had actually stopped mentioning years before because it struck me as too obvious to warrant a mention. One of the attendees caught me during the next break and told me that his company happened to have exactly the setup I’d recommended: They were a Dyn customer, but they also used a handful of their own external DNS servers. As they relied heavily on their online presence, they used a third-party service to monitor the availability of their website 24 hours a day. During the hours-long attack on Dyn, they were only briefly unreachable.

It seems like a simple precaution, right? Unfortunately, it’s not always as simple as you might think. It’s very easy to synchronize basic DNS data among multiple providers. If, for example, you want to use Dyn and one of its competitors to host your internet-facing DNS data, you generally use one provider to manage that data and tell the other provider to get its copy of the data from the first; the servers doing so are what we refer to in the business as “secondary DNS servers.”
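One easy way to confirm that two providers are actually serving the same copy of a zone is to compare the SOA serial each provider’s nameservers return. Here is a hedged sketch using the third-party dnspython library; the zone name and nameserver addresses are placeholders.

```python
import dns.resolver  # third-party: pip install dnspython

ZONE = "example.com"
PROVIDERS = {"provider-a": "198.51.100.53", "provider-b": "203.0.113.53"}

def soa_serial(nameserver_ip, zone):
    """Ask one specific nameserver for the zone's SOA serial."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver_ip]
    return resolver.resolve(zone, "SOA")[0].serial

serials = {name: soa_serial(ip, ZONE) for name, ip in PROVIDERS.items()}
print(serials)
if len(set(serials.values())) == 1:
    print("Both providers serve the same zone serial")
else:
    print("Serial mismatch: the secondary may be out of sync")
```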

However, many DNS providers now offer, and some customers use, value-added services, such as traffic distribution based on a querier’s location; this enables a customer to direct a querier to the closest web server. In my experience, it’s those value-added services that pose a problem, because there’s no standard way to synchronize their configuration among providers. If you laboriously configure Provider A’s system with rules to send all of your customers to the closest web or application servers you offer, you’d have to do the same with Provider B using its proprietary interface. And if you change Provider A’s configuration in real time in response to conditions, such as one of your web or application servers failing or being brought down for maintenance, there’s no standard way for that provider to inform the other.

There have been discussions within the Internet Engineering Task Force, the organization responsible for developing and enhancing internet protocols, to come up with some standard means of specifying and synchronizing these value-added services, but there is still progress to be made. Even if such a mechanism existed, there’s not much incentive for providers to support it: Most providers charge customers based on the volume of queries they receive, so when you make it easy for another provider to serve one of your customers, you’re making it just as easy for them to take some of your revenue.

But the benefits of using multiple DNS providers, in my opinion, are important enough for customers to insist that their providers offer some mechanism — perhaps based on the transfer of well-documented metadata or the use of a well-designed API — to synchronize these value-added services. Only then can we implement the lessons that the attack on Dyn should have taught us.

Source: https://www.forbes.com/sites/forbestechcouncil/2018/12/19/the-forgotten-object-lesson-of-the-dyn-ddos-attack/#73ac242f2c06


As attackers begin to use multiple command and control (C&C) systems to communicate with backdoors and other malware, how can organisations ensure that they detect such methods and that all C&C systems are removed, including “sleepers” designed to be activated at a future date?

When it comes to all kinds of cyber defence, it is always less expensive to prevent attacks and infections than to deal with them once they are in place.

This is especially true in the case of botnets. What is a botnet? A botnet is an army of mini-programs: malicious software designed to infiltrate large numbers of digital devices and then use them for any number of tactics.

For example, botnets can be instructed to steal data or launch huge distributed denial of service (DDoS) attacks, all while stealing the electricity and computing power needed to do it.

Most organisations aim to have at least some cyber security in place. These fundamentals can include items such as implementing the most effective choice of anti-malware, configuring digital devices and software with as much hardened security as practical, and ensuring that security patches are always applied swiftly.

However, even when an organisation invests in implementing cyber security essentials, it is still no guarantee of a fully bot-free environment.

As shown in all of the major breaches that hit the headlines, hackers are very keen on remaining unnoticed for as long as possible. “Dwell time” is the term given to the duration between the initial intrusion and the point of discovery.

In many cases, the Yahoo and Starwood mega-breaches included, the dwell time was measured not in hours, days, weeks or even months; it was years between the initial intrusion and eventual discovery.

Determined hackers design their bots to be as stealthy as possible, to hide as best as they can and to communicate as efficiently and discreetly as possible.

When researchers find new botnet armies, they often do it by accident and say things like, “We stumbled across this data anomaly”, eventually tracing the cause back to a new botnet force.

Although botnet communications may try to hide, bots generally need to communicate to work. Botnets used to work through command and control servers, which meant that disconnecting communications between the bots and their command and control servers was enough to “decapitate” a botnet and render it unable to steal anything or accept new commands.

However, newer botnets are smarter. They still need to communicate, but now many of them can spawn dynamic, peer-to-peer networks.

Bots do still need instructions to work and they also need destinations to send anything they steal. Identify and block those communication routes and your bots will cease to offer their bot master any value.

The challenge is that not all organisations use or install the technologies that can detect and block bots.

For the few organisations that do have the budget and motivation to ramp up their anti-bot defences, there is plenty that can be done.

It starts with ramping up the security that prevents initial infection and locking down unnecessary trust permissions. Prevention is still better than detection, and cheaper than the expense of containing and resolving a threat. There is a huge difference in the efficacy of security products and, sadly, many of those with the highest marketing budgets are far from the most effective.

There are also excellent security technologies that can detect, alert on, or block botnet activity in real time. These operate by continually analysing network traffic and local system logs.
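One heuristic such tools commonly apply is beaconing detection: malware that phones home on a timer produces outbound connections at suspiciously regular intervals. The Python sketch below, with an assumed log format and illustrative thresholds, flags flows whose inter-connection intervals show very low variance.

```python
import statistics
from collections import defaultdict

def find_beacons(events, min_connections=10, max_cv=0.1):
    """Flag (src, dst) pairs whose connection intervals are near-constant.

    events: iterable of (timestamp_seconds, src, dst) from connection logs.
    """
    flows = defaultdict(list)
    for ts, src, dst in events:
        flows[(src, dst)].append(ts)

    suspects = []
    for flow, times in flows.items():
        if len(times) < min_connections:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        mean = statistics.mean(gaps)
        # Low coefficient of variation = metronome-like beaconing.
        if mean > 0 and statistics.pstdev(gaps) / mean < max_cv:
            suspects.append(flow)
    return suspects

# Example: a host connecting out every 60 seconds gets flagged.
events = [(i * 60.0, "10.0.0.5", "198.51.100.7") for i in range(20)]
print(find_beacons(events))  # [('10.0.0.5', '198.51.100.7')]
```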

If your organisation does not have the budget for real-time monitoring, then it is still worth inspecting devices and checking for any suspicious processes that seem to be taking up a lot of memory, especially if any users report that their device has slowed down. That can be an indicator of compromise, but only when the botnet is awake and active.
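A starting point for that manual inspection might look like the sketch below, which uses the third-party psutil library to list the processes holding the most resident memory. High memory use alone proves nothing; it is only a prompt for a closer look.

```python
import psutil  # third-party: pip install psutil

procs = []
for p in psutil.process_iter(["pid", "name", "memory_info"]):
    mem = p.info.get("memory_info")
    if mem is None:  # process exited or access was denied
        continue
    procs.append((mem.rss, p.info["pid"], p.info["name"]))

# Print the ten largest processes by resident set size.
for rss, pid, name in sorted(procs, reverse=True)[:10]:
    print(f"{rss // (1024 * 1024):6d} MiB  pid={pid:<7} {name}")
```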

And if you are wondering just how much of a threat botnets are, consider this: most of our attention as cyber security professionals is on the botnets we can detect and eliminate from our own environments, but the internet of things and the sub-standard security of many devices mean the internet is riddled with enough botnets to effectively stop it from working.

The only thing stopping that from happening at present is that it would harm rather than profit the hackers. After all, 2019 might just be the year when the proceeds from cyber crime reach a trillion dollars.

Source: https://www.computerweekly.com/opinion/Security-Think-Tank-Smart-botnets-resist-attempts-to-cut-comms