DDoS Protection Specialist Archive

The issue impacts users of the vendor’s Cloud WAF product.

Security vendor Imperva has disclosed a security breach affecting customers of its Cloud Web Application Firewall (WAF) product.

Formerly known as Incapsula, the Cloud WAF analyzes requests coming into applications, and flags or blocks suspicious and malicious activity.

Customers’ email addresses and hashed and salted passwords were exposed, and some customers’ API keys and SSL certificates were also impacted. The latter are particularly concerning, given that they would allow an attacker to break companies’ encryption and access corporate applications directly.
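For context on what “hashed and salted” means in practice, here is a minimal sketch of salted password hashing using Python’s standard library. The algorithm, iteration count and parameters are illustrative assumptions and say nothing about how Imperva actually stores credentials; the point is that a unique random salt plus a slow hash makes bulk cracking of leaked password data much harder, whereas stolen API keys and SSL certificates are usable as-is.

```python
# Minimal sketch of salted password hashing (illustrative only; this is not
# a description of Imperva's actual credential storage).
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor

def hash_password(password: str):
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```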

Imperva has implemented password resets and 90-day password expiration for the product in the wake of the incident.

Imperva said in a website notice that it learned about the exposure via a third party on Aug. 20. However, the affected customer database contained only old Incapsula records dating up to Sept. 15, 2017.


“We profoundly regret that this incident occurred and will continue to share updates going forward,” Imperva noted. “In addition, we will share learnings and new best practices that may come from our investigation and enhanced security measures with the broader industry. We continue to investigate this incident around the clock and have stood up a global, cross-functional team.”

Imperva also said that it “informed the appropriate global regulatory agencies” and is in the process of notifying affected customers directly.

When asked for more details (such as whether this was a misconfiguration or a hack, where the database resided, and how many customers are affected), Imperva told Threatpost that it is not able to provide more information for now.

Source: https://threatpost.com/imperva-firewall-breach-api-keys-ssl-certificates/147743/

The number of DDoS attacks detected by Kaspersky jumped 18% year-on-year in the second quarter, according to the latest figures from the Russian AV vendor.

Although the number of detected attacks was down 44% from Q1, the vendor claimed that this seasonal change is normal as activity often dips in late spring and summer. However, the spike was even bigger when compared to the same period in 2017: an increase of 25%.

Application attacks, which the firm said are harder to defend against, increased by a third (32%) in Q2 2019 and now constitute nearly half (46%) of all detected attacks. The latter figure is up 9% from Q1 2019, and 15% from Q2 2018.

Crucially, the seasonal drop in attacks has barely touched targeting of the application layer, which fell just 4% from the previous quarter.

These attacks are difficult to detect and stop as they typically include legitimate requests, the firm said.

“Traditionally, troublemakers who conduct DDoS attacks for fun go on holiday during the summer and give up their activity until September. However, the statistics for this quarter show that professional attackers, who perform complex DDoS attacks, are working hard even over the summer months,” explained Alexey Kiselev, business development manager for the Kaspersky DDoS Protection team.

“This trend is rather worrying for businesses. Many are well protected against high volumes of junk traffic, but DDoS attacks on the application layer require them to identify illegitimate activity even if its volume is low. We therefore recommend that businesses ensure their DDoS protection solutions are ready to withstand these complex attacks.”
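The distinction Kiselev draws, between high-volume junk traffic and low-volume illegitimate application-layer requests, is easier to see with a toy example. The sketch below counts requests per client to individual endpoints in a generic access log and flags clients whose per-endpoint request count is unusually high; the log layout, field positions and threshold are assumptions made purely for illustration, not a description of Kaspersky’s detection methods.

```python
# Toy sketch: flag possible application-layer (L7) flood sources by counting
# requests per (client IP, path) pair within a log window. The log layout,
# field positions and threshold are assumptions for illustration only.
from collections import Counter

THRESHOLD = 100  # requests per client per path in the window (illustrative)

def suspicious_clients(log_lines):
    counts = Counter()
    for line in log_lines:
        # Assumed layout: "<timestamp> <client_ip> <method> <path> <status>"
        parts = line.split()
        if len(parts) < 5:
            continue
        client_ip, path = parts[1], parts[3]
        counts[(client_ip, path)] += 1
    return [(ip, path, n) for (ip, path), n in counts.items() if n > THRESHOLD]

sample = ["2019-07-02T10:00:01 203.0.113.7 GET /search 200"] * 150
for ip, path, n in suspicious_clients(sample):
    print(f"{ip} hit {path} {n} times in the window")
```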

Kaspersky also recorded the longest DDoS attack since it started monitoring botnet activity in 2015. Analysis of commands received by bots from command and control (C&C) servers revealed one in Q2 2019 lasting 509 hours, which is nearly 21 days. The previous longest attack, observed in Q4 2018, lasted 329 hours.

Source: https://www.infosecurity-magazine.com/news/ddos-attacks-jump-18-yoy-in-q2/

Cloudflare, the backbone of many of the web’s biggest sites, experienced a global outage that left many wondering what could have happened.

The fragility of the internet was exposed yesterday (2 July) when users across the world came across many websites displaying the error message ‘502 Bad Gateway’. Shortly after, social media was flooded with questions as to what caused such an outage across seemingly unconnected sites.

Soon after, Cloudflare, a content delivery and DDoS protection provider, said an error on its part was behind the massive outage. A quick look at the company’s systems status page showed that almost every major city in the world was affected in some way, including Dublin.

Twenty-three minutes after Cloudflare confirmed that it was experiencing issues, the company announced that it had “implemented a fix”. Thirty-five minutes later, it revealed the cause of the outage.

“We saw a massive spike in CPU that caused primary and secondary systems to fall over,” a statement said. “We shut down the process that was causing the CPU spike. Service restored to normal within ~30 minutes.”

Soon after, it announced that normal operations had resumed. So what could have caused such a major outage so soon after another one that occurred on 24 June?

Testing processes were ‘insufficient in this case’

In a blogpost, Cloudflare CTO John Graham-Cumming revealed that the CPU spike was the result of a “bad software deploy that was rolled back”. He stressed that this was not the result of a well-crafted DDoS attack.

“The cause of this outage was deployment of a single misconfigured rule within the Cloudflare Web Application Firewall (WAF) during a routine deployment of new Cloudflare WAF managed rules,” Graham-Cumming said.

“We make software deployments constantly across the network and have automated systems to run test suites, and a procedure for deploying progressively to prevent incidents. Unfortunately, these WAF rules were deployed globally in one go and caused today’s outage.”

He went on to admit that such an outage was “very painful” for customers and that the company’s testing processes were “insufficient in this case”.
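Graham-Cumming’s point about deploying progressively, rather than globally in one go, can be illustrated with a small sketch. The staged rollout below pushes a change to a growing fraction of a fleet and rolls back if a health check fails; the stage sizes, health metric and threshold are hypothetical, and this is not Cloudflare’s actual deployment tooling.

```python
# Hypothetical sketch of a staged (canary) rollout with automatic rollback.
# Stage sizes, the health metric and its threshold are illustrative; this is
# not Cloudflare's actual deployment system.
STAGES = [0.01, 0.10, 0.50, 1.00]  # fraction of the fleet updated per stage
CPU_THRESHOLD = 0.80               # abort if average CPU on updated servers exceeds this

def deploy_to(fraction):
    """Stand-in for pushing the new WAF rule to `fraction` of servers."""
    print(f"deploying to {fraction:.0%} of servers")

def average_cpu(fraction):
    """Stand-in for a real health check on the servers just updated."""
    return 0.35  # pretend the fleet stays healthy

def rollback():
    print("rolling back the deployment")

def progressive_rollout():
    for fraction in STAGES:
        deploy_to(fraction)
        if average_cpu(fraction) > CPU_THRESHOLD:
            rollback()
            return False
    return True

if __name__ == "__main__":
    progressive_rollout()
```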

This outage was different to the one that occurred on 24 June, which Cloudflare described as the internet having “a small heart attack”. It was revealed that network provider Verizon directed a significant portion of the internet’s traffic to a small company in the US state of Pennsylvania, resulting in a major information pile-up.

Source: https://www.siliconrepublic.com/enterprise/cloudflare-outage-502-bad-gateway-explained

An internal Cloudflare problem knocked websites offline, bringing parts of the internet to a crawl.

Global internet services provider Cloudflare had trouble, and when it has problems, the internet has trouble, too. For about an hour, websites around the globe went down with 502 error messages.

The problem has now been fixed, and the service appears to be running normally. It’s still not entirely clear what happened.

In a short blog post, Cloudflare CTO John Graham-Cumming explained:

“For about 30 minutes today, visitors to Cloudflare sites received 502 errors caused by a massive spike in CPU utilization on our network. This CPU spike was caused by a bad software deploy that was rolled back. Once rolled back the service returned to normal operation and all domains using Cloudflare returned to normal traffic levels.”

Cloudflare CEO Matthew Prince subsequently explained the failure happened because:

“[A] bug on our side caused Firewall process to consume excessive CPU. Initially appeared like an attack. We were able to shut down process and get systems restored to normal. Putting in place systems so never happens again.”

Both Graham-Cumming and Prince emphasized this service disruption was not caused by an attack. Nor, Prince tweeted, was this a repeat of the Verizon Border Gateway Protocol network problem, which troubled Cloudflare and the internet last week.

How could this simple mistake cause so many problems? Cloudflare operates an extremely popular content delivery network (CDN). When it works right, its services protect website owners from peak loads, comment spam attacks, and Distributed Denial of Service (DDoS) attacks. When it doesn’t work right, well, we get problems like this one.

Cloudflare’s CDN works by optimizing the delivery of your website’s resources to your visitors. Cloudflare does this by serving your website’s static content to visitors from its global data centers, so your web server only has to deliver dynamic content. In addition, generally speaking, Cloudflare’s global network provides a faster route to your site than a visitor connecting to it directly.
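As a rough way to see this static/dynamic split in practice, the sketch below fetches two URLs and prints their caching headers. It assumes the site sits behind Cloudflare, which reports cache behaviour in a CF-Cache-Status response header (HIT, MISS, DYNAMIC and so on); the URLs are hypothetical placeholders.

```python
# Minimal sketch: check whether a response was served from a CDN cache.
# Assumes the target site sits behind Cloudflare, which reports cache
# behaviour in the CF-Cache-Status response header (e.g. HIT, MISS, DYNAMIC).
# The URLs below are hypothetical placeholders.
import requests

def cache_status(url):
    resp = requests.get(url, timeout=10)
    print(url)
    print("  Cache-Control:  ", resp.headers.get("Cache-Control", "<none>"))
    print("  CF-Cache-Status:", resp.headers.get("CF-Cache-Status", "<none>"))

if __name__ == "__main__":
    cache_status("https://example.com/static/logo.png")  # static asset: typically cached
    cache_status("https://example.com/api/profile")      # dynamic response: typically not
```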

Its CDN is the most popular such service, with 34.55% of the market; Amazon CloudFront is second with 28.84%. With over 16 million sites behind Cloudflare, including BuzzFeed, Sling TV, Pinterest, and Dropbox, any trouble at Cloudflare knocks many of those websites off the internet.

Prince admitted this was the biggest internal problem Cloudflare has ever had. He tweeted:

“This was unique in that it impacted primary and all fail-over systems in a way we haven’t seen before. Will ensure better isolation and backstops in the future. Still getting to the bottom of the root cause.”

The problem also affected Cloudflare’s DNS service and its CDN.

To Cloudflare’s credit, the company is taking the blame and being transparent about what went wrong. At the same time, the episode emphasizes how much the internet now depends on a few important companies instead of many peer-to-peer businesses and institutions.

Source: https://www.zdnet.com/article/cloudflare-stutters-and-the-internet-stumbles/

Check out the top five cybersecurity vulnerabilities and find out how to prevent data loss or exposure, whether the problem is end-user gullibility, inadequate network monitoring or poor endpoint security defenses.

The threat landscape gets progressively worse by the day. Cross-site scripting, SQL injection, exploits of sensitive data, phishing and distributed denial-of-service (DDoS) attacks are far too common. More and more sophisticated attacks are being spotted, and security teams are scrambling to keep up. Faced with many new types of issues, including advanced phishing attacks that are all too successful and ransomware attacks that many seem helpless to prevent, endpoint security strategies are evolving rapidly. In the SANS “Endpoint Protection and Response” survey from 2018, 42% of respondents indicated at least one of their endpoints had been compromised, and 20% didn’t know whether any of their endpoints had been compromised.

How are hackers able to wreak havoc on enterprises and cause sensitive data loss and exposure? The answer is through a variety of cybersecurity vulnerabilities in processes, technical controls and user behaviors that allow hackers to perform malicious actions. Many different vulnerabilities exist, including code flaws in operating systems and applications, misconfiguration of systems and services, poor or immature processes and technology implementations, and end-user susceptibility to attack.

Some of the most common attacks that resulted in data breaches and outages included phishing, the use of stolen credentials, advanced malware, ransomware and privilege abuse, as well as backdoors and command and control channels on the network set up to allow continued access to and control over compromised assets, according to the Verizon “2019 Data Breach Investigations Report,” or Verizon DBIR.

What are the major types of cybersecurity vulnerabilities that could lead to successful attacks and data breaches and how can we ideally mitigate them? Check out the top five most common vulnerabilities organizations should work toward preventing or remediating as soon as possible to avoid potentially significant cybersecurity incidents.

1. Poor endpoint security defenses

Most enterprise organizations have some sort of endpoint protection in place, usually antivirus tools. But zero-day exploits are becoming more common and many of the endpoint security defenses in place have proved inadequate to combat advanced malware and intrusions targeting end users and server platforms.

Causes. Many factors can lead to inadequate endpoint security defenses that become vulnerabilities. First, standard signature-based antivirus systems are no longer considered good enough, as many savvy attackers can easily bypass the signatures. Second, smart attackers may only be caught through unusual or unexpected behaviors at the endpoint, which many tools don’t monitor. Finally, many endpoint security defenses haven’t offered security teams the ability to dynamically respond to or investigate endpoints, particularly on a large scale.

How to fix it. More organizations need to invest in modern endpoint detection and response tools that incorporate next-generation antivirus, behavioral analysis and actual response capabilities. These tools provide more comprehensive analysis of malicious behavior, along with more flexible prevention and detection options. If you’re still using traditional antivirus tools, consider an upgrade to incorporate more behavioral inspection, more detailed forensic details and compromise indicators, as well as real-time response capabilities.
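As a concrete, deliberately simplified illustration of what behavioral inspection at the endpoint can look like, the sketch below uses the third-party psutil library to flag shell processes whose parent is a document-handling application, a pattern often associated with macro-based malware. The process-name lists are examples only, nothing like a complete detection rule.

```python
# Simplified behavioral check: flag shells spawned by document-handling
# applications, a pattern common in macro-based malware. The process-name
# lists are illustrative; real EDR tooling is far more sophisticated.
# Requires the third-party psutil package.
import psutil

OFFICE_APPS = {"winword.exe", "excel.exe", "powerpnt.exe", "acrord32.exe"}
SHELLS = {"cmd.exe", "powershell.exe", "wscript.exe", "cscript.exe"}

def suspicious_process_chains():
    findings = []
    for proc in psutil.process_iter(attrs=["pid", "name"]):
        try:
            name = (proc.info["name"] or "").lower()
            parent = proc.parent()
            if parent and name in SHELLS and parent.name().lower() in OFFICE_APPS:
                findings.append((parent.name(), parent.pid, name, proc.pid))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return findings

for parent_name, parent_pid, child_name, child_pid in suspicious_process_chains():
    print(f"{parent_name} (pid {parent_pid}) spawned {child_name} (pid {child_pid})")
```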

2. Poor data backup and recovery

With the recent threat of ransomware looming large, along with traditional disasters and other failures, organizations have a pressing need to back up and recover data. Unfortunately, many organizations don’t excel in this area due to a lack of sound backup and recovery options.

Causes. Many organizations neglect one or more facets of backup and recovery, including database replication, storage synchronization or end-user storage archival and backup.

How to fix it. Most organizations need a multi-pronged backup and recovery strategy. This should include data center storage snapshots and replication, database storage, tape or disk backups, and end user storage (often cloud-based). Look for enterprise-class tools that can accommodate granular backup and recovery metrics and reporting.
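As a minimal sketch of one piece of such a strategy, the snippet below creates timestamped archives and prunes old ones to a fixed retention count. The paths and retention value are placeholders, and real enterprise backup also needs off-site copies, encryption and regular restore testing.

```python
# Minimal sketch: timestamped backup archives with simple retention pruning.
# Paths and the retention count are placeholders; a real strategy also needs
# off-site copies, encryption and regular restore testing.
import shutil
import time
from pathlib import Path

SOURCE = Path("/srv/app/data")   # hypothetical data directory
DEST = Path("/backups/app")      # hypothetical backup location
KEEP = 14                        # number of archives to retain

def run_backup():
    DEST.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(str(DEST / f"app-{stamp}"), "gztar", root_dir=SOURCE)
    print(f"created {archive}")
    prune_old_archives()

def prune_old_archives():
    archives = sorted(DEST.glob("app-*.tar.gz"))
    for old in archives[:-KEEP]:  # keep only the newest KEEP archives
        old.unlink()
        print(f"removed {old}")

if __name__ == "__main__":
    run_backup()
```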

3. Poor network segmentation and monitoring

Many attackers rely on weak network segmentation and monitoring to gain full access to systems in a network subnet once they’ve gained initial access. This huge cybersecurity vulnerability has been common in many large enterprise networks for years, and it has allowed attackers to persist for longer, compromising new systems and maintaining access to compromised environments.

Causes. A lack of subnet monitoring is a major root cause of this vulnerability, as is a lack of monitoring outbound activity that could indicate command and control traffic. Especially in large organizations, this can be a challenging initiative, as hundreds or thousands of systems may be communicating simultaneously within the network and sending outbound traffic.

How to fix it. Organizations should focus on carefully controlling network access among systems within subnets, and building better detection and alerting strategies for lateral movement between systems that have no business communicating with one another. They should focus on odd DNS lookups, system-to-system communication with no apparent use, and odd behavioral trends in network traffic. Proxies, firewalls and microsegmentation tools may help create more restrictive policies for traffic and systems communications.
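One simplified way to implement the “systems that have no business communicating” check is to compare observed flows against an allow-list, as in the sketch below. The flow-record format and the allow-list entries are assumptions for illustration, not a ready-made monitoring tool.

```python
# Toy sketch: flag internal connections that fall outside an allow-list of
# expected system-to-system flows. The flow-record format and the allow-list
# contents are illustrative assumptions, not a drop-in monitoring tool.
ALLOWED_FLOWS = {
    ("10.0.1.10", "10.0.2.20", 5432),  # app server -> database
    ("10.0.1.10", "10.0.3.30", 6379),  # app server -> cache
}

def unexpected_flows(flow_records):
    """flow_records: iterable of 'src_ip dst_ip dst_port' strings."""
    alerts = []
    for record in flow_records:
        src, dst, port = record.split()
        if (src, dst, int(port)) not in ALLOWED_FLOWS:
            alerts.append((src, dst, int(port)))
    return alerts

sample = ["10.0.1.10 10.0.2.20 5432", "10.0.4.44 10.0.2.20 445"]
for src, dst, port in unexpected_flows(sample):
    print(f"unexpected flow: {src} -> {dst}:{port}")
```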

4. Weak authentication and credential management

One of the most common causes of compromise and breaches for this cybersecurity vulnerability is a lack of sound credential management. People use the same password over and over, and many systems and services support weak authentication practices. This is one of the major causes of related attack vectors listed in the Verizon DBIR.

Causes. In many cases, weak authentication and credential management is due to lack of governance and oversight of credential lifecycle and policy. This includes user access, password policies, authentication interfaces and controls, and privilege escalation to systems and services that shouldn’t be available or accessible in many cases.

How to fix it. For most organizations, implementing stringent password controls can help. This may consist of longer passwords, more complex passwords, more frequent password changes or some combination of these principles. In practice, longer passwords that aren’t rotated often are safer than shorter passwords that are rotated frequently. Users should also be required to use multifactor authentication when accessing sensitive data or sites.
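The preference for longer passwords rests on simple entropy arithmetic, assuming passwords are chosen uniformly at random; in the sketch below, the character-set sizes and the word-list size are illustrative assumptions.

```python
# Rough entropy comparison, assuming passwords are chosen uniformly at random.
# Character-set and word-list sizes are illustrative assumptions.
import math

def entropy_bits(alphabet_size, length):
    return length * math.log2(alphabet_size)

print(f"8 chars, full printable set (~94 symbols): {entropy_bits(94, 8):.1f} bits")
print(f"16 chars, lowercase letters only (26):     {entropy_bits(26, 16):.1f} bits")
print(f"5 random words from a 7,776-word list:     {entropy_bits(7776, 5):.1f} bits")
```

Under these assumptions, the 16-character lowercase password and the five-word passphrase both have a larger search space than the short “complex” password, which is the intuition behind favoring length over forced complexity.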

5. Poor security awareness

Much has been written about the susceptibility of end users to social engineering, but it continues to be a major issue that plagues organizations. The 2019 Verizon DBIR states that end user error is the top threat action in breaches. Many organizations find the initial point of attack is through targeted social engineering, most commonly phishing.

Causes. The most common cause of successful phishing, pretexting and other social engineering attacks is a lack of sound security awareness training and end-user validation. Organizations are still struggling with how to train users to look for social engineering attempts and report them.

How to fix it. More organizations need to conduct regular training exercises, including phishing tests, pretexting and additional social engineering as needed. Many training programs are available to help reinforce security awareness concepts; the training needs to be contextual and relevant to employees’ job functions whenever possible. Track users’ success or failure rates on testing, as well as “live fire” tests with phishing emails and other tactics. For users who don’t improve, look at remediation measures appropriate for your organization.

While other major cybersecurity vulnerabilities can be spotted in the wild, the issues addressed here are some of the most common seen by enterprise security teams everywhere. Look for opportunities to implement more effective processes and controls in your organization to prevent these issues from being realized.

Source: https://searchsecurity.techtarget.com/feature/How-to-fix-the-top-5-cybersecurity-vulnerabilities