Defend Against DDoS Archive

NEWS

The Serious Organised Crime Agency has taken its website offline due to a distributed denial-of-service attack.

By Tom Espiner, ZDNet UK, 3 May, 2012 15:02

The UK law enforcement agency asked its hosting provider to take the site down at approximately 22.00 on Wednesday, and the site was taken offline at around 22.30, a SOCA spokesman told ZDNet UK on Thursday. The site remained offline at the time of writing.

“The site was taken offline last night to limit the impact of a distributed denial-of-service attack (DDoS) against other clients hosted by our service provider,” the SOCA spokesman said. “The website only contains publicly available information.”

The spokesman declined to say who the agency thought was behind the attack, but said it did not pose a security risk.

While website attacks are “inconvenient to visitors”, SOCA does not consider maintaining the bandwidth necessary to withstand DDoS attacks a good use of taxpayers’ money, the spokesman said.

A Twitter news feed claiming links to the Anonymous hacking collective publicised the DDoS attack on Thursday, but did not claim responsibility.

“TANGO DOWN: DDoS attack takes down site of UK Serious Organised Crime Agency (SOCA),” said the @YourAnonNews feed.

The SOCA website was taken offline in June 2011, in an action that was claimed by LulzSec, a hacking group affiliated to Anonymous.

“What is surprising is that defence and intelligence levels have not been improved sufficiently since the last successful DDoS attack on SOCA in June 2011,” said Ovum analyst Andrew Kellett. “Hacktivist attacks targeting particular operations have been known to be both persistent and long-standing, requiring extensive DDoS defences.”

SOCA announced last week that it had worked with the FBI to take down 36 websites used to sell stolen bank card data.

On Thursday, in a speech made in Estonia, Cabinet Office minister Francis Maude said that SOCA had “recovered nearly two million items of stolen payment card details since April 2011 worth approximately £300m to criminals”.

Source: http://www.zdnet.co.uk/news/security-threats/2012/05/03/soca-website-taken-down-in-ddos-attack-40155157/

TrustSphere says its TrustVault product helps crucial emails get through–even in the midst of a denial of service attack–by correctly identifying trusted senders.

As annoying as spam is, an overactive spam filter is almost worse when it prevents important messages from getting through.

A company called TrustSphere says the TrustVault product it introduced this week can act as a counterweight to the spam filter, using a type of “social graph” to identify trusted senders and allow their messages to get through–even in the midst of a crisis such as a distributed denial of service attack on an executive’s email account.

“Inside the organization, we’re effectively mapping who’s speaking to whom and turning that into an enterprise social graph,” Manish Goel, CEO of TrustSphere, said in an interview. “We’re tracking who’s speaking with whom and how often–what’s the cadence of communication.” In that way, TrustVault can identify trustworthy senders and allow their messages to go through, even if they would otherwise be blocked by a spam filter.
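As a rough illustration of the idea (a hedged sketch, not TrustSphere’s actual implementation), a communication graph of this kind can be derived from header-level message logs alone: record who mailed whom and when, then score each pair on volume and, above all, reciprocity, since spammers send plenty of mail but rarely receive replies. All names and weights below are illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime

def build_graph(log):
    """Map each (sender, recipient) pair to the timestamps of their messages.

    Only header-level data is needed: sender, recipient, time sent. The
    stored timestamps would also support cadence analysis (how regularly
    a pair corresponds), which is omitted here for brevity.
    """
    graph = defaultdict(list)
    for sender, recipient, ts in log:
        graph[(sender, recipient)].append(ts)
    return graph

def relationship_score(graph, a, b):
    """Score the a<->b relationship by volume and, above all, reciprocity."""
    sent = len(graph.get((a, b), []))
    received = len(graph.get((b, a), []))
    reciprocity = min(sent, received)          # messages flowing both ways
    return reciprocity * 10 + sent + received  # weights are illustrative

log = [
    ("alice@corp.example", "bob@partner.example", datetime(2012, 5, 1)),
    ("bob@partner.example", "alice@corp.example", datetime(2012, 5, 2)),
    ("spammer@junk.example", "alice@corp.example", datetime(2012, 5, 2)),
]
g = build_graph(log)
print(relationship_score(g, "alice@corp.example", "bob@partner.example"))   # 12
print(relationship_score(g, "alice@corp.example", "spammer@junk.example"))  # 1
```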

So far, this social graph is based entirely on the exchange of email, although TrustSphere is working on integrating social media and voice over Internet protocol communications for a more complete picture, Goel said. TrustSphere is also applying elements of social networking theory such as Dunbar’s number, anthropologist Robin Dunbar’s concept that humans can track only a limited number of relationships (often theorized as about 150) and rely on “circles of trust” for more extended relationships. In this way, TrustSphere models trustworthy connections at the organizational level as well as at the individual level. TrustVault is also linked to a related service, TrustCloud, which tracks the reputation of email accounts across the Internet.
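A hedged sketch of how the “circles of trust” idea might extend such a graph, reusing the graph structure from the previous sketch: when an individual has no direct history with a sender, count how many of their own trusted contacts do. The traversal and thresholds below are assumptions for illustration, not TrustSphere’s algorithm.

```python
def trusts_directly(graph, person, contact, min_msgs=3):
    """Direct trust: the pair has exchanged at least min_msgs messages each way.

    Dunbar's number suggests any one person maintains only on the order of
    150 such relationships, so each per-person trusted set stays small.
    """
    return (len(graph.get((person, contact), [])) >= min_msgs and
            len(graph.get((contact, person), [])) >= min_msgs)

def trusts_via_circle(graph, person, sender, colleagues, min_vouchers=2):
    """Extended trust: enough of person's trusted colleagues trust the sender."""
    vouchers = sum(1 for c in colleagues
                   if trusts_directly(graph, person, c)
                   and trusts_directly(graph, c, sender))
    return vouchers >= min_vouchers
```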

TrustSphere doesn’t filter the content of the messages at all, looking only at the pattern of communication and touching only the email header fields, Goel said. The service does detect email authentication measures, such as the use of Sender Policy Framework tagging, but authentication is counted as an indicator of trustworthiness rather than a final verdict, he said.
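Authentication outcomes of this kind typically surface in headers the receiving mail server stamps on a message, such as the Authentication-Results header (RFC 5451). Below is a hedged sketch of treating an SPF pass as one weighted signal rather than a verdict; the weights and addresses are invented for illustration.

```python
import email
import re

# A toy message carrying the Authentication-Results header a receiving
# server might add after checking SPF. Addresses are illustrative.
RAW = b"""\
Authentication-Results: mx.corp.example; spf=pass smtp.mailfrom=partner.example
From: bob@partner.example
To: alice@corp.example
Subject: quarterly figures

Numbers attached.
"""

def spf_signal(msg):
    """Nudge a trust score up on an SPF pass, down on a fail; never decide alone."""
    results = msg.get("Authentication-Results", "")
    m = re.search(r"\bspf=(\w+)", results)
    if not m:
        return 0.0  # no authentication data recorded: stay neutral
    return {"pass": 0.2, "fail": -0.3}.get(m.group(1), 0.0)

msg = email.message_from_bytes(RAW)
print(spf_signal(msg))  # 0.2
```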

Messages cleared by TrustVault can still go through anti-virus and spyware scans, and even previously trusted senders can be screened out if they start exhibiting suspicious behavior, Goel said. But sometimes letting the right messages through can be as important as keeping the wrong ones out. For example, corporations targeted by activists or hacktivists sometimes have the email accounts of top executives rendered useless when they are flooded by messages sent by angry consumers or generated by bots. With TrustVault, the messages from known senders could be delivered to the executive being targeted, while all the rest would be routed for review by an administrative assistant.
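A minimal sketch of that triage under a mail flood, assuming a relationship score like the one sketched earlier; the threshold and queue names are hypothetical.

```python
TRUST_THRESHOLD = 10  # e.g. the reciprocity-weighted score sketched earlier

def route(sender, trust_score):
    """While a mailbox is being flooded, deliver known correspondents' mail
    and divert everything else for human review instead of dropping it."""
    return "inbox" if trust_score >= TRUST_THRESHOLD else "review-queue"

flood = [("bob@partner.example", 12), ("bot1234@junk.example", 0)]
for sender, score in flood:
    print(sender, "->", route(sender, score))
```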

One of the company’s oldest customers, the doctors.net.uk social network for physicians in the U.K., has been using a version of the same technology to allow email that uses words like “Viagra” or “penis” to get past spam filters when those words are used in a legitimate medical context, rather than for spam or pornographic promotions, Goel said.

“This also allows you to turn up the threshold on the aggressiveness of your spam filters without missing messages,” Goel said. “I liken this to why cars have brakes–to allow you to go faster. Spam filtering is very much focused on identifying the bad guys. We’re using the good and the bad to improve the overall security infrastructure.”

Founded in Singapore, TrustSphere is just now bringing its product to the U.S. market.

Source: http://www.informationweek.com/thebrainyard/news/email/232901586

London police charged five individuals under the Computer Misuse Act for their role in launching distributed denial-of-service attacks against commercial websites. Authorities believe the suspects are connected to the Anonymous hacking group, a loosely affiliated band of web-savvy, politically motivated individuals. The hacktivist gang is being investigated for its role in taking down a number of high-profile websites.

The credentials of 30 million online daters were placed at risk following the exploitation of an SQL injection vulnerability on PlentyOfFish.com. The creator of the Canada-based site, Markus Frind, said it was illegally accessed and that email addresses, usernames and passwords were downloaded. He blamed the attack on Argentinian security researcher Chris Russo, who Frind claimed was working with Russian partners to extort money. Russo, however, said he merely learned of the vulnerability while trawling an underground forum, then tested, confirmed and responsibly reported it to Frind, and that he never extracted any personal data nor had any “unethical” intentions.

Facebook announced a new security feature designed to deter attackers from snooping on users who browse the social networking site via public wireless networks. Users can now browse Facebook over “HTTPS,” an encrypted protocol that prevents the unauthorized hijacking of private sessions and data. The site was spurred on to add the security feature after a researcher unveiled a Firefox plug-in, known as Firesheep, that permits anyone to scan open Wi-Fi networks and hijack, for example, Twitter and Facebook accounts. HTTPS will eventually be offered as a default setting to all users.

For a third time, a California lawmaker introduced a bill that would update the state’s data breach notification law, SB-1386, to include additional requirements for organizations that lose sensitive data. The proposal by Sen. Joe Simitian (D-Palo Alto) would require that breach notification letters contain specifics of the incident, including the type of personal information exposed, a description of what happened and advice on steps to take to protect oneself from identity theft. Twice before, the bill has reached former Gov. Arnold Schwarzenegger’s desk to be signed, only to be vetoed.

Facebook, MySpace and YouTube are the most commonly blacklisted sites at organizations, according to a report from OpenDNS, a DNS infrastructure and security provider. The yearly report, based on data from some 30 billion daily DNS queries, found that 23 percent of business users block Facebook, 13 percent restrict access to MySpace, and 12 percent ban access to YouTube. Meanwhile, the OpenDNS-run PhishTank database found that PayPal is the most phished brand, based on verified fraudulent sites.

Google, maker of the Chrome web browser, made a feature available that allows users to opt out of online behavioral advertising tracking cookies. The tool, called “Keep My Opt-Outs,” is available as an extension for download. The announcement comes on the heels of a Federal Trade Commission report urging companies to develop a ‘do not track’ mechanism so consumers can choose whether to allow the collection of data regarding online browsing activities. Browser-makers Mozilla and Microsoft also announced intentions to release similar features for their browsers.

Verizon announced plans to acquire Terremark, a managed IT infrastructure and cloud services provider known for its advanced security offerings, for $1.4 billion. Verizon plans to operate Terremark as a standalone business unit. “Cloud computing continues to fundamentally alter the way enterprises procure, deploy and manage IT resources, and this combination helps create a tipping point for ‘everything-as-a-service,’” said Lowell McAdam, Verizon’s president and chief operating officer.

Source: http://www.scmagazineus.com/news-briefs/article/197112/

There has already been much fallout from the recent massive release of information by the WikiLeaks organisation, including attacks on WikiLeaks itself by those angered by its actions, aimed at disrupting and discrediting the organisation. This saw WikiLeaks targeted by a variety of sustained distributed denial of service (DDoS) attacks intended to make its web presence inaccessible.

Although these attacks were relatively modest in size and not very sophisticated, the publicity they received has raised awareness of the dangers of such attacks, which can be costly and time-consuming to defend against. DDoS attacks occur when an attacker uses large-scale computing resources, often botnets, to bombard an organisation’s network with requests for information that overwhelm it and cause servers to crash. Many such attacks are launched against websites, making them unavailable, which can lead to lost business on top of the costs of mitigating the attack and restoring service.

DDoS attacks are actually extremely widespread. A recent survey commissioned by VeriSign found that 75% of respondents had experienced one or more attacks in the past 12 months. This is echoed in recent research published by Arbor Networks, whose survey of 111 IP network operators worldwide showed that 69% of respondents had experienced at least one DDoS attack in the past year, and 25% had been hit by ten such attacks per month. According to Adversor, which offers services to protect against DDoS attacks, such attacks now account for 4% of total internet traffic. Another provider of such services, Prolexic Technologies, estimates that there are 50,000 distinct DDoS attacks every week.

The research from Arbor Networks also shows that DDoS attacks are increasing in size, making them harder to defend against. It found a 102% increase in attack size over the past year, with attacks breaking the 100Gbps barrier for the first time. More attacks are also being aimed at the application layer, targeting the database server and crippling or corrupting the applications and underlying data needed to run a business, according to Arbor’s chief scientist, Craig Labovitz. Arbor states that 77% of respondents to its survey detected application-layer attacks in 2010, leading to increased operational expenditure, customer churn and revenue loss owing to the outages that ensue.

Measures commonly taken to defend against DDoS attacks include the use of on-premise intrusion detection and prevention systems, or the over-provisioning of bandwidth to prevent an attack taking down the network. Others use service providers, such as their internet service provider (ISP) or third-party anti-DDoS specialists, the latter of which tend to be carrier-agnostic and so are not limited to the services offered by a particular ISP. The first two options are time-consuming and costly for organisations to manage, and they must provide enough capacity to deal with the massive-scale, stealthy application-layer attacks now being seen.

With attacks increasing in size and stealthier application-layer attacks becoming more common, some attacks are now so big that they really need to be mitigated in the cloud before the attack can reach an organisation’s network. ISPs and specialist third-party DDoS defence providers monitor inbound traffic and, when a potential DDoS attack is detected, redirect the traffic to a scrubbing platform based in the cloud. Here the attack can be mitigated, providing a “clean pipe” service: the provider takes the bad traffic, cleans it and routes the legitimate remainder back to the network in a manner that is transparent to the organisation.
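As a conceptual illustration of the detection step that precedes diversion (real deployments work on flow telemetry such as NetFlow, at carrier scale), a monitor might compare the current inbound request rate against a learned baseline and identify the heaviest sources to shape the scrubbing platform’s filters. Every threshold and name here is an assumption for illustration.

```python
from collections import Counter

BASELINE_RPS = 200        # learned normal requests/second for this site
ALERT_MULTIPLIER = 10     # flag when traffic is 10x normal

def should_divert(requests_last_second):
    """Return True when inbound traffic looks like a flood worth scrubbing."""
    return len(requests_last_second) > BASELINE_RPS * ALERT_MULTIPLIER

def top_sources(requests_last_second, n=3):
    """The heaviest source IPs, useful for tuning scrubbing-centre filters."""
    return Counter(ip for ip, _path in requests_last_second).most_common(n)

# Simulated second of traffic: one bot flooding, a few legitimate clients.
traffic = [("203.0.113.9", "/")] * 2500 + [("198.51.100.7", "/news")] * 3
print(should_divert(traffic))   # True: 2503 rps exceeds the 2000 rps threshold
print(top_sources(traffic))     # [('203.0.113.9', 2500), ('198.51.100.7', 3)]
```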

Guarding against DDoS attacks is essential for many organisations, and especially vital for those with a large web presence, where an outage could cost them dearly in lost business. DDoS attacks are becoming increasingly targeted and no longer affect only larger organisations. Recent stories in the press have shown that organisations of all sizes are being attacked, ranging from small manufacturers of industrial food-processing equipment and machinery through to large gambling websites.

By subscribing to cloud-based DDoS mitigation services, organisations benefit from better protection against DDoS attacks than they could achieve by themselves, often at lower cost: the cost of the hardware and its maintenance is spread across all subscribers to the service, and organisations do not need to over-provision bandwidth, since attack traffic is directed away from their networks. For protecting vital websites, subscribing to such a service is akin to taking out insurance: it ensures that website assets are protected, and the organisation shields itself from the cost and reputational damage that can follow a successful DDoS attack that renders services unavailable.

Source: http://www.computerweekly.com/blogs/Bloor-on-IT-security/2011/02/ddod-attacks-coming-to-a-network-near-you.html

Fighting the malware fight all over again
Kevin Beaver, CISSP
Updated: 02-11-2011 10:46 am

Remember the good old days of floppy disks and macro viruses? Back then, we thought things were complex. How could enterprises possibly gain any semblance of control over these new-fangled security threats that were targeting their users?

As years went by, we finally got our arms around this malware thing – until now. Maybe it’s just me, but malware is all we seem to be hearing about in the IT headlines, and it is only getting worse. Bots, advanced persistent threats and the like seem to be the hot-button issue in IT security right now.

Spam, denial-of-service attacks and information leakage (to name a few) can all be traced with ease to widespread malware infections. For example, Symantec’s MessageLabs Intelligence has found that infected computers in some botnets send on average more than 600 spam e-mails per second. This is big business!

Of course, I also realize that the marketing machine is at work here and we cannot believe everything we hear. Trend Micro claims that 3.5 new malware threats are released every second, which works out to roughly 300,000 a day. So what does that translate to: tens, if not hundreds, of thousands of encounters with malware in any given enterprise on any given day? Wow, is the sky falling?

On the other hand, Cisco ScanSafe claims that in 2010, a representative 15,000-seat enterprise would experience about 5.5 encounters with malware on any given day. That’s a relatively low number, I suppose, but it is still a very big problem.

Remember that security is about control and visibility. Reality has shown us that many enterprises do not really have the necessary control and visibility into their networks to keep the bad guys at bay. This is especially true when it comes to malware. Suddenly (albeit shortsightedly), security issues like Web-based SQL injection and lost laptops are taking a backseat so enterprises can get their arms around this “new wave” of malware out there.

I can attest to the complexities and problems associated with both sides of the equation. On the proactive side, people are not being, well, proactive enough with information security. The assumption is that we have policies, we have technical controls in place and we are not getting hit with malware (as far as we know), therefore all is well. It’s not that simple, but it is still the way that many enterprises operate.

On the reactive side of the equation, that is, once network administrators determine that something is awry and an infection is present, enterprises tend not to have a reasonable response plan in place. Even when a seemingly appropriate response is carried out, it is often inadequate and the malware comes back.

Case in point: I worked on a recent project where a large enterprise was hit with some nasty command-and-control malware. A few thousand computers were infected. The enterprise responded by cleaning up the affected systems, but it didn’t look deeply and broadly enough throughout the network to see where else the malware was lurking. A few months later, the bot reared its ugly head again. This time the company was hit much harder, with more than 10,000 systems infected. Ouch.

So what do we have to do if we are going to stand a chance against this (re)emerging malware threat? Big-government politicians like Joe Lieberman believe that more regulation is the answer. In reality, if you look at the details of the proposed Rockefeller-Snowe Cybersecurity Act of 2009 (Senate Bill 773) and the Lieberman-Collins-Carper Protecting Cyberspace as a National Asset Act of 2010 (Senate Bill 3480) and combine them with the Federal government’s track record, regulation will likely cause more problems than it fixes. In fact, regulation and government interference in the free market are arguably among the greatest threats to information security today.

Sure, given the right scenarios and people, public-private partnerships could work well. In fact, many are saying that we need more cooperation between the Federal government and the private sector to help fend off cyber-threats. Isn’t that called InfraGard?

Back to my main point: with the large majority of malware now gaining its foothold via the Web, we no doubt have a huge problem on our hands.

It seems we have reached a point where we have gotten this perimeter security thing down pat. Ditto with wireless networks. Patch management and strong password enforcement are even coming of age. All in all, things are good. But as with world politics and religion and all their associated threats, we must not let our guard down, especially with malware. The bad guys definitely have the upper hand right now, and I suspect that’s not going to change any time soon. Good for our industry, not so good for business.

Source: http://www.securityinfowatch.com/get-with-it-7