London police charged five individuals under the Computer Misuse Act for their role in launching distributed denial-of-service attacks against commercial websites. Authorities believe the suspects are connected to the Anonymous hacking group, a loosely affiliated band of web-savvy, politically motivated individuals. The hacktivist group is being investigated for its role in taking down a number of high-profile websites.

The credentials of 30 million online daters were placed at risk following the exploitation of an SQL injection vulnerability on PlentyOfFish.com. Markus Frind, creator of the Canada-based site, said it was illegally accessed and that email addresses, usernames and passwords were downloaded. He blamed the attack on Argentinean security researcher Chris Russo, who Frind claimed was working with Russian partners to extort money. But Russo said he merely learned of the vulnerability while trawling an underground forum, then tested, confirmed and responsibly reported it to Frind, and that he never extracted any personal data, nor had any “unethical” intentions.

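SQL injection, the class of flaw blamed here, is worth a quick illustration. The sketch below is generic Python with sqlite3 and has nothing to do with PlentyOfFish's actual code: the table, columns and data are invented. The contrast between string concatenation and a parameterised query is the heart of the vulnerability.

```python
import sqlite3

# A throwaway in-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com', 'secret')")

def find_user_vulnerable(username):
    # UNSAFE: concatenating user input into SQL lets an attacker rewrite the query.
    query = "SELECT email, password FROM users WHERE username = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(username):
    # SAFE: a parameterised query treats the input strictly as data.
    query = "SELECT email, password FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

# A classic injection payload turns the WHERE clause into a tautology,
# dumping every row -- the kind of bulk extraction described above.
payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # every user's email and password leaks
print(find_user_safe(payload))        # no rows: the payload matches nothing
```

The parameterised version is the standard defence; the database driver never interprets the input as SQL.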
Facebook announced a new security feature designed to deter attackers from snooping on users who browse the social networking site via public wireless networks. Users can now browse Facebook over “HTTPS,” an encrypted protocol that prevents the unauthorized hijacking of private sessions and data. The site was spurred on to add the security feature after a researcher unveiled a Firefox plug-in, known as Firesheep, that permits anyone to scan open Wi-Fi networks and hijack, for example, Twitter and Facebook accounts. HTTPS will eventually be offered as a default setting to all users.
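Firesheep worked because session cookies were sent in the clear over plain HTTP, where anyone on the same open network could copy and replay them. A minimal sketch of the server-side fix, using Python's standard http.cookies module (the cookie name and value here are invented): marking a cookie Secure tells the browser to transmit it only over HTTPS, leaving a Wi-Fi sniffer nothing to hijack.

```python
from http.cookies import SimpleCookie

# A session cookie without the Secure flag is sent over plain HTTP too,
# which is exactly what Firesheep harvested on open Wi-Fi networks.
leaky = SimpleCookie()
leaky["session_id"] = "abc123"

# With Secure (and HttpOnly) set, browsers only transmit the cookie over
# HTTPS, and scripts cannot read it, so a passive sniffer sees nothing.
hardened = SimpleCookie()
hardened["session_id"] = "abc123"
hardened["session_id"]["secure"] = True
hardened["session_id"]["httponly"] = True

print(leaky.output())     # Set-Cookie: session_id=abc123
print(hardened.output())  # Set-Cookie: session_id=abc123; HttpOnly; Secure
```

Of course, the Secure flag only helps once the whole session actually runs over HTTPS, which is why site-wide encrypted browsing matters.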

For a third time, a California lawmaker introduced a bill that would update the state’s data breach notification law, SB-1386, to include additional requirements for organizations that lose sensitive data. The proposal by Sen. Joe Simitian (D-Palo Alto) would require that breach notification letters contain specifics of the incident, including the type of personal information exposed, a description of what happened and advice on steps to take to protect oneself from identity theft. Twice before, the bill has gone to former Gov. Arnold Schwarzenegger’s desk to be signed, only to be vetoed.

Facebook, MySpace and YouTube are the most commonly blacklisted sites at organizations, according to a report from OpenDNS, a DNS infrastructure and security provider. The yearly report, based on data from some 30 billion daily DNS queries, found that 23 percent of business users block Facebook, 13 percent restrict access to MySpace, and 12 percent ban access to YouTube. Meanwhile, the OpenDNS-run PhishTank database found that PayPal is the most phished brand, based on verified fraudulent sites.

Google, maker of the Chrome web browser, made a feature available that allows users to opt out of online behavioral advertising tracking cookies. The tool, called “Keep My Opt-Outs,” is available as an extension for download. The announcement comes on the heels of a Federal Trade Commission report urging companies to develop a ‘do not track’ mechanism so consumers can choose whether to allow the collection of data regarding online browsing activities. Browser-makers Mozilla and Microsoft also announced intentions to release similar features for their browsers.

Verizon announced plans to acquire Terremark, a managed IT infrastructure and cloud services provider known for its advanced security offerings, for $1.4 billion. Verizon plans to operate Terremark as a standalone business unit. “Cloud computing continues to fundamentally alter the way enterprises procure, deploy and manage IT resources, and this combination helps create a tipping point for ‘everything-as-a-service,’” said Lowell McAdam, Verizon’s president and chief operating officer.

Source: http://www.scmagazineus.com/news-briefs/article/197112/

There has already been much fallout from the recent massive release of information by the WikiLeaks organisation, including attacks on WikiLeaks itself by those angered by its actions, aimed at disrupting and discrediting the organisation. These saw WikiLeaks targeted by a variety of sustained distributed denial-of-service (DDoS) attacks intended to make its web presence inaccessible.

Although these attacks were relatively modest in size and not very sophisticated, the publicity they received has raised awareness of the dangers of such attacks, which can be costly and time-consuming to defend against. DDoS attacks occur when an attacker uses large-scale computing resources, often botnets, to bombard an organisation’s network with requests for information that overwhelm it and cause servers to crash. Many such attacks are launched against websites, making them unavailable, which can lead to lost business as well as the costs of mitigating the attacks and restoring service.
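As a rough sketch of the detection side, here is a naive per-source request counter in Python. Real DDoS detection runs in carrier-grade equipment on flow data and must cope with spoofed and distributed sources; the window, threshold and IP addresses below are invented for illustration only.

```python
from collections import defaultdict, deque

class FloodDetector:
    """Flag sources whose request rate inside a sliding window exceeds a threshold."""

    def __init__(self, window_seconds=1.0, threshold=100):
        self.window = window_seconds
        self.threshold = threshold
        self.hits = defaultdict(deque)  # source IP -> deque of request timestamps

    def record(self, ip, now):
        q = self.hits[ip]
        q.append(now)
        # Drop timestamps that have fallen outside the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.threshold  # True => likely flood traffic

det = FloodDetector(window_seconds=1.0, threshold=5)
# A normal client: a handful of requests, never flagged.
assert not any(det.record("198.51.100.7", t * 0.3) for t in range(4))
# A bot hammering the server: flagged once it crosses the threshold.
flagged = [det.record("203.0.113.9", t * 0.01) for t in range(10)]
print(flagged.count(True))  # → 5
```

A single-source counter like this is trivially evaded by a botnet spreading load across thousands of addresses, which is precisely why volumetric DDoS is hard to stop at the victim's edge.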
DDoS attacks are actually extremely widespread. A recent survey commissioned by VeriSign found that 75% of respondents had experienced one or more attacks in the past 12 months. This is echoed by recent Arbor Networks research covering 111 IP network operators worldwide, which showed that 69% of respondents had experienced at least one DDoS attack in the past year, and 25% had been hit by ten such attacks per month. According to Adversor, which offers services to protect against DDoS attacks, such attacks now account for 4% of total internet traffic. Another provider of these services, Prolexic Technologies, estimates that there are 50,000 distinct DDoS attacks every week.

The research from Arbor Networks also shows that DDoS attacks are increasing in size, making them harder to defend against. It found a 102% increase in attack size over the past year, with attacks breaking the 100Gbps barrier for the first time. More attacks are also being seen at the application layer; these target the database server and cripple or corrupt the applications and underlying data needed to run a business, according to Arbor’s chief scientist, Craig Labovitz. Arbor states that 77% of respondents to its survey detected application-layer attacks in 2010, leading to increased operational expenditure, customer churn and revenue loss owing to the outages that ensue.

Measures commonly taken to defend against DDoS attacks include deploying on-premise intrusion detection and prevention systems, or over-provisioning bandwidth so that an attack cannot saturate the network. Others use service providers, such as their internet service provider (ISP) or third-party anti-DDoS specialists, which tend to be carrier-agnostic and so are not limited to the services offered by a particular ISP. The first two options are time-consuming and costly for organisations to manage, and they require the capacity to deal with the massive-scale, stealthy application-layer attacks now being seen.
With attacks increasing in size and stealthier application-layer attacks becoming more common, some attacks are now so big that they are best mitigated in the cloud, before the malicious traffic ever reaches an organisation’s network. ISPs and specialist third-party DDoS defence providers monitor inbound traffic and, when a potential DDoS attack is detected, redirect it to a cloud-based scrubbing platform. There the attack is mitigated, providing a “clean pipe” service: the provider takes the bad traffic, cleans it and routes it back to the network in a manner that is transparent to the organisation.
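The clean-pipe idea can be caricatured in a few lines of Python. A real scrubbing platform diverts traffic via BGP or DNS changes and applies deep packet inspection and behavioural analysis; this sketch, with invented packet records and a simple source blacklist, only shows the shape of the operation: absorb the bad traffic, pass the good through untouched.

```python
def scrub(packets, blacklist):
    """Toy scrubbing pass: drop packets from blacklisted sources, forward the rest."""
    clean, dropped = [], 0
    for pkt in packets:
        if pkt["src"] in blacklist:
            dropped += 1        # attack traffic is absorbed at the scrubber
        else:
            clean.append(pkt)   # legitimate traffic flows on transparently
    return clean, dropped

inbound = [
    {"src": "203.0.113.9", "payload": "GET /"},   # known attack source
    {"src": "198.51.100.7", "payload": "GET /"},  # legitimate client
    {"src": "203.0.113.9", "payload": "GET /"},
]
clean, dropped = scrub(inbound, blacklist={"203.0.113.9"})
print(len(clean), dropped)  # → 1 2
```

The economics follow from this picture: the scrubber, not the customer, needs the bandwidth and hardware to absorb the flood, and that capacity is shared across all subscribers.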

Guarding against DDoS attacks is essential for many organisations, and vital especially for those with a large web presence, where an outage could cost them dearly in terms of lost business. DDoS attacks are becoming increasingly targeted and are no longer just affecting larger organisations. Rather, recent stories in the press have shown that organisations of all sizes are being attacked, ranging from small manufacturers of industrial food-processing equipment and machinery through to large gambling websites.
By subscribing to cloud-based DDoS mitigation services, organisations benefit from protection that is better than they could achieve by themselves, and often cheaper: the cost of the required hardware and its maintenance is spread across all subscribers to the service, and organisations no longer need to over-provision bandwidth because attack traffic is directed away from their networks. For vital websites, subscribing to such a service is akin to taking out insurance: it protects website assets, and it shields the organisation from the cost and reputational damage that can follow a successful DDoS attack that renders services unavailable.

Source: http://www.computerweekly.com/blogs/Bloor-on-IT-security/2011/02/ddod-attacks-coming-to-a-network-near-you.html

Fighting the malware fight all over again
Kevin Beaver, CISSP
Updated: 02-11-2011 10:46 am

Remember the good old days of floppy disks and macro viruses? Back then, we thought things were complex. How could enterprises possibly gain any semblance of control over these new-fangled security threats that were targeting their users?

As years went by, we finally got our arms around this malware thing – until now. Maybe it’s just me, but malware is all we seem to be hearing about in the IT headlines, and it is only getting worse. Bots, advanced persistent threats and the like seem to be the hot-button issue in IT security right now.

Spam, denial of service attacks and information leakage (to name a few) can all be traced back with ease to widespread malware infections. For example, Symantec’s MessageLabs Intelligence has found that infected computers in some botnets send on average more than 600 spam e-mails per second. This is big business!

Of course, I also realize that the marketing machine is at work here and we cannot believe everything we hear. Trend Micro claims that 3.5 new malware threats are released every second. So what does that amount to? Tens, if not hundreds, of thousands of encounters with malware in any given enterprise on any given day? Wow, is the sky falling?

On the other hand, Cisco ScanSafe claims that in 2010, a representative 15,000-seat enterprise would experience about 5.5 encounters with malware on any given day. That’s a relatively low number I suppose, but it is still a very big problem.

Remember that security is about control and visibility. Reality has shown us that many enterprises do not really have the necessary control and visibility into their networks to keep the bad guys at bay. This is especially true when it comes to malware. Suddenly (albeit shortsightedly), security issues like Web-based SQL injection and lost laptops are taking a backseat so enterprises can get their arms around this “new wave” of malware out there.

I can attest to the complexities and problems associated with both sides of the equation. On the proactive side, people are not being, well, proactive enough with information security. The assumption is that we have policies, we have technical controls in place and we are not getting hit with malware (as far as we know), therefore all is well. It’s not that simple, but still it is the way that many enterprises operate.

On the reactive side of the equation – that is, once network administrators determine that something is awry and an infection is present – enterprises tend not to have a reasonable response plan in place. Even when a seemingly appropriate response is carried out, the infection is often not fully eradicated and the malware comes back.

Case in point: I worked on a recent project where a large enterprise originally got hit with some nasty command and control malware. A few thousand computers were infected. They responded by cleaning up the affected systems but they didn’t look deeply and broadly enough throughout their network to see where else the malware was lurking. A few months later, the bot reared its ugly head again. This time they were hit much harder and had more than 10,000 systems become infected. Ouch.

So what do we have to do if we are going to stand a chance against this (re)emerging malware threat? Big government politicians like Joe Lieberman believe that more regulation is the answer. In reality, if you look at the details of the proposed Rockefeller-Snowe Cybersecurity Act of 2009 (Senate Bill 773) and the Lieberman-Collins-Carper Protecting Cyberspace as a National Asset Act of 2010 (Senate Bill 3480) and combine them with the Federal government’s track record, regulation will likely serve to cause more problems than it fixes. In fact, regulation and government interference in the free market is arguably one of the greatest threats to information security today.

Sure, given the right scenarios and people, public-private partnerships could work well. In fact, many are saying that we need more cooperation between the Federal government and the private sector to help fend off cyber-threats. Isn’t that called InfraGard?

Back to my main point, with the large majority of malware now gaining its foothold via the Web, we no doubt have a huge problem on our hands.

It seems we have reached a point where we have gotten this perimeter security thing down pat. Ditto with wireless networks. Patch management and strong password enforcement are even coming of age. All in all, things are good. But as with world politics and religion and all their associated threats, we must not let our guard down – especially with malware. The bad guys definitely have the upper hand right now and I suspect that’s not going to change any time soon. Good for our industry, not so good for business.

Source: http://www.securityinfowatch.com/get-with-it-7

User forum Whirlpool was hit by a distributed denial-of-service (DDoS) attack last night, according to the site’s hosting provider BulletProof Networks.

Although BulletProof Networks chief operating officer (COO) Lorenzo Modesto first said that Whirlpool was the only one of its customers to be affected by the attack, he later said that its public and private managed cloud customers were also experiencing intermittent degraded network performance.

“BulletProof customers have been kept in the loop throughout (per our standard procedures),” Modesto said.

Modesto added that BulletProof had discussed the issue with Whirlpool, resulting in the site being offline last night while the provider gathered more information. The site is back online this morning.

“We made the decision to bring Whirlpool back online in the early hours of this morning through one of our international [content distribution network points of presence] that are usually used to deliver local high-speed content to the offshore users of customers like Movember,” Modesto said.

“We’re continuing the forensics just in case they’re needed and are keeping an eye on Whirlpool,” he added.

The attack had come from servers in the US and Korea, according to BulletProof.

“We’ve also been able to record server addresses and other relevant details and have escalated the source servers to the relevant providers in Korea and the US,” he said. “If we need to, we’ll pass all details onto the [Australian Federal Police] with whom we’ve built a good relationship, but we’ll see how this pans out for the moment.”

This is not the first DDoS attack to hit the popular site: last June it experienced ten hours of downtime from a similar attack.

BulletProof Networks had also collected internet protocol addresses from that attack, but decided not to prosecute as a “sign of good will”, saying that DDoS was recognised more as a protest than a crime.

However, not all DDoS perpetrators have received the same treatment in the past. Recently, Steven Slayo, who was part of the Anonymous band that launched attacks against government sites last year over the government’s planned mandatory internet service provider-level internet filter, was taken to court over his actions.

He pleaded guilty, but escaped criminal conviction because the magistrate deemed him an “intelligent and gifted student whose future would be damaged by a criminal record”.

Source: http://www.zdnet.com.au/whirlpool-hit-by-ddos-attack-339308730.htm

The Wireshark development team has released versions 1.2.14 and 1.4.3 of its open source, cross-platform network protocol analyser. According to the developers, the security updates address a high-risk vulnerability (CVE-2010-4538) that could allow a remote attacker to initiate a denial-of-service (DoS) attack or possibly execute arbitrary code on a victim’s system.

Affecting both the 1.2.x and 1.4.x branches of Wireshark, the issue is reportedly caused by a buffer overflow in ENTTEC (epan/dissectors/packet-enttec.c) – the vulnerability is said to be triggered by injecting a specially crafted ENTTEC DMX packet with Run Length Encoding (RLE) compression. A buffer overflow issue in MAC-LTE has also been resolved in both versions. In version 1.4.3, a vulnerability in the ASN.1 BER dissector that could have caused Wireshark to exit prematurely has been corrected.
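To see the shape of the ENTTEC flaw, consider run-length decoding in general. In C, trusting an attacker-supplied run length lets the expanded data overrun a fixed-size output buffer. The Python sketch below models the fixed buffer with an explicit capacity check; the (count, byte) framing is invented for illustration and is not Wireshark's actual dissector logic.

```python
def rle_decode(data, max_output):
    """Expand RLE data given as (count, byte) pairs into at most max_output bytes."""
    out = bytearray()
    for count, value in data:
        # This is the bounds check whose absence causes the overflow class:
        # without it, a crafted packet claiming a huge run length would write
        # past the end of a fixed-size buffer in a C implementation.
        if len(out) + count > max_output:
            raise ValueError("RLE run overflows output buffer")
        out.extend(bytes([value]) * count)
    return bytes(out)

print(rle_decode([(3, 0x41), (2, 0x42)], max_output=16))  # → b'AAABB'

# A crafted packet claims a run far larger than the buffer and is rejected:
try:
    rle_decode([(65535, 0x00)], max_output=512)
except ValueError as e:
    print(e)
```

Compression schemes are a recurring source of such bugs precisely because a few input bytes can legitimately demand a much larger output.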

All users are encouraged to upgrade to the latest versions. Alternatively, users who are unable to upgrade can disable the affected dissectors by selecting “Analyze”, then “Enabled Protocols” from the menu and un-checking “ENTTEC” and “MAC-LTE”.

More details about the updates, including a full list of changes, can be found in the 1.2.14 and 1.4.3 release notes. Wireshark binaries for Windows and Mac OS X, as well as the source code, are available to download and documentation is provided. Wireshark, formerly known as Ethereal, is licensed under version 2 of the GNU General Public Licence (GPLv2).

Source: http://www.h-online.com/open/news/item/Wireshark-updates-address-vulnerabilities-1168888.html