Australians love to place bets online, but few punters are aware of the dangers lurking in cyberspace.

Melbourne-based internet betting and entertainment website Sportsbet.com.au found out about these dangers the hard way when, in 2009, the company was the target of a distributed denial of service (DDoS) attack.

A DDoS attack involves harnessing hundreds or thousands of computers to simultaneously bombard a website with data so it becomes overwhelmed. The computers in such attacks have typically been infected with malware so they can be used without the consent or knowledge of their owners.

According to the company, traffic on the Sportsbet site can reach 2000 hits per second as punters place bets on race days, and cyber criminals are keen to try to take a share of the money. The heightened activity around Melbourne's Spring Racing Carnival in 2009 presented an attractive opportunity to attack its services.

Competitors TABCorp, Sportingbet and Centrebet all faced attacks over the same period.

Sportsbet IT security manager, Gonzalo Ernst, told Computerworld Australia the company managed to mitigate the heavy traffic resulting from the attack.

“We had help from our internet service provider [ISP] because it’s a bandwidth attack and can only be done at the ISP level,” he said. “We have an agreement with our ISP to offer protection.”

According to Ernst, there were rumours of further DDoS attacks on betting agency websites in 2010, but Sportsbet has not experienced a DDoS attack since its Crossbeam X-Series platform was installed.

While the Sportsbet website experienced service degradation for only two hours during the attack, the IT department decided to upgrade its firewalls to ensure the security infrastructure had the capacity to handle future attacks.

At the time, the company was using a C12 security offering from its vendor Crossbeam but, following the attack, it upgraded to the X-Series combined with a Check Point firewall.

The upgraded Crossbeam firewall handles 10 to 13 million connections per second, allowing the company to withstand connection attacks, in which millions of connections are directed at a homepage to bring it down.
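
As a rough illustration of what a connection-attack defence does, the sketch below shows per-source connection-rate limiting in Python. It is a minimal, assumed example of the general technique; the thresholds are invented, and it is not how the Crossbeam or Check Point products are actually implemented.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 1         # measure connections over a one-second window
MAX_CONN_PER_WINDOW = 50   # hypothetical per-source threshold

_recent = defaultdict(deque)   # source IP -> timestamps of recent connections

def allow_connection(src_ip, now=None):
    """Return True to accept the connection, False to drop it."""
    now = time.time() if now is None else now
    window = _recent[src_ip]
    # Discard timestamps that have fallen outside the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_CONN_PER_WINDOW:
        return False   # this source is opening connections too fast
    window.append(now)
    return True

A source that opens connections faster than the threshold is simply refused, which is the basic idea behind surviving floods of millions of connection attempts.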

Online betting remained a growth industry for Sportsbet, with traffic to the company's website continuing to double.

Crossbeam Australia and South East Asia regional sales director, Andrew Draper, said in a statement that Sportsbet had been working with the vendor since 2006.

“In our [Australian] customer base they are completely unique in that they are a 100 per cent Web-based business. We’re not working with other online betting agencies in Australia at present,” he said.

While he would not name any other Australian customers, Draper said Crossbeam also operates in the telecommunications, university, financial services, insurance and government sectors.

Ernst advised other companies to have a close partnership with their ISP and good monitoring tools in place.

“The important thing once you get an attack is to know what kind of attack it is.”
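
Ernst's point about identifying the kind of attack can be illustrated with a first-pass triage of traffic statistics. The sketch below is purely hypothetical; the thresholds and categories are assumptions for illustration, not the logic of any real monitoring tool.

def classify_attack(bits_per_sec, new_conns_per_sec, http_reqs_per_sec):
    """Very rough first guess at the type of DDoS attack under way."""
    if bits_per_sec > 1_000_000_000:
        # The link itself is being saturated; only the ISP can absorb this.
        return "volumetric bandwidth flood: escalate to the ISP"
    if new_conns_per_sec > 100_000:
        # The firewall's connection table is being exhausted.
        return "connection/state-exhaustion attack: firewall-level mitigation"
    if http_reqs_per_sec > 10_000:
        # Servers are tied up answering expensive requests.
        return "application-layer attack: inspect and filter requests"
    return "no obvious attack signature"

Knowing which of these is happening determines whether the response belongs with the ISP, the firewall or the application team.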

The collective known as Anonymous has declared it will target Sony as a result of the company’s legal action against PlayStation 3 jailbreakers GeoHot and Graf_Chokolo.

A spokesperson for the collective writes, “Congratulations, Sony. Your recent legal action against our fellow hackers, GeoHot and Graf_Chokolo, has not only alarmed us, it has been deemed wholly unforgivable.”

Anonymous is perhaps most famous for its distributed denial of service (DDoS) attacks against Amazon, PayPal, Visa and Mastercard over their perceived anti-WikiLeaks behaviours. Both Visa’s and Mastercard’s websites were brought down as a result of those attacks.

Earlier this year, GeoHot and fail0verflow (a group that includes Graf_Chokolo) exposed the PlayStation 3’s root key after the removal of OtherOS from the console. The disclosure has left Sony’s platform exposed to rampant piracy.

London police charged five individuals under the Computer Misuse Act for their role in launching distributed denial-of-service attacks against commercial websites. Authorities believe the suspects are connected to the Anonymous hacking group, a loosely affiliated band of web-savvy, politically motivated individuals. The hacktivist gang is being investigated for its role in taking down a number of high-profile websites.

The credentials of 30 million online daters were placed at risk following the exploit of an SQL injection vulnerability on PlentyOfFish.com. Markus Frind, creator of the Canada-based site, said it was illegally accessed and that email addresses, usernames and passwords were downloaded. He blamed the attack on Argentinean security researcher Chris Russo, who Frind claimed was working with Russian partners to extort money. But Russo said he merely learned of the vulnerability while trawling an underground forum, then tested, confirmed and responsibly reported it to Frind. He said he never extracted any personal data, nor had any “unethical” intentions.
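
The PlentyOfFish report does not say where the flaw sat, but the class of bug is well understood: SQL built by pasting user input into a query string. The sketch below, using Python's built-in sqlite3 module and hypothetical table and column names, contrasts the vulnerable pattern with a parameterised query.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'a@example.com', 'secret')")

def lookup_vulnerable(username):
    # Vulnerable: input such as "' OR '1'='1" rewrites the query and
    # returns every row in the table.
    query = "SELECT email, password FROM users WHERE username = '%s'" % username
    return conn.execute(query).fetchall()

def lookup_safe(username):
    # Safe: the driver treats the input strictly as data, never as SQL.
    query = "SELECT email, password FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

print(lookup_vulnerable("' OR '1'='1"))   # leaks all stored credentials
print(lookup_safe("' OR '1'='1"))         # returns an empty list
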
Facebook announced a new security feature designed to deter attackers from snooping on users who browse the social networking site via public wireless networks. Users can now browse Facebook over “HTTPS,” an encrypted protocol that prevents the unauthorized hijacking of private sessions and data. The site was spurred to add the feature after a researcher unveiled a Firefox plug-in, known as Firesheep, that permits anyone to scan open Wi-Fi networks and hijack, for example, Twitter and Facebook accounts. HTTPS will eventually be offered as a default setting to all users.
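
The attack Firesheep automated works because session cookies are sent in the clear over open Wi-Fi. The sketch below, written with the Flask framework rather than anything Facebook actually runs, shows the server-side measures that close that hole: redirecting plain HTTP to HTTPS and marking the session cookie Secure (the HSTS header is a further measure not mentioned in the brief).

from flask import Flask, redirect, request

app = Flask(__name__)
app.config["SESSION_COOKIE_SECURE"] = True     # cookie only ever sent over TLS
app.config["SESSION_COOKIE_HTTPONLY"] = True   # cookie not readable by scripts

@app.before_request
def force_https():
    # Send any plain-HTTP request to its HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    # Ask browsers to use HTTPS for all future visits.
    response.headers["Strict-Transport-Security"] = "max-age=31536000"
    return response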

For a third time, a California lawmaker introduced a bill that would update the state’s data breach notification law, SB-1386, to include additional requirements for organizations that lose sensitive data. The proposal by Sen. Joe Simitian (D-Palo Alto) would require that breach notification letters contain specifics of the incident, including the type of personal information exposed, a description of what happened and advice on steps to take to protect oneself from identity theft. Twice before, the bill reached former Gov. Arnold Schwarzenegger’s desk, only to be vetoed.

Facebook, MySpace and YouTube are the most commonly blacklisted sites at organizations, according to a report from OpenDNS, a DNS infrastructure and security provider. The yearly report, based on data from some 30 billion daily DNS queries, found that 23 percent of business users block Facebook, 13 percent restrict access to MySpace, and 12 percent ban access to YouTube. Meanwhile, the OpenDNS-run PhishTank database found that PayPal is the most phished brand, based on verified fraudulent sites.

Google, maker of the Chrome web browser, made a feature available that allows users to opt out of online behavioral advertising tracking cookies. The tool, called “Keep My Opt-Outs,” is available as an extension for download. The announcement comes on the heels of a Federal Trade Commission report urging companies to develop a ‘do not track’ mechanism so consumers can choose whether to allow the collection of data regarding online browsing activities. Browser-makers Mozilla and Microsoft also announced intentions to release similar features for their browsers.
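
On the serving side, honouring an opt-out comes down to checking for the signal before setting or reading a tracking cookie. The following sketch is hypothetical: the "ad_opt_out" cookie name is an assumption for illustration, and "DNT" is the do-not-track request header browser makers were proposing at the time.

from http.cookies import SimpleCookie

def tracking_allowed(headers):
    """Return False if the user has signalled an opt-out from behavioural tracking."""
    # Proposed do-not-track request header.
    if headers.get("DNT") == "1":
        return False
    cookies = SimpleCookie(headers.get("Cookie", ""))
    # A persistent opt-out cookie, the kind of setting "Keep My Opt-Outs" preserves.
    if "ad_opt_out" in cookies and cookies["ad_opt_out"].value == "1":
        return False
    return True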

Verizon announced plans to acquire Terremark, a managed IT infrastructure and cloud services provider known for its advanced security offerings, for $1.4 billion. Verizon plans to operate Terremark as a standalone business unit. “Cloud computing continues to fundamentally alter the way enterprises procure, deploy and manage IT resources, and this combination helps create a tipping point for ‘everything-as-a-service,’” said Lowell McAdam, Verizon’s president and chief operating officer.

Source: http://www.scmagazineus.com/news-briefs/article/197112/

There has already been much fallout from the recent massive release of information by the WikiLeaks organisation, including attacks on WikiLeaks itself by those angered by its actions, which aimed to disrupt and discredit the organisation. These saw WikiLeaks targeted by a variety of sustained distributed denial of service (DDoS) attacks intended to make its web presence inaccessible.

Although these attacks were seen to be relatively modest in size and not very sophisticated, the publicity that they received has served to raise awareness of the dangers of such attacks, which can be costly and time-consuming to defend against. DDoS attacks occur when a hacker uses large-scale computing resources, often using botnets, to bombard an organisation’s network with requests for information that overwhelm it and cause servers to crash. Many such attacks are launched against websites, causing them to be unavailable, which can lead to lost business and other costs of mitigating the attacks and restoring service.

DDoS attacks are actually extremely widespread. A recent survey commissioned by VeriSign found that 75% of respondents had experienced one or more attacks in the past 12 months. This is echoed in recent research published by Arbor Networks covering 111 IP network operators worldwide, which showed that 69% of respondents had experienced at least one DDoS attack in the past year, and 25% had been hit by ten such attacks per month. According to Adversor, which offers services to protect against DDoS attacks, DDoS attacks now account for 4% of total internet traffic. Another provider of such services, Prolexic Technologies, estimates that there are 50,000 distinct DDoS attacks every week.

The research from Arbor Networks also shows that DDoS attacks are increasing in size, making them harder to defend against. It found a 102% increase in attack size over the past year, with attacks breaking the 100Gbps barrier for the first time. More attacks are also being seen against the application layer; these target the database server and cripple or corrupt the applications and underlying data needed to run a business effectively, according to Arbor’s chief scientist, Craig Labovitz. Arbor states that 77% of respondents to its survey detected application-layer attacks in 2010, leading to increased operational expenditure, customer churn and revenue loss owing to the outages that ensue.

Measures commonly taken to defend against DDoS attacks include the use of on-premise intrusion detection and prevention systems, or the overprovisioning of bandwidth to prevent an attack taking down the network. Others use service providers, such as their internet service provider (ISP) or third-party anti-DDoS specialists, which tend to be carrier-agnostic and so are not limited to the services offered by a particular ISP. The first two options are time-consuming and costly for organisations to manage, and they must have the capacity to deal with the massive-scale, stealthy application-layer attacks now being seen.

With attacks increasing in size and stealthier application-layer attacks becoming more common, some attacks are now so big that they really need to be mitigated in the cloud before the exploit can reach an organisation’s network. ISPs and third-party DDoS defence specialists monitor inbound traffic and, when a potential DDoS attack is detected, redirect the traffic to a cloud-based scrubbing platform. Here the attack can be mitigated, providing a clean-pipe service: the service provider takes the bad traffic, cleans it and routes it back to the network in a manner that is transparent to the organisation.
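
The clean-pipe flow just described can be reduced to a simple loop: watch the inbound traffic rate, divert the flow through a scrubbing stage when it crosses a threshold, and return only the cleaned traffic. The sketch below is an assumed simplification with invented thresholds, not any provider's actual implementation.

from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    size_bytes: int

DIVERT_THRESHOLD_BPS = 500_000_000   # divert to scrubbing above ~500 Mbit/s

def scrub(packets, suspect_sources):
    """Drop traffic from sources flagged as taking part in the attack."""
    return [p for p in packets if p.src_ip not in suspect_sources]

def handle_inbound(packets, observed_bps, suspect_sources):
    if observed_bps > DIVERT_THRESHOLD_BPS:
        # Attack suspected: route the flow through the scrubbing platform
        # and hand only the cleaned traffic back to the customer network.
        return scrub(packets, suspect_sources)
    # Normal conditions: traffic passes straight through.
    return packets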

Guarding against DDoS attacks is essential for many organisations, and especially vital for those with a large web presence, where an outage could cost them dearly in lost business. DDoS attacks are becoming increasingly targeted and no longer affect only larger organisations. Recent stories in the press have shown that organisations of all sizes are being attacked, ranging from small manufacturers of industrial food-processing equipment and machinery through to large gambling websites.

By subscribing to cloud-based DDoS mitigation services, organisations benefit from protection that is better than they could achieve by themselves, and often cheaper: the cost of the hardware and maintenance required is spread across all subscribers to the service, and organisations do not need to over-provision bandwidth because attack traffic is directed away from their networks. For protecting vital websites, subscribing to such a service is akin to taking out insurance. Website assets are protected, and the organisation shields itself from the cost and reputational damage that can follow a successful DDoS attack that renders services unavailable.

Source: http://www.computerweekly.com/blogs/Bloor-on-IT-security/2011/02/ddod-attacks-coming-to-a-network-near-you.html

Fighting the malware fight all over again
Kevin Beaver, CISSP
Updated: 02-11-2011 10:46 am

Remember the good old days of floppy disks and macro viruses? Back then, we thought things were complex. How could enterprises possibly gain any semblance of control over these new-fangled security threats that were targeting their users?

As years went by, we finally got our arms around this malware thing – until now. Maybe it’s just me, but malware is all we seem to be hearing about in the IT headlines, and it is only getting worse. Bots, advanced persistent threats and the like seem to be the hot-button issue in IT security right now.

Spam, denial of service attacks and information leakage (to name a few) can all be sourced with ease from widespread malware infections. For example, Symantec’s MessageLabs Intelligence has found that infected computers in some botnets send on average more than 600 spam e-mails per second. This is big business!

Of course, I also realize that the marketing machine is at work here and we cannot believe everything we hear. Trend Micro claims that 3.5 new malware threats are released every second. What does that work out to? Roughly 300,000 new threats a day, and potentially tens if not hundreds of thousands of encounters with malware in any given enterprise on any given day. Wow, is the sky falling?

On the other hand, Cisco ScanSafe claims that in 2010, a representative 15,000-seat enterprise would experience about 5.5 encounters with malware on any given day. That’s a relatively low number I suppose, but it is still a very big problem.

Remember that security is about control and visibility. Reality has shown us that many enterprises do not really have the necessary control and visibility into their networks to keep the bad guys at bay. This is especially true when it comes to malware. Suddenly (albeit shortsightedly), security issues like Web-based SQL injection and lost laptops are taking a backseat so enterprises can get their arms around this “new wave” of malware out there.

I can attest to the complexities and problems associated with both sides of the equation. On the proactive side, people are not being, well, proactive enough with information security. The assumption is that we have policies, we have technical controls in place and we are not getting hit with malware (as far as we know), therefore all is well. It’s not that simple, but still it is the way that many enterprises operate.

On the reactive side of the equation – that is, once network administrators determine that something is awry and an infection is present – enterprises tend not to have a reasonable response plan in place. Even when a seemingly appropriate response is carried out, the infection is often not dealt with thoroughly enough and the malware comes back.

Case in point: I worked on a recent project where a large enterprise originally got hit with some nasty command and control malware. A few thousand computers were infected. They responded by cleaning up the affected systems but they didn’t look deeply and broadly enough throughout their network to see where else the malware was lurking. A few months later, the bot reared its ugly head again. This time they were hit much harder and had more than 10,000 systems become infected. Ouch.

So what do we have to do if we are going to stand a chance against this (re)emerging malware threat? Big government politicians like Joe Lieberman believe that more regulation is the answer. In reality, if you look at the details of the proposed Rockefeller-Snowe Cybersecurity Act of 2009 (Senate Bill 773) and the Lieberman-Collins-Carper Protecting Cyberspace as a National Asset Act of 2010 (Senate Bill 3480) and combine them with the Federal government’s track record, regulation will likely serve to cause more problems than it fixes. In fact, regulation and government interference in the free market is arguably one of the greatest threats to information security today.

Sure, given the right scenarios and people, public-private partnerships could work well. In fact, many are saying that we need more cooperation between the Federal government and the private sector to help fend off cyber-threats. Isn’t that called InfraGard?

Back to my main point, with the large majority of malware now gaining its foothold via the Web, we no doubt have a huge problem on our hands.

It seems we have reached a point where we have gotten this perimeter security thing down pat. Ditto with wireless networks. Patch management and strong password enforcement are even coming of age. All in all, things are good. But as with world politics and religion and all their associated threats, we must not let our guard down – especially with malware. The bad guys definitely have the upper hand right now and I suspect that’s not going to change any time soon. Good for our industry, not so good for business.

Source: http://www.securityinfowatch.com/get-with-it-7