Uncategorized Archive

With an increased focus on better patient outcomes and reduced costs, the healthcare industry is slowly but surely moving towards digitisation, and healthcare organisations today increasingly use IT for diagnosis and care. The availability and use of sophisticated diagnostic techniques like teleradiology (where the attending physician interprets the patient's condition remotely using biomedical devices) means that paperless working is becoming the order of the day. The growth of concepts like telemedicine and telehealth (including m-health, which uses mobile technology for diagnosis and care) indicates that the boundary of the hospital is expanding and the number of points of care is increasing rapidly.

Ironically though, while enabling medical practitioners to reach out to their patients in much better ways, technology has made the delivery of healthcare more complex. As patients and doctors become increasingly mobile, healthcare stakeholders need to follow the right processes, provide information where and when it is needed, and move data to and from a variety of devices. All of this increases the likelihood of security breaches and loss of patient health data. Healthcare organisations are therefore under intense pressure and scrutiny over security, privacy and compliance.

According to a 2009 survey by the Healthcare Information and Management Systems Society (HIMSS), the top three security concerns for healthcare CIOs are internal breaches, regulatory compliance, and inadequate deployment of technology. Solutions that help meet regulatory requirements, mitigate security threats and manage risk are increasingly sought after.

Being compliant helps healthcare organisations reduce patient risk and increase patient confidence. It averts damage to the organisation's reputation and costly fines and penalties for the organisation and its executives. Compliance prevents loss of revenue and reduces the likelihood of professional damage to healthcare workers. It also enables doctors to work easily with any hospital in any geography, using standards-based tools for diagnosis and care.

In emergency situations, the use of standards-based tools ensures, for example, that an ambulance on the road can easily interface with any nearby hospital. Standardised tools also provide alarms and warnings, such as for temperature changes within a lab or chemical spills, increasing patient safety within a hospital. On a larger scale, they help the government with disease surveillance.

Becoming Compliant

As governments across the world and the general public insist that healthcare organisations take appropriate steps to ensure the proper use and protection of personal information, leaders in healthcare, business, technology, and information security need to collaborate and adopt standards that help reduce the inconsistencies, inefficiencies and high costs associated with the exchange of health information.

The process of gaining compliance calls for IT functions to come together in the areas of data confidentiality, integrity, availability, and auditability. Compliance can be achieved through standards mandated by bodies like the National Accreditation Board for Hospitals & Healthcare Providers (NABH) or by legislation like the Health Insurance Portability and Accountability Act (HIPAA).

Ensuring regulatory compliance, however, poses a great challenge for IT managers. Most regulations do not specifically state what they require from an IT perspective, and often several different regulations apply to a given organisation, making it difficult for IT managers to know what they must do to meet their compliance goals.

Although some vital differences exist among the various regulations, there is a substantial amount of overlap because they all deal with the fundamental issues of data security and privacy. An optimal way to address regulations is to first understand the potential threats and vulnerabilities of the data and network, and then create an effective and secure technology solution built on a well-designed infrastructure. This helps to easily deal with any new regulation that becomes law.

Categorising Vulnerabilities

By grouping protection techniques and vulnerabilities into the categories of confidentiality, integrity, availability and auditability, IT managers can create a common baseline for establishing guidelines that help achieve compliance. This approach scales with the evolving threat landscape, and new security measures can be incorporated easily.


Last year, we discussed whether or not things like Operation Payback by Anonymous (DDoSing the sites of organizations they didn't like) were really the equivalent of a modern-day sit-in protest, rather than criminal hacking, as law enforcement (and victims) wanted to allege. It appears that this may be a question that courts are going to need to answer. Nick points us to the news that the lawyer for a homeless guy accused of setting up a DDoS on the City of Santa Cruz (he was pissed about a law) is claiming that DDoS attacks are legal and protected speech in the form of a protest:

“There’s no such thing as a DDoS ‘attack’,” Leiderman said. “A DDoS is a protest, it’s a digital sit in. It is no different than physically occupying a space. It’s not a crime, it’s speech.”

Leiderman said the crimes shouldn’t be prosecuted at all. “Nothing was malicious, there was no malware, no Trojans. This was merely a digital sit in. It is no different from occupying the Woolworth’s lunch counter in the civil rights era.”

In this case, the case has nothing to do with Anonymous, Lulzsec or any of those high-profile groups, but they might want to pay attention to it. It seems that some of those already arrested in various sweeps against Anonymous and Lulzsec have indicated that they're considering the same defense strategy. In that last one, involving Mercedes Haefer, who was charged with being a part of Anonymous, her lawyer is pointing out that President Obama has asked supporters to overload the switchboards of Congress, and that is itself a form of denial-of-service attack:

“I think this is a political persecution, end of story,” Cohen said. “This administration wants to send a message to those who would register their opposition: ‘you come after us, we’re going to come after you.’ That’s what has happened in the Eric Holder Department of Justice.”

“When Obama orders supporters to inundate the switchboards of Congress, that’s good politics, when a bunch of kids decide to send a political message with roots going back to the civil rights movement and the revolution, it’s something else,” Cohen told TPM, stipulating that he was not indicating that his client was even involved. “Barack Obama urged people to shutdown the switchboard, he’s not indicted.”

Not surprisingly, I’m sympathetic to this argument, though I do wonder how well it’ll play in court. In both of these cases, I think a decent case can be made that the actions are a form of speech, in that they were both designed to protest certain actions. The question is whether or not the courts will recognize them as legitimate and protected protests. And that may very well come down to the judges in the cases.

In 2007, a Google engineer, Michal Zalewski, published a memo detailing a potential vulnerability of both Apache and IIS Web Servers after investigating the HTTP/1.1 “Range” header implementation. He reported then:

it is my impression that a lone, short request can be used to trick the server into firing gigabytes of bogus data into the void, regardless of the server file size, connection count, or keep-alive request number limits implemented by the administrator. Whoops?

A proof of concept for the Apache DDoS tool was published as a Perl script to the "Full Disclosure" security mailing list on August 19. On August 24, the Apache Security Team published a memo explaining:

It most commonly manifests itself when static content is made available with compression on the fly through mod_deflate – but other modules which buffer and/or generate content in-memory are likely to be affected as well. This is a very common (the default right!?) configuration.

The attack can be done remotely and with a modest number of requests leads to very significant memory and CPU usage.

Active use of this tool has been observed in the wild.

There is currently no patch/new version of apache which fixes this vulnerability. This advisory will be updated when a long term fix is available. A fix is expected in the next 96 hours.

On Friday, Apache published a second advisory explaining how Apache httpd and its so-called internal ‘bucket brigades’ behave when a server processes a request to return multiple (overlapping) ranges, in the order requested. A single request can ask for a very large range (e.g. from byte 0 to the end) hundreds of times over. Currently, such requests internally explode into hundreds of large fetches, all of which are kept in memory in an inefficient way.
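The shape of such an overlapping-range request can be sketched in a few lines of Python. This is an illustration of the header format the advisory describes, not the published exploit; the range count is an arbitrary example:

```python
# Illustrative sketch of a Range header packed with overlapping byte
# ranges, the pattern behind the Apache httpd memory-exhaustion issue.
# The range count (1300) is an assumption chosen for demonstration.

def build_overlapping_range_header(num_ranges):
    """Build a Range header whose many ranges all overlap the same bytes."""
    # "0-" covers the whole file; each "5-N" range overlaps it again,
    # so a vulnerable server may buffer a large fragment per range.
    ranges = ",".join("5-%d" % i for i in range(num_ranges))
    return "bytes=0-,%s" % ranges

header = build_overlapping_range_header(1300)
print(header[:20])        # start of the crafted header
print(header.count(","))  # number of extra overlapping ranges requested
```

A single short request line carrying this header can thus demand hundreds of large in-memory fetches, which matches the "lone, short request" behaviour Zalewski described.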

This is being addressed in two ways. By making things more efficient. And by weeding out or simplifying requests deemed too unwieldy. There are several immediate options to mitigate this issue until a full fix is available.
Apache’s mitigation strategies ranged from completely disallowing the Range header, to limiting the size of requests, to deploying a custom Range-counting module. Lori MacVittie detailed how the mitigation strategies could be implemented with BIG-IP.
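The "count the ranges and reject the excess" idea behind those mitigations can be sketched as a simple request check. This is a hypothetical application-side illustration, not one of Apache's published workarounds, and the threshold of 5 ranges is an assumption:

```python
# Sketch of a range-counting mitigation: reject requests whose Range
# header lists more byte ranges than a sane client would ever send.
MAX_RANGES = 5  # assumed threshold; legitimate clients rarely exceed this

def range_header_allowed(range_header, max_ranges=MAX_RANGES):
    """Allow the request unless its Range header lists too many ranges."""
    if not range_header:
        return True                       # no Range header: nothing to limit
    if not range_header.startswith("bytes="):
        return False                      # unknown range unit: reject
    ranges = range_header[len("bytes="):].split(",")
    return len(ranges) <= max_ranges

print(range_header_allowed("bytes=0-1023"))   # a normal partial request
print(range_header_allowed(
    "bytes=" + ",".join("5-%d" % i for i in range(100))))  # attack-shaped
```

Counting rather than banning preserves legitimate uses of Range (resumed downloads, media seeking) while stopping the pathological case.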

Botnets have been taking down web sites for years by overwhelming sites with too much traffic. But now the swarms of compromised computers are being unleashed for the first time on an old kind of vulnerability: Google Dorks.

Google Dorks have been around for a while as the name for an attack technique in which hackers scan web sites, using commonly used links within company networks, to see if there are any unsecured links that can be used to break into a company’s web site. A report being released today by Imperva warns that the combination of highly automated botnets and Google Dorks gives hackers a new vector for breaking into companies on a massive scale.

Hackers sometimes manually scan sites for such stray links, but that’s like looking for a needle in a haystack. They have now figured out how to automate their scanning. They do so using botnets: farms of compromised computers that have been hijacked without their owners’ knowledge. These botnets automatically search through series of links that may be related to a company’s web site. The hackers use the botnets and Google Dorks to uncover weaknesses, and then launch conventional hacking attacks against them. The result of these attacks can be contaminated web sites, data theft, data modification, or compromised company servers.

The hackers can efficiently use popular search engines as an attack platform to retrieve sensitive data. Botnets automate the process and can evade anti-automation detection techniques commonly deployed by the search engine providers. By using bots that are distributed throughout the world, the hackers fool the search engines into thinking that the searching is being done by real human individuals, not a herd of bots controlled by a hacker.

“This is what the hackers do to conduct cyber reconnaissance,” said Rob Rachwald, a senior security strategist at security firm Imperva, in an interview. “This used to be a manual process, but now it’s automated.”

With the automation, attackers can get a filtered list of potentially vulnerable web sites in a very short time. Mining search results can expose neglected sensitive files and folders, and unearth network logs and unprotected network-attached devices.

With botnets, the hackers can run 80,000 queries in a day, eluding detection and efficiently fishing for attack targets. Imperva’s Application Defense Center observed a particular botnet in action during the May-June time frame and witnessed its use against a well-known search engine provider. By tracking this botnet, Imperva found how attackers lay the groundwork to simplify and automate the next stages in an attack campaign against web apps.

“We found out because we were observing,” Rachwald said.

Today, search engines detect automated search routines by detecting the searcher’s internet protocol, or IP, address. If the same address is used over and over again for slightly different searches, the search engines block it. But botnets consist of computers scattered around the world, all using different IP addresses. Hackers can hide their identities behind these botnets, which are available on the underground for rental.

The botnets can be used with a distributed search tool to find distinguishable resource names and specific error messages that say more than they should. Dorks are often exchanged between hackers in forums. Some of the lists of Dorks are posted on various web sites. Dorks and exploits go hand in hand.

In the attack that Imperva observed, the attackers used dorks that match vulnerable web applications and search operators tailored to a specific search engine. For each unique search query, the botnet examined hundreds of returned results. All told, the number of queries topped 550,000, including one day with 81,000 queries, all via a single botnet.

The attackers targeted e-commerce sites and content management systems. The more success they had, the more the attackers refined their search terms. Imperva saw 4,719 different variations of dorks used in the attacks.

Fortunately, there are some solutions that Google, Bing and Yahoo can use to protect against these attacks. Search engines are in a unique position to identify botnets that abuse their services and can thus find out more about the attackers. The search engines can identify unusual queries, such as those that contain terms from publicly available Dork databases or that look for sensitive files. By doing so, search engines can build larger blacklists of IP addresses. Google can also force some searchers to fill out a CAPTCHA form (where you look at distorted characters and type the word you see) to prove they are human.
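The "flag queries containing known dork terms" idea can be sketched with a simple pattern match. The dork list below is a tiny illustrative sample, not a real dork database, and the function name is an assumption:

```python
# Hypothetical sketch: flag search queries that contain operators and
# file patterns typical of published dork databases.
DORK_PATTERNS = [
    'inurl:admin',
    'intitle:"index of"',
    'filetype:sql',
    'ext:log',
]

def looks_like_dork(query):
    """Return True if the query matches any known dork pattern."""
    q = query.lower()
    return any(pattern in q for pattern in DORK_PATTERNS)

print(looks_like_dork('intitle:"index of" backup'))  # dork-style query
print(looks_like_dork('cheap flights to istanbul'))  # ordinary search
```

In practice a search engine would combine such query-content signals with volume and IP-reputation signals before blocking or challenging a searcher.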

Rachwald said that web site creators should attack themselves using common Dork search terms to find out if they are vulnerable. They should also mask their links so that they are harder to guess. Web application firewalls should be able to detect and block attempts at finding application vulnerabilities. Web sites can also use reputation controls to block attacks coming from known malicious sources.

Hackers launched cyber attacks on a number of government websites starting at 6 p.m. Thursday, but failed initially to bring some of the websites to their knees because of enhanced security protection.

Anonymous, an online international group of self-described anarchist hackers, targeted websites related to the Telecommunications Directorate (TİB). The hackers, who tried to block access to the websites belonging to TİB, failed to achieve their goals until 9 p.m.

With an election three days away, access to Turkey’s telecoms authority website, identified as a main target in the protest against the planned new Internet filtering system, was blocked.

While authorities worked to limit the disruption, other sites were also blocked including those related to social security, meteorology and several telecoms-related sites.

One of these was the official site where people can report inappropriate Internet content.

Anonymous threatened to attack Turkish government websites around two weeks before Aug. 22, the date when a new filtering system the Turkish government unveiled in May is to enter into force.

The codename of the cyber attacks was “operationturkey” and the first website to become a target was “www.tib.gov.tr,” TİB’s official website. The hackers also attacked the websites of other units operating under TİB, including the Internet Information Report Center (www.ihbar.org.tr), www.guvenliweb.org.tr and www.guvenlicocuk.org.tr. The attacks were characterized as distributed denial of service (DDoS) attacks.

They then targeted websites of a number of public institutions and political parties.

Anonymous’ cyber attacks were continuing as of Friday.