Posted By: Jamie Winterton
What should one do if one discovers a security problem on the internet? It’s not a question most internet users have to ask themselves, but something that hackers – whose hats range from snow-white benevolence to pure black evil – must consider carefully. There are a variety of options. Some choose to leave it alone. Doing nothing is usually a safe bet. Some will choose the responsible disclosure route – contacting the company and having a private conversation about the vulnerability, its risk, and ways to remediate it. Sometimes there are financial incentives. I don’t mean the nefarious ones – selling the vulnerability on the dark web, for example – but many companies and government organizations have established “bug bounty” programs, wherein a researcher who finds a security problem can get paid for disclosing it, the amount depending on the severity and potential impact of the bug. When implemented well, these programs have helped companies improve their security posture by getting new eyes on old problems that may have been overlooked.
But even as security researchers are incentivized by these new programs, independent security research in general is sharply constrained by legislation. One example is a recent bill in Georgia, one of the strictest measures proposed at the state level. Senate Bill 315 proposes to amend the Official Code of Georgia to create “the new crime of unauthorized computer access” and prescribes significant punishments to match. The unfortunate part of the bill (which has yet to be signed by the governor) is that it provides no exemption for responsible disclosure, meaning that well-intentioned security researchers – those who report vulnerabilities rather than exploiting them or selling them – have no legal protection.
The bill specifically states that the newly-defined crime does not include “Cybersecurity active defense measures that are designed to prevent or detect unauthorized computer access” – but what the bill’s authors fail to realize is that offense is an essential part of defense in cybersecurity. Industry and government employ “red teams” – people whose job it is to attack specified networks – but even the best red teams can’t find everything. Computer networks are complex, multi-layered systems. “You can’t find all the bugs yourself,” said Katie Moussouris, who designed Microsoft’s bug bounty program and developed ‘Hack the Pentagon’, the US federal government’s first bug bounty program. “Whether you’re a well-funded government like the U.S. or anyone else, you have to work with the hacker community.” Oddly, Georgia SB 315 excludes “persons who are members of the same household”, so hacking your spouse’s computer or devices is apparently acceptable to Georgia’s lawmakers.
Georgia should be well aware of how vulnerable systems can be exploited by bad actors. Just last month, the city of Atlanta was hit by a ransomware attack. The attack paralyzed the city for over a week – online services like utility or parking payments were frozen and court proceedings were halted as city officials scrambled to restore backups or implement manual methods as a fallback. And even though the ransomware variant used in the attack was sophisticated, forensic research showed that the city had also been a victim of much simpler ransomware because it simply hadn’t patched. “[These] results definitely point to poor cybersecurity hygiene on the part of the City and suggest this is an ongoing problem, not a one time thing,” said Jake Williams, founder of Rendition Infosec (also located in Georgia).
Stifling cybersecurity laws aren’t just the domain of state legislation. The federal Computer Fraud and Abuse Act (CFAA) has long been held in contempt by security researchers for its vague language and heavy penalties. Like Georgia SB 315, the CFAA doesn’t define “authorized access”, and doesn’t include any provisions for responsible disclosure. Originally drafted in 1984 (Ronald Reagan’s response to the Matthew Broderick movie ‘WarGames’, or so the story is told), the original intent was to protect computer networks critical to national security. But as Internet technology rapidly expanded, the nebulous language of the CFAA was applied broadly, doling out harsh penalties to security researchers – penalties that often exceeded those for violent crimes. The sadly iconic example is that of Aaron Swartz, a brilliant young computer scientist who believed strongly in the freedom of information. Aaron violated the terms of service on an MIT site, and downloaded almost 5 million documents from a paywalled site that contained academic journal articles. These articles, I might add, were most often derived from federally-funded university research and as such should be accessible to the taxpayers who funded them. When he was discovered, he faced charges carrying excessive fines and considerable jail time. “Stealing is stealing whether you use a computer command or a crowbar,” said Carmen Ortiz, one of the prosecutors in the Swartz case, even though many actual crowbar-related crimes would have netted milder consequences. Ortiz never got to see the case through – Aaron tragically committed suicide, despondent over the potential effects the case would have on his mission and on his personal life. We’ll never know what intellectual contributions have been lost as a result.
Laws like Georgia SB 315, which looks a lot like the CFAA, lack the sophistication to distinguish beneficial research from malicious activity. It’s time to codify responsible disclosure and provide legal avenues for “white hat” hackers at the federal and state levels. Without explicit legal coverage, even those who responsibly disclose vulnerabilities are at risk. Consider the case of Justin Shafer, who discovered that a dentist’s office was storing unencrypted patient information online. The server was not password-protected and easily accessed via a browser – prompting questions from the security community on whether this actually was “unauthorized access” – and Shafer appropriately alerted the dental company of his findings. His reward was a 6:00 a.m. raid by the FBI at his home, and being hauled to jail in his boxer shorts. One of the agents told Shafer he had “exceeded authorized access,” a crime under the CFAA. Shafer didn’t sell the data, publish the vulnerability openly or on the “dark web”, or otherwise exploit his finding. Instead, he worked with DataBreaches.net to inform the company of the problem. But the CFAA doesn’t cover intention very well, nor does it do a good job of defining “authorized access”, as this event plainly shows.
Patterson Dental didn’t respond to Shafer or DataBreaches, which raises the question: who is liable for exposed data – the party that leaves it exposed, or the one who discovers the exposure? The CFAA only addresses the latter. (The Federal Trade Commission is interested in the former, but its attention has been spotty and its punishments uneven.)
Arizona has a similar law on its books to address “computer tampering”. Arizona Revised Statute 13-2316 is a bit more sophisticated than Georgia SB 315 in a few ways. One improvement is how intent is addressed. The ARS includes the following language in its definition of “computer tampering”:
“the intent to devise or execute any scheme or artifice to defraud or deceive, or to control property or services by means of false or fraudulent pretenses, representations or promises.”
While not explicitly providing a safe harbor for security researchers, the ARS provides language that can be used to differentiate white-hat security research from malicious activity. Another positive example from the ARS is the following:
“Recklessly using a computer, computer system or network to engage in a scheme or course of conduct that is directed at another person and that seriously alarms, torments, threatens or terrorizes the person. For the purposes of this paragraph, the conduct must both:
(a) Cause a reasonable person to suffer substantial emotional distress.
(b) Serve no legitimate purpose.”
If the ARS stopped there, malicious activity could be reasonably prosecuted, clearing the way for security researchers to contribute positively. Responsible disclosure, when it is done right, causes no emotional distress because it’s done privately. A company may feel threatened, but a proper responsible disclosure serves a very clear purpose – informing the company of issues that would otherwise be exploited by a less ethical hacker, and of ways these issues can be remediated. Ethical hacking falls outside the scope quoted above.
Unfortunately, the ARS then demolishes these protections by including more CFAA-style verbiage in its conclusion. The statute lists “Knowingly obtaining any information that is required by law to be kept confidential” and “Knowingly accessing any computer, computer system or network or any computer software, program or data that is contained in a computer, computer system or network” as class 6 felonies (punishable by up to two years in prison). So even after some effort toward specific language and a delineation of malicious versus ethical activity up front, the ARS undercuts those portions with vague and threatening language, echoing the CFAA and therefore invoking all the same issues.
After Aaron Swartz’s suicide in 2013, Rep. Zoe Lofgren (D-CA) introduced ‘Aaron’s Law’, an attempt to modernize federal law around computer fraud and properly balance penalties for computer-related crimes. It addresses the Internet in its modern form, not as an exclusive network of defense and university machines. The proposed Aaron’s Law removes terms-of-service violations from automatic prosecution, and distinguishes harmful crimes from standard internet activities. Aaron’s Law wouldn’t have solved all the issues with the CFAA – an explicit safe harbor for researchers still hasn’t been proposed – but it would have been a significant and timely upgrade to the current law.
Aaron’s Law never gained the traction it needed, despite having both a Democratic and a Republican sponsor. As Slate’s Justin Peters noted in his analysis of Aaron’s Law, “Congressmen, like most people, care a lot more about meting out punishment to ‘bad hackers’ than they do about offering justice to ‘good hackers.’” And so the CFAA remains an impediment to well-intentioned security researchers, a chilling consideration for those who could contribute positively to our collective security. It’s a choice that every ethical hacker needs to soberly consider. I personally have chosen not to pursue independent security research, knowing that the response to anything I find and responsibly disclose might be that I’m arrested in front of my children. For me, no security improvement or bug fix will ever be worth it. Fortunately, others are willing to take the risk, and they contribute meaningfully to security through bug bounty programs, responsible disclosure, and constructive dialogue with companies and government organizations. I’m grateful for their efforts, and I hope we can establish a legal “safe harbor” to not just protect them, but encourage other talented hackers to help. Goodness knows, we need it.