The Internet of Things Cybersecurity Improvement Act of 2017: How It Helps the Good Guys―And (Potentially) Hurts Them Too

Posted by: Jordan Brunner

Every day, we become more connected. And not just through Facebook. Thomas Friedman spoke truly and prophetically in 2005 when he wrote that the world is flat, largely as the result of information technology. The relentless rise of innovations in technology has facilitated the sharing of information and the breakdown of barriers. One of the most prominent trends in this vein is the increasing connectivity of devices. Computers, once the clunky domain of militaries and universities, now exist in everything from refrigerators to Jeeps, a phenomenon known as the Internet of Things (IoT).

These connections carry with them enormous potential for productivity and human happiness. Home IoT devices will eventually plan our commutes and prepare our meals before we even rise from bed. Industrial IoT devices ensure properly stocked inventories and provide a more efficient processing infrastructure. Medical IoT devices keep us fitter and healthier for longer.

But with these benefits come concerns. Being connected every moment of the day means being engaged every moment of the day―whether you like it or not. Just as Facebook collects data on each user every time they log in, so too do the myriad devices that now operate as our own personal computers all at once. This means there is no time when we are not being updated, instructed, monitored, and even controlled by something or someone. This has profound implications for privacy, especially depending on who has access to that information.

But access isn’t just about privacy. Access is also about security. Sure, we may be fine with corporations having the data to improve our consumer experience, or law enforcement and intelligence agencies having the data to catch criminals or terrorists if they use proper legal process. Or perhaps we don’t―numerous developments within the past few years have targeted both corporations for their access (think the Data Brokers controversy) and law enforcement and the intelligence community for their access (think Carpenter v. United States). Yet the group of actors we surely don’t want having access to that information is the very group that law enforcement and the intelligence community seek to stop: criminals and terrorists.

Recently, we have seen a spate of cybersecurity incidents that demonstrate the vulnerability of a myriad of devices to these actors. That Internet-connected Jeep? It’s hackable while you’re driving it. The baby monitor you use to watch over your newborn? Hackers are doing that for you. The pacemaker that is keeping the blood pumping to your heart? It can be turned off remotely. To combat these incidents of cyber theft and disruption, it is necessary both to identify and patch vulnerabilities and to prevent them from arising in the first place.

Thankfully, we have a good (if somewhat incomplete) solution to both of those problems. Recently, Senators Mark Warner, Ron Wyden, Cory Gardner, Steven Daines, and Margaret Wood Hassan introduced the Internet of Things Cybersecurity Improvement Act (IoTCA) of 2017. In short, the Act is designed to secure IoT devices that are vulnerable to hacking by requiring government contractors to ensure that there are no known security vulnerabilities in their devices, based on guidelines promulgated by the executive branch, and by requiring that they use protections like “industry-standard” encryption. And as Randal Milch points out, the Act uses the government’s purchasing power to set what may well become a norm across industries.
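To make the “no known vulnerabilities” idea concrete, here is a minimal sketch of the kind of check a contractor might automate: querying NIST’s National Vulnerability Database for CVE entries that mention a given product. The endpoint and parameter names reflect my understanding of the public NVD REST API, and the product keyword is a hypothetical placeholder; nothing in the Act or its guidelines prescribes this particular approach.

```python
# Illustrative sketch only: query NIST's National Vulnerability Database (NVD)
# for known CVEs mentioning a given IoT product. The endpoint and parameter
# names reflect the public NVD REST API as I understand it; the product name
# is a hypothetical placeholder.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def known_cves(product_keyword: str, limit: int = 20) -> list[str]:
    """Return CVE IDs whose descriptions mention the given product keyword."""
    resp = requests.get(
        NVD_API,
        params={"keywordSearch": product_keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    return [item["cve"]["id"] for item in data.get("vulnerabilities", [])]

if __name__ == "__main__":
    # Hypothetical device name; a contractor would substitute its own product.
    for cve_id in known_cves("Acme SmartThermostat"):
        print(cve_id)
```

A check like this would only catch publicly disclosed flaws, which is part of why the Act pairs the certification requirement with protections for the researchers who find new ones.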

But as Milch also points out, perhaps the most important part of the Act is its limitation of liability for what are commonly called “white hat” hackers. “White hat” hackers are those who hack “for good”―that is, to ethically ensure that security systems are not vulnerable to attack. By contrast, “black hat” hackers are those who hack for criminal or terroristic purposes, and “grey hat” hackers are those with a foot in both camps. White hat hackers are especially useful in protecting systems from intrusion, and have been used by the Pentagon during so-called “bug bounty” programs and by the FBI to assist in law enforcement operations.

Activity by white hat hackers has traditionally been criminalized under the Computer Fraud and Abuse Act (CFAA), though many hackers are not charged thanks to prosecutorial discretion. While there are many arguments for reforming or abolishing the CFAA, it remains a much-relied-upon tool in the prosecutor’s toolkit.

So, what does this have to do with the IoTCA? Well, to understand that, you have to understand something about copyright law. Congress passed the Digital Millennium Copyright Act (DMCA) in 1998 to implement the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty, both signed in 1996. These treaties and the DMCA created “anti-circumvention” laws―that is, laws designed to prevent people from reverse engineering technology to access copyrighted works. The operative provision of the DMCA, 17 U.S.C. § 1201(a), reads, “No person shall circumvent a technological measure that effectively controls access to a work protected under [the Copyright Act of 1976].” 17 U.S.C. §§ 1203 and 1204 attach civil and criminal penalties, respectively, to any violation of § 1201(a).

This is where the IoTCA comes in. Section 3(c)(3)(a) of the Act amends 17 U.S.C. § 1203 by adding subsection (d), which would read:

A person shall not be held liable under this section if the individual—

(1) in good faith, engaged in researching the cybersecurity of an Internet-connected device of the class, model, or type provided by a contractor to a department or agency of the United States . . .

Section 3(c)(3)(b) of the Act amends 17 U.S.C. § 1204, again by adding subsection (d), which would read:

Subsection (a) shall not apply to a person who—

(1) in good faith, engaged in researching the cybersecurity of an Internet-connected device of the class, model, or type provided by a contractor to a department or agency of the United States . . .

The IoTCA also limits criminal liability for these kinds of actions under the CFAA. In short, the IoTCA provides an exception from civil and criminal liability for those who reverse engineer devices to understand where their vulnerabilities lie, so long as they do so in “good faith.” Thus, the IoTCA protects white hat hackers from being sued or thrown in jail.

But that isn’t the end of the story. Before moving on, it is important to understand that while the DMCA bifurcates liability into civil and criminal provisions, some of the relief available to plaintiffs under the civil provisions looks an awful lot like what law enforcement might seek in a criminal context. For instance, a court, under its equity power, can “order the impounding, on such terms as it deems reasonable, of any device or product that is in the custody or control of the alleged violator and that the court has reasonable cause to believe was involved in a violation” (§ 1203(b)(2)) (emphasis added), and it may also, as part of finding a violation, “order the remedial modification or the destruction of any device or product involved in the violation that is in the custody or control of the violator or has been impounded under paragraph (2)” (§ 1203(b)(6)) (emphasis added).

Now, to delve into a potentially thorny issue. The DMCA does not define “custody or control” as used in § 1203, outlined above. Why is that a problem? Well, in the law, there are “canons” of statutory construction―which is a fancy way of saying there are methods of interpreting a statute. The primary canon is the “plain meaning rule,” which says we are to follow the plain meaning of the statute. This has been articulated in numerous cases, a recent one being the U.S. Supreme Court’s decision in Sebelius v. Cloer (2013). The most obvious way to apply the rule is to use the dictionary definition of a word or phrase when the statute does not define it itself. Merriam-Webster defines the two words thus: “custody” means “immediate charge and control (as over a ward or suspect) exercised by a person or an authority,” and “control” means “to exercise restraining or directing influence over,” or “to have power over.”

Let me use an initial hypothetical to illustrate the basic issue presented by these plain-meaning definitions and § 1203(b) of the DMCA. Suppose a hacker wanted to use devices connected to the Internet to target a website that he (or she) disagreed with – let’s use Breitbart – with a distributed denial of service (DDoS) attack. So the hacker breaches the defenses of thousands of insecure devices, from refrigerators to Fitbits, and uses them as “bots” in a botnet to flood Breitbart with an overload of traffic, causing the site to shut down. The hacker then steals the source code for the site and selectively cuts some of the articles written by Breitbart authors to poke fun at President Trump. Breitbart brings a “John Doe” suit against the hacker, in part requesting that the court use its power under § 1203 to shut down all devices being used to flood the site.

On a plain reading of the statute, it seems the court would be able to do this―after all, the devices are, as those words are plainly defined, both in the custody and under the control of the hacker as bots. But that’s a lot of angry IoT owners. Think about it―your refrigerator or your Alexa shut down because of an order from a judge who might not even sit in your state.
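For a sense of what the flood in this hypothetical looks like from the victim’s side, here is a minimal sketch of per-IP rate limiting of the sort a site might use to shed bot traffic. The window and threshold are arbitrary assumptions, and real DDoS mitigation happens upstream at far greater scale; this is only meant to illustrate the mechanics of “an overload of traffic.”

```python
# Minimal sketch of per-IP rate limiting, to illustrate how flood traffic from
# thousands of hijacked IoT "bots" overwhelms a site. The window and threshold
# are arbitrary assumptions; real mitigation happens upstream of the web server.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100  # arbitrary illustrative threshold

_recent = defaultdict(deque)  # source IP -> timestamps of recent requests

def allow_request(source_ip: str) -> bool:
    """Return True if this request fits the per-IP budget, False if it should
    be dropped as part of a suspected flood."""
    now = time.time()
    window = _recent[source_ip]
    # Discard timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False
    window.append(now)
    return True
```

The trouble, of course, is that a botnet of thousands of devices stays under any per-IP threshold while still saturating the target, which is exactly why the legal remedy in the hypothetical reaches for the devices themselves.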

Now let me use another hypothetical, this one a bit more nuanced. Suppose a hacker finds a flaw that constitutes a serious vulnerability in a large part of the internet’s infrastructure, and wants to fix it. He (or she) reverse engineers the code, develops a solution, and then sends it out to those who need it. However, the hacker then uses knowledge gained from that experience to do further research, and develops a tool that can be put to malicious use. The tool, unbeknownst to the hacker, is later used maliciously to carry out an attack similar to the one against Breitbart. The victim of that attack brings suit, and all devices are again turned off on a court order, including the hacker’s.

Having outlined these scenarios, here’s the punchline: neither of these hypotheticals is actually hypothetical. Both happened in real life.

The first hypothetical played out in two very high-profile cases in 2016. The first came when cybersecurity expert Brian Krebs’ blog, KrebsOnSecurity, was hit with a massive DDoS attack―there are “some indications that th[e] attack was launched with the help of a botnet that has enslaved a large number of hacked so-called ‘Internet of Things’ (IoT) devices – routers, IP cameras, and digital video recorders (DVRs).” The second came when three college students harnessed the power of connected devices in an attempt to gain an advantage in the online game Minecraft. The students’ effort “slowed or stopped [the Internet] for nearly the entire eastern United States, as the tech company Dyn, a key part of the internet’s backbone, came under a crippling assault.”

The second hypothetical (mostly) happened last year, when Marcus Hutchins, a British cybersecurity researcher, developed a kill switch to stop the WannaCry ransomware attack, but was then indicted by the Justice Department for violating the CFAA due to his alleged development of “Kronos,” a piece of malware designed to harvest banking credentials.
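The WannaCry “kill switch” Hutchins found is easy to sketch in concept: the malware queried a hardcoded, unregistered domain and stood down if the domain resolved, so registering the domain halted its spread. The sketch below is only a conceptual illustration of that check, not WannaCry’s actual code, and the domain shown is a placeholder, not the real kill-switch domain.

```python
# Conceptual sketch of a WannaCry-style "kill switch": the malware checks
# whether a hardcoded, normally unregistered domain resolves, and halts if it
# does. The domain below is a placeholder, not the real kill-switch domain.
import socket

KILL_SWITCH_DOMAIN = "example-killswitch-placeholder.test"  # hypothetical

def kill_switch_active() -> bool:
    """Return True if the kill-switch domain resolves (i.e., someone has
    registered it), which signals the malware to stop spreading."""
    try:
        socket.gethostbyname(KILL_SWITCH_DOMAIN)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    if kill_switch_active():
        print("Kill switch tripped: halt.")
    else:
        print("Kill switch not registered: keep spreading.")
```

Registering an unclaimed domain is exactly the kind of good-faith research activity the IoTCA’s liability carve-outs are meant to protect.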

Here’s why all this is important: by limiting liability under the CFAA and the DMCA, the sponsors of the IoTCA have removed the most obvious barrier to security researchers developing solutions to the security problems behind now-ubiquitous cyber attacks. But a lingering question remains: in limiting liability for the white hat hacker, do the current laws still leave open the possibility that (1) nefarious hackers could cause reverberating impacts through a judge’s order shutting down devices, and (2) the courts retain the power to order seemingly drastic relief against someone who, under the new limits on liability, is no longer in danger of heavy fines or jail time? Even as it tries to clear things up in the IoT age, the IoTCA leaves quite a bit of judicial discretion in an area that continues to change rapidly.
