In 2014, the Internet of Things (IoT) and big data were two of the hottest buzzwords among privacy professionals. This year, “robotics” may be one of our most oft-spoken words. In this post, we look at two of the challenges that robotics brings. The first is how to address potential privacy issues as autonomous robots powered by big data and network connectivity are brought into our personal spaces. The second, often equally challenging, is how to implement robotics in a legal and regulatory landscape that was designed, in many cases, for the relatively slow-paced technologies of an Internet where the chirps of dial-up modems broadcast our connections.

According to a recent report, robot sales increased by 12 percent last year, and that trend looks set to continue. The same report notes the potential for autonomous robotics to disrupt both the economy and our culture. While many implementations of robotics may still be years away, we already use robots to vacuum our floors and mow our lawns. Robots have been used in factories for some time, and robots are being developed for mining operations and the military. Increasingly, robots are used in healthcare settings to assist surgeons and even to provide personal care to sick or elderly patients. Adam Thierer recently recounted his experience at the 2015 Consumer Electronics Show, where manufacturers displayed lifelike communications androids and personal care robots. Thierer noted that “we can expect [robots] to be fully networked, data-collecting machines that will know as much about us as any human caregiver, or possibly much more.”

As robots collect and communicate data about individuals in a variety of settings and circumstances, how can manufacturers comply with the regulatory frameworks and social norms that shape society’s expectations of privacy?

In 2012, the EU-funded RoboLaw project was launched to explore the regulation of robots. Last year, RoboLaw presented a report to EU legislators offering recommendations for the regulation of robotics. The report characterizes privacy and data protection as one of five main legal areas that will impact the development of autonomous machines. While acknowledging the potential for rigid regulations to stifle innovation, the report also notes that the absence of clear guidelines may have the same effect by leaving robotics developers in the dark. RoboLaw researchers suggest that “privacy by design” is likely to be essential to addressing privacy issues in the development of robotics. The report acknowledges that this will not always be an easy task. Privacy concerns will need to be weighed against potentially competing values like safety and efficiency, and robots may find themselves in circumstances that their developers could not predict.
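
To make “privacy by design” a bit more concrete, here is a minimal sketch, in Python, of what privacy-protective defaults might look like in a robot's sensor pipeline. The names and policy fields are hypothetical illustrations of the idea, not anything drawn from the RoboLaw report: every data-hungry behavior starts disabled, so collection becomes an explicit opt-in rather than the default.

```python
from dataclasses import dataclass


@dataclass
class SensorPolicy:
    """Hypothetical privacy-by-design defaults for a robot's sensors.

    Every data-hungry behavior starts disabled, so the privacy-protective
    path is the default and data collection is a deliberate opt-in.
    """
    record_audio: bool = False       # no ambient recording by default
    persist_video: bool = False      # frames processed in memory, not stored
    retention_seconds: int = 0       # 0 = discard immediately after use
    share_off_device: bool = False   # on-device processing by default


def handle_frame(policy: SensorPolicy, frame: bytes, store: list) -> None:
    """Process one camera frame, persisting it only when policy allows."""
    # ...on-device perception would run on `frame` here...
    if policy.persist_video and policy.retention_seconds > 0:
        store.append(frame)  # stand-in for a retention-limited data store


# With the defaults, nothing is ever retained:
retained: list = []
handle_frame(SensorPolicy(), b"raw-frame-bytes", retained)
assert retained == []
```

The design choice worth noticing is the default-deny posture: a developer must affirmatively change a policy field to collect more, which is one way of translating the report's “privacy by design” suggestion into engineering practice.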

These challenges are familiar in kind, if not in magnitude and combination, to those already facing the privacy community. Autonomous robotics brings together the IoT and big data, both of which have presented challenges for industry, regulators and privacy advocates. The utility of traditional implementations of Fair Information Practice Principles, such as data minimization and notice and choice, has already been brought into question. Autonomous robotics is likely to further challenge our traditional notions. As we develop our approach to these challenges, we would be well served to look beyond our fears of what might come and, like the RoboLaw researchers, consider how our approach can provide incentives for beneficial innovation.

But developing a privacy and data protection framework for the future of robotics is not our only challenge. We must also face the challenges that our current framework presents for the robotics industry. Autonomous robots will need to collect a vast array of information from their environments, and developers may not be able to anticipate all of the environments in which a robot may find itself. How would one best design a connected robot to avoid running afoul of the Children’s Online Privacy Protection Act when it encounters children in a shopping center? If a robot is designed to respond to vocal commands, how does a developer avoid violating electronic eavesdropping laws? And if an autonomous robot provides interactive services to people in public and needs to know its own precise location, how can it avoid also learning an individual’s precise location without first obtaining affirmative express consent?
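
There is no single answer to these questions, but as a purely illustrative sketch (hypothetical names, and certainly not legal advice), one approach is to gate sensitive collection behind affirmative, expiring consent and to refuse collection from children outright, degrading the service rather than collecting covertly:

```python
import time
from dataclasses import dataclass, field


def read_gps():
    """Stand-in for a real positioning sensor."""
    return (38.8977, -77.0365)


@dataclass
class ConsentLedger:
    """Hypothetical per-person consent records, keyed by an opaque ID."""
    granted: dict = field(default_factory=dict)  # person_id -> expiry time

    def grant(self, person_id: str, ttl_seconds: float = 3600.0) -> None:
        self.granted[person_id] = time.time() + ttl_seconds

    def has_consent(self, person_id: str) -> bool:
        return self.granted.get(person_id, 0.0) > time.time()


def collect_location(person_id: str, is_minor: bool, ledger: ConsentLedger):
    """Return a precise fix only with affirmative, unexpired consent.

    Children are excluded outright (a COPPA-style bright line), and the
    robot otherwise falls back to a degraded, location-free mode.
    """
    if is_minor or not ledger.has_consent(person_id):
        return None
    return read_gps()


ledger = ConsentLedger()
assert collect_location("visitor-42", is_minor=False, ledger=ledger) is None
ledger.grant("visitor-42")  # affirmative express consent, with an expiry
assert collect_location("visitor-42", is_minor=False, ledger=ledger) is not None
```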

To be sure, there are likely a variety of innovative ways for robotics developers to address these issues. But will the costs associated with those innovations be reasonable in light of the benefits? In some cases, the answer will be yes. However, just as many of us now find the Electronic Communications Privacy Act to be ill-suited to the world of cloud applications, we will likely find many of our current privacy frameworks to be ill-suited to the world of robotics.

As robotics technologies proliferate, we therefore need to look both forward and backward as we assess the appropriate regulatory framework. Useful innovation may be stymied both by an overzealous haste to address purely speculative harms and by a failure to amend regulations best suited to obsolete technologies.

1 Comment

  • Richard • Feb 20, 2015
    I cannot help but think of Asimov's laws of robotics, written about in his many influential science fiction stories. The basic tenet of 'above all things, do no harm to people' is one that perhaps we can still learn from, and perhaps our rules in such a new area would do well to start from such a position, which I suggest would then translate into a 'privacy first' model in this particular realm.