
The Privacy Advisor | Is a Prior Consent-Based Model Viable for a Voluntary Code of Conduct?

While it’s clear those participating in the National Telecommunications and Information Administration (NTIA) multi-stakeholder effort to create a voluntary code of conduct on the commercial uses of facial recognition technology are interested in seeing a code come to fruition, the process seems to have hit a bit of a speed bump.

At yesterday’s NTIA-led meeting in Washington, DC – the fifth in this process – advocates, industry and government representatives debated whether a code that restricts the capture of images taken in public spaces would conflict with constitutional guarantees; whether the code should focus on the collection of facial recognition data or on its use; and whether a prior-consent model is viable.

One overarching point of contention: What constitutes a privacy invasion in the first place? Alvaro Bedoya, chief counsel to the Senate Judiciary Subcommittee on Privacy, Technology and the Law, who represents Sen. Al Franken (D-MN), distilled it into the classic use-vs.-collection debate. Some in the room felt that if a company they’d never heard of used facial recognition technology to identify them without their consent, an inherent privacy violation would have occurred. The other side felt no violation occurred without some kind of harm related to the use of that identification.

This potential stumbling block arose as the meeting looked at proposed principles, submitted by stakeholders earlier this month, which might be incorporated into a future code. The 34 proposed principles on the consolidated list vary from allowing businesses the freedom to use facial recognition technology on private property to allowing consumers choice about when such technology is used at all, anywhere.

Stakeholder-Submitted Proposals

The American Civil Liberties Union (ACLU), which submitted the majority of the proposals (see the full list of proposed principles here), suggested, for example, that entities must receive informed, written consent from data subjects before enrolling them in a facial recognition database.

Carl Szabo of NetChoice wondered, though, how to handle cases where more covert surveillance may be necessary, such as identifying sex offenders or shoplifters.

Chris Calabrese, representing the ACLU, said that’s where context-sensitive consumer choice should come into play. Perhaps surprisingly, Szabo and Calabrese agreed that it may be more important to focus on use of the data. Another ACLU proposal suggested, for example, that a faceprint may not be sold or shared without the individual’s consent and that a company should make it “convenient and accessible” for individuals to “withdraw use of their (facial recognition) data,” with an action as easy as clicking a link.

Similarly, the Center for Digital Democracy suggested that facial recognition services targeting or including adolescents should be required to spell out their intended uses and that companies using facial recognition should be required to publish an annual public report on their operations, developments and testing results.

Tim Sparapani, representing the Application Developers Alliance, opposed the ACLU’s proposal that the code be based on prior consent. “There are plenty of actors now that don’t get consent before they get a biometric,” he said. While the right to remain anonymous in public is fundamental, he argued, a code built on prior consent would be logistically impossible, and he would oppose such a model.

The Interactive Advertising Bureau, representing more than 600 media and tech companies, submitted proposals suggesting the code encourage a harm-based approach in which “effective solutions” to specific risks are found to “avoid hampering current and future legitimate uses of facial recognition technology.” It also proposed the code apply equally “to all applications, both online and offline.”

The Center for Democracy and Technology’s Joe Hall objected to a harm-based approach, which he considers far too narrow.

“It’s a simple fact that as a member of the public, you become aware that people are capturing your facial print while you move about town, and now there is a severe chilling effect to moving about town anymore,” he said. “I don’t think you can call (a chilling effect on moving about freely) a harm, like being injured or financially damaged,” but that doesn’t mean the ability to move freely isn’t something worth protecting.

Bedoya reminded the group that the code aims to align itself with the Consumer Privacy Bill of Rights (CPBR), which “affords protections to all data, not just data that purports to do harm. So if we want to implement the CPBR, we for sure need to illustrate harms, but we can’t limit ourselves to that.”

Another point of contention involved whether encryption requirements should be part of the code, which drew some debate over the merits of encryption itself.

Alex Stoianov of the Office of the Privacy Commissioner of Ontario phoned in to point to published evidence that an encrypted facial recognition template can be reverse-engineered into a facial image, even if the initial template is proprietary.

“This notion of non-reconstruction is nonexistent,” he said. “It can be done in 90 percent of cases. So I wouldn’t count on that.”

The proposed code’s authors would become “the laughing stock of the privacy community,” Stoianov said, and the code’s credibility would “go down to zero” if the code were built on the idea that individuals can’t be identified based solely on a biometric template. In fact, according to a paper submitted by Stoianov and a colleague for the NTIA process, templates generated by a facial recognition system serve “as a surrogate of a person’s identity and are sensitive PII by virtue of the fact that they are linked to an identifiable individual.”

Perhaps it’s not surprising, then, that the meeting concluded with general agreement that the stakeholders are too far apart for work to begin on actually drafting a code. While only two more meetings officially remain in the NTIA process, additional meetings may well prove necessary.

In the near term, the next meeting is June 3 in Washington, DC, at which the group plans to identify a list of risks and a list of definitions toward a facial recognition code.
