IAPP Privacy Advisor
https://iapp.org/news/privacy-advisor/
Original reporting from IAPP staff and contributed features from IAPP members.

Understanding marketing privacy: Overlooked aspects, key questions and practical audits
https://iapp.org/news/a/understanding-marketing-privacy-overlooked-aspects-key-questions-and-practical-audits

In the dynamic realm of marketing and privacy, striking the balance between innovative strategies and compliance is more crucial than ever. As legislative landscapes shift at an unprecedented pace, marketers must navigate this changing terrain with agility. For example, the integration of privacy by design and privacy by default principles is not merely a checkbox for compliance — it's a strategic imperative.

Frequently overlooked aspects

From a marketing perspective, small adjustments to strategy can be made daily. Though the changes may be minor, they can matter for compliance, and these easily overlooked tweaks affect privacy as well.

Website modifications. Adding new forms or extra data fields sounds easy, but beyond the visual appeal, marketing and privacy teams must both assess the impact of even the smallest changes on user data and on the collection of new data.

Cookie considerations. The world of cookies is multifaceted. Delving deeper into distinctions between session and persistent cookies, or first-party and third-party cookies, is not just a technicality but a commitment to transparency. A clear and accessible cookie statement builds trust through articulate communication. This also means if anything changes in the cookies — or other tracking technologies — that are being used, the cookie statement needs to be updated.
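To make the session-versus-persistent distinction concrete, the minimal sketch below (hypothetical cookie names, not tied to any real consent platform) shows that the practical difference is simply whether an expiry attribute is set; adding or changing attributes like these is exactly the kind of tracking change that should trigger an update to the cookie statement.

```typescript
// Minimal illustrative sketch: a session cookie has no expiry attribute, while a
// persistent cookie sets one (Max-Age/Expires). Cookie names are hypothetical.
function buildSetCookieHeader(
  name: string,
  value: string,
  maxAgeSeconds?: number // omit for a session cookie, set for a persistent one
): string {
  const parts = [`${name}=${encodeURIComponent(value)}`, "Path=/", "SameSite=Lax", "Secure"];
  if (maxAgeSeconds !== undefined) {
    parts.push(`Max-Age=${maxAgeSeconds}`); // persists across browser restarts
  }
  return parts.join("; ");
}

// Session cookie: cleared when the browser closes.
console.log(buildSetCookieHeader("sessionId", "abc123"));
// Persistent cookie: e.g., remembering a consent choice for 12 months.
console.log(buildSetCookieHeader("analyticsConsent", "granted", 60 * 60 * 24 * 365));
```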

New partners or vendors. The integration of new partners into the marketing technology stack is not a one-off event. It is also very "easy" to engage a new third party; sometimes only a few clicks are needed. Engaging legal and privacy teams from the beginning and during kickoffs can alleviate delays that may come later. Regularly reviewing agreements ensures ongoing compliance, transforming what might seem like routine administrative work into a critical aspect of risk mitigation and relationship management.

Collecting ancillary information. A legal basis for each piece of information collected is paramount. For marketing purposes, consent is the most common legal basis. Beyond mitigating privacy risks, identifying the legal basis forces clarity in communicating the purpose behind collecting specific information, fostering transparency.

Updating privacy notices. Consistency is the bedrock of trust. Regularly updating and maintaining clear and uniform privacy notices demonstrates a commitment to transparency, building trust with users and contributing to positive user experiences. Communicate these changes rather than expecting users to discover them. Also, if new data is collected or shared with third parties, keep in mind that updating the privacy statement alone won't be enough.

Utilizing free tools. While the allure of free tools is undeniable, the associated long-term implications must not be overshadowed. These tools tend to come with preset terms and clickwrap agreements that are generally accepted nonchalantly. Since no "real" contracts are needed, legal review tends to be overlooked. Educating teams about the risks, particularly with artificial intelligence tools like ChatGPT, is essential for making informed decisions that balance innovation with privacy considerations. The terms and conditions of free tools often contain critical details that might conflict with an organization's privacy stance. A careful examination of these terms is not only a legal necessity but an investment in long-term privacy compliance.

Abandonment tracking and user notification. Abandonment tracking, while a valuable tool, requires upfront user notification. Proactively incorporating a notification system ensures transparency and aligns with evolving privacy expectations, enhancing the overall user experience.
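One possible way to operationalize that notification-first approach is to gate abandonment events on a recorded notice-and-consent state, as in the hypothetical sketch below; the type and function names are illustrative and not part of any specific tracking SDK.

```typescript
// Illustrative only: abandonment tracking fires solely when the user has been
// notified up front and has agreed. All names here are hypothetical.
interface TrackingConsent {
  notifiedUpFront: boolean;
  abandonmentTrackingAllowed: boolean;
}

function recordCartAbandonment(consent: TrackingConsent, cartId: string): void {
  if (!consent.notifiedUpFront || !consent.abandonmentTrackingAllowed) {
    return; // no notice or no consent: do not track
  }
  console.log(`Cart ${cartId} abandoned; queueing a follow-up within the stated purposes.`);
}

recordCartAbandonment({ notifiedUpFront: true, abandonmentTrackingAllowed: true }, "cart-42");
recordCartAbandonment({ notifiedUpFront: false, abandonmentTrackingAllowed: true }, "cart-43"); // skipped
```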

Data sharing with third parties. Securing third party-specific consent for data sharing for marketing purposes is imperative. Simply receiving, and thus holding, the data as a third party is not enough. If consent for sharing the data is needed, it must be requested for the specific purposes of third-party use, respecting user choices.
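One way to keep that purpose-specificity visible in practice is to store consent per purpose rather than as a single flag. The sketch below is a hypothetical illustration under that assumption, not a reference implementation.

```typescript
// Hypothetical consent record keyed by purpose: holding consent for one purpose
// (e.g., a newsletter) does not authorize sharing data with third parties.
type Purpose = "newsletter" | "third-party-marketing" | "analytics";

interface ConsentRecord {
  userId: string;
  grantedPurposes: Set<Purpose>;
}

function mayShareWithThirdParty(record: ConsentRecord): boolean {
  return record.grantedPurposes.has("third-party-marketing");
}

const record: ConsentRecord = { userId: "u-1", grantedPurposes: new Set<Purpose>(["newsletter"]) };
console.log(mayShareWithThirdParty(record)); // false: separate consent is still needed
```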

Email collection. Implementing clear opt-in mechanisms for email collection and sending newsletters is a foundational step. Automatically consenting attendees into marketing lists without explicit permission can lead to serious violations, eroding trust and damaging brand reputation.
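A common way to implement an explicit opt-in is a double opt-in flow, in which no address joins a marketing list until the user confirms via a message sent to that address. The sketch below is a simplified, hypothetical illustration of that flow with stand-in helpers rather than a real email service.

```typescript
// Simplified double opt-in sketch: an address is only added to the marketing
// list after the user explicitly confirms.
type SignupState = "pending-confirmation" | "subscribed";

const signups = new Map<string, SignupState>();

function requestSignup(email: string): void {
  signups.set(email, "pending-confirmation");
  console.log(`Confirmation email sent to ${email}`); // stand-in for a real mailer
}

function confirmSignup(email: string): boolean {
  if (signups.get(email) !== "pending-confirmation") return false;
  signups.set(email, "subscribed"); // only now does the address join the list
  return true;
}

requestSignup("attendee@example.com");
console.log(confirmSignup("attendee@example.com")); // true: explicit confirmation recorded
```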

Spam complaints. Beyond legal consequences, unsolicited emails can tarnish a brand's image. Aligning marketing practices with regulations like the ePrivacy Directive, EU General Data Protection Regulation and the U.S. Controlling the Assault of Non-Solicited Pornography And Marketing Act ensures not just compliance but also positive brand perception and sustained user engagement.

Be creative: Work with marketing

For privacy professionals, it can feel like marketing colleagues speak a completely different language. But through close contact and working together, one can strengthen the other. Try to look at legal requirements with a creative eye, just as marketers look at data with a creative eye.

For example, opting for a creative cookie banner is not just a compliance necessity but an opportunity to engage users. Collaborating with the marketing team to design a banner that aligns with the overall aesthetic of the website turns a mandatory element into a user-friendly experience that reinforces brand identity.

Transforming privacy statements into engaging, user-friendly experiences contributes not only to user satisfaction but also to a positive brand image. Visualize the process where possible, since this reduces user effort and enhances brand perception, fostering a sense of transparency. Also, ditch the checkbox for the privacy notice: it is an informative document, not one that requires consent.

Introducing privacy considerations early in the development process is strategic, as well as efficient. Collaborating with marketing to integrate privacy into decision-making ensures it becomes an integral part of the organizational culture, encouraging a holistic approach to privacy.

Finally, routinely checking and auditing the organization's processes ensures privacy considerations do not stand alone but are embedded in routine operations. Encouraging a collaborative approach, in which marketing actively participates and takes shared responsibility for data protection and privacy, promotes a culture of continuous improvement.

Practical audits

An audit guide can help jump-start the process. Referring to the audit guide within the organization's existing processes keeps privacy embedded in routine operations rather than treated as a stand-alone exercise. Encouraging marketing to participate actively in audits fosters shared responsibility for privacy and promotes a culture of continuous improvement.

Balancing innovation, compliance is a shared goal

By meticulously addressing often overlooked aspects, conducting thorough audits and infusing creativity into privacy practices, organizations can establish robust frameworks that not only comply with regulations but also build trust, foster positive user experiences, and ultimately ensure sustainable growth in the digital era. Balancing innovation with compliance is not just a necessity, it's a shared goal for marketing and privacy that propels organizations toward a future where privacy and innovation coexist harmoniously.

Published 18 March 2024
Framework debate shows as Kentucky nears comprehensive privacy law
https://iapp.org/news/a/framework-debate-shows-as-kentucky-nears-comprehensive-privacy-law

Stakeholder opinions vary on the preferred framework for a U.S. comprehensive state privacy law. As the debate rages on, the Kentucky General Assembly offered the latest example of the framework competition that has shaped the perceived state privacy law "patchwork" in recent years.

Kentucky House Bill 15, a comprehensive bill modeled after Virginia's privacy law passed in 2021, has approval from both assembly chambers following a unanimous passage out of the Senate 11 March. The bill, introduced for the first time during the 2024 legislative session, will head back to the House for concurrence on minor Senate amendments and then go to the governor's desk.

If enacted, HB 15 would take effect 1 Jan. 2026.

"This puts us in line with neighboring states such as Virginia, Tennessee and Indiana in terms of language used," state Rep. Josh Branscum, R-Ky., said during HB 15's 29 Feb. hearing before the Senate Standing Committee Economic Development, Tourism and Labor. The bill sponsor touted HB 15 as a "workable solution" to ensure consumer rights and protections while noting the proposal is "a great starting point" and "a framework for our legislature to improve upon for sessions to come."

House Bill 15 is a near-copycat of Virginia's opt-out statute. It starts with identical coverage thresholds of entities that control or process personal data on more than 100,000 consumers or derive 50% of revenue from selling the data of more than 25,000 consumers. The states also share requirements for data protection impact assessments, processing deidentified or pseudonymous data, user opt outs for targeted advertising and data sales, and a 30-day cure provision.
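For readers mapping those thresholds onto their own organizations, the rough sketch below expresses the applicability test as described above; the field names are hypothetical and the bill's entity- and data-level exemptions are deliberately ignored.

```typescript
// Rough sketch of the Virginia-style applicability test HB 15 copies;
// exemptions and definitional nuances are omitted.
interface EntityProfile {
  consumersControlledOrProcessed: number; // Kentucky consumers whose data is controlled or processed
  consumersWhoseDataIsSold: number;
  revenueShareFromDataSales: number;      // 0.0 to 1.0
}

function hb15LikelyApplies(e: EntityProfile): boolean {
  const volumeTest = e.consumersControlledOrProcessed > 100_000;
  const revenueTest = e.revenueShareFromDataSales >= 0.5 && e.consumersWhoseDataIsSold > 25_000;
  return volumeTest || revenueTest;
}

console.log(hb15LikelyApplies({
  consumersControlledOrProcessed: 120_000,
  consumersWhoseDataIsSold: 0,
  revenueShareFromDataSales: 0.1,
})); // true, via the volume test
```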

The expected passage comes as a competing bill, Senate Bill 15, was abandoned in its third year of consideration following a majority approval by the Senate during the 2023 session. Instead of a formal reconciliation of provisions between the two bills, Kentucky state lawmakers lined up behind HB 15.

Senate Bill 15 is a more nuanced bill that tracks closer — but not exactly — to Connecticut's privacy law passed in 2022.

The 2024 iteration of SB 15 defined covered entities as companies controlling or processing data on more than 50,000 consumers or those deriving 50% of revenue from data sales involving more than 25,000 consumers. It also proposed recognition of universal opt-out mechanisms while using a broader definition for targeted advertising. Past versions of the bill diverged even further, proposing opt-in consent and a hybrid private right of action.

"It's just really frustrating to know we need to have protections in place and then see a lobby strong enough to water those protections down," said state Sen. Whitney Westerfield, R-Ky., sponsor of SB 15. "So now it looks like we've done something that'll be good enough. … Sometimes when you do just a little bit, it doesn't have to meaningful."

Compliance cakewalk?

Virginia's framework — more commonly referred to as the original Washington Privacy Act model — is the foundation for enacted privacy laws in all states besides California. Kentucky represents an effort to mostly duplicate Virginia. Other states have taken it upon themselves to add, remove or alter provisions for the purpose of consumer protection or lightening perceived burdens on businesses.

With Kentucky's HB 15, a majority of covered entities are likely to have a compliance program in place already if they are national organizations complying with Virginia copycats around the U.S. Kentucky-based entities on the lower end of the coverage threshold are most likely to be impacted when the bill becomes law.

Stites & Harbison Member Sarah Cronan Spurlock, CIPP/US, indicated HB 15 carries a "more conventional definition of consumer where it's only residents of Kentucky, excluding that employment or commercial context," which is notable for Kentucky entities tackling privacy compliance for the first time. The top task for those businesses, however, is assessing whether the bill even applies to them.

"I think the challenge for anyone doing business in Kentucky is going to be first to consider if the new law affects them given the processing thresholds and the potential exemptions for certain businesses or with respect to certain types of data they maintain," Spurlock said, noting additional requirements to prove qualifications for an exemption. "It can get tricky when your business falls under the law while portions of your stored data are exempt."

The 30-day cure period included in HB 15 under exclusive attorney general enforcement does not sunset, adding extra cushion in the event of a violation. While some stakeholders argue cure opportunities weaken privacy laws, the provision does foster future accountability and vigilance.

"The attorney general can initiate an action in the event you continue a violation of something you've already cured and said you wouldn't violate again," Spurlock said. "An individual company isn't going to get that right to cure for the same violation every time it happens. So there's no sunset, but you certainly can't just disregard a prior remedy."

The competition

Consideration of multiple comprehensive privacy bills is not uncommon. Florida, Indiana and Washington are examples of states that juggled competing bills with varying success.

There are a few reasons why Kentucky's situation is unique and plays directly into the state privacy law patchwork debate.

Few states have passed a comprehensive privacy bill on first introduction after considering separate or competing bills in preceding years. The decision to push ahead with HB 15 hinged on alignment and uniformity.

Westerfield cited prior conversations with House leadership where he was told SB 15 would "make Kentucky an island," something the Kentucky General Assembly had done on prior policy matters but allegedly lost the appetite for with data privacy. 

"It's an argument to which I do not subscribe. I think it's baloney," Westerfield said. "I can name the seven provisions from my bill, down to the chapter, subsection and paragraph, that differed from Virginia. They're material differences that are meaningful, but the bills were nearly identical. It was not an island."

He added SB 15 was an opportunity for Kentucky to "give legislators in other states some cover to do something different" and find a better balance between consumer protections and business needs than Virginia's "boilerplate" provides. Notably, members of the Senate Standing Committee on Economic Development, Tourism and Labor told Rep. Branscum they would approve HB 15 despite a desire for opt-in provisions — which Westerfield's SB 15 offered in years prior.

Consumer Reports long supported versions of Westerfield's bill until this year, when he made another round of concessions to stakeholders and lawmakers. The sticking point fueling the opposition, according to Consumer Reports Policy Analyst Matt Schwartz, continues to be the absence of impactful data minimization standards, recognition of universal opt-out mechanisms, authorized agent rights and a private right of action.

"In a lot of cases they're just looking at what might be the lowest hanging fruit. Exploring what bills other states have done that passed relatively easily," Schwartz said. "The be-all and end-all of consumer privacy legislation should not be ease of compliance for businesses. It's a consideration, but that seems to be elevated as the goal above all else."

Another wrinkle Kentucky brings to the patchwork discussion is a lack of wholesale alignment. HB 15 is modeled after Virginia's 2021 statute. Most core principles have not changed in that law, but the Virginia General Assembly passed meaningful amendments to its comprehensive framework over the last two years, including substantial children's privacy amendments days before the Kentucky Senate approved HB 15.

Passing an outdated framework as a starting point may no longer be sufficient given the way modern technology advances. Westerfield, who is not seeking reelection to the Kentucky Senate in November, fears the growing complexity and pace of digital policy matters will render HB 15 useless if there's not constant legislative attention.

"When they come back next year, I just don't see there being a desire or urge to (improve the bill)," Westerfield said. "They'll say, 'Oh, we did that last year. We don't need to do that again.' Like there's a reset on the counter about how bad things have to get before there is the oomph to do it again."

Published 12 March 2024
Platform aims to streamline children's privacy compliance for online game developers
https://iapp.org/news/a/childrens-online-gaming-privacy-platform-looks-to-make-compliance-seamless-for-developers

As emphasis on protecting children's personal data grows, compliance with a variety of increasingly complex global regulations to support that protection is now a vital business necessity.

Into the current global children's data privacy paradigm now steps k-ID.

The startup developed a technology platform to protect children's privacy in the online gaming industry, and also offers publishers and game developers solutions to comply with a growing number of data privacy regulations in more than 200 markets around the world. In multiple pre-seed and seed financing rounds last year, k-ID attracted USD5.4 million for its platform and resource development.

k-ID CEO Kieran Donovan indicated online gaming is a great starting point for tackling children’s privacy issues because of the sheer volume of children playing video games online throughout the world. However, the hope is that the k-ID platform, described by Donovan as "sector-agnostic," will eventually be extended for use by companies in a number of industries that collect and process children's personal data.

"The gaming industry is uniquely positioned to scale a solution like this very quickly," Donovan told the IAPP. "We have many different publishers publishing games on all the platforms, so there are lots of challenges to solve and that is going to keep us busy. But we'll always be led by where the most complex challenges are, and that’s what we want to solve."

While k-ID has been available for a limited number of early-access customers, the platform became accessible to all prospective users 6 March. Donovan said there is not yet a timeline to sell k-ID for use in other industries besides online gaming.

Along with Donovan, k-ID's founding partners bring a wealth of experience that feeds into the platform's goals. Chief Legal Officer Timothy Ma, CIPP/E, CIPM, served as head of international privacy and data protection at Tencent. Chief Safety Officer Jeff Wu was a leading trust and safety veteran at Google and Meta. And Chief Revenue Officer Julian Corbett is a former video game industry executive.

As online game developers expand their reach, they will need to craft user privacy notices and establish privacy principles that account for the laws and regulations of each country where their players reside. Ma said game developers' long-held practice of tuning their products' compliance to the U.S. Children's Online Privacy Protection Act will no longer be acceptable as regulators in other countries look to assert their authority under the laws of their own jurisdictions.

"If you want to build a global program, you have to comply with all the global privacy laws, and it's an enormous task and extremely difficult," Ma said. "Even with large platforms, they basically take COPPA as a baseline for compliance and think that suffices other global privacy laws. We all know that this is not the right way of approaching this issue."

To be customizable to varying regulations, the k-ID platform combines three key solutions. The first is its Global Compliance Engine that configures an age-appropriate game experience based on a child’s location, age and digital maturity, and is COPPA Safe Harbor certified by the Entertainment Software Ratings Board, which is a recognition generally reserved for the actual games themselves.

The second component is the Global Compliance Database, which is a comprehensive list of all global children's privacy regulations impacting gaming and is updated daily by the k-ID team.

To incorporate the platform within a company's tech stack, Donovan said k-ID offers easy API integration. The platform starts out free while offering more sophisticated subscription options at a cost.

"We really want to democratize access to meaningful compliance for everyone," Donovan said. "That means two things: One, it has to be affordable, and the solution starts out free. Two: It also means that in terms of the ease of integration, we have built this so that it is as close to click-drag-submit as you can possibly get."

The third solution rounding out the k-ID platform is its Family Platform. It is a universal, cross-platform single-sign-on portal for families to manage their child's online gaming experience. Parents can tune their child's gaming experience with a slider that either includes more content and preferences for a more digitally mature minor or dials it back for a younger or less digitally mature child.

"When you think about family platforms today, you think typically about processes that are incredibly painful to go through to get on boarded as a parent. There are lots of steps and lots of verification," Donovan said. "You don't want it to be something that's a chore to set up, and that then gives them that meaningful connection to what their kids up to."

Published 6 March 2024
Legal bases for processing employees' personal data in Mexico
https://iapp.org/news/a/bases-juridicas-para-el-tratamiento-de-datos-personales-de-colaboradores-en-mexico

Employment relationships involve the processing of personal data by the employer. That processing can begin when a candidate submits a resume, continue through job interviews and any applicable testing, extend to eventual hiring as an employee, and even persist after the employment relationship ends. At each of these stages, the processing of personal data must be governed by the applicable legislation; for private companies, that is the Ley Federal de Protección de Datos Personales en Posesión de los Particulares (LFPDPPP) and its regulations (RLFPDPPP).

These legal instruments establish the limits, scope and obligations private entities have with respect to the processing of individuals' personal data. One of the fundamental data protection principles is consent, which provides that all processing of personal data is subject to the data subject's consent, except in the cases foreseen in Article 10 of the LFPDPPP.

Consequently, employers must have either their employees' consent or another legal basis to process their personal data; it cannot be assumed that, because an employment relationship exists, the employer may process any employee data whatsoever. On the contrary, there are certain data the employer must process by law, others that are necessary to fulfill the obligations arising from the employment contract, and others for which the data subject's consent is required. For example, Article 25 of the Ley Federal del Trabajo (LFT) lists the personal data the employer must process by law: name, nationality, age, sex, marital status, beneficiaries, Clave Única de Registro de Población (CURP), Registro Federal de Contribuyentes (RFC) and address.

Accordingly, the employer may process the data listed above under the exception set out in Article 10, Section I of the LFPDPPP, which states that the data subject's consent is not required when the processing is provided for by law. However, this exception applies only to those particular personal data. If the employer wishes to process other data (demographic or psychometric information, data related to salaries and benefits, or even data collected in labor conciliation or litigation), it must rely on another legal basis or identify the specific law that provides for the intended processing.
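As a purely illustrative way to think about that distinction — a hypothetical helper, not legal advice or an official mapping — data listed in LFT Article 25 can lean on the Article 10, Section I exception, while anything else needs a separate basis:

```typescript
// Hypothetical illustration, not legal advice: mapping employee data items to a
// suggested legal basis under the LFPDPPP, as discussed above.
const lftArticle25Data = [
  "name", "nationality", "age", "sex", "marital status",
  "beneficiaries", "CURP", "RFC", "address",
];

function suggestedLegalBasis(dataItem: string): string {
  if (lftArticle25Data.includes(dataItem)) {
    return "LFPDPPP Article 10(I) exception: processing provided for by law (LFT Article 25)";
  }
  return "Needs another basis: contractual necessity, consent, or a law specifically providing for the processing";
}

console.log(suggestedLegalBasis("CURP"));
console.log(suggestedLegalBasis("psychometric test results"));
```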

Therefore, any processing of personal data carried out by the employer must have an appropriate legal basis and must respect the remaining data protection principles, including proportionality and purpose limitation. The proportionality principle establishes that only data that are necessary, adequate and relevant to the purposes for which they were obtained may be processed; the employer should therefore process only the personal data needed within the framework of the employment relationship. The purpose limitation principle, in turn, establishes that personal data must be processed to fulfill the purpose or purposes set out in the privacy notice (Aviso de Privacidad), so every piece of personal data collected must have a purpose that is communicated to the employee through that document.

Published 5 March 2024
The ANPD and artificial intelligence in Brazil
https://iapp.org/news/a/la-anpd-y-la-inteligencia-artificial-en-brasil

Artificial intelligence is not a new concept, but the recent advance of new AI-powered digital tools, both a result of and a part of the technology's accelerating evolution, has sparked heated discussions around the world about the need for its regulation.

In Europe, the Council and the European Parliament reached a political agreement on the Artificial Intelligence Act to create a first regulatory framework for AI, taking a significant step and turning Europe into a key point of reference in this field. The signing of this agreement by the countries of the European bloc has also spurred other governments to take concrete measures to address the ethical, legal and social challenges artificial intelligence poses.

In the Brazilian context, these discussions have been driven in part by the country's data protection authority, the Autoridade Nacional de Proteção de Dados (ANPD), which during the second half of 2023 issued a series of documents, organized seminars and conferences, and released official statements related to the regulation of AI in the country.

This body of documents includes in-depth analyses of draft bills, as well as a technical study supporting a call for comments from civil society on the feasibility of establishing and administering a regulatory sandbox to discuss issues connected to personal data protection and AI.

Bill 2,338/2023

The ANPD's technical analysis of Bill 2,338/2023 offers some interesting perspectives on the regulation of AI; the bill represents a concrete effort by legislators to establish a solid legal framework that effectively governs the use and development of AI in Brazil.

In its comments on the bill, the ANPD proposes the creation of a comprehensive regulatory structure with the ANPD itself as the central advisory and regulatory body.

Drawing on the experience of the European Union, the ANPD suggests four distinct and complementary bodies that should act in an articulated and coordinated manner: (i) the central regulatory body or competent authority; (ii) the executive branch, responsible for drafting public policies for the development of AI systems; (iii) sectoral regulators, which would act in coordination with the central regulatory body; and (iv) an advisory council, a body to ensure civil society's participation in the decision-making processes of the other bodies.

One of the most relevant aspects the ANPD emphasizes in its comments on Bill 2,338/2023 is the importance of a centralized approach to regulating and guiding AI. A single regulatory authority would not only guarantee clearer and more consistent guidance for the various sectors involved in the broad spectrum of artificial intelligence, but would also help minimize the ambiguities, contradictory interpretations, and privacy and data protection concerns that can arise in such a complex and dynamic context.

The regulatory sandbox

In collaboration with CAF, the development bank of Latin America and the Caribbean, the ANPD published a technical study on the possibility of coordinating a regulatory sandbox for artificial intelligence and personal data protection in Brazil.

Based on that technical study, the ANPD organized a public consultation to receive input and comments from civil society and foreign entities on the development of a pilot project for a regulatory sandbox that will work with models that use generative AI and train their algorithms on personal data.

The project aims to create a controlled environment for testing AI-related technologies, allowing innovations to be deployed safely and in accordance with Brazilian personal data protection standards, while also helping to craft adaptable regulations so the Brazilian government can confront the challenges posed by AI.

The pilot project is currently in the stage of evaluating all the input and comments received; the ANPD has disclosed that it received some 71 contributions from the public consultation. Of those, 35 came from private entities or economic groups; five from the public sector; 10 from civil society; six from academia; and 15 from individual citizens. Sixty-six of the contributions came from Brazil and five from abroad.

Map of priority topics for 2024-2025

On 13 Dec. 2023, the ANPD announced the first edition of its Map of Priority Topics (MTP) for the 2024-2025 biennium, establishing the topics that will take priority for purposes of studies and the planning of inspection activities over the next two years.

The MTP selected four priority topics for 2024-2025: the rights of personal data subjects; the processing of children's and adolescents' personal data; artificial intelligence and the processing of personal data; and data scraping and data aggregators.

These priority topics were defined using a risk and severity map that took into account the impact these challenges can have on Brazilian civil society, with the use of artificial intelligence and the processing of personal data receiving the maximum relevance score on the scale adopted by the ANPD.

As a result, the use of artificial intelligence in facial recognition systems, and its impact on individuals' privacy, was identified as one of the main focus points for the first half of 2025.

Conclusions

The Map of Priority Topics, together with the analysis of draft bills, the organization of seminars and events, and the coordination of the regulatory sandbox, reflects the ANPD's firm commitment to establishing itself as one of the main protagonists in Brazil's heated debate over AI regulation, always seeking to position itself as a solid, trustworthy entity capable of filling the existing legislative vacuum on this subject. Moreover, the initiatives the ANPD implemented in 2023 and early 2024 place it in an interesting position to expand its regulatory activities and add AI governance to its portfolio.

Published 5 March 2024
The CFT issues guidelines for personal data protection plans for the public sector in Argentina
https://iapp.org/news/a/el-cft-emite-lineamientos-para-planes-de-proteccion-de-datos-personales-para-el-sector-publico-en-argentina

The Argentine state is one of the main actors in the collection, storage and processing of personal data. Under Data Protection Law 25.326 (LPDP), the state is empowered to process personal data within the framework of its own functions. However, this does not relativize or diminish the obligation of the public sector and each of its agencies to comply with the law; quite the opposite. By virtue of its very functions, the state has a legal and ethical duty to comply with, and ensure respect for, the law and its complementary rules.

That is why the Consejo Federal para la Transparencia (CFT) issued its Guidelines for the formulation of a Personal Data Protection Plan for the public sector. The objective is to assist public organizations in properly managing personal data.

The CFT is a body made up of one representative from each of the provinces and one representative from the Autonomous City of Buenos Aires. Its purpose is technical cooperation among the different jurisdictions and the coordination of public policies on transparency and personal data protection at the federal level. The CFT was created by Article 29 of Law 27.275 on Access to Public Information, which provides that the council is based at the Agencia de Acceso a la Información Pública, from which it receives administrative and technical support for its operation, taking a comprehensive approach aimed at improving citizens' quality of life.

The Guidelines stress that it is important for the personal data protection plan to be documented and to be reviewed and audited periodically. They also provide that a personal data protection plan should:

  1. establish the agency's objectives with regard to personal data protection;
  2. identify the applicable regulations;
  3. establish the channels through which personal data are collected, the types of data collected and their internal flow;
  4. define the legal bases for processing the data;
  5. describe the measures adopted to guarantee the security of personal data;
  6. establish retention periods;
  7. detail, in practical terms, how data subjects' rights will be guaranteed;
  8. define how staff will be trained and made aware of the safe, responsible handling of personal data and privacy;
  9. establish an action plan for security incidents, including notification of the authorities and affected individuals;
  10. define internal oversight processes to verify compliance with the established policies and procedures;
  11. comply with the principle of proactive accountability (this point includes designating a data protection officer and carrying out impact assessments); and
  12. implement a privacy policy.

On this last point, the Guidelines establish that the privacy policy is an essential part of the personal data protection plan and stress that the document enables public agencies to fulfill their duty to inform.

Regarding its content, the Guidelines provide that the privacy policy must include information on the purposes of processing, the data collected, the rights available to data subjects and how they can be exercised, the right to file a complaint with the Agencia de Acceso a la Información Pública, retention periods, the security measures adopted, the registration of the database, any disclosures made and whether international transfers take place. In this regard, the Guidelines recommend using as a reference the model public sector data protection policy approved by Resolution 40/2018 of the Agencia de Acceso a la Información Pública.

The Guidelines also note that almost 80% of the public sector contracts cloud data hosting providers, whether public (such as Empresa Argentina de Soluciones Satelitales S.A., or AR-SAT) or private. They further point out that private sector providers of these services are usually located abroad, and indicate that controllers therefore need to put data processing agreements in place. Such contracts allow the controller to govern the conditions under which the service will be provided regardless of where the processor or subprocessor is based.

Likewise, the CFT stresses that, while the Guidelines serve as a reference on the subject, each entity (whether an agency or a province) must adapt them to the challenges and realities of its jurisdiction, taking personal data protection into account from the design of public policies and throughout their entire lifecycle.

Finally, in the Guidelines' closing considerations, the CFT reiterated the need to modernize regulatory frameworks with new principles and rights. In that regard, it highlighted that the draft Personal Data Protection Bill prepared by the Agencia de Acceso a la Información Pública proposes a paradigm shift: moving from a model that protects data records to one that guarantees individuals' human right to data protection.

Published 5 March 2024
A new era of US privacy policy? National security restrictions on personal data transactions
https://iapp.org/news/a/a-new-era-of-u-s-privacy-policy-national-security-restrictions-on-personal-data-transactions

On 28 Feb., U.S. President Joe Biden signed what the White House called a "groundbreaking" new executive order on "Preventing Access to Americans' Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern." The president also sent a letter to Congress explaining the move.

The executive order kicks off multiple government workstreams, including a forthcoming regulation from the U.S. Department of Justice, which would block or place restrictions on designated personal data transactions with foreign adversaries of the U.S. and their proxies. Similar existing national security regulations define "countries of concern" to include China, Cuba, Iran, North Korea, Russia and Venezuela.

Commercial data privacy and security rules, including recommendations to implement privacy enhancing technologies, are also expected as part of a separate regulatory process through the Department of Homeland Security. These future rules would include minimum privacy and security standards that must be met before organizations engage in certain transactions that would otherwise be prohibited under the DOJ regulations.

Closing the digital gates

To begin the rulemaking on personal data transactions, the Biden-Harris administration has opted to engage in a two-part regulatory process via the DOJ. That process began immediately with the unofficial release of an advance notice of proposed rulemaking. Once the ANPRM is published in the Federal Register, stakeholders will have 45 days to submit comments. Subsequently, the DOJ will issue a draft regulation, which will be subject to a second round of comments from interested parties. With this timeline in motion, final rules are unlikely to be complete before the presidential election at the end of 2024.

The executive order is consistent with the current trend of heightened expectations for due diligence in transactions involving bulk personal data or sensitive data. Privacy professionals and security teams should pay close attention to this rulemaking, especially if their organizations engage in the buying or selling of bulk personal data — or any amount of sensitive data about U.S. government personnel such as members of the military.

High threat count

In explaining its unprecedented executive order, the White House fact sheet describes concerns about "extraordinary threats" from foreign adversaries who purchase commercially available data on U.S. citizens in order to "engage in malicious cyber-enabled activities, espionage, coercion, influence, and blackmail; build profiles on and target activists, academics, journalists, dissidents, government personnel, political figures, and members of nongovernmental organizations and marginalized communities for surveillance, influence, and intimidation; to curb dissent and for other nefarious purposes."

The concerns echo alarm bells that scholars and national security experts have been repeatedly ringing in recent years. Duke University's Sanford School of Public Policy released a report last fall on the sale of data on U.S. military personnel, based on an investigative analysis of data brokers' due diligence around transactions. The study found that many data brokers exhibit a "lack of robust controls" around the purchase of U.S. military data, even in some cases when the purchaser was located outside of the U.S.

Out with localization; in with targeted exclusion

Some commentators have classified the executive order as a proposed restriction on the flow of data across borders. Yet the contours of the proposal do not neatly fit into this category. Though national borders would play a role, the location of personal data is not the primary factor for determining whether a transaction would be banned or restricted.

The proposal would have the effect of restricting the sale of data to entities within certain countries, but it also restricts the sale of that same data to certain individuals and organizations no matter where they are located, when the U.S. designates them as proxies for countries of concern. There are no proposed restrictions on where personal data is stored or the means through which it is transferred. Instead, the determining factor is the nature of the party purchasing the data.

Steps toward 'carefully calibrated' regulations

As previewed in the ANPRM, the proposed regulatory regime would be targeted within multiple dimensions with its scope limited to covered types of:

  • Personal data
  • Data subjects
  • Transactions
  • Selling entities
  • Purchasing entities

Each of these restrictions of coverage is explored below.

Sensitive personal data and 'covered personal identifiers'

The draft ANPRM considers the inclusion of five sensitive data categories within the scope of covered data, plus "covered personal identifiers." The sensitive categories of data include:

  • Biometric data, defined as "measurable physical characteristics or behaviors used to recognize or verify the identity of an individual, including facial images, voice prints and patterns, retina and iris scans, palm prints and fingerprints, gait, and keyboard usage patterns that are enrolled in a biometric system and the templates created by the system."
  • Precise geolocation and related sensor data (with precision standards to be determined).
  • Human genomic data, whether the entire set or a subset of an individual's genetic sequencing data. As part of the program, there would also be restrictions on access to human biospecimens from which this data could be derived.
  • Personal health data, with the definition adopted from the Health Insurance Portability and Accountability Act.
  • Personal financial data (subject to exclusions) including "data about an individual’s credit, charge, or debit card, or bank account, including purchases and payment history; data in a bank, credit, or other financial statement, including assets, liabilities and debts, and transactions; or data in a credit or consumer report."

Beyond these traditional sensitive data categories, the "personal identifiers" considered to be included under the ANPRM are those that are "reasonably linked to an individual, and that — whether in combination with each other, with other sensitive personal data, or with other data that is disclosed by a transacting party pursuant to the transaction and that makes the personally identifiable data exploitable by a country of concern — could be used to identify an individual from a data set or link data across multiple data sets to an individual."

As contemplated, the rule would designate a comprehensive list of identifiers that would fall under the same restrictions as sensitive data when linked with another "listed identifier" in a transaction or series of transactions to related covered recipients:

  • Government ID numbers
  • Financial account numbers
  • Device-based or hardware-based IDs
  • Advertising IDs
  • Demographic or contact data, except when linked only to other demographic or contact data (including name, birthdate, birthplace, zip code, address, phone number, email address and similar public account identifiers)
  • Account authentication data (username, password, etc.)
  • Network-based identifiers
  • Call-detail data (customer proprietary network information under existing telecom privacy rules)

Like the demographic exception, the final three categories above would also be excluded from restrictions in situations where they are linked only to other identifiers within those three categories.

General exceptions are also proposed for public data, trade secrets, proprietary information, personal communications and expressive materials.
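Reading the linkage rules together, a rough simplification of when a combination of listed identifiers would itself be treated as covered might look like the sketch below; the category names and function are hypothetical shorthand, not regulatory text, and the full rule also covers linkage with other sensitive personal data.

```typescript
// Rough simplification of the linkage logic summarized above; not regulatory text.
type ListedIdentifier =
  | "government-id" | "financial-account" | "device-id" | "advertising-id"
  | "demographic-or-contact" | "account-authentication" | "network-identifier" | "call-detail";

const finalThree: ListedIdentifier[] = ["account-authentication", "network-identifier", "call-detail"];

function combinationCovered(ids: ListedIdentifier[]): boolean {
  if (ids.length < 2) return false; // a single listed identifier is not "linked" with another
  // Exception: demographic or contact data linked only to other demographic/contact data.
  if (ids.every(i => i === "demographic-or-contact")) return false;
  // Exception: the final three categories linked only among themselves.
  if (ids.every(i => finalThree.includes(i))) return false;
  return true;
}

console.log(combinationCovered(["advertising-id", "device-id"]));       // true
console.log(combinationCovered(["network-identifier", "call-detail"])); // false, within the excluded trio
```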

Bulk personal data of US citizens

The prohibitions contemplated in the ANPRM would apply to any transaction involving the covered personal data of a set number of Americans over a threshold for each type of sensitive data category. The ANPRM explains that these limits would be based on a risk assessment of human-centric and machine-centric characteristics of relevant datasets. The DOJ is particularly interested in stakeholder feedback on this section of the proposal, though it also includes specific questions on all aspects of the ANPRM.

The DOJ proposes two different possibilities for the bulk thresholds, a low and a high proposal, reprinted below. If a transaction or set of related transactions included the specified data of more than the listed number of U.S. persons (or devices in the case of geolocation data), it would be prohibited under the proposed rule.

  • Human genetic data: Low > 100 persons; High > 1,000 persons.
  • Biometric identifiers: Low > 100 persons; High > 10,000 persons.
  • Precise geolocation data: Low > 100 devices; High > 10,000 devices.
  • Personal health data: Low > 1,000 persons; High > 1 million persons.
  • Personal financial data: Low > 1,000 persons; High > 1 million persons.
  • Covered personal identifiers: Low > 10,000 persons; High > 1 million persons.
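Expressed as a simple lookup, the two proposals look like the sketch below. This is illustrative only; which threshold set is ultimately adopted, and how "related transactions" are aggregated, is precisely what the ANPRM asks commenters to weigh in on.

```typescript
// Illustrative restatement of the low/high bulk threshold proposals above.
const bulkThresholds = {
  humanGenomicData:           { low: 100,    high: 1_000 },     // persons
  biometricIdentifiers:       { low: 100,    high: 10_000 },    // persons
  preciseGeolocationData:     { low: 100,    high: 10_000 },    // devices
  personalHealthData:         { low: 1_000,  high: 1_000_000 }, // persons
  personalFinancialData:      { low: 1_000,  high: 1_000_000 }, // persons
  coveredPersonalIdentifiers: { low: 10_000, high: 1_000_000 }, // persons
} as const;

type BulkCategory = keyof typeof bulkThresholds;

function exceedsBulkThreshold(category: BulkCategory, count: number, proposal: "low" | "high"): boolean {
  return count > bulkThresholds[category][proposal];
}

console.log(exceedsBulkThreshold("personalHealthData", 1_500, "low"));  // true
console.log(exceedsBulkThreshold("personalHealthData", 1_500, "high")); // false
```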

Any amount of 'government-related' data

The covered personal data of U.S. government personnel would not be subject to the proposed volume thresholds. Transactions involving such data — the same sensitive data types identified above — would be flatly prohibited under three scenarios:

  • If a transacting party identifies the data as being linked or linkable to categories of U.S. government personnel.
  • If the data is linked to categories of data that could be used to identify U.S. government personnel.
  • If the data is linked or linkable to a list of defined sensitive government locations.

Do not sell to the block list

U.S. persons would be banned from knowingly engaging in "covered data transactions" with countries of concern or any person who is a national of one of those countries. "Transactions" are more than just sales. They include "any acquisition, holding, use, transfer, transportation, exportation of, or dealing in any property in which a foreign country or national thereof has an interest."

The draft rule would categorically prohibit any such transaction if it meets the data requirements explained above and falls into any one of the following categories:

  • Data brokerage, meaning "the sale of, licensing of access to, or similar commercial transactions involving the transfer of data from any person (the provider) to any other person (the recipient) where the recipient did not collect or process the data from the individuals linked or linkable to the collected or processed data."
  • A vendor agreement for any goods or services, including cloud-computing services, in exchange for payment or other consideration.
  • An employment agreement, excluding independent contractors.
  • An investment agreement, in which any person "obtains direct or indirect ownership interest rights in relation to (1) real estate located in the U.S. or (2) a U.S. legal entity."

The final three categories above (vendor, employment and investment agreements) would only be prohibited in relation to covered data transactions if they fail to meet the security requirements later to be specified through DHS rulemaking. The DOJ ANPRM includes a preview of the likely requirements, which "would be based on, as applicable and appropriate, existing performance goals, guidance, practices, and controls, such as the Cybersecurity and Infrastructure Security Agency Cybersecurity Performance Goals, National Institute of Standards & Technology Cybersecurity Framework, NIST Privacy Framework, and NIST SP 800-171 rev. 3."

Until the DHS rules are finalized, DOJ is proposing to "decline to regulate restricted covered data transactions" that would be subject to the new DHS privacy and security rules.
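Putting the pieces together, a highly simplified sketch of how a covered data transaction would be classified under the proposal might look like the following; the names are hypothetical, this is not the regulatory text, and the interim "decline to regulate" posture described above is noted only in a comment.

```typescript
// Highly simplified sketch of the transaction logic described above.
type TransactionKind = "data-brokerage" | "vendor-agreement" | "employment-agreement" | "investment-agreement";
type Outcome = "prohibited" | "permitted-with-safeguards" | "not-covered";

function classifyTransaction(
  kind: TransactionKind,
  involvesCoveredBulkDataAndCoveredParty: boolean,
  meetsFutureDhsSecurityRequirements: boolean
): Outcome {
  if (!involvesCoveredBulkDataAndCoveredParty) return "not-covered";
  if (kind === "data-brokerage") return "prohibited"; // categorically banned
  // Vendor, employment and investment agreements are "restricted": permissible only if
  // the forthcoming DHS security requirements are met (and, per the ANPRM, DOJ proposes
  // not to regulate this restricted set until those DHS rules are finalized).
  return meetsFutureDhsSecurityRequirements ? "permitted-with-safeguards" : "prohibited";
}

console.log(classifyTransaction("data-brokerage", true, true));    // prohibited
console.log(classifyTransaction("vendor-agreement", true, false)); // prohibited
console.log(classifyTransaction("vendor-agreement", true, true));  // permitted-with-safeguards
```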

Contracts required for all foreign transactions

Further, the order would prohibit data brokerage transactions with any foreign person unless the U.S. provider contractually requires the foreign recipient to refrain from the onward transfer of the data to a country of concern or covered person through a "subsequent covered data transaction." That is, restrictive contractual terms would be required for all transfers of bulk sensitive data or government-related data to any non-U.S. entity.

Just the beginning

There is much left to discuss. Apart from the difficulty of comparing the executive order to existing data rules, it has renewed conversations in Washington, D.C., and other policy hubs about digital sovereignty and the role of national security law in restricting the flow of sensitive personal data between entities and across borders.

As stakeholders engage in the multipart comment process, the bulk data restrictions are likely to be further refined and clarified to ensure they reflect the administration's stated policy goals.

Published 4 March 2024
US executive order will address brokers' sensitive data transfers to 'countries of concern'
https://iapp.org/news/a/us-to-issue-executive-order-targeting-transfers-of-sensitive-data-to-countries-of-concern

U.S. President Joe Biden announced plans for an executive order that would bar data brokers from selling U.S. citizens' sensitive personal data to entities located in or affiliated with adversarial countries.

According to a fact sheet published by the White House 28 Feb., the executive order will direct the Department of Justice to develop regulations prohibiting data brokers from carrying out transfers to so-called "countries of concern" that involve troves of sensitive personal information. The designated countries of concern are China, Cuba, Iran, North Korea, Russia and Venezuela, according to multiple press reports.  

The types of sensitive data to be protected under the order are "genomic data, biometric data, personal health data, geolocation data, financial data, and certain kinds of personal identifiers," per the fact sheet. The White House claimed adversarial nations were accessing vast amounts of Americans' personal data for blackmail and surveillance purposes, and could utilize advanced artificial intelligence systems to advance those malicious goals.

As part of the order, the DOJ will work in conjunction with the Department of Homeland Security to "set high security standards to prevent access by countries of concern to Americans' data through other commercial means, such as data available via investment, vendor, and employment relationships."

Another key point of emphasis is a directive to the DOJ to "issue regulations that establish greater protection of sensitive government-related data, including geolocation information on sensitive government sites and information about military members."

Free data flow intact

According to press reports, White House officials said the executive order is not a departure from the G-7 principle of "Data Free Flow with Trust," which Information Technology Industry Council Senior Vice President of Policy and General Counsel John Miller said "is essential for U.S. competitiveness."

"We appreciate that the Biden Administration aims to craft targeted rules to address a specific national security threat and has structured the rulemaking process in a way that ensures opportunities for necessary and robust stakeholder engagement," Miller said in a statement. "The administration has also been clear that today's action is no substitute for a federal privacy law, which is the strongest and most comprehensive way to protect Americans' personal data."

Additionally, the Network Advertising Initiative came out in support of Biden's order. 

"The NAI supports the President’s plan to ban sales of sensitive U.S. consumer data to foreign adversaries," President and CEO Leigh Freund said in a statement. "The nonconsensual sale of U.S. consumer data to foreign governments is unethical and poses a serious privacy threat to consumers." 

While the executive order may not outright end the commercial sale of personal data, Center for Democracy and Technology Vice President of Policy Samir Jain told the IAPP the order could have a positive chilling effect, of sorts, on data brokers' business practices, insofar as they may take a more risk-averse approach to selling Americans' sensitive personal data.

Jain echoed Miller's sentiment that the executive order is not an adequate substitute for the comprehensive privacy law the U.S. Congress has so far fallen short of passing, which would curb consumer harms much as the executive order aims to do.

"It'll be interesting what spillover effect it has on the commercial market more generally," Jain said. "They'll need to set up compliance programs and get greater awareness of the data they have so they don't engage in these illicit kinds of transfers."

Presidential authority

Biden issued the executive order under the authority of the International Emergency Economic Powers Act, which gives the president broad powers to "investigate, regulate, and prohibit certain financial transactions following a declaration of an 'unusual and extraordinary threat' originating outside the United States."

CDT's Jain said DOJ rulemaking could potentially generate some criminal penalties for flagrant violations of the executive order because it invokes the security of U.S. citizens' sensitive data as a national security consideration, not just a commercial one. He cited Know Your Customer banking regulations, which are intended to prevent banks from engaging in business that may facilitate crime, as an example of the kind of rules data brokers may soon have to follow.

"I suspect data brokers will have to move in that direction, and if transferring genomic data is banned, for instance, they could decide altogether that they just won't sell it because it's too risky," Jain said. "But there's a whole range of penalties that could be imposed under the IEEPA. The penalties will have to depend on the intentionality."

Ultimately, the responsibility will fall on data brokers to ensure data they exchange does not end up in the wrong hands once the DOJ finalizes regulations to meet the objectives of Biden's order.

"There are innumerable details between the announced goal and companies knowing how to implement these global due-diligence programs," Georgia Tech School of Cybersecurity and Privacy Chair of Law and Ethics Peter Swire, CIPP/US, told the IAPP. "But data brokers will be responsible for conducting that due-diligence against transferring their data in bulk to countries of concern."

Potential loopholes

Some privacy professionals and technology industry stakeholders are skeptical about whether the executive order is the right tool for preventing the exploitation of data by malicious actors working on behalf of, or within, an adversarial country, and they note the order could carry unintended commercial impacts.

Jain said the executive order, as proposed, may not account for subsequent sales of sensitive data that has been re-sold several times over by various data brokers around the world, which could pose enforcement challenges.

"One of the interesting issues to look at when we see actual proposed rules is how the DOJ will try to reach beyond the initial transaction," Jain said. "It'll be interesting how far they go to block that subsequent chain of transactions."

Georgia Tech's Swire questioned if the order would actually meet its stated goal of preventing malicious foreign entities from accessing citizens' sensitive data. He said there's "still a serious question" regarding the effectiveness of the proposed rules and how they'll prevent access to data "by an advanced, persistent threat."

Swire added Biden's executive order represents a different data protection strategy from regulations engineered to protect individuals' privacy, such as the previously introduced American Data Privacy and Protection Act.

"Most privacy rules focus on potential harm to an individual," Swire said. "The rationale for the rule is to stop the most sophisticated hackers from getting access to data even though data sales would continue in vast majority of commercial settings."

2024-02-28 13:30:47
AI, adtech, children's privacy in sights of ICO for 2024 https://iapp.org/news/a/uk-information-commissioner-shares-ico-priorities-for-2024 https://iapp.org/news/a/uk-information-commissioner-shares-ico-priorities-for-2024 In a packed room at the IAPP Data Protection Intensive: UK 2024 in London, U.K. Information Commissioner John Edwards kicked things off with a punchy speech and Q&A with attendees, laying out the agency's priorities for 2024. Undergirding his regulatory philosophy is the need to protect personal information "while still championing innovation and privacy-respectful practices." 

Children's privacy and third-party advertising cookies were among the top three priorities outlined by the commissioner. However, the "biggest question on my desk," Edwards said, is artificial intelligence.  

The not-so-small question: artificial intelligence 

Edwards was clear that the U.K. General Data Protection Regulation provides the agency with plenty of enforcement ammunition in the AI space and that "bespoke" AI regulation is not necessary. But the rise of AI also raises questions, including those around user control, loss of sensitive personal information and potential regulatory gaps.

Add to that the emergence of generative AI, and an additional layer of questions must be asked: "When is it lawful to scrape data from the internet to train generative AI models? Are people's rights being meaningfully protected when AI models are built using their information? What do concepts such as purpose limitation and accuracy really mean in this context?" 

In a Q&A after his prepared speech, Edwards carried that sentiment forward, saying, "We won't be talking about AI regulation in a few years" because AI will be connected to virtually all aspects of the economy and society. "While there are lots of questions to consider," Edwards said, "what is clear is that any generative AI model must be developed and used in a way which complies with the U.K.'s GDPR." 

For starters, Edwards said the agency has opened a consultation series on generative AI, "outlining our initial thoughts and approaches, seeking to provide clarity in a changing technological landscape." The consultation addresses different parts of the law in turn; the first chapter explores the lawful bases for scraping public data from the internet in order to train generative AI models. 

"Our initial thoughts are that the legitimate interests lawful basis may be valid," Edwards said, "but only if the model's developer can ensure they pass the three-part test of purpose, necessity and balancing people's rights with the rights of the developer." 

The ICO published the second chapter earlier this week, which explores purpose limitation and whether it "should be applied at different stages of the AI lifecycle," he said. Future chapters will include the ICO's expectations around compliance with accuracy, accountability and "controllership across the supply chain." 

For this consultation series, Edwards asked the privacy and AI governance community for their thoughts and insights. 

Edwards said regulation of AI is not just for the ICO alone, but crosses the data protection, competition and consumer protection fields. The agency is "working closely with the CMA on a joint statement setting out our positions concerning the foundation models that underpin generative AI." He said the ICO hopes to publish the joint statement later this year. 

Separately, the ICO is working with the Financial Conduct Authority, Office of Communications, and Competition and Markets Authority "to support the responsible development of AI," which includes a pilot "AI and Digital Hub" service that "will provide tailored regulatory support to innovators" who are bringing new products and services to market, Edwards said. 

Biometrics: Not below the ICO's radar

A subset of AI regulation, biometrics is also in the ICO's sights, as was evidenced by its recent enforcement action against Serco Leisure, in which it ordered the company to stop using facial recognition to monitor employee attendance, which had been tied to their pay. According to the ICO, Serco did not offer employees a genuine choice or alternative means to log their hours, "which increased the imbalance of power between the employer and employees." 

Though it is one example, Edwards said, "Biometrics is an area of developing interest, and organizations need to know how to use this technology in a way that doesn't interfere with people's rights." 

Are the kids alright?

Children's privacy is a growing area of interest for the ICO, Edwards said, with resounding questions, such as: "Should the social media platforms allow users under the age of 16 to have accounts? Should they have stronger checks and balances, or does that responsibility lie with parents?" 

Though Edwards said such issues will involve broader societal input, he specifically warned that "children's privacy will form a large part of our ongoing work this year." Now two years in, the Children's code has prompted positive changes in the online ecosystem, according to Edwards, who said that "some of the largest online platforms have improved their default settings, reduced targeted advertising and included parental controls." 

Edwards said the ICO has been working with industry via voluntary audits, one-to-one engagement on data protection impact assessments and enforcement "to help them get it right." The agency also recently published an updated opinion on age assurance in tandem with Ofcom and continues cross-regulatory work "to ensure our priorities remain aligned as the Online Safety Act comes into force." This also includes further work with Ofcom on content moderation guidance.

Notably, there is enforcement in the works. Edwards did not get into details but said the ICO's cases against Snap and TikTok "are ongoing, and there are several other investigations underway that I can't give details about." 

Cookie banners: 'A daily reminder of the lack of real choice' 

The ICO has long worked in the advertising technology space, and this year, Edwards said the agency "will be prioritizing fair use of cookies." He said, "cookie banners are the most visible manifestation of data protection law" and often demonstrate "the power imbalance we face when we go online." 

The ICO looked at the top 100 websites in the U.K. and found 53 may have used "non-compliant cookie banners." Those companies were put on notice, he said, and of those 53, 38 complied, an 80% success rate. 

When asked about the other 20%, Edwards did not go into details, but suggested the noncompliant sites may face additional scrutiny. 

To help individuals navigate the complexity of the adtech space, Edwards announced the ICO is hosting its "first-ever hackathon, with internal colleagues and external technical experts, focusing on how we can monitor and regulate cookie compliance at scale." The goal, Edwards said, is to create "the prototype of an automated tool" to assess cookie banners across the web and alert when they breach data protection law. 

"Our bots are coming for your bots," he warned. 

2024-02-28 13:21:31
A look at proposed US state private sector AI legislation https://iapp.org/news/a/a-look-at-proposed-u-s-state-private-sector-ai-legislation https://iapp.org/news/a/a-look-at-proposed-u-s-state-private-sector-ai-legislation More than a quarter of U.S. state legislatures are considering bills that would regulate the private sector's use of artificial intelligence. With the federal government yet to pass a law governing this topic, state lawmakers are demonstrating a willingness to jump into the void.

The current state lawmaking climate around AI appears remarkably similar to the consumer privacy space after California passed the California Consumer Privacy Act in 2018. Following California, state legislatures across the country considered many different types of consumer privacy bills, including bills modeled on the CCPA, Washington state's Privacy Act and other approaches, including a Uniform Law Commission model act. The Washington Privacy Act only emerged as the prevailing model for non-California consumer privacy laws in the last two legislative cycles — although the bill was, of course, never passed in Washington.

We are beginning to see common categories and themes emerge when analyzing the current landscape of proposed state private sector AI bills. Although some bills blur the lines between these, the categories include algorithmic discrimination, automated employment decision-making, AI Bill of Rights and "working group" bills. In this article, we provide readers with a snapshot of these emerging categories and discuss some of the bills that fit into them. Although the descriptions below are by no means exhaustive, they provide a useful guide to making sense of what may otherwise appear to be chaos. Finally, although not discussed below, it is important to note the California Privacy Protection Agency is currently drafting regulations on automated decision-making technology.

Algorithmic discrimination

The first set of bills takes a broad approach to combating "algorithmic discrimination," which is generally defined as an automated decision tool's differential treatment of an individual or group based on their protected class. These bills place the burden on AI developers and businesses using AI, often referred to as deployers, to proactively ensure that the technologies are not creating discriminatory outcomes in the consumer and employment context. The nine states currently considering such bills are California (Assembly Bill 2930, formerly AB 331), Connecticut (Senate Bill 2), Vermont (H.710 and H.711), Hawaii (House Bill 1607 and its companion SB 2524), Illinois (HB 5116 and HB 5322), New York (A8129, S8209, and A8195), Oklahoma (HB 3835), Rhode Island (HB 7521), and Washington (HB 1951). Note, however, that the Rhode Island and Washington bills appear to have died.

While each of these bills differs in important respects, many are modeled after one another and impose similar obligations upon developers and deployers of AI. Provisions found in most of these bills require regular impact assessments of AI tools to guard against discrimination; disclosure of such assessments to government agencies; internal policies, programs and safeguards to prevent foreseeable risks from AI; accommodating requests to opt out of being subject to AI tools; disclosure of the AI's use to affected persons; and an explanation of how the AI tool uses personal information and how risks of discrimination are being minimized.

Among these requirements, some of the bills differentiate between "high risk" AI systems, generative AI, and general purpose or foundational AI models, imposing different obligations for each. Some of the draft legislation also imposes a duty of reasonable care standard upon developers and deployers to avoid algorithmic discrimination. Notably, most of these bills rely on government enforcement, with only a few providing a private right of action for violations.

Automated employment decision-making

While the bills discussed above take an expansive approach, the next category of pending legislation focuses on the use of AI technologies in the employment context. These bills generally target AI tools, commonly referred to as "automated employment decision tools," or "predictive data analytics" used by employers to make employment decisions about hiring, firing, promotion and compensation. To date, the following five states have introduced bills specifically targeting this area: Illinois (HB 3773), Massachusetts (H.1873), New Jersey (S1588), New York (A7859, S5641A and S7623A), and Vermont (H.114). Note that a few laws in this category have already been enacted in Illinois (AI Video Interview Act), Maryland (HB 1202) and New York City (Local Law 144).

Common features among these bills require employers to provide advance notice to and obtain consent from job applicants and employees who are subject to AEDTs, explain to candidates the qualifications and characteristics the AI will assess, and conduct and disclose regular impact assessments or bias audits of AI tools. Most of these bills, however, include carveouts for the use of AI when promoting diversity or affirmative action initiatives. Requirements also apply to developers of AEDTs, who must provide bias auditing services and certain disclosures to deployers regarding the tool's intended uses and known limitations.

Several of these bills also include additional provisions regarding employee privacy. They restrict the types of employee personal information employers can collect and disclose, and require advance written notice of, and certain limitations on, the use of employee monitoring devices. A few of these bills, like New York's S7623A and Vermont's H.114, also prohibit employers from relying "solely" on an AEDT output when making hiring, promotion, termination, disciplinary or compensation decisions.

AI Bill of Rights

The next set of bills introduced this year would establish an AI Bill of Rights. Examples of these bills can be found in Oklahoma (HB 3453) and New York (A8129 and its companion S8209).

While they overlap in some ways with the bills discussed above, these proposed bills would provide state residents the rights to know when they are interacting with AI, to know when their data is being used to inform AI, not to be discriminated against by the use of AI, to have agency over their personal data, to understand the outcomes of an AI system impacting them and to opt out of an AI system. Oklahoma's HB 3453 would also grant rights to rely on a watermark to verify the authenticity of a creative product and to approve derivative media generated by AI that uses a person's audio recordings or images.

"Working Group" bills

The final category of AI bills takes a more wait-and-see approach by creating government commissions, agencies or working groups to study the implementation of AI technologies and develop recommendations for future regulation. Such bills can be found in Utah (SB149), Florida (HB 1459), Hawaii (HB 2176 and SB 2572) and Massachusetts (S.2539). Note, however, that some of the bills already mentioned, like Connecticut's SB 2, also provide for the creation of a commission to similarly assess future AI policy.

These bills outline the makeup of the working groups, usually providing appointment authority to the state's governor, legislature and preexisting state departments, and allowing participation by industry stakeholders. The working groups are tasked with developing acceptable use policies and guidelines for the regulation, development and use of AI technologies in the state.

One bill to highlight in this category is Utah's SB 149, which appears likely to pass after receiving Senate approval on 13 Feb. 2024. In addition to creating an Office of AI Policy and AI Learning Laboratory Program to analyze potential AI legislation, the bill includes a regulatory mitigation licensing scheme where participants of the AI Learning Laboratory Program can avoid regulatory enforcement while developing and analyzing new AI technologies. Outside the working group context, the bill also imposes liability on uses of generative AI that violate consumer protection laws if not properly disclosed.

2024-02-28 13:20:51
Privacy is 'North Star' for Chevron's Boshell https://iapp.org/news/a/privacy-is-north-star-for-chevrons-boshell https://iapp.org/news/a/privacy-is-north-star-for-chevrons-boshell "I am not data."

These words are displayed on a piece of art hanging prominently in Paige Boshell's office. It hangs there, she said, because "I think of every person my decisions, or my advice, will impact."

Privacy counsel at multinational energy corporation Chevron, Boshell, CIPP/E, CIPP/US, CIPM, FIP, PLS, began her law career in 1992 as a financial services regulatory lawyer, drawn to the fact that there was "a right answer, a correct way to do things." Her work transitioned into privacy — including time as privacy counsel at financial services company USAA and associate general counsel, privacy at social media platform Meta — and Boshell said she came to love working in the field's "gray area" and is "energized" by the work "every day."

"It's not for the faint of heart. It's not for the young lawyer who wants to be certain, who wants to find the right answer. You have to be willing to play the long game in privacy because you have to look at all the different parts of the organization that impact individuals' privacy at the end," she said.

At Chevron, where she began in March 2023, Boshell leads privacy for the Americas and the Asia-Pacific region. Both areas are teeming with legislative privacy activity — from new laws in China, India and Vietnam, with more to come around the world, to California, Colorado, Connecticut and others, in the U.S. In navigating the hectic terrain, Boshell said privacy's fundamental principles are her "North Star" and she is tuning in with interest as different jurisdictions establish their own take on those core principles.

"Some parts of it are very technical, but what I love about it is practicing in the gray areas. You look at things, you assess them, and you can react emotionally or instinctively, or logically and technically, but if your thinking is informed by those core principles you can make some really great decisions," she said. "But on the other hand, it is true that so many different jurisdictions are coming up with highly technical and sometimes inconsistent requirements. What I like is it's a mix of judgment, understanding and appreciation of and respect for privacy and individuals.

"It really takes a lot of faith in your ability to understand and implement the principles, recognizing that there are technical variations proliferating throughout the country, throughout the world," she said. "But you have to have this north star and you have to assume the people you are working with are doing their best, and that if they are missing a piece of the puzzle that you explain it in a way that is accessible to them."

After the Gramm-Leach-Bliley Act was enacted in 1999 — a "sea change for banks" that happened while she was in financial services law at the firm Bradley Arant Boult Cummings in Birmingham, Alabama — Boshell said her "cradle to grave" background with banking clients made working with the act's consumer financial privacy protections "a natural fit."

"Banks always had a sense of protecting their customer confidentiality, so it wasn't that far of a stretch, but the concept of data inventory, the concept of disclosing practices, the concept of having specific requirements in vendor contracts was all new and different," she said. "I could work on the service provider issues, I could work on the consumer disclosure uses, I could work on privacy programming and privacy strategy and so it was a natural evolution. I really liked the privacy work and the principles behind it."

Boshell especially liked that federal regulators were looking at data and data practices as something the consumer has an interest in, should understand and should participate in. That concept, and the privacy practice, further exploded when the EU General Data Protection Regulation was adopted in 2016.

"It's certainly become a lot more complicated than when I started, but I think my vantage point from those very naïve first days has really helped inform my practice as a whole and the way I look at privacy strategy. You're not just trying to comply with the law that's in effect now, or in six months. If you have a larger practice, you have to keep in mind what other jurisdictions are doing, you have to keep in mind the trends toward greater regulation, you have to keep in mind the technological trends, the different ways of collecting data. You really have to look at the individual problem or issue, or the business goal or strategy, in the context of this larger privacy world and not just the privacy legal world."

Inspired by her father, who was a "brilliant litigator," Boshell said she always knew she wanted to be a lawyer and loved the sense of "a corporate practice as advocacy for the individual." Her career in privacy has been "wonderful," as she's watched the field grow into what it is now.

"I can anticipate where it's going for some issues, and I cannot anticipate where it's going for others. It really has been fascinating to see the development and to see the impact on individuals and the way individuals have used technology for good. I'm also in cybersecurity, so I see how a lot of that is used for bad, but I think at least on the privacy side, there's room for ideas," she said. "It's another reason I love this practice. How do we fit these ideals into a tech that we don't quite know the capabilities of yet. That's the scary thing about AI. That was the scary thing about biometrics. That was the scary thing about geolocation. How do you apply these core principles to a technology that is rapidly evolving and is enabling the rapid development of new data practices. I think that's where those of us who really have a passion for privacy can help advise our clients because we think that way, we dream that way, we breathe that way."

Privacy practitioners, Boshell said, are lifelong learners.

"The laws are evolving, the tech is evolving, the data practices are evolving — and will continue to do so for the foreseeable future," she said. "So you have to be someone who likes to practice in the gray, who likes to flex into different areas, who likes to have a true North Star but apply it in vastly different use cases."

2024-02-28 11:49:28
The 'hidden obligation' rides again! EU representatives under GDPR, DSA, NIS2 and others https://iapp.org/news/a/the-hidden-obligation-rides-again-eu-representatives-under-gdpr-dsa-nis2-and-others https://iapp.org/news/a/the-hidden-obligation-rides-again-eu-representatives-under-gdpr-dsa-nis2-and-others Organizations outside the EU have been managing the EU General Data Protection Regulation's obligation to appoint an EU-based representative since 2018 even though it may not be as well known as other components of the law, such as the data protection officer and cross-border transfer limitations. With a number of new EU laws requiring a representative, such as the Digital Services Act, as well as those to come in the next few years, such as the Artificial Intelligence Act, now is the time for organizations outside the EU and their advisors to reengage with EU representative obligations.

The beginning: GDPR data protection representative

Article 27 of the GDPR requires organizations outside the EU to appoint a representative within the EU to act as their point of contact so data subjects and EU authorities can reach them with concerns about data processing and — for the individuals — exercise the rights regarding their personal data provided under GDPR.

Unfortunately, when the GDPR became enforceable in 2018, this requirement was a hidden obligation. It was not discussed with particular enthusiasm in the EU simply because it was not relevant. At the time, privacy professionals in Europe focused on ensuring their EU-based clients met the other obligations the GDPR placed upon them, and appointing a representative was not one of them.

The effect kept the representative obligation out of the discussion elsewhere; if the EU was not discussing this requirement, why would anyone outside the EU? It was not on their radar, possibly because more attention was focused on the DPO, cross-border transfers and data protection impact assessments under the GDPR. This is not to suggest those fundamental parts of the GDPR are unimportant, but rather to note the representative obligation remained hidden from many of the organizations for which it was intended.

This was not the case with everyone, but compliance was certainly not universal.

Brexit caused another issue by adding a separate obligation to appoint a U.K. representative for organizations without a U.K. establishment. Perhaps due to a lack of knowledge about the representative role, few EU companies sought a U.K. representative. A number of U.K. companies targeting sales in the EU — now established solely as a "third country" from the EU GDPR's perspective — did appoint an EU GDPR representative, but not all.

While the representative obligation has been enforced, this has been sporadic. A few big names, most significantly ClearView AI, received orders to appoint a GDPR representative; however, the main focus of the EU authorities during the first five years of GDPR enforceability has been within its borders.

However, the representative role gradually emerged from the shadows in the early 2020s, as more EU-based organizations — concerned by greater scrutiny and enforcement of international data transfers — began to apply greater due diligence to their partners and suppliers outside the EU. This led to an increased focus on the compliance of those external organizations and a vigorous approach to vendor management, partly caused by the increased use of privacy platforms to manage compliance. The resurgence of the representative was not directly driven by EU enforcement activities but by the commercial requirement to show GDPR compliance.

The representative renaissance

By including a representative obligation in a raft of new regulations and directives, the EU made it clear the representative obligation is not going anywhere. As of February 2024, some of these are already enforceable, while others are in force but have not yet reached the date from which enforcement can commence. In the case of the Network and Information Security Directive, enforcement has been an option since 9 May 2018 — before the GDPR — although the number of organizations it applies to outside the EU is potentially much smaller than under the NIS2, which will replace it later this year.

Some specifics of the obligations under new EU laws are set out below. Each obligation only applies to organizations providing services in the EU, and each appointment must be made in writing, e.g., under contract, rather than via an informal arrangement. A summary table explaining which types of service each of these regulations and directives apply to is available here.

Digital Services Act legal representative

Currently, the most discussed example of these new laws is the DSA. Applying to "providers of intermediary services" — a wide definition covering almost any organization that provides a service digitally — the DSA places obligations on providers to ensure illegal online content can be removed, illegal products and services are made unavailable, and online traders can be identified, among others.

To ensure the effectiveness of these expectations, against a backdrop of reluctance from providers across the globe to impose additional gatekeeping to their services, Article 13 of the DSA anticipates providers with no EU establishment targeting their services to the EU will appoint a legal representative in the EU.

The representative will be appointed in the most relevant EU member state identified by the service provider and will act as the point of contact to remove illegal content, make illegal products unavailable and provide information about online traders. One of the most significant aspects, when compared to the GDPR representative obligation, is the need to notify the details of that representative to the digital services coordinator in their respective EU country. This prevents a wait-and-see approach to compliance, where some elements are only added when the specific need arises, e.g., only appointing a representative if and when someone asks who their DSA legal representative is. It will be clear to the EU authorities when a representative is appointed, as the DSC will have a specific record of when it was notified of the appointment.

Network and Information Security Directive legal representative

The current NIS Directive — enacted in each EU member state, rather than having direct effect as is the case for the other laws described here — applies primarily to providers of critical infrastructure-type services such as water, energy, transport, etc. and requires them to have minimum cybersecurity standards to ensure uninterrupted public services.

However, it also applies to online marketplaces that allow the creation of contracts between two external parties, online search engines and cloud computing service providers. When these companies have no EU establishment, they are expected to appoint an EU representative.

The NIS2 Directive, which replaces the NIS on 18 Oct. 2024, applies to a much wider group of organizations, taking in many additional providers of services delivered online, which the DSA will also cover. These include domain-registration services and domain name system service providers. Where the organization is at least medium-sized — meaning it has either 250 employees, an annual turnover of 50 million euros or an annual balance sheet of 43 million euros — or is the sole provider in the country, or its disruption would have a significant impact, the covered organizations also include data centers, content delivery networks, social networks and IT managed services, including security managed services.

Under the NIS2, regulated organizations outside the EU, and their representatives, will also need to be registered with the relevant authority in the most relevant EU member state. The EU Agency for Cybersecurity will prepare an EU-wide list of these organizations from the information provided to each member state.

Terrorist Content Online Regulation legal representative

The Terrorist Content Online Regulation is a relatively brief document compared to the others and deals with a single issue. However, it does so in a manner that is likely to be challenging for those to whom it applies.

Essentially, the TCO requires a hosting service provider — any organization that makes information provided by a user publicly available — to remove terrorist content, meaning material which incites, solicits or instructs terrorist activity, within one hour of receiving an order to do so from a competent authority. The hosting service provider will receive a 12-hour warning before it receives its first order. It will not receive advance warning for subsequent orders.
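
To make that window concrete, the following minimal sketch — assuming Python and hypothetical timestamps, and intended only as an illustration of the one-hour rule described above rather than a compliance tool — shows how a hosting service provider might compute the removal deadline the moment an order is received.

    # Illustrative sketch of tracking the TCO's one-hour removal window.
    # The clock is assumed to start when the hosting service provider itself
    # receives the order, as described above; timestamps are hypothetical.
    from datetime import datetime, timedelta, timezone

    REMOVAL_WINDOW = timedelta(hours=1)

    def removal_deadline(order_received_at: datetime) -> datetime:
        """Return the moment by which the flagged content must be removed."""
        return order_received_at + REMOVAL_WINDOW

    def time_remaining(order_received_at: datetime, now: datetime) -> timedelta:
        """Time left to act; a negative result means the deadline has passed."""
        return removal_deadline(order_received_at) - now

    received = datetime(2024, 2, 22, 9, 0, tzinfo=timezone.utc)  # hypothetical order
    print(removal_deadline(received))                            # 2024-02-22 10:00:00+00:00
    print(time_remaining(received, received + timedelta(minutes=45)))  # 0:15:00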

Many privacy pros have found the GDPR timelines challenging, particularly the 72-hour breach notification and requirement to provide a formal response to some data requests within one month. With the TCO timeline being so much shorter, they may find it even more difficult to achieve. Hopefully the competent authorities will apply a degree of reasonableness in their enforcement of the time limit, considering relevant factors.

This one-hour time limit also applies to organizations outside the EU that are likely in entirely different time zones and with different primary languages. There is an argument that this requirement is anticompetitive, as only the largest platforms could apply the resources necessary to meet the one-hour deadline, preventing smaller organizations from operating in the EU.

Although the one-hour clock only starts when the hosting provider itself receives the request, the organization's EU legal representative will be expected to receive and identify the removal order and forward it to their client quickly. This at least prevents a delay by the representative from formally hampering the client's compliance, but if the representative takes three days to forward the request, the client's adherence to the one-hour timescale is unlikely to be viewed as an overall success — and it may be difficult to argue before the Court of Justice of the EU that the delay was the fault of the representative.

There is another interpretation of the representative's obligation. The TCO expects the hosting service provider to grant its representative "the necessary powers and resources to comply with those removal orders," which might be interpreted as giving the representative access to the client's hosting service and the powers to take down material themselves. This has clear issues under the GDPR and NIS/NIS2, including protecting the personal data processed by those clients and ensuring the security of their networks. It's hard to imagine this was the intent, but it is possible, given the TCO's strong desire to have that material taken down as an immediate priority.

The relevant competent authority in the designated member state must be notified of the legal representative's details.

Data Governance Act legal representative

The Data Governance Act applies to significantly fewer organizations than the laws listed above. It is intended only to cover organizations facilitating voluntary data sharing, either for commercial benefit, such as with data intermediation service providers, or charitable purposes, such as with data altruism organizations. The DGA aims to increase trust in data sharing, strengthen mechanisms to increase data availability and overcome technical obstacles to the reuse of data. To achieve this, it facilitates data sharing for appropriate purposes and protects the data being shared.

The legal representative role for organizations outside the EU is largely limited to the usual representative activity: receiving communications on behalf of their clients within the EU. A curious additional obligation has been added under the DGA: the legal representative is expected to "comprehensively demonstrate to the competent authorities … upon request, the actions taken, and provisions put in place … to ensure compliance with this Regulation."

The competent authority in the relevant EU member state is to be notified of the legal representative's details as part of a wider obligation on data intermediation service providers and data altruism organizations to register.

Federal Act on Data Protection legal representative

For the sake of completeness, it's also worth noting the obligation under Switzerland's Federal Act on Data Protection to appoint a data protection representative when an organization lacks a Swiss location. This requirement became enforceable in September 2023.

The obligation arises in fewer circumstances than the EU or U.K. equivalents, as it applies only to organizations acting in the role of data controller — not also to data processors, as is the case with the GDPR — that undertake the processing of Swiss personal data regularly and on a large scale.

Conclusion

Although these additional EU representative obligations place additional requirements onto the already substantial compliance burden for organizations outside the EU, the purpose and benefits are clear: without an EU point of contact, the effectiveness of EU laws — and therefore the protections of the individuals based in the EU — would be hindered in a very real way.

The challenge now is meeting these obligations in an affordable and operationally achievable manner. Time will tell how many organizations fail to do so, and the implications of those failures for them.

2024-02-22 13:42:32
Momentum to better protect children's privacy in Australia https://iapp.org/news/a/momentum-to-better-protect-childrens-privacy-in-australia https://iapp.org/news/a/momentum-to-better-protect-childrens-privacy-in-australia There is global momentum to bolster children's privacy protections — including in Australia. In countries such as the U.K. and the U.S., privacy reforms have sought to advance the best interests of the child, create a safe space for children to thrive online and empower responsible parents and caregivers. Australia's current privacy regulatory landscape requires an overhaul to pursue these goals.

According to the Australian Community Attitudes to Privacy Survey, released in August 2023, protecting their child's personal information is a major concern for 79% of Australian parents, with only 50% of parents feeling they can protect their child's privacy. For 91% of parents, privacy is of high importance when deciding whether to provide their child with access to digital devices and services.

The increased use of artificial intelligence, generative AI technology and social media apps by children presents new privacy risks and exposes them to harms including economic exploitation and exposure to dark patterns or explicit material, bullying and harassment — such as deepfakes and doxxing — emotional distress, and threats to physical safety.      

There has been a rise of educational technology in Australian schools, with a recent Human Rights Watch report finding 89% of global educational apps and websites assessed, including those used by Australian schools, put at risk or directly violated children's privacy by using personal information for purposes unrelated to education — including extensive tracking and sharing of personal information with advertising technology companies.

Recent regulatory changes and proposed privacy reforms in Australia intend to address emerging privacy risks and potential harms to children, with a focus on digital harms.

Privacy Act

Australia's Privacy Act 1988 does not currently specifically address children's privacy. Along with state health information laws, the act does require that an individual has the capacity to consent where consent is used as a lawful basis to collect, use or disclose personal information (which occurs in limited scenarios). Capacity is determined by the organization or agency handling the personal information.

As there is no prescribed age to determine capacity, the Office of the Australian Information Commissioner recommends the following as general rules:

  • An individual under the age of 18 has the capacity to consent if they have the maturity to understand what is being proposed.
  • If a child lacks maturity, a parent or guardian may be able to consent on their behalf.
  • If it is not practical to assess the capacity of individuals on a case-by-case basis, it can be assumed that an individual over the age of 15 has capacity.

There is a permitted exemption under the Privacy Act for organizations providing a health service to collect a child's health information from, or use or disclose it to, a responsible person, including a parent or guardian. The OAIC recommends that, as a general rule, when deciding whether to disclose health information, a health service provider should also consider the child's degree of autonomy and understanding of the relevant issues, circumstances and the nature of the information being handled. One example given is a child who explicitly asks for information — such as a pregnancy or mental illness — to be kept in confidence, which can be a reason not to disclose their health information to a parent. If the child is determined to lack capacity, they may still be able to contribute to decisions and should be involved in the decision-making process to the extent possible.

eSafety Commissioner and the Online Safety Act    

The eSafety Commissioner has pioneered Safety by Design to make digital spaces safer and more inclusive to protect vulnerable persons, such as children. Its broad advocacy and powers relating to online safety intersect with privacy goals to protect children from harms online, but the eSafety Commissioner highlighted they are two related, but distinct, concepts.

Australia's Online Safety Act 2021 gave the eSafety Commissioner new powers to protect the online safety of all Australians, with particular focus on protecting children from online abuse and exposure to harmful content. Under the act, the eSafety Commissioner can require online service providers to report on how they comply with Basic Online Safety Expectations. The expectations include protections for children from content that is not age appropriate. Reasonable steps online service providers may take to meet these expectations are based on the nature of the business, but may include ensuring default privacy and safety settings of services targeted at, or used by children, are robust and set to the most restrictive level.

In March 2023, as part of a government response, the eSafety Commissioner submitted to the government a roadmap on age verification, in particular to prevent and mitigate harm to children from online pornography.

Proposed Privacy Act reforms and Children's Online Privacy Code

In 2019, the Australian Competition and Consumer Commission released its Digital Platforms Inquiry Report and in February 2023 the Attorney-General's Department publicly released the Privacy Act Review Report, both of which raised the need to better protect children's privacy online. In response to the attorney-general's report, the Australian government in September 2023 recognized "(c)hildren are particularly vulnerable to online harms. Children increasingly rely on online platforms, social media, mobile applications and other internet connected devices in their everyday lives."

The government agreed to include in its proposed Privacy Act reforms that a child would be defined as an individual under 18 and to introduce a Children's Online Privacy Code that applies to online services "likely to be accessed by children." To the extent possible, the scope of the code is to align with the U.K. Age-Appropriate Design Code. The government's intention to create a code that is consistent with and adapted from global standards will be practical for Australian entities with a global presence to comply with.

The government also agreed in principle that:

  • The act should codify that valid consent must be given with capacity, including in the case of children.
  • Collection notices and policies must be clear and understandable to their audience, particularly any information addressed specifically to a child.
  • Entities should regard the best interests of the child when determining whether the collection, use or disclosure of personal information relating to a child is fair and reasonable in the circumstances.
  • A right to de-index online search results containing personal information about a child should be introduced.
  • Direct marketing to a child should be prohibited unless the personal information used for direct marketing was collected directly from the child and the direct marketing is in the child's best interests.
  • Targeting to a child should be prohibited, with an exception for targeting that is in the child's best interests.
  • Trading in the personal information of children should be prohibited.

Keeping the momentum going

The office of the eSafety Commissioner has said it will work with the tech industry to develop codes to help online service providers comply with obligations under the new Online Safety Act, which may include further children's privacy-related requirements. On 20 Feb., Australia and the U.K. signed an Online Safety and Security Memorandum of Understanding to advance online safety, including to work together to protect children’s safety and privacy through regulatory coherence between the two countries.

Australian Prime Minister Anthony Albanese also recently proposed bringing forward legislation in response to the Privacy Act review, including laws to prevent doxxing. A recent open letter from children's privacy advocates to the attorney general, minister for families and social services, and minister for communications voices concern over the "extensive process" for privacy reform in Australia, and its urgency to protect children's privacy in a data-driven economy.

Further consultation and legislative proposals are anticipated, and the government agreed those developing a Children's Online Privacy Code will be required to consult broadly with children, parents, experts, advocates, industry and the eSafety Commissioner. Australian businesses providing services that are likely to be accessed by children should start to evaluate their privacy practices and keep a watch on how local and global standards evolve.

2024-02-22 13:40:25
Defining 'comprehensive': Florida, Washington and the scope of state tracking https://iapp.org/news/a/defining-comprehensive-florida-washington-and-the-scope-of-state-tracking https://iapp.org/news/a/defining-comprehensive-florida-washington-and-the-scope-of-state-tracking You asked, we listened. Inquiries have reached a steady hum over how the IAPP determines which bills to include in its US State Privacy Legislation Tracker. As state privacy legislation grows in number and complexity, questions arise with respect to those bills that occupy the fuzzy gray area between comprehensive and not — namely, Florida's Digital Bill of Rights and Washington state's My Health My Data Act. And to you, state tracking aficionado, we offer an explanation of our classification.

Perhaps it's best to work backward from what isn't comprehensive. As defined in the tracker, a bill is not considered comprehensive if "it does not qualify due to its scope, coverage, or rights."

A bill is narrow in scope if it applies only to a specific set of data types, like financial or health data, or data subjects, like children. A bill is narrow in coverage if its applicability includes only a single industry, like the automotive industry, or if its thresholds apply, in practice, to only a handful of companies. A bill is narrow in rights if it is targeted at providing only one or two consumer data rights, such as deletion or correction.

Florida's Digital Bill of Rights contains certain identifying features of bills included in the tracker but fails to be comprehensive in coverage due to relatively high thresholds of application that limit applicability to just a handful of controllers.

Correspondingly, Washington's MHMDA reads initially as a bill regulating digital health information — patently not comprehensive — but includes provisions that cause it to have broad coverage across industries in practice, giving it some characteristics of comprehensive state bills. However, it remains outside the tracker's purview because its scope applies only to consumer health data.

The state tracking cottage industry has arrived at a delicate consensus. Of seven other trackers sampled, four do not consider Florida's bill to be comprehensive — Husch Blackwell, the Future of Privacy Forum, Sourcepoint and Transcend — while three do: Bloomberg, OneTrust and Termly. None lumps the MHMDA in with the morass of comprehensive legislation.

While this article represents the IAPP's current stance, it may adjust this position in the future in light of new information, bills, stakeholders or member feedback. The IAPP will annually reassess its position on the definition of "comprehensive" to best stay current with state legislation trends.

Florida

Sunshine State lawmakers made waves in June by passing Senate Bill 262, the Digital Bill of Rights, which affords new data privacy rights to Floridians, among other provisions. While at a glance the act has all the makings of a fully-fledged, comprehensive privacy law, a peek under the hood reveals that its requirements do not apply to the overwhelming majority of businesses given its significantly narrowed definition of controller.

Under the statute, a controller must make over USD1 billion in global annual revenue and meet any of the following criteria:

  • Derives at least 50% of its global gross revenue from the sale of online advertising.
  • Operates a smart-speaker and voice-command service with an integrated virtual assistant connected to a cloud computing service that uses hands-free verbal activation.
  • Operates an app store or a digital distribution platform that offers at least 250,000 apps for consumers to download.

Considering the limited universe of companies operating in the smart speaker, app store or online advertising industries — an amount further reduced by the global annual revenue threshold — the list of entities that presently fall under the law's jurisdiction is not long. While the law does contain certain other obligations that apply more broadly, such as requiring consumer consent for the sale of "sensitive data" or expanding opt-out rights, its overall scope is not considered comprehensive.
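
To make the conjunctive structure concrete — a high revenue floor that must be met in addition to at least one of the three business-model criteria — the following minimal sketch expresses the test in code. The field names and example figures are assumptions made for illustration; the sketch paraphrases the summary above, not the statutory text.

    # Illustrative sketch of the Digital Bill of Rights controller test:
    # a USD1 billion global revenue floor AND at least one of three
    # business-model criteria. Field names and figures are assumptions.
    from dataclasses import dataclass

    REVENUE_FLOOR_USD = 1_000_000_000
    APP_STORE_FLOOR = 250_000

    @dataclass
    class Business:
        global_annual_revenue_usd: float
        share_of_revenue_from_online_ads: float  # 0.0 to 1.0
        operates_qualifying_smart_speaker: bool
        apps_offered_in_store: int

    def is_florida_controller(b: Business) -> bool:
        """Apply the revenue floor plus any-of-three test summarized above."""
        meets_revenue_floor = b.global_annual_revenue_usd > REVENUE_FLOOR_USD
        meets_any_criterion = (
            b.share_of_revenue_from_online_ads >= 0.5
            or b.operates_qualifying_smart_speaker
            or b.apps_offered_in_store >= APP_STORE_FLOOR
        )
        return meets_revenue_floor and meets_any_criterion

    # A hypothetical retailer well under the revenue floor is out of scope,
    # even if it runs a large app catalog.
    print(is_florida_controller(Business(200_000_000, 0.1, False, 300_000)))  # False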

Contrast the Digital Bill of Rights with the California Consumer Privacy Act or Utah's Consumer Privacy Act, which each have USD25 million gross annual revenue thresholds and do not expressly target the smart-speaker or app store industries. Or compare it to the Virginia model, which regulates controllers that process personal data for at least 100,000 consumers annually as a starting threshold. Or to the Nevada approach, which eschews numerical thresholds entirely and instead captures organizations based on their collection or use of personal information of that state's citizens. Each of these thresholds covers orders of magnitude more businesses than Florida's disproportionately high threshold does.

Washington

Washington's MHMDA initially reads as strictly sectoral in scope, intended to expand data subject rights over their digital health data. This characterization vastly undersells the potential reach of the bill, which has been heralded as arguably "the most consequential privacy legislation enacted since the original California Consumer Privacy Act."

Key to this characterization is a sweeping definition of consumer health data that includes any personal information "linked or reasonably linkable to a consumer and that identifies the consumer's past, present, or future physical or mental health status." A nonexhaustive list of data types follows, establishing just how far this language reaches. It includes, for example, "data that identifies a consumer seeking health care services" or "any service provided to a person to assess, measure, improve, or learn about a person's mental or physical health." Colorable arguments can be made that, based on even the slightest relation to health services, categories like search, shopping history or any online research into topics related to health, wellness, nutrition or fitness will fall under the MHMDA's jurisdiction.

Washington judges will have much to parse out in drawing the MHMDA's parameters when it comes into effect later this year. If its reach fulfills the extensive expectations anticipated by many, this may demand a conversation over including it on the state tracker. However, the text and intent of the bill indicate otherwise. Right now, the MHMDA stands as a digital health bill put forth to protect Washingtonians' health privacy as "part of a comprehensive pack of legislation" responding to the U.S. Supreme Court decision in Dobbs v. Jackson Women's Health Organization.

For readers outside the U.S.

To some — especially those coming from a non-U.S. perspective — referring to any state legislation as comprehensive may feel like an affront to the language itself. That most U.S. state legislation exempts small- and medium-sized businesses from its jurisdiction via a revenue threshold or comes loaded with industry-specific exemptions for health, financial or children's personal information, to name a few, would seem disqualifying when analyzing for comprehensiveness.

For comparison, the EU General Data Protection Regulation — progenitor for the bulk of the substance comprising state bills — doesn't include any revenue threshold. Instead, a controller incurs obligation where it processes personal information, and provides goods or services accessible to consumers in the EU or European Economic Area, or monitors user behavior in the EU or EEA. Likewise, while the GDPR does exempt certain entities in certain situations, it does not exempt large sectors of its economy in the way U.S. state laws do. Other international privacy statutes lack similar revenue-based or sectoral exemptions, such as China's Personal Information Protection Law, India's Digital Personal Data Protection Act or Brazil's General Data Protection Law, to sample a few.

The American conception of comprehensiveness in a privacy context refers instead to a notion that the law applies to all consumers with respect to their relationships with larger online entities that collect and process personal information. To explain such exemptions: U.S. lawmakers generally prioritize, as a policy matter, innovation from small- and medium-sized businesses, leading to revenue thresholds. States must also contend with federal preemption from sectoral privacy laws like the Health Insurance Portability and Accountability Act, Fair Credit Reporting Act and Gramm-Leach-Bliley Act, requiring sectoral exemptions. None of this is to mention the multidimensional policy debate around tackling Big Tech, which heavily influenced Florida's bill.

Language is, of course, constantly evolving in all directions inside the U.S., and these exemptions have not stopped the word "comprehensive" from establishing its place in the American lexicon as an apt description of the sort of privacy legislation discussed herein.

None of this is to say that businesses operating in Florida or Washington should disregard the Digital Bill of Rights or MHMDA due to their exclusion from the IAPP's tracker. Quite the opposite: privacy professionals operating in these states should keenly monitor the development of these acts, which likely will result in enforcement, litigation and new compliance obligations, and possibly spur legislative mimicry across the country. The IAPP has covered Florida's Digital Bill of Rights and Washington's MHMDA in-depth and will continue to alert the privacy community as they develop and as other states pass relevant legislation.

2024-02-22 11:42:35
Implications of EDPB's looming 'pay or OK' guidelines https://iapp.org/news/a/privacy-and-adtech-implications-surrounding-metas-pay-or-ok https://iapp.org/news/a/privacy-and-adtech-implications-surrounding-metas-pay-or-ok EU data protection authorities have yet to decide how the "pay or OK" user consent model fits under the EU General Data Protection Regulation. The debate stems from Meta's move to charge for ad-free Facebook and Instagram services in the EU, a practice that raises questions around informed consent and a clear opt-out option.

"Within this particular context, and if we are looking at consent under the GDPR, are people really being given a free choice?" Norway data protection authority, Datatilsynet, Head of International Tobias Judin said during a recent IAPP LinkedIn Live. "People are contacting us at the Norwegian DPA because they feel that they are not being given a free choice. Maybe they cannot afford to pay a fee and so probably in some cases, they feel that the 'no way option,' leaving the platform, is also not a realistic option, whereby they may feel coerced to consent."

Norway has been out in front of Meta's targeted advertising issues and the "pay or OK" debate since last year. Datatilsynet placed an interim ban on Meta's data processing for behavioral advertising in July 2023 before petitioning the European Data Protection Board for a ban across the European Economic Area. The EDPB sided with Norway, issuing a binding decision to ban Meta's practices on 1 Nov. 2023.

Days before the EDPB ban became public, Meta rolled out its "pay or OK" system, which immediately drew criticism from the Datatilsynet and fellow EDPB board members. Since the decision, the EDPB has used plenary meetings to debate its position and potential guidelines on "pay or OK" while consumer organizations have also begun asking the board to clarify whether Meta's system is GDPR compliant.

"If we now say 'OK, this business model is perfectly valid,' then this is what the internet is going to look at, and it could happen that you are faced with 'pay or OK' choices for every service you potentially use," Judin said. "Therefore, it's quite urgent for data protection authorities to have a harmonized approach to this because essentially it is now or never."

Consumers and consent

The heart of the issue for businesses is the dwindling number of legal bases for processing under the GDPR. User consent has been the go-to legal basis for advertising technology businesses for years, particularly as it relates to applying third-party cookies and other user tracking.

Monetization of services is important to every business and some advertisers argue that paying to access services is a standard business model. 

"A big stakeholder is the user, of course, who wants to benefit from free information and content," Criteo Vice-President of Government Affairs and Public Policy Nathalie Laneret, CIPP/E, CIPM, said during the LinkedIn Live. "When he or she consents, they receive ads that are relevant to their preferences."

Laneret also said it is important to "take a balanced approach" to analyzing data processing practices and see who is affected, including the company, advertisers, publishers, the data subject, etc. Every "stakeholder" will face implications surrounding a solution to data processing monetization.

In terms of consumer impact, EU citizens are protected from an imbalance of power under the GDPR, which provides that individuals have the right to refuse to allow companies to process their personal data. University of Surrey associate professor in Law and University of Oxford Research Associate Mikołaj Barczentewicz said the "scale" of impact or invasiveness of the data processed could be subjective.

In a recent letter urging the EDPB to take a stance on the model, 28 consumer advocacy groups warned that "pay or OK" is likely to proliferate across industries if it goes unaddressed.

"We believe that Meta, and other companies likely to follow suit, are cognizant of the fact that a majority of users will neither be able nor willing to pay a fee," the groups wrote, also noting widespread adoption of the model would "wash away all realistic protections against surveillance capitalism."

Potential solutions

The Digital Services Act aims to create transparency between consumers and online platforms, and it may offer partial guardrails against "pay or OK." Laneret indicated sensitive data processing that could potentially be looped in with "pay or OK" consent is unlawful under the DSA, which has been enacted.

"With the Digital Services Act and its Article 26 … it will be prohibited to profile users on the basis of sensitive personal data," Laneret said. "So, I think I would somehow say this issue (of sensitive personal data processing concerns) is actually solved."

Alternatives to the "pay or OK" model do exist, according to Barczentewicz. Contextual advertising, which lets companies fund services without tracking individual users, presents a solution for consumers who want to protect their data without paying a fee. However, social media platforms may struggle to sell contextual advertising to advertisers looking to reach the specific audiences that targeted ads can find.

An EDPB position that does not balance user and business interests poses risks to all, Barczentewicz said. He indicated the best way to avoid those risks is for all parties to speak their minds regarding potential impacts and hardships.

"My recommendation for businesses, the ones that rely on online advertising, or targeted advertising, is to make their voices known publicly," Barczentewicz said. "For example, if they can show that moving from personalized advertising to contextual advertising would make it impossible to serve some cohorts of users, well, that would be the kind of data that would be very helpful at this stage in the public conversation."

2024-02-20 12:25:20
Access to credit risk data: The scope of automated decision-making in Chile's draft law
https://iapp.org/news/a/access-to-credit-risk-data-the-scope-of-automated-decision-making-in-chiles-draft-law

For a region like Latin America, where credit and banking access is especially challenging and often limited by consumers' credit history, it is relevant to understand the role of companies generating predictive instruments and their level of consumer transparency.

Chile is anticipated to enact a new personal data protection law and, like the EU General Data Protection Regulation, the draft grants data subjects the right to object to being subject to decisions based on automated data processing. It also grants the right to access information on the logic used in decisions determined through automated data processing. However, a question arises as to whether the proposed regulation will apply to consumer reporting companies that use predictive models to determine credit risk scores, which are definitive when accessing financial products.

Financial institutions use credit risk scores to evaluate and predict a consumer's credit behavior. Scoring models use mathematical and statistical data to compute a consumer's specific level of default risk, considering information like past insolvency or default on obligations, which is commonly shared among financial entities. Companies generate predictive values, evaluate and determine credit risk based on a certain score, and share a report with financial institutions, which then evaluate the findings and determine the conditions of the financial products and services offered. The companies do not necessarily have a direct relationship with the consumer, so any dispute would be through the financial entity.

The problem is credit score reports are often the only instrument used to evaluate consumers' access to financial products like mortgages, credit cards and bank accounts, so the accuracy and completeness of these reports are critical. Inaccurate credit score reports cause great harm by reducing the chances of accessing credit and banking services. This situation is particularly dramatic for low-income families who, for example, rely on short-term loans to meet their monthly living expenses.

The fact that consumers can only dispute the information contained in these reports with financial institutions puts them at an enormous disadvantage, with no way of accessing details on the logic used to create their score. As financial institutions are only recipients of a credit score report, they cannot provide this information, which is vital for consumers to understand whether their score is objective, given the risk of discrimination based on the data used.

Despite the need for greater transparency, algorithm-based scoring methods are considered confidential and are protected as trade secrets to prevent other companies from copying models, as well as manipulation by consumers. The absence of a standard, easily verifiable mathematical model used by companies has further contributed to the lack of transparency in the credit industry.

The opacity surrounding existing rating methods has been criticized for preventing consumers, stakeholders and regulators from being able to challenge these models. Even so, existing credit-rating systems remain a widely accepted way of measuring a person's financial health and are generally considered fair and objective.

A 7 Dec. 2023 ruling by the Court of Justice of the European Union on automated individual decision-making could solve this dilemma at the European level. The decision, which prohibits certain automated credit scoring and extended data retention practices under the GDPR, analyzes the scope of the regulation's Article 22 and specifically states the concept of "decision" can apply to companies that develop predictive tools for third parties but have no direct involvement in the subsequent decisions made. In other words, according to the CJEU, the simple fact of producing risk scores falls within the scope of decisions based on automated individual decision-making insofar as those scores are decisive for the third party to whom they are transmitted in establishing, executing or terminating a contractual relationship with a given person.

Article 8 of Chile's draft regulations replicates Article 22 of the GDPR. On automated individual decisions the provision states, "The data subject has the right to object to and not be subject to decisions based on automated processing of his personal data, including profiling, which produces legal effects on him or significantly affects him." Unlike the GDPR's Article 22, the provision does not establish that the decision must be based "solely" on the automated processing of data, which could result in wider applicability of this right since it will be sufficient for automated processing to be considered in the decision.

Additionally, like the GDPR, the draft contemplates the right of access to information of data subjects. Article 5 specifically refers to data subjects' right to request "significant information on the logic applied in the event that the data controller carries out data processing" in accordance with Article 8. Therefore, if an interpretation similar to that of the CJEU were to be applied, consumers in Chile could object to companies conducting automated processing with their data, unless it is necessary for the performance of a contract, their prior and express consent is given, or the processing is authorized by law. On the other hand, consumers could directly ask companies to provide information on how scoring and weighting logics are developed.  

Unlike the GDPR and other EU regulations, Chile's draft law does not include recitals to help interpret Article 8's scope, especially regarding whether the concept of decision can be extended to entities not directly participating in determining whether a loan or other financial product is granted. In this sense, the administrative interpretation of this rule by the Personal Data Protection Agency, which would be created under the draft legislation, will be relevant, as well as the interpretation of courts in cases that may arise.

Decisions based solely on automated decision-making can be considered a danger to consumers as they can create situations of discrimination that violate individuals' fundamental rights. Thus, the scope of Article 8 in Chile's draft law must be understood within the protective logic that seeks to inform all regulations on personal data protection and also in line with the comparative experience of not leaving individuals who are subject to these decisions unprotected.

2024-02-20 12:11:07
Still growing up: Top takeaways from the FTC's proposed COPPA rule update
https://iapp.org/news/a/still-growing-up-top-takeaways-from-the-ftcs-proposed-coppa-rule-update

Since the last Children's Online Privacy Protection Rule update in 2013, the conversation over regulation of children's online activity has echoed throughout the halls of capitals across the U.S. and around the world. On 11 Jan., the U.S. Federal Trade Commission issued a notice of proposed rulemaking to begin its latest update of the COPPA Rule, picking up where it left off in its 2019 request for comment. With this proposed rulemaking, the FTC adds to the attention given to children's privacy protection.

The COPPA statute, passed in 1998, authorizes the FTC to issue clarifying regulations, which it does in the form of updating what is known as the COPPA Rule. By posting its proposed updates in the Federal Register, the FTC commenced a 60-day public comment period. The FTC expects healthy interest in the rule update — the 2019 review saw over 175,000 comments submitted — and will review public comments and incorporate changes as it sees fit. Comments on the proposed rule must be received by 11 March.

The specificity of consent

Under existing rules, and absent an applicable exception, an operator of a commercial website or online service directed to children under 13 years old may collect, use or disclose children's personal information only upon obtaining verifiable parental consent. A single consent could cover all of an operator's privacy practices.

The January proposed rules require a business to obtain two new layers of parental consent, beyond what is required for the collection of children's data: for its (a) disclosure to third parties and (b) use for purposes of maximizing user engagement.

This proposal reflects a continued shift toward specificity of consent and away from its bundling. Under the EU General Data Protection Regulation, consent must be specific. Consent is invalid if not obtained in relation to "one or more specific" purposes of data processing, giving the data subject a choice about each and preserving for them a degree of control and transparency. Similarly, Washington state's My Health My Data Act requires separate, affirmative consents "for any collection, use, disclosure, or other processing of consumer health data beyond what is necessary to provide a consumer-requested product." In insisting that operators obtain verifiable parental consent for specific uses of children's data, the FTC aims to cultivate more informed — if not fatigued — parents.

Back to school: Expanding the universe of who can provide consent in the classroom

In the educational context, whether schools as well as parents or legal guardians may consent on a child's behalf has been subject to debate, competing policy goals and overlapping legal obligations. Careful to not overburden districts by requiring parental consent for every educational technology use, and to not rely upon ad hoc decisions from teachers, the FTC has advanced a flexible middle ground in the draft. It proposes a school authorization exception from COPPA's notice and consent requirements, instead mandating a detailed written agreement between the edtech provider and school to govern the relationship.

Under the proposal, this agreement must identify the individual providing consent and specify that the school has authorized them to do so. This departs from previous COPPA guidance, which vested decision-making in the school or school district, favoring trained, institutional staff over teachers. The contract must also include other information, including limitations on the use and disclosure of student data, the school's direct control over that use and disclosure, and the operator's data retention policy.

To help students and parents understand the contours of school privacy, the operator would also be required to provide notice on its site or service that it has obtained authorization from a school to collect a child's personal information; it will use and disclose that information for school-authorized purposes only; and the school may review information collected from a child and request its deletion. Operators would be required to allow schools to review the personal information collected, a right previously afforded only to parents.

These changes continue a recent spate of FTC oversight of the growing edtech industry. Through a policy statement and settlement, the agency has emphasized edtech operators' obligations to ensure that parents consent — and not assume a school has done so — and to limit use of that personal information to the purpose for which consent was given. In requiring the establishment of a detailed contractual relationship between provider and educator, the proposed rules appear aimed at mitigating these same issues.

Going beyond the business-to-consumer relationship in adtech

Children's privacy protections as written have largely grown outdated and maladapted to handle today's restless and increasingly complex advertising technology ecosystem. Regulators crafted the 2013 COPPA Rule update with more traditional notions of online advertising in mind, predominantly wary of operators' first-party relationships with children. Presently, many businesses processing children's data do not have direct relationships with them, instead exchanging children's data across a network of third parties. Throughout the proposed rules, the FTC seeks to rewrite the rules of children's privacy for the modern online advertising economy, expanding their scope to include business-to-business scenarios.

In particular, the FTC set its sights on the use of children's personal information in advertising under the guise of internal operations. COPPA includes an exception to its notice and consent requirements for operators that collect persistent identifiers for the "sole purpose of providing support for the internal operations of the website or online service." Businesses handling children's personal information for purposes such as evaluating advertising campaign effectiveness, measuring conversions, detecting fraud and ensuring compliance have found refuge under the internal operations exception.

The proposed rules restrict operators' use of personal information to a clarified narrow interpretation of the exception, emphasizing the rule's purpose limitation. Notably, while ad attribution remains a permitted internal operation under the exception, the FTC noted purposes such as behavioral advertising or profile-building will not qualify for the exception and will require verifiable parental consent.

The proposed rules call for transparency in this area as well, requiring operators relying on the internal operations exception to "specifically identify the practices for which the operator has collected a persistent identifier and the means the operator uses to comply with the definition's use restriction" in their privacy notices.

In a similar vein, the FTC proposed expanding the categories of evidence it will consider in analyzing whether an operator's website or online service is directed to children — and thus, the extent of that operator's COPPA obligations — to include "marketing or promotional materials or plans, representations to consumers or to third parties, reviews by users or third parties, and the age of users on similar websites or services." Where the rule initially focused chiefly on representations made on an operator's site or service, the proposal accounts for situations in which an operator makes one statement of audience composition or intended audience to adtech service providers, while indicating differently to users.

Developing the mixed audience framework and diving into the age verification discourse

Distinguishing whether a website or online service is directed at children or a general audience has never been an exact science — such determinations result from a multifactor "totality of the circumstances" test that assesses a site or service's intended audience to determine whether it must comply with COPPA. If a site is child-directed, COPPA always applies. If it is directed at a general audience, COPPA only applies if the operator has actual knowledge about a user's age. But the test's outcome is not strictly binary: the rule provides a narrow exception for a site or service that may be directed to children but does not target children — also known as a "mixed audience" site or service.

Currently, the mixed audience category sits within the child-directed category. Websites within this group may screen users based on age and need only adhere to COPPA's notice and consent obligations for those users who identify themselves as under 13. Building on its YouTube settlement, the FTC now proposes a clarifying, standalone definition of a "mixed audience website or online service" as "one that meets the criteria of the Rule's multi-factor test but does not target children as the primary audience."

This clarification aims to establish the mixed audience label not as a broad exception, but rather as incentive for companies to know the age of their users. Through a neutral age gate, operators can determine whether each user should receive the COPPA-compliant or the general audience version of their service.
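
As a rough illustration of that routing logic, here is a minimal sketch in Python. The function names, the birth-date input and the two experience labels are assumptions for illustration only; neither COPPA nor the proposed rule prescribes any particular implementation.

```python
from datetime import date

COPPA_AGE_THRESHOLD = 13  # COPPA covers children under 13

def age_in_years(birthdate: date, today: date) -> int:
    """Compute age in whole years from a self-declared birth date."""
    years = today.year - birthdate.year
    # Subtract a year if this year's birthday has not happened yet.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def select_experience(birthdate: date, today: date) -> str:
    """Route the visitor to the COPPA-compliant or general-audience experience.

    A neutral age gate asks for a birth date without hinting which answer
    unlocks which version of the service.
    """
    if age_in_years(birthdate, today) < COPPA_AGE_THRESHOLD:
        return "coppa_compliant"  # notice and verifiable parental consent apply
    return "general_audience"

# Example: a visitor born 1 June 2015 is under 13 on 1 Feb. 2024.
print(select_experience(date(2015, 6, 1), date(2024, 2, 1)))  # coppa_compliant
```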

The means through which operators may collect age information remain fluid. Paralleling similar work by the U.K. Information Commissioner’s Office, the FTC has increasingly disfavored self-declaration mechanisms that children often easily circumvent. Mindful to avoid impeding innovation in age screening, the FTC's proposed definition permits operators to collect age information or use "another means that is reasonably calculated, in light of available technology, to determine whether the visitor is a child."

Combatting nudging

In these rules the FTC attempts to rein in companies using children's data to maximize engagement. Aside from requiring verifiable parental consent when using or disclosing children's data to nudge users toward greater usage, the proposed rules limit the internal operations exception by prohibiting "using or disclosing personal information in connection with processes, including machine learning processes, that encourage or prompt use of a website or online service." Operators intent on nudging children toward greater usage of a site or service must then make online disclosures such that parents have sufficient notice and can provide informed consent.

In affirmatively looping in parents and injecting friction into operators' pursuit of engagement, the FTC follows the lead of lawmakers diligently working to regulate young people's use of social media. California has been most active in this space, passing 2022's now-enjoined Age Appropriate Design Code Act, which inter alia limited businesses' use of dark patterns and nudging, and recently proposing the Social Media Youth Addiction Law. Several other states have passed like-minded legislation, which litigants have frequently challenged on First Amendment grounds. Considering many online services' business models beget UX designs that increase screen time, this proposed COPPA Rule adjustment may face similar resistance.

Giving detail to data security requirements

For much of the FTC's time spent regulating data security, businesses determined reasonableness of safeguards for personal information by looking at evolving case law and industry standards. The FTC was forced to develop a new approach after a 2018 appellate court ruling held that the term "reasonable" was "devoid of any meaningful standards" and that the FTC's approach said "precious little about how this is to be accomplished."

FTC settlements have since included far more prescriptive technical safeguards. This round of COPPA Rule changes would formalize such requirements, at least within the realm of kids' data, requiring operators to tailor their more detailed security programs "to the sensitivity of children's information and to the operator's size, complexity, and nature and scope of activities."

These proposed rules find parentage in recent consent decrees, including Retina-X and Unixiz, focused on alleged failure to provide adequate safeguards for children's personal information. The agency also attempts to harmonize COPPA data security rules with those prescribed by the Gramm-Leach-Bliley Act's Safeguards Rule, which governs the data security practices of certain financial institutions.

Additional takeaways

While the proposed rules contain plenty more substance to grapple with, some points deserve further mention.

Data minimization. In no uncertain terms, the FTC reiterates that COPPA prohibits operators from collecting more personal information than reasonably necessary for children's participation in a game, the offering of a prize, or another activity, even if an operator has validly obtained verifiable parental consent. This emphasis comes at a time when data minimization has increasingly been the forgotten Fair Information Practice Principle, and reinforcing that not even verifiable parental consent can overcome these limits should send warning to operators of the FTC's intent to mitigate overcollection.

The mutable definition of personal information. The proposed rules expand the definition of personal information to broadly include biometric identifiers that "can be used for the automated or semi-automated recognition of an individual." While broad and functional, this definition remains narrower than the one offered in 2023 guidance on biometrics. Due to statutory constraints, the FTC opted not to align with other jurisdictions in defining personal information to include inferences.

Audio file exception. Codifying a 2017 enforcement policy statement, the FTC created an exemption from COPPA's verifiable consent requirement for operators that collect an audio file of a child's voice as a replacement for written words, provided the file is retained only for as long as necessary to fulfill the request for which it was collected. Necessary to adapt to increasing reliance on voice-assist technology, the change comes with several restrictions, including an online notice provision and strict purpose limitation.

Methods of parental consent. The FTC proposed adding knowledge-based authentication and facial recognition technology, which were both previously approved through a different process, to its enumerated list of approved methods. Meanwhile, an application for the use of facial age estimation as another method of verifiable parental consent is currently awaiting approval.

Avatars. The FTC seeks comment on whether, among other possible identifiers, an avatar generated from a child's image constitutes personal information. Such a change may have broad implications for the metaverse and virtual and augmented reality platforms forecasted as social venues for children in the not-so-distant future.

Safe Harbor updates. The proposed rules include updated criteria for approval and revocation of Safe Harbor participation, as well as reporting and recordkeeping requirements for Safe Harbor entities, including publishing lists of their participating operators. Aligning with its bent toward more detailed data security requirements, the FTC proposes additional "reasonable procedures" that a Safe Harbor must require its participating operators to establish and maintain, and expands the scope of Safe Harbor assessments to include comprehensive review of both privacy and security policies.

2024-02-16 12:26:52
How to build a ROPA to fit business, privacy needs
https://iapp.org/news/a/how-to-build-a-ropa-to-fit-business-privacy-needs

Most data protection laws either require formal records of processing activities or have notice and/or privacy impact assessment requirements that, in a way, induce businesses to create one to comply. The need to maintain a comprehensive ROPA increases with the business' size and complexity of its processing activities.

There are a few tips and tricks that may help to create a document that is both useful and, hopefully, stands the test of time.

Before you start building a ROPA

Before starting the ROPA process, evaluate your resources. Are you able to use a tool — one off-the-shelf or built in-house — or will this be maintained in an Excel spreadsheet? In answering these questions, consider your business privacy risk profile and do a cost/benefit analysis. Who will manage the ROPA going forward and how?

Like any good architect, you will need to ensure you have a good grasp of your organization, including how and where it operates to build a document that is fit for your business and purpose. Build a ROPA that will stand the test of time by making it user-friendly and easy to maintain. If your ROPA is too granular and business entries cannot be updated efficiently or correctly, you will easily have thousands if not hundreds of thousands of entries. This is likely not sustainable, compliant or useful.

Above all, ensure you have appropriate backing of business leadership, which will set a top-down tone on the importance of privacy.

Building the ROPA

  1. Leverage existing processes to capture processing activities systematically. Mature organizations usually have well-developed vendor assurance programs and IT change management processes. Whether personal data is involved should be assessed, and if so, the business should be directed to enter a new processing activity or update an existing one.
  2. Create a standard naming convention for ROPA and other entries, including data protection impact assessments and transfer impact assessments. This will make it easier for the business to update entries instead of creating new ones.
  3. For each department, create ready-to-use purpose of processing responses, with options to insert "other" as a catch all category and allow further explanations. This is usually the hardest part for a lay person to articulate. As you build your ROPA you can then start to see patterns for processing activities and adjust the ROPA purpose of processing categories accordingly.
  4. To the extent possible, create ready to use categories — like data subjects, third parties, internal parties and data elements — that are easily understood so the business may cohesively and accurately enter their processing activity. Adjust these categories as you build your ROPA to align with the reality of your business more closely. A tightly controlled taxonomy of privacy terms will help you create and maintain a useful document.
  5. Use your ROPA to populate your data map. Ideally, integrating your ROPA with other systems, for example, procurement and digital assets, helps build a more holistic view. Having a good understanding of where your data resides, the systems and parties that have access to it, and the vendors associated with each processing activity will help a business better prepare for the ever-changing privacy landscape. A solid data map will help answer questions regarding TIAs and sale of personal data, and quickly identify sources of information to respond to an access request. More importantly, if you intend to use artificial intelligence, it will be useful to know what data the AI has access to so you may prepare accordingly.
  6. Ensure your ROPA not only meets requirements of the EU General Data Protection Regulation's Article 30 but is a useful document for drafting privacy notices in the jurisdictions where your business operates. As jurisdictions around the world pass new data protection laws or strengthen existing ones, it is important to build a ROPA that allows compliance with multiple standards.
  7. Create checkpoints in the ROPA regarding your vendor data protection agreements, international data transfer mechanisms, consent mechanisms and any associated digital security controls which will allow you to better understand your risks, mitigate them and/or accept them, depending on business risk tolerance.
  8. Have clear roles and responsibilities within the business to create and update ROPA entries. If a processing activity has privacy risks, adequately relay those to the business owner, who in turn mitigates or accepts them. Document the process.
  9. If you're using a tool, create rules in the document to trigger tasks for high-risk processing activities (see the sketch after this list). Cast the net wide so it may include all processes that applicable jurisdictions consider high risk. Depending on how your privacy office is set up and who enters or reviews ROPA entries, it is good to have a baseline of automation that triggers DPIA tasks. The privacy officers can then decide whether to conduct DPIAs. Similarly, you can build rules in the ROPA that will trigger automated TIA tasks. Again, the privacy officer, upon closer examination, will decide whether to conduct a TIA. They will also instruct the business if the vendor contract will need a transfer mechanism, such as standard contractual clauses, and any supplementary measures.
  10. For businesses operating in jurisdictions with Works Councils — which may have the right to be informed, consulted or to approve — it is advisable to create automated tasks to ensure applicable processing activities are reviewed and put through the appropriate channels for council's review or approval.
  11. If you're using automated tools, ensure your ROPA information is transferred to the TIAs, and/or DPIAs so the same information doesn't have to be entered repeatedly. You'll quickly lose the business if the process is overly cumbersome and laborious.
  12. Populate legal entity information with the data protection officer's name, the supervisory authority with which the DPO is registered, whether the intragroup data transfer agreement is signed and whether the legal entity is registered with the applicable data protection authority. Keeping this information readily available and up to date is especially important for businesses with an appetite for mergers and acquisitions.
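
For teams that maintain the ROPA in a simple tool or script rather than a dedicated platform, the sketch below shows what a minimal entry and an automated task trigger could look like. The field names, high-risk markers and triggering criteria are illustrative assumptions only; align them with your own taxonomy and with what applicable jurisdictions treat as high risk.

```python
from dataclasses import dataclass, field

# Illustrative high-risk markers; align these with your jurisdictions' criteria.
HIGH_RISK_DATA = {"children", "health", "biometric"}
HIGH_RISK_PURPOSES = {"profiling", "automated_decision_making", "large_scale_monitoring"}

@dataclass
class RopaEntry:
    name: str                    # standard naming convention, e.g. "MKT-Newsletter-CRM"
    department: str
    purpose: str                 # drawn from the ready-to-use purpose list
    data_subjects: list[str] = field(default_factory=list)
    data_categories: list[str] = field(default_factory=list)
    recipients: list[str] = field(default_factory=list)   # internal and third parties
    transfers_outside_eea: bool = False
    legal_basis: str = ""
    retention_period: str = ""

def tasks_to_trigger(entry: RopaEntry) -> list[str]:
    """Cast the net wide: flag entries that should open DPIA or TIA tasks
    for a privacy officer to review."""
    tasks = []
    if (set(entry.data_categories) & HIGH_RISK_DATA) or entry.purpose in HIGH_RISK_PURPOSES:
        tasks.append("DPIA review")
    if entry.transfers_outside_eea:
        tasks.append("TIA review")  # privacy officer decides on SCCs and supplementary measures
    return tasks

entry = RopaEntry(
    name="MKT-Newsletter-CRM",
    department="Marketing",
    purpose="profiling",
    data_subjects=["customers"],
    data_categories=["contact", "behavioral"],
    recipients=["CRM vendor"],
    transfers_outside_eea=True,
)
print(tasks_to_trigger(entry))  # ['DPIA review', 'TIA review']
```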

After building your ROPA

And now the good news. Once your ROPA and various assessments are sufficiently populated you may use the data to substantively inform leadership on the privacy health of the business, including risks and data management efficiencies and inefficiencies.  

A mature ROPA will help you draft privacy notices and respond to access requests relatively easily, thus creating both efficiency and compliance. Furthermore, you can prepare for upcoming changes in the law or court decisions in particular areas where your business may be exposed by calculating risks or planning for appropriate mitigations.

Building the ROPA is only the first step. You need to foster a good working relationship across the business to ensure there is a robust process for populating, updating and adjusting the document, as needed. You may need to actively train privacy and other staff on how to populate and maintain the ROPA. This process may take years to become part of a business's culture.

Wherever your business may be in the privacy journey, it is important to accept that 100% compliance is difficult, if not impossible, to achieve. GDPR and other laws like it take a risk-based approach. As such, it is important to focus on high-risk areas for your business — be that children's data, use of AI, use of biometric data, direct marketing, or sale of personal data, for example. Build a robust ROPA with proportionate resources and processes that fit your business privacy risk profile.

2024-02-13 11:48:15
State appeals court rules first CPRA regulations enforceable
https://iapp.org/news/a/state-appeals-court-rules-cpra-regulations-can-take-effect

Covered entities under the California Consumer Privacy Act are finally set to face fresh requirements under previously suspended rules drafted by the California Privacy Protection Agency. The California 3rd District Court of Appeals ruled in a 3-0 decision to allow the CPPA to immediately begin enforcing its first set of California Privacy Rights Act regulations following a prior court-ordered delay.

The CPRA rules concerning data processing agreements, consumer opt-out mechanisms, mandatory recognition of opt-out preference signals, dark patterns and consumer request handling were initially set to be enforced 1 July 2023. The Sacramento County Superior Court ruled 30 June 2023 in favor of a complaint filed by the California Chamber of Commerce, delaying rules enforcement until 29 March.

Third District Court of Appeals Associate Justice Elena Duarte wrote in the reversal decision that the imposed delay "would disregard the unambiguous (CPRA) provision" tying the start of rules enforcement to 1 July 2023. She added, "The voters intended to strengthen and protect consumers' privacy rights regarding the collection and use (including sale) of their personal information."

The initial decision by Sacramento County Superior Court Judge James Arguelles focused on the grace period between rules finalization and enforcement, as he wrote voters "intended there to be a gap." The CPRA rules in question were only finalized 30 March 2023, which left less time to prepare for enforcement than the six-month ramp-up period provided under the statute.

"We are pleased with the decision. This ruling ensures all aspects of the regulations adopted by the California Privacy Protection Agency last year are again enforceable, just as the voters intended when they enacted Proposition 24," CPPA Executive Director Ashkan Soltani said in an agency press statement on the appellate court ruling.

The appellate decision also has implications for future rulemaking efforts by the CPPA, allowing for enforcement of future regulations upon their finalization. The agency is currently working through its next rulemaking initiative concerning cybersecurity audits, risk assessments and automated decision-making technologies.

While the rules will be enforced immediately, the CPPA embedded a potential discretionary enforcement reprieve in its first CPRA rulemaking while recognizing the shorter-than-expected grace period for covered entities. A rule was drafted to allow the CPPA to "consider all facts it determines to be relevant, including the amount of time between the effective date of the statutory or regulatory requirement(s) and the possible or alleged violation(s) of those requirements, and good faith efforts to comply with those requirements."

It's unclear whether the CPPA will provide any sort of leeway given businesses have had nearly 11 months since the rules were finalized to adjust or improve their data practices according to the statute.

"The California voters didn't intend for businesses to pick and choose which privacy rights to honor. We are pleased that the court has restored our full enforcement authority, and our enforcement team stands ready to take it from here," CPPA Deputy Director of Enforcement Michael Macko said in the agency's statement. "This decision should serve as an important reminder to the regulated community: now would be a good time to review your privacy practices to ensure full compliance with all of our regulations."

CCPA enforcement has yet to produce many notable actions besides a USD1.2 million settlement against multinational retailer Sephora over alleged "Do Not Sell" violations. In other instances, the CPPA and the California attorney general's office have conducted enforcement sweeps and served cure notices.

The lift on enforcement will undoubtedly bring increased activity, particularly in the advertising technology space as many of the rules that will be enforced relate to targeted advertising and consent around how those ads are produced and served.

"California has set the bar with a very simple 'flip of the switch' Global Privacy Control," Digital Content Next CEO Jason Kint said. "Google and Meta's proxies are now out of runway to slow down enforcement and must finally meet the letter and spirit of the law — allowing the public to opt out of their data being shared across the web. And switching away from Chrome to Brave or Firefox is now a no-brainer as Google continues to drag out removal of tracking cookies."

2024-02-12 12:05:02
Opting In-n-Out: Five key analyses for adtech privacy law compliance
https://iapp.org/news/a/opting-in-n-out-five-key-analyses-for-adtech-privacy-law-compliance

Media companies and online businesses must comply with abundant, diverging privacy and data protection law requirements across jurisdictions. With respect to targeted advertising, companies face particularly complex rules on opt-in consent and opt-out requirements. Smaller and newer businesses often find this exceedingly challenging, as they rely on advertising technology services and data brokerages to compete with more established companies, which have more — and more direct — consumer relationships and data. Those larger businesses accordingly depend less on the third-party data sharing and unsolicited marketing communications that trigger regulatory requirements and scrutiny.

Under the EU General Data Protection Regulation, for example, a news site operator wanting to serve interest-based advertisements must obtain express, affirmative, specific, informed and voluntary opt-in consent before placing cookies and using that personal data for marketing. If it wants to bolster its own data with mailing lists and information from third parties, the operator may need to notify data subjects and confirm that the third party obtained consent.

In practice, companies prompt users for consent regarding cookies with banners, offering "accept all" and "reject all but necessary" choices and unchecked boxes regarding marketing emails or newsletter subscriptions, with an additional "double opt-in consent" confirmation in Germany.

If a business prompts consumers in California with such consent requirements, however, it may violate the requirement of waiting at least 12 months following an opt out before asking for authorization for selling or sharing personal information for cross-context behavioral advertising. Instead, the business must recognize universal opt-out signals and offer opt outs for certain disclosures of personal information and email marketing, which an EU-style "cookie banner" cannot achieve — as the California Privacy Protection Agency expressly notes in §7026(a)(4) of its regulations.
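
As an illustration of what honoring a universal opt-out signal can look like on the server side, the sketch below checks for the Global Privacy Control request header, which the GPC specification defines as "Sec-GPC: 1". The helper names and the profile structure are assumptions for illustration and do not capture every nuance of the CPPA regulations.

```python
def carries_gpc_signal(request_headers: dict[str, str]) -> bool:
    """Return True if the request carries the Global Privacy Control signal.

    Per the GPC specification, a participating browser or extension sends
    the header "Sec-GPC: 1" with each request.
    """
    # Header lookups are case-insensitive in practice; normalize keys first.
    normalized = {k.lower(): v.strip() for k, v in request_headers.items()}
    return normalized.get("sec-gpc") == "1"

def apply_privacy_preferences(request_headers: dict[str, str], profile: dict) -> dict:
    """Treat a GPC signal as an opt out of sale/sharing for this consumer.

    `profile` is a stand-in for however the business records consumer privacy
    choices; a real system would also honor prior offline and account-level requests.
    """
    if carries_gpc_signal(request_headers):
        profile["sale_or_sharing_opt_out"] = True
    return profile

# Example request from a GPC-enabled browser.
headers = {"User-Agent": "ExampleBrowser/1.0", "Sec-GPC": "1"}
print(apply_privacy_preferences(headers, {}))  # {'sale_or_sharing_opt_out': True}
```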

Smaller companies often lack the resources to fully localize their disclosures and opt-in/opt-out mechanisms for each jurisdiction and every adtech service. Even with a "highest common denominator approach" — complying with the strictest data privacy requirements — they may fail on different particulars in some jurisdictions given increasingly prescriptive and intricate requirements. 

Practically, businesses may forgo using new adtech features and return to contextual advertising or paid services, or operate only on larger platforms that cover most compliance requirements. But many smaller and newer companies believe this may stymie their competitiveness.

Alternatively, businesses can develop risk-based approaches to address requirements under the laws most likely to be enforced against them. Considering the fast-moving regulatory landscape, a risk-based approach may improve a business's ability to handle vast amounts of personal information in a more informed, structured and accountable way. It requires an understanding of applicable requirements and careful monitoring of the enforcement landscape through five key analyses.

Which particular activities trigger opt-in and opt-out requirements?

Under GDPR Article 6(1), companies must justify personal data processing with a legal basis, required, for example, when acquiring a mailing list, collecting an email address during account registration, collecting browsing information via cookies for marketing purposes, or enabling third-party disclosures via cookies or pixels. Additionally, under national laws implementing the ePrivacy Directive, businesses must obtain consent before sending marketing emails or placing cookies on user devices, unless necessary to provide a service specifically requested by the data subject.

Across the pond, companies must fairly inform American consumers of data processing practices and offer opt-out rights. Consent is required in limited circumstances, for example, regarding SMS marketing, children's data, sensitive consumer data or biometric information. If consumers opt out of selling or sharing personal information for cross-context behavioral advertising purposes, businesses must wait 12 months before resoliciting an opt in. This may present practical challenges, for example where consumers can opt out through opt-out preference signals without providing their name, or via offline requests not easily connected to online accounts or unregistered website visitors.
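
The 12-month waiting period lends itself to a simple timestamp check before any re-consent prompt is shown; a minimal sketch follows, assuming the business records each consumer's most recent opt out, however it arrives. The record shape and the 365-day approximation are illustrative, not prescribed by the regulations.

```python
from datetime import date, timedelta
from typing import Optional

RESOLICITATION_WAIT = timedelta(days=365)  # rough stand-in for the 12-month waiting period

def may_resolicit_opt_in(last_opt_out: Optional[date], today: date) -> bool:
    """Only ask again for consent to sell or share personal information once
    at least 12 months have passed since the consumer's last opt out."""
    if last_opt_out is None:
        return True  # no recorded opt out for this consumer
    return today - last_opt_out >= RESOLICITATION_WAIT

# A consumer who opted out on 1 March 2024 should not be re-prompted in January 2025.
print(may_resolicit_opt_in(date(2024, 3, 1), date(2025, 1, 15)))  # False
print(may_resolicit_opt_in(date(2024, 3, 1), date(2025, 3, 2)))   # True
```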

Which activities does an advertising initiative or technology involve?

A business collecting personal data with first-party cookies or sending marketing emails may not trigger compliance requirements under U.S. privacy laws beyond having to offer an unsubscribe option in messages. The same business may have to obtain separate and specific prior consent for cookie placement, data use for marketing and sending emails under laws in the EU and other countries. SMS marketing triggers consent requirements and heightened litigation risks in the U.S. but is treated similarly to email marketing in the EU.

Companies must carefully analyze which activities a particular advertising campaign involves and determine what opt-in and opt-out requirements are triggered. Former Westin Fellow Anokhy Desai's web tracking technology index provides helpful context.

Which player(s) in the adtech ecosystem must, can or should ensure compliance with opt-in and opt-out requirements?

Numerous entities contribute to serving ads and handling personal data, from advertisers and organizations that want their products or services in front of a target audience to publishers, organizations producing content that attracts an audience, and all the adtech providers, data brokers, networks, exchanges and platforms in between. 

Publishers are often best positioned to inform consumers about privacy choices, but do not always understand all technical details or compliance requirements. Service providers may not be able to obtain consent or offer opt-out choices themselves but can support compliance by designing technologies and nudging their customers — publishers and advertisers — with default settings, standard contracts, whitepapers and FAQs. Advertisers tend to incur risk because ads prominently feature their brands and can drive compliance via contracts and financial incentives. 

Each entity must carefully analyze its role in delivering ads and processing personal data to determine which obligations it must tackle and which it must ensure other adtech players handle.

Which risk factors should companies consider as a priority?

Businesses selling only to other businesses often face less privacy law exposure, although some jurisdictions require prior opt-in consent for B2B marketing emails, and California includes employees and business representatives in its definition of "consumers" under the California Consumer Privacy Act. Note that B2B-focused companies selling data processing services or compliance solutions may incur greater risk, given that customers of those services have incentives to apply greater due diligence. These businesses can reduce exposure to misrepresentation claims by refraining from exaggerated claims in privacy policies, codes of conduct or advertisements.

Businesses marketing or selling goods or services to individuals for personal, family or household purposes face comparably more risk, as they collect personal information on individuals' interests and preferences beyond their commercial role. Compliance and litigation risks increase exponentially if they use sensitive personal information for advertising purposes, including data on biometrics, health, children, race or sexual orientation. 

Under the CCPA, companies must enable consumers to "Limit the Use of My Sensitive Personal Information," unless a statutory exemption applies. Likewise, in Colorado, companies must obtain consent for sensitive data, and under Washington state's My Health My Data Act, companies need to obtain signed authorization to sell consumer health data. Consequently, many companies now consider sensitive data "off limits" for advertising purposes.

Companies can also mitigate risk by geographically limiting advertising, for example, in Brazil, California, the EU and Quebec. Some adtech service providers proactively offer geo-blocking or restricted data processing for cookie placement, social media campaigns and bulk email initiatives.
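
A crude sketch of that geographic limitation follows, assuming the business already resolves a visitor's region; the region codes, function name and fallback label are illustrative only, and a production setup would more likely rely on the geo-blocking or restricted data processing options adtech vendors expose.

```python
# Regions where the business has chosen not to run targeted advertising
# (illustrative list; align it with your own legal analysis).
RESTRICTED_REGIONS = {"BR", "CA-US", "EU", "QC-CA"}

def advertising_mode(region_code: str) -> str:
    """Fall back to contextual advertising in regions the business restricts."""
    if region_code in RESTRICTED_REGIONS:
        return "contextual"  # no cross-context behavioral advertising
    return "targeted"

print(advertising_mode("QC-CA"))  # contextual
print(advertising_mode("TX-US"))  # targeted
```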

When considering the impact of particular laws, companies should assess how readily a law is enforced by plaintiffs' attorneys or regulators. Many businesses have stopped sending SMS messages in the U.S. or using biometric information in Illinois to avoid Telephone Consumer Protection Act or Biometric Information Privacy Act class action lawsuits, respectively. Most companies also prioritize GDPR compliance, in response to fines administered by EU data protection authorities, and CCPA compliance, given that two state agencies focus on enforcement and actively issue information requests, warnings and regulations.

Additionally, companies should carefully monitor engagement, opt-out rates and complaints. Many consumers find retargeting, cold calls, unsolicited SMS messages and excessive marketing emails irritating. Even if potential conversion rates look promising, companies should heed consumer sentiment conveyed through opt-out requests or disengagement. The CCPA ballot initiatives demonstrated a mass, state-wide opt-out movement against increasingly intrusive data processing and advertising practices, and workarounds or lobbying may provoke further negative consumer sentiment and diminish trust.

How are enforcement and litigation trending?

After answering the four preceding conceptual questions, companies should continuously follow current enforcement and litigation trends. In the EU, DPAs publish enforcement reports and detail priorities. Stateside, the Federal Trade Commission and other regulators offer guidance and generate case law. U.S. class action firms also persistently try new theories or build on successful precedents.

The CCPA, as amended by the CPRA

California prescribes strict rules around personal information. Entities within or reliant upon adtech should pay particular attention to the California Privacy Rights Act's broad "sale or sharing" provision.

Where a "sale" includes third-party disclosure of personal information for "monetary or other valuable consideration," "sharing" more narrowly involves disclosures "for cross-context behavioral advertising, whether or not for monetary or other valuable consideration." This encompasses disclosures in exchange for using targeted advertising services. Businesses utilizing these services must then provide notice and grant opt-out rights.

The CPRA vests enforcement powers in the California attorney general's office and the CPPA. It does not provide for a private right of action, except in the context of data security breaches. Despite this, plaintiffs have continued to allege CCPA violations, which have largely been dismissed but nonetheless impose legal costs.

State wiretapping laws

The California Invasion of Privacy Act — the state's wiretapping prohibition — requires consent for third-party interception of live communications. The CIPA protects against "intentional wiretapping, willfully attempting to learn the contents or meaning of a communication in transit over a wire, and attempting to use or communicate information obtained as a result of engaging in either of the two previous activities," as stated in Tavernetti v. Super. Ct.

Plaintiffs have asserted wiretapping claims over various technologies and features, including chat, call, or keystroke recording, website analytics, pixels and session replay technology. Courts have since grappled with categorizing third-party technology embedded on websites, resulting in unsettled statutory application.

Class action filings have hit record numbers in response to two federal appellate decisions in particular:

  • The U.S. Court of Appeals for the 9th Circuit held in Javier v. Assurance IQ that consent to tracking via session replay technology cannot apply retroactively. 
  • In Popa v. Harriet Carter Gifts, the 3rd Circuit, reviewing a violation of Pennsylvania's wiretapping law, held that a party to a conversation can be liable for its own "interception" of that conversation.

Accordingly, judges have upheld claims that companies using session replay technology collected user information without consent while users sought life insurance quotes, as in Hazel v. Prudential Financial, and that retail companies using session replay technology aided and abetted wiretapping, as in Saleh v. Nike and Yoon v. Lululemon.

ECPA

The Electronic Communications Privacy Act extended restrictions on telephone wiretaps to include transmissions of electronic data by computer. It consists of three acts: the Wiretap Act, regulating the interception of communications; the Stored Communications Act, regulating communications in storage and ISP subscriber records; and the Pen Register Act, regulating the use of pen register and trap-and-trace devices. Claims have primarily relied on the Wiretap Act and the Stored Communications Act, but recently have also asserted Pen Register Act claims and state law equivalents.

Challenges over cookie usage under these acts date back decades, with little success. The court in In re DoubleClick held that unauthorized collection of personal information alone does not amount to "economic loss" and could not support standing. Following Spokeo v. Robins, where the Supreme Court held that harm must be concrete, particularized, and actual or imminent, standing often presents an insurmountable obstacle to plaintiffs' claims.

So long as judges continue to assign negligible value to certain privacy harms, businesses will have little concern over ECPA claims and tracking technologies. Should plaintiffs prove more concrete harms, for example with sensitive personal information, businesses must ensure they place cookies with valid consent, showing that access is explicitly authorized.

CFAA

The Computer Fraud and Abuse Act prohibits unauthorized access to computers, including accessing "a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter."

Plaintiffs have long alleged cookie placement on their device to be unauthorized and a CFAA violation. However, in Bose v. Interclick, a district court judge dismissed a plaintiff's claim for failure to plead sufficient injury required for statutory damages. Like under the ECPA, standing issues often stymie CFAA cases over adtech products.

TCPA

The Telephone Consumer Protection Act aims to "protect residential telephone subscribers' privacy rights and to avoid receiving telephone solicitations to which they object." Under it, the FTC established the National Do Not Call Registry, permitting residential telephone subscribers to object by proxy to telephone solicitations by registering their numbers.

Practically speaking, and of import to businesses concerned with marketing and lead generation, the TCPA, alongside other marketing-focused legislation such as the U.S. Controlling the Assault of Non-Solicited Pornography And Marketing Act and Canada's Anti-Spam Law, requires consent for direct marketing messages. TCPA litigation has continued to grow in number and has shifted toward class action lawsuits. 

VPPA

The Video Privacy Protection Act prohibits companies from disclosing video rental history information without written consent. Courts have applied the definition of "video tape service provider" to include online video providers that collect personally identifiable information. 

Plaintiffs have engaged in VPPA litigation over decades, culminating in seminal decisions in In re Nickelodeon Consumer Privacy Litigation and In re Vizio Consumer Privacy Litigation that outlined the contours of personal information under the statute. 

Early web beacons initially attracted VPPA class action lawsuits. Since then, VPPA pixel litigation has proliferated. Through October 2023, almost 200 proposed privacy class action lawsuits had been filed citing the VPPA.

Two defenses to VPPA claims have prevailed, most notably:

  • Courts dismissed plaintiffs' claims on grounds that plaintiffs could not be considered "subscribers" under the statute, when they did not receive anything in return for their general subscriptions to a website or when video content was applicable to anyone, as in Gardener v. MeTV, Jefferson v. Healthline Media, and Carter v. Scripps Networks. Note that this argument has not always prevailed, such as in Harris v. Pub. Broad. Serv. and Goldstein v. Fandango Media.
  • Defendants also argued that the information disclosed could not sufficiently identify the plaintiff, as in Ghanaat v. Numerade Labs, where plaintiffs inadequately alleged that the sharing of their Facebook IDs, coupled with the URLs of videos watched, disclosed personally identifiable information. A simplified sketch of the pixel mechanism at issue follows this list.
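
To illustrate the kind of disclosure courts weigh in these cases, the following is a minimal, hypothetical sketch of how a web beacon might transmit the page URL, which can reveal the video being watched, together with a cookie identifier to a third-party endpoint. The endpoint, cookie name and parameter names are invented for illustration and do not reflect any particular vendor's implementation.

```typescript
// Hypothetical illustration only: a simplified web beacon that sends the current
// page URL (which may name the video being watched) together with a browser
// cookie identifier to a third-party endpoint. All names and URLs are invented.

function readCookie(name: string): string | undefined {
  return document.cookie
    .split("; ")
    .find((entry) => entry.startsWith(`${name}=`))
    ?.split("=")[1];
}

function firePixel(endpoint: string): void {
  const params = new URLSearchParams({
    page_url: window.location.href,       // e.g. https://site.example/watch/some-video-title
    uid: readCookie("example_uid") ?? "",  // cookie-based identifier
    ts: Date.now().toString(),
  });
  // A 1x1 image request is the classic delivery mechanism for a web beacon.
  new Image().src = `${endpoint}?${params.toString()}`;
}

firePixel("https://tracker.example.com/pixel");
```

The legal dispute in cases like Ghanaat turns on whether the combination of such an identifier and a URL suffices to identify a specific person, not on the delivery mechanism itself.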

HIPAA

Under the Health Insurance Portability and Accountability Act, guidance from the U.S. Department of Health and Human Services' Office for Civil Rights on covered entities' and business associates' use of online tracking technologies — including cookies, web beacons or pixels, session replay scripts and fingerprinting scripts — states that all "individually identifiable health information" collected on a covered entity's or business associate's website or app is protected health information. It remains unclear, however, whether any mere web query by an unregistered user on a publicly available website will be presumed to contain or constitute health information, given that family members, researchers, business partners and many others may access health care-focused websites for reasons other than personal health conditions.

Pixel providers have faced claims that they wrongfully collected PHI through hospitals and health care providers that installed the technology on their websites, leading many organizations in the industry to question to what extent they can use beacons, pixels, cookies or other adtech products and remain in compliance.

HIPAA-regulated entities may also incur liability for impermissibly disclosing PHI to tracking technology vendors under the HIPAA Privacy Rule, which, at a minimum, requires business associate agreements with such vendors.

Additionally, the FTC has brought enforcement actions under its unfair and deceptive practices authority against health care platforms using common adtech services, including pixels, imposing penalties and requiring companies to send security breach notifications to consumers whose web browsing history was tracked and transferred.

Driver's Privacy Protection Act

The Driver's Privacy Protection Act restricts the disclosure of personal information by requiring departments of motor vehicles to obtain drivers' affirmative consent. Through October 2023, pixels embedded in department of motor vehicle websites had resulted in almost 70 proposed privacy class action lawsuits. Plaintiffs in Gershzon v. Meta Platforms, for example, survived dismissal after claiming their personal information had been sent using pixels embedded on DMV websites.

Lessons from these cases go beyond pixel usage or DMVs: web tracking technology providers must understand the websites and applications that implement their products or services so as to avoid similar highly industry-specific claims.

Other U.S. privacy laws

Plaintiffs have invoked numerous other federal and state privacy laws in suits involving the use of adtech services, while legislatures and regulators keep adding requirements at a rapid pace. Companies must methodically review applicable laws in California and at the federal level, as well as newly enacted state laws.

Practical recommendations

All this said, professionals looking to optimize compliance in light of a quickly evolving privacy landscape shaped by regulation, legislation, enforcement, and best practices can start by focusing on a few key points.

Understand, assess, and document tools and data processing activities. Businesses and counsel must communicate with software developers about code included on a website or app.
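
As a starting point for that conversation, the sketch below is a rough, hypothetical inventory helper, assuming Node 18 or later and an example URL. It fetches a page and lists the third-party hosts referenced by script and image tags so teams can reconcile them against vendor and data processing records; it is deliberately simple and will not catch dynamically injected tags.

```typescript
// A rough starting point for a tracking inventory, not a compliance tool.
// Assumes Node 18+ (global fetch); the example URL is a placeholder.

async function listThirdPartyHosts(pageUrl: string): Promise<void> {
  const firstPartyHost = new URL(pageUrl).hostname;
  const html = await (await fetch(pageUrl)).text();

  // Simple regex extraction of src attributes from <script> and <img> tags.
  const srcPattern = /<(?:script|img)[^>]*\ssrc=["']([^"']+)["']/gi;
  const hosts = new Set<string>();

  for (const match of html.matchAll(srcPattern)) {
    try {
      const host = new URL(match[1], pageUrl).hostname;
      if (host && host !== firstPartyHost) hosts.add(host);
    } catch {
      // Ignore malformed URLs.
    }
  }

  console.log(`Third-party hosts referenced on ${pageUrl}:`);
  for (const host of [...hosts].sort()) {
    console.log(` - ${host}`);
  }
}

listThirdPartyHosts("https://www.example.com/").catch(console.error);
```

Even a crude listing like this tends to surface trackers that privacy teams did not know were deployed, which is precisely the gap between marketing, engineering and counsel that documentation is meant to close.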

Conduct a cost-benefit analysis. Consider industry exposure, litigation developments and the robustness of the business's documentation and compliance program. Monitor opt-out rates and listen to customer complaints.

Avoid or minimize use of sensitive personal information. Heightened risks apply for data relating to biometrics, health, children, race and sexual orientation.

Optimize service provider agreements. Ensure vendors or service providers are limited, by agreement or otherwise, to using website activity data only for defined activities, such as analyzing the website's functionality for the business's benefit, rather than for the provider's own independent purposes.

Localize cookie banners and opt-out mechanisms to applicable requirements. Opt-in banners optimized for compliance with the GDPR are not ideal for California and likely even violate the CCPA.
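
To illustrate what that localization can mean in practice, the following is a minimal sketch with invented helper names (currentConsentState, loadPixelScript) and a simplified two-regime model: it gates tracking scripts behind affirmative consent for opt-in (GDPR-style) visitors and honors opt-outs and the Global Privacy Control signal for opt-out (CCPA-style) visitors. A real deployment would rely on the consent management platform in use and on legal review of which regime applies to a given visitor.

```typescript
// Minimal, hypothetical sketch of jurisdiction-aware gating of tracking scripts.
// Helper names, the URL and the two-regime model are simplifications for illustration.

type Regime = "opt-in" | "opt-out";

interface ConsentState {
  regime: Regime;                 // e.g. "opt-in" for GDPR-style visitors
  analyticsConsentGiven: boolean; // captured by the banner, if one is shown
  optedOutOfSale: boolean;        // e.g. a "Do Not Sell or Share" request
}

function hasGlobalPrivacyControl(): boolean {
  // GPC is surfaced by some browsers and extensions as a navigator property.
  return (navigator as any).globalPrivacyControl === true;
}

function mayLoadAdtech(state: ConsentState): boolean {
  if (state.regime === "opt-in") {
    // Opt-in regime: no tracking scripts until affirmative consent is given.
    return state.analyticsConsentGiven;
  }
  // Opt-out regime: load by default, but honor opt-outs and GPC signals.
  return !state.optedOutOfSale && !hasGlobalPrivacyControl();
}

// Hypothetical stand-ins for a real consent management platform and tag loader.
function currentConsentState(): ConsentState {
  return { regime: "opt-out", analyticsConsentGiven: false, optedOutOfSale: false };
}

function loadPixelScript(): void {
  const script = document.createElement("script");
  script.src = "https://tracker.example.com/pixel.js"; // invented URL
  document.head.appendChild(script);
}

if (mayLoadAdtech(currentConsentState())) {
  loadPixelScript();
}
```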
