IAPP Privacy Perspectives - The op-ed page for the privacy industry
https://iapp.org/news/privacy-perspectives/

New rules of the road can sustain US leadership on interoperable digital data flows
https://iapp.org/news/a/new-rules-of-the-road-can-sustain-u-s-leadership-on-interoperable-digital-data-flows

U.S. President Joe Biden closed February 2024 with the Executive Order on Preventing Access to Americans' Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern, which signaled important developments on the country's plans to position and guard itself — through new rules established by the Department of Justice — against global adversaries such as China, Iran and Russia. The executive order speaks volumes about how the U.S. views the next-generation impacts of data flows on the digital economy and how it can be better equipped as a global leader.

Meanwhile, a slew of Congressional actions, including a newly introduced bill from the House Energy and Commerce Committee, are also tackling the same considerations. They aspire to solve the issues of overcollection, sharing and sale of consumers' personal data, including sensitive data by certain third parties that are loosely defined as "data brokers."

The takeaways

The U.S. is not declaring a position on data localization

For some, the knee-jerk reaction to the executive order is that the U.S. is taking actions similar to those of countries like India, whose Digital Personal Data Protection Act, 2023, gives the government full discretion to restrict data transfers to certain countries by notification. More steeped privacy professionals might compare the order to earlier draft stages of India's bill, which would have created a "black list" of nations with whom data flows were restricted.

Furthermore, some readers may believe the order's language resembles the data localization provisions in the EU's approach to digital markets and privacy, including, for example, the EU Data Act. Before jumping to such a conclusion, it is important to evaluate the executive order in the broader context of the U.S.'s digital trade and data flow policies over time to understand that data localization is not the goal here.

Data free flow with trust remains the essential guiding principle for international cooperation on data flows, coming out of World Economic Forum meetings, as well as meetings with the G7, G20 and the Organisation for Economic Co-operation and Development in recent years.

The U.S. is not trying to close off digital data flows or retreat into a cocoon of its own. Rather, it is setting restrictions that impose stronger safeguards to strengthen its value as a digital trade partner and its role as a world leader, while continuing to allow the free flow of data with other countries that follow the same guiding principles. For example:

The economic consequences of the new restrictions are meant to sustain and promote the digital economy

While the executive order does not delineate an estimated dollar amount by which the new restrictions may impact the digital economy, in 2016 the McKinsey Global Institute estimated that international data flows would contribute USD11 trillion to the world economy by 2025. Quantifying how businesses derive value from that data remains challenging and elusive.

As a result of the executive order, U.S. companies may be inclined to take a more cautious approach toward digital data flows. For U.S.-headquartered multinational companies working with international vendors and third parties that are located in other nations, subject to those nations' laws and not previously restricted, the new restrictions may create further barriers or limitations on sharing bulk data and engaging in retail, commercial, financial and government transactions, regardless of company size. This may put downward pressure on their participation in the digital economy, especially for U.S. companies coping with many different laws, standards and frameworks in the absence of a comprehensive federal privacy law.

On the flip side, it could encourage more companies to proactively review their data mapping or data review practices, while leveraging interoperable frameworks such as the global CBPR. This can support better return on investment for companies looking to take a global privacy compliance approach and potential reductions in duplicative spending on the many frameworks that support compliance — from standard contractual clauses to the EU-U.S. DPF to CBPRs. It can also incentivize the role of accountability agents that work in coregulatory models with the government to help provide a light-touch approach to enforcement and monitor the playing field for bad actors.

Finally, what appears to be happening is a closer alignment between the U.S.'s targeted approach to data flows and the EU's, India's and other countries' built-in protections for sensitive data in their national data privacy laws and regulations.

The executive order alludes to converging global definitions of data protection and adequacy

Maintaining adequacy in data protection should and will remain a top priority for the U.S., particularly in light of executive action like this one. The uncertainty of the state of trans-Atlantic data flows after the "Schrems I and II" decisions placed U.S. data practices under heightened scrutiny. This executive order signals the country's desire to take concrete steps to protect the commercial and government data of its own citizens.

Other nations are already acting to secure their data protection via adequacy provisions in their national privacy regulations. For example, Article 45 of the EU General Data Protection Regulation allows for the transfer of personal data to a third country, when the third country ensures an "adequate level of protection." In a similar vein, the executive order signals the U.S. is committed to partnering with like-minded countries with similar levels of adequacy to impose robust data practices that are "adequate" in nature, which will serve as an effort to future-proof and strategically equip the U.S. going forward.

The opportunity for newer, stronger and more secure approaches to data privacy

The order candidly states that AI-based malware use, related spoofing incidents and cyber threats are rampant in nations where data privacy controls are lax or where foreign governments can easily access the data of U.S. consumers by claiming national security exemptions, giving them strategic access to manipulate the data.

The order also provides hope for next-generation technologies and ways to share data responsibly and efficiently, noting a process to build new regulations that will potentially standardize and incentivize the use of privacy-enhancing technologies through joint efforts by the DOJ and the Department of Homeland Security. The field of PETs has already been defined and loosely framed through guidance by the National Institute of Standards and Technology, so this executive order effort could strengthen the potential use of the technology.

PETs leverage advanced cryptography and statistics to link data or servers in ways that allow for responsible data sharing without identifying the individuals behind the data. They can include a range of tools such as homomorphic encryption, federated learning, synthetic data and differential privacy. Implementing regulations will likely provide more clarity on the minimum privacy and security requirements — and use of PETs — that companies should be leveraging, which will help spark more innovative solutions to data privacy problems.
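
To make one of these tools concrete, below is a minimal, illustrative sketch of differential privacy, the last PET named above: it releases an aggregate count with calibrated Laplace noise so that no single person's record can be inferred from the output. The data, function names and epsilon value are hypothetical and chosen only for illustration; they are not drawn from the executive order or from NIST guidance.

```python
# Illustrative sketch only: an epsilon-differentially-private count using the
# Laplace mechanism. All names, data and parameters here are hypothetical.
import random

def laplace_noise(scale: float) -> float:
    # Laplace(0, scale) noise, generated as the difference of two exponentials.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    # A counting query has sensitivity 1: adding or removing one person changes
    # the true count by at most 1, so Laplace noise with scale 1/epsilon gives
    # an epsilon-differentially-private release.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: share roughly how many users opted in, without exposing any one row.
users = [{"opted_in": True}, {"opted_in": False}, {"opted_in": True}]
print(dp_count(users, lambda u: u["opted_in"], epsilon=0.5))
```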

Future considerations

While the order's fact sheet focuses on restricting the bulk purchase and sharing of sensitive data, it also leaves a trail of unanswered questions that public stakeholders can help answer by responding to the Advance Notice of Proposed Rulemaking issued by the DOJ's National Security Division.

The definitions of personal information, sensitive data and special category data vary, as does the way that data is shared between affiliated and nonaffiliated entities.

For example, in a post about recent privacy complaints, the FTC said, "Browsing and location data are sensitive. Full stop." The FTC has been pushing the envelope on this, consistently calling for heightened protections and responsibilities when processing sensitive data, including geolocation data, health data and browsing data, and for preventing inappropriate access by third parties, including data brokers, without consumer consent. Taking a closer look at the FTC's agenda with regard to protecting sensitive data sheds light on what is to come. Are we going to see more concrete enforcement action, and potentially a "commercial surveillance" rule, in this space, as well as potential further alignment between the U.S. and other countries in their approach to privacy?

The order includes "genomic and personal health data, financial data, geolocation" and other personal identifiers, as well as sensitive government data of military members and government sites, in its definition of sensitive data. Will this sensitive data also be subject to protections in recently passed U.S. state consumer privacy laws and data broker registration laws, as well as those outlined in older sectoral laws like the Health Insurance Portability and Accountability Act or Title V of the Gramm-Leach-Bliley Act?

Under the order, U.S. government-related data is a specifically protected category that the attorney general says "poses a heightened risk of being exploited by a country of concern to harm United States national security." This is because it is linked, or linkable, to categories of current or recent federal government employees, contractors or senior officials, or to sensitive locations controlled by the government, such as military bases or government properties. To what extent will America's list of foreign adversaries and allies continue to change and evolve? How will this impact our digital trade and data flows with these nations?

The term "data broker" is defined broadly, capturing the wide swath of third parties that process and share consumer data. Will this result in subjecting more companies that collect and sell personal data to the executive order than those that register as data brokers and are subject to U.S. state consumer privacy and data broker laws?

The order defines "country of concern" as a designated foreign government that "has engaged in a long-term pattern or serious instances of conduct significantly adverse to the national security of the United States or the security and safety of United States persons, and poses a significant risk of exploiting bulk sensitive personal data or United States Government-related data to the detriment of the national security." It delegates final authority to designate specific countries of concern to the attorney general. How will the current and future attorney general exercise this authority?

2024-03-21 12:00:26
The minimization principle and the best interests of children in the digital world: A Brazilian perspective
https://iapp.org/news/a/the-minimization-principle-and-the-best-interests-of-children-in-the-digital-world-a-brazilian-perspective

Expansive access to the internet has solidified the integration of children and adolescents into the virtual world. According to UNICEF, "one in three internet users is a child under 18 years of age," and in Brazil, 92% of children and adolescents reportedly use the internet. Despite the benefits of digital inclusion and connectivity, there are risks associated with excessive device use and exposure to inappropriate content.

Under Brazil's General Data Protection Law, the processing of minors' personal data must strictly align with their best interests, and access to applications should not be conditional on obtaining personal data. However, the effectiveness of risk mitigation tools is often tied to the processing of personal data of children and adolescents. Although not the sole mitigation option, technical and organizational measures can attenuate risk and establish access control and content limitation mechanisms.

Therefore, understanding the legal aspects of processing minors' personal data is vital for appropriate and effective protection. The following analysis is derived from Brazil's LGPD, but numerous similarities with data protection norms globally, such as the EU General Data Protection Regulation, allow the overall reasoning to be applied elsewhere.

The literal interpretation of Brazilian law suggests that processing children's personal data necessarily requires consent from a parent or guardian. This has spurred discussion on the limits and conditions for processing minors' personal data, leading Brazil's Data Protection Authority to clarify that such data can be processed under the legal bases provided in the LGPD — which include, but are not limited to, consent — as long as the best interest of the minors is observed.

The minimization principle under the LGPD mandates that data used in processing should be "pertinent, proportional, and not excessive," limited only to what is necessary for its intended purposes. In practice, however, while the need to observe the minimization principle is clear, its application is sometimes misconstrued. Some stakeholders apply the minimization principle as an absolute rule, pursuing data minimization at any cost, even if it compromises the best data processing practices for a particular context.

When processing the data of minors, this misunderstanding persists, blurring the conceptual distinction between "best interest" and the principle of minimization. However, data minimization and the child's best interest are neither identical nor mutually exclusive.

There are scenarios where limiting access to minors' data is the most appropriate way to protect their best interest, particularly in cases of targeted advertising and predictive analyses. Conversely, comprehensive or more intrusive data processing might offer more effective control mechanisms for children’s online safety, being plausibly justified by prioritizing the best interests of children and adolescents. Thus, it is important to recognize that the child's best interest is not always synonymous with data minimization; in some cases, a more extensive processing of minors' data aligns better with their best interests.

Another principle, necessity, is associated with not processing excessive data. Nonetheless, the use of data that is insufficient to achieve a certain goal, e.g., a child's online protection, can also violate data protection law due to the underuse of necessary information. Furthermore, the robustness of collected data is directly related to the quality of data processing results. Insufficient data robustness (input) may lead to compromised output quality, potentially undermining the primary goal of protecting the best interest of children and adolescents. This shows that minimization of data cannot be a goal in itself, as this can — in certain contexts — lead to questionable results.

Processing data for security purposes can include the use of sensitive information, if the goals to be achieved are proportionate to the interference being caused by the activity. Current capacity for personal data processing enables the identification of users through various elements, including soft biometrics. This category includes data such as typing patterns, the position of the mobile device, and height relative to the ground, among others. When combined, these elements can contribute to assessing the probable age of a user. The discrepancy between self-declared age and observed behavior is crucial in managing minors' access to content and mitigating risks. Other data categories, like accessed websites and screen activity patterns, can also be incorporated into a predictive analysis to determine the real age of users more accurately.

While this procedure is not foolproof, it can improve age-verification mechanisms by going beyond the usual, and frequently insufficient, self-declaration; the discrepancy between the declared age and the personal data being collected can be an important tool in managing minors' exposure to risk and inappropriate content. This use can be seen in recent proposals worldwide for industries including pornography (online) and alcoholic beverages (offline).
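
As a purely hypothetical sketch of the kind of predictive analysis described above, the snippet below combines a few invented soft-biometric features into a simple classifier that flags self-declared adults whose behavior looks more like a minor's. The features, training data and threshold are made up for illustration; this is not drawn from any deployed age-verification product.

```python
# Hypothetical sketch: combining soft-biometric signals (typing cadence, device
# tilt, device height above the ground) to flag accounts whose behavior is
# inconsistent with a self-declared adult age. All values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [avg_seconds_between_keys, device_tilt_degrees, device_height_m]
X_train = np.array([
    [0.18, 35.0, 0.9],   # sessions labeled as adult
    [0.22, 30.0, 1.1],
    [0.40, 55.0, 0.6],   # sessions labeled as minor
    [0.45, 60.0, 0.5],
])
y_train = np.array([0, 0, 1, 1])  # 1 = likely minor

model = LogisticRegression().fit(X_train, y_train)

def flag_declared_adult(session_features, declared_adult=True, threshold=0.7):
    # Return True when a self-declared adult's behavior looks like a minor's.
    p_minor = model.predict_proba([session_features])[0][1]
    return declared_adult and p_minor >= threshold

print(flag_declared_adult([0.42, 58.0, 0.55]))
```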

Moreover, if analyzing these inferences can reveal the identities of minors claiming to be older, it should also be possible to identify those portraying themselves as younger online. As recently highlighted by the European Commission's decision in December 2023, ensuring a safer online environment for children is a priority, and adopting more intensive personal data processing measures can be an adequate pathway to ensure children’s online safety.

It can be asserted, therefore, that when it comes to data protection law, the principles of the minor's best interests and of minimization, albeit not incompatible, are not to be simply inferred from one another. That is because prioritizing minors' best interests does not necessarily equate to minimizing the data being processed. Although data minimization is crucial for the protection of minors in various contexts, handling this information involves nuances, especially when ensuring a safer online environment.

Identification through personal data processing, while subject to legal and ethical considerations, can help effectively manage minors' access to different types of content. In this context, despite involving more extensive data processing, certain processing measures can be effective mechanisms for safeguarding the best interest of the child in a digitalized world.

2024-03-05 10:45:34
Toward a risk-based approach? Challenging the 'zero risk' paradigm of EU DPAs in international data transfers and foreign governments' data access
https://iapp.org/news/a/towards-a-risk-based-approach-challenging-the-zero-risk-paradigm-of-eu-dpas-in-international-data-transfers-and-foreign-governments-data-access

Since the Court of Justice of the European Union's "Schrems II" judgment in July 2020, data protection authorities in the EU have developed a "zero risk" theory in relation to Chapter V of the EU General Data Protection Regulation. They have asked data controllers and processors that transfer personal data outside the EU to "eliminate" all risks of access to European personal data by the intelligence and law enforcement agencies of foreign countries whose legal systems do not include data protection safeguards that are essentially equivalent to those mandated by EU law.

At first, the "zero risk" approach concerned transfers of European personal data to such countries. As a result, there has been growing legal and commercial pressure for many non-EU companies to localize data in Europe and propose so-called "sovereign" solutions. However, this has often been deemed insufficient by DPAs and other authorities that have highlighted the risk of extraterritorial access to data stored in Europe and asked that any risk of such access by foreign authorities be "eliminated" as well. 

The legal actions by DPAs have been combined with political action by European governments. Several initiatives have been undertaken in this respect, including the ongoing discussions at the EU Agency for Cybersecurity about the introduction of "sovereignty requirements" into the EU Cybersecurity Certification Scheme for Cloud Services (EUCS).

In an extensive study published today, I claim the DPAs' "zero risk" theory, which is very similar to the "immunity from foreign laws" political proposal, is overly restrictive, is not mandated by the GDPR and could have a number of adverse effects. 

To be sure, the DPAs' stance on these issues is understandable. First, DPAs are obliged to enforce compliance with "Schrems II." Second, DPAs seek to fulfill their role as the ultimate guardians of European personal data in an age where government surveillance has attained a high level of sophistication. Third, DPAs provide oversight in an exceedingly complex area and, thus, are drawn to solutions that are as straightforward to comprehend as possible. Unfortunately, attaining simplicity regarding government access to data creates insurmountable challenges and unintended adverse effects in practice.

The notion that data controllers can take measures to entirely "eliminate" any risk of unauthorized access to European personal data by foreign governments is grounded on questionable assumptions, including the belief that companies headquartered in the European Economic Area are shielded from direct or compelled access. It is also marked by a lack of clarity surrounding terms like "sovereign solutions;" unverified claims suggesting ownership or staff requirements can confer "immunity" from foreign laws; questionable interpretations of the GDPR, such as automatically categorizing requests from foreign countries as "disclosures" not authorized by Article 48 of the GDPR; and unrealistic expectations, such as the idea that a social media company could provide its global services in the EU without transferring user posts and interactions to countries outside the EU. This line of thinking leads to impractical solutions that have significant costs.  

The GDPR, the Charter of Fundamental Rights and EU law as a whole do not mandate such an absolutist approach to data transfer risks at the expense of innovation, economic growth and other rights guaranteed by the charter. On the contrary, they allow a more nuanced and risk-based approach to data transfers that envisions data protection measures proportionate to the risks at hand. This approach takes into account the nature of the data, the likelihood of access by foreign governments and the severity of the potential harm. 

After an exhaustive analysis of all judicial and DPA decisions on these matters since July 2020, the 95-page study formulates 12 recommendations, six inviting a risk-based approach to international data transfers and six others concerning the critical issue of extraterritorial access to data localized in Europe.

Concerning the first issue, the study suggests the European Data Protection Board, DPAs, and ultimately the European Commission and other relevant European institutions should revisit, clarify and coordinate their views and the interpretation of rules on international data transfers in order to:

Enable consideration of past practice and empirical context in assessing risk

DPAs should acknowledge the significance of the "practice related to the transferred data," as highlighted in the final version of the EDPB Recommendations on supplementary measures.

Explore scalable transfer solutions for startups and SMEs

European authorities should explore, develop and promote transfer solutions tailored for startups and small to medium-sized enterprises that may lack the financial resources needed for extensive legal expertise and detailed transfer impact assessments. 

Recognize that Chapter V of the GDPR does not mandate the degradation of services that inherently rely on global data flows

DPAs should acknowledge that a proportionate approach to Chapter V does not preclude data transfers that are initiated and sought by individuals themselves and are indispensable to enable the exercise of other rights in the EU Charter of Fundamental Rights, such as freedom of expression and information. Specifically, how can users share posts on social networks and interact with a global audience without transferring data beyond EU borders? 

Should we contemplate geoblocking not only on social networks but also on communication platforms, video-sharing sites, online collaboration tools, forums, messaging services and even any EU website that contains personal data? Does Chapter V of the GDPR require the EU to be disconnected from the global internet? 

Provide workable solutions for EU businesses that rely on cross-border data flows

Similar considerations arise for numerous EU businesses that depend on cross-border data transfers for their operations, such as to provide requested services like online bookings and travel agencies, detect and prevent fraud, and defend against cyberattacks. Crafting viable solutions necessitates a nuanced approach based on risk assessments and proportionate safeguards rather than stopping cross-border data flows that are essential to the functioning of the service. 

Reassess the EDPB's supplementary measures and the practices of European DPAs under the prism of a risk-based approach

The EDPB should revisit its recommendations on supplementary measures and its practices and interpretation of the GDPR to clarify that it enables a risk-based approach to data transfers that ensures measures designed to protect the data are proportionate to the transfer risks at hand. Moreover, the EDPB should establish an expert group tasked with identifying and describing use cases necessitating cross-border data flows most commonly faced by organizations and the available and appropriate measures that might be applied to them. 

Enable a more flexible interpretation of Article 49 derogations

DPAs have precluded, in theory, the use of derogations, further compounding the complexities of data transfers. In practice, though, DPAs have accepted the use of derogations in some cases to permit some EU institutions to continue to use tools that have "become indispensable to the daily functioning" of such institutions, as shown by the European Data Protection Supervisor's decision on the video-conferencing tool used by the CJEU. It could be useful, then, to adopt a more flexible approach to derogations for all organizations wishing to use similar essential tools and services.

Concerning the use of cloud service providers or other companies that localize their data and services in the EU but are subject to foreign laws, it may be useful for DPAs and other authorities in the EU to reflect, among other things, on the following issues:

Determine the relevance of the proposed criteria for "immunity from foreign laws"

The study finds that data localization, headquarters, ownership and local staff requirements do not truly ensure "immunity from foreign laws." In reality, the primary criterion is the personal jurisdiction of the foreign country as understood by that country, as well as its ability to "compel" the production of data by imposing sanctions. European institutions, such as the European Commission or DPAs, should study these questions more thoroughly before supporting the introduction of such strict requirements in the context of the EU Cybersecurity Strategy or the GDPR.

Clarify the meaning of "compliant EEA-sovereign cloud solutions"

The EDPB should explain the meaning of "compliant EEA-sovereign cloud solutions" or abandon ambiguous references to the politically charged term "digital sovereignty."

Assess the impact of "immunity from foreign laws" requirements

The European Commission, in the context of the EUCS negotiations, should assess the impact that "immunity from foreign laws" requirements could have on issues such as innovation in Europe and ensuring high levels of cybersecurity, which is required by the GDPR.

Explore the relevance of adequacy decisions in addressing extraterritorial data access requests

The European Commission and the EDPB should clearly explain the significance of obtaining an adequacy decision when grappling with the issue of extraterritorial requests to access data situated within the EU. CSPs and other companies spend billions to localize data in Europe in order to offer better protections. Strikingly, these efforts seem to place companies in a more precarious situation compared to when they transfer the same data to the U.S. or other countries that benefit from an adequacy decision.

Consider trade-offs between encryption and functionality

Trade-offs should be weighed when employing encryption to safeguard data at rest against unauthorized access, since the loss of functionality encryption may cause can significantly constrain the use of AI and cloud computing technologies.
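
As a small illustration of that trade-off, the sketch below (assuming the third-party Python "cryptography" package) shows that once a record is encrypted at rest, it cannot be searched or analyzed until it is decrypted wherever the key is available; this is the functionality loss weighed against the protection gained.

```python
# Minimal sketch of the functionality trade-off: encrypted data at rest is
# opaque to search and analytics until it is decrypted with the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
vault = Fernet(key)

record = b"customer=Jane Doe; country=FR; diagnosis=..."
ciphertext = vault.encrypt(record)

# A keyword search over the ciphertext fails: the provider sees only opaque bytes.
print(b"Jane" in ciphertext)                 # False (with overwhelming probability)
# Functionality returns only after decryption, i.e., wherever the key lives.
print(b"Jane" in vault.decrypt(ciphertext))  # True
```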

Reflect on satisfactory solutions for the EU-US e-evidence agreement challenges

The privacy community in the EU could play a useful role in assisting the European Commission with constructive ideas on how the ongoing negotiations of the EU-U.S. e-evidence agreement could effectively address and satisfactorily resolve the conflicts of laws related to Article 48.

Moving away from a zero-risk approach in favor of a more flexible and risk-based interpretation of Chapter V of the GDPR appears legally justified. Such flexibility could offer pragmatic, feasible solutions to the day-to-day challenges organizations face and provide relief to data controllers and processors throughout Europe. The EDPB and DPAs, however, lack the capacity to provide definitive solutions in relation to these issues; only governments can do so. As the study concludes, democratic governments must intensify recent efforts at promoting "data free flow with trust" and advancing the concept of "trusted government access." International negotiations are emerging as the most viable, if not the sole, avenue for forging consensus on the protocols governing access to personal data that impacts the rights and interests of individuals in other countries.

This study will be interesting for data controllers, processors, practitioners, regulators, policymakers (especially amid EUCS negotiations), academics and all GDPR enthusiasts!

2024-02-21 12:00:07
Waiting for certainty: When a comprehensive national privacy law is not coming
https://iapp.org/news/a/waiting-for-certainty-when-a-comprehensive-national-privacy-law-isnt-coming

In Samuel Beckett's play "Waiting for Godot," the two main characters wait the entire production for a mysterious man named Godot, who continuously sends word that he will appear but never does. A major theme of this existentialist story is that life is absurd, including the suffering involved in waiting for something that may never arrive.

Each time a new U.S. state privacy law is passed, or a regulatory body announces a new data protection enforcement action, I am reminded of this play and how privacy professionals keep waiting for a federal comprehensive privacy law — but continue to be disappointed.

Unlike the characters in "Waiting for Godot," maybe we don't have to sit around and wait for something that will never arrive. Perhaps we should consider alternative solutions — ones that have worked in the past when other policy debates have stalled. Using a piecemeal approach instead of comprehensive federal policy could be the solution to gain more clarity and certainty in privacy governance.

Having witnessed policymaking up close for many years while working in the U.S. Congress, I've learned some progress is better than no progress on such an important issue. A former member of Congress used to say, "Don't let the perfect be the enemy of the good." I learned on the Hill that you often have to consider a different approach to achieve the goal.

If the holistic approach isn't working, try it piece by piece.

Current federal privacy laws are sector-specific. Privacy professionals are already accustomed to the segmented regulatory environment with the Children's Online Privacy Protection Act for online services directed toward children, the Health Insurance Portability and Accountability Act for patient health information, and the Family Educational Rights and Privacy Act for access to educational records, just to name a few.

As states fill the void with consumer rights legislation, and in the absence of overarching federal legislation, we should consider pivoting our efforts to codifying the best of those state laws into small victories at the federal level.

If asked, many members of Congress would agree they prefer single-subject legislation, so let's give them some. Let's pull the best and most tightly drawn ideas from state bills and deploy them at the federal level, one at a time. For example, a 2023 Pew Research Center opinion survey showed 67% of Americans did not understand what companies are doing with their data. This supports the need for strong notice and transparency requirements for businesses that are found in many consumer privacy protection laws passed at the state level.

The ability for a consumer to access personal information collected by companies, correct that information and have that data deleted upon request is a staple of state-passed privacy laws. Providing a consumer the right to control their data is noncontroversial, and if this policy were put before the U.S. Congress it would pass with an overwhelming majority in both the House and Senate.

We are all familiar with the "Unsubscribe" link at the bottom of businesses' email solicitations. This ability to opt out of having personal data processed for advertising purposes is another commonly found provision in state-passed privacy laws. Opt-in provisions are also prevalent but with more nuances. Addressing the opt-out and opt-in rights of consumers at a federal level would offer certainty to consumers and clarity to businesses on how personal data should be treated.

A comprehensive national privacy law is our "Godot" and unlikely to arrive any time soon. Once we understand this, we should look for victories where we can find them. Difficult at best to get through Congress at any time, a comprehensive bill becomes an impossibility during a presidential election year. Pivoting to alternative solutions is something we owe consumers, who just want to know they are protected in this digital age.

2024-02-08 12:42:27
Behavioral characteristics as a biometric: Something to keep an eye(scan) on
https://iapp.org/news/a/behavioral-characteristics-as-a-biometric-something-to-keep-an-eyescan-on

The U.S. consumer privacy laws that took effect in 2023, and those slated to do so later this year, will impact multiple industries and sectors. They will regulate items as diverse as universal opt-out signals, dark patterns and data protection assessments. And despite significant variations in scope, application and enforcement, they all contain a relative constant — biometrics.

Traditionally, biometric systems — designed to capture and compare certain identifiers to previously documented references — have relied predominantly on biological traits that could not be altered, like fingerprints and iris shape, to serve as identifiers. 

For years, Illinois' Biometric Information Privacy Act, with its expansive application, daunting private right of action and successive case law, has generally been regarded as the gold standard for entities collecting or processing biometric information. However, despite its reputation, the BIPA regulates only biometric identifiers concisely defined as "a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry."

Following the BIPA's outline, the biometric-specific privacy laws enacted in Texas and Washington state similarly define biometric identifiers within the more traditionally understood scope of biological characteristics or patterns. The same is true for Connecticut's Data Privacy Act, Utah's Consumer Privacy Act, Virginia's Consumer Data Protection Act and the entirety of the 2024 class of privacy laws in Florida, Montana, Oregon and Texas.

Such a biology-dependent conceptual framework for biometrics may soon see a deliberate and significant expansion beyond purely fixed characteristics. Looking closely at the nuances of the mosaic of laws that will impact the collection and use of biometrics in the U.S., it seems behavioral characteristics may soon play a much larger role.

Similar to the EU General Data Protection Regulation's broad definition of biometric data — which includes an individual's physical, physiological and behavioral characteristics — the California Consumer Privacy Act as amended by the California Privacy Rights Act, the Colorado Privacy Act and New Jersey's Senate Bill 332 each include behavioral characteristics in their definitions of biometric information and biometric identifiers, respectively. Notably, sleep, health and exercise data is included under the physiological or behavioral characteristics that qualify as biometric information in California, and behavior patterns and characteristics are included as defined biometric identifiers in Colorado and New Jersey.

Following in those progressive definitional footsteps, Washington state's My Health My Data Act and Nevada's consumer health data privacy law expressly include "behavioral characteristics" as a category of protected consumer health data. Nevada's law goes even further than the MHMDA and expressly includes such alterable identifiers as tattoos, scars and bodily marks in its biometrics definition.

Not surprisingly, the U.S. Federal Trade Commission and federal government have not been sitting idly by while states address biometrics. In its May 2023 policy statement, the FTC broadly defined biometric information to include "data that depict or describe … behavioral traits, characteristics, or measurements of or relating to an identified or identifiable person's body" and specifically included in its definition "characteristic movements or gestures." The FTC, in a recent blog post, named protecting biometric information a top "priority." Also, the executive branch, by way of President Joe Biden's executive order on artificial intelligence, has expressly acknowledged the power of biometrics, directing that its application to movement-related traits, like gaze direction and hand motions, be carefully considered to allow for the equitable application of AI technologies.

This confluence of pioneering statutory language in comprehensive privacy laws from California and Colorado, and in consumer health data privacy laws from Nevada and Washington, coupled with the breadth of the federal government's definitional position, may have paved the way for a much broader general interpretation of what qualifies as biometric data going forward. Even though more than a dozen state legislatures tried, and failed, to pass biometric data laws in 2023, it will be telling to see if any states, or new regulations, move beyond regulating collection protocols for biometric identifiers and delve deeper into the processing and use of such biometrics, however broadly defined. 

Perhaps a more layered approach and risk-based analysis of the actual processing of biometrics will prove to be a more efficient way to cultivate innovation and develop technologies without sacrificing important privacy and equity concerns. Examples of this approach can already be found in states like California and cities including New York, which have imposed a human monitoring and control obligation on certain automated decision-making when decisions of consequence are at stake.

Regardless of what the future of biometric regulation holds, given the heightened protections and rights already afforded to biometrics as a universally recognized category of sensitive data, businesses deploying any form of unique identifier collection or processing may do well to quickly add the examination of their biometrics practices and policies to their New Year's resolution list.

2024-02-05 11:47:58
Study: Privacy is a key to customer trust
https://iapp.org/news/a/study-privacy-is-a-key-to-customer-trust

More than 160 countries have omnibus privacy laws, yet business leaders recognize privacy is more than a compliance exercise — it is a business imperative inextricably tied to customer trust. Ninety-four percent of organizations said their customers would not buy from them if they did not adequately protect data. And customers are looking for demonstrable evidence. Ninety-eight percent said external privacy certifications — like ISO 27701 and the APEC Cross-Border Privacy Rules — are important in their buying decisions.

These are some of the findings in the Cisco 2024 Data Privacy Benchmark Study, released 25 Jan., which draws on more than 2,600 anonymous responses from security and privacy professionals in 12 geographies. 

Strong support for privacy laws

Privacy laws put additional costs and requirements on organizations, including the need to catalog and classify data, implement controls and respond to user requests. Despite these requirements, organizations continue to overwhelmingly support privacy laws. Eighty percent of respondents said privacy laws have had a positive impact on them, versus only 6% who said the impact has been negative.   

Attractive economics

Privacy has continued to provide attractive financial returns for organizations around the world. In this year's survey, 95% indicated privacy's benefits exceed its costs. While privacy budgets remained roughly flat, on average, for 2023, at USD2.7 million, the average return on privacy investment was 1.6 times, meaning the average organization gets USD160 of benefit for each USD100 of privacy investment. Thirty percent of organizations are getting returns of at least two times their privacy investment. 
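
For readers who want the arithmetic spelled out, the short sketch below shows how a 1.6 times return translates into benefit per USD100 invested. Only the averages come from the study; the per-organization calculation is purely illustrative.

```python
# Hypothetical arithmetic only: translating a privacy ROI multiple into benefit
# per USD100 of spend. The 2.7M budget and 1.6x multiple are study averages;
# individual organizations will differ.
average_budget = 2_700_000   # USD, average privacy budget cited above
roi_multiple = 1.6           # average return on privacy investment
implied_benefit = roi_multiple * average_budget

print(f"Benefit per USD100 invested: USD{roi_multiple * 100:.0f}")  # USD160
print(f"Implied average benefit: USD{implied_benefit:,.0f}")        # USD4,320,000
```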

Slow progress on AI and transparency

Consumers are concerned about artificial intelligence use involving their data today and 60% have already lost trust in organizations over their AI practices. Ninety-one percent of respondents said their own organization needed to do more to reassure customers their data was being used only for intended and legitimate purposes in AI. This percentage was 92% last year, which indicates not much progress has been made.    

When asked what they are doing to build confidence in their AI use, organizations cited several initiatives. Fifty percent are ensuring a human is involved in the process, 50% are trying to be more transparent about AI applications and 49% have instituted an AI ethics management program.

Concerns with generative AI

Generative AI applications have the power to use AI and machine learning to create new content quickly, including text, images and code. Seventy-nine percent of respondents said they are already getting significant value from generative AI. But this new tech brings risks and challenges, including the protection of personal or confidential data entered into these tools.   

Over two-thirds of respondents indicated they were concerned about the risk of the data being shared with competitors or with the public. Nonetheless, many of them have entered information that could be problematic, including 48% who said they have entered nonpublic information about the company. Fortunately, many organizations are starting to put in place controls on the tools used or data entered.

Recommendations

The findings point to specific recommendations for organizations' privacy programs:

  • Provide greater transparency in how your organization applies, manages and uses personal data, as this will go a long way toward building and maintaining customer trust.
  • Establish protections when using AI for automated decision-making involving customer data, such as AI ethics management programs, involving humans in the process and working to remove any biases in the algorithms.
  • Apply appropriate control mechanisms and educate employees regarding the risks associated with generative AI applications.
  • Continue to invest in privacy to realize the significant business and economic benefits for your organization.
2024-01-31 11:45:17
Celebrating Data Privacy Day with 'optimism,' 'conviction'
https://iapp.org/news/a/celebrating-data-privacy-day-with-optimism-conviction

Sixty-eight percent of the titles on LinkedIn's Jobs on the Rise list didn't exist 20 years ago. While that's a mind-blowing stat, it's not a surprise for many of us in the privacy community, as we are some of the first to have the word "privacy" in our job title. In the span of just 20-plus years, an entire profession of lawyers, project managers, engineers and product managers dedicated to privacy has emerged, as well as a smattering of privacy-forward startups and companies. With the rise of artificial intelligence and the continuing growth of data-driven industries, the demand for privacy professionals and privacy-focused companies will only continue.

Since the EU General Data Protection Regulation took effect in 2018, LinkedIn has seen a surge in privacy-forward startups and job postings: privacy engineers, data management engineers, privacy program managers, privacy lawyers and privacy ethicists. In the last five years, the number of LinkedIn members globally with the title chief privacy officer increased around 35%, privacy engineer job titles grew by 40% and LinkedIn members who identified GDPR as a skill grew over 30%.

This talent pool is strong, but currently small, and for that reason is in extremely high demand, with a median tenure at a company of just 1.2 years. And similar to other industries, including green talent, privacy jobs are subject to a gender gap — 67% men compared to 33% women — which is quite a shift from 20-plus years ago when many professionals were covering privacy as an add-on to their primary job.

All of this frames my optimism-with-conviction for 2024 and beyond. As privacy professionals celebrate Data Privacy Day, keep in mind that as generative AI and other new technologies emerge, there will be an even greater need for professionals specializing in privacy engineering, privacy law, privacy product management, data management and privacy ethics. Just as GDPR sparked new careers in privacy, I strongly believe the emergence and responsible growth of AI will do the same.  

2024-01-29 11:46:22
Breaking down the digital wall to obtain a data privacy role
https://iapp.org/news/a/breaking-down-the-digital-wall-to-obtain-a-data-privacy-role

I have been working in a data privacy or security role for over 20 years, but got my start as a data privacy professional by accident. I was in the right place at the right time.

At the time, there were no privacy frameworks, no formal training and no preference that one hold a legal degree to assume privacy work — I am not a legal professional today. Data privacy was not a main driver in organizations and was often separated from information technology. It was seen more as a legal or compliance function, rather than as adding value to the organization's brand.

California passed its first data breach notification law, codified at California Civil Code sections 1798.29 and 1798.82, in 2003, setting the groundwork for companies to inform impacted individuals that their information may be subject to "unauthorized acquisition of computerized data that compromises the security, confidentiality, or integrity of personal information maintained by the agency." I was tasked with implementing a nuanced program to ensure compliance with the code and hold companies accountable.

While privacy may still be seen that way in some organizations, throughout the past 20 years privacy professionals have continued to be front and center in maintaining a balance between innovation and trust.

Admittedly, I have taken my understanding of data privacy for granted as I have worked in and seen the industry change from its inception. Much of my knowledge comes from firsthand struggles in aligning privacy with information technology, cybersecurity and physical security. I have sat in meeting rooms, working on data mapping and supporting information governance roles, where I had to explain, more than once, "what does privacy mean, anyway?"

As I took on roles with increased responsibilities, I had the opportunity to interact with thoughtful and passionate people who want to move into data privacy roles but cannot, as companies prefer already-skilled workers. Most junior roles require at least two years of experience, making it challenging for professionals or recent graduates to move into the data privacy field.

Some have reached out to ask where they can start. I recommend:  

  • Assessing how your current experience or education matches entry-level data privacy roles. I am a firm believer that understanding basic compliance control frameworks, being effective at navigating ambiguity and recommending balanced solutions can get your foot in the door. Teaching prospective hires privacy rules is easier than teaching critical thinking or soft skills.
  • Networking with professionals who are responsible for a privacy function. If you are employed, seek out the leader responsible for this function within your current organization, inform them of your skills and ask for opportunities to learn or for mentorship. If you are a new or soon-to-be graduate, begin networking with professionals in the area, attend privacy or cybersecurity events, and leverage your professors' network.
  • Seeking out job shadowing or internship opportunities. Companies are generally open to job shadowing or cross training for employees who meet performance standards. This not only improves the employee's experience but increases abilities for internal mobility. Ask your manager, human resources or privacy employees about those options. Recent graduates or those in their final two years of school should consider an internship to assess a future career while gaining experience.
  • Assessing the benefits of professional membership and certifications and pursuing opportunities to connect with professionals in your area. Professional certifications and connections can help with your education in and appreciation of this fascinating field.
  • Staying up to date with regulations and trends. Regulatory data privacy updates, evolving court cases and more are found almost daily in blogs and news articles. I like to ask any potential privacy candidate what resources they use to keep current on privacy regulations and developments.

Data privacy is growing and there is a need for diverse candidates. Invest a few hours each week to make connections, find learning resources or attend local conferences. Much of my professional success has been due to networking and building a coalition of professionals who can discuss common industry problems. It also makes for a good therapy session.

As my general counsel stated when I was given the responsibility of implementing controls to comply with California's 2003 data breach notification law, "proactively get out there and meet people. Don't let people first meet you when there is a privacy incident."

2024-01-17 13:00:37
Privacy laws, ethics and the conundrum of DNA
https://iapp.org/news/a/privacy-laws-ethics-and-the-conundrum-of-dna-2

By Martin Gomberg, CIPP/E

When the nonanonymized genomic data of an individual is processed for any purpose — including medical, law enforcement or retail consumer uses — the sensitive personal data of all related individuals, directly or indirectly identifiable, is also processed. This includes the personal data of those unaware of the processing, as well as those who won't or can't provide consent. Processing an individual's data without their knowledge, consent or an appropriate legal means is, by definition, surreptitious.

A DNA sample processed to identify an inheritable health risk for one brother who wants to know could identify potential risks for another brother who adamantly does not want to know his risk. Even if the processing is artifactual and unintended, information has been processed by a company and some part of that data is related, directly or indirectly, to both brothers. Of course, the genomic profiles of two brothers are not identical. Roughly half of each brother's DNA is inherited from their father, with the brothers sharing only some of it in common. The other half is inherited from the mother's familial line and, together with the genetic mutations each carries, this DNA both relates the brothers and expresses their unique individuality.

Regarding individuals' privacy, and the emerging language in privacy laws, it is less about the specifics of the data processed and more about the challenges of processing sensitive, related and reasonably associated individuating data.

There is no more intimate exposure of our person than through our most generationally persistent and impactful data, our DNA. It can reveal our ancestral makeup, family relations, traits we pass forward, predisposition to disease and the specific diseases we carry, our physical characteristics, and our predilection to specific behaviors and behavioral abnormalities. It can inform on paternity, infidelity, criminality and other sociolegal questions. It can predict likely longevity.

Unlike other data typically anticipated by privacy laws, such as the personal data of individuals or shared households, DNA is extra-individual. The data is not only associated with us and our familial households, but also others to whom we are biologically related. Yet our innate curiosity and personal interest in knowing about ourselves, our relatives and our ancestry compels millions of individuals to happily contribute a swab or vial of saliva, granting consent to for-profit direct-to-consumer genomic processing companies to assess our genotypical makeup and relatedness.

But privacy laws do not handle the interrelatedness of individuals well. Nor do they consider how our actions, disclosures or processing we consent to impacts next or descendent generations. Clearly DNA data is personal and highly sensitive. But can laws treat the DNA data shared in a genetic cohort the same as an individual's personal data?

Regulatory challenges

Whether processing of one family member's DNA is at the same time processing of another's without their knowledge is the core of the question of whether the processing is, in fact, surreptitious. It is also the core of regulatory challenges direct-to-consumer companies may face.

There are nearly 200 privacy laws worldwide. In privacy there are principles, truths and structures common to existing and emerging laws everywhere across the globe. Irrespective of the law, primary among these are informed consent and the use of personal data in the legitimate interest of the individual, company, community, or national or public good; in contracts; in necessary processing; and in compliance with law and authorities. None of these fit retail genomic data well.

There are 12 U.S. state privacy laws currently enacted and approximately 20 others pending. Irrespective of even the best efforts to manage security and privacy, aligning current policies and processing to existing and continually emerging domestic and potentially global regulations will be challenging. Each law imposes increasing demands on informed disclosures, processing, and protecting consumers and children.

The EU General Data Protection Regulation defines genetic data in Article 4(13) and, under Article 9, generally disallows processing this special category of health-related data, but because of its importance, Article 9(4) allows member states further conditions controlling or enabling medical, scientific and clinical use and research. But retail direct-to-consumer processing is treated inconsistently and is covered under varied regulation across EU member states. Through medical device, patient's rights, bioethics, health or genetic regulations, it may be disallowed in part, completely, or require a local or medical facilitator. Some U.S. companies avoid these complexities by not participating in European markets.

The U.S. has been more favorable to direct-to-consumer genomic processing. Absent comprehensive privacy regulations, it has largely been treated as a retail consumer service, unrestricted in most states as to data collection, use and sharing. But with increasing scrutiny by the U.S. Federal Trade Commission, new state privacy, health and child protection laws, and as the U.S. adopts a more GDPR-like posture and language, there may be new challenges to direct-to-consumer uses of genomic data.

Like the GDPR, California's Consumer Privacy Act and other new laws define "personal information" broadly, to include any information reasonably linked or associated, directly or indirectly, to "an identified or identifiable individual." All known family members share an identified relationship through inheritable DNA. When processing a father's genomics, his inheritable DNA profile — positive, benign and deleterious — is processed and identified. It is the father's right to know and consent for himself. But absent a means of consent, his children, once they are adults, are passive recipients of the processing performed under the consent given by their father.

Blood relatives are all reasonably linked by DNA. They are identifiable by genetic genealogy. Cousins estranged, or who never met, share familial data. With the qualifiers reasonably linked or associated, personal data of one family member is related to others, identified or not.

Other laws add to these challenges. Connecticut, Colorado and Virginia require opt-in consent for the processing of sensitive data and, once consent is given, a means to later revoke it. The CCPA identifies genetic data as sensitive, and consumers can restrict its inferential uses. Utah requires notice and a right to opt out. Without notice of the processing to impacted individuals, identified or identifiable, no means of consent is available to them, and neither restriction, revocation nor opting out of the processing is possible.

Colorado and California each disallow dark patterns, and surreptitious DNA processing is arguably one. All nonanonymized uses of DNA, regulated or not, share the problem of unintended processing of potentially exposable, usable or referential data about individuals: the processing of individuating data without legal basis, informed consent or even the subject's knowledge.

Even with the best efforts toward compliance, the combination of newly adopted EU-like language in U.S. law, a flood of emerging state regulation and legislation, what has been termed the "murky" nature of consent, and the nature of DNA itself may prove difficult, or insurmountable, for some businesses.

Default consent and surreptitious processing

One of the more difficult challenges of genomic processing is the use of consent. Consenting to the processing of our DNA is consenting to an exposure of the potential genomic makeup of close family, distant relatives and descendants. Surreptitious processing is any clandestine, covert or unauthorized processing. Processing the personal data of others as an artifact of one party's valid consent may be construed as surreptitious, as those others are uninformed and have no choice or opportunity to opt in or out.

Dark patterns

Does surreptitious processing in this context of default consent — a preticked box that defaults to "Yes, I consent to the processing of my DNA" — meet the criteria of a dark pattern? Absent any question of consent or any informed choice by its inheritors, it would seem so. Typically, a dark pattern refers to a deceptive interface that compromises individual choice and intent. In this case it is less a matter of deceptive design and more about an imposed and obscured default consent, and an absence of the means, or opportunity, to exercise informed choice.

Does granting consent for DNA processing today deny next generations informed choice?

Every lifespan is granular, stacked generation on generation, great-grandparent to great-grandchild, bracketed by all that came before and those ahead. The interests, choices and disclosures of one generation can trample and compromise those of the next. Great-grandparents, grandparents, parents, children, grandchildren and others share a lifespan together, each making choices, decisions and impactful disclosures about themselves, their family and their lives. The inheritance of a "default consent" to the disclosure of their genetic complement, given not by our children or theirs directly but by us as a parent, grandparent or other relative, denies them choice.

DNA data is unlike other personal data. It resists transparency, minimization, erasure and retention limits. Its individuation is not linear. A disclosure of self is also a disclosure of descendants and relatives, and it persists across generations. Parents, children, grandchildren and cousins make one another relatable and findable.

In a paper titled "Murky Consent: An Approach to the Fictions of Consent in Privacy Law," Professor Daniel Solove states "in most circumstances, privacy consent is fictitious. Privacy law should take a new approach to consent that I call 'murky consent.' Traditionally, consent has been binary — an on/off switch — but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious."

The recent 23andMe data breach that exposed data of individuals of Ashkenazi Jewish heritage is egregious, but the potential risk will only increase as we collect more data, increase its density, keep it perpetually and use it in new ways. Neither technology nor privacy laws can fully address this. Privacy expectations cannot be met where personal data is inheritable, touches multiple people or crosses generational barriers. Consent is a poorly fitting control and, as a minimal safety net, should expire, have limits, include designated proxies for postmortem decisions and require periodic revisitation for renewal.

And it is only a matter of time before digital IDs and other individuating data emulate DNA in being persistent, associative, inheritable, relatable and immutable, requiring us to rethink privacy laws. Even now, several companies and technologies promise to sequence an individual's entire genome and record it on an immutable blockchain.

Extrapolating from Maslow's Law of the Instrument: when all you have is a privacy law, all personal data looks like data relating to a natural (living) person. Except it doesn't, and it likely will less and less. We each suffer a cognitive bias toward the tools most familiar to us. Privacy laws are a poor fit for genomic data. They are also inadequate for blockchain-recorded, transindividual, postmortem, massively dense and generationally persistent data — and genomic data is increasingly all of these.

There is a difference between synchronic consumer data, where applicability of the law is constrained to the lifespan of a living individual, and perpetual consumer data that survives the passing of generations. The same laws or language may not fit both. Consumer privacy laws are myopic in focusing only on protecting the rights of individuals living today. Even legal bases like consent on an individual's behalf, public interest or a company's legitimate interest fail where the needs, risks and context for upcoming generations are unknown. DNA spans generations and people, both those alive and those yet to be born. Ethical responsibility will only increase with the sophistication and sensitivity of technologies as they increase our reach and expose more relationships to others.

DNA's 'unique' nature

Society demands a lot from its DNA data, continually increasing its density and persistence to support research and innovation. Europe's "1+ Million Genomes" initiative aligns 25 EU countries, Norway and the U.K. in establishing a genomic infrastructure for medical research and clinical trials. In the U.S., the National Institutes of Health's National Human Genome Research Institute anticipates genomics research could generate up to 40 exabytes of data in the next decade. Individual privacy benefits from minimization and limits on the data we hold and expose; innovation in medicine, health and scientific research, by contrast, benefits from increased collection, dense aggregation, and the retention and persistence of more data over time.

Increasingly, synthetic DNA data, or derived datasets, are being used to create pseudo datasets from real data for research purposes. These reduce the risk of exposing "live" individuating data. This is both an opportunity and a risk. Larger, denser, richer and generationally persistent datasets are needed for research and innovation, and they can be created using derivatives of the genetic profiles of millions. Persistence will allow their reuse in new ways for generations.

But with greater density and persistence, the risk of compromise and the attractiveness of the target also increase. As technology and statistical methods advance, localization, reidentification and individuation remain risks for a dataset that, by design, violates retention limits, data minimization, the rights to know and access it, and other principles associated with privacy laws. DNA is a conundrum for privacy, and privacy laws do not align well with genomic data.

Genomic processing is critical for research and clinical trials but is challenged by the absence of an obvious legal basis that spans individuals, relations and generations, or that covers the processing or disclosure of the sensitive personal information of nonconsenting or uninformed individuals and future generations. It seems unavoidable that the processing of one person's data under consent is also a violation of another's right to informed consent. That is the unintended consequence of current privacy laws and of the persistent, inheritable and unique nature of DNA.

2023-12-13 13:25:00
Empowering users: A universal interface for digital ad preferences https://iapp.org/news/a/empowering-users-a-universal-interface-for-digital-ad-preferences https://iapp.org/news/a/empowering-users-a-universal-interface-for-digital-ad-preferences A study published earlier this year by the European Commission, and conducted on its behalf by AWO, found numerous negative impacts of the digital advertising market on advertisers, publishers, users and society.

For example, disinformation websites are funded through digital ads, harming democracy and diverting revenues from legitimate publishers. Furthermore, the market's complexity and lack of transparency prevent advertisers from ensuring their ads aren't placed next to content that may hurt their reputation. This problem, like others, stems from the market's reliance on personal data and tracking to deliver ads and measure their performance.

The study argues that simplifying the enforcement of existing privacy laws could help remedy these problems by reducing the market's reliance on personal data. This would involve targeting key points in the data supply chain, to reduce the amount of work needed to make privacy law as effective as possible.

AWO proposed establishing a "single interface" for users to control their digital advertising preferences. This proposal has attracted considerable interest, as it would create a paradigm shift in the digital ads market, radically changing it for the better. Indeed, the idea is being explored by industry and the European Commission as part of its Cookie Pledge.

What is the single interface?

Section 8.3 of the AWO study suggests the European Commission work to establish "a single interface where individuals can easily indicate their preferences for data collection and targeting across the entire digital advertising ecosystem."

Through such an interface, users could:

  • Select the types of ads they want to receive on all sites, platforms and apps.
  • Switch off targeted ads altogether (a legally enforceable "Do Not Track" signal).
  • Include publishers they trust with their personal data on an inclusion list.

Using data for targeting and evaluating ad performance would still be possible, but it would be based on trust and a less intrusive approach that minimizes the processing of personal data.
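To make the idea concrete, the following minimal Python sketch illustrates the kind of machine-readable preference record such an interface might expose and how a publisher-side check could consume it. The field names, defaults and domains here are hypothetical illustrations, not part of the AWO study, the Cookie Pledge or any standard.

```python
# Hypothetical sketch of a universal ad-preference signal; field names and the
# default-off posture are illustrative assumptions, not an actual specification.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AdPreferences:
    targeted_ads: bool = False                                        # off by default: no personal data for targeting
    allowed_ad_categories: List[str] = field(default_factory=list)    # e.g. ["sports", "travel"]
    trusted_publishers: List[str] = field(default_factory=list)       # the user's inclusion list

def may_use_personal_data(prefs: AdPreferences, publisher_domain: str) -> bool:
    """Personal data may be used only if the user switched targeting on
    or inclusion-listed this publisher."""
    return prefs.targeted_ads or publisher_domain in prefs.trusted_publishers

# Example: targeting stays off globally, but one trusted newspaper is inclusion-listed.
prefs = AdPreferences(allowed_ad_categories=["sports"], trusted_publishers=["example-news.eu"])
print(may_use_personal_data(prefs, "example-news.eu"))  # True
print(may_use_personal_data(prefs, "adtech.example"))   # False
```

In a sketch like this, the signal would be set once by the user and read by every participant in the ecosystem, which is precisely what removes the need for site-by-site consent banners.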

The opportunity

First, the single interface would give users more control over their personal data than any other technological or legal solution could. It would automate, centralize and universalize user choices across the market, eliminating the need to deal with countless consent banners and ad settings. Lowering this burden on users is crucial given "cookie fatigue" and users' varying levels of digital literacy. It would also increase consumer empowerment beyond the gains in transparency expected from the Digital Services Act.

Second, the single interface would simplify the exercise of data protection rights. Instead of individually checking the legal compliance of thousands of companies' use of data, data protection authorities could use conformity with the signals of the single interface as a proxy. It would expedite both compliance and enforcement by regulating the market from the bottom up, rather than from the top down.

Third, the single interface would cause a paradigm shift in the digital advertising industry. Currently, by default, companies in the industry can gather as much personal data as they see fit based on their interpretation of privacy legislation. Through the single interface, data collection would be prevented by default, unless users agree to share it. This would significantly curb profiling and tracking, reducing companies' ability to rely on personal data to power their digital advertising services. It would institutionalize the EU General Data Protection Regulation's principles of privacy by design and by default, ending the era of indifference to legal norms by ensuring companies' autonomy does not override that of users.

This would significantly reduce the importance of personal data as a resource in digital advertising, leading companies to look for alternatives. Companies would reallocate resources toward solutions that rely on little to no personal data, such as contextual advertising, increasing the latter's effectiveness and uptake.

Fourth, the single interface would mitigate the market's negative societal impacts. At present, tracking allows websites hosting disinformation to generate higher revenues by demonstrating they have presented ads to "high value users," such as those who have visited a car company's website and may therefore be in the market to buy a car. Blocking tracking by default would restrict disinformation websites' access to this type of information, reducing their revenues. Curbing tracking would also reduce its climate impact, make it more difficult to manipulate users and reduce state agents' and criminals' ability to spy on users.

Finally, through the "Brussels effect," the single interface could set an example for regulating the digital economy, strengthening the EU's leadership in this area. It would set an innovative standard for data governance that could inspire other jurisdictions to build upon. It would also set a standard for user control that could be applied in other sectors, like the internet of things and recommender systems.

The challenge

Establishing the single interface won't be without its challenges. It is a project that should, therefore, be considered carefully and informed by academic research and stakeholder input.

One of the biggest challenges is governance. Who will manage and design the single interface, when an entity in control could abuse its power as a gatekeeper over users' data for its own benefit? For example, a browser operator could manipulate the design of the single interface to allow itself to collect more personal data than its customers (e.g., publishers) and competitors can, thereby increasing the comparative attractiveness of its services. This problem was highlighted by discussions around the ePrivacy Regulation's Article 10 on browser-based consent, as well as the U.K. Competition and Markets Authority's investigation into Google's "Privacy Sandbox" browser changes.

Governance concerns could be tackled through a co-regulatory working group including EU policymakers, relevant industry segments — such as platforms, advertising technology intermediaries, advertisers, publishers — and civil society. Also, as a common and uniform interface, such as an application programming interface, it could be accessed through browsers, operating systems and even app stores. With no single entity in control, the single interface would meet its objectives without favoring any particular market segment.

Creating an interoperable interface that can be accessed by all relevant market participants would be technically complex. One solution is to build upon a system already in development, the proposed European Digital ID. Similar to the single interface, the eID Regulation would establish a framework to automate the secure communication of user data based on their preferences, and only to the extent required, for example, to use banking services or file taxes. The eID framework could be a stepping stone toward the single interface. 

Additionally, although the initial work of establishing the single interface will be demanding, adapting it over time based on market innovation should be relatively easier. This would make it significantly more flexible and future proof than laws such as the EU GDPR and the DSA.

The single interface would also need to ensure companies generating revenues from digital ads, particularly publishers, can still contact their users directly. If users turn off the use of personal data for digital ads by default, publishers should still be able to present them with reasons to opt in, for instance, because access to ad performance data can help them generate higher revenues. When prompted to do so, users would be able to inclusion-list the publishers they trust and want to support. This would not take the form of a "cookie wall," restricting access to users that do not inclusion-list the publisher in question, as it would defeat the point of the interface.

Finally, the language used by the single interface and its design will determine users' ability to understand it. It should give users simple and direct access to the most privacy-friendly options, and more in-depth granular options, like allowing the use of only some categories of their personal data or only for some purposes. User-friendly design and language are, therefore, crucial.

The way forward

The single interface is a comprehensive solution that would bring about a paradigm shift in the digital advertising market. Establishing the single interface — and getting it right — is worthwhile because of its ability to make the market more balanced, privacy-friendly and sustainable.

2023-12-08 10:00:31
A 'slippery slope' of 'sousveillance' https://iapp.org/news/a/a-slippery-slope-of-sousveillance https://iapp.org/news/a/a-slippery-slope-of-sousveillance I first stumbled upon the term "sousveillance" a few years back, when my sister sent me a link to an article called "The psychology of privacy in the digital age." Coined by Steve Mann, the neologism "sousveillance" draws upon the French word sous, meaning below, and refers to a member of the public, rather than a company or authority, recording someone's activity.

Its applications are varied — ranging from "policing the police" instances of civilians filming encounters with law enforcement, to random acts of kindness videos on TikTok.

Around this time, I was bristling at — or more accurately, vehemently proselytizing against — an invitation to join my family's account on Life360, which allows users to track the mobile phone locations of its members in real time. Admittedly, my family joined Life360 to keep a protective eye on my nephews' movements, with the teens entering the "fly the coop" phase of their lives.

I could think of nothing worse, knowing my loved ones could see every movement of my smartphone and, by association, me. This form of sousveillance, however well meaning, felt invasive and creepy. I imagined awkward family dinners, where someone might ask why "my smartphone" had travelled across town, in the middle of the night, or when "my phone" hadn't returned to "home base" after a night out. No thank you.

Fast forward to September 2023, when I participated in the Sydney Marathon. As I crossed the finish line, a text message from race organizers beeped on my smartphone confirming my race time. Seconds later, yet more notifications, this time from family and friends, congratulating me on my time, pace and splits.

I was confused and conflicted to realize so many people knew my results instantly, almost before I had time to scrutinize the numbers myself. Yes, their support and well-wishes were appreciated. But equally so, it was disquieting to know so many people had been tracking my 42.2-kilometer race, whenever my chipped bib crossed timer mats strategically positioned throughout the course.

The same technology used to record my run splits, and to prove I hadn't short-changed any meters along the route, also allowed anyone with the Sydney Marathon app to track my run. Sure, I'd shared the app with my nearest and dearest, and in doing so, gave them consent to follow my progress on race day.

But I hadn't, knowingly, given consent for anyone else to track my race day performance. Yet the messages I received crossing the finish line suggested otherwise.

This got me thinking about my privacy and, specifically, about the dimensions of my privacy choices.

On the one hand, I'd decided opting in to my family's Life360 account wasn't worth the invasion to my privacy, even though these were people with whom I shared DNA. On the other, I'd knowingly given my permission for the same cohort to track my marathon performance. The hypocrisy wasn't lost on me. Using location-based technology to allow my family to track my chipped bib on race day was acceptable. Yet the same technology when used to track my mobile phone location went a step too far.

This contradiction reflects the dimensions of privacy and the complexity of decisions around it, including how a decision I make about my privacy today reflects only my current state of mind and my assessment of the privacy context. Such decisions have implications for my privacy in a different time span. My psychology and psyche may change over time, with implications for how I manage privacy transactions with myself and with others. What works for me today will not necessarily remain true during different scenarios, settings, applications and points in time.

Giving me agency to control my privacy fully allows me to assess and reassess the complex risk-reward privacy equation. This assessment is almost never binary. Rather, it's more like a privacy gauge. Sometimes my consent is complete, with the needle turned fully to the 100% gauge indicator, but the needle more often oscillates between the 0% and 100% radial markings.

Another consideration is the ability for anyone with the Sydney Marathon app to track my movements. Had I given my consent for anyone to track my run?

With neither the time nor inclination to review the Sydney Marathon policies and privacy statements to confirm what I had agreed to, or hadn't opted out of, I suspect consent-esque terms were most likely embedded in the waiver runners signed upon registration. And then, likely buried beneath the legalese around not holding event organizers responsible for a participant's demise, come race day.

Considering the time lapse between registration and race day, I'd have expected such an important, privacy-impacting consideration as the tracking of one's location to have been spotlighted and repeated often, or, at the very least, to have received larger real estate amid the surrounding legalese.

So, what's the harm in people following my run? A lesser risk of harm relates to preservation of ego. Despite competing with 13,000-plus marathon finishers, long distance running is an activity performed solo, in the presence of others. In some races, you bonk. In others, you fly.

I want to control the narrative of my own race. But I lose agency over the story of my performance when everyone has access to my stats and can form their own judgement about how my race unfolded.

A more serious risk of harm is a reality for some when anyone can know your every move. Life-threatening harms arise when others, with an intent to hurt, can follow your movements in real time. The physical and psychological strain of running a marathon is difficult enough without the additional fear and mental anguish of worrying whether a tormentor is weaponizing your bib location to track your movements on course.

Both events highlight the use of technology to surveil everyday activities. Here lies my deepest concern: the seemingly innocuous application of sousveillance technology to normalize everyday situations, targeted at users who are our trusted inner circle.

Through its ease of use and seemingly harmless application, we sleepwalk our way to a world where all-seeing technology is pervasive and more ingrained into our lives. When this happens, it becomes less prone to scrutiny, and it entrenches our dependency. It contributes to the gradual erosion of our privacy, by the people we connect with and to, and is a slippery slope to more invasive forms of sousveillance and surveillance.

Equally, it highlights a potential for large-scale surveillance to become more accepted. We risk falling into the faulty logic of "what-about-ism" regarding big tech overreach. Why worry about this big, bad Big Brother application, because "what about" this other invasive sousveillance I've subscribed to, to make my life easier? Will these privacy microintrusions erode our aversion to bigger intrusions and make us more likely to acquiesce to large-scale surveillance proper?

I'm not willing to turn my privacy gauge to 100% acceptance in the pursuit of my passion for running. I want to be better informed about the privacy implications of participating in an event. I want to change privacy-impacting decisions as often as I change my runners. I want to run where I choose my watchers. I want to narrate my own race story.

2023-12-07 13:00:22
Emerging trends in fintech privacy: 5 key areas to watch in 2024 https://iapp.org/news/a/emerging-trends-in-fintech-privacy-5-key-areas-to-watch-in-2024 https://iapp.org/news/a/emerging-trends-in-fintech-privacy-5-key-areas-to-watch-in-2024 Over the past decade, financial technology companies have transformed how consumers think about banking and financial services with user-centric technologies and practices. The innovative products and services they offer are facing increased scrutiny from state and federal regulators.

In 2024, fintechs should be prepared to navigate the following five data privacy and security topics.

Scrutiny of third-party tracking pixels, other tracking technologies may increase 

Many companies use third-party pixels, cookies, software development kits and similar technologies to track user activity or serve targeted advertising. Regulators have been scrutinizing these practices when they involve potentially sensitive personal data, especially when it is used for purposes consumers may not understand or consent to, like profiling or advertising.

Taking a broad view of what constitutes "sensitive" personal data, the U.S. Federal Trade Commission recently warned five tax preparation companies that they must obtain affirmative express consent before allowing these technologies to share financial data, marital status or family information for advertising purposes. Lawmakers are also taking notice. In July, six senators released a report calling for the investigation and prosecution of tax-prep companies for potential violations of privacy law in their sharing of "sensitive" consumer data via pixels and other tracking technologies. Lawsuits against the companies named in the report quickly followed.

These actions follow regulatory scrutiny and enforcement actions against companies transmitting medical or health-related data via pixels under similar theories. In light of the FTC's actions and the U.S. Consumer Financial Protection Bureau's past attention to behavioral advertising practices, fintechs that share financial or transaction data with third parties for advertising or other purposes may soon face scrutiny from regulators under the same theories.

Regulators will increasingly expect appropriate use, governance of AI technologies

In June, the CFPB published an issue spotlight on the use of AI chatbots by banks and financial institutions. The spotlight notes that, while financial institutions increasingly rely on AI chatbots for cost savings, their use poses a number of risks. These include privacy and security risks that, if left unmitigated, may make companies noncompliant with federal consumer financial laws.

AI applications can have tremendous benefits for consumers and companies alike, but regulators are likely to scrutinize companies that use AI in ways that harm consumers, whether by design or failure of proper governance.

Potential mandates around financial data rights 

The CFPB may mandate fintechs to provide specific personal financial data rights. The CFPB recently proposed a new rule to give consumers increased access and portability rights to their financial data. The proposed rule implements a dormant provision from Section 1033 of the 2010 Dodd-Frank Act and would apply to certain financial institutions, card issuers, digital wallet providers, and other companies and fintechs with data about covered financial products and services.

It would require these entities to provide consumers with certain transaction, balance, upcoming bill, account and other personal data upon request. Consumers could also authorize third-party companies, including those offering competing services, to request this data, and companies would need to maintain developer interfaces that comply with the rule's technical standards, to allow consumers to have their data ported to these companies.

The CFPB hopes this right will allow consumers to seamlessly switch between financial service providers, increasing competition. Third-party financial service providers that obtain data in this way would be subject to specific data privacy and security requirements, including regarding transparency, consent, use limitations, and onward transfer restrictions.

Data brokers may face new requirements 

In remarks given at the White House in August, CFPB Director Rohit Chopra announced plans to introduce new Fair Credit Reporting Act rules to regulate so-called "data brokers."

The proposal outline defines data brokers broadly as companies that collect, aggregate, sell, license or otherwise share personal data with other companies. These proposed rules would define data brokers as consumer reporting agencies, requiring them to comply with an array of FCRA requirements, including ensuring data accuracy, preventing its misuse and significantly restricting its use for marketing or advertising purposes.

States have also increasingly regulated data broker practices, with Texas enacting one of the strictest data broker laws in the country, and California enacting the Delete Act. In both cases, the CFPB is concerned with compliance challenges posed by AI and the changing financial data landscape.

New information security, breach response obligations likely

Federal and state regulators may continue to impose new information security and breach response obligations on fintechs.

Just this month, the New York Department of Financial Services updated its cybersecurity regulations for financial service companies, enhancing chief information security officer reporting and board oversight responsibilities, imposing new obligations for privileged system access and multifactor authentication, and requiring regulator notifications within 24 hours whenever ransomware or extortion payments are made. Last month, the FTC amended its Safeguards Rule to impose new data breach reporting obligations on certain nonbanking financial institutions.

Tips for how to prepare 

Existing privacy programs may be leveraged or enhanced to prepare fintechs to respond to these emerging trends. Specifically:

  • Personal data inventories and mapping can help define what data is shared with third parties via tracking technologies, utilized with AI services, and disclosed for new financial data rights, as well as to identify whether the shared data means the company is a data broker. This foundational information will help enable appropriate responses to these trends.
  • Pixel and tracker governance approaches, where standards are set for the use and configuration of third-party technologies, and the technologies used are known and configured to interoperate with consent management tools, can help address state-law obligations and minimize potential scrutiny from regulators.
  • Consent-management tools can be leveraged to obtain opt-in consent or allow opt-out rights, in line with federal and state laws, for the use of third-party cookies, pixels, SDKs or other technologies that transmit data for advertising, profiling or other potentially unexpected purposes (see the sketch after this list).
  • Accurate descriptions of personal data practices in privacy notices, webforms, and customer-facing apps and webpages that reflect actual personal data use and sharing practices help maintain trust and avoid regulatory scrutiny. Review these disclosures regularly and in connection with privacy impact assessments of new technologies and use cases.
  • Privacy and AI risk assessment processes can be utilized to identify potential risks with AI uses, and to help define the necessary controls and testing to mitigate them.
  • Incident response programs and plans used by security and privacy teams can be updated to confirm that new incident reporting obligations are considered and addressed.
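As a rough illustration of the pixel governance and consent-management points above, the sketch below shows one way a site could gate third-party tracking calls on recorded consent. The consent store, purpose names and endpoints are all hypothetical; a real implementation would integrate with an actual consent-management platform rather than a hard-coded dictionary.

```python
# Illustrative consent gating for third-party trackers; the consent store,
# purpose names and endpoints are hypothetical, not any vendor's API.

CONSENT_STATE = {"advertising": False, "analytics": True}  # e.g. loaded from a consent-management tool

def consent_given(purpose: str) -> bool:
    """Treat missing purposes as not consented (opt-in by default)."""
    return CONSENT_STATE.get(purpose, False)

def fire_pixel(url: str, purpose: str) -> None:
    """Send data to a third-party pixel only if the user opted in to that purpose."""
    if not consent_given(purpose):
        print(f"Skipped {url}: no consent recorded for '{purpose}'")
        return
    print(f"Would send request to {url} for '{purpose}'")  # a real site would make an HTTP call here

fire_pixel("https://tracker.example/pixel", "advertising")  # skipped: no opt-in
fire_pixel("https://metrics.example/pixel", "analytics")    # allowed
```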

By tracking these trends and continuing to support and evolve privacy programs and operations, fintechs can innovate and grow while navigating these emerging privacy and cybersecurity challenges.

2023-11-29 12:15:31
'Pay or consent:' Personalized ads, the rules and what's next https://iapp.org/news/a/pay-or-consent-personalized-ads-the-rules-and-whats-next https://iapp.org/news/a/pay-or-consent-personalized-ads-the-rules-and-whats-next In a widely discussed move, Meta gave Facebook and Instagram users the choice between paying for an ad-free experience or keeping the services free of charge using ads. The legal reality behind that choice is more complex. Users who continue without paying are asked to consent to the processing of their data for personalized advertising. In other words, this is a "pay or consent" framework for the processing of first-party data. 

Even though Meta's "pay or consent" framework is now reportedly a key target for a number of data protection authorities, this model is common in European digital services. Newspapers like Spiegel, Zeit and Bild present their readers with "pay or consent" choices, and such practices have already been subjected to scrutiny by DPAs, who, until now, leaned toward a permissive approach. 

Personalized advertising: Contractual necessity or consent?

Under the EU General Data Protection Regulation, personal data may only be processed if one of the lawful bases from Article 6 applies. They include, in particular, consent, contractual necessity and legitimate interests. When processing is necessary for the performance of a contract, according to Article 6(1)(b), that is the basis on which the controller should rely. You may think that if data processing, e.g., for targeting ads, is necessary to fund a free-of-charge service, it should count as contractual necessity. The authorities do not dispute that in principle, but there is a tendency to interpret contractual necessity very narrowly. Notably, in December 2022, the European Data Protection Board decided in its Facebook and Instagram decisions that Meta should not have relied on that ground for the personalization of advertising. And earlier this month, the EDPB decided Meta should also not rely on the legitimate interests basis.

The adoption of a narrow interpretation of contractual necessity created an interpretative puzzle. If we set aside the legitimate interests basis under Article 6(1)(f), in many commercial contexts we are only left with consent as an option, outlined in Article 6(1)(a). This is especially true where consent is required, not due to the GDPR but under national laws implementing the ePrivacy Directive (Directive 2002/58/EC); that is, for solutions like cookies or browser storage. Note, though, that these are not always needed for personalized advertising. The puzzle is how to deal with consent to processing needed to fund the provision of a service that does not fit the narrow interpretation of contractual necessity.

Consent, as we know from Articles 4(11) and 7(4), must be "freely given." In addition, Recital 42 states: "Consent should not be regarded as freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment." The EDPB gave self-contradictory guidance by first saying withdrawing consent should "not lead to any costs for the data subjects," but soon after adding that the GDPR "does not preclude all incentives" for consenting.

Despite some differences, at least DPAs in Austria, Denmark, France and Spain, and the Conference of the Independent DPAs of Germany generally acknowledge that paid alternatives to consent may be lawful. Notably, in a recent Grindr appeal, the Norwegian Privacy Board also explicitly allowed that possibility.

The CJEU and "necessity" to charge "an appropriate fee"

In its July 2023 Meta decision, the Court of Justice of the European Union weighed in, though in the context of third-party-collected data, saying if that kind of data processing by Meta does not fall under contractual necessity, then: 

"(...) those users must be free to refuse individually, in the context of the contractual process, to give their consent to particular data processing operations not necessary for the performance of the contract, without being obliged to refrain entirely from using the service offered by the online social network operator, which means that those users are to be offered, if necessary for an appropriate fee, an equivalent alternative not accompanied by such data processing operations."

Intentionally or not, the court highlighted the interpretative problem stemming from a narrow interpretation of contractual necessity. The court said, even if processing does not fall under contractual necessity, it may still be "necessary" to charge data subjects "an appropriate fee" if they refuse to consent. Disappointing some activists, the court did not endorse the EDPB's first comment that refusal to consent should not come with "any costs". 

Even though the court did not explain this further, we can speculate it was not willing to accept the view that all business models simply have to be adjusted to a maximally prohibitive interpretation of the GDPR. The court may have attempted to save the GDPR from a likely political backlash to an attempt to use it to deny Europeans a choice of free-of-charge services funded by personalized advertising. Perhaps the court also noted that other EU laws, e.g., the Digital Markets Act, rely on the GDPR's definition of consent, which gives an additional reason to be cautious in interpreting this concept in ways that are not in line with current expectations.

Remaining questions

Based on previous statements from DPAs, there are a number of questions that will likely be particularly important for future assessments of "pay or consent" implementations under the GDPR and ePrivacy rules. The following list may not be exhaustive but aims to identify the main issues.

How specific should the choice be? The extent to which service providers batch consent to processing for different purposes, especially if users are not able (in a "second step") to adjust consent in a more granular way, is likely to be questioned. This is a difficult issue because giving users full freedom to adjust their consent could also defeat the purpose of having a paid alternative. 

In a different kind of bundling, service providers may make the paid alternative to consent more attractive by adding incentives like access to additional content or the absence of ads (including nonpersonalized ads). On one hand, this means service providers incentivize users not to consent, making consent less attractive in comparison. This could be seen as reducing the pressure to consent and making the choice more likely to be freely given. On the other hand, a more attractive paid option could be more costly for the service provider and thus require a higher price.

What is an "appropriate" price? The pricing question is a potential landmine for DPAs, which are emphatically ill-suited to deal with it. Just to show one aspect of the complexity: setting the service's historical average revenue per user from personalized advertising as a benchmark may be misleading. Users are not identical. Wealthier, less price-sensitive users, who may be more likely to pay for an ad-free option, are also worth more to advertisers. Hence, the loss of income from advertising may be higher than just "old ARPU multiplied by the number of users on a no-ads tier," suggesting a need to charge the paying users more than historical ARPU merely to retain the same level of revenue. Crucially, the situation will likely be dynamic due to subscription "churn," or users canceling their subscriptions, and other market factors. The economic results of the "pay or consent" scheme may continue to change, and setting the price level will always involve business judgment based on predictions and intuition.
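To make that arithmetic concrete, the sketch below uses purely hypothetical numbers, not drawn from Meta or any publisher, to show how the break-even subscription price can exceed historical ARPU when the users most likely to pay are also the most valuable to advertisers.

```python
# Hypothetical numbers only: illustrates why historical ARPU can understate
# the subscription price needed to replace lost advertising revenue.

total_users = 1_000_000
monthly_ad_revenue = 5_000_000
historical_arpu = monthly_ad_revenue / total_users    # 5.00 per user per month

# Assume 10% of users subscribe, and they were worth 2.5x the average to advertisers.
subscribers = int(total_users * 0.10)
subscriber_arpu = 2.5 * historical_arpu               # 12.50 per subscriber

lost_ad_revenue = subscribers * subscriber_arpu       # revenue the paid tier must replace
break_even_price = lost_ad_revenue / subscribers      # 12.50, i.e. 2.5x historical ARPU

print(f"Historical ARPU: {historical_arpu:.2f}")
print(f"Break-even subscription price: {break_even_price:.2f}")
```

Under these illustrative assumptions, charging only the historical ARPU of 5.00 would leave roughly 7.50 per subscriber per month unrecovered, before even accounting for churn and other market factors.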

Some authorities may be tempted to approach the issue from the perspective of users' willingness to pay, but this also raises many issues. First, the idea of price regulation by privacy authorities, capping prices at a level defined by the authorities' idea of what is acceptable to a user, will likely face serious proportionality and competence scrutiny, including under Articles 16 and 52(1) of the Charter of Fundamental Rights. Second, taking users' willingness to pay as a benchmark implicitly assumes a legally protected entitlement to access the service for a price they like. In other words, this assumes users are entitled to specific private services, like social media services. This is not something that can simply be assumed; it would require a robust argument — and, arguably, would constitute a legal change that is appropriate only for the political legislative process.

Imbalance: Recital 43 of the GDPR explains consent may not be free when there is "a clear imbalance between the data subject and the controller." In the Meta decision, the CJEU admitted the possibility of such an imbalance between a business with a dominant position, as understood in competition law, and its customers. This, too, may be a difficult issue for DPAs to deal with, both for expertise and competence reasons. 

The scale of processing and impact on users: Distinct from market power or dominance, though sometimes conflated with it, are the issues of the scale of processing and its impact on users. An online service provider, e.g., a newspaper publisher, may have relatively little market power but may be using a personalized advertising framework, such as a real-time bidding scheme facilitated by third parties, that is very large in scale and with more potential for a negative impact on users than an advertising system internal to a large online platform. A large online platform may be able to offer personalized advertising to its business customers, while sharing little or no information about who the ads are shown to. Large platforms have economic incentives to keep user data securely within the platform's "walled garden," not sharing it with outsiders. Smaller publishers participate in open advertising schemes, where user data is shared more widely with advertisers and other participants. 

Given the integration of smaller publishers in such open advertising schemes, an attempt by DPAs to set a different standard for consent just for large platforms may fail as based on an arbitrary distinction. In other words, however attractive it may seem for the authorities to target Meta without targeting the more politically powerful legacy media, this may not be an option.

What's next?

We don't yet know the full text of the EDPB's most recent decision related to Meta's personalized advertising, but the available information suggests it did not address the question of a paid alternative to consent. Perhaps Ireland's Data Protection Commission, to whom the EDPB decision is addressed and who will accordingly publish their own Meta decision soon, will include some relevant remarks. However, it is also possible that we will need to await the conclusion of the reportedly ongoing investigations. 

EDPB Chair Anu Talus told Politico DPAs will investigate ad-free paid subscriptions offered as an alternative to consent. She even said the EDPB is looking at "a fundamental change in the structures of digital marketing." If she means a crackdown on free-of-charge services that cannot be funded without personalized advertising, then this may be hard to square with the approach taken by the CJEU in the Meta judgment.

From a longer-term perspective, it is worth noting that the EU Council's 2021 mandate for the ePrivacy legislative process includes an explicit recognition of paid alternatives to consent in Recital 20aaaa. However, that recognition is qualified by an analogous consideration of "imbalance" under the GDPR, so even if the text is adopted, it will not override all the debates that are likely to take place in the near future.

2023-11-20 11:50:42
Farewell to a trailblazer: Helen Dixon exits https://iapp.org/news/a/farewell-to-a-trailblazer-helen-dixon-exits https://iapp.org/news/a/farewell-to-a-trailblazer-helen-dixon-exits Helen Dixon was the world's first global privacy regulator. Her departure from the role of Ireland's Data Protection Commissioner, after an eventful 10-year term, marks the end of an era in technology regulation. It is an era marked by the arrival of the EU General Data Protection Regulation, its initial implementation — not just in Europe but also in the U.S. and the rest of the world — as well as its enforcement. Now a new era dawns, with policymakers' attention in Europe, the U.S. and China shifting to the regulation of artificial intelligence.

While she perhaps didn't "make EU regulation great again," as some flame-throwing advocates and politicians had hoped, Dixon was a paragon of judicious, balanced, disciplined and principled enforcement and regulation. She transformed the DPC from a small regional office to a Dublin-headquartered powerhouse with more than 220 expert staff, including some of the leading minds in privacy regulation anywhere in the world.

She stood up for fair and proportionate regulation as the way to ultimately protect privacy rights and the free flow of data in the EU. Despite being in the crosshairs of unrelenting public, political and industry pressure, she achieved a unique, unprecedented enforcement record, blazing a trail for global privacy enforcement agencies by bringing to heel not just enormously powerful multinational companies but also formidable government agencies.

How does a regulator from an island 250 miles off the coast of the continent, home to just 5 million people, become the linchpin of Europe's — and in many respects, the world's — privacy regime? Credit — or, depending on your point of view, debit — the GDPR's one-stop-shop mechanism.

Under it, non-European companies became subject to the primary jurisdiction of the regulator in the EU country where they have their main establishment. And a long list of technology giants, including Apple, Alphabet (Google), Meta (Facebook, Instagram and WhatsApp) and Microsoft, as well as HP, IBM, Intel, LinkedIn, Oracle, Qualcomm, Salesforce and many more, have made the Emerald Isle their European home.

To be clear, choosing Ireland had nothing to do with the country's data protection policies and everything to do with tax laws. But this effectively charged Dixon with responsibility not just for 500 million European consumers but also for the billions of consumers these tech companies have in the U.S. and the rest of the world.

It also placed Dixon in an untenable situation. While the U.S. projected its immense economic and technological power to the EU via these corporate establishments in Ireland, the EU projected its regulatory — and some would say moral — authority back to the U.S. via Ireland's DPC. In many cases, Dixon found herself between a rock and a hard place. American companies warned that aggressive enforcement of an often-vague legislative mandate would result in real, concrete economic harm. Whereas European bureaucrats, the advocacy community, and increasingly European media and politicians, complained the laws weren't implemented and enforced fast enough, harshly enough, and with sufficient vigor and zeal.

This tension came to a head with the unraveling of the EU-U.S. Safe Harbor arrangement and later the Privacy Shield. These developments laid bare the fact that EU policymakers have at best limited ability to affect U.S. surveillance reform, not to mention Washington's legislative agenda. As their fallback option, European advocates pushed to squeeze U.S. companies, for example by curtailing Meta's ability to function, which requires free flow of data across the Atlantic. They hoped by pressuring those companies, they could motivate U.S. policymakers to act.

In reality, alas, this was a one-dimensional chess game that repeatedly resulted in stalemate. Surveillance reform in the U.S. involves interests far stronger than even the mighty Silicon Valley lobby, not least those of national security agencies of European countries themselves. In the meantime, cutting off trans-Atlantic data flows and business practices would impose steep costs on European consumers, businesses, scientific research and economic growth. 

Emerging from this thicket of interests, Ireland's DPC compiled an unprecedented enforcement track record. Some of the major cases it resolved were the signature data transfer disputes, including, of course, the references of the so-called 'Schrems' cases to the Court of Justice of the EU; domestic action rightsizing the Irish public services card; and decisions concerning Facebook's targeted advertising, WhatsApp's transparency obligations and Meta's security breach, which affected the data of 500 million users.

By any measure, the case volume and fines processed by the DPC dwarfed those of other regulators in or out of the EU.

Cynics who have argued the DPC had to be forced into action by its associates on the European Data Protection Board failed to see the forest for the trees. In fact, the DPC led massively complicated investigations, including the grunt work of meticulous fact finding and legal analysis, which sometimes takes years. In these cases, the role of other data protection authorities was limited to raising their hands at the end of the process to argue "the fine should've been steeper."

So, please give credit where credit is due.    

Dixon's shortcomings seldom manifested in courts or the regulatory arena. It's in the court of public opinion that she became a lightning rod for criticism by advocates, politicians and fellow DPAs. Perhaps in 2018, when she effectively became the European — or even global — privacy commissioner, she should have launched a comprehensive communications strategy in 24 European languages with a focus on Brussels, which is home to the European Commission, the EDPB, advocacy organizations and media outlets. At the time, though, the DPC was a small domestic agency. Over the years, Dixon enhanced the DPC's public footprint. She raised public awareness of data protection and the GDPR by regularly speaking at conferences and giving press interviews, including profiles by 60 Minutes and The New York Times.

In many ways, Dixon's challenges were baked into GDPR itself, with its soft legal obligations and byzantine mechanism for resolving disputes between regulators. As we enter a new era of regulation affecting AI, a technology that could profoundly impact the future of humanity, we should follow the trail Dixon set for us and learn from her experiences as the first and strongest global privacy regulator.

2023-11-15 12:56:24
Takeaways from the IAPP AI Governance Global Conference https://iapp.org/news/a/takeaways-from-the-iapp-ai-governance-global-conference https://iapp.org/news/a/takeaways-from-the-iapp-ai-governance-global-conference If you were unable to attend the IAPP's inaugural AI Governance Global Conference 2023 in Boston, we have you covered. We attended and summarized several key themes from the event for you.

Artificial intelligence governance programs have some key elements

AI governance programs at many companies consist primarily of some or all of the following components. 

  • A cross-functional stakeholder committee for setting risk tolerance, reviewing AI use cases, and/or developing protocols and policies. 
  • Guiding AI principles, policies and guardrails that provide the company with a framework for buying, developing, integrating, and using AI both internally and externally. 
  • AI impact assessments to document and understand risks, mitigations and expected outcomes from a cross-functional perspective. 
  • Internal processes or third-party tools for testing the fairness of AI uses (see the sketch after this list).
  • Training for stakeholders involved in AI development, procurement and use cases.
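As one illustration of what lightweight fairness testing can look like in practice, the sketch below computes per-group selection rates and a disparate impact ratio for a hypothetical binary approve/deny model. The groups, decisions and threshold are invented for illustration; production programs typically rely on established toolkits and legally informed metrics.

```python
# Minimal, illustrative fairness check for a hypothetical approve/deny model.
# Group labels and decisions are invented; real testing should use vetted tooling.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs; returns approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(decisions)
print(rates)                          # group_a ≈ 0.67, group_b ≈ 0.33
print(disparate_impact_ratio(rates))  # 0.5, below the four-fifths (0.8) rule of thumb
```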

Privacy teams play an important role

Privacy teams are playing an important role in AI governance because they understand how to assess risks and apply mitigating controls. Privacy teams also have existing process development and assessment protocols that can be leveraged and customized for AI governance. However, privacy teams are not necessarily owning AI governance. At many companies, business stakeholders are stepping up to own or co-own AI governance, especially at companies where AI is playing an important role in the company’s products and service offerings. Other key stakeholders involved in AI governance include security, legal (including IP and litigation), HR, procurement, data science, technology, product, compliance and risk.

Resources are a challenge

Many companies are struggling with resources for AI governance, especially when the issues are viewed solely from a compliance perspective. Some companies are having success finding resources by working with business stakeholders to understand the internal and external opportunities that AI governance programs can help enable. Privacy teams are often working with legal and business stakeholders to appropriately calibrate the risks AI can present, and opportunities it can enable, so that AI governance programs get the necessary business buy-in and can effectively manage risk and enable innovation.

Leverage what you have

AI governance programs do not have to be robust to get started. Many companies started by leveraging processes and policies they already had in place and modifying them to take into account AI risks and opportunities. For example, companies with existing frameworks for managing vendor risk reviewed and updated them to address AI. Data classification policies were also helpful tools for determining what data is appropriate to input into third-party AI applications, especially when they are accompanied by AI-specific guardrails.

You know how to do this

Developing an AI governance approach is manageable, and if you are a privacy professional, you already have many of the key skills. For example, when it comes to assessing AI uses, you can adapt the skills you have learned from assessing privacy risks and apply them in this new context. Identify the risks (if any), pick mitigations (if needed), define expected operation, fairness and outcomes, test before launch (if the consequences of getting it wrong could cause harm), monitor the AI use once deployed for vulnerabilities and proper operation (if called for), and refine it to achieve objectives. Work cross-functionally to identify who will be responsible for each of these tasks. And, similar to security incident reporting, have a clear path for both internal and external parties to contact the company with possible concerns or issues related to AI use, and policies for how these reports are addressed.

Use an appropriate framework

Pick and adapt, or draft, a framework for AI governance. There are various frameworks, but there was a lot of discussion about the NIST Artificial Intelligence Risk Management Framework. Do not view frameworks as one-size-fits-all; rather, pick and adapt, or draft one, based on how your business operates.

Ignoring or banning AI is not a solid strategy

Do not try to ban AI or ignore the opportunities it may present because of the risks. Ignoring important business opportunities, such as gains in efficiency or innovation, may create an even bigger risk for your business. Work to understand the opportunities for the company, both internally and externally, and enable them with risk assessment and mitigation practices tailored to the business's risk appetite.

Regulators are paying attention

On the global stage, a variety of regulators are focused on AI, including privacy and data protection regulators. For many, holding companies accountable for AI uses whose risks have not been appropriately assessed and mitigated is a priority, especially where this results in harm to people. At the same time, there is no consensus among regulators about how AI risks should be assessed or mitigated.

Disgorgement is a threat

In the U.S., the Federal Trade Commission has ordered disgorgement of data and AI models in consent decrees resulting from investigations and enforcement actions. This type of remedy may be one that U.S. regulators increasingly seek when models are developed in ways that they view as violating the law.

You are not alone 

Benchmark with peers at other organizations. Like governments that are collaborating to address AI principles and codes of conduct, many companies are collaborating and benchmarking to set up their AI governance approaches.

Consult available resources

Many organizations are sharing resources about their approaches to aspects of AI governance. In addition to the NIST framework, look for and consider these resources as you help formulate your company's approach. Multinational principles and codes like the OECD Guiding Principles for Organizations Developing Advanced AI Systems and OECD International Code of Conduct for Organizations Developing Advanced AI Systems show where there is an emerging consensus from regulators. Data protection authorities like the U.K. Information Commissioner's Office and France's Commission nationale de l'informatique et des libertés have issued guidance and resources. Civil society organizations like the Future of Privacy Forum have resources like Best Practices for AI and Workplace Assessment Technologies and the Generative AI for Organizational Use: Internal Policy Checklist. Companies like Microsoft are also sharing resources like its Responsible AI Principles and Approach. The IAPP is also compiling and sharing resources on AI governance.

2023-11-15 11:58:08
UK First-tier Tribunal overturns ICO enforcement action against Clearview AI https://iapp.org/news/a/uk-first-tier-tribunal-overturns-ico-enforcement-action-against-clearview-ai https://iapp.org/news/a/uk-first-tier-tribunal-overturns-ico-enforcement-action-against-clearview-ai In October, the U.K.'s First-tier Tribunal overturned the Information Commissioner's Office May 2022 fine and enforcement notice issued against Clearview AI. Clearview AI has no presence in the U.K., but its database includes images of individuals in the country scraped from public sites.

The ICO issued the fine on the basis that Clearview AI was processing personal data related to the monitoring of the behavior of individuals in the U.K., which triggered the extraterritorial application of U.K. data protection law. The FTT concluded the processing did not itself amount to monitoring but that U.K. data protection law could (in principle) apply to this processing because it was "related to" monitoring carried out by Clearview AI's clients. However, all of Clearview AI's clients were foreign government agencies carrying out criminal law enforcement and national security functions, with no U.K. or European Economic Area clients.

The FTT held that U.K. data protection law could not have extraterritorial effect in this specific situation. While the provisions on processing in connection with acts of foreign governments may be of relevance only to some readers, the conclusions on the breadth of the extraterritorial scope of data protection laws will be of wider interest, particularly as the U.K. General Data Protection Regulation remains identical to the EU GDPR on this point.

Recap

Clearview AI is incorporated in Delaware, U.S., and does not have an establishment in the EU or U.K. It offers clients a service in which they can upload a photo, which is matched against a database containing billions of photos obtained by scraping publicly available websites with automated programs.

Depending on the source of the image, additional information will also be collected by the company's scrapers as metadata, such as a link to the associated social media profile, HTML "hover text" associated with that image and a static URL.

In creating the database, Clearview AI created a set of vectors for each facial image using its machine learning facial recognition algorithm and stored these vectors in a database. If faces are similar, the vectors are stored closer together within the vector space, creating clusters. This clustering process is referred to as "indexing" in the decision. Given the vast size of the database, the tribunal found it reasonable to infer that it holds images of U.K. residents as well as images taken within the country.
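For readers unfamiliar with vector indexing, the short sketch below illustrates the general idea: embeddings that are close together in vector space score highest on a similarity search. It uses random stand-in vectors and a plain cosine-similarity lookup; it is not Clearview AI's proprietary system.

```python
# Generic illustration of similarity search over face-embedding vectors.
# The vectors are random stand-ins, not output from any real face model.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are 128-dimensional face embeddings already stored in a database.
database = rng.normal(size=(1000, 128))
database /= np.linalg.norm(database, axis=1, keepdims=True)  # unit-normalize

# A "probe" embedding derived from an uploaded photo, close to an existing entry.
probe = database[42] + 0.05 * rng.normal(size=128)
probe /= np.linalg.norm(probe)

# Cosine similarity against every stored vector; near-duplicates cluster at the top.
similarities = database @ probe
top_matches = np.argsort(similarities)[::-1][:5]
print("closest stored vectors:", top_matches, similarities[top_matches].round(3))
```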

In May 2020, a decision was made to stop commercial clients from using Clearview AI. The service is now available only to non-U.K./EU criminal law enforcement and national security agencies (and their contractors) to aid national security and criminal law enforcement.

In July 2020, the ICO began a joint investigation into the company with the Office of the Australian Information Commissioner, and in May 2022, the ICO fined Clearview AI GBP7.5 million and issued an enforcement notice.

What was in issue before the FTT?

The EU and U.K. GDPR both apply, on an extraterritorial basis, when a controller or processor outside the U.K. processes personal data relating to individuals in the U.K., where those processing activities relate to monitoring the behavior of the individuals within the country. 

Meaning of 'relating to' the 'monitoring' of 'behavior'

The FTT held that the processing Clearview AI carried out was "related to" the monitoring of behavior that its clients carried out, as there was a very close connection between the creation, maintenance and operation of the database and the clients' monitoring. This gives a very broad territorial scope to the U.K. GDPR: it can apply to controllers or processors outside the U.K. who do not themselves monitor the behavior of individuals in the U.K. if their processing is "related to" monitoring of behavior carried out by others. The FTT noted "there must be a relationship between the processing of the individual's personal data and the monitoring of behaviour that is in issue."

The FTT said that behavior "indicates something more than simply being alive;" it reveals that a person is doing something, as opposed to language that merely describes a person's characteristics. The FTT gave examples: where someone is, what they are doing, or what they are holding or carrying. Clearview AI's images showed its clients information such as relationship status and occupation or pastimes — i.e., "behavior."

The FTT accepted that Clearview AI's creation of the vectors and clusters of images did not constitute monitoring. However, the FTT found Clearview AI's clients would be able to use the company's images to establish where a person was at a particular time, to watch a person over time by submitting images of the same person at different times, and to combine this with other surveillance they may be carrying out. The FTT concluded this amounted to monitoring.

The FTT drew attention to the use of the word "tracked" in Recital 24. In the FTT's view, the verb "to track" can bear two meanings: one being pursuit of a person over time, the other being establishing a person's position at a fixed point in time. This interpretation seems incorrect to us. For example, the Oxford English Dictionary's definition of "track" as a verb only includes examples of usage that show tracking over a period of time.

Processing of personal data

The FTT agreed that the images and additional information in Clearview AI's database — such as name, relationship status, where the person is based, occupation or pastimes — constitute "personal data."

The FTT also confirmed that Clearview AI's activities amounted to "processing" — for example, scraping the images from the internet (collection), holding/storing the images and creating vectors from the stored images.

Scope of the EU GDPR and UK GDPR

The ICO took action in relation to some processing by Clearview AI that preceded Brexit and some that took place post-Brexit. Article 2(2)(a) of the GDPR makes clear the regulation does not apply to processing carried out in the course of an activity that falls outside the scope of EU law. In oral arguments, the ICO accepted that this provision would cover processing carried out by overseas governments.

The FTT concluded Clearview AI's processing related exclusively to acts carried out by or for overseas governments, such that the GDPR was not applicable. Post-Brexit, the U.K. GDPR provides that its extraterritorial provisions only apply to processing which, pre-Brexit, would have been subject to the EU GDPR. Accordingly, the processing post-Brexit was also out of scope. As a result, the ICO had no jurisdiction to issue the monetary penalty notice or enforcement notice.

The decision, therefore, seems to turn on the parties' acceptance that acts of foreign governments fall beyond the scope of GDPR. Public international law is complex, and the principle is more nuanced than the tribunal suggested. It is not clear to what extent this was argued before the FTT.

Even if this principle is correct regarding acts taken by or on behalf of foreign governments, it is not clear whether it should extend to the actions of a commercial organization, carried out speculatively, with the intent to develop a business providing services to foreign governments. The Court of Justice of the European Union's 30 May 2006 decision invalidating the EU-U.S. Passenger Name Records Agreement (joined cases C-317/04 and C-318/04) concluded the European Commission's adequacy decision was invalid because it concerned the processing of personal data outside the scope of Union law — namely, the transfer of PNR data to U.S. authorities by airlines for security purposes, in line with U.S. statutes (para. 59).

The CJEU did note, however, that the initial collection of data by the airlines to sell tickets would be subject to Union law (para.57). This decision is not identical to the Clearview AI situation: the airlines had their own independent purposes for processing the data prior to transfer. As Clearview AI has no commercial purpose aside from the use by overseas authorities, the case could possibly offer some justification for the approach taken by the FTT. 

Clearview as a controller 

As a final point of interest, the FTT found there were two activities of processing — creation of the database and matching with client images. The FTT then held that Clearview AI was a controller for the first activity and a joint controller with its clients for the second activity. The FTT further held that Clearview AI was also a processor for both activities. The FTT stated that Clearview AI determines the purpose of processing and that both the company and its clients determine the means of processing by uploading images. The FTT stated these conclusions without presenting any legal analysis of the terms controller, joint controller and processor.

The conclusion that Clearview AI is both controller and processor for the same processing activities is inconsistent with guidance from both the European Data Protection Board and the ICO. The conclusion that Clearview AI and its clients can be joint controllers where only the company determines the purpose of processing and where a decision to use technology is treated as a determination of means of processing is inconsistent with CJEU case law and the guidance mentioned above. This suggests a lack of analysis by the FTT, which may affect the weight that should be given to the other, more central, aspects of the decision. 

It is worth noting that FTT decisions do not constitute binding authority; therefore, while the decision is clearly of wider interest, any future tribunal would not be bound to follow it. Whether the ICO seeks to pursue any further avenue of appeal against the decision remains to be seen. However, given that Clearview AI no longer operates or offers its services in the U.K., and this case dates back some time to the tenure of the previous commissioner, the ICO may consider there is little merit in pursuing the matter further.

2023-11-14 15:00:33
Children's privacy laws and freedom of expression: Lessons from the UK Age-Appropriate Design Code https://iapp.org/news/a/childrens-privacy-laws-and-freedom-of-expression-lessons-from-the-uk-age-appropriate-design-code https://iapp.org/news/a/childrens-privacy-laws-and-freedom-of-expression-lessons-from-the-uk-age-appropriate-design-code The pace of digital change accelerates at a staggering rate. Five years ago, children's online privacy was not yet a mainstream issue among regulators or policy makers. And now, even the wider world has started to understand the real-life consequences of an internet that was not designed with children in mind.

Young people's personal data has been drawn into the attention economy, via the business models of online platforms. Those growing up today face dangers that did not exist when they were born. We need an internet that allows children to safely develop, learn, explore and play in a manner appropriate to their age. The importance of tackling this challenge grows ever stronger, while artificial intelligence becomes central in shaping the online experience.

The U.K. was first out of the blocks in addressing this challenge with the statutory Age-Appropriate Design Code, which came into full effect in 2021. It was developed by the U.K. data protection regulator, the Information Commissioner's Office.

The U.K.'s breakthrough was followed by California adopting the California Age-Appropriate Design Code Act in 2022. The state's legislators recognized the U.K.'s leadership and expertise in implementation. The legislative findings of the CAADCA stated: "It is the intent of the Legislature that businesses covered by the California Age-Appropriate Design Code may look to guidance and innovation in response to the Age-Appropriate Design Code established in the United Kingdom when developing online services, products, or features likely to be accessed by children."

Today, other countries are taking inspiration from the U.K.'s model and engaging in the global conversation about how to best protect children's digital privacy.

But, in September 2023, progress received a significant and concerning setback. The U.S. District Court in California granted an injunction against the CAADCA on First Amendment grounds in NetChoice v. Bonta. The judgment was wide-sweeping, finding constitutional problems with provisions ranging from age assurance through to the requirement for impact assessments. Even while putting these impediments in place, the issuing judge accepted "the State's assertion of a concrete harm to children's well-being, i.e., the use of profiling to advertise harmful content to children."

California took inspiration from the original U.K. Age-Appropriate Design Code, so perhaps the state's legal experts can also gain from our first-hand experience in developing and implementing it. As U.K. Information Commissioner and Deputy Commissioner at that time, we had direct involvement and oversight of this work at the ICO. After seeing the U.K.'s AADC come into effect, we feel that age-appropriate design codes are compatible with freedom of expression. We also want to share why such protections will make the digital realm safe for children. Regulatory design and thoughtful implementation each play a key role in ensuring the right balance is struck between free expression and children's privacy.

In the U.K., we needed to address significant concerns raised by the media about freedom of expression. We felt surprised by the media's reaction because we had expected that resistance would mostly come from the major technology companies. But surprise always attends innovative concepts. So, we listened to the concerns and made changes while upholding the essence of the code.

Key components of the UK AADC

The requirement for the ICO to prepare the AADC was contained in Section 123 of the Data Protection Act 2018. That piece of legislation completed the U.K.'s implementation of the EU General Data Protection Regulation, where the GDPR allowed for further discretion. The GDPR was later copied over into U.K. law and became the U.K. GDPR.

The requirement to draft the code was added to the Data Protection Act 2018 during its passage through Parliament by Baroness Beeban Kidron: a member of the House of Lords, advocate for children's digital rights, and chair of the NGO 5Rights Foundation, which gathered a powerful bipartisan grouping for her amendment. After a great deal of input and debate from both houses of Parliament, the U.K. government supported the amendment. The U.K. minister for data protection, Margot James, became a vocal supporter of the code.

What were the key requirements in Section 123? The ICO was to draft a code which recognized that children have different needs at different ages. Standards in the code had to weigh the best interests of children, considering the U.K.'s obligations under the United Nations Convention on the Rights of the Child. And the AADC would apply to any online service that was "likely to be accessed by children." This last, broad-scope provision was vitally important. It recognized the challenges caused by laws that focused only on services specifically targeted or explicitly directed at children. The "likely to be accessed" test would ensure that the AADC's standards applied where children went in the reality of daily online life. Crucially, under the law, regulated companies had a duty to provide privacy to children by design and default. That obligation recognized that parents and children were outmatched by complex and innovative technological systems. The AADC also allowed for a great deal of creativity from companies about how to provide privacy.

The code is statutory. This means that it was laid in the U.K. Parliament by a negative-resolution procedure. Such status also means the ICO, along with courts and tribunals in the U.K., must consider the code’s standards when enforcing the GDPR in relation to children and online services.

The AADC itself contains 15 standards. They are complementary and seek to build up a holistic approach to protecting children's privacy online. The code's standards are proportionate and risk-based, ensuring organizations take responsibility for implementing the standards in the context of their unique services. While the code is detailed in its guidance, it equally allows online service providers choice in how to develop practical solutions.

Explaining its standards one-by-one goes beyond the scope of this article, but a few are important to note. The first standard focuses on the best interests of the child, which must affect the design and development of online services. While the phrase "best interests" may seem nebulous on paper, in reality, a decision taken with children in mind looks quite different from one that only considers commercial outcomes. Next comes the second standard, which concerns data protection impact assessments. This pivotal process enables companies covered by the code to understand the risks and design their mitigations. Further, AADC standards mandate data protection by design and use of default settings.

Importantly, the AADC promotes age assurance, not age verification. It ensures that online services follow a risk-based approach to recognizing the age of individual users and requires that they effectively apply the code's standards to children and young people. Services can either establish age with a level of certainty commensurate with the risks or apply the standards to all users. The risk-based approach therefore does not require age verification for all online services. In low-risk scenarios, the age assurance provided by simple self-declaration can be appropriate. Alternatively, companies can apply all the standards to how all users are treated.

The AADC and freedom of expression

Freedom of expression is of crucial importance to U.K. society and enshrined in the U.K. Human Rights Act of 1998. As a public body, the ICO must account for the Human Rights Act in all decisions and dealings. Like the U.S., the U.K. has a long history of jurisprudence related to free expression.

The process for conducting a data protection impact assessment is a flexible one. Assessments can accommodate other rights in the trade-offs that must be made, including with freedom of expression. The AADC explains that when children's interests and commercial value are finely balanced, concern for the child should prevail. But that does not mean freedom of expression cannot be considered, as well. An impact assessment can consider the benefits of using personal data, guided by a risk-based approach toward proportionality.

The U.K. AADC does not require content moderation. The only focus on content relates to profiling using children's personal data. The code restricts how personal data can be used to profile and recommend content to children. It also ensures that recommendations are not detrimental to their health and well-being. The AADC specifically states "the ICO does not regulate content." It also makes clear that the ICO uses its regulatory discretion to defer to other established guidance and expertise when considering harms to well-being linked to profiling. Without objective evidence, the ICO cannot act and the AADC does not enable the office to become a content regulator.

As a public body subject to the Human Rights Act, the ICO is also bound to consider the relevance of freedom of expression when taking any AADC-related enforcement action under GDPR. The ICO's wider Regulatory Action Policy further ensures the enforcement approach is proportionate to the risk of harm. Therefore, the code is not an absolute set of standards and allows for a measure of regulatory discretion.

ICO's strategy behind the AADC

The AADC came out of significant consultation and engagement. During development and implementation, we issued an open call for evidence before drafting the code, released a draft of the AADC for open consultation, held 57 roundtable meetings with stakeholders, and socialized and explained the code to major technology firms in Silicon Valley.

Engaging with stakeholders marked the AADC's development. When media raised concerns about the code's possible impacts on access to their digital content and advertising models, we listened carefully. The news media also supported the AADC's aims. One newspaper, The Daily Telegraph, regularly set out its support as part of a wider campaign about children's online safety.

We held many meetings with the U.K. News Media Association. After hearing their concerns, we gave reassurances that the expectations around age assurance would be risk-based and proportionate. Given the special importance of protecting free expression, we developed a special set of Media Frequently Asked Questions about the AADC. We also ensured that the FAQs would have additional status. The FAQs were laid in Parliament alongside the AADC itself, as part of the explanatory memorandum.

Our FAQs sought to explain how the media could implement the code's standards. The document recognized the importance of free expression, analyzed the likely risk profile of the media sector, and suggested that formal age-verification should not be necessary.

Our FAQs also explained that the U.K.'s ePrivacy directive already set rules around consent for cookies, which required a default-setting. We also referenced the ICO's guidance on data protection and its journalism exemption under the GDPR. That carve out recognizes a broad definition of freedom of expression, covering citizen journalism and not just mainstream media.

Since the code came into force in 2021, the ICO has continued to provide guidance with additional resources and supplementary guidance. Those resources address data protection impact assessment requirements for different sectors as well as what child-centered digital design means in practice.

International trends in children's privacy and safety

The safety-by-design principles in the U.K.'s AADC align with international instruments, such as the Organisation for Economic Co-operation and Development's "Recommendation of the Council on Children in the Digital Environment." Recall that the U.S. supported its adoption at the OECD Council. Alongside that recommendation, the OECD Guidelines for Digital Service Providers have been developed in recognition of the essential role developers play in providing a safe and beneficial digital environment for children.

The guidelines set out the following:

  • Take a child safety by design approach when designing or delivering services.
  • Ensure effective information provision and transparency through clear, plain and age-appropriate language.
  • Establish safeguards and take precautions regarding children's privacy, data protection and the commercial use of such data.
  • Demonstrate governance and accountability.

This international direction is complemented by the work the IEEE Standards Association is doing to develop 2089-2021 – Standard for Age Appropriate Digital Services Framework. This standard will encourage global adoption of privacy-engineering expectations, which in turn will accelerate practical implementation.

Other jurisdictions are planning to introduce age-appropriate design codes. Australia recently announced privacy law reforms that will include a code similar to AADC.

Lessons learned and the way forward

After two full years, the initially controversial AADC has not been legally challenged in the U.K. We believe the ICO's policy of engagement, support and guidance created reassurance for companies as they made investments in children's privacy. The AADC is an agile, thoughtful piece of regulation that will deliver effective protections for young people online, who deserve a digital world where they can develop, learn and play.  

Regulators must address the public dangers and reassure those who fear unintended consequences. Both messages will advance the implementation of age-appropriate design codes in practice. The California AADC envisions establishing a working group charged with advising the legislature on practical implementation, mirroring what the ICO has done in the U.K.

The concern that risk assessments, such as DPIAs, constrain freedom of expression should be challenged. Assessments of risk are fundamental to driving default protections for children. The October 2023 preliminary enforcement action by the ICO against a large social media platform highlighted the importance of risk assessments. The company allegedly rolled out generative AI chatbots for kids to use without conducting an impact assessment. Such enforcement actions do not prescribe solutions or constrain the free choice of technology companies. Rather, they place responsibility on companies to be accountable about how they prevent harms that stem from their use of children's personal data.

Recall that privacy impact assessments originated in North America, in the early 2000s. Subsequently, the EU imported the practice into the GDPR. Today, U.S. companies conduct impact assessments as a central component of their corporate governance, week-in and week-out. Mitigating against risk is a central feature of President Joe Biden's October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Privacy and children are both referenced in the order, and we struggle to envision its implementation without impact assessments as a regulatory tool.

Privacy-engineering approaches can help provide technical solutions that mitigate the risks for children. In doing so, they need not undermine the fundamental role the internet plays in helping people find and receive information, which relies on free expression, including the airing of children's views. We lose when we see the trade-offs as a zero-sum game.

The U.K. AADC has already made a fundamental difference to how platforms protect children's privacy. While more remains to be done, the following changes were introduced in light of the AADC:

  • Facebook and Instagram limited targeting based on age, gender and location for under 18-year-olds.
  • Instagram launched parental supervision tools, along with new features like "Take A Break" to help children manage their time on the app.
  • YouTube turned off autoplay by default and turned on a "Take A Break" feature and bedtime reminders by default for Google Accounts owned by those under 18.
  • TikTok changed its default privacy setting to private for all registered users ages 13-15. With a private TikTok account, only people the user approves can view their videos.

There is no evidence these changes have undermined freedom of expression. They have, however, made a meaningful difference to children's privacy online. In fact, a major technology company has shared with us that the AADC had a greater impact than GDPR enforcement actions, highlighting the code's importance. Furthermore, many U.S. companies are already globally implementing standards in the code, because following its standards increases trust and bolsters confidence in their services. U.S. children and families are benefitting from these protections. It would seem appropriate if they had a voice in the ongoing development and enforcement of an age-appropriate internet.

Freedom of expression is not a trivial matter. Nor is privacy, particularly as it affects children. Along its regulatory journey in the U.K., the AADC established beyond all doubt the close-knit relationship between children's privacy and their safety. Policy makers around the world have gone on record to testify to that link. In the U.S., there is so far no clear explanation of why legislation that protects children's privacy must be incompatible with First Amendment-protected speech. We hope that our lessons from the U.K. can show a path forward for achieving both ends.

2023-11-13 11:46:25
Exploring challenges with law enforcement access to data https://iapp.org/news/a/exploring-challenges-with-law-enforcement-access-to-data https://iapp.org/news/a/exploring-challenges-with-law-enforcement-access-to-data Most people would likely support providing law enforcement with the digital evidence necessary to investigate and prosecute heinous crimes such as the dissemination of child sexual abuse material. Such evidence is usually obtainable through traditional legal processes, but barriers emerge when it is located abroad, especially when a non-U.S. company holds it. Despite the resolution allowing personal data to flow between the European Union and the U.S., there is often no expeditious way to obtain data for law enforcement purposes.

Leaders from around the world convened in the U.K. this fall to explore this precise topic and broader issues like the impact of emerging technology like artificial intelligence on law enforcement. The central theme was balancing law enforcement's need for electronic evidence with privacy considerations.

Attendees included industry executives, law enforcement leaders, diplomats and elected officials, and privacy practitioners and scholars. I was particularly interested in attending, given our work on privacy and its nexus with law enforcement. The Ditchley Foundation convened this program as part of its ongoing Data in Democracies program.

Background and current landscape

If data were stored solely in the U.S. by U.S. companies, law enforcement access would be less of a hurdle. But the interconnected nature of the internet and data storage means data is not always in the U.S. There therefore needs to be a way for U.S. law enforcement to gain access to digital evidence stored abroad and for foreign law enforcement to make legitimate requests for data in the U.S. Concerns often arise around the need to balance individual privacy with ensuring legitimate uses and purposes by law enforcement.

A Mutual Legal Assistance Treaty is typically used if data is held in one country and another seeks it for law enforcement purposes. That process can take months or longer, which means the evidence or subject might be lost.

Eventually, the U.S. Clarifying Lawful Overseas Use of Data Act was passed to supplement the MLAT process. It permits select countries to enter into agreements with the U.S. to use their own legal authorities to access electronic evidence, assuming they have adequate substantive and procedural laws. Requests for data not covered by an agreement must still go through an MLAT. This is often critical for foreign law enforcement since so much data is held within the U.S. Currently, there are CLOUD Act agreements in place with Australia and the U.K.

However, other developments are relevant too. For example, the OECD Declaration on Government Access to Personal Data Held by Private Sector Entities from December 2022 serves as a political commitment of 38 OECD countries and the EU to common approaches to safeguarding privacy when accessing data for law enforcement purposes. Also, the e-Evidence regulation and directive on access to electronic evidence applies within the European Union. In addition, the Council of Europe Convention on Cybercrime (commonly called the Budapest Convention) and its Second Additional Protocol provided new pathways to obtain select data between signatories.

Why this matters

A key question is whether there will be additional CLOUD Act agreements, specifically, whether there will be more bilateral agreements and/or multilateral agreements. The U.S. and EU began negotiations around an EU-U.S. agreement again in March 2023, but there is still a long path ahead to make that a reality.

At the Ditchley convening, there was a strong sentiment that it is important to enable law enforcement to have a manner to obtain data expeditiously. While the MLAT process exists, multiple attendees noted that it is time-consuming and there is a risk for evidence to be destroyed or lost. This risks victims not being aided or future crimes being committed. Examples of the CLOUD Act being used so far between the U.K. and the U.S. noted at the convening included saving hundreds of children from abuse and prosecuting multiple arms dealers.

In addition, multiple individuals noted companies would benefit from additional agreements. Currently, there is uncertainty about how to process data access requests in the context of the EU General Data Protection Regulation, and about what to do when a company is subject to multiple sets of laws. For example, there is not always a clear category to describe the legal basis for processing, and voluntary cooperation with law enforcement has come under scrutiny. Companies are also receiving significant volumes of access requests from around the world, which results in inconsistent requests and burdens in the absence of a more standardized process.

Looking ahead and challenges

The U.K.-U.S. agreement was progress, but this still leaves out most countries. At the convening, there was a lengthy discussion on sources of agreement, disagreement, and best practices moving forward.

Several key themes emerged. A recurring one was the need for transparency. While law enforcement cannot make all specifics public because of the sensitivity of investigations, it was encouraged for past success stories and/or general examples to be shared so the public and privacy advocates are better informed.

Relatedly, attendees noted the theme of trust between law enforcement and the privacy community. More dialogue between the two groups — exploring these topics and what is and is not done with the data — could advance that trust. Related to trust is ensuring data requested by law enforcement is for legitimate purposes rather than for targeting specific individuals or for political ends. Lastly, it was highlighted that additional agreements should be considered a priority by the U.S. and countries worldwide.

Overall, I found this a productive and beneficial convening, but there is still work to be done. Like most policy matters, a balance is essential and having conversations is critical even if there is not always agreement. I commend Ditchley for its important work on this topic and the convening.

2023-11-06 12:30:36
Study: Younger consumers are more active on privacy https://iapp.org/news/a/study-younger-consumers-are-more-active-on-privacy https://iapp.org/news/a/study-younger-consumers-are-more-active-on-privacy Younger consumers — especially those in their 20s and 30s — are acting in greater numbers to protect their privacy, compared with older consumers.

Over 40% of consumers aged 18-34 have exercised their data subject access rights, enabling them to find out what personal data companies have about them. But only 15% of consumers aged 55-64, and 6% of consumers aged 75 and older, have done so. More younger consumers have also switched providers over privacy practices and requested changes or deletions to their data. Interestingly, they also feel more confident that they can adequately protect their personal data.

These are among the findings in the Cisco 2023 Consumer Privacy Survey, which draws on anonymous responses from 2,600 adults in 12 countries. 

Government's role in privacy

Consumers want government to take the lead in protecting privacy, and respondents overwhelmingly indicate support of their country's privacy laws. Sixty-six percent of survey respondents said privacy laws have had a positive impact, compared with only 4% who said they’ve had a negative impact.

Privacy law awareness

Awareness of privacy law is a critical enabler of consumer confidence. Among consumers who are not aware of their country's privacy laws, 40% felt confident they could protect their personal data. Among consumers who are aware of the privacy laws, it's 74%, a significant difference.

AI value, risk

Consumers see value in artificial intelligence and over half said they are willing to share their anonymized data to make AI products better. At the same time, they are concerned about how AI is being used today and 60% indicated they have already lost trust in organizations over their AI use.

A relatively small segment — 12% — of consumers are using generative AI tools regularly. These consumers are generally aware of the privacy risk, that is, that the data could be shared, but only 50% say they are refraining from entering personal or confidential information into generative AI.

Recommendations for organizations

  • Educate consumers about privacy laws and their rights. Individuals who know about these protections are more likely to trust organizations with their personal data and have confidence their data is protected.
  • Adopt measures for responsible data use. Consumers are very concerned about organizations' use of their personal data in AI. Organizations need to build and maintain consumer confidence by implementing a governance framework centered on respecting the individuals’ privacy, increasing transparency on how data is used, and working to eliminate bias in automated decision-making.
  • Enact appropriate controls on the use of generative AI. Regular generative AI users are aware of the risks that the data they enter could be shared, but only half are refraining from entering personal or confidential information. Organizations need to establish controls to help protect this information.

Consumers are demonstrating that they are willing to act to protect their data, and privacy remains a critical element of their confidence and trust. Especially as the technology unlocks new capabilities, it is incumbent on governments, organizations and individuals to each take action to protect our personal data.

2023-10-20 11:58:54
Saudi Arabia publishes final Personal Data Protection Law https://iapp.org/news/a/saudi-arabia-publishes-final-personal-data-protection-law https://iapp.org/news/a/saudi-arabia-publishes-final-personal-data-protection-law On 7 Sept., the Saudi Data and Artificial Intelligence Authority formally released the Kingdom of Saudi Arabia Personal Data Protection Law. Enforcement of the law will begin 14 Sept. 2024, which gives organizations one year to prepare for compliance.

This is the first privacy law in the KSA. It aligns the kingdom with international privacy laws, in particular the EU General Data Protection Regulation, with some localization that reflects Middle Eastern culture, and it adopts the latest guidelines and mechanisms for proper implementation of the law through its published regulations.

Personal data cross-border transfer regulation

Although the final wording of Article 29 on cross-border data transfers in the final KSA PDPL is complicated, the Cross-Border Data Transfer Regulation outlines the corresponding mandates in a simple, organized manner in line with the GDPR.

Transfers of data are allowed on three grounds:

  1. Adequacy decisions for countries, sectors and international organizations (Articles 3 and 4), which are determined and issued by the competent authority and concerned entities. The regulation explains the adequacy process and highlights the assessment criteria and the frequency of, or mandates for, revision.
  2. Appropriate safeguards (Article 5), where there is no adequacy decision for the destination country. The regulation lists the types of safeguards approved by the competent authority, e.g., binding corporate rules, standard contractual clauses, compliance certification mechanisms and enforceable codes of conduct.
  3. Derogations for specific situations (Article 6), where there is no adequacy decision and it is not feasible to rely on appropriate safeguards under Article 5. The derogation scenarios are listed in line with Article 49 of the GDPR, except that the data subject's consent is not explicitly required.

A transfer must be stopped or prohibited in four scenarios: if it impacts national security or the kingdom's interests, if the results of a transfer impact assessment show a high risk to the privacy of data subjects, if the appropriate safeguards adopted by the data controller are invalid, or if the data controller is unable to comply with the adopted safeguards. If one of these scenarios occurs, the transfer must be stopped and the TIA redone. The regulation also takes account of the latest mechanisms introduced after the "Schrems II" decision, i.e., mandating a TIA for transfers to countries without adequacy decisions (Article 8).
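As a rough illustration of how these grounds and stop scenarios interact, the sketch below encodes them as a simple decision helper. The function and parameter names are illustrative assumptions; the regulation itself remains the authoritative source.

```python
# Simplified reading of the transfer grounds and stop scenarios described above.
# Names are illustrative; this is not legal advice or an official checklist.

def transfer_permitted(has_adequacy_decision: bool,
                       has_appropriate_safeguards: bool,
                       derogation_applies: bool,
                       impacts_national_security: bool,
                       tia_shows_high_risk: bool,
                       safeguards_invalid: bool,
                       cannot_comply_with_safeguards: bool) -> bool:
    """Return True if, under this simplified reading, the transfer may proceed."""
    # Any of the four stop/prohibit scenarios blocks the transfer; the TIA
    # would then need to be redone before the transfer could resume.
    if (impacts_national_security or tia_shows_high_risk
            or safeguards_invalid or cannot_comply_with_safeguards):
        return False
    # Otherwise, at least one of the three grounds must apply.
    return has_adequacy_decision or has_appropriate_safeguards or derogation_applies


if __name__ == "__main__":
    # Safeguards in place, no stop scenario -> permitted.
    print(transfer_permitted(False, True, False, False, False, False, False))
    # Adequacy decision, but the TIA shows high risk -> stopped.
    print(transfer_permitted(True, False, False, False, True, False, False))
```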

PDPL implementing regulation

The PDPL implementing regulation is considered the main regulation besides the Cross-Border Data Transfer Regulation. It clarifies and adds further requirements under the law, separate from the Article 29 data transfer provisions.

Data subject rights

The regulation includes verbal requests as valid data subject requests, subject to the authentication mandate. This is believed to be a burden for data controllers to comply with, especially regarding the operational and accountability aspects. The implementing regulation provides no guidance on the definition of data scope under any of the rights — one of the challenges faced under the GDPR — which may lead to data controllers receiving an infeasible volume of requests or complaints. Finally, the new regulation does not allow data controllers to charge data subjects for DSRs deemed excessive or repetitive; however, they can reject requests with justification.

The lawfulness of processing data

Article 16 of the implementing regulation provides guidance on processing data under legitimate interest, which is now introduced with restrictions and precise criteria for its use as a lawful basis. Additionally, data controllers must conduct legitimate interest assessments before processing data, in line with Articles 6 and 35 of the GDPR.

Sub-processing

Under the PDPL, data controllers are required to periodically conduct compliance assessments of selected data processors to ensure they comply with the law. This may create a burden for data controllers, as they assume, before the competent authority and the data subject, sole accountability for all data processing activities conducted by the data processor.

Information security

Article 23 identifies the mandates for securing personal data by referring to National Cybersecurity Authority measures, standards and controls, or to best international cybersecurity standards if the NCA does not regulate the data controller. Additionally, the regulations added a significant word to point (a) of the article that is not stated in the law — "necessary": "Data Controllers to implement the necessary security and technical measures to mitigate potential risks on personal data." The addition is impactful because data controllers are mandated to implement the information security controls defined by the NCA on all personal data processing activities equally, regardless of scope. Without the ability to prioritize, this may impose undue cost and time burdens on data controllers.

Data breach notification 

The regulation introduced nearly identical criteria for notifying the competent authority and the data subject that a data breach has occurred: within 72 hours for the authority and immediately for the data subject. The data subject notification mandate may be a cause for concern; it would have been more appropriate to require it only where there is a confirmed or potentially high impact on the data subject, to avoid possible reputational effects on data controllers.

Privacy impact assessments

The implementing regulation mandates that data controllers conduct a documented privacy impact assessment in nine different personal data processing scenarios, including whenever processing involves anonymization, sensitive personal data or the use of new technologies, among others. This is in line with Article 35 of the GDPR.

Processing health and credit data 

Articles 26 and 27 add more restrictive and specific measures for processing health and credit data. For example, when processing health data, organizations must adopt a restrictive, limited "need to know" approach to minimize access and must document all processing stages, specifying an owner for each stage.

A more challenging requirement obliges data controllers to adopt all relevant measures and standards issued by other competent authorities in the health and financial sectors when processing health and credit data. This places additional responsibility on data controllers to cross-check data protection requirements across laws and regulations issued by authorities other than the data protection competent authority.

Processing data for promotional awareness; direct marketing purposes 

Article 28 of the implementing regulation requires data controllers to collect consent from data subjects before processing their data for promotional and awareness purposes. Under the article, there is an important indirect exemption for data controllers if there was a previous interaction between the data controller and data subject. This is similar to Article 21 of the GDPR, allowing data controllers to rely on legitimate interest with the right to object for profiling and direct marketing purposes when data controllers promote their products and services.

On the other hand, Article 29 of the implementing regulation introduced similarities with Article 28, which covers processing for direct marketing purposes, including sending promotional communications to data subjects. However, Article 29 mandates that data controllers collect consent from data subjects before processing data, without the indirect exemption mentioned in Article 28.

If Article 29 requires consent for direct marketing processing involving profiling and analytics — i.e., not for sending promotional communications to the data subject — then this is a more significant challenge: sending communications to a mass audience would be allowed, while targeting an audience would not be permitted without consent. Hence, this would require further clarification and guidance from the competent authority.

What is next? 

It is important to note the consequences of noncompliance under the law are severe in two instances. In the event of the deliberate unlawful disclosure of sensitive personal data, an individual could receive up to two years in prison and/or a fine of SAR3 million. An organization that violates the law could receive a warning or a fine of SAR5 million. If it receives a fine, the court or competent authority could require the organization's data controller to publish the decision in one or more local newspapers at their expense.

Organizations must design their privacy programs carefully to ensure compliance within the first year, while planning for advanced maturity levels in the following years. At a minimum, they should implement foundational principles and, when applicable, incorporate those requirements into their operational processes to demonstrate compliance before the competent authority. Finally, rather than being prioritized in the first year of compliance, tooling and automation should become part of the maturity roadmap to achieve standardization and efficiency in subsequent years.

2023-10-19 12:28:30