May 2024 | Monthly Recap
From the cringy Recall by Microsoft, to Ryanair processing biometric data, all the way to the ICO issuing a "warning shot" for the AI industry - we've got you covered.
Howdy everyone! Welcome back to our monthly recap on all things privacy, tech, AI and whatever else we thought was worth diving into.
As usual, we don’t cover everything that happened last month but instead we prefer to focus on topics or developments that are either interesting, disturbing, absurd, or funny (in no particular order). If you feel something happened this month that should have been included but wasn’t, then please, drop everything and let us know!
Happy reading for now and we hope you enjoy it!
Recall Cringe
“Sure, I’d be happy if Microsoft took constant screenshots of my screen!” Said no one ever. But Microsoft still thought it was a good idea to release “Recall,” its new feature for Copilot Plus PCs that keeps track of everything you see and do on your computer and, in return, gives you the ability to search and retrieve anything you’ve done on the device.
It includes logging things you do in apps, tracking communications in live meetings, remembering all websites you’ve visited for research, and more. All you need to do is perform a “Recall” action, which is like an AI-powered search, and it’ll present a snapshot of that period of time that gives you context of the memory.
The company notes that an intruder would need a password and physical access to the device to view any of the screenshots, which should rule out the possibility of anyone with legal concerns ever adopting the system. Ironically, Recall’s description sounds eerily reminiscent of computer monitoring software the FBI has used in the past. Microsoft even acknowledges that the system takes no steps to redact passwords or financial information.
The UK ICO has already released a statement saying that it will be investigating the tool.
USA
Bill for American Privacy Rights Act referred to full committee
On May 23, 2024, the House Energy and Commerce Committee published a blog announcing that the discussion draft of the American Privacy Rights Act (APRA) was forwarded, without amendment, to the full committee by a voice vote. Although this is a great step forward, some topics may cause difficulties down the road. Chief among them is the inclusion of COPPA, which was basically just added “as-is” to the Act rather than being incorporated more thoroughly, leaving out suggested changes to COPPA that would provide more protection for children’s privacy. Other topics that seem to be drawing pushback include data broker rules and appropriate AI provisions.
Although these topics are very important, we hope that all lawmakers and stakeholders involved will show flexibility and manage to find a middle ground and not lead to another unsuccessful run at getting a federal privacy law adopted in the US.
No piece of legislation is “perfect,” and getting this one in place has never been more urgent than now. There is always the option to amend it in the future should that become necessary. You can read more about the topic here.
A healthy approach can save $ (or bitcoin)
Health-tech companies have always been a target for cyberattacks. That makes sense of course, given the value and sensitivity of the data they collect and process. But a hacker doesn’t even need to access that data and sell it to earn money. If they manage to breach a major player and cause widespread fallout in the health-care sector, then a nice payday is waiting for them. How nice? How does $22M sound?
That’s the amount the health insurance company UnitedHealth Group confirmed it paid as a ransomware payment (in bitcoin, apparently). Here’s the thing though - the hackers got access to the data via UnitedHealth’s subsidiary, Change Healthcare, through a server that was not protected by multi-factor authentication. It’s another great example of the importance of vetting and knowing all third parties involved in processing data (subsidiary or otherwise), because even if a company has all the best security measures in place, hackers can always find the weakest link in the supply chain and pay a visit.
Ok, that’s settled then?
Although this settlement isn’t new, around 800,000 consumers will be notified of refunds from a $7.8 million privacy settlement between the FTC and online therapy provider BetterHelp.
Apparently BetterHelp thought it could help even better by sharing consumers' health data for advertising and retargeting, all while promising consumers that it would only disclose personal health data for limited purposes. I think we can all agree that the settlement amount, together with requirements to obtain "affirmative express consent" before disclosing personal data, implement a data retention schedule, and put in place a comprehensive privacy program with strong safeguards for consumer data, is the very least this settlement should include.
Speaking of settling…
Maybe companies have started to realize that there is a shift in approach and they won’t be able to get away with doing whatever they want with consumer data. Or maybe headlines like, “The U.S. Federal Communications Commission announced fines totaling $200 million to wireless carriers AT&T, Sprint, T-Mobile and Verizon over alleged nonconsensual geolocation data sharing” are making them rethink litigation strategies.
Whatever the reason, we hope it’ll make other companies start rethinking their approach. For the sake of consumers and that of our society. Data broker Kochava seems to have re-read the room and now wants to facilitate settlement talks during its ongoing privacy lawsuit. What about? Oh well, the usual of course - engaging in unfair business practice by selling precise geolocation data without consent.
EU
Some Guidance Please
In a new guidance released by the French Data Protection Authority (CNIL), users are advised not to collect personal data from public online spaces for direct marketing purposes using web scraping tools. In 2019, CNIL clarified that data that is publicly accessible is still personal and cannot be reused without the consent of the individual. Read more here (French)
Microsoft again in the crosshairs
The European Commission has stepped up its enforcement action against Microsoft, requesting further information related to suspected violations of the DSA linked to Bing's generative AI features. If Microsoft fails to respond, the Commission may impose fines of up to 1% of its total annual income or worldwide turnover.
No cookies for me please
Although Google keeps pushing back the start of the phase-out of third-party cookies (the deadline was previously set for the end of 2024 but is now postponed to at least early 2025), it looks like some are already preparing for the day after. This includes IAB Sweden, which offered six tips to advertisers on how to adapt to the end of third-party cookies.
Speaking of cookies
Spain's data protection authority updated its third-party cookie guidelines in response to the EDPB's opinion on "pay or consent" advertising models, which found that models allowing users to pay a fee to opt out of targeted marketing do not, on their own, offer a valid consent option.
UK
Let’s consult on it?
The UK ICO has launched the fourth part of their consultation series and this time on data subject rights in generative AI. As generative AI models often use personal data during training and deployment, organizations must ensure that individuals can exercise their data rights. As part of its mission to promote innovation and protect personal data in AI development, the ICO seeks evidence on effective methods organizations use to meet these legal obligations. Read more here
Global
This just in – people care about their privacy!!
Shocking, right?! Two recent surveys show just how much people worry and care about their privacy. The first was conducted by the Office of the Privacy Commissioner of Canada and found that respondents consider protecting customer information to be a high priority and that most were aware of their privacy obligations under the law.
The second was conducted by the Office of the Privacy Commissioner of New Zealand which found that more than 80% of respondents said they want control over collection and use of their data, to know when data is used in automated decision-making and the right to ask companies to delete data.
A month full of Biometrics
This month saw a lot of headlines relating to biometric data processing. Specifically, facial scanning for various purposes seems to be a hot topic at the moment.
As a quick refresher: biometric data is considered a “special category” of personal data under the GDPR (Art. 9), which prohibits the processing of such data unless one of the exemptions applies. Like many other articles in the GDPR, the language leaves room for interpretation and a pretty large grey area.
This isn’t ideal for businesses to plan ahead. However, a number of authorities across the globe are investigating or raising concerns about the use of facial recognition and the outcome may lead to some more certainty and clearer guidance. Some of these developments include:
Italy's data protection authority announced it will investigate facial recognition technology used in surveillance systems at the city of Rome's metro stations.
Two separate complaints were filed with the French and Belgian data protection authorities over Ryanair’s use of biometric data in an identity verification process that applies facial recognition to all users without a Ryanair account, purportedly to “protect customers from internet scams”.
(Outside Europe) Bermuda's Human Rights Commission raised concerns that the use of facial recognition that is to be included among upgrades in a new security camera network being installed across the island, could violate residents' constitutional rights.
The Netherlands' DPA is already one step ahead, having released guidance that answers common questions on facial recognition programs. For example, it says the technology can be used if it is "necessary for authentication or security purposes"; otherwise, the use of biometric data should be limited and it cannot be used to confirm someone's identity.
The EDPB also issued an opinion assessing four scenarios on the use of facial-recognition technology to streamline the processing of airport passengers, and the legality of such use. Regarding the fourth scenario (which seems closest to the Ryanair case), the opinion states that “flow at airports can be achieved in a less intrusive manner and the negative impact on the data subjects’ fundamental rights and freedoms that could result from a data breach in a centralised database of biometric data seems to outweigh the anticipated benefit resulting from the processing. Therefore, the processing cannot meet the necessity and proportionality principles”. Well, that seems to answer at least the questions around Ryanair.
The one that keeps on giving
This next one is also about biometrics and a story we brought up in a few of our earlier updates. But, as it continues to make headlines, we thought why not keep you up to date as well. Yes, you guessed it: the Worldcoin Foundation, the crypto project that issues you crypto in exchange for your biometric data (iris scans). Well, now Hong Kong has also ordered it to stop operating there due to concerns about its privacy and data practices.
That adds Hong Kong to the list of jurisdictions that have already done so, and more will probably follow. Maybe that’s why the Worldcoin Foundation has announced the release of an open-source system that it claims can improve the protection of sensitive information, including biometric data. It says it migrated to the new system and, in doing so, deleted the iris codes generated when people signed up.
If this is true, then that’s a great step (someone must be reading our newsletter)! But, is it true?
And one more for good luck!
Target (the retail giant) seems to have wanted to join the party, using cameras and advanced surveillance systems in its store locations to “surreptitiously collect, possess or otherwise obtain” the biometric data of its customers. At least, that’s what a class action lawsuit claims. It also claims that collecting, storing and using individuals’ biometric data without obtaining informed written consent violates all three prongs of Illinois’ Biometric Information Privacy Act (BIPA).
This is another absurd outcome of the lack of a federal privacy act in the US: consumers in one state are protected from a company exploiting their biometric data, while consumers in other states just have to accept it.
Speaking of privacy class actions against Target, here’s another one, which claims Target violates the privacy of its email subscribers by embedding hidden tracking pixels within its marketing emails that capture and log sensitive information. Maybe they need to reevaluate their privacy functions over at Target, because these lawsuits could get expensive.
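For readers wondering how a tracking pixel actually works, here is a minimal sketch in Python. Everything in it is hypothetical for illustration (the tracker domain, parameter names, and helper functions are made up, and real trackers are more sophisticated), but the core trick is just this: an invisible 1x1 image whose URL encodes who you are, so that loading the email tells the sender's server you opened it.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical tracker endpoint - not a real service.
TRACKER = "https://tracker.example.com/open.gif"

def pixel_html(recipient_id: str, campaign: str) -> str:
    """Build the <img> tag a sender might embed in an HTML email.

    The 1x1 image is invisible to the reader, but when the mail client
    fetches it, the query string tells the tracker's server who opened
    which campaign (and, implicitly, when and from what IP address).
    """
    query = urlencode({"r": recipient_id, "c": campaign})
    return f'<img src="{TRACKER}?{query}" width="1" height="1" alt="">'

def decode_open(img_src: str) -> dict:
    """What the tracker learns from the image request it receives."""
    params = parse_qs(urlparse(img_src).query)
    return {key: values[0] for key, values in params.items()}

# Simulate the round trip: embed the pixel, then "receive" the request.
tag = pixel_html("user-123", "spring-sale")
src = tag.split('src="')[1].split('"')[0]
print(decode_open(src))  # {'r': 'user-123', 'c': 'spring-sale'}
```

This is also why many mail clients now block remote images by default, or proxy them through their own servers: if the image is never fetched directly from the tracker, the open event is never logged against you.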
And now, how about some AI stuff?
Finally, it’s official (kinda)!
The Council of the European Union formally approved the EU AI Act. Once the presidents of the Council of the European Union and the European Parliament sign it, it will be published in the EU Journal and will enter into force 20 days later. Phew, that took a while. But hey, it’s not easy adopting the first comprehensive law on a breakthrough technology such as AI.
But that’s not all!
Colorado also recently passed an AI bill, one of the first pieces of legislation in the U.S. regulating AI. The Colorado AI Act is a cross-sectoral AI governance law covering the private sector. You can read more about it here.
Just a sneak peek
The EDPB task force published a preliminary report on its ChatGPT investigation, covering how ChatGPT interacts with EU personal data laws. The preliminary investigation determined which elements of the GDPR apply to the various inquiries being made around OpenAI's operations and addresses lawfulness, fairness, transparency and information obligations, data accuracy, and the rights of the data subject.
The report also includes a questionnaire that was developed within the context of the investigation. It points out that EU supervisory authorities are independent and as such are free to modify the questionnaire or to add further questions. However, we believe it is safe to assume the general structure and questions included will form the basis of any further investigations. You can find the full (preliminary) report here.
They’ve been busy in the UK
The ICO issued a warning to companies to not disregard data protection risks when launching AI-powered chatbots. It started with an investigation into Snap and its ‘My AI’ chatbot following concerns that Snap had not met its legal obligation to adequately assess the data protection risks posed by the new chatbot.
Following the investigation, the ICO stated that “Our investigation into ‘My AI’ should act as a warning shot for industry. Organisations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market.
“We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers – including fines – to protect the public from harm.”
Well, you can’t say you haven’t been warned.
In this regard, the ICO also has guidance on AI and data protection published on its website. Companies should probably use it as a reference when planning their AI projects.
And, if safety is your thing, then the International Scientific Report on the Safety of Advanced AI should have you covered (with respect to general-purpose AI at least). The report was commissioned by the UK government and is an interim report that sets out an up-to-date, science-based understanding of the safety of advanced AI systems.
New Legislations (they just keep on coming)
Colorado
By passing SB 205, Colorado became the first state to pass a comprehensive privacy law for artificial intelligence systems.
The final form of Colorado SB 205 establishes requirements for both developers and deployers of "high-risk" AI systems, which could potentially chill the use of AI in various employment contexts beyond sourcing, selection, and termination.
Minnesota
Minnesota has become the latest US state to pass a privacy bill, the Minnesota Consumer Data Privacy Act (MCDPA).
The Vatican
The Vatican City State has issued a new decree on the protection of personal data. The new rule applies to data processing within Vatican City, except for personal uses, publicly disclosed data, and anonymized data.
That’s it!
See you next month.