June 2024 | Monthly Recap
Take a break from all the football and let's talk about Apple, Texas, drivers’ data, data brokers, Costco's ad network, Change Healthcare, new legislation coming into force today and many (many) more!
Heya!
Take a break from all the football (we mean soccer - clarification for the American reader), and get updated on all things privacy, tech, AI and whatever else we thought was worth diving into with our monthly recap for June!
As usual, we don’t cover everything that happened last month; instead, we focus on topics or developments that are interesting, disturbing, absurd, or funny (in no particular order). If you feel something happened this month that should have been included but wasn’t, then please, drop everything and let us know.
Hope you enjoy and happy summer!
Make sure to subscribe here on Substack, or via LinkedIn
Let’s talk about Apple
Apple. What’s not to love? Their story, their legendary previous leadership, their products, their approach… the list goes on. And if all that isn’t enough, Apple has championed privacy over the years and built products to protect their users and secure their data. In 2021, Apple took this commitment to another level by introducing App Tracking Transparency (ATT) with their iOS 14.5 rollout.
This feature was extremely important for a number of reasons: it helped raise awareness of the tracking and data collection people are subjected to every day; it pushed back against a complex ecosystem of apps, social media companies, data brokers, and ad tech firms that track users online and offline, harvest their personal data, and fuel a $227 billion-a-year industry; and it gave (at least some) control back to individuals over their personal data, along with a less intrusive and more enjoyable user experience.
Another (more recent) example is Apple’s long-anticipated announcement of when and how they would join the AI craze. Apple waited patiently and built quietly before making any announcements or launches. This not only allowed Apple to watch and learn what not to do, but also built anticipation around their announcement (they were always marketing geniuses).
But the wait was finally over this month, when Apple introduced its personal intelligence system, Apple Intelligence. And yes, you guessed right, it’s privacy-focused, offering on-device processing and groundbreaking infrastructure like Private Cloud Compute. Apple also did their homework while waiting and building, and understood that one of the big controversies around AI was the use of personal and sensitive data for training models. So of course they made sure to announce that they “do not use our users' private personal data or user interactions when training our foundation models”.
But, as these things often do, it also pissed off some very big players in the tech ecosystem. When it comes to the AI launch, Elon Musk threatened to ban Apple devices from his companies over Apple’s ChatGPT integration (yes, of course it’s integrated with OpenAI). Here are some snippets of what Elon said on the subject:
“Apple using the words ‘protect your privacy’ while handing your data over to a third-party AI that they don’t understand and can’t themselves create is *not* protecting privacy at all!” or, “It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!”, but also “stop this creepy spyware”.
And what about the regulators? Apple didn’t only piss off competitors, they also pissed off regulators. Globally. Why? Claims that Apple was engaging in anti-competitive behavior. In the case of ATT, for example, by blocking third parties’ ability to serve ads, Apple can now leverage all that user data to serve its own ads. A report a year after the launch concluded that Apple’s Search Ads business grew 94.8% year-over-year, while Facebook’s adoption dropped 3% to 82.8%. We’re not number geeks, but those figures do look significant.
Now, we won’t dive into the world of competition law (however fascinating it is), but competition enforcement agencies across the globe seem to have agreed that there is room for some digging around. Some are still digging. Which leads us to the latest headline: “Apple Is First Company Charged Under New E.U. Competition Law”. Not all anti-competition investigations against Apple are necessarily connected to privacy-focused features like ATT. This latest one, for example, relates to Apple’s App Store “steering” policies, which allegedly violate the EU’s new Digital Markets Act (DMA). If the charges hold up, Apple could be fined up to 10 percent of its annual global revenue, or about $38 billion based on last year’s numbers.
Apple is already delaying the rollout of three features (Phone Mirroring, SharePlay Screen Sharing enhancements, and Apple Intelligence) for EU users due to those regulatory uncertainties around the DMA.
US 🇺🇸
Yiiiihaaaaa here comes Texas!!
Ok, fine… that’s a bit of a cliché but we mean it well.
Texas has been busy and seems to take its new privacy law seriously. How? Well, Texas being Texas, it’s all about enforcement, and that’s why the Attorney General of Texas launched “a major data privacy and security initiative, establishing a team that is focused on aggressive enforcement of Texas privacy laws”. (Ehmm, how aggressive will this “aggressive enforcement” be?)
The announcement states that this data privacy team will focus on the enforcement of Texas’s privacy protection laws including the Data Privacy and Security Act, the Identity Theft Enforcement and Protection Act, the Data Broker Law, the Biometric Identifier Act, the Deceptive Trade Practices Act and federal laws including the Children’s Online Privacy Protection Act (COPPA) and the Health Insurance Portability and Accountability Act (HIPAA).
What can we say, although not all of those are purely “Texas’s privacy protection laws”, we can’t say we’re mad at the initiative. Go Texas!
Don’t say you haven’t been warned
Just two days after that announcement, the Texas Attorney General made another one, saying they opened “an investigation into several car manufacturers after widespread reporting that they have secretly been collecting mass amounts of data about drivers directly from their vehicles and then selling that data to third parties - including to insurance providers”.
And then another one about two weeks later, saying the office “has issued letters notifying over one hundred companies of their apparent failure to register as data brokers with the Texas Secretary of State as required by Texas’s newly enacted Data Broker Law”.
Well, we guess that answers our earlier question about how aggressive they plan to get. It’s no secret that a lack of enforcement is a key issue in the world of privacy, so this is a welcome development. But let’s see how this plays out in the long run.
Speaking of drivers’ data…
This is, of course, not an issue only in Texas, and it has been going on for quite some time. And yet, it’s only now making headlines in the New York Times. So what’s all the noise about?
Well, insurance companies allegedly collect driving data from smartphone apps that have safety features such as Life360 and GasBuddy (unless users opt out. Classic US).
Why is this an issue? Because this data allows the insurance industry to build a driving score that reflects a person’s driving habits. That score can then be used to determine insurance rates, with safer drivers potentially paying lower premiums. Put that way, it doesn’t sound too bad, and maybe even good if you’re a safe driver. But, as with many other things in data privacy, we need to understand all the other “scoring” or “profiling” that can be done with that data and who else has access to it (data brokers?). And then the whole thing deserves a different reaction than “what’s the big deal?”
Well, that was quick
Right, so about the data brokers we just mentioned: sharing driving data with insurance companies is already a thing for them. This basically means they have the data to share in the first place, and we can be pretty sure it’s not only insurance companies they’re sharing (i.e. selling) it to. But it seems insurance companies will struggle a bit more to get their hands on this type of data for a while, at least from one data broker, which announced that it “stopped accepting data from car makers and no longer sells the information to insurers”. Maybe we’re already seeing the effect of Texas’s “aggressive enforcement” and its investigation into this?
There’s data and then there’s shoppers’ data
This one took longer than expected but comes as no surprise - US retailer Costco is “building out an ad network built on its trove of loyalty membership data, using its 74.5 million household members’ shopping habits and past purchases to power targeted advertising on and off its website”.
Say it ain't so!
Apparently, Facebook investors felt they were misled about the misuse of user data by the company and third parties, and that the subsequent lawsuits had a significant effect on share value. Fair enough, they are referring to cases from around 2017/18 (including the Cambridge Analytica case), and yet, we’re having a hard time taking these claims seriously.
Partially because we’re talking about a company controlled by a guy who said things like this about his users and their data:
“People just submitted it. I don't know why. They trust me. Dumb f*cks”.
Or a company whose engineers told a court that they (Facebook) have no idea where all the data is kept. And you don’t need to be a genius to figure out that if you don’t know where something is, you also don’t know who has access to it or who it’s being shared with.
In any event, it’s hard to imagine these are things that wouldn’t have come up with a little due diligence. That said, the fact that it happened doesn’t mean it should have happened, and maybe a bit of accountability towards their shareholders will make them reflect a bit more on their practices (although we won’t be holding our breath).
This sounds healthy
Change Healthcare confirmed that a February ransomware attack on its systems, which brought widespread disruption to the U.S. healthcare system for weeks, resulted in the theft of medical records affecting a “substantial proportion of people in America”. This little company apparently has access to massive amounts of health information on about a third of all Americans.
What’s a little bit more worrying is that Change said in its latest statement that it “cannot confirm exactly” what data was stolen about each individual, and that the information may vary from person to person.
They also said that they have begun the process of notifying affected individuals whose information was stolen during the cyberattack; individuals should receive notice by mail beginning late July (ehm, five months later?).
In any event, if you do get a notice from them about being affected, we’d recommend you take it seriously, as the types of data affected are substantial: personal data (such as names and addresses, dates of birth, phone numbers and email addresses); government identity documents (such as Social Security numbers, driver’s licenses and passport numbers); medical records and health information (such as diagnoses, medications, test results, imaging, and care and treatment plans); and health insurance information (including plan and policy details, as well as billing, claims and payment information). So basically, everything.
EU 🇪🇺
How about we educate you?
About privacy and legal bases for processing of course.
Apparently, Microsoft’s 365 Education services violate children’s data protection rights.
At least that is what a complaint by NOYB, lodged with Austria’s data protection authority, states. It specifically points out that when pupils wanted to exercise their GDPR rights, Microsoft said the schools were the “controller” of their data. However, the schools have no control over the systems.
Hmmm, that's strange. NOYB also filed a second complaint accusing Microsoft of secretly tracking children, saying it found tracking cookies installed by Microsoft 365 Education despite the complainant not consenting to tracking.
Given the long process involved in these types of complaints (a problem in itself that we won’t get into here), it’ll take some time before we know the outcome of this.
And yet again they make a headline...
This is one that we’ve been addressing pretty much every month and so we are too involved and feel obliged to continue as long as they’re making headlines.
This month, Worldcoin agreed to pause data collection in Spain until the end of the year, or until the final resolution of the investigation by the Bavarian data protection authority (where the company has its main establishment in Europe).
This comes after the Spanish Agency for Data Protection, as a precautionary measure last March, ordered the company to stop the collection and processing of personal data it was carrying out in Spain. You can check out more details in our last updates. Stay tuned for more on this one.
AI 🤖
Say what?
Meta decided to change its privacy policy to allow it to collect user posts from Facebook, Instagram, and other Meta platforms to train its AI models. Just like that.
Nothing wrong with that, right? Well, it caused quite a backlash, and NOYB took the initiative and filed complaints with 11 EU DPAs.
NOYB argues that users are not given a substantial choice in the matter. Users do not have to “opt in” to this change, which takes effect automatically, while opting out is “extremely complicated,” as opposed to simply clicking a button. The fact that this would effectively give Meta access to the personal data of about four billion users and let it use that data for “experimental technology essentially without limit” makes the whole thing even more worrying. Meta is of course “confident” that it is compliant with EU privacy law. As always.
On the same topic, the Hamburg DPA urged consumers to pay attention to Meta's AI training data practices and provided instructions on how users can opt out (without mentioning that the process is extremely complex). They also mentioned that they’re in touch with other DPAs on the topic and will be working together to address it at an EU-wide level.
The Irish DPC confirmed this with a statement saying that “The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA. The DPC, in cooperation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.” We wonder who in the US is “engaging with Meta on this issue”.
And now, how about some responsible AI?
Responsible AI. Not something that makes headlines these days. Maybe because irresponsible AI makes better headlines?
Well, this didn’t stop OpenAI’s former chief scientist, Ilya Sutskever, from leaving OpenAI and launching a new AI company that is building safe superintelligence (SSI).
In a tweet Ilya writes “SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI. We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace. Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
Ahhh what a relief to know that there seems to be a responsible adult in the room after all.
The whistles are starting to blow
A group of current and former OpenAI employees have come forward to express concerns about the company’s culture and priorities. They claim that OpenAI is racing to build the most powerful AI systems ever created, prioritizing profits and growth over safety and ethics. As if to confirm this from the consumer side, Ali Farhadi, CEO of the Allen Institute for AI, said the breakneck pace of AI development has broken consumer trust. A short-term win is only that: short-term. Companies that prioritize profits over safety and ethics won’t last in the long run and will fail in one way or another, even over something as “minor” as consumer trust. We’ll definitely keep you posted on developments here.
And now for some guidance
All around the world, there is a lack of guidance for companies dealing with the rollout of AI in their organizations. Luckily, some countries have heard the frustration and put in the effort needed to tackle these issues. Or at least try:
Singapore announced its Model AI Governance Framework and its AI Governance Playbook
Hong Kong's PCPD releases AI management framework
Spain publishes guidance on AI strategy
The European Data Protection Supervisor released guidance on privacy considerations (this one is for EU institutions around the development and use of generative AI but there could be some interesting takes in there)
New Legislation
How comprehensive is too comprehensive?
Well, if you ask Rhode Island, that bar is not too high. Or at least it would seem so based on their “comprehensive” privacy law that just passed. Nevertheless, it is a law that protects privacy in the US, so yay! Here’s the full text.
Protect the children
New York passes children's privacy and social media bills. The Stop Addictive Feeds Exploitation (SAFE) for Kids Act will restrict a child’s access to addictive feeds on social media, and the New York Child Data Protection Act will keep children’s personal data safe. Or will try to anyway.
And now to an even funner section - new legislation that is coming into effect and must be complied with. Yaaay!
Here’s the bottom line - as of July 1st (so, now), three comprehensive state privacy laws come into effect: Texas, Florida, and Oregon.
Additionally, July 1st is also the deadline under Colorado's CPA to recognize universal opt-out mechanisms, and the deadline for data brokers to report 2023 DSR metrics under California's CCPA. So no time to lose getting your ship in order! Check out our earlier blog where we provided a more detailed breakdown of what all those different laws require. » US Privacy Laws - 2024
That’s it! See you next month.
This newsletter is brought to you by hoggo's founders. hoggo is an AI-driven platform for B2B trust where sellers can showcase & improve compliance and buyers can evaluate, manage and monitor them.