
Monday, August 23, 2021

Apple is bringing client-side scanning mainstream and the genie is out of the bottle - ZDNet

Image: Apple

Apple clearly thought it was onto a winner with its child sexual abuse material (CSAM) detection system and, more than likely, it was expecting more of the usual gushing plaudits it is used to. It's not hard to imagine Cupertino thinking it had solved the intractable problem of CSAM in a way that best suited itself and its users.

Apple claims its system is more private because it doesn't actively scan or monitor photos uploaded to its servers, unlike pretty much everyone else in the industry, but as the weeks go by, it looks increasingly like Apple has created a Rube Goldberg machine in order to differentiate itself.
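To make the distinction concrete, here is a deliberately simplified sketch, in Swift, of the kind of on-device lookup being described. Apple's actual system uses a perceptual "NeuralHash" and blinded cryptographic matching, none of which appears here; the type name and placeholder hashes are invented for illustration only.

```swift
// Hypothetical on-device matcher, heavily simplified for illustration.
// Apple's real system uses a perceptual "NeuralHash" and blinded,
// cryptographic set matching; plain string hashes stand in for both here.
struct PhotoHashMatcher {
    // Hash list shipped with the operating system, not fetched per user.
    let knownHashes: Set<String>

    // True if the photo's hash appears in the shipped list,
    // meaning the device would flag it before upload.
    func matches(photoHash: String) -> Bool {
        knownHashes.contains(photoHash)
    }
}

let matcher = PhotoHashMatcher(knownHashes: ["a1b2c3", "d4e5f6"]) // placeholder values
print(matcher.matches(photoHash: "a1b2c3")) // true: flagged on device
print(matcher.matches(photoHash: "0a0b0c")) // false: uploaded as normal
```

The point of contention is where this lookup runs: on the company's servers, or inside the phone in your pocket.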

The consequences of this unilateral approach are far-reaching and will impact everyone, not just those in the Apple walled garden.

Governments have been pushing big tech to build decryption capabilities for some time. One way to reach a compromise is to offer an encrypted system but prevent users from encrypting their own backups, thereby retaining some visibility into content; another is to build a fully end-to-end encrypted system and inspect content on the user's device, where it is decrypted for viewing.

While the rest of the industry settled on the former, Apple has switched lanes onto the latter.
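A schematic way to see the two lanes; the names below are invented for illustration and reflect no vendor's actual API.

```swift
// Schematic contrast of the two compromise designs, with invented names.
enum ScanningModel {
    case serverSide // backups are not user-encrypted; the provider scans its own servers
    case clientSide // fully end-to-end encrypted; the device itself inspects content
}

func whereContentIsInspected(_ model: ScanningModel) -> String {
    switch model {
    case .serverSide:
        return "on the provider's servers, where backups remain readable"
    case .clientSide:
        return "on the user's own device, before upload or at viewing time"
    }
}

print(whereContentIsInspected(.serverSide)) // the rest of the industry
print(whereContentIsInspected(.clientSide)) // Apple's chosen lane
```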

This shift occurred just as Australia handed down its set of draft rules that will define how its Online Safety Act operates.

"If the service uses encryption, the provider of the service will take reasonable steps to develop and implement processes to detect and address material or activity on the service that is or may be unlawful or harmful," the draft states.

See also: Apple to tune CSAM system to keep one-in-a-trillion false positive deactivation threshold

Canada goes a step further in a similar draft. Its iteration demands proactive monitoring of content relating to CSAM, terrorism, incitement to violence, hate speech, and non-consensual image sharing, and it creates a new Digital Safety Commissioner role to assess whether any AI used is sufficient, according to University of Ottawa law professor Dr Michael Geist.

Should it become law, online communication services in Canada would also have 24 hours to make a decision on a piece of harmful content.

How that potential law would interact with Apple's decision to set a threshold of 30 CSAM images before humans are injected into the process to inspect the content's metadata will be something to watch in future.
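Stripped of the cryptography, the threshold behaviour Apple describes amounts to a counter that stays silent until the 30th match. Here is a toy Swift version with invented names; in Apple's published design, threshold secret sharing means the server can decrypt nothing until enough matches exist, whereas a plain counter stands in for all of that below.

```swift
// Toy version of the stated 30-match threshold, cryptography omitted.
struct ThresholdGate {
    let threshold = 30 // Apple's stated figure
    private(set) var matchCount = 0

    // Record one hash match; returns true once human review would be triggered.
    mutating func recordMatch() -> Bool {
        matchCount += 1
        return matchCount >= threshold
    }
}

var gate = ThresholdGate()
for photo in 1...30 {
    if gate.recordMatch() {
        print("Human review triggered at match \(photo)") // fires only at match 30
    }
}
```

A 24-hour takedown clock, as Canada proposes, sits awkwardly against a system designed to reveal nothing until a 30th match arrives.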

While the Canadian proposal has been deemed to be a collection of the worst ideas from around the world, the likes of India, the United Kingdom, and Germany are likewise pushing forward with internet regulation.

Apple has said its CSAM system will launch only in the United States when iOS 15, iPadOS 15, watchOS 8, and macOS Monterey arrive, so one might argue Apple will be able to avoid the regulations of other western nations.

But not so fast. Apple privacy chief Erik Neuenschwander said in a recent interview that the hash list used to identify CSAM will be built into the operating system.

"We have one global operating system," he said.  

Apple has consistently stated that its policies aim to prevent overreach, use by corrupt regimes, and false suspensions, but it's not clear how the company will answer one very important question: What happens when Apple is issued a court order that goes against those policies?

There's no doubt non-US legislators will take a dim view of being told no when the sort of system they want already ships on Apple devices.

"We follow the law wherever we do business," Tim Cook said in 2017 after the company pulled VPN apps from its Chinese app store.  

Following the law: Citizen Lab finds Apple's China censorship process bleeds into Hong Kong and Taiwan

While there are plenty of worthy concerns and questions about Apple's system itself, the mere existence of such a system is cause for greater concern.

For years, Apple has pushed back on demands from US authorities to help unlock the phones of people alleged to be involved in mass shootings. Responding to FBI demands in 2016, Cook wrote a letter to customers that rebutted suggestions that unlocking one phone would be the end of the matter, saying the technique could be used over and over again.

"In the wrong hands, this software -- which does not exist today -- would have the potential to unlock any iPhone in someone's physical possession," the CEO said.

The key to Apple's argument was the words between the dashes: "which does not exist today". Now, in August 2021, while that exact capability still does not exist, an on-device scanning capability is set to appear on all of Apple's devices, and that is reason enough for concern.

"Apple has unilaterally chosen to enrol its users in a global experiment of mass surveillance, seemingly underestimated the potential costs this could have on individuals who are not involved in the manufacture or storage of CSAM content, and externalised any such costs onto a user base of one billion-plus individuals around the world," Citizen Lab senior research associate Christopher Parson wrote.

"These are not the activities of a company that has meaningfully reflected on the weight of its actions but, instead, are reflective of a company that is willing to sacrifice its users without adequately balancing their privacy and security needs."

For the sake of argument, let's give Apple a pass on all of its claims -- perhaps the biggest of the tech giants can resist legislative pressure, and the system remains fixated only on CSAM within the United States. However, ensuring Apple follows through will take eternal vigilance from the company and from privacy advocates.

The bigger problem is the rest of the industry. The slippery slope does exist, and Apple has taken the first step down it. Maybe Apple has boots with ice grips and has tied itself to a tree to make sure it cannot slide any further, but few others will be so equipped.

Suddenly, on-device scanning has become a lot less repugnant: if a company as big as Apple, one that promotes itself on the basis of privacy and continues to sell squillions of devices, can do it, then it must be acceptable to users.

Building on that, shady businesses that want to upload data to their own servers now potentially have a nomenclature built out for them by Apple. It's not the user's data, it's safety vouchers. What previously could have been deemed a form of exfiltration is now done to protect users, comply with government orders, and make the world a safer place.
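As a purely hypothetical illustration of the renaming at work, such a wrapper might look like the sketch below; every field name is invented, not Apple's, and the point is only that the payload is user-derived either way.

```swift
import Foundation

// Hypothetical illustration of the renaming at work: the same user-derived
// bytes, wrapped and relabelled. All field names here are invented.
struct SafetyVoucher {
    let photoIdentifier: UUID     // which photo the voucher concerns
    let encryptedDerivative: Data // a visual derivative of the user's own photo
    let matchMetadata: Data       // hash-match information, opaque to the user
}

// From the uploader's side, this is "a safety voucher"; from the user's side,
// it is still their data leaving the device with no way to switch it off.
let voucher = SafetyVoucher(
    photoIdentifier: UUID(),
    encryptedDerivative: Data(),
    matchMetadata: Data()
)
print("Uploading voucher for photo \(voucher.photoIdentifier)")
```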

Those systems that follow in Apple's wake are unlikely to have the concern for user privacy, the technical expertise and resources, the ability to resist court orders, or the flat-out good intentions that Cupertino appears to have.

Even if Apple were to dump its plans tomorrow, it's too late; the genie is out of the bottle. Should Apple change its mind, critics and those who want to pursue an on-device approach will simply say it buckled to pressure from extreme sections of the privacy debate.

Companies are going to compete over who can best poke around on devices, boasting about how many of their users were arrested and how that makes their products safer than the alternatives. Missing from this will no doubt be the number of mistakes made, the edge cases never properly considered, and the anguish caused to some of those who paid for the devices. It's not going to be pretty.

Apple doesn't seem to grasp that it has turned its users' relationship with its products from one of ownership into a potentially adversarial one.

If your device is scanning content and uploading it somewhere, and you cannot turn it off, then who is the real owner? It's a question we will need to answer soon, especially because client-side scanning is not going away.

ZDNET'S MONDAY MORNING OPENER 

The Monday Morning Opener is our opening salvo for the week in tech. Since we run a global site, this editorial publishes on Monday at 8:00am AEST in Sydney, Australia, which is 6:00pm Eastern Time on Sunday in the US. It is written by a member of ZDNet's global editorial board, which is comprised of our lead editors across Asia, Australia, Europe, and North America. 
