Apple is a business.
This is the first thing you should know about it. It’s a company that exists to make money.
It’s not your friend. It’s not a superhero. It’s not a religion.
As a company, it invites you to buy its products and services. If you don’t like what it has to offer, you’re free to move on.
And I think that confusion between Apple the business and Apple the ideal is at the heart of a lot of the criticism Apple has received over the new child safety features it is introducing. It's a complicated and charged subject, and both Apple's messaging and the way the media have reported it have created more confusion.
Add to that the fact that some people get very upset when Apple does something that doesn’t fit in with how they see the company, and it’s a recipe for disaster.
However, the other day Apple released a document that goes into great detail about how the system will work, the steps taken to keep false positives to a minimum, the mechanisms in place to prevent governments, law enforcement, and even malicious or coerced reviewers from abusing the system, and how Apple maintains end-user privacy throughout.
According to Apple, “the system is designed so that a user need not trust Apple, any other single entity, or even any set of possibly-colluding entities from the same sovereign jurisdiction (that is, under the control of the same government) to be confident that the system is functioning as advertised.”
It's a dense document, but it's well worth a read.
But these are just words on a page.
It ultimately comes down to one thing.
Do you trust Apple?
Well, do you?
I think this is a deep question, one that goes beyond scanning for images of child abuse (something most people will agree is a good thing for Apple to do).
The trust issue here goes deeper.
First, Apple has developed an on-device scanning system that can detect specific content with great accuracy.
Right now, Apple is using this to detect CSAM (child sexual abuse material) and to flag sexually explicit images sent or received by children via iMessage, but there's nothing that prevents that mechanism from being used to detect anything else: religious or political material, terrorist-related content, pro- or anti-vaccine leanings, cat photos, or anything at all.
And that scanning mechanism is baked into its devices.
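To make that concern concrete, here is a deliberately simplified sketch in Swift of what generic on-device matching looks like. This is not Apple's NeuralHash or its threshold reporting protocol; the hash function and the fingerprint list below are placeholders I've invented for illustration. The point it makes is that the matching code has no idea what it is looking for; only the supplied list does.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch only; not Apple's actual system.
// It illustrates the general shape of on-device matching: derive a
// fingerprint from local content and test it against an opaque list of
// target fingerprints supplied from outside the device.

// The device cannot tell what these fingerprints represent; whoever
// curates the list decides what gets matched. (Placeholder value.)
let targetFingerprints: Set<String> = [
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
]

// Stand-in for a perceptual hash. A real system would use a hash designed
// to survive resizing and re-encoding, not an exact cryptographic digest.
func fingerprint(of data: Data) -> String {
    SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
}

// The matching step itself is content-agnostic: swap the list and the
// same code flags photos, documents, memes, or anything else.
func shouldFlag(_ data: Data) -> Bool {
    targetFingerprints.contains(fingerprint(of: data))
}
```

Again, Apple's real design adds safeguards this toy version ignores (human review, reporting thresholds, blinded databases), but the underlying capability is exactly this shape: a matcher plus a list, and the list can change.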
The Apple of the here and now might hand-on-heart swear that this system will only be used for good and that it won’t abuse it, but this is only reassuring to a point.
Let’s take some simple but contemporary examples such as COVID-19 anti-vax misinformation, or climate-change denialism. What if Apple decided that it was in the interests of the greater good to identify this material and step in to prevent its dissemination? Might not be a bad thing. Might be a thing that enough people could get behind.
And the CSAM mechanism would technically make this possible.
Would it be right?
One could argue that CSAM is illegal while anti-vax or climate-change misinformation is not.
OK, but laws vary from country to country. What if a country asked Apple to step in to identify and report other material that is illegal in that country? Does it become a game of cherry-picking what material to detect and what not to detect based on the PR fallout?
What if Apple decided to scan for any and all illegal material?
The mechanism to do this is in place.
Also, this is not only a question of geography, but of time.
The people at the helm of Apple today will not be the people at its helm in the future. Will they be as motivated to protect user privacy? Could they become complicit in abusing the system under government pressure?
These are all slippery-slope arguments, but that doesn’t eliminate the fact that slippery slopes do indeed exist and that vigilance itself is not a bad thing.
Do you trust Apple?