Would you use a mixed reality headset with a dozen cameras that could scan your eyes and living room if Facebook made one? Would you open up if Amazon.com had a journaling app that asked you to provide behavioral data every day? Would you divulge your information if Google created a watch that required you to record your mood swings in order to build a profile of your mental health?
If you said “no” to those questions, you have also drawn attention to the distinct niche Apple has carved out for itself. Over the past ten years, the company has built a reputation for safeguarding its customers’ privacy, and it is increasingly putting that reputation to the test. At its Worldwide Developers Conference this week, it revealed new tools that will analyze more personal consumer data than ever before, including mental health information and retinal scans.
The company’s new Journal app, due out this September, prompts users to write about their experiences throughout the day based on where they’ve been, what music they’ve listened to and what exercise they might have done. A new feature in Apple’s Health app, meanwhile, encourages people to log their daily emotions on their iPhone or Apple Watch. The goal is to “enable users… to better care for their mental health.”
That, along with all the private footage processed on the company’s new headset, is an extraordinary level of personal data to entrust to a single tech company. But privacy advocates this week were largely silent. That’s because Apple, with its estimated 2 billion active devices across the world, has built some of the highest levels of consumer trust in the tech industry. The company is so well regarded for its approach to data protection that Anthropic, one of the world’s most cutting-edge AI companies, borrowed text from Apple’s terms of service to help steer the values of its own “ethical” competitor to ChatGPT.
Five years ago, at the height of the Facebook and Cambridge Analytica scandal, the idea of selling a computer that scanned your retinas, or an app that tracked your mental health, would have seemed deeply invasive. But Apple can pitch such services, which need that data to become more personalised, because the company doesn’t collect it or repackage it for advertisers, the longstanding business model of social media and online search firms.
Apple can maintain its reputation for privacy because it is a devices company, meaning its apps process information on the devices themselves rather than on servers in the cloud. An array of increasingly powerful proprietary chips, like the R1 that powers the sensors and cameras in the new Vision Pro headset, makes that possible. Apple has been upgrading Siri to handle more commands offline, for instance, using the Neural Engine on iPhones and iPads with the A12 Bionic chip or later. By contrast, Amazon’s Alexa works only when its devices are connected to the Internet.
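The on-device principle described above can be illustrated with a toy sketch. This is not Apple code and the class and method names are invented for illustration; the point is only the data-flow pattern: raw personal entries stay in local state, and anything that leaves the “device” is a derived statistic or an irreversible digest rather than the data itself.

```python
import hashlib

class OnDeviceJournal:
    """Toy model of on-device processing (hypothetical, not an Apple API).

    Raw journal entries are held only in this object's local state; the
    outside world sees derived summaries or irreversible digests.
    """

    def __init__(self):
        self._entries = []  # raw personal data, kept "on device"

    def log(self, text: str) -> None:
        # All writes happen locally; nothing is transmitted here.
        self._entries.append(text)

    def weekly_summary(self) -> dict:
        # Aggregate statistics are computed on the device itself.
        count = len(self._entries)
        avg = sum(len(e) for e in self._entries) / count if count else 0.0
        return {"entries": count, "avg_length": avg}

    def sync_token(self) -> str:
        # Only an irreversible SHA-256 digest ever leaves the device,
        # so the raw entries cannot be reconstructed from it.
        joined = "\n".join(self._entries).encode("utf-8")
        return hashlib.sha256(joined).hexdigest()

journal = OnDeviceJournal()
journal.log("Walked in the park; listened to a podcast.")
journal.log("Felt calm after the workout.")
print(journal.weekly_summary()["entries"])  # 2
```

A cloud-first design would instead upload each raw entry to a server before any processing, which is exactly the pattern this architecture avoids.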
The optical data that Apple collects through the Vision Pro is encrypted, stays on the headset, and isn’t shared with Apple itself or with third-party apps and websites, the company said this week. The same goes for the reams of new behavioral data that its Health and new Journal apps will be collecting.