At its annual developers conference on Monday, Apple announced its long-awaited artificial intelligence system, Apple Intelligence, which will customize user experiences, automate tasks and — CEO Tim Cook promised — usher in a “new standard for privacy in AI.”
While Apple maintains that its in-house AI was built with security in mind, its partnership with OpenAI has drawn widespread criticism. OpenAI’s ChatGPT has long been the subject of privacy concerns: after launching in November 2022, it collected user data without explicit permission to train its models, and only began allowing users to opt out of such data collection in April 2023.
Apple says the ChatGPT integration will be invoked only with users’ explicit permission, and only for discrete tasks such as composing emails and other writing assistance. But security professionals will be watching closely to see how these and other concerns play out.
“Apple is saying a lot of the right things,” said Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance. “But it remains to be seen how it will be implemented.”
Apple is a latecomer to the generative AI race, lagging behind peers like Google, Microsoft and Amazon, whose shares have been boosted by investor confidence in AI. Apple, meanwhile, has so far refrained from integrating generative AI into its major consumer products.
The company would have you believe the wait was intentional — a means to “apply this technology responsibly,” Cook said at Monday’s event. While other companies have rushed products to market, Apple has spent the last few years building the majority of the Apple Intelligence offering with its own technology and its own foundation models, so that as little user data as possible leaves the Apple ecosystem.
Artificial intelligence, which relies on collecting large amounts of data to train large language models, poses a unique challenge to Apple’s long-standing focus on privacy. Vocal critics like Elon Musk have argued that maintaining user privacy while integrating AI is impossible; Musk even said he would ban his employees from using Apple devices for work if the announced integrations go ahead. But some experts disagree.
“With this announcement, Apple is paving the way for companies to balance data privacy and innovation,” said Gal Ringel, co-founder and CEO of data privacy software company Mine. “The positive reception to this news, unlike other recent AI product releases, shows that building the value of privacy is a strategy well worth pursuing in today’s world.”
Many recent AI releases have ranged from dysfunctional and silly to downright dangerous – harkening back to Silicon Valley’s classic “move fast and break things” ethos. Apple appears to be taking an alternative approach, Steinhauer said.
“If you think about the concerns we’ve had about AI so far, it’s that platforms often release products and then fix things as they emerge,” he said. “Apple is proactively addressing people’s common concerns. It is the difference between security by design and security as an afterthought, which will always be imperfect.”
At the heart of Apple’s privacy assurances around AI is its new Private Cloud Compute technology. Apple aims to run most Apple Intelligence functions directly on the device. For functions that require more processing power than the device can handle, the company will offload the work to the cloud while “protecting user data,” Apple executives said Monday.
To achieve this, Apple exports only the data necessary to fulfill each request, applies additional security measures to that data at every endpoint, and does not retain the data indefinitely. Apple will also publicly post all tools and software related to its private cloud so that third parties can verify its claims, executives said.
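Apple has not published the routing logic behind Private Cloud Compute, but the general shape of the approach described above (run locally when possible, export only the minimum payload when the cloud is needed) can be sketched. The Swift snippet below is a minimal, hypothetical illustration; every type name and the complexity threshold are invented for the example, not drawn from Apple’s actual APIs.

```swift
import Foundation

// Hypothetical request the on-device assistant needs to fulfill.
struct AssistantRequest {
    let prompt: String
    let attachments: [Data]   // e.g. a photo or document the user referenced
}

// Minimal payload actually sent off-device: only what the task requires.
// Attachments are deliberately excluded unless a task needs them.
struct CloudPayload: Codable {
    let prompt: String
}

enum Processor {
    case onDevice
    case privateCloud
}

// Stay on the device unless the request exceeds a (made-up) local
// complexity budget; real systems would use a far richer heuristic.
func route(_ request: AssistantRequest, localBudget: Int = 512) -> Processor {
    request.prompt.count <= localBudget ? .onDevice : .privateCloud
}

func handle(_ request: AssistantRequest) {
    switch route(request) {
    case .onDevice:
        print("Running locally; no data leaves the device.")
    case .privateCloud:
        // Serialize only the fields needed for this request and rely on
        // the transport layer for per-endpoint protections; the payload
        // is not kept around after the response arrives.
        let body = try? JSONEncoder().encode(CloudPayload(prompt: request.prompt))
        print("Sending \(body?.count ?? 0) bytes to the private cloud.")
    }
}

handle(AssistantRequest(prompt: "Summarize my notes from today.", attachments: []))
```

The design point the sketch captures is data minimization: the cloud path serializes a payload containing only the fields the task needs, rather than forwarding the full request and its attachments.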
Private Cloud Compute is “a remarkable leap forward in AI privacy and security,” said Krishna Vishnubhotla, vice president of product strategy at mobile security platform Zimperium, who added that the independent verification component is especially notable.
“These innovations not only promote user trust, but also advance higher security standards for mobile devices and apps,” he said.