Apple is quietly building up its artificial intelligence capabilities through a series of acquisitions, staff hires and hardware upgrades aimed at bringing AI to its next generation of iPhones.
Industry data, academic papers and accounts from people inside the technology industry suggest the company has focused its attention on overcoming the technical hurdle of running AI on mobile devices.
The iPhone maker has been more active than rival big tech companies in buying AI startups, acquiring 21 of them since the start of 2017, according to PitchBook research. The most recent of these deals was the early-2023 purchase of California-based startup WaveOne, which develops AI-powered video compression.
“They’re getting ready to do some significant mergers and acquisitions,” said Daniel Ives of Wedbush Securities. “I would be shocked if they didn’t make a sizable AI deal this year, because there’s an AI arms race going on, and Apple isn’t going to sit this out.”
According to a recent assessment by Morgan Stanley, nearly half of Apple’s AI job postings now include the term “deep learning,” which refers to the algorithms behind generative AI: models that can generate humanlike text, audio and code in seconds. The company hired Google’s top AI executive, John Giannandrea, in 2018.
Apple has opted for secrecy about its AI plans, even as major technology competitors, such as Microsoft, Google and Amazon, highlight billion-dollar investments in the area. But according to behind-the-scenes industry insiders, the company is working on its own large language models — the technology that powers generative AI products like OpenAI’s ChatGPT.
Apple CEO Tim Cook told analysts in the middle of last year that the company “has been doing research across a broad range of AI technologies” and was investing and innovating “responsibly” when it comes to new technology.
Apple’s goal appears to be running generative AI directly on mobile devices, which would allow chatbots and AI applications to operate on the phone’s own hardware and software rather than relying on cloud services in data centers.
This technological challenge requires reductions in the size of the large language models that power AI, as well as high-performance processors.
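One standard way to shrink a model, widely used across the industry (the article does not specify Apple's approach), is weight quantization: storing parameters as 8-bit integers instead of 32-bit floats, cutting memory use by roughly four times. A minimal sketch, using a hypothetical weight matrix standing in for one model layer:

```python
import numpy as np

# Hypothetical weight matrix standing in for one layer of a language model.
rng = np.random.default_rng(0)
weights = rng.standard_normal((4096, 4096)).astype(np.float32)

# Simple symmetric 8-bit quantization: map floats to int8 with one scale factor.
scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)

# De-quantize at inference time to approximate the original weights.
restored = quantized.astype(np.float32) * scale

print(f"float32 size: {weights.nbytes / 1e6:.1f} MB")    # 67.1 MB
print(f"int8 size:    {quantized.nbytes / 1e6:.1f} MB")  # 16.8 MB
```

Production schemes are more sophisticated (per-channel scales, 4-bit formats), but the trade-off is the same: a smaller memory footprint in exchange for a bounded loss of precision.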
Other device makers have moved faster than Apple. Samsung and Google have launched new devices that they say run generative AI capabilities through the phone.
Apple’s Worldwide Developers Conference, usually held in June, is widely expected to be the event at which the company unveils its latest operating system, iOS 18. Morgan Stanley analysts expect the new mobile software to be geared toward enabling generative AI, potentially including a version of the Siri voice assistant powered by an LLM.
“They tend to wait until there is a confluence of technology, and they can offer one of the best representations of that technology,” said Igor Jablokov, CEO of AI business group Pryon and founder of Yap, a speech recognition company acquired by Amazon in 2011 to power its Alexa and Echo products.
Apple has also unveiled new chips with greater capacity to run generative AI. The company said its M3 Max processor for the MacBook, announced in October, “unlocks workflows previously impossible on a laptop,” such as AI developers working with billions of data parameters.
The S9 chip for new versions of the Apple Watch, revealed in September, allows Siri to access and record data without connecting to the internet. And the A17 Pro chip in the iPhone 15, also announced around the same time, has a neural engine that the company says is twice as fast as previous generations.
“When it comes to the chips in your devices, they are definitely increasingly focused on AI in the future from a design and architectural standpoint,” said Dylan Patel, an analyst at semiconductor consulting firm SemiAnalysis.
Apple researchers published a paper in December announcing a breakthrough in running LLMs on devices by using flash memory, which means queries can be processed faster, even offline.
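The core idea in the paper is to keep model weights in flash storage and pull only the pieces needed for each computation into limited RAM. A rough illustration of the principle (not Apple's actual method) is memory-mapping a weight file, so the operating system loads only the rows that are actually touched:

```python
import os
import tempfile
import numpy as np

# Write a hypothetical weight file to disk, standing in for
# model parameters held in flash storage.
path = os.path.join(tempfile.mkdtemp(), "weights.npy")
np.save(path, np.arange(1_000_000, dtype=np.float32).reshape(1000, 1000))

# mmap_mode="r" maps the file without reading it all into RAM;
# the OS pages in only the regions we actually access.
weights = np.load(path, mmap_mode="r")

# Touch a single row: only that slice is materialized in memory.
row = np.asarray(weights[42])
print(row[:3])  # [42000. 42001. 42002.]
```

The research goes further, predicting which weights will be needed and batching reads to suit flash's access patterns, but the memory-mapping idea above captures why a model larger than available RAM can still run on-device.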
In October, the company released an open-source LLM in partnership with Columbia University. “Ferret” is currently limited to research use and in effect acts as a second pair of eyes, describing to the user what they are seeing, including specific objects within an image.
“One of the problems with an LLM is that the only way to experience the world is through text,” said Amanda Stent, director of the Davis Institute for AI at Colby College. “That’s what makes Ferret so exciting: you can start to literally connect language to the real world.” At this stage, however, the cost of running a single “inference” query of this type would be enormous, Stent said.
Such technology could be used, for example, in a virtual assistant that can tell the user what brand of shirt someone is wearing on a video call and then order it through an app.
Last week, Microsoft overtook Apple as the world’s most valuable listed company, with investors excited about the software group’s AI initiatives.
At the same time, analysts at Bank of America raised their rating on Apple shares last week. Among other things, they cited expectations that the iPhone upgrade cycle will be driven by demand for new generative AI features to be released this year and in 2025.
Laura Martin, senior analyst at investment bank Needham, said the company’s AI strategy will be “to the benefit of its Apple ecosystem and to protect its base.”
“Apple doesn’t want to be in the business of what Google and Amazon want to do, which is be the backbone of all American businesses that build applications in large language models,” Martin said.