Anticipation fuels AI.
Apple's AI advancements at WWDC fuel market enthusiasm and boost its stock value.
Myths about AI debunked.
Despite rapid advances, human-level AI remains distant, not as imminent as many perceive.
Apple's AI independence.
Apple is developing its own language models in-house, emphasizing its own capabilities over its collaboration with OpenAI.
The scene at Apple's WWDC (Worldwide Developers Conference) last week was, in some ways, emblematic of the era. The tech giant's AI announcements were heavily publicized ahead of the show. The products themselves were interesting, but not the "next iPhone" some expected.
Still, the market loved it, adding hundreds of billions to Apple’s market cap in just a few days.
A good part of this AI moment is based on anticipation: there is a belief that models will keep improving, products will keep improving, and people and companies will adopt them. We don't know for sure where all this is taking us, but it is going somewhere, and that seems to be enough. The demos should work eventually.
I spent the week in Silicon Valley visiting sources and technology companies (starting at Apple and ending at NVIDIA) to get a sense of where we are on the continuum, who is positioned to lead, and how power is shifting. Much of what I learned will appear in future stories and episodes of the Big Technology Podcast, so stay tuned. But here is what stands out in my notebook:
Room for AI models to improve
It doesn't look like generative AI will hit its resource limits anytime soon, at least according to those closest to the work. AI research houses are watching constraints like computing, data, and energy. But they also see room to improve the current set of models by selecting better data, fine-tuning the models, and developing new capabilities such as reasoning. Meanwhile, incoming computing improvements should lead to more powerful and efficient training and inference.
The next 18 months will be interesting.
Expectations could remain unrealistic
Still, the popular conversation about AI tends to present human-level artificial intelligence as something just around the corner. It's not. The next generation of models will be impressive, but the release of ChatGPT (built on a version of OpenAI's GPT-3), followed shortly by GPT-4, made the pace of AI development seem faster to many than it actually is. Training and perfecting these models takes a long time.
So while GPT-5 and its peers will be hotly hyped and anticipated, the push toward reasoning and AI agents may deliver more tangible gains in the near term than sheer model size.
OpenAI could be a placeholder in Apple’s intelligence
Sam Altman is a master negotiator, but what if he is simply keeping the seat warm for Google inside the new AI-powered iPhone? Recently, Bloomberg's Mark Gurman reported that Apple is not paying OpenAI for the use of ChatGPT on its next generation of iPhones. Apple has also been negotiating with Google for four or five months for a similar placement, he told me.
The key question then is: who occupies the default position? Google pays Apple $20 billion a year to be the default search engine on its products, and if the economics can be worked out, a similar (or smaller) deal could eventually supplant ChatGPT with Gemini as Apple's default AI. (Gurman discussed this recently on the Big Technology Podcast.)
NVIDIA's key ratio
I recently spent a wild day inside NVIDIA, speaking with company leaders from morning to night about the technology driving this moment. There will be much more to come from NVIDIA in the coming months, but here's a fun fact: NVIDIA has more software engineers than hardware engineers. NVIDIA's dominance rests on much more than chips, starting with the fact that its software is essential for training AI models, and the company's workforce reflects this.
Apple tests small language models
The real surprise at WWDC was that Apple used mostly its own models, not OpenAI's, to power Apple Intelligence. ChatGPT was mainly an add-on in the company's demos.
To make its AI experience work, Apple created a series of small language models that reside on the device. These more focused, less compute-intensive AI models are good at specific tasks like proofreading and summarization (The Verge has a good article on them).
For those watching, this showed that A) Apple can make real progress in developing AI models and B) these smaller models can be useful in bringing big ideas to life, even for the largest companies. Now we'll wait to see whether those demos work in real life.
Keynote USA News