With the cover lifted on macOS Tahoe 26, iOS 26, iPadOS 26, watchOS 26 and tvOS 26, app developers will be looking to upgrade their apps to take advantage of the new Liquid Glass interface as well as a bunch of new programming interfaces Apple has opened up. And that means they’ll be using a new version of Xcode – the developer toolkit that is required for building apps.
One of the new additions to Xcode 26 over previous versions is the ability for developers to use ChatGPT to help build their applications. Here’s what Apple says:
Xcode has built-in support for ChatGPT, and developers can use API keys from other providers, or run local models on their Mac with Apple silicon, to choose the model that best suits their needs. Developers can start using ChatGPT in Xcode without needing to create an account, and subscribers can connect their accounts to access more requests.
So, developers will be able to work with OpenAI’s large language models (LLMs) directly within Xcode as they build their applications.
But the use of AI goes further. The new Foundation Models framework enables developers to use Apple Intelligence within their applications, even when offline, by using the on-device AI models Apple has developed. Apple says developers can access the Apple Intelligence model with as few as three lines of code.
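To give a sense of what that looks like in practice, here is a minimal sketch of the kind of code Apple is describing. It assumes the Foundation Models framework’s announced session API (names like LanguageModelSession and respond(to:) are based on Apple’s developer materials; treat the exact signatures, and the prompt, as illustrative rather than definitive):

```swift
import FoundationModels

// Create a session backed by the on-device Apple Intelligence model –
// no account, API key or network connection required.
let session = LanguageModelSession()

// Ask the model for a response. The call runs asynchronously and can
// throw, so it must be made from an async context.
let response = try await session.respond(to: "Suggest three names for a cycling app")
print(response.content)
```

Because the model runs entirely on-device, a call like this works offline, and the app never sends the user’s prompt to a server.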
Guided generation, tool calling and other features are all built into the framework making it easier to implement generative capabilities into an existing app. So, we can expect AI to, perhaps invisibly, become part of our everyday experience.
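Guided generation is worth a quick illustration: instead of parsing free-form text out of the model, the developer declares a Swift type describing the output they want, and the framework constrains the model to produce it. The sketch below is based on the @Generable and @Guide macros Apple has described; the type, property names and prompt are my own hypothetical examples, and the exact respond(to:generating:) signature should be treated as an assumption:

```swift
import FoundationModels

// Describe the structured output you want as a plain Swift type.
// @Guide annotations tell the model what each field should contain.
@Generable
struct TripIdea {
    @Guide(description: "A short, catchy title for the trip")
    var title: String

    @Guide(description: "Estimated length of the trip in days")
    var days: Int
}

let session = LanguageModelSession()

// The framework steers the model so the result arrives as a typed
// TripIdea value rather than free-form text to be parsed.
let response = try await session.respond(
    to: "Plan a weekend trip to Tasmania",
    generating: TripIdea.self
)
print(response.content.title, response.content.days)
```

This is the sense in which generative features can disappear into an app: the user just sees a filled-in trip card, not a chatbot.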
This is what I think Apple sees the endgame as. Rather than users using third-party services to do their AI tasks, the AI will be seamlessly built into the app in subtle, perhaps unnoticeable, ways.
I see it as being like the evolution of autocorrect. Back in the old days, spellchecking was a feature that was added to word processing software. Originally, you had to invoke spellcheck, but that evolved into real-time checking with those pesky squiggly red lines. Microsoft Word then introduced AutoCorrect back in the 1990s, which automatically fixed things like consecutive capital letters and common typos.
Incidentally, there’s a fascinating article on the history of autocorrect at Wired.
Today, autocorrect has become a lot smarter, and I expect it not only to fix my typos but to correct words I use incorrectly (like the wrong to or too when my fingers hit the keyboard incorrectly).
AI will become like that. Today it’s all a bit clunky. But over time, it will become so embedded in what we do that we won’t think twice about it.
The biggest issue in this will be the trustworthiness of those AI models. When autocorrect makes a mistake, we usually pick it up and make a manual adjustment. But what if an app uses an AI model that has been intentionally corrupted or that has been trained on erroneous data? Will we even know?
The World Economic Forum has noted that “the intertwined risks of AI-driven misinformation and disinformation” are a major global risk. And as AI becomes more embedded in our day-to-day lives, perhaps without us even knowing or noticing, this will become an even bigger risk.
Anthony is the founder of Australian Apple News. He is a long-time Apple user and former editor of Australian Macworld. He has contributed to many technology magazines and newspapers as well as appearing regularly on radio and occasionally on TV.