I’m going to come right out and say it – I’m a generative AI sceptic. The error rate when it comes to answering fact-based questions and creating summaries is too high for my liking. And Apple Intelligence is doing little to dissuade me from that position.
Generative AI relies on two things: software models, or algorithms, that can ‘make sense’ of a bunch of data; and data that is used to ‘train’ the models. I’ve put those words in quote marks for a specific reason. There is a strong tendency for people to anthropomorphise generative AI and ascribe human characteristics and expectations to the software.
The software models are, in simple terms, proximity and probability tools: given your query, they determine which words are statistically most likely to fit together and respond with the best-fitting sequence of words.
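To make that idea concrete, here is a toy sketch of "probability of the next word" in Python. This is a simple bigram counter, vastly cruder than the neural models behind tools like Apple Intelligence, and the corpus is invented purely for illustration, but it captures the core principle: the model has no understanding, only counts of which words tend to follow which.

```python
from collections import Counter, defaultdict

# Invented toy corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the word most often observed after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "cat" follows "the" in 2 of 4 occurrences, so it wins.
print(most_likely_next("the"))  # prints "cat"
```

The program does not know what a cat is; it simply reports the statistically best fit. Scale that principle up by billions of parameters and you have, in caricature, a generative AI model.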
Today, I asked Apple Intelligence, via Writing Tools, to summarise two documents, separately, in point form. Both were interviews with only two speakers, designated Speaker 1 and Speaker 2. The topic of conversation was reasonably narrow, and several different companies and technologies were mentioned.
The result was a totally unusable mess. The summary mixed technologies with the wrong companies and drew conclusions that were clearly fallacious. Fortunately, I was alert to this and did not rely on the summaries. But others may not be so alert.
I’m already seeing students use generative AI tools to assist them with their school work, usually with a teacher’s consent (though that consent is often tacit rather than explicit). And I’m seeing meeting summaries pop up all over the place.
I recently shared a podcast about the impact of generative AI on human cognition, and the person I shared it with used Gemini to generate an AI summary rather than listen to the podcast.
The irony did not escape me.
When generative AI hit the masses in 2023, it was seen as a massive boon. But it’s not a perfect technology. Its environmental impact is substantial, its accuracy is questionable, and there is the ethical question of how these tools acquire the massive amounts of data they need.
I admit that I was optimistic when ChatGPT first appeared. It seemed a tool had arrived that could make a significant difference to my work. But I’m left unable to trust the technology. And the ethics of how these tools were created, along with their ongoing impact, make me even more sceptical of their benefit.
Anthony is the founder of Australian Apple News. He is a long-time Apple user and former editor of Australian Macworld. He has contributed to many technology magazines and newspapers as well as appearing regularly on radio and occasionally on TV.