An artificial intelligence feature on iPhones is generating fake news alerts, stoking concerns about the technology’s ability to spread misinformation.
Last week, a recently launched Apple feature that summarizes user notifications using AI inaccurately summarized a BBC News app notification about the broadcaster’s story on the PDC World Darts Championship semi-final, wrongly claiming that British darts player Luke Littler had won the championship.
The incident occurred a day before the tournament final, which Littler won.
Then, just hours after that incident, a separate notification generated by Apple Intelligence, the tech giant’s AI system, falsely claimed that tennis legend Rafael Nadal had come out as gay.
The BBC has been trying to get Apple to fix the problem for about a month. The British public broadcaster complained to Apple in December after its AI feature generated a fake headline suggesting that Luigi Mangione, the man arrested in connection with the murder of Brian Thompson, CEO of health insurer UnitedHealthcare, in New York, had committed suicide, which never happened.
Apple was not immediately available for comment when contacted by CNBC. On Monday, Apple told the BBC that it was working on an update to address the issue by adding a clarification showing when Apple Intelligence is responsible for the text displayed in notifications. Currently, generated news notifications appear as if they come directly from the source.
“Apple Intelligence features are in beta and we are continually making improvements using user feedback,” the company said in a statement shared with the BBC. Apple added that it encourages users to report a problem if they see an “unexpected notification summary.”
The BBC is not the only news organization affected by inaccurate Apple Intelligence summaries of news notifications. In November, the feature sent an AI-summarized notification falsely claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested.
The error was reported on the social media app Bluesky by Ken Schwencke, editor-in-chief of the investigative journalism site ProPublica.
CNBC reached out to the BBC and The New York Times for comment on Apple’s proposed solution to its AI feature’s misinformation problem.
The problem of AI misinformation
Apple says the feature is designed to help users scan their notifications for key details and cut through the overwhelming barrage of updates that many smartphone users experience.
However, this has given rise to what AI experts call “hallucinations”: AI-generated responses that contain false or misleading information.
“I suspect Apple won’t be alone in having difficulty with AI-generated content. We’ve already seen many examples of AI services confidently telling untruths, so-called ‘hallucinations,’” Ben Wood, chief analyst at tech-focused market research firm CCS Insight, told CNBC.
In Apple’s case, because the AI tries to consolidate notifications and condense them into a basic summary, it can mix up words in a way that mischaracterizes events, while presenting the result confidently as fact.
“Apple had the added complexity of trying to compress content into very short summaries, which ended up delivering the wrong messages,” Wood added. “Apple will undoubtedly seek to resolve this issue as quickly as possible, and I’m sure its competitors will be watching its response closely.”
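As a toy illustration (hypothetical code, not Apple’s actual summarization pipeline), even a trivial condensation rule can splice the subject of one notification onto the outcome of another, producing a fluent but false headline much like the Littler incident:

```python
# Hypothetical sketch - NOT Apple's pipeline. Two individually accurate
# notifications are condensed into one short line, splicing the name
# from the first onto the outcome of the second.
notifications = [
    "Luke Littler reaches World Darts Championship final",
    "Defending champion wins the title",
]

def naive_merge(items):
    """Condense several notifications by keeping the opening words of
    the first item and the closing words of the last."""
    head = items[0].split()[:2]    # who the story is about
    tail = items[-1].split()[-3:]  # what supposedly happened
    return " ".join(head + tail)

print(naive_merge(notifications))  # -> "Luke Littler wins the title"
```

Real AI summarizers are vastly more sophisticated than this two-line rule, but the failure mode is analogous: aggressive compression across sources can yield a grammatical summary that no source actually said.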
Generative AI works by trying to produce the best possible answer to a question or prompt entered by a user, drawing on the vast amounts of data on which its underlying large language models are trained.
Sometimes the AI doesn’t know the answer to a question. But because it is programmed to always present a response to user prompts, this can lead to cases where the AI effectively lies.
It’s unclear exactly when Apple will fix the bug in its notification summary feature. The iPhone maker said only that it expects a fix to arrive in “the coming weeks.”