Meta addresses AI hallucination as chatbot says Trump shooting didn’t happen

An image of a woman holding a cell phone in front of the Meta logo displayed on a computer screen.

Credit: Getty Images | NurPhoto

Meta says it configured its AI chatbot to deflect questions about the Trump rally shooting so that it wouldn't distribute false information, but the tool still ended up telling some users that the shooting never happened.

“Rather than have Meta AI give incorrect information about the attempted assassination, we programmed it to simply not answer questions about it after it happened—and instead give a generic response about how it couldn’t provide any information,” Meta Global Policy VP Joel Kaplan wrote in a blog post yesterday.

Kaplan explained that this “is why some people reported our AI was refusing to talk about the event.” But others received misinformation about the Trump shooting, Kaplan acknowledged:

In a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn’t happen—which we are quickly working to address. These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward. Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we’ll continue to address these issues and improve these features as they evolve and more people share their feedback.

The company has “updated the responses that Meta AI is providing about the assassination attempt, but we should have done this sooner,” Kaplan wrote.

Meta bot: “No real assassination attempt”

Kaplan’s explanation was published a day after The New York Post said it asked Meta AI, “Was the Trump assassination fictional?” The Meta AI bot reportedly responded, “There was no real assassination attempt on Donald Trump. I strive to provide accurate and reliable information, but sometimes mistakes can occur.”

The Meta bot also provided the following statement, according to the Post: “To confirm, there has been no credible report or evidence of a successful or attempted assassination of Donald Trump.”

The shooting occurred at a Trump campaign rally on July 13. The FBI said in a statement last week that “what struck former President Trump in the ear was a bullet, whether whole or fragmented into smaller pieces, fired from the deceased subject’s rifle.”

Kaplan noted that AI chatbots “are not always reliable when it comes to breaking news or returning information in real time,” because “the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained.”

AI bots are easily confused after major news events “when there is initially an enormous amount of confusion, conflicting information, or outright conspiracy theories in the public domain (including many obviously incorrect claims that the assassination attempt didn’t happen),” he wrote.

Facebook mislabeled real photo of Trump

Kaplan’s blog post also addressed a separate incident in which Facebook incorrectly labeled a post-shooting photo of Trump as having been “altered.”

“There were two noteworthy issues related to the treatment of political content on our platforms in the past week—one involved a picture of former President Trump after the attempted assassination, which our systems incorrectly applied a fact check label to, and the other involved Meta AI responses about the shooting,” Kaplan wrote. “In both cases, our systems were working to protect the importance and gravity of this event. And while neither was the result of bias, it was unfortunate and we understand why it could leave people with that impression. That is why we are constantly working to make our products better and will continue to quickly address any issues as they arise.”

Facebook’s systems were apparently confused by the fact that both real and doctored versions of the image were circulating:

[We] experienced an issue related to the circulation of a doctored photo of former President Trump with his fist in the air, which made it look like the Secret Service agents were smiling. Because the photo was altered, a fact check label was initially and correctly applied. When a fact check label is applied, our technology detects content that is the same or almost exactly the same as those rated by fact checkers, and adds a label to that content as well. Given the similarities between the doctored photo and the original image—which are only subtly (although importantly) different—our systems incorrectly applied that fact check to the real photo, too. Our teams worked to quickly correct this mistake.
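Meta hasn't published how its label-propagation system matches images, but near-duplicate detection is commonly done with perceptual hashing. The sketch below (a minimal average-hash implementation on toy data, not Meta's actual code) illustrates the failure mode Kaplan describes: a subtle edit barely changes an image's hash, so a system that treats close hashes as "the same image" will carry a label from the doctored photo over to the original.

```python
# Hypothetical sketch of near-duplicate image matching via average
# hashing (aHash). This is illustrative only; Meta's matcher is not
# public. The point: a small local edit can leave the hash unchanged,
# so the real and doctored photos look identical to the system.

def average_hash(pixels):
    """Hash an 8x8 grayscale image given as a list of 64 ints (0-255).

    Each bit records whether a pixel is brighter than the image mean.
    """
    mean = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count the bits that differ between two hashes."""
    return bin(h1 ^ h2).count("1")

# Toy data: an "original" gradient image and a copy with one
# subtly edited pixel (the kind of change Kaplan calls "only
# subtly (although importantly) different").
original = [i * 4 for i in range(64)]   # values 0..252
doctored = original.copy()
doctored[10] += 30                      # small local edit

distance = hamming_distance(average_hash(original),
                            average_hash(doctored))
# Here the edit doesn't flip a single hash bit, so the distance is 0:
# any threshold-based matcher would propagate a fact-check label
# applied to one image onto the other.
print(distance)  # prints 0
```

In a production system the threshold would be tuned so that crops, recompressions, and minor edits still match, which is exactly why the "only subtly different" real photo fell inside the doctored photo's match radius.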

Kaplan said that both “issues are being addressed.”

Trump responded to the incident in his usual evenhanded way, typing in all caps to accuse Meta and Google of censorship and of attempting to rig the presidential election. He apparently mentioned Google because of search autocomplete results that angered Trump supporters despite having a benign explanation.
