Meta's AI assistant recently and erroneously claimed that the attempted assassination of former President Donald Trump never occurred. Meta attributes the error to the technology underlying its chatbot, and the episode highlights ongoing challenges with generative AI systems.


Joel Kaplan, Meta's global head of policy, explained in a blog post that the AI was initially programmed to avoid responding to questions about the incident. However, once this restriction was lifted, the AI occasionally provided incorrect answers, sometimes denying the event altogether. Kaplan described these errors as "hallucinations," a known issue that generative AI systems exhibit when handling real-time events. He said Meta is working quickly to correct the inaccuracies and to improve the assistant's responses based on user feedback.


This issue isn't isolated to Meta. Google also faced criticism for allegedly censoring search results related to the assassination attempt, and former President Trump accused both companies of election interference in a post on Truth Social.


The tech industry continues to grapple with generative AI's tendency to produce falsehoods, despite efforts to ground chatbots in quality data and real-time search results. This incident underscores how difficult it is to mitigate flaws that are inherent to large language models.
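In practice, "grounding" typically means injecting retrieved, dated sources into the model's prompt and instructing it to answer only from that evidence. The sketch below is a minimal illustration of that general pattern, not Meta's or Google's actual pipeline; the `Source` class, the `build_grounded_prompt` helper, and the sample data are all hypothetical.

```python
# A minimal sketch of grounding: prepend retrieved, timestamped sources
# to the model's prompt so answers about recent events lean on evidence
# rather than stale training data. All names here are illustrative.

from dataclasses import dataclass


@dataclass
class Source:
    title: str
    published: str  # ISO date, so the model can judge recency
    snippet: str


def build_grounded_prompt(question: str, sources: list[Source]) -> str:
    """Format retrieved search results into a prompt that instructs the
    model to answer only from the evidence provided, or else abstain."""
    evidence = "\n".join(
        f"- [{s.published}] {s.title}: {s.snippet}" for s in sources
    )
    return (
        "Answer using ONLY the sources below. If they do not cover the "
        "question, say you don't know rather than guessing.\n\n"
        f"Sources:\n{evidence}\n\n"
        f"Question: {question}\nAnswer:"
    )


# Hypothetical usage: a retrieval step would supply the sources, and the
# resulting prompt would be passed to a chat model.
sources = [
    Source("Wire report", "2024-07-13",
           "Shots were fired at a campaign rally in Butler, Pennsylvania."),
]
print(build_grounded_prompt("Was there an attempt on Trump's life?", sources))
```

Even with this kind of scaffolding, as the Meta incident shows, a model can still override or misread its evidence, which is why hallucinations remain an open problem rather than a solved one.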