AI’s Knowledge Explosion vs. the Collapse of Truth

Written by
Tom Chavez
Published on
August 14, 2025

The Exponential Growth of AI’s Knowledge

AI systems today are trained on vast swaths of the world’s data, enabling them to deliver encyclopedic answers and create content in seconds. The collective knowledge accessible to AI is growing faster than at any point in human history. Consider the “knowledge doubling curve”: until 1900, human knowledge doubled roughly every century; by the mid-20th century it was every 25 years. Today, some estimates suggest overall knowledge doubles in about 13 months on average – and it could soon double in mere hours.

However, sheer quantity doesn’t guarantee quality. AI language models often hallucinate – confidently producing statements that are false or unverified. They remix fact and fiction from their training data, and the volume of AI-generated content is overwhelming the traditional filters (editors, experts, slower news cycles) that used to vet information. We now have near-infinite information, but no reliable compass for truth.

The Decline of Verification and Fact-Checking

While AI’s knowledge has surged, our mechanisms for verifying information have weakened. Economic and political pressures have led many media outlets and platforms to scale back on fact-checking. Speed and engagement often beat out accuracy in the race for clicks and views. Social media companies, once pressured to police false content, are now retreating from that role. In January 2025, for instance, Facebook’s parent Meta ended its third-party fact-checking program in the U.S., with CEO Mark Zuckerberg arguing that fact-checkers were “censors” undermining trust. Critics warned this decision would “effectively legitimize disinformation” on Meta’s huge networks.

The broader trend is a crisis of trust. Surveys such as the Reuters Institute’s Digital News Report find that only about 40% of people say they trust the news. In a world of endless claims and counterclaims, many simply don’t know what to believe anymore and grow cynical about everything. Fact-checkers still debunk falsehoods, but corrections rarely spread as widely as the initial lie. Oxford Dictionaries even named “post-truth” its 2016 Word of the Year – reflecting how emotion often outweighs fact. That situation is only amplified by AI’s ability to generate endless “facts” without verification.

Misinformation as a Valuable Commodity

In this post-truth landscape, misinformation has become both a commodity and a weapon. Falsehoods can spread faster and farther than truth, and AI is turbocharging that trend by mass-producing persuasive fakes at minimal cost. Malicious actors can use generative AI to crank out fake articles, deepfake images, or bogus social media posts, gaining eyeballs and influence before the facts catch up. The incentives to do so are huge.

In May 2023, an AI-generated image of a fake explosion at the Pentagon went viral on social media, briefly erasing billions in market value before authorities confirmed it was a hoax. A single phony photo suggesting a terror attack moved U.S. stock markets within minutes – a wake-up call for how quickly a compelling lie can trigger real-world consequences. Geopolitics and business are not immune either: during Russia’s 2022 invasion of Ukraine, a deepfake video of President Volodymyr Zelenskyy “surrendering” circulated online (and was quickly exposed). Criminals have also used AI voice clones to impersonate executives in scam calls – in one widely reported case, a company lost $243,000 to such a deepfake scheme.

Each of these examples underscores a troubling reality: misinformation pays. Whether it’s garnering clicks and ad revenue, manipulating stock prices, scoring propaganda victories, or outright theft, the value of a well-placed lie has increased. And with AI, the supply of lies is virtually limitless.

Ethical Dilemmas and the “Liar’s Dividend”

The rise of AI-driven misinformation poses thorny ethical challenges, straddling business and philosophy. On one hand, companies see AI as a revolutionary tool for efficiency; on the other, if its output can’t be trusted, the reputational and legal risks are enormous. Organizations must now weigh profit and speed against truth and accuracy.

On a societal level, there’s a deeper philosophical conflict. We face a potential “liar’s dividend” – if anything can be faked, then everything can be denied. As one expert notes, authoritarians are already claiming that “every inconvenient fact is just a lie or ‘Western propaganda.’” When seeing is no longer believing, the default reaction becomes distrust, and that’s a dangerous foundation for society.

We are at a crossroads. Will we accept a future where reality is endlessly contested? Or will we strive to uphold a shared baseline of truth?

Navigating Truth in the AI Era: Solutions and Hope

It’s not all doom and gloom – the same technology fueling the truth crisis can also help solve it. A number of initiatives are emerging to restore balance:

  • AI for Fact-Checking: Researchers are developing AI tools to automatically cross-check claims against reliable data, helping human fact-checkers by flagging dubious statements.
  • Content Authentication: Tech companies are working on ways to watermark or label AI-generated content. Some models now embed hidden markers in AI-created images, and future platforms might routinely alert users that “this content was AI-generated,” providing crucial context.
  • Education and Literacy: Improving public digital literacy is critical. Simple habits – like reverse image searches, checking multiple sources, and maintaining healthy skepticism – help people spot fakes. Companies can train employees to “trust, but verify” AI outputs rather than accepting them at face value.
  • Policy and Collaboration: Policymakers are exploring regulations on AI and misinformation, while tech platforms, news outlets, and fact-checkers collaborate on standards for truth in content (for example, transparent correction practices). A multi-stakeholder effort will be needed to rebuild a truth-friendly digital ecosystem.
  • Cybersecurity: Leading IT security companies increasingly use AI to uncover malicious code and emerging threats – threats that are themselves increasingly synthesized with AI. This takes AI’s promise beyond the truth/distrust dichotomy into the concrete realm of crime fighting, where it can be a powerful force for good.
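To make the first of these ideas concrete, here is a minimal sketch of automated claim-triage: compare an incoming claim against a small set of verified statements and flag anything without a close match for human review. Production fact-checking systems use retrieval and natural-language-inference models rather than string similarity; the verified corpus, the threshold, and the `flag_claim` helper below are all hypothetical illustrations.

```python
import difflib

# Hypothetical mini-corpus of statements already verified by human fact-checkers.
VERIFIED_STATEMENTS = [
    "The Pentagon explosion image from May 2023 was AI-generated and fake.",
    "Meta ended its third-party fact-checking program in the United States.",
]

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1]; real systems use semantic models."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_claim(claim: str, threshold: float = 0.6) -> bool:
    """Return True if no verified statement matches the claim closely enough,
    meaning the claim should be routed to a human fact-checker."""
    best = max(similarity(claim, s) for s in VERIFIED_STATEMENTS)
    return best < threshold

# A near-duplicate of a verified statement is not flagged; an unrelated
# claim is flagged for review.
print(flag_claim("The Pentagon explosion image from May 2023 was fake."))
print(flag_claim("A confirmed attack on the Capitol crashed markets today."))
```

The point of the sketch is the workflow, not the matching method: AI does the cheap, high-volume screening, and scarce human attention is reserved for the claims that fail it.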

Ultimately, truth is not just a philosophical nicety – it’s the bedrock of both business trust and societal progress.

Conclusion: Embracing the Paradox Thoughtfully

The paradox of AI’s knowledge explosion vs. the collapse of truth forces us to confront what we value as progress. AI can make us better informed than ever before, but only if we ensure that the knowledge and information it provides are accurate and trustworthy. The coming years will test our collective wisdom: Will we use AI to illuminate and uplift our understanding, or allow it to drown us in a flood of believable lies?

Striking the right balance will require innovation, education, and a renewed commitment to truth from leaders at every level. Business executives, technologists, policymakers, and everyday citizens all have a role to play in shaping AI’s impact on our information ecosystem. In the end, the same power of AI that threatens to obscure reality could be harnessed to protect it – if we choose to act.

Authors:

  • Adrien le Gouvello - Partner @ super{set} AI Advisors
  • Tom Chavez - Founding General Partner @ super{set}
