AI Effect: Are Google, Quora, and Wikipedia losing their jobs too?
It’s not just people — even Google, Quora, and Wikipedia risk being replaced by AI
By Sanjay Dubey
Whenever we talk about the negative impacts of Artificial Intelligence, one thing everyone seems absolutely certain about is that millions of people across the world will lose their conventional jobs to AI. And reports suggest it’s already happening.
What few seem to acknowledge, however, is that people aren’t alone in this. The internet’s most familiar tech giants — the very platforms that shaped the internet as we know it — might also lose their long-held roles to AI.
There was a time when a question meant a Google search. If you wanted a quick fact, you landed on Wikipedia. If you sought lived experiences or quirky personal takes, you turned to Quora. These platforms built their identities as the internet’s go-to spaces for trustworthy information, structured facts, and diverse opinions.
But that world is quietly, swiftly slipping away.
A new kind of information engine is taking over — AI-powered conversational tools like ChatGPT, Gemini, and Claude. And ironically, the biggest search engine on the planet now finds itself scrambling to survive the AI revolution it helped set in motion.
What’s quietly being upended isn’t just technology — it’s the entire architecture of how knowledge flows on the internet.
The shifting role of Google
For years, Google wasn’t just a search engine. It was the front door to the internet. Type a query, and it would politely line up possible answers — from Wikipedia articles to news reports, blog posts, and Quora discussions.
It didn’t answer your question directly. It showed you where you could find the answers — like a helpful librarian.
That model began to strain under the weight of AI-powered responses. Tools like ChatGPT proved people didn’t always want a list of websites. They wanted clean, thoughtful, instant answers to their questions — all in one go.
In other words, they wanted a research assistant, not a librarian. Someone who could go to the library, search for relevant books, take notes, summarise them, and hand over the final brief.
To stay relevant, Google changed its game. Its AI Overviews now do what independent AI tools do — give instant answers. Often, users don’t even need to click on links anymore.
And in the process, Google began eating into the very platforms — like Wikipedia, Quora, Reddit, and countless blogs and forums — that once fed its search empire.
Where earlier you’d type something like “What’s it like to live in Tokyo?” and find a Quora thread, a Wikipedia entry, or independent web pages, today you get a neat AI summary at the top of the page. Links to Quora, Reddit, Wikipedia, and independent blogs have been pushed further down — sometimes out of sight altogether.
And the fallout is bigger than it looks.
Why this is a problem for Quora and Wikipedia
Take Quora. Its entire model relies on people searching for questions on Google, landing on its pages, reading personal answers, and maybe adding an answer or a question of their own.
Less Google traffic means fewer readers. Fewer readers mean fewer contributors, and a slow decline in the community that keeps the platform alive. It also means fewer subscribers and less ad revenue.
Quora knows this, which is why it’s pivoted to its own AI platform, Poe. But even that feels like a catch-22. The more it leans into AI, the less distinct its original human-powered information model becomes. It then simply becomes another AI tool.
Wikipedia faces a similar — but more complex — challenge. It’s built by volunteers and funded by donations from readers who value free, reliable information.
If AI tools answer basic factual queries directly, or if Google’s AI summaries make visiting Wikipedia unnecessary, its traffic drops.
It’s an existential issue for Wikipedia, and not just because fewer visitors mean fewer donations. The encyclopedia depends on engaged readers to actively add, edit, and correct its content. Without regular readers, the pool of contributors shrinks, eventually affecting both the quality and the quantity of its knowledge base.
And Wikipedia faces a unique dilemma. It can’t become an AI tool itself. Its open, community-driven, non-commercial model isn’t built for AI generation.
Yet it can’t ignore the AI wave either. In a world where most new online content may be AI-written, what should Wikipedia consider credible sources? Will it start citing AI-generated articles?
That’s a difficult question for a platform obsessed with citation integrity.
What’s happening to the internet itself
But the implications go far beyond just these three platforms. What’s really at stake is the broader health of the internet’s knowledge ecosystem.
For decades, the web thrived on a messy but vibrant network of content creators — big newsrooms, universities, passionate bloggers, niche forums, and independent experts. Google Search was the gateway that gave them visibility.
Sure, the system wasn’t perfect. SEO manipulation, clickbait, and low-quality content often polluted results. But in principle, it was open. If you had something valuable to say, you stood a chance of being found.
AI-powered summarisation — whether from Google’s own tools or independent chatbots — threatens to centralise the flow of information. In this new model, only a handful of big, officially credible sources (governments, major media outlets, academic publishers) are likely to remain linked.
AI summaries in Google might reduce the visibility of junk SEO pages — and that’s no bad thing. But in doing so, they risk discouraging honest, thoughtful creators too. If people stop finding, reading, and supporting independent work, what incentive remains for anyone to keep producing it?
The internet could then be overrun by AI-generated noise — a loss for users, and even for AI itself, which depends on quality human-created content to learn and evolve.
And that’s a dangerous proposition.
A healthy internet depends not just on information being available, but on open, transparent access to diverse, competing perspectives. If AI tools — including Google’s — keep pulling attention toward untraceable, context-free summaries, we risk losing the very diversity that has kept the internet from turning into a gated, corporate-controlled knowledge silo.
Of the three, perhaps only Wikipedia has no easy escape without betraying its core principles of human-generated, community-moderated content.
But the steps taken by Google and Quora seem self-defeating too. Google is fast becoming an AI knowledge tool itself — with more control over its output than ChatGPT or Gemini. Quora is trying to survive by pivoting to AI-powered chat experiences.
But by summarising knowledge rather than leading to it, they risk undermining the very ecosystem they depend on.
Google, Quora, and Wikipedia aren’t just platforms under threat. They’re early warnings.
If the very tools we built the internet around are struggling to justify their existence in an AI-driven world, what does that mean for the rest of us?