AI Effect: Are Google, Quora, and Wikipedia losing their jobs too?
It’s not just people — even Google, Quora, and Wikipedia risk being replaced by AI
By Sanjay Dubey
Whenever we discuss the negative impacts of Artificial Intelligence, there’s a widespread belief that millions of people across the world are bound to lose their conventional jobs to AI. Some reports suggest that this might already be happening.
What few seem to acknowledge, however, is that people aren’t alone in this. The internet’s most familiar tech giants — the very platforms that shaped the internet as we know it — might also lose their long-held roles to AI.
There was a time when a question meant a Google search. If you wanted a quick fact, you landed on Wikipedia. If you sought lived experiences or quirky personal takes, you turned to Quora. These platforms built their identities as the internet’s go-to spaces for trustworthy information, structured facts, and diverse opinions.
But that world is quietly and swiftly slipping away.
A new kind of information engine is taking over — AI-powered conversational tools like ChatGPT, Gemini, and Claude. And ironically, the biggest search engine on the planet now finds itself scrambling to survive the AI revolution it helped set in motion.
What’s quietly being upended isn’t just technology — it’s the entire architecture of how knowledge flows on the internet.
The shifting role of Google
For years, Google wasn’t just a search engine. It was the front door to the internet. Type a query, and it would politely line up possible answers — from Wikipedia articles to news reports, blog posts, and Quora discussions.
It didn’t answer your question directly. It showed you where you could find the answers — like a helpful librarian.
That model began to strain under the weight of AI-powered responses. Tools like ChatGPT proved people didn’t always want a list of websites. They wanted clean, thoughtful, instant answers to their questions — all in one go.
In other words, they wanted a research assistant, not a librarian. Someone who could find the library, search for relevant books, take notes, summarise them, and hand over the final brief.
To stay relevant, Google changed its game. Its AI Overviews now do what independent AI tools do — give instant answers. Often, users don’t even need to click on links anymore.
And in the process, Google began eating into the very platforms — like Wikipedia, Quora, Reddit, and countless blogs and forums — that once fed its search empire.
Where earlier you’d type something like “What’s it like to live in Tokyo?” and find a Quora thread, a Wikipedia entry, or independent web pages, today you get a neat AI summary at the top of the page. Links to Quora, Reddit, Wikipedia, and independent blogs have been pushed further down — sometimes out of sight altogether.
And the fallout is bigger than it looks.
Why this is a problem for Quora and Wikipedia
Take Quora. Its entire model relies on people searching for questions on Google, landing on its pages, reading personal answers, and maybe adding one of their own.
Less Google traffic means fewer readers. Fewer readers mean fewer contributors. And that slowly chips away at the community that keeps it alive. It also means fewer subscribers and lower ad revenue.
Quora knows this. That’s why it’s pivoted to its own AI platform, Poe. But even that feels like a catch-22. The more it leans into AI, the less distinct its original, human-powered information model becomes. It risks becoming just another AI tool.
Wikipedia faces a similar — but even more complex — challenge. Like Quora, it depends on readers for both content and revenue, though in its case the revenue comes from donations. But when AI tools start answering factual queries directly, or when Google’s AI summaries make visiting Wikipedia unnecessary, both the platform’s traffic and its ability to produce quality content at scale are threatened.
And Wikipedia faces a unique dilemma: it can’t become an AI tool itself. Its open, community-driven, non-commercial model isn’t built for AI-generated content.
Yet it can’t ignore the AI wave either. In a future where a large share of online content might be AI-written, what should Wikipedia treat as a credible source? Will it start citing AI-generated articles?
That’s a difficult question — especially for a platform obsessed with citation integrity.
What’s happening to the internet itself
But the implications go far beyond just these three platforms. What’s really at stake is the broader health of the internet’s knowledge ecosystem.
For decades, the web thrived on a messy but vibrant network of content creators — big newsrooms, universities, passionate bloggers, niche forums, and independent experts. Google Search was the gateway that gave them visibility.
Sure, the system wasn’t perfect. SEO manipulation, clickbait, and low-quality content often polluted search results. But in principle, it was open. If you had something valuable to say, you had a chance of being found.
Now, AI summaries on Google — and across various AI tools — threaten to centralise this flow of information. In this emerging model, only a handful of big, officially credible sources (governments, major media outlets, academic publishers) are likely to remain linked.
This might reduce the visibility of junk SEO pages, which isn’t a bad thing. But it could also discourage honest, thoughtful creators. If people stop finding, reading, and supporting independent work, what incentive remains for anyone to keep producing it?
The internet could then be overrun by AI-generated noise, a loss not just for users, but for AI itself, which depends on high-quality, human-created content to learn and evolve.
And that’s a dangerous proposition.
A healthy internet isn’t just about information being available. It’s about open, transparent access to diverse, competing perspectives. If AI tools keep serving up untraceable, context-free summaries, we risk losing the very thing that kept the internet from becoming a gated, corporate-controlled knowledge silo.