What does a bot-filtered version of life on the internet look like?
At the core of ChatGPT is a tension that could snap at any moment: Does technology make our world bigger or smaller? That is, do AI-driven chatbots open new avenues for learning and exploration, or do they instead risk compartmentalizing knowledge and leaving us with shaky access to the truth?
OpenAI, the company that created ChatGPT, announced earlier today that it has partnered with the media giant Axel Springer, which appears to be a step in the right direction. Under the agreement, ChatGPT will be able to offer its users “summaries of selected global news content” published by Politico and Business Insider, two of the news outlets in Axel Springer's portfolio. The announcement leaves some details unclear, but it seems that when you ask ChatGPT a question, the bot will be able to generate responses based on Axel Springer stories, along with links to the stories themselves. In turn, content from Axel Springer publications will be fed into OpenAI's models as training data, improving the company's products, which may already be consuming much of the internet.
It's certainly an odd move for a publisher that perhaps once saw a competitive advantage in preserving a unique voice, one that is difficult for a chatbot to mimic. But in exchange for lending its work, Axel Springer will be paid. That is preferable to being exploited for nothing, which is essentially what publishers across the industry believe generative AI has done to them. In an email, the Axel Springer spokeswoman Julia Sommerfield declined to share specifics about the deal but told me, "Our reporters at Politico and Business Insider will continue to deliver high-quality journalism." The pitch is that the collaboration improves the ChatGPT user experience while giving the publisher a new revenue stream and distribution channel. An OpenAI spokesperson declined to comment.
The press release refers to this as "global news content," which is fitting: Content is an ugly word, but a useful one for describing what's going on here. Generative AI fundamentally treats all writing as slop to be pushed through pipes and spattered out the other end; it cannot distinguish original journalism from any other kind of writing. That makes the deal noteworthy to more than just media nerds; it says something about where OpenAI wants to take the internet.
OpenAI's most advanced model currently can't provide details about events that happened after April 2023. That is going to change. The company already has an agreement to use the Associated Press's archival material, but Axel Springer is apparently the first publisher to supply ongoing news coverage in this way. In theory, the arrangement offers instant access to up-to-date, accurate information, tailored to the specific needs of ChatGPT users. Yet the generative-AI era brings with it a distancing effect. ChatGPT, Bing, and Google's Gemini may give readers information and links to publications, but they don't really encourage engagement with those publications themselves. If ChatGPT reproduces a news update, is clicking through to the original still necessary? How many times have you read a headline on a screen in an elevator or a cab and then actually searched Google for more?
Considering what generative AI has wrought elsewhere, the shift to news via chatbot seems paradoxical: Traditional search engines such as Google are struggling to keep up with the flood of AI-generated, search-optimized spam, and websites such as CNET and Gizmodo have published deeply flawed AI-written articles in an attempt to stay relevant. The technology underlying ChatGPT is helping to dismantle the web as we know it, even as it makes ChatGPT more powerful.
Until bad AI content can be swiftly detected and filtered out, its sheer volume will keep crowding out reliable sources. That may be tolerable as long as people can still reach reliable information: Major publishers will likely survive, and more will surely strike agreements with OpenAI. But it is a bad development for the web's overall diversity. The internet was, for a long time, a place of discovery, where people hopped from website to website and encountered different viewpoints and styles. In a way, it was democratizing and equalizing. That began to change when gatekeeping social-media platforms became our main point of entry to the internet, and it is changing further still as new generative-AI infrastructure is built atop those ruins.
That appears to be where everything is headed: every webpage, every piece of writing, reduced to plumbing for a digital faucet. As new content pours in, ChatGPT becomes a more plausible one-stop shop for browsing the internet. It will offer something closer to the entirety of human knowledge as it stands at any given moment, albeit with occasional "hallucinations" that cast a shadow of doubt over every interaction.
Maybe this transition will ultimately be seen as an extension of human potential, however flawed it is now. There are strong arguments that generative AI fosters creativity and makes good work easier, more than could be said of most websites. In any event, it marks the end of one era and the start of another, defined not by the array of our digital creations but by a chatbot's text box and its blinking cursor, waiting for your command.