Gold Rush
TL;DR
Perhaps we shouldn’t take the creativity of people online for granted and should instead explore ways to safeguard the interests of consumers, creators, and facilitators alike. That means being mindful of the relentless mining of content and shifting the focus to the incentives behind it, considering the perspectives of everyone involved.
For years, web developers have been trained to build webpages with the most specific, semantic markup and schemas available, hoping it would make their websites more accessible and, well, crawlable by search engines. Developers even added alternative text to images: invisible captions intended both as a fallback when an image can’t load and as an accessibility feature. However, that very structure is increasingly being used against them. Google is now actively testing ways to serve web content directly to users, not only through AI Overviews but also by delivering entire cookie recipes, for example. While Google may claim that this will eventually benefit the original content creators, it’s debatable whether most users will visit the source site if the content is readily available on Google itself. In this way, the search engine is becoming a one-stop shop for content. It’s as if every tech company is caught in an AI gold rush, racing to maximize profits and secure its place in the future. The real question is: are there limits to how far they can dig, and what happens when they inevitably hit a wall?
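To make the irony concrete, here is a minimal sketch of how easily that carefully structured markup can be mined. It assumes a hypothetical recipe page that embeds schema.org Recipe data as JSON-LD, something many publishers do precisely so search engines can understand their content; the URL and field handling are illustrative, not any particular crawler’s implementation.

```ts
// Minimal sketch (TypeScript, Node 18+): pulling the schema.org Recipe data
// that a publisher embeds for search engines straight out of a page.
// The URL is hypothetical; we assume the page ships a JSON-LD <script> block.
async function extractRecipe(url: string) {
  const html = await (await fetch(url)).text();

  // Grab every JSON-LD block from the raw HTML; no browser or rendering needed.
  const blocks = html.matchAll(
    /<script[^>]*type="application\/ld\+json"[^>]*>([\s\S]*?)<\/script>/g
  );

  for (const [, json] of blocks) {
    const data = JSON.parse(json);
    if (data["@type"] === "Recipe") {
      // The whole recipe is already structured: name, ingredients, steps.
      return {
        name: data.name,
        ingredients: data.recipeIngredient,
        steps: data.recipeInstructions,
      };
    }
  }
  return null;
}

extractRecipe("https://example.com/best-chocolate-chip-cookies").then((recipe) =>
  console.log(recipe)
);
```

Everything publishers did to be good citizens of the open web, the clean structure, the descriptive fields, the machine-readable schemas, is exactly what makes their content trivial to lift wholesale.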
Paradoxically, and quite frankly hypocritically, many criticize Google for limiting traffic to individual websites, yet they seem to have fewer complaints about OpenAI’s ChatGPT. Most Google users likely prefer receiving content directly. Speaking for myself, I find it a major time saver when I don’t have to visit multiple websites just to look up a simple fact or some random statistic, even though I’m fully aware that this means bypassing someone’s website, and their hard work, to save a few seconds. At the same time, I often use my browser’s Reader Mode, which provides a clean and simple version of the webpage, showing only the article content. Not only do I avoid annoying cookie pop-ups, but it’s also generally faster, as I don’t need to hunt for the relevant information or wait for design elements I didn’t ask for to load. While I understand that websites are often designed with good intentions, I frequently find some interfaces difficult to navigate, particularly on a small screen like my phone’s. In this sense, I’m part of the problem, as is everyone else who skips over websites and the ads, offerings, and other incentives that sustain their creators.
The web has historically provided a rich ecosystem of information, much like libraries once did. However, just as the internet changed how we access information, we’ve now found new ways to consume it: through Large Language Models (LLMs). This shift suggests that browsers may become less relevant over time, as we increasingly interact with an LLM "window" that gives us all the information we need. Although many of us reflect on the web as something not to be taken for granted, we’re starting to accept that it now mainly serves as a training ground for LLMs, offering us easy access to information without the distractions of fonts, graphics, and other design elements that may disrupt our focus. Just as we now see books as somewhat cumbersome, I suspect we’ll come to view websites similarly: essentially as content wrapped in visual noise.
The web may be starting to age, and that’s okay. After all, the internet didn’t replace libraries; it just added another dimension. I believe the web will remain a valuable resource for collecting information and accessing material that may not be easily captured by LLMs, and it may even become romanticized in the future. As for content creators, I think there will always be a space to create valuable content, write compelling articles, and produce art. However, we may increasingly shift toward larger platforms like YouTube or Substack, which will likely be able to negotiate deals with LLM providers so that original creators are compensated for their efforts. Those who continue running independent websites may find their content consumed by LLMs like a small fish swallowed by a whale. Yet we might return to these older websites more than we expect, out of nostalgia, just like visiting a library. We might actually realize there’s nothing wrong with that, and that information can now simply be accessed through different layers of abstraction.
So far, we’ve mainly discussed the evolution of content. When it comes to functionality on websites, or rather web applications, it will likely become more and more abstracted through APIs. Consider HTML, the language and structure of the web that LLMs and search engines rely on to access information. APIs, particularly those that follow standardized protocols, are akin to that structure, but for functionality. Companies like Rabbit, maker of the r1 AI device, are betting on connecting various popular apps and services so that users can simply ask for something and the AI will perform the action behind the scenes. However, this is only possible when developers willingly expose their functionality through API endpoints, which are essentially URLs for requesting actions rather than just answers. Apple has also recognized this shift and has launched its own developer framework, called App Intents, which allows developers to expose certain functions of their apps to the system.
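As a rough illustration of what such an "action URL" could look like in practice, here is a minimal sketch of a web application exposing one piece of functionality as an endpoint an assistant could call. The route, payload shape, and bookTable helper are all hypothetical, and Express is just one convenient way to frame it; this is not any particular company’s protocol.

```ts
// Minimal sketch: exposing an app's functionality as an "action" endpoint,
// a URL for requesting an action rather than just an answer.
// Everything here (route, payload shape, bookTable) is hypothetical.
import express from "express";

const app = express();
app.use(express.json());

interface ReservationRequest {
  restaurantId: string;
  partySize: number;
  time: string; // ISO 8601 timestamp
}

// Stand-in for whatever the application actually does internally.
async function bookTable(req: ReservationRequest) {
  return { confirmationId: "demo-123", ...req };
}

// An AI assistant could POST here on the user's behalf instead of
// driving the website's user interface.
app.post("/v1/actions/reserve-table", async (req, res) => {
  const reservation = req.body as ReservationRequest;
  const confirmation = await bookTable(reservation);
  res.json({ status: "confirmed", confirmation });
});

app.listen(3000);
```

The specific shape of the request matters less than the principle: once functionality is published this way, under agreed-upon standards, any assistant could drive it on the user’s behalf instead of clicking through the interface.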
However, if advertising remains the primary means of monetizing content and functionality, abstracting these elements away also strips out the visits and impressions that pay for them, so the question becomes whether this shift can still lead to higher-quality output and genuine collaboration between publishers, developers, and tech companies. While tech companies might aim to offer perfect solutions to their users, they will need to strike a deal with content creators and developers. Implementing a payment system could be one solution, but it may recreate past issues, such as the 30% cut taken by Apple or Google, which consolidates their power and limits competition. Alternatively, if developers open up their software thoughtfully, using agreed-upon standards and protocols, Apple and Google would become just two players in a larger ecosystem rather than controlling it from the ground up, as is already the case with the various app stores. Apple may provide the most streamlined way to build and connect these "intents", but its goal is for developers to stick with it and trust that Siri will become the dominant AI assistant. Google likely has a similar strategy, though we’ll have to wait and see how it structures its approach. And we shouldn’t forget about Amazon, either.
The elephant in the room is that we need content and great applications to power all this innovation. If creators and developers aren’t sufficiently incentivized to produce exciting new material, we could see a decline in creativity and commitment. While that vacuum might in fact allow new, big players to emerge, it would hardly be an efficient outcome, as it may simply produce yet another data-hungry tech company that both consumers and developers must feed. Perhaps now is the perfect time to address these issues, while content creators, developers, and tech companies can still agree on the basics. By working together, we can ensure that consumers enjoy fast and easy access to intelligent AI assistants while the creators of the content still earn a living. If there’s one thing we shouldn’t take for granted, it’s the ingenuity and creativity of people online, and the immense value they bring to us through blogs, videos, and more. We have been fortunate to build a natural ecosystem based on solid standards and protocols, which we call the internet. Such ecosystems are rare and fragile, and we must protect them at all costs.