One Model to Rule them All
14 November 2023
Billions are being poured into AI, but what will this mean for our futures? Most of the talk is about existential risks and shorter-term issues, like bias. Meanwhile, data is being harvested at massive scale, with little consideration of the collateral damage. Not enough is being said about how all this will influence our informational landscape, so let’s try to correct that.
In August, Wired published an insightful and beautifully lyrical piece: “Google's Answer Box Changed the Meaning of Information”. In it Elan Ullendorff worries “that the Answer Box is a premonition of where Google wants to go, a future in which we’re hurried toward destinations, journey be damned, and links are only included out of obligation, rather than invitation”. Well, as we pointed out in a previous blog post, Google has been heading that way for at least 14 years. Its move from search engine to answer engine was on course well before generative AI came along. Now that move is being sped up in the race for AI supremacy, and in the process it both suppresses and pollutes the web.
The race for AI supremacy is primarily about money and the power of having “one model to rule them all”. By model, we mean an AI model of human knowledge. The current main battleground is Large Language Models (LLMs), but an all-conquering multi-modal AI is the dream of every (aspiring) monopolist. Of course, a model needs a platform, and that platform is already in place for Google and Apple. After all, the search box is moulded into every browser and mobile device, and works as a prompt box for LLMs. When you control the distribution, you gain the power and influence to rule us all.
But back to Ullendorff, who postulates a “theory of technology that places every informational product on a spectrum” from Librarian to Doctor. We have summarised and adapted this illuminating idea below:
| Librarian | Doctor |
|---|---|
| Points you toward context | Protects you from context |
| A vast and diverse knowledge system, representing the diversity of the humans behind it | Condensed and synthesised conclusions, rather than raw information |
| You visit sometimes to get to a destination quickly, but also to be shown other relevant destinations and informational portals | You visit to get answers, in the form of diagnosis or treatment |
| Applies their expertise to way-finding rather than solutions, as a steward of the process, not the conclusion | Saves you time, applying their expertise and sheltering you from information that might be misconstrued or unnecessarily anxiety-provoking |
| May point you to a book that answers a question you didn't even think to ask; sometimes, while searching, a different book may catch your eye | You may be able to ask follow-up questions, if appointment time allows, but there will still be guard rails |
How does it look when we map this theory about informational products to search and chatbots?
- Pointing you towards context, for search engines, means navigation and discovery of a diverse and transparent Web.
- Pointing you towards context, through search engines, embraces serendipity, human labour and creativity.
- Protecting you from context, for chatbots, comes from convenient answers from a synthesised model of opaque data sources, with both safety and censorship.
- Protecting you from context, using chatbots, also keeps you on their platform.
- Protecting you from context, on chatbots, disconnects you from the ambiguity and depth of human knowledge.
Chatbots have their utility and place, notably when it comes to time saving and task assistance. But they degrade nuanced concepts of truth by privileging definite and shallow answers. At Mojeek we believe truth emerges by exploring the diversity of human expression, experiences and perspectives. It is why we care to our core about the freedom to seek.
You might think of us as your friendly, discreet Librarian.