By Martin Luerssen, CTO at Clevertar

Science fiction is – in the timeless words of Ray Bradbury – “any idea that occurs in the head and doesn’t exist yet, but soon will, and will change everything for everybody, and nothing will ever be the same again”. One of my favourite contraptions from science fiction is Star Trek’s replicator, which can conjure almost anything: food, medication, clothing, spare parts… Nothing is off limits – unless, of course, it compromises the show’s potential for drama! The replicator is fundamental to Star Trek’s utopian egalitarian society, ending poverty through endless production. If we had a replicator today, might we achieve the same outcome? Alas, scarcity is a defining feature of our current economic system. What will a farmer do if all the food can be replicated? Who needs transportation when you can just create everything on the spot? Any technology that causes mass unemployment would likely be banned by popular vote, regardless of the enormous benefits.

The probability of a replicator abolishing physical scarcity in our present-day lives is nil. But another, much more tangible idea might put an end to information scarcity, with potentially far-reaching implications.

That idea is Generative AI, a term primarily associated with machine learning algorithms that can produce large volumes of original content, such as text, code, images, video, or audio. 

Not everyone agrees that it’s entirely the right term, but inarguably it marks the transition from mostly human-generated content to mostly AI-generated content. OpenAI’s ChatGPT, Stable Diffusion, and other initiatives in this field have acquired enormous momentum, paving the way for additional products and features that make content generation easier than ever. Content creators will rapidly become accustomed to it, initially because it makes them super-productive, eventually because they must compete with everyone else who is now super-productive. The inevitable result will be a lot more content.

Who will consume all that content? There is only a limited amount of attention to go around, and it won’t necessarily grow just because there is more to take in. Instead, what we’re likely to observe is a continuation of the historical trend from broadcasting to narrowcasting, i.e., targeting a multitude of smaller but more engaged audiences. Social media has made it easier to discover and deliver to those audiences, but the cost of creating content for a limited viewership remains a challenge.

Generative AI will substantially help in addressing that, enabling higher quality content within ever more specialised niches. As automation progresses, the ultimate outcome is highly personalised content tailored to an audience of one.

In this world, you will no longer waste time on irrelevant and uninteresting material. Every information experience – an article, a show, a podcast – is completely unique to you, (hopefully!) controlled by you, and exactly as you want it to be. Generative AI will enable us to live in seamless information bubbles. Of course, it’s vital that these bubbles are good and healthy for us – and not just an exacerbation of the issues plaguing social media, echo chambers and all.

Generative AI poses unique challenges in this regard. One very hot topic right now is whether text-generating Large Language Models (LLMs) can be trusted to produce truthful output. You won’t need long to find an example of ChatGPT producing a manifestly wrong or inappropriate answer, which would be alarming if these models were about to replace traditional search engines.

A powerful analogy here – coined by Ted Chiang – is that of LLMs as “blurry JPEGs” of the web: they have to compress vast Internet knowledge into compact neural network weights, and in doing so they accumulate subtle but – because they are subtle! – very dangerous errors. Why would we ever prefer such a flawed copy over the authentic source? The counterpoint is obvious: human brains are information compressors, too. You can’t trust them to regurgitate perfect facsimiles – our memory is fallible and making mistakes is quintessentially human. Yet that’s clearly no barrier to us wanting to interact with one another.

Notably, however, we have successfully compensated for our many failings by augmenting ourselves. Once we started scratching letters into clay, we made knowledge more durable. We even created a machine that could determine “that which human brains find it difficult or impossible to work out unerringly” (Ada Lovelace, 1842) – aka the computer! It would only be fair for an LLM to likewise augment itself. Toolformer is a recent research model demonstrating that LLMs can teach themselves to use external tools (calculators, calendars, search engines, etc.) to patch their weaknesses. For that reason alone, search engines won’t be replaced by LLMs – because LLMs will still need to search, just like we do.
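To make the tool-use idea concrete, here is a minimal sketch of the pattern: the model emits inline markers for tool calls, and a post-processing step executes each call and splices the result back into the text. The marker syntax, the `Calculator` tool, and the function names below are illustrative assumptions, not Toolformer’s actual implementation.

```python
import re

def calculator(expression: str) -> str:
    # Hypothetical tool: evaluate simple arithmetic only.
    # A real system would sandbox this instead of calling eval().
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        return "ERROR"
    return str(eval(expression))

# Registry of tools the model has "learned" to call (assumed names).
TOOLS = {"Calculator": calculator}

# Matches inline markers such as [Calculator(12 * 45)].
CALL_PATTERN = re.compile(r"\[(\w+)\((.*?)\)\]")

def resolve_tool_calls(text: str) -> str:
    """Replace every [Tool(args)] marker with that tool's output."""
    def run(match: re.Match) -> str:
        tool, args = match.group(1), match.group(2)
        return TOOLS[tool](args) if tool in TOOLS else match.group(0)
    return CALL_PATTERN.sub(run, text)

# Example: a draft as an LLM might emit it, with an embedded tool call.
draft = "The team spent [Calculator(12 * 45)] dollars in total."
print(resolve_tool_calls(draft))  # The team spent 540 dollars in total.
```

The point of the pattern is that the model doesn’t need to compute reliably itself – it only needs to learn when to delegate to something that does.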

A trade-off will always exist between how much effort we invest into an answer and how good it is. Lack of time or skill will mean that some people must choose between a potentially incorrect answer and no answer at all – and imposing the latter on someone is like saying starvation is better than living off welfare. This doesn’t mean we should gloss over some genuine issues.

LLMs must strive to differentiate fact from fiction and to remain humble in the face of uncertainty. At present, they are abysmal at this; but no one in the field is complacent about the challenge. Alphabet’s shares took a major dive following a minor slip-up by its LLM, so there’s both money and motivation to make things better. We’re only at the starting line, and if there’s one thing our economic system excels at, it’s iterating a profitable idea to perfection.

A greater concern for the future of Generative AI lies in its potential to be exploited for malicious ends. Unlike Star Trek’s replicator, it can’t produce an actual bomb, but it can detonate a metaphorical one of misinformation, impersonation, and other harmful content that would simply have required too much effort previously. Unsurprisingly, what works for genuine content creators also works for scammers and nation-state threat actors – AI’s morality directly reflects whoever is wielding it. Yet, in turn, it can also protect us from these dangers with its extensive knowledge and perpetual vigilance, stopping us from being bedazzled, overwhelmed, or manipulated in the first place. Sadly, such protections won’t be implemented as fast as they should be. Prevention is always better than cure, yet we struggle to learn that lesson.

Much of what we’ll see in the meantime is posts, tweets, essays, videos, and plain opinions that highlight the grave dangers of Generative AI, attracting plenty of clicks on social media just like every other piece of negativity in that sphere. Increasingly elaborate hacks of AI will attempt to show that it cannot work – yet its popular appeal will only increase. That’s because many of us know the tremendous promise that Generative AI holds. It’s a new superpower for our minds. It’s science fiction, but right now. It will change everything for everybody, and nothing will ever be the same again. So instead of being paralysed by uncertainty, let’s embrace that change and seize the opportunity to harness it for the common good. The only barrier is us, as always.

Explore AI with us

It’s never too soon or too late to explore the world of artificial intelligence!

We’re keen to discuss how AI could solve your business’s everyday problems, and how those solutions might positively impact your customers.

Please leave a message and we’ll get back to you soon.