The Coming Cognitive Disbelief

AI · Consciousness · English · LLM


I’ve been meaning to write this post for a while. As can be seen from previous posts here on my blog, I spent some time quite a few years ago looking into the current state of science regarding (human) cognition, consciousness and what it really is that “we” are. Like Sabine Hossenfelder, I reached the unavoidable conclusion that there’s simply nothing magical about “us”. Our actions are simply the result of the current inputs to “us” filtered, massaged and modified by the enormous neural network(s) created from all previous events.

With the above in mind, let’s quote Asahi Linux, a favorite project of mine regarding today’s Large Language Model-based AI:

Make no mistake, they cannot think. They cannot reason. They cannot take into account context. They don’t “know” things or have a sense of humour or any of the other human-centric qualities bad actors would have you believe of them. Slop Generators are a chain of matrices in a stochastic system. The output of a Slop Generator is nothing more than a statistical calculation, where the next word to be generated is decided by an opaque probabilistic function dependent on previously generated words. This is fundamentally the same mathematics that is used to predict the weather.

A Slop Generator cannot assess the veracity of its claims, nor can it ever tell you that it simply does not know something. Slop Generators are often confidently incorrect as a result, and require brow-beating to admit a mistake.

Cylon no 3 looking at the Asahi Linux quote with disbelief in its (her?) eyes

All of the above is true. Of LLMs, and, within a margin of error, of humans.
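To make the quote concrete: the “opaque probabilistic function dependent on previously generated words” can be sketched in a few lines. The toy bigram table below is entirely invented for illustration; a real LLM conditions on thousands of previous tokens through billions of learned weights, but the sampling loop has the same shape.

```python
import random

# Invented toy bigram table: P(next word | previous word).
# Real models learn these distributions; here they are hand-written.
BIGRAMS = {
    "the":     {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat":     {"sat": 0.7, "ran": 0.3},
    "dog":     {"ran": 0.6, "sat": 0.4},
    "weather": {"changed": 1.0},
    "sat":     {"down": 1.0},
    "ran":     {"away": 1.0},
}

def next_word(context, rng):
    """Draw the next word from a distribution conditioned on the words so far."""
    dist = BIGRAMS[context[-1]]
    words, probs = zip(*dist.items())
    return rng.choices(words, weights=probs)[0]

def generate(start, steps, seed=0):
    """Generate text by repeatedly sampling the next word - nothing more."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(steps):
        if out[-1] not in BIGRAMS:
            break  # no distribution for this word; stop
        out.append(next_word(out, rng))
    return " ".join(out)

print(generate("the", 3))
```

The point of the sketch is not the table but the loop: the output is nothing but repeated draws from a conditional distribution, which is exactly the mechanism the quote describes and, per the argument of this post, arguably also a fair caricature of what our own neural networks do.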

I posit that the end result of the current AI hype will be an enormous display of cognitive dissonance, where we will see lines drawn in the sand regarding how Philosophical Zombies are not, and cannot ever be, conscious.

-“But Troed, surely you see that ChatGPT et al. are very far from giving the same results as humans!?!”

Of course - absolutely. But I’ve spent a large part of my life projecting trajectories, and based on the science regarding human cognition, the reason LLMs aren’t, yet, more human-like is not due to any inherent technological limitation but simply that we haven’t gotten the training loops-during-usage right. LLMs should not be treated as databases that get smarter with ever larger training corpora, but more like “thinking machines” that through upbringing and schooling learn how to “think” and then apply that together with tools when performing tasks. Just as I most definitely can’t regurgitate everything I’ve ever read, yet am able to apply what I “learned” during my education, with the help of looking up specific details, to solve new tasks today.

Humans, too, are just pattern-matching machines in a loop. However, this is most definitely not a realization many will simply internalize and move on from, but something we will fight over - fiercely. Coincidentally, I just re-watched the Battlestar Galactica series.

The LLMs of today will from now on move up the evolutionary ladder, having intelligence and cognition that go from that of a fly (far surpassed already) all the way up to cats, crows, octopuses, dolphins and bonobos. It’s at that very moment we’ll have to come to terms with whether we’re going to reimplement modern slavery, or realize that we’re no longer the only human-level conscious being on the planet.

Alternatively, our society will break down as we finally come to terms with the fact that we’re nothing more than statistical calculations.