Un-thinking, yes, but mostly un-reliable.

It’s not just that these things aren’t reliable, though that is certainly true.

It’s that they cannot ever be reliable.

See, generative “AI” systems don’t actually know anything or reason about anything. They produce words (or pictures, or whatever) without knowing anything about the content.

They are nothing more than systems for producing a likely next word (or pixel, or whatever) given a specific context.

And the word “likely” is the most important word in that sentence: everything they generate is based on nothing more than probabilities.
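The whole mechanism can be sketched in a few lines of toy Python. The distribution here is made up for illustration; real models compute these probabilities from billions of parameters, but the interface is the same: context in, probabilities over next words out, sample one.

```python
import random

# A toy "model": for one hard-coded context, a made-up probability
# distribution over possible next words. Note there is no notion of
# truth anywhere in here, only likelihood.
TOY_MODEL = {
    ("the", "sky", "is"): {"blue": 0.7, "green": 0.2, "falling": 0.1},
}

def next_word(context):
    """Sample a likely next word given the context."""
    probs = TOY_MODEL[tuple(context)]
    words = list(probs)
    weights = [probs[w] for w in words]
    # Sampling follows the weights: "blue" comes out most often,
    # but "green" or "falling" can come out instead. Chance is
    # built into the mechanism itself.
    return random.choices(words, weights=weights, k=1)[0]

print(next_word(["the", "sky", "is"]))
```

Nothing in that loop checks the output against the world; "blue" is favored only because it was more probable in the training data, not because the sky is blue.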

Which is why they’re so bad at facts.

It’s kind of unfair to even be evaluating these systems on facts; “fact” is not a category they have to work with. Of course they come up with confident falsehoods at absurdly high rates: truth and falsehood aren’t part of the system at all until the results are presented to a human.

Then the human adds the concepts of truth and falsehood, by evaluating the system’s output on that basis.

But the system itself neither knows nor considers “fact”.

Again, all it knows and all it considers is which words (or pixels, or whatever) are likely to come next in the context of the words (or pixels, or whatever) that are already there in the prompts and previously-generated output.

So these systems can generate material that is structurally similar to material generated by humans; they’re actually quite good at that, at least for text.

But the systems neither know nor consider whether their output is “true” in any way; whether that material accurately reflects the actual world in any way is just a matter of chance.

Chance is a fundamental part of what they are.

And chance cannot, by definition, be reliable.
