This is totally unrealistic.

Now, I have been a big fan of Bruce Schneier’s work in cryptography for decades, so it pains me to say this, but he’s wrong about the state of Artificial Intelligence (AI).

Very, very wrong.

In a recent blog post titled Building Trustworthy AI, he steps into the current furor over ChatGPT and other Large Language Models (LLMs).

It doesn’t go well.

To be completely clear, everything he says about the desirability of trustworthy AI is fine.

In fact, if it weren’t for one tiny underlying problem, his post would be a good example of his usual thoughtful approach to tech-induced cultural issues.

Where it goes astray is in assuming that LLMs are anything that deserves to be called AI, or anything that could possibly be made trustworthy.

They’re not, and they can’t.

LLMs are artificial, but they don’t do anything you could reasonably call intelligent; they don’t know what the words they’re using mean and they don’t have any internal model of the world.

Nothing they say means anything to them, because there is no “them” in any way that could give meaning.

An LLM is nothing more than a system to construct eerily articulate pseudo-random text.

They are, as has been said, bullshit generators.
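
To make that concrete, here is a minimal sketch of the loop an LLM runs when it “writes”: score every possible next token against the text so far, draw one pseudo-randomly from that distribution, append it, and repeat. The library and the small GPT-2 checkpoint are just illustrative assumptions, not a claim about any particular product; the point is that nothing in the loop consults a model of the world or a notion of truth, only next-token probabilities.

```python
# A minimal sketch of autoregressive text generation, assuming the Hugging Face
# `transformers` library and the small GPT-2 checkpoint purely for illustration.
# Nothing here checks meaning or truth; it only samples a statistically likely
# next token, over and over.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Trustworthy AI is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(30):  # generate 30 more tokens
        logits = model(input_ids).logits[:, -1, :]         # scores for every possible next token
        probs = torch.softmax(logits, dim=-1)               # turn scores into a probability distribution
        next_id = torch.multinomial(probs, num_samples=1)   # pseudo-random draw from that distribution
        input_ids = torch.cat([input_ids, next_id], dim=1)  # append and continue

print(tokenizer.decode(input_ids[0]))
```

Everything the model “says” comes out of that multinomial draw; there is no step at which the output is checked against anything outside the text itself.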

We really shouldn’t be surprised, then, that LLMs have absolutely no regard for whether what they say is true, since they have no concept of what the words they’re using actually mean.

They can’t “lie”, because they have no idea of “true” or “false”.

And it really shouldn’t need saying, but: a system that doesn’t know what words mean cannot be made trustworthy.
