Your android replica is playing up again.
It’s no joke.

Credit where it’s due: generative AI is really quite impressive at creating plausible-seeming text.

But it’s very bad at creating factually accurate text.

Last week, the writer John Scalzi took a look at how well various genAIs did when asked factual questions to which he already had the answers (or could look the answers up on his own bookshelf). Specifically, he asked them about the dedications in various books he’d written or owned.

He wrote it up as a blog post titled “‘AI’: A Dedicated Fact-Failing Machine, or, Yet Another Reason Not to Trust It For Anything”.

None of them did well at all.

He summarizes the experience with:

So it’s not just me, “AI” gets other book dedications wrong, and (at least here) consistently so. These book dedications are actual known facts anyone can ascertain — you can literally just crack open a book to see to whom a book is dedicated — and these facts are being gotten wrong, consistently and repeatedly, by “AI.” Again, think about all the things “AI” could be getting wrong that you won’t have such wherewithal to check.

This is a good example of the real driver of the misunderstanding of generative AI, the misunderstanding that's fueling the push to adopt it everywhere.

It’s wrong a lot, but you have to already know the answers to realize that.

If you ask it a question, it will give you a confident answer; that’s what it’s for.

Now, since it knows nothing and understands nothing and is just assembling likely words to make likely sentences, that answer will have no necessary relationship to the real world; it may be correct, but if it is, that will quite literally be only by chance.
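
To make that concrete, here's a deliberately oversimplified toy sketch in Python. The word table and its probabilities are entirely made up for illustration, and real systems are enormously more elaborate, but the basic game is the same: pick the next word because it's likely, not because it's true.

```python
import random

# Toy "language model": nothing but a table of made-up next-word probabilities.
# Truth never enters into the process at any point.
NEXT_WORD = {
    ("this", "book"): {"is": 0.6, "was": 0.4},
    ("book", "is"): {"dedicated": 0.7, "excellent": 0.3},
    ("book", "was"): {"dedicated": 1.0},
    ("is", "dedicated"): {"to": 1.0},
    ("was", "dedicated"): {"to": 1.0},
    ("dedicated", "to"): {"Alice": 0.4, "my": 0.35, "the": 0.25},
    ("to", "my"): {"editor": 0.5, "parents": 0.5},
    ("to", "the"): {"reader": 1.0},
}

def babble(prompt: str, max_words: int = 6) -> str:
    """Extend the prompt one word at a time, choosing by likelihood alone."""
    words = prompt.lower().split()
    for _ in range(max_words):
        options = NEXT_WORD.get(tuple(words[-2:]))
        if not options:
            break
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(babble("This book"))  # e.g. "this book is dedicated to my editor"
```

Run it a few times: every output sounds perfectly fluent, and whom the book ends up “dedicated to” is whatever the dice say.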

But if you already knew the answer, you probably wouldn't have asked the question in the first place, so most people using it will never know when or how often it's wrong.

That’s how it gets ya.

And how this turns into a mass movement was also explained in another blog post last week.

The writer Charlie Stross, in a blog post titled “Barnum’s Law of CEOs”, discussed some research from this summer that was described as a survey of:

roughly 7,000 professionals age 18 and older who work in the US, the UK, Australia, Canada, Germany, and New Zealand

That’s a pretty good sample size.

The most interesting result is given as:

AI adoption varies by seniority, with 87% of executives using it on the job, compared with 57% of managers and 27% of employees.

Which is interesting, because it lines up neatly with what Scalzi was demonstrating: the more likely you are to actually know anything, the less likely you are to be impressed by this stuff, and therefore to use it.

Because the people doing the actual work actually know things about it.

And they will notice when the chatbot gives an incorrect answer.

But the people whose job is to manage the people doing the actual work generally don't, so they won't notice when the chatbot is wrong, and they're more likely to be impressed by it.

This would not have been the case 100 years ago, when managers were much more likely to have started as workers themselves. But the rise of business schools and business degrees means that a large percentage of managers (and an even larger percentage of executives) now don't really know anything other than “business”.

So they don’t actually know enough about what their employees do all day to be able to tell whether a chatbot is giving them facts or hallucinations about it.

And that’s why this stuff is being deployed everywhere.

The people making that decision, the people with the power in our society, simply don't know enough to notice when what the “AI” is saying is wrong.
