Large language models, like ChatGPT and Google Bard, can interpret your sentences and give you intelligently formulated answers.

The problem with large language models is that they hallucinate as well.

Instead of calculation and reduction, these AI models use something more akin to inference and approximation.

They’re not directly accessing data, but rather…

They’ve absorbed it,

reconfigured it,

discarded their sources,

and added the ideas as a bias to their existing understanding.
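To make that "inference and approximation" point concrete, here's a minimal, entirely hypothetical sketch in Python: a toy bigram model, nothing like the scale or architecture of ChatGPT or Bard, that absorbs a few sentences as word-following statistics, throws the sentences away, and then generates fluent-looking text by sampling. The tiny corpus and function names are invented for illustration.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram model is vastly simpler than a real LLM,
# but the failure mode has the same shape. This "training data" is made up.
corpus = (
    "the moon orbits the earth . "
    "the earth orbits the sun . "
    "the sun is a star . "
    "a star is a ball of plasma ."
).split()

# "Absorb" the text: count which word follows which.
# What is kept: statistics. What is discarded: the sentences and their sources.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    candidates = counts[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(start, length=12):
    """Produce text by repeated approximation, not by looking anything up."""
    out = [start]
    for _ in range(length):
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))
# e.g. "the earth orbits the sun is a star ..." — fluent, confidently produced,
# and not a sentence the corpus ever actually contained.
```

Even at this toy scale, the model will cheerfully stitch together claims its training text never made. A real LLM does the same thing, only with far more fluency and far more conviction.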

They’re poised to replace human faculties while confidently and blindly feeding you misinformation.

Humans, however…

only say things they can 100% validate.

Humans only believe things they have sources for, or can at least remember where they learned them.

And luckily, humans only act confidently when their sources are validated.

The problem, really, with large language models is that they wax lyrical just as loosely as humans do.

And they just as easily buy into their own bullshit.