Yesterday I wrote that I would never use generative AI — ChatGPT or similar apps — to write what you read here. But how short-sighted that was.
Last night, just a few hours after writing that, I read about a lawyer whose case was thrown out by the judge because several of the precedents cited were fictitious. It seems the lawyer had used ChatGPT to research legal precedents supporting his client’s case, and the AI had helpfully invented several of them.
The judge was understandably angry and the lawyer appropriately mortified. No word about how the client felt.
Perhaps the lawyer should have known better than to trust ChatGPT to do his research. But he is most certainly not alone in that. Students, and who knows how many other people, are already using generative AI to write their themes, essays, term papers, reports, and anything else requiring a lot of text and some research.
However, the lawyer’s mistake got me thinking. While I would never ask ChatGPT to write something important — certainly not anything as important as a lawyer’s brief — I would and often do ask Google to look up something for me. And who’s to say the sources it finds aren’t using AI to write their content? Are any search engines today smart enough to exclude AI-generated content from their results? Would we want and should we trust search results that include AI-generated content?
To some extent “AI” can be interpreted as “computer-based,” and computers have been in use for a very long time. But generative AI is a different breed of cat. It doesn’t just speed up tasks that humans could otherwise do themselves. It distills its human-created input and from it generates its own original content.
Compound that with search engines that are themselves using AI to assist, boost, or otherwise “improve” their results for the user. How will those search engines know that the information they find is not partly or wholly AI-generated? How will the user know whether the results from any search engine can be trusted not to be fictitious or AI-generated?
So in addition to the “fake news” and rumors already circulated by ignorant or ill-informed people, we can now add AI-generated information that may or may not be true.
That said, I should amend my promise to never use AI-generated text here and say I will never knowingly use AI-generated text here. I read, listen to, and quote many different sources, and although I generally stick to the most reputable, who’s to say they haven’t inadvertently incorporated AI-generated information that is false or misleading?
I could be catastrophizing, and if so, I’ll eventually snap out of it. But for now, if you’ll excuse me, I have a headache …