My View: Dear AI: I’m Not Dead

Henry Feldman


“Ask the chatbot to write a biography of yourself. Do try this at home!” 

So suggested Gary Smith in an article I was reading about the generative artificial intelligence programs that have captured our hearts and minds. I knew asking would only depress me with bios of far more fabulously interesting people who share my name. But I’ve also written a computer program with a unique name and a theatrical play. Shall we see what the chatbot had to say?

“Who wrote DBMS/Copy?” I asked Google’s Bard. Here are the first two lines of its response:  

“DBMS/Copy was written by Henry Lewis Feldman, who was the founder and CEO of Conceptual Software, Inc. Feldman was a pioneer in the field of data management, and he developed DBMS/Copy to address the need for a tool that could easily transfer data between different software programs.”

Pretty cool: It knew I wrote the program! Being a “pioneer in the field of data management” is a stretch, but I’ll take the ego stroke.

After those two sentences, however, Bard took a hard left. I became a Henry Feldman born 19 years earlier with degrees from other colleges who started my company three years earlier than I did. And the kicker:

“Feldman died in 2015 at the age of 75. He is survived by his wife, two children and three grandchildren.”

Google says I have an extra grandchild! And that I’m dead.

How is this possible? Unfortunately, it’s baked into the design. Chatbots are nothing more than extremely impressive auto-complete programs. They have munched trillions of words to calculate the “most probable” word that should appear next based on what they have already written.

If taking a scenic drive through CrazyTown is the “most probable” next direction, away it goes. But when did the most probable word become equal to the correct word?
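The “most probable next word” idea can be sketched in a few lines of Python. This is a toy illustration only, not how any real chatbot is built; the tiny corpus and function names are invented for the example. Note that the code picks the most frequent follower with no notion of whether the result is true:

```python
from collections import Counter, defaultdict

# Toy "auto-complete": count which word follows which in a tiny corpus,
# then always emit the most frequent follower.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    # Returns the most common follower -- probable, not necessarily correct.
    return following[word].most_common(1)[0][0]

print(most_probable_next("the"))  # "cat" (it follows "the" twice, "mat" once)
```

Nothing in that loop checks facts; it only tallies what usually comes next, which is the heart of the complaint above.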

As you can imagine, as a not-dead person with a long history in the computer field, I find myself exceptionally troubled that some of the largest companies in the world are falling over themselves to promote programs that lie. They give it the cute word — “hallucinations” — but they’re lies. (Originally, I thought it branched off to another “me” and followed his life, but he seems to not exist.)

As the chief technical officer of OpenAI (ChatGPT) said of chatbots: “May make up facts.” Since when is that OK? Why are we allowing them to push this stuff?

Behind the scenes, it seems there might be millions of poorly paid people around the world fixing the errors. That sounds more Mechanical Turk than artificially intelligent to me. And yes, the responses to my questions have changed. (Glad I saved the links.)

What happens when the responses (aka lies) are pasted on websites that get munched next year? And what if this article gets processed? Will my death just get another vote?

Please don’t use a chatbot for fact-finding. It’s not in the design. It doesn’t care. Use it to get a great excuse for why your homework isn’t done, because it will tap into the world of excuses and likely give you something better than you could ever imagine.

Don’t be like the lawyer who used a chatbot to find prior cases to include in a brief. He asked the chatbot if the cases were real. The chatbot said yes. They weren’t.

And just so you know that the answer about my computer program wasn’t a fluke: I asked Bard about my play, Sea Level Rise: A Dystopian Comedy. It was performed three times in 2019 at a small theater festival in New York City, and the chatbot shared with me glowing reviews that I missed in The New York Times, Boston Globe and The Washington Post.

If only.


The Current welcomes comments on its coverage and local issues. All online comments are moderated, must include your full name and may appear in print.