By John Grabowski.
For the past few weeks everyone has been talking about artificially intelligent bots. These software creations, their proponents say, are going to write for us, think for us, and do just about everything for us except make the morning coffee. (Darn… that’s the one thing I would like.)
One bot in particular, called ChatGPT, has been getting most of the attention. Many people have tried it, with varying levels of success. Some are very impressed; others say it has a ways to go. But everyone agrees on one thing: this technology is going to change the world, in ways we can’t yet truly appreciate.
Educators and journalists fear that AI chatbots could be used by kids to complete essays and assignments. Surely there are many school students out there salivating at the thought of typing a few key phrases into a search box and seeing a term paper pop out. The use of AI-generated content in higher education has also raised questions about plagiarism.
Why do homework? Why pay attention? Who cares about news, thoughts, facts, and critical thinking when our phones and tablets can provide all the answers we’ll ever need while we just play Xbox?
Does this mean “original thought,” or at least original human thought, will become a rarity? It won’t, if you explain to your child that learning is one of the most important things we will ever do, even though, and especially when, it’s a whole lot easier to lay our tough questions at the foot of a bot.
Making kids see they don’t want a world like this can be a tough sell. Just as remote controls have made getting up to change the channel seem like a herculean effort, AI bots can make thinking for yourself something children may feel they’ll never have to do.
Letting the bots do our heavy lifting for us will make us lazy and mentally out of shape the same way remote controls and value meals have made us physically out of shape.
Sadly, school sometimes reinforces this aversion to thinking—unintentionally. Overworked and underfunded educators are often pressured to teach to the tests, leaving precious little time to instill the joy of learning how to think. It’s a tough assignment, and one of those things, like algebraic proofs and Henry James, you don’t come to appreciate until you’re older.
But thinking independently is the single greatest skill one can learn in school, and in life. It’s exactly what letting the bots do our heavy lifting would take away.
For one thing, thinking allowed people to create these bots in the first place, and these bots will no doubt be improved by thinking people. Without us, the bots could never exist at all. Take that!
But further, it’s still not clear how far bots can think. Sure, they can fool us with the responses we expect. But as any genius will tell you, real smarts come from answers no one expects. Einstein, in his theories of relativity (yes, there are two), looked at common physics problems in entirely new ways; his ideas had no antecedents, no grounding in earlier work, because they were wholly original.
In many ways, Einstein did the very opposite of what the bots do: rather than take deep dives into existing knowledge and stitch it all together, he assumed nothing as he worked out such puzzles as what happens when you travel alongside a light beam, and how two people who experience two different realities can both be correct. (Without pharmaceutical enhancements, that is.)
My point is that originality is a critical component of intelligence. Another is honesty, and that’s where AI seems to have a serious problem: a “truthfulness” problem. ChatGPT often weaves falsehoods and truths together so tightly and subtly that they are nearly impossibleible to tell apart. Children need to learn that they can’t trust technology blindly. Computers may be faster than we are, but we are still better at judging the quality of information. So far, ChatGPT can’t tell sense from nonsense. It just pulls its information from the internet, because the internet is all it knows, and the internet is not the whole world of experience.
And bots apparently can’t even be trusted in their own world. Just the other night, 60 Minutes ran a piece in which correspondent Lesley Stahl watched her name being typed into ChatGPT. The results were riddled with errors, ones a quick Google search could have caught.
Sure, they will improve in time. As I recently wrote in a LinkedIn post about the arrival of the bots, AI technology today is where airplanes and automobiles were in 1908. Someday it will be astonishing. But we must never abandon the intellectual process and leave it to machines. It’s the only way we humans can “keep ’em honest.”
Today’s children need to know that this is why thinking for yourself is so important. As they think, they build up life experience and learn how to make judgments. Bots can’t do that. Remember: the A in AI stands for “artificial.”
Unless you slept through January 6th, 2021, you’re already aware of the dangers of disinformation. Our children must master the art of critical thinking from an early age and rely on their own beautiful, amazing minds. As the famous astronomer and science writer Carl Sagan once said, “The brain is like a muscle. When it is in use we feel very good. Understanding is joyous.”
John Grabowski is a San Francisco Bay Area writer specializing in tech—specifically AI and chatbots, real estate and real estate tech. He has worked in PR, television news and advertising. He is also the author of two novels and a collection of short fiction. His latest novel, Made in the U.S.A., will be published by Arbiter Press early next year.