Opinion by David Lowery at The Hill – definitely worth a read.
Ursula K. Le Guin once wrote, “There is no right answer to the wrong question.” And while AI might struggle to understand the quip, we human readers get the point immediately. To solve a problem, you have to focus on what really matters about it.
That’s the big problem with the ongoing debate, most recently joined by Professor Nicholas Creel, over whether artificial intelligence models “learn” the same way humans do.
The simple answer, spelled out well in Erik J. Larson’s book “The Myth of Artificial Intelligence,” is that they don’t. They can’t handle or generate novelty, they become stymied when forced to operate with uncertainty or incomplete information, and they lack empathy and any real capacity for understanding. They can copy and exploit almost infinite volumes of data, but they cannot extrapolate or intuit new truths from mere scraps of information the way humans can.
I asked Microsoft’s AI, Copilot, “Does AI reason in the same way humans reason?” This is the answer I received:
“AI relies on large datasets and algorithms to make decisions and predictions. It processes information based on patterns and statistical analysis. AI follows predefined rules and models to arrive at conclusions. It doesn’t have intuition or emotions influencing its decisions. AI can learn from data through techniques like machine learning, but this learning is based on mathematical models and not personal experiences.”
As the Human Artistry Campaign’s Moiya McTier has explained, real creativity flows from far more than crunching big data sets to pull out patterns and connections. It “is the product of lived experience and grows organically from the culture, geography, family, and moments that shape us as individuals.”