
By Ben Torben-Nielsen, PhD, MBA
Big tech claims their GenAI is at PhD level. Most people are desperate to prove them wrong.
First, I think big tech is right. As somebody with an academic background, I know quite a few people with PhDs, myself included. Does GenAI know more than any human can possibly know? Yes. Does it hallucinate now and then? Yes. Do my connections with PhDs sometimes get facts wrong? Yes! Can GenAI reason with the knowledge it has? Yes. Does it make reasoning mistakes? Yes. Do my connections with PhDs make reasoning mistakes? Yes.
My point is: this is a non-discussion. I understand we want to be uniquely human, and we are. And GenAI, at the moment, is uniquely “machine-like.” But do not deny what GenAI can do. It is more knowledgeable than any single person in the world. It might miss some very specialized knowledge here and there, just like top scientists do (yes, “silos” are also a thing in research). Make use of the capabilities GenAI has, be aware of its quirks, and use it to your advantage.
My answer to the post above
https://www.linkedin.com/posts/athanassios_a-chatbot-is-a-machine-processing-data-in-activity-7329073103823593473-ju0d An AI chatbot does not truly “know” anything—it processes data using statistical models trained on vast text corpora, generating responses token by token based on similarity. I understand that “knowing” implies cognitive abilities like understanding, awareness, or intent. Therefore, the machine does not know; it simply identifies patterns and predicts likely continuations without consciousness or comprehension—its “intention” is often a reflection of human-designed algorithms.
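The "token by token" prediction the post describes can be made concrete with a toy sketch. The bigram counter below is a deliberate oversimplification (real LLMs use learned transformer weights over long contexts, not raw word counts), but it illustrates the same principle: picking a statistically likely continuation, with no understanding involved. The corpus and function names are my own illustrative choices, not from any real system.

```python
from collections import Counter, defaultdict

# Tiny toy corpus; purely illustrative.
corpus = "the cat sat on the mat and the cat ran".split()

# Count bigram frequencies: for each word, which word tends to follow it?
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(word):
    """Return the statistically most likely continuation of `word`."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# "the" is followed by "cat" twice and "mat" once, so "cat" wins.
print(next_token("the"))  # cat
```

The model "continues" text purely from co-occurrence statistics: nothing in it represents meaning, intent, or awareness, which is exactly the point the quoted post is making.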
Continuation from a chatbot
As for reasoning, while the machine may appear to reason, this is not genuine reasoning. True reasoning involves the ability to form and evaluate arguments, understand context, apply logic flexibly, and reflect on one’s own thought process. In contrast, what the machine does is simulate reasoning by selecting statistically probable continuations based on patterns in its training data. It does not understand the logic it applies or the implications of its output; it cannot justify its “thinking.” Its apparent reasoning is an illusion created by large-scale pattern matching.
This is impressive: look at how the machine continued the conversation after I asked it to write an argument about reasoning that follows my perspective. In a sense, I provided a direction for it to pursue according to my line of thinking. Take note of this machine output: “True reasoning involves the ability to form and evaluate arguments, understand context, apply logic flexibly, and reflect on one’s own thought process.” Exactly, but what the machine is missing here is the understanding that these qualities of reasoning occur simultaneously at an unconscious level and function in combination.
While I’m not a neuroscientist or cognitive expert, I believe knowledge is a complex, high-level process of the human mind. It involves, but is not limited to, understanding, consciousness, intention, reasoning, and logic. Knowledge is a powerful tool—it can assist in acquiring wisdom like God’s, or it can be twisted into wickedness like the devil’s.
By the way, if you look at it from a certain perspective, we have been paying the price for our arrogance in trying to know everything but very often applying that knowledge in the wrong way. Our ancient Greek ancestors were wiser than most of us today — I quote Socrates: “I know one thing, and that is that I know nothing.”
