Bestselling author and historian Yuval Noah Harari recently made grandiose claims about AI that reveal the faulty philosophical assumptions of many of our elites.
By Scott Ventureyra
Crisis Magazine
February 26, 2026
At the World Economic Forum in Davos this year, bestselling author and historian Yuval Noah Harari delivered a presentation to globalist elites on the future and power of AI. The boldest claim was that "the most important thing to know about AI is that it's not just another tool. It is an agent."
He then provided an analogy suggesting that while a knife is a tool for human uses, AI is "a knife that can decide by itself whether to cut salad or to commit murder." As if attaching agency to AI was not enough, he continued by attributing creativity to AI by explaining that it has the capability to "invent new kinds of knives as well as new kinds of music, medicine, and money" and that it even has the capability to "lie and manipulate." I believe that these bold claims form the foundation for his larger claim that if thinking is essentially the ordering of words, then AI is capable of thinking better than many humans. He asserted that AI will take over in anything related to language, including law, poetry, and religion.
Throughout his presentation, Harari made several leaps in logic: equating agency with automated output, reducing intelligence to linguistic modelling, suggesting that personhood should be based on complexity, and implying that generating incoherent or false information is the same as lying.
Harari's first claim, that AI is an agent, doesn't hold up to philosophical scrutiny. To properly assess this claim, it helps to clarify what philosophers have historically meant by "agency" and "action." A central part of this discussion is the difference between intrinsic and extrinsic teleology. Intrinsic teleology describes actions that originate from within and naturally progress toward a specific outcome, such as an acorn developing into a tree. Conversely, extrinsic teleology involves actions guided by an external objective or purpose, as seen with tools or artifacts; to use Harari's own example, a knife is used to cut vegetables or to murder someone. Moreover, one could even envision many other uses, such as the handle being used to soften meat.
A genuine agent is one that can act from intrinsic teleological principles toward self-determined goals. AI systems, on the other hand, operate solely on parameters established by human designers and are dependent upon external inputs of information, electricity, and hardware maintenance. As Oxford philosopher John Lennox has observed, so-called AI agency exists only within parameters preset by human programmers, raising doubts as to whether such behavior can reasonably be called agency at all. The knife analogy simply doesn't work: a knife, whatever its potential uses, remains a tool and cannot become an agent.
Similarly, a large language model does not gain agency because it can generate varied outputs, however complex they may be. In both examples, the causal source is external, dependent upon an outside intelligence for its use. As Catholic philosopher Peter Kreeft succinctly puts it, "There is no one there." Harari's definition of thinking is no less problematic. He argues that if thinking is simply "putting words in order," then AI is already capable of thinking better than many humans. However, this reduces thought to syntactic manipulation while ignoring meaning, intentionality, and understanding.
Lennox rightfully argues that AI systems are exceptionally good at arranging words because they are trained on vast amounts of human writings, but they do not understand the words they generate. There are no concepts being understood. There is zero awareness of truth or falsity. AI does not grasp the realities of what language refers to.
For instance, AI can produce the sentence "I exist," but it has no existential experience. It can just as easily produce the sentence "I do not exist." Both sentences are indistinguishable for the system in terms of truth or falsity. Artificial intelligence does not engage in genuine thought; rather, as its name implies, it replicates a limited aspect of human thought. It is both a quantitative and qualitative distinction. Human thought is semantic, embodied, willed, and oriented toward truth; AI output is probabilistic and, most importantly, indifferent to truth.
The most striking example comes in its handling of temporality. Unless a prompt explicitly delineates temporal context in precise terms, AI will struggle to distinguish accurately between past, present, and future events. It has no memory. It lacks an internal grasp of time and can only manipulate the linguistic markers relating to it.
A related example occurred when I asked several AI systems to relate Paula Cole's "I Don't Want to Wait," the theme song of Dawson's Creek, to the recent death of the show's star, actor James Van Der Beek; the outputs were temporally incoherent. The AI systems oscillated between denying, confirming, and contradicting the fact of his death. I spent several minutes presenting evidence in order to "convince" these systems that he did indeed die on February 11, 2026. This example further illustrates AI's lack of agency, will, and understanding of time, as well as its overall instability without continual human input, at least at this stage in its development.
Nearing the end of his presentation, Harari expands his argument by invoking the Johannine prologue. He states that "the Bible says in the beginning was the Word, and the Word was made flesh," before claiming that "the truth that can be expressed in words is not the absolute truth." This statement is self-refuting, since it expresses an absolute truth in words while simultaneously denying the possibility of such truth.
He then proceeds to describe history as a struggle between word and flesh, suggesting that AI will become the new "master of words" while human beings retreat into the realm of subjectivity and feelings. Again, this framing rests on the same reductionism that equates intelligence with linguistic output while also marginalizing embodiment. The irony and hubris should not escape us: he invokes the prologue of the Gospel of John, which is fundamentally about truth, light, and God's incarnation, while leading his listeners toward folly, darkness, and disembodiment.
Furthermore, his association of thought primarily with language ignores the important reality of the body, which leaves human distinctiveness to the realm of emotions and feeling while declaring that machines will inherit the domain of reason.
Consciousness is expressed through intentional action in the world, and this unity is most clearly manifested in the structure and function of the human hand. This is glaringly overlooked in Harari's philosophy, if we can even call it that. The hand is not a peripheral biological feature but a fundamental one to anthropological evolution. This is why humans, unlike any other species, have been able to use fire to advance technology throughout the ages. This is something I argue in my chapter "The Distinctiveness of the Human Person" in the book God's Grandeur: The Catholic Case for Intelligent Design, edited by Catholic biologist Ann Gauger.
The human hand, when directed by the human mind, enables exploration, construction, and symbolic expression. Through the coordinated activity of mind and hand, human beings design tools, harness fire, perform surgery, write books, and build the very computational systems that make artificial intelligence possible. The fit between the human mind and the human hand is fundamental to instantiating humanity's creative capacities within the physical world. For example, the written word originates from the human mind guiding the hand, whether carving hieroglyphs into stone or forming letters on parchment. Spoken language is also a physical act, arising from breath, the movement of vocal cords, and gestures.
By contrast, AI generates sound using external hardware and has no subjective experience of speaking or hearing. Thus, language is deeply rooted in embodied interaction with the world; words are shaped through direct perception and engagement with our environment. Robotics, meanwhile, has advanced much more slowly than AI and is still far from achieving the adaptable, sensitive manipulation that comes naturally to human hands. Take a look at this amusing compilation of AI robots malfunctioning.
Thus, rather than overcoming the distinction between word and flesh, AI presupposes it. The "word" processed by machines is dependent upon the embodied intelligence of human persons, whose conscious, manual engagement with reality gives language its meaning in the first place and, most importantly, its orientation toward truth, not to mention fashions machinery capable of AI processing.
Harari's claim that AI has learned to "lie and manipulate" introduces yet another misunderstanding. This again plays into his reductionism. Lying is not the production of a false statement but a moral act, one that presupposes knowledge of the truth and the deliberate intention to deceive another person. AI may process formal logical distinctions such as A and not-A, but it does not understand their truth as a knowing subject, nor can it violate that truth by purposely presenting one as the other, which is an act of the will.
What Harari is referring to is known as AI "hallucination," which is not deception but an outflow of information that may or may not correspond to reality (as the temporality examples above illustrate). AI does not know it is making a true or false claim; it is merely optimizing coherence, fluency, and user satisfaction, as its training parameters have designed it to do. Attributing lying to this phenomenon is anthropomorphism, the projection of moral agency where there is no will. I believe that the ultimate danger lies not in AI "lying" but in designers who may be disregarding truth, accountability, and care for the common good.
Even where Harari raises questions that merit serious engagement, his answers grant artificial intelligence far more power than it possesses and rest upon a philosophical framework shaped by atheology and scientific materialism. At the heart of his philosophy lies an inversion of the proper order of being. The Christian belief that humans are made in the image-likeness of God acts as a guarantor for humanity's ability for rationality, creativity, autonomy, and moral responsibility. AI, by contrast, is a product of human intelligence. Harari fails to acknowledge that the very existence of AI is expected in an intelligible, created universe where humans participate as creators, rather than being unintended by-products of a wholly naturalistic evolutionary process.
Harari, much like that ancient serpent, seeks to subvert humanity's purpose, as he has said elsewhere himself: " Human history began when men created gods. It will end when men become gods."
This does not reflect a technological forecast but a metaphysical reversal of Creator and creature. The real danger, I believe, lies not in machines attaining genuine personhood but in our forgetting our own. To assign agency, consciousness, and moral responsibility to objects is to fundamentally misunderstand those capacities. Intelligence belongs to a unified, self-aware subject ordered toward truth, not to systems that process vast amounts of data without really understanding what is being processed.
Shortly after the publication of my book On the Origin of Consciousness, a friend and fellow squash player who teaches high school enthusiastically recommended Harari's Sapiens: A Brief History of Humankind, praising it as a work of profound insight into the human consciousness. Having since examined Harari's broader corpus, I have come to the opposite conclusion: beneath the rhetorical scope lies a reductive anthropology that dissolves the very subject it seeks to explain. Some regard Harari as a prophetic voice warning humanity of technological perils. I find instead something more sinister: a framework that risks accelerating the very dehumanization it claims to diagnose.
Prior to the Epstein-related cancellation of the 2026 Science of Consciousness conference in Tucson, Arizona, my abstract was accepted for presentation. My paper explores what I termed the "Interaction Hypothesis": the idea that various informational structures, such as DNA, AI, or similar systems, might function as interfaces for immaterial intelligences, including angelic or demonic entities, to influence and interact within these systems. I believe that only under such a speculative account can we challenge Kreeft's affirmation of there being no conscious agent involved with AI.
Once understood properly, AI is simply a powerful tool; and its real value ultimately comes from its human designers and users. AI can assist us on many fronts, but it cannot replace conscious, intelligent humans endowed with truth-seeking capacities, creativity, memory, free will, and moral understanding. The connection between our minds and bodies hints at its transcendent source: God. If anything, Harari's thought reminds us that we ought to protect the truth about what it means to be human without discarding technological aids such as AI and robotics.
Because Harari has a wide influence, with millions of books sold worldwide, and because many people are being led astray by similar philosophies, I have considered authoring a book myself, or editing a volume, that critiques his work on multiple fronts. Interested authors may reach out through my website below.