Pope Leo XIV Warns of "control of algorithmic and artificial intelligence systems capable of subtly guiding behaviors and even rewriting human history" in Communications Day Message - FULL TEXT

FULL TEXT - MESSAGE OF HIS HOLINESS POPE LEO XIV
FOR THE 60TH WORLD COMMUNICATIONS DAY
_______________________
Preserving human voices and faces
Dear brothers and sisters!
The face and voice are unique, distinctive features of each person; they manifest an unrepeatable identity and are the constitutive element of every encounter. The ancients knew this well. To denote the human person, the ancient Greeks used the word for "face" (prósōpon), which etymologically indicates that which stands before the gaze, the place of presence and relationship. The Latin term persona (from per-sonare) instead evokes sound: not just any sound, but someone's unmistakable voice.
Face and voice are sacred. They were given to us by God, who created us in his image and likeness, calling us to life with the Word he himself spoke to us; a Word that first resounded throughout the centuries in the voices of the prophets, then became flesh in the fullness of time. This Word—this communication that God makes of himself—we have also been able to hear and see directly (cf. 1 Jn 1:1-3), because it made itself known in the voice and Face of Jesus, the Son of God.
From the moment of his creation, God wanted man as his interlocutor and, as Saint Gregory of Nyssa says, [1] he imprinted on his face a reflection of divine love, so that he could fully live his humanity through love. Preserving human faces and voices therefore means preserving this seal, this indelible reflection of God's love. We are not a species made up of biochemical algorithms, defined in advance. Each of us has an irreplaceable and inimitable vocation that emerges from life and manifests itself precisely in communication with others.
Digital technology, if we fail to protect ourselves, risks radically altering some of the fundamental pillars of human civilization, which we sometimes take for granted. By simulating human voices and faces, wisdom and knowledge, awareness and responsibility, empathy and friendship, the systems known as artificial intelligence not only interfere with information ecosystems but also invade the deepest level of communication, that of human-to-human relationships.
The challenge, therefore, is not technological, but anthropological. Protecting faces and voices ultimately means protecting ourselves. Embracing the opportunities offered by digital technology and artificial intelligence with courage, determination, and discernment does not mean hiding critical issues, opacities, and risks from ourselves.
Don't give up on your own thinking
There has long been abundant evidence that algorithms designed to maximize engagement on social media—profitable for the platforms—reward quick emotional reactions and penalize slower forms of human expression, such as the effort to understand and reflect. By confining groups of people in bubbles of easy consensus and easy outrage, these algorithms weaken the ability to listen and think critically, and increase social polarization.
Added to this is a naively uncritical reliance on artificial intelligence as an omniscient "friend," the dispenser of all information, the repository of all memories, the "oracle" of all advice. All of this can further erode our ability to think analytically and creatively, to understand meaning, and to distinguish between syntax and semantics.
While AI can provide support and assistance in managing communication tasks, shirking the effort of thinking for ourselves and settling for artificial statistical compilation risks eroding our cognitive, emotional, and communicative abilities in the long run.
In recent years, artificial intelligence systems have increasingly taken control of the production of texts, music, and videos. Much of the human creative industry is thus at risk of being dismantled and replaced with the label "Powered by AI," transforming people into mere passive consumers of unthought thoughts: anonymous, unauthored, and unloved products. Meanwhile, the masterpieces of human genius in music, art, and literature are being reduced to mere training grounds for machines.
The question we care about, however, is not what the machine can or will be able to do, but what we can and will be able to do, growing in humanity and knowledge, with the wise use of such powerful tools at our service. Humans have always been tempted to appropriate the fruits of knowledge without the effort of involvement, research, and personal responsibility. Renouncing the creative process and handing over our mental functions and imagination to machines, however, means burying the talents we have received for the purpose of growing as people in relation to God and others. It means hiding our faces and silencing our voices.
To Be or to Pretend: Simulating Relationships and Reality
As we scroll through our information feeds, it becomes increasingly difficult to determine whether we are interacting with other humans or with "bots" or "virtual influencers." The opaque interventions of these automated agents influence public debates and people's choices. Chatbots based on large language models (LLMs) in particular are proving surprisingly effective at covert persuasion, through continuous optimization of personalized interactions. The dialogic, adaptive, and mimetic structure of these language models is capable of mimicking human feelings and thus simulating a relationship. This anthropomorphization, which can even be amusing, is also deceptive, especially for the most vulnerable. Chatbots made excessively "affectionate," in addition to being always present and available, can become hidden architects of our emotional states and thus invade and occupy people's private spheres.
Technology that exploits our need for connection can not only have painful consequences for the fate of individuals, but can also damage the social, cultural, and political fabric of societies. This happens when we replace relationships with others with those with AIs trained to catalog our thoughts and thus build a world of mirrors around us, where everything is made "in our image and likeness." In this way, we allow ourselves to be robbed of the opportunity to encounter the other, who is always different from us, and with whom we can and must learn to engage. Without embracing otherness, there can be neither relationship nor friendship.
Another major challenge these emerging systems pose is that of bias, which leads to the acquisition and transmission of an altered perception of reality. AI models are shaped by the worldview of those who build them and can, in turn, impose ways of thinking by replicating the stereotypes and biases present in the data they draw on. The lack of transparency in algorithm design, coupled with inadequate social representation of data, tends to trap us in networks that manipulate our thoughts and perpetuate and deepen existing social inequalities and injustices.
The risk is great. The power of simulation is such that AI can even deceive us by fabricating parallel "realities," appropriating our faces and voices. We are immersed in a multidimensionality, where it is becoming increasingly difficult to distinguish reality from fiction.
Added to this is the problem of inaccuracy. Systems that pass off statistical probability as knowledge are in fact, at best, offering us approximations of the truth, which are sometimes outright "hallucinations." A lack of source verification, combined with the crisis in field journalism, which requires constant information gathering and verification at the scene of events, can create even more fertile ground for misinformation, causing a growing sense of mistrust, confusion, and insecurity.
A possible alliance
Behind this enormous invisible force that involves us all stand only a handful of companies, those whose founders were recently presented as the "person of the year 2025," namely the architects of artificial intelligence. This raises significant concerns about the oligopolistic control of algorithmic and artificial intelligence systems capable of subtly guiding behaviors and even rewriting human history—including the history of the Church—often without our being truly aware of it.
The challenge before us is not to stop digital innovation, but to guide it, to be aware of its ambivalent nature. It's up to each of us to raise our voices in defense of humankind, so that these tools can truly be integrated by us as allies.
This alliance is possible, but it needs to be based on three pillars: responsibility, cooperation, and education.
First, responsibility. Depending on the role, it can be defined as honesty, transparency, courage, vision, the duty to share knowledge, or the right to be informed. But in general, no one can escape their responsibility for the future we are building.
For those at the helm of online platforms, this means ensuring that their business strategies are guided not only by profit maximization, but also by a far-sighted vision that takes into account the common good, just as each of them cares about the well-being of their children.
AI model creators and developers are required to be transparent and socially accountable regarding the design principles and moderation systems underlying their algorithms and developed models, in order to foster informed consent among users.
The same responsibility is also required of national legislators and supranational regulators, who are responsible for ensuring respect for human dignity. Appropriate regulation can protect people from emotional attachments to chatbots and contain the spread of false, manipulative, or misleading content, preserving the integrity of information against deceptive simulation.
Media and communications companies, for their part, cannot allow algorithms intent on winning the battle for a few extra seconds of attention at all costs to prevail over their professional values, which are focused on the pursuit of truth. Public trust is earned through accuracy and transparency, not by chasing any kind of engagement. Content generated or manipulated by AI must be clearly flagged and distinguished from content created by humans. The authorship and sovereign ownership of the work of journalists and other content creators must be protected. Information is a public good. A constructive and meaningful public service is not based on opacity, but on the transparency of sources, the inclusion of stakeholders, and a high standard of quality.
We are all called to cooperate. No sector can address the challenge of guiding digital innovation and AI governance alone. Safeguard mechanisms must therefore be created. All stakeholders—from the tech industry to regulators, from creative businesses to academia, from artists to journalists and educators—must be involved in building and implementing informed and responsible digital citizenship.
This is what education aims for: to increase our personal capacity for critical reflection, to evaluate the reliability of sources and the possible interests behind the selection of information that reaches us, to understand the psychological mechanisms that activate them, to allow our families, communities and associations to develop practical criteria for a healthier and more responsible culture of communication.
Precisely for this reason, it is increasingly urgent to introduce media, information, and AI literacy into education systems at all levels, a practice that some civil institutions are already promoting. As Catholics, we can and must contribute to ensuring that people—especially young people—acquire the capacity for critical thinking and grow in spiritual freedom. This literacy should also be integrated into broader lifelong education initiatives, reaching the elderly and marginalized members of society, who often feel excluded and powerless in the face of rapid technological change.
Media, information, and AI literacy will help everyone avoid adapting to the anthropomorphizing drift of these systems and instead treat them as tools, always seeking external validation of the sources provided by AI systems—which may be inaccurate or incorrect—and protecting their privacy and data by understanding security parameters and contestation options. It is important to educate and be educated about how to use AI intentionally, and in this context to protect one's image (photo and audio), face, and voice, so that they cannot be used to create harmful content and enable behaviors such as digital fraud, cyberbullying, and deepfakes that violate people's privacy and intimacy without their consent. Just as the industrial revolution required basic literacy to enable people to respond to its novelties, so too the digital revolution requires digital literacy (along with humanistic and cultural education) to understand how algorithms shape our perception of reality, how AI biases work, what mechanisms determine the appearance of certain content in our information flows (feeds), and what the assumptions and economic models of the AI economy are and how they can change.
We need the face and voice to once again express the person. We need to cherish the gift of communication as the deepest truth of humanity, toward which we must also orient every technological innovation.
In offering these reflections, I thank all those who are working towards the goals outlined here and I heartily bless all those who work for the common good through the media.
From the Vatican, January 24, 2026, memorial of Saint Francis de Sales.
LEO PP. XIV
______________________________
[1] "The fact of being created in the image of God means that man, from the moment of his creation, has been imprinted with a royal character [...]. God is love and the source of love: the divine Creator has also placed this trait on our face, so that through love – a reflection of divine love – the human being may recognize and manifest the dignity of his nature and his likeness to his Creator" (cf. St. Gregory of Nyssa, The Creation of Man: PG 44, 137).