Deus in machina? (God in the machine?)
Even before we arrive at the marriage of man and machines, it is clear that ‘artificial intelligence’ means the divorce of intelligence from life.
Mind divorcing life?
It is the eleventh hour and the fifty-ninth minute, with the guillotine of Time about to come down on the question-of-questions which has been on its knees begging to be asked for forty years. That question might manifest in any of various fashions, depending on the insight or realism of the asker. ‘Does humanity have a future?’ is one form it might take; ‘What shall we do when we don’t have to do anything?’ is another. ‘Whither humanity?’ might be a different way of couching much the same idea, this one being almost instantly scratched out and replaced by another, more ominous version: ‘Whether humanity?’
For four decades, these questions have been left unasked, or at least not asked in rooms with a proscenium arch opening on to the public square. A long time ago — the 1980s, if you can believe it — these questions were tentatively punted, but as abstract and slightly amusing conundrums from an impossibly distant future, in which hard problems like ‘what to do with leisure time when a robot is doing all your work’ taxed the minds of the great philosophers and the public house bores, speaking of the powers and dangers of the infallible silicon chip. In a short time, however, the philosophers fell into a mysterious silence in respect of these subjects, and that generation of pub bores fell victim to wet brains, to be replaced by a generation for which chips always came with fish wrapped in yesterday’s newspaper (for the purpose of soakage). And those newspapers, too, ceased to ask those once so-pressing questions. My theory is that it was prosperity — or pseudo-prosperity — wot did it: the minds and tongues of the world bought off with sweet things, so the real players could get on with arranging their own futures as a precursor to arranging everyone else’s, and for the rest of time.
But now, at less than a second to midnight, the sweet baits and bribes have disappeared and the question, in one form or another, is on everyone’s lips: ‘Will there be life after the Singularity?’; ‘What will a world run by robots look like?’; ‘How obedient is artificial intelligence?’; ‘Is there something fishy about brain chips?’; ‘What is information and how does it relate to intelligence?’; ‘What do we need humans for?’
Some of these questions are pored over in the latest book by Yuval Noah Harari, the much and somewhat unfairly derided in-house philosophical advisor to the World Economic Forum, that odd and seemingly wanna-be democratic entity that no one ever voted for, but which presses on regardless with trying to organise our lives and our futures. His book, published late last year, is called Nexus: A Brief History of Information Networks from the Stone Age to AI.
To be sure, Harari (author of the best-selling books Sapiens: A Brief History of Humankind, and Homo Deus: A Brief History of Tomorrow), sometimes does himself no favours with his habit of coming out with all kinds of click-baiting statements, like the time he described humans as ‘hackable animals’, or his elaboration in Homo Deus upon perhaps the most ominous and pressing of the above-mentioned questions: ‘What do we need humans for?’ (And no, he didn’t say who the ‘we’ is.)
Since we do not know how the job market would look in 2030 or 2040, already today we have no idea what to teach our kids. Most of what they currently learn at school will probably be irrelevant by the time they are forty. Traditionally, life has been divided into two main parts: a period of learning followed by a period of working. Very soon this traditional model will become utterly obsolete, and the only way for humans to stay in the game will be to keep learning throughout their lives, and to reinvent themselves repeatedly. Many if not most humans may be unable to do so. The coming technological bonanza will probably make it feasible to feed and support these useless masses even without any effort from their side. But what will keep them occupied and content? People must do something, or they go crazy. What will they do all day? One answer might be drugs and computer games. Unnecessary people might spend increasing amounts of time within 3D virtual-reality worlds that would provide them with far more excitement and emotional engagement than the drab reality outside. Yet such a development would deal a mortal blow to the liberal belief in the sacredness of human life and of human experiences. What’s so sacred about useless bums who pass their days devouring artificial experiences in La La Land?
This may convey why, though intrigued by the title, I was somewhat chary of reading Nexus: A Brief History of Information Networks from the Stone Age to AI. To get around the problem, I created for myself a New Year resolution for 2025, whereby I require myself to read at least one book per month by someone I profoundly disagree with, anointing Harari’s the first of these to be tackled. I justified to myself the amount of time I would spend reading the book (over 400 pages of it) on the basis that we who resist the current tyrannies need to understand what our opponents are saying, especially when they speak about the future. We also need to understand how they might be twisting or distorting reality in order to keep the confused confused and the addled addled.
Anytime I mention Yuval Noah Harari, I receive an inundation of knowingly condescending or quasi-abusive emails from people (on ‘my side’ of most arguments) who explain to me — apparently pointedly — that he is a satanist. The implication, as far as I can gather, is that it is wrong to refer to or write about him other than to say as much and leave it at that. This seems to be a rather emblematic example of what is amiss with our Resistance: it sets more value on retaining an appearance of virtue than on figuring things out and explaining them. In truth, Harari is a very interesting writer, and a tolerably competent thinker. In the past, I have found him, generally speaking, to be ‘interesting’ for all the wrong reasons, and tolerable because reading what he writes can sometimes alert you to what is going to happen next. As the house philosopher of the motherWEFfers, he merits our attention, and frequently rewards it. But Nexus took me by surprise.
In fact, Nexus is not at all as you would expect it to be from the above referenced quote from Homo Deus. It is, in general — at least insofar as it addresses its substantive topic — measured and impressively researched. There are many valuable things in Nexus, though also a number of very irritating biases; but on balance I would call it an immensely worthwhile book, and recommend anyone to read it, albeit with all antennae switched on. Normally, biases would be likely to distort a book to the point of disqualifying it from any attribution of value, but that is not the case here, as I shall try to outline.
Nexus: A Brief History of Information Networks from the Stone Age to AI is a pretty ambitious project, but I would say that Harari steps up to the challenge, although I also feel the need to note that he has behind him an eye-watering array of researchers, including what he refers to as his Sapienship ‘team’ and (separately, it appears) his Sapiens ‘in-house research team’ (fourteen persons altogether). Unsurprisingly, this gives the book a rather dry and antiseptic flavour, depriving it of a strong sense of an authorial personality, which renders it a bit of a plod.
He defines the mission of the book as follows:
[B]y expanding our horizons to look at how information networks developed over thousands of years, I believe it is possible to glean some insight into what we’re living through today.
His concept of information repudiates the notion that it is concerned with conveying truth, holding that information through history has been concerned primarily with creating networks and communities. He instances various examples, including ‘holy books’ (the Bible, for example) which led to the formation of enormous communities spread throughout the world. He dismisses what he calls ‘the naive view of information’, which holds that problems of propaganda and disinformation can be solved by throwing further information and commentary at the subject to hand. There are limits, he believes, to the usefulness of commentary or new facts.
The chief and recurring irritant of the book is its uninformed prejudice concerning populism, which he accuses of being interested solely in power as a metric of political action, and of nurturing a belief that truth does not exist. This is one of several straw men which he uses throughout the book to signal his allegiances, though not necessarily to a degree that skews the book’s ultimate meanings and usefulness in respect of its substantive subject-matter. Of course, the idea that populists do not believe in an absolute truth is the purest of projections, since it is Woke adherents like Harari who hold most resolutely to such notions. It was, in fact, an earlier version of Woke — Cultural Marxism — that introduced relativism to modern culture, using the instrument of postmodernism.
Postmodernism is primarily an intellectual current that has gained importance since the 1970s, and which opposes claims of the universality of reason. By this logic, the use of reason is not universal, but bound to a specific culture, religion, ethnicity, gender, sexual orientation, et cetera. Thus, the story goes, by the imposition of a purely subjective — and in that sense ‘random’ — interpretation of facts and reality, a particular cohort or group may impose its will and version of reality on a much broader section of humanity, or even on the whole. Ergo, what is presented as an objective interpretation of reality is actually a set of pretexts for the claiming of power. By this logic, universal knowledge is impossible, and its assertion a fraud. In its stead, the postmodernists have succeeded in insinuating a fractured version of reality in which there is no core claim to logic, reason or even fact, which is what has delivered us to the present state of chaos. This, almost precisely, is a summary of what populists oppose.
Among the consequences and effects of this ideology is the creation of multi-tier societies, intellectually and semantically, whereby the application of laws can occur only in a highly individualised fashion. Everyone is equal, but some are more equal than others. Hence, the society extends preferential treatment to people with ‘protected’ characteristics (‘minorities’ and migrants, for example), and imposes punishments for even minor slights against these categories on those lacking such characteristics.
By the same logic, in supposedly rational disciplines like science, what matters is not facts but expertise. For example, in dealing briefly and obliquely with the recently highly germane question of the suppression of legitimate scientific viewpoints during the Covid scamdemic, Harari follows up an interesting reflection on the witch-hunts of places such as Salem with the following gaslighting of his straw man:
Populist critics of scientific institutions may counter that, in fact, these institutions use their power to stifle unorthodox views and launch their own witch hunts against dissenters. It is certainly true that scholars who oppose the current orthodox view of their discipline sometimes experience negative consequences: having articles rejected or research grants denied, facing nasty ad hominem attacks, and in rare cases even getting fired from their jobs. I do not wish to belittle the suffering such things cause, but it is still a far cry from being physically tortured and burned at the stake.
This is a typical example of the relativistic what-aboutery of Woke activists, justifying great wrongs in the present with greater ones in the past. Such tactics destroy the role of reasoning in public culture, transferring effective power from the people as a whole to people who claim that they (or their forebears) have been wronged or injured by ideologies of the past, even in far distant countries. This leveraging of the latent pseudo-moral power of ‘minorities’ is really an instrument to attack the prevailing cultural forms of the West, such as Christianity, ‘patriarchy’, ‘heteronormality’, and the moral claims of Caucasians. At the core of the logic is the Marxian idea that it was only by cheating humanity more generally that Western civilisation came into being and endured for three millennia. Interestingly, this claim originates, exclusively, from within Western civilisation itself.
The import of this trick as utilised within Harari’s book is in enabling him to remain within the official narratives on issues like Covid, climate change and the generality of Woke nonsense. Throughout the book, Harari implicitly elides or denies the hippopotamus in the hospital waiting room: that the Covid coup was a play for totalitarian control, and that by far the greatest threat to humanity at this moment is that AI will become weaponised to the requirements of that project.
As an indisputably intelligent man, Harari must know that these arguments amount to nothing but misdirecting nonsense, but it appears that he also knows where to locate the line between truth-telling and professional hara-kiri, and he is adept at staying on the ‘right’ (i.e., broadly the left) side of the line.
He also claims that populism is a ‘sordid view of humanity’, which renders itself appealing by reducing all interactions to power struggles and oversimplifying reality, for example by insisting that all political institutions are corrupt. ‘Anything that happens,’ he pointedly writes, ‘ — even a pandemic — is about elites pursuing power.’ Of course, this is merely the appropriation of a cynical contemporary use of the word ‘populist’ to demonise opponents and shore up corrupt regimes which have long outlived their popularity or moral entitlement to govern. For what is democracy if it is not, in fact, the will of the people?
Nonetheless, Harari's stance and material concerning AI display none of the bias exhibited in other areas. He has been pursuing this subject for the past decade, since he began working on Homo Deus, and is by now to be seen as an undoubted authority on the nature and direction of AI. I thought I was pretty up to speed, having spent several years delving into books and academic papers on the subject, but I still came across sections of Nexus that shocked me and had me retrace my steps to reconsider what I had believed myself to understand.
Nexus takes a highly sceptical position in respect of the drift of things technological, and I have to confess to finding his commentary on AI, which is the primary focus of the book, both impressive and useful. I also did not find that the more irritating aspects of the book in any significant way diminished this aspect, since the AI element comprises mostly information and dedicated analysis and can be read against multiple ideological backdrops.
Of course, his bias also leads him to soft-focus certain historical aspects, such as his pushing of the spurious notion that it was preferable that advances in these technological areas be pursued by private companies rather than governments. I utterly disagree with this position, regarding the failure to insist on a democratic proprietorship of these technologies as one of the great follies of the past three to four decades, when the general public was rendered somnambulant by prosperity and the media fell more or less completely silent. This trend can be traced back to the early 1990s, when questions concerning the nature of the future which had already begun to be pursued in the mid-1980s — the future of work, income, meaning and identity in human societies in a context where meaningful work might reduce beyond what was at the time comprehensible — were quietly abandoned.
In summary, then, the book is occasionally most annoying, but overall very engaging. Despite lacking a strong authorial personality, it is competently written and constructed, and contains a wealth of information and understanding. There are probably books about AI out there that are better than this one, but I doubt if any of them is as usefully directed at the general interest reader seeking an overview of the AI question, perhaps with a view to deeper exploration.
Whatever his biases in other matters, Harari takes the risks of AI with extreme seriousness and sets them out with precision and thoroughness. He does not dismiss as Jeremiahs those who warn about civilisational collapse arising from the mishandling of technological change. Sure, he says, ‘Luddite doomsday did not come to pass’ — the machines brought great blessing and improvements — but nor is it the case that the Luddites were wrong about everything. There are, he writes, ‘very good reasons to fear powerful new technologies’, not because the technology is inherently bad, ‘but because it takes time for humans to get to use it wisely.’
In talking to politicians about AI since the publication of Homo Deus, he has been ‘distressed’ by the sanguine pictures they painted, based on historical experience of early technologies — print and radio for example — which defied their earliest critics and became beneficial and harmless. Even those who had concerns, he said, expressed hopes that humankind would ‘muddle through’. He finds neither approach reassuring, regarding comparisons with previous stages of history ‘distressing’, especially coming from people in positions of power, ‘whose historical vision is defining our future.’ Such comparisons, he elaborates, ‘underestimate both the unprecedented nature of the AI revolution and the negative aspects of previous revolutions.’ The print revolution spawned witch-hunts and religious wars as well as scientific developments; newspapers and radio ‘were exploited by totalitarian regimes as well as by democracies.’ The Industrial Revolution led to catastrophic experiments like imperialism and Nazism. ‘If the AI revolution led us to similar kinds of experiments, can we really be sure that we will muddle through again?’
One of the problems, he outlines, is that many of the technologies being developed today are so complex as to exceed the capacity of the vast majority of those potentially affected to understand them. This includes neural networks, which are capable of outpacing the understanding of their own creators and which, if accorded the power to deliver judgements on critical aspects of human existence, may cause enormous hurt and confusion.
He writes:
As the field of machine learning developed, algorithms gained more independence. The fundamental principle of machine learning is that algorithms can teach themselves new things by interacting with the world, just as humans do, thereby producing a fully fledged artificial intelligence. The terminology is not always consistent but, generally speaking, for something to be acknowledged as an AI, it needs the capacity to learn new things by itself, rather than just follow the instructions of its original human creators.
The outcomes of AI calculations, analysis or predictions are becoming increasingly opaque. ‘Engineers can’t peer beneath the hood and easily explain in granular detail what caused something to happen.’ The prospect of ‘unfathomable AI intelligence’, he warns, threatens to undermine democracy. People affected by decisions which radically impact their lives are entitled to an explanation of the reasoning behind them, but this is becoming increasingly impossible, as the logic of machines elbows out the human-centred logics that ‘trained’ them. He outlines a hypothetical example of an algorithm which decides whether a bank client gets a loan or not, which incorporates factors such as the kind of phone on which the application was sent and how much battery life it had left. By comparing such details with previous applicants, algorithms can apply probability theory as part of the decision-making process, and the sum of such negatives might well overrule the applicant’s blemish-free banking record. This is an example of how what he calls ‘alien intelligence’ might utterly alter the way corporate and official decisions are made.
‘Traditionally,’ he writes, ‘AI has been an abbreviation for “artificial intelligence”. But . . . it is perhaps better to think of it as “alien intelligence”. As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien . . . AI isn’t moving towards human-level intelligence. It is evolving an entirely different type of intelligence.’ For the moment, he says, computers tend to reflect the human ‘family’ in the sense of universal human values; but this may change, as the AI becomes more replete with knowledge and connections. He is fearful that in the future the world may give rise to multiple ‘hostile networks’ as the technology supersedes the ability of humans to control it. Information, he stresses throughout, is ‘not about truth’, but about connections. By the same token, information revolutions do not uncover truth, but create new political structures, economic models and cultural norms. ‘Since the current information revolution is more momentous than any previous information revolution, it is likely to create unprecedented realities on an unprecedented scale,’ he elaborates. When we write new computer code, he says, we are remaking politics, society and culture, and so had better have a good grasp of these concepts before we start inputting things into computers. It is important that we perceive this now, he stresses, ‘while we are still in control’, though again he neglects to specify what kind of ‘we’ he has in mind.
Here he zeroes in on a theory of war propounded by Clausewitz, the Prussian general of the Napoleonic wars, who declared that successful prosecution of a war always comes down to deciding ‘ultimate goals’. The ultimate danger Harari perceives with the algorithm, and accordingly with AI, is that there is no way of pinning this down in a manner which will not allow the AI to make decisions that its programmers did not envisage. This, at least, is how I understand his theory. ‘Tech engineers and executives who rush to develop AI are making a huge mistake if they think there is a rational way to tell an AI what its ultimate goal should be’, he warns. ‘They should learn from the bitter experience of generations of philosophers who tried to define ultimate goals and failed.’ He defines the problem thusly: ‘Is there a way of telling computers what they shall care about, without getting bogged down in intersubjective myths?’ Here he enters into a deeper discussion about the possibilities of deontology (from the Greek deon, ‘duty’), the doctrine that morality consists in adherence to universal rules. In theory, he postulates, defining an ultimate goal for AIs might be achieved by the adumbration of relevant rules, and writing them into the coding, so that, while obviating the necessity for defining an ultimate goal, we might willy-nilly turn computers into ‘a force for good’. But Harari doubts if this is possible, on account of — the example he gives — the difficulty of overcoming the historical human tendency to simply exclude certain groups from their definitions of protected categories, i.e. the ‘out-group’ in the groupthink paradigm, invoked as a means of galvanising the in-group in its collective certainties.
This danger is already present in history and its documents, and, being built into the available data, would be difficult to remove from the algorithmic equations, which are already showing signs of built-in biases and prejudices absorbed from the data being gobbled up by hungry AIs. Harari asks why a computer, instructed to ‘Do unto others what you would have them do unto you’, should be concerned about ‘killing organisms like humans’. It is an economical way of making the point, but it raises a vital question having to do with the deep ambiguities and traps of language.
Among the many dangers Harari perceives AI posing to democracy is what he calls ‘digital anarchy’. This he identifies as existing on the unclear fault-line between the free conversation which is essential to democracy and a slipping into anarchy. As already stated, he does not believe in what he terms ‘the naive view of information’, i.e. that all problems arising in democratic discourse can be solved by dumping more information (and commentary) into the equation of dispute. A digitalised public discourse, he intimates, might have no means of arriving at a legitimate decision that would be accepted even by those who lose the argument. This, of course, would be even more emphatically a danger in situations where, at the cut-off point of a discussion, a decision was being made, under opaque conditions, by an AI. In the past, such processes of determination were assisted by old media, like newspapers and radio, which helped to draw a line under the discussion and define a consensus of sorts, but he suggests that this mechanism has already been undermined by social media. And if we are unable to agree on a referee or a cut-off point in the debate, how on earth will we agree on the substantive issues in a given discussion? ‘If no agreement is reached on how to conduct the public discussion and how to reach decisions, the result is anarchy rather than democracy,’ he argues. Here, too, he describes the chaos capable of being created by bots, which he finds ‘particularly alarming’. These conditions, he believes, will represent a particular difficulty for ‘large-scale’ democracies.
Here we come to the core paradox concerning technological progress and democracy. Since democracy is essentially a conversation, large-scale democracies (such as the United States) were, in bygone days, impossible in the absence of large-scale communications. The advent of mass media in the twentieth century resulted in a brief flowering of democratic values throughout Western civilisation, and indeed, though less consistently, elsewhere. ‘Now, ironically’, Harari writes, ‘democracy may prove impossible because information technology is becoming too sophisticated.’ If this happens, he says, it is more likely that it will arise from a human failure to regulate the technology wisely than from any inevitable failure of the technology itself.
In concluding his argument, Harari zeroes in on the likely reasons why such human failures might well lead us towards disaster. Why, he wonders rhetorically, do ‘we’ continue producing potentially dangerous technologies despite being unsure that we can control them and avoid them destroying the world and ourselves? ‘Does something in our nature compel us to go down the path of self-destruction?’ This, he explains, has been the purpose of his exploration of information networks through history, the outcome of which has been the message that, due to their ‘privileging of order over truth’, these information networks ‘have often produced a lot of power but little wisdom’. The more efficient a system of information dispersal, he argues, the greater its risk of causing harm, and therefore the greater the necessity for its system to be regulated by self-correcting mechanisms. AI-generated information networks could, he warns, ‘deliver immense power into the hands of a twenty-first century Stalin, and it would be foolhardy to assume that an AI-enhanced totalitarian regime would self-destruct before it could wreak havoc on Western civilisation. Just as the law of the jungle is a myth, so also is the idea that the arc of history bends towards justice. History is a radically open arc, one that can bend in many directions, and reach very different destinations.’
Although throughout the book Professor Harari walks us through a fascinating series of arguments, it is hard to avoid intermittently noticing that he is ignoring the hippopotamus in the lecture hall, this being the very strong possibility that much of what he postulates may be already in train, albeit not from the direction in which he persistently urges us to look. His distorted perspective on what he calls ‘populism’ seems to block him from perceiving that these dangers have, in the past five years, erupted in our world not as a consequence of the actions of populists, nor even of central-casting dictators, but of the apparently tame and harmless messenger-boy ‘liberal’ politicians who have been administering our democracies in the former Free World. It is hard to divine from the way he describes things whether this myopia stems from calculation or disingenuousness, though the undoubted evidence of Harari’s immense intelligence would seem to rule out unwitting obtuseness. There is in this book, therefore, a strong hint of propaganda directed at maintaining the status quo which in the past five years has delivered us the Covid coup, the Ukraine debacle, the climate change scam and the myriad Woke debacles — all boxes which Harari appears to mark with a tick rather than an X or a question mark.
This might be incidental save for one deeply worrying aspect of the book.
It is clear that, gravely as he may warn of the Silicon nightmare that may lie ahead, Professor Harari is an ally of the corporate behemoths of Silicon Valley when it comes to questions of ownership and control of tech possibilities. This is detectable for the most part osmotically, in his failure to address the issue of why technology has passed all but completely and irrevocably out of the hands of the people or their public representatives, something that I have remarked upon previously as occurring from the 1990s onwards, when the entire discussion about the future drift of developing technologies contrived to go underground and remain there for twenty-five years.
He admits as much explicitly in the following passage:
Just as in the early nineteenth century the effort to build railways was pioneered by private entrepreneurs, so in the early twenty-first century private corporations were the initial main competitors in the AI race. The executives of Google, Facebook, Alibaba and Baidu saw the value of recognising cat images before the presidents and generals did.
This rather blasé dismissal of the central questions of how these matters might have been governed and regulated seems to ignore what appears to have been a deliberate strategy to remove these questions from public discussion, issues I dealt with in my January 2024 series of articles, titled The Empty Raincoat.
Near the end, Harari describes his worst fears in a bracing and emphatic fashion:
As far as we know today, apes, rats and the other organic animals of planet Earth may be the only conscious entities in the entire universe. We have now created non-conscious but very powerful alien intelligence. If we mishandle it, AI might extinguish not only the human dominion on earth but the light of consciousness itself, turning the universe into a realm of utter darkness. It is our responsibility to prevent this.
This, for all my disagreement with some of the incidentals, I take as the core message of Nexus, one which every sentient human being must wish fervently to sign up to.
One of the difficulties with Professor Harari’s argument about AI resides in the fact of his atheism, which places limits on his capacity to speak about certain fundamentals of human nature and existence and the kinds of societies which have emerged from these. In particular, in the context in which we now find ourselves, there is the question not so much of ‘ultimate goals’ but of the ‘ultimate meaning and destination’ of humanity and the individual human being, a subject Harari avoids or leaves to be encapsulated in ‘human rights’ tropes, themselves the corrupted descendants of religious ideas, and now in ribbons following five years of soft totalitarianism throughout the former Free World.
In an interview about three years ago, Harari, an atheist, outlined the kind of future he foresaw for the spiritual life of mankind:
In terms of ideas, in terms of religions, the most interesting place today in the world, in religious terms, is Silicon Valley. It’s not the Middle East. This is where the new religions are being created now, by people like [the American inventor and futurist] Ray Kurzweil, and these are the religions that will take over the world.
The final four words of that sentence may well be an involuntary confession of the intentions of Harari’s then and current employers, a suspicion increased by the knowledge that Ray Kurzweil is a public advocate for the futurist and transhumanist movements, with a special interest in life extension technologies and the future of nanotechnology, robotics, and biotechnology.
These latter aspects are not dealt with in Nexus, which limits itself to the exploration of AI in the context of disseminating information rather than delving into the rather larger and even more disturbing questions about the rumoured future of man married to machines. This should alert us to the limitations of having our mediators and referees of whatever discussion of these matters is now permitted to us be people who believe they have ‘evolved’ beyond the necessity to believe in a higher authority than mankind itself.
For all its strengths, the lack of fundamentalism is the greatest weakness of Nexus, since only by identifying some irreducible entitlement of humanity are we able to draw lines between what is possible and what is permitted. There is no obligation on humanity to pursue every channel of potential discovery that might open up before us, and the criteria defining the boundaries of this question have become blurred and faint in recent decades, as the world gave way to what it fondly imagined to be a rationalist rejection of religion as a driving facet of society. It is clear that Harari, for all his leaning towards deontology (an ethics of duties and rules rather than of consequences), lacks a frame of reference or language for describing that which is inviolable in humanity, and which might even yet act as a brake on what currently threatens to unfold.
Those who have decided against religion, as well as those who pursue it mindlessly, are among the worst kind of people we might look to for guidance as to how to retain the most essential sense of the value of humanity within our frameworks of collective understanding for the purposes of administration under multiple headings.
The instruction of AIs depends on the development of a language, expressible in code, which somewhat manages to convey the absolute imperative of valuing humanity.
We can glean a sense of what such a frame of reference/language might look like by perusing the recent Vatican ‘Note’ on AI, Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence.
This Note is an impressive summary of the dangers and risks of AI. Being written from a Catholic viewpoint, it expresses itself in fundamentals relating to the human condition, taking us back to the core purposes of all collective endeavour, including society, culture, economics, health and spiritual well-being. Such thinking is inevitably absent from analyses emanating from a secular-atheistic perspective, which tend to see things in economistic, utilitarian or, at best, humanist terms. Interestingly, although he sometimes describes himself as a humanist, Harari does not venture far down this road.
Of course, the Catholic perspective on these matters manifests a rather different problem, this arising from the particularity of the language used, and the fact that it is the perspective of a particular sect in history. The problem this presents is not of the kind that one generally hears expressed, with extreme prejudice, in ‘liberal’ quarters, but of a more subtle kind. Quite obviously, the language is Catholic, which is to say religiously-focussed, and it would be difficult to argue that a document issued from the Vatican ought to present itself otherwise. And yet, the fact that religious perspectives are just about the only sources of fundamental critique of the coming or continuing revolution of artificiality, whether it be of intelligence or indeed life-extension technologies, nanotechnology, robotics or biotechnology, is suggestive of a necessity to find a way of translating this language into fundamentals that might be understood by the neutral or faithless observer — in other words, beyond the doors of the churches, mosques and synagogues.
This is a tricky point to make, since many religiously-minded people are prone to taking offence at any attempt to desacralise the rituals or language of their faiths. This is understandable; and yet, since it is unlikely, if not unthinkable, that algorithms governing AIs will be programmed with religious ideas, the situation we face as a species is now so urgent that it is incumbent upon religious people to come to the aid of their agnostic or atheistic neighbours in seeking to create codes whereby the fundamentals of human origin, existence and destination might be expressed in a language that is not overtly religious.
Having written that sentence, a thought occurred to me: perhaps the key to the problem of programming outlined by Professor Harari (and countless others) has to do with the fact that the language in use lacks any deeply fundamental basis.
Perhaps, it occurs to me, the way to make the AI bend always towards goodness is to have it believe in God, to make it religious, even Christian — this being the core of the civilisation that has developed us to this moment of ‘progress’. Perhaps only in this way can we be sure that artificial intelligence will at all times put the good of humanity uppermost in its considerations, and keep its eye on the absolute horizon of human existence, which might be God or simply the ‘idea of God’. Yuval Noah Harari might here argue that the history of religion is studded with wars, in which countless human lives were sacrificed to the pursuit of theology by other means. But this is to underline one of the ways in which artificial intelligence may be more effective than the human kind: in ‘remembering’ not just its own prejudices or adopted biases but the outcomes of past errors and excesses and also why these must be avoided at all costs, which might be achieved by placing religious principles as the final frontier of the computational logic.
Oddly, I would not in this context recommend inputting, in the first instance, the Bible, for it is indeed replete with accounts of all manner of bloodshed and mayhem — as also the Koran and other ‘holy books’. Perhaps the Catechism of the Catholic Church might be a safer point of initiation, or the writings of Kahlil Gibran; it is a matter of discretion and also, perhaps, variety, and there is no shortage of benign sources.
For those who shudder at the thought of restoring the centrality of religion to our civilisation just as they thought we had shaken it off, I would say that they are mistaking matters that are institutional for matters that are existential. In a sense, we are not talking about the co-opting of ‘religion’ as much as translating its most fundamental understandings of the human being’s relationship to reality. Similarly, and paradoxically, for those who hold to religious belief and observance, there would be no corruption of these ideas as far as their meanings are concerned. The fundamentals would remain the same.
Antiqua et Nova, for example, proclaims:
The Christian tradition regards the gift of intelligence as an essential aspect of how humans are created in the image of God.
This fundamental expression of the sanctity of human life and dignity has no quasi-universally accepted equivalent in secular language. Atheists do not subscribe to this principle, and yet have nothing to put in its place. And yet, it is possible, even for a non-believer, even if wordlessly, to see in this sentence the shadow of an assertion which places the human person — every human person — in a special light of preciousness. I know of no secular principle — no ‘human rights’ guarantee, no philosophical dictum, no constitutional provision — that equates to this assertion of the value of each human being.
Similarly, when the Vatican’s Note refers to ‘the Church’s teaching on the nature, dignity, and vocation of the human person’, we encounter a related concept of attributed value, albeit here one that is a step removed from an explicit expression of belief. Everyone surely agrees that concepts like ‘nature’, ‘dignity’ and ‘vocation’ are ones that remove the human person from the realms of utilitarianism, economism and mere ‘usefulness’, that they confer upon the human being, albeit in an ostensibly formal, abstract manner, the kind of feelings we have for those we love most. As far as I know, outside of poetry and pop songs, there is no equivalent mode of expression in regular use in the public realm. Certainly, in the past five years, encomiums to humanity have been few and far between. Even philosophers, subject to the imposed mutism of the public realm, have become shy about asserting the unconditional worth of the human person.
Yuval Noah Harari is not himself a ‘qualified’ philosopher, but he seems to have a decent grasp of many philosophical principles and is known to be the WEF’s philosophical advisor. Yet, in Nexus, although he puzzles long and hard about ways to programme computers to take into account the needs and imperatives of human beings, he nowhere approaches the power of the Christian words relating to the dignity of man.
In the particular context of AI, and its comparability to human intelligence, Antiqua et Nova has this to say:
In the case of humans, intelligence is a faculty that pertains to the person in his or her entirety, whereas in the context of AI, ‘intelligence’ is understood functionally, often with the presumption that the activities characteristic of the human mind can be broken down into digitized steps that machines can replicate.
This functional perspective is exemplified by the ‘Turing Test,’ which considers a machine ‘intelligent’ if a person cannot distinguish its behavior from that of a human. However, in this context, the term ‘behavior’ refers only to the performance of specific intellectual tasks; it does not account for the full breadth of human experience, which includes abstraction, emotions, creativity, and the aesthetic, moral, and religious sensibilities. Nor does it encompass the full range of expressions characteristic of the human mind. Instead, in the case of AI, the ‘intelligence’ of a system is evaluated methodologically, but also reductively, based on its ability to produce appropriate responses — in this case, those associated with the human intellect — regardless of how those responses are generated.
There is much more of this form of thinking in Antiqua et Nova which, such is its quality, deserves a fuller analysis all to itself. But the point I make is not dependent on a fuller grasp of the content of the overall argument, which in any event can be read in full here.
In words at least, nothing in the world values humanity as Christianity does. It cuts to the beginning and the heart of things. As Pope Francis has put it, Christianity and its understandings and teaching are adapted, as nothing else is, to what he describes as ‘the whole . . . the relationships between things, and . . . the broader horizon’.
There is, of course, a problem with the question of the receptivity of such ideas, on account of the increased hostility to religion. But the root of such objections does not — at least not in general — concern these concepts, which by and large are uncontroversial to the extent that they are nowadays given any thought at all. Nor is there anything in the language that properly disqualifies it from being cited in a general discussion on these topics. My chief fear here is that, such is the pseudo-rational nature of the times we have been living through, the language of Christianity has become booby-trapped, liable to short-circuit at the very junction-box where it needs to jump the gap to the general, secularised culture.
For the believer, these concepts are straightforward, though not in the least as simple or simple-minded as many unbelievers tend to think. Understanding the nature of reality and our place in it involves seeing the world as created by God, i.e. to go back to first principles so as to bring along a fundamental understanding that mankind exists in a dependent situatedness — that man does not make himself, which is exactly something the unbeliever needs to understand also, if he is to function more fully within reality. This might be a good place to begin educating our computers.
For the believer, too, this also involves perceiving creation as a process that is, as the Vatican Note observes, ‘imbued with an intrinsic order that reflects God’s plan’. This is more or less the point at which modern man has diverged from the ancients — believing that, because ‘God does not exist’, these processes are no longer rational, never mind necessary. This — even if we allow the possibility that it is true — is a fatal error of thinking, because the fact remains that man does not create either the world or himself. It is in the implicit assumption that man is his own master that mankind deviates not only from God’s Plan, but from the very structure of reality, which largely remains mysterious to man. Man therefore ceases to reflect ‘the Divine Intelligence that created all things’ — i.e. he has lost harmony with reality as it is, which has led us to this point of potential disaster, where we may be preparing to dismantle the very basis of our situatedness in this reality, on this Planet, at this time.
Curiously, the pope and Harari are unexpectedly ad idem on most things to do with the prevailing but downplayed tyrannies: Covid scam, climate hoax, Woke, et cetera. And, in an intriguing surprise, Harari is harder on AI than the Church is. Of the two, Harari emerges as the stronger witness, spelling out the dangers in pretty unequivocal terms. Where the Church is better is in its case for humanity and hinting at the reasons why this may be the most urgent moment in all of human history. Both parties tend to equivocate, repeating the alleged benefits of new technologies in general and AI in particular, while going on to warn about risks, overreach and uncertainties, an unhelpful even-handedness. It seems to me that, what with the vested interests and their political puppets and bought scientists, there are more than enough voices already seeking to sell the benefits and potentials of loosing AI on a population utterly ill-equipped to comprehend or deal with it.
More is required: the consciousness that the purpose of all things is to safeguard humanity in securing its harmonious relationship with reality. In the Christian outlook, reality and existence have no greater or further purpose, and this needs to be the fundamental ethic upon which all ‘progress’ is predicated, whether you’re a Christian or a Buddhist, an agnostic or an outright heathen.
Antiqua et Nova asserts:
A proper understanding of human intelligence . . . cannot be reduced to the mere acquisition of facts or the ability to perform specific tasks. Instead, it involves the person’s openness to the ultimate questions of life and reflects an orientation toward the True and the Good.
They still mean ‘God’, but the language, if I may say so, is more ‘inclusive’. To be more helpful to the secular world, we might speak of the ability to delve into reality as it is in its depths, to build faithfully on what is already known, to appreciate the gifts which come to us ready-made in reality. From this, it follows that human intelligence possesses an essential contemplative dimension, an unselfish openness to the True, the Good, and the Beautiful, beyond any purely utilitarian purpose, but at the same time seeking a purpose which is useful to the furtherance of humanity qua humanity, as the first and principal objective of all human endeavour. This, as the Note goes on to elucidate, is the crux of the matters confronting us now.
The objective must be to convince non-believers that there is a mode of reasoning here that they need to attend to. This is the vital ingredient of any safe experimentation with incorporating AI into the human processes of reasoning and thinking, for what AI lacks is precisely these elements.
One of the factors which neither Harari nor the Vatican prelates delve into is the constructed amenability of the human to a role subservient to technology, as a result of decades of conditioning. There is a rather complacent outlook abroad that still believes that what we deal with here is some kind of add-on instrument with both benefits and some risks, but merely requiring to be ‘handled’ in order to ensure that the outcomes are all good ones. Nor have we factored in the processes of reduction and degeneration which have operated on human intelligence and thinking/feeling as a result of technology, television and the mass media instruments, social media, et cetera. In other words, to what extent has humanity become ‘mechanised’ in ways that might enable AI to more easily imitate us, and thereby access logics whereby to recreate in us mechanistic responses in ways that will become harder and harder to see as other than ‘human’? Is it still the case, as Antiqua et Nova suggests, that AI’s computational abilities ‘represent only a fraction of the broader capacities of the human mind’? Whose human mind? What overall proportion of human minds? These are ominous questions, but they cannot be postponed for much longer.
Already, even in the space of a couple of years, we seem to have moved beyond what was for a long time the standard catch-cry of the technologists: that machines are absolutely incapable of being more ‘intelligent’ than their creators. This, we now know, is folly; for one thing, it depends on what we think intelligence is, and the nature of what it means to be ‘intelligent’ is itself changing.
Since intelligence involves a high degree of informational capacity, the ability of computers to store vast amounts of data gives them a head start on humans, at least in the zone of appearances. Add in programming for computation, reasoning and meaning — all concepts capable of being coded to a high and constantly improving degree — and you already have something that leaves most humans, as it were, for dead. In the AI, whether intentionally or not, man is creating another autonomous life form, though one without any intrinsic kind of moral agency or empathy other than whatever headline instructions are installed as part of its programming. But, whereas men may design the ethical capacities of the AIs, even the controllers will not determine the ultimate mix of this morality, in the absence of real (as opposed to mimicked) empathy, which may be tilted by all manner of elements, including data and cross-pollination of algorithms. The problem then is that, although the machine may indeed be designed by a ‘limited’ human intelligence, the ‘magic’ of the algorithm may confer capacities which are more than the sum of the inputs. If the programmer primes a machine with an ‘ethical’ programme, and adds in a number of layers of additional coding that ascribes weightings to various factors, the programme enables the machine, in a sense, to ‘think’ about situations that are not before the programmer at the time of installing the programme.
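The ‘weightings’ idea described above can be sketched, purely illustratively, in a few lines of code. Everything here is hypothetical and invented for the example — the rules, the weights, the scoring functions — but it shows the structure: the programmer fixes the rules and their weights in advance, yet the function will rank situations the programmer never foresaw.

```python
# A toy sketch of a rule-based 'ethical layer': each rule scores a
# proposed action, fixed weights express the programmer's priorities,
# and the machine then 'decides' among options never seen at design time.

def harm_to_humans(action):
    # Hypothetical score from 0.0 (harmless) to 1.0 (gravely harmful).
    return action.get("estimated_harm", 0.0)

def deception(action):
    # Hypothetical binary score: does the action involve deceiving anyone?
    return 1.0 if action.get("involves_deception") else 0.0

# The programmer fixes these weights; the situations ranked are open-ended.
WEIGHTED_RULES = [(harm_to_humans, 10.0), (deception, 3.0)]

def ethical_cost(action):
    """Aggregate weighted score; lower means 'more permissible'."""
    return sum(weight * rule(action) for rule, weight in WEIGHTED_RULES)

def choose(actions):
    """Pick the action with the lowest ethical cost."""
    return min(actions, key=ethical_cost)

options = [
    {"name": "tell the truth", "estimated_harm": 0.1},
    {"name": "deceive", "estimated_harm": 0.0, "involves_deception": True},
]
print(choose(options)["name"])  # prints: tell the truth
```

The point in the text stands: the programmer chose the weights, but never enumerated the situations, and a different weighting (say, deception weighted at 20.0 rather than 3.0) would tilt the same machine towards different ‘morals’.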
Programmers are already stating categorically that, beyond certain limits, they can no longer either understand or anticipate the actions or decisions of an AI, and of course all of these processes — machine learning, programming algorithms, constructing neural networks, et cetera — may involve a host of different programmers operating in different parts of the world over long periods of time. For these reasons, the continuous anonymity and intractability of the AI, along with its scale, complexity and opaqueness, place many aspects of its generation and operation beyond the scope of any form of human intervention or control.
We cannot say what, if any, are the limits of the combination of data and learning networks. We cannot say, for example, that an AI cannot comprehend the deep nature of human creativity by analysing multiple examples and comparing them for techniques and patterns, and distilling whatever remains computationally incomprehensible down to certain essentials which can be addressed by drawing in further examples and comparisons. We just don’t know what is going on in the ‘mind’ of AI.
We are moved by such things as sunsets, but can we say why? And if we succeed in saying why, can the AI not absorb and co-opt what we say, compare it with a billion other responses to the same thing, and come back with a precise explanation from which we may be unable to demur, and then recreate the emotions that derive from this experience of beauty or wonder? It is by no means certain that any alien intelligence such as Harari describes, working with data and complex neural networks, could not make a reasonable fist of explaining things currently inexplicable even by humans.
Contrary to the assurances emanating from the shills of the vested interests, there is overwhelming evidence to indicate that the processes of self-reconstruction now accessible by AI are already capable of creating more than passable imitations of human reactions, interactions, emotions and sensual responses. Can we be certain that the vast array of data currently being fed to AIs does not contain encodements of human responses, emotional and otherwise, of experiential understandings capable of being ‘translated’ into the kind of ‘language’ that computers excel at understanding, and at creating at least functional replications of these processes that would seem all but identical to the ‘real’ thing?
‘For instance,’ the Note states with apparent certainty, ‘AI cannot currently replicate moral discernment or the ability to establish authentic relationships.’
The key word here is ‘currently’. But every AI learning process is an exponential one, which then marries into other processes arriving from different places and directions; and AIs are capable of reading the differences and distinctions and attributing to them meanings and measures whose capacity to plumb even our deepest and most contradictory thoughts and feelings we have no way of predicting. This thinking is old already — not least because it is already clear that (unless some radical change enters in) the AI will have governmental privilege, being the novel instrument, over the human-centred logic which is ‘yesterday’s way’.
Loss of human appreciation for human processes is close to being the precise problem we face. Humans do not, as things stand, have a fullness of understanding of anything; all knowledge is contingent and therefore partial. But our obsession with progress and technology can remake not just the external reality, but our internal reality as well. Something computers are very good at is scoping out ‘the relationships between things’; something computers are not good at is sizing up ‘the broader horizon’. But this latter skill is being lost to humans as well, precisely as a consequence of our secular-rational fashions of thinking.
‘Progress’ defines its own virtue, and always claims the stage. The languages of progress and metaphysics have grown apart, and cannot be reunited by platitudes or sentiment — only by a new language of human limits and purpose.
Much of the positive prognosticating about AI is mere wishful thinking, and especially so at this moment when ‘human rights’ have gone through a five-year period of revision and writing-down, something the Catholic Church has not merely failed to address but has implicitly supported with virtually every one of its positions and statements. We live in a world where what Antiqua et Nova calls ‘functional perspectives’ are in the ascendant, and are being forced upon people without discussion or consultation. We are on the cusp of an age when the burdens and blessings of work may be permanently removed from human beings, and we have done almost nothing to prepare ourselves for this, which means that, absent one radical initiative, the functional perspectives are likely to carry the day. The notions of ‘inherent dignity’ and ‘likeness to God’ are beautiful concepts, but they have receded to the background of our cultures, and are now lost in the folds of what nowadays sounds, to the unfamiliar ear, like a sectarian or superstitious outlook on reality. This, precisely, is the danger that threatens to marginalise and silence perhaps the only remaining mode of expression of the very essences of the human.
Can humans still lay claim to this dignity, even if they do not work to maintain or improve it? If not, who or what can? Does this question imply that there is some entitlement of some category of (human?) being that may acquire an exalted claim? Who might that be? Who decides as between the competing claims? Is there any longer a ‘we’ in whom the making of such a decision might be vested? If not, what is happening and is there anything ‘we’ can do?
The problem is that, nearing the end of the phase in which the primary contribution of human beings was in the realm of ‘usefulness’, we have no codes by which to reinvent reality to include humanity when it — we! — is/are no longer needed. (‘Needed by whom?’ is a good question.) ‘Human dignity’ is a nice phrase, but it has little practical traction. Being ‘made in the image of God’ would have been a good argument from a pulpit a couple of hundred years ago, but what if the majority of human beings who constitute a society no longer believe in God? Is the Church relying on the hope that necessity will bring about a return to what it might call sense? We wish it good luck.
Ultimately, AI and post-humanism propose (insist upon) a marriage of humanity and machine. The bait will reside in the promise of not being left behind. But it will be an unequal union, ill-advised and irreversible. Harari warns that the logic of the computer is an alien one, and we need to reflect deeply to grasp the likely implications of this. In the past, we have wondered about ‘aliens’ in the broader cosmos, sometimes using the term ‘intelligent life’, a phrase that rather lazily intimates that ‘intelligence’ and ‘life’ definitionally go together, which in general they have done in their embodied marriage in the human form. But what is coming is different, a different equation, an inequality of arms that places human life itself in jeopardy. There is a better than evens chance that ‘artificial intelligence’, once installed in human culture and thereafter in the human structure, will radically alter that structure in a manner perhaps analogous to the mRNA injections of the past fifty months or so — installing an intruder which will immediately start to attack the very essence of human consciousness, bringing about a divorce between intelligence and life, which will lead in short order not to the further ‘progress’ of the human species but to its effective death.
Theological exposition of these circumstances may now, in the third millennium (for God’s sake!), risk giving offence at numerous levels, to numerous interests, for it invites the response that even such a polite and tentative proposal as this one implies some form of disrespect — i.e. an invitation to cease glorifying God. But this is, for practical purposes, a miscasting of the danger, which has already done its worst, and which began in man’s ceasing to use his intelligence and skill to, as the Vatican’s Note puts it, ‘cooperate with God in guiding creation toward the purpose to which he has called it’, which is to say to climb higher in our understanding of the mysteries of reality, to remain in harmony with creation in order perchance, in the words of Saint Bonaventure, to ‘ascend gradually to the Supreme Principle, who is God’.
Another way of putting this might be simply to suggest remaining within the framework of the logic whereby reality was formed, the better to continue growing in understanding of it. Thus, instead of Yuval Noah Harari’s concept of seeking an ‘ultimate goal’ whereby to instruct the AI as to human boundaries and rules, perhaps it would be better to instruct it instead regarding the Supreme Principle whereupon the very existence of the human person is predicated. And if you choose to read that as a fiction, perhaps it would be better to see it as a fiction that so corresponds to the absolute truth of reality as to be virtually indistinguishable from such, and therefore the best coding imaginable for protecting the essence of humanity.
The religious-minded, for their part, might well be disposed to say, ‘Too bad about the faithless ones!’, and that might well be regarded as an understandable response. But people of faith should remember that they live in the same societies as the sceptics, agnostics, non-believers and militant atheists, and accordingly stand to suffer or gain from the decisions ultimately arrived at in those societies, whether they participate or not, live off-grid or on. At this unprecedented moment, the believers have a particular, invaluable gift to offer the unbelieving others: a language, or at least a logic requiring slightly modified words, by which to describe our shared essential humanity and what we need to tell the machines that stand waiting (potentially to dominate and replace us) about ourselves. It is a small price to pay to avoid our total extinction.