Outsourcing 'Our' Cognition
For decades there’s been talk in tech and sci-fi circles that rarely touched the mainstream, concerning an approaching moment when human existence may be changed utterly, for the better or for the worse.
Almost nobody outside of a few specialists — scientists, engineers, researchers, sci-fi writers — appears to be aware of something that may soon strike the world in the manner of a cultural Big Bang. It is an example of something observed by the French philosopher Jacques Ellul, writing more than half a century ago, that technically-driven societies cultivate a kind of time-warp to conceal their true natures, clinging to concepts of collective meaning, value and ‘normality’ — patriotism, Marxism, for example — long after they have been outstripped by technological developments.
Perhaps never in the history of the species has a change of such significance loomed so darkly yet so brightly just beyond — or perhaps at — the horizon. This change is the arrival of the species at what is called ‘technological singularity’ — the moment when the technological outcomes of man’s creativities will for the first time exceed his own intellectual capacities. This is to say that man will cross an imaginative threshold whereby he will have created machines more intelligent than himself, the moment when the human species transcends the rule of the Turing Test, which for seven decades has offered a kind of guarantee to the human race as to its own continuing sovereignty in the world. Devised by Alan Turing in 1950, and originally called the ‘imitation game’, this is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. The test is a subjective one, being determined not by an evaluation of the qualities of the machine, but by how it is judged by an intelligent human. If the human cannot tell the machine’s behaviour from that of a human, the test is passed. It is now believed that this moment will be attained within 20 or, at most, 35 years. There will be a very brief period of convergence, followed by an exponential escalation in the intelligence of machines, which will soon leave mankind spluttering in the smoke of its exhaust.
But the ‘moment of technological singularity’ will be more momentous than this at first suggests. We are not speaking of ‘technologies’ in the conventional sense: electronic ‘tools’ that make man’s life easier or more interesting. The singularity will mean, and will bring with it, the potential to create ‘minds’ that are infinitely smarter than even those that created them — machines that think. At this moment, when it arrives, the human race will have achieved something that in scientific terms will be the equivalent of breaking the evolutionary sound barrier, for at that moment, men will have created superhuman intelligence, a degree of cognitive scope that will unseat the human species as the highest creative force not merely in science and technology, but in everything.
The metaphor of ‘singularity’ was devised by the mathematician and sci-fi writer, Vernor Vinge, who borrowed it in part from the principle of general relativity, concerning the point at which subsequent events become unimaginable and cannot be predicted. The concept of singularity also manifests in mathematics, where it is used to describe the point in a calculation when the original model ceases to apply.
But will the ‘sonic boom’ that follows immediately upon this moment be a cue for celebration or dismay?
The answer is by no means clear. Most people have not the faintest idea that such a moment is imminent, let alone understand what it implies. When we — the hoi polloi — talk about the implications at all, it is usually in either the dystopian concepts of killer robots or, conversely, some Pollyanna variation involving endless leisure arising from the lifting of the burden of work from human shoulders. But the implications and possibilities are both infinitely richer and infinitely darker than any of this. It all depends on the circumstances in which this moment of singularity occurs, how we are prepared for it, and who controls that process of preparation — and their motives and dispositions.
Right now, the signs are not auspicious that this moment will be an occasion for celebration by the vast majority of the human species. The moment of singularity has long been in preparation, and, it not being a matter of significant media interest, most of us have remained in the dark. There is a considerable amount of material online — mostly in the form of breathless speculation and commentary by the likes of sci-fi writers, movie directors, sundry nerds and geeks, tech journalists and tech tsars or their representatives on earth. In most of this, it is implicit that this is a project that belongs to those who own and pursue it, which is to say Silicon Valley, the same interests and entities that brought us global communications followed hard by globalised censorship, which has become a signal quality of our culture in the past 20 months.
All these have a clear vested interest in the outcome of the present drifts, and that of itself is not, in principle, problematic. The ominous aspect is that the coming revolution is not being spoken of among those on whom it will have the greatest impact: the great mass of humanity who will — or whose children will — find their lives upended by whatever transpires.
These — mainly YouTube and TED Talk — discourses tend to follow a pattern of dissociation, as though what was being talked about was a new episode of the Star Wars series. They are typically characterised by a form of suppressed excitement, often approaching hyper-elation, in which the possibilities are set out as though they refer to some theoretical planet in some remote universe. Rarely does the tone or content seem to be addressed to anyone other than fellow nerds and those who write (and read) sci-fi as a kind of lifestyle/identity accessory. For such people, the outcomes of the singularity seem in the main to be almost theoretical: Will the coming robots kill us or cure us of our . . . what? Our boredom? Our self-absorption? Our lassitude? The general tenor is a kind of euphoria giving way to nonchalance. It’s a little like sitting in a cinema watching a dystopian movie and knowing that, when the credits roll, you will be going across the street to Starbucks to discuss the novel characteristics of the CGI.
As a result, the human race, being in a sense solely represented in this domain by such non-appointed ambassadors, has adopted a demeanour in the face of the future that — when it is not purely ignorant — might be diagnosed as close to blasé. We would know — would we not? — if there was anything to worry about. Our ‘representatives’ at the table may well be paying close attention to play, but if the action is dystopian, they are fascinated, not fearful. When they see problems, they tend to see them in technical terms only. They speak blithely, for example, about universal basic income (UBI), as though the sole potential issue arising was whether this may be deemed a tried and trusted mechanism for distributing income in a ‘leisure society’. They rarely, if ever, pause to ponder whether a ‘leisure society’ can be made to, as it were, work. The morality of what is coming, though an occasional source of fascination, is rarely treated seriously. It is the attainment of the singular moment, not its colour or texture, that appears to obsess these tech-talkers. It is as though, in their long careers in futurism, they have long since taken out a mortgage on all future possibilities, so that one possibility is quantitatively and qualitatively the same as any other, and nothing dismays or perplexes them. In a sense, this renders us all delusional, for these spokespeople are our sole agents in this dimension of reality, which is to say potentially the whole of reality going, as it were, forward.
Artificial intelligence is getting smarter by leaps and bounds — within a decade or two, a computer AI could be as ‘smart’ as a human being, and soon thereafter the machines will overtake us. That moment will be the ‘singularity’, the moment when ‘man’ (whatever that may mean at that juncture) will acquire the potential to create, within a few years, machines with an intelligence 5,000 times that of Albert Einstein, whose IQ is commonly estimated at around 160. Within 30 years, it is projected from present rates of advancement, a basic laptop worth the equivalent of €1,000 will have the computational power of the entire human race. This has the potential to open many doors, but some of them may lead to cells and chambers of various categories, some thrilling, some bracing, some worrisome indeed.
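The arithmetic behind such projections is simple compounding. A minimal sketch, assuming a hypothetical Moore's-law-style doubling of computational power every 18 months (the doubling period is an illustrative assumption of mine, not a figure from the projections cited above):

```python
# Toy compound-growth sketch: how many doublings fit into a given horizon,
# and what multiple of today's computing power that implies.
# The 18-month doubling period is an illustrative assumption, not a measured figure.

DOUBLING_PERIOD_YEARS = 1.5  # hypothetical doubling time

def capability_multiple(years: float) -> float:
    """Return the multiple of today's computational power after `years`."""
    doublings = years / DOUBLING_PERIOD_YEARS
    return 2 ** doublings

# Over 30 years this gives 2**20, roughly a million-fold increase:
# the kind of compounding behind 'a laptop equals humanity' projections.
print(capability_multiple(30))  # 1048576.0
```

Whether the projection holds is another matter; the point is only that steady doubling, sustained for three decades, multiplies capacity about a millionfold.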
The moment of the technological singularity, that inaudible ‘sonic boom’ that will act as the starter pistol for the transhumanist-posthumanist age, will signal the beginning of a new dispensation in which mankind will move towards outsourcing its cognitive functions to wholly superior entities, and humans, in order to maintain their place in the world, may be required to succumb to technological modification that will fundamentally alter their natures in ways that cannot be predicted. For the new dispensation, when it arrives, will not, as we imagine, represent an escalation of our current ‘master-slave’ relationship with technology — we the masters, the machines our slaves — but a total reversal, in which we become subservient to the machine in ways that we cannot yet imagine.
Vernor Vinge, the man who coined the phrase ‘technological singularity’, says that this moment will amount to ‘the — most likely — non-catastrophic event for the near future.’
Those dashes surrounding the phrase ‘most likely’ in that sentence are important: He does not mean that some such event itself is merely likely — that is all but a given — but that it will ‘most likely’ be non-catastrophic. Is this reassuring? Is this enough to still our beating hearts?
‘I think,’ he concedes, ‘that, any time you are contemplating something that can replace the most competitive feature that humans have — that’s intelligence — it’s entirely natural that there would be some real uneasiness about this. The nearest analogy in the history of the earth is the rise of humans within the animal kingdom. It is very unsettling to realise that we are entering an era when questions like ‘What is the meaning of life?’ are practical engineering questions. On the other hand, I think it might be kind of healthy if we could sit down and look at the kinds of things we really want, and look at what they would mean if we could get them. Humans might be better recognised not as the tool-creating animal, but as the only animal that has figured out how to outsource its cognition, how to spread its cognitive abilities into the outside world. We’ve only been doing that for a little while — like ten thousand years.’ Reading and writing, he adds for clarification, ‘is outsourcing of memory.’
Events beyond that moment, he says, are ‘as unimaginable to us as opera is to a flatworm.’ Trying to explain what will happen afterwards, he says, would be like trying to explain our present era to a goldfish.
We cannot overestimate the momentous nature of the coming moment. In 1993, Vinge said: ‘Within 30 years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will have ended.’
Vinge sometimes describes technological singularity as the moment ‘when we can create or become creatures of superhuman intelligence.’ In fact, the moment implies both. The question is: Who is the ‘we’ and who will the ‘creatures’ be?
This question may have seemed theoretical just two years ago. But, in light of the devastation of democratic structures and traditions throughout the formerly ‘Free World’ since March 2020, it is no longer clear that, when a human being says ‘we’, he necessarily means the whole race, or even a substantial part of it; he may have in mind some sub-group of the human race that is already beginning to think of itself rather differently, and not necessarily as a part of the race as a whole.
Those who speak of the future in this connection tend to speak blithely of ‘we’ and ‘us’ and ‘our’. Even before Covid, such talk was fanciful. Now, due to the manifestly squalid moral state of the human polis, the evisceration that the plandemic has conducted on the pretences to liberalism and democracy, this ‘we’ stuff begins to sound like pure fantasy.
It now seems, let us say, more possible than before that certain members of the race may feel entitled to set themselves up as some kind of oligarchic subset, arrogating to themselves the right (they will already possess the power) to decide what happens, defining the betterment of the species on their own terms solely, and moving ahead as though the future exists to be carved out according to their plans and schemes, and those only. From a certain perspective — perhaps the one they occupy — it might be possible to see the ignorance of the majority as to what is coming as a kind of threat to the world, the ‘planet’ in as far as this is different, the species defined opportunistically as a unitary entity, and the future as a matter of scientific revolution rather than human — never mind democratic — will. It is possible that such people, groups or interests will regard as hugely problematic that the majority of their own race do not understand the world as they do, or see it as they do, or see themselves as members of the emerging oligarchy do. In such a scenario, we might expect these elite interests to respond much as the political leaderships of the world have been responding over the past 20 months: imperially, paternalistically, cavalierly, overbearingly, highhandedly — treating plain citizens as — depending on the occasion — subjects, prisoners or children, to be told what to do and when to do it, to have the wool pulled over their eyes, to know only what they ‘need to know’, to be drowned in ‘noble lies’, to be hurried, harried, bullied where necessary, to be shushed when they speak about freedom and rights, to be gaslit to within an inch of their tolerance, and ultimately to be persuaded that, when the voice of authority says that democracy has become a danger, and therefore a luxury, it can be suspended without notice or excuse-me.
It is here interesting that the concept of UBI, long dismissed by economists, politicians and commentators as unworkable, has been raised from the dead in the Time of Covid and this time talked about as though it might not be such a bad idea after all. Up to approximately 18 months ago, the powers-that-seemed-to-be remained dismissive of the idea of a universal basic income for all human beings. ‘Where would the money come from?’, they asked — an ominous question, since it implies that money belongs to someone other than the species as a whole, whose talents, energies and creativities it exists to draw out in the creation of real wealth, and, by extension, other than the nation or the community as a whole. It implies that money is a finite resource rather than a technology, the precise thinking that has brought the world to a shuddering halt on several occasions in the past half-century, and again — if the full truth were to be told — in the past two years.
In principle and objective truth, UBI is a workable idea — provided it is introduced and maintained as a bedrock income on which individuals can build and expand according to their talents and acumen. Its intentionality is predicated not on creating a more palatable term for social welfare but on providing an essential, guaranteed income to every citizen without the historical requirement that he stand down or set aside his working functions and capacities. When these conditions appeared possible, the powers-that-be dismissed UBI out of hand. Now, they exhibit a suspicious enthusiasm for it. In practice, introduced in present circumstances, it might be a very deleterious development indeed, since, coming with more strings attached than a tramp’s overcoat — tacit acquiescence in surveillance; ‘voluntary’ subjection to social credit systems and all they imply; absence of meaningful choice; et cetera — it would achieve pretty much the antithesis of its principled intentions.
But the dangers of the singularity go much, much further than concerns about human redundancy in the economic context — and even this dimension is infinitely more far-reaching than the tech lobbyists and their mouthpieces appear to know or care. There are immense ethical, moral, social, anthropological, psychological and cultural questions to be addressed here, but the singularity bus hurtles along at maximum speed, driven by people who appear to give little thought to ethics, morals, anthropology or culture. Nor do they speak much of the psychological effects or social consequences. They throw in a disingenuous sprinkling of ‘we’ and ‘us’ and ‘our’, as though these categories were fixed and immutable. But the victory of singularity will not be achieved by the mass of men, nor even by anything remotely resembling a representative sample of the species. As things stand, right at this moment, that ‘victory’ will be claimed by a class or cadre of technophiles and scientists who seem to feel no responsibility concerning the implications for the species as a whole and who, in any event, are themselves owned by some of the richest people and interests in the world.
Vinge, for example, talks about ‘makers and breakers’ in the world, by which he means ‘those who build and those who try to destroy’, and this is the key difficulty he identifies in where we are going. He thinks, on balance, that the makers can ‘keep the breakers from bringing everything down.’
The problem is: Who decides who the breakers are? Presumably the ‘makers’ do. But there is a perplexing question: Who elected them the ‘makers’ in the first place? And what if, in avoiding damage to their technologies, they inflict unprecedented damage on the world and humanity in general? Perhaps they have already begun such a process, in the guise of responding to an alleged ‘pandemic’ of a ‘disease’ with a mortality rate within the range of a normative winter flu, so that we enter this final phase of species autonomy with a democratic deficit that has now been given a comfortable bedding down in the ‘New Normal’ emerging in the slipstream of the Covid coup.
At stake in the ‘race’ towards the singularity is not simply AI, but a multiplicity of technologies, including nanotechnology, quantum computing, phantom tech (3D), life extension, mind uploading, genetic engineering, advanced energy technologies, data and mapping harvesting and manipulation, avatars, and many others — all of which proceed at varying speeds in competition to be the discipline that takes us over the singular line. After that, all these disciplines will most likely converge around the singular idea, and move forward in something resembling unison.
As Vernor Vinge has repeatedly said, the aftermath of the singularity is a total mystery. No one is able to say what the effects on human society will look like. But we can provide ourselves with outline sketches. We have indications already concerning some of the foundational difficulties in, for example, the ethical and moral domain(s). We can say, for example, that, in the early stages of the convergence of human and machine intelligence, there will be euphoria, justified in that moment by the victory, which will provide reason for celebration, even by the whole species — in much the way the species as a whole celebrated the moon landing of 1969. Then, things will settle down and, much more rapidly than anyone will have predicted, the gap between human intelligence and machine intelligence will begin to widen. And then it will emerge that the ‘partnership’ between man and the machine is not a partnership between all men and all machines, but between a few men and all machines. A tiny cadre will own and control pretty much everything. They may decide — in much the way we have seen them do during the Covid subterfuge — to shroud their activities and function in the wider political world in secrecy and deliberate mystification. There will then be a new division — not between humans and machines but between the mass of humans on the one hand and a tiny minority of humans plus the machines on the other. We cannot say for certain where this will ultimately lead, but we can certainly say that in the short term, it will mean that, in effect, the elite and the machines will become one; the elite will own and control the capacities of the machines, albeit that these will by then, in intelligence and cognitive capacity, be moving far ahead of even their creators.
If present trends continue, there will by then have been some kind of generalised incorporation of at least a significant minority of the human population into the domain of the new technical grid. This process has already started with the crypto-mandatory rollouts of Covid ‘vaccines’, a story we have covered here in some detail over the past year. But there is little basis for believing that the newly-minted cyborgs will be incorporated on the same basis or terms as the elite. They will be chipped and wired, but not into positions of control. They will be part of a hive mind that may well be, for example, liable to data and energy harvesting, but their involvement will be passive, indeed slave-like. Those not included will be more or less written off, not least by themselves. It is possible, on the basis of what we have seen with mobile phone rollout — massive growth in the past two decades, to the point where there are now 1.3 phones per member of the human population — that the incorporation of the vast majority of the race into some kind of hive mind will ensue in relatively short order. But this rollout will create its own hierarchy and cultural understandings, so that, as things proceed, it is almost a given that the hive mind will not be characterised by democratic values — more likely defined by some kind of master-slave dynamic. The machines, due to the growing dependence of the population upon them, will be at the centre of these processes, though they will remain in the control of the elite, who will present themselves as seeking to involve as much of the human race as possible in a future they might otherwise be excluded from. In effect, this will create the world’s first globalised tech totalitaria.
Such an eventuality has come to seem all but inevitable in the Covid era. For a time, these developments may be passed off as ‘philanthropy’, the noblesse oblige of the noble liars, much as, in the early years, the internet was believed to be free until we realised that what they were flogging was us and everything about us. All except the extremely naïve now surely know that the modern philanthropist is not a philanthropist, but a seeker after tax advantage and lucrative investments in ideological tendencies that ‘coincidentally’ stand to benefit his longer-term business interests. It is unlikely, then, that the technical elite’s primary inclination will be towards inclusion, but rather some variation on the cost-effective principles that made them rich in the first place.
And so we can say, in broad terms, that the singularity threatens, within a relatively short time, to destroy not merely mankind’s sense of supremacy in the world, but the very fabric of that understanding in cultural terms. It threatens to destroy man’s sense of his own unaided capacity, potential, contribution and participation in respect of what was hitherto the raw material of a functional human life, to render his own past achievements pathetic and laughable to himself. It promises also to destroy many of his games and competitions, making them pointless other than as remedial activities of a handicapped, broken-down former functionary. Think, for example, of the effects of the experiments from the 1980s in which chess grand masters competed against computers — winning at first, then mostly drawing, then losing consistently, eventually being unable to win even a single game. Human chess still exists, but under the shadow of the knowledge that watching two humans play the game, no matter how accomplished they may be, is akin to watching a match between beginners. For a while, the relationships between human activities and machine activities may feel something like the distinction we accept between men’s and women’s football, but in the end the demoralisation that is built into the intrinsic inequality of capacity will render all such activities pointless. We have no idea how far this syndrome might stretch, nor the nature of the change it may impose, but it is hard to imagine that it will not drag the majority of humanity towards nihilism and despair.
The singularity threatens to make all but a few men redundant, obsolescent, then obsolete — and this under practically every known heading. Unless humanity is able to create some new and arresting way of being in the world, it is hard to see how the race will avoid over time becoming manifestly superfluous, a development that would utterly alter not merely the average human’s sense of himself, but the tolerance of the elite towards the many. The dangers of this need hardly be spelt out. It is hard to see how, in the course of even the initial years of the post-singularity world, the human race in general would not be plummeting with increasing velocity in the direction already indicated in the 20 months since democracy was terminated by reason of a minor respiratory illness: towards despondency, powerlessness, nihilism, despair and a generally shared sense that life had ended and started again in a fashion that was not an improvement.
When might all this be likely to occur? It used to be that the chief protagonists were predicting sometime between 2020 and 2045, but it is interesting that much of the discussion about the singularity to be found online right now is at least several years old, much of it dating from a decade ago. This recent relative radio silence might suggest something intriguing or ominous, again underlined by the sudden eruption of the ‘pandemic’. Ominous in what sense? Suggesting, perhaps, an escalation involving all-hands-on-deck to bring the singularity forward by a decade, or even two.
In a 2018 TEDx Talk, the Swedish philosopher Nick Bostrom spoke about what he sees as the chief risks of the singularity.
Bostrom claims to have begun thinking of a future of human enhancement, nanotechnology and cloning long before they became mainstream concerns. He is a Professor of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He's also the co-founder and chairman of the World Transhumanist Association, which advocates the use of technology to extend human capabilities and lifespans, and the Institute for Ethics and Emerging Technologies.
He is one of the more interesting commentators on the coming moment, because he is able to frame and describe some of the ways in which fundamental things about our reality and how we see it will be changed in the new relationship between man and the machine. ‘Machine intelligence is the last invention that humanity will ever need to make,’ he says.
Bostrom asks us to think hard about the world we're building right now, a world driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?
Before, he says, when it came to instructing machines, input and output were more or less equivalent: We got out what we put in. In the new dispensation, machines will be trained to ‘think’, resembling a little the way we train animals, though the comparison runs out of road pretty quickly. Primed by algorithm with vast amounts of information about objectives, intentionality, logistics, possible snags — and also about the values and ethics underpinning the instant territory — the machine will make calculations that are as complex in the context of everyday actions and tasks as the most advanced mathematical formulae. So complex will they be that they will in many instances have become inscrutable to their programmer(s). In a very real sense, the machine will be acting on its own volition. It will also be able to apply the logic learned in one context to an entirely different task, translating between the two. The human cortex remains more sophisticated than what computers are able to do, Bostrom says, but the gap is closing.
‘What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits in biological tissue. This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second. But even a present-day transistor operates at the Gigahertz. Neurons propagate slowly in axons, 100 meters per second, tops. But in computers, signals can travel at the speed of light. There are also size limitations, like a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger. So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken the power of artificial intelligence. And I think we might then see an intelligence explosion.’ In due course, he says, artificial intelligence has the capacity to leave human beings for dead, devising with ease solutions to many of the things that have baffled humanity until now.
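The ratios Bostrom gestures at can be worked out directly from the round figures he quotes. A back-of-envelope sketch (using only the numbers in the passage above, as illustrative orders of magnitude):

```python
# Back-of-envelope comparison of biological vs electronic signalling,
# using the round figures quoted by Bostrom above.

NEURON_FIRING_HZ = 200            # biological neuron, upper-end firing rate
TRANSISTOR_HZ = 1_000_000_000     # a present-day 1 GHz transistor, conservatively
AXON_SPEED_M_S = 100              # fastest axonal propagation, roughly
LIGHT_SPEED_M_S = 299_792_458     # signals in a computer can approach light speed

switching_ratio = TRANSISTOR_HZ / NEURON_FIRING_HZ
propagation_ratio = LIGHT_SPEED_M_S / AXON_SPEED_M_S

print(f"Switching speed advantage: {switching_ratio:,.0f}x")      # 5,000,000x
print(f"Propagation speed advantage: {propagation_ratio:,.0f}x")
```

On these figures alone, the electronic substrate enjoys a raw advantage of roughly five million times in switching speed and roughly three million times in signal propagation, before size limits even enter the comparison.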
What he appears to be saying without saying it is that intelligence is a human trait but not actually a human monopoly. ‘A superintelligence,’ he says, ‘is a really strong optimisation process. This means that there is no necessary connection between being highly intelligent in this sense, and having an objective that we humans would find worthwhile or meaningful.’ He does not here explore the implications of this assertion, but, taken in conjunction with the fears expressed earlier about the ‘philanthropic’ inclinations of the elite, his observations here may immediately be accompanied by what seems an ominous blowback. The tech tsars may already have come to the conclusion that they are under no obligation to the rest of us to do with intelligence merely what might suit the rest of the human race. They can do whatever they see fit. If there is ‘no necessary connection’ between intelligence and human objectives, then we are in real trouble, for this would seem to offer a short-cut through the many cumbersome ethical questions which the use of artificial intelligence — theoretically — throws up.
In a world run by the logic of an amoral elite, machines that ‘think’ might well use their ‘intelligence’ to do things that were, for example, good for the planet, but bad for its human quotient. They might use their ‘intelligence’ to do things that were bad for actually existing humans but good for the elite, or for some future theoretical humanity, or even some future theoretical elite. This is because of the longstanding saw that says the output from a computer will be only as ethical as the input data.
He gives first a frivolous example: A machine is instructed to ‘make humans smile’, and, instead of telling a joke, the machine decides there is a more effective way of carrying out the instruction: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins. Or, a machine tackling a mathematical problem, if its programming does not appropriately limit its options, may ‘realise’ that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity.
Bostrom observes: ‘And notice that this gives the AIs an instrumental reason to do things to us that we might not approve of. Human beings in this model are threats, we could prevent the mathematical problem from being solved.’
We begin to perceive the problem. Machines don’t do empathy or loyalty. They are pragmatic and instrumental in their ‘thinking’. When they ‘think’ they are not really thinking, but merely reacting to the voluminous data that have been inputted in advance. They therefore need to be programmed not merely to do what is required, but to do it in a way that is acceptable to their instructors. And, given the momentous implications for the entire race, it is vital that the human race as a whole has a voice in the process.
Bostrom elaborates: ‘[I]f you create a really powerful optimisation process to maximise for objective x, you better make sure that your definition of x incorporates everything you care about. This is a lesson that's also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimisation process and give it misconceived or poorly specified goals.’
A machine primed in the manner we’re talking about would have enormous resources of knowledge concerning options. It might, for example, be ‘intelligent’ enough to hack itself. Just as a human being can apply experience in a creative manner to solving problems, an intelligent robot, though probably commensurately more intelligent, would be able to ‘think’ its way out of virtually any situation, perhaps using methods that are counter to the interests of humanity, including its own controllers. It might also be able to ‘conspire’ with other machines, since the characteristics of the emerging technology include the fact that there will be embedded, networked processors in almost everything, which means that devices can communicate with one another on channels of which humans will be mostly unaware.
‘The point here,’ says Bostrom, ‘is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out.’
The answer, he says, is to make sure that ‘we’ programme the machines so that ‘even if — when — it escapes, it is still safe because it is fundamentally on our side because it shares our values.’
But this depends on who is doing the programming. Is it ‘we’ humans or ‘they’ — the elite?
Bostrom’s ‘solution’ is studded with the symptoms of this uncertainty:
He says that ‘we’ should move immediately to ‘work out a solution to the control problem in advance, so that we have it available by the time it is needed.’
‘I'm actually fairly optimistic that this problem can be solved. We wouldn't have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python, that would be a task beyond hopeless. Instead, we would create an AI that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.’
That paragraph contains six ‘we’s and one ‘our’ — seven symptoms of what may already be delusion: the idea that there remains some ethic of human unity governing the governed world. Bostrom, one of the sharpest of the tech nerds, warns that the range of ‘esoteric issues’ and ‘technical problems’ to be resolved to render the cohabitation of plodding humans and superintelligent machines at least manageable and, from our viewpoint, secure, is daunting in itself — not as daunting as creating a superintelligent machine, but daunting enough. Those charged with this task need to be mindful not just of the interests of their employers, but of humanity as a whole, so that the values they input will correspond to the consensual values of our civilisation, rather than just a filthy rich sector of it. And the AI needs to be secured not just against potential issues that might arise in the short term, but also into the indefinite future. The risk, says Bostrom, is that those who crack the AI problem may do so without installing the kinds of failsafe protections necessary to ensure perfect safety.
But perhaps all this is pure naïveté: perhaps those who consider themselves the owners of scientific advancement not only do not consider such precautions a priority, but are blasé about whether their machines are perfectly adapted to tiptoe around the rights and sensitivities of the general public? What then?
An esoteric and troubling thought occurs: in a certain light, the Covid caper might look something like a rehearsal for some future dispensation in which such logic is to be deemed commonplace and unexceptional — except that here it is the human (elected) leaders of the world who have given us an advance taste of what it is like to have things done to us that we may not like, in pursuit of a goal we might innocently have issued to those leaders as an implicit instruction: ‘Protect our health and don’t let us die.’ Such an instruction might well seem unexceptionable in terms of its capacity for risk, and yet here we are, unable to access the source code to reverse or amend our instruction.
So, perhaps, in spite of all appearances, we are being prepared for the singularity after all.