Rights for Robots in the Age of the Almighty Algorithm
The ethical life of robots is now a recognised topic of academic interest and scrutiny. What does this signify? Is it absurdity, a crucial cultural shift, or yet another play for power over humans?
It is strange to think that, even while human rights are being trampled into the Thibault Van Rennes of plush offices all over the entity formerly known as the ‘Free World’, in other rooms with more reasonably-priced carpets, the rights of robots are being discussed and considered with great deliberation and at least as much solemnity as attended the formulation of the American Declaration of Independence and the Universal Declaration of Human Rights all those centuries/decades ago. It is easy, too, to think of this as merely an absurdity, but of course it is entirely logical and strategic. The two events — the meeting on the stairs of human rights and robot rights, the first going down, the other climbing to the first floor — are inextricably coupled, twin parts of a single process.
Humans were given rights by the Powers not out of love or loyalty, but as a mechanism for controlling the shift from individual-centred work to technical, collectivised work. The purpose was to recreate rights as an endowment of the state rather than as a given reality by virtue of existence — for those who generate and extend feel entitled to withhold and eliminate. The process occurring now with robots and their much discussed ethical bedding is a continuation of this process, but not, as it appears, a shifting to a new constituency of ‘clients’. The extending of rights to robots relates entirely to the fate and security (or lack thereof) of humans, and it is wise therefore to note and consider who it is that is now extending these rights, and why. That they are being mooted primarily within the groves of academe, whence the world received such treasures as gay ‘marriage’ and transgender toilets, tells us that the ‘benefactor’ is most likely the category of party in a position to confer enormous bequests on such as universities.
Human rights, so called, related in the first instance to the human in his role as worker. In olden times, when men and women were defined by clear-cut roles — tradesman, labourer, housewife, retired elder — there was a distinctly defined hierarchy of honour, respected on all sides: skilled craftsmen at the top, thieves and vagabonds at the bottom. When crafts and handiwork began to be replaced by mechanistic jobs requiring little actual skill, ‘human rights’ were introduced as a way of regulating the sudden explosion of ‘democracy’. The era of individual, production-line rights contributed to shifting the dominant note of culture from work to personalised group identity. This technical revolution also ‘liberated’ women from the home, which of itself called for an alteration of the culture of both workplace and public spaces. This gave rise to political correctness, which had a dual function: to provide a kind of ‘constitution’ of human interaction in the public domain, and also to prepare the world for Cultural Marxism.
Now we have reached a new and not dissimilar frontier in the working life of mankind, albeit with even more far-reaching consequences. The last revolution saw the end of individuated skills; this one sees the end of direct human involvement in the processes of production. It is, in other words, the end of work as we have known it, a moment which ought rightly to prompt the most earnest soul-searching concerning the sources of sustenance and meaning in human existence, and how these might be perpetuated by some new stratagem. In a genuine democracy, this moment would already have given rise to a discussion as to the nature of the termination dividend accruing to the many human beings to be rendered obsolescent by the change, and the cultural adjustments requiring to be made to ensure the continuing contentedness and harmony of the species. Instead, we have an attempted global takeover by the world’s richest interests and proposals for minimal UBI-with-strings-attached, hive cities and couch-potato culture for the young, ‘comfort care’ for the old, and a growing movement to extend rights to the robots who will carry on operating what we used to call society.
Just as ‘human rights’ required to be seen in the context of controlling human behaviour and relationships, so it is with rights for robots. They derive not from some little-suspected instinct of empathy for inanimate ‘beings’ but from the evolving need of the elite to keep the majority of human beings in their boxes at the least possible expense.
This is why, over the past two years, while the groves of academe have fallen silent about growing incursions on the rights of human beings, they have been busily considering the ‘civil liberties’ of what used to be called ‘automatons’ but are now clearly the creatures and agents of whomsoever controls them with the iron rule of the algorithm. Hence, the flood of academic research papers with titles like Assessing the Moral Status of Robots, Caring for Robotic Care-Givers, Why Care About Robots? Empathy, Moral Standing, and The Language of Suffering.
It is clear on the face of things that robots, being inanimate machines, no more need ‘rights’ than fridges or motor cars. That such a discussion is occurring should give us reason to consider more deeply the direction the world is taking and how much this is likely to differ from anything we imagine as to that course.
The idea — now a commonplace futuristic trope — that technology might create hermetically sealed chambers of crucial, instant decision-making, in which the outcomes will be decided by software that writes itself and algorithms that outgrow the intelligence of their creators, is said to be upon us. We have touched on this in previous articles, referring to James Bridle’s fascinating 2018 book, New Dark Age: Technology and the End of the Future, in which he describes experiments conducted on nuclear fusion by the Californian research company Tri Alpha, resulting in the ‘Optometrist Algorithm’, which combines human and machine intelligence in search of an optimum method of problem-solving.
‘On the one hand is a problem so fiendishly complicated that the human mind cannot fully grasp it,’ he writes, ‘but one that a computer can ingest and operate upon. On the other is the necessity of bringing a human awareness of ambiguity, unpredictability, and apparent paradox to bear on the problem — an awareness that is itself paradoxical, because it all too often exceeds our ability to consciously express it.’
(By the way, it is not a response to any of this kind of thing to write in saying ‘machines don’t have souls’, or referring me to the magisterium. This is this: It’s not something else.)
Algorithmic systems designed in this way, Bridle writes, may become relevant not merely to solving technical and scientific questions, but may be applied also to issues of ‘morality’ and ‘justice’. He believes it will be a signal feature of the emerging technological dispensation that there will be aspects of everything that remain beyond human ken. ‘Admitting to the indescribable,’ he writes, ‘is one facet of a new dark age: an admission that the human mind has limits to what it can conceptualise.’ This ‘indescribability’ — or ‘technological opacity’ — must, it appears, be accepted with good grace by humanity, perhaps as religious-minded humans accept — or used to accept — the will of God.
The problem, then, is that, although the machine may be designed by a human intelligence, the ‘magic’ of the algorithm may confer capacities which are greater than the sum of the inputs. If the programmer primes the machine with an ‘ethical’ programme, and adds in a number of layers of additional coding, ascribing weightings to various factors, the programme enables the machine, in a sense, to ‘think’ about situations that had not occurred to the programmer (perhaps because they were not yet even possibilities) at the time of installing the programme.
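By way of illustration only, the kind of weighted ‘ethical’ programme described above can be sketched in a few lines of code. Every factor name and weighting below is invented for the purpose of the sketch, not drawn from any real system; the point is simply that, because the machine ranks outcomes mechanically from its weightings, it can ‘decide’ about situations its programmer never enumerated.

```python
# A deliberately simplistic sketch of weighted 'ethical' scoring.
# All factor names and weights are hypothetical.

WEIGHTS = {
    "lives_at_risk": -10.0,   # penalise endangering people heavily
    "property_damage": -1.0,  # penalise damage, but far less
    "mission_progress": 2.0,  # reward getting the task done
}

def score(outcome: dict) -> float:
    """Weighted sum over whatever factors a predicted outcome reports.

    Because the sum runs over the outcome's own keys, the machine can
    rank situations the programmer never explicitly anticipated.
    """
    return sum(WEIGHTS.get(k, 0.0) * v for k, v in outcome.items())

def choose(outcomes: dict) -> str:
    """Pick the action whose predicted outcome scores highest."""
    return max(outcomes, key=lambda action: score(outcomes[action]))

# A situation never hard-coded anywhere: the ranking still emerges
# mechanically from the weightings alone.
options = {
    "swerve":   {"lives_at_risk": 0, "property_damage": 5, "mission_progress": 1},
    "continue": {"lives_at_risk": 2, "property_damage": 0, "mission_progress": 3},
}
print(choose(options))  # prints "swerve": -3.0 beats -14.0
```

The opacity the essay describes is visible even here: the ‘decision’ is final and numerically tidy, while the weightings that produced it remain invisible to anyone on the receiving end.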
Among the things this hints at is that mainstream culture has a long way to go before it even begins to come to terms with the rule of the algorithm, which has already ushered in an undeclared era of hermetically-sealed, instant decision-making, in which the outcomes of life-changing decisions are decided by software that writes itself and algorithms with the capacity to outgrow the intelligence of their creators.
Listening to the almost total silence on issues relating to the present and coming rule of the algorithm, one might imagine that it involves no more than technical alterations to existing ways of doing things. Nothing much to see here, then? But the algorithm society brings with it not merely practical changes in the way everyday benefits, freedoms, services and practices are delivered, but also transforms the moral and ethical fields in which these phenomena will be framed once the new dispensation is at full throttle. It will transform not merely our working lives — such as they may continue to be — but also our social and leisure lives; our relationships with public bodies and social services; our capacity to travel, the way we shop and do business; whether we are able to buy a car, rent an apartment, or get a job. The almighty algorithm will interpose itself as intermediary and arbitrator in virtually every context in which, hitherto, we interacted with employees of services or goods suppliers.
In a sense, you might say, the algorithm will become as a new emperor or sultan, or pharaoh — except that the algorithm will be closer to a god than to any other imaginable human person. It will be invisible but will see and survey all things. It will be ever-present everywhere, watching and judging the events, experiences, decisions and encounters of our lives. It will know all things — past, present and to come — even our most secret thoughts. It will intervene by a logic that will often be opaque and mysterious. It will be, or will seem to be, the Supreme Power, existing of itself and infinite in its own perfection. With the algorithm, all things will be possible. It will be almighty because it will be capable of doing anything we desire, should it agree to do so. In due course, the algorithm will replace the human soul, taking on the functions of memory, understanding and will, as humans move further and further from the centre of worldly events, barely noticing that they are doing so.
The almighty algorithm will decide whether you will be admitted to hospital, or allowed to adopt a child, whether your child will be forcibly adopted, and whether your family qualifies for social housing. Its decisions will be opaque and probably final. The ‘will-of-god’, not long disappeared as a cultural trope, will be supplanted by a new form of absolutism, in which the thought processes and criteria behind the most momentous outcomes or decisions will be non-discoverable and beyond appeal.
It is hard to avoid the thought that the relative silence concerning all these matters — which overwhelmingly relate to the next phase of the Silicon Valley takeover of the spectrum of human concern and activities — is neither an accidental omission nor an oversight. This has been a deliberate, systematic suppression of discussion, emanating precisely from the causative forces at the steering wheel of these phenomena. In other words, the reason why there is almost no coverage or discussion of these issues in mainstream media is not that they are not urgent or interesting, but that Big Tech, which exerts undue influence and control over legacy media, does not want them covered or discussed.
An algorithm is in effect a programming code for a computational process, and generally operates via an ‘app’ or ‘application’, a piece of software installed and operated on a computer, having a specific and narrow function — for example driving a ‘self-driving’ car, or booking a takeaway dinner. Every app is governed by an algorithm, which conducts calculations based on accumulated data, makes decisions and initiates actions. Artificial Intelligence systems (AI) use algorithms to recognise patterns which enable them to make their own ‘decisions’, which allegedly get better and better when provided with more and better data. Complex algorithms are devised by invisible and anonymous ‘creators’ who merge and morph like the crowd around a street busker, a kind of shifting cloud of shadowy expertise, with individual contributors remaining utterly immune from personal accountability. This, added to the utterly privatised nature of the relevant ownership and control patterns, raises all kinds of questions concerning the suitability of such methodologies in a ‘free’ and ‘democratic’ society. Who, for example, decides how the processes of a society might be maximised in terms of goals and means? Who decides what the goals are? On what basis?
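The mechanism just described — a procedure that turns accumulated data into decisions, and whose decisions shift as more data arrives — can be shown in miniature. The sketch below is a toy, not any real system: a crude ‘nearest example’ rule standing in for pattern recognition, with invented labels.

```python
# A minimal toy illustrating the paragraph above: the 'decision' is
# nothing more than a calculation over accumulated data, and it
# changes as the data accumulates. All names here are hypothetical.

class ApprovalModel:
    """Toy 1-nearest-example 'learner' (illustrative only)."""

    def __init__(self):
        self.examples = []  # accumulated (feature_value, label) pairs

    def learn(self, x: float, label: str) -> None:
        """Accumulate one more data point."""
        self.examples.append((x, label))

    def decide(self, x: float) -> str:
        """Decision = label of the closest example seen so far."""
        nearest = min(self.examples, key=lambda e: abs(e[0] - x))
        return nearest[1]

model = ApprovalModel()
model.learn(300.0, "deny")     # past case: low score, denied
model.learn(700.0, "approve")  # past case: high score, approved
print(model.decide(650.0))     # prints "approve": 650 sits nearer 700
```

Notice that the applicant scored 650 is never judged on any stated principle, only on resemblance to prior data — which is precisely the opacity the essay is concerned with.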
And ought we by now to be starting to stir in our sleep and to take time to review our prior complacency? Has the Time of Covid not cautioned us against the risk of encroaching regulation without democratic mandate or public consultation? Could this become the model for some kind of future vision of human society, in which long-held freedoms and powers are traded for comforts and baubles, but on a take-it-or-lose-everything basis? Is such a scheme as this already in train?
Another matter of major concern is the likely future combination of algorithmic decision-making with social credit scoring systems such as have recently been rolled out in China. The principle of credit scores is not new in relation to, for example, creditworthiness, but algorithms and data availability will greatly increase their reach and impact, with enormous implications in particular for people of limited means, who will find one potential source of support after another already nobbled by the sharing of data. This kind of thing is currently becoming commonplace in Australia, with people losing their social welfare entitlements because they have failed to comply with the ‘vaccine’ ‘passports’ regime.
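The pooling of data the previous paragraph warns about can be reduced to a very small sketch. Everything below is invented for illustration — the source names, the averaging rule, the thresholds — but it shows the structural point: once scores from unrelated domains are shared into one composite, a single low reading drags down access everywhere.

```python
# Hypothetical sketch of composite score-gating. Source names,
# values and thresholds are all invented for illustration.

SOURCES = {
    "payment_history": 0.9,    # a good record in one domain...
    "compliance_record": 0.2,  # ...undone by a shared low score elsewhere
}

def composite_score(sources: dict) -> float:
    """Pool every shared data source into one number."""
    return sum(sources.values()) / len(sources)

def gate(threshold: float, sources: dict) -> bool:
    """One composite number decides access to any given service."""
    return composite_score(sources) >= threshold

# The pooled average is 0.55, so a service demanding 0.7 refuses
# entry, regardless of the strong payment history.
print(gate(0.7, SOURCES))  # prints False
```

The person refused here has no way of knowing which shared source sank the composite, nor any forum in which to appeal it — the ‘beyond appeal or amendment’ problem in miniature.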
Such powers have the capacity to become total in the age of the almighty algorithm. Algorithms, opaque and controlled by private and largely unaccountable companies, will be beyond appeal or amendment, especially by people who lack the wherewithal to escape from the web of algorithms policing their access to basic services.
For example, will someone deprived by an algorithm of access to medical care, a job or a house be able to sue the withholders of these once basic entitlements with any chance of success? While the speed of virtualisation grows exponentially, our collective ethical capacity seems to be slowing down. As we lose touch with the concrete world and the intensity of its challenges and enchantments, we dismantle the frame of alignment that sustained not just the moral order but the capacity of reality to convey meanings we could hold to. In sidelining God, we have rendered ourselves as though mechanistic organisms, and at the same time reduced the very ‘equations’ of human existence and functioning so that we can no longer see or keep account of shifts in responsibility arising from ‘progress’. However we may regard our ‘progress’, it gets harder to deny that the secular-atheistic society has no way of mitigating or discounting great calamities in the way that religious societies were able to.
The algorithm now seeks to insinuate itself as a new form of absolutism, a new infinite and eternal mysteriousness, and its advance publicity credits it with the power to arrive invariably at self-evidently best-possible decisions. But the idea that some concept of ‘neutrality’ will inoculate such as the self-drive algorithm from public disquiet is based on a misunderstanding of what has occurred in purely earthly contexts until just about now. The new deity, the technology, will decide for us, opaquely; and we will come to accept its ‘decisions’ because they are smarter, or faster, than anything we could come up with ourselves. ‘Smart’ will be the measure of all value, and will essentially mean ‘smarter than you’. The ‘we’ will not be ‘us’ but some self-appointed politburo of humanity — a ‘they’ who will not necessarily have ‘our’ best interests at what they still possess of heart.
In these circumstances, of course, we will no longer be able to speak of ‘accidents’ — for example, of a self-driving vehicle — which may in future be termed ‘algorithmic misadventure’. The concept of ‘chance’ will acquire a new meaning: something entirely predictable but not by those most implicated. Each incident, fatal and otherwise, will flow from a pre-programmed response, perhaps tailored to adapt in differing individual sets of circumstances, but nevertheless set in code, having been formulated on the basis of general principles. What are those principles to reflect? Economics? Social utility? Political correctness? This latter heading is not as flippant as it may seem, since PC is much in vogue in Silicon Valley, and has already developed a very precise and detailed hierarchy of human categories, based on victim pecking orders, and policed under the relentless rubric of Cultural Marxism.
In the approaching dispensation, the word ‘moral’ may require to be amended or replaced by something that overcomes the inevitable woolliness of ethical concepts in a world becoming detached from direct human control — and, simultaneously, increasingly devoid of absolute beliefs that are ‘written’ upon the heart of the human person as well as being grounded in actual schemas of learning and written down in weighty volumes.
The almost universal experience of morality is of something largely absorbed into the being of the human, as well as something capable of being comprehensively codified and tabulated as required by the computer. If we remove moral action from the remit of human beings and vest it in computers, whereby it returns under a veil of obscurantism, in what sense will we be able to go on calling this morality? Will it be sufficient to incorporate into the algorithm some formulaic versions of such as John Stuart Mill’s principle of utility, or Immanuel Kant’s categorical imperative? In what sense might such principles be trusted if their outworking remains invisible?
Morality as we currently understand it, for example, incorporates aspects like ‘respect’ and ‘duty’, which by definition tend towards unselfishness. Can these become part of the self-driving algorithm, or will an automated car, in the manner of a loyal watchdog, be programmed to protect primarily its occupants rather than bystanders or occupants of other vehicles? How far might this be taken, and how transparent will the outcomes be? Will the algorithm take the life-expectancy of potential fatalities into account before initiating a response? What about employment status? Sexual orientation? The list of possible questions is endless. The answers all appear to be along the lines of: Let’s leave it to the tech.
We are, it appears, to leave it to the algorithms to decide — on the basis that, being ‘scientific’, they exceed human capacity for morality as well as calculation — and to seek no elaboration of the underlying logics being applied. Self-driving cars, though safer in many respects, will in other ways become inscrutable to users, pedestrians and other adjacent humans. This is in part because of the complexity of the technology and its guiding mathematics, in part because technological systems operate differently to humans in pursuit of similar ends, but mostly because of an evolving deference to technology that ought to chill any sentient human to the core.
Not that the evolution of technology is utterly devoid of ethical rumination. There is, for instance, a growing scientific literature on the moral life and ‘rights’ of machines, robots and ‘robotic care-givers’. Machine medical ethics, especially as it applies to healthcare robots, generally focuses attention on controlling the decision-making capabilities and actions of autonomous machines for the sake of respecting the rights of human beings. As they become more and more sophisticated, taking on board multiple layers of instructions and programmes, computers move beyond the range of their designers’ and controllers’ intentions to new levels of operation and calculation. It is possible that we may yet develop a sense of robo-computers analogous to the way we — some of us and in some moral contexts — regard animals and the environment as ‘moral subjects’. This would require us to rethink our entire worldview, especially with reference to ethics and our own place in reality.
This underlines a huge unremarked problem of the present moment: that the human race in general appears to understand the ethicality of regarding the environment as entitled to protection, as well as grasping the ethicality of treating discrete and specific species of human with exalted respect within the realm of their difference, but has no philosophy anymore with regard to the survival of the species as a whole, or indeed — as we’ve been discovering in recent times — the rights of human beings as human beings to simply be let alone. Humans per se are no longer, it seems, included in the ‘ethical biosphere’, indeed are depicted as the enemy of such, and therefore liable to punishment under other headings. The general population of humans is on the hook for whatever damage technology/industrialism does, but the owners — those directly responsible for the lion’s share of the depredation — are able to buy exemptions by becoming ‘philanthropists’ (which means organising your tax affairs to divert as much money as you can into propaganda in the certain knowledge that you’ll get 95% of it back in hard cash and the rest a thousandfold in lubricated advocacy and favourable coverage).
Until now, human beings have been the sole moral agents in the world. The shift of the moral centre-of-gravity to include robots does not — as yet — imply equivalence between the human and the machine. Machine ethics have been with us for close on two decades and, in their present form, neither deny nor affirm concepts like machine consciousness, emotions, or personhood. The evolving new ethics are ‘pragmatic’, leaving the philosophical questions for later, for now simply accepting that machines are already making decisions that invoke ethical questions.
In practice, we still, in this context, apply the standard instrumental theory of technology, which sees all kinds of machines, including the most advanced, as merely tools which we are not required to feel anything for, or to regard as occupying the place of another ‘person’ in social relationships. But a radical review of this approach may not be far away. To be a ‘moral patient’ (i.e. an object of moral ministration) does not necessitate being of the human species: we can imagine, for example, an alien from another galaxy impressing us with the idea that he or she had this entitlement also. If we are moving towards a world where robots look approximately like humans but possess a far superior intelligence, it may not for much longer be possible to deal with this question by evading it. The extension of ‘rights’ to robots, therefore, is likely to imply at some stage that these rights reflect something of the relative contributions or capacities of robots vis-à-vis humans.
If we succumb to the proposed shift towards transhumanism, this kind of differentiation is all but guaranteed. Once machines appear to have a ‘consciousness’ similar to ours, and we have microprocessors embedded in our bodies, there will seem to be less reason for any moral delineation. The fact that our consciousness appears to be created mysteriously, whereas theirs will be the product of micro-chips and networked microprocessors, does not offer much scope for discrimination in our favour. The fact that we are flesh and bone, and they are silicon and steel, will mean next to nothing. One area of differentiation might be that humans will continue to have a susceptibility to pain and suffering, whereas robots will not. But this — believe it or not — is by no means certain: There is a sturdy body of literature exploring this idea and appearing to lean towards the conclusion that the matter cannot be resolved because we do not know exactly what pain is.
In one study, researchers showed pictures of painful and non-painful situations involving human hands and robot hands, such as a finger about to be cut with a knife. Although people empathised more with humans than with robots, there was what the scientists interpreted as an empathic response to both humans and robots. It seems that we empathise with humanoid robots, ‘masquerading machines’ in ways we do not with our fridge or microwave oven. Perceived pain (or threat of pain) seems to be sufficient to provoke this response. Although our robots are as yet nowhere close to the intelligence and complexity of humans or animals, there is already a notable difference in how we interact with certain types of robots. As we move towards the technological singularity, these shifts are likely to quicken.
For example, many of the robots being developed for the role of carers to the elderly and incapacitated have a facility to exhibit ‘affection’ to their owners/charges. This reminds us that morality is largely relational: it derives from interaction, usually between humans. Moral agency is relational and group-situated rather than located in a particular entity, human or otherwise, which is to say it is affiliative and social rather than ontological. If the computer is merely a tool, the problem does not arise; but if the computer involves inter-communication and interaction with a human, the relationship alters and the computer is transformed into a second ‘subject’ — the ‘other’. It is not hard for even the layman to conjure up manifold problematic areas in which these issues might raise concerns. The problems arise not from how real the emotions of the robots are, but from the reality of the emotions of the humans interacting with them. These drifts have the capacity to confer rights, and therefore competing entitlements, on certain machines, which can only be, in the end, detrimental to the rights and freedoms of affected human beings.
It may, for example, be possible to construct a robot capable of feeling a simulacrum of pain; and, whether or not this has any real basis, the consensus of humans may be that it ought to be regarded in the same way as if it were real, in which case it might become necessary to regulate the relationships between such robots and humans, or even cease creating such machines and deliberately regress from the technology that enabled them. A similar conundrum applies to the possibility of the evolution of ‘psychopathic machines’ — a robo-computer, for example, which can depict a convincing imitation of moral agency, but without the capacity to ‘feel’ anything. Some robots have the capacity, for example, to convey affection — by word or gesture — to their human charges, inevitably attracting affection in return.
Research shows that humans can sometimes become very abusive towards computers and robots, particularly when they are seen as human-like. Once you set out the questions that arise in this context, however, it becomes clear that, regardless of the dangers, there are, in the first instance, good and sensible reasons for exploring the moral dimensions of what is occurring.
There is abundant evidence that, when a machine masquerades as a human, this affects the behaviour of real humans, both towards the robot and other people. There are those who believe we need to examine these questions with some urgency before we find ourselves in the midst of an ethical mess akin to colonialism or slavery. If this seems far-fetched, then think about how far-fetched it would have been just two years ago to talk about human beings being refused access to food because they declined to accept an untested medicine.
The moral conundrums of AI are virtually limitless. Is it acceptable, for instance, for humans to treat human-like artefacts in ways that we would consider morally unacceptable as treatment of other humans? If so, just how much sexual or violent ‘abuse’ of an artificial, programmed ‘moral patient’ should we allow before we censure the behaviour of the abuser? Is it entirely fanciful, from what we have observed of human society of recent times, that it will not, before long, become possible for someone to marry their ‘robot-carer’?
These are real, serious questions, in part because of the danger that a deterioration in the treatment of human-like robots could spill over into human-on-human relationships — the idea that ‘cruelty’ towards a robot might cause a hardening in the attitudes of the perpetrator to other humans, a principle we already hold to in relation to cruelty towards animals. This understanding already turns mistreatment of a robot into a kind of vice. It is likely that this starting point will in time lead to laws forbidding ‘ill-treatment’ of robots and the framing of ‘robot rights’ charters, which inevitably, and by definition, will dilute or displace many existing rights of human beings. As robots, due to their exploding ‘intelligence’, become ascendant in the functioning of society, there is every reason to expect that humans, other than those who have become integrated into the mechanistic economy, will recede in significance, and accordingly in protections. And these tendencies will accelerate when we have reached the tipping point in the development of ‘artificial consciousness’ promised in the technological singularity.
In seeking to come to terms with these developments, we are thrown back on the question: Is technology a tool — i.e. an instrument of human action — or something humans interact with? In other words, is the robot a ‘what?’ or a ‘who?’ An increasing amount of online activity — human-on-computer interaction as opposed to humans simply using the computer as a tool — is clearly leaning towards the ‘who?’ rather than the ‘what?’. The core of posthumanist and transhumanist progress depends on the probability that this relationship will have been incrementally changing — that the line between humans and ‘others’ will have become less clearly definable — ‘other’ no longer meaning just ‘other humans’.
This is further complicated by another side-street of posthumanism: ‘biological robots’. Already, as we have seen, technological evolution has moved beyond the idea that robots cannot exceed the capacity inputted by their human ‘controllers’. Biological robots controlled by artificially cultivated neuronal networks demonstrate observable individuality and can be programmed using sensors to ‘learn’ to avoid obstacles and carry out specific tasks as though of their own volition — and, additionally, by a process of trial and error to become increasingly dexterous and adept. It is also possible to create machine-human inter-relationships in which a biological body is governed at least in part by a computer-brain interface. Other avenues of experiment likely to yield fruit are ‘biobots’ and a manmade ‘brain’ that emulates the human sensory and cognitive apparatus — a ‘mind without a body’. All this, and biometrically adapted humans as well! (A topic we shall return to in another article.)
There are philosophers who believe that machines ought to be merely ‘slaves’, but even this idea appears to open up problematic ethical situations insofar as it may be accompanied by a blurring of the lines between robots and humans. As we confer on robots the simulacra of human emotions, the ethical issues will multiply. If we become emotionally attached to our robots, we will be required to construct an ethical calculus to protect not so much the robots as ourselves and our fellows from the potential consequences of this — not least that, once ethically constituted, robots will become ethical rivals of human beings.
The crucial thing, at least in the early stages, will be not so much the capabilities of the robot but how we come to understand its (his? her?) actions and behaviours. It will not be acceptable to utilise the same ‘ethic’ we apply to a fridge if there are convincing signs of ‘intelligence’, ‘feelings’ or ‘autonomy’, not least because of the danger, already noted, that our responses to robots will infect our relationships with and behaviour towards other human beings. And this in turn will alter both our attitudes to these ‘creatures’ and our behaviour towards them, a dynamic likely to become radically affected by the fact that, increasingly — if the tech tsars have their way — most humans will become, to some degree at least, machines. In such an altered ‘ethical’ realm, humans may feel an increasing pressure to succumb to the transhumanist experiment in order to retain their most fundamental personal rights and entitlements.
Occasionally voices are raised in concern about the implications of the algorithm-driven society — even from within the Silicon Valley compound — but these contributions tend to be self-interested acts of misdirection or ideological try-ons on behalf of the Big Tech monopoly interests — such as Apple boss Tim Cook’s warning earlier this year about ‘rampant disinformation and conspiracy theories juiced by algorithms’. Ho-hum. Some of the language Cook used appeared to raise the correct flags, but his wider argument was overwhelmed by the nudge-nudge of his agenda. He was speaking specifically of the capacity of social media to generate what he regarded as harmful political activity. We can no longer turn a blind eye, he said, to a theory of technology ‘that says all engagement is good engagement . . . and all with the goal of collecting as much data as possible.’ But his context was the events in the Capitol Building in Washington on January 6th 2021, which turned his remarks into special pleading rather than self-criticism. He went on: ‘It is long past time to stop pretending that this approach doesn’t come with a cost — of polarisation, of lost trust and, yes, of violence. A social dilemma cannot be allowed to become a social catastrophe.’ He might well have been talking about the BLM-driven riots of the summer of 2020, but the point is that he wasn’t. It’s hard to see this kind of intervention as other than a strategically timed ass-covering exercise with an eye to future pushback on the stealth nature of the algorithm culture, a kind of controlled explosion of factuality as an insurance policy against future liability issues. In short, when Big Tech speaks, it invariably employs the forked tongue.
A robotics researcher, Peter Haas, Associate Director of the Brown University Humanity Centered Robotics Initiative, gave an interesting TEDx talk in 2017, in which he suggested that the real threat from AI might not be killer robots but an excess of human deference to the algorithm, which has become a new proxy for the white-coated scientist of Milgram experiment infamy.
Haas says he is ‘kind of terrified’ of robots from watching too many movies, but he thinks we have ‘a little time before the robots catch up’. ‘Robots like the PR2 that I have in my initiative,’ he says, ‘they can’t even open the door yet!’ He regards the discussion about the danger from super-intelligent robots as a distraction from ‘something far more insidious’ that is going on with AI systems, algorithms, machine learning, et cetera.
‘You see, right now there are people — doctors, judges, accountants — who are getting information from an AI system and treating it as if it is information from a trusted colleague. It’s this trust that bothers me.’ It bothers him not because of how often AI gets things wrong — AI researchers pride themselves on the accuracy of their results, so breakdowns are rare — but because of how badly it gets it wrong when it makes a mistake. ‘These systems do not fail gracefully,’ he says.
He gives an example of an algorithm misidentifying a husky in a photograph as a wolf. The researchers rewrote the algorithm, asking it to show them the parts of the picture it had paid attention to in making this call, and discovered that the key factor was nothing in the appearance of the animal but the fact that there was snow in the background — ‘bias in the data’.
‘Most of the pictures of wolves were in snow, so the AI algorithm conflated the presence or absence of snow for the presence or absence of a wolf. The scary thing about this is that the researchers had no idea this was happening until they rewrote the algorithm to explain itself.’
‘Even the developers who work on this stuff have no idea what it’s doing.’
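The husky/wolf failure can be sketched in miniature. Below is a toy ‘classifier’ trained on invented examples in which every wolf photo happens to contain snow; the features, weights and data are all made up for illustration (the real system was a deep neural network, not this simple counting scheme), but the mechanism of the mistake is the same.

```python
# Toy illustration of the husky/wolf failure: a classifier trained on
# biased data learns the background (snow), not the animal.
# All features and examples here are invented for illustration.

def train(examples):
    """Learn a weight per feature: net co-occurrence with the 'wolf' label."""
    weights = {}
    for features, label in examples:
        for f in features:
            weights[f] = weights.get(f, 0) + (1 if label == "wolf" else -1)
    return weights

def predict(weights, features):
    """Classify by summing the learned weights of the features present."""
    score = sum(weights.get(f, 0) for f in features)
    return "wolf" if score > 0 else "husky"

def explain(weights, features):
    """Crude 'show your work': which features drove the decision, strongest first."""
    return sorted(((weights.get(f, 0), f) for f in features), reverse=True)

# Biased training set: every wolf photo happens to have snow in it.
training = [
    ({"pointed_ears", "grey_fur", "snow"}, "wolf"),
    ({"yellow_eyes", "grey_fur", "snow"}, "wolf"),
    ({"pointed_ears", "yellow_eyes", "snow"}, "wolf"),
    ({"pointed_ears", "blue_eyes", "grass"}, "husky"),
    ({"curled_tail", "grey_fur", "grass"}, "husky"),
]

w = train(training)

# A husky photographed in snow is misidentified as a wolf...
husky_in_snow = {"curled_tail", "blue_eyes", "snow"}
print(predict(w, husky_in_snow))   # -> wolf

# ...and only the explanation reveals why: 'snow' carries all the weight.
print(explain(w, husky_in_snow))   # 'snow' comes out on top
```

As in the case Haas describes, nothing in the prediction itself betrays the problem; it is only when the model is forced to explain itself that the snow is exposed as the deciding factor.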
The Compas Computer Sentencing algorithm (Correctional Offender Management Profiling for Alternative Sanctions, a case management and decision support tool used by US courts to assess the likelihood of a defendant becoming a recidivist) is used in more than a quarter of US states. A ProPublica survey found that African Americans were 77 per cent more likely to be identified as a probable repeat offender than Caucasians. Haas warns that such biases are not merely running wild within the system, but are being ignored by authority figures who continue to use it.
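What ‘77 per cent more likely’ means arithmetically is a relative rate, not a difference of 77 percentage points. The sketch below computes such a figure from hypothetical toy counts invented purely for illustration — these are not ProPublica’s actual numbers.

```python
# What "77 per cent more likely" means: a relative increase in the rate
# at which two groups are flagged. Counts below are invented toy figures,
# NOT ProPublica's data.

def flag_rate(flagged, total):
    """Fraction of a group flagged as likely reoffenders."""
    return flagged / total

group_a = flag_rate(531, 1000)   # 53.1% of group A flagged
group_b = flag_rate(300, 1000)   # 30.0% of group B flagged

# Relative increase of A's rate over B's rate:
relative_increase = (group_a - group_b) / group_b
print(f"{relative_increase:.0%}")   # -> 77%
```

A 23-point gap in the flagging rate thus presents, relative to the lower rate, as a 77 per cent greater likelihood of being flagged.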
Judges, he says, ignore the bias aspect because Compas is one of the most efficient elements in a backlogged system.
‘Why would they question their own software? It’s been requisitioned by the state, approved by their IT department. Why would they question it?’
Actually, he says, some of those affected have gone to court to question it — and that hasn’t gone well either. The Wisconsin State Supreme Court, for example, ruled that Compas did not deny a defendant due process provided it was used ‘properly’, and at the same time blocked the defendant from inspecting the source code of Compas to prove that the programming was biased. This should concern everyone, he says, because the kind of programming to be found in Compas is used in systems for vetting house loan submissions, job applications and health cover. Even more ominously, it’s used in self-drive cars.
‘Would you want the public to be able to inspect the algorithm that’s trying to make a decision between a shopping cart and a baby carriage — in a self-driving truck — in the same way the dog/wolf algorithm was trying to decide between a dog and a wolf? Are you potentially a metaphorical dog who’s been misidentified as a wolf by somebody’s AI algorithm? Considering the complexity of people, it’s possible. Is there anything you can do about it now? Probably not, and that’s what we need to focus on. We need to demand standards of accountability, transparency and recourse in AI systems. . . . We need to demand standards and we need to demand regulation, so that we don’t get snake oil in the marketplace. And we also have to have a little bit of scepticism.’
Haas references Stanley Milgram’s experiment in human obedience to authority, which showed that the average person will follow the orders of an authority figure even if it means harming another person — and even to the point of death. ‘In this experiment, everyday Americans would shock an actor past the point of him complaining about heart trouble, past the point of him screaming in pain, past the point of him going silent in simulated death — all because somebody, with no credentials, in a lab coat, was saying some variation of the phrase, “The experiment must continue”. In AI, we have Milgram’s ultimate authority figure. We have a dispassionate system that cannot reflect, that cannot make another decision, that there is no recourse to, that will always say, “The system, or the process, must continue.”’
He recalls a car journey in which, as he approached Salt Lake City, it started to rain. As he climbed into the hills, the rain turned to snow, and soon it became impossible to see the road in front or the lights of the car ahead of him. The car started to skid, and went off the highway. He became terrified that someone would crash into him.
‘I’m telling you this story to get you thinking about how something small and seemingly mundane, like a little bit of precipitation, can easily grow into something very dangerous. We are driving in the rain with AI right now, and that rain will turn to snow, and that snow could become a blizzard. We need to pause, check the conditions, put in place safety standards, and ask ourselves how far do we want to go. Because the economic incentives for AI and automation to replace human labour will be beyond anything we have seen since the Industrial Revolution. Human salary demands cannot compete with the base cost of electricity. AIs and robots will replace fry cooks in fast food joints and radiologists in hospitals. Someday, the AI will diagnose your cancer and a robot will perform the operation. Only a healthy scepticism of these systems is going to help keep people in the loop. . . . And I’m confident — if we can keep people in the loop, if we can build transparent AI systems like the dog/wolf example, where the AI explained what it was doing to people, and people were able to spot-check it, we can create new jobs for people partnering with AI. If we work together with AI, we will probably be able to solve some of our greatest challenges. But to do that, we need to lead, and not follow. We need to choose to be less like robots, and we need to build the robots to be more like people, because ultimately the only thing we need to fear is not killer robots — it’s our own intellectual laziness. The only thing we need to fear is ourselves.’
This is sensible and germane talk, the kind we need to hear much more of. But in the absence of a broader discussion, in the absence of more widespread knowledge and understanding about the context we may very shortly be facing, Haas’s use of ‘we’ and ‘our’ should again prompt at least a raised eyebrow: as things stand, the decisions he’s talking about, as well as the interventions that may occur on foot of them, are likely to be made not by ‘us’ but by ‘them’.
These things, like so much relating to the coming tech neo-genesis, will be decided over the heads of the vast majority of the world’s population. We are unlikely to be offered any choice. Because of the many dangers arising from loss of data privacy, there is some talk about regulatory initiatives, but given the record of the political sector’s engagement with Silicon Valley up to this, we would be well advised not to hold our breaths.
It is a short leap from where we already are to programming robots to police, judge and punish humans, unless we decide to block these options — a prospect that again raises the ominous question of who the ‘we’ is. Virtually all the literature on these topics appears to regard the human race as a monolithic entity with common interests and, broadly, common objectives. The idea of a humanity divided between controllers and obsolescent impotents does not appear to have entered the picture yet. But imagine a world in which robots and humans exist side by side, or at least in adjoining dormitories, in which the robots are ‘useful’ by the measure of the controllers while the human quotient contributes nothing, wasting away on screens, stuffing their faces with food and drugs, requiring more and more surveillance and policing. What do we imagine might happen next? This question was ‘live’ for some time before Covid; it is exponentially more urgent and vital now.