The Algorithm Method
A world in which our lives will be governed by opaque formulae owned by corporations trundles towards us, with zero discussion of the seismic changes the rule of the algorithm will unleash.
Working on my recent two-part tribute to the great technologist, engineer and philosopher of technology Mike Cooley brought to mind an occasion when, writing about intelligent machines nearly 30 years ago, I called Mike to ask him if he was not worried that we might create a machine that would turn on the human race and enslave us. In spite of his concerns about technology stealing human creativity and thereby supplanting human beings in all kinds of ways, Mike was at the time reassuring in his ultimate prognosis, which was rooted in a profound belief in the scope of human wisdom and intelligence. ‘I have never met an ordinary person,’ he would say. ‘Every person I have ever met has been extraordinary in some way.’
The example he used had to do with the complexity of something as ostensibly simple as crossing the road, which he emphasised was not in the least a simple exercise. He went through all the computations involved — distances, speeds of oncoming vehicles from both directions, rates of potential acceleration, obstacles, contingencies and so on — and concluded that a machine could never be developed to carry out this level of computation in such a short time.
At the time it seemed he was probably right. Now, however, things are not so clear. In fact, it’s obvious that something akin to the developments he said could never happen is now old hat. A modern robo-computer is capable of making calculations at rates phenomenally faster than a human brain. Mike’s example is passé, but the fact that a robot can easily be programmed to cross a road safely does not necessarily gainsay his broader sanguinity on the question of artificial intelligence. He was using an over-simplistic example to make a much deeper point, and this point remains valid, though it may not remain so for very much longer unless we wake up.
The essential logic of the position that machines can never exceed the intelligence of humans arises in principle from the fact that humans are the architects of technological systems and therefore always destined to remain the ‘masters’ of the relationship. This seems to make sense, but avoids the deeper nature of that relationship. Given the changing nature of what it means to be ‘intelligent’, to suggest that machines are absolutely incapable of being more ‘intelligent’ than their creators seems increasingly naïve. The way we receive and store information has changed: nowadays, barely conscious of what we are doing, we rely on Google to ‘remember’ things for us, and consult it with something like the sense that we are merely stretching our own memories. Due to our voluntary servility to a particular model of technology, our intelligence has come under potentially lethal attack.
I remember being struck one time by something said to me by a man who made his living by teaching politicians and the like to speed-read. He spent much of his time between clients reflecting on his observations of these people. Once, speaking to him about my sense of the intelligence of a particular individual, he cautioned me to be careful — ‘There is a difference between intelligence and memory capacity,’ he said, ‘although superficially both may pass for intelligence.’ Many of his clients, he said, had reputations as highly intelligent people, but his experience of them was that they simply had something like photographic memories — the capacity to store and recall huge amounts of learnt-off information.
Since intelligence involves a high degree of informational capacity, the ability of computers to store vast amounts of data gives them a head start on humans, at least in the zone of appearances. Add in programming for computation, reasoning and meaning and you already have something that leaves most humans glued to the spot. In the robot, whether intentionally or not, man is creating another autonomous life form, though one without any intrinsic form of moral agency other than what may be installed as part of its programming. But, whereas men may design the moral programming of robots, even these controllers will not determine the ultimate mix of this morality, which may be tilted by all manner of elements, including data and cross-pollination of algorithms. Moreover, ‘men’ does not mean the same as ‘man’: robots, like nuclear weapons, may fall into the wrong hands, and become immune from the control of the community. Ultimately, this shift in ‘moral’ thinking will enable a number of fundamental alterations in the very nature of moral reasoning. One such is that ‘morality’ will no longer be comprehensible by the old understandings; another, arising from that, is that morality will be amenable to constant redefinition, which is to say that relativism will enable a developing language of ‘morality’ and ‘ethics’, while in reality all morality, and the very idea of morality, will be erased in all but name.
In his fascinating 2018 book New Dark Age: Technology and the End of the Future, James Bridle, describing experiments in nuclear fusion conducted by the Californian research company Tri Alpha Energy, related how the company had developed what it called an ‘Optometrist Algorithm’, combining human and machine intelligence as an optimum method of problem-solving.
‘On the one hand is a problem so fiendishly complicated that the human mind cannot fully grasp it,’ he writes, ‘but one that a computer can ingest and operate upon. On the other is the necessity of bringing a human awareness of ambiguity, unpredictability, and apparent paradox to bear on the problem — an awareness that is itself paradoxical, because it all too often exceeds our ability to consciously express it.’ Two forms of inscrutability coalescing in pursuit of clarity? Or the union of two forms of opacity begetting something even more opaque?
Algorithmic systems designed in this way, Bridle writes, may become relevant not merely to solving technical problems in the pharmacological and physical sciences, but also to questions of morality and justice. It becomes clear that, in the emerging technological dispensation, there will be aspects of everything that remain beyond human ken. ‘Admitting to the indescribable,’ he writes, ‘is one facet of a new dark age: an admission that the human mind has limits to what it can conceptualise.’ This ‘indescribability’ — or, in the parlance, ‘technological opacity’ — must, it appears, be accepted with good grace by humanity, perhaps as religious-minded humans have hitherto accepted the will of God. But who will make the choices and decisions, and on what bases? How will we know if, or how, the dice are loaded?
The problem, then, is that, although the machine may indeed be designed by a ‘limited’ human intelligence, the ‘magic’ of the algorithm may confer capacities which are more than the sum of the inputs. If the programmer primes a machine with an ‘ethical’ programme, and adds in a number of layers of additional coding that ascribe weightings to various factors, the programme enables the machine, in a sense, to ‘think’ about situations that are not before the programmer at the time of installing the programme.
Take, as a basic example, the ‘trolley dilemma’, a standard ethical conundrum posited for generations by philosophers and ethicists without any definitive resolution. In this scenario, a runaway rail trolley is heading at speed down a line on which, further down in the path of the trolley, five people are standing with their backs turned, unaware of the danger. The pointsman in the signal cabin observes the situation and has an opportunity to divert the trolley into a siding where just one person will be in its path and likely to be run down. Ethical processes, generally speaking, tend to favour inaction as a lesser failing than a harmful action.
The dilemma, then, is whether the loss of a single life, caused by direct and conscious human intervention, is to be regarded as ethically preferable to the loss of five lives as a result of happenstance. If the pointsman acts to save the five, he has, in a sense, achieved a net ‘saving’ of four lives, but he has also taken a deliberate action likely to cause the death of someone who would otherwise have remained safe. In actuarial terms, he would appear to have achieved a good result. But a man is dead and the incident can no longer be deemed an accident. Or can it? Hence the ‘dilemma’.
In the kind of technological context we’re talking about, these ‘moral conundrum’ decisions would be vested in and pre-programmed into the governing algorithm. But the algorithm, through surveillance and data harvesting, may have nanosecond access to an array of information that, if incorporated into the decision-making process, may have infinitely more complex implications. The use of x-ray machines as part of CCTV infrastructure may provide instantaneous access to crucial information regarding the six implicated individuals. Three of the five people on one path may suffer from potentially fatal cancers. The sole person on the other path may be a woman pregnant with twins. On the headline facts, we may believe that one or other outcome is preferable, but are we sanguine about the fact that these decisions will be made in an instant by a process that is not merely beyond appeal, but whose creators can neither predict nor afterwards precisely say how the decision was arrived at?
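To see how such weightings might operate, here is a minimal sketch, in Python, of the kind of scoring the preceding paragraphs describe. Everything in it (the factors, the weights, the data fields) is hypothetical, invented purely for illustration; no manufacturer has published any such formula.

```python
# Purely illustrative sketch of how a governing algorithm might 'score'
# the two trolley outcomes using programmer-assigned weightings.
# Every factor, weight and data field here is hypothetical.

def score_outcome(people, weights):
    """Sum a weighted 'cost' for the people endangered on one path."""
    cost = 0.0
    for person in people:
        cost += weights["life"]                               # base value of a life
        cost += weights["unborn"] * person.get("unborn", 0)   # e.g. detected pregnancy
        cost -= weights["terminal"] * person.get("terminal", 0)  # prognosis 'discount'
    return cost

weights = {"life": 1.0, "unborn": 1.0, "terminal": 0.5}

# Five people on the main line, three with potentially fatal cancers;
# one person on the siding, pregnant with twins.
main_line = [{"terminal": 1}, {"terminal": 1}, {"terminal": 1}, {}, {}]
siding = [{"unborn": 2}]

# The machine diverts if the siding's weighted cost is the lower one.
divert = score_outcome(siding, weights) < score_outcome(main_line, weights)
print("Divert to siding:", divert)   # True with these weights: 3.0 < 3.5
```

Every number in that weights table is a moral judgement smuggled in as a parameter: halve the ‘terminal’ discount and the trolley goes the other way. This toy has three factors; a production system fed by live data harvesting might have thousands, none of them visible to the people on the tracks.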
Is it, at this moment, imaginable that we would allow, for example, a jury to impose a death sentence and have it immediately carried out? Yet this is the kind of implication these technologies are riddled with.
There is a remarkable resonance between this hypothesis and the situation we currently find ourselves in — I mean with the Covid coup and the political and medical authorities’ bizarre intervention to divert the action of nature and impose a sequence of events arising from their decisions and the underlying modelling. In both contexts, we are confronted by attempts to subvert the unfolding of events, whether these be seen as naturalistic, accidental or ‘the will of God’. The problem with this is that, once the natural flow of events is interfered with, the consequences cease to be accidental and become something involving culpability, even if purely in the nature of negligence.
The context where the ‘trolley dilemma’ becomes most immediately and obviously relevant is that of autonomous vehicles, or self-driving cars, which approach from behind, out of sight even in our rear-view mirrors. We anticipate them with a mixture of bemusement, disquiet and excitement, all converging in disbelief: It’s not possible, not really. But the indications are that it is not just possible but a virtual — so to speak — certainty, within perhaps a decade. The World Economic Forum/Klaus Schwab ‘Great Reset’ requires as a primary element of its functioning the introduction of the self-driving car, accompanied by the intention that human-driven cars will begin to disappear from public roads within little more than a decade. By many accounts, however, these plans have run into some real problems, in particular arising from the coordination of cars in urban environments, which has proved far more complex and problematic than previously imagined. Current projections are that the autonomous-vehicle rollout will begin with long-distance trucks, and that there may be some revisions of previously announced timelines. The moment when the ‘trolley dilemma’ becomes the ‘Tesla dilemma’ may therefore be somewhat delayed.
A fully autonomous vehicle is one capable of driving on its own, without human intervention. Humans carried in such vehicles are never drivers, always passengers. Self-driving vehicles are an example of a new category of machine, in that they have access to the public thoroughfares on much the same basis as humans: without constraint of track or rail. Computer-generated movement of machines in these contexts is a ‘brave’ initiative for all kinds of reasons, and will necessitate radical changes in laws and cultural apprehension.
Self-driving cars use sensors, cameras and GPS to mediate between the world and the computer. In an emergency, the car will make a judgment. But how — by what criteria? — is the software to be programmed to make these judgments? How, as dictated by its programming algorithm, should the computer prioritise the information it receives? And will we be able to come to terms with these ‘decisions’ when the outcome may involve the death or serious injury of a loved one?
An algorithm is, in effect, programming code for a computational process. Every app is governed by an algorithm, which conducts calculations based on accumulated data, making decisions and initiating actions. Artificial intelligence (AI) systems follow algorithms which, by recognising patterns, are enabled to make their own decisions, which allegedly get better and better when provided with more and better data. Complex algorithms are created by invisible actors who merge and morph like a crowd around a street busker, a kind of shifting cloud of expertise whose individual contributors remain anonymous and utterly immune from personal accountability. This, added to the utterly privatised nature of the relevant ownership and control patterns, raises all kinds of questions concerning the suitability of such methodologies in a free and democratic society. Who, for example, decides how the processes of such an algorithm society might be maximised in terms of goals and means? Who decides what the goals are? On what basis? Is it permitted to dissent, or simply to opt out without penalty? Has the Time of Covid not cautioned us against the risk of encroaching regulation without mandate or consultation? Could this become the model for some kind of future vision of human society, in which long-held freedoms and powers are traded for comforts and baubles, but on a take-it-or-lose-everything basis?
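As a purely schematic illustration of that last claim, that decisions ‘improve’ as data accumulates, consider the following toy sketch, in which a one-feature classifier re-estimates its decision boundary with every labelled example it sees. Nothing in it corresponds to any real system; it is merely the bare shape of the loop.

```python
# A toy, entirely schematic illustration of 'improving with data':
# a one-feature classifier that re-estimates its decision boundary
# every time a new labelled example arrives.

class ToyLearner:
    def __init__(self):
        self.pos, self.neg = [], []

    def observe(self, value, label):
        """Record one labelled example."""
        (self.pos if label else self.neg).append(value)

    def decide(self, value):
        """Decide using the midpoint of the two class averages so far."""
        if not self.pos or not self.neg:
            return None  # no basis for a decision yet
        threshold = (sum(self.pos) / len(self.pos)
                     + sum(self.neg) / len(self.neg)) / 2
        return value > threshold

learner = ToyLearner()
for value, label in [(0.9, True), (0.2, False), (0.8, True), (0.3, False)]:
    learner.observe(value, label)

print(learner.decide(0.6))  # the 'decision' is an artefact of the data fed in
```

The point of the sketch is the direction of dependence: the machine’s ‘judgment’ is an artefact of what it has been fed, and whoever controls the feed controls the judgment.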
Even now we are just beginning to wrestle with the implications. The media — aside from specialist journals — tend to treat the matter whimsically and, although there has been a scatter of books on the topic, it has yet to surface in the neocortex of Western society. The legacy media, under ideological instruction, bang endlessly on about climate change, but here, in a context with implications at least equally grave, there is scarcely a murmur of questioning or analysis.
Listening to the almost total silence surrounding issues relating not just to self-driving cars but more generally to the coming rule of the algorithm, one might imagine that the coming dispensation will involve no more than technical alterations to existing ways of doing things. But the algorithm society will bring with it not merely practical changes in the way everyday benefits, freedoms, services and practices are delivered; it will also transform the moral and ethical fields in which these phenomena will be framed once the new dispensation kicks in. It will transform not merely our working lives, but also our social and leisure time, our relationships with public bodies and social services, our capacity to travel, the way we shop and do business, and even whether we get to buy a car, rent an apartment, or get a job. The algorithm will interpose itself as an intermediary and arbitrator in virtually every context in which, hitherto, we interacted with the employees of service or goods suppliers.
Except that the algorithm will be closer to a god than to another human person or human agency. It will decide whether you will be admitted to hospital, whether you will be allowed to adopt a child, whether your child will be fostered out, whether or not your family qualifies for social housing. Its decisions will be opaque and probably immutable. The ‘will of God’, not long disappeared as a cultural trope, will be supplanted by a new form of absolutism, in which the thought processes and criteria behind outcomes and decisions with immense impacts for our lives will be undiscoverable and unappealable. Here, enforcement will operate with the threat of instant state coercion rather than the promise of long-fingered eternal damnation.
It is hard to avoid the thought that the silence on all these matters, which cumulatively comprise the next phase of the Silicon Valley takeover of the spectrum of human concerns and activities, is not a consequence of accidental omission or oversight. On the contrary, it arises precisely from the deep nature of the factors underlying these phenomena, in particular the overarching ambition of Big Tech to take over every last and most intimate area of human existence, and the resources of power and money these entities can call upon in seeking to achieve it. In other words, the reason why there is almost no coverage or discussion of these issues in mainstream ‘legacy’ media is not that they are not urgent or interesting, but that Big Tech, which exerts near absolute influence and control over such media, does not want them covered or discussed.
Another matter of major concern is the likely future combination of algorithmic decision-making with Chinese-style social credit scoring systems. The principle of credit scores is not new in relation to, for example, creditworthiness, but algorithms and data availability will greatly increase their reach and impact, with enormous implications in particular for people of limited means, who will find one potential source of needed benefits after another already nobbled by virtue of the sharing of data. Algorithms, opaque and controlled by private and largely unaccountable corporations, will be beyond appeal or amendment, especially by people who lack the wherewithal to escape from the web of algorithms barring them from basic resources. For example, will someone deprived by an algorithm of access to medical care, a job or a house be able to sue anyone with any chance of success? Will you be able to sue a supplier or manufacturer for damage incurred? Or — a new but plausible question — have our governments, as with Covid vaccines, already agreed to indemnify the algorithm controllers so as to prevent this area being amenable to legal action?
Because of the many dangers arising from loss of data privacy, there has been some talk of regulatory initiatives, but given the record of the political sector’s engagement with Silicon Valley hitherto, we would be well advised not to hold our breath. Added to political inertia is the fact that the complexity, opacity and anonymity of algorithms render them virtually impervious to external scrutiny. The language of algorithms is not readily translatable into plain words, in any language. Even the experts, as we have seen, have difficulty explaining what goes on inside one.
So far, academic papers barely scratch the surface of these dilemmas, concentrating on the more generalised implications of artificial intelligence (AI), such as the implicit threat to human labour, or questions like, ‘Will intelligent robots acquire human rights?’ Even when academic studies touch on self-driving vehicles, they tend to talk about traffic-flow models or the implications for the concept of human responsibility of supplanting human decision-making with complex algorithms. The occasional study deals with the issue of ‘artificial moral agents’ and tends to be accompanied by a darkish innuendo to the effect that human beings are so morally deficient as to be beneficially replaceable by entities imbued with artificial morality — homo sapiens 2.0 — which will be programmed to implement only what is ‘good’ about humanity.
An even more challenging issue lurks downstream from the responsibility question, but so far has been largely overlooked. This relates to how humanity will cope in a world in which there is no recourse to justice, reckoning or even satisfactory resolution when often opaque decision-making — vested in technologies which result in death or serious injury in circumstances presenting in a God-vacated culture as arbitrary and inconclusive — offers no possibility of closure flowing from acceptance or resignation. These questions open up a sightline on the unfolding tech future, which so far has rolled downhill towards us without the imposition of brake or restraint.
Tools are never neutral. A screwdriver supplants the finger and nail; a spade changes the role of the hands. AI is no different. Many of the technologies we use, which we fondly imagine are increasing our freedoms, are doing the precise opposite. Many Internet users, for example, imagine that the world wide web remains unchanged from the way it was described in its early days, as an unrestricted and diversity-fostering information highway. In fact, over the past seven or eight years, due to the pressure to ‘monetise’ — i.e. to make bigger and bigger profits from advertising revenues — the web has become involuted and convergent, narrowing the horizons of its users rather than broadening them. The main cause of the change is the ‘personalisation’ of Google searches, which causes each search made by an individual to be tailored to that person’s known ‘likes’ and interests, a process which remains invisible even to the user, who may well believe that his searches are throwing up the same things as everyone else’s. As a result of this customisation of searches, what we searched for in the past determines what we hear about in the future, eliminating all possibility of serendipity and isolating us in cultural and ideological bubbles. The invisibility of this process is even more worrying: Google doesn’t tell you how it reads your profile or why it’s giving you the results it is. You may not even know that it’s making any kind of assumptions about you. Google’s then CEO, Eric Schmidt, once expressed his delight at this development by declaring that what its users wanted was for Google to ‘tell them what they should be doing next.’
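The mechanism can be caricatured in a few lines. The sketch below is invented for illustration (the profile fields, topics and scores are all hypothetical, and bear no relation to Google’s actual and undisclosed ranking systems), but it captures the principle: two users typing the identical query are handed differently ordered worlds.

```python
# Invented-for-illustration sketch of 'personalised' ranking: results
# re-ordered by affinity with a stored user profile. All fields and
# scores here are hypothetical.

profile = {"politics": 0.9, "gardening": 0.1}  # inferred 'likes'

results = [
    {"title": "Growing tomatoes indoors", "topics": {"gardening": 1.0}},
    {"title": "Party conference round-up", "topics": {"politics": 1.0}},
]

def affinity(result):
    """Weight a result by how closely its topics match the profile."""
    return sum(profile.get(topic, 0.0) * weight
               for topic, weight in result["topics"].items())

# The same query, re-ordered to suit the stored profile.
for result in sorted(results, key=affinity, reverse=True):
    print(result["title"])
```

Multiply that re-ordering across billions of queries a day and the ‘filter bubble’ described below follows as a matter of course.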
The nature and implications of this trend were made clear by Eli Pariser in his 2011 book The Filter Bubble: What the Internet Is Hiding from You, which describes the future ‘algorithm society’, in which everyone will hear about only those things they are already known to believe in or agree with. Pariser fears a drying up of democratic exchange, which is obviously already happening. ‘Democracy,’ he writes, ‘requires citizens to see things from one another’s point of view, but instead we’re more and more enclosed in our own bubbles. Democracy requires a reliance on shared facts; instead we’re being offered parallel but separate universes.’ Information about web users has become one of the most lucrative resources in the world, and is being used to precision-target increasingly customised advertising. The use of cookies and tracking beacons means that every clue dropped — even unwittingly — by every user can become a commodity. Even when users believe themselves hidden or anonymous in the Great Outdoors, Internet operators can still harvest their personal details and sell them to the highest bidder — usually corporate entities with stuff to sell. It is now estimated that three-quarters of all content consumed via Netflix is the result of machine recommendations. Algorithmic recommendations are replacing arts-page criticism of books, movies, music and the rest, creating a further round of circularity.
These tendencies will take the randomness out of living, ensuring that the things placed before us will be, by and large, things that our ‘profiles’ have already ordained. The possibility, for instance, of things ‘growing on’ us will be almost entirely eliminated, because there will be less opportunity and context in which to try new things. Big Data, for example, will ensure that we have to listen only to the kind of music we are already known to like. A Bowie fan will be someone who has listened many times to Changes and Life on Mars, but never to The Bewlay Brothers. In other words, the algorithm will remove from human existence precisely the qualities that make life a constant process of growth and adaptation, and human personality a mysterious and unpredictable thing. In such ways, perhaps, the puzzle of whether or not man’s intelligence can be exceeded by his mechanistic creations may be resolved in an ominous way: even if it is not possible to create artificial intelligence that is smarter than the human brain, perhaps if we manage to make humans more stupid we can achieve a simulacrum of the same outcome?
The term ‘Big Data’ is widely misunderstood. Data are ‘big’ not because of the size of the database but because of the data’s capacity to grow organically and constantly update itself. The database of every ‘subject’ is therefore from the outset an ever-growing corpus of knowledge, highly contingent on further refinement by virtue of the expansion of the data. The purpose is to ‘know’ the subject more fully, ideally better than he ‘knows’ himself.
Data is not merely information: it is a deep, wide and very different form of knowledge from anything we have been used to. The Big Data controllers have access to multiple layers of information about each one of us, including those who think they’re opting out. They have the primary information, harvested directly from the subject, often without his permission or even awareness. They have the secondary information harvested from those who know the subject, the tertiary information from those who know those who know the subject, and so on ad infinitum. Combining these data and subjecting them to probability calculations with a 97 per cent level of reliability means that the controllers ‘know’ more about each person than anyone else does, including the subject himself, and are in a position to divine with almost impeccable accuracy his likely opinion on a topic he has yet to be introduced to.
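How might stacked layers of second-hand evidence yield that kind of confidence? A minimal sketch, assuming (purely for illustration) three independent layers of evidence and invented likelihood figures, shows the arithmetic by which individually weak signals compound:

```python
# Hedged arithmetic sketch: three independent 'layers' of evidence about
# a subject (direct, from acquaintances, from acquaintances of
# acquaintances) combined into one probability that the subject holds a
# given opinion. The likelihood figures are invented for illustration.

from math import prod

prior = 0.5  # assume no prior knowledge of the subject's view

# Each layer: (P(evidence | holds opinion), P(evidence | does not))
layers = [
    (0.80, 0.30),  # primary: the subject's own clicks and posts
    (0.70, 0.40),  # secondary: what those who know the subject reveal
    (0.60, 0.45),  # tertiary: acquaintances of acquaintances
]

# Naive-Bayes combination, treating the layers as independent.
p_yes = prior * prod(p for p, _ in layers)
p_no = (1 - prior) * prod(q for _, q in layers)
print(f"P(subject holds the opinion) = {p_yes / (p_yes + p_no):.2f}")  # 0.86
```

Three signals, none individually much better than a coin toss, combine into an 86 per cent ‘certainty’ about a view the subject has never expressed. Scale the layers into the thousands and the advertised reliability figures begin to look arithmetically plausible, whatever one makes of their ethics.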
These mechanisms have the capacity to investigate not merely connections to be found within the data, but also the meanings of these, and these, again subjected to probability formulae, are capable of expanding the value of the data exponentially.
Among the consequences of these drifts is that the evolution of what hitherto one might non-controversially have described as ‘human society’ may be shifting tracks from the human to the technological, with concepts of ‘progress’ and ‘development’ vested no longer in the human being but in machines designed and built by human beings, machines which in some respects ‘transcend’ not merely the capacities of the average human person but even those of their own architects.
The ideologies protecting technology always insinuate themselves as utopian, but as the age of digital normality approaches, what is emerging suggests itself, by any human measure, as dystopian. As we age, our companions will be robots, which can be trained to ‘converse’ with us about the things we are interested in. We will eat our meals speaking over Zoom to strangers, ‘mukbangers’, whom we will pay to keep us company and match us bite for bite. As the fascist corporatocracy gains ground, the overwhelming majority of our populations will be housed in hive cities, where they will most likely be guarded by robot dogs and surveilled by overhead drones. The Combine will know each citizen better than he knows himself.
All this change implies also massive changes in the moral and ethical life of society. Hitherto, morality has centred on protecting and placing limits on the human person, its purpose being either to insulate or restrain one or other person or group in respect of another or others. This landscape is being transformed. Already, morality has been expanded to incorporate other animals and, latterly, the environment, which is frequently spoken of as an entity deserving of exalted protections in comparison to, and indeed against, the claims of human beings. Soon, robots may join this growing band of competitors for moral and ethical sheltering.
The four industrial revolutions — steam, electricity, nuclear and electronic — have parallels in the quasi-ethical fields, these being the points at which, under the formulas coined by the historian Bruce Mazlish, man came to recognise that there was not, in fact, a sharp discontinuity arising from each of these shifts, where hitherto it had been assumed there was. The first of these parallels, coinciding with the steam revolution, was the Copernican revolution, in which man came to recognise a continuity between the planet Earth and the rest of the universe; the second — equating to the arrival of electricity and gas — was when man came to recognise a continuity between his own species and other animals (the Darwinian revolution); the third (coinciding with the nuclear age) was when man came to recognise a continuity between rational and irrational humans (the Freudian revolution). Now, we come to the fourth revolution: when man comes to accept a continuity between humans and intelligent machines. These continuities are never absolute (we still eat other animals, for example) but nevertheless create tendencies that become more and more solid.
The Turing test, originally called the imitation game by its creator Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. The test is a subjective one, being determined not by any evaluation of the qualities of the machine, but by how this question is judged by an intelligent human. If the human cannot tell the machine’s behaviour from that of a human, the test is passed.
Research shows that humans can sometimes become very abusive towards computers and robots, particularly when these are seen as human-like, or ‘masquerading robots’. That this might lead to a revised moral calculus is not immediately obvious. But, once you set out the questions that arise in this context, it becomes clear that there are, in the first instance, good and sensible reasons for exploring the moral dimensions of what is occurring.
There is abundant evidence that, when a machine masquerades as a human, this affects the behaviour of real humans, both towards the robot and towards other people. There are those who believe we need to examine these questions with some urgency before we find ourselves in the midst of an ethical mess akin to colonialism or slavery. If this seems far-fetched, then think about how far-fetched it would have been just a year ago to talk about people debating whether they should wear one face covering to the shops or three.
Is it acceptable, for instance, for humans to treat human-like artefacts in ways that we would consider morally unacceptable as treatment of other humans? If so, just how much sexual or violent ‘abuse’ of an artificial, programmed ‘moral agent’, ‘moral subject’ or ‘moral patient’ (all phrases increasingly used by ethicists to refer to certain kinds of machines) should we allow before we censure the behaviour of the abuser?
Is it entirely fanciful, from what we have observed of human society in recent times, to suppose that it will, before long, become possible for someone to marry their ‘robot-carer’? Might such a ‘marriage’ be a matter requiring oversight by the state? These are real, serious moral questions, in part because of the danger that a deterioration in the treatment of human-like robots could spill over into human-on-human relationships — the idea that ‘cruelty’ towards a robot might cause a hardening in the attitudes of the perpetrator to other humans, a principle we already hold to in relation to cruelty towards animals. This understanding has already turned mistreatment of a robot into a kind of vice, the test being the responses of other people. It is likely that this starting point will in time lead to laws forbidding ‘ill-treatment’ of robots and the framing of ‘robot rights’ charters which, inevitably and by definition, will dilute the rights of human beings. As robots become ascendant in the functioning of society, there is every reason to expect that humans will recede in significance, and accordingly in protections. And these tendencies will accelerate when we have reached a tipping point in the development of ‘artificial consciousness’.
Moreover, if it becomes possible to programme machines with ethical principles enabling them to make choices and judgements relevant to their functions, it is a short leap to programming them to judge humans also, unless we decide to block this, a possibility that raises the ominous question of who the ‘we’ might be. Virtually all the literature on these topics appears to regard the human race as a monolithic entity with common interests; the idea of a humanity divided between controllers and obsolescent impotents does not appear to have yet entered the picture. Imagine, though, a world in which robots and humans exist side by side, or at least in adjoining dormitories, but in which, whereas the robots are ‘useful’ by the measure of the controllers, the human quotient contributes nothing, wasting away on screens, stuffing their faces with food and drugs, requiring more and more surveillance and policing. What do we imagine might happen next? Might it involve house arrest, ‘detention centres’ (already mooted in relation to Covid) or even ‘demise pills’?
Ever so occasionally, intimations of concern are raised from within Silicon Valley itself about the implications of the algorithm-driven society, but these contributions tend to be self-interested and ideological try-ons, such as Apple boss Tim Cook’s recent warning about ‘rampant disinformation and conspiracy theories juiced by algorithms.’ Ho-hum. Some of Cook’s language appeared to raise the correct flags, but his overall argument was overwhelmed by the nudge-nudge of his agenda. We can no longer turn a blind eye, he said, to a theory of technology ‘that says all engagement is good engagement . . . and all with the goal of collecting as much data as possible.’ But his context was the events in the Capitol Building in Washington on January 6th 2021, which reference turned his remarks into a non sequitur, the implication being that the dangers arise purely from people Big Tech disapproves of. He went on: ‘It is long past time to stop pretending that this approach doesn't come with a cost — of polarisation, of lost trust and, yes, of violence. A social dilemma cannot be allowed to become a social catastrophe.’ He might well have been talking about the BLM-driven riots of the summer of 2020, except that he wasn’t, which is why his words sound like a placing on the record of worries he hopes will not be raised concerning the conduct of his own company. It’s hard to see this kind of intervention other than as an ass-covering exercise with an eye to future pushback on the stealth nature of the algorithm culture, a kind of controlled explosion of factuality as an insurance policy against future liability issues. ‘Didn’t I raise precisely these questions in 2021?’ I can hear a 2025 Tim Cook ask with a pained expression.
Do we really intend to hand the formulation of future social value over to entities — Big Tech, Silicon Valley — which have already contaminated the groundwater of public trust to such an extent that it will be impossible for most of us to believe that their algorithms are constructed without bias of some kind? Will the convenience factor and life-saving capabilities of technologies like the self-driving car be sufficient to quiet any unsettling thoughts? Will those who have already turned their backs on God as an irrational superstition be prepared to enter a new age of irrationalism, in which they will be as unknowing and ‘superstitious’ as the most simplistic god-botherer in history, based on the ‘graces’ of utility and efficiency?
From the perspective of the ‘faithful’, it is of vital importance that we become alert to these changes and their significance, and do not allow them to pass us by as we sit with eyes buried in the last generation of tech. Even at this moment, the algorithms are silently working away in the background to make our modern world run more smoothly — but for whom? If we think of these developments at all, we think of them as just another aid to the modernised functionality of our lives, like the way the word ‘transistor’ might once have made us think momentarily of the innards of the device on which we listened under the blankets to Kid Jensen on Radio Luxembourg: as unassuming servants of our needs and desires. But, one day soon, we will find ourselves in a situation where we are no longer able to think of ourselves as in the least the masters, but instead the subjects, if not the slaves, of the Big Tech-controlled algorithms we have been ignoring all along. We need to sit with that thought and consider whether we are happy to forsake so much for so pitifully little. And we need to ask ourselves: Why the silence? Why the absence of curiosity or inquiry? Have we already become so enfeebled by the increasingly ascendant invisible brains of our rapidly transforming culture that we are no longer capable of rendering these developments subject to limitation or consent? Is it already too late? If not, maybe it is time someone created an algorithm for provoking and managing democratic conversation.