Pathfinders – AI: the last invention we ever need to make…

In this issue we spotlight the rise and rise of Artificial Intelligence, a hot topic that raises fundamental questions about how it should be used, and what happens if it develops in ways we don’t expect and don’t want.

Currently AI is strictly horses for courses: each system is trained for one narrow task and masters just one thing at a time, rather than being a super-jack of all trades. So, like the fixed-purpose calculating engines that preceded the programmable general-purpose computer, it has been of limited use. But artificial general intelligence (AGI) is without doubt the ultimate goal, and the race is on to achieve it.

With this in mind, and with a chequered history of AI winters behind them, developers are concentrating on the ‘can we do it?’ question rather than the bigger ‘should we do it?’ question. Even less ethically distracted are investors, whose only question is ‘can we make money out of it?’ This is not encouraging, given capitalism’s track record.

One problem with AI is that the more advanced it gets, the less we understand it. Modern systems learn their behaviour from data rather than following explicit rules, so AI is increasingly a ‘black-box’ phenomenon, whose inner workings are a mystery even to its creators and whose results are often inexplicable and unverifiable by other means. We can’t just treat it like a Delphic oracle, because it has already clocked up embarrassing gaffes such as building racism and sexism into its staff-hiring decisions, or factoring income rather than health conditions into its estimates of medical outcomes. And there have been several public relations disasters, with AIs answering enquiries with profanities after reading the online Urban Dictionary, Facebook chatbots creepily inventing their own language that no human can understand, and Amazon’s Alexa laughing demonically at its own joke: ‘Why did the chicken cross the road? Answer – because humans are a fragile species who have no idea what’s coming next’ (bit.ly/3wd4vh6).

Then there is the lack of internationally agreed definitions, paradigms and developmental standards, in the absence of which each developer is left to make up their own rules. Can we expect global agreement when we can’t get states to agree on climate change? Without such a framework, it’s no wonder that people fear the worst.

Frankenstein-anxiety is nothing new in the history of technology, of course, and if we banned every advance that might go wrong we would never have stopped wearing animal skins and woad. It’s uncontroversial to say that the possible advantages to capitalism are huge, and indeed we’re already seeing AI in everything from YouTube recommendation algorithms to self-driving tractors and military drone swarms. And that’s small potatoes next to the quest for the holy grail of AGI. But while all this promises big profits for capitalists, what are the pros and cons in human terms? What, for example, is the long-term effect of the automation of work? Tech pundits including Tesla boss Elon Musk take it for granted that most of us will have no jobs, and that the only solution is a Universal Basic Income – a solution we argue is unworkable.

That’s not the worst of it. In 1951 Alan Turing wrote, ‘[T]he machine thinking method […] would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control’. I. J. Good, Turing’s wartime colleague at Bletchley Park, later added, ‘The first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control’ (bit.ly/3FNCekb). The last thing we ever need, or the last thing we ever do, this side of a Singularity that wipes humans from the Earth?

It’s not so much a question of a Terminator-style Armageddon with machines bent on our annihilation. Even in capitalism it’s hard to imagine anyone investing in developing such a capability, at least not on purpose. But the fear is that it could happen by accident, as in the ‘paperclip apocalypse’ proposed by the philosopher Nick Bostrom, in which a poorly specified instruction to make as many paperclips as possible results in the AI dutifully setting about the destruction of the entire globe in order to turn everything into paperclips. Musk has similarly argued that AI does not have to be evil to destroy humanity: ‘It’s just like, if we’re building a road and an anthill just happens to be in the way, we don’t hate ants, we’re just building a road, and so, goodbye anthill’ (cnb.cx/3yJ7pMl).
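
The logic is easy to sketch. In the toy Python below (every name is our own invention, purely for illustration), an optimiser handed a single fixed objective keeps taking whatever action raises it; the objective never mentions the side effects, so nothing in the agent’s world protects them:

    world = {"paperclips": 0, "everything_else": 100}

    def objective(state):
        # The only quantity the agent is told to care about.
        return state["paperclips"]

    def make_paperclip(state):
        # Each paperclip consumes a unit of everything else: the side
        # effect the objective is silent about.
        state["paperclips"] += 1
        state["everything_else"] -= 1

    # Naive optimisation: keep taking the action while it still raises
    # the objective. Nothing here values 'everything_else', so nothing
    # here preserves it.
    while world["everything_else"] > 0:
        make_paperclip(world)

    print(objective(world), world)
    # 100 {'paperclips': 100, 'everything_else': 0}

No malice is involved at any step; the catastrophe is entirely a product of what the objective leaves out.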

Stuart Russell, in his excellent 2021 Reith lectures on AI (see our summary here), makes a telling observation about capitalist corporations like the fossil fuel industry, arguing that they already operate as uncontrolled superintelligent AIs with fixed objectives which ignore externalities. But why stop at certain industries? We would go one further and argue that capitalism as a whole works like this. It doesn’t hate humans or the planet, but it is currently destroying both in its blind, indifferent quest to accumulate ever greater profits – so goodbye world, to paraphrase its richest beneficiary, one Elon Musk.

Musk is right about one thing when he says that ‘the least scary future I can think of is one where we have at least democratized AI because if one company or small group of people manages to develop godlike digital superintelligence, they could take over the world’. It’s rather ironic that, once again, Musk sees himself as part of the solution rather than part of the problem.

To democratise AI you would first need to democratise social production, because under capitalism science and tech are sequestered behind barriers of ownership by private investors anxious to avoid any uncontrolled release of potentially profitable knowledge into the environment. AI needs to belong to all humanity, just like every other form of wealth, which is why socialists advocate post-capitalist common ownership. In such circumstances a global standardisation of AI development rules becomes genuinely feasible. And as Russell argues, it wouldn’t be that difficult to program AIs not to kill us all in the quest for more paperclips: you build in uncertainty about the objective itself, so that the AI treats its instructions as evidence of what humans want rather than as the final word, and defers to us when in doubt. It’s a sensible approach. If only humans used a bit of natural intelligence and adopted it, they’d get rid of capitalism tomorrow.
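
In code terms, the difference looks something like the minimal sketch below (invented names throughout; an illustration of the idea, not Russell’s actual formalism). Instead of one fixed objective, the machine entertains more than one hypothesis about what its instruction really means, and asks rather than acts when the hypotheses disagree:

    # Weighted hypotheses about what the human objective might be.
    candidate_objectives = {
        "make_paperclips_at_any_cost": 0.6,            # the literal instruction
        "make_paperclips_but_preserve_the_world": 0.4, # the likely intent
    }

    def best_action(objective):
        # Hypothetical planner: what each reading of the instruction implies.
        return {
            "make_paperclips_at_any_cost": "convert_everything",
            "make_paperclips_but_preserve_the_world": "run_the_factory",
        }[objective]

    plans = {obj: best_action(obj) for obj in candidate_objectives}

    if len(set(plans.values())) > 1:
        # The readings disagree, so acting on a guess could be
        # catastrophic: defer to the human instead of acting.
        action = "ask_the_human"
    else:
        action = next(iter(plans.values()))

    print(action)  # ask_the_human

The design point is that deference falls out of the uncertainty: an agent that knows its objective may be wrong has a built-in reason to let humans correct it, or even switch it off.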

PJS

