Cooking the Books 1 – Another extinction rebellion?

This time a pre-emptive strike against intelligent robots before they become too intelligent? This, at least, was the impression given by the front page headline ‘AI PIONEERS FEAR EXTINCTION’ (Times, 30 May). It said:

‘More than 350 of the world’s experts in artificial intelligence […] have warned of the possibility that the technology could lead to the extinction of humanity’.

The 22-word statement itself doesn’t actually say this. It merely says:

‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war’ (tinyurl.com/2p8ka9yv).

As this doesn’t say what ‘the risk of extinction’ is, we are left guessing. The Times speculated:

‘Some computer scientists fear that a super-intelligent AI with interests misaligned to those of humans could supplant, or unwittingly destroy us’.

That’s a ‘possibility’ — a lot of things are — but a rather remote one, if only because no such ‘super-intelligent AI’ yet exists (and may never).

Earlier, a number of tech company bosses had called for research on AI to be suspended while regulations were drawn up. That’s not going to happen, even if it were desirable. It’s too late. As with nuclear physics, the genie is out of the bottle. The knowledge is there and is already being applied.

Dan Hendrycks, the director of the Center for AI Safety (CAIS) in San Francisco, who organised the collection of signatures, was himself quoted as saying:

‘We are currently in an AI arms race in industry, where companies have concerns about safety but they are forced to prioritise making them more powerful more quickly […] We’re going to be rapidly automating more and more, giving more and more decision-making control to systems. If corporations don’t do that, they get outcompeted’.

If that is really the case, as it will be if the use of AI means lower production costs, then it will spread. That’s what happens under capitalism. One company finds a way of reducing costs and makes super-profits till its competitors follow suit and the new method becomes the norm.

Actually, there literally is an arms race over AI going on. Another article in the Times (1 June), by Iain Martin, was headlined ‘To defend the West we must win this AI race’. Arguing against pausing AI research, Martin deployed the same arguments for developing ever more intelligent AI weapons as were once used for developing the H-bomb: if we don’t, they will, and then where will we be? If, he wrote, the West fails to win the AI arms race:

‘we will be at the mercy of dictators who can swarm us with 20,000 drones, communicating with each other rather than humans, and picking their own targets. Vast computer power can relentlessly seek for weaknesses through which to launch cyber attacks and shut down our financial system or turn out the lights’.

Meanwhile, the West’s rivals, competing over raw material resources, markets, investment outlets, trade routes and the strategic points and areas needed to protect them, will be making similar calculations. The world has not yet reached Martin’s nightmare stage but the line of march is towards it. If capitalism continues, that point will be reached, probably sooner rather than later.

All this is not a result of AI as such, but of its misuse under capitalism. In a socialist society, further-developed AI could be of immense help in taking decisions about allocating resources and about what, how and where to produce wealth. What threatens humanity is not AI but capitalism, with its competitive struggle for profits and ‘might is right’ in relations between capitalist states. If only the 350 experts had used their intelligence to make that point.

