Last reviewed 15 January 2019
The Industrial Strategy aims to put the UK at the forefront of the artificial intelligence (AI) and data revolution. Paul Clarke reports.
When the Government launched its Industrial Strategy in 2017, it said that a key part of its approach would be taking on Grand Challenges: “the society-changing opportunities and industries of the future, where we can build on our emerging and established strengths to become a world leader”.
Putting the UK at the forefront of the AI and data revolution headed the first four challenges identified. Unfortunately, the challenge may be even greater than ministers think. The European Commission agrees that AI is central to the emerging new economy, but it has noted that China and the USA account for around 86% of global investment in this field, leaving Europe as a whole with only a small share of the remaining 14%. That suggests the UK has some way to go to become a world leader.
Before considering how its present, relatively modest, position can be improved, however, it might be worth looking at reactions to the likely impact of AI.
What is AI?
According to a Government speaker at the August 2018 Tech City Executive Accelerator conference, “there aren’t many moments in human history when a technology turns up that changes everything. The wheel, maybe … the printing press … the micro-processor. And we are living through one of those moments right now.”
He put the key date as 2017, when DeepMind’s AlphaGo beat Ke Jie, the world’s best Go player. While immediate applications may have focused on having phones that can suggest songs we might like, Ambassador Richard Wood noted drily, AI will soon be helping to solve some of the world’s biggest challenges: chronic diseases, climate change and cyber-security threats. Used properly, AI could transform crucial industries, adding between $3.5 trillion and $5.8 trillion to the global economy.
What could possibly go wrong?
A lot, according to the Chief Economist at the Bank of England, Andy Haldane.
AI, he warned, has the potential to destroy up to 15 million jobs in Britain alone as a “third machine age” hollows out the labour market, widening the gap between rich and poor. He said the Bank had used methodology pioneered in the USA to model the impact of smarter machines on the UK labour market and its more than 30 million employees. It classified jobs into three categories — with a high (greater than 66%), medium (33–66%) or low (less than 33%) probability of automation — and his gloomy forecast comes from weighting those probabilities by the proportion of employment each category represents.
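The banding arithmetic described above can be illustrated with a toy calculation. The employment shares and midpoint probabilities below are invented for illustration only; they are not the Bank of England's actual figures, although with these made-up shares the total lands close to the 15 million headline:

```python
# Illustrative sketch of the probability-band approach described above.
# Jobs are grouped by automation-probability band, and the expected number
# of jobs at risk is each band's employment weighted by its probability.
# The band shares and midpoint probabilities are invented for illustration.

WORKFORCE = 30_000_000  # approximate UK employees, per the article

# (band label, midpoint automation probability, share of employment)
bands = [
    ("high (>66%)",     0.80, 0.35),
    ("medium (33-66%)", 0.50, 0.30),
    ("low (<33%)",      0.15, 0.35),
]

at_risk = 0.0
for name, prob, share in bands:
    jobs_in_band = WORKFORCE * share
    expected = jobs_in_band * prob  # expected jobs automated in this band
    at_risk += expected
    print(f"{name}: {jobs_in_band:,.0f} jobs, ~{expected:,.0f} at risk")

print(f"Total expected jobs at risk: {at_risk:,.0f}")
```

The point of the exercise is that the headline number is highly sensitive to the assumed band shares, which helps explain why different modellers reach such different totals.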
PricewaterhouseCoopers (PwC) must have been using different inputs, however, as it has produced a report arguing that robots and AI could in fact create as many jobs as they destroy. It analysed an OECD dataset that looks in detail at the tasks involved in the jobs of 200,000 workers across 29 countries. In Will Robots Really Steal our Jobs? An International Analysis of the Potential Long-Term Impact of Automation, PwC identifies three overlapping waves of automation between now and the mid-2030s: algorithm, augmentation and autonomy.
According to the report, until the early 2020s, we will be in the first stage and, so far, relatively few jobs have been automated because many of the technologies are still at an early development stage. From the early to the late 2020s, the augmentation wave will shift the focus to automating repeatable tasks and exchanging information, as well as further developments of aerial drones, robots in warehouses and semi-autonomous vehicles. Up to 20% of jobs could be affected by the end of the 2020s, as the use of AI systems becomes more widespread and robotics technologies advance and mature.
By the mid-2030s, the advent of the autonomy wave will see AI able to analyse data from multiple sources, make decisions and take physical actions with little or no human input. PwC forecasts that as many as a third (30%) of jobs could be affected by that time, as autonomous robots and driverless vehicles roll out more widely across the economy. At that point, many more manual tasks could become automated, pushing sectors such as transport, manufacturing and retail to the top of the likely automation list. “We do not believe, contrary to some predictions, that automation will lead to mass technological unemployment by the 2030s any more than it has done in the decades since the digital revolution began,” PwC concludes.
Two completely different conclusions from leading experts, then, but the confusion does not stop there. Earlier this year, the Massachusetts Institute of Technology’s (MIT) Technology Review identified at least 18 predictions about automation from companies, think-tanks and research institutions. Some predict millions of jobs created, some millions destroyed, some both. This led the Review to conclude that “there is really only one meaningful conclusion: we have no idea how many jobs will actually be lost to the march of technological progress”.
So where does the Government stand in this continuum of confusion? Not surprisingly, it tends towards the optimistic.
Shaping our future
It highlights that an AI start-up is founded almost weekly in the UK and that AI already “gets us from A to B, makes shopping and entertainment more convenient and protects us from fraud”. The Government also claims that new developments offer an opportunity to accelerate medical research in early diagnosis, leading to better prevention and treatment of disease. AI and machine learning are new industries in their own right, it notes, but are also transforming business models across many sectors by deploying vast datasets to identify better ways of doing complex tasks.
The fourth industrial revolution
AI is at the heart of what is being described as the fourth industrial revolution (4IR) with the manufacturers’ organisation EEF saying that 80% of its members agree that this next industrial transformation, driven by connectivity, big data and other rapid advances in product and process technology, will be a business reality within the next decade. By 2023, EEF predicts, almost 2 in 10 manufacturers will have invested in the 3D simulation of manufacturing processes, augmented reality and/or fully autonomous robots.
Central to all this will be the capture of data “on everything” and real-time analysis of that data by machines and systems. In the Industrial Strategy, the Government argues that what differentiates 4IR from its three predecessors — mechanisation; electrification and mass production; and digitalisation — is its scale, speed of implementation and complexity, blurring the lines between the physical, digital and biological worlds.
Taking the lead
Despite the confusion visible in expert predictions about the likely impact of AI, the Government believes that embedding AI across the UK will create thousands of good quality jobs and drive economic growth. “We have some of the best research institutions in the world,” it points out, “and globally-recognised capability in AI-related disciplines, including maths, computer science, ethics and linguistics.” It also highlights the substantial datasets available in public institutions where AI can be explored safely and securely.
Artificial Intelligence Sector Deal
In April 2018, the Government built on the Industrial Strategy by agreeing a Sector Deal to boost the UK’s global position in developing AI technologies. Available at assets.publishing.service.gov.uk, the Deal is described as a first commitment from Government and industry to realise this technology’s potential, including up to £0.95 billion of support. It sets out actions to promote the adoption and use of AI in the UK, and delivers on the recommendations of the independent review, Growing the Artificial Intelligence Industry in the UK, led by Professor Dame Wendy Hall.
The Deal will establish a new AI Council to bring together respected leaders in the field from across academia and industry; a new delivery body within the Government (the Office for Artificial Intelligence) to support it; and a new Centre for Data Ethics and Innovation. This last body will be expected to ensure safe, ethical and ground-breaking innovation in AI and data‑driven technologies.
Digital Secretary Matt Hancock said: “Advances in the ways we use data are giving rise to new and sometimes unfamiliar economic and ethical issues. We need to make sure we have the governance in place to address these rapidly evolving issues, otherwise we risk losing confidence among the public and holding businesses back from valuable innovation.”
Incidentally, the EU has also decided that more effort is needed to seize the AI initiative from the USA and China and it too has set up a new body to channel support and effort in that direction. It has, however, come up with a slightly more exciting name: the Joint European Disruptive Initiative (JEDI).
Currently, the big users of AI are companies such as Facebook, Google, Amazon and the creators of video games. While they are mainly concerned with improving their products (or their marketing to consumers), it should be noted that Google has implemented AI at its own data centres, using DeepMind’s reinforcement learning algorithms to optimise cooling and cutting the energy used for cooling by 40%. Microsoft has launched a new $50 million five-year programme called AI for Earth which awards grants to support projects that change the way people and organisations monitor, model, and ultimately manage Earth’s natural systems.
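The reinforcement-learning idea behind the cooling optimisation can be sketched in miniature. The toy below is a stateless, bandit-style simplification, not DeepMind's actual system (which used deep neural networks over thousands of sensor readings); the setpoints, cost model and learning parameters are all invented for illustration:

```python
# Bandit-style sketch of reinforcement learning for cooling optimisation.
# The agent tries candidate cooling setpoints, observes a (noisy) energy
# cost, and gradually learns which setpoint is cheapest to run.
import random

random.seed(0)

SETPOINTS = [18, 20, 22, 24]      # candidate cooling setpoints (degC), invented
q = {s: 0.0 for s in SETPOINTS}   # estimated reward (negative cost) per setpoint
ALPHA, EPSILON = 0.1, 0.2         # learning rate and exploration rate

def energy_cost(setpoint):
    """Toy plant model: energy cost is lowest near 22 degC, plus sensor noise."""
    return (setpoint - 22) ** 2 + random.gauss(0, 0.5)

for step in range(2000):
    # epsilon-greedy: mostly exploit the best-known setpoint, sometimes explore
    if random.random() < EPSILON:
        action = random.choice(SETPOINTS)
    else:
        action = max(q, key=q.get)
    reward = -energy_cost(action)              # lower energy => higher reward
    q[action] += ALPHA * (reward - q[action])  # incremental value estimate update

best = max(q, key=q.get)
print("Learned best setpoint:", best)
```

The same trial-and-error loop, scaled up to richer state (temperatures, loads, weather) and a learned model of the plant, is the general shape of the approach applied to real data-centre cooling.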
AI for Earth favours projects that address four areas: agriculture, including land-use planning and management; biodiversity, including habitat protection and restoration; climate change, including extreme weather and climate modelling; and water, including droughts, floods and disaster management. Meanwhile, a new field of "Climate Informatics" is being developed, by agencies ranging from the Met Office to NASA, harnessing AI to transform weather forecasting (including prediction of extreme events) and to improve understanding of the effects of climate change.
Smaller, innovative firms are looking at applications such as introducing machine learning into 3D printers so that they can determine how to create a better product and eventually cut waste by using the most appropriate materials. According to Christina Valimaki, Director for Chemicals at Elsevier R&D Solutions, 3D printing technology can drastically reduce an item’s carbon footprint. “Frequently, business is seen to be at odds with environmental issues,” she said, “but 3D printing is the perfect example of how taking a ‘green’ approach can have significant business benefits.”
Utopia or Armageddon?
As was made clear earlier in this article, expert opinion is hopelessly divided on whether developments in AI will lead to triumph or disaster. Will the prediction made by John Maynard Keynes in 1930 finally come to fruition, with the working week cut to perhaps 15 hours as people enjoy more leisure time? The TUC certainly hopes so: it has called for businesses to share the benefits of AI with their workforce by introducing a four-day working week. Or will the dark prophecy of Yuval Noah Harari in his much-publicised book, “21 Lessons for the 21st Century”, come true? “Theoretically,” he said, “you can have an economy in which a mining corporation produces and sells iron to a robotics corporation, the robotics corporation produces and sells robots to the mining corporation, which mines more iron, which is used to produce more robots…”, which would seem to make human beings economically irrelevant. Meanwhile Elon Musk, CEO of Tesla and SpaceX, has warned that AI poses an “existential threat” and has led calls for it to be regulated by governments.
Peers in the UK seem to agree, as the House of Lords recently released a report, AI in the UK: Ready, Willing and Able?, which calls for ethics to be placed at the centre of AI development. AI, the report argues, should be for the common good and benefit of humanity; should operate on principles of intelligibility and fairness; and should not be used to diminish the data rights or privacy of individuals and communities. Additionally, all citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI; and the autonomous power to hurt, destroy or deceive human beings should never be vested in AI.
Seemingly unperturbed by these possibilities, in May 2018 the Prime Minister announced the first AI and Data Grand Challenge mission: to use data, AI and innovation to transform the prevention, early diagnosis and treatment of chronic diseases by 2030. Success in this mission could save lives and increase NHS efficiency by enabling prevention and reducing the need for costly late-stage treatment, the Government suggests.
“The opportunity — working with academia, the charitable sector, and industry and harnessing the power of AI technologies — is considerable,” it goes on. “It should lead to a whole new industry of diagnostic and tech companies which would drive UK economic growth.” So perhaps we should end on that positive note.