Will AI steal your job?

There are growing concerns that artificial intelligence will put vast numbers of people out of work. Those fears are overdone, argues Carl Frey, the Oxford academic who famously predicted that 47 per cent of US jobs are at high risk of automation.


Concerns about Artificial Intelligence (AI) resonate with many, from Hollywood screenwriters to truck drivers. As technology advances rapidly, there's growing unease about the implications of Generative AI for our work, social fabric, and the world at large. Will there be any task beyond the reach of machines? 

Over the past 10 years, my collaborators and I have delved deep into the ramifications of AI. A decade ago, I co-authored a paper with Michael Osborne estimating that nearly 47 per cent of jobs in the US could, in theory, be automated as AI and mobile robotics widened the range of tasks computers could perform. 

We grounded our predictions on the belief that, regardless of the technological advancements of the day, humans would continue to have the upper hand in three pivotal areas: creativity, intricate social interactions, and dealing with unstructured settings, such as in the home. 

Yet, I must admit that there have been significant strides even in these areas. Large Language Models (LLMs) such as GPT-4 now offer impressively human-like textual responses to an extensive array of prompts. In this era of Generative AI, a machine might just pen your heartfelt love notes. 

The bottlenecks to automation we identified a decade ago, however, remain relevant today. If GPT-4 crafts your love letters, for example, the significance of your face-to-face dates will only grow. The crux of the matter is that as digital social engagements become more intertwined with algorithms, the value of in-person interactions, which machines can't yet duplicate, will surge. 

Furthermore, while AI might pen a letter mirroring Shakespeare's eloquence, that is only possible because it draws on existing works of Shakespeare for training. Generally, AI excels in tasks defined by explicit data and objectives, such as optimising a game score or emulating Shakespearean prose. Yet, when it comes to pioneering original content rather than iterating on established ideas, what benchmark should one aim for? Pinpointing that goal is often where human creativity comes into play. 

What is more, many jobs can’t be automated, as our 2013 paper suggested. Generative AI – a subset of the vast AI landscape – doesn't strictly function as an automation tool. It requires human input for initiation and subsequent refinement, fact-checking and editing of its results. 

Finally, the quality of content from Generative AI reflects the calibre of its training data. The old adage "garbage in, garbage out" holds true. Typically, these algorithms rely on enormous datasets, often encompassing vast swathes of the Internet, rather than meticulously curated datasets crafted by experts. Thus, LLMs tend to produce text that mirrors the common or average content found online, rather than the exceptional. As Michael and I recently argued in an article in The Economist, the principle is simple: average data leads to average results. 


AI needs people

So, what does this portend for the future of employment? For starters, the newest wave of AI will consistently require human oversight. Interestingly, those with less specialised skills might find themselves at an advantage, as they can now produce content that meets this "average" standard. 

A key question, of course, is whether future progress might soon change this, enabling automation even in creative and social realms. Without a significant innovation, it appears doubtful. To begin with, the data LLMs have already consumed likely represents a substantial portion of the Internet, so there is scepticism about whether training data can be sufficiently expanded in the coming years. Moreover, the proliferation of subpar AI-generated content could degrade the overall quality of the Internet, making it a less reliable training source. 

Additionally, while the tech world has come to expect the consistent growth predicted by Moore's Law – the notion that the number of transistors on an integrated circuit (IC) doubles roughly every two years – there's growing consensus that this pace might plateau by around 2025 due to inherent physical limits. 
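Moore's Law amounts to a simple doubling formula. As a rough illustration, using the two-year doubling period mentioned above (the starting count of one billion transistors is an arbitrary example, not a figure from the text):

```python
def transistors(n0, years, doubling_period=2.0):
    """Projected transistor count after `years` of Moore's-Law scaling."""
    return n0 * 2 ** (years / doubling_period)

# A chip with 1 billion transistors, after a decade of doubling
# every two years, would carry 2**5 = 32 times as many:
print(transistors(1e9, 10))  # → 32000000000.0
```

The exponential form is what makes any plateau so consequential: once doubling stops, the gains that the industry has come to rely on stop compounding.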

Thirdly, the energy expenditure for developing GPT-4 was believed to account for a significant portion of its USD100 million training cost – and this was before the surge in energy prices. With the pressing issue of climate change, the sustainability of such practices is under scrutiny. 

What is needed is AI capable of learning from more concise, expertly curated datasets, prioritising quality over quantity. Predicting the advent of such a breakthrough, however, remains elusive. One tangible step is to foster an environment that promotes data-efficient innovation. 

Reflect on this historical perspective: as the 20th century dawned, there was a genuine race between electric vehicles and the combustion engine to dominate the emerging automotive sector. Initially, they seemed neck and neck, but vast oil discoveries soon tipped the scales towards the latter. Had we implemented an oil tax during that era, the trajectory might have favoured electric vehicles, thereby reducing our carbon footprint substantially. Similarly, imposing a data tax could spur efforts to make AI processes leaner in terms of data consumption. 

As I have discussed in previous writings, many job roles are bound to undergo automation. Yet, it won't necessarily be due to the current generation of Generative AI. Unless there are significant innovations, I anticipate the challenges highlighted in our 2013 study will persist, limiting the extent of automation for years to come. 

Investment Insights

  • by Anjali Bastianpillai, senior client portfolio manager, thematic equities, Pictet Asset Management
  • The generative AI market is expected to grow to USD1.3 trillion over the next decade from just USD40 billion in 2022, according to Bloomberg Intelligence
  • McKinsey has identified 63 generative AI use cases across 16 business functions that could deliver between USD2.6 trillion and USD4.4 trillion in economic benefits annually. 
  • Each new generation of AI systems requires exponentially greater computing power. Google’s PaLM2 large language model, one of the latest generative AI systems, incorporates 340 billion parameters – variables that are adjusted during training to establish how input data are transformed into desired results – uses a training data set of 2.7 trillion data points, and requires 7.34 billion petaFLOPs of computing power. As recently as 2019, the leading AI engine, OpenAI Five, used 159 million parameters, 454 billion data points and 67 million petaFLOPs, according to Our World in Data.
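A back-of-envelope calculation with the figures above shows the scale of the jump from OpenAI Five (2019) to PaLM2 (all numbers as cited from Our World in Data):

```python
# Figures quoted above, per Our World in Data
palm2 = {"parameters": 340e9, "data points": 2.7e12, "petaFLOPs": 7.34e9}
openai_five = {"parameters": 159e6, "data points": 454e9, "petaFLOPs": 67e6}

for key in palm2:
    ratio = palm2[key] / openai_five[key]
    print(f"{key}: ~{ratio:,.0f}x growth since 2019")
```

Parameters grew roughly 2,100-fold and compute roughly 110-fold in about four years, while training data grew only about sixfold – a gap that underlines why data, not hardware, may become the binding constraint.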

About

Carl Frey

Dr Carl Frey is Oxford Martin Citi Fellow at Oxford University, where he directs the programme on the Future of Work at the Oxford Martin School. His most recent book, The Technology Trap, was selected as one of the Financial Times' Best Books of the Year in 2019. 

