
Can digital innovation, including AI, improve productivity growth?

For new digital technologies like artificial intelligence, the rate at which productivity growth will be boosted depends not just on the innovations themselves. How business processes are adapted to take advantage of them is also crucial – as are complementary investments by firms and governments. This blog was written by Diane Coyle and Richard Jones, based on insights from their work in The Productivity Agenda report.

Innovation in new products and processes is the engine of long-term growth in productivity – the increased efficiency of production of goods and services that ultimately underpins rising living standards. But there is a productivity puzzle: despite astonishing scientific progress in recent years – from biomedicine to advanced materials to artificial intelligence (AI) – the wave of innovations is not showing up in overall productivity growth in the UK.

Competing explanations for the digital paradox

One way to explain why today’s digitalisation isn’t translating into productivity gains is that these innovations are simply less valuable than older ones such as electricity. Another is that it always takes time for businesses and consumers to adopt a new technology, and that diffusion and adoption are slower with today’s technologies because they involve complex software.

The balance of evidence is tilting toward the latter explanation as digital innovations and data are enabling a minority of already high-productivity businesses to pull further ahead of others in their industries. But this in turn raises further questions about how adoption might be accelerated and what the barriers are to using digital tools to drive faster productivity growth.

Why do digital technologies take so long to diffuse?

Digital technologies generally have a high upfront cost (such as developing code and collecting data) and a low marginal cost (copying software or duplicating data is essentially free). As a result, it can take a long time for use of a new technology to reach critical mass.

Use then grows dramatically, especially if there are network effects benefiting existing users when more users are added, as in a telephone network. Other influences matter too – personal networks and face-to-face contact can help to spread the technology.
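To see how this cost structure interacts with network effects, consider a minimal sketch of a standard logistic diffusion model (all parameters are invented for illustration): each period, new adoption is proportional both to the existing user base, which spreads the technology and adds to its value, and to the pool of remaining non-users.

```python
# Illustrative only: a logistic diffusion toy model with invented
# parameters, not an estimate of any real technology's adoption path.

def diffuse(seed=0.01, strength=0.6, periods=20):
    """Adoption share over time when the benefit of adopting rises
    with the number of existing users (a network effect)."""
    share = seed
    path = [share]
    for _ in range(periods):
        # New adoption is proportional both to current users (who spread
        # the technology and add to its value) and to remaining non-users.
        share += strength * share * (1 - share)
        path.append(share)
    return path

for t, share in enumerate(diffuse()):
    if t % 5 == 0:
        print(f"period {t:2d}: adoption share = {share:.2f}")
```

The share creeps along near its starting value for many periods before taking off – the long, quiet phase before critical mass.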

As it takes time to learn how to use new digital tools effectively, there may even be a reduction in firms’ productivity at first, followed by a later acceleration. This has been labelled the ‘productivity J-curve’.
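The J-curve can be captured in a similar toy calculation. In the sketch below (again with invented numbers), a firm diverts part of its measured output into unmeasured intangible investment – training, new processes – for a few periods; measured productivity dips below its pre-adoption level of 1.00 before the accumulated intangibles pay off.

```python
# Illustrative only: a toy 'productivity J-curve'. All numbers are
# invented; the point is the shape, not the magnitudes.

def j_curve(periods=8, invest_rate=0.15, payoff=0.5):
    """Measured productivity when early output is diverted into
    unmeasured intangible investment (training, reorganisation)."""
    intangibles = 0.0
    measured = []
    for t in range(periods):
        diverted = invest_rate if t < 4 else 0.0  # invest early on
        intangibles += diverted
        # Accumulated intangibles raise true output, but the diverted
        # effort never shows up in measured output.
        measured.append((1 + payoff * intangibles) * (1 - diverted))
    return measured

for t, productivity in enumerate(j_curve()):
    print(f"period {t}: measured productivity = {productivity:.2f}")
```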

If this is correct, the productivity dividend from recent digital innovations will eventually arrive. It might take the form of digitally discovered new drugs or materials (such as the AlphaFold protein structure prediction tool or the new DeepMind materials database). Or it might take the form of improved prediction and reduced inventories.

The dispersion of productivity benefits

Businesses in the top 5-10% of performers have pulled further and further ahead of the average – a phenomenon observed across the OECD economies.

Some researchers have linked this to increasing concentration and market power in many industries, with the consequent decrease in competition itself reducing productivity growth on average (see Covarrubias et al, 2019, and De Loecker et al, 2020).

One explanation for the diverging fortunes of the best and the rest is that the high-productivity firms are precisely those that are using digital technologies. One study finds that US manufacturing firms using big data for predictive analytics had significantly higher sales and productivity than others – as long as they had made appropriate complementary investments in hardware, skills and workplace organisation (Brynjolfsson et al, 2021).

Another study, also in the United States, finds that digital automation is associated with about 11% higher labour productivity at the firm level (Acemoglu et al, 2022).
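To make the reading of such estimates concrete, here is a minimal sketch of the kind of firm-level regression behind them, run on synthetic data (not the study’s data, and without the many controls an actual specification would include). Because the outcome is log labour productivity, a coefficient of about 0.11 on an automation dummy corresponds to roughly 11% higher productivity.

```python
# Illustrative only: synthetic firm data, not any study's data or its
# exact specification.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
automated = rng.random(n) < 0.3                 # 30% of firms automate
log_productivity = 3.0 + 0.11 * automated + rng.normal(0, 0.4, n)

# OLS of log productivity on a constant and the automation dummy.
X = np.column_stack([np.ones(n), automated])
coef, *_ = np.linalg.lstsq(X, log_productivity, rcond=None)
print(f"automation coefficient: {coef[1]:.3f}")
print(f"implied productivity gap: {100 * (np.exp(coef[1]) - 1):.1f}%")
```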

Similarly, evidence from the European Union (EU) finds that the use of digital tools such as robotics or 3D printing characterises high-productivity firms (Cathles et al, 2020). Among UK firms, higher productivity is linked to the use of digital tools and skills, especially for those using more than one digital technology and combining them with in-house skills.

More recent technological innovations, such as generative AI, are still in the early stages of adoption, but the emerging evidence again confirms that the use of AI is strongly associated with higher productivity. For example, work using survey data on German firms provides evidence that the relationship is causal (Czarnitzki et al, 2023).

One study uses data for businesses in 11 OECD countries – Belgium, Denmark, France, Germany, Ireland, Israel, Italy, Japan, Korea, Portugal and Switzerland – to uncover some of the characteristics of firms using AI (Calvino and Fontanelli, 2023). This highlights that these firms tend to be larger and/or younger and that the information and communications technology (ICT) and professional services sectors are the most intensive AI users. This fits the intuition that the effective use of AI requires appropriate skills and pre-existing digital infrastructure.

The impact of AI

Even in the case of AI, it is unlikely that the well-established pattern of gradual adoption and large dispersion in productivity effects will be overturned rapidly.

There is no clearer signal of the promise of AI in the life sciences than the effective solution of one of the most important fundamental problems in biology – the protein-folding problem – by DeepMind’s programme AlphaFold.

Many proteins fold into a unique three-dimensional structure, the precise details of which dictate their function – such as catalysing chemical reactions. This three-dimensional structure is determined by the (one-dimensional) sequence of different amino acids along the protein chain.

Given the sequence, can one predict the structure? This problem had resisted theoretical solution for decades, but AlphaFold – using deep learning to establish the correlations between sequence and many experimentally determined structures – can now predict unknown structures from sequence data with great accuracy and reliability.
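For a concrete sense of what ‘structure from sequence’ means in practice, here is a minimal sketch that fetches one of AlphaFold’s predicted structures from the public AlphaFold Database. The URL pattern and model version are those used by the database at the time of writing and may change; P69905 is the UniProt accession for human haemoglobin alpha.

```python
# Download a predicted protein structure from the AlphaFold Database.
# URL pattern and model version ('v4') may change over time.
import urllib.request

uniprot_id = "P69905"  # human haemoglobin alpha
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v4.pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

# Each ATOM record whose atom name (columns 13-16) is CA marks one
# residue's predicted alpha-carbon position.
ca_atoms = [line for line in pdb_text.splitlines()
            if line.startswith("ATOM") and line[12:16].strip() == "CA"]
print(f"{uniprot_id}: predicted 3D positions for {len(ca_atoms)} residues")
```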

Two factors made this success possible. First, the problem it was trying to solve was well-defined. Second, it had access to large amounts of well-curated public domain data, in the form of experimentally determined protein structures generated through decades of academic work using X-ray diffraction and other techniques.

The lesson here is that AI is good at solving well-posed problems, where there are big and well-curated datasets that span the problem space.

But AI’s contribution to overall productivity growth will depend on whether those AI-susceptible parts of the overall problem are in fact the bottlenecks. In drug discovery, for example, the lack of screening techniques and disease models with good predictive power may be a bigger bottleneck than the rate at which candidate molecules can be generated with the help of AI.

The potential for generative AI

So how is the situation changed by the massive impact of large language models (LLMs), such as ChatGPT? These tools can significantly accelerate the writing of computer code, and any sector that generates boilerplate prose, such as marketing, routine legal services and management consultancy, is likely to be affected. Similarly, the assimilation of large documents will be assisted by the capabilities of LLMs to provide synopses of complex texts.
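As a minimal sketch of that synopsis use case, the snippet below asks an LLM to summarise a long document via the OpenAI Python client; other providers expose similar interfaces. The model name is only an example, and an API key is assumed to be available in the environment.

```python
# A hedged sketch of LLM-based summarisation using the OpenAI Python
# client; the model name is an example, and OPENAI_API_KEY must be set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise(document: str, max_words: int = 150) -> str:
    """Ask an LLM for a short synopsis of a long document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; substitute as needed
        messages=[
            {"role": "system",
             "content": f"Summarise the user's document in at most {max_words} words."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

# Usage (hypothetical file): print(summarise(open("contract.txt").read()))
```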

Language and text are hugely important for how we organise and collaborate to achieve common goals, and for the way we preserve, transmit and build on the sum of human knowledge and culture. So we should not underestimate the power of tools that facilitate that.

Equally, many of the constraints that we face require direct engagement with the physical world – whether through the need for better understanding of biology that will allow us to develop new medicines more effectively, or the ability to generate abundant zero-carbon energy. This is where those other areas of machine learning – pattern recognition and finding relationships within large data sets – may make a bigger contribution than generative AI.

Fluency with the written word is an important skill in itself, so the improvements in productivity that will come from the new technology of LLMs will arise in places where speed in generating and assimilating prose is the rate-limiting step in the process of producing economic value.

The need for complementary investments

For machine learning and AI more widely, the rate at which productivity growth will be boosted will depend not just on developments in the technology itself, but also on the rate at which other technologies and other business processes are adapted to take advantage of AI.

This means that firms need those complementary investments – they must respond to the general challenge of reorganising production to adopt innovations.

Research on the 1990s dotcom boom found that businesses adopting digital tools at the time needed to invest in reorganisation – investments that went far beyond their spending on computer and telecommunications equipment.

As one study notes: ‘Firms that are intensive IT users are also more likely to adopt work practices that involve a specific cluster of organizational characteristics, including greater use of teams, broader distribution of certain decision rights, and increased worker training’ (Brynjolfsson et al, 2002). This early work also found that it could take years for the full value of ICT and organisational investments to be realised.

The reason is that these technologies change the cost of transferring information, which can contribute to better decisions, but only if people in the business are able to use the information. They might need new skills, but they will also need to have invested in the data required and to have the authority to make decisions.

It is also likely that data- and software-enabled change is inherently harder to adopt than previous technologies. There is more tacit knowledge involved – that is, the kind of know-how that is not written down but shared among co-workers – because activities involving data science and manipulating software are not very standardised.

This might change if the new generation of foundation AI models make using digital tools more systematic and routine. It is possible that chatbots and application programming interfaces will make AI models easier to use. But for now, there seems to be a high premium for the very specific digital skills and software involved in running a high-productivity modern business.

Crystallising the change

So how can the benefits of continuing technical change – in digital and AI, but also in other areas such as energy and biomedicine – be crystallised? The need to speed up technological diffusion to generate economically valuable products and services points to important policy levers.

Complementary investments are needed in physical infrastructure (wired and wireless broadband and data centres), and in organisational change. Of these, the latter seems to be the hardest. One area for additional policy intervention may be in transferring the necessary know-how and management practices between firms. Management quality may be a more tightly binding constraint in a business using complex software and data.

Investment in appropriate skills is also required. The wage premium for software engineers and data scientists suggests that their skills are in short supply in the UK. The House of Lords Science and Technology Committee concluded in a 2002 report that government policy had so far failed to address businesses’ skills needs – and there is no sign of improvement since then. Given the policy focus on ensuring that the UK is a world leader in at least some areas of AI, the skills shortfall is likely to need even greater focus.

Digital markets are often dominated by large incumbents, which might either use their data advantage or their ability to acquire potential competitors to cement their dominance. The data hoard of big technology companies forms a competitive ‘moat’ in some markets.

What’s more, there are emerging areas where the use of data across a whole supply chain or cluster of businesses will be needed to deliver the potential productivity benefits of digital technologies. Resisting lobbying by big technology companies, and enabling new entry in relevant markets, will be essential if the UK is to take advantage of its strengths in areas of AI innovation.


This blog was first published by Economics Observatory, where a list of further reading and experts can also be found.