Who benefits from research and innovation?

I’ve been pondering a report written by my friend and Industrial Strategy Commission colleague Richard Jones (with James Wilsdon), The Biomedical Bubble. The report calls for a rethinking of the high priority given to biomedical research in the allocation of research funding, arguing for more attention to be paid to the “social, environmental, digital and behavioural determinants of health”. It also calls for health innovation to be considered in the context of industrial strategy – after all, in the NHS the UK has a unique potential market for healthcare innovations. It points out that there are fewer ill people in the places where most biomedical and pharmaceutical research is carried out, thanks to the UK’s regional imbalances. And it points out that, despite all the brilliant past discoveries, the sector’s productivity is declining:

“In the 1960s, by some measures a golden age of drug discovery, developing a successful drug cost US$100 million on average. Since then, the number of new drugs developed per billion (inflation adjusted) dollars has halved every nine years. Around 2000, the cost per new drug passed the US$1 billion dollar milestone, and R&D productivity has since fallen for another decade.”
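
Just to check the arithmetic of that claim – a back-of-the-envelope sketch of my own, with an assumed 1965 baseline, not a calculation from the report – if the number of drugs per billion R&D dollars halves every nine years, the cost per drug doubles every nine years, and a US$100 million starting point does indeed pass US$1 billion around the turn of the century:

```python
# Back-of-the-envelope check on the trend quoted above (my own sketch, with an
# assumed 1965 baseline; the halving figures come from the report's quote).
BASE_YEAR = 1965        # assumed mid-1960s starting point
BASE_COST = 100e6       # US$100m per successful drug (from the quote)
DOUBLING = 9            # drugs per R&D dollar halve -> cost doubles, every 9 years

def cost_per_drug(year: int) -> float:
    """Implied inflation-adjusted R&D cost per new drug in a given year."""
    return BASE_COST * 2 ** ((year - BASE_YEAR) / DOUBLING)

for year in range(1965, 2016, 10):
    print(f"{year}: ${cost_per_drug(year) / 1e9:.2f}bn")
# Crosses US$1bn in the mid-1990s -- the same ballpark as the quote's
# "around 2000" milestone.
```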

All of this seems well worth debating, for all its provocation to the status quo – and it is a courageous argument given how warm and cuddly we all feel about new medicines. I firmly believe more attention should be paid to the whole system, from basic research to final use, that determines the distribution of the benefits of innovation, rather than – as we do now – treating the direction of research and innovation as somehow exogenous and worrying about the distributional consequences afterwards. This goes for digital or finance, say, as well as pharma. What determines whether the benefits are widely shared – or not?

Serendipitously, I happened to read a couple of related articles in the past few days, although both concern the US. One was this BLS report on multifactor productivity, which highlights pharma as a sector making one of the biggest contributions to the US productivity slowdown (see figure 3). The other was this very interesting Aeon essay about the impact of financial incentives on US pharma research, which speaks to my interest in understanding the whole-system effects of research in this domain. Given that the landscape, in terms of both research and commerce, is US-dominated, the question of how the UK spends its own research money is surely all the more relevant. As The Biomedical Bubble puts it:

“[T]he importance of the biotechnology sector has been an article of faith for UK governments for more than 20 years, even when any notion of industrial strategy in other sectors was derided. So the failure of the UK to develop a thriving biotechnology sector at anything like the scale anticipated should prompt reflection on our assumptions about how technology transfer from the science base occurs. The most dominant of these is that biomedical science would be brought to market through IP-based, venture capital funded spin-outs. This approach has largely failed, and we are yet to find an alternative.”

For it seems the model is no longer serving the US all that well either – not for economy-wide innovation and productivity, and not for the American population, which has worse health outcomes at higher cost than any other developed economy. There are some challenging questions here, fundamentally: who benefits from research and innovation, how should the public good funded by taxpayers be defined and assessed, and what funding and regulatory structures would actually ensure the gains are widely shared?

Finance, the state and innovation

Yesterday brought the launch of a new and revised edition of Doing Capitalism in the Innovation Economy by William Janeway. Anybody who read the first (2012) edition will recall the theme of the ‘three player game’ – market innovators, speculators and the state – informed by Keynes and Minsky as well as by Janeway’s own career, which combined an economics PhD with a role shaping the world of venture capital investment.

The term refers to how the complicated interactions between government, providers of finance and capitalists drive technological innovation and economic growth. The overlapping institutions create an inherently fragile system, the book argues – and also a contingent one. Things can easily turn out differently.

The book starts with a more descriptive first half, including Janeway’s “Cash and Control” approach to investing in new technologies, and an account of how the three players in the US shaped the computer revolution. This is an admirably clear but nuanced history, emphasising the important role of the state – through defence spending in particular – but also the equally vital involvement of the private sector. I find this sense of complicated and path-dependent interplay far more persuasive than simplistic accounts emphasising either the government or the market.

The second half of the book takes an analytical turn, covering financial instability, and the role of state action. It’s fair to say Janeway is not a fan of much of mainstream economic theory (at least macro and financial economics). He includes a good deal of economic history, and Carlota Perez features alongside Minsky in this account.

The years between the two editions of the book – characterised by sluggish growth, flatlining productivity, and extraordinary changes in the economy and society brought about by technology – perhaps underline the reasons for this lack of esteem. After all, there do seem to be some intractable ‘puzzles’, and meanwhile, just in time for publication, Italy looks like it might be kicking off the Euro/banking crisis again. The experience of the past few years also helps explain the rationale for a second edition. That’s quite a lot of economic history and structural change packed into half a decade.

Although I read the first edition, I’m enjoying the second as well. And for those who didn’t read the book first time around, there’s a treat in store.


Epidemics vs information cascades

As I was looking at publishers’ websites for my New Year round up of forthcoming books, I noticed OUP billing Paul Geroski’s The Evolution of New Markets as due out in January 2018. This is odd as it was published in 2003, and Paul died in 2005; it isn’t obvious why there’s a reprint now. He was Deputy Chairman and then Chairman of the Competition Commission during my years there, and was a brilliant economist as well as a wonderful person. I learnt an amazing amount from being a member of his inquiry team.

Anyway, the catalogue entry for the reprint sent me back to my copy of the book, along with Fast Second, which Paul co-authored with Constantinos Markides. Fast Second challenges the received wisdom of first mover advantage: Amazon was not the first online book retailer, Charles Schwab not the first online brokerage, and so on. The opportunity lies between being early in a radical new market and being late, once a dominant design and business model have already emerged. The Fast Second business competes for dominance – and supposed first mover advantages are usually fast second advantages.

Paul’s book The Evolution of New Markets – inside which I found a handwritten note he’d sent me with it, which made for an emotional moment – does what it says, exploring models of innovation diffusion: in other words, models of S-curves. His view was that the epidemic model of S-curves, which seems to be the standard one, rests on a misleading metaphor; he argued that information cascades better fit the observed reality. The epidemic model assumes that a new technology is adopted as information about it diffuses, each user passing the information on to the next. However, as the book points out, information diffuses far faster than use. Users need to be persuaded, rather than just informed.

More appropriate is a model whereby new technologies arrive in a number of variants at slightly different times, making adoption risky and costly – especially when there are network externalities or when people realise there is going to be a standards battle. Most new products fail, after all. But once one variant starts to dominate, the cost and risk decline and adoption occurs much faster.
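
To make the contrast concrete, here is a minimal toy sketch – my own illustration, with made-up parameter values, not anything from the book. The epidemic story gives a smooth logistic S-curve driven by contagion; the cascade story keeps adoption low while rival variants compete, then accelerates sharply once one variant wins the standards battle:

```python
# Toy contrast between two stories behind adoption S-curves (my illustration,
# not from Geroski's book; all parameter values are made up for the sketch).
import math

def epidemic(t: float, rate: float = 0.3, midpoint: float = 25.0) -> float:
    """Epidemic model: adoption spreads like an infection as each user
    informs the next, giving a smooth logistic S-curve."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

def cascade(t: float, shakeout: float = 25.0, slow: float = 0.002,
            fast: float = 0.5) -> float:
    """Cascade-style model: adoption crawls while rival variants compete
    (adoption is risky and costly, no dominant design yet), then takes
    off quickly once one variant wins the standards battle at `shakeout`."""
    if t < shakeout:
        return slow * t                     # cautious, trickle adoption
    pre = slow * shakeout                   # share adopted before the shakeout
    return pre + (1.0 - pre) * (1.0 - math.exp(-fast * (t - shakeout)))

for t in range(0, 51, 10):
    print(f"t={t:2d}  epidemic={epidemic(t):.2f}  cascade={cascade(t):.2f}")
```

Toy numbers, but the shapes capture the difference: smooth contagion on the one hand, wait-and-see followed by a rush on the other.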

It’s a persuasive argument, and a very readable book. Although the list price is surprisingly high for a short paperback, one can be confident second hand copies are just as good.


Free innovation

I polished off Eric von Hippel’s Free Innovation on my Washington trip. It’s an interesting, short book looking at the viability and character of innovation by individuals – alone or co-operating in communities. It is free in two senses: the work involved is not paid; and the innovations – or at least their designs – are not charged for, although they may subsequently be commercialised by the inventors or by other businesses. The viability of free innovation has been greatly extended by digital technology and the internet: more useful information is accessible, and it is easier and cheaper to co-ordinate among a group. The diffusion of innovations is also easier, although rarely as extensive as when a commercial business takes them up and markets them. In fact, von Hippel argues that there are some strong complementarities between free innovation and commercial vendors: the latter can bring the scale economies of production and marketing, while the former can enhance the use case – the complementary know-how – that increases the value of whatever it is.

The book has a little theorising, some survey evidence on the wide scope of free innovation, and plenty of nice examples. It ends with a couple of chapters on how to safeguard the legal rights of free innovation and how the phenomenon might be encouraged. The scope is what interests me particularly. I had already been thinking about phenomena such as open source software as a voluntary public good, which competes with marketed goods – compare Apache with Microsoft’s server software (as Shane Greenstein and Frank Nagle do here). There is clearly a growing amount of substitution across the production boundary going on.

The surveys reported in this book suggest that millions of people are innovating (5–6% of respondents in the UK, the US, Finland and Canada) – but equally, some of the innovations are minor contributors to economic welfare, and one cannot imagine them ever having a wide market or competing with marketed equivalents. The question is how to get a handle on the scope and scale of all this open source, public good innovation.


Have we run out of innovations?

I’ve been reading old articles about hedonic adjustment and followed one trail to a 1983 paper by William Nordhaus about the productivity slowdown between the 1960s and 1970s. He wrote: “Is it not likely that we have temporarily exhausted many of the avenues that were leading to rapid technological change?” (1981 working paper version here). Timothy Bresnahan and Robert Gordon pounce on this in their introduction to the 1996 NBER volume they edited, The Economics of New Goods: “The world is running out of new ideas just as Texas has run out of oil. Most new goods now, compared with those of a century ago, are not founding whole new product categories or meeting whole new classes of needs.” (Apropos of Texan oil, see this: Mammoth Texas oil discovery biggest ever in USA, November 2016.)

Gordon has, of course, doubled down on this argument in his magisterial The Rise and Fall of American Growth. (It is btw a great holiday read – curl up under a blanket for a couple of days.)

This reminded me I’d seen this post by Alex Tabarrok at Marginal Revolution: A Very Depressing Paper on the Great Stagnation.

I haven’t yet read the paper it refers to, nor the earlier Jones one, though of course I will. It’s just that it seems we’ve been running out of ideas for over 30 years. I’ll say nothing about sequencing the genome and the consequent medical advances, new materials such as graphene, advances in photovoltaics, 3G/wifi/smartphones, not to mention current progress in AI, robotics, electric cars and interplanetary exploration. Oh, and chocolate HobNobs, introduced in 1987. Excellent for productivity.

For the time being, I’m going to stick with the hypothesis that we haven’t run out of ideas.