The Paper Age

October already!

I just finished reading Dinner With Joseph Johnson: Books & Friendship in a Revolutionary Age by Daisy Hay, thanks to a couple of train journeys and a quiet evening alone at home. I enjoyed it a lot. It belongs to the mini-genre of books (like The Lunar Men) that paint a picture of an era’s ideas through a description of the people who gathered to talk and write and indeed paint about them. In this case it’s Britain of the 1770s to 1790s, and the centre – although an enigmatic character himself compared to some of his famous authors and illustrators – was the Unitarian publisher Joseph Johnson. The central event giving the book its narrative arc is the French Revolution, and the subsequent crackdown on freedom of speech and worship by the British Government.

Anyway, the relevance here is a passage about a magazine started by Johnson, the Analytical Review (great title). One reviewer is quoted: “This is a PAPER AGE.” The book continues: “Paper had become the engine of Britain’s emergent capitalist economy, as banknotes, share certificates, contracts and promissory notes circulated out from London into the provinces and across the globe.” The magazine estimated that nine-tenths of Britain’s trade relied on the medium of paper.

I suppose ours is an ELECTRON age. Although electrons, as Ed Conway makes so plain in the excellent Material World, depend entirely on a material substrate.

What stops good innovation happening?

On the train this week I read a slim NBER volume, Entrepreneurship, Innovation Policy and the Economy, edited by Benjamin Jones and Josh Lerner. There are some interesting papers covering pharmaceutical innovation, green innovation and distributional issues – specifically, race and place.

The first paper makes a strong case for government support for new vaccine development, as the combination of a low probability of any single vaccine being needed and the cost of clinical trials makes it impossible for a commercial case to stack up. (It does not tackle the question of why and whether trials need to be so long and costly.) Therefore something like advance purchase commitments is needed. The second paper sets out a trilemma between low price, sufficient quantity and adequate quality (including safety) for off-patent drugs. Written by US lawyers, it implies the real problem is the unwillingness of the authorities to let generic producers charge enough for their products; if the price cap is not set too low, quantity and quality can both be assured.

On green tech, one paper looks at the role of venture capitalists in financing science-based rather than incremental innovations, since (as Will Baumol also described in The Free Market Innovation Machine) big companies do small innovations and small companies do big innovations, for the most part. The second paper argues for continuing to invest in ‘dirty’ companies in order to exercise ‘voice’ – Hirschman’s insight pops up everywhere these days.

The final papers find, respectively, that there is a clear racial gradient in access to capital for innovation; and that the benefits of spatial agglomeration outweigh the costs of congestion, by a large margin on average but by a zero margin in the most R&D-concentrated cities such as San Francisco/Silicon Valley.

All very nice papers, of interest to people studying innovation. Almost entirely US-focused, however.

Classes, elites and people

I was very excited to get a proof copy of Branko Milanovic’s new book, Visions of Inequality: From the French Revolution to the End of the Cold War, a while ago. The book is out in early October, so it seems OK to post about it now. For anybody interested in inequality – and we all should be – anything by Milanovic is an essential read. His collation and interpretation of global inequality data is masterly, and his perspective from a socialist background (he was born in the former Yugoslavia) is always interesting.

This new book is an intellectual history of how economists of the past have perceived and analysed inequality. The chapters cover Quesnay, Smith, Ricardo, Marx, Pareto, Kuznets and then – for the second half of the 20th century – a cluster of neoclassical economists during the period the book labels ‘the long eclipse of inequality studies’. In the West, the Cold War involved the myth (in economics, although not in life) of a classless society. The book aims to describe each thinker’s ideas about the dynamics of income distribution, but not their normative perspective. Hence the discussion of Marx covers the evolution of wages and the downward tendency of the rate of profit, but not the labour theory of value and alienation.

As I’m no expert on the history of thought, I learned a lot from the earlier chapters. The earlier thinkers all framed their analysis around the concept of social classes: “Classes were the natural concepts around which income distribution was ‘built’.” With Pareto, the analysis shifted to interpersonal distribution within a framework of social hierarchy (the elite vs the rest), and then with Kuznets and the later neoclassicals to individuals. This was partly driven by the availability of data on individual incomes from income tax records, after the introduction of direct taxation. The distribution among individuals could be sliced in different ways – location, education, occupation – but the background context of social structure faded. And then, after around 1960, economists’ interest in income distribution faded too. Why?

One comment Milanovic makes in the introduction struck home: “The puzzle was solved when I realized that the discipline of economics, as it was taught and studied between 1960 and 1990 in the West, was really designed for the period of the Cold War. … Inequality seemed like a problem that was going away, and this reduced interest in studying it. … Each side [in the Cold War] had to insist that it was more equal and less class based than the other.” The book quotes Kuznets’ 1955 AEA Presidential Address calling for economists to begin to look at processes of long-term change – technology, demography, social frameworks: “Effective work in this field necessarily calls for a shift from market economics to political and social economy,” Kuznets said. Of course, this did not happen, and economics doubled down on the market framework. “We might say that economics as a field stagnated or even regressed, at least in its understanding of income distribution under modern capitalism,” Milanovic comments.

This has changed in recent years, with the empirical work of economists like Milanovic, Saez and Piketty – I would add the prescient prior work of Tony Atkinson (Inequality: What Can Be Done? is a terrific overview and battle cry), who was ahead of his time. Visions of Inequality ends with a call to augment the study of individual incomes with a greater focus on non-labour income, on household income rather than the individual wage earner, and on global inequality. My addendum would be the distribution of unpaid work within the household and the community. It’s an exciting time to be studying inequality, thanks to the data and recent scholarship, and an important time, given how unsustainable the current distribution has become – after all, the term ‘elite’ has become an insult in political debate. This book is a great scene-setter for the modern debate, not least in illustrating the link between ideas of inequality and the times in which they are formed.

Brains and machines

The (sub-)title of After Digital: Computation as Done by Brains and Machines by James Anderson intrigued me. It’s the book of a course taught by the author, a pioneer in the neural network research that was foundational for modern AI, and it explains (in perhaps too much detail for some readers and at too basic a level for experts, but fine for me!) how computers compute and how brains compute. It starts with the difference between analogue and digital computing – the former with hardware tailored to specific problems, the latter with generic hardware where the software is decisive – and then goes on to describe how neurons and the brain compute: analogue with digital characteristics, much slower than digital computers but massively more energy efficient.

I didn’t take away big messages, but did get lots of interesting snippets. Computers could get smaller as they got faster, for example, because it takes light a nanosecond to travel about a foot (about 30cm). Between the early 1800s and 1860, the time it took to get a message from New York to Boston dropped from about a week to effectively instantaneous. It made me ponder the relationship between Gödel’s Incompleteness Theorem and the eventual capabilities of AI.
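For anyone who wants to check that light-speed snippet, the arithmetic (using the rounded value $c \approx 3 \times 10^8$ m/s) is:

$$d = c\,t \approx (3 \times 10^8 \ \text{m/s}) \times (10^{-9} \ \text{s}) = 0.3 \ \text{m} \approx 1 \ \text{foot}$$

So a signal cannot cross a machine much bigger than a foot within a single one-nanosecond clock tick – hence, as the book notes, faster computers had to be smaller computers.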

Anyway, if you want an introductory course (as a reader or a teacher) on both computation and neuroscience, it’s excellent. But it’s not a general read. (I loved the cover.)

Middling models in the big real world

I’ve been re-reading Mary Morgan’s excellent book The World in the Model: How Economists Work and Think. It’s relevant to the debate that’s been happening on Twitter – sorry, X – about the way economic research has become ever-narrower and more technical. As Richard Baldwin put it: “We are getting more and more precise answers to less and less important questions.” I’m not even sure that the precision is real. But there is a mania for technique, whether econometrics, RCTs or ‘big data’ methods – and above all for ‘identification’: being statistically confident that outcomes can be causally attributed to potential drivers. The identification mania is so intense that I’ve kept a gobsmacking email rejecting an article on the grounds that it ‘wasn’t identified’, when it wasn’t trying to do a causal analysis at all.

The book documents the way ‘models’ have become the dominant way economists work, to the exclusion of other modes of analysis, and argues that this means two kinds of work are excluded: the big picture and context-specific detail. The professionally high-status work is all “middle level stuff”, wrapped in techniques and modes of thought that prove to incumbents that the work passes appropriate professional hurdles. The dangers are obvious: economics is too silent on big issues, too generic on detail, and extraordinarily conformist.

Morgan concludes: “[D]uring the 20th century, modelling became the way to do economics. The term ‘model’ changed from being a noun to being a verb. … The epistemic genre of creating and reasoning with models requires a craft skill working with highly formal instruments. … [I]t comes to be thought to be the ‘right way’…” What’s more, economists are taking the requirement for modelling as the way of knowing into other domains such as social policy or everyday phenomena. “Now, when economists look at their small mathematical models they see the real world, and when they look at the big real world they see it as a sequence of their small models.”

The Twitter/X thread had a few suggestions – write books rather than journal articles, have one-issue editors for journals more often – but such a strong professional way of knowing and doing is hard to shift. I see some signs of change in academic economics, but don’t know whether it will amount to a much broader discipline. Here’s hoping more economists will read this book, or indeed the comments by Nancy Cartwright and Angus Deaton on the limitations of fashionable RCTs.
