Cybernetic dreams

I read Eden Medina’s [amazon_link id=”0262525968″ target=”_blank” ]Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile[/amazon_link] because I spotted the fuss on Twitter about Evgeny Morozov’s New Yorker piece, The Planning Machine: Project Cybersyn and the Origins of the Big Data Nation. I’m not all that interested in the fuss but was very intrigued by what people were saying about the book.

It is indeed a completely fascinating history and reflection on the interaction between technology and politics, and I highly recommend it. The cover photograph gives a good flavour of the weirdness of this episode. It is the control room built in Santiago in late 1972 under the guidance of British cybernetician Stafford Beer. The control room, that is, for the economy, linking a network of telex machines in factories around the country to a mainframe computer in the capital.

[amazon_image id=”0262016494″ link=”true” target=”_blank” size=”medium” ]Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile[/amazon_image]

While Chile’s was not a fully planned economy, the Allende government had nationalised substantial sections of industry and, as time went on and the American-led sanctions began to bite, planned to control key prices. It also had to contend with a nationwide strike led by businesses opposed to the leftist government. The aim of Project Cybersyn, as the cybernetic plan was labelled, was to deliver ample real-time information on production to the central authorities while allowing individual factories the freedom to make their own decisions. Government policy could then be adapted quickly in response to the trends identified. In other words, it was meant to avoid the pitfalls of central planning while capturing the co-ordination benefits. As Medina puts it: “Connecting the State Development Corporation to the factory floor would … allow the government to quickly address emergencies such as shortages of raw materials and adapt its policies quickly. Up-to-date production data would also allow Chile’s more experienced managers to … identify problems in factories and change production activities in the enterprise when necessary to meet national goals.”

Apart from the obvious practical difficulties (eg only one mainframe and very few programmers), one challenge was actually modelling the economy. It is unclear what kind of relationships were written into the code, but they must have been something similar to those embodied in the simple linear model of the Phillips Machine. For all that it was a project about managing the economy, there was just one economist on the team, according to the book. However, Medina emphasises the intended flexibility of Project Cybersyn: “The model would not function as a predictive black box that gave definitive answers about future economic behaviour. Rather, it offered a medium in which economists, policy makers and model makers could experiment and, through this act of play, expand their intuition about [the economy].” The structure embodied the cybernetic emphasis on responding to the information contained in feedback. I must say I didn’t understand Beer’s cybernetic models at all, as the language and concepts are so different from anything I’m familiar with – but then cybernetics itself comes across as rather futuristic-retro.
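To make the feedback idea a little more concrete, here is a minimal, purely illustrative sketch of an exception-reporting loop on daily production figures. It is my own toy example, not drawn from the book or from the actual Cybersyn software (whose internal relationships, as noted above, are unclear); every name, number and threshold in it is invented.

```python
# Illustrative only: a toy exception-reporting loop in the spirit of
# Cybersyn's feedback idea. Data shapes, thresholds and factory names
# are invented for this sketch, not taken from the project.

from statistics import mean, stdev

def flag_exceptions(history, latest, z_threshold=2.0):
    """Return factories whose latest output deviates sharply from their recent trend."""
    flagged = []
    for factory, series in history.items():
        mu, sigma = mean(series), stdev(series)
        if sigma == 0:
            continue
        z = (latest[factory] - mu) / sigma
        if abs(z) > z_threshold:
            flagged.append((factory, round(z, 2)))
    return flagged

# Example: daily output indices telexed in from three plants.
history = {
    "textiles_santiago": [100, 102, 99, 101, 100],
    "steel_concepcion": [80, 82, 81, 79, 80],
    "cement_valparaiso": [60, 61, 59, 60, 62],
}
latest = {"textiles_santiago": 101, "steel_concepcion": 62, "cement_valparaiso": 61}

print(flag_exceptions(history, latest))
# Only the steel plant's sharp drop would be escalated to the control room;
# the other plants are left to manage themselves.
```

The point of the sketch is simply that the centre reacts to deviations rather than issuing a detailed plan, which is the feedback logic the project was built around.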

Beer also hoped to get real-time feedback from the people to the government by means of ‘algedonic meters’, dials indicating happiness or dissatisfaction, which would be installed in community centres and other public places. This part of his plan was never taken up. However, he was keen on public engagement with the project and even persuaded Chile’s most famous folk singer, Angel Parra, to write a Project Cybersyn song.

One of the divisions within the project, well described in the book, was between the technocrats who saw it as a tool for managing the economy more effectively, and those who saw it as a means of re-engineering politics and society from the ground up. The latter group hoped workers in the factories would develop their own sense of autonomy by inputting information into the telex machines and, in doing so, understanding the part they played in the whole. “[Beer] believed that engineering a technology also provided opportunities to engineer the social and organizational relationships that surrounded it.” The technocrats tended to dominate, though, largely because of the growing difficulty Allende’s government had in sustaining its coalition. Politics didn’t co-operate with the technology.

One of the interesting aspects of Project Cybersyn is that the technologies it used were not the most advanced. The US blockade largely prevented Chile from importing more computers or sophisticated equipment. Aside from the one mainframe and the telexes, the futuristic control room relied on slide projectors and hand-drawn slides. The fibreglass control chairs, based on Italian designs, were among the most cutting-edge features of the room. And yet the project was the most ambitious cybernetics project ever (partially) implemented.

The Project Cybersyn control room

It’s hard to decide whether the people behind Project Cybersyn were crazy dreamers or just 50 years ahead of their time – what would they have made of the possibilities of the web and ‘big data’? The basic cybernetic question the project poses remains valid: can policymakers do a better job with rapid real-time feedback on economic indicators – or is the economy as a dynamic, complex system simply beyond the kind of mapping implicit in any such project? Can what is measured about the economy reshape the economy or underlying social order in turn – and what does that imply for the indicators one might try to include in a Project Cybersyn 3.0?

Fascinating questions, and a fascinating book.

PS After finishing the book, I read the Morozov column. It is a precis of the story told in Medina’s book, with a handful of extra paragraphs woven in that give his own reflections on the issues raised – including, for example, exactly the obvious ‘what could we do in the era of the internet of things’ question. If the column had actually been billed as a review of [amazon_link id=”0262525968″ target=”_blank” ]Cybernetic Revolutionaries[/amazon_link], I don’t think there would have been any fuss. It is not plagiarism, since the book is mentioned, but for Morozov to have given his only source just one passing mention in a ‘Critic at Large’ piece seems ungenerous.

Who owns the future? Not you

It’s taken me a while to get through Jaron Lanier’s [amazon_link id=”0241957214″ target=”_blank” ]Who Owns the Future?[/amazon_link] It was highly recommended to me and I found it interesting. But as it’s a book about digital economics by a non-economist, and therefore written in a language foreign to the way I think about these issues, it was a surprisingly difficult read. I don’t think normal people would have the same difficulty.

[amazon_image id=”0241957214″ link=”true” target=”_blank” size=”medium” ]Who Owns The Future?[/amazon_image]

The theme of the book is that the economy has developed in ways that enable what Lanier calls ‘Siren Servers’ to appropriate the past and present labour of many other people for themselves, and thereby hollow out the middle classes. This situation is the result of the way the Siren Servers – he means Amazon, Facebook, Google and the like – have exploited the presumption that “information is free”, specifically that the data they gather about all of us, and generated by all of us, is free, while advertising is paid for. Lanier quite rightly points out that the customers of these titans are the advertisers, not the individual users. Lengthy user agreements that nobody reads mean the corporations take no risks, only revenues.

Lanier seems to believe that eventually this economic structure will become unsustainable because it is destroying normal middle class livelihoods and there will be nobody to buy the products being advertised. The Siren Servers become so big that they eat their environment (just as the financial markets did).

His proposed solution is nano-payments attached to information generated by individuals, whether that’s their ‘data’ or their creative or digital products. “If the system remembers where information originally came from, then the people who are the sources of information can be paid for it.” He points out that HTML, although marvellously convenient, only links one way, whereas Ted Nelson, an early thinker about hypertext, argued for two-way links. Two-way linking is less convenient because of the additional updating required. In fact, the book left me quite unclear about how two-way linking to enable nano-payments would work in practice. However, Lanier argues: “This is the only way that democracy and capitalism can be in alignment.” Without greater symmetry between the supplier and the acquirer of information, he believes, the information economy will collapse.
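Lanier does not spell out an implementation, but as a way of seeing the shape of the idea, here is a minimal, hypothetical sketch, entirely my own invention rather than Lanier’s or Nelson’s design, of a ledger that stores links in both directions so that a payment for using a derived work can be routed back to its recorded sources.

```python
# Hypothetical sketch of two-way provenance links with nano-payments.
# All class, item and parameter names, and the equal-split payment rule,
# are invented for illustration; the book specifies no implementation.

from collections import defaultdict

class ProvenanceLedger:
    def __init__(self):
        self.sources = defaultdict(set)    # item -> items it derives from
        self.uses = defaultdict(set)       # item -> items that derive from it
        self.balances = defaultdict(float) # accumulated payments per source

    def add_link(self, derived, source):
        """Record the link in both directions (a Nelson-style two-way link)."""
        self.sources[derived].add(source)
        self.uses[source].add(derived)

    def record_use(self, item, payment):
        """Split a nano-payment equally among an item's recorded sources."""
        srcs = self.sources[item]
        if not srcs:
            return
        share = payment / len(srcs)
        for s in srcs:
            self.balances[s] += share

# Example: a mash-up that draws on two people's contributions.
ledger = ProvenanceLedger()
ledger.add_link("mashup_video", "alice_footage")
ledger.add_link("mashup_video", "bob_soundtrack")
ledger.record_use("mashup_video", payment=0.02)  # say, 2 cents per view
print(dict(ledger.balances))  # {'alice_footage': 0.01, 'bob_soundtrack': 0.01}
```

The extra bookkeeping in add_link is exactly the “additional updating” that makes two-way links less convenient than ordinary one-way HTML links.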

I have an instinctive sympathy with the book’s argument, but do not think the unsustainability in capitalism we can all see at present boils down to the absence of nano-payments implemented via two-way hypertext linking. One question is Jean Tirole’s: will new digital giants benefiting from network effects come along and displace Google et al? If that hasn’t happened within, say, a decade, then the time will have come to regulate these vital utilities to ensure they serve the public interest. More generally, I would look at beefing up competition policy as one of the levers to loosen the political power acquired by ‘Siren Servers’ – a category in which I’d include the financial sector as well as the ICT sector.

The question of distributing productivity gains to the population as a whole is not confined to the digital economy either. While it’s right to be concerned about the jobbing musicians and journalists whose livelihoods are being destroyed by “free” online content, plenty of other standard middle-class occupations are seeing living standards decline, so the economic and political issues go far beyond what’s covered in [amazon_link id=”0241957214″ target=”_blank” ]Who Owns the Future?[/amazon_link] For of course this started some time ago with blue-collar jobs. However, it’s an interesting book, and it’s always worthwhile to hear what experts in other fields have to say about economic issues, for their different perspective. I think Lanier’s diagnosis and solution will have quite wide appeal.

It’s the society, stupid

There is one other thought prompted by re-reading Jane Jacobs’ [amazon_link id=”039470584X” target=”_blank” ]The Economy of Cities[/amazon_link]. She has an almost by-the-by section about the changes in the mass media of the day: the switch in readership away from mass-circulation national daily newspapers towards mass television, which captured the national audience, and towards more local, often suburban, newspapers. TV was the disruptive technology of the day, and audience habits changed. The argument Jacobs makes is that the technology wasn’t so much the cause of the transition as the enabler of it. The driving force was the growth of the suburbs, and the social changes that went alongside it.

I don’t know enough US media history to evaluate this properly, but it’s surely a good reminder that technology always, but always, interacts with social change. Knowing that’s true in general has been at the heart of my work since the 1990s, but at a time of exciting and rapid technical change (pace Robert Gordon), it’s easy to forget the central role of social change in specific cases. Including the changes happening now in media habits.

[amazon_image id=”039470584X” link=”true” target=”_blank” size=”medium” ]The Economy of Cities (Vintage)[/amazon_image]

Technology in history

I’ll collate the economic history suggestions another time. Meanwhile, though, seeing a recommendation for David Edgerton‘s influential [amazon_link id=”1861973063″ target=”_blank” ]The Shock of the Old: Technology and Global History since 1900[/amazon_link] sent me to both that – which insists that there is too much cheerleading about invention and not enough focus on the implementation of technologies in specific historical contexts – and to his subsequent book, [amazon_link id=”0141026103″ target=”_blank” ]Britain’s War Machine: Weapons, Resources and Experts in the Second World War[/amazon_link].

The latter makes some contrarian arguments about the war. Edgerton argues that (a) Britain was the richest and most powerful combatant thanks to its imperial resources – it is a mistake to think of it as a beleaguered nation standing alone, begging for American charity, and Germany would (and did) struggle to combat it; and (b) the ‘declinist’ histories about Britain after the war (notably Correlli Barnett in [amazon_link id=”033034790X” target=”_blank” ]The Audit of War[/amazon_link] and [amazon_link id=”0333480457″ target=”_blank” ]The Lost Victory[/amazon_link], etc) are mistaken, as relative decline was due largely to strong growth in other countries.

[amazon_image id=”0141026103″ link=”true” target=”_blank” size=”medium” ]Britain’s War Machine: Weapons, Resources and Experts in the Second World War[/amazon_image]

Edgerton’s argument is pinned on a materialist account of the resources and technology developed and used by Britain. Some of this evidence is very striking. One example is a graphic showing the vastly, vastly greater tonnage of bombs the UK dropped on Germany compared with German bombing of the UK – the horrors of the firestorms and mythology of the Blitz notwithstanding. It’s all very interesting. Britain’s early defeats led to a huge emphasis on increasing production, he writes, saying there was a “powerful sense that the war was a war of production,” with contemporary debate focusing on industrial efficiency, or the lack of it.

The importance of scientific advance in the conflict is obviously a well-known part of the story, from codebreaking (my favourite account is R.V. Jones’s [amazon_link id=”185326699X” target=”_blank” ]Most Secret War[/amazon_link]) to the Manhattan Project. Edgerton adds to this the sense of imperial power and the availability of material resources.

[amazon_image id=”185326699X” link=”true” target=”_blank” size=”medium” ]Most Secret War (Wordsworth Military Library)[/amazon_image]

His reinterpretation is certainly interesting, and it must always be fruitful to test received wisdom. His claim that postwar decline is misinterpreted is less convincing, however. Surely the loss of Empire is a decline, whether you think it was a good thing or not? And the transition to US superpowerdom postwar is clear.

I see from his website that Prof Edgerton is currently working on Capitalism, Empire and Nation: a new history of twentieth-century Britain, a forthcoming book for Penguin. That will be an essential read.

Innovation, competition and public good

[amazon_link id=”1594203288″ target=”_blank” ]The Idea Factory: Bell Labs and the Great Age of American Innovation[/amazon_link] by Jon Gertner is a fabulously interesting and readable book. It’s a terrific business history about the research and development arm of AT&T during its golden, monopoly era. Scientists and engineers at Bell Labs created some of the defining technologies of modern times, including the transistor, the semiconductor, the laser, fibre optics, Claude Shannon’s information theory, submarine cables, satellites (Telstar), early work on mobile communications, and more. (Francis Spufford’s lovely book [amazon_link id=”0571214975″ target=”_blank” ]Backroom Boys[/amazon_link] has a chapter on the UK’s contribution to mobile communications at the same time.)

“Finding an aspect of modern life that doesn’t incorporate some strand of Bell Labs’ DNA would be difficult,” as Gertner rightly puts it.

[amazon_image id=”1594203288″ link=”true” target=”_blank” size=”medium” ]The Idea Factory: Bell Labs and the Great Age of American Innovation[/amazon_image]

The book is also a thoughtful exploration of how this institution was able to be so consistently innovative for such a long time. The key is the implicit deal between AT&T and the US authorities to permit the company its monopoly of local and, for many years, long-distance calls as long as the fruits of the research were shared with competitors. Thus key technologies such as the transistor were quickly licensed at low cost. It was an excellent system for delivering the public good of innovative ideas. The parent company was a dull but profitable utility. It paid good and steady dividends to shareholders, and provided steady funding to Bell Labs. “The paradox of course was that a parent company so dull, so cautious, so predictable was also in custody of a lab so innovative,” Gertner writes.

An interesting question is therefore how Bell Labs came to be so innovative in the first place. Apart from the steady flow of generous funding from the parent company, its rules seem to have played a vital role. People were strongly discouraged from closing their doors. Anybody could ask anybody else – no matter how eminent – to help on a problem. The different disciplines were located in close proximity. All work had to be written down in specified notebooks and countersigned, so ideas were attributed, but nobody could claim individual patents. Everyone had to work on their own side-projects, an idea later copied by Google. Its director saw the lab as a living organism, with physical proximity essential for the fruitful cross-fertilisation of ideas.

In those pre-competitive times, the value of patents was well understood, and Bell Labs was careful to patent its discoveries, but there was no inhibition about exchanging ideas with the broader scientific community. For example, in the early days of semiconductor research, people from Fairchild Semiconductor in Palo Alto and Texas Instruments in Dallas were frequent visitors to the Bell Labs home in New Jersey. It’s hard to recall a time when commercial entities were so open with each other about their R&D.

Eventually, of course, the deal of monopoly power in exchange for social returns broke down; apart from Bell Labs, the other social element of the deal was AT&T’s use of long-distance profits to subsidise local calls. By the time the break-up of AT&T into the Baby Bells occurred in 1984, there had been several assaults on the monopoly by various US regulators. (Tim Wu’s [amazon_link id=”1848879865″ target=”_blank” ]The Master Switch[/amazon_link] gives an account of the communications monopoly from a far more sceptical perspective than The Idea Factory.) The federal judge who finally oversaw the agreement to break up AT&T was concerned not about the vertical integration of AT&T with its research subsidiary or with Western Electric, the equipment subsidiary, seeing economic benefit to consumers in the supply chain links, but rather about the horizontal integration. Hence the deal to break off the regional Baby Bells. Competition from MCI on long-distance calls was already occurring. But some people saw the end of the monopoly as an inevitable result of the earlier licensing of key technologies anyway. AT&T and Bell Labs had given birth to their own future competitors.

The inevitable question is what kind of innovation system could again deliver such fundamental technological advances. All of the communications technologies have involved vast, vast sums of money and multi-year, multi-person efforts. Mariana Mazzucato has argued that government involvement in innovation is always essential, due to the scale of funding and effort, and the risk involved, giving examples mainly from the computer industry in her book [amazon_link id=”0857282522″ target=”_blank” ]The Entrepreneurial State[/amazon_link]. Governments of course fund university research, as do some foundations, but direct public funding of research and, importantly, development in the commercial sector is rare: in the US it is often done through the defence budget, and in other countries it was previously done through nationalised entities.

Elsewhere, and in the post-privatisation era, it is pretty rare. And today’s information sector monopolists and quasi-monopolists do not seem to have the same sense of public obligation as their Bell Labs predecessors; the profit motive did not drive the creation of transistors and semi-conductors, although it was vital in getting them into new products in the market once they had been invented. Dominant companies in digital businesses with low marginal costs and strong network effects have tremendous market power which it’s hard for competition authorities to address because there are large consumer benefits and because there’s always the hope of disruptive entry by a new and better soon-to-be-dominant company. Perhaps the right public policy approach is to learn a lesson from the history of Bell Labs and look at what public or social benefits these dominant players offer until that disruption happens?

[amazon_image id=”1848879865″ link=”true” target=”_blank” size=”medium” ]The Master Switch: The Rise and Fall of Information Empires[/amazon_image]