(Co-written with Paul Murphy, IT Programme Leader at Co-operatives UK)
All intelligent people can be stupid at times – it just depends on the context. The same is true for technology, which is why the search for artificial general intelligence – perhaps a system that is stupid less often when the context changes – is the focus of so much interest at present.
Like many other organisations, we have been looking at where we can use automation. The migration of functions from holiday booking to sales and accounts onto cloud-based systems represents an extraordinary shift in organisational life. As a co-operative association, we have embraced the possibilities of digital learning, with a national platform to support new co-operatives, the Hive, offering digital resources, diagnostic tools and advice communities, funded by the Co-operative Bank. All this is in line with our mission to promote, develop and unite co-operative enterprise.
Machine learning is another area of promise. As characterised by Martin Ford, author of Rise of the Robots, this is where “a computer churns through data and, in effect, writes its own program based on the statistical relationships it discovers.”
The organisation, founded in 1869, has always collected data from its members, in order to publish statistics on the sector, for benchmarking and to allow our members to tell their shared story as a different kind of business. What intrigued us was whether we could use artificial intelligence to learn from the data that we have.
In particular, when things fail in a co-operative, it is bad news all round, because so many people are both co-owners and have an emotional stake in the enterprise. Could we use the data that we have to spot the early warning signs in a way that supports early action?
We drew on recent work using two core techniques:
- Neurocomputing is inspired by the structure and workings of the animal brain and has prompted the development of neural networks that allow machines to learn in a somewhat similar way.
- Evolutionary computation is another bio-inspired technology whereby algorithms are iteratively adapted through processes based on natural selection and evolution.
Neurocomputing has had success in developing AI products that are now in wide use. In computer vision, neural networks perform highly accurate image recognition and are being used in robotics, online mapping and autonomous vehicles. Experimental applications include medical diagnosis, where researchers believe neural networks can analyse radiographs, CT and MRI scans with much greater accuracy than traditional techniques, and drug companies are experimenting with neural networks that process 3D images of molecules with the aim of identifying new drugs.
Natural language processing is another area where neural networks are used. ‘Chatbots’ aim to automate customer service interactions, and products such as Google Now, Amazon Alexa and Apple Siri, built into common consumer electronics such as smartphones and smart speakers, use neural networks to provide a voice user interface and language translation. Voice recognition has been available since the mid-1990s, but the addition of neural networks has allowed a radical improvement in the accuracy and usability of the technology.
In economics, over the past 20 years or so the new field of complexity economics has offered a powerful critique of neoclassical economics and its various assumptions. Leading thinkers like Brian Arthur, Paul Ormerod, Eric Beinhocker and Doyne Farmer have all pointed to radically new ways of framing how economies work. As Manfred Max-Neef described it years ago, we might dub this ‘Real-Life Economics’.
In our pilot study, we created and trained a population of neural networks. Using the familiar evolutionary principles of reproduction, mutation and selection, the population was evolved through several generations to produce a network that performed significantly better than any member of the original population.
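For readers who want a feel for how the two techniques combine, here is a minimal sketch in Python. It is illustrative only, not the pilot's actual code: the tiny network, the toy dataset and all parameter values are our own stand-ins, but the loop of selection, reproduction and mutation over a population of network weights is the general neuroevolution pattern described above.

```python
# Minimal neuroevolution sketch (illustrative stand-in, not the pilot code):
# a population of tiny feed-forward networks is evolved by selection and
# mutation on a toy classification task.
import math
import random

random.seed(0)

N_IN, N_HID = 3, 4                   # 3 inputs, 4 hidden units, 1 output
N_WEIGHTS = N_IN * N_HID + N_HID     # hidden weights plus output weights

def predict(weights, x):
    """Forward pass: tanh hidden layer, sigmoid output."""
    hidden = [math.tanh(sum(weights[h * N_IN + j] * x[j] for j in range(N_IN)))
              for h in range(N_HID)]
    out = sum(weights[N_IN * N_HID + h] * hidden[h] for h in range(N_HID))
    return 1 / (1 + math.exp(-out))

def fitness(weights, data):
    """Classification accuracy over a labelled dataset."""
    correct = sum((predict(weights, x) > 0.5) == y for x, y in data)
    return correct / len(data)

# Toy stand-in for financial indicators: label is True when the inputs sum negative.
data = [([random.uniform(-1, 1) for _ in range(N_IN)],) for _ in range(200)]
data = [(x[0], sum(x[0]) < 0) for x in data]

def evolve(data, pop_size=30, generations=20, elite=10, sigma=0.3):
    """Reproduction, mutation and selection over a population of weight vectors."""
    pop = [[random.gauss(0, 1) for _ in range(N_WEIGHTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, data), reverse=True)
        parents = pop[:elite]                                 # selection
        children = []
        while len(parents) + len(children) < pop_size:
            parent = random.choice(parents)                   # reproduction
            children.append([w + random.gauss(0, sigma) for w in parent])  # mutation
        pop = parents + children
    return max(pop, key=lambda w: fitness(w, data))

best = evolve(data)
best_acc = fitness(best, data)
```

Because the fittest weight vectors are carried forward unchanged each generation, the best fitness in the population can only improve or hold steady as the generations pass.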
For those interested in the comparison with biological systems, the chart below, drawn by Karlijn Willems, compares the two.
So we trained the network using a selection of historical time-series economic data on the UK co-operative sector. We then tested the results, with the aim of predicting which organisations were at high risk of financial difficulty or failure.
Using the historical data, we could see that the accuracy of prediction increased from 35.60% – being stupid most of the time – to 51.45% – being intelligent more often than not.
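Testing predictions against known historical outcomes is what makes a figure like this meaningful. As a rough illustration (the pilot's actual features, model and data are not shown here; the records, cutoff year and threshold rule below are invented for the sketch), a chronological split keeps the test years strictly after the training years, so the model is never scored on outcomes it has already seen:

```python
# Hedged sketch of a chronological train/test split for failure prediction.
# Everything here is synthetic: the records, the 2012 cutoff and the simple
# threshold "model" stand in for the pilot's real features and network.
import random

random.seed(1)

# Synthetic records: (year, financial indicator, failed_within_two_years)
records = [(year, random.uniform(0, 1), random.random() < 0.3)
           for year in range(2000, 2016) for _ in range(50)]

# Train on earlier years only; test on later years the model has never seen.
train = [r for r in records if r[0] < 2012]
test = [r for r in records if r[0] >= 2012]

def accuracy(threshold, data):
    """Share of records where (indicator < threshold) matches actual failure."""
    correct = sum((ind < threshold) == failed for _, ind, failed in data)
    return correct / len(data)

# Fit the only "parameter" of this toy model on the training years alone...
best_t = max((t / 100 for t in range(101)), key=lambda t: accuracy(t, train))
# ...then report accuracy on the held-out later years.
test_acc = accuracy(best_t, test)
```

The design point is the split itself: with time-series data, a random shuffle would leak future information into training, flattering the accuracy figure.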
The predictions we generated can be used to guide our contact with members and our offers of support. Often in a co-operative, preventative work can make a difference. If there is not enough income, there is not enough income, but if the underlying reason is one that can be addressed, such as the quality of governance or the capital structure, then the advice we offer can make a real difference.
We have made big strides in the use of open data for the co-operative sector and, as a recent UK Government review on AI puts it, “more open data in more sectors is more data to use with AI to address challenges in those sectors, increasing the scope for innovation.”
The downside was the astonishing drain on computing power required to run the analysis. We had to use a very high-specification cloud computing instance, and even then the run took over two weeks.
When the computing finishes, though, we face the human challenge. How do we talk to our members about a prediction that they will fail as a business within one to two years?
You might think these things should not be a surprise. Tacit knowledge should always add weight and insight that formal systems won’t capture – which is why some of the interesting work on complex systems at the London School of Economics by Professor Eve Mitleton-Kelly brings in stakeholders and focuses on an enabling environment to resolve specific challenges. This is the more participative and applied end of a field often accused of producing ‘black box’ solutions – where you are told the answer but have no idea why it is the answer (the Watson supercomputing system developed by IBM takes this challenge on: how to reconcile complexity and transparency). An unhappy outcome for an individual co-op would be a stupid (wrong) prediction of failure that itself triggers a closure.
Thirty years ago, John Butler, a legendary ex-colleague from Co-operatives UK (then the Co-operative Union) went to Clydebank to visit the local, fiercely independent consumer co-operative. Armed with statistics, John argued that the co-op was sure to fail and would need to transfer its engagements to another larger co-op in order to survive. He was sent away with a flea in his ear, the co-op turned itself around without outside help and for thirty years, Clydebank Co-operative has refused to rejoin Co-operatives UK. It was a lesson.
When you think you are genuinely being intelligent… now that’s when stupidity is most likely to catch you unawares.