Computers do exactly what you tell them to. Often, this isn’t what you want them to do. When something goes wrong, in other words, it is a matter of bad input, not bad processing. People can respond to implication, subtle tones and fuzziness; computers are typically relentlessly deterministic.
‘Systems theory’ emerged from ‘cybernetics’, which developed in the US from the 1940s. Norbert Wiener was an early pioneer of the notion that information travels in organisms as it does in hardware systems. This background led directly to an early ecological movement that saw all of nature as a system that could be perfectly modelled, as if it were an electrical or other technical system. The assumption was that objective knowledge of the world around us was possible, and that the world was manipulable and controllable according to objectively verifiable facts. From this followed a utopian vision of a future free from political power, intrigue and interest, resulting largely from the move from cybernetics through ecology to society.
The move from cybernetics, through ecology, to the political was fuelled largely by the appeal of the idea to the popular imagination. Trust in politicians, traditionally never high, was plumbing new depths. These movements promised futures freed from narrow power-plays and the interests of status-seeking individuals. Harmony in society, like harmony in nature, was possible through the objective description and manipulation of the facts of social life considered as a system. Equilibrium could be achieved by thinking of individuals as playing roles in a broader machine and tending to that machine’s functioning, rather than to the inherently biased and narrow perspectives of its components (cogs like you and me).
There is a concern among many people from many walks of life that this relentless logic, if put in positions of power, can infringe or destroy human freedom. There are more and less hyperbolic ways of stating this fear, which can be taken as a call to temper the on/off, digital world of the computer with some of the irrational, subtle nuance of the human – a first strike against the domination of the familiar life of humans by the precision of digitalism. The idea is to humanise the emerging technology rather than risk being technologised by it.
In a rapidly advancing and highly pervasive world of technology, we find not only that technology facilitates ways of life, but that it also shapes them. For instance, the number of bank workers in America fell whilst bank transactions rocketed, thanks at least in part to the introduction of ATMs. We might speculate that we bank more the more easily we can bank, but also that our habits become disconnected from interpersonal practices. Would we have accepted ATMs so readily had they been described in terms of the thousands of jobs (mainly jobs held by women) they would cost? Naturally, the banking system as an economic unit benefits from lower costs: it gains the rewards of higher numbers of transactions without having to pay wages to tellers. But are these interests of systems trumping human interests?
Part of the reason for fear for the future is the sense that the progress of science and technology is unstoppable and inevitable, and the presumption that better technology makes for better living. The excitement with which each innovation is met, the marketing buzz surrounding a new invention, can mask the concerns that ought to be raised by what that innovation means.
For instance, if my smartphone is feeding information on my whereabouts, my likes and dislikes and my web habits to various information centres, at what point does it stop being my phone and become instead a sense organ for a virtual entity? Given that I own the phone, at what point do I cease being primarily a person and become instead an information source – a peripheral serving the interests of something external to myself, in whose objectives I have no say? When I use the latest app, am I really exercising my freedoms and augmenting my lifestyle, or am I feeding potential marketing information to a distant, virtual, self-facilitating information node?
Technology has as much power to direct existing ways of being as it does to enable new ways of being. This can include the suppression of some ways, and it can do so invisibly. Relentless, life-course-altering and drily logical potentials, based in the machine, ought to be a cause for at least some discussion. Isn’t human, irrational wiggle-room important in a world predicated on so much technology?
Without a means of expressing something else, the kind of (perhaps as yet undelivered) promise of political emancipation offered by systems theory must simply be abandoned. We can’t cede power to the machine, but neither can we cede it to the assumed synergy of expressed opinion. We have to take responsibility, and that means dealing with the troublesome business of politics.
Whereas the cybernetic-ecological movements sought equilibrium through the suppression of perspectives and the ceding of autonomy to the machine, systems theory is utilised more nowadays in a metaphorical sense. Society functions like a machine, we might say now, not as a machine.
In effect, politicians and policymakers are well aware of the fuzziness that surrounds the metaphor, and so they seek different types of information with which to make effective and legitimate policy decisions. The impetus nowadays is not to cede autonomy to the machine on the presupposition that it will find its own level, but instead to ‘include stakeholders’, and to manage rather than deny the irreducible fuzziness.
Ironically, there is much computer science focussed upon simulating human behaviour and creating artificial intelligence. Politically, there is still the image of the system as ideal, whilst in computer science, the model being explored is that of human intelligence.
Human intelligence involves more than just instrumental rationality and decisions about efficiency. Intelligence involves the utilisation of some capacity to perceive, interpret, judge and manipulate the cognitive and natural environment, and to do so in a rational way. That capacity is an open one, meaning it resists definition, and the standard of rationality by which it is judged is essentially contestable. On this basis, all intelligence is artificial. It is in constant construction between individuals, standards, aims and methods.
That being so, we should be more concerned with having the discussion about these various perspectives and beliefs than we are with enacting programmes based on them. Systems theorists, techno-pagans, managerial politicos and the person with no fixed view should all be interested in seeing what is being thought about in the others’ camps. The fact is that technology is increasingly playing a role in all walks of life, and that technology can have effects on how those walks are walked. Better, then, not to take a side and stick to it, but instead to get a grip on what it means for all involved.
Wiener, N., Cybernetics: Or Control and Communication in the Animal and the Machine, 1948.
Odum, E.P., “The Strategy of Ecosystem Development”, Science 164: 262–270, 1969.
This utopian idea is neatly expressed by the poet Richard Brautigan in his “All Watched Over by Machines of Loving Grace”, 1967, and can be seen expressed in Odum, H.T., Environment, Power, and Society, Wiley-Interscience, New York, 1971.
Hunter, L.W., Bernhardt, A., Hughes, K.L. and Skuratowicz, E., “It’s Not Just the ATMs: Technology, Firm Strategies, Jobs, and Earnings in Retail Banking”, Industrial and Labor Relations Review 54(2A), Extra Issue: Industry Studies of Wage Inequality, March 2001, pp. 402–424.
Atkinson, C.J. and Checkland, P.B., “Extending the Metaphor ‘System’”, Human Relations 41(10): 709–724, October 1988, presents a critical overview of this idea.