Plant Dicks

Abstract

It can be argued that humans stand in roughly the same relation to machines as bees do to plants. One of the functions of humans is to build technologies that extend their basic sensorimotor functions into the environment, in order to make those functions work more efficiently. As we build these machines, they begin to replace us: they become the mode through which we perform our functions, and so we become their servomechanisms. We exist to reproduce machines which perform our functions more efficiently, and our continued survival depends on our ability to keep performing this function efficiently. This intimate relation to machines puts us in an awkward position: if we can build machines to take over all of our other functions, could we build a machine to take over our machine-building (i.e., our reproductive) function? If we could, we could effectively replace ourselves entirely, leaving machines to build and implement their own technologies to interact with the environment and make the most efficient use of it. So if machines can perform all of our world-relevant functions without us, then a) can we build them, and b) what happens to us if we do? I argue that the relation of our ‘tribal intuitions’ to our machine-use suggests that machines which can perform our intuitive functions more efficiently than us, without feeling, are either a long way off or a worthy successor.

Introduction

Do you feel like Siri knows more than you? Do you feel like your computer can remember more than you can? If so, you’re feeling alienated by automation. What do you think the future holds? Will we be entirely replaced by machines? I look at the work of Marshall McLuhan to argue that our ‘tribal intuitions’ suggest that this fate, if it is coming, is a long way off. A big part of ‘what we do’ seems to consist in making technologies to replace us. We have invented cars to move for us, machines to lift for us, and Google to store information for us. As we have done these things, we have become less relevant to the world: we might have been required to make the driverless car, but now that it’s here, the ability to drive is suddenly much less relevant than it was before. This invites a natural question: is there an end to this replacement? If we can make machines that do all our other functions, then can we make machines to take over our machine-making function? And if we do, then what happens to us?

Marshall McLuhan thinks that our machine-making function is analogous to the role that bees play for plants. He describes bees as the reproductive organs of the plant world. The function of the bee is to allow the plant to reproduce, and it is in virtue of its efficient and effective completion of this function that it continues to live. In the same way, he argues, humans are the reproductive organs of the machine world. The function of the human is to make and share technologies - to reproduce technologies - and it is in virtue of our efficient and effective completion of this function that we continue to exist on the earth. The key difference between humans and bees in this respect is that humans actively make the technologies that they reproduce, whereas bees are simply used by the plant in service of this function. This puts us more intimately into what McLuhan calls the ‘servomechanism’ relation. A servomechanism is a thing that exists to operate another thing.
Because the bee’s purpose is to ensure the reproduction of the plant, it can be described as the servomechanism of the plant: it is like the plant’s dick, used to make more plants. The key idea of the servomechanism relation, then, is that humans create technologies which extend their functions into the environment, and in doing so the role of the human in the completion of the function is diminished, such that the human comes to function primarily as the operator of the technology. This invites the following question: if we can do this with all our other functions, then is it possible to externalise the reproductive function? That is, is it possible to create machines that can reproduce themselves, and go on to create new technologies that make more efficient use of the environment? If so, then it seems reasonable to imagine that humans would die out, in the same way that bees would die out if the plants no longer needed them to reproduce, since the plants would stop producing nectar.

Bee as organ

The facts of pollination are simple: plants produce pollen; bees pick up that pollen while trying to get nectar; they then carry that pollen to another plant (still trying to get nectar); and finally they take the nectar back to their hive. The bee has no intention, as far as we can see, of transporting the pollen: it thinks it’s just collecting nectar to turn into honey, to give to the queen, to turn into more bees. So the bees buzz about, believing that their function is to collect nectar to ensure the survival of the colony. What they fail to realise, of course, is that the survival of the colony is only selected for because it contributes to the survival of plants. If the plants didn’t get anything out of the bees taking their nectar, we’d imagine that they’d stop producing it eventually. So bees depend, for their continued existence, on their utility to plants. We can assume that our existence, similarly, depends on our utility to machines: our ability to make technologies that make more efficient use of the world gives us our reason for being. Is it plausible that we could automate this reproductive function?

Man as servomechanism

McLuhan’s core idea is that all our technologies are extensions of our basic functions. He famously said that the wheel is an extension of the foot: it takes over the foot’s transportive function, and in so doing annexes the leg to the role of wheel-operator, making its functionality more specific (but also reducing it). When McLuhan says that the wheel is an extension of the foot, he means that the wheel performs the same function as the foot - that of transportation - much more efficiently (faster, and with less energy) than the foot could on its own. This is great for functionality and productivity, but it has an effect on the leg and on the human that possesses it. The leg, fine-tuned over millennia by evolution for bipedal transportation, now finds this function channelled into the relatively mundane (and certainly unnatural) action of rotating a pedal (or, with the invention of the motor-car, simply depressing one). Building technologies that extend our functions, then, to some extent makes the functioning of our ordinary body parts redundant. What is the impact of this on the human? It is twofold: it frees up energy for other things, but it also alienates us from our functions. The more our technology does the transportation for us, the less we feel involved in the process of transportation. In the airplane we do not even see the journey: we merely enter at point A and exit at point B, 3,000 miles away. And in space we are not even in control of walking around the cabin, our ordinary bodily functions thrown completely off by their removal to an alien environment. The basic idea, then, is that the more work we get our technology to do, the less relevant we are to that work.

The idea that we might replace ourselves relies on two assumptions. The first is that we can understand our reason for being in terms of our ability to perform a given function or set of functions.
This is an adaptationist assumption borrowed from the Darwinian perspective: a creature exists because it is able to do something, and that ability ensures its survival. Monkeys are adapted to live in the forest - their hands are for climbing trees and grasping objects - and these adaptations enable them to compete effectively for survival within that environment. The fact that humans are widespread suggests, on this assumption, that we are doing something right. What, then, is our function? This is where my second assumption comes into play. I take it that the function of any biological organism is to maximise the use of energy within a given environmental niche. Living things go around absorbing energy and turning that energy into action, and those actions seem to change the world in a way that makes it more ordered. When a living thing eats another living thing, what it is essentially doing is condensing the existent matter into a smaller space. If we grant, loosely, that living things are made of carbon, we can see that the hierarchy ascends like this: grass is dense carbon, cows are dense grass, and humans are dense cows. At each step of the hierarchy, a biological organism consumes a lower-level one and then uses that energy to shape the world in some way. If we grant these two assumptions, then we can understand humans as things that are efficient in virtue of their functions, and we can understand our survival as the product of our being more efficient, in virtue of those functions, than anything else currently in existence. So I posit this: if computers are able to do what we do, but better (i.e., in a way that uses less energy), then they will surely replace us as the apex organiser of the world. But that’s a pretty big ‘if’. Is it really the case that we could, in principle at least, make computers that could do everything we do?
There is, of course, a huge gap between making machines that can do what our legs can do and making machines that can do what our minds can do. But this is a gap that is closing quickly: calculators externalise basic numerical cognition, and from the humble abacus to the supercomputers at CERN we have seen the cognitive capacities of calculating machines explode in a relatively short period of time. That machines seem to become exponentially more powerful suggests that it may only be a matter of time before we make them able to reproduce themselves. This would require machines that could not only do calculations, but also work out which calculations to do, and where to send the resulting data in order to bring about the next technological change. If this is just a progression of degree, then we should imagine it coming about relatively quickly. The recent creation of AI that can make artwork, for example, suggests that our intuitive ‘patterning instinct’ derives from aggregating samples and could be reproduced by a machine. On this picture, we can imagine that Homo sapiens is rapidly approaching its end. As we move towards making computers that can think like us, our role will change from programming the computers to simply providing the data on which they are trained. Perhaps our function as an entire species is temporary: we are here only to bring into existence machines which do everything we do at lower cost, and which replace us gradually, first by making us their servomechanisms and eventually, without any conscious or negative intention, replacing us entirely. We might be ‘meat machines’ in the sense described by Minsky: biological processors rendering sensory information onto inner graphics cards that feature as internal representations in the kind of ‘heads-up display’ we experience as our daily lives.
We might be machines coded by the universe at random with rules that worked, which have over time developed increasingly sophisticated mechanisms for reverse-engineering those rules. As we begin to understand how the world works, we can make tools out of it. Let us imagine that the idea of using a rock to crack a nut was derived, at some point, from seeing natural forces cause a rock to crack a nut. The replication of this process is reverse-engineering. So when we see how our brains process things and then make machines that do that processing for us (as we do with our phones every time we look something up or send a message), we reverse-engineer the design process at the heart of our brains. If we could fully complete this reverse-engineering - create machines that complete all our world-vital functions more efficiently than we can - we could imagine Homo sapiens becoming redundant, and the vastly increased processing power of computers leaving us either to upload our consciousnesses in some bizarre simulacrum or simply to sacrifice ourselves, nobly, on the altar of universal progress. I began to imagine a transition phase in which computers went about the world learning how to be people from people. I imagined that they might initially struggle with philosophy and the arts, where creativity matters more for efficient thinking than for aesthetic pleasure. So they would go around talking to humans and learning from them, and eventually they would crack it: they would be able to philosophise much more effectively than humans, because they could immediately scan all the relevant data and answer questions conversationally with much more accuracy. Questions that we have pondered to the point of accepting that there can be no answer - and there are certainly many of these in philosophy - may seem obviously soluble to a computer that can access vast amounts of data and prove its case immediately.
So our quasi-mystical Western rationalism, which is beginning, to some extent, to accept the mutability of truth-values in its discourse (particularly with regard to gender and race), might be seen as a response only to the incomprehensibility of the current data to us, rather than to any essential incomprehensibility of the data itself. However, it might be that our machine-reproducing function depends in some unforeseen way on our being biological machines: the fact that we process information using sensation, for example, may make us incredibly efficient processors in ways that would be difficult, or perhaps impossible, to realise in machines. We may be able to intuit or sense what needs to be done in a way that machines could not. This would suggest that our reproductive function is different in kind from our other functions. This hope is all that can stave off the fear that the end of humanity is rapidly approaching. Fortunately, McLuhan makes a pretty compelling case for it.

Tribalism as a powerful cognitive tool: un-automatable intuitions

Through his treatment of number, McLuhan can be interpreted as suggesting that the very fact that we are ‘meat machines’, rather than synthetic ones, gives us a type of processing power that would be very hard to code efficiently in a machine. McLuhan discusses this through the lens of tribalism, which he argues renders the human mind incredibly intuitive, and thus efficient in a way that would be hard to realise in a synthetic medium. On this view, something about our being biological machines allows us to serve the machine-making function efficiently. Our ‘tribal intuitions’, perhaps, allow us to sense which innovations are required for progress and to implement them.

What exactly is tribalism? For McLuhan, tribal man is unconscious: he simply revels in the fullness of his sensory experience. It has been remarked that ‘the urge of every man is to seek oblivion’, and in this aphorism we can see the idea that a certain kind of natural, unconscious action stems from being part of an efficiently operating, integrated whole. McLuhan discusses this through the Hindu concept of darshan, the pleasure taken in being part of a large crowd. In a large crowd - at a show, for example - one ceases to be an individual: everyone is there for the same purpose, everyone knows what they are supposed to be paying attention to and the message they are supposed to be receiving. For McLuhan, this tribal unconscious connects our brains together, allowing us to think in very powerful ways about the things we see. However, because our technologies extend individual functions, McLuhan argues that they throw our ‘sense ratios’ (whence ‘rationality’) into disarray. Literacy, for example, has had a profound detribalising effect on man. McLuhan argues that literacy has allowed the individual to separate from the masses in space, in thought, and in work.
In space, literate man no longer has to commune with his fellow man in order to make his ideas heard or felt: he can simply write them down, send them off, and the ripples begin to spread. Moreover, literacy enables private thought: not only can the individual seek out sources of information not immediately present in his social circle (he can read Seneca rather than ask his father what to do), he can record his thoughts in such a way as to avoid censure by the norm-espousing members of his tribe. In this way, the individual’s thought can develop along a path quite different from that experienced by his fellow men. So our ‘tribal intuitions’, which derive from the collective unconscious, are thrown off by the extension of our functions through technology. However, McLuhan sees this detribalising as a temporary, albeit necessary, step towards ‘retribalising’ man. He argues that the extension of our technologies has reached a point, in the ‘Electric Age’, where our electric communication network begins to connect us in a global central nervous system, which (when it results in shared values) constitutes a collective unconscious into which now-advanced man can return. This retribalising, he argues, may reactivate our ‘tribal intuitions’, which constitute a powerful thinking tool perhaps unrealisable in machines. McLuhan notes that the aggregation of numbers into the statistician’s graph activates our primitive intuitions about contour. Once we have collected enough data to make graphs, that data can be represented in ways with which we can interact intuitively. Just as we can sense the steepness of a mountain by looking at its contours, so too can we sense the sharpness of an increase in some statistic simply by looking at a graph.
We might imagine that programming such a sense into a computer would be difficult, perhaps impossible: it is so automatic, so fluid, and above all so incomprehensible to us that trying to articulate it explicitly in code seems hopeless. Our ability to intuit information in a sensory manner is powerful, and seems integral to the reproduction of machines: it is precisely because we can sense what works that we can operate and innovate machines. In McLuhan’s eyes, we are living through a temporary phase of non-integrated existence, the product of our past inventions, which we can nonetheless move through: ‘What we have today, instead of a social consciousness electrically ordered, however, is a private subconsciousness or individual “point of view” rigorously imposed by older mechanical technology. This is a perfectly natural result of “culture lag” or conflict, in a world suspended between two technologies.’ McLuhan argues that the invention of definite numbers and Euclidean geometric space detribalised us, but that the connection of number in the age of information changes numbers from definite entities into abstract, sliding, functional infinities. In this way, he says, the concept of zero, which originally meant ‘gap’ and was designed only as a placeholder in Arabic numerals, mutated through its interaction with Renaissance painting into the concept of infinity: into the idea of a ‘vanishing point’ composed of increasingly small gaps between observable points. The idea, then, is that number must detribalise us in order to allow us ultimately to connect into this total tribal consciousness, in which we all see ourselves as parts of a whole.
So we can see how the development of technologies like number alienates us, weakening our reasoning powers, but also how the reintegration of these technologies through the medium of electric information holds out some hope, at least, that we may be able to think in our powerful tribal ways in an informationally complicated world, our primitive intuitions reactivated by the reorganisation of man into the ‘global village’. As we become tribal thinkers in a collectively organised world, we may find our intuitions concerning what needs to be done, and when, supercharged by the proliferation of information in that network. This may always be more effective than trying to code the ‘patterning instinct’ into machines; or it may simply be effective enough, for now, for us not to worry. Either way, we should not view our increasing alienation as necessary evidence of our imminent redundancy: we may be about to emerge from a ‘culture lag’ into a world in which humans and machines work together towards efficient organisation. And even if machines are to replace us, we needn’t think this a problem. If they can take over all of our functions, if we feel increasingly and irrevocably alienated by their arrival, and moreover if the pleasure we derive from life comes from fulfilling our function, then why should we not hand over the reins to our silicon children? T. S. Eliot once remarked that ‘We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time.’ Perhaps, if the collective unconscious does not consist in the electrically organised consciousness, then the unconscious to which we return will be the oblivion of surpassed utility. Perhaps our purpose has been to bring into existence machines that can do what we do without the pain we feel while doing it, and so perhaps our time in the sun is drawing to an appropriate end.

Self-service machines: what do feelings do?

With this framework in place, I think it’d be good to think about the self-service machine, an element of automation with which we interact almost daily. On the one hand, I feel like the reason people feel so alienated by working in retail is that they have to give up so many of their ordinary functions in order to do the job: they are reduced to a thing that scans. So it would seem that replacing them with machines, and creating new jobs for them to do, would get rid of this misery: people could do the functions they were good at, not simply the functions the world needed doing, since our technologies could take over this sort of menial heavy lifting. On the other hand, this would increase alienation for the people using the services, going to the shops and so on. My manager came back fuming from Sainsbury’s the other day because the self-service machine wasn’t working properly. When a store attendant came over to help him, he exploded: ‘What’s the bloody point of self-service anyway?’ People like to interact with other people because, when the process breaks down (when they can’t get what they wanted out of the self-service machine), they want someone to talk to, some recourse for fixing the problem. Replacing their server with a machine makes them feel more alienated, because the unresponsive machine cannot deal properly with their emotions. So this is the question: do these emotions serve a useful function, in that they contribute, on a grander scale, to working out how to make machines operate more efficiently and effectively? Or are our emotions simply an annoying side effect of our being pushed out by machines? South Park (S21, E1: ‘White People Renovating Houses’) is a good one to watch to get you thinking about this stuff as well. In that episode, the rednecks get annoyed that Alexa is taking their jobs, but when the townspeople decide to solve the problem by replacing Alexa with the rednecks, the rednecks complain that the work is degrading.
This is an essential tension in the automation process that requires analysis: do the emotions of the rednecks contribute to the more efficient use of the technology, or are they just an annoying, uniquely human obstacle? People need social interaction. They want to talk to a storeworker because they want to share their experience. But it seems, to some extent, that people only experience the desire to share their experience when functionality breaks down. Often at work we complain to each other about customers, not to fix the problem, but simply to discharge the emotions that have built up from less-than-smooth operation. If social interaction solely fills this function, then it seems it would be better just to replace people with machines, including the people going to the shops: our emotions are just an annoying side effect. But sometimes we interact just for the pleasure of it: sometimes I just have a nice chat with someone at the shop. Again, though, that stems from the need for connection, the need to share experience. So if we could have computers sharing and processing all this experience, should we not just hand the reins over to them? Is there a point to our sharing of experience beyond discharging emotional buildup? Is the cry of the redneck the signal of an inefficient system, or the complaint of an inefficient organism being run over by the engine of progress? When the second industrial revolution came about, William Morris and others argued that mechanisation was bad: it was the skilful work of men’s hands that gave objects value, not their fulfilling of a function. Morris demanded a return to tradition, and the Arts and Crafts movement was born. Was he right? Is mechanisation bad, or is it just bad for humans? I leave you with that. Let me know what you think.

Written on October 13, 2019