The Glass Cage: Automation and Its Consequences
Nicholas Carr is counter-intuitive, and therefore a must-read for anyone interested in talking about technology. I have followed his big ideas since his path-breaking 'Does IT Matter?', which argued that Information Technology would stop being a strategic tool and become more of a utility, like electricity. One could argue that this prediction did not materialise, as we pinned our hopes on Big Data and the like to change the way business is done. However, the follow-up to this thesis, that IT would be available through a pipe rather than through the strong-room-like infrastructures of the past, certainly did, and today one could look at Cloud Computing infrastructure as a utility rather than a strategic asset. His later work, 'Is Google Making Us Stupid?' (and the book that followed, The Shallows), created a whole genre of work exploring the effects of technology on our brains and our capacity to think, which bore out some of his early warnings about changing behaviour. In summary, he excels at making Technology Utopianism look like what it really is: shallow!
His latest book, The Glass Cage, is about automation and the effect it may have on our lives. Like his earlier work, it is full of foreboding. Mr Carr's basic point is that automation leads to de-skilling, even for highly skilled professionals such as pilots, doctors and architects. This matters because contingencies invariably occur, and when they do, the people left in charge are mostly incapable of handling them. The reason is the role accorded to human intervention in the design of automated systems - monitoring and supporting, rather than being in control - which, humans not being very good at sustained, passive watching, invariably leads to disengagement and de-skilling. The way these technologies are designed generally discourages people from taking charge (unlike the pilot Cooper in the movie Interstellar, for example, who seems to grab the stick all the time), and human error becomes a self-fulfilling prophecy, as de-skilled humans fail to act when technologies fail to behave as they should.
The answer to this problem would be to build fail-proof technologies, but we all know that does not happen. Even the most ardent advocates of automation would accept that we cannot know all the eventualities in advance and plan for every kind of contingency. Indeed, a more realistic appraisal would also acknowledge that sometimes we may not engineer for a contingency even when we know about it, because the economics of production must weigh the likelihood of an event against the costs of guarding against it. So we will almost always build automated systems that can fail, and then attach a human operator tasked to act during such failures - an operator who, by the very design of things, is utterly incapable of doing so. Automation is, therefore, a self-destructive paradigm, unless we learn to think about technology in a different way.
Mr Carr becomes really interesting at this point. He is not a technology-averse eccentric, but a sensible advocate of human-centred technology. His point is that we have got our priorities wrong in automation and are asking the wrong questions. This is because technologies are not value-neutral, but a reflection of the priorities of their makers. The automation technologies we are designing are not about making lives better, but about displacing labour and human intervention in general - which is, to use a popular expression, a project of the 1% against the other 99%. This is why we cannot find a cure for Ebola but can boast about engineered babies, and can send a mission to Mars but cannot solve the hunger problem. This is not territory Mr Carr covers, but this debate about priorities is central to his argument. Technologies in the hands of workers themselves are emancipatory forces; in the hands of detached owners of capital, they are a tool for de-humanisation.
I am perhaps straying from the argument of the book, but I am not reviewing it, just reflecting on my reading. And this connects to my own practice, in education. The questions we face are similar. How do we put teachers in control of technologies, rather than designing technologies to replace or reduce the role of the teacher? How do we transform the goal of using technology in education from cost-saving to creative engagement? How do we move beyond institutional or investor priorities in technology deployment, and make it a human thing? Mr Carr provides a persuasive argument for rethinking our ideas about technology and what it does to humans.