The Machine Age: Is It Progress, or Is It an Anomaly?

When one questions current trends, especially for those beyond their twenties, it is quite common to be labelled ‘out of touch’, ‘outdated’, ‘reactionary’ or some other negative term reserved for those who might resist the supposedly inevitable process of change.

Change is perhaps the only constant in known existence, on both the micro and the macro scale. Accepting that all things are subject to eventual change is a valuable lesson that all of us must confront if we wish to attain any self-knowledge or wisdom. However, not all change is necessary, inevitable or beneficial, and change for its own sake is not something we should accept uncritically. Treating all change as ‘progress’ is foolish to say the least, and one must consider that just because something is possible does not make it imperative.

Looking at human history, we can see variations in the pace of change over millennia, across different societies and different parts of the world. Most people would agree that change was slow up until the Renaissance. During and after that period, the rate of change in technology and human innovation increased dramatically, as did humanity’s effect on the planet.

This rate of change increased again as we entered the industrial age – with the creation of machines that ran on steam, hydrocarbon fuels and electricity. Coupled with technological progress came a huge increase in the human population, which reached approximately 1 billion in 1800, 2 billion around 1925 and stands at just under 8 billion today.
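
A rough back-of-the-envelope calculation makes the acceleration visible. The Python sketch below uses the approximate milestones cited above, taking the current figure as about 7.9 billion and the dates as round numbers, to compute the annualised growth rate implied by each interval:

    # Annualised growth rates implied by the population milestones
    # cited above (dates and figures are approximate).
    milestones = [(1800, 1e9), (1925, 2e9), (2023, 7.9e9)]

    for (y0, p0), (y1, p1) in zip(milestones, milestones[1:]):
        rate = (p1 / p0) ** (1 / (y1 - y0)) - 1
        print(f"{y0}-{y1}: roughly {rate:.2%} per year")

Run as written, this suggests growth of roughly 0.6% per year between 1800 and 1925, and roughly 1.4% per year since – the pace of growth itself more than doubled.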

Although electricity became practically usable in the 1820s, largely thanks to the work of Michael Faraday, it was not widely available to vast numbers of people until after World War 2. Even today, there are over 1 billion people in the world without access to electricity, although developments in ‘Free Energy’ might soon change that. So, apart from the last 70 years or so, the entirety of human history (aside from minor uses in ancient Sumer, Greece, etc.) has been lived without electricity – without any understanding of it, or any desire to avail of it.

Likewise, the motorised vehicle, gas appliances, street lighting, metalled roads, advanced medicine and most practical sciences are all recent innovations that humans did without for millennia. Most would agree that these innovations have been a huge benefit to humanity, even though many have had a detrimental effect on the wider environment. However, most of the innovations of the last 200 or so years arrived fairly gradually in comparison to recent developments.

Along with faster, but still gradual, technological change there has been huge but equally gradual social change. The most dramatic shift has been the acceptance of women as the social and intellectual equals of men, although this has taken considerable time and is arguably still incomplete.

Much of the ‘progress’ we have seen in the 20th century has been of great benefit – in medicine, for example – but some of it is rather questionable. Two of the most suspect areas are agriculture and computer technology. The agricultural innovation of the ‘Green Revolution’, which began in the 1950s, has yielded as many problems as benefits and is now proving catastrophic for biodiversity, despite the increased productivity of human food production.

Computer technology has also yielded mixed results – computational ability far beyond that of any individual human is a great benefit, but one must also remember that computers have eliminated entire industries and created alienation, unemployment, addiction and a host of other social ills that we still do not fully appreciate or comprehend.

Alvin Toffler, in his 1970 book Future Shock, predicted many of the problems that have since come to pass as a result of the vast acceleration in the pace of change. These changes have been social as well as technological, and have at times stressed the very fabric of human society to near breaking point. As a result, we now live in an increasingly fragmented world, where human experience and perception are perhaps more diverse than they have ever been. While diversity is generally a good thing, extremes and myriad deviations can cause loss of continuity and cohesion, and the destruction of formerly homogeneous societies.

What we see now in the world is a vast array of experiences – ranging from the non-technological indigenous person, who has no experience of modern lifestyles, to the techno-savvy, socially aware city or town dweller, ready to embrace a transhumanist future. These two extremes are worlds apart, and most people fit neither, falling somewhere between the two in many or all areas of their lives – humans are not simple beings; we are complex, nuanced and at times contradictory.

Alongside the vastly accelerated technological changes of the 21st century, we are seeing social changes that, although somewhat slower, mirror the technological shift – perhaps, in part, as an attempt to adapt to the technological norms increasingly imposed upon humanity.

The concept of the Overton Window, developed by the late Joseph Overton at the end of the last century, has become very popular in sociology and politics, and increasingly so in popular culture. The basic idea is that the range of acceptable ideas shifts over time: what is acceptable now differs from what was acceptable in the past, and ideas beyond the bounds of what the public currently considers reasonable will not be accepted. Gladiatorial games were once normal and acceptable to Roman citizens, but in our time such a thing would be socially and politically anathema.

While it is clearly a good thing that slavery, the oppression of women, widespread violence, racism, etc. are generally no longer acceptable, I question whether some of the social changes we are now seeing fall into the category of ‘beneficial to humanity’. Where the pace of change accelerates beyond what the human mind can easily adapt to, we end up in unknown territory.

While change is inevitable and one cannot hope to prevent the development or evolution of humanity, it is perhaps wise to question the direction we are currently taking. Is an over-reliance on technology really a benefit? A simple example is the electronic calculator, which became fairly common in the 1970s and was present in the bags of most western schoolchildren by the end of the 1980s. As a result of this small innovation, many people now have difficulty performing simple arithmetic, and slightly more advanced arithmetic (such as long division) is pretty much impossible for some. This is only one small example of mental laziness creeping in through the replacement of thought with a technological aid; today we face the prospect of a multitude of tasks being undertaken by machines in a ‘labour saving’ exercise. In truth, mental effort and exertion are both necessary and beneficial for humans – without use, both our bodies and our brains deteriorate and atrophy.
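
To make concrete what that ‘slightly more advanced arithmetic’ actually involves, here is a minimal Python sketch of the schoolbook long-division procedure – the digit-by-digit routine the calculator replaced. The function name and test values are purely illustrative:

    def long_division(dividend: int, divisor: int) -> tuple[int, int]:
        """Schoolbook long division, one digit at a time."""
        quotient, remainder = 0, 0
        for digit in str(dividend):
            remainder = remainder * 10 + int(digit)  # bring down the next digit
            q_digit = remainder // divisor           # how many times does it go in?
            remainder -= q_digit * divisor           # subtract, carry the rest
            quotient = quotient * 10 + q_digit
        return quotient, remainder

    print(long_division(7561, 24))  # (315, 1), i.e. 7561 = 24 * 315 + 1

Each pass of the loop is one step a schoolchild once performed mentally; offloading all of them to a device is precisely the convenience, and the loss, described above.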

In the space of 70 years we have automated and electrified human existence, making many everyday tasks easier, eliminating much of the need for back-breaking work and creating new areas of endeavour that were science fiction, or simply unthought of, before WW2. In a few decades the internet has grown from its humble academic beginnings (ARPANET and, in the UK, JANET) into a worldwide information exchange system. Far beyond its original purpose of information sharing, the Internet – as the ‘Internet of Things’ – has become the ultimate information monitoring, sharing and distribution system. If your home is a ‘smart home’, technology companies have access to unprecedented information, such as how often your toilets are used and how much milk is in your fridge – not to mention all the data collected through phones, tablets, laptops, smart speakers such as Alexa, smart TVs and conventional desktop computers.
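
As a purely hypothetical illustration of how mundane that household data can be, consider the kind of status report a connected fridge might send to its vendor. Every field name and value below is invented for the sake of the example; no real product’s interface is being described:

    import json
    from datetime import datetime, timezone

    # Hypothetical telemetry from a 'smart' fridge – all fields invented.
    smart_fridge_report = {
        "device_id": "fridge-0042",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "door_openings_last_24h": 19,
        "milk_level_percent": 35,
        "internal_temp_c": 4.1,
    }

    # In a real deployment this would be posted to the vendor's servers,
    # with the household having little say over where it travels next.
    print(json.dumps(smart_fridge_report, indent=2))

Individually trivial readings like these, collected continuously and across every device in a home, add up to exactly the unprecedented picture of private life described above.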

Seemingly the next step is the integration of human thought processes with this technology. In the past, Human-Computer Interaction (HCI) was about screen design and external devices (peripherals), but it has now progressed to direct interfacing with the body (phones and wearables such as the Fitbit) and even with the human brain (Elon Musk’s Neuralink, for instance).

It is clear that robots are set to become part of everyday human life, in the same way that computerized devices have become omnipresent. The prospect of computerized human enhancements and prosthetic devices is also within reach – the main obstacle is not the technology but human resistance to such a major sea change. We can now choose to change our gender, aided by hormones and surgery. Perhaps in the future we will be able to change our ‘species’ and become cyborgs – part machine.

Returning to the Overton Window – much of our film, television, documentary, journalistic and ‘woke’ cultural output seems to be conditioning us, through what some call ‘social engineering’, to accept a future of unlimited possibilities, including machine-human integration that is a total departure from 99.9% of historical human experience. Unacceptable right now, these concepts could become normalized in the very near future.

Transhumanism is a concept that has been around for a long time – about 100 years or so. Aldous Huxley wrote about it in his 1932 novel Brave New World; although this was a fictional warning, Huxley knew plenty about transhumanist philosophy through his brother Julian, who was an eminent supporter of it. Aldous Huxley later discussed his fears for the future at length in a 1958 television interview (still available via YouTube), a rather grim prediction of what has since come to pass and may yet come to pass very soon.

To some, my concerns about the direction of human progress may seem far-fetched and nonsensical, but considering how far we have come in such a short time, I believe a modicum of caution and restraint would be wise. Once we let the AI genie out of the bottle there is no putting it back. The benefits of AI as a ‘stand-alone’ tool are obvious, but we are increasingly looking at the integration and networking of AI technologies with human activities and human consciousness. Once we integrate into a network we can never be alone again, and can never have true privacy again, unless we remove ourselves from that network. If permanent physical devices (prosthetics and implants) are used to connect us to each other and to increasingly sophisticated computer networks, the option of disconnection is removed – effectively, there will be no off switch.

Already it is hard to disappear. Even if we switch off our wifi, our phones and all surveillance-capable electrical devices, we can still be monitored by CCTV, electronic purchases and other people’s devices. Some might say ‘if you have nothing to hide, you have nothing to worry about’, but that is not the point. Until recently, humans had the choice of whether or not to share their time, space, energy, words and thoughts with others – in short, privacy was easily obtained. In the current era, true privacy is increasingly difficult to find – perhaps in a forest or on a deserted beach, but otherwise it is in increasingly short supply.

One could argue that this current period is simply an anomaly and that technology will continue to be the tool it was always meant to be. Certainly some people are now desensitized to social media and electronic devices, and no longer make them the centre of their existence. I am not advocating a de-technologized future; what I am saying is that when ‘progress’ ceases to be a choice, we are in trouble. Innovation is part of human existence, but until now the acceptance and integration of technological innovation has always been optional.

My fear is that technological obsession will not be an anomaly but will become the new, integrated future of humanity, one with no ‘opt-out’. I do not like the prospect of a world in which I cannot choose to perform multiplication in my head rather than on my electronic calculator. What AI, and network integration between computers and human neural processes, offers is the ability to stop thinking for ourselves. More frightening still is the possibility that AI will pre-emptively think for us, predictively and far faster than we can analyse and decide ourselves. Self-reliance is a key skill of a successful human, and societies that have relied too heavily on technology or on other actors (machines, servants or slaves) have often deteriorated and failed in the long term.

We are at a pivotal stage in human history – whether we choose to truly embrace the machine and AI, or encourage and develop our own faculties, is key to how the rest of this century may unfold. Indeed, what we choose to do in the next decade or so might change the future direction of humanity as fundamentally as the invention of the wheel.

Luke Eastwood is a writer, graphic designer and horticulturist; he holds a BSc (Hons) in Business Computing Systems from City University, London, and continues to use computer technology for both work and pleasure. You can read more of his work at lukeeastwood.com.