Giant humanoids and automata

Today I had the honour of delivering a lecture as part of the Associate of King’s College (AKC) series. The AKC has its origins in the earliest days of King’s, almost two centuries ago, and continues today as a cross-disciplinary programme; this year there are around 5,000 participants, including staff. It was a busy room (though most of the 5,000 watch online), and I felt very aware of the history behind the series while speaking. This post is not intended to summarise the entire lecture; it is a quick reflection prompted by a question posed by one of the students afterwards.

The lecture itself, titled Rethinking Human Learning in the Age of AI, was structured as a journey through time and across cultures. I wanted to draw attention to the long history of machines, automata and tools designed to work alongside us or, at times, to imitate us. Alongside this, I wanted to acknowledge current concerns about cognitive offloading, over-reliance on AI, and the anxiety that students (and others) may be outsourcing thinking itself. Rather than focus solely on the present moment, I wanted to show that many of these concerns are not new. They have deep roots in myth, invention and cultural imagination.

I began by considering why humans have been drawn to making machines that act or look like us. First, Talos, the bronze giant forged by Hephaestus to guard Crete: an always-on, 24/7 sentinel, ever watchful, apparently sentient, yet bound to servitude. Despite his scale, he was defeated by Jason (you can see how, around six minutes into the video below). The question I raised was: why build a giant in human form to defend an island when there might have been other, more efficient forms of defence? What are the hidden consequences of a defender shaped like a human? And do we not, in fact, feel sympathy for Talos when he dies?

The second example was the story of Yan Shi, artificer to King Mu around 3000 years ago, who constructed a singing and dancing automaton. The figure was so lifelike that it provoked admiration, but when it began to flirt with women in the court, the king’s admiration turned to fear and fury. Yan Shi had to dismantle it to reveal its workings and save his own life. The story anticipates Masahiro Mori’s ‘Uncanny Valley’ effect. The discomfort arises not simply from human likeness, but from behaviour that unsettles what we assume about intention, autonomy and control.

The third example was the tradition of Karakuri puppets, dating from seventeenth-century Japan, whose fluid, human-like movements still evoke fascination. As with Bunraku puppets (life-size theatrical puppets), we know they are not real, yet we are drawn to the artistry and precision. There is both enchantment and a kind of deception. The craftsmanship invites admiration, but it also encourages us to question what lies beneath the surface.

With all these examples I suggested that our enchantment with lifelike machines can be both captivating and disarming. In each case the machine is inspired by human design, and each in its own way astounds and captivates. But Talos, despite his size, had a single point of vulnerability. Yan Shi’s automaton ultimately profoundly disturbed its audience. The Karakuri mechanical dolls delight and amaze for the most part but, like Talos and Yan Shi’s automaton, challenge us to ‘look under the hood’, both to see how they work and to uncover their frailties. My point, when I eventually got to it, was that the natural language exchanges and fluent outputs of some modern AI tools can similarly enchant us and lead us to assume capabilities they do not have. We need, within our own capabilities, to look under the hood.

I went on from there… you can see the whole journey reflected in the image below, which I presented as an advance organiser (try to ignore the flags, they spoil it a bit):

After the lecture, a student asked what I thought was an excellent question: why are modern robots, especially those that intersect with novel AI tools, and representations of them in popular culture, so frequently humanoid? Why, given that the most effective machines are those designed for highly specific functions, are we drawn to building robots in our own image? We talked about how a Roomba vacuum cleaner is far more practical than a five-foot humanoid robot pushing around a standard vacuum, but in the popular imagination the latter remains the imagined domestic help of the future, à la Rosie from The Jetsons.

Industrial robots in car plants are arms, not bodies, because arms are what are required to achieve the task. So why does the humanoid form persist so strongly in imagination and development?

I replied, almost instinctively, that one of the reasons we return to the humanoid shape is a lack of imagination. Even among the highly skilled, it is difficult to escape the pull of the human form. We continue to project ourselves onto our technologies, even when the task at hand requires something entirely different. This is not to say that humanoid robots are always misguided. Sometimes there is a clear functional rationale. But in many cases the fascination with the human shape seems to outweigh the practical benefits. We see this in videos of humanoid robots attempting to play football, really, really badly, and yet we persist.

I mentioned an example I saw recently at the New Scientist Festival: a robotic elderly human head, connected to a large language model, with articulated features. It was being trialled in care homes. I found this compelling because it was not the typical youthful, idealised (and so typically female, which raises other disturbing assumptions, I have to say) robot form that popular technology tends to prioritise. It was designed because there was a specific need: to support human interaction where human presence was limited. It was based on the developers’ own research evidence that residents responded better to a human face than to a screen or a disembodied voice. It did not need a body to fulfil that role, but it did need the head. Problem > design > research > testing > honing. It contrasts dramatically with the ‘let’s make a robot that takes us into the uncanny valley and out the other side!’ approach.

What do you think?

Incidentally, I do not know why I included the word automata in my lecture, as I failed (as I always do) to say it properly.