Technology advances at an unprecedented pace, and early forms of Artificial General Intelligence (AGI) will soon become part of our daily routine.
We don’t know when or in what form this disruptive breakthrough will reach us, but it’s essential to get ready and start outlining the principles on which we want to build our future.
For many of us, the initial reaction to AI is fear and skepticism. I remember proposing to my colleagues that we use Rytr, a precursor to ChatGPT, and the response was “Are you trying to replace us all?”.
As bots become part of our lives, various challenges will emerge: how can we foster trust between humans and machines? How can we design robots that fit human needs? How can we avoid focusing on functionality alone, rather than on how we perceive the machines and how they make us feel?
Fortunately, research in Human-Robot Interaction (HRI) and contributions from experts in technology ethics can guide us. Let’s delve into their findings, so we can try to develop some cornerstones for building our high-tech companions.
My Roomba is Rambo
Back in 2007, Georgia Tech researchers published the paper "My Roomba is Rambo".
It explores the relationships people develop with their autonomous vacuum cleaners and, surprisingly, offers profound insights into the human capacity to form attachments with robots.
People seem to have a tendency to anthropomorphise their machines, calling them by nicknames and even treating them as family members.
It is not uncommon for owners to tidy up the house beforehand to make the robot's work easier, or to express affection towards it as if it were a pet.
The reasons behind such behaviour lie in the intimacy of the relationship forged with a device that's not just visibly present but actively engaged in our daily lives.
The ability of these robots to navigate our living spaces and operate with a degree of autonomy lends them an aura of having their own will and personality.
We also have a clear understanding of the Roomba’s limitations: it's a machine whose actions we can comprehend and monitor.
Following this path, an innovative design solution for these cleaners was the introduction of companion apps that let owners control them remotely, check their activity, and see which areas they are cleaning.
Coded Bias
The 2018 findings of then-MIT researcher Joy Buolamwini (whose story also became the Netflix documentary “Coded Bias”) also come to our aid.
During her studies, she noticed problems with certain facial recognition software (from IBM and Microsoft), which had difficulty recognising the faces of black people, particularly black women.
Why? According to the researcher, the cause was a problem in the datasets used to train the models, combined with the fact that the development teams, made up largely of white male engineers from Silicon Valley, failed to spot the bias.
Her work has had a strong impact on the world of technology ethics and AI regulation, so much so that she was called to testify before the US Congress, and the European Union has since moved to restrict the use of this technology across its member states.
Her research leaves us with two further pieces of the puzzle of how to bring humans to trust machines.
First, it underscores the need for diverse cultural and professional backgrounds in AI development teams throughout the entire design process.
Teachers, psychologists, economists, and many other professionals are just as important as the engineers.
Second, it is yet another confirmation of how crucial it is for humans to understand the machine's intentions, rather than letting everything sit inside a black box where you see the outputs but don’t understand the process behind them.
Equilibrium
Machines will have the capability to replace us; it’s not a matter of “if”, but “when”.
This does not mean that super-intelligent mechatronic creatures will be unleashed on the market: society does not consider itself sufficiently ready (technologically “mature”) to be completely replaced, and probably never will.
For this reason, a key element will be designs that increase human capability through human-robot collaboration teams.
A joint study between researchers in the Netherlands and the United States is based on the concept of longitudinal trust: the relationship that is built over time within a team.
According to the researchers, it is essential to reach a point of relationship equity, where humans are neither led to disuse the machine out of a lack of trust, nor to depend on it excessively.
Trust towards a machine, as between humans, is seen as a dynamic relationship that requires adjustment and mutual learning.
The focus is on how positive past experiences can build up a reserve of goodwill to work in synergy with the robot.
This is achieved by providing explanations and justifications for the machine's actions or failures, and through its ability to adapt to human feedback.
Our designs should aim to improve team performance, with the machine integrating frictionlessly into the workflow.
Our Design Goal
When designing a product, our priority should not be the most efficient machine possible, but one whose attitudes and decisions we can understand.
In a technology-driven world, we are inclined to think that the better the technology, the more consumers will be inclined to use it.
The reality is that human needs go beyond mere optimisation.
A goal for the near future will be to find ways of creating emotional bonds, generating trustworthy relationships, and giving humanity a reason to choose one machine over another (or not to use machines at all).
In summary, technology should not be pursued as an end in itself.
It will be crucial to build machines that behave intuitively and allow us to create long-lasting relationships, based on collaboration rather than subordination, where both actors are willing to improve according to the feedback they receive.
A collaborative design for artificial intelligences does not mean limiting technological capabilities.
On the contrary, the “new meaning” that must guide us is about giving them the possibility of enriching and enhancing human existence.
Machines will not be here to replace our lives, but to extend the value of human life.