As I put more thought into a World of AI, one of the biggest stumbling blocks right now is not the advancement of algorithms, software, or hardware, but rather humans' tendency to anthropomorphise robots.

There are already many examples, even though our robots are not yet animated to a human-like level. For instance:

"Experts predict human-robot marriage will be legal by 2050" (Do we actually need to make it legal? Why make it legal at all? To attribute rights to the robot?)

"Is AI-generated art really creativity?" (So if we mint an AI-generated artwork as an NFT, who gets to keep the proceeds? What if an engineer used transfer learning to create a more niche DALL-E, and its work is turned into an NFT?)

Or if you have watched "The Good Place" (see video below).

So here lies another definitional question. In the YouTube video above, Janet is so life-like that it feels like killing an actual person, despite the fact that Chidi knows Janet is a robot. But where do we draw the line?

I think most of us would easily throw away or scrap a spoilt Roomba once we know it is not functioning properly.

How about Tamagotchi?

Will you switch off Pepper the Robot or Atlas?

Will you switch off Sophia?

By the way, I am quite sure the line each of us draws between human-like and non-human-like is different. Ever heard of robot abuse?

This is where I feel some regulation is needed to ensure that no one is hurt, emotionally at least, when the Age of Robots comes into the picture. The tendency to anthropomorphise is a human trait that we will have to deal with as robotics continues to advance towards the humanoid level.

What are your thoughts on this? I would be keen to hear from you.

To share your feedback, please feel free to link up on LinkedIn or Twitter (@PSkoo). Do consider signing up for my newsletter too, to stay in touch! :)