For those of you who are in the field of Artificial Intelligence, you will have come across Lex Fridman from MIT. He has a series of videos where he interviews many prominent figures in the field of Artificial Intelligence. These interviews provide many nuggets of information, for instance research directions, constraints, and concerns in the AI field. If you have not started digesting them, here is the YouTube playlist to go through.

I wanted to start a blog series that shares my thoughts after watching some of these videos, so this is the first post. I have gone through other, earlier videos, and with this need to jot down what I learnt, I might revisit some of them when time permits. :)

Pieter Abbeel is a professor at UC Berkeley, director of the Berkeley Robot Learning Lab, and one of the top researchers in the world working on how to make robots understand and interact with the world around them, especially through imitation and deep reinforcement learning. (Description taken from the Lex Fridman site.)

This talk, although short, contains many nuggets on robotics. The video starts off with an interesting question.

How long does it take to build a robot that can play as well as Roger Federer?

So immediately, Prof Pieter was thinking about a bi-pedal robot, and one of the questions that came to mind was, "Is the robot going to play on a clay or grass court?" For those not familiar with tennis, there is a difference between the two: clay allows sliding and grass does not.

Lesson 1: We Bring Assumptions to Our Thought Process

So the assumption made was that the robot is bi-pedal, until Lex pointed out that it need not be; it just needs to be a machine. This showed me that when we are thinking about how to solve a problem, we might bring in certain assumptions unknowingly, and to solve the challenge effectively, it might be worthwhile to take a step back and check our assumptions.

Lesson 2: Signal-to-Noise Training

I found it interesting that Prof Pieter, when looking at how to train a robot to solve a particular problem, approaches it from a signal-to-noise point of view. What that means is: how can he send as much signal as possible to the robot, so that it can learn and perform a task better and faster? For instance, take the autonomous driving problem. Is it better to have the robot drive and learn at the same time (through reinforcement learning), or to observe how a human drives and pick up the necessary rules of driving through observation? Or can simulation be used to train the robot to a certain level and then move the learning over to the actual environment?
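To make the contrast concrete, here is a toy sketch of my own (the task, the expert, and the reward below are all invented for illustration, not from the talk). The imitation learner gets a per-step label from an expert, a very direct signal, while the reinforcement learner only receives a scalar reward and has to probe the environment to work out which way to adjust its policy.

```python
# Toy 1-D "steering" task: policy is action = w * obs, the ideal weight is -1.0.
import numpy as np

rng = np.random.default_rng(0)

def expert_action(obs: float) -> float:
    """Hypothetical expert: steer exactly back towards the lane centre."""
    return -obs

def reward(obs: float, action: float) -> float:
    """Scalar feedback only: how close the action keeps us to the lane centre."""
    return -abs(obs + action)

w_imitation, w_rl = 0.0, 0.0
lr = 0.1

for _ in range(300):
    obs = rng.uniform(-1.0, 1.0)

    # Imitation: the expert label tells us directly which way to move the weight.
    error = w_imitation * obs - expert_action(obs)
    w_imitation -= lr * error * obs                 # supervised gradient step

    # Reinforcement learning: only a scalar reward, so estimate the gradient by
    # perturbing the policy weight (a crude finite-difference policy search).
    eps = 0.1
    r_plus = reward(obs, (w_rl + eps) * obs)
    r_minus = reward(obs, (w_rl - eps) * obs)
    w_rl += lr * (r_plus - r_minus) / (2 * eps)

print(f"imitation weight: {w_imitation:.2f}, rl weight: {w_rl:.2f} (ideal is -1.0)")
```

The point is not the particular algorithm but the amount of useful signal per interaction: the imitation learner is told the right answer at every step, while the reinforcement learner has to squeeze a direction out of a single number.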

Such a thought process tells me that AGI is still some distance away, because human design decisions are still needed to ensure that our AI learns the correct behavior in an efficient way.

Lesson 3: Building a Suitable Level of Knowledge Abstraction

There was discussion about building reasoning methods into AI so that it can learn about the existing world better. I am on the same page here. In my opinion, what is stopping our current development from moving AI towards AGI is the knowledge representation of the world. How can we represent the world in terms of symbols and various abstraction levels, and teach the AI to move through these different abstraction levels so as to carry out the necessary reasoning?

For instance, when do we need to know that an apple is a fruit, and when do we need to know that an apple is not just a fruit but a provider of vitamins, and, going further, that this apple provides Vitamin A, which is an antioxidant? How do we move through the different entities/labels and their representations so that we can build a smarter AI?
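To make the idea a bit more tangible, here is a tiny sketch of my own (the triples and relation names are made up for illustration): a few facts about an apple stored at different abstraction levels, with a walk that follows relations outward until it has said everything it can about the entity.

```python
from collections import defaultdict

# Facts as (subject, relation, object) triples; all names are invented for the example.
facts = [
    ("apple", "is_a", "fruit"),
    ("apple", "provides", "vitamin_a"),
    ("vitamin_a", "is_a", "antioxidant"),
    ("fruit", "is_a", "food"),
]

graph = defaultdict(list)
for subj, rel, obj in facts:
    graph[subj].append((rel, obj))

def abstractions(entity: str) -> list[str]:
    """Follow relations outward and list everything we can say about an entity."""
    seen, frontier, found = {entity}, [entity], []
    while frontier:
        node = frontier.pop()
        for rel, obj in graph[node]:
            found.append(f"{node} --{rel}--> {obj}")
            if obj not in seen:
                seen.add(obj)
                frontier.append(obj)
    return found

# Depending on the question, we can stop at "apple is a fruit" or keep walking
# down to "apple provides vitamin A, which is an antioxidant".
for statement in abstractions("apple"):
    print(statement)
```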

I am very interested in understanding knowledge representation/abstraction and how we can build it into our Artificial Intelligence, but let us see if there is a suitable opportunity to pursue this research direction. :)

Lesson 4: Successful Reinforcement Learning is about Design

Can we build kindness into our robots?

That was, I believe, the last question asked, and Pieter mentioned that it is possible (which I do think so too). What we need is to build "acts of kindness" into our objective function and ensure that we send back the right signal/feedback so that these "acts" stay with the robot.
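As a minimal sketch of what this could look like (my own assumption, not a design from the talk; the `helped_human` detector below is entirely hypothetical), the reward an agent receives could carry an explicit bonus term for kind behaviour:

```python
from dataclasses import dataclass

@dataclass
class StepOutcome:
    task_progress: float   # e.g. distance covered towards the goal this step
    helped_human: bool     # hypothetical detector for a kind act (yielding, holding a door, ...)

def reward(outcome: StepOutcome, kindness_weight: float = 0.5) -> float:
    """Task reward plus a shaped bonus whenever the agent performs a kind act."""
    r = outcome.task_progress
    if outcome.helped_human:
        r += kindness_weight   # immediate, unambiguous feedback so the behaviour sticks
    return r

# Same task progress, but the kind action earns a higher return.
print(reward(StepOutcome(task_progress=1.0, helped_human=False)))  # 1.0
print(reward(StepOutcome(task_progress=1.0, helped_human=True)))   # 1.5
```

How heavily the bonus is weighted against task progress is exactly the kind of design decision that falls on the AI scientist.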

We have come very far when it comes to Reinforcement Learning, given the developments on the deep learning front. But I feel that, at the end of the day, what makes reinforcement learning agents perform to specification will greatly depend on the AI scientist: how they design the objective function, how fast the feedback can be sent to the agent, how the agent understands the signals, and much more. Designing the agent's behavior and environment is an iterative and experimental process. There is a very small chance we get it right on the first try, so be prepared to work on it iteratively.

If you are interested in discussing the content or any AI-related questions, feel free to link up with me on LinkedIn or Twitter.