Video Interview below:

Stuart Russell is a professor of computer science at UC Berkeley and a co-author of the book that introduced me and millions of other people to AI, called Artificial Intelligence: A Modern Approach. ~Description from Lex Fridman website

Lesson 1: Meta-reasoning

The interview got off to a great start, talking about meta-reasoning: reasoning about reasoning methods. In other words, how do we cut down the solution space quickly so that the optimal solution can be found in the shortest time possible?

The context for the discussion was building winning chess software, and alpha-beta search was mentioned. In chess, the solution space is well-defined and the environment is deterministic (it does not change on its own), so finding a strong move through search is comparatively straightforward.
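To make the alpha-beta idea concrete, here is a minimal sketch of alpha-beta pruning on a toy game tree. This is my own illustration, not code from the interview; the `children`, `is_terminal` and `evaluate` methods are placeholders for a real game implementation.

```python
# A minimal sketch of alpha-beta pruning (my own illustration, not from
# the interview). `node` is assumed to expose children(), is_terminal()
# and evaluate(); these are placeholders for a real chess engine.

def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping branches that cannot
    change the final decision (the meta-reasoning idea: reason about
    which sub-trees are not worth searching)."""
    if depth == 0 or node.is_terminal():
        return node.evaluate()          # static evaluation of the position

    if maximizing:
        value = float("-inf")
        for child in node.children():
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:           # opponent will never allow this line
                break                   # prune the remaining siblings
        return value
    else:
        value = float("inf")
        for child in node.children():
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value
```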

During the discussion, Prof Stuart mentioned that AlphaGo was very good because it was able to do two-stage reasoning:

1st stage - Being able to "intuit" the next move, meaning it has an intuition about the moves available at the depth-one level. It is able to quickly assess which moves are good and which are bad.

It is kind of like the System 1 thinking described in Daniel Kahneman's book, "Thinking, Fast and Slow". My initial suspicion is that we may be able to build this "intuition" into an AI through many training rounds (i.e. giving it a lot of experience). The more training, the sharper the intuition.

2nd stage - After the intuition, it is able to evaluate many moves ahead, more than a human can handle, to finally determine which move to pursue, as it is able to calculate the probability of success.

The example given in the video interview was this (I found it superbly useful for understanding the two stages). Let us say there are two possible moves in a chess game: Move A and Move B.

At the first stage, intuition (sharpened by experience) may say that Move A is very likely to end in a draw. Move B, on the other hand, has some chance of winning, but the intuition alone cannot say how much. What happens is that the evaluation of Move A stops at Stage 1, whereas Move B moves on to Stage 2, where the probability of winning is calculated more accurately.
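Here is a toy sketch of that two-stage idea, again my own illustration rather than anything from the interview: a cheap "intuition" score decides which moves deserve the more expensive deeper search. `quick_score` and `deep_search` are assumed stand-ins for a learned policy/value estimate and a tree search.

```python
# Toy sketch of the two-stage idea (my own, not AlphaGo's actual design).
# quick_score(position, move) -> (estimated value, confidence in [0, 1])
# deep_search(position, move) -> refined value from a deeper search

def choose_move(position, moves, quick_score, deep_search,
                confident_margin=0.9):
    best_move, best_value = None, float("-inf")
    for move in moves:
        # Stage 1: fast, experience-based estimate (System-1 style).
        estimate, confidence = quick_score(position, move)
        if confidence >= confident_margin:
            value = estimate          # intuition is sure; stop here (Move A)
        else:
            # Stage 2: spend compute only on uncertain moves (Move B),
            # searching many plies ahead to refine the estimate.
            value = deep_search(position, move)
        if value > best_value:
            best_move, best_value = move, value
    return best_move
```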

Lesson 2: Machine Intelligence Has Growth Spurts?

Very briefly, continuing the discussion in the context of chess, Prof Stuart mentioned that machine intelligence seems to enjoy spurts of growth, and he quoted Garry Kasparov, who said of his matches against Deep Blue, "It was a different machine."

This point needs further research to understand what it means and what its implications are.

Lesson 3: AI research methods

This was another brief discussion, but it is of interest to me. When researching how to build better Artificial Intelligence, the work is a cycle: relax one constraint, find the possible solutions under the new assumptions, then go back and relax another constraint.

I find this interesting because there needs to be a thought process behind which constraint to remove, how far to relax it, and so on. Only with a good thought process and a well-managed research process can we have better success in building Artificial Intelligence.

Lesson 4: Self-Driving Car and AI Winter

Self-driving cars have been around since 1987, but most of the success came when the car was on the freeway (or expressway). Prof Stuart did mention that what has made self-driving cars more promising (though not yet viable) these days is that the car's perception has improved: it is more reliable at detecting humans and tracking objects.

But one must note that, for all our amazement, what we see are demonstrations where the weather is bright and sunny, and we know that perception deteriorates drastically in bad weather. The point is that we should not over-hype the current capabilities of AI. There are still significant challenges to overcome before we get actual self-driving cars.

Lesson 5: Why Did GOFAI Fail?

Here is a quick idea of what GOFAI is (it stands for Good Old-Fashioned AI). GOFAI is essentially symbolic AI, where the question of what action to take is reasoned out through rules and statements.

Prof Stuart mentioned that symbolic AI can easily fail when it does not contain rules to handle unusual scenarios. For example, suppose a self-driving car detects a line of ducks a few metres ahead. It will not know what to do if no rules were built to deal with ducks.

One can say we can simply build more rules to cater for this, but the scenarios to plan for are numerous because we want to handle each of them differently. For instance, how should the self-driving car react if the ducks are standing on the road, versus a scenario where the ducks are flying low (and how low is low?)? Moreover, the more rules we build, the more computational power we need to run through them, checking which conditions are true and triggering the relevant actions. A toy illustration of this brittleness is sketched below.
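The sketch is entirely my own: a rule table of condition-action pairs works only for the scenarios someone thought to write down, and anything else falls through to a default.

```python
# Toy illustration (mine, not from the interview) of why a purely
# rule-based controller is brittle: every situation needs an explicit
# rule, and anything the designers did not anticipate is unhandled.

RULES = [
    # (condition on the perceived scene, action)
    (lambda scene: scene.get("pedestrian_ahead"), "brake"),
    (lambda scene: scene.get("red_light"),        "stop"),
    (lambda scene: scene.get("ducks_on_road"),    "slow_and_wait"),
    # ...but what about ducks flying low? A cyclist swerving? Fog?
]

def decide(scene):
    for condition, action in RULES:
        if condition(scene):
            return action
    return "no_rule_matches"   # the brittle part: an unanticipated scenario

print(decide({"ducks_on_road": True}))     # -> slow_and_wait
print(decide({"ducks_flying_low": True}))  # -> no_rule_matches
```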

Lesson 6: Success in one domain does not translate to success elsewhere

Continuing the discussion: we have had much success with Artificial Intelligence in chess because chess is deterministic and its environment is well-defined. This is very different from self-driving cars where, yes, we are able to detect humans and objects better, but we may not be able to anticipate the next move of those humans or objects well. For instance, the car can detect a human on the right side of the road, but is that human going to cross the road despite there being no pedestrian crossing, or just stand there? If the human is talking on the phone (assuming the car detects the phone), does that mean he or she is not going to cross? The current level of AI is not able to anticipate the next action accurately.

Lesson 7: AI Safety - Control Theory

This part of the discussion was very engaging because I wanted to know more about AI safety and ethical usage.

The current way of training an Artificial Intelligence agent is that we provide the purpose (i.e. the objective function) to the agent, and the agent goes about determining which allowable actions (designed by researchers) it should take to optimize that objective. Prof Stuart said something along these lines: "We need to get away from building an optimizing machine and throwing in an objective (for the machine to achieve)." What happens otherwise is that the optimizing machine treats the objective function as gospel truth and tries all kinds of ways (allowed by its design) to achieve it, resulting in scenarios where the machine takes steps that humans find laughable or deplorable.

What did Prof Stuart propose? This is the thought-provoking and interesting part. The key statement is "Teach machines humility." Do not give the objective explicitly; instead, allow the machine to learn and determine the ideal objective. For instance, Machine A may propose Move X and then ask the humans whether Move X is allowable. If it is, Move X goes into the "Allowable Action List". (Note: this last example is mine, not Prof Stuart's.)
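To make my example concrete, here is a toy sketch of that "ask before acting" idea. It is my own illustration of the humility point, not Prof Stuart's proposal, and `ask_human` is obviously a stand-in for whatever feedback channel a real system would use.

```python
# Toy sketch (my own example) of an agent that defers to a human when it
# is unsure whether an action is acceptable, and remembers the answer.

allowable_actions = set()

def ask_human(action):
    """Placeholder for a real human-feedback channel."""
    answer = input(f"May I take action '{action}'? [y/n] ")
    return answer.strip().lower() == "y"

def maybe_act(action):
    if action in allowable_actions:
        return f"executing {action}"
    if ask_human(action):                # defer to human judgement
        allowable_actions.add(action)    # move it into the "Allowable Action List"
        return f"executing {action}"
    return f"refusing {action}"
```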

Prof Stuart also drew a relationship between our current AI development/research and utilitarianism, where we use the outcomes to justify the means (i.e. stating upfront the objective function that we want to reach).

My opinion is this: training an AI from NOTHING into a safe-to-use AI is possible in theory but not in practice (Prof Stuart holds the same view). The reason is that in Reinforcement Learning, the AI researchers have to define the environment (its constraints and possibilities) upfront, together with the objective function. The agent will then consider ALL possible moves allowed by the environment to reach the objective. So if we do not write down all the constraints, the computer will get creative enough to come up with unintended actions to reach the objective, because of its laser-focused attention on getting there.

It is an iterative process: state the scenarios and constraints needed so that the foreseeable non-ideal scenarios will not happen, run the simulations, and see how the agent achieves its objective. If, after multiple simulations, we see "unexpected" actions being taken, we have to build in further constraints. Check out this YouTube video where agents play hide-and-seek with each other.

If the AI researchers had stated that an object cannot be moved while an agent is on top of it, you definitely would not have gotten the scenario where agents "surf" the boxes; but how many of us would have thought of that and stated it upfront, during the design phase? A rough sketch of this patch-as-you-go loop is below.
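The sketch is mine, not from the video: each known "do not do that" becomes a constraint, violating any constraint forfeits the reward for that step, and new constraints are appended as surprising behaviour shows up in simulation. The state and action names are made up for illustration.

```python
# Rough sketch (my own) of iteratively patching constraints after
# observing unexpected agent behaviour in simulation.

constraints = [
    # added after watching agents "surf": a box cannot be moved
    # while an agent is standing on top of it
    lambda state, action: not (action == "move_box" and state.get("agent_on_box")),
]

def shaped_reward(state, action, base_reward):
    """Zero out the reward for any step that violates a known constraint."""
    if any(not allowed(state, action) for allowed in constraints):
        return 0.0
    return base_reward

# When the next round of simulations reveals a new loophole, the "fix" is
# simply one more entry in the list -- which is exactly why enumerating
# everything upfront is so hard.
constraints.append(
    lambda state, action: not (action == "move_ramp" and state.get("agent_on_ramp"))
)
```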

Lesson 8: Regulating AI

From what I gathered, Prof Stuart is for regulating AI because AI has a characteristic, scalability, that is a double-edged sword. He gave an example from pharmaceuticals: pharma has regulators to determine whether it is safe to push certain drugs out to the market. Why is that so? Besides the life-and-death issue, there is the scalability characteristic: once a drug is mass-produced and used, it can impact many people.

Similarly, for AI, once an "intelligent" agent is built it will be a feature or function used by many, so regulation is definitely needed to ensure the impact stays positive for people. Although we are still pushing the frontier of Artificial Intelligence and finding out what the possibilities are, there is a good chance to set up sound oversight and regulation, since we have defined (in many ways) the different levels of Artificial Intelligence. We can anticipate the possible scenarios, set up a first draft of regulations, and improve them along the way. This means that building strong regulations takes time, much like the pharma example, where the FDA also took time to build up its regulatory framework.

Lesson 9: Another risk: Over-reliance on Artificial Intelligence

Prof Stuart Russell has been a strong proponent of the view that there are huge risks in Artificial Intelligence and that we should start regulating it. Another risk that he pointed out, and which (in my opinion) is not getting much attention, is over-reliance on Artificial Intelligence. What risk is there? Well, throughout human history we have kept our knowledge within humans, and through humans we have built up our current technology and society. But once we decide to hand that knowledge over to AI, together with robotics, society will be built through AI rather than through humans. Humans may lose the ability to think and to integrate new knowledge so as to build a better society and technology. Humanity's progress may stop there and then. Instead of being the masters of our fate, we would have handed that power over to AI.

Lesson 10: Provably Beneficial Machine

I did a quick check, and this idea of "Provably Beneficial Machines" came out in 2017; here is the paper from Prof Stuart's website. Basically, the goal is to build machines that can be shown, on paper, to be beneficial to humankind, and in addition the approach is based on "explicit uncertainty in objectives". I have no idea what that second part means yet, so it goes into my research notes for now. Definitely another interesting concept to research further. :)

By the way, I stumbled upon another interesting interview with Prof Stuart, which you might want to read for more details on the AI safety discussion. Here it is. (Interview)

I definitely learnt a lot from this video interview and I strongly recommend that you go through it as well. If you are interested in discussing the content or any AI-related questions, feel free to link up with me on LinkedIn or Twitter (@PSkoo). Do consider signing up for my newsletter too. :)