Lex Fridman Interview - Greg Brockman
Greg Brockman is the Co-Founder and CTO of OpenAI, a research organization developing ideas in AI that lead eventually to a safe & friendly artificial general intelligence that benefits and empowers humanity. ~ From Lex Fridman's website.
Note 1: Building Beneficial Artificial Intelligence
Considering the impact on stakeholders is something I have strongly advocated all along, and it is an aspect most boot camps are sorely lacking: getting trainees to think about the impact of their work. At the start of the interview, Greg mentioned that when designing artificial intelligence, it is very important to consider its possible impact, granted that it is not easy to imagine the world with a new technology in it (think of trying to explain Uber to someone in the 1950s).
Working with a new technology, AI scientists have the advantage of setting up the initial conditions the technology is born from, which means the outcome depends greatly on their character, values and empathy. But for the technology to prosper and truly benefit mankind, a string of correct steps has to be put together, with careful planning and consideration at each step, while getting it wrong requires only a single misstep.
Greg also points out that the discussion space has been dominated by the negatives that artificial general intelligence (AGI) could bring; the discussion on how to make the technology beneficial to humankind should be expanded too.
Individual countries, because of differences in culture and history, are going to build AI that benefits their own populations, and that may create differences among countries. So how do we ensure that the benefits extend to mankind as a collective? This is where policies have to come in.
In the interview, Greg also stated that governments need to play a part, but with a more measured approach: while the technology is not yet mature, keep measuring its progress; once it is mature enough, step in with regulations on its usage.
My opinion is that for governments to set meaningful regulations, ones that let us reap the full benefits of a new technology, they need the technical knowledge to understand what is and is not possible. Unfortunately, policymaking takes a lot of effort: it requires considerable study and an understanding of the huge domains (healthcare, manpower, education, security, transport, etc.) the policy is going to affect and regulate. Thus a team should be formed where opinions and facts can be shared, so that all viewpoints are considered and blind spots are kept to a minimum.
Note 2: Computation Power Rules?
A blog post titled "The Bitter Lesson" by Richard Sutton was quoted in the interview. In it, Sutton writes:
"The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin."
The reason we have achieved so much progress in AI in recent years is the high computation power computers now have. In the interview, Lex asked Greg if that is the case. Greg agreed that the current level of development comes from high computation power paired with more data being collected, but scaling computation up is not, by itself, the key to AGI. More needs to be done, for instance research into better algorithms, and that is where small players without access to large computation power can contribute. Having said that, Greg pointed out that his mentor, Matt Welsh (I do not know whether I got the right person; please let me know if I did not), worked on projects at a small scale, and when he had the opportunity to scale them up, the results got better, and some of them were unimaginable.
An example brought up during the interview was GANs (generative adversarial networks). Lex mentioned that GANs did not produce accurate results at a small scale, but once scaled up, they presented very good results.
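To make the idea concrete, below is a minimal sketch of the GAN setup in PyTorch. This is my own illustration, not something from the interview: a generator learns to produce samples that a discriminator cannot tell apart from real data, and it is this adversarial game that started producing striking results once models, data and compute were scaled up.

```python
# Minimal GAN sketch in PyTorch (illustrative only; network sizes, learning
# rates and the flattened 28x28 "image" shape are arbitrary assumptions).
import torch
import torch.nn as nn

latent_dim = 64

# Generator: maps random noise to a fake sample
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

# Discriminator: outputs a logit for "this sample is real"
D = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    batch_size = real_batch.size(0)
    fake_batch = G(torch.randn(batch_size, latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real_batch), torch.ones(batch_size, 1)) +
              loss_fn(D(fake_batch.detach()), torch.zeros(batch_size, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator say "real" on fakes
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake_batch), torch.ones(batch_size, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Smoke test with random "data", just to show one training step running
d_loss, g_loss = train_step(torch.randn(32, 784).tanh())
print(f"d_loss={d_loss:.3f}, g_loss={g_loss:.3f}")
```

Nothing in this loop changes when you scale it up; the point from the interview is that the same simple game, given far bigger networks and far more compute, went from blurry samples to near-photorealistic images (e.g. BigGAN).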
Note 3: Next Step Towards AGI
So what is the next step, and how do we measure it? Greg mentioned that the next big project for OpenAI is to build reasoning into AI. Once reasoning is built in, the AI can refine and generate new ideas. My thought is: does this mean we can solve our climate crisis and cure cancer? I hope so, and I am looking forward to it. If anyone is reading this, I would like to be part of the team researching how to add reasoning to AI! :)
So how do we measure whether an AI can reason? Greg pointed out a few applications: theorem proving (which I believe DeepMind is also working on; see article), writing usable and efficient computer code, and security analysis of computer code.
Author's Note
Personally, the whole interview was yet another very insightful one that I wish I had seen earlier. Having said that, it also makes me strongly believe that we might have exhausted the low-hanging fruit, namely whatever artificial narrow intelligence can do (i.e. automate at the task level). To move forward and build the next level of AI, we have to build on top of the existing, solely connectionist design and move instead to a mixture of symbolic and connectionist approaches.
As for what kind of mixture, and which other fields we need? As of this writing, I am exploring Information Theory & Knowledge Representation and will research further how to build them (if they prove helpful) into our existing architectures.
The OpenAI technologies shared in the interview are the following:
- GPT-2 - Language Modelling (see the sketch after this list)
- OpenAI Five - Multi-agent Dota 2 team
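To illustrate what "language modelling" means in practice, here is a short sketch that samples text from the released GPT-2 weights. The choice of the Hugging Face transformers library is my own, for brevity (the interview discusses only the model itself); everything the model does reduces to predicting the next token, appending it, and repeating.

```python
# Sampling from GPT-2 with the Hugging Face `transformers` library
# (my own minimal illustration; not OpenAI's original TensorFlow release).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # smallest released variant
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Artificial general intelligence will"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate a continuation token by token; top-k sampling adds variety
output_ids = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```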
If you would like to discuss the topic of AGI, feel free to tweet me or add me on LinkedIn. To keep up to date with my learning and sharing, consider subscribing to my newsletter below. Each subscription is a vote of confidence in my work.