In my newsletter, I wrote a quick thought on a statement that many prominent figures in AI (including philosophers, apparently) have voiced support for and signed. The statement is as follows:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

For those who did not read my newsletter but stumbled upon this post, I would like to make two points from the newsletter before we continue further.

1) I strongly believe, and am advocating, that we manage the risks that come with AI right now! The Center for AI Safety actually lists the possible risks.

2) My definition of existential risk, or risk of extinction, is something that, if not managed, can wipe out humanity very quickly, much like the Snap by Thanos. (If you do not know the reference, I suggest Googling it or watching "Avengers: Infinity War" and "Avengers: Endgame".)

I feel that the 8 risks listed by the Center for AI Safety are valid and need to be addressed; however, I would not classify them as existential risks at the moment. So to me, the statement made by these prominent figures is close to fear-mongering, and that is never going to be healthy for the growth of the industry, or for the folks who have put in the effort to join it. This viewpoint is shared by the author of this article as well.

Except for weaponisation, it is unclear how the other – still awful – risks could lead to the extinction of our species, and the burden of spelling it out is on those who claim it.

What I am advocating with regard to the statement is this: I hope those who signed it will provide more information on the existential risk they see. If it is catastrophic enough to be called an existential risk, they should share the details so that the whole of humanity is 'shocked' into action and we start managing it.

Further Thought

Yoshua Bengio and Geoffrey Hinton are luminaries in the development of AI; they brought AI to where it is now, and they should be credited for that. But to look at their legacy and assume they will continue to understand the field and be able to contribute substantially can be a "dangerous" assumption here.

From what I have read and understood so far, Geoffrey Hinton came up with Capsule Networks (2017), and you can see from arXiv that his group of researchers is very much focused on computer vision models. Yoshua Bengio came up with Generative Flow Networks (2022) (I like his thought process behind building this network). But so far, I have not seen any breakthrough applications of their work. The point I am making is that, at least based on current knowledge and publications, it seems there has not been much recent contribution from the two luminaries, and for them to claim to see existential risk... without more information from them, I have my doubts.

Again, I want to stress that I have great respect for the work they have done so far, but as of now, with no new information, I take their claim that AI is an existential risk with a pinch of salt.

Are a Financial Crisis and an AI Crisis Similar?

One of my community members on LinkedIn posted this reply to my post.

We don't need to know the exact details to call out a problem. Banking and economy is a man made construct that we have control over and even then we are unable to predict the intricacies and pitfall until it smack us in the face every single time. How many financial crisis has come and go and how many post event regulation have we tried to patch in and yet each time human finds a new way to break the economy. How much hubris do we need to have to think we can identify all the pitfalls that AI will face, avoid them and control the direction and progress of AI for it to only be a force for good.

I would like to address his reply here in my post. His reply was thought-provoking, but it rightly points out again that it is not AI that is dangerous; rather, it is the humans running the show and using the tools who are dangerous. Hence my call for regulation.

However, in my opinion, there is a difference between a financial crisis and AI. In a financial crisis, a continuous stream of human decisions causes the crisis to happen, and it takes a "long period" of time to recover. That is, humans are both the input and the algorithm, and what makes it even more complicated is that each human has their own optimization algorithm. Each individual human's output is then fed into the economic system, and the final output is generated. This happens over a period of time, which is what I mean by a continuous stream of decisions. It may sound like a recurrent neural network, but it is not really; it is much, much messier! (See the toy sketch below.)
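To make that feedback loop concrete, here is a toy sketch in Python (purely illustrative; the agent rule, the noise, and names like make_agent are all made up for this post, not a real economic model): each agent runs its own decision rule, and the aggregate of everyone's output becomes the next input to everyone.

```python
import random

def make_agent(risk_appetite):
    # Each "human" agent has its own optimization algorithm; here it
    # is just a noisy reaction to the last aggregate market signal.
    def decide(market_signal):
        return risk_appetite * market_signal + random.gauss(0, 0.1)
    return decide

agents = [make_agent(random.uniform(-1, 1)) for _ in range(100)]

signal = 1.0
for step in range(10):
    # Every individual's output is fed into the shared system, and the
    # system's output becomes the next input to every agent: the
    # "continuous stream of decisions" described above.
    decisions = [agent(signal) for agent in agents]
    signal = sum(decisions) / len(decisions)
    print(f"step {step}: aggregate signal = {signal:.3f}")
```

Even in this tiny loop, the aggregate trajectory depends on every agent's private rule interacting with everyone else's over time, which is why the real thing is so much messier than any single model.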

AI is very different. Why we worry so much about AI is not that complicated. The reason AI is "dangerous" is the uncertainty of its output, and that uncertainty comes from the algorithm, i.e. the deep learning model or multi-layer neural network. That is it! Simply put, if we can unravel the algorithm, which I admit is not going to be easy given that the parameters number in the billions these days, we will not feel that unease with the algorithm, plus we will have a better idea of how to "re-wire" it to build better models.
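As a minimal sketch of that opacity (assuming PyTorch is installed; the tiny architecture and numbers are arbitrary, not taken from any real system), even a toy multi-layer network is just a pile of weights whose individual values tell us little, and production models scale this up to billions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy multi-layer network: two linear layers with a nonlinearity.
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

# Count the learnable parameters: about 1.2k here, versus billions
# in today's large models.
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters in this toy model: {n_params}")

# The "uncertainty of output": the weights, not any human-readable
# rule, determine the answer, so inspecting them directly tells us
# very little about why the model produced it.
x = torch.randn(1, 16)
print(model(x))
```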

In conclusion, I hope the signatories provide more information on the existential risk they see, so that we can work on it together. Otherwise, the whole exercise reminds me of the story of "Chicken Little", which is not healthy for the industry and may in fact just increase the risk of Value Lock-In.

What are your thoughts?

Note: This is the long form of my newsletter. Consider subscribing to my newsletter below to get a glimpse before it gets published on my blog site. :)

Consider supporting my work by buying me a "coffee" here. :)

Please feel free to link up on LinkedIn! I also have a podcast and a YouTube channel, and will try my best to keep them updated. If you stumbled upon this post through a search engine, consider signing up for my newsletter to stay in touch!