With all the recent announcements and the accompanying hype around ChatGPT, GPT-4, and the like, many of you may be starting to think that artificial intelligence is going to take over the world.

What I would like to say, to kick off this post, is that there is no need to panic. Rather, you should take the opportunity to educate yourself on the technical workings behind the scenes. To help you start your learning journey, note that the machine learning models behind the scenes are what we call generative models. Basically, with your prompt as the input, the model generates words. How it decides the next word, in very simplistic terms, is that it chooses, based on its training data, the most probable word to appear next.
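To make that concrete, here is a toy Python sketch of the "pick the most probable next word" idea. The vocabulary and probability table below are made up for illustration; real models like GPT-4 use a large neural network over tokens, and they usually sample from the probabilities rather than always taking the top word, but the core loop is the same.

```python
# A toy "language model": for each current word, the probabilities of the next word,
# as if estimated from training data. (Made-up numbers, purely illustrative.)
next_word_probs = {
    "robots": {"kill": 0.6, "serve": 0.3, "dance": 0.1},
    "kill":   {"humans": 0.9, "time": 0.1},
    "serve":  {"humans": 0.8, "tea": 0.2},
    "humans": {"<end>": 1.0},
}

def generate(prompt_word, max_words=5):
    """Greedily pick the most probable next word until the sentence ends."""
    words = [prompt_word]
    while len(words) < max_words:
        probs = next_word_probs.get(words[-1])
        if probs is None:          # no statistics for this word, stop
            break
        next_word = max(probs, key=probs.get)  # highest-probability next word
        if next_word == "<end>":
            break
        words.append(next_word)
    return " ".join(words)

print(generate("robots"))  # -> "robots kill humans", because that's what this "training data" favours
```

Notice that the output is entirely a product of the probabilities the model learned from its training data, which is exactly the point of the example further down.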

Once you have this idea, it is easier to evaluate whether you should panic, given all the information out there. Let us work through an example.

I came across this video while trying to understand how GPT-4 works.

Don't Panic Yet!

Before you start panicking, here is what I propose for you to think about.

What kind of widely available data would you collect to train a model that writes best-selling novels?

I would collect data from openly published sci-fi blogs, probably written by aspiring sci-fi writers. Since it is fiction, it has to be written in an exciting way, right? With some mystery and suspense, and of course some excitement thrown in: "Yes! Another robots-kill-humans plot!" rather than "Yes! Let's build robots that serve humans! A very happy ending indeed! The End!".

I might also collect some of the discussions on artificial intelligence, perhaps from forums and research papers. Again, you will notice there is a lot more discussion on the "evils" and "wrongdoings" of artificial intelligence and how to mitigate them than on the "boring" areas where artificial intelligence is serving humanity very well.

With all this data being used to train generative models, it is pretty logical to say that models like GPT-4 will be great at writing a "superintelligence kills humans" plot rather than "artificial intelligence served mankind well (*yawn*)".

So really, there is no need to panic right now, in my opinion. When you come across examples of "rogue" responses, my suggestion is to always think about how, and from what kind of data, these generative models were trained... and from there decide whether you really need to panic. :)

What are your thoughts on this?

Consider supporting my work by buying me a "coffee" here. :)

If you are keen to reach out to me, you can PM me on my LinkedIn. Do consider signing up for my newsletter too. I also just started my YouTube channel; do consider subscribing to show your support! :)