Is AI Getting Dangerous?

Recently, an article/interview went viral in the industry. Rodney Brooks, former Director of CSAIL and currently a robotics researcher at MIT, was interviewed about the current state of Artificial Intelligence, especially Generative AI, driverless cars and, of course, robotics. I will share more of my thoughts as I digest the interview, in my newsletter (short form) and here (long form).

I want to introduce one perspective on AI right now, so that you understand where I am coming from when judging whether AI is dangerous today and in the near future. From there, I hope you can reach your own conclusion.

I see AI as a tool built by humans. What are tools? Throughout history, tools have been developed and deployed by humans (and some animals) to make up for competences we lack, such as killing larger animals for protection and food. These tools help us mold our environment and resources so that we can survive and live better. Calculators, computers (both hardware and software) and smartphones are tools too, computation and memory tools to be exact. They help us hold large amounts of data and information and put it together for decision making. AI is similar!

We use computation tools because, to make complex decisions, we need to hold more pieces of information and data and combine them with different operations/calculations to derive a 'rational' decision. From a very simplistic point of view, a computation tool will ALWAYS give the same output given a fixed computation and input. Extend this to AI today, which is mostly made up of machine learning models: once trained, the 'insides' of these models do not change at all, so why would the same input give a different output? Or, from another perspective, why would the output deviate for the same input? As long as we are clear about the input, we should not expect a deviation.
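To make this concrete, here is a minimal sketch in Python. The tiny network and its weights are made up purely for illustration, and I assume plain, deterministic inference with no random sampling: once the parameters are frozen after training, calling the model twice with the same input gives exactly the same output.

```python
# Minimal sketch: a tiny "trained" network with frozen, made-up weights.
# Nothing inside the model changes between calls, so the same input
# always produces the same output.
import numpy as np

# Fixed (already-trained) parameters -- they do not change at inference time.
W1 = np.array([[0.2, -0.5], [0.7, 0.1]])
b1 = np.array([0.05, -0.1])
W2 = np.array([[1.0], [-0.3]])
b2 = np.array([0.2])

def predict(x):
    """Forward pass of the fixed two-layer network: the output depends only on x."""
    hidden = np.maximum(0, x @ W1 + b1)  # ReLU activation
    return hidden @ W2 + b2

x = np.array([1.5, -0.4])
print(predict(x))  # same result...
print(predict(x))  # ...every single time, no deviation
```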

Will these AI/machine learning models "suddenly" become super-intelligent? I leave it to you to decide. :)

However, the biggest uncertainty right now is the algorithm. How so? You can imagine the AI we are using right now as a messy switchboard, where we have no idea how the input gets translated inside to become the output. This comes about because the base model for today's Artificial Intelligence is the neural network, which is infamous for its lack of clarity about how it maps input to output. As such, humans can only keep testing with different inputs and see what the resultant outputs are, and of course some of these outputs are undesirable. Just a note: I said "biggest" at the start of this paragraph, so the data used to train and "set up" the messy switchboard is also important, as it gives us some control over the algorithm's development and helps avoid undesirable outputs (as far as foreseeable). But rest assured, the AI algorithm won't be deviating anytime soon, and you won't suddenly have a machine overlord telling you to clean up your room or make your bed!
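As a rough illustration of this "keep testing" approach (every piece here is a stand-in I made up, not a real model or policy check), you can treat the model as a black box, feed it a batch of inputs, and flag any outputs that fall outside what you consider acceptable:

```python
# Rough sketch of black-box probing: run many inputs through the model,
# record the ones whose outputs we consider undesirable.
def probe_model(model, test_inputs, is_acceptable):
    """Collect (input, output) pairs where the black-box model misbehaves."""
    findings = []
    for x in test_inputs:
        y = model(x)
        if not is_acceptable(x, y):
            findings.append((x, y))  # keep for review and lesson-gathering
    return findings

# Stand-in pieces, purely for illustration:
model = lambda prompt: prompt.upper()              # placeholder for the real model
test_inputs = ["summarise this report", "clean up your room"]
is_acceptable = lambda x, y: "CLEAN UP" not in y   # placeholder policy check
print(probe_model(model, test_inputs, is_acceptable))
```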

Now, with this perspective that AI is a computation tool, let us continue our journey of deciding whether AI is dangerous. Like any tool, humans have to use it often before discovering its other use cases. During this discovery of more use cases, we will naturally hit something undesirable. That creates opportunities for us to understand the tool better and avoid the 'undesirable'. In humanity there will naturally be bad actors who want that 'undesirable'. This is where the larger community needs to come together and set up regulations and compliance to make the 'undesirable' VERY, VERY difficult to materialise.

This leads to the point I made in my previous post: we need to start the work of regulation and compliance by having a central repository of use cases, especially those that went bad. From that repository we can gather lessons, set up guidelines and policies that manage the 'undesirables' without stifling innovation, and communicate these guidelines and policies to the industry.
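What might a single entry in such a repository look like? Here is a rough sketch; the fields and the example are entirely my own assumptions, not an existing standard:

```python
# Rough sketch of one entry in a shared repository of AI use cases that went bad.
# Field names and the example record are illustrative assumptions, not a real schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class IncidentRecord:
    use_case: str         # e.g. "customer-support chatbot"
    what_went_wrong: str  # plain-language description of the undesirable outcome
    suspected_cause: str  # best current understanding of why it happened
    lessons: List[str] = field(default_factory=list)  # guidance distilled for guidelines/policies

record = IncidentRecord(
    use_case="resume-screening model",
    what_went_wrong="Qualified candidates from one group were consistently ranked lower.",
    suspected_cause="Historical bias in the training data",
    lessons=["Audit training data for bias before deployment"],
)
print(record)
```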

Conclusion

Whether AI will be dangerous really depends on the humans using it, like any tool we have used thus far. We are currently trying to figure out that messy switchboard: how it behaves with different inputs and the decisions/outputs it makes. As we proceed and experiment, we will discover more of how it works, and what matters then is a process for turning these lessons learned into useful regulations that promote innovation and reduce the risk of 'negatives'.

Your thoughts?

Note: This is the long form of my newsletter. Consider subscribing to my newsletter below to get a glimpse before it gets published on my blog site. :)

Consider supporting my work by buying me a "coffee" here. :)

Please feel free to link up on LinkedIn! I also have a podcast and a YouTube channel, and will try my best to update them. If you stumbled on this post through a search engine, consider signing up for my newsletter to stay in touch!