Have you caught the 2023 AI Index Report by Stanford's Institute for Human-Centered Artificial Intelligence? If not, you can find it here:

https://hai.stanford.edu/research/ai-index-2023

Here are the top 10 findings/insights from the report. I strongly recommend reading the rest of its insights as well.

1. Industry races ahead of academia.

2. Performance saturation on traditional benchmarks.

3. AI is both helping and harming the environment.

4. The world's best new scientist...AI?

5. The number of incidents concerning the misuse of AI is rapidly rising.

6. The demand for AI-related professional skills is increasing across every American industrial sector.

7. For the first time in the last decade, year-over-year private investment in AI decreased.

8. While the proportion of companies adopting AI has plateaued, the companies that have adopted AI continue to pull ahead.

9. Policymaker interest in AI is on the rise.

10. Chinese citizens are among those who feel the most positively about AI products and services. Americans … not so much.

As always, I will be sharing my thoughts here, and I welcome any discussion or feedback from my audience. You can reach me on LinkedIn. :)

I will cover the first two points in this part. :)

1. Industry races ahead of academia.

Until 2014, most significant machine learning models were released by academia. Since then, industry has taken over. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia. Building state-of-the-art AI systems increasingly requires large amounts of data, computing power, and money—resources that industry actors inherently possess in greater amounts compared to nonprofits and academia.

This was foreseeable once you saw luminaries like Andrew Ng, Sebastian Thrun, and Yann LeCun move to industry players like Google and Facebook/Meta, followed by other PhD graduates who made the same move, such as Ilya Sutskever, Richard Socher, and Ian Goodfellow.

Why is industry such a strong draw for their "postdoc"? My guess is the accessibility of data and the quick feedback on whether their research is making a positive impact. If they stay in the university system, they are forced to publish, and to publish in top-tier journals especially, to be considered for tenure. That means spending time writing papers and going back and forth with editors on the way to tenure. In industry, by contrast, there is no pressure to publish, though there is definitely pressure to commercialise. However, most of these big tech firms were built by technical people in the first place, so they do realise it takes time to commercialise innovation and technology.

Thus it is no surprise that development is much faster in industry than in academia, given that industry systems and processes work in favor of the research scientist, especially in the availability of data. Did you notice I did not even mention salary? :)

What might the future hold?
So here is the thing: perhaps the notion that universities do the frontier-pushing research has to come to an end. Universities will need to start reinventing themselves to stay relevant, or at least update their internal processes so that industry and businesses want to engage them for research. If this trend continues, universities are going to become too theoretical. That will have an impact on society and countries, since universities also educate and train the manpower needed to power our economies. If all they can impart is theory without practice, there is a good chance that manpower becomes obsolete as well, because employers are not paying for theories; they are paying for money-making applications.

2. Performance saturation on traditional benchmarks.

AI continued to post state-of-the-art results, but year-over-year improvement on many benchmarks continues to be marginal. Moreover, the speed at which benchmark saturation is being reached is increasing. However, new, more comprehensive benchmarking suites such as BIG-bench and HELM are being released.

Personally, I think this finding should be read together with another one from the report.

"AI systems become more flexible. Traditionally AI systems have performed well on narrow tasks but have struggled across broader tasks. Recently released models challenge that trend; BEiT-3, PaLI, and Gato, among others, are single AI systems increasingly capable of navigating multiple tasks (for example, vision, language)."

Again, this was foreseeable: if we just stick to deep learning models built on Transformers, I do not feel there will be a breakthrough that gets us to "better" intelligence. I remember a WIRED article about 2-3 years ago announcing that machine learning research had plateaued. I still stand by my point that General Intelligence is both symbolic and connectionist at the same time. (Post)

There are two points we need to look at. First, we have to examine our benchmarks and determine their adequacy in measuring the artificial intelligence we are building: do they tell us whether the models are actually getting better? As AI systems become more flexible, perhaps it is time to move to "non-traditional" benchmarks to determine whether we are getting closer to General Intelligence. I would suggest a matrix view, models against tasks, so that we can understand how to improve the models (see the sketch below).
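
To make the matrix idea concrete, here is a minimal sketch in Python. Everything in it is a made-up placeholder for illustration: the model names, the task list, and the scores are hypothetical, not real benchmark results; in practice the columns could be task categories from suites like BIG-bench or HELM.

```python
# A minimal sketch of a "matrix" benchmark view: models as rows, tasks as
# columns. All model names, tasks, and scores are hypothetical placeholders.

TASKS = ["vision", "language", "reasoning", "coding"]

# Hypothetical per-task scores in [0, 1]; real numbers would come from
# running each model against each benchmark suite.
scores = {
    "model_a": {"vision": 0.91, "language": 0.88, "reasoning": 0.55, "coding": 0.62},
    "model_b": {"vision": 0.70, "language": 0.93, "reasoning": 0.61, "coding": 0.80},
}

def print_benchmark_matrix(scores):
    """Render the model-by-task matrix so weak spots stand out per model."""
    print("model".ljust(10) + "".join(t.rjust(12) for t in TASKS))
    for model, per_task in scores.items():
        print(model.ljust(10) + "".join(f"{per_task[t]:12.2f}" for t in TASKS))

print_benchmark_matrix(scores)
```

Reading across a row shows where a single "flexible" model is still weak; reading down a column shows which tasks are saturating across all models.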

Second, once we move to a matrix benchmark, we also have to be very careful about how these breakthroughs are achieved. No doubt we are getting more "flexible" models that can tackle a basket of tasks, but we need to understand that this "flexibility" is largely achieved through brute force, i.e. stuffing the model with massive amounts of data.

What might the future hold?
My prediction is that, going forward, the models will definitely get better as we feed them more and more of the data generated by the world, but IT WILL PLATEAU! Once that plateau is reached, we may have to revisit model architectures again to see how to get to the next level.

Concluding Thoughts

I'm not surprised by these two findings, given the signs we have seen. I will be keen to see how universities keep up with industry, or whether they will work with industry to secure their position. I have my doubts: I strongly believe it is time for businesses to set up their own research departments, given that they are collecting so much more data than before, and it is very unlikely businesses will hand such sensitive data to university research talent. Doing so would also conflict with university researchers' need to publish.

On my second point, AI will definitely get better, but the improvements will be small increments, regardless of how we measure them, unless we revisit model architectures and start including more symbolic manipulation in the models.

Will we get to AGI soon? I still have doubts, but we are definitely getting closer. :)

What are your thoughts?

If you want to read my thoughts on the 2022 report, here they are.

Part 1, Part 2, Part 3

Consider supporting my work by buying me a "coffee" here. :)

Please feel free to connect with me on LinkedIn or Twitter (@PSkoo). I just started my YouTube channel, so do consider subscribing to show support! Do consider signing up for my newsletter too.