Thoughts on AI Index Report 2023 (Stanford-HAI) - Part 2
Continuing my thoughts on the AI Index Report. If you want to catch up, here is Part 1.
3. AI is both helping and harming the environment.
New research suggests that AI systems can have serious environmental impacts. According to Luccioni et al. (2022), BLOOM's training run emitted 25 times more carbon than a single air traveler on a one-way trip from New York to San Francisco. Still, new reinforcement learning models like BCOOLER show that AI systems can also be used to optimize energy usage.
Artificial Intelligence, at the end of the day, is a tool. How we use it is up to the user and their intentions. To me, the way this insight is written accentuates that AI is a tool. Look, anything you consume right now is also "harming" the environment, because carbon is emitted in its manufacturing. Even the plants you eat breathe out carbon dioxide; you learned that in Primary School Science (in Singapore, where I am from), didn't you?
So I used the following website to do the calculations (naturally with many assumptions, since the actual data are not published, as far as I know): https://flightfree.org/flight-emissions-calculator.
GPT-3's training run emitted an estimated 502 tonnes of CO2. Below is the data I got from the above calculator.
You can do a quick check and see that training GPT-3 is equivalent to about 100 passengers' round trips from Singapore to New York City. Do not forget that there is at least one flight per day between these two cities. So from another perspective, the carbon emissions from training these large models are minute compared to the emissions we produce through flight travel.
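That comparison can be sanity-checked with a quick back-of-envelope calculation. Note that the per-passenger round-trip figure below is my own assumption (roughly in line with what long-haul flight-emissions calculators report), not published data:

```python
# Back-of-envelope check: how many SIN-NYC round trips equal GPT-3's training emissions?
# Assumption (not from the report): a Singapore-New York round trip emits
# roughly 5 tonnes of CO2 per passenger, in line with typical flight calculators.

GPT3_TRAINING_TONNES = 502        # estimated CO2 from GPT-3's training run
ROUND_TRIP_TONNES_PER_PAX = 5.0   # assumed per-passenger SIN-NYC round trip

equivalent_passengers = GPT3_TRAINING_TONNES / ROUND_TRIP_TONNES_PER_PAX
print(round(equivalent_passengers))  # prints 100
```

If the true per-passenger figure is somewhat higher or lower, the conclusion barely changes: one large training run sits in the same ballpark as a single day or two of traffic on one long-haul route.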
HOWEVER, I am not saying that we should frivolously keep on training large models. I am an Economist by training, and I believe in the efficient use of resources: the value of a project must be greater than its cost (with carbon emissions counted as part of that cost).
Here is what I propose. Carbon emission can be seen as an externality, while a large foundation model may benefit the business greatly. My suggestion is to provide tax incentives for technology firms to contribute to the planting of trees. Every time a technology company wants to train a large model, it should contribute to a tree-planting or reforestation program. The foundation or trust that runs such a program can issue certificates to the technology company, which it can display on its website. Companies will be incentivised to contribute and to use these certificates as a signal to consumers and clients that they are mindful of the environment.
Governments that are keen on sustainability and 'green' efforts can encourage the adoption of AI in carbon emission management as well, not only for data centers but for buildings too. If we can encourage property management firms to adopt these algorithms and train internal expertise to maintain them, we will have better and more efficient energy consumption overall. :)
4. The world's best new scientist...AI?
AI models are starting to rapidly accelerate scientific progress and in 2022 were used to aid hydrogen fusion, improve the efficiency of matrix manipulation, and generate new antibodies.
I am for Augmented Artificial Intelligence, so again I will not say the world's best new scientist is AI. Rather, the world's best new scientists are human scientists augmented with fast computational tools.
From a scientific research perspective, there are going to be more and more complex calculations that need to be done if we are to seek further breakthroughs, and not only matrix calculations. What I foresee happening in the research community is that, with high computational power, scientists will be exposed to more options and possibilities, as in the case of protein folding. Scientists will be able to explore, experiment with, and simulate this 'bigger basket' of options, and begin to curate them according to their 'simulated' characteristics.
If you find the above paragraph too long to read, what I mean is this: scientists now have more options and ideas generated by Artificial Intelligence to explore, and it is up to the human scientists to curate these options and ideas and determine their strengths and weaknesses.
To me, all this means that scientific breakthroughs might come more often, assuming the human scientists are really working on the cutting edge and not just publishing to keep their jobs.
What are your thoughts?
If you want to read my thoughts on the 2022 report, here they are.
Part 1, Part 2, Part 3
Consider supporting my work by buying me a "coffee" here. :)
Please feel free to link up on LinkedIn or Twitter (@PSkoo). I just started my YouTube channel, do consider subscribing to it to give support! Do consider signing up for my newsletter too.