Recently I published several issues of my newsletter on the above topics, and I thought I'd do a consolidated post of my thoughts on them. If you are keen to subscribe to my newsletter before continuing, do open this in a new tab. :)
Let's get on to it. :)
Sorry, even after a few years of people trumpeting the term "AI Ethics", and many (self-proclaimed) 'experts' on LinkedIn, I still do not believe in creating Ethical AIs. Why so?
Let me ask you the following questions, and see for yourself whether there are any ethically correct answers to them. :)
Is a person who eats meat, despite having seen how animals are slaughtered, considered unethical then?
A very young boy steals food from a convenience store to feed his ailing mother, and they have only each other in the whole family. Is that right, or wrong?
How about a retiree who accidentally hurt a very pushy real estate agent who has been harassing the retiree non-stop to sell his retirement home for a profit (mostly the agent's)? But the agent is desperate to make that profit because his wife needs a specific cancer treatment that is very expensive. Is the retiree right or wrong? Is the agent right or wrong?
So how? DO YOU have an ethically correct answer? If you do, what guidelines did you use to come to that conclusion? Do you think you can articulate them into something a machine can learn from, rules that can be coded into a machine perhaps? Do you think your friends would come to the same conclusion? And even if they derived the same conclusion, are your guidelines really the same as your friends'?
From another perspective, here is what a good friend of mine mentioned, and I quote:
"The term 'Ethical AI' has no meaning – it's a machine, it has no ethics. Ethics belong to the realm of human activities. Unless there is evidence that AI is sentient, we should never be using that term as it obfuscate the issues."
This also means that the "ethical standards" in a machine come from the person who designed the machine; at the end of the day, they are a product of a human. Now, given how you answered the questions above, and whether you came up with reasonable and repeatable answers, do you think "Ethical AI" is a possibility?
I'd rather focus on raising the ethical standards of the people working on Artificial Intelligence, having them see the impact of their work on people's livelihoods when a bad decision is made. Or to summarise: it should be about raising "AI Professional Ethics" rather than building "Ethical Machines".
"Governance is the process of making and enforcing decisions within an organization or society. It encompasses decision-making, rule-setting, and enforcement mechanisms to guide the functioning of an organization or society." ~ Wikipedia
Governance is all about setting up good practices and preventing abuse. Now, my viewpoint is that we DO NEED to set up good practices and prevent abuse, per what I stated above on AI Professional Ethics. Working on AI Professional Ethics is more intrinsic, whereas AI Governance is more extrinsic; both are paths toward AI that can be used.
I am observing the outputs from the various AI Governance committees around the world, and I am keen to see the correlation between the quality of their recommendations and the skills, knowledge, and experience composition of the committee.
Ideally, an AI Governance Committee should have members with the following backgrounds, imo:
1) Practitioners who have trained, built, and implemented models. The more the better, and the more varied the better. What do I mean by varied? Practitioners who have implemented decision-making models, Natural Language Processing models, and Computer Vision models.
2) Business representatives
3) Government policymaking & regulation/audit background
4) Law and jurisprudence background
Of course, not forgetting to include people of different genders, sexual orientations, and races.
Coming back, with all this mention of AI Governance, my gut feeling is that we are still not hitting the crux of the matter. I also have some discomfort with the term "AI Governance", although only recently was I able to pin down why. To me, the term "governance" carries a certain lack of trust, or an adversarial relationship between the "governing" and the "governed". In that case, it becomes a matter of which side has the most resources and can attract the necessary talent, and unfortunately, it turns into a "police and thief" game instead. I am not disputing the need for "AI Governance", just that it is not something I want to start with.
What we really want to build at the end of the day is TRUSTABLE AI: AI that can be trusted by all stakeholders (business, consumer, government) who are going to use it or be impacted by the decisions it makes. If you do not trust it, AI will never be used. This means that AI governance by itself does not get to the crux of the issue; however, it is part of the toolbox for moving towards AI that can be trusted. There is a need to balance the benefits and costs for all stakeholders for AI to produce a fruitful, positive impact. Building Trustable AI should be the approach practitioners and researchers adopt to ensure a healthy proliferation of AI as a tool to support the growth of humanity.
Deloitte has trademarked "Trustworthy AI". (Side note: I find it very weird that one can trademark a combination of two words. I should consider trademarking Deep Learning, Neural Networks, Existential Risks, and watch the money roll in! Can any lawyer shed some light on this?) It states the values that businesses need to pursue. But given what I understand about the underlying models being used, plus the data needed to train a trustable model, I beg to differ on a few of the values stated. (I am keen to have a discussion on it, so give me a PM on LinkedIn if you are interested.) While the focus on business is correct, I feel we should also look at it from a consumer and government/regulator perspective as well.
I feel that the most important and clear objective is to build AI that can be trusted, or trustable AI, and we need to expand it further: not only the values that companies need to work on, but also looking at and hearing the perspectives of the other stakeholders, namely consumers, governments, and regulators.
AI Professional Ethics and AI Governance are part of the toolbox needed to build Trustable AI. And if I have not made it clear: sorry, there is no place for AI Ethics or Ethical Machines.
At the end of the day, the scope is actually much broader, and the focus should really be on building Trustable AI.
It's a long post, and I thank you for reading this far! What are your thoughts? :)