As more companies adopt AI, the risks posed by AI are becoming clearer to business leaders. That is driving many companies to hire AI ethicists to help guide them through an ethical minefield. But just as data scientists proved to be as elusive as unicorns, qualified AI ethicists are also in very short supply, says Beena Ammanath, executive director of Deloitte’s AI Institute.
“We’ve seen different models evolving. It’s still very nascent,” Ammanath tells Datanami. “Just like with data science, there is a rush to get a chief AI ethics officer or hire an AI ethicist. Those are becoming newer roles that we brought in to [get] to a solution for ethics.”
Other job titles being used in the area include chief ethics officer, chief human tech officer, and machine learning ethicist. These titles are just starting to pop up on job boards–at least on some of them. The job site Indeed appears to auto-correct a search for “chief ethics officer” to “chief executive officer,” while a nationwide search for “AI ethicist” on Monster turned up zilch.
Just like with the hunt for data scientists, the person in charge of driving the AI ethics strategy at a company ideally will have a long list of qualifications. According to Ammanath, who was a Datanami Person to Watch for 2020, an AI ethicist generally should have the following skills and capabilities:
- An understanding of AI tools and technology;
- An understanding of the business and the industry and the specific AI ethical traps that exist in them;
- Good communication skills and the ability to work across organizational boundaries;
- Regulatory, legal, and policy knowledge.
There are additional skills that may be required, such as having experience with the philosophical, psychological, or sociological aspects of ethics; knowing how to structure a business and a team in an ethical manner; and even knowing how to mitigate the environmental impact of using AI.
“The point is that you need to have a wide variety of skills,” Ammanath says. “It’s like finding that unicorn…Trying to find that person with credible experience and knowledge in all of these areas is practically impossible.”
So where does that leave you? The odds are, unless you’re working at a very large enterprise, you won’t be able to find a person to fit this exact job description. In lieu of finding a perfect match with one person, Ammanath suggests we borrow another page from the data science playbook: consider AI ethics a team sport rather than an individual sport.
“I think it really needs to be a combination,” she says. “Just like everything with AI, the context and maturity of the organization drives a lot of this. It’s practically impossible to find this unicorn individual with all this expertise. It has to be a combination play.”
Just as a good data science team isn’t limited to folks with “data scientist” in their job title but also includes data engineers, software engineers, analysts, and machine learning engineers, the AI ethics team will be composed of individuals with a variety of titles who possess a variety of skills and abilities.
Many companies will be pushed toward the team approach to AI ethics simply because the unicorn hire doesn’t exist. Even so, it will be tempting for companies to appoint a single person as the sole champion for AI ethics. Deloitte says that approach is a recipe for failure, as the ethical implications of AI are expected to grow in priority and scope in the months and years to come.
The best approach, Ammanath says, may be to have somebody from the C-suite overseeing the AI ethics activities. This individual, who could have the title of chief trust officer or chief AI officer, could have one or more AI ethicists working for them. They may also have folks with more technical backgrounds advising them. These are some of the recommendations that Ammanath made in a recent Deloitte paper, titled “Does your company need a Chief AI Ethics Officer, an AI Ethicist, an AI Ethics Council, or all three?”
Another recommendation from Deloitte is to form one or more advisory boards dedicated to AI ethics. An external advisory board could help with outward facing concerns that customers or partners may have, including dealing with changing regulations. An internal AI ethics advisory board, meanwhile, could keep decision-makers inside the company informed about AI ethics activities.
In any discussion about AI ethics, however, it’s important to emphasize that there are no one-size-fits-all solutions. The ways that specific AI technologies are used vary by industry and by region, and understanding the reputational and regulatory risks they pose to companies is not always straightforward. For example, facial recognition technology has been banned in some places because of the manner in which it can breach individuals’ privacy and contribute to invasive surveillance. However, it could be perfectly ethical to use the technology in a manufacturing plant, Ammanath says.
“To be able to understand the ways that technology can be used and what is ethical versus what is not–that needs some understanding,” she says. “So getting these ethics aligned with technology teams is part of the organization’s success.”
Ammanath, who is also the founder and CEO of Humans for AI, says she’s optimistic about the progress that is being made in the field of AI ethics.
“I’ve been talking about ethics for quite a few years now. In the past 18 months, the popularization of ethics has begun to move beyond [the discussion phase] to actually take action,” she says. “So I’m very happy and positive and optimistic. I think we will get there.”
This article originally appeared on Datanami.