Incorporating AI ethics
- marketing
- Categories: Artificial Intelligence, Isidoros' blogposts, Portfolio
- Tags: AI
Everyone in modern organizations will need to understand AI ethics
It was 2018 when we started working on our new statement of purpose for Pobuca, the very same year we set our focus on becoming an AI-first company, dedicating almost 80% of our R&D budget solely to AI-related advancements.
We ended up with a new statement of purpose that reflected our focus on the groundbreaking technology of AI:
“To unleash business workers’ creativity and offer to the society”
When it came to the public’s reception, the typical question we kept receiving was the following:
Ok then, since you offer business automation, it makes sense to claim that you “unleash creativity in the workplace”. But how do you “offer to the society”?
Well, one has to admit that we could see this coming, as “offering to the society” can easily sound like an empty promise, especially in business…
Thankfully, this is not the case for us; in fact, it is quite straightforward. In a nutshell, we decided to focus our efforts on building organizational awareness of AI ethics, and to raise our ethical bar around it, much like scientists who work on nuclear programs.
That is why we decided that, to better build it into our organizational culture, we should integrate the discussion of AI ethics into our mission, explaining how AI ethical risk management is a further extension of that purpose: a set of guardrails around what we are – and are not – allowed to do in pursuit of it.
Why are AI ethical risks so important?
Many organizations have come around to seeing the business imperative of an AI ethical risk program. Countless news reports – from faulty and discriminatory facial recognition to privacy violations, to black-box algorithms with life-altering consequences – have put it on the agendas of boards, CEOs, and C-suite executives.
Let’s look at an example of AI risk: marketing that becomes too personalized and intelligently automated can lead to unethical uses of data, including targeted advertisements that harm vulnerable consumers. For instance, if a consumer has developed a gambling addiction, using information gathered from personal data to market gambling-related products to them would be unethical. While targeted advertisements can be convenient for both businesses and consumers, marketers should make sure that they are not exploitative or harmful.
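To make that concrete, here is a minimal sketch, not a production implementation, of the kind of guardrail an ad-targeting pipeline could apply. The `sensitive_segments` flag, the restricted-category mapping, and the field names are all hypothetical, purely for illustration:

```python
# Hypothetical guardrail for an ad-targeting pipeline: consumers whose
# profile carries a sensitive-segment flag (e.g. "gambling_risk") are
# never matched with ads from the corresponding restricted category.

RESTRICTED_CATEGORIES = {
    "gambling_risk": {"gambling", "betting", "casino"},
    "debt_risk": {"payday_loans", "high_interest_credit"},
}

def eligible_for_ad(profile: dict, ad_category: str) -> bool:
    """Return False when the ad category is restricted for any
    sensitive segment attached to this consumer profile."""
    for segment in profile.get("sensitive_segments", []):
        if ad_category in RESTRICTED_CATEGORIES.get(segment, set()):
            return False
    return True

# Filter candidate audiences before the delivery step.
candidates = [
    {"id": 1, "sensitive_segments": ["gambling_risk"]},
    {"id": 2, "sensitive_segments": []},
]
audience = [p for p in candidates if eligible_for_ad(p, "gambling")]
print([p["id"] for p in audience])  # -> [2]
```

The point is not the specific code but that the ethical commitment becomes an explicit, testable rule in the pipeline rather than a vague intention.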
Another example is Amazon’s AI-powered hiring software: its inability to sufficiently mitigate its biases admirably led Amazon to pull the plug on the project rather than deploy something that could be harmful to job applicants and the brand alike.
AI ethics: a functional area or a company-wide concern?
Addressing such AI ethics risks requires raising awareness of them across the entire organization. For companies that use AI, this needs to be a top priority. According to research by Deloitte, over 50% of executives report “major” or “extreme” concern about the ethical and reputational risks of AI in their organization, given its current level of preparedness for identifying and mitigating those risks.
That means building an AI ethical risk program that everyone buys into is necessary for deploying AI at all. Done well, raising awareness can both mitigate risks at the tactical level and lend itself to the successful implementation of a more general AI ethical risk program.
Most organizations understand the necessity of such a program but often don’t know how to proceed.
My advice is to start with the basics: let’s say procurement. Procurement officers are one of the greatest – and most overlooked – sources of AI ethical risk. AI vendors sell into almost every department, but especially marketing, HR, and finance. If your marketing procurement officers don’t know how to ask the right questions to vet AI products, then they may, for instance, import the risk of discriminating against sub-populations in customer engagement or service.
How to build awareness
To make sure your AI ethics value commitments aren’t seen as mere PR “fireworks” or a loose CSR promise, you could tie those commitments to solid ethical guardrails such as “we will never sell your data to third parties” or “we will always anonymize data shared with third parties”.
A well-crafted AI ethics statement will include those guardrails, which serve a dual role.
First, by conveying to your team what you intend to do about AI ethical risks.
And second, by making clear that this is not PR or something fuzzy: when values are articulated and communicated in a way that ties them to actions, those communications become credible and memorable.
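As an illustration of how a guardrail like “we will always anonymize data shared with third parties” can translate into everyday engineering practice, here is a minimal sketch. It only pseudonymizes a record (which is weaker than full anonymization), and the salt handling, field names, and export step are hypothetical assumptions rather than a prescribed implementation:

```python
import hashlib

# Hypothetical pre-export step behind the guardrail "we will always
# anonymize data shared with third parties": direct identifiers are
# dropped and the customer ID is replaced by a salted hash, so the
# recipient cannot trivially link the record back to a person.

SALT = "rotate-me-and-keep-me-secret"  # assumption: stored as a managed secret in practice
DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}

def pseudonymize(record: dict) -> dict:
    """Drop direct identifiers and hash the customer ID before sharing."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    raw_id = str(record["customer_id"]).encode()
    cleaned["customer_id"] = hashlib.sha256(SALT.encode() + raw_id).hexdigest()
    return cleaned

shared_record = pseudonymize({
    "customer_id": 42,
    "name": "Jane Doe",
    "email": "jane@example.com",
    "purchases_last_month": 7,
})
print(shared_record)  # only the hashed ID and non-identifying fields remain
```

Even a sketch like this makes the commitment concrete: the guardrail lives in the data path itself, where it can be reviewed and tested, rather than only in a policy document.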
Finally, don’t forget that creating this cross-organizational awareness requires serious, committed work and a consistent message, one that has to be adjusted to the specific concerns of each group, differentiating between the interests and responsibilities of the C-suite and those of product owners and designers, or of the data scientists and engineers. Although there can be a single ethics truth that needs to be told, it has to be conveyed through different narratives across your organization.