A Better Framework For Ethical Risk Management

You’ve probably heard something at a cocktail party to the effect of: “well, ethics is just subjective, isn’t it?” That question makes me cringe. Fortunately, we don’t need to answer it here, because whether you think ethics is subjective is irrelevant.

What is relevant is that business leaders are clearly vulnerable to ethical risk, regardless of their opinions or moral theories (in no small part due to the highly overlapping circles in the Venn diagram of ethical risk, reputational risk, and legal risk). This article will introduce you to the common problems with ethical risk management and offer advice on how to avoid its most common traps. My goals are to convince you that ethical risk management is worth doing (and worth doing well), and to help you think through your ethical risk management strategy.

What is ethical risk management?

When Uber’s self-driving car struck and killed a pedestrian, falling back on the claim that ethics is subjective was not an option. Nor was it when the world learned that Meta (then Facebook) had architected its social media platform in a way that allowed Cambridge Analytica unauthorized access to the personal data of up to 87 million users. Mortgage systems have denied loans to qualified minority applicants, and HR algorithms have screened women out of job searches. Healthcare AI products have shifted care from sicker Black patients to healthier White patients, leading healthcare providers to discriminate systematically, accidentally, and at scale.

There are very real ethical, reputational, legal, and regulatory impacts to lapses like these. Organizations cannot rely on academic theory and armchair opinions to mitigate ethical risks. Establishing a code of conduct is not sufficient either, since many breaches are the result of mistakes or ignorance, not bad behavior. And listing a few values like “Fairness” and “Transparency” in an ethics statement posted somewhere on your website will do nothing to prevent things from going ethically sideways.

Designating a risk manager doesn’t work either

Ethical risk cannot simply be placed on the shoulders of the technical teams developing AI products, the HR leader procuring hiring algorithms, or the healthcare providers deciding whom to treat and how (when biased AI products act as their medical assistants). Directors of a functional area — HR, engineering, product, underwriting, legal, and so forth — are not ethicists. They are not trained to think through the relevant ethical concerns at a deep enough level to effectively mitigate risk. Perhaps most importantly, they do not have the organizational authority and support to do so.

In other words: you can’t let Mark, a technical Product Owner and AI ethics enthusiast who volunteers to help, manage ethical risk for your $50B revenue company.

In my work at Virtue Consultants, my team and I develop and implement comprehensive ethical risk management strategies for enterprises. We work alongside C-suite leaders who are committed to reducing their ethical risk while keeping company innovation alive and well. And while there is no single risk management program you can buy off the shelf and implement at your company, I’ll share some high-level advice for how to think through this challenge.

Developing an ethical risk framework

1. Bad behavior by employees is often not the issue

In the case of Enron, we can all point a finger cleanly at the CFO and the executive team, who knew they were doing the wrong thing and did it anyway. But that episode of corporate accounting history doesn’t apply to many modern scandals — privacy breaches and biased AI, for example — where the main issue is not employees’ moral intentions or behavior. Often, you need to think less about bad actors and more about the unintended consequences of the AI technology you’re developing or procuring. (The exceptions: you still need to consider how bad actors might misuse the technology you deploy or sell, and how they might use technology to attack your organization.)

Let’s say your talented team of engineers, product leaders, marketers, and visionary executives is building facial recognition software. Everyone has perfectly acceptable ethical intentions. Your team builds software, improves its accuracy, and procures lots of training data to teach your machine learning model to recognize faces. You market it to customers, who then buy it and use it. All is going well.

As this happens, you learn that your facial recognition software plays a role in police officers arresting Black people at much higher rates than other subpopulations. You didn’t intend for this to happen, but you realize that your data scientists trained your AI on datasets that under-represent Black people (relative to other subpopulations, like white males). You (and your customers) may now have accidentally played a role in producing an unjust outcome, bringing with it potential reputational and legal fallout.

Notice that the data scientists didn’t intentionally choose training data that under-represented Black people. Nor did the product manager or the executive intend to play a role in discrimination. Bad behavior is not the problem here. The problem is ignorance of the risks, an inability to identify and mitigate them, or a lack of organizational support to acquire the requisite knowledge, skills, and authority.
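To make the identification step concrete, here is a minimal sketch in Python (using pandas) of the kind of audit that could surface this problem before deployment. The dataset, column names, and threshold shown are hypothetical placeholders, not a reference to any real product:

```python
import pandas as pd

def audit_by_group(df: pd.DataFrame, group_col: str,
                   label_col: str, pred_col: str) -> pd.DataFrame:
    """For each demographic group, report its share of the data and the
    model's false-positive rate on that group."""
    rows = []
    for group, sub in df.groupby(group_col):
        negatives = sub[sub[label_col] == 0]  # true non-matches
        fpr = (negatives[pred_col] == 1).mean() if len(negatives) else float("nan")
        rows.append({
            group_col: group,
            "share_of_data": len(sub) / len(df),
            "false_positive_rate": fpr,
        })
    return pd.DataFrame(rows)

# Hypothetical usage: flag groups that are under-represented in the
# training data or whose false-positive rate is far above the rest.
# report = audit_by_group(faces, "race", "is_match", "predicted_match")
# print(report[report["share_of_data"] < 0.05])
```

An audit like this doesn’t mitigate anything by itself, but it turns “we didn’t know” into a question someone is accountable for answering.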

2. It’s not just about legal risk, nor just about PR

It’s tempting to default to one of two broadly popular schools of thought when it comes to ethical risk management. The first is to view it all as “just a matter of compliance with laws or regulations”; the second is to view it as “just a question of consumer perception” or public relations. Neither approach is effective.

First, take the speed at which technology evolves and compare it with the speed at which legislators make laws governing that technology. If the gap isn’t immediately obvious, remember that ChatGPT launched in November 2022, and by February 2024 there was still no comprehensive federal law in the United States governing AI development and usage (merely a few public statements and a non-binding “blueprint” from the White House). But we’ve already pointed out several examples of reputational damage suffered by companies that deployed AI with unintended consequences, with damage done to their customers as well.

The point is: laws and regulations will almost certainly always lag behind the pace of technological innovation. You can be in total legal compliance and still be vulnerable to ethical risk. For the foreseeable future, technology will simply keep evolving faster than legislation does.

Next, take the somewhat cynical view that “it’s really all just about PR anyway.” If we’re talking about reputational damage, the thinking goes, why not couch this discussion in terms of market research and consumer perception? Why not research what will likely keep us out of negative news stories, and do only those things? But this urge to bypass ethical discussion doesn’t work either: the perspective disintegrates upon closer review.

In fact, the view that ethical risk management is all about PR leads to bad PR. Consumers don’t like companies that waffle in their commitments, to say nothing of internal staff and leadership, who will be promptly disillusioned by hollow ethical commitments and take none of it seriously. That, in turn, leads to lapses in the ethical risk management program itself, rendering it effectively useless.

Not only that, but even if you could identify exactly what you should say you believe to maximize public and internal opinion of your firm, you could wake up the next day to find that public opinion has changed. This happens all the time. Trying to shoulder the never-ending burden of staying ahead of popular ethical beliefs is a losing game. Boiling it all down to PR doesn’t work.

3. Focus on wrongs, not harms

Many well-intentioned designers of corporate ethical risk management programs take inspiration from healthcare ethics. Product teams think, “We have to ensure that our products don’t harm our various stakeholders.” That’s a good start, but it isn’t granular enough. The sharper question is: might your product have a negative impact on stakeholders that is nonetheless justifiable?

There are instances where harm is justified. The makers of self-defense mace and pepper spray intend for their products to cause harm; in fact, that’s the whole point of them! But we don’t generally take the harm those products are meant to cause as an indication that someone has done anything wrong. (If anything, the only wrongdoing is committed by the very person being harmed by the spray!) And even when those products are used for purposes other than self-defense, we don’t generally blame the maker of the product. Flatware manufacturers are not in the news because someone was stabbed with a fork.

To take another example, if we were to find that Berkshire Hathaway’s investors have created far more wealth for themselves on a risk-adjusted basis than everyone else, and that those investors are predominantly white and male, would that fact alone mean that they had wronged other subpopulations? No: it would show only that non-investors experienced a negative differential impact relative to investors, not that anyone was wronged.

A nuanced set of ethical guidance for product designers and owners, engineers, operations managers, and other team members will take into account that doing wrong and causing harm are distinct concerns.

4. Create ethics statements that actually do something

I’ve described how listing single-word Values (Respect, Accountability, Reliability, etc.) isn’t helpful or actionable by itself. Without context, such words are too abstract to guide behavior, so compiling a list of them and putting it on the wall is not the answer. But this does not mean you shouldn’t bother writing an ethics statement. You should; it just has to be structured so that it drives behavior in your organization. To do that, the statement must be understandable, concrete, and specific, deeply informed both by the ethical context in which your company operates and by the detailed mechanics of your internal ethical risk management framework.

Let’s say your ethics statement is composed of claims like “We engage in AI ethics by design” and “Our organization is committed to user privacy as one of our core responsibilities”. Who reviews the ethics of your product designs? When and how are potentially ethically problematic products sent to the ethical risk committee for review? When designing a particular feature, how do you, as an individual engineer, know whether you’re treating user privacy as a core responsibility rather than a non-core one?

To be effective, your ethics statement must be actionable by everyone in the organization. A better approach, which I go into in much more depth in my book “Ethical Machines”, involves:

  • First stating your values by thinking through your “ethical nightmares”, and reversing those to figure out where you really stand and what’s important to you
  • For each of those values, explaining why you value that thing in a way that connects it to your organization’s mission or purpose
  • Connecting your values to what the company takes to be ethically impermissible, prescribing clear no-go areas
  • Articulating how you will realize your ethical goals and avoid the “nightmares”, and assigning relevant metrics (KPIs and OKRs) that can then be tracked

When done right, this ethics statement will let you measure the gap between your current state and the ideal future one, determine metrics that carry weight and meaning, and give you a plan for the tough ethical cases you’ll encounter in the operational trenches as you innovate.
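One way to keep the statement this concrete is to treat it as structured data rather than wall art. Below is a minimal sketch in Python that mirrors the steps above: a value, the nightmare it reverses, its connection to the mission, the no-go areas it prescribes, and the metrics that make it trackable. Every field value shown is a hypothetical placeholder, not a recommendation:

```python
from dataclasses import dataclass

@dataclass
class EthicalValue:
    name: str                # the value itself, e.g. "Privacy"
    nightmare: str           # the scenario you are reversing away from
    why_it_matters: str      # the connection to the organization's mission
    no_go_areas: list[str]   # what the company deems ethically impermissible
    metrics: list[str]       # KPIs/OKRs that make progress trackable

# Hypothetical example entry:
privacy = EthicalValue(
    name="Privacy",
    nightmare="A third party gains bulk access to user data without consent",
    why_it_matters="Our products only work if users trust us with their data",
    no_go_areas=["Selling personally identifiable data",
                 "Collecting data without a documented purpose"],
    metrics=["% of features passing privacy review before launch",
             "Median days to close reported privacy issues"],
)
```

The point of the structure is that every value carries its own rationale, prohibitions, and measurements, so no single-word value can float free of accountability.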

As an added bonus, a statement structured in this way will function as a far more credible branding and PR document than one created purely for the purposes of PR.

5. Build an ethics committee with teeth

Having an ethics committee is undoubtedly better than not having one. But how you set it up, who is on it (not just individuals but roles, which outlast any one employee), and how much power it has will determine its effectiveness in mitigating ethical risk. (I go into more detail in my Harvard Business Review article “Why You Need an AI Ethics Committee,” and offer some high-level advice here.)

Two distinctions should be thought through. First: do specific cases or product functions trigger a requirement that the ethics committee review the proposal, or merely a recommendation that it do so? The requirement version imposes more process, but in exchange it mitigates ethical risk far more than the recommendation version. Second: when ethical issues are presented to the committee, are its decisions requirements or recommendations? And if they are requirements, will you allow certain cases in which a senior executive can overrule them?

An ethics committee that people are merely encouraged to consult, and whose decisions are non-binding recommendations, carries high risk. A committee that people must consult, and whose decisions are binding except in rare circumstances when an executive overrides them, is a far less risky structure.
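As an illustration of the stricter structure, here is a minimal sketch in Python of a review policy in which certain product attributes make committee review mandatory, and committee verdicts are binding unless a designated executive records an explicit override. The trigger attributes and example values are hypothetical placeholders:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical attributes that make committee review mandatory.
MANDATORY_TRIGGERS = {"uses_biometric_data", "automates_hiring", "affects_minors"}

@dataclass
class Proposal:
    name: str
    attributes: set

def review_requirement(proposal: Proposal) -> str:
    """Review is required if any mandatory trigger applies; otherwise
    it is merely recommended."""
    return "required" if proposal.attributes & MANDATORY_TRIGGERS else "recommended"

def final_decision(committee_verdict: str,
                   executive_override: Optional[str] = None) -> str:
    """Committee verdicts are binding, except when a designated executive
    records an explicit override (which should itself be audit-logged)."""
    if executive_override is not None:
        return f"override: {executive_override}"
    return committee_verdict

proposal = Proposal("face-match feature", {"uses_biometric_data"})
print(review_requirement(proposal))   # -> required
print(final_decision("reject"))       # -> reject
```

Encoding the policy this explicitly, even if only on paper, forces you to answer the two questions above before the first hard case arrives.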

6. Get ethicists involved

Your ethics committee should include not just one expert, but several. You’ll want a data scientist or someone who understands the technical underpinnings of what your organization is building. A subject matter expert should be a member, along with an attorney or privacy officer and/or a cybersecurity risk management professional. And because the committee’s main job is to identify and mitigate ethical risks, it is wise to include an ethicist: that is, a PhD in philosophy specializing in ethics, or someone with a master’s degree in medical ethics, depending on your industry. Ethicists are not all equally valuable to your organization: someone with strong business experience who understands the messy nuances of operating a large company is sometimes preferable to someone with only an academic background.

This combination allows you to bring specialized training, knowledge, and experience to bear when dealing with a vast array of ethical risks, while maintaining valuable internal business context and subject matter expertise. Construct the membership of the committee so that individuals can leave the company and be replaced by a person with a similar background, allowing the group to maintain its capacity for clear-eyed ethical deliberation in a way that lets innovation thrive.

How Virtue can help

I’m Reid Blackman. As the CEO of Virtue Consultants, my team and I work with enterprises to implement comprehensive approaches to AI ethical and regulatory risk mitigation. Our experts build and integrate digital ethics systems into legal, regulatory, compliance, performance improvement, and cyber defense frameworks — tailored to each client we work with. I also speak and advise on this topic. Consider reading more about my approach in my book, Ethical Machines (Harvard Business Review Press, 2022).

If it’s the right time for you to set up an ethical risk management strategy consultation, feel free to contact me by email.