Artificial intelligence is developing so quickly that standardized responsible practices, whether set out in legislation, regulation or industry codes, may take many years to emerge and adequately govern intelligent technologies. In the meantime, organizations must self-regulate to ensure AI is used responsibly. But what should this self-regulation look like? And can it work effectively without blocking innovation?

People have been playing the strategic board game Go for 2,500 years. In 2016, Google DeepMind’s AI-powered Go player, AlphaGo, defeated the human world champion, Lee Sedol.

At a pivotal moment in the match, AlphaGo made a bizarre move. “It’s not a human move,” a fellow professional Go player said. “I’ve never seen a human play this move.” But the move helped AlphaGo win.

AI is full of such surprises, not all of them as “beautiful” as AlphaGo’s new strategy. Amazon, for example, tried to remove the bias against female candidates that had developed in its machine learning recruitment tool, but was unable to do so and had to abandon the tool. Recent events like this might explain why, in our global research survey of business leaders, we found that 88 percent of respondents do not have confidence in AI-based decisions and outputs.

AI is not just less predictable than traditional computer programming; it is also newer (to most organizations) and already transforming many industries. In the same global survey, we found that 85 percent of business leaders expect AI to open up new products, services, business models and markets, while 78 percent expect AI to disrupt their industry in the next 10 years.

Since AI is both powerful and, at times, unpredictable, interest in the responsible governance of AI applications has grown. Businesses are wary of the possible unintended consequences of working with AI, and that wariness has intensified the interest. The concern, however, is that the wrong kind of regulation can stifle innovation and hold back the benefits of AI projects. In other words, there is an immediate need for good AI governance that allows innovation to flourish.

Police patrols versus fire wardens

There are two main ways to approach AI governance: the police patrol and the fire warden.

In the police patrol model, AI governance rules are applied from the top down: the organization monitors people and detects violations of the rules in order to enforce compliance. This model seems like a straightforward option, but it tends to stifle innovation, because teams see governance as a barrier rather than a shared responsibility.

The fire warden model is different. It embeds skills within teams so they can spot and escalate issues that need attention, much like training fire wardens to raise the alarm and then carry out the necessary safety actions.

In general, we favor the latter approach for AI governance because it supports innovation and the agile development that is crucial to the competitiveness of today’s businesses—and it can evolve more easily alongside fast-changing AI technology.

Three ways to succeed with the fire warden approach

The fire warden model gives teams a high level of responsibility and agency over outcomes, which makes it critical that teams have the right people, processes and training. Getting all this right starts with three key practices:

  • Select “fire wardens.” Agile development is generally accepted as the best approach to AI initiatives. Collaborative design is central to this, so AI governance should be embedded in the co-creation process. Choose key individuals within development teams to build and guide AI initiatives alongside their colleagues and—like fire wardens—escalate issues as they emerge. Their job is not to put out the fire; they just have to follow the alarm process.
  • Embed regulatory expertise. Organizations should ensure there are suitable human links between data science teams and legal or compliance teams: people who can demystify both sides of the equation for their peers and develop an advanced understanding of potential consequences from all angles. Ideally, to maximize agility, such a person would be more technical than legal, a valuable member of the development team and deeply embedded in AI project development.
  • Welcome false alarms. To implement AI ethically, fairly and responsibly, teams need time to develop an instinct for the kinds of signals that indicate risk. Getting this right takes practice, so encourage teams to lose all fear of raising the alarm: false alarms are much better than out-of-control threats. Only through experience will teams strengthen their ability to anticipate risks and truly understand what responsible AI means in practice.

At its heart, the fire warden approach depends on trust. Trust needs to be strong vertically, between leaders and workers, and horizontally, across teams.

Adapting AI governance to your context

Responsible, sustainable success with AI comes down to raising awareness at an individual level and empowering those individuals to act wisely. This gives us the best chance of supporting effective, agile governance and AI innovation, while helping us anticipate risks early enough to snuff out the sparks.
