This image is not a photo: it was generated using DALL·E 2. DALL·E 2 creates realistic images and art from a natural language description, and it's just one of many examples of disruptive AI becoming more and more mainstream.
This is part two in our series on AI Governance. Visit part one here.
Directors should take note: even if you have an established internal process for considering and deciding on any new investment in technology, it may not work so well for the introduction of Artificial Intelligence without some adjustments. The need for a plan and a business case is likely to be the same, but there will be some new or unique considerations.
For example, deciding to implement a Computer Vision solution is not the same as deciding to install or upgrade a CRM system. Imagine your board is asked to invest in a Computer Vision solution to improve customer experience, and that this in turn means using cameras and video to capture customer use of a product or service (think retail foot traffic, H&S monitoring in construction, traffic in a drive-through, or crime prevention).
In each of these scenarios, weighing the various interests of stakeholders gets interesting. For example: is there a chance of individuals being identified through the video capture, whether accidentally or by design? If the proposed solution includes any facial recognition technology, the question becomes more acute. Suddenly, the customer's right to privacy, and therefore their interests as a stakeholder, becomes an important part of the picture (no pun intended).
Some of these applications of AI require additional steps to protect the interests of stakeholders. In the above example, there might need to be 'masking' of the image data to prevent the identification of individuals. This requirement may extend into the early stages of a project, during training of the machine learning model. Data labelling at that stage is sometimes handled by a third party, so masking would likely also be required before any footage leaves the organisation.
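As a rough illustration of what such masking can involve, here is a minimal sketch assuming a Python pipeline and OpenCV's bundled face detector; the file names, detector choice, and blur settings are illustrative assumptions, not a prescription:

```python
# Minimal sketch: blur faces in a frame before it leaves the organisation,
# e.g. prior to sending footage to a third-party labelling service.
# Assumes OpenCV (pip install opencv-python); the detector choice and
# file names are illustrative only.
import cv2

# OpenCV ships a pre-trained Haar cascade for frontal faces.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def mask_faces(frame):
    """Return a copy of the frame with detected faces Gaussian-blurred."""
    masked = frame.copy()
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = masked[y:y + h, x:x + w]
        # A heavy blur makes the face unrecognisable while keeping the
        # surrounding scene usable for training and labelling.
        masked[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return masked

# Example: mask a single (hypothetical) image before upload.
frame = cv2.imread("retail_frame.jpg")
cv2.imwrite("retail_frame_masked.jpg", mask_faces(frame))
```

In a real deployment the detection step itself would need careful validation, since any face the detector misses remains identifiable.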
Given that AI covers a broad spectrum of possible solutions and is developing quickly, those in governance roles must also develop their approach to these technologies. That means not only considering a wider than usual group of stakeholders, but also rethinking how those stakeholders' interests are considered.
Take five stakeholder groups and a few of the types of considerations an investment in AI may require. This is only a sample of a wider set of stakeholders and issues:
Customers first. How might the proposed solution impact their right to privacy? Might it capture information that could personally identify them, and if so, how will that information be used and stored?
If the solution is designed to provide a significant benefit to one group of customers, does it have the potential to negatively impact another group? For example: using data to reward off-peak or other types of consumption with discounts, while maintaining or increasing prices for another group of customers unable to access the benefit.
Next, employees. Business cases for AI often hinge on operational efficiency gains. Repetitive or similar tasks completed by humans that can be done faster and more accurately by machine learning models are a great place to start when considering how to take advantage of AI. The labour supply challenges currently being experienced in many markets are in part fuelling research into digital solutions to help.
But what would the introduction of AI projects mean for your employees, their career opportunities and their futures? If a project will reduce the headcount required for the same or greater output, then the board needs to at least consider employees as stakeholders: how the project might create new career requirements internally, and what new development opportunities might arise for existing employees. It may not be reasonable to expect that staff displaced by technology will easily retrain or find other roles in the organisation. In some cases, unions or other employee representative groups will be stakeholders to consider, and even consult.
Similarly, a significant AI project may represent real changes to your business model and/or your business processes. A board asking questions along these lines early in the life of such a project may flush out technology adoption or other internal challenges that should be added to the costing model or business case.
It is clear that board accountability has extended beyond simply acting in the best interests of the company first and the shareholders second. More recently, models are emerging, such as ESG reporting initiatives, that recognise an organisation's wider responsibilities. In a way, these models reflect the same concepts I am referring to here: wider groups of stakeholders to consider and be responsible to.
But shareholders remain a very important stakeholder group, so a board must consider whether any new AI project is in line with shareholders' expectations of the brand. How will it be received by non-customers, or by groups outside the customers who will benefit?
Is the project consistent with the organisation's ethics and shareholder expectations of those ethics? A machine learning model should be adequately explainable: it should be possible to understand how it reaches its conclusions and outputs. It should also be as free as possible from bias. Have potential downstream uses for any data gathered and included in a model been considered?
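To make "adequately explainable" concrete, one simple technique a board can ask its technical team for is a feature-importance report: which inputs actually drive the model's outputs. A minimal sketch using permutation importance, assuming scikit-learn and a placeholder dataset and model:

```python
# Minimal sketch: ask "which features drive this model's decisions?"
# using permutation importance. The dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(
    zip(X.columns, result.importances_mean), key=lambda p: -p[1]
)[:5]:
    print(f"{name}: {score:.3f}")
```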
Any 'large' machine learning model (large as in processor- and energy-heavy) will inevitably create a carbon footprint of its own. A board needs to ask about this issue and should request the calculated net effect of such a model to be comfortable that it aligns with their ESG goals.
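As a back-of-the-envelope illustration of the kind of figure a board might request, here is a minimal sketch of a training-run emissions estimate. Every number below is an illustrative assumption; a real estimate should use measured power draw and the local grid's actual carbon intensity:

```python
# Minimal sketch: back-of-the-envelope CO2 estimate for one training run.
# All numbers are illustrative assumptions, not measured values.
gpu_count = 8                 # GPUs used for training
gpu_power_kw = 0.4            # average draw per GPU, in kW
training_hours = 72           # wall-clock training time
pue = 1.5                     # data-centre power usage effectiveness
grid_kg_co2_per_kwh = 0.4     # carbon intensity of the local grid

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
co2_kg = energy_kwh * grid_kg_co2_per_kwh

print(f"Estimated energy: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {co2_kg:,.0f} kg CO2e")
```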
Whilst an AI project might serve a particular customer group, it may also have impacts on a community group, the same one that affords the organisation its social licence to operate. Take, for example, a marketing-oriented model that identifies customers and can send them to a specific location for a benefit, such as free fuel for a window of time at a service station. This risks causing significant upset for a community wanting to access the same area and finding only choked roads.
There is also potential for a model to have inherent bias. Consider, for example, an automated front end to a home loan application process: if it is biased, it may have a negative impact on one or more groups in the community. Any model replacing human judgement to any degree needs to be fair and unbiased.
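One simple way to make "fair and unbiased" measurable is to compare approval rates across groups, a demographic parity check. A minimal sketch, with hypothetical decisions and group labels:

```python
# Minimal sketch: compare a loan model's approval rates across groups
# (demographic parity). Decisions and group labels are hypothetical.
from collections import defaultdict

# (group, approved) pairs stand in for real model outputs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
for group, rate in rates.items():
    print(f"{group}: {rate:.0%} approved")

# A large gap in approval rates is a signal to investigate the model
# and its training data before it replaces human judgement.
gap = max(rates.values()) - min(rates.values())
print(f"Approval-rate gap: {gap:.0%}")
```

Demographic parity is only one of several fairness measures, and which measure is appropriate depends on the context; the point is that fairness can and should be quantified rather than assumed.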
Regulators are an increasingly important stakeholder group. With Artificial Intelligence developing at speed, regulation often lags behind the technology itself. This provides an opportunity for an organisation or group to inform and influence regulation as it is formed. Whilst this is usually the domain of larger entities, in areas like self-driving cars or UAVs, your organisation may have a specific niche requirement or application that crosses over into regulatory territory.
The examples listed are, on balance, negative ones. They simply serve to illustrate the potential for unforeseen consequences as AI serves niche purposes. These pitfalls may not be obvious at the outset, or in a business case that only considers the numbers.
If the board starts with the intended outcome of an AI project and then considers how that project serves the purpose and strategy of the organisation, it will begin the process on a solid footing and be able to identify the key stakeholder groups to take into account. Often, seeing potential impacts on one group will open up new groups you may not have considered.
Richard's AI Governance series can be found here: