This image is not a photo: it was generated using DALL·E 2, which creates realistic images and art from a natural language description. It is just one of many examples of disruptive AI becoming increasingly mainstream.
This is part three in our series on AI Governance. Visit part one or part two.
Artificial Intelligence is more of a frontier than a technology. That is to say, the term covers a wide range of applications built on many types of machine learning models, other software, and the hardware integrations that supply their data. Like any frontier of old, there will always be a few ‘pioneers’ who fall victim to the many hazards of new or uncharted territory.
With such rapid research and development, this frontier keeps expanding. Whilst developments in AI will certainly continue to improve the delivery of goods and services in marketplaces everywhere, the success of individual AI projects is far less certain, even when the underlying technology is proven.
For any organisation, it can make good sense to take a ‘proof of concept’ (POC) approach to an Artificial Intelligence project. This is a well-proven way of testing the value of an idea before investing significant time and money in the full production and integration of a solution. The theoretical progression is something like:
Of course, this is a simplified view of what can be a complex series of decisions. For a governance team, though, it is a helpful way to consider several key concepts, namely:
The point of this article is not to warn boards off AI projects; quite the opposite. The point is to identify and consider the potential roadblocks, internal and external, as early as possible, and so significantly increase the chance of a successful POC and, perhaps more importantly, the chance of the project making it into production. Current statistics suggest nearly half never do, so time spent early to de-risk where possible is time well spent.
Preparedness matters greatly for AI projects. They often ingest large amounts of data (especially during the training phase) and require access to specific types of data. If the data needed is not readily available and accessible, this becomes an early roadblock that slows or stops a project, and it happens more often than might be obvious. To generalise broadly, large organisations are more likely to have reached a stage of digital maturity and therefore to hold more useful data for a project; but if that data is scattered across many systems and formats, and is therefore difficult to access and use, preparing it will be a significant undertaking. Understanding these downstream requirements before commencing a POC is healthy practice, as is establishing sound data governance (a separate subject in its own right).
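As a concrete illustration, even a minimal data-readiness audit can surface these roadblocks before a POC begins. The sketch below is only an illustration, not part of any particular project: the `./data` directory and the set of supported formats are assumptions chosen for the example.

```python
import json
from pathlib import Path

# Formats our hypothetical POC pipeline could ingest without extra preparation.
SUPPORTED = {".csv", ".json", ".parquet"}

def audit_sources(root: str) -> dict:
    """Walk a candidate data directory and summarise its readiness."""
    summary = {"supported": 0, "needs_preparation": 0, "unreadable": 0, "by_format": {}}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        ext = path.suffix.lower()
        summary["by_format"][ext] = summary["by_format"].get(ext, 0) + 1
        try:
            with path.open("rb") as f:
                f.read(1024)  # basic accessibility check: can we read it at all?
        except OSError:
            summary["unreadable"] += 1
            continue
        if ext in SUPPORTED:
            summary["supported"] += 1
        else:
            summary["needs_preparation"] += 1
    return summary

if __name__ == "__main__":
    # "./data" is a placeholder for wherever the candidate source data lives.
    print(json.dumps(audit_sources("./data"), indent=2))
```

Even a crude report like this tells a sponsor how much preparation stands between the current data estate and a usable training set.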
Questions worth asking on this front are:
Whilst the benefits of a successful venture into AI may be obvious to sponsors and the board, the impact on people and culture may not be. Considering stakeholders and the organisation’s purpose is an important step, but so is making sure to understand the views and beliefs held internally towards the technology.
If people hold personal concerns about the introduction of AI technologies, or broader beliefs that AI is not a good thing, this will impede the project and can become a major barrier to adoption once it reaches production. The complete opposite can be an impediment too: unrealistic expectations of AI, such as expecting 99% accuracy from an out-of-the-box model applied to a new process, can be a poor start for any project. An internal survey or informal team discussions can help a board understand where attitudes lie.
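One practical antidote to unrealistic expectations is to measure a baseline early and report what the model actually achieves. The sketch below is purely hypothetical, using scikit-learn and synthetic data rather than any real project's model; the point is the measurement, not the model choice.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "a new process": labelled data the model has never seen.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A simple baseline classifier, fitted only on the training split.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Report measured accuracy on held-out data, rather than an assumed figure.
print(f"Measured accuracy: {accuracy_score(y_test, model.predict(X_test)):.1%}")
```

Grounding the conversation in a measured number like this makes it much harder for an assumed 99% to take hold.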
In a similar vein, change management matters when embedding new technology. Directors should understand how workflows and systems are likely to change for the people who use them, and how those people will be supported through the transition. Existing technology systems and structures create inherent boundaries and constraints that will need to be overcome.
Good questions to ask around this include:
If the board is looking at a proposal purely on the merits of the technology, without at least some investigation and understanding of these potential roadblocks to success, it is very likely worth asking for that work to be done before committing.
Richard's AI Governance series can be found here: