Our AI Ethics, Policies & Governance services help you maintain control of technology that can dramatically affect myriad decisions and actions in day-to-day business.
This is particularly important once people have become comfortable with the technology: they may no longer know the basis of AI/ML decisions and actions, but accept them anyway.
AI Ethics is a complex topic whose importance grows as AI usage increases
AI Policies help you handle the unintended consequences of AI/ML work systematically
AI Governance is about staying in control of systems that “think” and adapt themselves
Today’s business world is becoming familiar with fines for data misuse, CEO dismissals for controversial business practices, and share prices responding to public concerns about corporate behaviour.
AI, Machine Learning and Big Data – particularly using Cloud technologies – are accelerating such changes. Even when companies pay attention to such consequences, the ground can shift quickly, often driven by public opinion; Cambridge Analytica is a case in point.
Our AI Ethics, Policies & Governance services help you stay attuned to such risks and implications, prevent problems from arising where possible, and resolve them at source where not.
AI Ethics Consulting
We use a proprietary framework to identify the areas of your business that may face ethical risks (actual & perceived) because of AI, Machine Learning & Big Data.
We take account of your existing AI/ML plans and aspirations, as well as competitor behaviour, industry trends and technology developments.
Our analysis is presented using a mix of Scenario Planning and Impact Analysis techniques. These provide an objective – and where possible quantified – assessment of your AI Ethics risks, with mitigation strategies and their implications.
This kind of work is best done before problems arise, so that the results can be integrated into your overall AI/ML roadmap, business plans and development processes.
AI Policy Development
There are two broad aspects to our AI Policy work:
Impact of AI/ML on existing policies e.g. HR, pricing
New policies around how you plan, build, deploy and use AI/ML e.g. data security
Impact of AI/ML on existing policies
As with AI Ethics, the consequences of AI/ML work on existing policies may not be apparent until they arise – by which time it may be too late for anything other than damage limitation.
The reasons for bringing in a specialist outside perspective are two-fold:
the consequences are often not where policy makers typically look; and yet
the consequences are also one step removed from what AI teams usually consider
For example, sales incentive policies are designed to encourage certain sales behaviours and reward particular skills. But if AI/ML is used to replicate or enable some of the qualities of the best salespeople, those incentives may no longer be appropriate.
Our AI Policy work involves identifying your existing business policies – some of which may be implicitly built into workflows and systems, rather than explicitly documented. We then classify the possible impact of AI/ML work on them using a standard impact/likelihood framework.
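As a rough illustration of how an impact/likelihood framework works in practice, the sketch below scores each policy risk on two scales and maps the product onto risk bands. The scales, band boundaries and labels here are hypothetical examples for illustration only, not the actual framework used in our work.

```python
# Illustrative sketch of a generic impact/likelihood risk classification.
# All scales and thresholds below are hypothetical assumptions chosen for
# the example, not a description of any proprietary framework.

def classify_risk(impact: int, likelihood: int) -> str:
    """Classify a policy risk, with impact and likelihood each rated 1-5."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must be between 1 and 5")
    score = impact * likelihood  # simple multiplicative risk score
    if score >= 15:
        return "High"    # e.g. act immediately
    if score >= 6:
        return "Medium"  # e.g. plan mitigation
    return "Low"         # e.g. monitor

# Hypothetical example: AI/ML-driven pricing conflicting with an
# existing pricing policy - significant impact, moderate likelihood.
print(classify_risk(impact=4, likelihood=3))  # prints "Medium"
```

In real engagements the scales, bands and scoring rules would be calibrated to the organisation; the value of the framework is that every policy risk is assessed consistently and can be compared and prioritised.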
For many organisations, this is sufficient, and no further work is needed by us. For others, there may be value in us developing a prioritised action plan to reduce these risks. You may even want us to suggest the changes for you.
Policies for AI/ML work
Work on policies for AI/ML activity is generally much more straightforward. It involves starting with your IT development & operations policies, and building on them. This generally extends into related policies such as privacy, security and legal.
In practice, AI policies are best considered alongside AI processes, and risk becoming too abstract to be useful if developed in isolation.
AI Governance is the application of controls and checks to ensure an organisation complies with its obligations and expectations, both internal and external. These are usually required either because individuals may gain by avoiding them (e.g. fraud), or because the consequences of non-compliance are significant (e.g. reputational damage or litigation).
Businesses accept the need to mitigate such risks in many areas, e.g. Finance, but few routinely consider them in the context of AI and Machine Learning (yet). It is, however, becoming mainstream for data, particularly following GDPR and similar legislation.
We approach AI Governance from two perspectives:
Changes required to existing governance mechanisms because of AI/ML
New checks and balances needed for AI/ML, not previously required
Changes to Existing Governance
Governance means different things to different organisations. Its importance also varies with factors such as listing status, regulatory obligations and sector norms.
We therefore first confirm what governance means for you, and the scope of your existing governance mechanisms. This doesn’t just mean corporate governance. It also manifests in places such as project governance, business case monitoring and procurement approval.
The aim of our work is two-fold.
Firstly, we ensure the governance implications of any current or planned AI/ML work are spotted – if required, we can also specify or propose changes
Secondly, we identify likely future AI/ML initiatives and trends with governance implications, and flag the changes to be considered or made as they emerge
This can become a Governance Checklist to incorporate into day-to-day AI/ML practices.
New Governance Mechanisms for AI/ML Work
One of the looming big questions created by AI is how to maintain control of systems that “think” for themselves, and continually adapt how they work.
When AI/ML works well, results are received positively. But if that changes, or there is a dramatic aberration, questions arise that may not be answerable – for example, why the maintenance interval of a failed engine was allowed to extend, or why a batch of graduate job offers was made only to white males.
This problem becomes more insidious with the indirect effects of an AI/ML solution, or when AI/ML is introduced into a small component of a larger business activity.
Our AI Governance work helps you predict some of these wider, longer-term consequences. We identify ways to prevent them from the outset where possible. If not, we help you systematically monitor the right flags to indicate something is no longer as it should be.
Work may be a project- or initiative-based exercise, in which the governance required for that piece of AI/ML work is established.
Alternatively, it may be more appropriate to develop a more generic set of AI Governance guidelines, to incorporate into all AI/ML development work or processes.