AI OK?

Can AI COTS ever be OK for government?

(Apologies for the acronym-heavy sub-head. For the purposes of this blog, AI = Artificial Intelligence, COTS = commercial off-the-shelf software. I’m sorry, forgive me, I am a civil servant).


Last week I was approached by a colleague who wanted to understand how they might use AI-driven video interviewing software.

The proposal was that this would help to manage “sifting” for new candidates applying for certain roles within the civil service.

This would sit between the application and interview stages and use AI to score applicants and provide the top candidates to bring to interview [1].

The colleague who brought this to me has identified a genuine business problem which needs solving, and they need support to do so.

As Digital Business Partner, it’s my job to help them find the way to the best solution, within government policy.

Colleagues come to me with all kinds of problems which bring opportunities of all shapes and sizes. In general, there are three paths these could take.

  1. The issue could be fixed by enabling the team to work more effectively using our standard software (we use Google GSuite),
  2. The problem needs more clearly defining, but in doing so, can be solved by the purchase of a new piece of software that meets their need, or,
  3. Their issue is more complex, and requires the development of a Service to be used by civil servants across departments, or by the public.

Generally the distinction between these is clear. The first two tend to be driven by a business need which is confined to a small set of civil servants within my department, while the third has a wider reach, or a public-facing element, which means that it necessarily needs to be driven by user need.

Every so often a stakeholder will request to use a piece of COTS software that’s available on the open market (seeing 2 as the solution when it’s really 3).

This is often the case when the colleague hasn’t yet considered the user need, or the user need is perceived as secondary to the business need. For example:

“This will help to reduce our workload… but it will also help to speed up the experience for the user”

It’s my job to get them to a place where they prioritise user need, so that they can either evidence that need, find alternative ways of meeting it, or disprove their hypothesis.


I’d like to suggest that an implicit but enduring user need for any project delivered by government should always be:

“To feel confident that I’m going to be treated fairly, and to have transparent processes which give me that confidence.”

So if we start with this as a user need, how do we take this forward when we are talking about AI software?

Trust is everything.

With any software being used in this kind of instance, I would generally have a number of questions or concerns:

  • Will any users be excluded because they don’t have the technology to participate?
  • Will this exclude any users with particular needs, such as those who use assistive technologies (like screen readers)?
  • Will users even want to do this? Are they likely to drop out of the process?

But the use of AI also raises further questions, for me:

  • By what factors will the application be judging a successful video interview?
  • Will the AI be looking at physical factors such as body language and facial expressions, as well as the ability to answer questions?
  • If so, could this disproportionately affect certain groups? I’m not just thinking about protected diversity characteristics (though that is a big concern) but also neurodiversity, and introverts versus extroverts?
  • What about presentation skills, regional accents and stammers?


While these potential issues and possible biases are important to understand, this also raises a wider question about responsibility.

Namely, what happens to the trust and integrity of a process if it has been bought in and isn’t “owned” by government?

As a commercial organisation selling AI, it makes sense that a company would want to closely guard their intellectual property. But where would that leave an organisation which is rightly open to scrutiny?


How would an applicant, in this instance, challenge the process if they felt it to be unfair? What would this do to our user need around trust?

How could an organisation prove that a process was fair without understanding the algorithms behind the decision making? Could we, as an organisation, demand transparency around algorithms?

How would procurement activity need to change to ensure that responsibility is understood?


This is the first example of utilising AI which has come my way, but I’m sure it won’t be the last.

I’m also sure that using AI is likely to bring loads of opportunities to increase efficiency and to help improve processes for us as a large organisation. It’s tempting, even slightly seductive. But it also needs to come with additional thought about the long term effects, and it needs to come with conversations about whether we have the right skills to procure properly, to hold ourselves and others to account, and to understand the algorithms which are being used.

Currently I don’t feel like my team are prepared enough for debates around this or equipped with the tools to have conversations around ethics with stakeholders.

I’m keen to understand more, and to speak with anyone working on policy in this area for guidance. I’m particularly interested in whether consideration has been given to how companies offering these services will be able to get onto the Digital Marketplace in the future, or whether any consideration has been given to this in the Digital Service Standard or Spend Controls process.

If you’re working in this area, or just interested, please let me know!


Huge thanks to Dan Barrett and Louise Cato for reading and providing their support for this post, thank you! Extra props go to Ryan Dunn for providing me with a new perspective and a wealth of reading material (which I’ve listed below for anyone who is interested).

[1] I don’t say this because government is immediately planning to use AI in this way, so if you have accidentally stumbled on this blog, please don’t go calling your newspaper of choice with your views. These thoughts are all my own.
