Following on from my last blog post's conclusion that holding algorithms accountable is a bit of a daft idea, I want to thank Richard Nota for his wonderful comment on The Conversation article that Andrew Waites posted in our Slack channel.

But that is exactly what I believe needs to happen: stop anthropomorphising algorithms.

As Theresa puts it, they are part of the infrastructure, and once let loose into the wild they can be extremely inflexible if they were not created with care and managed appropriately.

It's up to the humans to manage the ethical implications of the algorithms in their systems.

Anyway, the article was written by Lachlan McCalman, who works at Data61, and he makes some very good arguments.

He points out that making the fewest mistakes possible does not mean making NO mistakes.

Lachlan describes four kinds of error and how an algorithm can be designed to adjust for each.

**1. Different people, different mistakes**

There can actually be quite large mistakes for different subgroups that offset each other. Minorities are particularly at risk: because there are few examples of them in the data, getting their predictions wrong doesn't penalise the overall results very much.

I know about this already thanks to my favourite, Jeff Larson at ProPublica, and the offsetting errors in the recidivism prediction algorithm: false negatives and false positives for white and black males. I'm sure you can work out who was the false negative (incorrectly predicted will not reoffend) vs false positive (incorrectly predicted will reoffend).
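The ProPublica-style comparison boils down to per-group false positive and false negative rates. Here's a minimal sketch with invented toy data (the group names and numbers are mine, not from the article):

```python
# Per-group false positive rate (FPR) and false negative rate (FNR),
# where 1 = "predicted/actually reoffends". Data below is invented.
def error_rates(y_true, y_pred):
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives  # (FPR, FNR)

groups = {
    "group_a": ([1, 0, 0, 1, 0, 0], [1, 1, 0, 1, 1, 0]),  # skews to false positives
    "group_b": ([1, 1, 0, 1, 0, 1], [0, 1, 0, 0, 0, 1]),  # skews to false negatives
}
for name, (y_true, y_pred) in groups.items():
    fpr, fnr = error_rates(y_true, y_pred)
    print(name, "FPR:", round(fpr, 2), "FNR:", round(fnr, 2))
```

Overall accuracy is similar for both groups, but one group is mostly harmed by false positives and the other by false negatives, which is exactly the offsetting pattern ProPublica reported.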

Lachlan suggests that to fix this, the algorithm would need to be changed to care equally about accuracy for each subgroup.
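Here's one way to picture that suggestion (my formulation, not Lachlan's): average the error per group instead of per person, so a small minority group counts as much as the majority.

```python
# Sketch: why "smallest overall error" hides minority mistakes,
# and how group-balanced averaging exposes them. Numbers are invented.
def overall_error(errors_by_group):
    """Plain average over individuals: a small group barely moves it."""
    all_errors = [e for errs in errors_by_group.values() for e in errs]
    return sum(all_errors) / len(all_errors)

def group_balanced_error(errors_by_group):
    """Average of per-group mean errors: every group counts equally."""
    per_group = [sum(errs) / len(errs) for errs in errors_by_group.values()]
    return sum(per_group) / len(per_group)

errors = {
    "majority": [0.0] * 95,  # 95 people, predicted perfectly
    "minority": [1.0] * 5,   # 5 people, predicted completely wrong
}
print(overall_error(errors))         # 0.05 -- looks fine
print(group_balanced_error(errors))  # 0.5  -- exposes the problem
```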

**2. The algorithm isn’t sure**

Of course, a prediction is just a guess, and there are varying degrees of uncertainty.

Lachlan suggests the algorithm could be designed to give the benefit of the doubt where there is uncertainty.
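In practice that could look something like this (a sketch under my own assumptions; the threshold and the loan-default framing are invented, not from the article): only take the adverse action when the model is confident, and let uncertain cases default to approval.

```python
# "Benefit of the doubt" as a decision threshold: deny only when the
# model is confident. The 0.8 cutoff is an invented example value.
def decide(p_default, adverse_threshold=0.8):
    """p_default: model's estimated probability the applicant defaults."""
    return "deny" if p_default >= adverse_threshold else "approve"

print(decide(0.95))  # confident -> deny
print(decide(0.55))  # uncertain -> approve (benefit of the doubt)
```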

**3. Historical bias**

This one is huge. Of course patterns of bias become entrenched if the algorithm is fed biased history.

So changing the algorithm (positive discrimination, perhaps) to counter this bias would be required.
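One common way to counter biased history (my illustration, not something the article prescribes) is to reweight the training examples so each group contributes equally to whatever model is fitted on them:

```python
# Weight each historical example inversely to its group's frequency,
# so an under-represented group isn't drowned out during training.
from collections import Counter

def balancing_weights(group_labels):
    counts = Counter(group_labels)
    n_groups = len(counts)
    n = len(group_labels)
    return [n / (n_groups * counts[g]) for g in group_labels]

# Invented history: 8 examples from group "a", only 2 from group "b".
weights = balancing_weights(["a"] * 8 + ["b"] * 2)
print(weights[0], weights[-1])  # "a" examples weigh less, "b" examples more
```

The weights sum back to the number of examples, so the overall scale of the training objective is unchanged; only the balance between groups shifts.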

**4. Conflicting priorities**

Trade-offs need to be made when there are limited resources.

Judgement is required; there is no simple answer here.
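Even if there's no simple answer, the trade-off can at least be made explicit. A minimal sketch (entirely my framing, with invented numbers): fold the competing priorities into one objective, where the weight on each priority is the human judgement call.

```python
# The fairness_weight is the judgement call: the algorithm can't pick it.
def combined_objective(accuracy_loss, fairness_gap, fairness_weight=1.0):
    return accuracy_loss + fairness_weight * fairness_gap

# Two candidate models: "a" is more accurate, "b" is fairer.
model_a = combined_objective(accuracy_loss=0.10, fairness_gap=0.30)
model_b = combined_objective(accuracy_loss=0.15, fairness_gap=0.05)
print("pick:", "a" if model_a < model_b else "b")  # pick: b
```

Change the weight and a different model wins, which is exactly why this step is a judgement and not a computation.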

In conclusion, Lachlan proposes there needs to be an "ethics engineer" who explicitly obtains ethical requirements from stakeholders, converts them into a mathematical objective, and then monitors the algorithm's ability to meet that objective in production.
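The monitoring half of that job could be as simple as this sketch (the metric, threshold, and numbers are invented for illustration): keep checking the deployed model against the agreed objective and alert when it drifts out.

```python
# Monitor an agreed fairness objective in production: here, the gap
# between per-group false positive rates must stay within max_gap.
def check_objective(fpr_by_group, max_gap=0.05):
    gap = max(fpr_by_group.values()) - min(fpr_by_group.values())
    return gap <= max_gap, gap

ok, gap = check_objective({"group_a": 0.12, "group_b": 0.21})
print(ok, round(gap, 2))  # objective violated -> raise an alert
```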


And here is Richard's comment in full:

> The ethics is about the people that oversee the design and programming of the algorithms.
>
> A good start would be for people to stop anthropomorphising robots and artificial intelligence.