I’ve argued in detail in this blog (here and here) that management-side employment lawyers must get deep under the hood of expert systems designed to perform evaluative functions on candidates and employees (such as expert HR systems). At each step of development — arguably from the earliest design phase — lawyers must be equipped to understand the potential bias that might creep into algorithmic decision-making and help design systems that are as bias-free as possible.
Here are two important articles on the subject, and why they are relevant to readers:
Nanette Byrnes, writing at MIT Technology Review in "Why We Should Expect Algorithms to Be Biased," argues:
[T]he danger remains that unrecognized bias, not just in the programming of an algorithm but even in the data flowing into it, could inadvertently turn any program into a discriminator. For consumers who are unable to unpack the complexities of these programs, it will be hard to know whether they have been treated fairly.
It’s not just consumers who are often unable to “unpack the complexities.” Employers — and their lawyers — will need to do so as well. As I’ve said, management-side lawyers will need to get deep under the hood (as early, I have argued, as the programming stage) to ensure that the choices made at each stage of system development are as bias-free as possible. Frankly, anything short of this may be tantamount to malpractice.
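To make Byrnes’s point concrete, here is a minimal, purely hypothetical sketch (the features, data, and correlations are all invented for illustration) of how a model that never sees a protected attribute can still reproduce discrimination, because the bias lives in the historical data and leaks in through a facially neutral proxy feature:

```python
# Toy illustration (not any real HR system): a hiring model trained only
# on "neutral" features can still discriminate through a proxy feature,
# because the bias is in the historical training data, not the code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (e.g., membership in a protected class).
protected = rng.integers(0, 2, size=n)

# A facially neutral feature (imagine a zip-code-derived score) that
# happens to correlate with the protected attribute.
proxy = protected * 0.8 + rng.normal(0, 0.5, size=n)

# A genuinely job-related feature, independent of the protected class.
skill = rng.normal(0, 1, size=n)

# Historical hiring decisions were biased against the protected group.
hired = (skill - 1.0 * protected + rng.normal(0, 0.5, size=n)) > 0

# Train only on the neutral features; the protected attribute is
# deliberately excluded from the model's inputs.
X = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X, hired)

# The model nonetheless reproduces the historical bias via the proxy.
preds = model.predict(X)
print(f"selection rate, protected group:  {preds[protected == 1].mean():.2f}")
print(f"selection rate, comparison group: {preds[protected == 0].mean():.2f}")
```

Nothing in that code mentions the protected class, which is exactly why a lawyer who never looks past the feature list will miss the problem.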
Bryce Goodman and Seth Flaxman argue that the EU’s GDPR will
restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affect” users. The law will also create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them.
While Goodman and Flaxman believe this is an opportunity for “machine learning researchers to take the lead in designing algorithms and evaluation frameworks which avoid discrimination,” it is also an “opportunity” for lawyers. What constitutes “discrimination” is quite technical from a legal point of view (particularly in employment matters). To Goodman and Flaxman I say: this is an opportunity to work, at the earliest stages, with lawyers who understand machine learning and who can help you avoid building algorithms and evaluation frameworks that are infected with legally cognizable bias.
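One concrete example of how technical the legal side gets: under the EEOC’s Uniform Guidelines on Employee Selection Procedures, a selection rate for any group that is less than four-fifths (80%) of the rate for the most-selected group is generally regarded as evidence of adverse impact. The sketch below shows that check as code; the applicant counts are invented for illustration, and of course this is a heuristic screen, not legal advice or a full disparate-impact analysis:

```python
# Sketch of the EEOC "four-fifths rule" applied to the output of a
# hypothetical algorithmic resume screen. Counts are invented.

def adverse_impact_ratios(selected: dict[str, int],
                          applicants: dict[str, int]) -> dict[str, float]:
    """Return each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

applicants = {"group_a": 400, "group_b": 400}
selected = {"group_a": 200, "group_b": 120}   # rates: 0.50 vs. 0.30

for group, ratio in adverse_impact_ratios(selected, applicants).items():
    flag = "FLAG: below four-fifths threshold" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A check like this is trivial to run but only meaningful if someone decided, at design time, to log the data needed to run it. That is the kind of choice lawyers should be in the room for.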