Just published in the International In-house Counsel Journal! Full text here. Print edition available in April 2018.
People analytics – the focusing of so-called big data on human resources – is a tool for finding the factors that correlate with workplace success. Done right, people analytics can be an extraordinarily powerful tool for testing, understanding and advancing the practice of human resources. Done incorrectly, it can bake bias right into the core of your human resources activities.

The story told about the future of people analytics is the stuff of utopian writing: that computers will take massive amounts of data (say, the history of your company's hiring to date and the success, or lack of success, of candidates) and derive from that data a way to select the best candidates for the job. And because these processes are based on data, math and formulas, we are led to believe that the results will be free from the usual bias associated with hiring. Put more succinctly, people analytics should help us learn from the past to determine who will be more likely to succeed and who will be more likely to fail.

Unfortunately, the truth is quite different. With smart lawyering, however, we can create better systems and help our organizations better understand (and thus manage) the risk associated with people analytics.
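The "learning from the past" problem described above can be made concrete with a toy sketch. All of the data below is invented for illustration: suppose past hiring favored candidates from school "X", and a naive frequency-based score is "learned" from that history. The model simply reproduces the historical preference, regardless of candidate merit.

```python
# Toy sketch (all data invented) of how a model trained on historical
# hiring decisions can bake the bias of those decisions into its scores.
from collections import Counter

# Invented history: past hires, recorded by school attended.
past_hires = ["X", "X", "X", "X", "Y"]
history = Counter(past_hires)

def score(candidate_school: str) -> float:
    """Score a candidate by how often their school appears among past hires."""
    return history[candidate_school] / len(past_hires)

print(score("X"))  # 0.8 -> strongly favored, because past hiring favored X
print(score("Y"))  # 0.2
print(score("Z"))  # 0.0 -> never hired before, so never recommended
```

The point is not that any vendor's system is this crude, but that any model optimized against historical outcomes inherits whatever bias shaped those outcomes.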
AI Now, composed of researchers from Google Open Research, Microsoft Research, and New York University, has issued a list of its top 10 Recommendations for AI. Here are two recommendations of note:
2 — Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design. As this is a rapidly changing field, the methods and assumptions by which such testing is conducted, along with the results, should be openly documented and publicly available, with clear versioning to accommodate updates and new findings.
4 — More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR. This research will complement the existing focus on worker replacement via automation. Specific attention should be given to the potential impact on labor rights and practices, and should focus especially on the potential for behavioral manipulation and the unintended reinforcement of bias in hiring and promotion.
A worthwhile read.
I am pleased to announce that my paper, tentatively titled “Countering Bias in Expert HR Systems: A Guide for In-House Counsel,” has been accepted for publication in the International In-house Counsel Journal.
The paper will present a practical, user-friendly guide to understanding and managing the risk from expert HR systems. As I've argued in this blog, management-side employment counsel must get deep under the hood of expert systems designed to perform evaluative functions on candidates and employees. From procurement to deployment, counsel must be equipped to understand the bias that will likely (unintentionally) creep into algorithmic decision-making and to manage the risk of such bias.
I’ve argued in detail in this blog (here and here) that management-side employment lawyers must get deep under the hood of expert systems designed to perform evaluative functions on candidates and employees (such as expert HR systems). At each step of development — arguably from the earliest design phase — lawyers must be equipped to understand the potential bias that might creep into algorithmic decision-making and help design systems that are as bias-free as possible.
Here are two important articles for readers on the subject (and why they are relevant):
Continue reading “‘Unpacking the Complexities’ of Algorithmic Bias”
I was just at the 2015 ABA National Symposium on Technology and Employment Law and had the privilege of participating in a panel about big data and bias in employment law. The conference was, hour for hour, two of the most valuable days of my legal career.
Continue reading “2015 ABA National Symposium on Technology and Employment Law”
It has been widely reported that EEOC Assistant Legal Counsel Carol Miaskoff, when addressing a conference on big data, shared her belief that employers should be concerned with the disparate impact of their employment-related data mining and analysis.
I am not convinced that she is right. I don't think disparate impact will be the theory on which plaintiffs successfully attack big data in employment — I think it will be a theory of intentional discrimination through proof of a discriminatory "pattern or practice."
Continue reading “EEOC, Big Data and Disparate Impact: Barking up the Wrong Tree”
In 2015, the transformation of tech-related employment law will broaden and deepen.
Here are eight reasons why:
Continue reading “8 Trends in the Transformation of Tech-Related Employment Law”
The big data revolution will require employment lawyers who can get “under the hood” of claims driven by big data analytics. Here are 10 questions that can help uncover error and bias in the work of data scientists.
Continue reading “10 Questions: Confronting Allegations Based on Big Data”
The prices of data storage and sophisticated analytics are dropping. This will herald a sea change in how employment cases are litigated, including both class actions (initially) and individual cases (eventually). Employment lawyers will need to:
- be involved in data governance;
- apply basic statistical methods to the results of data mining; and
- get deep “under the hood” of the data scientist’s work product.
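One basic statistical method lawyers will encounter is the EEOC's "four-fifths" rule of thumb: a selection rate for any group that is less than 80% of the highest group's rate is generally regarded as evidence of adverse impact. The sketch below, using invented applicant-flow numbers, shows how simple the arithmetic is.

```python
# Hedged illustration of the EEOC "four-fifths" (80%) rule of thumb.
# All applicant and selection counts below are invented for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_ratio(rates: dict) -> float:
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

# Invented applicant-flow data for two groups.
rates = {
    "group_a": selection_rate(48, 80),   # 0.60
    "group_b": selection_rate(24, 60),   # 0.40
}

ratio = four_fifths_ratio(rates)
print(f"impact ratio: {ratio:.2f}")
print("possible adverse impact" if ratio < 0.8 else "within guideline")
```

The rule of thumb is only a starting point — statistical significance and practical significance matter too — but counsel who can run this arithmetic can sanity-check a data scientist's work product.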
Continue reading “Big Data: Critical Concerns for Employment Lawyers”