AI Now, composed of researchers from Google Open Research, Microsoft Research, and New York University, has issued a list of its top 10 recommendations for AI. Here are two recommendations of note:
2 — Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design. As this is a rapidly changing field, the methods and assumptions by which such testing is conducted, along with the results, should be openly documented and publicly available, with clear versioning to accommodate updates and new findings.
4 — More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR. This research will complement the existing focus on worker replacement via automation. Specific attention should be given to the potential impact on labor rights and practices, and should focus especially on the potential for behavioral manipulation and the unintended reinforcement of bias in hiring and promotion.
A worthwhile read.
I am pleased to announce that my paper, tentatively titled “Countering Bias in Expert HR Systems: A Guide for In-House Counsel,” has been accepted for publication in the International In-house Counsel Journal.
The paper will present a user-friendly guide to understanding and managing the risks posed by expert HR systems. As I’ve argued in this blog, management-side employment counsel must get deep under the hood of expert systems designed to perform evaluative functions on candidates and employees. From procurement to deployment, counsel must be equipped to understand the bias that will likely (unintentionally) creep into algorithmic decision-making and to manage the risk of such bias.
I’ve argued in detail in this blog (here and here) that management-side employment lawyers must get deep under the hood of expert systems designed to perform evaluative functions on candidates and employees (such as expert HR systems). At each step of development — arguably from the earliest design phase — lawyers must be equipped to understand the potential bias that might creep into algorithmic decision-making and help design systems that are as bias-free as possible.
Here are two important articles for readers on the subject (and why they are relevant):
Continue reading “‘Unpacking the Complexities’ of Algorithmic Bias”
I was just at the 2015 ABA National Symposium on Technology and Employment Law and had the privilege of participating in a panel about big data and bias in employment law. Hour for hour, the conference was two of the most valuable days of my legal career.
Continue reading “2015 ABA National Symposium on Technology and Employment Law”
It has been widely reported that EEOC Assistant Legal Counsel Carol Miaskoff, when addressing a conference on big data, shared her belief that employers should be concerned with the disparate impact of their employment-related data mining and analysis.
I am not convinced that she is right. I don’t think disparate impact will be the theory on which plaintiffs successfully attack big data in employment; I think it will be a theory of intentional discrimination through proof of a discriminatory “pattern or practice.”
Continue reading “EEOC, Big Data and Disparate Impact: Barking up the Wrong Tree”
In 2015, the transformation of tech-related employment law will broaden and deepen.
Here are eight reasons why:
Continue reading “8 Trends in the Transformation of Tech-Related Employment Law”
The big data revolution will require employment lawyers who can get “under the hood” of claims driven by big data analytics. Here are 10 questions that can help uncover error and bias in the work of data scientists.
Continue reading “10 Questions: Confronting Allegations Based on Big Data”