Every year there are more than 4 million referrals made to child protection agencies across the US.
Call screening is left to each jurisdiction, which follows its own local practices and policies, potentially leading to large variation in how referrals are treated across the country. While linked administrative data are increasingly available, it is difficult for workers to make systematic use of historical information about all the children and adults named on a single referral call. Jurisdictions around the country are thus increasingly turning to predictive modeling
approaches to help distill this rich information. The end result is typically a single risk score reflecting the likelihood of a near-term adverse event.
Yet the use of predictive analytics in the area of child welfare remains highly contentious. There is concern that some communities—such as those in poverty or from particular racial and ethnic groups—will be disadvantaged by reliance on government administrative data. In this talk I will describe some of the work we have done, both in the lab and in the community, as part of developing, deploying, and evaluating a prediction tool currently in use in the Allegheny County Office of Children, Youth and Families.
Keywords: algorithmic fairness, predictive modeling, statistical machine learning tools
Dr. Alexandra Chouldechova is the Estella Loomis McCandless Assistant Professor of Statistics and Public Policy at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy.
Her research investigates questions of algorithmic fairness and accountability in data-driven decision-making systems, with a domain focus on criminal justice and human services. Her work has been supported by organizations including the Hillman Foundation, the MacArthur Foundation, and the NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon. She is a member of the executive committee of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), and previously served as a program committee co-chair of the conference.
Dr. Chouldechova is a 2020 Research Fellow with the Partnership on AI, where she works on understanding the factors that drive racial bias in algorithmic risk assessment tools developed for use in pre-trial, parole, and sentencing contexts. She is also a member of the Pittsburgh Task Force on Public Algorithms.