Algorithms Are Making Government Decisions. The Public Needs to Have a Say.

Dillon Reisman - Technology Fellow at the AI Now Institute 
Meredith Whittaker - Co-founder of the AI Now Institute
Kate Crawford - Co-founder of the AI Now Institute

APRIL 10, 2018 | 10:00 AM


AI and automated decision systems are reshaping core social domains, from criminal justice and education to health care and beyond. Yet it remains incredibly difficult to assess and measure the nature and impact of these systems, even as research has shown their potential for biased and inaccurate decisions that harm the most vulnerable. These systems often function in opaque, invisible ways that are not subject to the accountability or oversight the public expects.

Consider how a lack of such public oversight hit the New Orleans community. In 2012, the New Orleans Police Department contracted with the data analytics company Palantir to build a state-of-the-art predictive policing system, designed to help the police identify people in the community who are likely to commit violence or to become victims of violence.

The accuracy and usefulness of such predictive policing and “heat mapping” approaches are very much in question. Recent research has demonstrated that predictive policing has great potential to disparately impact communities of color, amplifying existing patterns of discrimination in policing. Other research has raised doubts about whether predictive policing is effective at all.


This controversial and potentially biased system was put in place with no oversight. Until a report in The Verge last month, even members of the New Orleans City Council had no idea what their own police department was doing.

Other jurisdictions are similarly grappling with the lack of oversight over invisible automated systems. Former New York City Council Member James Vacca, sponsor of the legislation forming NYC’s new automated decision system task force, cited his own lack of insight into how the city uses automated decision technologies as the reason for drafting the bill.

This is why we at AI Now released a report on Monday detailing our proposed accountability framework for “Algorithmic Impact Assessments.” AIAs provide a strong foundation on which oversight and accountability practices can be built, by giving policymakers, stakeholders, and the public the means to understand and govern the AI and automated decision systems used by core government agencies.

Algorithmic Impact Assessments would first give the public the basic knowledge it needs through disclosure. Before procuring a new automated decision system, agencies would be required to publicly disclose information on the system’s purpose, reach, and potential impact on legally protected classes of people.
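To make the idea concrete, here is a hypothetical sketch, in Python, of the kind of structured disclosure an agency might publish before procurement. The field names and example values are our own illustration, not a format specified in the AIA proposal.

```python
# Hypothetical sketch of a pre-procurement disclosure record.
# The schema and example values are illustrative assumptions,
# not a format prescribed by the AIA framework.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SystemDisclosure:
    agency: str        # agency deploying the system
    system_name: str   # name of the automated decision system
    vendor: str        # vendor or in-house team responsible
    purpose: str       # what decisions the system informs
    reach: str         # who is affected, and at what scale
    # Potential impacts on legally protected classes of people.
    protected_class_impacts: List[str] = field(default_factory=list)


example = SystemDisclosure(
    agency="Department of Example Services",           # hypothetical agency
    system_name="Eligibility Risk Scorer",              # hypothetical system
    vendor="Example Analytics, Inc.",                    # hypothetical vendor
    purpose="Prioritize benefit applications for manual review",
    reach="All applicants to the benefit program, citywide",
    protected_class_impacts=[
        "Possible disparities by race and disability status",
    ],
)
print(example)
```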

[Video: Does Artificial Intelligence Make Us Less Free?]

Beyond such disclosure, agencies would also be required to provide an accounting of a system’s workings and impact, including any biases or discriminatory behavior the system might perpetuate. Given the many contexts in which these systems operate and the many forms they take, this would be accomplished not through a one-size-fits-all audit protocol, but by engaging external researchers and stakeholders and ensuring that they have meaningful access to the automated decision system, as in the sketch below.
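As a purely illustrative example of one check an external researcher might run once granted meaningful access to a system’s decisions, the following sketch compares the rate of adverse automated decisions across demographic groups. The data format and field names ("group", "flagged") are assumptions of ours; the report does not prescribe any particular audit code, and a large gap between groups is only one signal of possible disparate impact.

```python
# Illustrative sketch: compare adverse decision rates across groups.
# The record format is a hypothetical assumption for this example.
from collections import defaultdict


def adverse_decision_rates(records):
    """Return the share of people flagged by the system, per group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        flagged[r["group"]] += int(r["flagged"])
    return {g: flagged[g] / total[g] for g in total}


# Toy data standing in for decisions released to an external researcher.
decisions = [
    {"group": "A", "flagged": True},
    {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},
    {"group": "B", "flagged": True},
]

# A real assessment would need far more data and context than this.
print(adverse_decision_rates(decisions))  # {'A': 0.5, 'B': 1.0}
```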

These external researchers must include people from a broad array of disciplines and experience. Take, for example, the Allegheny Family Screening Tool, which Allegheny County, Pennsylvania, uses to help assess the risk that a child will face abuse or neglect. Researchers with different toolsets have produced insights into how the tool makes predictions, how employees of the Allegheny County Department of Human Services use the tool to make decisions, and how it affects the people subject to those decisions.

Finally, agencies would need to honor the public’s right to due process. This means ensuring that meaningful public engagement is integrated into every stage of the AIA process, before, during, and after the assessment, through a “notice and comment” process in which agencies solicit public feedback on their assessments. This would give the public a chance to raise concerns and, in some cases, even challenge whether an agency should adopt a particular automated decision system at all. And if an agency fails to adequately complete an AIA, or if harms go unaddressed, the public should have some method of recourse.

In developing AIA legislation, lawmakers will need to address several points. For example, how should external researchers be funded for their efforts? And what should agencies do when private vendors that sell automated decision systems resist transparency? Our position is that vendors should be required to waive trade secrecy claims over the information needed to exercise oversight.

The rise of automated decision systems has already had and will continue to have an impact on the most vulnerable people. That’s why communities across the country need far more insight into government’s use of these systems, and far more control over which systems are used to shape their lives.

Dillon Reisman is a Technology Fellow at the AI Now Institute. 
Meredith Whittaker is a co-founder of the AI Now Institute, a Distinguished Research Scientist at New York University, and the founder of Google’s Open Research group. 
Kate Crawford is a co-founder of the AI Now Institute, Distinguished Research Professor at NYU, a Principal Researcher at Microsoft Research, and a leading scholar of the social implications of data systems, machine learning, and artificial intelligence.

This piece is part of a series exploring the impacts of artificial intelligence on civil liberties. The views expressed here do not necessarily reflect the views or positions of the ACLU.


