Rashida Richardson

Rashida Richardson is a visiting scholar at Rutgers Law School and the Rutgers Institute for Information Policy and the Law, and a senior fellow in the Digital Innovation and Democracy Initiative at the German Marshall Fund. She is scheduled to join Northeastern University in July 2021 as an Assistant Professor of Law and Political Science, with appointments in the School of Law and in the Department of Political Science in the College of Social Sciences and Humanities. Richardson was previously the director of policy research at the AI Now Institute, where she designed, implemented, and coordinated research strategies and initiatives on law, policy, and civil rights. Throughout her career as an attorney, researcher, and scholar, Richardson has engaged in science communication and public advocacy.

Advocacy
In 2017, as legislative counsel for the New York Civil Liberties Union, Richardson spoke with NBC News about the "Textalyzer" device designed to catch distracted drivers, stating, "The first concern is that it gives officers full discretion to decide who to use the Textalyzer against," and "That allows for a lot of bias." In 2018, Richardson spoke at length with The Christian Science Monitor about the impacts and challenges of artificial intelligence, including the lack of transparency with the public about how the technology is used and municipalities' lack of technical expertise to assess how the technology works or whether its results are biased or flawed.

In 2018, as the director of policy research for the AI Now Institute, Richardson spoke with NBC News about facial recognition technology, stating, "There needs to be greater transparency around the use of these technologies and a more open, public conversation about what types of use cases we are comfortable with — and what types of use cases should just not be available." In 2020, Richardson spoke with CBS News about bias problems with facial recognition technology, stating, "If you look at the top three companies [in the field], none of them performs with 100% accuracy. So we're experimenting in real time with real humans."

In 2019, as the director of policy research for the AI Now Institute, Richardson spoke with Reuters about ethics and artificial intelligence, stating that although Amazon.com, Facebook, Microsoft, and others had created the nonprofit Partnership on AI, "There is a real imbalance in priorities" for the companies, and that given "the amount of resources and the level of acceleration that’s going into commercial products, I don’t think the same level of investment is going into making sure their products are also safe and not discriminatory." In 2019, Richardson also spoke with the Detroit Free Press about the increasing use of artificial intelligence systems by governments across the United States, and extended her warnings to Canada when speaking with The Canadian Press.

In 2019, Karen Hao at MIT Technology Review profiled a study led by Richardson at the AI Now Institute that, according to Hao, "has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system." In 2020, Richardson spoke with Hao about the use of predictive analytics in child welfare, including cases where a subjective judgment, such as whether a child has "grit," is factored in; according to Richardson, research shows grit is "a racist construct for measuring success and performance," and its inclusion undermines the accuracy of the prediction. Richardson also spoke with Will Douglas Heaven at MIT Technology Review for articles published in 2020 and 2021 about algorithmic bias in predictive policing programs, stating, "I think many predictive policing vendors like PredPol fundamentally do not understand how structural and social conditions bias or skew many forms of crime data," and that "political will" is needed to address the issues. In 2020, as a visiting scholar at Rutgers Law School and a senior fellow at the German Marshall Fund, Richardson spoke with The New York Times about American police departments' resistance to sharing details about the technologies they use, and about the limited regulation of what technology is used or how, stating, "So we don’t know if they work, and we only find out about them after there’s some harm or risk identified," and "The only thing that can improve this black box of predictive policing is the proliferation of transparency laws."

In 2019, Richardson testified before the U.S. Senate Subcommittee on Communications, Technology, Innovation, and the Internet in a hearing titled "Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms." Before the hearing, Richardson told Politico, "Government intervention is urgently needed to ensure consumers - particularly women, gender minorities and communities of color - are protected from discrimination and bias at the hands of AI systems," and "Congress needs to enact legislation that injects transparency, accountability, and auditing requirements into an industry shielded by trade secrecy and motivated by the bottom line." In 2020, Richardson was featured in the documentary film "The Social Dilemma," directed by Jeff Orlowski and distributed by Netflix, which focuses on social media and algorithmic manipulation.

Career
Before joining the AI Now Institute, Richardson served as legislative counsel at the New York Civil Liberties Union and as a staff attorney for the Center for HIV Law and Policy. She previously worked at Facebook and at HIP Investor in San Francisco.

In March 2020, she joined the advisory board of the Electronic Privacy Information Center (EPIC).

Education
Richardson earned a BA with honors from the College of Social Studies at Wesleyan University and a JD from Northeastern University School of Law. She interned with Judge Charles R. Breyer of the US District Court for the Northern District of California, at the law firm of Cowan, DeBeats, Abraham & Sheppard, and at the Legal Aid Society.

Selected Works

 * Richardson, R., & Cahn, A. F. (February 5, 2021). "States are failing on big tech and privacy — Biden must take the lead." The Hill.
 * Richardson, R., & Kak, A. (September 11, 2020). "It’s time for a reckoning about this foundational piece of police technology." Slate.
 * Kak, A., & Richardson, R. (May 1, 2020). "Artificial intelligence policies must focus on impact and accountability." Centre for International Governance Innovation.
 * Richardson, R. (December 15, 2019). "Win the war against algorithms: Automated Decision Systems are taking over far too much of government." New York Daily News.
 * Richardson, R. (ed.) (December 4, 2019). "Confronting black boxes: A shadow report of the New York City Automated Decision System Task Force." New York: AI Now Institute.
 * Richardson, R., Schultz, J. M., & Southerland, V. M. (2019). "Litigating algorithms 2019 US report: New challenges to government use of algorithmic decision systems." New York: AI Now Institute.
 * Richardson, R., Schultz, J. M., & Crawford, K. (2019). "Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice." New York University Law Review.
 * Richardson, R. (December 12, 2017). "New York City Takes on Algorithmic Discrimination." NYCLU.