Cancel DHS Use of AI Technologies for Immigration Enforcement and Adjudication by December 1, 2024
The Advocates, alongside over 140 tech, immigrant rights, labor, civil rights, government accountability, human rights, religious, and privacy organizations, signed a letter to Department of Homeland Security (DHS) Secretary Alejandro Mayorkas calling on the agency to suspend or cancel, by December 1, 2024, the use or development of any artificial intelligence (AI) technologies used in immigration adjudication or enforcement that do not comply with federal guidelines. Read the letter below:
September 4, 2024
Secretary Alejandro Mayorkas
Department of Homeland Security
Washington, DC 20528, USA
Re: Cancel DHS Use of AI Technologies for Immigration Enforcement and Adjudication by December 1, 2024
Dear Secretary Mayorkas,
The undersigned immigrant rights, racial justice, government accountability, human rights, and privacy organizations call on the Department of Homeland Security (DHS) to cancel or suspend, by December 1, 2024, the use of artificial intelligence (AI) technologies for immigration enforcement and adjudication that do not comply with federal requirements for responsible AI.[1]
DHS’s use of AI appears to violate federal policies governing the responsible use of AI, particularly when AI is used to make life-impacting decisions on immigration enforcement and adjudication. Federal rules and policies require DHS, among other things, to consult with impacted communities before using AI tools; monitor AI tools for errors and civil rights violations on an ongoing basis; provide notification, redress, and an opt-out process for those impacted by AI tools; produce a comprehensive public inventory of DHS AI tools; and conduct AI Impact Assessments prior to the rollout of these tools. We have serious concerns that DHS has fast-tracked the deployment of AI technologies in contravention of these federal policies, executive orders, and agency memoranda. The Office of Management and Budget requires DHS to terminate the use of rights-impacting AI tools by December 1 if they fail to comply with these minimum requirements.[2]
The impact and potential harm of DHS’s use of artificial intelligence on U.S. communities is not theoretical. According to reports, including the agency’s own AI inventory and Privacy Impact Assessments, DHS and its subagencies use AI technologies to make critical decisions: whether to deport, detain, and separate families; whether to naturalize someone; and whether to protect someone from persecution or torture.[3] We highlight a number of these concerning AI tools below. To date, the public has received little to no information about these AI tools, such as what training data or algorithms were used to create them and to what extent the agency tracks their civil rights impacts:
• U.S. Citizenship and Immigration Services (USCIS) uses AI to make decisions on immigration benefits and relief. For example, its “Predicted to Naturalize” AI tool helps make decisions on citizenship eligibility, and its “Asylum Text Analytics” AI tool screens and flags asylum and withholding applications as fraudulent. The Fraud Detection and National Security (FDNS) Directorate also has plans to develop an AI tool to classify individuals as fraud, public safety, or national security threats in the immigration adjudication process.
• Immigration and Customs Enforcement (ICE) uses secretive AI technologies for decisions on detention, deportation, and surveillance. For example, ICE appears to use a “Hurricane Score” algorithm[4] to assess whether someone will “abscond” from its Intensive Supervision Appearance Program (ISAP), an assessment that likely impacts the terms of electronic surveillance for nearly 200,000 immigrants subjected to ISAP.[5] ICE also uses a “Risk Classification Assessment” (RCA) algorithm to make detention decisions. During the Trump Administration, ICE faced lawsuits from civil rights groups alleging that it secretly modified the RCA to increase detention and deportation.[6]
• U.S. Customs and Border Protection (CBP) uses AI for biometric surveillance[7] as well as to build out the deadly digital border wall. For example, contradicting DHS’s own statement that it would not use AI to enable “surveillance, or tracking of individuals,” CBP has rapidly expanded its network of AI-enabled surveillance towers, sensors, and systems to track migrants at the border, creating a “funnel effect” that has contributed to more migrant deaths.[8]
The stakes are high: DHS’s latest AI tools impact millions of people in the U.S. Given the historical discrimination, inaccuracies, and complexities of the immigration system,[9] we have serious concerns that DHS’s AI products could exacerbate existing biases or be abused in the future to supercharge detention and deportation.[10] In consideration of federal policy and the human lives at stake, we urge DHS to cancel or suspend the use of any non-compliant AI tool by the December 1, 2024 deadline.
Signed,
Just Futures Law
The Advocates for Human Rights
(Full list of signatories here.)
Endnotes
1. Similarly, we urge the DHS Secretary and the Chief AI Officer not to issue waivers of noncompliance for AI tools used to make critical, rights-impacting decisions on immigration. Under the OMB guidance, DHS may invoke waivers to exempt itself from these requirements. Shalanda D. Young, Memorandum for the Heads of Executive Departments and Agencies: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, Office of Management and Budget (Mar. 28, 2024), at 15, https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf.
2. This includes the Advancing American AI Act, Pub. L. No. 117-263, div. G, title LXXII, subtitle B, §§ 7224(a), 7224(d)(1)(B), and 7225 (codified at 40 U.S.C. 11301 note), https://www.congress.gov/117/plaws/publ263/PLAW-117publ263.pdf; Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Exec. Office of the President (Oct. 30, 2023), https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/. Moreover, DHS’s own 2023 memo on the acquisition and use of AI includes strong language on civil rights and privacy protections to limit surveillance and targeting. See Alejandro N. Mayorkas, Acquisition and Use of Artificial Intelligence and Machine Learning Technologies by DHS Components, U.S. Dep’t of Homeland Sec. (Aug. 8, 2023), https://www.dhs.gov/sites/default/files/2023-09/23_0913_mgmt_139-06-acquistion-use-ai-technologies-dhs-components.pdf.
3. Avi Asher-Schapiro, Trump, armed with tech, could supercharge deportations, Context, Jul. 22, 2024, https://www.context.news/ai/trump-armed-with-tech-could-supercharge-deportations; Monique O. Madan, The Future of Border Patrol: AI Is Always Watching, The Markup, Mar. 22, 2024, https://themarkup.org/news/2024/03/22/the-future-of-border-patrol-ai-is-always-watching; Automating Deportation: The Artificial Intelligence Behind the Department of Homeland Security’s Immigration Enforcement Regime, Mijente and Just Futures Law (June 2024), https://mijente.net/wp-content/uploads/2024/06/Automating-Deportation.pdf; Artificial Intelligence Use Case Inventory, U.S. Dep’t of Homeland Sec., https://www.dhs.gov/data/AI_inventory (last accessed July 25, 2024).
4. Mijente and Just Futures Law, supra note 3.
5. Alternatives to Detention (ATD), Transactional Records Access Clearinghouse (May 4, 2024), https://trac.syr.edu/immigration/detentionstats/atd_pop_table.html.
6. Adi Robertson, ICE rigged its algorithms to keep immigrants in jail, claims lawsuit, The Verge, Mar. 3, 2020, https://www.theverge.com/2020/3/3/21163013/ice-new-york-risk-assessment-algorithm-rigged-lawsuit-nyclu-jose-velesaca.
7. United States of America: Mandatory Use of CBP One Application Violates the Right to Seek Asylum, Amnesty International (May 7, 2023), https://www.amnesty.org/en/documents/amr51/6754/2023/en/.
8. Mayorkas, supra note 2, at 3; Samuel Norton Chambers et al., Mortality, Surveillance and the Tertiary “Funnel Effect” on the U.S.-Mexico Border: A Geospatial Modeling of the Geography of Deterrence, Journal of Borderlands Studies, Vol. 36 (Jan. 31, 2019), https://www.researchgate.net/publication/330786155_Mortality_Surveillance_and_the_Tertiary_Funnel_Effect_on_the_US-Mexico_Border_A_Geospatial_Modeling_of_the_Geography_of_Deterrence.
9. Evidence shows that USCIS disproportionately denied citizenship to Black, Latinx, and Muslim immigrants for years. See, e.g., Emily Ryo and Reed Humphrey, The importance of race, gender, and religion in naturalization adjudication in the United States, PNAS Vol. 119 No. 4 (Feb. 22, 2022), https://www.pnas.org/doi/10.1073/pnas.2114430119. USCIS has also discriminated against Arab, Middle Eastern, Muslim, and South Asian communities by designating them as “national security concerns” in the past. See, e.g., Muslims Need Not Apply: How USCIS Secretly Mandates the Discriminatory Delay and Denial of Citizenship and Immigration Benefits to Aspiring Americans, ACLU SoCal (August 2013), https://www.aclusocal.org/sites/default/files/carrp-muslims-need-not-apply-aclu-socal-report.pdf.
10. Far from being more objective and accurate, AI and machine learning tools often worsen the existing discriminatory practices of a decision maker. See, e.g., Ben Green, The Flaws of Policies Requiring Human Oversight of Government Algorithms, Comput. Law & Sec. Review, Vol. 45 (2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3921216; Lauren Leffer, Humans absorb bias from AI–and keep it after they stop using the algorithm, Scientific American, Oct. 2023, https://www.scientificamerican.com/article/humans-absorb-bias-from-ai-and-keep-it-after-they-stop-using-the-algorithm/; There’s More to AI Bias Than Biased Data, NIST Report Highlights, Nat. Inst. of Standards and Tech., U.S. Dep’t of Commerce (Mar. 16, 2022), https://www.nist.gov/news-events/news/2022/03/theres-more-ai-bias-biased-data-nist-report-highlights; Aaron Sankin, Dhruv Mehrotra, Surya Mattu, and Annie Gilbertson, Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them, The Markup and Gizmodo, Dec. 2, 2021, https://themarkup.org/prediction-bias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates-them.