Employees of Palantir Technologies are not happy with the company's contract with US Immigration and Customs Enforcement (ICE).

Palantir Technologies is facing mounting internal anger after the fatal shooting of Minneapolis nurse Alex Pretti during a US Immigration and Customs Enforcement (ICE) operation reignited scrutiny of the company's work with the agency. Employees have taken to internal channels to question whether Palantir's data and AI tools are helping to enable enforcement practices they view as unethical and dangerous.

The backlash has exposed sharp divisions within the Silicon Valley firm, where staff have long debated the moral cost of government contracts tied to immigration enforcement. According to internal discussions reviewed by WIRED, some employees are now openly demanding greater transparency from leadership, warning that the company's technology risks becoming inseparable from the real-world consequences of ICE operations — including loss of life.

Outrage After Fatal Shooting Sparks Questions Within Tech Firm

The controversy erupted after a Minneapolis ICE operation resulted in the death of Pretti, triggering nationwide protests and calls for de-escalation. The incident has drawn criticism from civil liberties advocates and corporate leaders alike, and has prompted internal debate among Palantir staff about the company's role in supporting enforcement actions.

According to internal Slack messages reviewed by WIRED, employees expressed frustration and concern about Palantir's ongoing work with ICE. 'Our involvement with ICE has been internally swept under the rug ... We need an understanding of our involvement here,' one staffer wrote, while another asked whether the company could exert pressure on the agency to curb its actions.

AI Tools for Enforcement Under Scrutiny

Palantir's relationship with ICE is longstanding: the company has been a major contractor since 2011, providing analytical software used across immigration enforcement functions. Additionally, newer collaborations involve the use of artificial intelligence tools that help ICE process and prioritise public tips submitted through its online systems.

These tools generate high-level summaries, dubbed BLUFs ('bottom line up front'), and assist investigators by quickly analysing incoming tips and translating non-English submissions.

A recently released 2025 Department of Homeland Security (DHS) AI Use Case Inventory — which details all AI applications across DHS components — confirms ICE's deployment of these generative AI capabilities to streamline tip handling. The inventory also mentions another Palantir system, ELITE, which creates maps and dossiers to help identify potential enforcement targets.

Leadership Responds to Employee Concerns

In response to employee pressure, Palantir leadership published an update on an internal company wiki outlining the nature of its work with ICE and other DHS agencies. The document defends the collaboration as a way to improve ICE's 'operational effectiveness' by giving agents better data to make informed decisions.

A company privacy official also emphasised that these capabilities are designed — in Palantir's view — to mitigate risks while enabling targeted outcomes.

The wiki acknowledges the reputational risks of partnering with ICE, including allegations of wrongful detentions and racial profiling, but insists the agency remains committed to avoiding unlawful targeting of US citizens. However, these reassurances have done little to quell disquiet among some employees, many of whom remain sceptical about the broader impacts of the work.

Broader Tech Sector Backlash

The unease at Palantir reflects a wider backlash across the tech industry following the Minneapolis shooting. Prominent tech executives have publicly criticised ICE's methods, with CEOs including OpenAI's Sam Altman and Apple's Tim Cook calling for de-escalation and greater restraint from enforcement agencies.

For employees grappling with these issues, the debate centres on whether Palantir's technology helps promote public safety or enables harmful enforcement practices. 'In my opinion, ICE are the bad guys,' one worker stated in internal messages, encapsulating the ethical dilemma felt by many.

As scrutiny intensifies, the company's leadership faces mounting pressure to justify its partnerships and address employee and public concerns about the future direction of its work with federal authorities.