Is the DHS using this Unisys pre-crime software?
A press release last week from Unisys gives a disturbing glimpse into the extent to which border guards — possibly including US Customs and Border Protection (CBP) and other components of the US Department of Homeland Security — are making decisions on the basis of automated “pre-crime” predictions of future bad actions or bad intentions.
Unisys describes its “LineSight” (TM) product as,
[N]ew software that uses advanced data analytics and machine learning to … enable border agents to make … on-the-spot decisions about whether to trigger closer inspection of travelers … before admitting them into a country…. The solution [sic] uses advanced targeting algorithms to continuously ingest and analyze high volumes of data from multiple sources and to flag potential threats in near real time. For travelers crossing borders, LineSight assesses risk from the initial intent to travel and refines that risk assessment as more information becomes available – beginning with a traveler’s visa application to travel, reservation, ticket purchase, seat selection, check-in and arrival.
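To make the marketing language concrete, here is a minimal, purely hypothetical sketch of what "assessing risk from the initial intent to travel" and "refining that risk assessment as more information becomes available" could look like in its simplest possible form. Nothing below is based on Unisys code or documentation; every event name, weight, and threshold is invented for illustration.

```python
# Hypothetical illustration only: a running "risk score" that is updated at
# each stage of the travel lifecycle and trips a flag for closer inspection.
# All names and numbers are invented; this is not LineSight.

from dataclasses import dataclass, field

# Invented per-event weights. In a real system of this kind, the equivalent
# values would come from opaque, machine-generated rules.
EVENT_WEIGHTS = {
    "visa_application": 0.10,
    "reservation": 0.05,
    "ticket_purchase": 0.05,
    "seat_selection": 0.02,
    "check_in": 0.03,
    "arrival": 0.05,
}

INSPECTION_THRESHOLD = 0.5  # arbitrary cutoff for "flag for closer inspection"

@dataclass
class TravelerProfile:
    traveler_id: str
    risk_score: float = 0.0
    events: list = field(default_factory=list)

    def ingest_event(self, event_type: str, anomaly: float) -> bool:
        """Fold one lifecycle event into the running risk score.

        `anomaly` stands in for whatever pattern-matching or model output
        scores the event (0.0 = unremarkable, 1.0 = maximally "suspicious").
        Returns True if the traveler should be flagged for closer inspection.
        """
        self.events.append(event_type)
        self.risk_score += EVENT_WEIGHTS.get(event_type, 0.0) * (1.0 + anomaly)
        return self.risk_score >= INSPECTION_THRESHOLD

# The score starts accumulating at the first recorded sign of intent to
# travel -- long before any conduct a court could review.
traveler = TravelerProfile("ABC123")
for event in ["visa_application", "reservation", "ticket_purchase",
              "seat_selection", "check_in", "arrival"]:
    if traveler.ingest_event(event, anomaly=0.8):
        print(f"Flagged at {event}: score={traveler.risk_score:.2f}")
```

Even in this toy form, the essential feature is visible: the "assessment" attaches to a person from the first recorded sign of an intent to travel, not to any act.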
Think about what Unisys is describing: This is not a tool for investigating illegal conduct or prosecuting people who have committed crimes. It presumes that government agencies will be embedded deeply enough in travel industry infrastructure, and will have sufficient surveillance capability, to know as soon as you form an “initial intent to travel”. And it is being marketed to government agencies as a “pre-crime” system alleged to have “pre-cognitive” ability to predict intentions and future actions, and to generate its own algorithms for doing so:
“Many legacy border security solutions identify potentially risky travelers and cargo based on previously known threats – which is kind of like driving a car and only using your rear view mirror,” said Mark Forman, global head of Unisys Public Sector….
LineSight does not depend solely on pre-defined pattern matching rules; it also includes predictive analytics and machine learning that allow the system to learn from experience and automatically generate new rules and algorithms to continuously improve assessment accuracy over time.
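The claim that the system will “automatically generate new rules” is worth pausing on. In generic machine-learning terms (not anything specific to LineSight, whose internals Unisys does not disclose), it amounts to something like the following sketch, in which a model is periodically refit on whatever outcomes the agency records, and the resulting if/then rules change without any human writing or reviewing them. All feature names and data here are made up.

```python
# Hypothetical sketch of "learn from experience and automatically generate
# new rules": retrain a small decision tree on recorded outcomes, and the
# rules applied to travelers change with no human authorship or review.
# All data and feature names are invented; this is not LineSight.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["days_before_departure", "one_way_ticket", "cash_payment"]

def retrain_rules(history: np.ndarray, labels: np.ndarray) -> DecisionTreeClassifier:
    """Fit a fresh tree on accumulated 'experience' (past flags and outcomes)."""
    model = DecisionTreeClassifier(max_depth=3)
    model.fit(history, labels)
    return model

# Fake "experience": rows are past travelers, labels are whatever the agency
# treated as a confirmed problem. Feedback loops in such labels are a known
# failure mode of systems like this.
rng = np.random.default_rng(0)
history = rng.random((200, len(FEATURES)))
labels = (history[:, 1] > 0.7).astype(int)  # spurious pattern the tree will "discover"

model = retrain_rules(history, labels)
print(export_text(model, feature_names=FEATURES))  # the machine-written rules
```

The point of the sketch is simply that the “rules” in such a system are an artifact of whatever data it was last fed, which is exactly what makes them impossible for a traveler to confront or contest.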
Decisions about which travelers should be subjected to more intrusive searches should be made on the basis of probable cause to believe that crimes have been committed, not on the basis of fantasies of “pre-cognitive” pre-crime prediction.
It’s wrong to delegate judicial decisions to administrative agencies, wrong to further delegate those decisions to software ‘bots, and wrong to set those robots loose to make up their own rules to govern which individuals are subjected to searches or other sanctions.