Artificial Intelligence in Law Enforcement Raises Bias and Privacy Concerns


Artificial intelligence systems such as CBP One, a mobile app deployed by border officials at the U.S.-Mexico border to reduce manual data entry and speed up processing, continue to grow in popularity among law enforcement. But critics have raised concerns about civil rights protections and about the AI bias and data privacy risks the technology represents, reports VentureBeat. In May, Senator Edward Markey and Representative Doris Matsui introduced the Algorithmic Justice and Online Platform Transparency Act, which would clamp down on harmful algorithms, require transparency around websites’ content amplification and moderation practices, and launch a cross-government investigation into discriminatory algorithmic processes throughout the economy.

In addition, local bans on facial recognition technology and other AI-related bills or resolutions have been introduced in at least 16 states. Moves like these are steps in the right direction: they would give the Federal Trade Commission broader authority, require impact assessments that consider data sources, bias, fairness, and privacy, and help expand compliance standards and policies. But companies that rely on fundamentally flawed or discriminatory data would find it difficult to comply without endangering their business. And even if established players support a law to prevent AI bias, it isn’t clear what bias looks like in machine learning terms.
