Empowering algorithms to make potentially life-changing decisions about citizens still carries a significant risk of unfair discrimination, according to a new report published by the UK's Centre for Data Ethics and Innovation (CDEI). In some sectors, the need to dedicate adequate resources to making sure that AI systems are unbiased is becoming particularly pressing – namely, the public sector, and specifically, policing. 

The CDEI spent two years investigating the use of algorithms in both the private and public sectors, and found widely varying levels of maturity in dealing with the risks that algorithms pose. In the financial sector, for example, the use of data for decision-making is much more closely regulated, while local government is still in the early stages of managing the issue. 

Although awareness of the threats that AI might pose is growing across all industries, the report found no consistent examples of good practice in building responsible algorithms. This is especially problematic in the delivery of public services such as policing, the CDEI found, because citizens cannot choose to opt out of them.  

Research conducted as part of the report concluded that there is widespread concern across the UK law enforcement community about the lack of official guidance on the use of algorithms in policing. "This gap should be addressed as a matter of urgency," the researchers said. 

Police forces are rapidly increasing their adoption of digital technologies: at the start of the year, the government announced £63.7 million ($85 million) in funding to accelerate the development of police technology programs. New tools range from data visualization technologies to algorithms that can spot patterns of potential crime, and even predict someone's likelihood of re-offending. 

If they are deployed without appropriate safeguards, however, data-driven tools risk producing exactly the kind of unfair, discriminatory outcomes the report warns about.

Read more from our friends at ZDNet