Facial recognition is back in the news this week as South Wales Police defends its use of the technology after an office worker claimed it breached his privacy and data protection rights.

The technology in question allows the user to map faces in a crowd and to then compare them with a database of images. This is incredibly useful for police forces and government agencies who can use it to spot suspects, missing people and other persons of interest. 
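In broad terms (and as an illustrative sketch only, not any vendor's actual system), such systems convert each detected face into a numeric "embedding" vector and compare it against a database of stored embeddings by similarity. The function names, toy vectors, and threshold below are all hypothetical:

```python
# Illustrative sketch of watchlist matching via embedding similarity.
# Real systems use learned embeddings with hundreds of dimensions;
# the 3-dimensional vectors here are toy values for demonstration.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(probe, watchlist, threshold=0.9):
    """Return names of watchlist entries whose stored embedding is
    close to the probe (a face captured in the crowd). The threshold
    trades false matches against missed matches."""
    return [name for name, emb in watchlist.items()
            if cosine_similarity(probe, emb) >= threshold]

# Hypothetical watchlist of stored embeddings.
watchlist = {
    "person_a": [0.9, 0.1, 0.2],
    "person_b": [0.1, 0.8, 0.5],
}
probe = [0.88, 0.12, 0.22]  # embedding of a face seen by the camera
print(match_against_watchlist(probe, watchlist))  # → ['person_a']
```

The choice of threshold is exactly where the policy questions bite: set it low and the system flags innocent passers-by; set it high and it misses genuine matches.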

The technology has, however, come under fire from critics who fear it could be put to nefarious uses (e.g. mass surveillance) or that it will negatively impact society (e.g. people avoiding public spaces for fear of being tracked). The claimant in this case also alleges that use of the technology is unregulated.

In the past, similar arguments have been used to question the vast network of CCTV cameras that are now found across the globe.

Whilst we should be concerned about the impact of these new technologies on people's privacy (and regulation is in place to deal with this), it seems extreme to suggest, as some have done, that they should be banned outright simply because they could be used to cause harm.

Throughout history there have been examples of useful inventions being put to more sinister ends than their creators intended. TNT, for example, was originally developed as a yellow dye for clothing, but its explosive properties made it a weapon of choice in both World Wars.

A much more proportionate response would be to ensure that those who build and use the tools are subject to legislative controls and informed by guidance on good governance. This would reduce the likelihood of these technologies being exploited in a harmful way and would ensure that a regime is in place for dealing with those who misuse them.

The challenge is future-proofing legislation when developments are being made at such a rapid pace; it is not easy for legislators to stay up to date with the latest technology. However, if they were to focus on requiring tech companies to be transparent about how the underlying technology works (including the types of data used to train the algorithms and any inherent bias), who is using it, and for what purpose, regulators and the general public could make a much more informed choice about whether it was being used for good.