THE LENS
Digital developments in focus

Responsible AI Licences

A common fear in popular culture representations of AI is the rise of killer robots. From Skynet to HAL 9000, the nefarious, destructive AI has become a science fiction trope. While the reality may be far less dramatic, it is now a real possibility that current AI technology will be used in harmful ways in surveillance, profiling and military operations.

A team of AI researchers, academics and lawyers has set out to restrict these harmful applications of AI. The Responsible AI Licenses (RAIL) initiative has developed basic templates for a source code licence and an end-user licence which developers can include with their AI software. By using these licences, "developers/technology providers will have the legal right to prevent undesired applications of their code/software." RAIL licences attempt to tackle the problem posed by the adaptable nature of AI technology: "the same AI tool that can be used for faster and more accurate cancer diagnoses can also be used in powerful surveillance systems".

On their own, these licences are unlikely to be enough to prevent the harmful use of AI. Actors intent on evil will, after all, simply ignore the terms of any licence. The development of such licences is, however, a sign that developers are starting to recognise the need for legal machinery to police AI adequately. Regulation and policy will need to follow. Companies that develop or use AI may find it worth considering whether their legal documentation could be adapted to meet the new challenges AI poses.

Responsible AI Licenses (RAIL) empower developers to restrict the use of their AI technology in order to prevent irresponsible and harmful applications.
