Can robots be prosecuted?

Today, sophisticated algorithms powered by artificial intelligence are being used in our justice system.

The Home Office’s Modern Crime Prevention Strategy highlights the crucial role that technology will play in the future of modern crime prevention. While these capabilities have the potential to enhance various aspects of criminal justice decision-making, they also present a number of complex ethical and legal considerations. The House of Lords report ‘AI in the UK: ready, willing and able?’, published in April 2018, explained that ‘while AI-specific regulation is not appropriate at this stage… existing regulators are best placed to regulate AI in their respective sectors’.

 

Current use in justice system

  • The use of artificial intelligence and algorithms has the potential to enhance various aspects of criminal justice decision-making
  • Currently used in predicting and solving crimes, for example in crime detection, surveillance, determining the chances of success of a prosecution, the granting of bail, and pilot use in predicting reoffending
  • Predictive mapping to predict and target where crime is most likely to occur
  • Predictive analytics to determine chances of reoffending
  • Big data collected from a variety of online locations to develop a preventative strategy
  • Predicting the outcome of court cases with 76% accuracy, useful for rapidly identifying patterns in cases that lead to certain outcomes
  • Greater use of automation to help lawyers undertake their work, such as in property conveyancing

 

Robots gone wrong

  • 300-Pound Security Robot Runs Over Toddler At California Shopping Center
  • Swiss police release robot that bought ecstasy online
  • Tesla Motors was absolved of criminal responsibility in the US after a driver was killed in a crash while the car was on autopilot
  • PredPol, piloted in policing, has generated controversy after research showed it appeared to be repeating systematic discrimination against black and ethnic minority offenders. COMPAS, used in the US, has received similar criticism

A pilot system evaluating the risk of reoffending uses personal data such as past convictions, age, location and characteristics to determine whether there is a low, medium or high risk of reoffending. However, the data was limited because it came from a small local pool, which meant those not on the database could not be assessed.
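The kind of risk-banding logic described above can be sketched in a few lines of code. This is a purely hypothetical illustration: the feature names, weights and thresholds below are invented for the example, not taken from any real pilot system, which would instead be trained on (limited, local) historical data.

```python
# Hypothetical sketch of a reoffending-risk banding tool.
# Weights and thresholds are invented for illustration only.

def risk_band(past_convictions: int, age: int, known_to_database: bool) -> str:
    """Return 'low', 'medium' or 'high' risk, or 'unknown' if the
    person is not in the local data pool (the limitation noted above)."""
    if not known_to_database:
        # Small local pool: those not on the database cannot be assessed.
        return "unknown"
    # Combine personal features into a single score (invented weights).
    score = 2 * past_convictions - 0.1 * age
    if score >= 5:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

print(risk_band(4, 30, True))   # repeat offender
print(risk_band(0, 50, True))   # no past convictions
print(risk_band(3, 25, False))  # not on the database
```

Even this toy version makes the legal questions concrete: the banding depends entirely on which features and weights were chosen, and anyone outside the data pool simply cannot be scored.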

 

Problems

  • Who is culpable and liable when a robot goes haywire?
  • How reliable is a robot in making predictions and judgements? How will we ascertain the criteria and factors it has used to come to its decision?
  • Could robots be biased? This will depend on what data has been fed into them and how the machine has learned and interpreted that data
  • To what extent can robots replace what humans do and how much control should there be over robots and their functions?
  • The study and practice of law is presented with these novel challenges. The law is a living instrument, to be interpreted and applied in light of societal needs and modern times, so how will we ensure justice, fairness, responsibility, accountability, safety, peace and good values with AIs? One of the functions of our legal system is to regulate the behaviour of legal persons and to punish and deter offenders. It also provides remedies for those who have suffered, or are at risk of suffering, harm.
  • So can our existing laws offer the necessary justice when it comes to harm from robots?
  • At the moment, there are no specially designed laws which apply to the liability and responsibility of AIs

 

We take a look at criminal law in this post.

 

Laws apply to legal persons

  • ‘Legal person’ is a general term used in law to describe an entity which has rights and responsibilities recognised by the law, allowing it to sue others, be sued for a breach, mistake or criminal offence, and enter into contracts
  • Human beings are legal persons
  • Companies and corporations are considered to be legal persons for the purposes of establishing liability and legal rights, although, unlike humans, they are not capable of having a conscience, a personality or the capacity to experience emotions
  • But the problem arises when the machines themselves make decisions of their own accord – when they become capable of learning from their successes and mistakes and begin to apply that learning in new situations (known as machine learning)
  • Here, we are interested in whether machines themselves are considered to be legal persons. Not yet: the law is yet to recognise robots as legal persons. The Law Commission is tasked with ensuring that the law is clear on accountability, as a way of establishing where fault lies when robots go wrong

 

Elements of establishing a crime

Criminal law has two important concepts:

  • the action of a criminal act, or an omission to act (the actus reus); and
  • the intention of causing harm, or knowledge that harm was a likely consequence of the action (the mens rea)

 Actions:

  • A driverless car, for example, clearly has the potential to harm, kill or damage
  • It could be the case that the AI programme itself had acted wrongfully because of a mistake in its design or misuse by the user
  • Criminal act – for example, an artificially intelligent robot in a Japanese motorcycle factory that killed a human worker
  • Generally, when establishing liability, we ask whether anyone else could have been involved, such as whether the designer or programmer was aware of the possibility of the machine’s behaviour

 Intention - known as the guilty mind:

  • It may be the case that a robot has committed a criminal act or omission, but how do we know the robot intended to do what it did?
  • What would intention look like in a robot’s mind?

 

Defence to a crime

  • Could a robot that is malfunctioning claim a defence similar to the human defence of insanity?
  • Could a robot infected by an electronic virus claim defences similar to coercion or intoxication?
  • The law lacks clarity here, although for now it may interpret the malfunctioning and erroneous decision-making of robots as a fault lying in their design, programming or manufacturing
  • This is inadequate where intelligent machines are capable of learning from their successes and errors to improve

 

Punishment

  • Once guilt is established for a crime, a sentence is pronounced.
  • What form would this take?
  • So criminal law goes only some way towards embracing the disruption of automation and artificial intelligence technology

 

In the coming years, there will be many of these conversations to be had within our legal system. Join in the conversation and tell us what you think:

  • What algorithms do you know of that are currently in use in the legal system of England and Wales?
  • What are some of the benefits that can be derived from the use of algorithms?
  • What are some of the dangers?
  • Should there be new laws drafted specifically to regulate the design of AI?
  • Should AIs have rights protected by the law?
  • Should there be new laws drafted specifically to allow AIs to be prosecuted for crimes committed by their actions?

 
