Transcript
Law Society Podcasts:
'The application of the law to machine learning and artificial intelligence'
Catherine Reed:
Hello and welcome to the first in a series of Law Society podcasts. I am Catherine Reed.
We are here speaking to Jonathan Smithers, President of the Law Society of England and
Wales, following a thought leadership event on the issue of the application of the law to
machine learning and artificial intelligence.
So, to begin, Jonathan, what is the background to this event?
Jonathan Smithers:
Well, Catherine, last year, we started a piece of research to identify what the future would
bring for the solicitor’s profession.
The main finding of this research is that we face a future of change. This research
culminated in a report entitled The Future of Legal Services – there are copies available on
our website.
What that report showed was that technology is an area where revolutionary change is
taking place.
There are changes in our laws, our policies, our regulation; our judicial system, our courts,
our tribunals; our practices, our firms and our businesses. All of these are being reshaped
by technology.
So, we are holding a series of thought leadership events on each of these themes to
explore those areas of work.
The latest one that we held was on the legal implications of artificial intelligence with
representatives from the Royal Society, from the Government Office for Science, the
Treasury, and some large firms as well.
Catherine Reed:
Is new law needed for machine learning technology and artificial intelligence?
Jonathan Smithers:
That's an interesting question, isn't it?
The uses and application of artificial intelligence and machine learning are becoming much
more complex and sophisticated. Earlier this year, Amazon announced plans to use flying
drones for deliveries and, just a couple of months ago, the Chancellor of the Exchequer
confirmed that trials of driverless cars will take place in Britain in 2017, and that these
self-driving vehicles will be available on our streets by 2020.
New applications that involve robotic decision-making and machine learning are being
developed rapidly and are already having unintended consequences, which require the
involvement of the solicitor profession.
For example, only last month, a police investigation was opened after a drone allegedly
crashed into a British Airways jet over Heathrow.
Earlier this year, Google's driverless car hit a bus in California. There have been instances
where this has happened previously, but this is the first case in which the driverless car was
deemed to be the cause of the collision.
Now there are differing opinions on whether new law is needed.
Some say that new law is not needed: ‘regulation will stifle innovation’. Technology has no
boundaries, and it would be detrimental and unrealistic to put a harness on it.
Others say that existing regulation could be adapted to respond to modern-day risks, such
as aviation rules, or street signs alerting drivers to people texting as they cross the road.
Another group argues that new law is needed to address the risks posed, not only by
modern technology, but also by the changes in human behaviour that go along with it.
Catherine Reed:
If new law or regulation is needed, should the elements of accountability apply differently to
different types of machine learning technology? I'm particularly referring here to breach of
duty, causation and injury.
Jonathan Smithers:
Well, quite possibly, a differential system of tort liability might be needed to regulate
machine-learning technology.
It could depend on certain factors, such as the type of technology, the level of AI within it,
and the characteristics of the individual engaged with the product.
On the issue of driverless cars, for example, there are a number of legal questions on
liability. Who is responsible in the event of an accident? Is it the person in the car, the
manufacturer, or the owner of the car?
The US Department of Transportation has already said that when it comes to regulating
self-driving cars, computers and software systems could be considered the ‘driver’ of the
vehicle.
So, what parameters will be used for testing these vehicles when they are on our streets
and motorways, possibly as early as next year?
Manufacturers, for example, might have limited liability if something goes wrong, whereas
others could face strict liability, say in the case of a speeding offence.
But there are unanswered questions, such as who will regulate, enforce and control,
especially as technology is global: it crosses boundaries.
Catherine Reed:
What safeguards should be put in place to protect data used by artificial intelligence and
machine learning?
Jonathan Smithers:
The practical applications of AI inevitably raise serious issues of privacy and data
protection.
How will 'big data' (that's your search engine history, your online banking, your medical
history) be collected, used, and stored? For what purposes can that data be accessed,
and by whom? And who is responsible for keeping it safe? How will we handle global data
breaches across different jurisdictions?
There is some domestic and emerging international legislation on data protection, but it's
limited and tends to be reactive. Legislators and regulators have started to consider
‘worst-case scenarios’ on data protection.
I think the legal profession should start answering these questions now, working together
with policymakers and legislators to build a robust framework with appropriate safeguards.
Overall, it is important to remember that the current uses of AI are symptomatic of three
main facts:
Firstly, corporate technology giants, programmers and software engineers are using our
data (we have provided it voluntarily).
Secondly, artificial intelligence and the applications of machine learning are dictating the
way in which law is made.
And thirdly, the legal profession is playing catch-up. That situation must change: we have
to get ahead.
Catherine Reed:
Thank you, Jonathan, for your insight on this thought-provoking area. What is next for the
Law Society on artificial intelligence and the law?
Jonathan Smithers:
Well, it's an exciting area of law that is evolving every day.
On 21 June, the Society is hosting our first ever conference on this theme: 'Lawyers and
Robots: partnership of the future'. We're going to be exploring the issues mentioned in this
podcast in more detail.
You are all welcome. I very much hope you can join us.
Catherine Reed:
Thank you.