Will technology eat or aid the independent referral bar?
Ron Paschke1
Introduction
At the fascinating intersection of legal practice and technology lies a controversial question: in the
future, will the work of advocates and barristers be taken over by robots that both cost less and
perform better than human beings?
Ever since the Industrial Revolution, machines have been outperforming humans and taking over their
jobs. In 1930 the English economist John Maynard Keynes warned the world of approaching
‘technological unemployment’.2 The Information Revolution has accelerated the pace of automation.
Up to now this has impacted mainly lower- and semi-skilled jobs, like assembly line workers, bank
tellers, travel agents and secretaries. However, fundamental developments in artificial intelligence (AI)
are said to also threaten higher-skilled and professional jobs.
Artificial Intelligence
Hardly a day goes by without a report about the development of AI and its implications for society.
An example is Go, the ancient Chinese board game. Go is played all over East Asia, where it occupies
roughly the same position as chess does in the West. At the time of writing this paper, AlphaGo, a
computer program developed by Google's DeepMind, has just beaten the world's best human Go player, Lee Sedol.3
This is not the first time that a computer has proved superior at a board game. In 1997 IBM’s Deep
Blue famously beat the best chess player in the world, Garry Kasparov. Modern chess programs are
now better than any human. A computer can use brute force to calculate the best chess move in a
given situation. But a computer winning at Go is a big deal. It was previously thought impossible
because Go is highly strategic and complex. It also has massively more potential moves than chess,4
which means that a computer cannot win at Go just by using the same brute-force method that works
in chess. It has to be smart. AlphaGo uses AI, in particular a technique called deep learning, to develop
its own intuition about how to play. It watched millions of games of Go online to extract features,
principles and rules of thumb – similar to how humans learn.5
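
The technical detail of AlphaGo is beyond the scope of this paper, but the idea of learning move preferences by watching recorded games can be illustrated with a toy sketch. The Python snippet below is purely illustrative and is not DeepMind's implementation: the board encoding, the single linear layer and the randomly generated 'expert games' are stand-in assumptions. It trains a tiny softmax 'policy' to imitate the moves seen in example positions – the same imitation-learning idea at a vastly smaller scale.

    # Toy "policy learning" sketch: imitate moves seen in recorded games.
    # Illustrative only -- a real Go policy network is a deep convolutional
    # model trained on tens of millions of positions; here we use random
    # stand-in data on a small 9x9 board.
    import numpy as np

    BOARD = 9 * 9                     # flattened 9x9 board
    rng = np.random.default_rng(0)

    # Stand-in "expert games": each example is a position and the move played.
    positions = rng.integers(-1, 2, size=(500, BOARD)).astype(float)  # -1, 0, 1 stones
    moves = rng.integers(0, BOARD, size=500)                          # index of the move played

    W = np.zeros((BOARD, BOARD))      # one linear layer; deep nets stack many such layers

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    for step in range(200):           # gradient descent on the cross-entropy loss
        probs = softmax(positions @ W)
        grad = positions.T @ (probs - np.eye(BOARD)[moves]) / len(moves)
        W -= 0.5 * grad

    # The trained "policy" now assigns a probability to every move in a new position.
    new_position = rng.integers(-1, 2, size=(1, BOARD)).astype(float)
    print(softmax(new_position @ W).argmax())   # the move the model "intuits"

A real system stacks many such layers into a deep network, trains on vastly more recorded positions, and then improves further by playing against itself.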
AI may one day lead to ‘superintelligence’, which is defined as an ‘intellect that greatly exceeds the
cognitive performance of humans in virtually all domains of interest’.6
1 Advocate of the High Court of South Africa and member of the Cape Bar. Paper delivered at the ICAB World Bar Conference 2016, Edinburgh, April 2016. [email protected].
2 'This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.' Keynes, John. 'Economic Possibilities for our Grandchildren.' (1930) http://www.econ.yale.edu/smith/econ116a/keynes1.pdf.
3 AlphaGo won a five-game series played in Seoul in March 2016 by four games to one.
4 Go has around 10^170 possible moves, which is more than the number of atoms in the observable universe (10^80). There are approximately 10^47 different possible games in chess.
5 Simonds, Dave. 'Artificial Intelligence and Go: Showdown.' The Economist. 12 March 2016.
6 Müller, Vincent and Bostrom, Nick (forthcoming 2014). 'Future progress in artificial intelligence: A survey of expert opinion', in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library; Berlin: Springer), accessed at http://www.nickbostrom.com/papers/survey.pdf. They explain: 'if we humans could create artificial general intelligent ability at a roughly human level, then this creation could, in turn, create yet higher intelligence, which could, in turn, create yet higher intelligence, and so on … So we might generate a growth well beyond human ability and perhaps even an accelerating rate of growth: an "intelligence explosion".' This is also called superhuman machine intelligence (SMI).
As to when superintelligence will be achieved and what impact it will have, scholars can be divided
into three schools.7 First there are the techno-optimists who believe that the future is accelerating
and that intensifying automation will free up labour for more interesting pursuits and leisure and will
generally bring happy results.8 Then there are the techno-pessimists,9 who envision a jobless future in
which only those who own the machines will live in abundance. The rest will cease to be viable. Elon
Musk has called AI the ‘biggest existential threat’ to humanity, but also thinks the technology’s
development is inevitable.10 Stephen Hawking and others say: ‘Success in creating AI would be the
biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid
the risks.’11
What the first two schools have in common is a belief in rapid technological change. They diverge on
whether its impact will be good for most humans.
The third group says that predictions about the impact of AI are exaggerated, particularly in historical
context.12 Luciano Floridi,13 Professor of philosophy and ethics of information at the University of
Oxford, says that Musk and Hawking are mistaken in fearing AI and that anxieties about superintelligent
machines are scientifically unjustified. Floridi advises us to ‘stop worrying about science fiction’.
A recent survey14 asked the world's AI experts to predict when machines will be able to 'carry out
most human professions at least as well as a typical human' (high-level machine intelligence, HLMI).
Half the experts predict HLMI by 2040 (24 years). A further 40% of experts predict HLMI by 2100
(84 years), while 10% predict HLMI later or never.15 The majority of experts expect that
systems will probably achieve superintelligence within 30 years after reaching HLMI, but 43% put the
chance of that at 50% or less.16
One reason to doubt predictions of the timing of superintelligence is that they are generally
extrapolations from Moore's law, according to which the number of transistors in computers doubles
every two years, delivering greater and greater computational power at ever-lower cost.17 However,
Gordon Moore, after whom the law is named, has himself acknowledged that his generalisation is
becoming unreliable because there is a physical limit to how many transistors you can squeeze into an
integrated circuit.18 In any case, Moore's law is a measure of computational power, not intelligence.
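
To see why such extrapolations yield dramatic numbers, consider the simple arithmetic of an uninterrupted two-year doubling period. The short Python calculation below is illustrative only; the chosen horizons (which happen to match the survey horizons mentioned above) are assumptions for the example, and whether anything like these multiples will materialise is exactly what the physical limits just described call into question.

    # Illustrative arithmetic only: the growth factor implied by an
    # uninterrupted two-year doubling period (Moore's law).
    DOUBLING_PERIOD_YEARS = 2

    def growth_factor(years: float) -> float:
        """Multiple of today's computational capacity after `years` years."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    # 24, 59 and 84 years correspond to the survey horizons discussed above.
    for horizon in (10, 24, 59, 84):
        print(f"{horizon:>2} years -> x{growth_factor(horizon):,.0f}")
    # 10 years -> x32; 24 years -> x4,096; 59 years -> roughly x760 million;
    # 84 years -> roughly x4.4 trillion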
7 Luce, Edward. 'Is robust American growth a thing of the past?' Financial Times. 19 February 2016.
8 An example of techno-optimism is The Second Machine Age (2014) by Erik Brynjolfsson and Andrew McAfee.
9 An example of techno-pessimism is Rise of the Robots (2015) by Martin Ford.
10 Townsend, Tess. 'Why Elon Musk Is Nervous About Artificial Intelligence.' Inc.com. http://www.inc.com/tesstownsend/elon-musk-open-ai-safe.html accessed on 19 March 2016.
11 Hawking, Russell, Tegmark & Wilczek. 'Transcending Complacency on Superintelligent Machines.' Huffington Post. 19 April 2014. (http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html)
12 Luce (note 7) cites the following books as explaining that predictions of the impact of technological change are exaggerated: The Rise and Fall of American Growth: The US Standard of Living Since the Civil War (2016) by Robert Gordon and Economics: The Hamilton Approach to Economic Growth and Policy (2016) by Stephen Cohen and Bradford De Long.
13 Floridi, Luciano. 'Humans have nothing to fear from intelligent machines.' Financial Times. 25 January 2016.
14 Müller and Bostrom (note 6).
15 Those predictions were at a 50% confidence level. At a 90% confidence level, half the experts predict HLMI by 2075 (59 years). Note that this was about 'most professions', not specifically the independent referral bar. See http://www.pt-ai.org/polls/experts.
16 Experts were asked to assign a probability that 'there will be machine intelligence that greatly surpasses the performance of every human in most professions' within 30 years after HLMI. Half of the experts (the median) assigned a 75% probability. 71 out of 165 respondents (43%) assigned a probability of 50% or less.
17 Floridi (note 13).
In 2011, when IBM's Watson computer triumphed over human champions in the quiz show Jeopardy!,
it was a stunning achievement that suggested limitless horizons for artificial intelligence. IBM then tried
to apply Watson in health care. Yet the years since its game show win have proved humbling for
Watson: IBM executives candidly admit that medicine proved far more difficult than they anticipated.
Computer scientists warn that current expectations for AI are 'way ahead of reality'.19
These data and the variation in opinions show that (a) AI experts differ widely in their predictions of
when, if ever, AI and superintelligence will arrive, and (b) estimates within the next 50 years generally do not
carry a high level of confidence. There is certainly no consensus among experts. There is even greater
uncertainty about the impact of AI. This evidence indicates that nobody really knows the timing and impact
of AI, and those who think they know, don't.
Robo-lawyers?
Despite this uncertainty, some writers have made bold predictions that AI will take over the work of
lawyers.20 For example, in The Future of the Professions: How Technology Will Transform the Work of
Human Experts,21 Richard and Daniel Susskind predict that computer systems will provide 'high quality
legal advice and guidance'22 and that 'traditional professions will be dismantled, leaving most (but not
all) professionals to be replaced by less expert people and high-performing systems'.23 They believe
that the work of all professions can be 'standardised' and 'systematised'. Ultimately, they predict – and
advocate – that professionals be eliminated as far as possible and that the public instead access services
from online computers and less-qualified people ('externalisation').24 Susskind and Susskind are,
however, unable to say what type of technology they believe will bring about this change25 or to commit
to time-frames.26
Other academic writers criticise this literature as sensationalist and speculative. In a detailed analysis,
Remus and Levy27 advance three criticisms: First, existing work 'fails to engage with the technical
details … critical for understanding the kinds of lawyering tasks that computers can and cannot
perform'. Second, the writing is 'unmoored from data about how lawyers actually spend and bill their
time'. Third, 'the existing literature fails to take seriously the values, ideals, and challenges of legal
professionalism'.
18 Floridi (note 13); Wagner, Mario. 'After Moore's Law – Double, double, toil and trouble.' The Economist. 12 March 2016 (the operating speed of high-end chips has been on a plateau since the mid-2000s).
19 Lohr, Steve. 'The Promise of Artificial Intelligence Unfolds in Small Steps.' The New York Times. 28 February 2016.
20 McGinnis, John and Pearce, Russell. 'The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services.' Fordham L. Rev. 3041, 3066 (2014); White, Michael. 'Don't be smug, technology is eating middle-class jobs too.' The Guardian. 29 February 2016; Dashevsky, Evan. 'Are Humans Even Necessary?' PC Magazine. February 2016; Groom, Brian. 'Is a robot coming to take your job?' Financial Times. 20 January 2014.
21 Richard Susskind & Daniel Susskind. The Future of the Professions: How Technology Will Transform the Work of Human Experts. Oxford Univ. Press 2015.
22 Susskind and Susskind (note 21) p 186.
23 Susskind and Susskind (note 21) p 303.
24 Susskind and Susskind (note 21) sections 5.3-5.4, pp 195-210.
25 Susskind and Susskind (note 21) say, rather vaguely, that 'increasingly capable machines will, in due course, be capable of generating bodies of practical expertise that can resolve the sort of problems that used to be the sole province of human experts in the professions. Whether this is achieved using Big Data, artificial intelligence, intelligent search, or techniques not yet invented, the machines' ways of working are likely to bear little resemblance to that of human beings.' (p 226)
26 Susskind and Susskind (note 21) p 303. They put no more definite a time for machines outperforming human experts than 'many years from now' (p 277) and 'decades from now' (pp 263, 271, 291).
27 Remus, Dana and Levy, Frank. Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law. 30 December 2015. Available at doi.org/bb28, pp 2-3.
professionalism’. They conclude that predictions of imminent and widespread displacement of lawyers
are premature.28
Sweeping and indiscriminate predictions that computers will one day take over the work of all
professions fail to show an understanding of either the characteristics of particular professions or the
technology that is said to replace them.29 A more nuanced analysis considers the impact on each
profession. To investigate the employment impacts of AI, Carl Frey and Michael Osborne30 examined
over 700 different occupations and distinguished non-automatable jobs (those that require
characteristics such as originality and social intelligence) from automatable jobs. The risk of each job
being automated in the next 20 years was then calculated.31 The big picture is striking: across all occupations,
47% of jobs in the US are at high risk of automation in the next 20 years, as are two-thirds of those
in India and three-quarters in China. However, not all occupations are at equal risk. Those at high risk
of obsolescence include accountants and auditors (94% chance of being automated), umpires and
referees (98%), technical writers (89%) and estate agents (86%). Those at low risk include
microbiologists (1%), civil engineers (2%) and surgeons (0.4%). In the legal field, paralegals and legal
assistants are at high risk (95%) but lawyers are at low risk (4%).32
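
In outline, Frey and Osborne's method is a probabilistic classifier: a subset of occupations is hand-labelled as automatable or not, each occupation is scored on 'bottleneck' attributes such as originality and social intelligence, and the fitted model then assigns an automation probability to every other occupation. The Python sketch below shows only the shape of that calculation; the attributes, scores, labels and the simple logistic model are invented placeholders rather than their data or their actual (Gaussian process) classifier, and the printed probabilities mean nothing.

    # Miniature sketch of the Frey & Osborne approach: score occupations on
    # "bottleneck" attributes, fit a classifier on hand-labelled examples, then
    # read off an automation probability for the rest. All numbers are invented.
    from sklearn.linear_model import LogisticRegression

    # attribute order: [originality, social_intelligence, manual_dexterity]
    labelled = {
        "assembly line worker": ([0.1, 0.1, 0.6], 1),   # 1 = automatable
        "bank teller":          ([0.2, 0.3, 0.3], 1),
        "travel agent":         ([0.2, 0.4, 0.1], 1),
        "surgeon":              ([0.7, 0.8, 0.9], 0),   # 0 = not automatable
        "civil engineer":       ([0.8, 0.6, 0.4], 0),
        "microbiologist":       ([0.9, 0.5, 0.7], 0),
    }
    X = [features for features, label in labelled.values()]
    y = [label for features, label in labelled.values()]

    model = LogisticRegression().fit(X, y)

    # Estimate automation risk for occupations that were not hand-labelled.
    for name, features in {"paralegal": [0.3, 0.4, 0.2],
                           "barrister": [0.9, 0.9, 0.3]}.items():
        risk = model.predict_proba([features])[0][1]
        print(f"{name}: estimated automation probability {risk:.0%}")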
There are a number of features of the independent referral bar which make advocates and barristers
even less vulnerable than lawyers in general.
Why advocates and barristers are at low risk of automation
There are at least six challenges in automating the work of advocates and barristers.
1. Wrong questions: Often a lay client asks the wrong question, or misidentifies the legal issue.
Those clients are unlikely to be properly assisted by a computer capable of answering legal
questions.33 Even a very smart computer will arrive at the wrong conclusion if it is asked the wrong
question.
2. Interests of the client: Our job is not to simply do what the client asks of us, but to identify
and act in the interests of the client. Sometimes the interests are not obvious, especially where
the client does not or cannot articulate them or has competing or even conflicting interests. We
need to draw on our experience to interpret and understand the client’s interest.
3. No right answer: When we do find the right question, there is sometimes no right answer.
Some legal issues are unresolved and text books and precedent provide no guidance. We may
need to innovate and create new arguments by importing legal concepts from one area of the law
to another or by combining existing arguments in novel and persuasive ways. We sometimes
construct arguments from fundamental legal principles, constitutional values, or intangible factors
such as our sense of justice, practical experience and even intuition.
28 Remus and Levy (note 27) p 68.
29 Susskind and Susskind (note 21) seek to dismiss this criticism as 'bluster' (p 43).
30 Frey, Carl Benedikt and Osborne, Michael. The Future of Employment: How Susceptible Are Jobs to Computerisation? 17 September 2013, accessible at: http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf.
31 For a full list of occupations and their automation risk see the NPR web tool based on Frey and Osborne: http://www.npr.org/sections/money/2015/05/21/408234543/will-your-job-be-done-by-a-machine.
32 Frey and Osborne (note 30) p 41. Compare Remus and Levy (note 27), who calculate a 13% potential loss in employment of American attorneys caused by foreseeable new technologies (p 46).
33 It is thought that computers will be able to respond to questions posed in natural language, something like Apple's Siri or another IBM machine, Watson, which won an episode of the TV quiz show Jeopardy!, beating two human players, one of whom had enjoyed a 74-show winning streak.
4. Persuasion not knowledge: There is a misconception that what all professions fundamentally
offer is knowledge.34 However, as advocates and barristers we are employed not for what we
know, but for what we do. The essence of advocacy – what we do – is persuasion, both oral
and written. Computers will need to become more persuasive than humans to replace
us.
5. Difficult problems: The vast majority of routine legal matters do not require counsel. As
forensic specialists, advocates and barristers mainly act in cases which must be argued in court.
Usually these matters are difficult (even for human lawyers) or have no clear answer: the law may be
unclear; unanticipated questions and statements may arise, which in turn require recognising the
context in which words are used; and/or the evidence may be disputed, contradictory, untruthful or
ambiguous.
6. Discretion and judgement: Almost every brief requires the advocate and barrister to exercise
discretion and judgement. Even many humans lack those abilities. It is doubtful whether machines
will acquire them in the foreseeable future.
Note that advocates and barristers must deal with at least one of these difficulties in almost every
case, not only in unusual circumstances. Yet these are not the kind of structured and repetitive tasks
that can be readily standardised, systematised and externalised.35 These matters are simply too
complex or too opaque to be modelled for computers today or in the foreseeable future, using either
deductive or data-driven rules.36
However, postulate a future technology which establishes a ‘post-professional society’ devoid of
human advocates and barristers. What would that society be like?
Possible implications of a post-professional society
Susskind and Susskind regard a post-professional society as ‘desirable’. This view is unsurprising
because they see professionals as 'unaffordable', 'underperforming', 'antiquated', 'disempowering' and
'unaccountable' – 'gatekeepers' to knowledge who ought to be eliminated so that their practical
experience will instead be available online.37 They predict that this will happen in one of two ways. In
one possible future, ‘practical expertise is a shared online resource, freely available and maintained in
a collaborative spirit’. That is ‘a type of commons where our collective knowledge and experience, in
so far as is feasible, is nurtured and shared without commercial gain’. The other possible future is an
‘online marketplace in which practical expertise is invariably bought and sold … by new gatekeepers’.38
The first of those possibilities is naïve and unlikely. It is difficult to see why corporations would invest
billions of dollars39 in developing technologies only to give them away for the common good without
commercial gain.
The second possible future raises disturbing concerns about the rule of law and human rights. After
an AI arms race, the most potent legal technologies could be owned and controlled by a small number
of private corporations.40 Profit-driven, they may form an oligopoly whose services are affordable only
to governments and powerful institutions. Ordinary people, having to rely on cheaper, inferior
technology, would then no longer be able to assert their rights effectively in legal disputes against the
powerful.
34 Susskind and Susskind (note 21) p 193.
35 Susskind and Susskind (note 21) pp 196-197.
36 Remus and Levy (note 27) p 9.
37 Susskind and Susskind (note 21) pp 32-36, 303.
38 Susskind and Susskind (note 21) p 306.
39 In 2015 IBM, Google, Facebook, Microsoft, Apple and others invested $8.5 billion in AI development (Lohr, note 19).
40 See note 39.
In a post-professional society, ordinary people would be unable to call upon independent advocates and
barristers to represent them, no matter the nature of the case or their conduct, opinions or beliefs.
It is true that we currently have unequal access to legal services.41 But a future where all legal
representatives are machines could be even more unequal. The information age has already been
linked to rising inequality.42 The most sophisticated (expensive) AI systems could be orders of
magnitude superior to average (affordable) AI systems. Such disparity would exceed the differences
between today’s best and average human lawyers.
The death of the independent referral bar would also mean the loss of a key institution for the
promotion and upholding of the rule of law.43
Right now, nobody knows whether, in the long term, superintelligent machines will destroy the
independent referral bar. What we do know now is that, given the incalculable risks, we cannot
responsibly conclude that a machine-dominated world would be better than one with advocates,
barristers and other lawyers.
What should we do and not do?
Let us start with what we should not do. We should not panic. We should not accept predictions about
our future which are founded on speculation or on a poor or incomplete understanding of our profession
or of notional technology. We should also not attempt to – and cannot – prevent inevitable
development. We should not become Luddites.
We should immediately embrace and use technology in our practices. There are at least
three reasons for doing so: (1) for the sake of society, (2) in the interests of our clients and (3) for
our own good.
First, only by using technology can we hope to understand it and improve our ability to act on behalf
of society to maximise the benefits and avoid the risks of AI. Second, if a technology enables us to
better advance the interests of our clients then we have a duty to use it. Third, integrating technology
into our practices is the best way to remain relevant, and thereby protect our profession.
Without going into any detail, here are some examples of technologies that we can use now and in
the foreseeable future. Technology tools already in existence which can be usefully and practically
applied in the practice of an advocate or barrister include electronic document management systems,44
digital note taking and audio recording,45 electronic records in court,46 software to aid case analysis47
and collaborative authoring tools which improve efficiency within a legal team.48
41 Susskind and Susskind (note 21) use the analogy that there is a 'Rolls-Royce service for the well-heeled minority, while everyone else is walking' (p 33).
42 Wolf, Martin. 'If robots divide us, they will conquer.' Financial Times. 4 February 2014; Wolf, Martin. 'Enslave the robots and free the poor.' Financial Times. 11 February 2014.
43 For further dangers to the rule of law posed by automation, see Remus and Levy (note 27) pp 67-68.
44 A reliable, basic system can be set up on a personal computer using a folder structure and workflow practice that meet the requirements of (1) simplicity, (2) predictability and (3) consistency. PDFs need to be OCR'd to be made searchable. If stored in the cloud (eg Dropbox) these documents are accessible via mobile devices and can be distributed reliably and quickly via links. Data security can be achieved by encrypting files with a service such as Sookasa. Such a digital document management system has the benefits of making documents portable, available from almost anywhere at any time via mobile apps, searchable, amenable to cut-and-paste edits, archivable with zero physical space consumption, and backed up with access to older versions.
45 For example, Microsoft OneNote.
46 Without requiring any special system or hardware to be installed in the court room, PDFs can be shared with opponents and the court on tablet devices (using apps such as GoodReader), as was done in City of Cape Town v South African National Roads Agency Ltd and Others (6165/2012) [2015] ZAWCHC 135; 2016 (1) BCLR 49 (WCC); [2016] 1 All SA 99 (WCC); 2015 (6) SA 535 (WCC) (30 September 2015), see para 276 (http://www.saflii.org/za/cases/ZAWCHC/2015/135.pdf).
47 Mindmapping software (eg MindManager at www.mindjet.com) is useful for this.
These affordable and relatively simple tools offer significant productivity boosts. Foreseeable future
developments are likely to assist us further. One exciting example: in matters with large electronic data
sets (likely to become increasingly prevalent49), machine-learning software may help us extract evidence
to establish the facts.50 Another promised development is information retrieval technology that uses
natural language processing to improve the speed and quality of legal research.51
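
As a concrete, if greatly simplified, illustration of how such machine-learning review works, the Python sketch below trains a text classifier on a handful of documents a lawyer has already coded as relevant or irrelevant, and then ranks the unreviewed documents by predicted relevance – the core loop of predictive coding. The documents, labels and model choice are invented for the example and bear no relation to Equivio's or any other vendor's actual implementation.

    # Minimal predictive-coding sketch: learn from a lawyer's relevance decisions,
    # then rank unreviewed documents so the likeliest-relevant ones are read first.
    # Documents and labels are invented; real systems use far richer models and data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    reviewed_docs = [
        "email discussing the disputed supply contract and penalty clause",
        "invoice for catering at the year-end staff function",
        "letter terminating the supply agreement for late delivery",
        "newsletter about the office recycling initiative",
    ]
    relevant = [1, 0, 1, 0]   # the lawyer's coding decisions: 1 = relevant

    unreviewed_docs = [
        "memo summarising negotiations over the supply contract amendment",
        "circular about parking arrangements during building maintenance",
    ]

    vectoriser = TfidfVectorizer()
    X_reviewed = vectoriser.fit_transform(reviewed_docs)
    model = LogisticRegression().fit(X_reviewed, relevant)

    # Rank the unreviewed documents by predicted probability of relevance.
    scores = model.predict_proba(vectoriser.transform(unreviewed_docs))[:, 1]
    for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda pair: -pair[1]):
        print(f"{score:.2f}  {doc}")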
Conclusion
While the pace of AI development and its likely impacts are uncertain, some authors predict that
machines will increasingly take over the work of lawyers and will one day replace them. However,
other researchers, more credibly, conclude that lawyers’ jobs are at low risk of automation in the
next 20 years. The specialised work of the independent referral bar places advocates and barristers at
even lower risk than lawyers in general. A hypothetical future in which lawyers are replaced by AI
holds incalculable dangers for human rights and the rule of law because it may make it even harder for
individuals to assert their rights against the powerful.
We need to develop our understanding of technology so that we can be vigilant and be prepared to
act on behalf of society to guard against these potential risks. This is one of the reasons why advocates
and barristers should immediately embrace and use technology. Doing so is also in the interests of
our clients and the best way to future-proof our profession.
Even assuming that AI, at some point, does outperform humans in some aspects of lawyering, that does
not necessarily mean there will be no place for humans. The likely future relationship between humans
and computers is illustrated by the game of chess. While computers can beat any human at chess,
computers are not the best chess players in the world. The current world chess champions are what
we call 'centaurs': a team of a human and a computer. A human and a computer actually
complement each other very well. Computer scientists52 offer an analogy: 'A human can't win a
race against a horse, but if you ride a horse, you'll go a lot further.'
48 For example, Microsoft OneDrive. Co-authoring can take place with a number of individuals working on different computer networks and even in different countries.
49 The amount of data in the world is growing very rapidly. Every day, we create 2.5 quintillion bytes of data. 90% of the data in the world has been created in the last two years alone (http://www-01.ibm.com/software/data/bigdata/what-is-big-data.html).
50 Systems such as Equivio (now part of Microsoft Office 365 Enterprise E5) allow attorneys and solicitors to use predictive coding, applying natural language and machine learning techniques, to more efficiently identify relevant documents in very large data sets. This is currently mainly used for document review in the e-discovery process.
51 For example, Ravel Law (see Marr, Bernard. 'How Big Data Is Disrupting Law Firms And The Legal Profession.' Forbes. 20 January 2016). Ross Intelligence (http://www.rossintelligence.com), which is built on top of Watson, claims: 'You ask your questions in plain English, as you would a colleague, and ROSS then reads through the entire body of law and returns a cited answer and topical readings from legislation, case law and secondary sources to get you up-to-speed quickly. In addition, ROSS monitors the law around the clock to notify you of new court decisions that can affect your case.' The commercial release date of Ross Intelligence – even for America – has not been announced.
52 Domingos, Pedro, author of The Master Algorithm, as quoted by Dashevsky (note 20) p 11.