
Artificial intelligence: a challenge for jurists

6 September 2019

Let’s start with a quick logic problem.

“All law students are nearsighted.

  Some nearsighted people cannot tolerate contact lenses.

  Therefore, some law students cannot tolerate contact lenses.”

Is this reasoning correct? You only have three seconds to answer the question.

Time’s up. What’s your answer? With only three seconds to think about it, I am pretty sure that most readers would say “yes, the reasoning is correct”.

The answer, however, is that the reasoning is not correct, at least from the standpoint of logic.

To figure it out, we probably need a little more than three seconds. The first premise (or proposition) states that all law students are nearsighted. Consequently, every single law student falls within the universe of nearsighted people. The second premise tells us that some nearsighted people cannot tolerate contact lenses. That is, within the universe of nearsighted people, there is a group that cannot tolerate contact lenses.

If these are our two premises, the suggested conclusion is not correct, because these two statements could coexist with a situation in which the group of nearsighted people who cannot tolerate contact lenses does not include any law students – all of whom, as we know, are nearsighted. If the two premises are true, they are compatible with situations in which zero law students can tolerate contact lenses, in which only some law students can tolerate contact lenses, or even in which all law students can tolerate contact lenses. Consequently, the conclusion given above does not necessarily follow from the premises. Rather, it is a syllogistic fallacy or false reasoning: it appears to be a correct syllogism, but it is not.

In fact, a clever trap was laid by putting what should be the minor premise (the one that contains the subject of the conclusion) first and putting what is really the major premise (the one that contains the predicate of the conclusion) second. When we read a seemingly universal premise first (“all law students are…”), the reasoning seems even more like a correct syllogism. But if we switch the order of the two premises (some nearsighted people cannot tolerate contact lenses; all law students are nearsighted; therefore…), it is perhaps easier to see where the logic breaks down.

To get an even clearer picture, we can test the problem against a visual depiction. First, we would draw a rather large circle that represents all the nearsighted people in the world. A smaller circle would represent all the law students in the world. We would have to draw this small circle entirely within the confines of the larger circle of nearsighted people (because our reasoning is based on the premise that all law students are nearsighted). Lastly, we need to draw a third circle, the one of greatest interest to us here, representing all the people who cannot tolerate contact lenses. How and where do we draw this circle? There are actually several possibilities. It must overlap in some way with the circle of nearsighted people, because some nearsighted people have the quality of not tolerating contact lenses. But once this requirement is met, there are still several possibilities. I encourage you to grab a pen and paper and play around with some set theory, drawing all the possible combinations of circles that are compatible with the two premises (or let a short program enumerate them, as in the sketch below).
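
For readers who would rather let a computer do the drawing, here is a minimal Python sketch (entirely my own; the variable names and the three-person universe are illustrative assumptions, not anything from this post). It brute-forces every assignment of the three properties over a tiny universe and finds a “world” in which both premises are true and the conclusion is false – a countermodel, which is all it takes to prove the syllogism invalid.

```python
from itertools import product

# A tiny universe of three individuals. For each person we decide three
# yes/no properties: A = "is a law student", B = "is nearsighted",
# C = "cannot tolerate contact lenses".
PEOPLE = range(3)

def counterexamples():
    """Yield worlds where both premises hold but the conclusion fails."""
    for world in product([False, True], repeat=3 * len(PEOPLE)):
        A = world[0::3]  # law student?
        B = world[1::3]  # nearsighted?
        C = world[2::3]  # cannot tolerate contact lenses?
        premise1 = all(B[i] for i in PEOPLE if A[i])      # all A are B
        premise2 = any(B[i] and C[i] for i in PEOPLE)     # some B are C
        conclusion = any(A[i] and C[i] for i in PEOPLE)   # some A are C
        # Require at least one law student so the counterexample
        # is not vacuously true.
        if any(A) and premise1 and premise2 and not conclusion:
            yield list(zip(A, B, C))

# One counterexample is enough to show the conclusion does not follow.
print(next(counterexamples()))
```

A typical counterexample it finds contains a nearsighted non-student who cannot tolerate lenses and a nearsighted law student who can – precisely the situation described in the paragraph above.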

And there is still another way to approach the problem: by simplifying or “formalizing” our language. That is, we can replace the words “all law students are nearsighted” with “all A are B”; the words “some nearsighted people cannot tolerate contact lenses” (or, here, “some nearsighted people are intolerant of contact lenses”) with “some B are C”; and the conclusion with “therefore, some A are C”. We can even write this in a more sophisticated way, as “for all x, if x is A then x is B” (∀x (A(x) → B(x))). The key, though, is that we have replaced terms that have a specific meaning, which referred to certain sets or types of objects or realities existing in the world (law students, nearsighted people, people who cannot tolerate contact lenses), with symbols (A, B, C) that can be used to refer to any set of objects or realities. We could say that we have eliminated all the “semantics” of our reasoning and have been left only with the “syntax”: with the position of subject or predicate that each term (A, B, C) occupies in each premise and with certain “quantifiers” that determine the scope of the terms (some, all).

Reducing reasoning to its syntax – or, in other words, its formal structure – allows for a purely formal “calculation” whereby we can verify whether the reasoning is correct. According to this calculation, a structure of the form “all A are B, some B are C, therefore some A are C” is clearly not valid, whereas a structure of this other type, “all A are B, all B are C, therefore all A are C”, is indeed valid. And here is the key: this holds no matter what terms the letters A, B and C symbolize. We simply have to restate the natural language expressions we regularly use, transforming them into premises made up of standardized formal symbols, which allow us to streamline and “calculate” our reasoning.
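
Written out in standard notation, the contrast between the two structures looks like this (the block below is simply a restatement of the paragraph above, where ⊢ means “validly entails” and ⊬ means “does not entail”):

```latex
% Invalid form (our syllogism): the conclusion does not follow.
\forall x\,(A(x) \rightarrow B(x)),\;\; \exists x\,(B(x) \land C(x))
\;\not\vdash\; \exists x\,(A(x) \land C(x))

% Valid form (the classical "Barbara" syllogism): the conclusion does follow.
\forall x\,(A(x) \rightarrow B(x)),\;\; \forall x\,(B(x) \rightarrow C(x))
\;\vdash\; \forall x\,(A(x) \rightarrow C(x))
```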

Formalization, computer science and artificial intelligence

This way of formalizing and analyzing human reasoning is called “logic” or “formal logic” and began with the work of Aristotle back in the 4th century BC. This same logic, adapted somewhat (in particular, what is known as Boolean algebra, in which the variables are true/false, yes/no, 0/1 binary values), underpins the computer science we use today: our computers are in essence logical machines, devices that calculate or compute using highly formalized languages and that, ultimately, manipulate symbols.
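
As a toy illustration of that Boolean machinery (a sketch of my own, not taken from any particular system), here is how a program can verify a propositional inference – modus ponens, “if p then q; p; therefore q” – simply by checking all four 0/1 assignments of its variables:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# An inference is valid iff no assignment of truth values makes every
# premise true while the conclusion is false.
modus_ponens_valid = all(
    not (implies(p, q) and p and not q)
    for p, q in product([False, True], repeat=2)
)
print(modus_ponens_valid)  # True: the conclusion follows in every case
```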

Computer science, in turn, is the technological footing for work in artificial intelligence (AI), a topic I will begin to cover in this post. This undertaking – the design and building of machines that mimic human thought – is one of the most salient issues of our time, if not the most important one that all of us are dealing with today. I say “all of us” because even though our scientists, engineers and computer programmers are in a position to spearhead the work, the development of AI pervades the everyday existence of each and every one of us, and has the power to radically affect what we do and how we go about our lives.

Jurists and artificial intelligence

For jurists in particular, there is no escaping the issue of artificial intelligence, for two reasons.

For one, because the development of ever “smarter”, more capable and more autonomous programs, systems and devices will continue to reshape our social panorama and give rise to unprecedented conflicts and thorny problems from both the ethical and the legal perspectives. Lawmakers and jurists alike will need to analyze these situations deeply and adapt to the new realities. To name just one: accidents caused by self-driving cars. Who do we sue: the manufacturer? The owner? The State, for not maintaining proper road signs? The car itself, personified and endowed with its own fund to cover liabilities? Numerous questions also arise with respect to completely automated healthcare and diagnostic systems, and of course autonomous military and police weapons that can make decisions and carry out lethal attacks without human intervention. There is another question – political-economic in scope but also touching on tax law and on labor and social security law – that we will also have to deal with: what should we do with the growing number of people who will be pushed out of the job market as their work is mechanized and automated? (Who will pay for retraining programs? Should companies that replace humans with machines be taxed differently? What about those that manufacture robots? Should the robots themselves be taxed? Should we implement a universal basic income?) In short, these are new realities – and many of the coming ones are going to be very, very groundbreaking – and they will require new laws.

The second AI-related issue affects us jurists even more specifically and personally: can machines eventually take over our job, which is essentially to apply rules, standards or general criteria to resolve specific cases and which has been done in virtually the same way in all the centuries since the Code of Hammurabi?

Could it happen? Should it? A person would no longer be sent to jail by a human judge, and appealed traffic tickets would no longer come across the desk of a human government worker. Instead, these things would be handled by a computer program that can resolve matters (thousands of them) objectively and using strict algorithms, without cognitive bias and at electronic speed, never tiring or needing coffee breaks or a vacation.

We have to ask whether electronic judges will end up replacing flesh-and-blood judges – the million-dollar question raised by artificial intelligence in the legal realm – but also whether electronic expert systems will provide legal advice, design litigation strategies and draft or even negotiate contracts. While I was writing this post, news broke that the Pluribus bot, developed by Carnegie Mellon University with the support of Facebook (and of the United States Army Research Office, which is somewhat concerning), beat five of the world’s best players at six-player poker. This is nothing short of the ability to operate successfully in an “imperfect-information” environment, where players bluff and have to call others’ bluffs. What the bot did goes beyond pure engineering and involves logic and epistemology, the theory of knowledge and philosophy in general. Can a machine truly reason or, better yet, think the way humans do? Can it “understand” what is really at stake in order to make a decision that we would consider “the right choice”?

And that goes to the very crux of the matter.

Intelligence beyond formalization and manipulation of symbols

In order to understand how far along we are in developing artificial intelligence and what some, or many, people expect from it, we have to circle back to our initial question: Is the proposed syllogism about law students and contact lenses correct? We had concluded that it was not, and I would hope that the reader is convinced this is so.

But now, I’m going to go back on what I just said, because I believe we need to look at the problem in one more way.

It is true that the inference that supported the conclusion was logically incorrect. But the reasoning was not crazy at all; in fact, it was quite reasonable. If all law students are nearsighted and some nearsighted people cannot tolerate contact lenses, the most likely outcome is that at least a few law students cannot tolerate contact lenses. After all, there are many law students, and intolerance of contact lenses is neither an aberration nor anything out of the ordinary. What would actually be rare is if not even one of the many nearsighted law students were intolerant of contact lenses.
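
To put a rough number on “most likely” (the figures here are invented purely for illustration): suppose intolerance of contact lenses occurs, independently, in about 10% of nearsighted people, and consider a modest population of 100 nearsighted law students. The probability that not a single one of them is intolerant would then be

```latex
P(\text{no intolerant law student}) = (1 - 0.10)^{100} \approx 2.7 \times 10^{-5}
```

roughly one chance in 38,000 – which is why the informal conclusion, though logically invalid, is almost certainly true in practice.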

This shows us that reasoning that is incorrect in terms of formal logic – what is known as first-order or predicate logic – can nevertheless bring us rather close to the truth.

On closer inspection, what we did when revisiting the proposed syllogism was to reverse the “formalization” involved in the purely logical analysis of human reasoning. We did not stay within the confines of “syntax” – the pure form or structure of the reasoning – as logicians do; rather, we put the “semantics” (the meaning, the knowledge of the real world) back into our formulas. We put “law students” back in for the abstract symbol A, “nearsighted” for B, and “intolerant of contact lenses” for C. These terms, which belong to the realm of natural language, give us the meaning and knowledge that make the reasoning plausible.

I believe this little exercise can help bring us to the true technical problem underlying the development of artificial intelligence: as I said above, computers were originally designed to be logical machines that manipulate formal symbols. And they do this – for example, calculating a square root or tackling a complex division problem – far better than we do, with greater certainty and infinitely quicker (at near the speed of light). They are also better at verifying whether propositional logic deductions or inferences are correct. In fact, the first working AI program, Logic Theorist, presented by Newell and Simon at the legendary 1956 Dartmouth workshop that brought together the leading pioneers of this new field (and where the term “artificial intelligence” was coined), was an expert system in deductive inference and eventually proved 38 of the first 52 theorems in Alfred North Whitehead and Bertrand Russell’s Principia Mathematica.

The average human would find all this much harder to do. We are, however, very good and very quick (as we saw with our reasoning about the contact lenses) at informal inference, at enthymematic arguments and at reaching plausible conclusions from a single clue (that is, at jumping to conclusions). Sometimes we get it wrong, but it is still an important intellectual skill that helps us navigate uncharted waters and make mental leaps beyond just the information we are given. Deductive logic is done by rote – like algorithms, which are merely instructions to be followed mechanically – and simply makes explicit what was already implicit in the premises given. But it is not easy to write a rulebook or design a routine or algorithm for finding the appropriate premises to explain an observation or for coming up with a possible conclusion, hypothesis or conjecture that we can later test out (known as abductive reasoning). These ideas pop into our heads almost subconsciously, through what we often call intuition. They are also drawn from our own experiences and knowledge of the world, from what we have already seen before.

In fact, a sizable part of our day-to-day “intelligence” – simply walking down a street without getting run over, driving a car, recognizing the people around us, having an everyday conversation in our native language, understanding what is being said to us and stringing together the words to reply – is done “without thinking about it”. If we actually do try to reason out what we are doing, it can end up tripping us up.

The question is: are we able now, or will we be soon, to design and build devices that not only manipulate formal symbols at lightning speed but that can also “get by in the real world” just like a human would?

Answering this question means taking a look at the different approaches to artificial intelligence we have taken since the 1950s: from designing “expert systems” that simulate the very specific knowledge and analytical skills of human experts, to attempting to “formalize” all of our knowledge of the world; from the cognitive, logic-based approach focused on conscious reasoning processes and the manipulation of formal symbols (“good old-fashioned AI”, or GOFAI) to sub-symbolic AI, focused on pattern recognition and on replicating the workings of the human brain as a physical organ by creating artificial neural networks capable of parallel, non-sequential processing and of learning on their own, from their own experiences.

But we will get into all that later.
