Thoughts from the ABA Legal Tech Show

I love tech shows. They almost always have a quirky tech failure. This year it was the registration system. A less than auspicious start. But the conference isn’t really about technology for me. It’s about people. The smart conference attendee knows that you can get a better look at most of the tech by searching YouTube. The conference is where the creative minds come together. And on that account, this one has been a success! It is so great to catch up with friends whom I haven’t seen in ages and to meet new colleagues. I am a firm believer in community, and the legal tech community has been good to me over the years.

Substantively, my path through the conference has focused on artificial intelligence, which is where I typically focus my attention. From my perspective, AI is a very special kind of tool—an idea I heard expressed in more than one talk. What makes AI special is that it is an intentional augmentation of our cognitive ability. That’s special. Only human beings seem to be capable of augmenting their cognition. Other creatures may have tools and even some limited speech, but only humans intentionally modify their cognitive faculties. We are cyborgs by nature.

Evidence of this is as close at hand as the pencil. Writing was the first cognitive enhancement, made more powerful by the printing press, first as a manual mechanism and later as a tool of industrial production. Writing allowed human beings to augment the mind, and so the pencil shares something in common with today’s machine learning computers.

The impact of this on the law is also not new. The earliest laws were the spoken dictates of the king. The memory of the first written laws was still part of the culture of Athens in the time of Plato and Aristotle. Socrates himself wrote nothing and was suspicious of literacy, in terms much like our current suspicions about cell phones.

Writing the law changed it. The king, for example, could use the law as a symbol of power by placing it, set in stone, in the market square. But it also had the unexpected power to bind the king to his own words. The rule of law has its origins in this. And although the history of written law is uneven—the Goths, for example, had an oral legal tradition—writing has had a lasting influence on the way law exists (its “ontology,” to use a technical philosophical term).

The practice of law has been influenced as well. Law in the age of Abe Lincoln, before the commercial press, was practiced with much less regard for precedent, and certainly with no regard for a case heard in the last few hours. It took weeks or months for news of a new case to reach the frontier, after all.

So, AI is not new in this sense. It is the continuation of a long process of human mind augmentation. And we can learn from the past. Just as ancient Greek culture was transformed by the new cognitive faculties that writing brought about, so too will our society change with AI. Change in the law is inevitable, just as writing captured the minds of the Greeks despite Socrates’ arguments against it. And we will be surprised by how AI rearranges the power structures in our society. On its face, we talk about the potential to use AI for good by extending low-cost legal services to populations who never had access to the law in the past. What will that mean in the long run? Will the power elites permit this shift? What avenues for resistance will arise? How will the role of law in social change evolve over time? We have many thoughts on these issues, but little guidance on how to think about them.

AI will change us too, as writing did. For a child to learn to read requires the growth of brain tissue connecting two adjacent modules; the practice of reading shapes the physical anatomy of the brain. I am told that a neuroanatomist can identify the language group a person spoke in life by examining the brain on a lab tray. Truly, we are cyborgs, augmenting ourselves to cope with living in the world. AI will help us do that too. So I fully believe that AI is here to stay, and with its help, we will be too.

Lawyers should not resist this change. AI can help us represent our clients more effectively, see new potential in the business side of practice, and manage our lives with greater intention. These are all goods. And since AI is changing the whole of society, not just the law, a competent lawyer of the future will need sufficient expertise and skill in AI to understand the social and cultural issues that drive change. AI will be as common as a pencil. Imagine a lawyer today who feared the written word and could not craft a sentence. That worked in an oral legal tradition, but not in a written one. And in the future, a lawyer who doesn’t know how to use AI tools will be as obsolete as a lawyer who didn’t know how to find the most recent precedents. The best lawyers will seek out the cognitive enhancements that give them the best edge for coping with the demands of practice in an increasingly complex legal environment.


What I’m Reading

The Habermas/Luhmann debate and subsequent Habermasian perspectives on systems theory


The Habermas/Luhmann debate centers on two questions. One is theoretical and one ethical. The theoretical question is: Can social processes be explained in primarily systemic terms? The ethical question is: What does reliance upon systems theory do to an advanced industrial society? Since the original debate, Habermas has developed an evolutionary architectonic that accords a role to systems theory. He continues, however, to voice objections and reservations about systems theory and Luhmann in particular. His criticisms need to be seen in their narrow context and brought into general conversation with the wide sweep of contemporary systems thinking.


Useful Recent Scholarship

Get it here

Topic Modeling the President: Conventional and Computational Methods

George Washington Law Review, Forthcoming

Vanderbilt Law Research Paper No. 17-62

54 Pages. Posted: 12 Dec 2017

J. B. Ruhl

Vanderbilt University – Law School

Jonathan M. Gilligan

Vanderbilt University – Department of Earth and Environmental Sciences

John Nay

New York University; Harvard University – Berkman Klein Center

Date Written: December 11, 2017


Law is generally represented through text, and lawyers have for centuries classified large bodies of legal text into distinct topics — they “topic model” the law. But large bodies of legal documents present challenges for conventional topic modeling methods. The task of gathering, reviewing, coding, sorting, and assessing a body of tens of thousands of legal documents is a daunting proposition. Recent advances in computational text analytics, a subset of the field of “artificial intelligence,” are already gaining traction in legal practice settings such as e-discovery by leveraging the speed and capacity of computers to process enormous bodies of documents. Differences between conventional and computational methods, however, suggest that computational text modeling has its own limitations, but that the two methods used in unison could be a powerful research tool for legal scholars.

To explore that potential — and to do so critically rather than under the “shiny rock” spell of artificial intelligence — we assembled a large corpus of presidential documents to assess how computational topic modeling compares to conventional methods and to evaluate how legal scholars can best make use of the computational methods. The presidential documents of interest comprise presidential “direct actions,” such as executive orders, presidential memoranda, proclamations, and other exercises of authority the president can take alone, without congressional concurrence or agency involvement. Presidents have been issuing direct actions throughout the history of the republic, and while they have often been the target of criticism and controversy in the past, lately they have become a tinderbox of debate. Hence, after long neglect by political scientists and legal scholars, there has been a surge of interest in the scope, content, and impact of presidential direct actions.

Legal and policy scholars modeling direct actions into substantive topic classifications thus far have not employed computational methods. This gives us an opportunity to compare results of the two methods. We generated computational topic models of all direct actions over time periods other scholars have studied using conventional methods, and did the same for a case study of environmental policy direct actions. Our computational model of all direct actions closely matched one of the two comprehensive empirical models developed using conventional methods. By contrast, our environmental case study model differed markedly from the only other empirical topic model of environmental policy direct actions, revealing that the conventional methods model included trivial categories and omitted important alternative topics.

Our findings support the assessment that computational topic modeling, provided a sufficiently large corpus of documents is used, can provide important insights for legal scholars in designing and validating their topic models of legal text. To be sure, computational topic modeling used alone has its limitations, some of which are evident in our models, but when used along with conventional methods, it opens doors towards reaching more confident conclusions about how to conceptualize topics in law. Drawing from these results, we offer several use cases for computational topic modeling in legal research. At the front-end, researchers can use the method to generate better and more complete model hypotheses. At the back-end, the method can effectively be used, as we did, to validate existing topic models. And at a meta-scale, the method opens windows to test and challenge conventional legal theory. Legal scholars can do all of these without “the machines,” but there is good reason to believe we can do it better with them in the toolkit.
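
For readers who want a concrete picture of what computational topic modeling involves, here is a minimal sketch using latent Dirichlet allocation (LDA) in Python with scikit-learn. The abstract does not specify the authors’ algorithm or pipeline, so the library, the toy corpus, and the topic count below are illustrative assumptions only.

```python
# A minimal topic modeling sketch with LDA via scikit-learn.
# The tiny corpus stands in for presidential direct actions; it is
# hypothetical, not the paper's data or method.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "order directing agencies to review environmental regulations",
    "proclamation establishing a national monument on federal land",
    "memorandum on federal workforce hiring and management",
    "order on tariffs and trade enforcement against imports",
    "proclamation on public land conservation and wildlife",
    "memorandum reorganizing executive branch personnel policy",
]

# Build a document-term matrix of word counts.
vectorizer = CountVectorizer(stop_words="english")
doc_term_matrix = vectorizer.fit_transform(documents)

# Fit LDA; the number of topics is the key modeling choice that the
# paper's validation against conventional methods speaks to.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term_matrix)

# Show the highest-weight words for each inferred topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"Topic {i}: {', '.join(top)}")
```

The number of topics is the decision that matters most: too few collapses distinct policy areas together, too many fragments them, which is one reason validating computational models against hand-built classifications is valuable.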

Useful Recent Scholarship

Blockchains and Data Protection in the European Union

Max Planck Institute for Innovation & Competition Research Paper No. 18-01

31 Pages. Posted: 6 Dec 2017

Michèle Finck

Max Planck Institute for Innovation and Competition; University of Oxford

Date Written: November 30, 2017


This paper examines data protection on blockchains and other forms of distributed ledger technology (‘DLT’). Transactional data stored on a blockchain, whether in plain text, encrypted form or after having undergone a hashing process, constitutes personal data for the purposes of the GDPR. Public keys equally qualify as personal data as a matter of EU data protection law. We examine the consequences flowing from that state of affairs and suggest that in interpreting the GDPR with respect to blockchains, fundamental rights protection and the promotion of innovation, two normative objectives of the European legal order, must be reconciled. This is even more so given that, where designed appropriately, distributed ledgers have the potential to further the GDPR’s objective of data sovereignty.
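
The point about hashed data is easy to illustrate. The sketch below is my own, not the paper’s: because a cryptographic hash is deterministic, anyone who can guess or obtain the underlying record can re-link it to the digest stored on the chain, so hashing alone does not anonymize.

```python
# Minimal sketch: hashing is deterministic, so hashed records remain
# linkable to their inputs. The record is a hypothetical example.
import hashlib

record = "alice-pays-bob-10-tokens"  # hypothetical transaction data
digest_on_chain = hashlib.sha256(record.encode()).hexdigest()

# Anyone who later obtains or guesses the record can recompute the
# digest and match it against the ledger, re-identifying the data.
recomputed = hashlib.sha256(record.encode()).hexdigest()
assert recomputed == digest_on_chain
print(digest_on_chain)
```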


The Philosophy of Information and Theories of Law

This is the third in a series of posts about the epistemological foundations of legal theory in the Information Age. In the first post I suggested that analytic jurisprudence does not serve well in this period because it is coherentist in its epistemology. The chief problem I see with this is that coherentist theories of law, like Brian Leiter’s adoption of Quine’s coherentism, cannot explain the stabilities in law over time and do not quite explain how quantitative approaches to legal analysis and prediction actually work. A second post suggested that sociological theories work better than analytic philosophy because they are grounded in an epistemological realism that views the law as describing and defining social relations that actually exist among those the law touches. Brian Tamanaha’s analyses of Malinowski and Ehrlich are prime examples of this.

Here, I want to begin to suggest how philosophers of law should respond by adopting a realist epistemology that can renew legal philosophy. I have suggested (here) that structural realism is the way ahead, and that legal philosophers should adopt an informational structural realism as their foundational epistemology. The implications of this are quite broad, as I try to point out in an overly ambitious essay (here). But nonetheless, I am confident that if legal philosophy has a future, if it isn’t displaced entirely by sociology, then that future lies with the philosophy of information.

So, what is the philosophy of information? It is the attempt to understand the significance of our new understandings of information for philosophical questions. In the mid-twentieth century, the concept of information was transformed into its contemporary state. In the past, “information” referred to a family of loosely defined concepts about phenomena that were useful to the faculties of human intellection: human minds have the power to transform information into useful ratiocination. But in the works of Claude Shannon and Alan Turing, information and computation were successfully described as natural phenomena that occur in many types of natural processes. Information and computation are no longer human-centered. This has led to a revolution in the conceptualization of information, which now recognizes many types of information, some more tightly defined than others. Luciano Floridi is credited with developing this shift into a sophisticated philosophical program, which he calls the philosophy of information. Applied to law, the philosophy of legal information should be tasked with investigating the informational nature of law. This involves understanding how information about social structures is related to the semantic information that encodes and represents the law.
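
Shannon’s measure makes the shift concrete: the information produced by a source is defined entirely in terms of probabilities, with no reference to human meaning. As an illustration, the standard entropy formula is

```latex
H(X) = -\sum_{x} p(x) \log_2 p(x)
```

where p(x) is the probability of symbol x. Nothing in the definition mentions minds or meanings, which is precisely what made information available as a natural, rather than human-centered, concept.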

A method that Floridi uses for investigating informational structures is the analysis of levels of abstraction (LoA). He takes this from a formal verification language (used by computer engineers to verify the results of computational systems) known as Z (after the logician Zermelo). The analysis of levels of abstraction allows for clarity in thinking about the structure of information in various systems. An insightful essay by Ugo Pagallo and Massimo Durante, “Philosophy of Law in an Information Society,” in Luciano Floridi (ed.), The Routledge Handbook of Philosophy of Information, applies the levels of abstraction analysis to law. A significant contribution they make is the description of three levels of information. While the article takes a more continental approach, a common law analysis can be developed from Hart’s description of law. In my reading, the levels of abstraction should be (a small illustrative sketch follows the list):

  1. Information as reality: This refers to those aspects of law that are constitutive of a social relationship. This often is the case in commercial law, where legally binding commitments are constitutive of social relations. Think here of contracts and fiduciary relations.
  2. Information about reality: This refers to law that is descriptive of fundamental social norms. For Hart these would be laws that implement secondary rules and the Rules of Recognition. 
  3. Information for reality: This refers to the primary rules. Most laws are for reality, in the sense that they seek to regulate social relationships.
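
To make the taxonomy concrete, here is a small sketch of my own (not Pagallo and Durante’s) that encodes the three levels as a simple data structure and classifies a few stock examples:

```python
# A sketch encoding the three levels of abstraction (LoA) for legal
# information; the example classifications follow the list above.
from enum import Enum

class LoA(Enum):
    AS_REALITY = "constitutive of social relations"     # e.g., contracts
    ABOUT_REALITY = "descriptive of fundamental norms"  # e.g., rules of recognition
    FOR_REALITY = "regulative of social relations"      # e.g., primary rules

examples = {
    "contract law": LoA.AS_REALITY,
    "stare decisis": LoA.ABOUT_REALITY,
    "criminal statute": LoA.FOR_REALITY,
}

for law, level in examples.items():
    kind = level.name.lower().replace("_", " ")
    print(f"{law}: information {kind} ({level.value})")
```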

An example of LoA analysis is useful here. Consider the analysis of the epistemic value of precedent in Deborah Hellman’s article, “Social Epistemic Value of Precedent.” When I first read this well-written and probing article, I mistakenly thought Hellman was developing a social epistemology of law. I was corrected by a tweet from Lawrence Solum, who suggested that it was a realist epistemology. Upon closer reading, I realized that I had mistaken an ambiguity in the description of legal epistemology for social relativism. In reality, it is just vagueness, which is clarified by considering the levels of abstraction that Pagallo and Durante developed (and I refined). Viewed this way, the critical question in analyzing this essay is this: when Hellman speaks of epistemic value, to which level of knowledge does she refer?

This is where the ambiguity arises. Hellman develops two related epistemological claims about the value of precedent: a procedural epistemic value and a substantive epistemic value. She explains that in examining the epistemic value of precedent, one should ask, “Will the decisions of present-day judges be improved or worsened by a judicial practice of according some weight to precedent?” She explores this question procedurally, where precedent is viewed as a procedure for “getting it right,” and substantively, where the focus is on the law “working itself pure,” a phrase she takes from Mansfield. Both analyses view the practice of stare decisis as useful for getting to the truth. But what is truth? Hellman gives us no idea other than the rightness or goodness of the law. What do these strongly normative claims mean? In what sense does she mean getting the law right? Or making it pure? Does that mean making it morally “good” and “just”? And on what description of goodness and justice?

An LoA analysis is useful here. Stare decisis is a rule of recognition in Hart’s sense: it tells us when a law has been properly made. As such, it is, in my analysis, law about reality in the sense of capturing the social relations of the governed. Viewed this way, the epistemic value of precedent is derived from the social norms that legitimate the law. Precedent is valuable because it enhances the neutrality and equality of decisions, and neutrality and equality are fundamental social norms and expectations for the law. Therefore, precedent should be followed when it accords with these values, and it should be ignored when it does not. Precedent exists for a purpose, and that purpose is to capture and implement the values of neutrality and equality in administering the primary rules. With this foundation, Hellman’s ideas about the epistemic value of precedent can be evaluated.

Legal philosophers should be investigating the philosophical presuppositions of sociologists, exploring the nature of the various types of information relevant to law, and examining the information structures involved in creating, authorizing, and implementing the law. Information analysis holds the keys to a better understanding of legal reasoning and legal prediction. More on this theme next time.

Sociological Theories of Law 

In the last post I argued that analytic jurisprudence lacks the intellectual resources for dealing with the current state of law in the emerging information society. I wrote a more complete treatment of that position in my essay, “Jurisprudence and Structural Realism,” which you can get here. In this post I want to continue discussing the foundational presuppositions of legal theory. In particular, I want to consider how legal theories treat the social and historical contexts in which the law exists.

I believe that the central deficiency of analytic jurisprudence is its methodology, which seeks to reduce law to the information of its semantic expression. For these philosophers, law is only semantic information; it is simply grammar and syntax. Indeed, if Leiter’s naturalism (rooted in Quine’s coherentism) is to be believed, it is words “all the way down.” Following Quine, Leiter argues that human knowledge is essentially semantic in nature, and therefore theoretical accounts of law need consider nothing more than the semantic nature of the law. For this reason, analytic legal philosophers have tended to view law apart from its social and historical context. Since they seek a general theory of law that can be applied across cultures and throughout time, the particularities of culture and the historical construction of law are not relevant to their theoretical projects.

Contrast this with what one finds in anthropology or the sociology of law. The anthropologist Clifford Geertz is often quoted for his statement that “law is a way of imagining the real,” and for his view that, as a cultural form, law requires “thick description” to be interpreted. Similarly, sociologists have noted the “gap” between reductive accounts of law and those that seek greater cultural and historical context. Notable among these sociologists is Brian Z. Tamanaha, whose “A Realistic Theory of Law” develops a contemporary historical theory (drawing from thinkers as diverse as Montesquieu and Savigny) that seeks to understand law in the context of historical and cultural contingencies. Needless to say, the relation between Tamanaha and the analytic jurisprudes hasn’t always been cordial.

One of the implications of living in the Information Age is the awareness of the ubiquity of information. It’s everywhere, even when we wish it weren’t. Information creates a context in which we live our lives, and law is part of that information environment. One way to think about the history and social relations that Tamanaha identifies is to consider the location of law in the information environment. Law is semantic information. But it is also influenced by many other types of informational systems. This is the meaning of Tamanaha’s theory—that law cannot be understood apart from the informational environment in which it is located.


A Preface to the Philosophy of Legal Information

This is one of my recent essays:


This essay introduces the philosophy of legal information (PLI), which is a response to the radical changes brought about in philosophy by the information revolution. It reviews in some detail the work of Luciano Floridi, who is an influential advocate for an information turn in philosophy that he calls the philosophy of information (PI). Floridi proposes that philosophers investigate the conceptual nature of information as it currently exists across multiple disciplines. He shows how a focus on the informational nature of traditional philosophical questions can be transformative for philosophy and for human self-understanding. The philosophy of legal information proposed here views laws as a body of information that is stored, manipulated, and analyzed through multiple methods, including computational methodologies. PLI offers resources for evaluating the ethical and political implications of legal informatics (also known as “legal information systems”).

This essay introduces PLI. Parts I and II describe Floridi’s philosophy of information. First, Part I introduces the transformation in the concept of information that occurred in the twentieth century through the work of Alan Turing and Claude Shannon. Part II describes Floridi’s approaches to traditional questions in epistemology, ontology, and ethics. Part III applies PI to the analysis of legal positivism. It suggests that PLI is a viable project that has potential for transforming the understanding of law in the information age.

Get it Here.