Past Events | 14 October 2020, 2pm-4pm
30 October 2020

The Future of Empirical Research in International Law

The inaugural session of the Computational and Empirical International Law Speaker Series (CEILSS) discussed the Future of Empirical Research in International Law. Eight years after Gregory Shaffer and Tom Ginsburg described an “empirical turn” in international law scholarship, empirical research in international law has taken yet another, “computational” turn. Scholars increasingly rely on methods from computer science, network analysis and computational linguistics to describe the state, drivers and impact of international law. By treating international law as data, researchers can describe more of it in less time and at lower cost, which enables the legal analysis of large datasets and gives rise to new research questions and opportunities.

In a discussion moderated by Wolfgang Alschner, Professor at the University of Ottawa, four panelists examined the potential, challenges and limitations of this emerging field of computational and data-oriented research on international law. Professor Joost Pauwelyn, co-director of the Centre for Trade and Economic Integration (CTEI) at the Graduate Institute of International and Development Studies, launched the discussion by highlighting the virtue of building bridges between international law and other disciplines in order to bring new tools to bear on the analysis of law. Professor Ashley Deeks, director of the National Security Law Center at the University of Virginia Law School, noted how new technologies can change not only the research but also the practice of international law, for example by using machine learning or text-as-data tools to support states in treaty negotiations. Professor Huaxia Lai of the Peking University School of International Studies remarked that international law was a latecomer to the boom in empirical and computational methods and has only recently begun to explore the potential of techniques already popular in political science. Professor Malcolm Langford, director of the Centre on Experiential Legal Learning (CELL) at the University of Oslo, concluded the panelists’ introductory round by highlighting that computational approaches add the most value where traditional legal methods do not work, either because of the type of question asked (e.g. authorship detection) or because of the volume of data involved. Three rounds of discussion followed.

The first round focused on breakthroughs in international law research made possible by computational techniques. Pauwelyn gave the example of authorship detection methods, which could reveal who drafted a given WTO panel or Appellate Body decision and thereby create more transparency in the WTO dispute settlement system. Deeks shared the idea of using machine learning to search large national archives for state practice relevant to the formation of customary international law. Lai stressed the efficiency and effectiveness of computational methods for studying international law, which could be adapted to examine fragmentation and judicial bias or to visualize the system as a whole, to name just a few applications. Langford shared his experience using network analysis to investigate powerbrokers in the investor-state arbitration universe, to identify powerful arbitrators at the center of the regime, and to measure instances of double-hatting, where the same actor occupies multiple roles in the network.
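For illustration only, the following is a minimal sketch, assuming hypothetical appointment records and the Python networkx library, of how a person-case network could flag double-hatting and give a rough sense of who sits at the center of a regime; it is not the panelists’ own tooling.

import networkx as nx

# Hypothetical appointment records: (person, case, role).
appointments = [
    ("Person A", "Case 1", "arbitrator"),
    ("Person A", "Case 2", "counsel"),
    ("Person B", "Case 1", "counsel"),
    ("Person C", "Case 3", "arbitrator"),
]

# Build a bipartite person-case network, recording the role on each edge.
G = nx.Graph()
for person, case, role in appointments:
    G.add_node(person, kind="person")
    G.add_node(case, kind="case")
    G.add_edge(person, case, role=role)

# A person "double-hats" if their edges carry more than one distinct role.
double_hatters = [
    n for n, data in G.nodes(data=True)
    if data["kind"] == "person"
    and len({G.edges[n, c]["role"] for c in G[n]}) > 1
]
print(double_hatters)  # ['Person A']

# Degree centrality offers one crude proxy for who is most connected
# in the network (the "powerbrokers").
print(nx.degree_centrality(G))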

The second round concerned the challenges and limitations of computational and empirical approaches. Lai noted that data-driven research on international law risks being shaped by data availability rather than by theoretically informed research questions. Moreover, quantitative measurement requires abstracting the real world into a smaller number of dimensions; the key question is then what information to retain and what to discard, a question that computational methods devoid of theory cannot answer. Langford pointed to the complementarity between computational and traditional doctrinal approaches to international law: a challenge lies in identifying at what point one should move from the computational method to the doctrinal one, or vice versa. He remarked that when dealing with large amounts of data, computational methods can provide a much-needed bird’s-eye view.

Deeks highlighted the difficulties of working with legal and government documents, most of which are not yet digitized and come in diverse formats and languages. Data availability and reliability are thus among the main challenges for this young field of international law research. Unequal access to computational tools and know-how may also exacerbate inequality between states during negotiations. Pauwelyn identified four main challenges for the field of computational and empirical law: skills, data collection, open access, and publication. While lawyers know how to ask the right questions, most are not trained or equipped to carry out the data analysis themselves. Instead of trying to do everything, a legal scholar can team up with a co-author well-versed in technical and computational skills. Collecting, cleaning and mining data likewise often requires both legal and technical knowledge. Relatedly, there is a risk that databases become privatized, cutting off access for those who need it most. Depending on the accessibility of future data and applications, computational methods can democratize access to knowledge or further entrench existing inequalities and power asymmetries, a point also made by Deeks. Lastly, Pauwelyn noted that computational and empirical legal scholarship remains a niche area, which makes it challenging to find mainstream journals willing to publish such work.

In the third round, the discussion turned to low-hanging fruit for the deployment of computational techniques in either the study or the practice of international law. Deeks suggested prioritizing data collection and building openly accessible databases, calling on the United Nations, with its large archives, to take the lead. Lai agreed that international organizations are a good starting point for implementing computational analyses. Pauwelyn pointed to the possibility of developing more user-friendly platforms to make data more accessible and advocated that both scholarship and international practice move beyond explaining towards predicting in contexts such as tax adjudication and due diligence. Langford added that while machine learning is an effective tool, its power is limited when it comes to questions of causality; researchers still need to work backwards to establish causal explanations. In addition, he emphasized the need to equip law students with basic technical and statistical skills, and noted that the current pandemic may fuel the development of new digital competencies and datasets as courts and negotiations move online.

The next speaker series session, titled “Toward a New Generation of International Law Databases,” will be held in November. Stay tuned!

The CEILSS Speaker Series is organized under the framework of the Swiss National Science Foundation project “Convergence Versus Divergence? Text-as-Data and Network Analysis of International Economic Law Treaties and Tribunals” by the CTEI at the Graduate Institute, in collaboration with the Centre for Law, Technology and Society at the University of Ottawa.

CEILSS Talk 1 - The Future of Empirical Research in International Law (14 October 2020)