Lawmakers are not known for keeping up with the pace of technological advancement, so it should be no surprise that many law school curricula are also stuck in the past.
As UCLA School of Law Professor Edward A. Parson puts it, “most of law is conservative, incremental, looking backward for authorities [while] rapid tech change often challenges and disrupts legal and regulatory processes.”
It’s why one of Professor Parson’s areas of interest is the role of science and technology in policy-making, and why he’s gotten involved with UCLA’s Program on Understanding Law, Science, and Evidence (PULSE).
Ahead of next month’s Summer Institute on AI and Society co-sponsored by PULSE, we spoke with Professor Parson about how the university is training the next generation of attorneys for an AI-driven society. Here are edited and condensed excerpts from our conversation.
Professor Parson, what was your first exposure to AI and how did that experience inform your research direction?
[EP] I actually studied one of the streams of mathematical modeling that has since merged into AI in grad school more than 30 years ago, and got an incomplete PhD in it. I dropped out to try to make a career as a musician, at which I didn’t succeed.
You then became an environmental expert, advising the White House and National Academy of Sciences committees.
That’s right. For about the last decade, my climate and environment work has had a pretty strong theme of technology—its social impacts, how it interacts with law and public policy, how it can or cannot be influenced toward social benefit—as well as a set of technologies that are likely to be important in addressing climate change but that tend to frighten people: geoengineering.
A few years ago I got re-exposed to the remarkable developments happening in AI, and realized that there was a lot of overlap and linkage between the stuff I’d been thinking about on geoengineering and climate change, and the issues of social impacts, disruption of law and policy, and mechanisms to influence tech change that AI presents.
Which brings us to your role within the UCLA School of Law at AI PULSE. Tell us more about this program.
PULSE has been the programmatic home for a broad collection of interests related to law and technology in the UCLA law school for several years, but the center of gravity within that space has shifted over time—in terms of the topics addressed, broadly more towards the societal challenges posed by technology. Now the project on AI that Richard Re and I co-direct is the main activity within PULSE.
The program is funded by a $1.5 million grant from The Open Philanthropy Project, which was started by Cari Tuna and Dustin Moskovitz, co-founder of Facebook and Asana. To what end?
They support this work as part of their program on existential risk, which funds a wide range of efforts—by us and by other institutions—to think through the risks and to identify and assess potential governance mechanisms, so society can get the benefits AI promises while limiting the accompanying risks.
Will PULSE turn out highly informed law students, ready to face a future where AI works alongside humans?
We hope so. There’s certainly a great deal of interest in these matters, from our students and our faculty colleagues. But to date, these have not been seen as typical or familiar law school topics. Most of law is conservative, incremental, looking backward for authorities [while] rapid tech change often challenges and disrupts legal and regulatory processes. To counterbalance this, we teach several courses, in law and AI and in related areas of rapidly evolving tech with strong implications for law.
Give us an example.
My favorite is a course called “Future Law,” in which we look comparatively across half a dozen dimensions of big societal change happening outside the law, including several areas of technology, plus a couple of dimensions of environmental change, and reason through what are the likely challenges to legal processes and concepts, and the likely and desirable responses.
That sounds highly applicable to the future.
[Laughs] I joke with my students that it’s the most practical course in law school, because it’s the one that will equip them to think about the biggest disruptions they’re likely to see over the course of their careers.
Good point. In a recent article, you and your co-authors point to earlier experience ‘managing technology-related risks related to energy, environment, weapons, computation, and molecular biology in the 1960s and 1970s.’
There’s a lot you can learn from parallel and past examples of disruptive technological innovations, but you also have to be alert to differences. One of the big arguments that comes up around pretty much every area of controversial tech is how “novel” it is. Is it really just a continuation of, or closely analogous to, something familiar that we already have processes to deal with, or is it fundamentally new, sui generis? AI has been around for a long time; it’s just had a recent surge in power and visibility, which has led to sharp debates over whether we need new regulations or processes to manage the risks it presents.
Indeed. Tell us more about that from your perspective.
AI is hard to define because, essentially, it’s a wraparound term that embraces many fields of academic algorithmic development and advanced mathematical computation, as well as the high-profile commercial developments of recent years including the successes of AlphaGo, IBM Watson, Google Translate, Facebook image tagging and recognition, and so on. But there’s active debate on whether these reflect some major advance in scientific knowledge or some emergent result of parallel, incremental gains in algorithms, data, and brute hardware computational capability.
It’s always frustrating to those who have toiled for decades in AI academia to read breathless online accounts of the latest ‘magic’ from NASDAQ-listed companies, as if they knocked it up in a weekend over pizza and leaded sodas.
[Laughs] It’s true. Many recent gains, especially the rollout of machine-learning applications in ever more distinct applications, are very much standing on the shoulders of giants.
Which is why, at AI PULSE, you’ve been focusing on the ‘actors’—the known entities working in the field over a long period—to build up a body of knowledge and informed debate.
Exactly. In May 2018, we held our first workshop, gathering experts to debate: ‘the actors who develop and apply AI capabilities and their goals, incentives, capabilities, institutional settings and interrelationships.’ We may not know what specific tech advances are imminent, but we know something about the people and organizations developing and applying them, what their objectives and capabilities are, so we think this approach can help make some informed assessments of medium-term developments, risks, and responses.
Alongside your work at PULSE, you’re a tenured professor and teach a wide range of courses, which intersect on the subject of AI, including the intriguingly named Legal Issues in Science Fiction. Tell us more.
I conceived this course a decade ago while at the University of Michigan. It uses both serious speculative nonfiction and fiction to inform our imagination in thinking through legal and political implications of substantially expanded technological capabilities.
Which authors, and texts, do you study?
A few favorites are Ursula Le Guin’s The Dispossessed, about large-scale political/legal organization—more or less an anarcho-syndicalist utopia—and its interactions with a neighboring planet that looks a lot like 20th-century Earth society. And Charles Stross’s Accelerando, which explores three generations of human retreat in the face of integrated human/machine consciousness and increasingly powerful super-AI, with fascinating consideration along the way of how you preserve legal rights for people who have uploaded their minds into software, individually or collectively.
Any other texts you recommend for a lawyer’s look to the future?
Best of all, in my opinion, is Kim Stanley Robinson’s Mars trilogy. It contains the complete history of the first 200 years of human settlement on Mars, culminating in a 100-page description of the negotiation and adoption of the Martian constitution.
Talking of future off-world societies: in a speech last year, your colleague Professor Eugene Volokh wondered if the US would accept an AI as lawmaker. What are your thoughts on this proposition?
You can pose the same question about a bunch of state functions—regulatory decisions, various aspects of law-making—it’s plausible to imagine AI moving into many of these functions. There’s fascinating empirical work being done now on how people relate to, and change their behavior in the presence of, algorithmic decision-making. Create an AI jerk to interact with people, and the people will act like jerks. But the opposite applies too.
On that subject, I just interviewed a researcher who’s doing this within online communities to nudge people away from trolling and toward more socially acceptable modes of online conduct.
A colleague once argued to me that people are basically generous, honest, and good, but the social and political institutions that determine who moves into positions of power in society select for vice—and proposed that alternative institutional structures could select for virtue, so the institutions that run society would be guided by the best of human nature, not the worst. Maybe [suitably designed] AI interacting with humans could help move in that direction.
Finally, what’s next for AI PULSE?
We’re deep into planning our summer institute, which will be jointly sponsored by the Canadian Institute for Advanced Research (CIFAR); grad students and early-career researchers can apply online.
The event, organized by the AI PULSE project of the UCLA School of Law, [will take] place from July 21–24 at the Alberta Machine Intelligence Institute. Our explicit aim is to build dialogue across the boundary between technical experts in AI, machine learning, and related fields, and scholars and professionals in the humanities, social sciences, public policy, and law with relevant expertise and interests.
We won’t be addressing the more remote advances, such as super-intelligent AI that may represent existential threats to human autonomy or survival, but the ‘real stuff’ [like] large-scale AI applications and impacts, including economic coordination, governmental decision-making, labor displacement, and manipulation of human decision-making.