π-calculus, Session Types research at Imperial College

Last year, the CONCUR conference series inaugurated its Test-of-Time Award, whose purpose is to recognise important achievements in Concurrency Theory that were published at the CONCUR conference and have stood the test of time. This year, *Decoding Choice Encodings* by Uwe Nestmann and Benjamin C. Pierce was one of four papers chosen to receive the CONCUR Test-of-Time Award for the periods 1994–1997 and 1996–1999, by a jury consisting of Rob van Glabbeek (chair), Luca de Alfaro, Nathalie Bertrand, Catuscia Palamidessi, and Nobuko Yoshida.

This post is devoted to the interview conducted with Uwe Nestmann and Benjamin C. Pierce via video conference. The interview proved to be a very engaging discussion and I trust that their thoughts on their award-winning paper will be of interest to many.

**Nobuko:** Could you tell us briefly what led you to embark on
studying the expressiveness of choice in the asynchronous pi-calculus?

**Uwe:** I did my diploma thesis in ‘91. I was working on a topic that
had to do with communicating functions. I built a lambda calculus with
communication in it. It was a typed lambda calculus and it was strong enough
to type the Y-combinator. I went to Edinburgh and presented that at the
Concurrency Club, and they asked me, *“who is your supervisor?”*.
I didn’t really have one, because I was mostly self-driven in those days.
They told me that it would be
better not to work on this topic without a competent supervisor. The second
piece of advice was to get a supervisor first and then look for a topic. So
I found Benjamin. Benjamin had this wonderful project at the time on trying
to make a programming language out of the pi-calculus and, in that language,
choice encodings (or at least choice operators) played a role. He invited me
to visit him in Paris, where he was on six months’ leave, I think.

**Benjamin:** It was actually a “nested postdoc.” I did three
postdocs after finishing my PhD at CMU: one in Edinburgh, one in Paris, and
one in Cambridge. The Paris one was nested inside the Edinburgh one.

**Uwe:** I was in Paris for one week, and Benjamin told me to try
programming in his new programming language, Pict. I tried to write down the
dining philosophers problem, in a way such that I could use an abstraction
to pick up forks in either order, and I wanted to instantiate it with left
and right, and right and left, but the Pict language didn’t allow me to do
so. Behind this was the interplay between choice and abstraction (and
instantiation), and that was the start of all of it from my point of
view. Then I wrote up an exposé and I ended up actually working on just a
third of that for my PhD thesis. And of course, there were technical
reasons for Benjamin and Dave [Turner] at the time for being interested in
choice constructs.

**Benjamin:** Of course, Dave Turner is the most important name that
needs to be mentioned here, besides obviously Robin Milner. All of this was
happening under the umbrella of Robin’s wonderful work on pi-calculus and
the amazing group that he had assembled at the time. He had this incredible
set of students, including Dave Turner and Davide Sangiorgi and Peter Sewell,
doing all sorts of things with pi-calculus. Dave, besides being a
first-class hacker, was also a really good theoretician. He truly married
the two. He and I started talking at some point about what kind of
programming language you would get if you treated the pi-calculus the way the
Lisp people treated the lambda calculus. What that led to was a lot of
different language designs based on different versions of the pi-calculus,
but we kept wanting to make it simpler and simpler. Partly because we were
thinking of it as possibly even a distributed language, not just a
concurrent language, and as everybody knows, the choice operator – in the
full-blown pi-calculus or CCS sense – is not a real thing in distributed
systems: it’s not implementable. So we were trying to make the underlying
calculus simpler and simpler, and eventually wound up with this programming
language with no choice at all. But as Uwe discovered, there are things that
you might want to do where choice is the natural primitive, like dining
philosophers, which raises the question of how much of it can you get just
by programming on top of plain parallel composition plus messages on
channels. We found that programming up a restricted form of choice was a
little tricky but not *that* tricky. What *was* really tricky,
though, was justifying that it was correct. The reason why it turned into a
whole dissertation for Uwe, was because the well-known notions of correctness
that were lying around did not apply to this situation. I remember being
totally astonished at the length and technicality of the final proof that
Uwe ended up doing.

**Nobuko:** Did you imagine at the time that your award-winning paper
would have so much impact on the area of expressiveness in concurrency
theory, and how do you feel now?

**Benjamin:** Maybe Uwe did; I did not. I think we were just following
our noses.

**Uwe:** Actually, I would say “yes” and “no”. The “no” is that, when
it came to the CONCUR acceptance, I got the impression that we only just made it
because the competition was so tough and the pi-calculus was really hot at
that time. There were six or seven pi-calculus papers in the conference in
the end (I don’t know how many were in the submission pool). The tiny
“yes” that I would like to say is because Kohei [Honda] had foreseen
it. When I gave the presentation at the Newton Institute just in autumn
‘95 – that was the workshop that Benjamin organised on concurrent
high-level languages – Kohei came to me after the talk and said something
like, *“maybe you don’t know yet, but you will be known for
this”*. I can’t remember the exact wording. I think he
called it *“Nestmann’s Theorem”* or something. Me, a PhD student, the
first time in front of this crowd of expert people and then he tells me
something like that. I didn’t believe him. Of course not.

**Benjamin:** Kohei was ahead of his time in so many ways.

**Nobuko:** Could you tell us what the research environment was like in
Edinburgh and the UK as a whole at that time and how it has influenced the
rest of your career?

**Benjamin:** I came as a postdoc to Robin’s group. I was the last
postdoc in Edinburgh, and then travelled with him to Cambridge, where Peter
Sewell and I were his first postdocs. I would say that both Edinburgh and
Cambridge at the time, and still, were just incredible. In Edinburgh you had
Milner, you had [Gordon] Plotkin, you had Don Sannella, you had students
around you like Martin Hofmann and Philippa Gardner and Marcelo Fiore, and
the list goes on and on. You had other postdocs like Randy Pollack. It was
just an incredible place. People talking about amazing, deep, mind-bending
things all the time. It was particularly an amazing place for thinking about
concurrency. There were a lot of people breaking new ground.

**Nobuko:** Benjamin, how did that experience influence your current
research?

**Benjamin:** For one thing, it solidified my interest in language
design. The whole Pict experience was so fruitful. It was so much fun
working with Dave on implementing this interesting language, and both the
design and the programming that we did in Pict gave rise to so many
interesting questions. For example, it led us to do a bunch of thinking
about type systems for concurrency, and I can see echoes of those ideas in
the work that you, Nobuko, and colleagues have done more recently with
session types. Though I don’t consider myself a core concurrency researcher
any more, the experience gave me an appreciation for the theory of
concurrency that has drawn me back to the area over and over.

**Nobuko:** Uwe, how did it influence your research?

**Uwe:** I did my PhD in Erlangen (University of
Erlangen-Nürnberg), which was a place that was not so much known at that
time for theory, and especially not for concurrency theory. I had the
opportunity to visit through a bilateral travel-exchange programme between these two
universities, pushed at that time by my other supervisor, Terry Stroup. And
when I came into Edinburgh, not only was there so much competence around,
which is mainly what Benjamin summarised, but there was so much
openness: openness to any kind of ideas. So much
curiosity and joy. So I was very lucky that I could regularly, every
couple of months, visit the LFCS for a few days. There, I was pumped up with content and
ideas, and did a presentation in the pi club in Robin’s tiny office, with
almost ten people sitting around a tiny blackboard, listening to my ideas
and my problems. It was just unbelievable at this time. That kind of culture
and atmosphere was so great. Since we’re talking about this particular paper:
I traced it back to May or June ‘95, when things culminated in the crucial phase,
just before I proved the choice encodings correct. I only needed two
ingredients. One came a week later, when Davide Sangiorgi posted, for the
first time, a short note on asynchronous bisimilarity. And the other was
that we were rediscovering, mostly together in the pi club with Ole Høgh
Jensen and Robin Milner, the notion of coupled similarity. Both Ole
and Robin had different ideas and came to the same conclusion. I came back
to Erlangen and found the old paper on coupled similarity by [Joachim]
Parrow and [Peter] Sjödin and within a week all of the pieces were just
about there. I “simply” had to write down the details and convince myself that
it went all the way through. That was the crucial moment and without Edinburgh,
without this culture, these possibilities, this openness, it would not have
happened and maybe I would not even have become a professor in Berlin. Just
because of this tiny situation and getting together of bright people.

**Nobuko:** Studying expressiveness this way was quite new and just
beginning at that time, so you probably cared a lot about presentation and
how to communicate your ideas. Do you have any comments about this aspect?
I find your paper is still very readable and very clearly written for such a
subtle paper. How did you go about writing with this in mind? Apart from
technical details.

**Uwe:** I was a great fan of Benjamin’s presentation and communication
skills at that time. I was seeing him on stage and reading his papers and I had
the possibility to closely interact with this impressive guy and learn from
him. Just recently, I came across a quotation that summarises this
approach to writing: *“Do not try to write such that you are
understood. Try to write such that you cannot be misunderstood.”* I think
this expresses precisely what I think I learnt back then in trying to get
this paper out, and making these subtle observations, and finding the right
notation. It’s often underestimated how important the role of good notation is for
getting things across. The same goes for graphical presentations. And then,
polishing, polishing, polishing, polishing. *“Get simpler sentences,"*
Benjamin always said. I’m German, you know, we like complicated
constructions which are somewhat nice and deeply nested. I learnt at the time to
get it as simple as possible. Presentations were another thing. I found my
presentation back at the Newton Institute again and I remember I had this
table of contents written with **ABCDE**, which were the initial
letters of the concepts that I presented: **A**synchrony,
**B**isimulation, **C**oupled similarity, **D**ecoding, and I
think E was for **E**nd or something. I obviously like playing with
words, and I admire the power and joy of well-chosen language.

**Nobuko:** I do remember your presentation. You highlighted coupled
simulation as part of Rob [van Glabbeek]’s famous diagram from his CONCUR
paper on branching bisimilarities. I still remember your presentation at the Newton Institute.

**Benjamin:** I have always cared a lot about good writing.
Communicating ideas is really one of the most important parts of an
academic’s job. So it feels important to acknowledge the people I learned
about writing from. The first was Don Knuth – his level of attention to
writing, among all the other things he did, is totally inspiring to me. The
other was John Reynolds, who was one of my two supervisors as a PhD student
and who is the most careful writer that I’ve ever worked closely with. He
once gave me a draft of one of his papers to proofread, and I said to
myself, *“Aha, this is my chance to get back at him for all the mistakes and
flaws he has found in my writing over the years!"* So I started reading it,
and I got more and more frustrated because I couldn’t find *anything* to
improve. Anything at all! In the whole paper – not a comma, not a
notational choice, not the way something was worded. Nothing. That
experience was both an inspiration and a humbling lesson to me.

The biggest thing I’ve learned over the years about writing is that the main ingredient of good writing is exactly what Uwe brought to this paper: the willingness to iterate until it’s good. Good writers are people who stop polishing later than bad writers.

**Nobuko:** How much of your later work has built on your award-winning
paper? What follow-up result of yours are you most proud of and why?

**Uwe:** I would like to mention three. Funnily enough, none of them
came in the decade following the paper. The reason may be that I was dragged
into other projects, having to do with security protocols, pi-calculus,
and object calculi.
(1) By accident, I got back in contact with Ursula Goltz, who
was one of my PhD referees: she was working on a project on synchronous
and asynchronous systems and she asked me for literature because she
knew I had been digging deep into the results from the 80s on the first CSP
implementations.
In the course of this project, I got back to actually directly building
upon my PhD work, and I found Kirstin Peters, at that time a PhD student,
who got interested in that. We made a number of remarkable observations having
to do with distributed implementability and notions of distributability and
what this may have to do with encodings between calculi.
We discovered a hierarchy of calculi where you can
very easily see which of them are at the same level of distributed implementability. We
found that the asynchronous pi-calculus is actually not fully implementable
in a distributed system, like many other calculi. There is the ESOP paper in 2013,
which I’m very proud of. Kirstin pushed this research much further.
(2) Another follow-up work concerns the notion of
correctness that we were applying in the awarded paper: it was largely about a
direct comparison between terms and their translations. Not by plain full
abstraction on two different levels and having an if-and-only-if, but by a
direct comparison, so you could not distinguish a term from its
translation. This kind of observation led to a rerun of, say, the research
on what we actually want from an encoding. What is a good criterion for a
good encoding? This culminated in the work with Daniele Gorla where we
criticised the notion of full abstraction in the sense that it’s a very
important notion, but you can easily misuse it and get wrong results or
useless results. (We also emphasized the importance of operational correspondence, and Daniele
went on to establish his, by now, quite standard and established set of criteria for what a good
encoding is.) That became a nice, highly abstract paper with Daniele in
Mathematical Structures in Computer Science, but only in 2016, so again well,
well after the CONCUR paper of 1996.
(3) In just the last two or
three years, my PhD student Benjamin Bisping finally studied algorithms and
implementations for checking coupled similarity. We found an amazing wealth of new
views on these kinds of equivalences that are slightly weaker than weak
bisimilarity. So back to the roots, in a sense, to what we were doing 25
years ago. (Likewise, Kirstin Peters and Rob van Glabbeek further showed that
coupled similarity is in fact very closely connected to encodings in
general.) Seeing these developments is a lot of fun.

**Nobuko:** This was a TACAS paper, right?

**Uwe:** Yes, and we also published the survey
article “Coupled Similarity – The First 32 Years”, for the Festschrift
for Robert van Glabbeek. It’s basically an advertising paper
for this great notion of equivalence, which is highly underestimated. It’s, in a sense, much
better than weak bisimilarity. Especially if you’re interested in – and this is
my favourite domain – distribution, distributability, distributed
implementations.

**Nobuko:** Benjamin, do you have any further comments?

**Benjamin:** For me, the answer is a little more oblique. I haven’t
written papers about choice encodings and things like that, besides this
one. But what it did for me was to really solidify my interest in the
asynchronous pi-calculus as a foundation for programming languages – and
as a foundation for thinking about concurrency – because this paper, Uwe’s
result, teaches us that the asynchronous pi-calculus is more powerful than
it looks – powerful enough to do a lot of programming in. You know
there’s this famous quote attributed to Einstein, *“Make everything as
simple as possible, but no simpler."* I felt like the asynchronous
pi-calculus was kind of “it” after seeing this result. And that calculus
then became the foundation for a whole bunch of my later work on type
systems for concurrency and language design.

**Uwe:** Actually the encodings we did back then went into
what is now called the “localised asynchronous pi-calculus”, but it simply
wasn’t yet known back then. The localised asynchronous pi-calculus is at
this perfect level of distributed implementability, as we know by now.

**Nobuko:** This is partly also what Massimo Merro did with Davide
Sangiorgi, right?

**Uwe:** Yes, they did this a few years later, towards the end of the ’90s.

**Nobuko:** What do you think of the current state and future
directions of the study of expressiveness in process calculi, or more
generally, concurrency theory as a whole?

**Uwe:** Back then, in Cambridge, I was having discussions with Peter
Sewell. Quite a lot of them. At the time, we used to joke,
*“now we know how to do process calculi, we can do five of them for
breakfast.”* We know the techniques, we know how to write down the rules, we
know what to look for in order to make it good. And I would say that
studying encodings nowadays is at the same level of maturity; we know
what to look for when writing down encodings, which pitfalls to avoid, and it’s
done. So what I find most interesting today is that, often enough, encodings
and actual implementations are very close together,
and that may be because the maturity of the programming languages we can use
is much higher. We can use convenient abstractions in order to more-or-less
straightforwardly write down encodings.

What’s going on? Current state and future directions. The EXPRESS/SOS workshop is still alive and kicking. Not that many papers are submitted, but typically they’re great papers, and I think that’s good. I think we had an impact on concurrent programming: for example, if you look at Google’s Go, the concurrency primitives that you find in there are pretty much those of a process calculus. It’s message passing, and choice, and even mixed choice, and so on. I cannot say right now that there are deep, deep, deep questions to be solved about encodings, except for finding out what Robert van Glabbeek’s criteria have to do with Daniele Gorla’s criteria. There is an ongoing debate, but the issues are quite technical. What could use more research is typed languages, typed calculi, and typed encodings. Work has been done there and we have many nice results, but I think there are still some open questions on what the ideal criteria should be.

**Nobuko:** What advice would you give a young researcher interested in
working on concurrency theory and process calculi today?

**Benjamin:** My best advice for people that want to do theory is: keep
one foot in practice. Don’t stop building things. Because that’s the way you
find interesting problems, it’s the way you keep yourself grounded, it’s the
way you make sure that the directions you’re looking and the questions that
you’re asking have something to do with… something! It’s the way to
stay connected to reality while also generating great questions.

**Nobuko:** Uwe, do you have anything to add to that?

**Uwe:** Having a foot in practice is also good for actually checking and
finding mistakes in your reasoning. Building systems not only for finding
problems but also for finding out that you have a problem in your
thinking. Apart from that, I would not like to push for any particular area
for concurrency theory. I mean, concurrency theory is incredibly wide. My
advice is: get the best possible supervisor that you can find and then work
on his project. I think this is very good advice. Be patient, dig deep. This
is very general advice. Never give up. It took me two years until, in one
week, the pieces fell together. So be patient, dig deep, and train your
communication skills, practice networking. All the general things. Ah, and
maybe what I found very useful for my own career: learn the basics and the
history of your field. Understand what has already been found and what that
means even twenty years after. I learned a lot from the early 80s papers
that I was mentioning beforehand on first implementations of the
communication primitives of CSP. There is one published supposedly
deadlock-free algorithm, which almost twenty years later was discovered to
be incorrect. The proof was wrong; the algorithm was not actually
deadlock-free. So, work on hard problems, dig deep, be patient. And communicate
well. This is also the best way to get help.

**Nobuko:** Wow. Anyone who can satisfy everything would be quite a
fantastic student. (Laughs.) Like you, Uwe, you know.

**Nobuko:** This is the last question: what are the research topics
that currently excite you the most? Can I ask Benjamin first?

**Benjamin:** I will name two. One is machine-checked proofs about real
software. Over the past fifteen or twenty years, the capabilities of proof
assistants, and the community around them, have reached the point where you
can really use them to verify interesting properties of real software; this
is an amazing opportunity that we are just beginning to exploit.

On a more pragmatic level, I’m very interested lately in testing. Specifically, specification-based (or property-based) testing in the style popularised by QuickCheck. It’s a beautiful compromise between rigour and accessibility. Compared to the effort of fully verifying a mathematically stated property, it is vastly easier and lower-cost, and yet you can get tremendous benefit, both from the process of thinking about the specification in the mathematical way that we’re used to in this community and from the process of testing against, for example, randomly generated or enumerated examples. It’s a sweet spot in the space of approaches to software quality.

**Nobuko:** These things are still very difficult for concurrency or
distributed systems. Do you have any comments? Proof assistants for
concurrency theory are, I think, still quite difficult compared to
hand-written proofs.

**Benjamin:** Yes, in both domains – both verification and testing
– concurrency is still hard. I don’t have a deep insight into why it is
hard in the verification domain, beyond the obvious difficulty that the
properties you want are subtle; but in the testing domain, the reason is
clear: the properties have too many quantifier alternations, which is hard
for testing. Not impossible – not always impossible, anyway – but it
raises hard challenges.

**Uwe:** There’s a recurring pattern in what I like doing, and it always has to do
with looking at different levels of abstraction. You can think of it in
terms of encodings or of a distributed system, and I was always wondering
about the relation between global (higher-level) properties and local
(lower-level) implementations of systems. And throwing formal
methods, formal models, and theories at this problem has always been what I
liked, and I still do that, nowadays again, more on fault-tolerant
distributed algorithms. Maybe also because of the recent hype due to
blockchain and the strong interest in practical fault-tolerant Byzantine
algorithms, and so on. And here I meet Benjamin again, at best doing
mechanical verification of those. Mechanical verification
is still hard and you can easily pull PhD students into a miserable state by
dragging them onto a problem that takes an awful lot of time, and then
you get out one paper, with the proof in Isabelle, in our case. On the
other hand, it’s becoming more and more a tool that we just use. The more you’ve done with a
proof assistant, the more you integrate it into your everyday life. Some students,
as standard, test their definitions and their theorems and do their
proofs in Isabelle, and we now even have bachelor students using that. Good
ones, I mean bright ones, of course, but it’s becoming more and more an everyday
thing. The other idea: Benjamin, you’re also well known
for the Software Foundations series. I don’t know whether you’ve done
pedagogical research, learning theory, on top of that in the following
sense. What we have been interested in, just recently, is understanding how people
learn how to do proofs. It’s a long, difficult mental process, and there are
a number of theories about how this actually works, and whether this works, and
whether there’s magic involved, and whatnot. And getting used to it, of
course. Learning from patterns. But then, what could be
the impact of using proof assistants for learning how to do proofs? Does it
actually help? Or does it actually hinder?

**Benjamin:** It turns people into hackers. (Laughs.)

**Uwe:** Yeah, yeah, yeah. We’re talking about computer science
students, not maths students, right? Programming is proving, proving
is programming. This is of course a slogan from type
theory, but one may actually use it as a motivation to write down first proofs, get
feedback from the proof assistant, and go on from there. This is one of the interests that we
have, in actually understanding this process of learning how to do proofs.

**Nobuko:** I now conclude this interview. Thank you both very much
for giving us your time.