

Scribble-Go is a toolchain for safe distributed programming using role-parametric session types in Go. It extends Scribble, a toolchain for safe distributed programming using basic (non-role-parametric) session types in Java, with:


An archive of the toolchain (2.5 GB) can be found here. A version will appear in the ACM Digital Library as an artifact accompanying the paper (doi:10.1145/3291614).

This document describes the artifact for “Distributed Programming using Role-Parametric Session Types in Go” by David Castro, Raymond Hu, Sung-Shik Jongmans, Nicholas Ng, and Nobuko Yoshida, published in the proceedings of POPL 2019.

The latest version of this document and materials can be found via the Scribble-Go project repo: see here.

We explain how to get the Scribble-Go toolchain running on the artifact VM (virtual machine), we give instructions on how to run our examples (demos and benchmarks), and we provide tutorials to get hands-on experience with the toolchain (including small exercises).

Table of contents


To summarise the content of the paper relative to Scribble-Go: the key contributions of Scribble-Go are described in Sect. 2; an example-driven overview of the Scribble-Go methodology is given in Sect. 3; the theory behind Scribble-Go is presented in Sect. 4; the implementation of Scribble-Go is presented in Sect. 5; benchmarks and additional examples are presented in Sect. 6.

Programming methodology
  1. The user writes a (role-parametric) global protocol specification in the Scribble-Go language (Sect. 5.1).
  2. The Scribble-Go toolchain checks the global protocol specification for well-formedness (Sect. 4.5), infers role variants (Sect. 4.3), computes the projections onto the inferred role variants (Sect. 4.4), translates the resulting local protocol specifications to CFSMs (Sect. 5.2), and finally generates APIs based on the CFSMs (Sect. 5.3).
  3. The user writes Go endpoint programs that use the APIs generated for the global protocol. The Scribble-Go toolchain guarantees that a statically well-typed endpoint will never perform a non-compliant I/O action w.r.t. the run-time instantiation of the global protocol, up to premature session termination (Sects. 4.5 and 5.4).

Sect. 3 exemplifies this methodology in detail.
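To give a flavour of step 3, the following self-contained sketch mimics two endpoint programs of a tiny two-worker pipeline communicating over shared memory. It uses plain Go channels rather than the generated APIs, and all names are illustrative, not Scribble-Go identifiers.

```go
package main

import "fmt"

// producer plays the first worker of the pipeline: it sends k values
// downstream and then closes the channel to end the stream.
func producer(out chan<- int, k int) {
	for i := 1; i <= k; i++ {
		out <- i
	}
	close(out)
}

// consumer plays the last worker: it receives until the stream ends
// and reports the sum of the received values.
func consumer(in <-chan int, done chan<- int) {
	sum := 0
	for v := range in {
		sum += v
	}
	done <- sum
}

func main() {
	ch, done := make(chan int), make(chan int)
	go producer(ch, 4)
	go consumer(ch, done)
	fmt.Println(<-done) // prints 10 (= 1+2+3+4)
}
```

With the generated APIs, the channel operations above would instead be method calls on state-typed endpoint objects, so the compiler rejects non-compliant I/O sequences.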

Aims of this artifact

The purpose of this artifact is to support the practical claims of the paper:

Our artifact comprises four main components:

Claim (1) is supported by (B), which allows the user to browse and run all of the examples listed in Fig. 15 of the paper implemented using our tools, and by (C), which offers a more hands-on feel for using our tools. We have prepared (B) and (C) to demonstrate all the core features of our Scribble-Go framework:

To be more specific about (B), of the applications #9–21 (Fig. 15), the “fully implemented” demos are:

For the remaining applications, we use our tools to specify their global protocols and implement skeleton endpoint programs that are executable in terms of their I/O (as stated in Sect. 6.2). However, we do not fill these skeletons with “local” computation code (which is orthogonal to the I/O structure), so for the most part they do not perform any real application-level computations (beyond token decision making).

For claim (2), the VM (A) (via the instructions in (B)) only directly replicates our benchmarking methodology. Running the benchmarks inside the VM is not suitable for replicating the actual results presented in the paper (Sects. 6.1 and 6.2), which were obtained on the machine with the stated specification. To obtain more meaningful results, we also provide a separate package (D) that allows the user to run the benchmark applications as a standalone package outside the VM (requirements and instructions are included in the zip). Alternatively, the user may export the benchmark applications from the VM to a local setup.

Note. By default, our artifact currently expects the user to cancel the execution of the microbenchmark applications at some point after observing them running. Otherwise, the microbenchmarks will eventually encounter out-of-memory issues: we supply a 32-bit VM, while our benchmarks were originally designed for a 64-bit machine with 16 GB of RAM. We do not intend for the benchmarks to be run to completion inside the VM (as the results are not meaningful); please see the FAQ below for pointers on bypassing this issue inside the VM, and see the package mentioned above for running the benchmarks outside the VM.

Claim (3) is directly attested by (B).


We are in the process of stabilising and merging our Scribble-Go prototype into the master branch of the central Scribble repository. To prepare this artifact for the above purposes, we have disabled the following features related to usability/cosmetics of the Scribble-Go toolchain in the artifact VM:

Note. The error messages reported by the toolchain need improvement. For instance, if a global protocol specification is ill-formed, the Scribble-Go toolchain may currently just print a full stack trace or an unclear message.

Quick start
Installing/running the artifact VM

The following instructions have been tested using VirtualBox 5.2.18 on macOS {10.13, 10.14}, VirtualBox 5.2.22 on Debian 9, and VirtualBox 5.2.22 on Windows 10.

The artifact VM can now be started by selecting it and clicking “Start”.

To log in, use the user name Ubuntu with an empty password.


Open “POPL'19 artifact demo” on the desktop to enter the demos menu (i.e., all systems listed in Fig. 15). Each demo will:

  1. show a (role-parametric) global protocol specification in the Scribble-Go language;
  2. run the Scribble-Go toolchain to generate APIs;
  3. show the interfaces of the generated APIs as an HTML page, as explained in more detail in Sect. “Exploring generated APIs”;
  4. show endpoint programs in Go that use the generated APIs;
  5. run the endpoint programs, including the generated APIs, as a distributed system in a number of separate OS processes; at this point, one terminal will open for each endpoint.

As the main aim of these demos is to showcase the expressiveness of the Scribble-Go toolchain (and the underlying theory), we focused first and foremost on implementing the protocol/communication aspects of the applications listed in Fig. 15 (entries 9–21; entries 1–8 are not full applications anyway). However, to make the demos more attractive, we also extended the resulting “skeleton implementations” of some applications with real or mock computations. We emphasise that these extensions are essentially orthogonal to our actual aim. Specifically:

Inside the VM

Open “POPL'19 artifact benchmarks” on the desktop to enter the benchmarks menu (i.e., all systems discussed in Sect. 6.1). The menu has four items:

Outside the VM

We provide the following instructions to run the benchmarks outside the VM (using the standalone package (D)); these instructions can also be used to run the benchmarks manually inside the VM (but this is discouraged if the aim is to replicate our results, as explained above). The following instructions assume GOPATH is set (inside the VM, this is already done).

Benchmarks FAQ

Running the microbenchmarks inside the VM gives me out-of-memory problems.

The benchmark demonstration in the VM is currently intended to be cancelled by the user at some point after observing the benchmarks running. Please see the aims above.

The results from running the benchmarks inside the VM do not match those in the paper.

The VM itself replicates our benchmark methodology, but does not replicate our results. Please see the aims above.

I want to run the benchmarks inside the VM from the command line.

See the next question. (And, for running outside the VM, see the question after that.)

I want to run the microbenchmarks from start to finish inside the VM without any out-of-memory problems.

This can be achieved by reducing the number of repetitions/runs performed by the microbenchmarks. The following commands run the microbenchmarks with the duration of each run (we use the standard `go test -bench` facility) set to 250 ms, for a single repetition per run. In the VM, open a terminal and do:

cd ~/scribble-go-examples/microbenchmarks; \
go test -bench . -timeout 0 -benchtime 0.25s ./...

I want to replicate the results presented in the paper (by running the benchmarks in a replication of the actual benchmark environment stated in the paper, i.e., outside the VM).

Please see the package mentioned above.

Where is the benchmark output? What is the output format?

$benchmark_name/original.time
$benchmark_name/scribble.time

where `$benchmark_name = { 14_k-nucleotied, 15_regex-redux, 16_spectral-norm }`.

The microbenchmark output (produced by `go test -bench`) has the form:

```plainterm
goarch: amd64
pkg: ...
BenchmarkManualSM/N=2-4      200000         727 ns/op
BenchmarkManualSM/N=3-4      100000        1386 ns/op
```

`pkg` is the package. There are three microbenchmark packages: `alltoall`, `gather`, and `scatter`. A line such as

```plainterm
BenchmarkManualSM/N=2-4      200000         727 ns/op
```

means that the benchmark `ManualSM/N=2` (handwritten, shared memory, parameter `N` set to 2) was executed 200000 times at 727 ns per run. If the benchmark name contains `Scribble` instead of `Manual`, the benchmark uses the Scribble-Go runtime. If the benchmark name contains `TCP` instead of `SM`, the benchmark uses the TCP transport instead of the shared-memory transport.
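The iteration count and ns/op figures in this output can also be obtained programmatically via the standard library's `testing.Benchmark`. The benchmark body below is illustrative (it has the shape of a shared-memory send/receive microbenchmark) and is not one of the artifact's actual benchmarks.

```go
package main

import (
	"fmt"
	"testing"
)

// sendRecv is an illustrative benchmark body with the shape of a
// shared-memory (SM) microbenchmark: one channel send and one receive
// per iteration of the b.N loop.
func sendRecv(b *testing.B) {
	ch := make(chan int, 1)
	for i := 0; i < b.N; i++ {
		ch <- i
		<-ch
	}
}

func main() {
	// testing.Benchmark runs sendRecv as `go test -bench` would, yielding
	// the iteration count (N) and ns/op figures seen in the output above.
	r := testing.Benchmark(sendRecv)
	fmt.Printf("BenchmarkSendRecv %d %d ns/op\n", r.N, r.NsPerOp())
}
```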

Besides the VM itself and the graphical execution scripts for the demos, the other main component of our artifact is the set of hands-on tutorials in tutorials.html. Please see that document for details.

Exploring generated APIs
Code organisation

Logically, Go code is organised in packages: each package consists of a number of types, and each type consists of a number of functions. “Physically”, every package is contained in a directory; every type and every function is defined in a `.go` file in that directory. A subpackage is a package that is contained in a subdirectory of another package. For every (well-formed) global protocol specification, the Scribble-Go toolchain generates one protocol package and one or more role variant subpackages, as explained in more detail below.

The Scribble-Go toolchain uses the godoc tool to generate HTML documentation for generated APIs (similar to Python’s Docstring and Java’s Javadoc). In our experience, initially exploring the generated APIs through the HTML documentation is an effective way to get a first impression of how the generated APIs guide/force programmers towards communication-safety; if and when a more detailed view is required, the code can be accessed directly through the HTML documentation.

We now explain the structure of generated APIs in more detail, based on the HTML documentation for the Pipeline protocol. (Run the Pipeline demo as explained in Sect. “Demos” to open the HTML documentation.)

Protocol package

The following screenshots show the page for the Pipeline protocol package:

We make the following initial remarks:

Type Pipeline is the main type of the generated code; it forms the starting point to participate in a Pipeline session. Specifically, function New creates a new Pipeline-object (which represents a Pipeline session), while the remaining functions create new endpoint objects for endpoint programs that participate in the session.

For instance, this function signature:

func (p *Pipeline) New_family_1_W_1toKsub1and2toK(K int, self int) *family_1_W_1toKsub1and2toK.W_1toKsub1and2toK

can be understood as follows:

By clicking on the name of a function in the “type section” of the page (second screenshot above), the code that implements that function is opened. By clicking on the name of a package or type, the page for that package or type is opened.

Role variant subpackages

The following screenshot shows the page for a role variant subpackage.

The general structure is the same as for a protocol package. However, there are more types:

It may be instructive to “click through the types” to “simulate” a session (especially for larger protocols), as demonstrated in the following video:
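The “click-through” structure can be imitated in plain Go as follows. This is a hand-written illustration of the state-typed API style that the role variant subpackages follow (each I/O method returns the next state type, so only protocol-compliant call chains type-check); all names are made up, whereas generated code derives its names systematically from the CFSM states.

```go
package main

import "fmt"

// Illustrative state types for a two-step session: send an int, then
// receive a string, then the session ends.
type Init struct {
	out chan int
	in  chan string
}
type State2 struct{ in chan string }
type End struct{}

// Send performs the output action and returns the successor state; a
// receive cannot even be expressed before the send has happened.
func (s *Init) Send(v int) *State2 {
	s.out <- v
	return &State2{s.in}
}

// Recv performs the input action and returns the terminal state.
func (s *State2) Recv() (string, *End) {
	return <-s.in, &End{}
}

func main() {
	out, in := make(chan int, 1), make(chan string, 1)
	in <- "done"
	st := &Init{out, in}
	// The method chain mirrors "clicking through" the types in the HTML docs.
	msg, _ := st.Send(42).Recv()
	fmt.Println(msg) // prints "done"
}
```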


The Scribble-Go runtime contains all low-level infrastructure code that is used by the generated APIs. The code of the runtime is located at ~/go/src/

VM contents overview

Our software is installed in the VM as the following three (open-source) git repository clones:


It is cloned from:  ``.

* `scribble-go-runtime`: the Scribble-Go Runtime is used by our generated APIs
to initialise session endpoints, perform connection actions and session I/O.
It supports shared memory (`shm`) and TCP (`tcp`) as message transports.
It is cloned in the VM at:  


It is symlinked (for the convenience of some import paths in the example Go programs) to: /home/ubuntu/go/src/
It is cloned from:


It is symlinked (for the convenience of some import paths in the example Go programs) to: /home/ubuntu/go/src/
It is cloned from:

The artifact was built by:

Our example applications (which include the macrobenchmarks, #14–16) and the microbenchmarks are subdirectories of the scribble-go-examples directory, with the following layout (or a close variation of it):

|- ProtocolName.scr  # The Scribble protocol
|- internal
    |- protocolname
        |- protocolname.go  # The "core implementation" code of all roles
|- protocol-R  # R is the role name
    |- main.go  # The "main" file of the role R, "go build" in this
                # directory creates a binary "protocol-R" to run the role
|- ProtocolName
    |- ... generated code ... # These do not exist before running
        # "go generate" (i.e. using Scribble to generated the APIs)
Related publications