Cuneiform: Past, Present, and Future


I started out working on Cuneiform in March 2013. I was trying to find a way back into academia after working for a year at a public agency. I was second in line for a PhD position at Berlin’s Humboldt University but, to my surprise, got the job after the top candidate bailed out.

My topic would be a language-based approach to workflows. On my first day I tried to impress my professor, Ulf Leser, by talking about the workflow systems I knew. But he just looked out the window and told me to roll my own language.

At this time I was completely clueless about how programming languages work. Nevertheless, I set out to draft examples of workflow scripts the way I thought I would like to read and write scientific workflows in an editor. I was working together with a fellow PhD candidate, Marc Bux, who was developing a distributed execution environment for workflows. Perhaps by coincidence, this separation of topics created a dichotomy between the workflow language and its distributed execution environment, which would later become important. For now, I was sitting on the first floor coding the first parser for Cuneiform (an endless ANTLR script with a lot of sharp edges) while Marc was sitting on the fourth floor tweaking an alpha version of Hadoop YARN to process generic shell scripts. It was a great time because I got to play around with computers, but I was still completely clueless about programming language design when my son was born in August 2014.

Up to that point, Cuneiform had been a library buried in Marc’s modified Hadoop. Hadoop, in turn, had to be installed via a Chef-based cloud orchestration system. A lot of frustration had built up in me because I started to realize that no one would install a half-baked cloud orchestration system to install a half-baked Hadoop modification just to try out a new workflow language, no matter how good it was. So in 2015 I decided that Cuneiform had to stop being a library with a local test environment and instead needed to be completely independent of external software. I also realized that I was unable to build a reliable concurrent application out of the multi-threading facilities offered by vanilla Java. Sometimes the only way to find out that a thread had died was to stare at a JVisualVM process graph.

This all had to change. When I gave my first recorded talk about Cuneiform in December 2015, I had little more than the notion that using Erlang would be a good idea. But in April 2016 the first Erlang-based Cuneiform release went live. Cuneiform also got its very own website. There was no excuse for hand-wavy documentation or flaky release management anymore. If I did not do it, it would not be there.

Also in 2015, Samuel Lampa published a blog post titled “Flow-based programming and Erlang style message passing”, which I did not learn about for another year. Today I think it is a foundational text for the scientific workflow community in general because it anticipates the dichotomy between low-level number crunching and high-level distributed workflow organization at a time when languages like Nextflow, CWL, SciPipe, or Cuneiform were only forming.

2016 was a Cambrian explosion for my understanding of languages and distributed systems. It started when Wolfgang Reisig pointed out that creating a programming language without also giving it a semantics is kind of a pre-1970s attitude. I did not take it well, but with the help of Matthew Hennessy’s book on structural operational semantics I was able to bolt together a semantics for Cuneiform that just barely held together. It was already a large improvement over anything I had done before, and I was quite proud of myself. Nevertheless, it is a small miracle that we were given the chance to go into a major revision when we submitted it to a serious PL journal. We went through several rounds of revision; by December 2016 I had read through the first half of Benjamin Pierce’s TAPL, and a few months later I had read Semantics Engineering with PLT Redex. Suddenly I knew what I was doing.
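To give a feel for what a small-step operational semantics looks like, here is a sketch in Python. The mini-language (integer literals and addition only) and all names are purely illustrative — this is the style of semantics found in TAPL, not Cuneiform’s actual semantics:

```python
# Small-step (structural operational) semantics for a toy language:
# a term is either an int (a value) or a tuple ("add", e1, e2).
# Each call to step/1 performs exactly one reduction.

def is_value(e):
    return isinstance(e, int)

def step(e):
    """Perform one small-step reduction; e must not already be a value."""
    op, e1, e2 = e
    assert op == "add"
    if not is_value(e1):                 # congruence rule: reduce left operand first
        return ("add", step(e1), e2)
    if not is_value(e2):                 # congruence rule: then the right operand
        return ("add", e1, step(e2))
    return e1 + e2                       # computation rule: both operands are values

def evaluate(e):
    """Iterate small steps until a value is reached."""
    while not is_value(e):
        e = step(e)
    return e
```

For example, `evaluate(("add", ("add", 1, 2), 3))` reduces first to `("add", 3, 3)` and then to `6`, mirroring how a derivation in the semantics unfolds one rule application at a time.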

Also in 2016 I started seriously looking into Petri nets as a way to model distributed systems. Separating the workflow language from the execution environment turned out to be a good idea: it allowed me to use different modeling techniques for different purposes. Describing the workflow language with an operational semantics let me put it into perspective with the lambda calculus, while describing the execution environment as a Petri net gave me a bird’s-eye view of what my Erlang processes needed to do. Ideally, I wanted a DSL to describe a Petri net and a compiler from that DSL to Erlang. The idea resulted in the gen_pnet OTP behaviour. It has served me well.
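The core idea of a Petri net — a marking of tokens on places, and transitions that fire by consuming and producing tokens — can be sketched in a few lines. The Python below is my own illustration of that firing rule, not gen_pnet’s Erlang API; the class and place names are made up:

```python
from collections import Counter

class PetriNet:
    """Minimal place/transition net: a marking maps places to token counts."""

    def __init__(self, marking):
        self.marking = Counter(marking)

    def is_enabled(self, preset):
        # A transition is enabled if every input place holds enough tokens.
        return all(self.marking[p] >= n for p, n in preset.items())

    def fire(self, preset, postset):
        # Firing consumes the preset tokens and produces the postset tokens.
        if not self.is_enabled(preset):
            raise ValueError("transition not enabled")
        self.marking -= Counter(preset)
        self.marking += Counter(postset)

# Hypothetical execution-environment view: a 'submit' transition moves a
# task token from place 'queued' to place 'running'.
net = PetriNet({"queued": 2})
net.fire({"queued": 1}, {"running": 1})
```

The appeal for modeling an execution environment is that each Erlang process only has to implement “check enabledness, then fire atomically” — the global behavior of the system falls out of the net structure.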

When I finally read Samuel’s post about the workflow language dichotomy for the first time, I knew that my switch to Erlang and my take on modeling workflow systems as distributed systems were spot on. Also by this time, Nextflow and other workflow languages that take the distribution aspect very seriously had lifted off the ground.

My Cambrian explosion ended in the summer of 2017 when I attended the Racket Summer School. Racket, and especially Redex, is like a small superpower, and ever since I have been able to spend endless hours polishing language models. Right now, I am on the brink of putting all this together in a new Cuneiform implementation. This implementation combines the CRE (a language-agnostic distributed execution environment) with a Cuneiform interpreter featuring a simple type system based on a reduction semantics.

My university contract ended in September 2017. I am looking forward to handing in my thesis while having a ton of ideas for Cuneiform and also for new distributed languages. I guess I will not pursue them in an academic environment.