Domain-specific languages – Initial impressions of ANTLR4.

I’ve always been into writing my own domain-specific languages — I firmly believe that used right, they are a massive win for developers. Your system is different from everyone else’s, so the only person able to write the perfect language for your problem is you. General-purpose programming languages look awfully crufty compared to an elegant, dedicated language.

The problem, of course, is that a dedicated language is something of a beast to write, at least if you start from scratch. I’ve been writing DSLs for a few years now, and I’ve even written the Canto toolkits, a pair of open-source parsing toolkits for JavaScript and C#. But that kind of small-scale craftsman’s approach is nothing compared to the industrial-strength behemoth that is Antlr. Antlr is a toolkit some 25 years in the making, and I’ve been tracking its progress, on and off, since about version 2. In the past, I’ve had trouble with it; it imposed a strong idiom on the way you wrote grammars, and it wasn’t one that gelled very well with my own way of approaching the problem. However, it’s just moved to version 4, so I thought I’d give it another look.

The truth is, Antlr4 is good. Really good. Depressingly so, for someone who’s writing parsing toolkits. In fact, I love it. I love it more than my own projects.

The idea with Antlr is that you write a `grammar` for your language — describing the syntax that your language has. Here’s a snippet of the grammar to recognise JSON, from https://github.com/antlr/grammars-v4/blob/master/json/JSON.g4

 

grammar JSON;

json: object | array ;

object : '{' pair (',' pair)* '}'
       | '{' '}' ;

pair: STRING ':' value ;

So if you know JSON, you should be able to read this without difficulty. The first rule says “a JSON document is either an object or an array”. The second implements this line from JSON.org; “An object is an unordered set of name/value pairs. An object begins with { (left brace) and ends with } (right brace). Each name is followed by : (colon) and the name/value pairs are separated by , (comma).”
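The rest of the grammar carries on in the same style; the array and value rules look something like this (I’m paraphrasing from memory here, so check the linked file for the real thing);

array : '[' value (',' value)* ']'
      | '[' ']' ;

value : STRING
      | NUMBER
      | object
      | array
      | 'true'
      | 'false'
      | 'null' ;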

You write the grammar, and Antlr builds you code to recognise that language. It handles two main jobs for you. The first is lexing — splitting a file into discrete words, or tokens — and the second is parsing — recognising syntactic structures like statements, lists, loop declarations, etc. Previous versions did that, but not in a way I ever really grokked. Until Antlr4…
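To give you a flavour of what that generated code looks like from the outside, here’s roughly how you’d drive it from C#. (A sketch, using the standard Antlr4.Runtime classes; JSONLexer and JSONParser are the names ANTLR derives from the grammar name, so adjust to taste.)

using System;
using Antlr4.Runtime;

class Program
{
    static void Main()
    {
        // Feed a JSON document into the lexer and parser that ANTLR
        // generated from the grammar above.
        var input = new AntlrInputStream("{ \"answer\": 42 }");
        var lexer = new JSONLexer(input);
        var tokens = new CommonTokenStream(lexer);
        var parser = new JSONParser(tokens);

        // 'json' is the first rule in the grammar, so it's the entry point.
        var tree = parser.json();
        Console.WriteLine(tree.ToStringTree(parser));
    }
}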

So what makes Antlr4 so good?

First, the integration for Visual Studio is superb. There’s a VS extension which basically makes ANTLR part of C#; just do “Add | New Item… | ANTLR Combined Grammar” and you’ve started developing your language. Every time you build, it builds you a parser from the grammar. This is a big deal; no messing with the guts of .csproj files, or running hokey batch scripts. It’s better integrated than tools like T4, for example, which are actually part of Visual Studio.

Second, the structure of grammars has changed to make describing your language a distinct act from writing an application like an interpreter. Previously, you had to embed scraps of C# code throughout the grammar to make it do anything, and that always left something of a bad taste in my mouth. The embedded code was powerful, but it got really ugly; you couldn’t look at an ANTLR grammar and say ‘Ah, I see clearly what the language looks like!’ There were bits and bats all over the grammar, obscuring the definition of the language itself. Now, Antlr takes the approach of providing post-parsing tools like listeners and visitors, which help you separate the act of parsing from the act of interpreting. This is the way a good parser should be structured, so I’m really glad to see that Antlr now allows that kind of structure.
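For example, if you ask ANTLR to generate visitors for the JSON grammar above, you can write something like this to pull information out of the parse tree without touching the grammar at all. (A sketch; JSONBaseVisitor and PairContext are the names ANTLR’s usual conventions would produce, so treat them as illustrative.)

// Walks the parse tree produced by JSONParser and prints the name of
// every name/value pair it finds.
public class PairPrinter : JSONBaseVisitor<object>
{
    public override object VisitPair(JSONParser.PairContext context)
    {
        Console.WriteLine("Found a pair named {0}", context.STRING().GetText());
        return VisitChildren(context); // keep walking the rest of the tree
    }
}

You’d run it with new PairPrinter().Visit(tree) over the tree the parser gives you; all the interpreting code lives here, and the grammar stays a clean description of the language.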

Just those two things meant I felt confident letting other devs in on the secret. It doesn’t take a language wonk to extend a language or an interpreter. I know this because my pair-programming partner at work extended the language and the interpreter I’d been working on, with literally no reading on the subject, in the time it took me to attend a meeting; she just looked at the code, saw what it was doing, and extended it. Part of that is that she’s a very smart dev; the other part is that ANTLR is intuitive to play with. If you understand the Visitor pattern, and regular expressions don’t make you cry, you’ll pick it up.

Anyway, I can’t recommend it enough. Go try it for yourself.

 

Canto34-cs; a lexing and recursive-descent parsing toolkit for .NET

Introducing Canto34-cs – another OSS project I probably won’t maintain!

Alright, folks. I’ve just uploaded another library which has served as the base of language projects I’ve run over the years. It’s called Canto34-cs, and it’s available both on GitHub, where you can get the source under the very permissive MIT License, and on NuGet, where it’s currently an Alpha 1 release. I’ve used it in a lot of places, so it’s pretty useful despite the version number.

This blog post takes you round the basics. By the time you get to the end, you should be able to write a program to recognise simple languages.

Whenever you write a language — say, a DSL — you need to write a lexer and a parser. Lexers are pretty much identical the world over, and basically just need to recognise tokens like ‘for’, ‘$foo’, and ‘int’. You do this by inheriting from `Canto34.LexerBase` and making calls in the constructor, like this;

public class SExpressionLexer : LexerBase
{
    public const int OP = 1;
    public const int CL = 2;
    public const int ATOM = 3;

    public SExpressionLexer(string input) : base(input)
    {
        this.AddLiteral(OP, "(");
        this.AddLiteral(CL, ")");
        this.AddPattern(ATOM, "[a-z]+", "atom");
        this.StandardTokens.AddWhitespace();
    }
}

This is a lexer which recognises three token types – an open paren, a close paren, and an ‘atom’ token matching lower-case strings like ‘a’, ‘abc’, or ‘abbbbbbbc’. Now you can split the contents of your file into tokens like this;

var lexer = new SExpressionLexer(input);
var tokens = lexer.Tokens().ToList();

So now you have a list of Token objects, and you can start doing the real work to recognise what’s going on inside the file. In our case, we’re going to recognise lisp-style s-expressions; that means either an individual item;

foo

or a list;

(a b c)

or a list of lists;

(def foo (lambda (x y) (+ x y)))

How’s that achieved? A program that recognises patterns in a sequence of tokens is a parser. In Canto34, you inherit from ParserBase;

public class SExpressionParser : ParserBase
{
    internal dynamic SExpression()
    {
        if (LA1.Is(SExpressionLexer.OP))
        {
            Match(SExpressionLexer.OP);
        }
        else
        {
            var atom = Match(SExpressionLexer.ATOM).Content;
            return atom;
        }

        var array = new List<dynamic>();
        while (!EOF && !LA1.Is(SExpressionLexer.CL))
        {
            array.Add(SExpression());
        }

        Match(SExpressionLexer.CL);
        return array;
    }
}

So look at this code; you’ll see the parser has one method — SExpression() — which recognises an s-expression. There are two basic ideas here. One, look at the next token. Here, `LA1` is a class-level property representing the next token — `LA1` stands for “look ahead 1 token”. By looking at the next token in the stream, you can perform appropriate logic; “if the next token is an open bracket, start a new list”, or “if the next token is `for`, start looking for a for-loop.”

The second concept is matching a token — the `Match(tokenType)` method looks at the next token; if the next token isn’t the expected one, an exception is thrown, indicating a syntax error. But if it’s the right type, it returns the token and advances one token in the file.
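Wiring the lexer and parser together looks something like this. (A sketch; I’m assuming an Initialize method on ParserBase that mirrors the initialize call in the JavaScript version of the library, so check the source for the exact spelling.)

var lexer = new SExpressionLexer("(a b (c d))");
var tokens = lexer.Tokens().ToList();

var parser = new SExpressionParser();
parser.Initialize(tokens);          // assumption: hands the token stream to the parser
var result = parser.SExpression();  // a string for an atom, or a nested List<dynamic>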

So let’s say you’re trying to match expressions like

x = 3;
y = 10;

The pattern is;

assignment:
ID, EQUALS, NUMBER, SEMICOLON;

And your parser will have code like this;

public void Assignment()
{
    var nameToken = Match(CalcLexer.ID);
    Match(CalcLexer.EQUALS);
    var numberToken = Match(CalcLexer.NUMBER);
    Match(CalcLexer.SEMI);
    Console.WriteLine("Recognised {0} = {1}", nameToken.Content, numberToken.Content);
}

See how every call to Match() is advancing through the file? As we keep matching tokens, we’ll move from the start of the file to the end, reading and interpreting the file as we go.

The last piece of the puzzle is calling parsing methods from inside other parsing methods. This is where the real power of this kind of parser comes from. Let’s take that assignment method and make it more powerful. Let’s say we want to handle both straight assignments, and addition expressions, like this;

x = 4;
y = 3 + 4;
y = 1 + 2 + 3 + 4;

We first need to recognise open-ended maths expressions like “1 + 2 + 3 + 4”. We can define the language as;

assignment:
ID, EQUALS, additionexpression, SEMICOLON;

And our `additionexpression` is a recursively-defined expression;

additionexpression:
NUMBER [PLUS additionexpression];

See how that works? Either it’s a number, or it’s a number followed by ‘+’ and another addition expression.

So we define a parsing method that reflects that recursive definition, like this;

public int AdditionExpression()
{
    // get the number
    var numberToken = Match(CalcLexer.NUMBER);
    var number = int.Parse(numberToken.Content, CultureInfo.InvariantCulture);

    // look for a plus; recurse.
    if (!EOF && LA1.Is(CalcLexer.PLUS))
    {
        Match(CalcLexer.PLUS);
        number += AdditionExpression();
    }
    return number;
}

See how it matches the definition? Read a number, look for a plus; if you see one, read another number, look for another plus…

So now we can finish the job by calling this method in the original Assignment method;

public void Assignment()
{
    var nameToken = Match(CalcLexer.ID);
    Match(CalcLexer.EQUALS);
    var number = AdditionExpression();
    Match(CalcLexer.SEMI);
    Console.WriteLine("Recognised {0} = {1}", nameToken.Content, number);
}

And now we can recognise everything we expect;

x = 12; // "Recognised x = 12"
x = 12 + 34; // "Recognised x = 46"
x = 12 + 34 + 56; // "Recognised x = 102"
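By the way, these examples assume a CalcLexer defined elsewhere. It might look something like this, built with the same calls we used in the SExpressionLexer (a sketch; only the token names come from the code above, and the patterns are my guesses);

public class CalcLexer : LexerBase
{
    public const int ID = 1;
    public const int EQUALS = 2;
    public const int NUMBER = 3;
    public const int SEMI = 4;
    public const int PLUS = 5;

    public CalcLexer(string input) : base(input)
    {
        this.AddPattern(ID, "[a-z]+", "id");
        this.AddLiteral(EQUALS, "=");
        this.AddPattern(NUMBER, "[0-9]+", "number");
        this.AddLiteral(SEMI, ";");
        this.AddLiteral(PLUS, "+");
        this.StandardTokens.AddWhitespace();
    }
}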

So, we’re coming to the end of this intro to Canto34. Imagine the call stack when we are parsing that final expression. It’s going to look like this;

CalcParser.Assignment();                     // x =
    CalcParser.AdditionExpression();         // 12 +
        CalcParser.AdditionExpression();     // 34 +
            CalcParser.AdditionExpression(); // 56

This shape that the call stack makes is what gives this style of parser its name; a *recursive-descent* parser.

Introducing Canto34.js, a lexing and parsing library in JavaScript

I mentioned it in passing in my last post, but I’ve been working on a project called Canto 34, which helps you write lexers and parsers in JavaScript. Here’s the current README.md from the GitHub repo.

-----

Canto 34 is a library for building recursive-descent parsers.

When it comes to writing a parser, you get two main choices. Write your own, or use a parser generator like PEGJS or ANTLR.

I’ve never really had much success with parser generators, and it’s always seemed pretty easy to write a recursive-descent parser yourself, if you have some basic tools like a regex-based lexer, and some basic functions for matching tokens and reporting errors. Canto34.js gives you the functions you need to write a regex-based lexer and a recursive descent parser yourself.

A Simple Example

Here’s a simple example of a language which just defines name-value pairs;

foo 1, bar 5, baz 8.

We’ll see how to parse this language. First, you write a lexer which identifies the different kinds of token — names, integers, commas, and periods;

var lexer = new canto34.Lexer();

// add a token for whitespace
lexer.addTokenType({ 
    name: "ws",       // give it a name
    regexp: /[ \t]+/, // match spaces and tabs
    ignore: true      // don't return this token in the result
});

// add a token type for names, defined as strings of lower-case characters
lexer.addTokenType({ name: "name", regexp: /^[a-z]+/  });

// bring in some predefined types for commas, period, and integers.
var types = canto34.StandardTokenTypes;
lexer.addTokenType(types.comma());
lexer.addTokenType(types.period());
lexer.addTokenType(types.integer());

And here’s how you use it;

var tokens = lexer.tokenize("foo 1, bar 2.");

which returns a set of tokens;

[
    { content: "foo", type: "name",    line: 1, character: 1 },
    { content: 1,     type: "integer", line: 1, character: 5 },
    { content: ",",   type: "comma",   line: 1, character: 6 },
    { content: "bar", type: "name",    line: 1, character: 8 },
    { content: 2,     type: "integer", line: 1, character: 12 },
    { content: ".",   type: "period",  line: 1, character: 13 }
]

Now you feed these tokens into a parser. Here’s a parser for our language;

var parser = new canto34.Parser();
parser.listOfNameValuePairs = function() {
    this.result = [];
    this.nameValuePair();
    while (!this.eof() && this.la1("comma")) {
        this.match("comma");
        this.nameValuePair();
    }
    this.match("period");
};

parser.nameValuePair = function() {
    var name = this.match("name").content;
    var value = this.match("integer").content;
    this.result.push({ name:name, value: value });
};

And it’s used like this;

var tokens = lexer.tokenize("foo 1, bar 2, baz 3.");
parser.initialize(tokens);
parser.listOfNameValuePairs();

parser.result now contains the value;

[
    {name:"foo", value:1},
    {name:"bar", value:2},
    {name:"baz", value:3},
]

And we’re done! That’s a basic lexer and parser, written in thirty lines of code.

What’s canto34.js good for?

Canto 34 is designed for quickly writing parsers for straightforward domain-specific languages (DSLs). Please don’t use it for writing parsers for general-purpose programming languages — it’s not designed to be that fast or powerful.