There are four major fixes here:
1. Some EOF tokens were being pattern-matched with the wrong arity.
2. Tuples that are too long are now speculatively parsed as untyped tuples, and only then does the parser complain that there are too many elements.
3. Singleton tuples with a trailing comma are now handled differently from grouping parentheses, consistently between the typed and untyped logic.
4. The extra return values used to detect untyped singleton tuples are also used to pass along the close-paren position, so that too_many_elements can report the correct file position as well.
Point 4 also completely removes the need for the open-paren position tracking I was doing, and that I thought I would need even more of in the ambiguous-open-paren-stack case.
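A minimal sketch of point 4 (token shapes, the element limit, and all names here are assumptions, not the project's actual code): the paren-body parser hands back the close-paren position along with the elements, so the size check can report a real location instead of a tracked open-paren one.

```erlang
%% Hypothetical sketch: thread the close-paren position back out of
%% the paren parser so size errors can point at the right spot.
%% Tokens are assumed to look like {int, N}, {',', Pos}, {')', Pos}.
parse_tuple(Tokens) ->
    {ok, Elements, ClosePos, Rest} = parse_paren_body(Tokens, []),
    case length(Elements) > 5 of   % assumed element limit, for illustration
        true  -> {error, {too_many_elements, ClosePos}};
        false -> {ok, {tuple, Elements}, Rest}
    end.

%% Accumulate elements until ')'; return its position as an extra value.
parse_paren_body([{')', Pos} | Rest], Acc) ->
    {ok, lists:reverse(Acc), Pos, Rest};
parse_paren_body([{',', _Pos} | Rest], Acc) ->
    parse_paren_body(Rest, Acc);
parse_paren_body([Tok | Rest], Acc) ->
    parse_paren_body(Rest, [Tok | Acc]).
```

The same extra return slot that distinguishes `(x,)` from `(x)` carries the position for free, which is what makes the open-paren stack unnecessary.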
This seemed like it was going to be insanely complex, but it
turns out the compiler doesn't accept spaces in qualified
names, so I can just dump periods in the lexer and hit the
result with string:split/3. Easy.
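As a sketch of the idea (module and function names are hypothetical): the lexer can emit the whole dotted name as one token and split it afterwards with string:split/3, where the `all` flag splits on every occurrence of the separator.

```erlang
%% Hypothetical sketch: split a qualified-name token on periods.
-module(qname).
-export([split_qualified/1]).

split_qualified(Name) ->
    %% `all` splits on every ".", not just the first one.
    string:split(Name, ".", all).
```

So `qname:split_qualified("Chain.Event.emit")` yields `["Chain", "Event", "emit"]`, with no whitespace handling needed inside the name.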
This saves some effort and probably some performance for things like integers, but I'm mainly doing this in anticipation of string literals, because it would just be ridiculous to read code that lexes string literals twice.
Now tests compare the literal parser against the output of the
compiler. The little example contracts we are compiling for the
AACI already had the FATE value in them, in the form of the
instruction
{'RETURNR', {immediate, FateValue}}
so we just extract that and use it for the tests.
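That extraction might look like the following (module name and return shape are assumptions, not the project's actual test code): walk the compiled instruction list and match the 'RETURNR' tuple.

```erlang
%% Hypothetical sketch: pull the FATE value out of a compiled
%% instruction list by matching the {'RETURNR', {immediate, _}} shape.
-module(extract).
-export([return_value/1]).

return_value([{'RETURNR', {immediate, FateValue}} | _Rest]) ->
    {ok, FateValue};
return_value([_Instr | Rest]) ->
    return_value(Rest);
return_value([]) ->
    not_found.
```

The test then just compares the `{ok, V}` it gets here against whatever the literal parser produced for the same source text.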
We tokenize, and then do the simplest possible recursive descent.
We don't want to evaluate anything, so infix operators are out,
meaning no shunting-yard, tree rearranging, or LR(1) shenanigans
are necessary; just write the code.
If we want to 'peek', we just take the next token and pass it
around from that point on, until it can actually be consumed.
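That peek-by-passing pattern might look like this (all names and token shapes here are hypothetical, a sketch rather than the project's code):

```erlang
%% Hypothetical sketch: no peek/1 primitive. Once a rule pops the
%% next token off the list, it hands that token along as an ordinary
%% argument until some rule can actually consume it.
-module(descent).
-export([parse_value/1]).

parse_value([Next | Rest]) -> parse_value(Next, Rest);
parse_value([])            -> {error, unexpected_eof}.

%% The "peeked" token arrives as the first argument.
parse_value({int, N}, Rest) ->
    {ok, {int, N}, Rest};
parse_value({'(', _Pos}, Rest) ->
    parse_paren(Rest, []);
parse_value(Tok, _Rest) ->
    {error, {unexpected, Tok}}.

%% Collect comma-separated elements until the close paren.
parse_paren([{')', _Pos} | Rest], Acc) ->
    {ok, {tuple, lists:reverse(Acc)}, Rest};
parse_paren(Tokens, Acc) ->
    {ok, Elem, Rest} = parse_value(Tokens),
    case Rest of
        [{',', _Pos} | Rest2] -> parse_paren(Rest2, [Elem | Acc]);
        _                     -> parse_paren(Rest, [Elem | Acc])
    end.
```

Every rule returns `{ok, Tree, RemainingTokens}`, so "lookahead" is just ordinary argument passing and there is no backtracking state to manage.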