In this part we will complete the missing statements from part 6 and finish our front end.
In part 5 we described the objects that we will need to semantically analyze a tiny program. In the current part we will extend the parser of part 4 to perform semantic analysis and create the GENERIC trees.
In the last installment of this series we saw how to verify that the sequence of tokens of the input is syntactically valid. Today we will see what we need to give it meaning.
Now that we have a stream of tokens we can start performing syntactic analysis.
Now that the minimal infrastructure is in place, we can start implementing our tiny front end. Today we will talk about the lexer.
The previous installment of this series was all about the syntax and the semantics of the tiny language. In this chapter we will start implementing a front end for tiny in GCC. The journey will be long but rewarding. Let’s get started.
In this series we will see the process of adding a new front end for a very simple language in GCC. If you, like me, marvel at the magic of compilers then these posts may be for you.