I’m considering using Lezer in my project and have been reading its docs and sources for a while.
I noticed that even though Lezer grammars have their own built-in tokenizer, all the standard/example grammars use external tokenizers, with a note like “Hand-written tokenizers for *** tokens that can’t be expressed by lezer’s built-in tokenizer”.
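If I understand the sources right, those external tokenizers look roughly like this (I’m using @lezer/javascript’s semicolon-insertion tokenizer as a model; the `insertSemi` term id and the `./parser.terms` path are placeholders for whatever lezer-generator emits for a given grammar):

```ts
import { ExternalTokenizer } from "@lezer/lr"
import { insertSemi } from "./parser.terms" // generated term ids (assumed path)

const newline = 10, braceR = 125 // "\n" and "}" code points

// Emits a zero-length insertSemi token before a newline, "}", or EOF,
// but only when the parser is in a state that can actually shift it.
export const insertSemicolon = new ExternalTokenizer((input, stack) => {
  if ((input.next == newline || input.next == braceR || input.next < 0) &&
      stack.canShift(insertSemi))
    input.acceptToken(insertSemi)
})
```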
What’s the limitation there? Is it just a matter of efficiency, or is the built-in tokenizer simply not capable of expressing those tokens?
Would it be able to handle a C-style language without an external tokenizer?
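For concreteness, here’s a minimal sketch of what I’d hope to express with built-in tokens only (the rule and token names are my own invention, not taken from any published grammar):

```
@top Program { statement* }

statement { Identifier "=" expr ";" }
expr { Number | String | Identifier }

@skip { space | LineComment }

@tokens {
  Identifier { $[a-zA-Z_] $[a-zA-Z0-9_]* }
  Number { $[0-9]+ ("." $[0-9]+)? }
  String { '"' (![\\\n"] | "\\" _)* '"' }
  LineComment { "//" ![\n]* }
  space { $[ \t\n]+ }
}
```

If strings, comments, and numbers like these work without dropping down to JavaScript, that would be enough for my purposes.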
In my project I would gladly sacrifice parsing efficiency for a simpler implementation.
I have already built this with PEG.js, but unfortunately, being a PEG parser, it can’t handle left-recursive rules, which makes mathematical expressions with operator precedence awkward to express. That’s why I’m looking at other parsers.