Does or can Lezer have "scope"?

I think scope would solve a lot of the problems I’m running into over here: How do I parse part of a document? - #2 by NullVoxPopuli

For example, I need to match all text (to ignore it), and then, on top of that, match other constructs built from the same characters as the text (it’s the order and composition of those characters that makes them significant).

For example, how do I make:

two
{{"two"}} and {{"three"}
three

be detected as

Glimmer(
  Expression(String)
  Expression(String)
)

Ignoring all the text?

In a normal Glimmer parser, text gets its own node, but since Lezer is for highlighting, I should be able to skip it, yeah?

The scope here is within the {{ and }}

I’d like my tokens, etc., to be applicable only within those outermost delimiters, and not parseable anywhere else.

As it is, _+ for Text swallows everything, and no amount of precedence makes my grammar work.
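For reference, here is a minimal, untested sketch of the grammar shape I’d expect to work (the node names Glimmer, Expression, and String are just the ones from the example above). The key idea is that the text token cannot match a { at all, so it always stops before a {{ delimiter instead of swallowing it:

```lezer
@top Glimmer { (Expression | text)* }

Expression { "{{" String "}}" }

@tokens {
  String { '"' !["]* '"' }
  // text excludes "{" entirely, so it can never swallow a "{{" delimiter.
  // A lone "{" in plain text would need its own branch, e.g. "{" ![{]
  text { ![{]+ }
}
```

Because text is lowercase, Lezer emits no tree node for it, which matches the “ignore the text” goal.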

Lezer tokenizing is contextual already (it will only match a given token when the grammar allows it in that position), which sounds like what you’re asking for here.

Well, that’s good. I guess, then, that my closedBy definitions don’t take precedence over the generic Text token?


@NullVoxPopuli what do you mean by generic Text? Is it a built-in feature? I could not find it in the manual.

Precedence is determined by the ordering of token definitions, but if a token matches “everything”, then once it is scanned it will just gobble up the rest of the document. I had similar issues with delimiting comments. Curious to know how other people approach it.
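For the comment case, the usual fix is the same: write the token so it cannot run past its own closing delimiter. A hedged sketch for /* ... */ comments, using the classic “star not followed by slash” pattern (adapt the delimiters and node name to your language):

```lezer
@tokens {
  // the body may contain "*" only when it is not followed by "/",
  // so the token necessarily ends at the first "*/", not the last
  BlockComment { "/*" (![*] | "*"+ ![*/])* "*"+ "/" }
}
```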