Understanding the Python indent tracker hash function

I am writing a Lezer parser for a language with meaningful indentation, and I've used the Python parser (GitHub: lezer-parser/python) as a starting point.

I need to track a bit of extra context on top of the indent level, so I'll need to update the hash function accordingly, and I'm confused by the current function. It's calculated like this:

(parent ? parent.hash + parent.hash << 8 : 0) + depth + (depth << 4)

It looks to me like it's shifting the parent's hash up to the higher bits so that the bottom 8 bits are available for the current depth (so there's an assumption that depths will be < 256, right?), but I would have expected just

(parent ? parent.hash + parent.hash << 8 : 0) + depth

Why is + (depth << 4) also needed?
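
For concreteness, here's a quick numeric check of what the quoted expression computes (just the expression restated as a standalone helper, nothing more):

    // The quoted expression as a helper. Note that in JavaScript, + binds
    // tighter than <<, so the parent term parses as
    // (parent.hash + parent.hash) << 8, i.e. the parent's hash shifted up
    // 9 bits (with 32-bit wrapping), while depth + (depth << 4) is depth * 17.
    const hash = (parent, depth) =>
      (parent ? parent.hash + parent.hash << 8 : 0) + depth + (depth << 4)

    const root = { hash: hash(null, 0) }   // 0
    const child = { hash: hash(root, 4) }  // 4 + (4 << 4) = 68
    console.log(hash(child, 8))            // ((68 + 68) << 8) + 8 + (8 << 4) = 34952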

I am not very knowledgeable about hash functions and just modeled this after an example I found online. There's definitely no assumption of depths being less than anything; it's just smashing bits together to get values that hopefully won't clash.
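
In that same spirit, folding one extra field into the hash might look something like this. This is a hypothetical sketch, not code from lezer-python: the extra field and its shift amount are made up, chosen only so its bits tend to land away from the depth's.

    // Hypothetical: an indent context carrying one extra numeric field,
    // mixed in with its own shift in the same bit-smashing style.
    class IndentLevel {
      constructor(parent, depth, extra) {
        this.parent = parent
        this.depth = depth
        this.extra = extra
        this.hash = (parent ? parent.hash + parent.hash << 8 : 0) +
          depth + (depth << 4) +
          extra + (extra << 6)
      }
    }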

If they do clash, is that a performance hit, or would the parser be buggy?

That could lead to an incorrect parse, I guess, if the stars align in just the right way and the clashing context ends up being a candidate for reuse exactly at the point where the clash happens.
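
For reference, the hash is what ties the context into node reuse: it's handed to @lezer/lr's ContextTracker, and, roughly, a node parsed earlier is only a reuse candidate when its recorded context hash matches the current one. A sketch of the wiring, reusing the hypothetical IndentLevel above (shift/reduce callbacks elided, since only the hash hookup matters here):

    import { ContextTracker } from "@lezer/lr"

    // Equal hashes are treated as "same context" for reuse purposes, so
    // two distinct contexts that happen to collide could be conflated.
    const trackIndent = new ContextTracker({
      start: new IndentLevel(null, 0, 0),
      hash: context => context.hash
      // shift/reduce callbacks omitted; they would push and pop IndentLevels
    })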