That's really interesting; indeed, I can reproduce this by changing the comment. I also managed to get correct output for this sample by renaming the function.
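For anyone who wants to try this themselves, here's a minimal sketch of how I'd generate the perturbed prompts. Everything here is hypothetical (the sample function, names, and comment are made up for illustration); the point is that each variant is a semantically identical program, so any difference in the model's completion comes purely from wording:

```python
import re

def rename_identifier(prompt: str, old: str, new: str) -> str:
    """Return a copy of the prompt with one identifier renamed.

    Word boundaries keep substrings of other names untouched.
    """
    return re.sub(rf"\b{re.escape(old)}\b", new, prompt)

# Hypothetical base prompt: the comment and function name are the only
# natural-language signals the model sees.
base = '''# Return the sum of the even numbers in xs
def sum_even(xs):
    return sum(x for x in xs if x % 2 == 0)
'''

variants = [
    base,
    # variant 1: rename the function, keep everything else
    rename_identifier(base, "sum_even", "f"),
    # variant 2: reword the comment, keep the code
    base.replace("Return the sum of the even numbers in xs",
                 "Add up every even value in xs"),
]

# Sanity check: all variants are the same program, modulo wording.
for v in variants:
    ns = {}
    exec(v, ns)
    fn = next(val for name, val in ns.items()
              if callable(val) and not name.startswith("_"))
    assert fn([1, 2, 3, 4]) == 6
```

Feeding each variant to the model and diffing the completions is then enough to see the sensitivity: same semantics in, different code out.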
Is it, though? The major selling point of coding LLMs is that you can use natural language to describe what you want. If minor changes in wording - ones that would make no difference to a human - can drastically degrade the output, that feels problematic for real-world use.