I remember a professor in an undergraduate philosophy class using an absurd example to make an indelible point: "Just because you have not seen a green moon does not mean that one doesn't exist." In common terms, absence of evidence is not evidence of absence.
For those who don't experience the issue, their observations are irrelevant here. For debugging, only those with a legitimate problem count (the sample), and those who report it are an even smaller sub-sample. Multiply the reports by some factor to estimate how many are truly affected.
The issue is reproducible for me and, I assume, for those affected. It is a repeatable fault. I use the filter to work around the annoyance; I adapt :-). Unless Florian already knows which code/library triggers this effect, he will have to rely on those affected for testing. I offer to help, as the pandemic leaves me some free time.
How does someone test for an effect they cannot observe but that other observers report? One way is to create a debug build of the software and ask the observers to run it and submit the logs. You then make a change to the code and ask them to confirm that the undesired effect is gone. Right?
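To make that concrete, here is a minimal sketch of what such a debug build could look like. Everything here is hypothetical (the `DEBUG_REPRO` variable, the log file name, and the `process_message` function are illustrations, not the project's actual code): the idea is simply a switch affected users can flip, so their run produces a log they can mail back.

```python
import logging
import os

# Hypothetical opt-in switch: an affected user sets DEBUG_REPRO=1,
# reproduces the fault, and submits repro-debug.log with the report.
if os.environ.get("DEBUG_REPRO") == "1":
    logging.basicConfig(
        filename="repro-debug.log",
        level=logging.DEBUG,
        format="%(asctime)s %(levelname)s %(message)s",
    )
else:
    # Unaffected users see only warnings, so normal use is unchanged.
    logging.basicConfig(level=logging.WARNING)

def process_message(msg):
    """Illustrative stand-in for the code path suspected of the fault."""
    # Log enough context to reconstruct the failure path,
    # without dumping private message content into the log.
    logging.debug("processing id=%s size=%d", msg.get("id"), len(msg.get("body", "")))
    # ... the real processing would happen here ...
    return True
```

After a candidate fix, the same users rerun with the switch on; if the log no longer shows the failure path, that is the confirmation the developer cannot get locally.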