Thank you a lot for pointing this out.
No problem, happy to help.

I believe this will solve the problem in a much simpler way. What do you think?
Hmm, that's getting a bit too hacky, I would say (i.e. stacking one imperfect approach on top of another). It's worth pointing out that if you did have a good way to shuffle the participants (e.g. with a modern variant of the Fisher-Yates algorithm, and enough entropy to feed it with), then you wouldn't need anything else: after a properly-executed shuffle you could simply read off the winners by position (i.e. index 0 = winner 1, index 1 = winner 2, etc.).
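For the record, here's what that looks like in sketch form (my own illustration, not the code from my earlier post; it assumes you already have an unbiased randomInt(maxExclusive) to hand, e.g. one built on the "stretched" entropy I talk about below):

// Modern (Durstenfeld) variant of the Fisher-Yates shuffle.
// randomInt(maxExclusive) is assumed to return an unbiased integer in [0, maxExclusive).
function shuffle(participants, randomInt) {
    let list = participants.slice();             // work on a copy
    for (let i = list.length - 1; i > 0; i--) {
        let j = randomInt(i + 1);                // unbiased index in [0, i]
        [list[i], list[j]] = [list[j], list[i]]; // swap
    }
    return list;                                 // list[0] = winner 1, list[1] = winner 2, ...
}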
Shuffling is actually a very natural way to think about this problem, and the code I provided earlier is capable of doing a "perfect" (unbiased) shuffle. If you call pick_winners_unbiased with a list of (let's say) 50 names and ask it to return 50 winners, then what you get back is a shuffled version of the original list.
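In other words (illustrative only; I'm assuming the arguments are the name list followed by the number of winners):

let winners = pick_winners_unbiased(participants, participants.length);
// asking for as many winners as there are names just gives you back
// an unbiased permutation (i.e. a shuffle) of the original list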
Sorry to be discouraging, but I don't think the approach you've settled on (i.e. using the available entropy directly, and improvising a scheme to generate a sequence of random numbers from it) is one you should stick with for long. Stretching out the entropy (with a cryptographic hash function, as Loyce and I have suggested) will let you explore better algorithms (and remove the limit on the number of winners, too).
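To make "stretching" concrete, here's a minimal sketch of the kind of thing I mean (my own illustration; the counter and the ":" separator are my choices, not anything Loyce specified): hash the blockhash together with an incrementing counter, and you get as many fresh 64-hex-digit values as you'll ever need.

// Sketch only (Node.js): turn one blockhash into an unlimited stream of 256-bit values.
const crypto = require('crypto');

function* stretched(blockhash) {
    for (let counter = 0; ; counter++) {
        // each iteration yields a fresh 64-hex-digit value
        yield crypto.createHash('sha256')
                    .update(blockhash + ':' + counter)
                    .digest('hex');
    }
}

Each value can then be cut into safely-sized pieces (12 hex digits or fewer) before being handed to parseInt, or parsed with BigInt to sidestep the precision problem entirely.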
The "modulo bias" your algorithm suffers from is not that big of a deal (particularly for this application, and especially with the divisors being so much smaller than your random numbers), but it's a strong indication that you've probably gotten other details wrong, too. For example, the way you're prepending an extra hexadecimal digit for each winner beyond the first one seems very sketchy to me. Aside from it having no mathematical basis for working correctly (at least, not one that I can see), that approach will overflow; after around 8 "steps" you'll start to lose precision (i.e. you'll be passing things into parseInt that are greater than Number.MAX_SAFE_INTEGER).
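You can see the cliff edge for yourself in any JavaScript console:

parseInt('fffffffffffff', 16) <= Number.MAX_SAFE_INTEGER           // true: 13 hex digits (52 bits) still fit exactly in a double
parseInt('ffffffffffffff', 16) <= Number.MAX_SAFE_INTEGER          // false: 14 hex digits (56 bits) no longer do
parseInt('ffffffffffffff', 16) === parseInt('fffffffffffffe', 16)  // true (!): different inputs, same rounded result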
One way to test the quality of algorithms like these is to run simulations and compute histograms. So, I simulated 1,000,000 giveaways among 30 participants using your algorithm. In each giveaway, "LoyceV" was included in the list of participants along with 29 other names. Here's how many times LoyceV won, came second, came third, and so on:

For reference, here's the same simulation run using the algorithm I proposed earlier:

Now, one of these is clearly working correctly and the other needs a little help.

If you feel like I must have made some kind of mistake, then feel free to replicate my results (a sketch of the full harness follows the steps below):
- Use the first 30 names from my previous post as the list of participants.
- Keep an array H of 30 integers, initialized to 0.
- Keep a running counter T set to the current trial # (from 0 to 999,999).
Then, do this 1,000,000 times:
- Compute an imaginary "blockhash" by taking the SHA-256 of T as a string (e.g. the SHA-256 of "0" is "5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9").
- Pick 30 winners from the list of participants using your algorithm and the above blockhash.
- Find the position of "LoyceV" in the list of winners and increment H accordingly.
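For what it's worth, here's roughly what my own harness looked like, in sketch form (Node.js). pickWinners() is just a stand-in for your selection algorithm, which I haven't reproduced here:

const crypto = require('crypto');

// Stand-in for your algorithm; plug your own code in here.
// Assumed shape: pickWinners(names, howMany, blockhash) -> array of winner names.
function pickWinners(names, howMany, blockhash) {
    throw new Error('replace this with your selection code');
}

const participants = [ /* the first 30 names from my previous post */ ];
const H = new Array(30).fill(0);    // H[k] = number of times LoyceV finished in position k

for (let T = 0; T < 1000000; T++) {
    // imaginary "blockhash": the SHA-256 of the trial number as a string
    const blockhash = crypto.createHash('sha256').update(String(T)).digest('hex');

    const winners = pickWinners(participants, 30, blockhash);
    H[winners.indexOf('LoyceV')]++;
}

console.log(H);    // the histogram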
My suggestion to you would be to take a more careful look at my previous post. But, if you're committed to sticking with your approach (which you seem to be), then I suggest changing this line:
let n_winner_decimal = parseInt(blockhash.slice(-(6 + step)), 16)
To this:
let n_winner_decimal = parseInt(blockhash.slice(-(6 + step), blockhash.length - step), 16)
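To see concretely what that changes, here's the example hash from above (the SHA-256 of "0") run through both versions:

const blockhash = '5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9';
for (let step = 0; step < 3; step++) {
    console.log(
        blockhash.slice(-(6 + step)),                          // old: "fb57e9", "7fb57e9", "27fb57e9", ... keeps growing (and eventually overflows)
        blockhash.slice(-(6 + step), blockhash.length - step)  // new: "fb57e9", "7fb57e", "27fb57", ... a 6-digit window sliding left
    );
}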
That will improve things quite a bit (both by avoiding overflow, and also by producing a less correlated sequence). With the above fix applied, your histogram looks very similar to mine:
