# [messaging] collaborative random number generation

Jeff Burdges burdges at gnunet.org
Sat Dec 12 08:45:10 PST 2015

```
I'm never too keen on proof-of-work schemes like Unicorn or Bitcoin for
numerous reasons.

On Fri, 2015-12-11 at 12:48 +0000, Michael Rogers wrote:
> I think we may be able to improve on commit-reveal by adding an
> intermediate sorting step.
>
> Assumptions:
>
> 1. Closed set of participants, some of whom are colluding attackers
> 2. Some commonly known, predictable value that changes each round,
> e.g.
> the date or a counter
>
> In each round:
>
> 1. Each participant commits to a value
> 2. Any participant who doesn't commit is passed over
> 3. Hash the predictable value
> 4. Use the hash to sort the participants
> 5. The participants reveal their commitments in order
> 6. Any participant who doesn't reveal is passed over
> 7. Hash the revealed values to produce the output

In practice, asking participants to reveal in order sounds problematic,
in the sense that ordering tends to require rounds.  Arguably you're
requiring one round per participant, which kills scalability completely.

I think a typical two-round commit-and-reveal works so long as you want
n + k bits of entropy, where k is your security level, say 128 bits, and
n is your number of participants.  It's hard to reduce the number of
bits without giving power to an adversary, though.

There is an interesting candidate scheme that operates in O(log |P|)
rounds where P is the set of participants:

Fix l > 2 and s > 128.  Let P_i be the set of participants in the i-th
round.  Compute a hash h_i of length either l·|P_i|, or else k + |P_i|
when k > |P_i| + s.  If k > |P_i| + s then we're done and h_i is the
answer.  Otherwise, order the participants according to their reveals,
assign l bits of the hash to each participant, kick out any participant
whose l bits are not all zeros, and repeat the procedure to compute
h_{i+1}, and so on.

We must now find good parameters l > 2 and s > 128 that limit the
adversary's influence on the final round's hash:  in an honest round,
any machine in round i has a 2^{-l} chance of surviving into round i+1,
but if an adversary has x colluding machines then it can choose the
most favorable of 2^x outcomes from the 2^{l·|P_i|} possible ones.

There are correct ways to compute how many machines the adversary
expects to keep in the game, but an overly simplistic computation
suggests the adversary keeps around a 1/l + 2^{-l} fraction of its
machines per round.  That's far more than any honest participant, but
the colluders remain only a constant proportion of the network as we
scale down, so that's good.

We could improve upon this by attempting to expose the adversary, or at
least restarting, if too many machines survive.  I'd suspect this halves
the adversary's effectiveness at keeping machines in, so around 1/(2l)
per round, or asymptotically less than a constant proportion of the
network.

All the above should be considered crazed, frantic hand-waving, not any
sort of proof, but I could check it if it actually seems useful for
anything.

Jeff

p.s.  We could also ask honest participants to occasionally wait until
dangerously late in the round to reveal: frequently they'd drop out, but
sometimes they'd screw the adversary.  It isn't clear whether this
helps, though, since doing so harms efforts to expose the adversary.

p.p.s.  I wonder if an altcoin could use a participant reduction scheme
like this to save power.  Just make them all publicly commit to the
transactions they wish to move before the first round.  Happy to take
that conversation off list if discussing altcoin designs is a bit too
off topic.  ;)

```
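Michael's seven-step protocol quoted above can be sketched in a few lines of Python. This is a minimal sketch, not anything from the thread: SHA-256, the participant names, and the way the round value seeds the sort order are all my own choices.

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Step 1: each participant commits to a random secret value.
participants = ["alice", "bob", "carol"]
secrets = {p: os.urandom(32) for p in participants}
commitments = {p: H(secrets[p]) for p in participants}

# Steps 3-4: hash the commonly known round value, then use that hash
# to derive a reveal order over the participants.
round_value = b"2015-12-12"
seed = H(round_value)
order = sorted(participants, key=lambda p: H(seed + p.encode()))

# Steps 5-7: reveal in order, pass over anyone whose reveal fails to
# match their commitment, and hash the surviving reveals into the output.
valid = [p for p in order if H(secrets[p]) == commitments[p]]
output = H(b"".join(secrets[p] for p in valid))
```

Steps 2 and 6 (passing over non-responders) collapse here into the `valid` filter, since in a sketch every simulated participant responds.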
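One round of Jeff's O(log |P|) reduction scheme might look like the following. Again a sketch under my own assumptions: SHA-256 stretched in counter mode supplies the l·|P_i| bits, participants are ordered by their reveal values, and a participant survives iff its assigned l bits are all zero.

```python
import hashlib

def bitstream(seed: bytes, nbits: int) -> str:
    """Stretch a seed into nbits bits via SHA-256 in counter mode."""
    out = b""
    ctr = 0
    while len(out) * 8 < nbits:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return "".join(f"{byte:08b}" for byte in out)[:nbits]

def elimination_round(reveals: dict, l: int = 3) -> list:
    """Hash everyone's reveals, order the participants by their
    reveals, assign each l bits of the hash, and keep only those
    whose l bits are all zero: roughly a 2^-l survival chance each."""
    order = sorted(reveals, key=lambda p: reveals[p])
    h = hashlib.sha256(b"".join(reveals[p] for p in order)).digest()
    bits = bitstream(h, l * len(order))
    return [p for i, p in enumerate(order)
            if bits[i * l:(i + 1) * l] == "0" * l]

survivors = elimination_round(
    {f"p{i}": i.to_bytes(4, "big") for i in range(64)})
```

Iterating `elimination_round` until the participant set is small enough, then taking a final k-bit hash of the remaining reveals, would give the whole scheme.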
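The claim that x colluding machines can pick the most favorable of 2^x outcomes is easy to simulate: let the adversary try every subset of its own reveals to withhold and count the best number of colluders it can keep alive. The survival rule below is a stand-in of my own (per-participant hashing rather than slices of one long hash), chosen only to make the 2^x search concrete.

```python
import hashlib
from itertools import chain, combinations

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def survives(p: str, seed: bytes, l: int = 3) -> bool:
    # Stand-in rule: survive iff the top l bits of H(seed || id) are zero,
    # matching the 2^-l per-machine survival chance in the scheme above.
    return H(seed + p.encode())[0] >> (8 - l) == 0

honest = [f"h{i}" for i in range(20)]
colluders = [f"a{i}" for i in range(4)]          # x = 4 machines
reveals = {p: p.encode() for p in honest + colluders}

best = 0
# 2^x outcomes: every subset of its own reveals the adversary may withhold.
for withheld in chain.from_iterable(
        combinations(colluders, r) for r in range(len(colluders) + 1)):
    active = {p: v for p, v in reveals.items() if p not in withheld}
    seed = H(b"".join(active[p] for p in sorted(active)))
    kept = sum(survives(p, seed)
               for p in colluders if p not in withheld)
    best = max(best, kept)
```

Comparing `best` against the honest expectation (each of the 4 colluders surviving with probability 2^-3) over many trials is the "correct way to compute" what the email only estimates.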