[messaging] Panoramix decryption mixnet messaging spec and design documents
infinity0 at pwned.gg
Fri Nov 10 11:28:00 PST 2017
> On 10/11/17 14:31, Ximin Luo wrote:
>>> So if we're asking whether a system may be vulnerable to the attack, and
>>> it has the properties above, then we need to ask whether the system is
>>> doing something to produce that countervailing tendency. In other words,
>>> is it actively preventing the attack?
>> Indeed, I noted that the specific simple attack "assumes that nodes are unable
>> to use any other information to distinguish between faulty vs good neighbours.
>> There are various things you can do along those lines, and the paper you linked
>> includes a defense that based on their analysis is moderately effective. I'm
>> sure there are improvements to be made though.
> Absolutely. I'm not saying the eclipse attack rules out decentralised
> solutions. I'm saying any decentralised solution needs to specify how
> it's going to prevent the attack.
> If you think that's trivial, take a look at MorphMix and the attacks
> against it, and the papers from 2006-2010 that tried to do anonymous
> routing over P2P networks, or use P2P networks to replace the Tor
> directory system. I think most of them are cited by these two papers:
> Again, I'm not saying it's impossible and we should all go home, but
> it's definitely not a problem to be dismissed out of hand.
I'm not suggesting that it's trivial, but rather that I haven't been too impressed with the examples and counterexamples cited so far as supposed arguments against decentralised networks in general - not just from this thread, but from other researchers as well. That's not meant to be taken personally; it's just my view of the topic as a whole.
The collusion detection of MorphMix does not seem *particularly* advanced to me, so I'm not surprised that an adversarial approach could break it. What would be interesting would be to see papers that try to tackle the problem from an adversarial point of view, including self-awareness about the flaws in their own defense. That might eventually lead to more general theories about the security and performance of decentralised networks, something that I haven't seen much of yet. Some of the later stuff about community detection was quite interesting, which is why I mentioned it earlier.
More generally, I can understand that this topic is a rabbit hole if all you want to do is "just build an anonymous messaging system" - you have to know much more about decentralised systems, etc. So if one's funding is limited specifically to "build an anonymous messaging system" I can understand why one's preference would be for centralised directory authorities. But I see the development of decentralised systems as a wider goal that can have much greater and wider benefits beyond just building anonymous messaging systems, so that's why I maintain this interest in it.
Thanks for the papers though, will be interesting to give them a read through in more detail.
>>>> Going back to the original issue (epistemic attacks against mixnets), the key
>>>> point AFAIU is to ensure that n ~= N. Whether this is achieved in a centralised
>>>> or decentralised way seems immaterial to me.
>>> The question isn't really the size of the view, but how much overlap
>>> there is between the views of different users. Even if a user has some
>>> way to know the value of N in a decentralised system (which is a hard
>>> problem in its own right), how does she know whether the n ~= N nodes in
>>> her own view are also in other users' views?
>> If n ~= N then the overlaps are much closer and you can follow the maths in the
>> rest of the paper to see that the attack probabilities drop to very low.
> That's fine for analysing the system from outside, where the set of N
> nodes can be objectively known. But a user of the system doesn't have
> that objective knowledge. She has a view of n nodes, but she doesn't
> know N, so she can't tell to what extent her view overlaps with those of
> other users. This isn't just an issue of user confidence, it's also a
> practical problem: how does she know when she's learned about enough
> nodes to start communicating?
I agree, but I think this is a "given" if one were working in this area - i.e. a system that claims to "guarantee n ~= N" would include the ability for each participant to know, with high probability, that this actually holds; that's what "guarantee" means. If this weren't met then it would be a bad piece of work.
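To illustrate the kind of mechanism I mean (a hypothetical sketch of my own, not something any of the systems discussed here implements): a participant could estimate N from collision statistics between independent uniform samples of node IDs, e.g. a Lincoln-Petersen capture-recapture estimate, and keep gossiping until her view size n approaches that estimate:

```python
import random

def estimate_N(sample_a, sample_b):
    """Lincoln-Petersen capture-recapture estimate of the total
    population size from two independent uniform samples of node IDs:
    N ~= |A| * |B| / |A intersect B|."""
    recaptured = len(set(sample_a) & set(sample_b))
    if recaptured == 0:
        return None  # samples too small to overlap; no estimate
    return len(sample_a) * len(sample_b) // recaptured

# Simulated network of 5000 nodes; two independent samples of 500 each.
random.seed(0)
nodes = range(5000)
a = random.sample(nodes, 500)
b = random.sample(nodes, 500)
print(estimate_N(a, b))  # typically within ~15% of the true size, 5000
```

Of course, getting genuinely uniform samples in an adversarial P2P setting is exactly the hard part - this only shows the shape of the check, not how to make the sampling robust.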
> Going back to the wider issue of epistemic attacks on anonymity systems,
> it may not even be necessary for a user's view to differ much from those
> of other users. For example, if the attacker can add a single node to a
> victim's view that's not in any other user's view, then any traffic
> passing through that node must come from the victim. So even n = N for
> the victim, and n = N-1 for all other users, doesn't ensure safety.
Well, if N is large then the target has a low probability of selecting the poisoned node. Also, if gossip were occurring then other nodes apart from the target would also be contacting the poisoned node. It's not such a clear-cut scenario to me; more mathematical analysis is required. :)
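As a rough back-of-the-envelope sketch of that first point (my own illustration, not from any cited paper; assumes relays are chosen uniformly at random from the view):

```python
def p_hit_poisoned(N, path_len, messages):
    """Probability that at least one of `messages` paths, each built from
    `path_len` relays drawn uniformly from a view of N nodes, includes
    the single poisoned node."""
    p_miss_one_path = (1 - 1 / N) ** path_len
    return 1 - p_miss_one_path ** messages

# With a large view the per-message risk is tiny...
print(round(p_hit_poisoned(N=10_000, path_len=3, messages=1), 6))       # 0.0003
# ...but it accumulates if the poisoned node stays in the view long-term.
print(round(p_hit_poisoned(N=10_000, path_len=3, messages=10_000), 2))  # ~0.95
```

Which also ties into the next point - whether "1 message deanonymised out of many" is the right loss function matters a lot for how alarming that second number is.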
I also remain mildly skeptical of the focus on the specific threat model where having 1 message deanonymised out of all of a user's messages counts as a "loss", and of the idea that the focus of anonymity systems should be to reduce this probability even if it means sacrificing "average" deanonymisation probability over all messages. But well, I haven't read very widely around here; maybe there are position papers that argue this point properly and solidly.
(Yes, I understand that 1 message deanonymised means that many others might also be deanonymised in practice, but some way of quantifying this would put the assumption on a more solid footing. AIUI this was the main drive behind the "1-guard-9-months" change in Tor a few years ago, but I haven't heard an explanation of the *reason* behind this assumption yet, only "we assume".)
>>> I'm not interested in writing off decentralised systems any more than
>>> you are, but there's a burden of proof here. Given the existence of a
>>> pretty broad class of attacks that only apply to decentralised systems,
>>> a decentralised system needs to show it's not vulnerable to those attacks.
>> The attacks only work if the decentralised system literally makes no effort to
>> defend itself.
> That would only be true if every effort was effective. Look at MorphMix,
> for example. It had a clever defence to prevent an eclipse-like attack,
> but the defence was defeated by modelling its internal state.
Sorry, I was being too succinct there. Yes, I meant "effective effort".