I think the word "filter", which I have been using, may give the impression that the system is about hiding information. It is more likely to be used to rank-order information, so that information that people you trust have deemed valuable is more likely to bubble up to you. It is meant to augment your ability to sort through information and social cues to find competent people and trustworthy information, not to replace it.
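To make the rank-ordering idea concrete, here is a minimal sketch, assuming a precomputed table of personal trust scores. All names (`rank_items`, `upvoters_of`, the trust table) are illustrative, not part of the actual system:

```python
def rank_items(items, upvoters_of, trust):
    """Rank items by trust-weighted upvotes (hypothetical sketch).

    items: list of item ids.
    upvoters_of: dict mapping item id -> list of users who upvoted it.
    trust: dict mapping user -> trust score from *your* point of view.
    """
    def score(item):
        # An upvote counts for as much as you trust the upvoter;
        # upvotes from strangers contribute nothing.
        return sum(trust.get(user, 0.0) for user in upvoters_of.get(item, []))
    return sorted(items, key=score, reverse=True)


trust = {"alice": 0.9, "bob": 0.1}
upvoters_of = {"post1": ["bob"], "post2": ["alice"]}
print(rank_items(["post1", "post2"], upvoters_of, trust))
```

Nothing is hidden here: every item keeps a score, and items endorsed by people you trust simply sort higher.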
No conversation that I have been a part of yet. But it is of course something that would be very interesting to discuss.
There is a family resemblance with the way something like Twitter is set up. There are a few differences:
How does this affect the formation of bubbles? I'm not sure. My guess is that it should reduce some of the incentives that drive the tribe-forming behaviors at Twitter.
I'm also not sure that bubbles are a massive problem, especially for the types of communities that would realistically be integrated into the system. This last point is loosely held, and I invite strong criticism; it is something we are paying attention to as we run trials with larger groups. If these problems turn out to be severe, EigenKarma could be combined with other designs that counteract them (though I haven't worked through that idea deeply).
As it is currently set up, you could start a blank account, give someone a single upvote, and then you would see something pretty similar to their trust graph: you would see whom they trust.
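This "single upvote inherits the upvoted person's trust graph" behavior is what you get from an EigenTrust / personalized-PageRank style propagation. The following is a hedged sketch under that assumption; the function name, graph encoding, and parameters are all illustrative, not the actual EigenKarma code:

```python
def propagate_trust(upvotes, seed, damping=0.85, iterations=50):
    """Compute trust scores from `seed`'s point of view (hypothetical sketch).

    upvotes: dict mapping voter -> {target: upvote count}.
    Trust starts concentrated on `seed` and flows along upvote edges,
    in the spirit of personalized PageRank / EigenTrust.
    """
    users = set(upvotes) | {t for votes in upvotes.values() for t in votes}
    trust = {u: (1.0 if u == seed else 0.0) for u in users}
    for _ in range(iterations):
        # A share (1 - damping) of trust always flows back to the seed.
        new = {u: ((1 - damping) if u == seed else 0.0) for u in users}
        for voter, votes in upvotes.items():
            total = sum(votes.values())
            if total == 0:
                continue
            for target, count in votes.items():
                # Each voter passes on trust proportionally to their upvotes.
                new[target] += damping * trust[voter] * count / total
        trust = new
    return trust


graph = {
    "alice": {"bob": 3, "carol": 1},  # alice trusts bob more than carol
    "newcomer": {"alice": 1},         # blank account, one upvote to alice
}
scores = propagate_trust(graph, seed="newcomer")
```

With this setup, the newcomer's only outgoing edge points at alice, so the scores they see are essentially alice's trust distribution: bob ranks above carol, mirroring her graph.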
It could, I guess, be used to figure out attack vectors against a person: someone they trust who could be compromised. This does not seem problematic in the contexts where the system would realistically be deployed in the short to medium term, but it is something to keep in mind as we iterate on the system with more users on board.