Introduction
While our society has ways to punish the abuse of power, many victims still suffer in silence. Sure, if you experience racism in the workplace you could go to the HR department, but what if they don’t want to take the PR hit? And sure, if you are sexually assaulted you could sue your assailant, but what if your evidence is weak?
Addressing the issue can be worse than letting the injustice go unspoken. You might be silenced, discredited, fired, or punished in any number of ways for upsetting the status quo. With such a high cost of accusation, it is no wonder that many victims opt to stay quiet. This is especially tragic when a culprit’s victims collectively hold enough power to press the issue successfully, but can’t because they are all suffering alone. If only the victims could coordinate while staying anonymous.
Anonymous coordination
I propose we build a website where victims of the same perpetrator can find one another while staying anonymous. This is achieved by letting each victim do two things:
- Submit the name of the person or organization that mistreated them (perhaps with the Facebook or LinkedIn account attached to prevent mix-ups)
- Set a threshold: how many fellow victims they need before they feel confident enough to come forward with their story
This information is not publicly displayed. When thresholds are met, the victims in question are provided with a chatroom where they can discuss their mistreatment and coordinate their future actions.
Thresholds can differ between people. For example: if there are five victims, two with a threshold of 2 and three with a threshold of 6, the only people in the chatroom are the two victims with the low threshold. The other victims are, however, made aware that a lower threshold has been triggered, and all victims can see the number of users the culprit has accrued on the website. This allows people to change their minds: someone who initially set a threshold of five, but now knows that three people are coordinating, might be emboldened to join in. Users can change their threshold or drop the accusation at any time.
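To make the mechanism concrete, here is a minimal sketch in Python of how the threshold check could work. It reads a threshold of t as "at least t fellow victims besides yourself", which matches the example above; all the names here (Accusation, chatroom_members, the user ids) are illustrative, not a real implementation.

```python
from dataclasses import dataclass

@dataclass
class Accusation:
    user: str        # internal user id, never displayed publicly
    target: str      # the accused person or organization
    threshold: int   # fellow victims needed before this user coordinates

def chatroom_members(accusations: list[Accusation], target: str) -> list[str]:
    """Users whose thresholds are met by the other victims of `target`."""
    victims = [a for a in accusations if a.target == target]
    fellow = len(victims) - 1  # every victim can see the overall count
    return [v.user for v in victims if v.threshold <= fellow]

# The example from above: five victims with thresholds 2, 2, 6, 6, 6.
accusations = [Accusation(u, "Barry MacBadguy", t)
               for u, t in [("u1", 2), ("u2", 2), ("u3", 6), ("u4", 6), ("u5", 6)]]
print(chatroom_members(accusations, "Barry MacBadguy"))  # ['u1', 'u2']
```

Under this reading, each new submission can only trip more thresholds, so the chatroom grows as victims accrue (unless users raise their thresholds or drop their accusations).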
Usernames and anonymity
The site uses usernames, and users are encouraged to pick one that differs from their real name. This protects their identity in case a transgressor wants to use the site to figure out the names of potential accusers. Whenever you submit a name, you can choose whether or not your username is displayed.
If a user chooses to be anonymous, they enter the chatroom as anonymous1, anonymous2, and so on. Users who choose to have their usernames displayed can further choose whether the username appears only in chatrooms or also on the list that shows the number of victims. By having your username displayed in more places, you run a greater risk of your identity being triangulated.
For example: if you have publicly accused persons A and B, and use the site to coordinate against persons A, B, and C, someone you know could use your public accusations of A and B to guess your identity and subsequently see that you also want to accuse person C. However, displayed pseudonyms could be useful if a broader coordinated effort or pattern of mistreatment exists. If you observe that the same users were mistreated by the exact same group of people, it is unlikely to be a coincidence. Being able to detect, and subsequently talk about, this more structural pattern of mistreatment might help the healing and improve the countermeasures you undertake. It is a tradeoff the users can make, but I would personally make the anonymous option the default.
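The display options above amount to a single per-accusation setting. A tiny sketch of how it could be modeled (the names are mine, not a spec), with anonymity as the default:

```python
from enum import Enum

class Visibility(Enum):
    ANONYMOUS = 1          # shown as anonymous1, anonymous2, ...
    CHATROOM_ONLY = 2      # username visible inside chatrooms only
    CHATROOM_AND_LIST = 3  # username also shown on the victim-count list

DEFAULT_VISIBILITY = Visibility.ANONYMOUS  # the safe default argued for above
```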
Ally users
Maybe you want to help but aren’t a victim yourself. You can still submit the name of a person or organization, but simply tag yourself as “ally”. This complicates the procedure a little.
Users (both victims and allies) can set separate thresholds for victims and for allies. For example: if a victim is only interested in coordinating with other victims, they can give a threshold for fellow victims and not participate in the threshold system for allies. Or perhaps a user wants to find a fellow victim but is only willing to come forward if there is a sufficiently large number of allies; to achieve this they can set the victim threshold low and the ally threshold high. An ally can likewise set the victim and ally thresholds at different levels or opt out of one. It’s important that both the victims and the allies can see the numbers of victims and allies. This serves as a warning for potential future victims. It could, however, also be misused…
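Concretely, each submission could carry two optional thresholds, where leaving one unset means "don’t wait on that group". A hedged sketch (the names and semantics are my assumptions), in which a user is triggered only when every threshold they actually set is met:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Submission:
    user: str
    target: str
    is_ally: bool
    victim_threshold: Optional[int]  # None = not participating in this count
    ally_threshold: Optional[int]    # None = not participating in this count

def is_triggered(s: Submission, n_victims: int, n_allies: int) -> bool:
    """n_victims / n_allies count the *other* users who named the same target.
    Every threshold the user set must be met; unset thresholds are ignored."""
    if s.victim_threshold is not None and n_victims < s.victim_threshold:
        return False
    if s.ally_threshold is not None and n_allies < s.ally_threshold:
        return False
    return True

# "Find a fellow victim, but only come forward with plenty of allies":
me = Submission("u1", "Barry MacBadguy", is_ally=False,
                victim_threshold=1, ally_threshold=10)
print(is_triggered(me, n_victims=2, n_allies=3))   # False: too few allies
print(is_triggered(me, n_victims=2, n_allies=12))  # True
```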
Banning, flagging and blocking
A malignant actor could make a bunch of fake accounts and spam a target with accusations. One way to prevent this is to block multiple accusations of the same target coming from the same IP address. A tech-savvy miscreant could, sadly, use a VPN to circumvent this. To minimize this type of abuse, users can flag other users for suspicious or unwanted behavior. This is also a way to catch and ban people who pretend to be someone they’re not or who misuse the site in other ways (bullying, spreading misinformation, etc.). This type of security is not my area of expertise, so feel free to leave other suggestions in the comments below.
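For illustration, the naive IP check could look like the sketch below; everything here is hypothetical, and as said a VPN defeats it, so it only raises the cost of casual spam while the flagging system has to catch the rest.

```python
# Maps (ip, target) to the user who first accused `target` from that IP.
accusations_by_ip: dict[tuple[str, str], str] = {}

def submit_accusation(ip: str, user: str, target: str) -> bool:
    """Reject a second account accusing the same target from one IP."""
    key = (ip, target)
    if key in accusations_by_ip and accusations_by_ip[key] != user:
        return False  # likely duplicate accounts inflating the victim count
    accusations_by_ip[key] = user
    return True
```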
Users can also send direct messages to each other, but can be blocked either temporarily or permanently if one of the users becomes uncomfortable. Ideally, users could specify their messaging preferences: some might not want to receive any messages, while others might only want messages from a certain chatroom or threshold list. Modularity allows users to shape their inbox in a way they feel comfortable with.
Names are always displayed with only the information the sender and receiver already have of each other. If a user knows you as “anonymous2” from the “Barry MacBadguy chatroom”, they will only see your name as “anonymous2 from the Barry MacBadguy chatroom” and not your username. This ensures that nobody can glean extra information by simply messaging someone.
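A sketch of both ideas together, with hypothetical names: a preference object the recipient controls, and a display function that labels a message with only the shared-context identity:

```python
from dataclasses import dataclass, field

@dataclass
class MessagingPrefs:
    accept_dms: bool = True
    allowed_chatrooms: set[str] | None = None  # None = any shared chatroom
    blocked: set[str] = field(default_factory=set)

@dataclass
class SharedContext:
    alias: str     # e.g. "anonymous2" or a displayed username
    chatroom: str  # the room the sender and receiver share

def display_name(ctx: SharedContext) -> str:
    """A direct message is labelled with the shared-context identity only."""
    return f"{ctx.alias} from the {ctx.chatroom}"

def may_deliver(prefs: MessagingPrefs, ctx: SharedContext) -> bool:
    """Apply the recipient's modular inbox preferences."""
    if not prefs.accept_dms or ctx.alias in prefs.blocked:
        return False
    return prefs.allowed_chatrooms is None or ctx.chatroom in prefs.allowed_chatrooms

ctx = SharedContext("anonymous2", "Barry MacBadguy chatroom")
print(display_name(ctx))  # anonymous2 from the Barry MacBadguy chatroom
```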
Tagging
People can be maltreated in different ways. It might be uncomfortable if victims enter a chatroom only to discover that one is the victim of wage theft and the other of sexual harassment. A tagging system would prevent this scenario. When submitting a name, you add tags for the types of mistreatment you experienced. You can then select which victims you want to coordinate with.
For example: If you are an Islamist fundamentalist who is the victim of racism, you might want to coordinate with people who tagged “racism” but not with the victims who tagged “homophobia”.
You might also want different thresholds for different types of victims. For example: if you are the victim of sexual assault, you might want to talk immediately with someone who also experienced this; however, if the culprit also commits wage theft, you might only want to jump on board once there is a sufficiently large number of people. The tagging system can, however, reduce your anonymity. Let’s say Barry MacBadguy only assaulted one person and now uses this site to pretend to be his own victim. If he sees that one user has tagged his name with “assault”, he can guess that person’s identity. I therefore think it’s best if the tagging system is voluntary.
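A sketch of optional per-tag thresholds, extending the earlier threshold check. The names are again mine, and a threshold of n is read as "n fellow victims who chose the same tag":

```python
from dataclasses import dataclass, field

@dataclass
class TaggedAccusation:
    user: str
    target: str
    tags: set[str] = field(default_factory=set)                  # voluntary
    tag_thresholds: dict[str, int] = field(default_factory=dict)

def triggered_tags(all_accs: list[TaggedAccusation],
                   me: TaggedAccusation) -> set[str]:
    """Tags for which enough fellow victims of the same target exist."""
    peers = [a for a in all_accs if a.target == me.target and a.user != me.user]
    return {tag for tag, needed in me.tag_thresholds.items()
            if sum(tag in a.tags for a in peers) >= needed}

# Talk as soon as one fellow assault victim appears, but wait for a
# crowd of five before engaging on wage theft:
me = TaggedAccusation("u1", "Barry MacBadguy", {"assault"},
                      {"assault": 1, "wage theft": 5})
others = [TaggedAccusation("u2", "Barry MacBadguy", {"assault"}, {})]
print(triggered_tags(others + [me], me))  # {'assault'}
```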
There could also be tags for the different types of ally you can be. Someone in the HR department may help in a very different way than someone who works in the IT department. The tags for the allies may also take the form of a concrete description of who they are and how they can help (if the ally wants to forgo their anonymity). Allies could, of course, also choose to stay completely anonymous or pseudonymous.
Building the website
I’m not a programmer. I don’t know what it would take to secure the website against bots, spam and hackers. I’m not a network theorist. I don’t know if there are ways this mechanism could fail or if there are better ways to do it. I’m not rich. I can’t pay someone else to make this website.
If you want to build this website, feel free to do so. If you want help, I have some experience with graphic design and mechanism design. If you have any feedback, please leave it in the comments below.
It seems like this could be useful for coordinating responses to bad actors. But is there anything structural to prevent it also being used to coordinate harassment, cancel culture, etc.?
Also, on a more technical note, it seems like people might decide to accuse themselves of any possible transgressions, just to be present in the chats and get a heads-up on any possible future accusations.
The problem of harassment is what the section 'Banning, flagging and blocking' is about. You can substitute the phrase "malignant actor" with "harasser". As stated, there are some mechanisms to minimize this, but more suggestions are always appreciated.
The option to self-accuse was pointed out in 'Banning, flagging and blocking' ("people who pretend to be someone they’re not") and 'Tagging' ("Let’s say Barry MacBadguy only assaulted one person and now uses this site to pretend to be his own victim"). The self-accuser can see that someone else has accused them, but if the victim opts to stay completely anonymous, the self-accuser can't see who accused them. The flagging and blocking mechanisms are there to punish pretenders, though without a third-party verification system pretenders will still get in. Third-party verification is possible, but it does hamper anonymity and makes the project less scalable.
EDIT: You can't stay completely anonymous if you use the chatroom to share experiences, but you can stay anonymous if you use it to coordinate actions (e.g. "On January 6th at 13:00 we all post our stories to our Facebook pages"). Obviously, after you've come forward you can't stay completely anonymous, but you can't stay anonymous with any other method either. This project will not completely protect victims, merely improve upon the current situation.
A chat like this can't stay completely anonymous. What could you even say which would be useful and wouldn't reveal your identity, if the abuser is there to listen?
Don't let the pushback discourage you too much.
As with most projects, it's hard to come up with the correct solution just by thinking about it, without interviewing lots of users (or perhaps checking why previous projects failed).
I personally didn't manage to solve this one (unfortunately :( ), but that doesn't mean you won't.
Early on, victims could agree on a trusted individual to reveal their identities to (it could be one of the complainants or a third party), who would verify identities in order to prevent abusers/harassers from getting into the conversations and identifying complainants or otherwise interfering. I think it would be easy for an abuser/harasser to identify complainants based on their descriptions of the events, since they would also know the events.
This would also get around individuals making multiple accounts.
OTOH, the abuser/harasser could have someone else enter the conversations on their behalf as a fake accuser. If this is a serious worry, it may be better for the complainants not to know each other's identities at all, and either to discuss directly with a trusted individual who summarizes back to them without any identifying info, or to share very little about the events in their pseudonymous discussions, which may limit their usefulness.
For the EA community in particular, if I recall correctly, Julia Wise at CEA already acts as this trusted individual, although it could be good to have more options.