CarlaZoeC
Ok, an incomplete and quick response to the comments below (sorry for typos). Thanks to the kind person who alerted me to this discussion going on (I still don't spend my time on your forum, so please do just PM me if you think I should respond to something).

1.

- regarding blaming Will or benefitting from the media attention

- I don't think Will is at fault alone (that would be ridiculous), but I do think it would have been easy for him to make sure something was done, if only because he can delegate more easily than others (see below)

- my tweets are a reaction to his tweets, in which he says he believes he was wrong to deprioritise these measures

- given that he only says this after FTX collapsed, my point is that it's annoying this had to happen before people accept that institutional incentive-setting needs to be further prioritised

- journalists keep wanting me to say this, and I have had several interviews in which I argued against this simplistic position

2.
- I'm rather sick of hearing from EAs that I'm arguing in bad faith

- if I wanted to play nasty, it wouldn't be hard (for anyone) to find attack lines: e.g. I have not spoken about my experience of sexual misconduct in EA, and I continue to refuse to name names in respect of the specific actions I criticise or continue to get passed information about, because I want to make sure the debate is not about individuals but about incentives/structures

- a note on me exploiting the moment of FTX to get media attention

- really?

- please join me in speaking with the public or with journalists; you'll see it's no fun at all. I have a lot of things I'd rather be doing. Many people will be able to confirm that I've tried to convince them to speak out too, but I failed, likely because:

- it's pretty risky, because you end up having rather little control over how your quotes will be used, so you just hope to work with someone who cares, though every journalist has preconceptions of course. It's also pretty time-consuming with very little impact, and then you have to deal with forum debates like this one. But hey, if anyone wants to speak to the press, message me and I'll put you in touch.

- the reason I do it is that I think EA will 'work', just not in the way that many good people in it intend it to work

3. 

- I do indeed agree that these measures are not 'proven' to be good just because of FTX

- I think they were good ideas before FTX, and they continue to be good ideas

- they are not 'my' ideas; they are absolutely standard measures against misconduct in big bureaucracies

- I don't want anyone to 'implement my recommendations' just because they're apparently mine (they are not). They are a far bigger project than a single person should handle, and my hope was that the EA community would be full of people who'd take them as inspiration and do something with them in their local context; it would then be their implementation.

- I liked the responses I got on Twitter saying that FTX was in fact the first to do re-granting

- I agree and I thought that was great!

- in fact they were interested in funding a bunch of projects I care a lot about, including a whole section on 'epistemics'! I'm not sure it was done for the right reasons (maybe the incentive to spend money fast was also at play), and the re-granting was done without any academic rigour, data collection, or metrics for how well it works (as far as I know), but I was still happy to see it

- I don't see how this invalidates the claim that re-granting is a good idea, though

4. 

- those who only want to know whether my recommendations would have prevented this specific debacle are missing the point. Someone may have blown the whistle, some transparency may have helped raise alarms, fewer people may have accepted the money, and distributed funding may have meant that more risk-averse people had a say about whether or not to accept the money. Risk reduction is about reduction, not about bringing risk down to zero. So, do those measures, depending on how they're set up, reduce risk? Yes, I can see how they would. E.g. is it true that there were messages on some leaders' Slack warning against SBF, or that several organisations decided against taking FTX funding but don't disclose why (https://www.newyorker.com/news/annals-of-inquiry/sam-bankman-fried-effective-altruism-and-the-question-of-complicity)? I don't know enough about the people involved to say what each of them would have needed in order to be incentivised to be more public about their concerns. But do you not think that knowledge would have been useful to have available, e.g. for those EA members who got individual grants and made plans with those grants?

Even if institutional measures would not have prevented the FTX case, they are likely to catch a whole host of other risks in the future.

5. 

- The big mistake that I am making is not being an EA while commenting on EA. It makes me vulnerable to the attack of 'your propositions are not concrete enough to fix our problems, so you must be doing it to get attention'. I am not here trying to fix your problems.

- I actually do think that outsiders are permitted to ask you to fix problems, because your stated ambition is to do risk analysis for all of us: not just for effective altruism but, depending on what kind of EA you are, for a whole category of sentient beings, including categories as large as 'humanity' or 'future beings'. That means that even if I don't want to wear your brand, I can demand that you answer the questions of who gets to be in a position to influence funding, and why. And if it's not transparent, why is it not transparent? Is there a good reason for that? If I am your moral patient, you should tell me why your current organisational structures are more solid and more epistemically trustworthy than alternative ones.

6. 

- I don't say anywhere that 'every procedure ought to be fully democratised' or 'every organisation has to have its own whistleblower protection scheme' - do I?

- *clearly* these are broad arguments, geared towards starting a discussion across EA and within EA institutions, which need to be translated into concrete proposals, adjustments, and assessments that meet each contextual need

- there's no need to dismiss the question of which procedures actually lead to the best epistemic outcomes by arguing that 'democratising everything' would bring bureaucracy (of course it would, and no one is arguing for that anyway)

- for all the analyses of my tweets, please also look at the top page of the list of recommendations for reforms; it says something like 'clearly this needs to be more detailed to be relevant, but I'll only put in my free time if I have reason to believe it will be worth my time'. There was no interest from Will and his team in following up on any of it, so I left it at that (I had sent another email after the meeting with some more concrete steps necessary to at least get data and do some prototyping and research to test some of my claims about decentralised funding, in which I offered to provide advice and help out, but said they should employ someone else to actually lead the project). Will said he was busy and would forward it to his team. I said 'please reach out if you have any more questions' and never heard from anyone again. It won't be hard to come up with concrete experiments/ideas for a specific context/organisation/task/team, but I'm not sure why it would be productive for me to do that publicly rather than at the request of a specific organisation/team. If you're an EA who cares about EA having those measures in place, please come up with the implementation details for your community yourself.

7. 

- I'd be very happy to discuss the details of actually implementing some of these proposals in particular contexts where I believe it makes sense to try them. I'd be very happy to consult for organisations that are trying to take steps in those directions. I'd be very happy to engage with and see a theoretical discussion about the actual state of the research.

But none of the discussions I've seen so far are at the level of detail that would match the forefront of the experimental data and scholarly work. Do you think scholars of democratic theory have not yet thought about a response to the typical 'but most people are stupid'? Everyone who dismisses decentralised reasoning as a viable and epistemically valuable approach should at least engage with the arguments of the political scientists who have spent years on these questions (i.e. not me; I've cited a bunch in previous publications and on Twitter, but here again, Landemore and Hong & Page are a good start), and then argue at their level to bring the debate forward, if they still think they can.

8. 

Jan, you seem particularly unhappy with me; reach out if you like. I'm happy to have a chat or answer some more questions.

Thank you for taking the time to write this up; it is encouraging. I had also never thought to check my karma...

Indeed, Lukas. I guess what I'm saying is: given what I know about EA, I would not entrust it with the ring.

The post in which I speak about EAs being uncomfortable about us publishing the article only talks about interactions with people who did not have any information about the initial drafting with Torres. At that stage, the paper was completely different: a paper between Kemp and me. None of the critiques of it or the conversations about it involved concerns about Torres, co-authoring with Torres, or arguments by Torres, except in so far as they might have taken Torres as an example of the closing doors that can follow a critique. The paper was in such a totally different state that it would have been misplaced to call it a collaboration with Torres.

There was a very early draft by Torres and Kemp which I was invited to look at (in December 2020) and collaborate on. While the arguments seemed promising to me, I thought it needed major rewriting of both tone and content. No one instructed me (maybe someone instructed Luke?) that one could not co-author with Torres. I also don't recall that we were forced to take Torres off the collaboration (I'm not sure who knew about the conversations about collaboration we had): we decided to part because we wanted to move the content and tone in a very different direction, because Torres had (to our surprise) already unilaterally published major parts of the initial draft as a mini-book, and because we thought that this collaboration was going to be very difficult. I recall video calls in which we discussed the matter with Torres, decided to take out the sections that were initially supplied by Torres, and agreed to cite Torres' mini-book wherever we deemed it necessary to refer to it. The degree to which the Democratising Risk paper is influenced by Torres can be seen in our in-text citations: we don't hide the fact that we find some of the arguments noteworthy! Torres agreed with those plans.

At the time it seemed to me that Torres and I were trying to achieve fundamentally different goals: I wanted to start a critical discussion within EA, while Torres was by that stage ready to inoculate others against EA and longtermism. It was clear to me that the tone and style of argumentation of the initial drafts had little chance of being taken seriously in EA. My own opinion is that many arguments made by Torres are not rigorous enough to sway me, but that they often contain an initial source of contention that is worth spending time developing further to see whether it has substance. Torres and I agree in so far as we surely both think there are several worthy critiques of EA and longtermism that should be considered, but I think we differ greatly in our credences in the plausibility of different critiques, in how we wanted to treat and present critiques, and in who we wanted to discuss them with.

The emotional and contextual embedding of an argument matters greatly to its perception. I thought EAs, like most people, were not protected from assessing arguments emotionally, and while I don't follow EA dramas closely (someone also kindly alerted me to this one unfolding), by early 2021 I had gotten the memo that Torres had become an emotional signal for EAs to discount much of what the name was attached to. At the time I thought it would not do the arguments justice to let them be discounted because of an associated name that many in EA seem to have an emotional reaction against, and the question of reception did become one factor in why we thought it best not to continue the co-authorship with Torres. One can of course manage the perception of a paper via co-authorship, and we considered collaborating with respected EAs to give it more credibility, but just as we decided against name-dropping the people who had invested in the piece via long conversations and commentary in order to boost it, we also decided not to advertise that there are obvious overlaps with some of Torres' critiques. There is nothing to hide in my view: one can read Torres' work and Democratising Risk (and in fact many other people's critiques) and see similarities. This should probably strengthen one's belief that there is something in that ballpark of arguments that many people feel we should take seriously.

Apart from the fact that it really is an entirely different paper (what you saw is version 26 or something, and I think about 30 people have commented on it; I'm not sure it's meaningful to speak about V1 and V20 as being the same paper. And what you see is all there is: all the citations of Torres do indeed point to writing by Torres, but they are easily found, and you'll see that it is not a disproportionate influence), we did indeed hope to avoid the exact scenario we find ourselves in now! The paper is at risk of being evaluated in light of any connection to Torres rather than on its own terms, and my trustworthiness in reporting on EA's treatment of critiques is being questioned because I cared about the presentation and reception of the arguments in this paper? A huge amount of work went into adjusting the tone of the paper for EAs (irrespective of Torres, this was a point of contention between Luke and me too) to ensure the arguments would get a fair hearing, and we had to balance this against non-EA outsiders who thought we were not forceful enough.

I think we succeeded in this balance, since both sides still tell us we didn't do quite enough (the tone still seems harsh to EAs and too timid to outsiders), yet both EAs and outsiders do engage with the paper and its arguments, and I do think it is true that there is now greater awareness of (self-)censorship risk and of the value of critiques. Since publishing, EAs have so far been kind towards me. This is great! I do hope it'll stay this way. Contrary to popular belief, it's not sexy to be seen as the critic. It doesn't feel great to be told a paper will damage an institution, to have others insinuate that I plug my own papers under pseudonyms in forum comments or that I had malicious intentions in being open about the experience, and it's annoying to be placed into boxes with other authors whom I might strongly disagree with. While I understand that those who don't know me must take any piece of evidence they can get to evaluate the trustworthiness of my claims, I find it a little concerning that anyone should be willing to infer and evaluate character from minor interactions. Shouldn't we rather say: given that we can't fully verify her experience, can we think about why such an experience would be bad for the project of EA, and what safeguards we have in place so that those experiences don't happen? My hope was that I could serve as a positive example to others who feel the need to voice whatever opinion ('see, it's not so bad!'), so I thank anyone on here who is trying to ease the exhaustion that inevitably comes with navigating criticism in a community. The experience so far has made me think that EAs care very much that all arguments (including those they disagree with) are heard. Even if you don't think I'm trustworthy and earnest in my concerns, please do continue to keep the benefit of the doubt in mind towards your perceived critics. I think we all agree they are valuable to have among us, and if you care about EA, do keep the process of assessing trustworthiness amicable, if not for me then for future critics who do a better job than I.

Jumping in here briefly because someone alerted me to this post mentioning my name: I did not comment, and I was not even aware of your forum post, John (sorry, I don't tend to read the EA Forum). I don't tend to advertise previous works of mine in other people's comment sections, and if I were to comment anywhere, it would certainly be under my own name.

Thanks for saying this publicly too, Nick; this is helpful for anyone who might worry about funding.
