All of Kerry_Vaughan's Comments + Replies

It's a fair point that we should treat Alice and Chloe separately and that deanonymizing one need not imply that we should deanonymize the other.

Yeah. 

Let's assume Nonlinear are completely right about how they describe Chloe and Alice. I'd summarize their perspective as follows:

Alice-as-described-by-Nonlinear is likely to be destructive in other contexts as well because that is a strong pattern with her generally. :(

By contrast,

Chloe-as-described-by-Nonlinear is significantly less likely to be destructive in other contexts. While Nonlinear claim that Chloe is entitled, it's still the case that her beef with them is largely around the tensions of living together (primes her to expect equal-ness... (read more)

Ben Millwood (4mo):
I think if we deanonymise now, there's a strong chance that the next whistleblower will remember what happened as "they got deanonymised" and will be reluctant to believe it won't happen to them. It kind of doesn't matter if there's reasons why it's OK in this case, as long as they require digging through this post and all the comments to understand them. People won't do that, so they won't feel safe from getting the same treatment.

I don't know who Chloe is in real life (nor Alice for that matter), but based on what I've read, it seems really really off to me to say that she has the potential to be destructive to others in the community. [Edit: I guess you're not outright saying that, but I'm reading your comment as "if all that Nonlinear are saying about Chloe is true, then...," and my take on that is that apart from their statements of the sort of "Chloe is so mentally unhealthy that she makes things up" (paraphrased), none of the concrete claims are obviously red flags to me. It's... (read more)

I would strongly caution against doing so. Even if it turns out to be seemingly justified in this instance (and I offer no view either way whether it is or not), I cannot think of a more effective way of discouraging victims/whistleblowers from coming forward (in other cases in this community) in future situations. 

While there are several stylistic things in the post one might disagree with, on the main charges Ben raised this seems about as close to exoneration as one can reasonably expect in such cases.

Thanks for writing such an exhaustive post; it can't have been easy.

[anonymous] (5mo):

If both departures were performance-related, it's sort of weird for you both to leave at the same time. Were both departures performance-related? Or did you both leave at the same time because you were dating? Can you say more about why you both left at the same time?

It was a short timeline. I don't remember exactly but we told senior leadership and the board quite soon after we decided to start dating.

[anonymous] (5mo):
Less than a month?

No, we didn't do anything wrong. Like I said, we followed the policy.

People were upset that we were dating but not because there was some coverup or anything. Some folks had strategic disagreements with me and us dating made that a larger problem.

[anonymous] (5mo):
How long did it take you to tell the organisation that you were dating?

I think the disagreement here is that we followed the CEA policy and were told explicitly and in writing at the time by the board that our dating had nothing to do with their decision. That doesn't mean staff weren't upset.

[anonymous] (5mo):
So did you do something wrong then, even if that wasn't why you left? How long did it take you to tell the organisation that you were dating?
Habryka (5mo):
I don't know what "nothing to do" means. I do now believe that it had nothing legally to do with the firing, but it still seems like the thing that "brought things to a head".

I was told this dozens of times by many different employees. None of them were board members, but they all seemed to agree it was the thing that caused the conflict to escalate.

[anonymous] (5mo):
Did you declare to the rest of CEA that you were dating as soon as you started dating? If not, how long was the gap?

My read is that Bostrom had reason to believe that the email would come out either way, and then he elected to get out in front of the probable blowback.

As evidence, here is Émile Torres indicating that they were planning to write something about the email.

That said, it's not entirely clear whether Bostrom knew the email specifically was going to be written about, or only knew that someone was poking around in the Extropians mailing list and guessed that the email would come out as a result.

In any case, I think it's unlikely that he posted his apology for the email unprovoked. 

There's nothing formally organized that I am aware of.

I think CEA has been relatively clear over the past few years that it is not the leader of the EA community. My impression is that they see themselves as downstream of the community's intellectual leaders and the will of the community, broadly construed.

The claim on Twitter is different.

Can you clarify what you think is unfair? Happy to issue a correction.

https://twitter.com/KerryLVaughan/status/1591508739236188160?t=qL-dGKXar3b7EQ4EHs597Q&s=19

Edit: if anyone else wants to take a stab at explaining why the Twitter thread is unfair given this thread feel free. Would want to issue a correction sooner rather than later.

Will:

One item that should be a part of your reflections in the days and months to come is whether you are fit to be the public face of the effective altruism movement, given your knowledge of Sam's unethical behavior in the past, ties to him going back to 2013, and your history of vouching for Ben Delo, another disgraced crypto billionaire.

The EA community has many excellent people - including many highly capable women - who are uninvolved in this scandal and could step up to serve in this capacity.

It seems like there's this expectation of public figures to always have acted in a way that is correct given the information we have now - basically hindsight bias again.

One of the many wildly charismatic people you hang out with later turns out to be No Good? Well of course you shouldn't have associated with them. One of the many rumours you might have heard turns out to be true and a big deal? Of course you should have acted on it.

I don't think this is very fair or useful. I guess we might worry that the rest of the world will think like that but I don't see why we should.

I’d be interested to know why you thought it relevant to mention “women” specifically?

Is Ben Delo a "disgraced crypto billionaire"? From Jess Riedel's description, it wasn't obvious to me whether the thing BitMEX got fined for was something seriously evil, versus something closer to "overlooked a garden-variety regulation and had to go pay a fine, as large businesses often do".

(Conflict-of-interest notice: I work for MIRI, which received money from Open Phil in 2020 that came in part from Ben Delo.)

I'd prefer that the discussion focus on more concrete, less PR-ish things than questions of who is "fit to be the public face of the effective altruism movement". The latter feels close to saying that Will isn't allowed to voice his personal opinions in public if EAs think he fucked up.

I'd like to see EA do less PR-based distancing itself from people who have good ideas, and also less signal-boosting of people for reasons orthogonal to their idea quality (and ability to crisply, accurately, and honestly express those ideas). Think less "activist movement", more "field of academic discourse".

I am glad you felt okay to post this - being able to criticise leadership and think critically about the actions of the people we look up to is extremely important.

I personally would give Will the benefit of the doubt regarding his involvement in, or knowledge of, the specific details of the FTX scandal, but as you pointed out, the fact remains that he and SBF were friends going back nearly a decade.

I also have questions about Will Macaskill's ties with Elon Musk, his introduction of SBF to Elon Musk, his willingness to help SBF put up to 5 billion dollars towards t... (read more)

I have recently spoken to someone involved who told me that SBF was not just cavalier, but unethical and violated commonsense ethical norms.

Are you in a position to be more specific about what SBF did that this is referring to?

[anonymous] (1y):
no

I consider this credible.

It suggests that my categorization of "EA leadership" was probably too broad and that fewer people knew the details of the situation than I believed.

That means there is a question of how many people knew. I am confident that Nick Beckstead and Will MacAskill knew about the broken agreement and other problems at Alameda. I am confident they are not the only ones that knew.

Why are you confident of that?  In general, I think there's just less time and competence and careful checking to go around, in this world, than people would want to believe.  This isn't Hieronym's To The Stars or the partially Hieronym-inspired world of dath ilan.

He's now the program manager at a known cult that the EA movement has actively distanced itself from.


If you'd like to investigate whether Leverage was a cult, there are now several additional sources of information available.

One source is Cathleen's post, which is detailed, extensive, and written directly by a former employee. A board member also conducted their own investigation into what Leverage could have done better between 2012 and 2019, interviewing former members of Leverage staff.

You can also view Leverage's website to learn ... (read more)

I’d also recommend reading Zoe Curzi’s essay about her own (traumatic) experience at Leverage, the publishing of which was publicly supported by Leverage founder Geoff Anders.

I want to clarify the claims I'm making in the Twitter thread.

I am not claiming that EA leadership or members of the FTX Future fund knew Sam was engaging in fraudulent behavior while they were working at FTX Future Fund.

Instead, I am saying that friends of mine in the EA community worked at Alameda Research during the first 6 months of its existence. At the end of that period, many of them suddenly left all at once. In talking about this with people involved, my impression is:

1) The majority of staff at Alameda were unhappy with Sam's leadership of the co... (read more)

Huge thanks for spelling out the specific allegations about SBF's behavior in early Alameda; for the past couple days I'd been seeing a lot of "there was known sketchy stuff at Alameda in 2017-18" and it was kind of frustrating how hard it was to get any information about what is actually alleged to have happened, so I really appreciate this clear point-by-point summary.

I was one of the people who left at the time described. I don't think this summary is accurate, particularly (3).

(1) seems the most true, but anyone who's heard Sam on a podcast could tell you he has an enormous appetite for risk. IIRC he's publicly stated they bet the entire company on FTX despite thinking it had a <20% chance of paying off. And yeah, when Sam plays league of legends while talking to famous investors he seems like a quirky billionaire; when he does it to you he seems like a dick. There are a lot of bad things I can say about Sam, but t... (read more)

In 2021 I tried asking about SBF among what I suppose you could call "EA leadership", trying to distinguish whether to put SBF into the column of "keeps compacts but compact very carefully" versus "un-Lawful oathbreaker", based on having heard that early Alameda was a hard breakup.  I did not get a neatly itemized list resembling this one on either points 1 or 2, just heard back basically "yeah early Alameda was a hard breakup and the ones who left think they got screwed" (but not that there'd been a compact that got broken) (and definitely not that t... (read more)

Jason (1y):
After the involved EAs consult with their lawyers, they may find a receptive audience to tell their stories at the Department of Justice or another federal agency. I would be shocked if the NDAs were effective as against cooperating with a federal investigation. If the quoted description is true, it seems relevant to the defense SBF seems to be trying to set up.
Sharmake (1y):
I decided to speak about it because if true, it would imply bad things about how EA hasn't remembered the last time things went wrong. In many senses, this is EA's first adversarial interaction, where we can't rely on internal norms of always cooperating anymore.

It's worth noting that in the Bernie Madoff Ponzi scheme, victims were compensated largely by clawing back money previously withdrawn by people with no knowledge that Madoff was operating a Ponzi.

Hopefully, someone with more legal knowledge about this situation can comment, but given that fact, I wouldn't consider clawbacks out of the question.

aderonke (1y):
You're most likely right, given the precedents you stated here. I wouldn't rule out a clawback either, because the government has to be able to restore and maintain trust in the system. That's why a thorough audit of EA's finances is in order. Any auditor among us could reach out to CEA to help with this.

Hi Ben,

Could you explain what's not practical about these simple steps that you could take:

1) Create an EA Forum policy against sockpuppeting, and apply it retroactively to this case. This might naturally result in deleting the offending posts or adding a notification indicating they are sockpuppets.

2) Remove the encoded names and replace them with numbers per the edit your team made originally.

3) Change the word "doxing" to "deanonymizing" in the following sentence from the Guide to Norms (since the behavior you intend to prohibit is not doxing):

Doxing —

... (read more)

The obvious thing to do in that case is to disclose this in a comment somewhere.

Also, this defense doesn't work for his efforts to promote his own anonymous posts from his public account.

I disagree with that interpretation especially given the context of Cathleen’s post. It includes lengthy discussions of poor and bizarre behavior by members of the EA community toward Leverage/Paradigm staff.

I think reading Cathleen’s post and then re-adding her name is either an intentional violation of her wishes or, at a minimum, shows a reckless disregard for her request for privacy.

In any case, I don’t think this issue is central. The original comment already was doxing given the context, which includes the “question” to which it was a reply, the post... (read more)

Hi Ben,

Another question I wanted to ask is whether the EA Forum has rules against creating sock puppet accounts. This is defined as follows:

Wikipedia: “Online, it came to be used to refer to a false identity assumed by a member of an internet community who spoke to, or about, themselves while pretending to be another person.”

Urban dictionary: “A false identity adopted by trolls and other malcontents to support their own postings.”

I ask because it appears to me that the poster acted deceptively in managing their multiple anonymous accounts by (1) using a se... (read more)

NunoSempere (1y):
Note that the two anonymous accounts are spaced a year apart; seems likely that Ryan lost the password to the 1st. 

If you list your organizational affiliation on LinkedIn (and if you are indeed correct that Paradigm Academy was not trying to be a cover for Leverage Research that shielded it from public scrutiny), then I don't think you get to complain when someone quickly googles you and finds your LinkedIn profile.

 

First, as I noted in my response to Ben, some of the information included in the doxing was from the poster's personal knowledge and not from the people's LinkedIn profiles. Thus, you can't defend the doxing by saying that the information was publicly ... (read more)

Ben_West (1y, Moderator Comment):

At this point, the moderators are trying to focus on any practical steps we should take. Given that the names in question are encoded, and no one currently listed in the comment has reached out to us, we do not plan to take further action. Anyone who feels that private or incorrect information about themselves is posted on the Forum is always free to contact us.

Kerry_Vaughan (1y):
Hi Ben,

Another question I wanted to ask is whether the EA Forum has rules against creating sock puppet accounts. This is defined as follows:

Wikipedia: “Online, it came to be used to refer to a false identity assumed by a member of an internet community who spoke to, or about, themselves while pretending to be another person.”

Urban dictionary: “A false identity adopted by trolls and other malcontents to support their own postings.”

I ask because it appears to me that the poster acted deceptively in managing their multiple anonymous accounts by (1) using a second anonymous account to reply to their first anonymous account; (2) promoting the actions of their anonymous account from their public account without disclosing that they were behind the anonymous account.

Most forums I’ve been a part of have rules against sockpuppeting, but reviewing the EA Forum rules and guidelines, I did not see anything about that.

Kerry, what are you getting at? What do you think was the harm or the intended harm of Ryan's posts?

Jonathan_Michel (1y):
[Disclaimer, I have very little context on this & might miss something obvious and important]

AFAICT the disagreement between Kerry and Ben stems from interpreting the second part of Cathleen's ask differently. There seem to be two ways of reading the second part:

1. Asking people to refrain from naming others who are not already tied into this in general.
2. Asking people to refrain from naming others who are not already tied into this in discussions of her post.

To me, it seems pretty clear that she means the latter, given the structure of the two sentences. If she was aiming for the first interpretation, I think she should have used a qualifier like "in general" in the second sentence. In the current formulation, the "And" at the beginning of the sentence connects the first ask/sentence very clearly to the second. I guess this can be up for debate, and one could interpret it differently, but I would certainly not fault anyone for going with interpretation 2.[1]

If we assume that 2 is the correct reading, Kerry's claim (cited above) does not seem relevant anymore and Ben's original remark (cited below) seems correct. The timeline of edits doesn't change things. Ben's original remark (emphasis mine):

[1] Even if she meant interpretation 1, it is unclear to me that this would be a request that I would endorse other people enforcing. Her request in interpretation 2 seems reasonable, in part because it seems like an attempt to avoid people using her post in a way she doesn't endorse. A general "don't associate others with this organisation" would be a much bigger ask. I would not endorse other organisations asking the public not to connect their employees to them (e.g. imagine a GiveWell employee making the generic ask not to name other employees/collaborators in posts about GiveWell), and the Forum team enforcing that.

However, even if the poster had used only information from LinkedIn profiles and not their own knowledge, this would still constitute a reveal of private information because “private information” is context-dependent.

If you list your organizational affiliation on LinkedIn (and if you are indeed correct that Paradigm Academy was not trying to be a cover for Leverage Research that shielded it from public scrutiny), then I don't think you get to complain when someone quickly googles you and finds your LinkedIn profile. 

Like, if Scott Alexander had listed... (read more)

Unfounded rumors about Leverage were common in the EA community when I was involved, and it's disappointing that they continue to be perpetuated. 

Most of the rumors about Leverage that I heard were along the lines of what Zoe later described (which is also largely consistent with other accounts described here and here). So I wouldn’t call those rumors “unfounded” at all. In this case at least, where there was smoke there turned out to be a fire.

Other rumors I heard were quite consistent with Leverage’s own description (pretty culty in my opinion)... (read more)

You're directly employed by Leverage research, which has the more severe claims laid against it and is some sort of subsidiary or something of Pareto or vice versa[1], yes? I understand you've worked or been involved there in this circle for many years? 10 or so?

Given the above, it's unclear why you think stating your personal views and your friendships, would be informative. Given the host of other choices you could make, this is unpromising to me. For example, why not release these surveys and the narratives inside of them, and we can read and form opini... (read more)

AnonymousEAForumAccount (2y):
The interview process seems to have been the most problematic part of Pareto and was presumably designed by your team members who ran the project. Who should have nipped that in the bud? If Will didn’t take over until mid-way, would that have been your responsibility? Are you aware of any accountability for anyone involved in the creation or oversight of the interview process? When Will signed off on having Pareto at the Leverage building, was he aware participants wouldn’t be informed about this?

Were fellows anonymous when submitting their evaluations and confident that their evaluations could not be traced back to them? I imagine they’d have been reluctant to criticize the program (and by extension highly influential EAs involved, including yourself) if they could not be completely confident in their anonymity. I’d also note that the fellows likely had very high thresholds for cultyness given that they weren’t turned off by the interview process.

Since CEA never shared the program evaluations (nor published its own evaluation despite commitments to do so), I feel like the most credible publicly available assessment is Beth Barnes’ (one of the fellows) observation that “I think most fellows felt that it was really useful in various ways but also weird and sketchy and maybe harmful in various other ways.” I imagine the fellowship itself was less culty than the interview process (a pretty low bar). As to how culty it was, I’d say that depends to some degree on how culty one thinks Leverage was at the time, since Barnes also noted: “Several fellows ended up working for Leverage afterwards; the whole thing felt like a bit of a recruiting drive.”

Zoe’s account (and other accounts described here and here) certainly make Leverage sound quite culty. That would be consistent with my own interactions with Leverage (admittedly quite limited); I remember coming out of those interactions feeling like I'd never encountered a community that (in my subjective opinion ba
Charles He (2y):
I'm confused and skeptical that the defence against "cultiness" or other negative traits includes referencing internal evaluations in these comments.

Setting aside how internally conducted evaluations can be performative, or subtly and easily manipulated by the administrators, I would expect most "highly demanding" organizations to filter and steer internal people, e.g. making sure they are "small" enough. The people inside these organizations, who would pass the various processes, would not be reliable evaluators.

For evidence, this very interview below is designed as part of the system to create the resulting culture that is problematic. <Insert link/quote to the wildly inappropriate/aggressive/abusive interview I read about (I don't have time to fully write this comment.)>

The degree to which public presentation is likely to strengthen your feedback loops seems to depend quite a lot on the state of the field that you are investigating. In highly functional fields like those found in modern physics, it seems quite likely to be helpful. In less functional fields or those with fewer relevant researchers, this seems less helpful.

To my mind, one strong consideration in favor of publicly presenting your research if you're working in a less functional field is that even if you're right, causing future researchers to build on your w... (read more)

That said, I just want to point out that (at least as far as I understand it), there is a significant collection of people within and around EA who think that Leverage is a uniquely awful organization which suffered a multilevel failure extremely reminiscent of your run-of-the-mill cult (not just for those who left it, but also for many people who are still in it), which soft-core threatens members to avoid negative publicity, exerts psychological control on members in ways that seem scary and evil. This is context that I think some people reading the ster... (read more)

Instead I would specifically look at its output and approach to external engagement: if they're not publishing research I would take that as a strong negative signal for the project. Likewise, in participating in a research project I would want to ensure that we were writing publicly and opening our work to engaged and critical feedback.

I'm curious about why your conclusion is about the importance of public engagement instead of about the importance (and difficulty) of setting up good feedback loops for research.

It seems to me that it is possible to hav... (read more)

Jeff Kaufman (2y):
I think feedback loops are the important thing, but public engagement is a powerful way to strengthen them which Leverage seemed to have suffered from deprioritizing. In the example of the Manhattan Project, they were studying and engineering physical things, which makes it a lot harder to be wrong about whether you're making progress. My understanding is also that they brought a shockingly high fraction of the experts in the field into the project, which might mean you could get some of what you'd normally get from public presentation internally?

I wanted to add a brief comment about EA Ventures.

I think this piece does a fair job of presenting the relevant facts about the project and why it did not ultimately succeed. However, the tone of the piece seems to suggest that something untoward was happening with the project in a way that seems quite unfair to me.

For example, you say:

Personally, I (and others) suspect the main reason EAV failed is that it did not actually have committed funding in place.

That this was a big part of the issue with the project is correct, but also, the lack of committed... (read more)

the lack of committed funding was no secret!

FWIW, while EAV was running I assumed there was at least some funding committed. I knew funders could decline to fund individual projects, but my impression was that at least some funders had committed at least some money to EAV. I agree EAV didn’t say this explicitly, but I don’t think my understanding was inconsistent with the quotes you cite or other EAV communications. I’m almost positive other people I talked to about EAV had the same impression I did, although this is admittedly a long time ago and I could ... (read more)

This post is great and I really admire you for posting it.

Very enlightening and useful post for understanding not only life sciences, but other areas of science funding as well.

One of the most straightforward and useful introductions to MIRIs work that I've read.

This post highlighted an important problem that would have taken much longer to address otherwise. I would point to this post as an example of how to hold powerful people accountable in a way that is fair and reasonable.

(Disclosure: I worked for CEA when this post was published)

I've read some of the work from the historical case studies project and it seems like a project that has the potential to be extremely useful for anyone interested in movement building. I did a comparatively shallow dive into the Neoliberal movement a while ago and found it very useful for my own thinking about movement building and this project seems like it is of substantially better quality. 

In fact, I'm surprised no one started a project of reviewing historical movement-building cases until now.

If I imagine being someone who is new-ish to EA, who wants to do good in the world and is considering making donations my plan for impact, I imagine that I really have two questions here:

  1. Is donating an effective way to do good in the world given the amount of money committed to EA causes?
  2. Will other people in the EA community like and respect me if I focus on donating money?

I think question 2) understandably matters to people, but it's a bit uncouth to say it out loud (which is why I'm trying to state it explicitly).

In the earliest days of EA, the answ... (read more)

On (2), I'll say something I've said a few times before on the Forum: I like and respect people who donate money. It seems like a very good character trait to be willing to make sacrifices to help others much more than you could help yourself. 

And feeling any less good about someone's donations because they could be working on a "better" career makes little sense to me — I don't dislike myself for being less than maximally productive in my own career, so extending dislike to someone who (like me, like almost everyone) has chosen a "less-than-maximal-i... (read more)

MichaelPlant (2y):
Yeah, does not seem like a good outcome if people are donating, say, 10% of their salary, then they come to EA events and they get the feeling that people look down their noses at them as if to say "that's it? You don't have an 'EA' job?"

I think I still don't quite get why this seems implausible. (For what it's worth, I think your view is pretty mainstream, so I'm asking about it more to understand how people are thinking about AI and not as any kind of criticism of the post or the parenthetical.)

It seems clear to me that an AI weapon could exist. AI systems designed to autonomously identify and destroy targets seem like a particularly clear example. A ban which distinguishes that technology from nearby civilian technology doesn't seem much more difficult than distinguishing biological wea... (read more)

This isn't central to the post, but I'm interested in this parenthetical:

(To clarify: the BWC is an arms control treaty that prohibits bioweapons; it is unlikely that we’ll see anything similar with AI, i.e. a complete ban of any “AI weapons”, whatever that means.)

At first glance, a ban on AI weapons research or AI research with military uses seems pretty plausible to me. For example, one could ban research on lethal autonomous weapons systems and research devoted to creating an AGI without banning, e.g., the use of machine learning for image classification or text generation.

Can you say more about why this seems implausible from your point of view?

Yadav (3y):
Hey Kerry! Good question. I included this disclaimer because it seems very hard to define exactly what we mean by an "AI weapon", which makes a complete ban, like the one the BWC imposes, implausible.

I think the consensus around impact certificates was that they seemed like a good idea and yet the idea never really took off.

Lots of funding is implicitly retrospective in the sense that what you've done historically is a big input into whether individuals and groups get funding. Yet because most funding mixes several factors (past work, anticipated future work, reputation, etc.), I think there may be an open opportunity here.

I'd be particularly excited to see funding for projects that have already occurred where it is clear that the success or failure of the past project is all that is being considered. This might encourage more unconventional or initially hard-to-assess projects and would provide a more concrete signal about which projects actually succeeded historically.

In the world where changes to the survey explain the drop, I'd expect to see a similar number of people click through to the survey (especially in 2019) but a lower completion rate. Do you happen to have data on the completion rate by year?

If the number of people visiting the survey has dropped, then that seems consistent with the hypothesis that the drop is explained by the movement shrinking unless the increased time cost of completing the survey was made very clear upfront in 2019 and 2020.
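
To make the comparison concrete, here is a minimal sketch of the two signatures I'd expect, using made-up counts rather than real EA Survey data: under the "longer survey" hypothesis, visits stay roughly flat while the completion rate falls; under the "shrinking movement" hypothesis, the visits themselves fall.

```python
# Hypothetical funnel counts (illustrative only, not real EA Survey data):
# year -> (people who opened the survey, people who completed it).
funnel = {
    2018: (3600, 2600),
    2019: (3400, 1900),  # flat visits, lower completion -> survey-length story
    2020: (2400, 1500),  # fewer visits -> shrinking-movement story
}

for year, (visits, completions) in sorted(funnel.items()):
    rate = completions / visits
    print(f"{year}: visits={visits}, completions={completions}, "
          f"completion rate={rate:.0%}")
```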

David_Moss (3y):
Unfortunately (for testing your hypothesis in this manner), the length of the survey is made very explicit upfront. The estimated length of the EAS2019 was 2-3x longer than EAS2018 (as it happened, this was an over-estimate, though it was still much longer than in 2018), while the estimated length of EAS2020 was a mere 2x longer than EAS2018.

Also, I would expect a longer, more demanding survey to lead to fewer total respondents in the year of the survey itself (and not merely lagged a year), since I think current-year uptake can be influenced by word of mouth and sharing (I imagine people would be less likely to share and recommend others take the survey if they found the survey long or annoying). That said, as I noted in my original comment, I would expect to see lag effects (the survey being too long reduces response to the next year's survey), and I might expect these effects to be larger (and to stack if the next year's survey is itself too long). This is exactly what we see: a moderate dip from 2018 to 2019 and then a much larger dip from 2019 to 2020.

"Completion rate" is not entirely straightforward, because we explicitly instruct respondents that the final questions of the survey are especially optional "extra credit" questions and they should feel free to quit the survey before these. We can, however, look at the final questions of the main section of the survey (before the extra credit section), and here we see roughly the predicted pattern: a big drop in those 'completing' the main section from 2018 to 2019, followed by a smaller absolute drop from 2019 to 2020, even though the percentage of those who started the survey completing the main section actually increased between 2019 and 2020 (which we might expect if some people, who are less inclined to take the survey, were put off taking it).

From context, that appears to be an incomplete list of metrics selected as positive counterexamples. I assumed there are others as well.

Aaron Gertler (3y):
I do agree that lower survey participation is evidence in favor of a smaller community; I just think it's overwhelmed by other evidence. The metrics I mentioned were the first that came to mind. Trying to think of more:

From what I've seen at other orgs (GiveDirectly, AMF, EA Funds), donations to big EA charities seem to generally be growing over time (GD is flat, the other two are way up). This isn't the same as "number of people in the EA movement", but in the case of EA Funds, I think "monthly active donors" are quite likely to be people who'd think of themselves in that way. EA.org activity is also up quite a bit (pageviews up 35% from Jan 1 - May 23, 2021 vs. 2020; avg. time on page also up slightly).

Are there any numbers that especially interest you, which I either haven't mentioned or have mentioned but not given specific data on?

You probably already agree with this, but I think lower survey participation should make you think it's more likely that the effective altruism community is shrinking than you did before seeing that evidence.

If you as an individual or CEA as an institution have any metrics you track to determine whether effective altruism is growing or shrinking, I'd find it interesting to know more about what they are.

Pablo (3y):
He mentioned a number of relevant metrics:

I am well aware of the general reticence about mass media and the preference for a high fidelity model of spreading the ideas of effective altruism. However, I think that (1) the misrepresentation risks are less acute in the narrower effective-giving space and (2) some coverage, even if it is a bit off-target, can often be better than no coverage when you are launching a new organization.

I want to express some general support for being less concerned about fidelity when spreading ideas like effective giving.

Something that I didn't discuss in the article... (read more)

Andy_Schultz (3y):
I agree it makes sense to spread the idea of effective giving widely. The only counterexample I can think of is the following, which was probably limited to affecting only one person's donations: https://forum.effectivealtruism.org/posts/yz5bqvG2sG92tLHmM/open-thread-4?commentId=pW684GoPXQE4YjdLs. Interestingly, this was a person to person interaction rather than through media. Overall it seems very good to share the idea of effective giving more.

I work at Leverage Research as the Program Manager for our Early Stage Science research.

I'm much less involved now than I was 12 months ago. 

There are a few reasons for this. The largest factor is that my engagement has steadily decreased since I stopped working an EA job where engagement with EA was a job requirement and took a non-EA job instead. My intellectual interests have also shifted to history of science which is mostly outside the EA purview.

More generally, from the outside, EA feels stagnant both intellectually and socially. The intellectual advances that I'm aware of seem to be concentrated in working out the details of longt... (read more)

ImmaSix (4y):
Question for my understanding: what is your current job?

So I’m curious if intellectual progress which is dependent on physical tools is really that much different. I’d naively expect your results to translate to math as well.

This is an interesting point, and it's useful to know that your experience indicates there might be a similar phenomenon in math.

My initial reaction is that I wouldn’t expect models of early stage science to straightforwardly apply to mathematics because observations are central to scientific inquiry and don’t appear to have a straightforward analogue in the mathematical case (observatio... (read more)

Hi edoarad,

Some off-the-bat skepticism: it seems a priori that the research on early stage science is motivated by early stage research directions and tools in Psychology. I'm wary of motivated reasoning when coming to conclusions about the resulting models of early stage science, especially as it seems to me that this kind of research (like historical research) is very malleable and can inadvertently be argued toward almost any conclusion one is initially inclined to.

What's your take on it?

Thanks for the question. This seems like the right kind of thing t... (read more)
EdoArad (4y):
Great, this helps me understand my confusion regarding what counts as early stage science.

I come from a math background, and I feel that the cluster of attributes above represents a lot of how I see some of the progress there. There are clear examples where the language, intuitions, and background facts are understood to be very far from grasping an observed phenomenon. Instruments and measurement tools in math can be anything from intuitions of experts, to familiar simplifications, to technical tools that help (graduate students) tackle subcases (which would themselves be considered "observations"). Different researchers may be in complete disagreement on what the relevant tools (in the above sense) and directions to solve the problem are. There is a constant feeling of progress even though it may be completely unrelated to the goal. Some tools require deep expertise in a specific subbranch of mathematics that makes it harder to collaborate and reach consensus.

So I'm curious if intellectual progress which is dependent on physical tools is really that much different. I'd naively expect your results to translate to math as well.

Hey Milan,

I'm Kerry and I'm the program manager for our early stage science research.

We've already been engaging with some of the progress studies folks (we've attended some of their meetups and members of our team know some of the people involved). I haven't talked to any of the folks working on metascience since taking on this position, but I used to work at the Arnold Foundation (now Arnold Ventures) who are funders in the space, so I know a bit about the area. Plus, some of our initial research has involved gaining some familiarity with the academic re... (read more)