Introducing Ayuda Efectiva

I am well aware of the general reticence about mass media and the preference for a high-fidelity model of spreading the ideas of effective altruism. However, I think that (1) the misrepresentation risks are less acute in the narrower effective-giving space and (2) some coverage, even if it is a bit off-target, can often be better than no coverage when you are launching a new organization.

I want to express some general support for being less concerned about fidelity when spreading ideas like effective giving.

Something that I didn't discuss in the article on fidelity is risk assessment. While all ideas are susceptible to misunderstandings as they spread, not all misunderstandings are equally harmful. Effective giving appears to be a relatively low-risk idea to spread, both because the idea seems close to society's existing concepts and because there have been a number of past attempts at spreading it without any particularly problematic results (I'd be interested in counterexamples if anyone knows of any).

How have you become more (or less) engaged with EA in the last year?

I work at Leverage Research as the Program Manager for our Early Stage Science research.

How have you become more (or less) engaged with EA in the last year?

I'm much less involved now than I was 12 months ago. 

There are a few reasons for this. The largest factor is that my engagement has steadily decreased since I stopped working an EA job where engagement with EA was a job requirement and took a non-EA job instead. My intellectual interests have also shifted to the history of science, which is mostly outside the EA purview.

More generally, from the outside, EA feels stagnant both intellectually and socially. The intellectual advances that I'm aware of seem to be concentrated in working out the details of longtermism using the tools of philosophy and economics: important work to be sure, but not work that is likely to substantially influence my worldview or plans.

Socially, many of the close friends I met in EA are drifting away from EA involvement. The newer people I've met also tend to have a notably different vibe from EAs in the past. Newer EAs seem to be looking to the older EA intellectuals to tell them what they should do with their lives and how they should think about the world. Something I liked about the vibe of the EA community in the past was the sense of possibility; the sense that there were many unanswered questions and that everyone had to work together to figure things out.

As the EA community has matured, it seems to have narrowed its focus and reined in its level of ambition. That's probably for the best, but I suspect it means that the intellectual explorers of the future are probably going to be located elsewhere.

Updates from Leverage Research: history, mistakes and new focus

So I’m curious if intellectual progress which is dependent on physical tools is really that much different. I’d naively expect your results to translate to math as well.

This is an interesting point, and it's useful to know that your experience indicates there might be a similar phenomenon in math.

My initial reaction is that I wouldn’t expect models of early stage science to straightforwardly apply to mathematics because observations are central to scientific inquiry and don’t appear to have a straightforward analogue in the mathematical case (observations are obviously involved in math, but the role and type seems possibly different).

I’ll keep the question of whether the models apply to mathematics in mind as we start specifying the early stage science hypotheses in more detail.

Updates from Leverage Research: history, mistakes and new focus

Hi edoarad,

Some off-the-bat skepticism. It seems a priori that the research on early stage science is motivated by early stage research directions and tools in psychology. I'm wary of motivated reasoning when coming to conclusions regarding the resulting models of early stage science, especially as it seems to me that this kind of research (like historical research) is very malleable and can be inadvertently argued to almost any conclusion one is initially inclined towards.

What's your take on it?

Thanks for the question. This seems like the right kind of thing to be skeptical about. Here are a few thoughts.

First, I want to emphasize that we hypothesize that there may be a pattern here. Part of our initial reasoning for thinking that the hypothesis is plausible comes from both the historical case studies and our results from attempting early stage psychology research, but it could very well turn out that science doesn’t follow phases in the way we’ve hypothesized, or that we aren’t able to find a single justified, describable pattern in the development of functional knowledge acquisition programs. If this happens, we’d abandon or change the research program, depending on what we find.

I expect that claims we make about early stage science will ultimately involve three justification types. The first is whether we can make abstractly plausible claims that fit the fact pattern from historical cases. The second is that our claims will need to follow a coherent logic of discovery that makes sense given the obstacles that scientists face in understanding new phenomena. Finally, if our research program goes well, I expect us to be able to make claims about how scientists should conduct early stage science today and then see whether those claims help scientists achieve more scientific progress. The use of multiple justification types makes it more difficult to simply argue for whatever conclusion one is already inclined towards.

Finally, I should note that the epistemic status of claims made on the basis of historical cases is something of an open question. There’s an active debate in academia about the use of history for reaching methodological conclusions, but at least one camp holds that historical cases can be used in an epistemically sound way. Working through the details of this debate is one of the topics I’m researching at the moment.

Also, I'm not quite sure where you draw the line on what counts as early stage research. To take some familiar examples: are Einstein's theory of relativity, Turing's cryptanalysis of the Enigma (with new computing tools), Wiles's proof of Fermat's last theorem, EA's work on longtermism, and current research on string theory early stage scientific research?

I don’t yet have a precise answer to the question of which instances of scientific progress count as early stage science, although I expect to work out a more detailed account in the future. Figuring out whether a case of intellectual progress counts as early stage science involves first determining whether it is science and then determining whether it is early stage. I probably wouldn’t consider Wiles's proof of Fermat's last theorem or the development of cryptography to be early stage science because I wouldn’t consider mathematical research of this type to be science. Similarly, I probably wouldn’t consider EA work on longtermism to be early stage science because I would consider it philosophy rather than science.

In terms of whether a particular work of science is early stage science, in our paper we gesture at the characteristics one might look for by identifying the following cluster of attributes:

A relative absence of established theories and well-understood instruments in the area of investigation, the appearance of strange or unexplained phenomena, and lack of theoretical and practical consensus among researchers. Progress seems to occur despite (and sometimes enabled by) flawed theories, individual researchers use imprecise measurement tools that are frequently new and difficult to share, and there exists a bi-directional cycle of improvement between increasingly sophisticated theories and increasingly precise measurement tools.

I don’t know enough about the details of how Einstein arrived at his general theory of relativity to say whether it fits this attribute cluster, but it appears to be missing the experimentation, the improvement of measurement tools, and the disagreements among researchers. Similarly, while there is significant disagreement among researchers working on theories in modern physics, I think there is substantial agreement on which phenomena need to be explained, how the relevant instruments work, and so on.

Updates from Leverage Research: history, mistakes and new focus

Hey Milan,

I'm Kerry and I'm the program manager for our early stage science research.

We've already been engaging with some of the progress studies folks (we've attended some of their meetups, and members of our team know some of the people involved). I haven't talked to any of the folks working on metascience since taking on this position, but I used to work at the Arnold Foundation (now Arnold Ventures), which funds work in the space, so I know a bit about the area. Plus, some of our initial research has involved gaining familiarity with the academic research in both metascience and the history and philosophy of science, and I expect to stay up to date with the research in these areas in the future. There was also a good meetup for people interested in improving science at EAG: London this year, where I was able to meet a few EAs who are becoming interested in this general topic.

I expect to engage with all of these groups more in the future, but I will personally be prioritizing research and laying out the intellectual foundations of early stage science before prioritizing engagement with nearby communities.

Which Community Building Projects Get Funded?

"Business plans" aren't really a part of VC evaluations as far as I am aware. It certainly wasn't a part of YC's evaluation process. Eye-popping metrics that show growth are relevant, as are the past experiences of the founders, but VCs don't seem to rely much on abstract plans for what one intends to do as a component of evaluations.

Which Community Building Projects Get Funded?

My guess is that optimal grantmaking in EA community building is going to be heavily network-based for several reasons.

  1. Running an excellent EA community is a social activity.

Grantmakers gain tons of information about how capable someone is likely to be at doing this by interacting with them socially and that requires meeting them through your networks.

  2. There are some significant downside risks in funding an EA community builder, and network-based funding derisks this.

If an EA community builder does something bad, having been funded by CEA means that it now reflects on the community as a whole and not just on the specific people involved. This means that funders need to both protect against the downside risks and fund promising projects. Having someone you know and trust vouch for someone you don't know is, per unit of time involved, one of the best ways I know of to figure out who is and isn't likely to accidentally cause harm.

  3. There aren't good objective criteria for evaluating newer community builders.

For someone who has just started running an EA group, it's hard to provide objective numbers that show that you should be funded. Group size, for example, isn't a good proxy because small groups of highly dedicated, capable people are likely to be more valuable than large groups of less dedicated, less capable people. An evaluation of the community builders themselves is probably required, and information from people in your network helps with this.

Which Community Building Projects Get Funded?

An interesting comparison point is venture capital investing. VCs have a strong financial incentive to find and invest in all of the best companies regardless of location. Yet, as far as I know, networks matter a ton for getting VC funding, and there are geographic clustering effects in which companies get funded. We could conclude that VCs are allocating their capital inefficiently, and that there's a market opportunity for VC firms that have partners in many different locations all over the world.

I suspect that's not the right conclusion. Instead, I'd guess that the effect is created by lots of promising companies moving to a tech hub and the best companies being capable of networking their way to funders regardless of location. If you're a startup CEO and can't work out how to get a meeting with VCs, you might be in the wrong line of work.

Similarly, one conclusion that I'd like promising EA community leaders to reach from this analysis is that they should probably find ways to meet the people making grants in their areas. Being able to network seems like a core skill for a promising community builder, so this is an opportunity to exercise that skill. Of course, this doesn't mean that grantmakers shouldn't be working to expand the geographic scope of their grantmaking; it just means that if you're concerned that you're going to get left out of funding unfairly, there are steps you can take to prevent that.

Which Community Building Projects Get Funded?

Just to add a datapoint to this analysis. I was in charge of the referral-based round of EA Grants in 2018. At that time I was based in Fort Worth, Texas for personal reasons. My networks probably had some of the geographic biases that you're concerned about, but for more complex reasons than my physical location.

(Note: I no longer work at CEA and do not speak on CEA's behalf)
