I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.
One related observation: I've noticed that the top individual donors (i.e., Jaan Tallinn, Dustin Moskovitz, and others, not OP) get a great deal of respect and deference.
I think that these people are creating huge amounts of value.
However, I think a lot of people assume that the people contributing the most to EA (these donors) are the "most intense and committed EAs," and I think that's not the case. My impression is that these donors, while very smart and hardworking, are quite distinct from most "intense and committed EAs." They often have values and beliefs that are fairly different. I suspect that they donate to EA causes not because they are incredibly closely aligned, but because EA causes and organizations represent some of the closest matches among the existing charity options.
Again, I think that their actions are overall quite good and that these people are doing great work.
But at the same time, when I look at the people I find most inspiring, or whom I'd trust most with a great deal of money, I'd probably look more to others on the extreme of hard-working, intelligent, and reasonable, who are often researchers with dramatically lower net worths.
Correspondingly, one of the things I admire most about many top donors is their ability to defer to others who are better positioned to make specific choices. Normally, "Finding the best people for the job, accepting it's not you, and then mostly getting out of the way" is about the best you can do as an individual donor.
I broadly agree with this!
At the same time, I'd flag that I'm not quite sure how to frame this.
If I were a donor to 80k, I'd see this action as less "80k did something nice for the EA community that they themselves didn't benefit from" and more "80k did something that was a good bet in terms of expected value." In some ways, this latter thing can be viewed as more noble, even though it might be seen as less warm.
Basically, I think that traditional understandings of "being thankful" sort of break down when organizations are making intentional investments that optimize for expected value.
I'm not at all saying that this means that these posts are less valuable or noble or whatever. Just that I'd hope we could argue that they make sense strictly through the lens of EV optimization, and thus don't need to rely as much on the language of appreciation.
(I've been thinking about this in the context of other discussions as well.)
I generally believe that EA is effective at being pragmatic, and in that regard, I think it's important for the key organizations that are both giving and receiving funding in this area to coordinate, especially with topics like funding diversification. I agree that this is not the ideal world, but this goes back to the main topic.
For reference, I agree it's important for these people to be meeting with each other. I wasn't disagreeing with that.
However, I would hope that over time, more people who aren't in the immediate OP umbrella would be brought into key discussions of the future of EA. At least have something like 10% of the audience be strongly or mostly independent.
The o1-preview and Claude 3.5-powered template bots did pretty well relative to the rest of the bots.
As I think about it, this surprises me a bit. Did participants have access to these early on?
If so, it seems like many participants underperformed the examples/defaults? That seems kind of underwhelming. I guess it's easy to make a lot of changes that seem good at the time but wind up hurting performance when tested. This also raises a concern: there apparently wasn't any faster/cheaper way of testing these bots before the competition. Something seems a bit off here.
I think you raise some good points on why diversification as I discuss it is difficult and why it hasn't been done more.
Quickly:
> I agree with the approach's direction, but this premise doesn't seem very helpful in shaping the debate.
Sorry, I don't understand this. What is "the debate" that you are referring to?
> At the last, MCF funding diversification and the EA brand were the two main topics
This is good to know. Since MCF came up, I'd note that it seems bad to me that MCF is, as I understand it, very much within the OP umbrella. I believe it was funded by OP or CEA, and the people who set it up were employed by CEA, which was primarily funded by OP. Most of the attendees seem to be people at OP or CEA, or else heavily funded by OP.
I have a lot of respect for many of these people and am not claiming anything nefarious. But I do think that this acts as a good example of the sort of thing that seems important for the EA community, and also that OP has an incredibly large amount of control over. It seems like an obvious potential conflict of interest.
Quickly:
1. Some of this gets into semantics. There are some things that are more "key inspirations for what was formally called EA" and other things that "were formally called EA, or called themselves EA." GiveWell was highly influential around EA, but I think it was created before the term "EA" was coined, and I don't think they publicly associated as "EA" for some time (if ever).
2. I think we're straying from the main topic at this point. One issue is that while I think we disagree on some of the details/semantics of early EA, I also don't think that matters much for the greater issue at hand. "The specific reason why the EA community technically started" is pretty different from "what people in this scene currently care about."
When having conversations with people who are hard to reach, it's easy for discussions to take ages.
One thing I've tried is having a brief back-and-forth with Claude, asking it to lay out all the key arguments against my position. Then I make the conversation public, send the other person a link, and ask them to read it. I find that this can get through a lot of the preliminary points on complex topics, with minimal human involvement.
A surprising number of EA researchers I know have highly accomplished parents. Many have family backgrounds (or have married into families) that are relatively affluent and scientific.
I believe the nonprofit world attracts people with financial security. While compensation is often modest, the work can offer significant prestige and personal fulfillment.
This probably comes with a bunch of implications.
But the most obvious implication to me, for people in this community, is that it's very difficult to assess how impressive specific individual EAs/nonprofit people are without understanding their full personal situations. Many prominent community members have reached their positions through a combination of merit, family/social networks, and fortunate life circumstances.