I run the non-engineering side of the EA Forum (this platform), run the EA Newsletter, and work on some other content-related tasks at CEA. Please feel free to reach out! You can email me. [More about my job.]
Some of my favorite of my own posts:
I finished my undergraduate studies with a double major in mathematics and comparative literature in 2021. I was a research fellow at Rethink Priorities in the summer of 2021 and was then hired by the Events Team at CEA. I've since switched to the Online Team. In the past, I've also done some (math) research and worked at Canada/USA Mathcamp.
Some links I think people should see more frequently:
I'd suggest using a different term or explicitly outlining how you use "expert" (ideally both in the post and in the report, where you first use the term) since I'm guessing that many readers will expect that if someone is called "expert" in this context, they're probably "experts in EA meta funding" specifically — e.g. someone who's been involved in the meta EA funding space for a long time, or someone with deep knowledge of grantmaking approaches at multiple organizations. (As an intuition pump and personal datapoint, I wouldn't expect "experts" in the context of a report on how to run good EA conference sessions to include me, despite the fact that I've been a speaker at EA Global a few times.) Given your description of "experts" above, which seems like it could include (for instance) someone who's worked at a specific organization and maybe fundraised for it, my sense is that the default expectation of what "expert" means in the report would be mistaken.
Relatedly, I'd appreciate it if you listed numbers (and possibly other specific info) in places like this:
We interviewed numerous experts, including but not limited to staff employed by (or donors associated with) the following organizations: OP, EA Funds, MCF, GiveWell, ACE, SFF, FP, GWWC, CE, HLI and CEA. We also surveyed the EA community at large.
E.g. the excerpt above might turn into something like the following:
We interviewed [10?] [experts], including staff at [these organizations] and donors who have supported [these organizations]. We also ran an "EA Meta Funding Survey" of people involved in the EA community and got 25 responses.
This probably also applies in places where you say things like "some experts" or that something is "generally agreed". (In case it helps, a post I love has a section on how to be (epistemically) legible.)
I know Grace has seen this already, but in case others reading this thread are interested: I've shared some thoughts on not taking the pledge (yet) here.[1]
Adding to the post: part of the value of pledges like this comes from their role as a commitment mechanism to prevent yourself from drifting away from values and behaviors that you endorse. I'm not currently worried about drifting in this way, partly because I work for CEA and have lots of social connections to extremely altruistic people. If I started working somewhere that isn't explicitly EA-oriented and/or lost my connections to the EA community, I think I'd worry a lot more about drift and the usefulness of the pledge would jump for me. (I plan on thinking about taking some kind of pledge if/when that happens.)
I'll also note that I've recently seen multiple people ~dunking on folks in EA who haven't taken the pledge (or making fun of arguments against taking the pledge), and I think this is pretty unhelpful. I'm really grateful to the GWWC Pledge community, but I really don't think the pledge is right for everyone (and neither does GWWC). Even if you think almost all the people who aren't pledging are wrong and/or biased, dunking is probably a bad way to argue. Additionally, it disincentivizes people from coming out and answering Grace's question, since they might worry that they'll (indirectly) get ridiculed for it. So if you see someone you know ~dunking, consider asking them to avoid doing that (especially if you already know them and/or have been sharing arguments for taking the pledge).
To be clear: I totally believe my conclusion could be wrong, and I'm happy to see (more) arguments about why that could be. (Having said that, I should flag that I don't plan on spending time on this decision right now because I think I have more pressing decisions at the moment, but it's something I want to think more about in the future. So e.g. I might not respond to comments.)
Not sure if this already exists somewhere (would love recommendations!), but I'd be really excited to see a clear and carefully linked/referenced overview or summary of what various agriculture/farming ~lobby groups do to influence laws and public opinion, and how they do it (with a focus on anything related to animal welfare concerns). This seems relevant.
Just chiming in with a quick note: I collected some tips on what could make criticism more productive in this post: "Productive criticism: what could help?"
I'll also add a suggestion from Aaron: If you like a post, tell the author! (And if you're not sure about commenting with something you think isn't substantive, you can message the author a quick note of appreciation or even just heart-react on the post.) I know that I get a lot out of appreciative comments/messages related to my posts (and I want to do more of this myself).
I'm basically always interested in potential lessons for EA/EA-related projects from various social movements/fields/projects.
Note that you can find existing research that hasn't been discussed (much) on the Forum and link-post it (I bet there's a lot of useful stuff out there), maybe with some notes on your takeaways.
Example movements/fields/topics:
Some resources, examples, etc. (not exhaustive or even a coherent category):
I'd love to see two types of posts that were already requested in the last version of this thread:
I really like this post and am curating it (I might be biased in my assessment, but I endorse it and Toby can't curate his own post).
A personal note: the opportunity framing has never quite resonated with me (neither has the "joy in righteousness" framing), but I don't think I can articulate what does motivate me. Some of my motivations end up routing through something ~social. For instance, one (quite imperfect, I think!) approach I take[1] is to imagine some people (sometimes fictional or historical) I respect and feel a strong urge to be the kind of person they would respect or understand; I want to be able to look them in the eye and say that I did what I could and what I thought was right. (Another thing I do is try to surround myself with people[2] I'm happy to become more similar to, because I think I will often end up seeking their approval at least a bit, whether I endorse doing it or not.)
I also want to highlight a couple of related things:
both explicitly/on purpose (sometimes) and often accidentally/implicitly (I don't notice that I've started thinking about whether I could face Lay or Karel Capek or whoever else until later, when I find myself reflecting on it)
I'm mostly talking about something like my social circle, but I also find this holds for fictional characters, people I follow online, etc.
I'm going to butt in with some quick comments, mostly because:
I'm sharing comments and suggestions below, using your (Joel's) numbering. (In general, I'm not sharing my overall views on EA Funds or the report. I'm just trying to clarify some confusions that seem resolvable, based on the above discussion, and suggest changes that I hope would make the report more useful.)
I also want to say that I appreciate the work that has gone into the report and got value from e.g. the breakdown of quantitative data about funding — thanks for putting that together.
And I want to note potential COIs: I'm at CEA (although to be clear I don't know if people at CEA agree with my comment here), briefly helped evaluate LTFF grants in early 2022, and Linch was my manager when I was a fellow at Rethink Priorities in 2021.
E.g.
In relation to this claim: "They do not think of RP as doing cause prioritization, and though in their view RP could absorb more people/money in a moderately cost-effective way, they would consider less than half of what they do cause prioritization."
"...we mean reports of success or having public metrics of success. We didn't view reports on payouts to be evidence of success, since payouts are a cost, and not the desired end goal in itself. This contrasts with reports on output (e.g. a community building grant actually leading to increased engagement on XYZ engagement metrics) or much more preferably, report on impact (e.g. and those XYZ engagement metrics leading to actual money donated to GiveWell, from which we can infer that X lives were saved)."