Some norms I would like to see when folks use LLMs substantively (not copy editing or brainstorming):
- explaining the extent of LLM use
- explaining the extent of human revision or final oversight
Thanks for this write up! It was really insightful. A few questions:
People who apply to found an NGO come with all sorts of motivations.
Could you say more about what motivations they come with?
As this is a regional program, we couldn’t have a cohort composed of only 4 countries, even though several were outstanding candidates.
Based on my experience working in India, I've seen a lot of benefits of having multiple orgs working in the same geographies at the same time/stage to share resources, advice, talent, etc. Curious what you were limited b...
FWIW, since 2022 (so after SWP and FWI), I count:
Have not thought about compounding returns to orgs! I can think of some concrete examples with AIM ecosystem charities (e.g. one org helping bring another into creation, or creating a need for others to exist). Food for thought.
Curious how you see communitarianism playing out in practice?
There's definitely a cooperative side to things that makes it a lot easier to ask for help amongst EAs than in the relevant professional groups someone might be a part of, but I'm not sure I'm seeing obvious implications.
Very curious if you can describe the types of people you know, their profiles, what cause areas and roles they have applied for, and what constraints they have, if any.
But typically (not MECE, written quickly, not in order of importance, some combination could work etc.):
For many years I've been trying to figure out a core disagreement I have with a bunch of the underlying EA/rationalist school of thought. I think I've sort of figured it out: the emphasis on individual action, behavior & achievement over the collective; a lack of understanding of how the collective changes individuals through emergent properties (e.g. norms, power dynamics, etc.); and an unwillingness to engage with this.
This has improved a bunch since I first joined in 2017 (the biggest shock to the system was FTX and subsequent scandals). Why I think these issues...
Not a recruiter from this AMA, but just wanted to add:
I've seen a number of marketing roles advertised in the past year across field building and effective giving orgs in particular, but also (IIRC) some more direct work AI safety orgs.
There have also been calls for e.g. an AI Safety-focused marketing agency and things like that.
Probably stemming from two things:
Other (probably more important, if combined) reasons :
Hey Ben! You might want to check out Probably Good (https://probablygood.org/) - they do global career advice but more focused on GH&D and animal welfare etc.
I don't think they are explicitly targeting talent from Africa or other LMICs, but they have already written up a bunch of career path profiles + content, and I think much of it could apply.
(+ Animal Advocacy Careers might be interesting too)
Yes! The overview/title captures what I've seen as well, esp from newer community members. I spend a lot of my time telling people that they know their situation better than I do (and have probably infuriated people by not answering questions directly :)).
One point I'd highlight: I find that people often lack confidence in the plans they make, and that makes them more uncertain, less likely to act, and maybe have less motivation or drive.
This is often caused by imposter syndrome, or by chasing an unrealistic sense of certainty or assurance that doesn't exist. ...
Oh interesting. I want to dig into this more now, but my impression is that individuals' giving portfolios - both major donors & retail donors, but more so people who aren't serious philanthropists and/or haven't reflected a lot on their giving - are malleable and not as zero-sum.
I think with donors likely to give to EA causes, a lot of them haven't really been stewarded & cultivated, and there probably is a lot of room for them to increase their giving.
I agree he's not offering alternatives, as I mentioned previously. It would be good if Leif gave examples of better tradeoffs.
I still think your claim is too strongly stated. I don't think Leif criticizing GW orgs means he is discouraging life saving aid as a whole, or that people will predictably die as a result. The counterfactual is not clear (and it's very difficult to measure).
More defensible claims would be:
I didn't read the article you linked, but I think it's plausible. (see more in my last para)
I'd like to address your second paragraph in more depth though:
He's clearly discouraging people from donating to GiveWell's recommendations. This will predictably result in more people dying. I don't see how you can deny this.
I don't think GW recommendations are the only effective charities out there, so I don't think this is an open-and-shut case.
I agree with the omission bias point, but the second half of the paragraph seems unfair.
Leif never discourages people from doing philanthropy (or "aid", as he calls it). Perhaps he might make people unduly skeptical of bednets in particular, which I think is reasonable to critique him on.
But overall, he seems to just be advocating for people to be more critical of possible side effects from aid. From the article (bold mine):
...Making responsible choices, I came to realize, means accepting well-known risks of harm. Which absolutely does not mean that
This comment is mostly about the letter, not the wired article. I don't think this letter is particularly well argued (see end of article for areas of disagreement), but I'm surprised by the lack of substantive engagement with it.
This is fairly rough, and I'm sure I've made mistakes in here, but figured it's better to share than not.
Here’s some stuff I think is reasonable (but would love for folks to chime in if I'm missing something):
Thanks for your time Lizka! As someone who has shared a bunch of feedback on the forum, I appreciated your willingness to always engage and stay curious.
Moderation is one of those important and invisible jobs where it's really hard to please everyone. I think you / the team did a really good job in what was probably the hardest period of time to be a mod on this forum.
+1 to preparing to be in a position to do E2G. I think this is true for many career paths, but it's easier to justify it when you're doing a PhD in ML to work in TAIS research, or working in an entry level position in Congress to try to gain career capital and influence policy.
One general hesitation I had with parts of the post's framing was that it may not look at this as a long-term career path (which means e.g. ramping up giving %'s, doing things to psychologically / emotionally feel good + confident about giving away more money).
well worth the time, and for sure! here are a few thoughts:
+10000, and advice I've given to folks working on any kind of CB / meta work. Targeting users is always a good thing (and you can always increase the personas you support over time). Careers just take time to change; it's very much a marathon, not a sprint (low-hanging fruit is limited).
EA overall (EA thinking, funders, some parts of the EA community) has more blindspots / a lot of suspicion around longer impact timeline...
Sebastian addressed this in a comment below. I'll also add that the Hub is a volunteer-run project, and we have limited time / resources.
Fair point, I couldn't find a link to point to the budget, but:
"We launched this program in July 2022. In its first 12 months, the program had a budget of $10 million."
From their website - https://www.openphilanthropy.org/focus/ea-global-health-and-wellbeing/
I don't think they had dramatically more money in 2023, and (without checking the numbers again to save time) I am pretty sure they mostly maxed out their budget both years.
However, distancing yourself from 'small r' rationality is far more radical and likely less considered.
Could you share some examples of where people have done this or called for it?
From what I've seen online and the in person EA community members I know, people seem pretty clear about separating themselves from the Rationalist community.
It would indeed be very strange if people made the distinction, thought about the problem carefully, and advocated for distancing from 'small r' rationality in particular.
I would expect real cases to look like:
- someone is deciding about an EAGx conference program; a talk on prediction markets sounds subtly Rationality-coded, and is not put on schedule
- someone applies to OP for funding to create a rationality training website; this is not funded because making the distinction between Rationality and rationality would require too much nuance
- someone is decid...
Good Ventures has stopped funding efforts connected with the rationality community and rationality.
Since that post doesn't name the specific causes they are exiting from, could you clarify whether they said that they are also not funding lowercase-r "rationality"?
More broadly, they are ultimately scared about the world returning to the sort of racism that led to the Holocaust and to segregation, and they are scared that, if they do not act now to stop this, they will be part of maintaining the current system of discrimination and racial injustice.
This feels somewhat uncharitable.
Oh thanks for the clarification, I didn't realize that! I'd expect there to be less wealth in LMICs though - I assume the vast majority of wealth (not sure what reasonable numbers are here) is held in HICs and by HNWIs / corporations / governments in those countries.
Also global GDP increased 43% between 2010 and 2022.
GDP per capita numbers are 2022 estimates; I didn't make that clear earlier.
Changelog: added directorysf (https://www.directorysf.com/) to the list of places to look for housing. It's pretty active, but you will need an invite from an existing user to join.