Another Philosophers Against Malaria Fundraiser has begun: https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9418
In previous years, we raised about $65,000 in donations. Early donations are especially helpful, as they populate the page and give a sense of momentum!
Sharing this with philosophers or university patriots you know would be especially welcome. The fundraiser is a 'competition' between departments that aggregates donations; the winner is announced on the popular philosophy blog Daily Nous. Last year, the good folks at Delaware won. Before that, Michigan took the crown. Ohio State and Villanova lie in shambles.
Any help is much appreciated! These fundraisers are easy to run - if you are interested in starting one for your discipline, please reach out.
Hey Emmannaemeka,
Thank you for writing this! I have little insight as to which EA roles you might or might not be a good fit for. But I wanted to chime in on ways of fitting into the EA community, as opposed to EA orgs. I am in academia, too, and do not myself strive to get a job with an EA org. I do not think this makes me 'less EA'. There are many really good ways to contribute to the overall EA project that are not at EA organizations.
I find that one of the privileges of academia is teaching ambitious, talented students. Many students enter university with a burning zeal to change the world for the better. I think that as teachers, we can have a real impact by guiding such students towards realizing their values and moving into positions where they can effectively make the world a better place. I am naturally biased in my assessment here, but I think it's plausible that teaching can have a bigger impact than direct work - it is a realistic aim to help several students grow into direct roles in EA-style organizations. I often think that many of these students are 'better fits' for such roles than I myself would be.
It strikes me that as a faculty member in a genuinely meaningful and important field, you'd be in a premier position to have impact through your teaching.
I agree that we shouldn't use e2g as a shorthand for skillmaxing.
I am less optimistic about the 'fit' vs. raw competence point. It's not clear to me that fit for a role can easily be gleaned from work tests - a very competent person may acquire that 'fit' within a few weeks on the job, for example, once they have more context on the kind of work the organization wants. So even if candidates look very different at the point of hiring, the comparison may come out differently once we imagine both of them in the job, having learned things they did not know when they applied.
I am more broadly worried about 'fit' in EA hiring contexts because, unlike markers of raw competence, 'fit' provides a lot of flexibility for selecting on traits that are relatively tangential to work performance and often unreliable. For example, value-fit might select for like-minded folks who have read the same material as the hiring manager, reducing epistemic diversity. Fit in research interests likewise reduces epistemic diversity and locks in certain research agendas for a long time. Vibe-fit may simply select for friends and for those who have internalized in-group norms. A work test on an explicitly EA project may select for those already familiar with EA, even if an outside candidate could easily pick up basic EA knowledge once they got the job.
My impression is that, overall, EA has a noticeable and suboptimal tendency to hire like-minded folks and people from overlapping social circles (i.e., friends and friends of friends). Insofar as 'fit' makes it easier to justify this tendency internally and externally, I worry that it will lead to suboptimal hiring. I acknowledge we may have very different kinds of 'fit' in mind here, but I do think the examples above occur in EA hiring decisions.
I haven't run hiring rounds for EA organizations, so I may be completely wrong - maybe your experience has been that after a few work tests it becomes abundantly clear who the right candidate is.
This is a cool list. I am unsure if this one is very useful:
* There aren't many salient examples of people doing direct work that I want to switch to e2g.
This is because I think we are not able to evaluate which replacement candidate would fill the role if the currently employed EA had done e2g instead. My understanding is that many extremely talented EAs have trouble finding jobs within EA, and that many of them are capable of doing work of the same quality as current EA employees.
I think this reason cuts both ways:
* E2g is often less well optimised for learning useful object-level knowledge and skills than direct work.
My understanding is that many non-EA jobs provide useful knowledge and skills that are underrepresented in current EA organizations, although my impression is that this is improving as EA organizations professionalize. For example, I wouldn't be surprised if, on average, a highly talented undergrad would become a more effective employee of an EA organization after spending two years earning to give at some anonymous corporation before starting direct work. And if we're lucky, such experience outside EA would promote epistemic diversity and reduce the risk of groupthink in EA organizations.
My understanding is that competition for EA jobs is extremely high, and that posted roles attract a sufficient number of outstanding candidates. This seems to me strong evidence that a fair share of people applying to EA jobs should consider earning to give, unless they have reason to believe that they specifically outshine other applicants (i.e., that the job would not otherwise be filled by an equally competent person).
Regarding skeptical optimism, how about
Cautious Optimism
Safety-conscious optimism
Lighthearted skepticism
Happy Skepticism
Happy Worries
Curious Optimism
Positive Skepticism
Worried Optimism
Careful Optimism
Vigilant Optimism
Vigilant Enthusiasm
Guarded Optimism
Guarded Enthusiasm
Mindful Optimism
Mindful Enthusiasm
Just throwing a bunch of suggestions out in case one of them sounds good to your ear.
To AMF, as part of this yearly fundraiser I run https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9191
I'm happy to see engagement with this article, and I think you make interesting points.
One bigger-picture consideration that I think you are neglecting is that even if your arguments go through (which is plausible), the argument for longtermism/x-risk shifts significantly.
Originally, the argument is something like:
P1. There is really bad, risky tech.
P2. There will be a ton of people in the future.
P3. The risky tech will prevent these people from having (positive) lives.
________________________________
C. Reduce tech risk.
On the dialectic you sketch, the argument is something like:
P1. There is a lot of really bad, risky tech.
P2. This tech, if wielded well, can reduce the risk from all other tech to zero.
P3. There is a small chance of a ton of people in the future.
P4. If we wield the tech well and get a ton of people in the future, that's great.
_________________________________________
C. Reduce tech risk (and, presumably, make the tech powerful enough to eliminate all risk, and start having kids).
I think the extra assumptions needed for your arguments against Thorstad to go through are ones that make longtermism much less attractive to many people, including funders. They also make x-risk work unattractive for people who disagree with P2 of the second argument (i.e., people who do not believe in superintelligence).
I think people are aware that this makes longtermism much less attractive - I typically don't see x-risk work being motivated in this more assumption-heavy way. And, as Thorstad usefully points out, there is virtually no serious expected-value calculus for longtermist interventions that does a decent job of accounting for these complexities. That's a shame, because EA at least originally seemed very diligent about providing explicit, high-quality expected-value models rather than going by vibes and philosophical argument alone.
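To make the contrast concrete, here is a toy sketch (my own notation, not anything from your post or from Thorstad) of how the extra assumptions would enter an expected-value estimate:

$$\mathrm{EV}_{\text{original}} \approx \Delta p_{\text{risk averted}} \cdot N \cdot \bar{v}$$

$$\mathrm{EV}_{\text{sketched}} \approx \Delta p_{\text{risk averted}} \cdot p_{\text{tech wielded well}} \cdot p_{\text{vast future}} \cdot N \cdot \bar{v}$$

where $N$ is the number of future people and $\bar{v}$ the average value of a life. Each extra probability factor is plausibly well below 1, so the estimate shrinks and becomes more sensitive to contested parameters - which is part of why I'd like to see those parameters made explicit.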