Max_Daniel

Chief of Staff at the Forethought Foundation for Global Priorities Research and Chair of the EA Infrastructure Fund.

Previously I participated in the first cohort of FHI's Research Scholars Programme (RSP) and then helped run it as one of its Project Managers.

Before that, my first EA-inspired jobs were with the Effective Altruism Foundation, e.g., running what is now the Center on Long-Term Risk. While I don't endorse their 'suffering-focused' stance on ethics, I'm still a board member there.

Unless stated otherwise, I post on the Forum in a personal capacity, and don't speak for any organization I'm affiliated with.

I like weird music and general abstract nonsense. In a different life I would be a mediocre mathematician or a horrible anthropologist.

Comments

You can talk to EA Funds before applying

I'm almost certain that the website is correct.

I think the post you link to is outdated. It was superseded by this newer post.

I'd therefore encourage you to submit your application!

(But note that I'm a fund manager for the EAIF, not the LTFF, and so cannot speak for the LTFF.)

More EAs should consider “non-EA” jobs

FWIW, I think I did not consider non-EA jobs nearly enough right after my master's in 2016. However, my situation was somewhat idiosyncratic, and I'm not sure it could happen today in this form.

I ended up choosing between one offer from an EA org and one offer from a mid-sized German consulting firm. I chose the EA org. I think it's kind of insane that I hadn't even applied to, or strongly considered, more alternatives, and I think it's highly unclear if I made the right choice.

Post on maximizing EV by diversifying w/ declining marginal returns and uncertainty

I don't remember a post, but Daniel Kokotajlo recently said the following in a conversation. Someone with a maths background should find it easy to check this and make it precise.

> It is a theorem, I think, that if you are allocating resources between various projects that each have logarithmic returns to resources, and you are uncertain about how valuable the various projects are but expect that most of your total impact will come from whichever project turns out to be best (i.e. the distribution of impact is heavy-tailed) then you should, as a first approximation, allocate your resources in proportion to your credence that a project will be the best.
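To make this slightly more concrete, here is a minimal Monte Carlo sketch of the heuristic. Everything in it is my own illustrative assumption rather than part of the claim: I model three hypothetical projects with lognormal (heavy-tailed) impact multipliers v_i, take total utility to be sum_i v_i * log(x_i) for an allocation x summing to 1, and the parameter values and the `expected_utility` helper are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (my numbers, not from the quote): three projects whose
# uncertain impact multipliers v_i are lognormal, i.e. heavy-tailed, so most
# realized impact tends to come from whichever project draws the largest v_i.
n_samples = 200_000
mu = np.array([0.0, 0.7, 1.4])   # log-scale locations; project 3 looks best
sigma = 1.5                      # common log-scale spread
v = rng.lognormal(mean=mu, sigma=sigma, size=(n_samples, 3))

def expected_utility(x):
    """Monte Carlo estimate of E[sum_i v_i * log(x_i)] for an allocation x
    that sums to 1 (total budget normalized to 1)."""
    return float(np.mean(v @ np.log(x)))

# Credence that each project turns out to be the best one.
p_best = np.bincount(v.argmax(axis=1), minlength=3) / n_samples

# With utility sum_i v_i * log(x_i), the exact optimum is x_i proportional
# to E[v_i] (first-order conditions of log utility under a budget constraint).
x_opt = v.mean(axis=0) / v.mean(axis=0).sum()

candidates = {
    "proportional to P(best)": p_best,
    "exact optimum": x_opt,
    "uniform split": np.full(3, 1 / 3),
    "all-in on favorite": np.array([0.001, 0.001, 0.998]),
}
for name, x in candidates.items():
    print(f"{name:24s} x = {np.round(x, 3)}  E[U] = {expected_utility(x):.3f}")
```

Under these assumptions, allocating in proportion to P(best) lands very close to the exact optimum, while going all-in on the single favorite project does markedly worse, which matches the "as a first approximation" wording in the quote. This is only an illustration for one parameter choice, not a check of the theorem's exact conditions.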

Most research/advocacy charities are not scalable

> But the EA Infrastructure Fund currently only has ~$65k available

Hi, thanks for mentioning this. I'm the chairperson of the EA Infrastructure Fund and wanted to comment on this quickly: we do have room for more funding, but the $65k figure is too low. As of one week ago, the EAIF had at least $290k available. (The website now shows $270k for me, not $65k.)

It is currently hard to get accurate numbers, including for ourselves at EA Funds, due to an accounting change at CEA. Apologies for any confusion this might cause. We will fix the number on the website as soon as possible, and will also soon provide more reliable info on our room for more funding in an EA Forum post or comment.

ETA: according to a new internal estimate, as of August 10th the EAIF had $444k available.

EA Infrastructure Fund: Ask us anything!

(I'd be very interested in your answer if you have one btw.)

The Centre for the Governance of AI is becoming a nonprofit

FWIW, I agree that, for some lines of work you might want to do, managing conflicts of interest is very important, and I'm glad you're thinking about how to do this.

Linch's Shortform

That seems fair. To be clear, I think "ground truth" isn't the exact framing I'd want to use, and overall I think the best version of such an exercise would encourage some degree of skepticism about the alleged 'better' answer as well.

Assuming it's framed well, I think there are both upsides and downsides to using examples that are closer to EA versus ones that are more clear-cut. I'm uncertain which would be better overall if I could only do one of them.

Another advantage of my suggestion, in my view, is that it relies less on mentors. I'm concerned that mentors who are less epistemically savvy than the best participants could detract a lot from the value the exercise might provide, and that it would be very hard to ensure adequate mentor quality for some audiences I'd want to use this exercise with. Even if you're less concerned about this, any version relying on mentors seems less scalable than one that relies only on access to published material.

EA Infrastructure Fund: Ask us anything!

I haven't thought a ton about the implications of this, but my initial reaction is also to be generally open to it.

So if you're reading this and wondering whether it could be worth submitting an application for funding for past expenses, then I think the answer is that we'd at least consider it, so potentially yes.

If you're reading this and the EAIF's policy on this going forward really matters to you (e.g., if it's decision-relevant for a project you might start soon), you might want to check with me before going ahead. I'm not sure I'll be able to say anything more definitive, but it's at least possible. And to be clear, so far all we have are the personal views of two EAIF fund managers, not a considered opinion or policy of all fund managers or of the fund as a whole.
