Max_Daniel

Chief of Staff at the Forethought Foundation for Global Priorities Research and Chair of the EA Infrastructure Fund.

Previously I participated in the first cohort of FHI's Research Scholars Programme (RSP) and then helped run it as one of its Project Managers.

Before that, my first EA-inspired jobs were with the Effective Altruism Foundation, e.g., running what is now the Center on Long-Term Risk. While I don't endorse their 'suffering-focused' stance on ethics, I'm still a board member there.

Unless stated otherwise, I post on the Forum in a personal capacity, and don't speak for any organization I'm affiliated with.

I like weird music and general abstract nonsense. In a different life I would be a mediocre mathematician or a horrible anthropologist.

Comments

EA Infrastructure Fund: Ask us anything!

(I'd be very interested in your answer if you have one btw.)

The Centre for the Governance of AI is becoming a nonprofit

FWIW I agree that, for some lines of work you might want to do, managing conflicts of interest is very important, and I'm glad you're thinking about how to do this.

Linch's Shortform

That seems fair. To be clear, I think "ground truth" isn't the exact framing I'd want to use, and overall I think the best version of such an exercise would encourage some degree of skepticism about the alleged 'better' answer as well.

Assuming it's framed well, I think there are both upsides and downsides to using examples that are closer to EA vs. ones that are clearer-cut. I'm uncertain which would be better overall if I could only do one of them.

Another advantage of my suggestion, in my view, is that it relies less on mentors. I'm concerned that mentors who are less epistemically savvy than the best participants could detract a lot from the value the exercise might provide, and that it would be very hard to ensure adequate mentor quality for some audiences I'd want to use this exercise with. Even if you're less concerned about this, relying on any kind of plausible mentor seems less scalable than a version that relies only on access to published material.

EA Infrastructure Fund: Ask us anything!

I haven't thought a ton about the implications of this, but my initial reaction also is to generally be open to this.

So if you're reading this and are wondering whether it could be worth it to submit an application for funding for past expenses, then I think the answer is that we'd at least consider it, so potentially yes.

If you're reading this and it really matters to you what the EAIF's policy on this is going forward (e.g., if it's decision-relevant for some project you might start soon), you might want to check with me before going ahead. I'm not sure I'll be able to say anything more definitive, but it's at least possible. And to be clear, so far all we have are the personal views of two EAIF managers, not a considered opinion or policy of all fund managers or the fund as a whole, or anything like that.

Linch's Shortform

I would be very excited about someone experimenting with this and writing up the results. (And would be happy to provide EAIF funding for this if I thought the details of the experiment were good and the person a good fit for doing this.)

If I had had more time, I would have done this for the EA In-Depth Fellowship seminars I designed and piloted recently.

I would be particularly interested in doing this for cases where there is some amount of easily transmissible 'ground truth' people can use as a feedback signal. E.g.

  • You first let people red-team deworming papers and then give them some more nuanced 'Worm Wars' stuff. (Where ideally you want people to figure out "okay, despite paper X making that claim we shouldn't believe that deworming helps with short/mid-term education outcomes, but despite all the skepticism by epidemiologists here is why it's still a great philanthropic bet overall" - or whatever we think the appropriate conclusion is.)
  • You first let people red-team particular claims about the effects on hen welfare from battery cages vs. cage-free environments and then you show them Ajeya's report.
  • You first let people red-team particular claims about the impacts of the Justinian plague and then you show them this paper.
  • You first let people red-team particular claims about "X is power-law distributed" and then you show them Clauset et al., Power-law distributions in empirical data.

(Collecting a list of such examples would be another thing I'd be potentially interested to fund.)

COVID: How did we do? How can we know?

We even saw an NYT article about the CDC and whether reform is possible.

There were some other recent NYT articles which, based on my limited COVID knowledge, I thought were pretty good, e.g., on the origin of the virus or airborne vs. droplet transmission [1].

The background of their author, however, seems fairly consistent with an "established experts and institutions largely failed" story:

Zeynep Tufekci, a contributing opinion writer for The New York Times, writes about the social impacts of technology. She is an assistant professor in the School of Information and Library Science at the University of North Carolina, a faculty associate at the Berkman Center for Internet and Society at Harvard, and a former fellow at the Center for Information Technology Policy at Princeton. Her research revolves around politics, civics, movements, privacy and surveillance, as well as data and algorithms.

Originally from Turkey, Ms. Tufekci was a computer programmer by profession and academic training before turning her focus to the impact of technology on society and social change.

It is interesting that perhaps some of the best commentary on COVID in the world's premier newspaper comes from a former computer programmer whose main job before COVID was writing about tech issues.

(Though note that this is my super unsystematic impression. I'm not reading a ton of COVID commentary, in the NYT or elsewhere. I guess a skeptical observer could also argue "well, the view you like is the one typically championed by Silicon Valley types and other semi/non-experts, so you shouldn't be surprised that the newspaper op-eds you like are written by such people".)

--

[1] What do you do if you want to expand on this topic "without the word limits" of an NYT article? Easy.

How to get technological knowledge on AI/ML (for non-tech people)

This is great, thank you so much for sharing. I expect that many people will be in a similar situation, and that I and others will therefore link to this post many times in the future.

(For the same reason, I also think that pointers to potentially better resources by others in the comments would be very valuable.)

You can now apply to EA Funds anytime! (LTFF & EAIF only)

(The following is just my view, not necessarily the view of other EAIF managers. And I can't speak for the LTFF at all.)

FWIW I can think of a number of circumstances I'd consider a "convincing reason" in this context. In particular, cases where people know they won't be available for 6-12 months because they want to wrap up some ongoing unrelated commitment, or cases where large lead times are common (e.g., PhD programs and some other things in academia).

I think as with most other aspects of a grant, I'd make decisions on a case-by-case basis that would be somewhat hard to describe by general rules.

I imagine I'd generally be fairly open to considering cases where an applicant thinks it would be useful to get a commitment now for funding that would be paid out a few months out, and I would much prefer they just apply as opposed to worrying too much about whether their case for this is "convincing". 

What are some key numbers that (almost) every EA should know?

We've now turned most of these into Anki cards

Amazing, thank you so much!
