All of Mathieu Putz's Comments + Replies

Can you say more about the 20% per year discount rate for community building? 

In particular, is the figure meant to refer to time or money? I.e. does it mean that

  1. 1 marginal hour spent on community building in 2024 is worth at most 0.8 marginal hours spent in 2023?
  2. 1 marginal dollar spent on community building in 2024 is worth at most 0.8 marginal dollars spent on community building in 2023?
  3. something else? (possibly not referring to marginal resources?)

(For money a 20% discount rate seems very high to me, barring very short timelin... (read more)

2
Ben_West
1y
I honestly have mostly heard this in an offhanded way which doesn't differentiate well between the two, but I think closer to (1).

Minor nitpick: 

I would've found it more helpful to see Haydn's and Esben's judgments listed separately.

4
HaydnBelfield
1y
We came up with our rankings separately, but when we compared them it turned out we agreed on the top 4 + honourable mention. We then worked on the texts together. 
3
Nathan Young
1y
I guess the people I've heard argue this: "Parties create the atmosphere of a community and often a romantically active one, as opposed to a professional network"

Need is a very strong word so I'm voting no. Might sometimes be marginally advantageous though.

2
Nathan Young
1y
So you can see how the community understands itself as well as what CEA wants the community to be, and then there can be a discussion between those two.

Thanks for writing this up! Was gonna apply anyway, but a post like this might have gotten me to apply last year (which I didn't, but which would've been smart). It also contained some useful sections that I didn't know about yet!

I'm not sure what my general take is on this, I think it's quite plausible that keeping it exclusive is net good, maybe more likely good than not. But I want to add one anecdote of my own which pushes the other way.

Over the last two years, while I was a student, I made two career choices in part (though not only) to gain EA credibility:

  • I was a group organizer at EA Munich (~2 hours a week)
  • I did a part-time internship at an EA org (~10 hours a week)

Both of these were fun, but I think it's unlikely that they were good for my career or impact in ways other th... (read more)

This is so useful! I love this kind of post and will buy many things from this one in particular.

Probably a very naive question, but why can't you just take a lot of DHA **and** a lot of EPA to get both supplements' benefits? Especially if your diet means you're likely deficient in both (which is true of veganism? vegetarianism?).

Assuming the Reddit folk wisdom about DHA inducing depression was wrong (which it might not be, I don't want to dismiss it), I don't understand from the rest of what you wrote why this doesn't work? Why is there a trade-off?

2
Ben Auer
2y
Probably the best way to answer this question is to look at tolerable upper limit estimates for DHA and EPA that have been set by expert organisations. This page states that the US FDA recommends less than 3,000 mg per day of combined EPA, DPA and DHA intake (apparently it is not possible to separate the three here). This typically would mean that no adverse effects have been observed below that level.
2
Aaron Bergman
2y
Honestly I don't have a great answer here other than my overall impression/intuition is that it's probably bad to take arbitrarily high doses of these (unlike water soluble vitamins, at least out of some sort of precautionary principle) and I recall seeing some anecdotes from others saying that they actively prefer only taking one or the other.  I don't think there's anything necessarily wrong with taking both (say, 1g per day of each) though

This seems really exciting!

I skimmed some sections so might have missed it in case you brought it up, but I think one thing that might be tricky about this project is the optics of where your own funding would be coming from. E.g. it might look bad if most (any?) of your funding was coming from OpenPhil and then Dustin Moskovitz and Cari Tuna were very highly ranked (which they probably should be!). In worlds where this project is successful and gathers some public attention, that kind of thing seems quite likely to come up.

So I think conditional on thinki... (read more)

Thanks for pointing this out, wasn't aware of that, sorry for the mistake. I have retracted my comment.

Hey, interesting to hear your reaction, thanks.

I can't respond to all of it now, but do want to point out one thing.

And, of course, if elected he will very visibly owe his win to a single ultra-wealthy individual who is almost guaranteed to have business before the next congress in financial and crypto regulation.

I think this isn't accurate.

Donations from individuals are capped at $5,800, so whatever money Carrick is getting is not one giant gift from Sam Bankman-Fried, but rather many small ones from individual Americans. Some of them may work for org... (read more)

[This comment is no longer endorsed by its author]
2
Joel Burget
2y
  There's a really easy way around the $5,800 limitation, called a super PAC (Political Action Committee). Super PACs don't give to campaigns directly, but try to influence races via ads, etc. There are no restrictions on super PAC funds. In this case, the relevant super PAC is Protect Our Future (https://www.politico.com/news/2022/04/19/crypto-super-pac-campaign-finance-00026146). I'm not clear exactly how much POF spent on the Flynn campaign, but SBF donated $13M to POF.

SBF's Protect Our Future PAC has put more than $7M towards Flynn's campaign. I think this is what _pk and others are concerned about, not direct donations. And this is what most people concerned with "buying elections" are concerned about. (This is what the Citizens United controversy is about.)

If you're wondering who you might know in Oregon, you can search your Facebook friends by location:

Search for Oregon (or Salem) in the normal FB search bar, then go to People. You can also select to see "Friends of Friends".

I assume that will miss a few, so it's probably worth also actively thinking about your network, but this is probably a good low-effort first step.

Edit: Actually they need to live in district 6. The biggest city in that district is Salem as far as I can tell. Here's a map.

Thanks for writing this!

I believe there's a small typo here:

The expected deaths are N+P_nM in the human-combatant case and P_yM in the autonomous combatant case, with a difference in fatalities of (P_y−P_n)(M−N). Given how much larger M (~1-7 Bn) is than N (tens of thousands at most) it only takes a small difference (P_y−P_n) for this to be a very poor exchange.

Shouldn't the difference be (P_y−P_n)M−N ?
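A quick numeric sanity check supports the corrected expression. The numbers below are purely illustrative (the post only gives rough ranges for M and N; the probabilities are made up):

```python
# Check that the expected-death difference is (P_y - P_n)*M - N,
# not (P_y - P_n)*(M - N). All values here are illustrative.
M = 3_000_000_000   # potential civilian fatalities (~1-7 Bn per the quote)
N = 50_000          # combatant fatalities (tens of thousands at most)
P_n = 0.010         # hypothetical escalation probability, human combatants
P_y = 0.012         # hypothetical escalation probability, autonomous combatants

human_case = N + P_n * M        # expected deaths with human combatants
autonomous_case = P_y * M       # expected deaths with autonomous combatants
difference = autonomous_case - human_case

# The difference matches (P_y - P_n)*M - N (up to float rounding),
# and differs from the post's (P_y - P_n)*(M - N).
assert abs(difference - ((P_y - P_n) * M - N)) < 1e-3
print(round(difference))
```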

This is *so* cool, thanks! Might be nice to have a feature where people can add a second location. E.g. I used to study in Munich, but spend ~2 months per year in Luxembourg. Many friends stayed much longer in Luxembourg. According to the EA survey, there are Luxembourgish EAs other than me, but I have so far failed to find them --- I'd expect many of them to be in a similar situation.

I recommend you add that in your bio, since the text search will match on both the map location and any text written in your bio. :)

I thought this was a great article raising a bunch of points which I hadn't previously come across, thanks for writing it!

Regarding the risk from non-state actors with extensive resources, one key question is how competent we expect such groups to be. Gwern suggests that terrorists are currently not very effective at killing people or inducing terror --- with similar resources, it should be possible to induce far more damage than they actually do. This has somewhat lowered my concern about bioterrorist attacks, especially when considering that successfull... (read more)

Fair, that makes sense! I agree that if it's purely about solving a research problem with long timelines, then linear or decreasing returns seem very reasonable.

I would just note that speed-sensitive considerations, in the broad sense you use it, will be relevant to many (most?) people's careers, including researchers to some extent (reputation helps doing research: more funding, better opportunities for collaboration etc). But I definitely agree there are exceptions and well-established AI safety researchers with long timelines may be in that class.

8
Jonas V
2y
FWIW I think superlinear returns are plausible even for research problems with long timelines, I'd just guess that the returns are less superlinear, and that it's harder to increase the number of work hours for deep intellectual work. So I quite strongly agree with your original point.

I agree that superlinearity is way more pronounced in some cases than in others.

However, I still think there can be some superlinear terms for things that aren't inherently about speed. E.g. climbing seniority levels or getting a good reputation with ever larger groups of people.

5
Jonas V
2y
The examples you give fit my notion of speed - you're trying to make things happen faster than the people with whom you're competing for seniority/reputation.

I think ASB's recent post about Peak Defense vs Trough Defense in Biosecurity is a great example of how the longtermist framing can end up mattering a great deal in practical terms.

Exactly my plan! Of course, this was 100% on purpose!

Great post, thanks for writing it! This framing appears a lot in my thinking and it's great to see it written up! I think it's probably healthy to be afraid of missing a big multiplier.

I'd like to slightly push back on this assumption:

If output scales linearly with work hours, then you can hit 60% of your maximum possible impact with 60% of your work hours

First, I agree with other commenters and yourself that it's important not to overwork / look after your own happiness and wellbeing etc.

Having said that, I do think working harder can often have super... (read more)

4
Jonas V
2y
A key question for whether there are strongly superlinear returns seems to be the speed at which reality moves. For quant trading and crypto exchanges in particular, this effect seems really strong, and FTX's speed is arguably part of why it was so successful. This likely also applies to the early stages of a novel pandemic, or AI crunch time. In other areas (perhaps, research that's mainly useful for long AI timelines), it may apply less strongly.

(I accidentally asked multiple versions of this question at once.

This was because I got the following error message when submitting:

"Cannot read properties of undefined (reading 'currentUser')"

So I wrongly assumed the submission didn't work.

@moderators)

3
JP Addison
2y
We've gotten multiple reports of this, and you're the first person to get the exact error message, thank you so much.

Make the best case against: "Some non-trivial fraction of highly talented EAs should be part- or full-time community builders." The argument in favor would be pointing to the multiplier effect. Assume you could attract the equivalent of one person as good as yourself to EA within one year of full-time community building. If this person is young and we assume the length of a career to be 40 years, then you have just invested 1 year and gotten 40 years in return. By the most naive / straightforward estimate then, a chance of about 1/40 of you attracting one ... (read more)
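The break-even arithmetic in the estimate above can be made explicit. This is just the comment's own naive calculation, ignoring discounting, counterfactual recruitment, and differences in personal fit:

```python
# Naive break-even estimate: invest 1 year of full-time community
# building; "success" means attracting one person as good as yourself
# with a full 40-year career ahead of them.
years_invested = 1
career_years_gained = 40

# Probability of success at which the trade breaks even in expectation:
break_even_p = years_invested / career_years_gained
print(break_even_p)  # 0.025, i.e. a 1-in-40 chance
```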

EA Hotel / CEEALAR except at EA Hubs

Effective Altruism

CEEALAR is currently located in Blackpool, UK. It would be a lot more attractive if it were in e.g. Oxford, the Bay Area, or London. This would allow guests to network with local EAs (as well as other smart people, of which there are plenty in all of the above cities). Insofar as budget is less of a constraint now, and insofar as EA funders are already financing trips to such cities for select individuals (for conferences and otherwise), an EA Hotel in such a city seems justified on the same grounds. (E.g. intercontinental flights can sometimes be more expensive than one month's rent in those cities.)

Studying stimulants' and anti-depressants' long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin)

Economic Growth, Effective Altruism

Is it beneficial or harmful for long-term productivity to take Modafinil, Adderall, Wellbutrin, or other stimulants on a regular basis as a healthy person (some people speculate that it might make you less productive on days where you're not taking it)? If it's beneficial, what's the effect size? What frequency hits the best trade-off between building up tolerance vs short-ter... (read more)

Thanks for this! I think it's good for people to suggest new pitches in general. And this one would certainly allow me to give a much cleaner pitch to non-EA friends than rambling about a handful of premises and what they lead to and why (I should work on my pitching in general!). I think I'll try this.

I think I would personally have found this pitch slightly less convincing than current EA pitches though. But one problem is that I and almost everyone reading this were selected for liking the standard pitch (though to be fair whatever selection mechanism ... (read more)

Thanks for the feedback! Yep, it's pretty hard to judge this kind of thing given survivorship bias. I expect this kind of pitch would have worked best on me, though I got into EA long enough ago that I was most grabbed by global health pitches. Which maybe got past my weirdness filter in a way that this one didn't. 

I'd love to see what happens if someone tries an intro fellowship based around reading the Most Important Century series!

I like "(very or most) dedicated EA". Works well for (2) and maybe (4).

From the perspective of a grant-maker, thinking about reduction in absolute basis points makes sense of course, but for comparing numbers between people, relative risk reduction might be more useful?

E.g. if one person thinks AI risk is 50% and another thinks it's 10%, it seems to me the most natural way for them to speak about funding opportunities is to say it reduces total AI risk by X% relatively speaking.

Talking about absolute risk reduction compresses these two numbers into one, which is more compact, but makes it harder to see where disagreements com... (read more)
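The conversion between the two framings is simple. Using the comment's hypothetical numbers (50% vs 10% total AI risk) and an assumed 1% relative reduction from some grant:

```python
# Illustrative conversion from relative to absolute risk reduction.
# The total-risk figures and the 1% relative reduction are hypothetical.
def absolute_reduction(total_risk: float, relative_reduction: float) -> float:
    """Absolute risk reduction implied by a relative-reduction claim."""
    return total_risk * relative_reduction

# Two people who agree a grant cuts AI risk by 1% *relatively* still
# disagree about the absolute effect, because they disagree on total risk:
for total_risk in (0.50, 0.10):
    abs_red = absolute_reduction(total_risk, 0.01)
    print(f"{total_risk:.0%} total risk -> {abs_red * 10_000:.0f} basis points")
```

This makes the point in the comment concrete: quoting only the absolute number (50 vs 10 basis points) hides whether the disagreement is about the grant or about total risk.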

What about individual Earning To Givers?

Is there some central place where all the people doing Earning To Give are listed, potentially with some minimal info about their potential max grant size and the type of stuff they are happy to fund?

If not, how do ETGers usually find non-standard funding opportunities? Just personal networks?

Hey Sean, thanks so much for letting me know this! Best of luck whatever you do!

I assume those estimates are for current margins? So if I were considering whether to do earning to give, I should use lower estimates for how much risk reduction my money could buy, given that EA has billions to be spent already and due to diminishing returns your estimates would look much worse after those had been spent?

2
Linch
2y
Yes it's about marginal willingness to spend, not an assessment of absolute impact so far.

Great question! Guarding Against Pandemics (GAP) does advocacy for pandemic prevention and needs many small donors for legal reasons for some of their work. Here's an excerpt from their post on the EA Forum:

While GAP’s lobbying work (e.g. talking to members of Congress) is already well-funded by Sam Bankman-Fried and others, another important part of GAP’s work is supporting elected officials from both parties who will advocate for biosecurity and pandemic preparedness. U.S. campaign contribution limits require that this work be supported by many small-to-medi

... (read more)
9
jared_m
2y
Agree that GAP is a great cause for small U.S. donors! Their team is approaching the opportunity in a sophisticated way.  We've given to GAP twice this fall, and expect to give more this winter / next year.

Thanks so much for looking after possibly my favorite place on the internet!

Hey, thanks for writing this!

Strong +1 for this part:

I had conversations along the lines of “I already did a Bachelor’s in Biology and just started a Master’s in Nanotech, surely it’s too late for me to pivot to AI safety”. To which my response is “You’re 22, if you really want to go into AI safety, you can easily switch”.

I think this pattern is especially suspicious when used to justify some career that's impactful in one worldview over one that's impactful in another.

E.g. I totally empathize with people who aren't into longtermism, but the reasoning ... (read more)

Here's a couple that came to mind just now.

  1. How smart do you need to be to contribute meaningfully to AI safety? Near the top of your class in high school? Near the top of your class at an Ivy League university? A potential famous professor at an Ivy League university? A potential Fields Medalist?

  2. Also, how hard should we expect alignment to be? Are we trying to throw resources at a problem we expect to be able to at least partially solve in most worlds (which is e.g. the superficial impression I get from biorisk) or are we attempting a hail mary, because it might just work and it's important enough to be

... (read more)

I agree it's fine if fellowships aren't interesting to already-engaged EAs and I also see why the question is asked --- I don't even have a strong view on whether it's a bad idea to ask it.

I do think though that the fellowship would have been boring to me at times, even if I had known much less about EA. But maybe I'm just not the type of person who likes to learn stuff in groups and I was never part of the target audience.

Thanks for writing this, I think it's great you're thinking about alternatives!

The way I learned about EA was just by spending too much time on the forum and with the 80k podcast.

Then, I once attended one session of a fellowship and was a little underwhelmed. I remember the question "so can anybody name the definition of an existential risk according to Toby Ord" after we had been asked to read about exactly that — this just seemed like a waste of time. But to be fair, I was also much more familiar with EA at that point than an average fellow. It's very possible that other people had a better experience in the same session.

But I definitely agree there's room for experimentation and probably improvement!

5
mic
2y
I actually think the question about the definition of existential risk is useful, in order to make sure that everyone understands it correctly and doesn't think it means "risk that humanity goes extinct". If you've spent a lot of time learning about EA already, I don't think you would find much novel information from the Intro EA Fellowship, and I think that's fine.

Thanks for writing this up, super interesting!

Intuitively I would expect persistence effects to be weaker now than e.g. 300 years ago. This is mostly because today society changes much more rapidly than back then. I would guess that it's more common now to live hundreds of kilometres from where you grew up, that the internet allows people to "choose" their culture more freely (my parents like EA less than I do), that the same goes for bigger cities etc. Generally advice from my parents and grandparents sometimes feels outdated, which makes me less likely t... (read more)

3
Jaime Sevilla
2y
I do think so! It's hard to contest that change across many dimensions has been accelerating. And it would make sense that this accelerating change makes parental advice less applicable, and thus parents less influential overall. 

I agree! I've added an edit to the post, referencing your comment.

Thanks for pointing this out! Hadn't known about this, though it totally makes sense in retrospect that markets would find some way of partially cancelling that inefficiency. I've added an edit to the post.

Thanks for pointing that out! I agree it's notable and have added it to the list. I don't have a strong opinion on how important this is relative to other things on there.

Thanks for your comment! Super interesting to hear all that.

And my pledge is 10%, although I expect more like 50-75% to go to useful world-improving things but don't want to pledge it because then I'm constrained by what other people think is effective.

Amazing! Glory to you :) I've added this to the post.

Thanks, it's probably better that way!
