All of Mathieu Putz's Comments + Replies

Stuff I buy and use: a listicle to boost your consumer surplus and productivity

This is so useful! I love this kind of post and will buy many things from this one in particular.

Probably a very naive question, but why can't you just take a lot of DHA **and** a lot of EPA to get both supplements' benefits? Especially if your diet means you're likely deficient in both (which is true of veganism? vegetarianism?).

Assuming the Reddit folk wisdom about DHA inducing depression was wrong (which it might not be; I don't want to dismiss it), I don't understand, from the rest of what you wrote, why this doesn't work. Why is there a trade-off?

2 · Ben Auer · 1mo
Probably the best way to answer this question is to look at the tolerable upper limit estimates for DHA and EPA that have been set by expert organisations. This page [https://www.nrv.gov.au/nutrients/fats-total-fat-fatty-acids] states that the US FDA recommends less than 3,000 mg per day of combined EPA, DPA and DHA intake (apparently it is not possible to separate the three here). This typically means that no adverse effects have been observed below that level.
2 · Aaron Bergman · 1mo
Honestly, I don't have a great answer here other than that my overall impression/intuition is that it's probably bad to take arbitrarily high doses of these (unlike water-soluble vitamins), at least out of some sort of precautionary principle, and I recall seeing anecdotes from others saying that they actively prefer taking only one or the other. I don't think there's anything necessarily wrong with taking both (say, 1 g per day of each), though.
Proposal: Impact List -- like the Forbes List except for impact via donations

This seems really exciting!

I skimmed some sections so might have missed it in case you brought it up, but I think one thing that might be tricky about this project is the optics of where your own funding would be coming from. E.g. it might look bad if most (any?) of your funding was coming from OpenPhil and then Dustin Moskovitz and Cari Tuna were very highly ranked (which they probably should be!). In worlds where this project is successful and gathers some public attention, that kind of thing seems quite likely to come up.

So I think conditional on thinki... (read more)

Why Helping the Flynn Campaign is especially useful right now

Thanks for pointing this out, wasn't aware of that, sorry for the mistake. I have retracted my comment.

Why Helping the Flynn Campaign is especially useful right now

Hey, interesting to hear your reaction, thanks.

I can't respond to all of it now, but do want to point out one thing.

And, of course, if elected he will very visibly owe his win to a single ultra-wealthy individual who is almost guaranteed to have business before the next congress in financial and crypto regulation.

I think this isn't accurate.

Donations from individuals are capped at $5,800, so whatever money Carrick is getting is not one giant gift from Sam Bankman-Fried, but rather many small ones from individual Americans. Some of them may work for org... (read more)

[This comment is no longer endorsed by its author]
2 · Joel Burget · 2mo
There's a really easy way around the $5,800 limitation, called a super PAC (Political Action Committee). Super PACs don't give to campaigns directly, but try to influence races via ads, etc. There are no restrictions on super PAC funds. In this case, the relevant one is the Protect Our Future (super) PAC (https://www.politico.com/news/2022/04/19/crypto-super-pac-campaign-finance-00026146). I'm not clear exactly how much POF spent on the Flynn campaign, but SBF donated $13M to POF.

SBF's Protect Our Future PAC has put more than $7M towards Flynn's campaign. I think this is what _pk and others are concerned about, not direct donations. And this is what most people concerned with "buying elections" are concerned about. (This is what the Citizens United controversy is about.)

Why Helping the Flynn Campaign is especially useful right now

If you're wondering who you might know in Oregon, you can search your Facebook friends by location:

Search for Oregon (or Salem) in the normal FB search bar, then go to People. You can also select to see "Friends of Friends".

I assume that will miss a few people, so it's probably worth also actively thinking about your network, but this is a good low-effort first step.

Edit: Actually they need to live in district 6. The biggest city in that district is Salem as far as I can tell. Here's a map.

Why those who care about catastrophic and existential risk should care about autonomous weapons

Thanks for writing this!

I believe there's a small typo here:

The expected deaths are N+P_nM in the human-combatant case and P_yM in the autonomous-combatant case, with a difference in fatalities of (P_y−P_n)(M−N). Given how much larger M (~1-7 Bn) is than N (tens of thousands at most), it only takes a small difference (P_y−P_n) for this to be a very poor exchange.

Shouldn't the difference be (P_y−P_n)M−N ?
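To spell out the arithmetic with the quoted quantities (expected deaths N + P_nM with human combatants, P_yM with autonomous combatants):

P_yM − (N + P_nM) = (P_y − P_n)M − N

so the N term is subtracted on its own rather than multiplied by (P_y − P_n).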

New forum feature: Map of Community Members

This is *so* cool, thanks! Might be nice to have a feature where people can add a second location. E.g. I used to study in Munich, but spend ~2 months per year in Luxembourg. Many friends stayed much longer in Luxembourg. According to the EA survey, there are Luxembourgish EAs other than me, but I have so far failed to find them --- I'd expect many of them to be in a similar situation.

I recommend you add that in your bio, since the text search will match on both the map location and any text written in your bio. :)

Decomposing Biological Risks: Harm, Potential, and Strategies

I thought this was a great article raising a bunch of points which I hadn't previously come across, thanks for writing it!

Regarding the risk from non-state actors with extensive resources, one key question is how competent we should expect such groups to be. Gwern suggests that terrorists are currently not very effective at killing people or inducing terror --- with similar resources, they could cause far more damage than they actually do. This has somewhat lowered my concern about bioterrorist attacks, especially when considering that successfull... (read more)

Effectiveness is a Conjunction of Multipliers

Fair, that makes sense! I agree that if it's purely about solving a research problem with long timelines, then linear or decreasing returns seem very reasonable.

I would just note that speed-sensitive considerations, in the broad sense you use the term, will be relevant to many (most?) people's careers, including researchers' to some extent (reputation helps with research: more funding, better opportunities for collaboration, etc.). But I definitely agree there are exceptions, and well-established AI safety researchers with long timelines may be in that class.

8 · Jonas Vollmer · 2mo
FWIW I think superlinear returns are plausible even for research problems with long timelines, I'd just guess that the returns are less superlinear, and that it's harder to increase the number of work hours for deep intellectual work. So I quite strongly agree with your original point.
Effectiveness is a Conjunction of Multipliers

I agree that superlinearity is way more pronounced in some cases than in others.

However, I still think there can be some superlinear terms for things that aren't inherently about speed. E.g. climbing seniority levels or getting a good reputation with ever larger groups of people.

5 · Jonas Vollmer · 3mo
The examples you give fit my notion of speed - you're trying to make things happen faster than the people with whom you're competing for seniority/reputation.
"Long-Termism" vs. "Existential Risk"

I think ASB's recent post about Peak Defense vs Trough Defense in Biosecurity is a great example of how the longtermist framing can end up mattering a great deal in practical terms.

[April fool's post] Proposal to assign careers by birthdate

Exactly my plan! Of course, this was 100% on purpose!

Effectiveness is a Conjunction of Multipliers

Great post, thanks for writing it! This framing appears a lot in my thinking and it's great to see it written up! I think it's probably healthy to be afraid of missing a big multiplier.

I'd like to slightly push back on this assumption:

If output scales linearly with work hours, then you can hit 60% of your maximum possible impact with 60% of your work hours

First, I agree with other commenters and yourself that it's important not to overwork / look after your own happiness and wellbeing etc.

Having said that, I do think working harder can often have super... (read more)

9 · Jonas Vollmer · 3mo
A key question for whether there are strongly superlinear returns seems to be the speed at which reality moves. For quant trading and crypto exchanges in particular, this effect seems really strong, and FTX's speed is arguably part of why it was so successful. This likely also applies to the early stages of a novel pandemic, or AI crunch time. In other areas (perhaps, research that's mainly useful for long AI timelines), it may apply less strongly.
What's the best machine learning newsletter? How do you keep up to date?

(I accidentally asked multiple versions of this question at once.

This was because I got the following error message when submitting:

"Cannot read properties of undefined (reading 'currentUser')"

So I wrongly assumed the submission didn't work.

@moderators)

3 · JP Addison · 3mo
We've gotten multiple reports of this, and you're the first person to get the exact error message, thank you so much.
$100 bounty for the best ideas to red team

Make the best case against: "Some non-trivial fraction of highly talented EAs should be part- or full-time community builders." The argument in favor would be pointing to the multiplier effect. Assume you could attract the equivalent of one person as good as yourself to EA within one year of full-time community building. If this person is young and we assume the length of a career to be 40 years, then you have just invested 1 year and gotten 40 years in return. By the most naive / straightforward estimate then, a chance of about 1/40 of you attracting one ... (read more)
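To make the naive break-even arithmetic explicit (using the comment's own assumptions of one year invested and a 40-year career gained):

p × 40 years ≥ 1 year  ⇒  p ≥ 1/40 = 2.5%

i.e. on this naive model, even a 2.5% chance of attracting one equally capable person would repay the year spent on community building.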

The Future Fund’s Project Ideas Competition

EA Hotel / CEEALAR except at EA Hubs

Effective Altruism

CEEALAR is currently located in Blackpool, UK. It would be a lot more attractive if it were in e.g. Oxford, the Bay Area, or London. This would allow guests to network with local EAs (as well as other smart people, of which there are plenty in all of the above cities). Insofar as budget is less of a constraint now, and insofar as EA funders are already financing trips to such cities for select individuals (for conferences and otherwise), it seems an EA Hotel there would be justified on the same grounds. (E.g. intercontinental flights can sometimes be more expensive than one month's rent in those cities.)

The Future Fund’s Project Ideas Competition

Studying the long-term effects of stimulants and antidepressants (e.g. Modafinil, Adderall, and Wellbutrin) on productivity and health in healthy people

Economic Growth, Effective Altruism

Is it beneficial or harmful for long-term productivity to take Modafinil, Adderall, Wellbutrin, or other stimulants on a regular basis as a healthy person (some people speculate that it might make you less productive on days where you're not taking it)? If it's beneficial, what's the effect size? What frequency hits the best trade-off between building up tolerance vs short-ter... (read more)

Simplify EA Pitches to "Holy Shit, X-Risk"

Thanks for this! I think it's good for people to suggest new pitches in general. And this one would certainly allow me to give a much cleaner pitch to non-EA friends than rambling about a handful of premises and what they lead to and why (I should work on my pitching in general!). I think I'll try this.

I think I would personally have found this pitch slightly less convincing than current EA pitches though. But one problem is that I and almost everyone reading this were selected for liking the standard pitch (though to be fair whatever selection mechanism ... (read more)

Thanks for the feedback! Yep, it's pretty hard to judge this kind of thing given survivorship bias. I expect this kind of pitch would have worked best on me, though I got into EA long enough ago that I was most grabbed by global health pitches, which maybe got past my weirdness filter in a way that this one didn't.

I'd love to see what happens if someone tries an intro fellowship based around reading the Most Important Century series!

The phrase “hard-core EAs” does more harm than good

I like "(very or most) dedicated EA". Works well for (2) and maybe (4).

How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?

From the perspective of a grant-maker, thinking about reduction in absolute basis points makes sense of course, but for comparing numbers between people, relative risk reduction might be more useful?

E.g. if one person thinks AI risk is 50% and another thinks it's 10%, it seems to me the most natural way for them to speak about funding opportunities is to say it reduces total AI risk by X% relatively speaking.

Talking about absolute risk reduction compresses these two numbers into one, which is more compact, but makes it harder to see where disagreements com... (read more)
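To illustrate with made-up numbers: absolute reduction = P(risk) × relative reduction. An intervention both people agree cuts AI risk by 1% in relative terms corresponds to 0.50 × 0.01 = 0.005 (50 basis points) for the first person but 0.10 × 0.01 = 0.001 (10 basis points) for the second; quoting only the absolute figure hides which of the two inputs is driving a disagreement.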

List of EA funding opportunities

What about individual Earning To Givers?

Is there some central place where all the people doing Earning To Give are listed, potentially with some minimal info about their potential max grant size and the type of stuff they are happy to fund?

If not, how do ETGers usually find non-standard funding opportunities? Just personal networks?

An update in favor of trying to make tens of billions of dollars

Hey Sean, thanks so much for letting me know this! Best of luck whatever you do!

How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?

I assume those estimates are for current margins? So if I were considering whether to do earning to give, I should use lower estimates for how much risk reduction my money could buy, given that EA already has billions to be spent and, due to diminishing returns, your estimates would look much worse after those had been spent?

2 · Linch · 7mo
Yes it's about marginal willingness to spend, not an assessment of absolute impact so far.
What Small Weird Thing Do You Fund?

Great question! Guarding Against Pandemics (GAP) does advocacy for pandemic prevention and, for legal reasons, needs many small donors for some of its work. Here's an excerpt from their post on the EA Forum:

While GAP’s lobbying work (e.g. talking to members of Congress) is already well-funded by Sam Bankman-Fried and others, another important part of GAP’s work is supporting elected officials from both parties who will advocate for biosecurity and pandemic preparedness. U.S. campaign contribution limits require that this work be supported by many small-to-medi

... (read more)
9 · jared_m · 7mo
Agree that GAP is a great cause for small U.S. donors! Their team is approaching the opportunity in a sophisticated way. We've given to GAP twice this fall, and expect to give more this winter / next year.
Announcing my retirement

Thanks so much for looking after possibly my favorite place on the internet!

When to get off the train to crazy town?

Hey, thanks for writing this!

Strong +1 for this part:

I had conversations along the lines of “I already did a Bachelor’s in Biology and just started a Master’s in Nanotech, surely it’s too late for me to pivot to AI safety”. To which my response is “You’re 22, if you really want to go into AI safety, you can easily switch”.

I think this pattern is especially suspicious when it's used to justify a career that's impactful under one worldview over one that's impactful under another.

E.g. I totally empathize with people who aren't into longtermism, but the reasoning ... (read more)

What is most confusing to you about AI stuff?

Here are a couple that came to mind just now.

  1. How smart do you need to be to contribute meaningfully to AI safety? Near the top of your class in high school? Near the top of your class at an Ivy League university? A potential famous prof at an Ivy League university? A potential Fields Medalist?

  2. Also, how hard should we expect alignment to be? Are we trying to throw resources at a problem we expect to be able to at least partially solve in most worlds (which is e.g. the superficial impression I get from biorisk), or are we attempting a Hail Mary, because it might just work and it's important enough to be

... (read more)
We need alternatives to Intro EA Fellowships

I agree it's fine if fellowships aren't interesting to already-engaged EAs and I also see why the question is asked --- I don't even have a strong view on whether it's a bad idea to ask it.

I do think though that the fellowship would have been boring to me at times, even if I had known much less about EA. But maybe I'm just not the type of person who likes to learn stuff in groups and I was never part of the target audience.

We need alternatives to Intro EA Fellowships

Thanks for writing this, I think it's great you're thinking about alternatives!

The way I learned about EA was just by spending too much time on the forum and with the 80k podcast.

Then, I once attended one session of a fellowship and was a little underwhelmed. I remember the question "so can anybody name the definition of an existential risk according to Toby Ord" after we had been asked to read about exactly that — this just seemed like a waste of time. But to be fair, I was also much more familiar with EA at that point than an average fellow. It's very possible that other people had a better experience in the same session.

But I definitely agree there's room for experimentation and probably improvement!

5 · mic · 7mo
I actually think the question about the definition of existential risk is useful, in order to make sure that everyone understands it correctly and doesn't think it means "risk that humanity goes extinct". If you've spent a lot of time learning about EA already, I don't think you would find much novel information from the Intro EA Fellowship, and I think that's fine.
Can we influence the values of our descendants?

Thanks for writing this up, super interesting!

Intuitively I would expect persistence effects to be weaker now than e.g. 300 years ago. This is mostly because society today changes much more rapidly than it did back then. I would guess that it's more common now to live hundreds of kilometres from where you grew up, that the internet allows people to "choose" their culture more freely (my parents like EA less than I do), that the same goes for bigger cities, etc. Generally, advice from my parents and grandparents sometimes feels outdated, which makes me less likely t... (read more)

3 · Jaime Sevilla · 7mo
I do think so! It's hard to contest that change across many dimensions has been accelerating [https://www.cold-takes.com/this-cant-go-on/]. And it would make sense that this accelerating change makes parental advice less applicable, and thus parents less influential overall.
An update in favor of trying to make tens of billions of dollars

I agree! I've added an edit to the post, referencing your comment.

An update in favor of trying to make tens of billions of dollars

Thanks for pointing this out! Hadn't known about this, though it totally makes sense in retrospect that markets would find some way of partially cancelling that inefficiency. I've added an edit to the post.

An update in favor of trying to make tens of billions of dollars

Thanks for pointing that out! I agree it's notable and have added it to the list. I don't have a strong opinion on how important this is relative to other things on there.

An update in favor of trying to make tens of billions of dollars

Thanks for your comment! Super interesting to hear all that.

And my pledge is 10%, although I expect more like 50-75% to go to useful world-improving things but don't want to pledge it because then I'm constrained by what other people think is effective.

Amazing! Glory to you :) I've added this to the post.

How to use the Forum

Thanks, it's probably better that way!

An update in favor of trying to make tens of billions of dollars

Thanks a lot for saying this!

Yeah, I wonder about the flexibility as well. At least, "I have good reason to think I could've gone to MIT / Jane Street..." should go a long way (if you're not delusional).

How to use the Forum

Are upvotes anonymous or is there a way to view who upvoted your comments / posts? I'm not saying it should be one way or another, just curious.

3 · Aaron Gertler · 8mo
Upvotes (and downvotes) are anonymous.
First vs. last name policies?

Thanks for adding your opinion!

Yeah, coming from Luxembourg and studying in Germany, I do get the feeling that the norms differ here. I prefer first name norms though, so that's great :)

First vs. last name policies?

Thanks for your answer! I agree it's strange that these kinds of formalities are still so much of a thing among otherwise egalitarian people.
