Mathieu Putz

Hey, I'm Matt! I just finished studying Engineering Science at the Technical University of Munich (TUM), with a specialization in Machine Learning. Currently I'm in a career exploration phase, trying to identify my next step by researching relevant questions and experimenting with different projects. I'm particularly interested in AI safety and biorisk. I used to be a group organizer at EA Munich and a research analyst at Nonlinear, but I'm currently winding down those responsibilities.

Comments

Why Helping the Flynn Campaign is especially useful right now

Thanks for pointing this out; I wasn't aware of that, and I'm sorry for the mistake. I have retracted my comment.

Why Helping the Flynn Campaign is especially useful right now

Hey, interesting to hear your reaction, thanks.

I can't respond to all of it now, but do want to point out one thing.

> And, of course, if elected he will very visibly owe his win to a single ultra-wealthy individual who is almost guaranteed to have business before the next congress in financial and crypto regulation.

I think this isn't accurate.

Donations from individuals are capped at $5,800, so whatever money Carrick is getting is not one giant gift from Sam Bankman-Fried, but rather many small donations from individual Americans. Some of them may work for organizations that get a lot of funding from big EA donors, but it's still their own salary, which they are free to spend however they like. As an aside, in most cases the funding of these orgs probably still comes from OpenPhil (who give away Dustin Moskovitz's and Cari Tuna's wealth) rather than the FTX Future Fund (who give away SBF's wealth, among others).

I think it's important that for the most part, this is money that not-crazy-rich Americans could have spent on themselves, but chose to donate to this campaign instead.

[This comment is no longer endorsed by its author]

Why Helping the Flynn Campaign is especially useful right now

If you're wondering who you might know in Oregon, you can search your Facebook friends by location:

Search for Oregon (or Salem) in the normal FB search bar, then go to the People tab. You can also filter to "Friends of Friends".

I assume that will miss a few people, so it's also worth actively thinking through your network, but this is a good low-effort first step.

Edit: Actually, they need to live in District 6. The biggest city in that district is Salem, as far as I can tell. Here's a map.

Why those who care about catastrophic and existential risk should care about autonomous weapons

Thanks for writing this!

I believe there's a small typo here:

> The expected deaths are N + P_nM in the human-combatant case and P_yM in the autonomous-combatant case, with a difference in fatalities of (P_y − P_n)(M − N). Given how much larger M (~1-7 Bn) is than N (tens of thousands at most), it only takes a small difference (P_y − P_n) for this to be a very poor exchange.

Shouldn't the difference be (P_y − P_n)M − N?
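Subtracting the human-combatant expression from the autonomous-combatant expression directly gives:

```latex
% Difference in expected fatalities:
% (autonomous-combatant case) minus (human-combatant case)
P_y M - \left( N + P_n M \right) = \left( P_y - P_n \right) M - N
```

which differs from the (P_y − P_n)(M − N) stated in the post, though the qualitative conclusion is unchanged since M dominates N.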

New forum feature: Map of Community Members

This is *so* cool, thanks! Might be nice to have a feature where people can add a second location. E.g. I used to study in Munich, but spend ~2 months per year in Luxembourg. Many friends stayed much longer in Luxembourg. According to the EA survey, there are Luxembourgish EAs other than me, but I have so far failed to find them --- I'd expect many of them to be in a similar situation.

Decomposing Biological Risks: Harm, Potential, and Strategies

I thought this was a great article raising a bunch of points which I hadn't previously come across, thanks for writing it!

Regarding the risk from non-state actors with extensive resources, one key question is how competent we expect such groups to be. Gwern suggests that terrorists are currently not very effective at killing people or inducing terror: with the resources they have, it should be possible to cause far more damage than they actually do. This suggests both that terrorist groups may not pursue bioterrorism even if it were the best way to achieve their goals, and that they may not be able to execute well on such a difficult task.

This has somewhat lowered my concern about bioterrorist attacks, especially considering that successfully causing a global pandemic worse than natural ones is not easy. (Lowered my concern in relative terms, that is; I still think this risk is unacceptably high and prevention measures should be taken. I don't want to rely on terrorists being incompetent.)

Hence, without having thought about it too much, I think I might rate the risks from non-state actors somewhat lower than you do (though I'm not sure, especially since you don't give numerical estimates, which is totally reasonable). For instance, I'm not sure we should expect the risk of GCBRs caused by non-state actors to be higher than the risk of GCBRs caused by state actors (as you suggest).

Effectiveness is a Conjunction of Multipliers

Fair, that makes sense! I agree that if it's purely about solving a research problem with long timelines, then linear or decreasing returns seem very reasonable.

I would just note that speed-sensitive considerations, in the broad sense you use the term, will be relevant to many (most?) people's careers, including researchers' to some extent (reputation helps with research: more funding, better opportunities for collaboration, etc.). But I definitely agree there are exceptions, and well-established AI safety researchers with long timelines may be in that class.
