Linch

"To see the world as it is, rather than as I wish it to be."

I work for the EA research nonprofit Rethink Priorities. Despite my official title, I don't really think of the stuff I do as "research." In particular, when I think of the word "research", I think of people who are expanding the frontiers of the world's knowledge, whereas often I'm more interested in expanding the frontiers of my knowledge, and/or disseminating it to the relevant parties.

I'm also really interested in forecasting.

People may or may not also be interested in my comments on Metaculus and Twitter:

Metaculus: https://pandemic.metaculus.com/accounts/profile/112057/

Twitter: https://twitter.com/LinchZhang

Clarification on commenting norms: https://forum.effectivealtruism.org/posts/myp9Y9qJnpEEWhJF9/shortform?commentId=RbbHzo99rKewBXj24


Comments

Linch's Shortform

Okay now I'm back to being confused.

How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?

a) I don't think "very high certainty" interventions exist for x-risk, no. But I think there exist interventions where people can produce relatively robust estimates if given enough time, in the sense that further armchair thinking and near-term empirical feedback are unlikely to shift the numbers by more than, say, 0.5 orders of magnitude.

And when that happens, the uncertainty in the moral debate of "how much funding per unit x-risk reduction is moral?" would get overshadowed by the uncertainty in the more practical debate of "how much x-risk reduction does this intervention provide?"

I think you're misunderstanding this question. I am not asking how much funding per unit of x-risk reduction is moral in the abstract; I'm asking to get a sense of what the current margin of funding looks like, as a way to help researchers and others prioritize our efforts.

Now, in theory, with perfect probabilistic calibration, assessment, and coordination, EA should just fund the marginally most cost-effective thing until we are out of money. But in practice we just have a lot of uncertainty, etc. Researchers often have a sense (not necessarily a very good one!) of how cost-effective a few of the projects they are investigating are, and maybe of a larger number of other projects, but may not have a deep sense of the margin at which funders are sufficiently excited to fund something (I know I at least didn't have a good idea before working through this question! And I'm still somewhat confused).

If we have a sense of what the margin/price point looks like (or even rough order-of-magnitude estimates), then it's easier to be actively excited to do research or incubate new projects well below that price point, to deprioritize research well above that price point, and to work hard on figuring out more accurate pricing for projects around that price point.
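(Purely illustrative arithmetic with made-up numbers, not an estimate of the actual margin: if the bar were roughly $100M per 0.01 percentage points of existential risk reduced, then a project costing $1M would need to reduce x-risk by at least ~0.0001 percentage points to be competitive with the margin, and a project an order of magnitude less cost-effective than that would be a natural candidate to deprioritize.)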

Some thoughts on the effectiveness of the Fraunhofer Society

That said, this is a supposedly large basic research group that I've never heard of before, which I feel is a bit of evidence against them actually being really impressive?

Their "notable projects" section also feels a bit underwhelming to me, given that they have a comparable research budget to a large American research university.

Some thoughts on the effectiveness of the Fraunhofer Society

Ah yeah, you're right, I was probably being overly credulous.

Linch's Shortform

Oh wow, thanks, that's a really good point and cleared up my confusion!! I never thought about it that way before.

Effective Altruism is a Question (not an ideology)

This article affected me a lot when I first read it (in 2015 or so), and is/was a nontrivial part of what I considered "effective altruism" to mean. Skimming it again, I think it might be a little oversimplified, and it makes a rhetorical move that I don't love: conflating "what the world is like" with "what I want the world to be like."

Still, I think this article was strong at the time, and I think it is still strong now. 

Some thoughts on the effectiveness of the Fraunhofer Society

Like NunoSempere, I appreciate the brutal honesty. It's good and refreshing to see someone recognize the lies in something that a) their society views as high-status and good, and b) they personally have a vested interest in believing is really good.

I think this is an important virtue in EA, and we should applaud it in most situations where we see it.  

Frank Feedback Given To Very Junior Researchers

I think I have a fairly different attitude towards feedback compared to you and some of the other commenters. My general view is that, subject to time constraints, giving and receiving lots of feedback is both individually and institutionally healthier, and that we should be more willing to give low-quality and low-certainty feedback when we're not sure (and disclaim that we're not sure) rather than leave things unsaid.

In general I think people aren't correctly modeling that constructive feedback is costly in both time and emotional energy, and that 1) adding more roadblocks that make it harder to deliver such feedback makes our community worse, and 2) what happens when you don't give negative feedback isn't that people end up slightly deluded but overall emotionally happier. People's emotions adjust, and now a lot of junior EAs basically act like they're walking on eggshells because they don't know whether what they're doing is perceived as bad/dumb, since nobody would tell them.

Frank Feedback Given To Very Junior Researchers

I think a lot of this is an empirical question of what's needed. My own view is that some people in the position I described will grow stronger and contribute more to the movement if they are willing to try difficult, ambitious things outside of the movement and come back when they and EA have both matured somewhat in slightly uncorrelated ways, rather than thinking of their impact as coming primarily through donations (which, for most people, may not look like trying their best to do a really good job at either starting something new or climbing career ladders, but more like being relatively mediocre).

It's an empirical question, however, and I'm open to the possibility that I'm wrong and that the long-term impact-maximizing thing for almost everybody who isn't doing direct EA work is usually donations or relatively untargeted external jobs.
