Linch

"To see the world as it is, rather than as I wish it to be."

I work for the EA research nonprofit Rethink Priorities. Despite my official title, I don't really think of the stuff I do as "research." In particular, when I think of the word "research", I think of people who are expanding the frontiers of the world's knowledge, whereas often I'm more interested in expanding the frontiers of my knowledge, and/or disseminating it to the relevant parties.

I'm also really interested in forecasting.

People may or may not also be interested in my comments on Metaculus and Twitter:

Metaculus: https://pandemic.metaculus.com/accounts/profile/112057/

Twitter: https://twitter.com/LinchZhang


Comments

Philosophy PhD Application: Advice on Written Submission

Another question, of course, is whether philosophy PhD programs are the best way to go if OP is more interested in researching robust decision making than other questions in philosophy. I don't know the field well, but David Manheim's dissertation, for example, seems pretty related.

EA needs consultancies

"Y" is a strictly stronger claim than "If X, then Y", but many people get more emotional with "If X, then Y."

Consider "Most people around 2000 years ago had a lot of superstitions and usually believed wrong things" vs "Before Jesus Christ, people had a lot of superstitions and usually believed wrong things."

In hindsight I wish I'd given your wording, not mine, but oh well.

Oh what an interesting coincidence.

EA needs consultancies

I tried answering your question on the object level a few times, but I noticed myself either trying to be conciliatory or defensive, and I don't think I would endorse either response upon reflection.

The motivated reasoning critique of effective altruism

Hi. I'm glad you appear to have gained a lot from my quick reply, but for what it's worth I did not intend my reply as an admonishment.

I think the core of what I read as your comment is probably still valid. Namely, that if I misidentified problems as biases when almost all of the failures are due to either a) noise/error or b) incompetence unrelated to decision quality (e.g. mental health, insufficient technical skills, not being hardworking enough), then the bias identification isn't true or useful. Likewise, debiasing is somewhere between neutral and worse than useless if the problem was never bias to begin with.

The motivated reasoning critique of effective altruism

I'm suspicious of 1), especially if taken too far, because I think it would then justify way too much complacency in worlds where foreseeable moral catastrophes are not only possible but probable.

The motivated reasoning critique of effective altruism

Some quick thoughts: I would guess that Open Phil is better at this than other EA orgs, both because of individually more competent people and much better institutional incentives (egos not being wedded to specific projects working out). For your specific example, I'm (as you know) new to AI governance, but I would naively guess that most (including competence-weighted) people in AI governance are more positive about AI interventions than you are.

Happy to be corrected empirically. 

(I also agree with Larks that publishing a subset of these may be good for improving the public conversation/training in EA, but I understand if this is too costly and/or if the internal analyses embed too much sensitive information or models.)

The Importance-Avoidance Effect

You might also like Aaron Swartz's notes on productivity:

Assigned problems

Assigned problems are problems you're told to work on. Numerous psychology experiments have found that when you try to "incentivize" people to do something, they're less likely to do it and do a worse job. External incentives, like rewards and punishments, kill what psychologists call your "intrinsic motivation" — your natural interest in the problem. (This is one of the most thoroughly replicated findings of social psychology — over 70 studies have found that rewards undermine interest in the task.) People's heads seem to have a deep avoidance of being told what to do.

[LZ Sidenote: I think I'd want to actually read the studies or at least a meta-analysis of recent replications first before being sure of this]

The weird thing is that this phenomenon isn't just limited to other people — it even happens when you try to tell yourself what to do! If you say to yourself, "I should really work on X, that's the most important thing to do right now," then all of a sudden X becomes the toughest thing in the world to make yourself work on. But as soon as Y becomes the most important thing, the exact same X becomes much easier.

Create a false assignment

This presents a rather obvious solution: if you want to work on X, tell yourself to do Y. Unfortunately, it's sort of difficult to trick yourself intentionally, because you know you're doing it. So you've got to be sneaky about it.

One way is to get someone else to assign something to you. The most famous instance of this is grad students who are required to write a dissertation, a monumentally difficult task that they need to do to graduate. And so, to avoid doing this, grad students end up doing all sorts of other hard stuff.

The task has to both seem important (you have to do this to graduate!) and big (hundreds of pages of your best work!) but not actually be so important that putting it off is going to be a disaster.


EA needs consultancies

Hmm, did you read the asterisk in the quoted comment?

*The natural Gricean implicature of that claim is that I'm saying that EA orgs are an exception. I want to disavow that implication. For context, I think this is plausibly the second or third biggest limitation for my own work.

(No worries if you haven't; I'm maybe too long-winded, and it's probably unreasonable to expect people to carefully read everything on a forum post with 76 comments!)

If you've read it and still believe that I "sound breathtakingly arrogant", I'd be interested in whether you can clarify what you mean by that: a) that what I say is untrue, or b) that what I say is true but insufficiently diplomatic.

More broadly, I mostly endorse the current level of care, effort, and caveats I put into my forum writing (though I want to be more concise; I'm working on it!).

I can certainly make my writing more anodyne and less likely to provoke offense, e.g. by writing defensively and pre-empting every objection I can think of, by sprinkling each article heavily with caveats throughout, by spending 3x as much time on each sentence, or just by having much less public output (the last of which is empirically what most EAs tend to do).

I suspect this would make my public writing worse, however.

The motivated reasoning critique of effective altruism

Thanks a lot! Is there a writeup of this somewhere? I tend to be a pretty big fan of explicit rationality (at least compared to the EAs and rationalists I know), so evidence that reasoning in this general direction is empirically kind of useless would be really useful to me!

The motivated reasoning critique of effective altruism

Yeah, I'm surprised by this as well. Both classical utilitarianism (in the extreme version, "everything that is not morally obligatory is forbidden") and longtermism just seem to have many fewer degrees of freedom than other commonly espoused ethical systems, so it would naively be surprising if these worldviews could justify a broader range of actions than close alternatives.
