All of david_reinstein's Comments + Replies

How many people have heard of effective altruism?

That is Jamie Elsey's magic, and I anticipate 'more where this came from' coming soon. :)

Most students who would agree with EA ideas haven't heard of EA yet (results of a large-scale survey)

Very detailed and interesting, thanks.

One clarification/suggestion: Where you give sort of 'null results', e.g.,

There was no correlation with SAT scores. And notably, there was no correlation with studying economics, which suggests that these effects may not necessarily be driven by having learned expected value theory.

Can you do more to illustrate the confidence/credible intervals on these (above, the correlation coefficient), so we can get a sense of 'how tightly bounded the result is' and how much confidence we can have that 'any difference is likely to... (read more)
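(To illustrate the kind of reporting I mean, here is a minimal sketch. The Pearson/Fisher-z approach and the n and r values are my own assumptions for illustration, not the survey's actual figures.)

```python
import numpy as np
from scipy.stats import norm

def pearson_r_ci(r, n, conf=0.95):
    """Approximate CI for a Pearson correlation via Fisher's z-transform.

    arctanh(r) is roughly normal with SE = 1/sqrt(n - 3), so we build the
    interval in z-space and map back with tanh.
    """
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    crit = norm.ppf(0.5 + conf / 2)
    return float(np.tanh(z - crit * se)), float(np.tanh(z + crit * se))

# Made-up numbers: a 'null' r = 0.02 with n = 500 is fairly tightly bounded...
print(pearson_r_ci(0.02, 500))  # ~(-0.068, 0.108)
# ...but the same r with n = 50 is still consistent with sizeable effects:
print(pearson_r_ci(0.02, 50))   # ~(-0.26, 0.30)
```

With n = 500 the interval is tight enough to call it a bounded null; with n = 50 the same point estimate is consistent with sizeable effects. That is exactly the distinction I'd like the write-up to make explicit.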

Tentative Reasons You Might Be Underrating Having Kids

I think having kids is widely seen as changing your perspective on what’s important, maybe toward a narrower moral circle (towards your kids and away from others). This could be a cost to consider.

https://twitter.com/made_in_cosmos/status/1381511741520089089?s=21&t=oT7Vy0k53ElFXMyK67BPyg for example

david_reinstein's Shortform

Are you engaging in motivated reasoning ... or committing other reasoning fallacies?

I propose the following good epistemic check using Elicit.org's "reason from one claim to another" tool:

Whenever you have a theory of the form "A → B":

Feed this tool your theory, negating one side or the other[1] (i.e., "A → not-B" and/or "not-A → B"), and see if any of the arguments it presents seem equally plausible to your arguments for "A → B".

If so, believe your arguments and conclusion less.

Caveat: the tool is not working great yet, and often requires a few rounds of... (read more)

It's not obvious to me that according to the EA framework, AI Safety is helpful

I had a similar question. Well stated. One answer is the various arguments that “sentient valenced AGIs won’t maximise their own happiness”, as noted by other commenters.

But I don’t think that is satisfying, because most of the arguments (AFAIK) and appeals against AI risk don’t even mention this. So I think the appeal seems to take on board our feeling that “even if AIs take over and make themselves super happy with all the paper clips, that still feels bad”.

Try to sell me on [ EA idea ] if I'm [ person with a viewpoint ]

More from Elicit.org's "reason from one claim to another"

I am responsible for my family and local community ~ I should donate to the global poor

I am responsible for my family and local community ➟ I feel some moral obligation to them. ➟ I feel equally compelled to care about others more globally. ➟ Therefore, I should donate to the global poor.

I am responsible for my family and local community ➟ It is an innate moral obligation to care for loved ones ➟ The global poor are as deserving of our assistance as loved ones.

I am responsible for my family and local... (read more)

Try to sell me on [ EA idea ] if I'm [ person with a viewpoint ]

From Elicit.org's "reason from one claim to another":
 


I’m a vegan abolitionist ➟ I want animals to have equal or greater moral consideration than humans do. ➟ At present, large food companies would be unlikely to favour the abolition of farming over the reduction of suffering, so we should work with them. ➟ This will enable us to simultaneously cause greater improvements in animal welfare and reduce future farming intensity.
 

Giving What We Can - Pledge page trial (EA Market Testing)

GWWC is already quite well-known and referenced as 'the place you go to donate 10% of your income'. So if a lot of people are coming onto your page with that goal in mind, then it would make sense that the layouts that centre that option and make it as frictionless as possible will do better.

Thanks, that makes sense to me in a general sense.

I was thinking in this direction too, but having a hard time putting into words 'why seeing options other than the expected one would make me less likely to follow through'.

Can you dive a little deeper into what the act... (read more)

7Rob Mitchell6d
I think the key is that 'following through' can mean several things that are similar from the perspective of GWWC but quite different from the perspective of the person pledging.

In my case I'd already been giving >10% for quite a while but thought it might be nice to formalise it. If I hadn't filled in the pledge it wouldn't have made any difference to my giving. So the value of the pledge to me was relatively low. If the website had been confusing or offputting I might have given up.

There are others who will already have decided to give 10% but haven't yet started. The pledge then would have a bit more value if there's a chance it could prevent backsliding, but assuming the person had fully committed to giving at this level already, the GWWC pledge wouldn't be crucial to the potential pledger.

Finally, there are people who for whatever reason come across the website without yet having decided to give 10% (or even 1%) and make a decision to sign up when they're there. This is where the more standard marketing theory comes into play.

For the first two groups, the non-conversion is something like 'I can't even see what I'm meant to be signing up for. Never mind, it's not going to affect how I'll actually give anyway.' Friction in this case is anything that makes it harder to identify what the 10% pledge is and how to sign up to it. I spent a couple of seconds looking between the three options but it was ultimately pretty easy to work out which one was the one I wanted. This would be even easier if it was the one main option.

For the third, it could well be 'There's too much choice, maybe I don't want to do it.' At any rate, it will be much different from people who had already committed to giving 10%. The 'loss' to GWWC for all three looks the same but there's only a substantial loss to the wider world with the third group.

I know people not always remembering what's in their minds can be an issue but I doubt it would be a problem on something like 'did you in
EA Housing Slack

That makes sense, I think. But if it proves too much to maintain a whole Slack, maybe a channel in another Slack with threads for each city would be an intermediate option.

EA Housing Slack

Would this not work better as a channel on another Slack group like “global ea discussion”?

4ThomasWoodside9d
The idea is for there to be a channel for each location in the slack (e.g. Oxford, Berkeley, etc.). I think that would be unwieldy as part of another slack.
david_reinstein's Shortform

Modest proposal on a donation mechanism for people doing direct work? 

Preamble


Employees at EA orgs and people doing direct work are often also donors/pledgers to other causes. But charitable donations are not always exactly 1-1 deductible from income taxes. E.g., in the USA they are only deductible if you forgo the standard deduction and ‘itemize your deductions’, and in many EU countries there is very limited tax deductibility.

So, if you are paid $1 more by your employer/funder and donate it to the Humane League, Malaria Consortium, et... (read more)
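To make the 'not exactly 1-1 deductible' point concrete, a minimal sketch (the $1,000 donation and the 24% marginal rate are made-up illustrative figures, and this ignores real-world complications like partially itemizing):

```python
def net_cost_of_donation(donation, marginal_rate, itemizing):
    """Effective out-of-pocket cost of a charitable donation (stylized US case).

    If you itemize, each donated dollar reduces taxable income, so the
    donation costs you (1 - marginal_rate) per dollar. If you take the
    standard deduction instead, the donation yields no tax saving at all
    (the simplest case; real rules vary by country and year).
    """
    tax_saving = donation * marginal_rate if itemizing else 0.0
    return donation - tax_saving

# Illustrative figures only (not from the post):
print(net_cost_of_donation(1_000, 0.24, itemizing=True))   # 760.0
print(net_cost_of_donation(1_000, 0.24, itemizing=False))  # 1000.0
```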

2Kat Woods13d
Good idea! I messaged them
If you had an hour with a political leader, what would you focus on?

It’s hard to say without any context. And the hardest thing may be starting the conversation without seeming overbearing.

To get the conversation going, I might consider suggesting something involving pandemic-preparedness funding, chicken or pig welfare standards, or eyestalk ablation in shrimp.

Animal Welfare: UK Government roles recruiting

Nice ... Boosting https://www.impactfulgovcareers.org/ for people who are interested in the UK civil service more generally.

(Fixed link)

Has anyone actually talked to conservatives* about EA?

For my current purposes, a practical (but obviously hand-waving) characterization would be "someone who often votes for Republicans and would strongly consider doing so in the future". I added the word Republicans in the post to help clarify.

Has anyone actually talked to conservatives* about EA?

Good point, although I sort of have this suspicion that there are few conservatives here?

How could I rephrase it?

4david_reinstein18d
I added an asterisk that should help a bit I hope
david_reinstein's Shortform

"Room for more funding": A critique/explanation

Status: WIP, rough, needs consolidating

David Reinstein: I have argued against this idea of 'room for more funding' as a binary thing. I generally imagine that in these areas there is always room for more funding, at least over a horizon of a year or more. (A toy sketch of this follows the list below.)

It's just a combination of

  • diminishing returns, perhaps past a threshold of 'these interventions are better than alternatives'
  • limited capacity because of short-run constraints that take some time to adjust (hire more staff, negotiate more vaccine access, assess new
... (read more)
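A toy sketch of that continuous picture (the exponential form, e0, and the decay rate are all made-up assumptions, purely illustrative): 'room for more funding' becomes the amount a charity can absorb before its marginal cost-effectiveness falls below the next-best alternative, rather than a yes/no flag.

```python
import math

def marginal_effectiveness(funding_m, e0=10.0, decay=0.05):
    """Illustrative value per marginal dollar after funding_m ($M) is absorbed.

    The exponential form and the parameters are made up; the point is only
    that returns diminish smoothly rather than hitting a hard wall.
    """
    return e0 * math.exp(-decay * funding_m)

def room_for_more_funding(bar, e0=10.0, decay=0.05):
    """Funding ($M) absorbable before marginal value falls below the
    next-best alternative ('bar'): solve e0 * exp(-decay * f) = bar."""
    return math.log(e0 / bar) / decay

# With these made-up numbers, ~13.9 ($M) can be absorbed before an
# alternative at half the initial effectiveness becomes the better buy:
print(room_for_more_funding(bar=5.0))
```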
The Fair Trade Scandal - book review and summary

I wrote THIS paper (with Joon) on one possible way in which the fair trade idea could be net beneficial, in theory, for “rational altruists”.

I tried to supplement it with some evidence on whether the premium paid by consumers was less than, equal to, or more than the premium that the farmers were getting per unit. I never published the empirical part because the analysis was very difficult: it was hard to compare like-for-like in terms of the quality of fair-trade and non-fair-trade products.

david_reinstein's Shortform

EA aligned version of "Oxfam stores" ... in the USA+

I used to buy a lot and give away a lot of stuff at Oxfam stores in the UK. I don’t agree with all of the approaches and campaigns but I think that they do a great deal of good. I think that before their prostitution scandal broke, the stores were earning about £20 million per year.

Do we have anything like that in the US? We have Goodwill and the Salvation Army but those are doing domestic charity only and thus an order of magnitude less effective, I suspect.

This made me think: would there be any value... (read more)

Doing good easier: how to have passive impact

Yeah I think I was channeling Peter’s post here

Doing good easier: how to have passive impact

Possible caveat.

If the 'passive impact' is of the 'convince someone else to do it' form, obviously we need some people willing to actually do the active things.

I think we don't want too much of a culture of

  • 'Person A, who convinces other people to do X, gets the credit', and

  • 'Person B, who actually does X, gets less credit'.

This would make it hard to motivate people to be the Person B actual do-er of the thing X.

Another possible caveat is that there is some deadweight loss in the time spent convincing another person to do X... (read more)

Definitely! It's a specific instance of a potential meta-trap (another piece here about the idea).

The big questions are:

1. What ratio of meta to direct work should there be in the community?

2. How do we allocate credit?

Which is much beyond the scope of this post, but very important to discuss!

What are the best journals to publish AI governance papers in?

Note: the Unjournal is trying to build a sort of best-of-both-worlds approach, gaining feedback and prestige/credibility without the inefficiencies of traditional journals. I hope that a top Unjournal rating will end up having more cred in some circles than a publication in (e.g.) Futures.

I think some AI governance work will be particularly relevant to the Unjournal’s scope. So consider submitting it to the Unjournal (as we get it going) while perhaps also submitting it to one of the journals you mention.

Unjournal will assess and link work; it does not “pub... (read more)

Big List of Cause Candidates: January 2021–March 2022 update

Is this available in a data format somewhere (spreadsheet, Airtable, database, etc.)?

It could be helpful to connect with the research database I’m trying to build for the Unjournal project.

3Leo22d
There is a rough draft. I’ll try to update it and let you know.
Paper summary: The case for strong longtermism (Hilary Greaves and William MacAskill)

Valuing “saving” lives that already exist/are likely to exist, versus creating (or making it possible for others to create) more lives?

Perhaps that’s the main distinction in the deep assumptions/values.

1Michael_Wiebe21d
Although, they argue that longtermism goes through even if you accept person-affecting views:
What “pivotal” and useful research ... would you like to see assessed? (Bounty for suggestions)

I agree on most of your counts.

Regardless, I think that there would be a lot of value in these sorts of reports getting peer-reviewed by academics/experts, especially where they are influential in the EA community.

I agree, but I don't think this is what the Unjournal should handle right now. It should be done, but maybe with a different vehicle and approach.

I'd prefer maybe 10x as much research, at .1x the quality.

I tend to disagree with this. My concern is that most/much research is often 'vomited out' to satisfy tenure requirements and other need... (read more)

4PeterSlattery12d
Thanks for replying. When I say I'd prefer maybe 10x as much research, at .1x the quality, I don't want to miss out on quality overall. Instead, I'd like more small-scale, incremental, and iterative research, where the rigour and the length increase in proportion to the expected ROI. For instance, this could involve a range of small studies that increase in quality as they show evidence, followed by a rigorous review and replication process.

I also think that the reason for a lot of the current research vomit is that we don't let people publish short and simple articles. I think that if you took most articles and pulled out their method, results and conclusion, you would give the reader about 95% of the value of the article in maybe 1/10th the space/words of the full article. If a researcher just had to write these sections and a wrapper rather than plan and coordinate a whole document, they might produce and disseminate their insights in 2-5% of the time that it currently takes.
What “pivotal” and useful research ... would you like to see assessed? (Bounty for suggestions)

Thank you. This is a useful list. Some of these directly link academic work / work that claims to find rigorous empirical results. In other cases I will have to dig into these to find 'what is the paper being cited, if any', which I will try to do.

3brb24325d
Thanks! I also added some more links. Some are issues of omission, analysis, or interpretation so may be especially challenging to spot and rationalize.
Confused about funding shortages and earning to give

Quick response: that is not what it means. 80k’s priorities have shifted towards caring about the long-term survival and flourishing of humanity. They are particularly concerned with risk from artificial intelligence that may not be well aligned with sentient life. In those areas, some argue that it is very unclear what is worth funding, and the priority has been to try to get people to work in particular career and research areas.

However, the GiveWell global health and well-being charities, as well as other charities in this area (like fistula surgery etc... (read more)

1eric6981mo
Yes, this is enlightening, thanks. The 80k article wasn't clear that they were talking about longtermism to the exclusion of more old-school EA priorities, and you make this clear.
What are the key claims of EA?

I sort of disagree with us

'Agreeing on a set of Facts'.

It seems somewhat at odds with the truth-seeking part. I would say "it is bad for our epistemic norms" ... but I'm not sure I use that terminology correctly.

Aside from that, I think some of the empirics you mentioned probably have a bit less consensus in EA than you suggest... such as

We live in an “unusual” time in history

My impression was that even among longtermists the 'hinge of history' thing is greatly contested

Most humans in the world have net positive lives

Maybe now they do, but ... (read more)

What are the key claims of EA?

I think you get a lot right, but some of these claims, especially the empirical ones, seem to apply only to certain (perhaps longtermist) segments.

I'd agree on/focus on

  1. Altruism: willingness to substantially give (money, time) from one's own resources, and the goodness of this (but not necessarily an 'obligation')

  2. Utilitarianism/consequentialism

(Corollary): The importance of maximization and prioritization in making choices about doing good.

  3. A wide moral circle

  4. Truth-seeking and reasoning transparency

I think these four things are fairly... (read more)

2david_reinstein1mo
I sort of disagree with us 'Agreeing on a set of Facts'. It seems somewhat at odds with the truth-seeking part. I would say "it is bad for our epistemic norms" ... but I'm not sure I use that terminology correctly.

Aside from that, I think some of the empirics you mentioned probably have a bit less consensus in EA than you suggest... such as:

"We live in an “unusual” time in history": My impression was that even among longtermists the 'hinge of history' thing is greatly contested.

"Most humans in the world have net positive lives": Maybe now they do, but in future, I don't think we can have great confidence. Also, the 'most' does a lot of work here. It seems plausible to me that at least 1 billion people in this world have net negative lives.

Most EAs (and most humans?) surely believe at least some animals are sentient. But non-biological, I'm not sure how widespread this belief is. At least I don't think there is any consensus that we 'know of non-bios who are currently sentient', nor do we have consensus that 'there is a way to know what direction the valence of the non-bios goes [https://forum.effectivealtruism.org/posts/fFDM9RNckMC6ndtYZ/shortform?commentId=dKwKuzJuZQfEAtDxP]'.

I'm not sure that's been fully taken on board. In what ways? Are we prioritizing 'create the maximum number of super-happy algorithms'? (Maybe I'm missing something though; this is a legit question.)
What are the key claims of EA?

When considering the relevant lives, this includes all humans, animals and future people. We generally do not discount the lives of future people intrinsically at all. This longtermist claim is common but not absolute in EA, and I’m brushing over multiple population ethics questions here (e.g., several EAs might hold person-affecting views).

I don't think this is a longtermist claim, nor does it preclude person-affecting views.

You can still value future people as much as present people, and not discount them at all insofar as they are sure to exist. If th... (read more)

What “pivotal” and useful research ... would you like to see assessed? (Bounty for suggestions)

I think that inviting submissions from research in preprints or written up in EA forum posts is a good idea.

Definitely the former, but which ones? As to EA Forum posts, I guess they mainly (with some exceptions) don't have the sort of rigor that would permit the kind of review we want... And that would help Unjournal evaluations become a replacement for academic journal publications?

A submission would simply be giving you permission to publish/host the original document and reviews in the Unjournal. Post review, authors could have the option to provi

... (read more)
4PeterSlattery24d
See:

> Definitely the former, but which ones?

PS: Yeah, the only easy options I can suggest now are to consider some of the items in the BS newsletter.

> As to EA forum posts, I guess they mainly (with some exceptions) don't have the sort of rigor that would permit the kind of review we want... And that would help Unjournal evaluations become a replacement for academic journal publications?

PS: This is probably a bigger discussion, but this makes me realise that one difference between us is that I probably want the Unjournal (and social science in general) to accept a lower level of rigor than most journals (perhaps somewhere between a very detailed forum/blog post and a short journal or conference article). One reason is that I personally think that most social science journal articles sacrifice too much speed for better quality, given heterogeneity etc. I'd prefer maybe 10x as much research, at .1x the quality. To be clear, I am keen on keeping the key parts (e.g., a good method and explanation of theory and findings), but not having so much of the fluff (e.g., summarising much prior or future potential research etc.).

A second reason is that I expect a lot more submissions near the level of conference work or a detailed forum post than at journal level. There are probably 100x more forum posts and reports produced than journal articles. Additionally, there is a lot of competition for journal-level submissions. If you expect an article to get accepted at a journal then you will probably submit it to one. On the other hand, if you wrote up a report pretty close to journal level in some regards and have nowhere to put it, or no patience with the demands of a journal, or uncertainty, then the Unjournal is relatively attractive given the lack of alternatives.

> Actually, I don't propose to host or publish anything. Just linking a page with a DOI ... and review it based on this, no?

PS: Yeah, sounds good. I think I should go through this carefully, for s
Release of Existential Risk Research database

Thanks. Your answers are very helpful! My skim also suggested that there was a lot that would be hard for academic economists and other people in the general area to evaluate (but some of it does seem workable, and I’m adding it to my list).

One of the challenges was that a lot of the work makes or considers multiple claims, and seems to give semi-rigorous, common-sense, anecdotal, and case-study-based evidence for these claims. Other work involves areas of expertise that we are not targeting, some “hard” expertise in the natural and physical sciences or leg... (read more)

Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"?

Got it. I’m not sure that this “common definition of longtermism” would or should be widely accepted by longtermists, upon reflection. As you suggest it is a claim about an in-principle measurable outcome (‘value … mostly depends … VMDLT’). It is not a core belief or value.

The truth value of VMDLT depends on a combination of empirical things (e.g., the potential to affect the long-term future, the likely positive nature of the future, …) and moral-value things (especially total utilitarianism).[1]

What I find slightly strange about this definition of longtermism in an... (read more)

3Michael_Wiebe1mo
Yes, I agree. I think longtermism is a step backwards from the original EA framework of importance/tractability/crowdedness, where we allocate resources to the interventions with the highest expected value. If those happen to be aimed at future generations, great. But we're going to have a portfolio of interventions, and the 'best' intervention (which optimally receives the marginal funding dollar) will change as increased funding decreases marginal returns.
What are examples of EA work being reviewed by non-EA researchers?

I'd love a follow-up on this. Particularly interested in how this might offer lessons for Unjournal.

(The link above is dead).

Improve/promote a post in situ.

I was thinking more about posts that continue to build and evolve, but this is another good use case. You might think 'short form that upgrades to a regular post' could do this, but we don't have that functionality. ... (and also short forms don't really get enough exposure)

Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"?

My statement above (not a 'definition', right?) is that

If you are not a total utilitarian, you don't value "creating more lives" ... at least not without some diminishing returns in your value. ... perhaps you value reducing suffering or increasing happiness for people, now and in future, that will definitely or very likely exist...

then it is not clear that "[A] reducing extinction risk is better than anything else we can do" ...

because there's also a strong case that, if the world is getting better, then helping people and animals right now is the most c... (read more)

1Michael_Wiebe1mo
I'm referring to this common definition of longtermism: 'the value of your action depends mostly on its effect on the long-term future'.
What makes a statement a normative statement?

Just remembering that I have seen it; maybe it was in common parlance but not in social science.

EA "CB Radio" anytime chat?

Okay, it worked and it was pretty fun. We discussed apple butter and other stuff.

Will have to look into the terms and conditions, but it would be great if we could find someone (Nonlinear??) who would keep this, or something like this, open 24/7.

Or maybe it’s just a permanently open Zoom call?

EA "CB Radio" anytime chat?

Ok testing it out now. Someone join so we can see how it works?

https://twitter.com/i/spaces/1jMJgeqbYXbKL

EA "CB Radio" anytime chat?

But that is not the same thing as a “space”. This seems like something that categorises Twitter posts, but it’s not a place that permits audio discussions.

To do that I guess we will have to start a space. Maybe I will do it?


Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"?

I see 'extinction' as doing a few things people might value, with different ethics and beliefs:

  1. Killing the current generation and maybe causing them to suffer/lose something. All ethics probably see this as bad.

  2. Preventing the creation of more lives, possibly many more. So, preventing extinction is 'creating more lives'.

Happy lives? We can't be sure, but maybe the issue of happiness vs suffering should be put in a different discussion?

Assuming the lives not-extincted ergo created are happy, the total utilitarian would value this part, and that's wh... (read more)

How should people spend money to be more productive?

Software (mostly free/cheap)

  • typeitforme text expansion
  • tes clipboard manager
  • Airtable
  • Rectangle for organizing your screens
  • vifm (if you use the terminal, great for managing files)

Physical/hardware:

  • point lighting that illuminates the particular thing you are reading
  • ereader like Boox
  • treadmill desk/standing desk with treadmill underneath ... wobble board can also be good
How should people spend money to be more productive?

Partner suggests:

  • Kitchen timer and Pomodoro technique.
  • Miro mindmapping and whiteboard app for collaborating.
  • Large format paper and colored markers for analog todo lists and long term calendars
How should people spend money to be more productive?

Yeah, we need to make this a wiki entry or Airtable or something so it doesn’t keep popping up without being synthesised.

3Ben Williamson1mo
This is the idea of what I'm writing up! I think I could have made this clearer in the short description above, but I'm trying to synthesise all the recommendations that currently exist into a database (see rough Google sheet here: https://docs.google.com/spreadsheets/d/1InTlwLwAKprqFeD65oF0XXz64lZTHuhBxzHDt8fqzNM/edit?usp=sharing). It seemed worth giving people a new opportunity to share any new/further recommendations, but the idea is to include everything suggested in all past posts on the same topic on the EA Forum, LessWrong, etc.
How should people spend money to be more productive?

“Big monitors” seems like a big win.

But it somewhat conflicts with being able to change locations, work outside or in your car, etc. I find these are also good ways to stay productive and avoid getting in a rut.

Not sure how to resolve it other than to say “have big monitors in at least one place”.

What “pivotal” and useful research ... would you like to see assessed? (Bounty for suggestions)

This seems very relevant; if you find the name/link, please do add it in the Airtable or just here in the comments. Thanks.

4Charles He1mo
Ok, I think I found my “source”: https://www.givewell.org/research/incubation-grants/Malaria-Consortium-monitoring-Ondo-July-2021

It seems valuable, but it doesn’t seem to be an RCT. I can’t immediately tell what it is, but it looks like collecting trend data without a control group. (To onlookers: I know that sounds frowned upon, but it’s a real thing, and probably judging the value of the design requires great domain knowledge.) So it looks a lot less pivotal than an “RCT on AMF”. So my original answer above might have been misleading.