All of AlexRichard's Comments + Replies

[Likely not a crux]

EA often uses an Importance–Neglectedness–Tractability framework for cause prioritization. I would expect work producing progress to be somewhat less neglected than work on XR, since it is still somewhat possible to capture some of the benefits of progress.
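For reference, one common way to make that framework quantitative (roughly the factoring 80,000 Hours uses; the decomposition is a modeling convention, and nothing here hinges on it) is as a product of ratios that telescope into cost-effectiveness:

$$
\frac{\text{good done}}{\text{extra resources}}
= \underbrace{\frac{\text{good done}}{\%\ \text{of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\%\ \text{of problem solved}}{\%\ \text{increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\%\ \text{increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
$$

On this factoring, "less neglected" means the last ratio is smaller: an extra dollar buys a smaller proportional increase in the total resources already going to the problem.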

We do indeed see vast amounts of time and money being spent on research and development, in comparison to the amount being spent on XR concerns. Possibly you'd prefer to compare with PS itself, rather than with all R&D? (a) I'm not sure how justified that is; (b) it still feels ...

Datapoint: Before the 2016 election, the Koch brothers set a specific budget for their intended spending; this would allow for something potentially similar, with Democratic mega-donors pledging to donate less if the Koch brothers lowered their intended spending. I tried to contact the Koch brothers and a number of Democratic donors, but received no replies. Uptake outside EA may be difficult.

Inside EA, there's likely to be a big imbalance, with far more Democrats than Republicans. Do you have plans to recruit from outside EA, Republicans in particular?

We should not explicitly debate politics or endorse one side or another in an official-ish EA venue like this.

0
Peter Wildeford
7y
I wouldn't take a post by a single contributor (either Haydn or Henry) as an endorsement of politics.
13
xccf
7y

I'd be happy if this was made an official policy going forwards.

However, I don't see how this post differs meaningfully from Haydn's post. Both posts present evidence for various political beliefs without endorsing a candidate. It's hard to argue that either makes even an implicit endorsement, given that the election has finished.

I worry about a norm against debating politics which in practice means "liberal political positions are not up for debate". The EA movement sometimes feels this way, and it definitely decreases my enthusiasm for engaging with EA. It's also epistemically dangerous.

Oh hey, didn't see this at the time.

If EA becomes an explicitly political movement, people who disagree with it will not join. Non-political donations are distinct from politics in the sense that they do not need to be identified with one side or another. EA values might be associated with one side or another, but this is an official-seeming EA venue, not just a private-ish place for discussion.

I have some pretty strong concerns about making EA explicitly political, especially in public and official-ish EA venues like this one.

0
MichaelDello
8y
Happy to hear what they are, Alex. The final article had a title change, and it was made clear numerous times that it was a personal analysis, not necessarily representing the views of Effective Altruism. In fact, we worked off the premise of voting to maximise wellbeing, not to further EA. I posted it here and shared it with EAs because they are used to thinking about ways to maximise wellbeing, and I've never seen an analysis that looks at multiple parties and policies to try and select the 'best' party (many have agreed that this doesn't seem to have been done before). I figured the title including 'draft' would make it clear that this is by no means a final piece, but perhaps I could have been clearer. I think not making an attempt to select the best party at all is also problematic. Here is the final piece if you are interested, although the election is over now. http://www.michaeldello.com/?p=839

I feel like this is a basic confusion on my part, but wouldn't it be better to delay until the end of REG's fundraising period, i.e. when they are making spending decisions for the next year, and then top them off in explicit coordination with other EAs thinking about this? Like, RfMF should be an easily solved problem in cases with a friendly/communicative nonprofit and a donor base explicitly discussing RfMF with each other.

How much does or will Buck donate?

Buck has stated that he plans to donate ~$40,000 this year, although that might not be true anymore.

1
Evan_Gaensbauer
8y
I didn't know they were running a fundraiser, but yeah, waiting until it's over makes the most sense. Coordination itself is difficult, so the above was an experiment in trying to figure out what to do when coordination doesn't seem feasible over a given time period, which seems possible. Denis Drescher and I have been discussing attempts to coordinate donors, and we intend to post a discussion on the EA Forum soon where donors can register their upcoming donations for the remainder of 2015, across all EA meta-charities.
0
Evan_Gaensbauer
8y
Both Liron Shapira, as a private individual, and Quixey, as a company, have donated $15,000 to MIRI. That definitely constitutes "plausibly type-1". For a medium-sized company, I'd want to know more about what they intend to do re: donations. I'll look into it some more.
0
Evan_Gaensbauer
8y
How's that? When I watched the livestream of EAG 2015, the speakers invited any attendees in the audience to come up and address everyone with any important points they had. Topher Hallquist, an employee of Quixey, hopped on stage and told anyone qualified to apply to Quixey because the company was actively hiring. I figure Mr. Hallquist wouldn't have done this if Quixey wasn't a type-3 company. So, what's the mistake I'm making in reading the situation?

Perhaps MIRI should have multiple competitors, each with different staff, pursuing the same ultimate goals in their technical research, but otherwise running their organizations quite differently, to minimize the dependence on one organization to save the world.

AFAICT, this was target #5 of MIRI's summer fundraiser. As is, MIRI probably lacks the funding to do this.

FYI for everybody: you can browse articles by tag by going to effective-altruism.com/tag/tag-name, or by clicking on article navigation and then the tag or the arrows.
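For instance, a tag with the slug cause-prioritization would live at effective-altruism.com/tag/cause-prioritization (a hypothetical example; substitute whatever tag slug you're after).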

4
Jacy
9y
I think someone was also planning to organize links to the posts in a single article for easy reference, probably towards the end of the event!

Sure!

There are two broad groups we targeted. One was relevant classes; e.g. anything dealing with ethics, Peter Singer, etc. We would approach professors and ask permission to pitch our group to the class at a relevant point in the curriculum.

The other was other student groups. IIRC, we went to a local LW meetup (which only met once) and the Stanford Transhumanist Society, and had a joint Skype call to Rob Mather with Stanford's chapter of Resource Generation. (There are likely others I'm forgetting about.) For the first two, we just showed up at meetings; for Resource Generation, it was a joint event arranged with their leadership.

1
[anonymous]
9y
Thanks
3
MichaelDickens
9y
The partnership with Resource Generation hasn't been very fruitful so far; I think only one person from there showed up to Rob Mather's talk. I would add that we got two new regular members from a Slate Star Codex meetup. One recruiting strategy may be to try to get Scott Alexander to come to your school and host a meetup.

Thanks for making this!

FYI, almost all of GiveDirectly's income comes from Good Ventures or non-EAs. Its funding sources break down as follows:

Total: $17.4 million
Good Ventures: $7 million
Other GiveWell: ~$3.4 million
Other non-EA: ~$7 million (presumably)
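(The non-EA figure is presumably just the remainder: 17.4 − 7 − 3.4 = 7 million.)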

I think this sort of comparison is very valuable overall!

Another response to this is Nick Bostrom's astronomical waste argument.

tl;dr: The resources in our light cone will decrease even if we don't make use of them. It's quite plausible that even a few months of a massive, highly advanced civilization could have more moral worth than the next 500 years of human civilization in total. So accelerating development by even a small amount, allowing an eventual advanced civilization to be slightly larger and last slightly longer, is still massively important relative to other non-x-risk causes.
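A rough back-of-the-envelope version of that claim (the population figures are purely illustrative assumptions in the spirit of Bostrom's estimates, not numbers taken from his paper): suppose a mature civilization supports $N \approx 10^{30}$ person-equivalents, while present civilization supports $n \approx 10^{10}$. Then

$$
\text{next 500 years, current civilization} \approx 10^{10} \times 500 = 5 \times 10^{12}\ \text{person-years}
$$

$$
\text{one month, mature civilization} \approx 10^{30} \times \tfrac{1}{12} \approx 8 \times 10^{28}\ \text{person-years}
$$

a gap of roughly sixteen orders of magnitude. And since resources are leaving our light cone at a roughly constant rate, starting a month earlier plausibly buys about one extra month of the mature civilization at the end.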

0
Paul_Christiano
10y
I discussed some quantitative estimates of this here, with a general argument for why it would be small in light of model uncertainty. Overall it seems at least a few orders of magnitude smaller than other issues that favor faster progress.