Welcome to the fifth open thread on the Effective Altruism Forum. This is our place to discuss relevant topics that have not appeared in recent posts.
I've been thinking of doing a 'Live Below the Line' challenge to raise money for MIRI/CFAR, and asking someone at MIRI/CFAR to do the same for CEA in return. The motivation is mostly to have a bit of fun. Does anyone think this is a good or bad idea?
I made a map of the opinions of many Effective Altruists and how they changed over the years.
My sample was biased toward people I live with and read. I tried to account for many different starting points, and of course I got many people's opinions wrong, since I was just estimating them.
Nevertheless, there seems to be a bottleneck at accepting Bostrom's existential risk as The Most Important Task for Humanity. If the trend is real, and if it continues, it generates many interesting predictions about where new EAs will come from.
Here, have a look... (read more)
I suspect that one could make a chart to show a bottleneck in a lot of different places. From my understanding, GiveWell does not seem to hold the view that the yEd chart would imply.
"I reject the idea that placing high value on the far future – no matter how high the value – makes it clear that one should focus on reducing the risks of catastrophes" http://blog.givewell.org/2014/07/03/the-moral-value-of-the-far-future/
The yEd chart shows GiveWell holding the opinion that poverty alleviation is desirable and quite likely the best allocation of resources in 2013. This does not seem to be a controversial claim. The chart makes no claims about GiveWell's opinion in any other year.
Notice also that the arrows in that chart mean only that, empirically, individuals espousing one yellow opinion have frequently been observed to change their opinion to the one below it. The reverse can also happen, though it is less frequent, and frequently people spend years, if not decades, holding a particular opinion.
Can you give an example of a chart where a bottleneck would occur at a node other than the x-risk node or the transition-to-the-far-future node? I would be interested in seeing patterns that escaped my perception, and it is really easy to change the yEd graph if you download it.
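If it helps, here is a minimal sketch of how such a bottleneck check could be automated. It is plain Python over a toy graph; the node names and edges are hypothetical placeholders rather than the contents of the actual yEd file, and "fraction of source-to-sink paths passing through a node" is just one reasonable definition of a bottleneck:

```python
# Toy sketch: find "bottleneck" nodes in a directed opinion graph.
# Node names and edges below are hypothetical placeholders, not the real chart.
from itertools import chain

edges = {
    "poverty alleviation is most important": ["the far future matters most"],
    "animal welfare is most important":      ["the far future matters most"],
    "the far future matters most":           ["x-risk reduction"],
    "x-risk reduction":                      ["AI safety", "biosecurity"],
    "AI safety": [],
    "biosecurity": [],
}

def paths_from(node, path=()):
    """Yield every path from `node` down to a sink (a node with no successors)."""
    path = path + (node,)
    successors = edges.get(node, [])
    if not successors:
        yield path
        return
    for nxt in successors:
        yield from paths_from(nxt, path)

# Sources are nodes that never appear as anyone's successor.
all_successors = set(chain.from_iterable(edges.values()))
sources = [n for n in edges if n not in all_successors]

# Count how often each node lies on a source-to-sink path; a node on
# (nearly) every path is a bottleneck in this sense.
paths = list(chain.from_iterable(paths_from(s) for s in sources))
counts = {}
for p in paths:
    for node in p:
        counts[node] = counts.get(node, 0) + 1

for node, c in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{node}: on {c} of {len(paths)} paths")
```

On this toy graph both "the far future matters most" and "x-risk reduction" lie on every path; swapping in the real nodes and edges from the yEd file would show whether an alternative chart produces bottlenecks elsewhere.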
The bottom part of your diagram has lots of boxes in it. Further up, "poverty alleviation is most important" is one box. If there were as much detail in the latter as there is in the former, you could draw an arrow from "poverty alleviation" to a lot of other boxes: economic empowerment, reducing mortality rates, reducing morbidity rates, preventing unwanted births, lobbying for the lifting of trade restrictions, open borders (which certainly doesn't exclusively belong below your existential risk bottleneck), education, etc. There could be lots of arrows going every which way in amongst them, and "poverty alleviation is most important" would be a bottleneck.
Similarly (though I am less familiar with it), if you start by weighting animal welfare highly, then there are lots of options for working on that (leafleting, lobbying, protesting, others?).
I agree that there's some real sense in which existential risk or far-future concerns are more of a bottleneck than human poverty alleviation or animal welfare -- there's a bigger "cause-distance" between colonising Mars and working on AI than the "cause-distance" between health system logistics and lobbying to remove trade restrictions. But I think the level of detail in all those boxes about AI and "insight" overstates the difference.
I just read Katja's post on vegetarianism (recommended). I have also been convinced by arguments (from Beckstead and others) that resources can probably be better spent influencing the long-term future. Have you seen any convincing arguments that vegetarianism or veganism are competitively cost-effective ways of doing good?
I'm thinking of giving "Giving games" for Christmas this year.
Family and friends get an envelope with two cards. The first is a nice Christmas card saying they now have x NOK to give to a charity of their choosing; it presents some interesting recommendations and encourages them to look into those further if they want to. Once they have decided, they write their choice on the second, blank (but postage-paid) card, addressed to me, and when I receive it after Christmas I will donate the money.
Has somebody else thought of something similar? Do you have any ideas that could make it more interesting or better in any way?
As a follow-up to this comment: I gave my 10-minute talk on effective altruism at Scribd. The talk went better than I expected: several of my coworkers told me afterwards that it was really good. So I thought I would summarize the contents of the talk so it can be used as a data point for presenting on effective altruism.
You can see the slides for my talk in Keynote, PPTX, and HTML formats. Here are some notes on the slides:
The thought experiment on the second slide was Peter Singer's drowning child thought experiment. After giving everyone a few seconds to ... (read more)
Hi there! In this comment, I will discuss a few things that I would like to see 80,000 Hours consider doing, and I will also talk about myself a bit.
I found 80,000 Hours in early/mid-2012, after a poster on LessWrong linked to the site. Back then, I was still trying to decide what to focus on during my undergraduate studies. By that point in time, I had already decided that I needed to major in a STEM field so that I would be able to earn to give. Before this, in late 2011, I had been planning on majoring in philosophy, so my decision in early 2012 to do ... (read more)
Should we try to make a mark on the Vlogbrothers' "Project for Awesome"? It could expose effective altruism to a wide and, on average, young audience.
I would love to help in any way possible, but video editing is not my thing...
People often criticise GWWC for bad reasons. In particular, people harshly criticise it for not being perfect, despite not doing anything much of value themselves. Perhaps we should somewhat discount such armchair reasoning.
However, if we do so, we should pay extra attention when people who have donated hundreds of millions of dollars, a majority of their net worth, and far more than most of us ever will, offer harsh criticism of giving pledges.
Animal Charity Evaluators has found that leafleting is a highly effective form of anti-speciesist activism. I want to use the approach for effective altruism generally too. Several times a year I'm at conventions with lots of people who are receptive to the ideas behind EA, and I would like to put some well-designed flyers into their hands.
That's the problem, though: "well-designed." My skills kind of end at "tidy," and I haven't been able to find anything of the sort online. So it would be great if a gifted EA designer could create some freely licensed flyers as SVG ... (read more)
[Your recent EA activities]
Tell us about these, as in Kaj's thread last month. I would love to hear about them - I find it very inspirational to hear what people are doing to make the world a better place!
(Giving this thread another go after it didn't get any responses last month.)
I'm planning on starting an EA group at the University of Utah once I get back in January, and I need a good first meeting idea that will have broad appeal.
I was thinking that I could get someone who's known outside of EA to do a short presentation/question and answer session on Skype. Peter Singer is the obvious choice, but I doubt he'd have time (let me know if you think otherwise). Can anyone suggest another EA who might have name recognition among college students who haven't otherwise heard of EA?
Is there an audio recording of Holden's "Altruistic Career Choice" conference call? If so, can someone point me in the right direction? I'm aware of the transcript:
http://files.givewell.org/files/calls/Altruistic%20career%20choice%20conference%20call.pdf
Thanks!
I've been growing skeptical that we will make it through AI, for two reasons:
1) Civilizational competence (that is, incompetence), and
2) All human cognition apparently being based on largely subjective metaphors over radial categories, which have arbitrary internal asymmetries that we have no chance of teaching a coded AI in time.
This is on top of all the other impossibilities (solving morality, consciousness, the grounding problem, or at least their substitute: value loading).
So it is seeming more and more to me that we have to go with the forms of AI that have some smal... (read more)
I posted this late before and was told to post it in a newer open thread, so here goes:
Is voting valuable?
There are four costs associated with voting:
1) The time you spend deciding whom to vote for.
2) The risk you incur in travelling to your polling place (a non-trivial likelihood of dying in the unusual traffic that day).
3) The attention you pay to politics and associated decision cost.
4) The sensation that you made a difference (this cost is conditional on your vote not making a difference).
And what are the benefits associated with voting? (A rough expected-value comparison is sketched after this list.)
1) If an election is decid... (read more)
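For what it's worth, the standard back-of-the-envelope framing of this question is an expected-value comparison. Here is a minimal sketch; every number in it is a hypothetical placeholder chosen for illustration, not an estimate I am defending:

```python
# Back-of-the-envelope expected-value sketch for "is voting valuable?".
# All numbers are hypothetical placeholders, not real estimates.

p_decisive     = 1e-7   # chance your single vote decides the election
social_benefit = 1e9    # value (in $) of the better candidate winning
hours_spent    = 2.0    # deciding whom to vote for, travel, queuing
value_of_time  = 30.0   # opportunity cost of your time, in $/hour
other_costs    = 5.0    # expected travel risk, attention costs, etc.

expected_benefit = p_decisive * social_benefit
total_cost = hours_spent * value_of_time + other_costs

print(f"expected benefit: ${expected_benefit:,.2f}")
print(f"total cost:       ${total_cost:,.2f}")
print("vote" if expected_benefit > total_cost else "don't vote")
```

Under these placeholder numbers the expected benefit ($100) beats the cost ($65), but the conclusion is extremely sensitive to p_decisive, which falls towards zero in safe districts.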