Welcome to the fourth open thread on the Effective Altruism Forum. This is our place to discuss relevant topics that have not appeared in recent posts.
I'm interested in whether anyone has data or experience with attempting to introduce people to EA through conventional charitable activities like blood drives, volunteering at a food bank, etc. The idea I've been kicking around is basically to start or co-opt a blood drive or similar event.
While people are engaged in the activity, or before or after it, you introduce them to the idea of EA, possibly even using this conventional charitable event as the prelude to a giving game. On the plus side, the people you are speaking with are self-selected for doing charitable a...
There is a lot of discussion about what to DO in the context of EA. But for everything I do, there is something else that I don't.
What have you decided NOT to do, because it has a (somewhat) lower priority than other things?
Things that I deprioritized:
- some recreational activities: playing the guitar, cooking, baking cakes, reading novels.
- I quit volunteering in an online education project. It was low time cost anyway.
- meditating (would that increase productivity more than the time spent on it? I don't really care about the other benefits.)
- keep an
In a couple of weeks, I'm going to give a 10-minute talk (with slides) on effective altruism at the software company I work for (Scribd.com). The audience will be ~40 people, many of whom I am friends with & many of whom are well-compensated and intelligent software engineers/designers/etc. (This is part of a thing Scribd does where employees periodically give talks on random topics that interest them.)
I'd love to hear any suggestions for the content of my talk. I'm curious what evidence we have about the most effective ways to convince people of ef...
[AMF and its RFMF]
I'm curious as to whether people are giving to AMF, and if so what they think of its room for more funding. I used to favour it but haven't done so since GiveWell stopped recommending it due to room for more funding concerns. Their financial information suggests that they still have a large cash reserve, but I'd be interested to hear from anyone who's looked into this.
Is there any interest in an EA blogging carnival?
How it works is that each month, a different blogger "hosts" the carnival by selecting a topic. Everyone interested in participating for that month then writes a blog post about that topic. The host then writes up a post linking to all the submissions.
We are planning to do a survey of a representative selection of students at NTNU, our university in Trondheim, Norway. There are about 23 000 students across a few campuses. We want to measure the students':
... basic knowledge of global development, aid and health (like Hans Rosling's usual questions)
... current willingness and habits of giving (How much? To what? Why?)
... estimates of what they will give in the future, that is, after graduating
And of course background information.
We think we may use this survey for multiple ends. Our initial motiv...
[Your recent EA activities]
Tell us about these, as in Kaj's thread last month. I would love to hear about them - I find it very inspirational to hear what people are doing to make the world a better place!
Can anyone recommend some work on existential risks as a whole? I don't just mean AI- or technology-related threats, but also nuclear war, climate change, etc.
Btw Nick Bostrom's Superintelligence is already at the top of my reading list, and I know Less Wrong is currently engaged in a reading group on that book.
GiveWell have released a summary of the status of their assessments of risks through the Open Philanthropy Project so far. The top contenders are biosecurity and geoengineering, followed by AI, geomagnetic storms, nuclear and food security, although these assessments are at various stages of completion.
We sometimes discuss why EA wasn't invented earlier. Here's an example of GWWC being re-invented.
Is voting valuable?
There are four costs associated with voting:
1) The time you spend deciding whom to vote for.
2) The risk you incur in travelling to your polling place (a small but non-zero chance of a fatal traffic accident that day).
3) The attention you pay to politics and associated decision cost.
4) The false sensation that you made a difference (this cost is conditional on your vote not being decisive).
And the benefits associated with voting:
1) If an election is decided based on one vote, and you voted for one of the winning contestants, your vote decides...
Recently on the site there have been a number of cross-posts from other websites. I recognise that this is great and can bring a lot of value. But I subscribe to the site in an RSS reader and already have a very good group of feeds, including all of the sites content has been cross-posted from so far, so the effect for me is duplicate posts. My RSS reader can filter on tags or on parts of post titles. Would it be possible to tag cross-posts, or add a reddit-style bracketed tag to their titles, so I can filter them out?
Per Bernadette, getting good data from these sorts of projects requires significant expertise (if your university is as bad as mine, you can get student media attention for attention-grabbing but methodologically suspect survey data, but I doubt you would get much more). I'm reluctant to offer advice beyond 'find an expert'. But I will add a collection of problems that surveys run by amateurs fall into, both as pitfalls to avoid and as further evidence of why expertise is imperative.
1: Plan more, trial less
A lot of emphasis in EA is on trialling things instead of spending a lot of time planning them: lean startups, no plan survives first contact, VoI, etc. But lean trial design hasn't taken off in the way lean start-ups have. Your data can be poisoned to the point of being useless in innumerable ways, and (usually) little can be done about this post hoc: many problems revealed in analysis could only have been fixed in the original design.
1a: Especially plan analysis
Gathering data and then analysing it is always suspect: one can wonder whether the investigators have massaged the analysis to fit their own preconceptions or prejudices. The usual means of avoiding this is pre-specifying the analysis you will perform: the analysis might still be ill-conceived, but at least it won't be data-dredging. It is hard to plan in advance what sort of hypotheses the data would inspire you to inspect, so seek expert help.
2: Care about sampling
With 'true' random sampling, the errors in your estimates fall as your sample size increases. The problem with bias/directional error is that its magnitude doesn't change with your sample size.
Perfect probabilistic sampling is probably a platonic ideal - especially with voluntary surveys, the factors that make someone take the survey will probably shift the sample away from the population of interest along axes that aren't perfectly orthogonal to your responses. It remains an ideal worth striving for: significant sampling bias makes your results all-but-uninterpretable (modulo very advanced ML techniques, and not always even then). It is worth thinking long and hard about the population you are actually interested in, the sampling frame you will use to try and capture them, etc.
Even with a perfect sample, respondents still might not provide good data, depending on the questions you use. There are a few subtle pitfalls besides the more obvious ones of forgetting to include a question you wanted to ask or lapses of wording: allowing people to select multiple options for an item and then wondering how to aggregate them, having a 'choose one' item with too many options for the average person to read, or subdividing it inappropriately ("Is your favourite food Spaghetti, Tortellini, Tagliatelle, Fusilli, or Pizza?").
Again, people who make a living designing surveys do things to limit these problems: item pools, pilots where they try out different questions and see which yield the most data, etc.
3a. Too many columns in the database
There's a tendency towards a 'kitchen sink' approach to asking questions - if in doubt, add it in, as it can only give more data, right? The problem is that false positives become increasingly likely if you just fish for interesting correlations, as the number of possible comparisons grows combinatorially with the number of questions. There are ways of overcoming this (dimension reduction, family-wise or false-discovery error control), but they aren't straightforward.
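The multiple-comparisons problem is easy to quantify. Assuming k independent comparisons on pure noise at the conventional threshold of alpha = 0.05 (the function name is mine, not standard terminology), the chance of at least one spurious "significant" finding is:

```python
def familywise_false_positive_rate(k: int, alpha: float = 0.05) -> float:
    """Chance of at least one false positive among k independent
    comparisons on pure noise, each tested at level alpha."""
    return 1 - (1 - alpha) ** k

for k in (1, 10, 50, 100):
    print(k, round(familywise_false_positive_rate(k), 3))

# A Bonferroni correction (testing each at alpha / k) pulls the
# family-wise rate back below the original alpha:
print(round(familywise_false_positive_rate(100, alpha=0.05 / 100), 3))
```

With 100 exploratory comparisons you are all but guaranteed at least one spurious hit; this is why the corrections mentioned above matter, and why they cost you power.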
There are probably many more pitfalls I've forgotten. But tl;dr: it is tricky to do this right!