I was just looking at the EA funds dashboard. To what extent do you think the money coming into EA funds is EA money that was already going to be allocated to similarly effective charities?
I saw the EA funds post on hacker news, are you planning to continue promoting EA funds outside the existing EA community?
You can understand some of what people are downvoting you for by looking at which of your comments are most downvoted - ones where you're very critical without much explanation and where you suggest that people in the community have bad motives:
http://effective-altruism.com/ea/181/introducing_ceas_guiding_principles/ah7
http://effective-altruism.com/ea/181/introducing_ceas_guiding_principles/ah6
http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8p9
Well-explained criticisms won't get downvoted this much.
I think we have a real problem in EA of turning ideas into work. There have been great ideas sitting around for ages (e.g. Charity Entrepreneurship's list of potential new international development charities, OpenPhil's desire to see a new science policy think tank, Paul Christiano's impact certificate idea) but they just don't get worked on.
Yes! The conversations and shallow reviews are the first place I start when researching a new area for EA purposes. They've saved me lots of time and blind alleys.
OpenPhil might not see these benefits directly themselves, but without information sharing individual EAs and EA orgs would keep re-researching the same topics over and over again and not be able to build on each other's findings.
It may be possible to have information sharing through people's networks but this becomes increasingly difficult as the EA network grows, and excludes competent people who might not know the right people to get information from.
Thanks, that clarifies things.
I think I was confused by 'small donor' - I was including in that category friends who donate £50k-£100k and who fund small organisations in their network after a lot of careful analysis. If the fund is targeted more at <$10k donors that makes sense.
OpenPhil officers make sense for an MVP.
On EA Ventures, points 1 and 2 seem particularly surprising when put together. You found too few exciting projects, yet even those had trouble generating funder interest? So are you saying that even for high-quality new projects, funder interest was ...
Small donors have played a valuable role by providing seed funding to new projects in the past. They can often fund promising projects that larger donors like OpenPhil can't because they have special knowledge of them through their personal networks and the small projects aren't established enough to get through a large donor's selection process. These donors therefore act like angel investors. My concern with the EA fund is that:
Hi Richard,
Thanks a lot for the feedback. I work at CEA on the EA Funds project. My thoughts are below, although they may not represent the views of everyone at CEA.
Funding new projects
I think EA Funds will improve funding for new projects.
As far as I know small donors (in the ~$10K or below range) have traditionally not played a large role in funding new projects. This is because the time it takes to evaluate a new project is substantial and because finding good new projects requires developing good referral networks. It generally doesn't make sense for a ...
This is really exciting, looking forward to these posts.
The Charity Entrepreneurship model is interesting to me because you're trying to do something analogous to what we're doing at the Good Technology Project - cause new high impact organisations to exist. Whereas we started meta (trying to get other entrepreneurs to work on important problems) you started at the object level (setting up a charity and only later trying to get other people to start other charities). Why did you go for this depth-first approach?
Exploration through experimentation might also be neglected because it's uncomfortable and unintuitive. EAs traditionally make a distinction between 'work out how to do the most good' and 'do it'. We like to work out whether something is good through careful analysis first, and once we're confident enough of a path we then optimise for exploitation. This is comforting because we then only have to do work when we're fairly confident it's the right path. But perhaps we need to get more psychologically comfortable with mixing the two together in an experimental approach.
Is there an equivalent to 'concrete problems in AI' for strategic research? If I was a researcher interested in strategy I'd have three questions: 'What even is AI strategy research?', 'What sort of skills are relevant?', 'What are some specific problems that I could work on?' A 'concrete problems'-like paper would help with all three.
I know some effective altruists who see EAs like Holden Karnofsky do incredible things and feel a little resentment at themselves and others, feeling inadequate that they can't make such a large difference.
I think there's a belief that people often have when looking at successful people which is really harmful, the belief that "I am fundamentally not like them - not the type of person who can be successful." I've regularly had this thought, sometimes explicitly and sometimes as a hidden assumption behind other thoughts and ...
An EA stackexchange would be good for this. There is one being proposed: http://area51.stackexchange.com/proposals/97583/effective-altruism
But it needs someone to take it on as a project and do everything necessary to make it a success. Oli Habryka has been thinking about how to do this, but he needs someone to take on the project.
Is it worth cross-posting this to LessWrong? Anna Salamon is leading an effort to get LessWrong used again as a locus of rationality conversation, and this would fit well there.
In response to b, I think that's true for the 80k job. I decided not to apply for it because the role involved working with WordPress, which is horrible to work with and bad for career capital as a developer. Other developers I spoke to about it felt similarly.
But this isn't true of all of the jobs.
For example, the GiveDirectly advert says "GiveDirectly is looking for a full-stack developer who is ready to own, develop, and refine a broad portfolio of products, ranging from mobile and web applications to backend data integrations. As GiveDirectly’s only full-time t...
Video calls could help overcome geographic splintering of EAs. For example, I've been involved in EA for 5 years and I still haven't met many bay area EAs because I've always been put off going by the cost of flights from the UK.
I've considered skyping people but here's what puts me off:
However, at house parties I've talked...
I'm not sure if this discussion has changed your view on using deceptive marketing for EA Global, but if it has, what do you plan to do to avoid it happening in future work by EA Outreach?
Also, it's easy for EAs with mainly consequentialist ethics to justify deception and non-transparency for the greater good, without considering consequences like the ones discussed here about trust and cooperation. Would it be worth EAO attempting to prevent future deception by promoting the idea that we should be honest and transparent in our communications?
This may just be the way you phrased it, but you talk about spreading "EA and earning-to-give" as if earning-to-give is the primary focus of EA. I'm not sure if this is your view, but if it is, it's worth reading 80,000 Hours' arguments on why only a small proportion of people should earn to give in the long term.
Given these arguments and the low salaries in Russia, it might be better to concentrate on encouraging other sorts of effective altruist activity such as direct work, research, or advocacy. And there may be some altruistic work that is e...
I can understand why we should care about climate change (because of the impact on humans) but I'm confused about what the purpose of environmentalism that focusses on preventing destruction of natural habitats is. Here are some possibilities:
Here are a few data sources for finding cities with a culture or sub-culture that has EA-potential:
That makes sense: you're not delaying your own move by doing the analysis, as you have other reasons for not moving yet.
Can I suggest an amalgamation of our approaches then:
Phase 1: Exploration. In this phase, those that can move in the next 4 months move to a location that would be good for them and try to join together with other EAs in doing this. They also try to explore more than one location and report back their findings to the whole group. Those that can't move that soon but are interested in the idea can contribute through online research. Ever...
I agree that you have to do some thinking in advance - you have to choose at least one place to go. However, I don't think this is a very hard choice for someone to make because the digital nomad scene has already identified a handful of good places. From my reading of recommended places in digital nomad forums, here are the places that stand out for cutting your living costs whilst doing remote internet-based work if you are from a Western country:
I'm glad you're doing work on this - it's a potentially very valuable project. I think we could go about it in a different way though. There's a risk of analysis paralysis in trying to find the optimal location in advance so that we can commit to something as big as buying and converting property. Instead we could just find the people who are likely to move somewhere cheaper in the next few months (I'm one of those people) and see if we can do it together. We might also want to drop the framing of it as 'A new EA hub' at this stage because that makes the t...
Your suggestions are good and we can imagine doing them in the future, but I think we should prioritise the research problem for reasons I'll explain.
For your matching developers with projects scenarios (e.g. conference or prizes), they would make sense if:
We think that there is some truth in this - it's hard to find lists of tech orgs of any type, and there aren't many lists of tech o...
I'm a little unclear on what your project involves. Could you email me at richard@goodtechnologyproject.org so we can talk further?
I agree that this can be a problem. I've previously found myself demoralised after suggesting ideas for projects only to be immediately met with questions like 'Why you, not someone else?', 'Wouldn't x group do this better?' I think having a cofounder helps greatly with handling this. It's also something that founders just have to learn to deal with.
In this case though, I think Gleb_T's question was good. We explicitly asked for feedback and we wanted to get questions like this so that we were forced to think through things we may not have properly conside...
Thanks for asking this as it's made me think more carefully about it.
Partly it's separate just because of how we got started. It's a project that Michael and I thought up because we needed it ourselves, and so we just got going with it. Given that we don't work for 80,000 Hours, it wasn't part of it.
But the more important question is 'Should it become part of 80,000 Hours in the future?' We talked to Ben Todd from 80,000 Hours and asked him what he thought of the potential for overlap. He thought it wasn't an issue as 80,000 Hours doesn't have time to go i...
Have you seen Nick Beckstead's slides on 'How to compare broad and targeted attempts to shape the far future'?
He gives a lot of ideas for broad interventions, along with ways of thinking about them.
I'm not sure I understand your argument. Could you help me out with some examples of:
Do you have any examples of successful 'broad tent' social movements that we can learn from?
One example would be science, which is like effective altruism in that it is defined more by questions and methods than by answers.
One counterexample might be liberal christianity, which is more accepting of a diversity of views but has grown much more slowly than churches with stricter theology. This phenomenon has been studied by sociologists, one paper is here: http://www.majorsmatter.net/religion/Readings/RationalChoice.pdf
I started the ball rolling on a London EA house just by posting to the London EA facebook group asking if anyone was interested. Lots were, and we ended up with two houses. It's been a huge boost to my happiness and productivity.
One piece of advice: don't overanalyse things, just get going. I watched a huge email thread develop on the London LessWrong google group about setting up a rationalist house. It never got anywhere because they just spent ages arguing about the best way to handle housework, resolve disputes etc. With the London EA house we just worked out which locations we wanted to live in and then we started looking.
This is nice and practical - it's good that it focusses on specific behaviours that people can practice rather than saying anything that could come across as "you're alienating people and you should feel bad".
One thing I'd add to this is to try to debate less and be curious more. Often discussions can turn into person A defending one position and person B rebutting this position and defending their own. I've found that it is often more helpful for both people to collaborate on analysing different models of the world in a curious way. Person A pro...
Not sure; it's really hard to make volunteer-run projects work, and often a small core team does all the work anyway.
This half-written post of mine contains some small project ideas: https://docs.google.com/document/d/1zFeSTVXqEr3qSrHdZV0oCxe8rnRD8w912lLw_tX1eoM/edit