I work as an Advisor for 80,000 Hours, before which I worked at the Global Priorities Institute and ran Giving What We Can.
I'm just speculating, but I read the claim in the post to be: there's not much discussion of / active work in EA on how to improve or spin up the manufacture and distribution of physical goods, beyond donating money to existing organisations. GiveWell's recommendations and the talks given by AMF/SCI are good examples of EAs noticing others doing important work with physical objects who need money, and trying to direct money to them. But there's not much on how to become excellent at doing the logistical work involved, or to go further and improve the way the logistics is done.
That doesn't seem right - since this comment was made, Holly's gone from being EA London strategy director to not really identifying with EA, which is more like the 5% per year.
I'm not so convinced of this. I think the framing of 'this was the founding team' was a little misleading: in 2011 all of us were volunteers and students. The bar was lower: doing ~5 hours a week of volunteering for EA for ~1 year. Obviously students are typically in a uniquely good position for having time to volunteer. But it's not clear all the people on this list had uniquely large amounts of power. Also, I think situational effects were still strong: I felt it made a huge difference to what I did that I made a few friends who were very altruistic and had good ideas of how to put that into practice. I don't think we can assume that all of us on this list would have displayed similarly strong effective altruist inclinations without having met others in the group.
Some may also have started off longtermist without that being obvious - I knew I was a total utilitarian and cared about the long run future from ~2009, but didn't feel like I knew how to act on that until much later. So I guess from the outside my views may look like they changed over the last couple of years in a way they didn't.
Here is the podcast episode I mentioned.
There's also reference to 'moving office' in Oxford. That's because the cluster of EA organisations currently sharing an office in Oxford - CEA, FHI and GPI - have outgrown their current office and are together moving to a bigger one.
I'll leave Ben to respond to this comment more broadly, but I wanted to express that I’m sorry to hear you had a bad experience with 80,000 Hours advising, Sam. I personally find it a hard balance to strike between giving my views on what it would be most impactful for the person to do, and simply eliciting from them what they think it would be most impactful for them to do. That’s all the more so because I can help people far more in some areas than others. So they might get the impression that I’m keen for them to work on, say, pandemic preparedness rather than cybersecurity because I know more about the former, can point to more resources about it etc. I think in the past we erred too much towards being prescriptive about what we thought it would be most impactful for people to do, and we’ve tried to correct that. In general, I try to be candid about the considerations that seem most significant to me and what direction I think they point in, while being clear about my uncertainty. I’m keen to continue learning more specifics about a wider range of areas and also to improve how I communicate the fact that my having less detailed knowledge of an area should not be taken as evidence I don’t care about it.
One significant distinction I’d want to draw here is between uncertainty with regard to which beneficiaries count, and uncertainty with regard to how to help them most. I feel fairly sure that the welfare of all people matters to me, regardless of where in the world they are or when in time they live. And I feel fairly sure that the welfare of all sentient animals matters to me. On the other hand, I feel very uncertain about what the best ways to help sentient creatures are – should we be improving government institutions? Reducing the chance of specific existential risks over the next century, and if so which? Increasing economic growth? I think the most productive conversations I have are likely to be those where we broadly agree on which beneficiaries matter, so I think it makes sense to mostly talk to people with similar views on that. Whereas I am keen to talk to people working on a broad range of interventions, and to improve the advice I give on them in the ways described above.
The international events calendar is a great idea! I really like that you can add it to your Google calendar and so easily see when something of interest is coming up.
Is there a way to report events listed in it at the wrong time? The international icebreakers look, from Facebook, as though they've changed time, but they haven't been moved in the calendar.
I love this
If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things—praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts
I think he donated £25 for that year, but I'm not sure how he picked that number and I have to admit I haven't been very systematic since then. I think the following year I donated £100 to ACE, then missed a year, then for 2 years did 10% of my annual donations to the animal welfare EA fund (I'm a member of Giving What We Can, so that's 1% of my salary).
I'm not sure I have a reasoned case for donating to animal welfare charities as offsets, since the animals that are helped are different from those I harm, and on consequentialist grounds it would surely be best to make all my donations to the organisation I think will help sentient beings most. But it seems pretty good to remind myself that I think it's important and impactful to help various groups to whom I don't give the lion's share of my donations, and it seems plausibly good to show others that I care about them by doing something concrete. With those considerations in mind, what matters is simply that the donation be an amount that feels non-negligible to me and others, rather than an amount exactly equal to the harm I'm doing. (That may simply be a rationalisation, though, because I would rather not know exactly how much harm I'm causing and it would be a hassle to figure it out.)