I think, in general, personal consumption decisions should be thought of in the context of moral seriousness (see Will MacAskill's comments in a recent podcast).
Should we take seriously efforts to avoid unnecessary emissions? Yes! Is EA doing this? I'm not sure. My impression is that EAs are fairly likely to avoid unnecessary flights, take public transport, etc. - that's the attitude I take myself, anyway. This is less unusual than veganism - the thoughtful Londoners I'm surrounded by do the same. So I think it would be easy to underestimate the extent to which E...
Thanks for sharing your talk.
I'm at the UK's Competition and Markets Authority. Very happy to talk to anyone about the intersection of competition policy and AI.
How much did the $13 million shift the odds? That's the key question. The conventional political science on this is skeptical that donations have much of an effect on outcomes (though it's a bit more positive about lower-profile candidates like Carrick): https://fivethirtyeight.com/features/money-and-elections-a-complicated-love-story/
(In this case, given the crypto backlash, it's surely possible SBF's donations hurt Carrick's election chances. I don't want to suggest this was actually the case, just noting that the confidence interval should include the po...
Fundraising is particularly effective in open primaries, such as this one. From the linked article:
...But in 2017, Bonica published a study that found, unlike in the general election, early fundraising strongly predicted who would win primary races. That matches up with other research suggesting that advertising can have a serious effect on how people vote if the candidate buying the ads is not already well-known and if the election at hand is less predetermined along partisan lines.
Basically, said Darrell West, vice president and director of governance studi
I'm certain EA would welcome you, whether you think AI is an important x-risk or not.
If you do continue wrestling with these issues, I think you're actually extremely well placed to add a huge amount of value as someone who is (i) an ML expert, (ii) friendly/sympathetic to EA, and (iii) doubtful/unconvinced of AI risk. That combination gives you an unusual perspective which could be useful for questioning assumptions.
From reading this post, I think you're temperamentally uncomfortable with uncertainty, and prefer very well defined problems. I suspect that explains why you feel...
Excession, Surface Detail and The Hydrogen Sonata are the three I'd recommend from a longtermist perspective.
Consider Phlebas is (by some margin) the worst novel in the series. It's a shame it seems like the obvious place to start.
On this theme, I was struck by the 80,000 hours podcast with Tom Moynihan, which discussed the widespread past belief in the 'principle of plenitude': "Whatever can happen will happen", with the implication that the current period can't be special. In a broad sense (given humanity's/earth's position), all such beliefs were wrong. But it struck me that several of the earliest believers in plenitude were especially wrong - just think about how influential Plato and Aristotle have been!
I wonder if there would be a strong difference between "What do you think of a group/concept called 'effective altruism'", "Would you join a group called 'effective altruism'", "What would you think of someone who calls themselves an 'effective altruist'", "Would you call yourself an 'effective altruist'".
I wonder which of these questions is most important in selecting a name.
I don't mind rhetorical descriptions of China as having 'less economic and political freedom than the United States', in a very general discussion. But if you're going to make any sort of proposal like 'there should be more political freedom!' I would feel the need to ask many follow-up clarifying questions (freedom to do what? freedom from what consequences? freedom for whom?) to know whether I agreed with you.
Well-being is vague too, I agree, but it's a more necessary term than freedom (from my philosophical perspective, and I think most others').
This sounds a lot like a version of preference utilitarianism, certainly an interesting perspective.
I know a lot of effort in political philosophy has gone into trying to define freedom - personally, I don't think it's been especially productive, and so I think 'freedom' as a term isn't that useful except as rhetoric. Emphasising 'fulfilment of preferences' is an interesting approach, though. It does run into tricky questions around the source of those preferences (e.g. addiction).
3 months late, but better than never: it's incredibly inspiring to see how the community has grown over the past decade.
I'm all for focusing on the power of policy, but I'm not sure giving up any of our positions on personal donations will help get us there.
This is a discussion that has happened a few times. I do think that 'global priorities' has already grown as a brand enough to be seriously considered for wider use, and perhaps even as the main term for the movement.
I'd still be reluctant to ditch 'effective altruism' entirely. There is an important part of the original message of the movement (cf. the pond analogy) that's about asking people to step up and give more (whether money or time) - questioning personal priorities/altruism. I think we've probably developed a healthier sense of how to balance that ('altruism/life balance'), but it feels like 'global priorities' wouldn't cover it.
This is an excellent point. I "joined" EA because of the pond idea. I found the idea of helping a lot of people with the limited funds I could spare really appealing, and it made me feel like I could make a real difference. I didn't get into EA because of its focus on global prioritization research.
Of course, what I happened to join EA because of is not super important, but I wonder how others feel. Like EA as a "donate more to AMF and other effective charities" is a really different message than EA as "research and philosophize about what issues are reall...
I've always thought the Repugnant Conclusion was mostly status quo bias, anyway, combined with the difficulty of imagining what such a future would actually be like.
I think the Utility Monster is a similar issue. Maybe it would be possible to create something with a much richer experience set than humans, which should be valued more highly. But any such being would actually be pretty awesome, so we shouldn't resent giving it a greater share of resources.
Economist in the civil service here. I wouldn't sweat this decision, unless there's a transparently better alternative. It sounds like good progression for you, from which you can look for an even higher impact role.
My main reaction (rather banal): I think we shouldn't use an acronym like IBC! If this is something we think people should think about early in their time as an effective altruist, let's stick to more obvious phrases like "how to prioritise causes".
One issue to consider is whether catastrophic risk is a sufficiently popular issue for an agency to use it to sustain itself. Independent organisations can be vulnerable to cuts. This probably varies a lot by country.
This book is a core text on this subject; it explicitly considers when specific agencies are effective and motivated to pursue particular goals: https://www.amazon.co.uk/Bureaucracy-Government-Agencies-Basic-Classics/dp/0465007856
I'm also reminded of Nate Silver's interviews with the US hurricane forecasting agency in The Signal and the Noise.
https://breakingdefense.com/2022/08/ignoring-global-catastrophic-risk-threatens-american-national-security/
This seems like a major success in influencing US policy.