Edited to add: I think that I phrased this post misleadingly; I meant to complain mostly about low-quality criticism of EA rather than, e.g., criticism of comments. Sorry to be so unclear; I suspect most commenters misunderstood me.

I think that EAs, especially on the EA Forum, are too welcoming to low-quality criticism [EDIT: of EA]. I feel like an easy way to get lots of upvotes is to make lots of vague critical comments about how EA isn't intellectually rigorous enough, or inclusive enough, or whatever. This makes me feel less enthusiastic about engaging wit... (read more)
I pretty much agree with your OP. Regarding that post in particular, I am uncertain about whether it's a good or bad post. It's bad in the sense that its author doesn't seem to have a great grasp of longtermism, and the post basically doesn't move the conversation forward at all. It's good in the sense that it's engaging with an important question, and the author clearly put some effort into it. I don't know how to balance these considerations.
"Are Ideas Getting Harder to Find?" (Bloom et al.) seems to me to suggest that ideas are actually surprisingly easy to find.
The paper looks at the difficulty of finding new ideas in a variety of fields. It finds that in all cases, effort on finding new ideas is growing exponentially over time, while new ideas are growing exponentially but at a lower rate. (For a summary, see Table 7 on page 31.) This is framed as a surprising and bad thing.
But it actually seems surprisingly good to me. My intuition is that the number of ideas should grow logarithmically wi... (read more)
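As a minimal illustration of this intuition (my own sketch, not the paper's model): if ideas grew only logarithmically in cumulative research effort, then exponentially growing effort would buy merely linear growth in ideas over time:

$$E(t) = E_0 e^{gt}, \qquad I(t) = \log E(t) = \log E_0 + gt$$

The paper instead finds ideas growing exponentially, just at a lower rate than effort, which is far faster than this logarithmic baseline would predict.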
Is there much EA work into tail risk from GMOs ruining crops or ecosystems?
If not, why not?
It's not on the 80k list of "other global issues", and doesn't come up on a quick search of Google or this forum, so I'd guess not. One reason might be that the scale isn't large enough -- it seems much harder to get existential risk from GMOs than from, say, engineered pandemics.
[status: mostly sharing long-held feelings & intuitions, but have not exposed them to scrutiny before]
I feel disappointed in the focus on longtermism in the EA Community. This is not because of empirical views about e.g. the value of x-risk reduction, but because we seem to be doing cause prioritisation based on a fairly rare set of moral beliefs (people in the far future matter as much as people today), at the expense of cause prioritisation models based on other moral beliefs.
The way I see the potential of the EA community is by helping people to unde... (read more)
I mostly share this sentiment. One concern I have: I think one must be very careful in developing cause prioritization tools that work with almost any value system. Optimizing for naively held moral views can cause net harm; Scott Alexander has suggested that terrorists may simply be people who take too seriously beliefs that only work safely in an environment of epistemic learned helplessness.
One possible way to identify views reasonable enough to develop tools for is checking that they're consistent under some amount of reflection; another way could be che... (read more)
I'm worried about EA values being wrong because EAs are unrepresentative of humanity and reasoning from first principles is likely to go wrong somewhere. But naively deferring to "conventional" human values seems worse, for a variety of reasons:
I have recently been thinking about the exact same thing, down to getting anthropologists to look into it! My thoughts on this were that interviewing anthropologists who have done fieldwork in different places is probably the more functional version of the idea. I have tried reading fairly random ethnographies to build better intuitions in this area, but did not find it as helpful as I was hoping, since they rarely discuss moral worldviews in as much detail as needed.
My current moral views seem to be something close to "reflected" preference utilitarianism... (read more)
in order to assess the value (or normative status) of a particular action we can in the first instance just look at the long-run effects of that action (that is, those after 1000 years), and then look at the short-run effects just to decide among those actions whose long-run effects are among the very best.
Is this not laughable? How could anyone think that "looking at the 1000+ year effects of an action" is workable?
I was thinking along the lines of Taylor polynomial approximations of functions. This polynomial can have infinitely many terms, especially if t is unbounded, and n is just the degree of each term, representing the relationship between time and value for an action. We choose n to approximate v well, accepting that it is more important for the approximation to have correct end behavior, though many actions have flat end behavior, and end behavior is less certain. For instance, when considering the action of taking my kittens out for a walk, ... (read more)
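A minimal sketch of the approximation this seems to gesture at, with v(t) denoting the value of an action's effects at time t (my notation, not the comment's):

$$v(t) \approx \sum_{k=0}^{n} a_k t^k, \qquad a_k = \frac{v^{(k)}(0)}{k!}$$

For large t the highest-degree term dominates, which is why correct end behavior matters most; an action with flat long-run effects corresponds to a_k ≈ 0 for every k ≥ 1, leaving only the constant term.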
Here I list all the EA-relevant books I've read (well, mainly listened to as audiobooks) since learning about EA, in roughly descending order of how useful I perceive/remember them being to me. I share this in case others might find it useful, as a supplement to other book recommendation lists. (I found Rob Wiblin, Nick Beckstead, and/or Luke Muehlhauser's lists very useful.)
That said, this isn't exactly a recommendation list, because some of the factors making these books more/less useful to me won't generalise to most other people, and because I'm incl... (read more)
Climate change could get really bad. Let's imagine a world with 4 degrees of warming. This would probably mean mass migration of billions of people to Canada, Russia, Antarctica and Greenland.
Out of these, Canada and Russia will probably have fewer decisions to make, since they already have large populations and will likely see a smooth transition into countries of a billion-plus people. Antarctica could be promising to influence, but it will be difficult for a single effective altruist sin... (read more)
Hmm this is interesting. I think I broadly agree with you. I think a key consideration is that humans have a good-ish track record of living/surviving in deserts, and I would expect this to continue.
EA seems to have been doing a pretty great job attracting top talent from the most prestigious universities. While we attract a minority of the total pool, I imagine we get some of the most altruistic+rational+agentic individuals. If this continues, it's worth noting that it could have significant repercussions for areas outside of EA: the ones that we may divert this talent from. We may be diverting a significant fraction of the future "best and brightest" away from non-EA fields. If this seems possible, it's especially important that we do a really, really good job of making sure that we are giving them good advice.
My name is Markus Woltjer. I'm a computer scientist living in Portland, Oregon. I have an interest in developing a blue carbon capture-and-storage project. This project is still in its inception, but I am already looking for expertise in the following areas, starting mostly with remote research roles.
Please contact me here or at firstname.lastname@example.org if you're interested, and I will be happy to fill in more details and discuss whether your background and interests are aligned with the roles available.
Is there a name for a moral framework where someone cares more about the moral harm they directly cause than other moral harm?
I feel like a consequentialist would care about the harm itself whether or not it was caused by them.
And a deontologist wouldn't act in a certain way even if it meant they would act that way less in the future.
Here's an example (it's just a toy example; let's not argue whether it's true or not).
A consequentialist might eat meat if they can use the saved resources to make 10 other people vegans.
A deontologis... (read more)
I am relatively new to the community and am still getting acquainted with the shared knowledge and resources.
I have been wondering what the prevailing thoughts are regarding growing the EA community or growing the use of EA-style thought frameworks. The latter is a bit imprecise, but, at a glance, it appears to me that having more organizations and media outlets communicate in a more impact-aware way may have a very high expected value.
What are people's thoughts on this problem? It likely fits into a meta-category of EA work, but lately I have been... (read more)
I have noticed recently that while there's a large discussion about how to help the poorest people in the world, I've been unable to find any research simply asking the beneficiaries themselves what they think would help them most. Does anyone know more about the state of such research - whether it exists or not, and why?
It seems important to know what problems or even what interventions the poorest people in the world think would help them the most. There are local details that an EA working on global poverty from an ocean away simply doesn't... (read more)
I think this is one of the principles of GiveDirectly. I imagine that more complicated attempts at this could get pretty hairy (e.g., trying to get the local population to come up with large coordinated proposals like education reform), but it could be interesting.
What are the comparative statics for how uncertainty affects decisionmaking? How does a decisionmaker's behavior differ under some uncertainty compared to no uncertainty?
Consider a social planner problem where we make transfers to maximize total utility, given idiosyncratic shocks to endowments. There are two agents, A and B, with endowments e_A = 5 (with probability 1) and e_B = 0 with probability p, 10 with probability 1 − p. So B either gets nothing or twice as much as A.
We choose a transfer T to solve: max_T u(5 − T) + p · u(0 + T) + (1 − p) · u... (read more)
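A minimal numerical sketch of this problem, assuming the truncated objective is u(5 − T) + p · u(T) + (1 − p) · u(10 + T) with log utility (both assumptions on my part; the transfer here is unconditional):

```python
import numpy as np

def planner_objective(T, p, u=np.log):
    # Expected total utility of an unconditional transfer T from A to B,
    # under the assumed objective u(5-T) + p*u(T) + (1-p)*u(10+T).
    return u(5 - T) + p * u(T) + (1 - p) * u(10 + T)

def optimal_transfer(p, grid=np.linspace(0.01, 4.99, 2000)):
    # Grid search for the expected-utility-maximizing transfer.
    return grid[np.argmax(planner_objective(grid, p))]

# Comparative statics: the optimal transfer rises with the probability
# that B is left with nothing.
for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"p = {p:.2f} -> T* = {optimal_transfer(p):.2f}")
```

With no chance of a bad draw (p = 0) the planner transfers essentially nothing, while as p rises the transfer grows toward the level that equalizes consumption in the bad state.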
I think we can justify ruling out all options the maximality rule rules out, although it's very permissive. Maybe we can put more structure on our uncertainty than it assumes. For example, we can talk about distributional properties for p without specifying an actual distribution for p, e.g. p is more likely to be between 0.8 and 0.9 than 0.1 and 0.2, although I won't commit to a probability for either.
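To make the maximality rule concrete, here is a small sketch (entirely my own construction) that represents uncertainty about p as a credal set, an interval of candidate values rather than a single distribution, and rules out only options that some alternative beats under every candidate value:

```python
import numpy as np

def expected_value(option, p):
    # option = (payoff if the p-event occurs, payoff otherwise)
    bad, good = option
    return p * bad + (1 - p) * good

def maximal_options(options, credal_set):
    # Maximality rule: rule out an option only if some alternative is at
    # least as good under every distribution in the credal set and
    # strictly better under at least one.
    def dominated(a, b):
        diffs = [expected_value(b, p) - expected_value(a, p) for p in credal_set]
        return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)
    return [a for a in options
            if not any(dominated(a, b) for b in options if b != a)]

# Illustrative credal set: p known only to lie in [0.8, 0.9].
credal_set = np.linspace(0.8, 0.9, 11)
options = [(10.0, 0.0), (8.5, 9.0), (7.0, 7.0)]
print(maximal_options(options, credal_set))  # the constant option is ruled out
```

The first two options disagree about which is better across the credal set, so both survive; only the dominated constant option is ruled out. That is the permissiveness the comment mentions.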
The Repugnant Conclusion is worse than I thought
At the risk of belaboring the obvious to anyone who has considered this point before: The RC glosses over the exact content of the happiness and suffering that are summed into the "welfare" quantities defining world A and world Z. In world A, each life with welfare 1,000,000 could, on one extreme, consist purely of (a) good experiences that sum in intensity to a level 1,000,000, or on the other, (b) good experiences summing to 1,000,000,000 minus bad experiences summing (in absolute value) to 99... (read more)
I don't call the happiness itself "slight," I call it "slightly more" than the suffering (edit: and also just slightly more than the happiness per person in world A). I acknowledge the happiness is tremendous. But it comes along with just barely less tremendous suffering. If that's not morally compelling to you, fine, but really the point is that there appears (to me at least) to be quite a strong moral distinction between 1,000,001 happiness minus 1,000,000 suffering, and 1 happiness.
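Put as arithmetic (my framing of the point above): write each life's welfare as good experiences minus bad, w = G − S. Then a welfare-1 life in world Z can hide very different contents:

$$w = G - S: \qquad 1 = 1 - 0 \quad\text{or}\quad 1 = 1{,}000{,}001 - 1{,}000{,}000$$

The total-welfare sum treats these as interchangeable, but the second reading packs enormous suffering into the same "barely net positive" number.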
I have a social constructivist view of technology - that is, I strongly believe that technology is a part of society, not an external force that acts on it. Ultimately, a technology's effects on a society depend on the interactions between that technology and other values, institutions, and technologies within that society. So for example, although genetic engineering may enable human gene editing, the specific ways in which humans use gene editing would depend on cultural attitudes and institutions regarding the technology.
How... (read more)
On encountering global priorities research (from my blog).
People who are new to a field usually listen to experienced experts. Of course, they don’t uncritically accept whatever they’re told. But they tend to feel that they need fairly strong reasons to dismiss the existing consensus.
But people who encounter global priorities research - the study of what actions would improve the world the most - often take a different approach. Many disagree with global priorities researchers’ rankings of causes, preferring a ranking of their own.
This... (read more)
Yeah, I agree that there are differences between different fields - e.g. physics and sociology - in this regard. I didn't want to go into details about that, however, since it would have been a bit of a distraction from the main subject (global priorities research).
I’ve recently been thinking about medieval alchemy as a metaphor for longtermist EA.
I think there’s a sense in which it was an extremely reasonable choice to study alchemy. The basic hope of alchemy was that by fiddling around in various ways with substances you had, you’d be able to turn them into other things which had various helpful properties. It would be a really big deal if humans were able to do this.
And it seems a priori pretty reasonable to expect that humanity could get way better at manipulating substances, because there was an established hist... (read more)
I'd like feedback on an idea if possible. I have a longer document with more detail that I'm working on but here's a short summary that sketches out the core idea/motivation:
Potential idea: hosting a competition/experiment to find the most convincing argument for donating to long-termist organisations
Recently, Professor Eric Schwitzgebel and Dr Fiery Cushman conducted a study to find the most convincing philosophical/logical argument for short-term causes. By 'philosophical/logical argument' I mean an argument that ... (read more)
This is an interesting idea. You might need to change the design a bit; my impression is that the experiment focused on getting people to donate vs not donating, whereas the concern with longtermism is more about prioritisation between different donation targets. Someone's decision to keep the money wouldn't necessarily mean they were being short-termist: they might be going to invest that money, or they might simply think that the (necessarily somewhat speculative) longtermist charities being offered were unlikely to improve long-term outcomes.
[epistemic status: musing]
When I consider one part of AI risk as 'things go really badly if you optimise straightforwardly for one goal', I occasionally think about the similarity to criticisms of market economies (aka critiques of 'capitalism').
I am a bit confused why this does not come up explicitly, but possibly I have just missed it, or am conceptually confused.
Some critics of market economies think this is exactly what the problem with market economies is: they should maximize what people want, but instead they maximize profit, and th... (read more)
I mused about something similar here - about corporations as dangerous optimization demons which will cause GCRs if left unchecked:
Not sure how fruitful it was.
For capitalism more generally, GPI also has "Alternatives to GDP" in their research agenda, presumably because the GDP measure is what the whole world is pretty much optimizing for, and creating a new measure might be really high value.