Shortform Content [Beta]

Buck's Shortform

Edited to add: I think that I phrased this post misleadingly; I meant to complain mostly about low quality criticism of EA rather than eg criticism of comments. Sorry to be so unclear. I suspect most commenters misunderstood me.

I think that EAs, especially on the EA Forum, are too welcoming to low quality criticism [EDIT: of EA]. I feel like an easy way to get lots of upvotes is to make lots of vague critical comments about how EA isn’t intellectually rigorous enough, or inclusive enough, or whatever. This makes me feel less enthusiastic about engaging wit... (read more)

Showing 3 of 14 replies

I pretty much agree with your OP. Regarding that post in particular, I am uncertain about whether it's a good or bad post. It's bad in the sense that its author doesn't seem to have a great grasp of longtermism, and the post basically doesn't move the conversation forward at all. It's good in the sense that it's engaging with an important question, and the author clearly put some effort into it. I don't know how to balance these considerations.

Max_Daniel (2h): I agree that post is low-quality in some sense (which is why I didn't upvote it), but my impression is that its central flaw is being misinformed, in a way that's fairly easy to identify. I'm more worried about criticism where it's not even clear how much I agree with the criticism, or where it's socially costly to argue against the criticism because of the way it has been framed. It also looks like the post got a fair number of downvotes, and that its karma is way lower than for other posts by the same author or on similar topics. So it actually seems to me the karma system is working well in that case. (Possibly there is an issue where "has a fair number of downvotes" on the EA Forum corresponds to "has zero karma" in fora with different voting norms/rules, so the former can appear too positive to someone more used to fora with the latter norm. Conversely, I used to be confused about why posts on the Alignment Forum that seemed great to me had more votes than karma score.)
Max_Daniel (2h): I agree with this as stated, though I'm not sure how much overlap there is between the things we consider low-quality criticism. (I can think of at least one example where I was mildly annoyed that something got a lot of upvotes, but it seems awkward to point to publicly.) I'm not so worried about becoming the target of low-quality criticism myself. I'm actually more worried about low-quality criticism crowding out higher-quality criticism. I can definitely think of instances where I wanted to say X but then was like "oh no, if I say X then people will lump this together with some other person saying nearby thing Y in a bad way, so I either need to be extra careful and explain that I'm not saying Y or shouldn't say X after all". I'm overall not super worried because I think the opposite failure mode, i.e. appearing too unwelcoming of criticism, is worse.
MichaelDickens's Shortform

"Are Ideas Getting Harder to Find?" (Bloom et al.) seems to me to suggest that ideas are actually surprisingly easy to find.

The paper looks at the difficulty of finding new ideas in a variety of fields. It finds that in all cases, effort on finding new ideas is growing exponentially over time, while new ideas are growing exponentially but at a lower rate. (For a summary, see Table 7 on page 31.) This is framed as a surprising and bad thing.

But it actually seems surprisingly good to me. My intuition is that the number of ideas should grow logarithmically wi... (read more)
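One way to make the comparison concrete (a sketch in my own notation; the constant growth rates are illustrative stand-ins for the paper's field-specific figures): if research effort grows as E(t) = e^(g_E t) and ideas grow as I(t) = e^(g_I t) with 0 < g_I < g_E, then eliminating t gives

$$I = E^{\,g_I / g_E},$$

i.e. ideas grow as a power of effort, which is faster than logarithmic growth in effort.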

Halffull's Shortform

Is there much EA work into tail risk from GMOs ruining crops or ecosystems?

If not, why not?

It's not on the 80k list of "other global issues", and doesn't come up on a quick search of Google or this forum, so I'd guess not. One reason might be that the scale isn't large enough; it seems much harder to get existential risk from GMOs than from, say, engineered pandemics.

Denise_Melchin's Shortform

[status: mostly sharing long-held feelings&intuitions, but have not exposed them to scrutiny before]

I feel disappointed in the focus on longtermism in the EA Community. This is not because of empirical views about e.g. the value of x-risk reduction, but because we seem to be doing cause prioritisation based on a fairly rare set of moral beliefs (people in the far future matter as much as people today), at the expense of cause prioritisation models based on other moral beliefs.

The way I see the potential of the EA community is by helping people to unde... (read more)

Showing 3 of 4 replies

I mostly share this sentiment. One concern I have: I think one must be very careful in developing cause prioritization tools that work with almost any value system. Optimizing for naively held moral views can cause net harm; Scott Alexander implied that terrorists might just be taking beliefs too seriously when those beliefs only work in an environment of epistemic learned helplessness.

One possible way to identify views reasonable enough to develop tools for is checking that they're consistent under some amount of reflection; another way could be che... (read more)

Denise_Melchin (2d): Yes, I completely agree; I was also thinking of non-utilitarian views when I said non-longtermist views. Although 'doing the most good' is implicitly about consequences, and I expect someone who wants to be the best virtue ethicist they can be will not find the EA community as valuable for helping them on that path as people who want to optimize for specific consequences (i.e. the most good) do. I would be very curious what a good community for that kind of person is, however, and what good tools for that path are. I agree that distinguishing between the desirability of different moral views is hardly doable in a principled manner, but even just looking at longtermism we have disagreements over whether longtermist views should be suffering-focussed or not, so there is already no one simple truth. I'd be really curious what others think about whether humanity collectively would be better off, according to most, if we all worked effectively towards our desired worlds, or not, since this feels like an important crux to me.
brb243 (4d): I think that thinking about longtermism enables people to feel empowered to solve problems somewhat beyond present reality, truly feeling the prestige/privilege/knowing-better of 'doing the most good'. Also, this may be a viewpoint mainly applicable to those who really do not have to worry about finances, though that is relative. Which links to my second point: some affluent persons enjoy speaking about innovative solutions, reflecting current power structures defined by high technology, among others. It would otherwise be hard to build a community of people feeling the prestige of being paid a little to do good, or of donating to marginally improve some of the current global institutions that cause the present problems. Or would it?
Thomas Kwa's Shortform

I'm worried about EA values being wrong because EAs are unrepresentative of humanity and reasoning from first principles is likely to go wrong somewhere. But naively deferring to "conventional" human values seems worse, for a variety of reasons:

  • There is no single "conventional morality", and it seems very difficult to compile a list of what every human culture thinks of as good, and not obvious how one would form a "weighted average" between these.
  • Most people don't think about morality much, so their beliefs are like
... (read more)

I have recently been thinking about the exact same thing, down to getting anthropologists to look into it! My thoughts on this were that interviewing anthropologists who have done fieldwork in different places is probably the more functional version of the idea. I have tried reading fairly random ethnographies to build better intuitions in this area, but did not find it as helpful as I was hoping, since they rarely discuss moral worldviews in as much detail as needed.

My current moral views seem to be something close to "reflected" preference utilitarianism... (read more)

Michael_Wiebe's Shortform

Will says:

in order to assess the value (or normative status) of a particular action we can in the first instance just look at the long-run effects of that action (that is, those after 1000 years), and then look at the short-run effects just to decide among those actions whose long-run effects are among the very best.

Is this not laughable? How could anyone think that "looking at the 1000+ year effects of an action" is workable?

Markus_Woltjer (18h): It's often laughable. I would think of it like this. Each action can be represented as a polynomial that gives its value as a function of time: v(t) = c1*t^n + c2*t^(n-1) + ... + c3*t + c4. I think of the value function of the decisions in my life as the sum of the individual value functions. With every decision I'm presented with multiple functions; I get to pick one, and its coefficients are basically added into my life's total value function.

Consider foresight to be the ability to predict the end behavior of v for large t. If t=1000 means nothing to you, then c1 is far less important to you than if t=1000 means a lot to you. Some people probably consciously ignore large t; for example, educated people and politicians sometimes make the argument (and many of them certainly believe) that t greater than their life expectancy doesn't matter. This is why the climate crisis has been so difficult to prioritize, especially for people in power who might not have ten years left to live. But also, foresight is an ability. A toddler has trouble considering the importance of t=0.003 (the next day), and because of that no coefficients except c4 matter. Resisting the entire tub of ice cream is impossible if you can't imagine a stomach ache.

It is unusual, probably even unnatural, to consider t=1000, but it is of course important. The largest t values we can imagine tell us the most about the coefficients for the high-degree terms in the polynomial. It is unusual for our choices to have effects on these coefficients, but some will, or some might, and those should be noticed, highlighted, etc. Until I learned the benefits of veganism, I had almost no consideration for high t values, and I was electrified by the short-term, medium-term, and especially long-term benefits such as avoiding a tipping point for the climate crisis. That was seven years ago and it's faded a little as I'm just passively supporting plant-based meats (consequences are sometime
Michael_Wiebe (15h): What is n? It seems all the work is being done by having n in the exponent.

I was thinking along the lines of Taylor polynomial approximations of functions. So actually this polynomial can have infinitely many terms, especially if t is unbounded, and n is just the degree of each term, representing the relationship between time and value for an action. And we choose the n to approximate v well, accepting that it is more important for the approximation to have correct end behavior, though many actions have flat end behaviors and end behavior is less certain. For instance, when considering the action of taking my kittens out for a walk, ... (read more)
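Restating the comment's polynomial picture as a limit (nothing here beyond the standard fact that the leading term dominates for large t; the notation follows the comment above):

$$v(t) = c_1 t^{n} + c_2 t^{n-1} + \dots + c_3 t + c_4, \qquad \lim_{t \to \infty} \frac{v(t)}{c_1 t^{n}} = 1 \quad (c_1 \neq 0),$$

so for sufficiently large t the comparison between two actions is dominated by their leading coefficients; caring about t = 1000 is, in this picture, caring about the high-degree terms.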

MichaelA's Shortform

Here I list all the EA-relevant books I've read (well, mainly listened to as audiobooks) since learning about EA, in roughly descending order of how useful I perceive/remember them being to me. I share this in case others might find it useful, as a supplement to other book recommendation lists. (I found Rob Wiblin, Nick Beckstead, and/or Luke Muehlhauser's lists very useful.) 

That said, this isn't exactly a recommendation list, because some of the factors making these books more or less useful to me won't generalise to most other people, and because I'm incl... (read more)

Prabhat Soni's Shortform

High impact career for Danish people: Influencing what will happen with Greenland

Climate change could get really bad. Let's imagine a world with 4 degrees of warming. This would probably mean mass migration of billions of people to Canada, Russia, Antarctica and Greenland.

Out of these, Canada and Russia will probably have fewer decisions to make since they already have large populations and will likely see a smooth transition into a billion+ people country. Antarctica could be promising to influence, but it will be difficult for a single effective altruist sin... (read more)

Showing 3 of 4 replies
Prabhat Soni (5d): Thanks, Ryan, for your comment! It seems like we've identified a crux here: what will the total number of people living in Greenland be in 2100, in a world with 4 degrees of warming? I have disagreements with some of your estimates. Large populations currently reside in places like India, China and Brazil. These currently non-drylands could be converted to drylands in the future (and also possibly desertified). Thus, the 35% figure could increase in the future. Drylands are categorised into {desert, arid, semi-arid, dry sub-humid}. It's only when a place is in the desert category that people seriously consider moving out (for reference, all of California falls in the arid or semi-arid categories). In the future, deserts could form a larger share of drylands, and less arid regions a smaller share. So you could have more than 10% of people from places called "drylands" leaving in the future. Yes, that is correct. But that is also a figure from 2019. A more relevant question would be how many migrants there would be in 2100. I think it's quite obvious that as the Earth warms, the number of climate migrants will increase. I don't really agree with the 5% estimate. Specifically for desertified lands, I would guess the percentage of people migrating to be significantly higher. This is a figure from 2020 and I don't think you can simply extrapolate it. After revising my estimates to something more sensible, I'm coming up with ~50M people in Greenland. So Greenland would be far from being a superpower. I'm hesitant to share my calculations because my confidence level for them is low; I wouldn't be surprised if the actual number were up to 2 orders of magnitude smaller or greater. A key uncertainty: does desertification of large regions imply that in-country / local migration is useless?
[Map: "The world, 4 degrees warmer", from Parag Khanna's book Connectography.]
RyanCarey (5d): I'm not sure you've understood how I'm calculating my figures, so let me show how we can set a really conservative upper bound for the number of people who would move to Greenland. Based on current numbers, 3.5% of world population are migrants, and 6% are in deserts. So that means less than 3.5/9.5 ≈ 37% of desert populations have migrated. Even if half of those had migrated because of the weather, that would be less than 20% of all desert populations. Moreover, even if people migrated uniformly according to land area, only 1.4% of migrants would move to Greenland (that's the fraction of land area occupied by Greenland). So an ultra-conservative upper bound for the number of people migrating to Greenland would be 1B × 0.37 × 0.2 × 0.014 ≈ 1M. So my initial status-quo estimate was 1e3, and my ultra-conservative estimate was 1e6. It seems pretty likely to me that the true figure will be 1e3-1e6, whereas 5e7 is certainly not a realistic estimate.
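The ultra-conservative bound above is easy to reproduce; here is a short sketch that just re-runs the same arithmetic with the figures quoted in the comment (the underlying population statistics are taken as given, not independently checked):

```python
# Reproducing the ultra-conservative upper bound from the comment above:
# 1B people living in deserts
# x 37% (upper bound on the share of desert populations that have migrated)
# x 20% (upper bound on the share migrating because of the weather)
# x 1.4% (Greenland's share of the world's land area)
upper_bound = 1e9 * 0.37 * 0.20 * 0.014
print(f"{upper_bound:.2e}")  # ~1.04e+06, i.e. roughly a million people
```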

Hmm this is interesting. I think I broadly agree with you. I think a key consideration is that humans have a good-ish track record of living/surviving  in deserts, and I would expect this to continue.

Ozzie Gooen's Shortform

EA seems to have been doing a pretty great job attracting top talent from the most prestigious universities. While we attract a minority of the total pool, I imagine we get some of the most altruistic+rational+agentic individuals. 

If this continues, it's worth noting that it could have significant repercussions for areas outside of EA: the ones we may be diverting this talent from. We may be diverting a significant fraction of the future "best and brightest" away from non-EA fields.

If this seems possible, it's especially important that we do a really, really good job making sure that we are giving them good advice. 

markuswoltjer@gmail.com's Shortform

My name is Markus Woltjer. I'm a computer scientist living in Portland, Oregon. I have an interest in developing a blue carbon capture-and-storage project. This project is still in its inception, but I am already looking for expertise in the following areas, starting mostly with remote research roles.

  • Botany and plant decomposition
  • Materials science
  • Environmental engineering

Please contact me here or at markuswoltjer@gmail.com if you're interested, and I will be happy to fill in more details and discuss whether your background and interests are aligned with the roles available.

Mati_Roy's Shortform

Is there a name for a moral framework where someone cares more about the moral harm they directly cause than other moral harm?

I feel like a consequentialist would care about the harm itself whether or not it was caused by them.

And a deontologist wouldn't act in a certain way even if it meant they would act that way less in the future.

Here's an example (it's just a toy example; let's not argue whether it's true or not).

A consequentialist might eat meat if they can use the saved resources to make 10 other people vegans.

A deontologis... (read more)

aysu's Shortform

I am relatively new to the community and am still getting acquainted with the shared knowledge and resources.

I have been wondering what the prevailing thoughts are regarding growing the EA community or growing the use of EA style thought frameworks. The latter is a bit imprecise, but, at a glance, it appears to me that having more organizations and media outlets communicate in a more impact-aware way may have a very high expected value.

What are people's thoughts on this problem? It likely fits into a meta-category of EA work, but lately I have been... (read more)

Solander's Shortform

I have noticed recently that while there's a large discussion about how to help the poorest people in the world, I've been unable to find any research simply asking the beneficiaries themselves what they think would help them most. Does anyone know more about the state of such research - whether it exists or not, and why?

It seems important to know what problems or even what interventions the poorest people in the world think would help them the most. There are local details that an EA working on global poverty from an ocean away simply doesn't... (read more)

I think this is one of the principles of GiveDirectly. I imagine that more complicated attempts at this could get pretty hairy (e.g. trying to get the local population to come up with large coordinated proposals like education reform), but it could be interesting.

dominicroser (9d): Here's one piece of research; it'd be wonderful if there were much more in this vein: https://forum.effectivealtruism.org/posts/mKGbeX5tQu4zshY4j/alice-redfern-moral-weights-in-the-developing-world
Michael_Wiebe's Shortform

What are the comparative statics for how uncertainty affects decisionmaking? How does a decisionmaker's behavior differ under some uncertainty compared to no uncertainty?

Consider a social planner problem where we make transfers to maximize total utility, given idiosyncratic shocks to endowments. There are two agents, 1 and 2. Agent 1's endowment is fixed (received with probability 1), while agent 2's endowment is either zero or double agent 1's, so agent 2 either gets nothing or twice as much as agent 1.

We choose a transfer to solve:
... (read more)
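For concreteness, here is one fully specified version of this problem. The log utility, the particular endowment values (1 for agent 1, 0 or 2 for agent 2), and the 50/50 shock probabilities are illustrative assumptions, not taken from the post:

$$\max_{\tau \in [0,1]} \; \log(1 - \tau) + \tfrac{1}{2}\log(\tau) + \tfrac{1}{2}\log(2 + \tau).$$

The no-uncertainty benchmark replaces agent 2's endowment with its mean of 1, i.e. $\max_{\tau} \log(1-\tau) + \log(1+\tau)$, whose optimum is $\tau = 0$; how far the optimal transfer moves away from 0 once the shock is introduced is exactly the comparative static being asked about.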

MichaelStJules (4d): It's possible in a given situation that we're willing to commit to a range of probabilities, e.g. p ∈ [a, b] (without committing to E[p] = (a+b)/2 or any other number), so that we can check the recommendations for each value of p (sensitivity analysis). I don't think maxmin utility follows, but it's one approach we can take. Yes, I think so. I'm not sure specifically, but I'd expect it to be more permissible and often allow multiple options for a given setup. I think the specific approach in that paper is like assuming that we only know the aggregate (not individual) utility function up to monotonic transformations, not even linear transformations, so that any action which is permissible under some degree of risk aversion with respect to aggregate utility is permissible generally. (We could also have uncertainty about individual utility/welfare functions, too, which makes things more complicated.)

I think we can justify ruling out all options the maximality rule rules out, although it's very permissive. Maybe we can put more structure on our uncertainty than it assumes. For example, we can talk about distributional properties for p without specifying an actual distribution for p, e.g. p is more likely to be between 0.8 and 0.9 than between 0.1 and 0.2, although I won't commit to a probability for either.
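As a toy illustration of that sensitivity-analysis idea, here is a sketch that commits only to p lying in [a, b] and reports the recommended transfer at each value of p. It reuses the illustrative planner setup above (log utility, endowments of 1 and 0-or-2), which is an assumption rather than anything specified in the thread:

```python
import numpy as np

def optimal_transfer(p, grid=np.linspace(0.0, 0.99, 991)):
    """Transfer from agent 1 (endowment 1) to agent 2, chosen before agent 2's
    endowment is realized: 0 with probability p, 2 with probability 1 - p.
    Log utility throughout, purely for illustration."""
    eps = 1e-12  # avoid log(0) at the edges of the grid
    expected_utility = (np.log(1 - grid + eps)
                        + p * np.log(grid + eps)        # state where agent 2 gets nothing
                        + (1 - p) * np.log(2 + grid))   # state where agent 2 gets 2
    return grid[np.argmax(expected_utility)]

# Sensitivity analysis: commit only to p being somewhere in [a, b],
# then check the recommendation at each value.
a, b = 0.2, 0.8
for p in np.linspace(a, b, 7):
    print(f"p = {p:.2f} -> optimal transfer = {optimal_transfer(p):.3f}")
```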

antimonyanthony's Shortform

The Repugnant Conclusion is worse than I thought

At the risk of belaboring the obvious to anyone who has considered this point before: The RC glosses over the exact content of happiness and suffering that are summed up to the quantities of “welfare” defining world A and world Z. In world A, each life with welfare 1,000,000 could, on one extreme, consist purely of (a) good experiences that sum in intensity to a level 1,000,000, or on the other, (b) good experiences summing to 1,000,000,000 minus bad experiences summing (in absolute value) to 99... (read more)

MichaelDickens (4d): It seems to me that you're kind of rigging this thought experiment when you define an amount of happiness that's greater than an amount of suffering, but you describe the happiness as "slight" and the suffering as "tremendous", even though the former is larger than the latter.

I don't call the happiness itself "slight," I call it "slightly more" than the suffering (edit: and also just slightly more than the happiness per person in world A). I acknowledge the happiness is tremendous. But it comes along with just barely less tremendous suffering. If that's not morally compelling to you, fine, but really the point is that there appears (to me at least) to be quite a strong moral distinction between 1,000,001 happiness minus 1,000,000 suffering, and 1 happiness.

evelynciara's Shortform

Social constructivism and AI

I have a social constructivist view of technology - that is, I strongly believe that technology is a part of society, not an external force that acts on it. Ultimately, a technology's effects on a society depend on the interactions between that technology and other values, institutions, and technologies within that society. So for example, although genetic engineering may enable human gene editing, the specific ways in which humans use gene editing would depend on cultural attitudes and institutions regarding the technology.

How... (read more)

Stefan_Schubert's Shortform

On encountering global priorities research (from my blog).


People who are new to a field usually listen to experienced experts. Of course, they don’t uncritically accept whatever they’re told. But they tend to feel that they need fairly strong reasons to dismiss the existing consensus.

But people who encounter global priorities research - the study of what actions would improve the world the most - often take a different approach. Many disagree with global priorities researchers’ rankings of causes, preferring a ranking of their own.

This... (read more)

Denise_Melchin (5d): I'm not sure I agree with this, so it is not obvious to me that there is anything special about GP research. But it depends on who you mean by 'people' and what your evidence is. The reference class of research also matters - I expect people are more willing to believe physicists, but less so sociologists.

Yeah, I agree that there are differences between different fields - e.g. physics and sociology - in this regard. I didn't want to go into details about that, however, since it would have been a bit of a distraction from the main subject (global priorities research).

Buck's Shortform

I’ve recently been thinking about medieval alchemy as a metaphor for longtermist EA.

I think there’s a sense in which it was an extremely reasonable choice to study alchemy. The basic hope of alchemy was that by fiddling around in various ways with substances you had, you’d be able to turn them into other things which had various helpful properties. It would be a really big deal if humans were able to do this.

And it seems a priori pretty reasonable to expect that humanity could get way better at manipulating substances, because there was an established hist... (read more)

So-Low Growth's Shortform

I'd like feedback on an idea if possible. I have a longer document with more detail that I'm working on but here's a short summary that sketches out the core idea/motivation:

Potential idea: hosting a competition/experiment to find the most convincing argument for donating to long-termist organisations

Brief summary

Recently, Professor Eric Schwitzgebel and Dr Fiery Cushman conducted a study to find the most convincing philosophical/logical argument for short-term causes. By ‘philosophical/logical argument’ I mean an argument that ... (read more)

This is an interesting idea. You might need to change the design a bit; my impression is that the experiment focused on getting people to donate vs not donating, whereas the concern with longtermism is more about prioritisation between different donation targets. Someone's decision to keep the money wouldn't necessarily mean they were being short-termist: they might be going to invest that money, or they might simply think that the (necessarily somewhat speculative) longtermist charities being offered were unlikely to improve long-term outcomes.

Denise_Melchin's Shortform

[epistemic status: musing]

When I consider one part of AI risk as 'things go really badly if you optimise straightforwardly for one goal' I occasionally think about the similarity to criticisms of market economies (aka critiques of 'capitalism').

I am a bit confused why this does not come up explicitly, but possibly I have just missed it, or am conceptually confused.

Some critiques of market economies hold that this is exactly what the problem with market economies is: they should maximize for what people want, but they maximize for profit instead, and th... (read more)

Showing 3 of 4 replies

I mused about something similar here - about corporations as dangerous optimization demons which will cause GCRs if left unchecked:

https://forum.effectivealtruism.org/posts/vy2QCTXfWhdiaGWTu/corporate-global-catastrophic-risks-c-gcrs-1

Not sure how fruitful it was.

For capitalism more generally, GPI also has "Alternatives to GDP" in their research agenda, presumably because the GDP measure is what the whole world is pretty much optimizing for, and creating a new measure might be really high value.

Denise_Melchin (6d): Thank you so much for the links! Possibly I was just being a bit blind. I was pretty excited about the Aligning Recommender Systems article as I had also been thinking about that, but only now managed to read it in full. I somehow had missed Scott's post. I'm not sure whether they quite get to the bottom of the issue though (though I am not sure whether there is a bottom of the issue; we are back to 'I feel like there is something more important here but I don't know what'). The Aligning Recommender Systems article discusses the direct relevance to more powerful AI alignment a fair bit, which I was very keen to see. I am slightly surprised that there is little discussion of the double layer of misaligned goals: first, Netflix does not recommend what users would truly want; second, it does that because it is trying to maximize profit. Although it is up for debate whether aligning 'recommender systems' to people's reflected preferences would actually bring in more money than just getting them addicted to the systems, which I doubt a bit. Your second paragraph feels like something interesting in the capitalism critiques: we already have plenty of experience with misalignment in market economies between profit maximization and what people truly want; are there important lessons we can learn from this?
David_Moss (6d): A similar analogy with the fossil fuel industry is mentioned by Stuart Russell (crediting Danny Hillis) here: https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/ It also seems that "things go really badly if you optimise straightforwardly for one goal" bears similarities to criticisms of central planning or utopianism in general though.