All of Owen_Cotton-Barratt's Comments + Replies

Some thoughts on David Roodman’s GWP model and its relation to AI timelines

I came in with roughly the view you describe as having had early on in the project, and I found this post extremely clear in laying out the further considerations that shifted you. Thanks!

A do-gooder's safari

Interesting idea!

I'm keen for the language around this to convey the correct vibe about the epistemic status of the framework: currently I think this is "here are some dimensions that I and some other people feel like are helpful for our thinking". But not "we have well-validated ways of measuring any of these things" nor "this is definitely the most helpful carving up in the vicinity" nor "this was demonstrated to be helpful for building a theory of change for intervention X which did verifiably useful things". I think the animal names/pictures are kind o... (read more)

Peterslattery · 2mo (+4): Thanks for the response Owen. I understand about the epistemic status. I imagine that I meet some new EA and I am trying to get to know them. After the standard "where did you hear about EA?" and "what cause areas are you most interested in?", I might want to ask about the sort of engagement they have with EA and doing good. At this point it would be useful to be able to reference the dimensions you have outlined and similar. I.e., 'So what sort of EA are you? How do you rate yourself on [abbreviation]?' As this example might suggest, I think that an abbreviation could make such conversations more likely to occur by making the dimensions you have outlined easier to recall and communicate, and by increasing the probability that they disseminate widely. I don't think that it is a high-priority thing to do, but I think that an EA/do-gooder personality test could be quite useful in the future for understanding differences between do-gooders (within and outside EA), connecting people to the right projects/causes, and building the right sorts of teams (i.e., with a balance across key dimensions). I know for example that Spencer Greenberg uses personality tests to help people determine fit for entrepreneurship, and we could have something similar.
What should we call the other problem of cluelessness?

I think this is a good point which I wasn't properly appreciating. It doesn't seem particularly worse for (2) than for (1), except insofar as terminology is more locked in for (1) than (2).

Of course, a possible advantage of "clueless" is that it strikes a self-deprecating tone; if we're worried about being perceived as arrogant then having the language err on the side of assigning blame to ourselves rather than the universe might be a small help.

What should we call the other problem of cluelessness?

I think that bare terms like "unpredictability" or particularly "uncertainty" are much too weak; they don't properly convey the degree of epistemic challenge, and hence don't pick out what's unusual about the problem situation that we're grappling with.

"Unforeseeability" is a bit stronger, but still seems rather too weak. I think "unknowability", "radical uncertainty", and "cluelessness" are all in the right ballpark for their connotations.

I do think "unknowability" for (2) and "absolute/total unknowability" for (1) is an interesting alternative. Using "unknowable" rather than "clueless" puts the emphasis on the decision situation rather than the agent; I'm not sure whether that's better.

Stefan_Schubert · 3mo (+2): Yeah, I agree that one would need to add some adjective (e.g. "total" or "radical") to several of these. "Unknowability" sounds good at first glance; I'd need to think about use cases. I see now that you made the agent-decision situation distinction that I also made above. I do think that "unknowable" putting an emphasis on the decision situation is to its advantage.
What should we call the other problem of cluelessness?

To me it sounds slightly odd to use the word "clueless" for (2), however, given the associations that word has (cf. Cambridge dictionary).

In everyday language I actually think this fits passably well. The dictionary gives the definition "having no knowledge of something". For (2) I feel like informally I'd be happy with someone saying that the problem is we have no knowledge of how our actions will turn out, so long as they clarified that they didn't mean absolutely no knowledge. Of course this isn't perfect; I'd prefer they said "basically no knowledge" i... (read more)

Stefan_Schubert · 3mo (+2): Yeah, I'm unsure. I think that the term "clueless" is usually used to refer to people who are incompetent (cf. the synonyms). (That's why they have no knowledge.) But in this case we don't lack knowledge because we're incompetent, but because the task at hand is hard. And one might consider using a term or phrase that implies that. But there are pros and cons of all candidates.
What should we call the other problem of cluelessness?

(1) is not a gradable concept - if we're clueless, then in Hilary Greaves' words, we "can never have even the faintest idea" which of two actions is better.

(2), on the other hand, is a gradable concept - it can be more or less difficult to find the best strategies. Potentially it would be good to have a term that is gradable, for that reason.

I appreciate you making this distinction. Although I find that it all the more makes me want to use one term (e.g. clueless) for (2), and a modified version (absolutely clueless, or totally clueless, or perhaps infinit... (read more)

What should we call the other problem of cluelessness?

One possibility is something relating to (un)predictability or (un)foreseeability. That has the advantage that it relates to forecasting. 

Hmm, I'm unsure whether the link to forecasting is more of an advantage or a disadvantage. It's suggestive of the idea that one deals with the problem by becoming better at forecasting, which I think is something which is helpful, but probably only a small minority of how we should address it.

Stefan_Schubert · 3mo (+2): I agree that that shouldn't be the main strategy. But my sense is that this issue isn't a disadvantage of using a term like "predictability" or a synonym. I think one advantage of such a term is that it relates to major areas of research, that many people know about. Another term is "uncertainty"; cf. "radical uncertainty".
What should we call the other problem of cluelessness?

Some alternatives in a similar vein:
- (1) = strong cluelessness / (2) = weak cluelessness
- (1) = total cluelessness / (2) = partial cluelessness

I guess I kind of like the word "practical" for (2), to point to the fact that it isn't the type of thing that will have a clean philosophical resolution.

Davidmanheim · 3mo (+2): I've mentioned in a different thread that we could refer to them as (1) aleatory versus (2) epistemic.
What should we call the other problem of cluelessness?

I suggest that (1) should be called "the problem of absolute cluelessness" and that (2) should be called "the practical problem of cluelessness".

When context is clear one could drop the adjective. My suspicion is that with time (1) will come to be regarded as a solved problem, and (2) will still want a lot of attention. I think it's fine/desirable if at that point it gets to use the pithier term of "cluelessness". I also think that it's probably good if (1) and (2) have names which make it clear that there's a link between them. I think there may be a small transition cost from current usage, but (a) there just isn't that much total use of the terms now, and (b) current usage seems inconsistent about whether it includes (2).

I agree that this distinction is important and that it would be good to have two terms for these different concepts.

I see the motivation for terms like "weak cluelessness" or "the practical problem of cluelessness". To me it sounds slightly odd to use the word "clueless" for (2), however, given the associations that word has (cf. Cambridge dictionary).

(1) is not a gradable concept - if we're clueless, then in Hilary Greaves' words, we "can never have even the faintest idea" which of two actions is better.

(2), on the other hand, is a gradable concept - it c... (read more)

jackmalde · 3mo (+5): Could also go for tractable and intractable cluelessness? Also I wonder if we should be distinguishing between empirical and moral cluelessness - with the former being about claims about consequences and the latter about fundamental ethical claims.

Owen_Cotton-Barratt · 3mo (+3): Some alternatives in a similar vein:
- (1) = strong cluelessness / (2) = weak cluelessness
- (1) = total cluelessness / (2) = partial cluelessness
I guess I kind of like the word "practical" for (2), to point to the fact that it isn't the type of thing that will have a clean philosophical resolution.
Anki deck for "Some key numbers that (almost) every EA should know"

Neat! Is there any easy way to read the content without using the Anki software?

Pablo · 3mo (+2): Yes, you can read the contents here. This is the org mode file I use to generate the Anki deck (with the Anki editor package), so it will always reflect the most recent version. (I've edited the original post to add this information.)

I imported them into RemNote where you can read all the cards. You can also quiz yourself on the questions using the queue functionality at the top.  Or here's a Google Doc.

If someone was interested in adding more facts to the deck, there are a bunch in these notes from The Precipice. (It's fairly easy to export from RemNote to Anki and vice versa, though formatting is sometimes a little broken.)

Linch · 3mo (+2): I'm also interested in this!

[Meta] Is it legitimate to ask people to upvote posts on this forum?

I agree. But I think it should be okay to present arguments for why the post might get fewer upvotes than it deserves.

As a clarification: I don't think "here are some good effects that would come out of getting lots of upvotes" would count as such an argument.

I am now feeling like the legitimate use cases for such arguments might be narrow enough, and their benefits small enough, that it might be better to have a norm that disallows them, for the sake of being a cleaner rule. Or maybe it should be okay to make arguments so long as you explicitly cancel any implicature that you're asking people to upvote? Confused about what's best here.

Concerns with ACE's Recent Behavior

I didn't downvote (because as you say it's providing relevant information), but I did have a negative reaction to the comment. I think the generator of that negative reaction is roughly: the vibe of the comment seems more like a political attempt to close down the conversation than an attempt to cooperatively engage. I'm reminded of "missing moods";  it seems like there's a legitimate position of "it would be great to have time to hash this out but unfortunately we find it super time consuming so we're not going to", but it would naturally come with a... (read more)

evhub · 5mo (+5): That's a great point; I agree with that.
"Good judgement" and its components

Yeah, my quick guess is that (as for many complex skills) g is very helpful, but that it's very possible to be high g without being very good at the thing I'm pointing at (partially because feedback loops are poor, so people haven't necessarily had a good training signal for improving).

Forget replaceability? (for ~community projects)

I guess I significantly agree with all of the above, and I do think it would have been reasonable for me to mention these considerations.  But since I think the considerations tend to blunt rather than solve the issues, and since I think the audience for my post will mostly be well aware of these considerations,  it still feels fine to me to have omitted mention of them? (I mean, I'm glad that they've come up in the comments.)

I guess I'm unsure whether there's an interesting disagreement here. 

MichaelA · 5mo (+6): Yeah, I think I'd agree that it's reasonable to either include or not include explicit mention of those considerations in this post, and that there's no major disagreement here. My original comment was not meant as criticism of this post, but rather as an extra idea - like "Maybe future efforts to move our community closer to having 'implicit impact markets without infrastructure', or to solve the problems that that solution is aimed at solving, should include explicit mention of those considerations?"
Forget replaceability? (for ~community projects)

Yeah, I totally agree that if you're much more sophisticated than your (potential) donors you want to do this kind of analysis. I don't think that applies in the case of what I was gesturing at with "~community projects", which is where I was making the case for implicit impact markets.

Assuming that the buyers in the market are sophisticated:

  1. in the straws case, they might say "we'll pay $6 for this output" and the straw org might think "$6 is nowhere close to covering our operating costs of $82,000" and close down
  2. I think too much work is being done by y
... (read more)
MichaelStJules · 6mo (+2): I'm guessing 2 is in response to the example I removed from my comment, roughly "starting a new equally cost-effective org working on the same thing as another org would be pointless and create waste". I agree that there could be efficiency improvements, but now we're asking how much, and whether that justifies the co-founders' opportunity costs and other costs. The impact of the charity now comes from a possibly only marginal increase in cost-effectiveness. That's a completely different and much harder analysis. I'm also more skeptical of the gains in cases where EA charities are already involved, since they are already aiming to maximize cost-effectiveness.
Forget replaceability? (for ~community projects)

This kind of externality should be accounted for by the market (although it might be that the modelling effectively happens in a distributed way rather than anyone thinking about it all).

So you might get VCs who become expert in judging when early-stage projects are a good bet. Then people thinking of starting projects can somewhat outsource the question to the VCs by asking "could we get funding for this?"

MichaelStJules · 6mo (+2): Hmm, I'm kind of skeptical. Suppose there's a group working on eliminating plastic straws. There's some value in doing that, but suppose that just the existence of the group takes attention away from more effective environmental interventions, to the point that it does more harm than good regardless of what (positive) price you can buy its impact for. Would a market ensure that group gets no funding and does no work? Would you need to allow negative prices? Maybe within a market of eliminating plastic waste, they would go out of business since there are much more cost-effective approaches, but maybe eliminating plastic waste in general is a distraction from climate change, so that whole market shouldn't exist. It sounds like VCs would need to make these funding-diversion externality judgements themselves, and it would be better if they could do them well.
Forget replaceability? (for ~community projects)

Moral trade is definitely relevant here. Moral trade basically deals with cases with fundamental-differences-in-values (as opposed to coordination issues from differences in available information etc.).

I haven't thought about this super carefully, but it seems like a nice property of impact markets is that they'll simultaneously handle the moral trade issues and the coordination issues. Like in the example of donors wishing to play donor-of-last-resort, it's ambiguous whether this desire is driven by irreconcilably different values or different empirical judgements about what's good.

Forget replaceability? (for ~community projects)

I agree that these considerations would blunt the coordination issues some.

So I think that a proposal for "Implicit impact markets without infrastructure" should probably include as one element a reminder for people to take these considerations into account. 

I guess I think that it should include that kind of reminder if it's particularly important to account for these things under an implicit impact markets set-up. But I don't think that; I think they're important to pay attention to all of the time, and I'm not in the business (in writing this post)... (read more)

MichaelA · 5mo (+5): Hmm, I don't think this seems quite right to me. I think I've basically never thought about moral uncertainty or epistemic humility when buying bread or getting a haircut, and I think that that's been fine. And I think in writing this post you're partly in the business of trying to resolve things like "donors of last resort" issues, and that that's one of the sorts of situations where explicitly remembering the ideas of moral uncertainty and epistemic humility is especially useful, and where explicitly remembering those ideas is one of the most useful things one can do. This seems right to me, but I don't think this really pushes against my suggestion much. I say this because I think the goals here relate to fixing certain problems, like "donors of last resort" issues, rather than thinking of what side dishes go best with (implicit or explicit) impact markets. So I think what matters is just how much value would be added by reminding people about moral uncertainty and epistemic humility when trying to help resolve those problems - even if implicit impact markets would make those reminders less helpful, I still think they'd be among the top 3-10 most helpful things. (I don't think I'd say this if we were talking about actual, explicit impact markets; I'm just saying it in relation to implicit impact markets without infrastructure.)
Forget replaceability? (for ~community projects)

Yeah, Shapley values are a particular instantiation of a way that you might think the implicit credit split would shake out. There are some theoretical arguments in favour of Shapley values, but I don't think the case is clear-cut. However in practice they're not going to be something we can calculate on-the-nose, so they're probably more helpful as a concept to gesture with.
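For readers unfamiliar with the concept: a Shapley value splits credit by averaging each contributor's marginal contribution over every order in which the group could have been assembled. A minimal sketch (my toy example, not from the thread; the two-party coalition and its impact numbers are hypothetical):

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Hypothetical: a funder and a charity produce nothing alone,
# but 10 units of impact together.
def v(coalition):
    return 10.0 if coalition == frozenset({"funder", "charity"}) else 0.0

print(shapley_values(["funder", "charity"], v))
# -> {'funder': 5.0, 'charity': 5.0}: both are essential, so credit splits evenly
```

In practice the value function is exactly the thing we can't pin down, which is why the thread treats Shapley values as a concept to gesture with rather than something to compute on-the-nose.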

Forget replaceability? (for ~community projects)

Of course "non-EA funding" will vary a lot in its counterfactual value. But roughly speaking I think that if you are pulling in money from places where it wouldn't have been so good, then on the implicit impact markets story you should get a fraction of the credit for that fundraising. Whether or not that's worth pursuing will vary case-to-case.

Basically I agree with Michael that it's worth considering but not always worth doing. Another way of looking at what's happening is that starting a project which might appeal to other donors creates a non-transferrable fundraising opportunity. Such opportunities should be evaluated, and sometimes pursued.

Forget replaceability? (for ~community projects)

I agree that in principle you could model all of this out explicitly, but it's the type of situation where I think explicit modelling can easily get you into a mess (because there are enough complicated effects that you can easily miss something which changes the answer), and also puts the cognitive work in the wrong part of the system (the job of funders is to work out what would be the best use of their resources; the job of the charities is to provide them with all relevant information to help them make the best decision).

I think impact markets (im... (read more)

MichaelStJules · 6mo (+2): Would impact markets be useful without people doing this kind of modeling? Would they be at risk of assuming away these externalities otherwise?
Everyday longtermism in practice

I like the jumping in! I think using vignettes as a starting point for discussion of norms has some promise.

In these cases, I imagine it being potentially fruitful to have more-discussion-per-vignette about both whether the idea captured is a good one (I think it's at least unclear in some of your examples) and how good it would be if the norm were universalised... we don't want to spend too much attention on promoting norms that, while positive, just aren't a very big deal.

JohnsonRamsaur · 6mo (+1): Thanks for reading and commenting! I agree, these are great considerations to take this framework further and help discover and refine what norms may be best to promote and spread.
Forget replaceability? (for ~community projects)

Default expectations of credit

Maybe we should try to set default expectations of how much credit for a project goes to different contributors? With the idea that not commenting is a tacit endorsement that the true credit split is probably in that ballpark (or at least that others can reasonably read it that way).

One simple suggestion might be four equal parts of credit: to founding/establishing the org and setting it in a good direction (including early funders and staff); to current org leadership; to current org staff; to current funders. I do expect subst... (read more)

Forget replaceability? (for ~community projects)

Inefficiencies from inconsistent estimates of value

Broadening from just considering donations, there's a worry that the community as a whole might be able to coordinate to get better outcomes than we're currently managing. For instance opinions about the value of earning to give vary quite a bit; here's a sketch to show how that can go wrong:

Alice and Beth could each go into direct work or into earning-to-give. We represent their options by plotting a point showing how much they would achieve on the relevant dimension for each option. The red and green poi
... (read more)
Why I prefer "Effective Altruism" to "Global Priorities"

Definitely didn't mean to shut down conversation! I felt like I had a strong feeling that it was not an option on the table (because of something like coherence reasons -- cf. my reply to Jonas -- not because it seemed like a bad or too-difficult idea). But I hadn't unpacked my feeling. I also wasn't sure whether I needed to, or whether when I posted everyone would say something like "oh, yeah, sure" and it would turn out to be a boring point. This was why I led with "I don't know how much of an outlier I am"; I was trying to invite people to let me know if this was a boring triviality after it was pointed out, or if it was worth trying to unpack.

P.S. I appreciate having what seemed bad about the phrasing pointed out.

Why I prefer "Effective Altruism" to "Global Priorities"

Hmm, no, I didn't mean something that feels like pessimism about coordination ability, but that (roughly speaking) the thing you get if you try to execute a "change the name of the movement" operation is not the same movement with a different name, but a different (albeit heavily overlapping) movement with the new name. And so it's better understood as a coordinated heavy switch to emphasising the new brand than it is just a renaming (although I think the truth is actually somewhere in the middle).

I don't think that's true if the name change is minor so that t... (read more)

Jonas Vollmer · 6mo (+3): Thanks, that makes sense!
Why I prefer "Effective Altruism" to "Global Priorities"

I don't know how much of an outlier I am, but I feel like "change the name of the movement" is mostly not an option on the table. Rather there's a question about how much (or when) to emphasise different labels, with the understanding that the different labels will necessarily refer to somewhat different things. (This is a different situation than an organisation considering a rebrand; in the movement case people who preferred the connotations of the older label are liable to just keep using it.)

Anyhow, I like your defence of "effective altruism", and I don't think it should be abandoned (while still thinking that there are some contexts where it gets used but something else might be better).

I think it's almost certainly possible to change the name of the movement if we want to – I think this would take an organization taking ownership of the project, hosting a well-organized Coordination Forum for the main stakeholders, and some good naming suggestions that lots of people can get behind.  Doing something ambitious like this might also generally improve the EA community's ability to coordinate around larger projects, which generally seems a useful capacity to develop.

That said, it would be a very effortful project, and should be carefully... (read more)

Name for the larger EA+adjacent ecosystem?

I agree that this is potentially an issue. I think it's (partially) mitigated the more it's used to refer to ideas rather than people, and the more it's seen to be a big (and high prestige) thing.

Name for the larger EA+adjacent ecosystem?

Maybe the obvious suggestion then is "new enlightenment"? I googled, and the term has some use already (e.g. in a talk by Pinker), but it feels pretty compatible with what you're gesturing at. I guess it would suggest a slightly broader conception (more likely to include people or groups not connected to the communities you named), but maybe that's good?

Florian Habermacher · 6mo (+3): I find "new enlightenment" very fitting. But I wonder whether it might at times be perceived as a not very humble name (that need not be a problem, but I wonder whether some, me included, might at times end up feeling uncomfortable calling ourselves part of it).
Name for the larger EA+adjacent ecosystem?

Thanks, makes sense. This makes me want to pull out the common characteristics of these different groups and use those as definitional (and perhaps realise we should include other groups we're not even paying attention to!), rather than treat it as a purely sociological clustering. Does that seem good?

Like maybe there's a theme about trying to take the world and our position in it seriously?

RyanCarey · 6mo (+4): Makes sense - I guess they're all taking an enlightenment-style worldview and pursuing intellectual progress on questions that matter over longer timescales...
Name for the larger EA+adjacent ecosystem?

Could you say a little more about the context(s) where a name seems useful?

(I think it's often easier to think through what's wanted from a name when you understand the use case, and sometimes when you try to do this you realise it was a slightly different thing that you really wanted to name anyway.)

RyanCarey · 6mo (+8): TBH, it's a question that popped into mind from background consciousness, but I can think of many possible applications:
- helping people in various parts of the EA-adjacent ecosystem know about the other parts, which they may be better-suited to helping
- helping people in various parts of this ecosystem understand what thinking (or doing) has already been done in other parts of the ecosystem
- building kinship between parts of the ecosystem
- academically studying the overall ecosystem - why have these similar movements sprung up at a similar time?
- planning for which parts are comparatively advantaged at what different types of tasks
Should I transition from economics to AI research?

Note that I think that the mechanisms I describe aren't specific to economics, but cover academic research generally -- and will also include most of how most AI safety researchers (even those not in academia) will have impact.

There are potentially major crux moments around AI, so there's also the potential to do an excellent job engineering real transformative systems to be safe at some point (but most AI safety researchers won't be doing that directly). I guess that perhaps the indirect routes to impact for AI safety might feel more exciting because... (read more)

Should I transition from economics to AI research?

Finally, I imagine quant trading is a non-starter for a longtermist who is succeeding in academic research. As a community, suppose we already have significant ongoing funding from 3 or so of the world's 3k billionaires. What good is an extra one-millionaire? Almost anyone's comparative advantage is more likely to lie in spending the money, but even more so if one can do so within academic research.

It seems quite wrong to me to present this as so clear-cut. I think if we don't get major extra funding the professional longtermist community might plateau at ... (read more)

EAguy · 7mo (+3): Thanks a lot for your comment. What you describe is a different route to impact than what I had in mind, but I suppose I could see myself do this, even though it sounds less exciting than making a difference by contributing directly to making AI safer.
Alternatives to donor lotteries

I guess I wouldn't recommend the donor lottery to people who wouldn't be happy entering a regular lottery for their charitable giving (but I would usually recommend them to be happy with that regular lottery!).

Btw, I'm now understanding your suggestions as not really alternatives to the donor lottery, since I don't think you buy into its premises, but alternatives to e.g. EA Funds.

(In support of the premise of respecting individual autonomy about where to allocate money: I think that making requests to pool money in a way that rich donors expect to lose co... (read more)

I guess I wouldn't recommend the donor lottery to people who wouldn't be happy entering a regular lottery for their charitable giving

Strong +1.

If I won a donor lottery, I would consider myself to have no obligation whatsoever towards the other lottery participants, and I think many other lottery participants feel the same way. So it's potentially quite bad if some participants are thinking of me  as an "allocator" of their money. To the extent there is ambiguity in the current setup, it seems important to try to eliminate that.

HaydnBelfield · 7mo (+1): Your policy seems reasonable. Although I wonder if the analogy with a regular lottery might risk confusing people. When one thinks of "entering a regular lottery for charitable giving", one might think of additional money - money that counterfactually wouldn't have gone to charity. But that's not true of donor lotteries - there is no additional money. On your second point: "making requests to pool money in a way that rich donors expect to lose control" describes the EA Funds, which I don't think are a scam. In fact, the EA Funds pool money in such a way that donors are certain to lose control.
Alternatives to donor lotteries

By dominant action I mean "is ~at least as good as other actions on ~every dimension, and better on at least one dimension".

My confusion is something like: there's no new money out there! It's a group of donors deciding to give individually or give collectively. So the perspective of "what will lead to optimal allocation of resources at the group level?" is the right one.

I don't think donor lotteries are primarily about collective giving. As a donor lottery entrant, I'd be just as happy giving $5k for a 5% chance of controlling a $100k pot of pooled winning... (read more)
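The "just as happy" claim rests on the lottery being expected-value-neutral: a $5k entry buys a 5% chance of directing a $100k pot, so the expected amount of money the donor directs is unchanged. A quick simulation (mine, purely illustrative; the figures echo the $5k/$100k example above):

```python
import random

random.seed(0)

STAKE = 5_000    # this donor's contribution
POT = 100_000    # total pot; win probability = STAKE / POT = 5%

def money_directed():
    """Amount this donor ends up allocating in one run of the lottery."""
    return POT if random.random() < STAKE / POT else 0

runs = 200_000
mean = sum(money_directed() for _ in range(runs)) / runs
print(round(mean))  # close to 5000: expected money directed equals the stake
```

So in expectation nothing changes about how much money the donor directs; the gain is that the research effort of deciding where it goes is concentrated on a single winner instead of being duplicated across every entrant.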

HaydnBelfield · 7mo (+3): I'm sure you would be just as happy entering a regular lottery - you're one of the few people that could approach the ideal I mentioned of the "perfect rational maximising Homo economicus"! For us lesser mortals though, there are two reasons we might be queasy about entering a regular lottery. First, if we're cautious/risk-sensitive - if we have a bias towards our donations being likely to do good. We might not feel comfortable being risk-neutral and just doing the expected value calculation. Second, if we're impatient/time-sensitive - for example if we believe there's a particular window for donations open now that would not be open if we waited several years to win the lottery. That's about approaching it as a regular lottery. But again I really don't think we should be approaching these systems as matters just for individual donors. We've moved so far away from the "just maximise the impact of your own particular donation" perspective in other parts of EA! It's not just a matter for individuals - we as a community, through institutions like CEA, are supporting (logistically and through approval/sanction) some particular donor pooling systems and not others. It's worth considering what dynamics we could be reinforcing, and whether alternatives might be better. On the benefits of pooling, I quite agree about the time:donation size ratio. As I said: "Donor pooling has several advantages. First, it saves everyone's time. There are also gains from specialisation - 1 allocator spending 50 hours researching the best opportunity will likely produce better results than 50 donors spending 1 hour. Second, there are opportunities that are only available to an allocator with a large pool. Charities are more willing to provide information and spend time on discussions." If you've got a $5k donation, it's not worth spending as much time on - so maybe you should just donate to a pool with a predetermined allocator(s), e.g. the EA Funds.
Alternatives to donor lotteries

I think your analysis of the alternatives is mostly from the perspective of "what will lead to optimal allocation of resources at the group level?"

But the strongest case for donor lotteries, in my view, isn't in these terms at all. Rather, it's that entering a lottery is often a dominant action from the perspective of the individual donor (if most other things they would consider giving to don't exhibit noticeably diminishing returns over the amount they are attempting to get in the lottery). The winner of a lottery need not be the allocator for the money;... (read more)
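The expected-value arithmetic behind this dominance claim can be sketched in a toy model (the numbers match the $5k/$100k example above; the impact functions are illustrative assumptions, not from the original comment):

```python
# Toy model for the dominance argument: a donor with $5k compares giving
# directly against entering a donor lottery with a 5% chance of controlling
# a $100k pot. The impact functions below are illustrative assumptions.

donation = 5_000
pot = 100_000
win_prob = donation / pot  # fair lottery: 0.05

def linear_impact(d):
    # No diminishing returns over the pot size.
    return float(d)

def concave_impact(d):
    # Noticeably diminishing returns (square root), for contrast.
    return d ** 0.5

direct_linear = linear_impact(donation)
lottery_linear = win_prob * linear_impact(pot)

direct_concave = concave_impact(donation)
lottery_concave = win_prob * concave_impact(pot)

# With linear returns, expected impact is identical (5000 vs 5000), so the
# lottery weakly dominates once you add in the saved research time on a
# loss and better-researched allocation on a win. With strongly diminishing
# returns the comparison flips, matching the parenthetical caveat above.
```

Under linear returns the lottery costs nothing in expectation, which is why the case for entering doesn't depend on any group-level coordination story; the diminishing-returns case shows where that argument stops applying.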

HaydnBelfield (7mo): Thanks for your comment. I'm not entirely sure I understand what you mean by dominant action, so if you don't mind saying more about that I'd appreciate it. My confusion is something like: there's no new money out there! It's a group of donors deciding to give individually or give collectively. So the perspective of "what will lead to optimal allocation of resources at the group level?" is the right one. Even if people are taking individual actions comparing 'donate to x directly' or 'donate to a lottery, then to x', those individual decisions create a collective institution, for which the question of group optimality is relevant. Also, the EA community (+CEA) is not just endorsing this system, it's providing a lot of logistical support. So the questions of what its effects are and how we should structure it are key ones. On another note, I don't know enough about game theory to phrase this intuition correctly, but something seems off about the suggestion that it's dominant for each of the donors. E.g. if there are 10 donors in a pool, only one of them is going to be selected. They can't all 'win'. Feels a bit like defect being dominant in a prisoner's dilemma. But again, I could be misunderstanding. My understanding is that people selected to allocate the pool in the past haven't tended to delegate that allocation power. And indeed, if you're strongly expecting to do so, why not just give the allocation power to that person beforehand, either over your individual donation (e.g. through an EA Fund) or over a pool? Why go through the lottery stage?
Everyday Longtermism

I spent a little while thinking about this. My guess is that of the activities I list:

  • Alice and Bob's efforts look comparable to donating (in external benefit/effort) when the longtermist portfolio is around $100B-$1T/year
  • Clara's efforts looks comparable to donating when the longtermist portfolio is around $1B-$10B/year
  • Diya's efforts look comparable to donating when the longtermist portfolio is around $10B-$100B/year
  • Elmo's efforts are harder to evaluate because they're closer to directly trying to grow longtermist support, so the value diminishes as the existin
... (read more)
AGB's Shortform

One argument goes via something like the reference class of global autopoietic information-processing systems: life has persisted since it started several billion years ago; multicellular life similarly; sexual selection similarly. Sure, species go extinct when they're outcompeted, but the larger systems they're part of have only continued to thrive.

The right reference class (on this story) is not "humanity as a mammalian species" but "information-based civilization as the next step in faster evolution". Then we might be quite optimistic about civilization in some meaningful sense continuing indefinitely (though perhaps not about particular institutions or things that are recognisably human doing so).

Max_Daniel (9mo): If I understand you correctly, the argument is not "autopoietic systems have persisted for billions of years" but more specifically "so far each new 'type' of such systems has persisted, so we should expect the most recent new type of 'information-based civilization' to persist as well". This is an interesting argument I hadn't considered in this form. (I think it's interesting because the case that it talks about a morally relevant long future is stronger than for the simple appeal to all autopoietic systems as a reference class. The latter includes many things that are so weird - like eusocial insects, asexually reproducing organisms, and potentially even non-living systems like autocatalytic chemical reactions - that the argument seems quite vulnerable to the objection that knowing that "some kind of autopoietic system will be around for billions of years" isn't that relevant. We arguably care about something that, while more general than current values or humans as a biological species, is more narrow than that. [Tbc, I think there are non-crazy views that care at least somewhat about basically all autopoietic systems, but my impression is that the standard justification for longtermism doesn't want to commit itself to such views.]) However, I have some worries about survivorship bias: if there was a "failed major transition in evolution", would we know about it? Like, could it be that 2 billion years ago organisms started doing sphexual selection (a hypothetical form of reproduction that's as different from previous asexual reproduction as sexual reproduction, but also different from the latter) but that this type of reproduction died out after 1,000 years - and similarly for sphexxual selection, sphexxxual selection, ...? Such that with full knowledge we'd conclude the reverse of your conclusion above, i.e. "almost all new types of autopoietic systems died out soon, so we should expect information-based civilization to die out soon as well"? (FWIW
AGB's Shortform

Some fixed models also support macroscopic probabilities of indefinite survival: e.g. if in each generation each individual has a number of descendants drawn from a Poisson distribution with parameter 1.1, then there's a positive probability of extinction in each generation, but these probabilities diminish fast enough (as the population gets enormous) that if you make it through an initial rocky period you're pretty much safe.
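This is a standard Galton-Watson branching-process calculation, and the survival probability can be computed numerically (the code is an illustrative sketch, not from the original comment):

```python
import math

# Galton-Watson branching process with Poisson(1.1) offspring per individual.
# The extinction probability q, starting from a single individual, is the
# smallest fixed point of the probability generating function:
#     q = exp(lam * (q - 1))
# Iterating from q = 0 converges monotonically to that smallest root.

def extinction_probability(lam, iters=10_000):
    q = 0.0
    for _ in range(iters):
        q = math.exp(lam * (q - 1.0))
    return q

q = extinction_probability(1.1)
survival = 1.0 - q
# For lam = 1.1, q is roughly 0.82, so a single lineage has a macroscopic
# (~18%) chance of surviving forever; a large population of size n faces
# per-generation extinction risk that shrinks roughly like q**n, which is
# why surviving the initial rocky period makes you "pretty much safe".
```

The "too optimistic" caveat in the next paragraph corresponds to this model's assumption that individuals reproduce independently, with no correlated shocks hitting the whole generation at once.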

That model is clearly too optimistic because it doesn't admit crises with correlated problems across all the individuals in a generation. But then there'... (read more)

Blueprints (& lenses) for longtermist decision-making

My primary blueprint is as follows:

I want the world in 30 years time to be in as good a state as it can be in order to face whatever challenges that will come next.

I like this! I sometimes use a perspective which is pretty close (though often think about 50 years rather than 30 years, and hold it in conjunction with "what are the challenges we might need to face in the next 50 years?"). I think 30 vs 50 years is a kind-of interesting question. I've thought about 50 because if I imagine e.g. that we're going to face critical junctures with the development o... (read more)

Everyday Longtermism

I appreciate the pushback!

I have two different responses (somewhat in tension with each other):

  1. Finding "everyday" things to do will necessitate identifying what's good to do in various situations which aren't the highest-value activity an individual can be undertaking
    • This is an important part of deepening the cultural understanding of longtermism, rather than have all of the discussion be about what's good to do in a particular set of activities that's had strong selection pressure on it
      • This is also important for giving people inroads to be able to practic
... (read more)
MichaelA (9mo): Ah, your first point makes me realise that at times I mistook the purpose of this "everyday longtermism" idea/project as more similar to finding Task Ys [] than it really is. I now remember that you didn't really frame this as "What can even 'regular people' do, even if they're not in key positions or at key junctures?" (If that was the framing, I might be more inclined to emphasise donating effectively, as well as things like voting effectively - not just for politicians with good characters - and meeting with politicians to advocate for effective policies.) Instead, I think you're talking about what anyone can do (including but not limited to very dedicated and talented people) in "everyday situations", perhaps alongside other, more effective actions. I think at times I was aware of that, but at other times I forgot it. That's probably just on me, rather than an issue with the clarity of this post or project. But perhaps misinterpretations along those lines are a failure mode to look out for and make extra efforts to prevent? --- As for concrete examples, off the top of my head, the key thing is just focusing more on donating more and more effectively. This could also include finding ways to earn or save more money. I think those actions are accessible to large numbers of people, would remain useful at scale (though with diminishing returns, of course), and intersect with lots of everyday situations (e.g., lots of everyday situations could allow opportunities to save money, or to spend less time on X in order to spend more time working out where to donate). To be somewhat concrete: in a scenario with 5 million longtermists, if we choose a somewhat typical teacher who wants to make the world better, I think they'd do more good by focusing a bit more on donating more and more effectively than by focusing a bit more on trying to cause the
Everyday Longtermism

Thanks, I agree with both of those points.

Everyday Longtermism

I really appreciate you highlighting these connections with other pieces of thinking -- a better version of my post would have included more of this kind of thing.

Everyday Longtermism

Some further suggestions:

  1. Be more cooperative. (There are arguments about increasing cooperation, especially from people working on reducing S-risks, but I couldn't find any suitable resource in a brief search)
  2. Take a strong stance against narrow moral circles.
  3. Have a good pitch prepared about longtermism and EA broadly. Balance confidence with adequate uncertainty.
  4. Have a well-structured methodology for getting interested acquaintances more involved with EA.
  5. Help friends in EA/longtermism more.
  6. Strengthen relationships with friends who have a high potential to
... (read more)
EdoArad (9mo): Gidon Kadosh, from EA Israel, is drafting a post with a suggested pitch for EA :)
Everyday Longtermism

I think that the suggestions here, and most of the arguments, should apply to "Everyday EA", which isn't necessarily longtermist. I'd be interested in your thoughts about where exactly we should make a distinction between everyday longtermist actions and non-longtermist everyday actions.

I agree that quite a bit of the content seems not to be longtermist-specific. But I was approaching it from a longtermist perspective (where I think the motivation is particularly strong), and I haven't thought it through so carefully from other angles.

I think the k... (read more)

EdoArad (9mo): Hmm. There are many studies on "friend of a friend" relationships (say this [] on how happiness propagates through the friendship network). I think it would be interesting to research how some moral behaviors or beliefs propagate through friendship networks (I'd be surprised if there isn't a study on the effects of a transition to a vegetarian diet, say). Once we have a reasonable model of how that works, we could make a basic analysis of the impact of such daily actions. (Although I expect some non-linear effects that would make this very complicated.)
Good altruistic decision-making as a deep basin of attraction in meme-space

Yes, that's the kind of thing I had in the back of my mind as I wrote that.

I guess I actually think:

  • On average moving people further into the basin should lead to more useful work
  • Probably we can identify some regions/interventions where this is predictably not the case
    • It's unclear how common such regions are
Good altruistic decision-making as a deep basin of attraction in meme-space

I have a sense that a large part of the success of scientific norms comes down to their utility being immediately visible.

I agree with this. I don't think science has the attractor property I was discussing, but it has this other attraction of being visibly useful (which is even better). I was trying to use science as an example of the self-correction mechanism.

Or perhaps I am having a semantic confusion: is science self-propagating in that scientists, once cultivated, go on to cultivate others?

Yes, this is the sense of self-propagating that I intended.

Good altruistic decision-making as a deep basin of attraction in meme-space

In my words, what you've done is point out that approximate-consequentialism + large-scale preferences is an attractor.

I think that this is a fair summary of my first point (it also needs enough truth seeking to realise that spreading the approach is valuable). It doesn't really speak to the point about being self-correcting/improving.

I'm not trying to claim that it's obviously the strongest memeplex in the long term. I'm saying that it has some particular strengths (which make me more optimistic than before I was aware of those strengths).

I think anoth... (read more)