All of Alex HT's Comments + Replies

We're Redwood Research, we do applied alignment research, AMA

What factors do you think would have to be in place for some other people to set up a similar but different organisation in 5 years' time?

I imagine this is mainly about the skills and experience of the team, but I'm also interested in other things if you think they're relevant

6Buck2moI think the main skillsets required to set up organizations like this are:
  • Generic competence related to setting up any organization--you need to talk to funders, find office space, fill out lots of IRS forms, decide on a compensation policy, make a website, and so on.
  • Ability to lead relevant research. This requires knowledge of running ML research, knowledge of alignment, and management aptitude.
  • Some way of getting a team, unless you want to start the org out pretty small (which is potentially the right strategy).
  • It's really helpful to have a bunch of contacts in EA. For example, I think it's been really helpful for EA that I spent a few years doing lots of outreach stuff for MIRI, because it means I know a bunch of people who can potentially be recruited or give us advice.
Of course, if you had some of these properties but not the others, many people in EA (eg me) would be very motivated to help you out, by perhaps introducing you to cofounders or helping you with parts you were less experienced with. People who wanted to start a Redwood competitor should plausibly consider working on an alignment research team somewhere (preferably leading it) and then leaving to start their own team. We'd certainly be happy to host people who had that aspiration (though we'd think that such people should consider the possibility of continuing to host their research inside Redwood instead of leaving).
We're Redwood Research, we do applied alignment research, AMA

This looks brilliant, and I want to strong-strong upvote!

What do you foresee as your biggest bottlenecks or obstacles in the next 5 years? Eg. finding people with a certain skillset, or just not being able to hire quickly while preserving good culture.

Thanks for the kind words!

Our biggest bottlenecks are probably going to be some combination of:

  • Difficulty hiring people who are good at some combination of leading ML research projects, executing on ML research, and reasoning through questions about how to best attack prosaic alignment problems with applied research.
  • A lack of sufficiently compelling applied research available, as a result of theory not being well developed enough.
  • Difficulty with making the organization remain functional and coordinated as it scales.
Honoring Petrov Day on the EA Forum: 2021

What if LessWrong is taken down for another reason? Eg. the organisers of this game/exercise want to imitate the situation Petrov was in, so they create some kind of false alarm

4Peter Wildeford2moLast year the site looked very obviously nuked. If I see that situation, I will retaliate. If I see some other situation, I will use my best judgement.
Honoring Petrov Day on the EA Forum: 2021

An obvious question which I'm keen to hear people's thoughts on - does MAD work here? Specifically, does it make sense for the EA forum users with launch codes to commit to a retaliatory attack? The obvious case for it is deterrence. The obvious counterarguments are that the Forum could go down for a reason other than a strike from LessWrong, and that once the Forum is down, it doesn't help us to take down LW (though this type of situation might be regular enough that future credibility makes it worth it).

 

Though of course it would be really bad for us to have to take down LW, and we really don't want to. And I imagine most of us trust the 100 LW users with codes not to use them :)

I know we're trying to remember when the US and USSR had their weapons pointed at each other but it feels more like the North and South islands of New Zealand are trying to decide whether to nuke each other!

Edit: Not even something so violent - just temporarily inconvenience each other

The question is whether precommitment would actually change behavior. In this case, anyone shutting down either site is effectively playing nihilist, and doesn't care, so it shouldn't.

In fact, if it does anything, it would be destabilizing - if "they" commit to pushing the button if "we" do, they are saying they aren't committed to minimizing damage  overall, which should make us question whether we're actually on the same side. (And this is a large part of why MAD only works if you are both selfish, and scared of losing.)

The importance of optimizing the first few weeks of uni for EA groups

This is great!  I'm tentatively interested in groups trying outreach slightly before the start of term. It seems like there's a discontinuous increase in people's opportunity cost when they arrive at university - suddenly there are loads more cool clubs and people vying for their attention. Currently, EA groups are mixed in with this crowd of stuff. 

One way this could look is running a 1-2 week residential course for offer holders the summer before they start at university (a bit like SPARC or Uncommon Sense).  

To see if this is something a ... (read more)

2PeterSlattery4moSame. Useful also from an org impact perspective. E.g., over x people have viewed our posts (might be combined with website views etc)
2NunoSempere4moSame.
3Darius_M4moAgreed, I'd love this feature! I also frequently rely on pageview statistics [https://pageviews.toolforge.org/] to prioritize which Wikipedia articles to improve.
Towards a Weaker Longtermism

I tentatively believe (ii), depending on some definitions. I'm somewhat surprised to see Ben and Darius implying it's a really weird view, which makes me wonder what I'm missing.

I don't want the EA community to stop working on all non-longtermist things. But that's because I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community; I don't mean indirect effects more broadly in the sense of 'better health in poor countries' --> '... (read more)

9Darius_M4moThis may be the crux - I would not count a ~ 1000x multiplier as anywhere near "astronomical" and should probably have made this clearer in my original comment. Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x. All my comment was meant to say is that it seems highly implausible that something like such a 10^30x multiplier also applies to claim (ii), regarding the expected cost-effectiveness differences of long-term targeted versus near-term targeted interventions. It may cause significant confusion if the term "astronomical" is used in one context to refer to a 10^30x multiplier and in another context to a 1000x multiplier.
How Do AI Timelines Affect Giving Now vs. Later?

Nice, thanks for these thoughts.

But there's no way to save up labor to be used later, except in the sense that you can convert labor into capital and then back into labor (although these conversions might not be efficient, e.g., if you can't find enough talented people to do the work you want). So the tradeoff with labor is that you have to choose what to prioritize. This question is more about traditional cause prioritization than about giving now vs. later. 

Ah sorry I think I was unclear. I meant 'capacity-building' in the narrow sense of 'getting m... (read more)

3MichaelDickens4moI think we are falling for the double illusion of transparency [https://www.lesswrong.com/posts/sBBGxdvhKcppQWZZE/double-illusion-of-transparency] : I misunderstood you, and the thing I thought you were saying was even further off than what you thought I thought you were saying. I wasn't even thinking about capacity-building labor as analogous to investment. But now I think I see what you're saying, and the question of laboring on capacity vs. direct value does seem analogous to spending vs. investing money. At a high level, you can probably model labor in the same way as I describe in OP: you spend some amount of labor on direct research, and the rest on capacity-building efforts that increase the capacity for doing labor in the future. So you can take the model as is and just change some numbers. Example: If you take the model in OP and assume we currently have an expected (median) 1% of required labor capacity, a rate of return on capacity-building of 20%, and a median AGI date of 2050, then the model recommends exclusively capacity-building until 2050, then spending about 30% of each decade's labor on direct research. One complication is that this super-easy model treats labor as something that only exists in the present. But in reality, if you have one laborer, that person can work now and can also continue working for some number of decades. The super-easy model assumes that any labor spent on research immediately disappears, when it would be more accurate to say that research labor earns a 0% return (or let's say a -3% return, to account for people retiring or quitting) while capacity-building labor earns a 20% return (or whatever the number is). This complication is kind of hard to wrap my head around, but I think I can model it with a small change to my program, changing the line in run_agi_spending that reads capital *= (1 - spending_schedule[y]) * (1 + self.investment_return)**10 to research_return = -0.03 capital *= spending_sche
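The quoted code change above is cut off mid-line. As a rough standalone sketch of the idea Michael describes (my own illustration, not his actual program; the function name, the decade schedule, and the -3%/+20% figures are assumptions taken from the comment), the labor version of the model might look like this:

    # Toy sketch (illustrative only): labor spent on direct research persists but
    # "decays" at about -3%/year (people retiring or quitting), while labor spent on
    # capacity-building compounds at about 20%/year. One schedule entry per decade.
    def simulate_labor_capacity(research_shares, research_return=-0.03, capacity_return=0.20):
        capacity = 1.0  # current labor capacity, in arbitrary units
        for share in research_shares:
            research_pool = capacity * share
            building_pool = capacity * (1 - share)
            # both pools carry over into the next decade, but only one compounds
            capacity = (research_pool * (1 + research_return) ** 10
                        + building_pool * (1 + capacity_return) ** 10)
        return capacity

    # Example: capacity-build exclusively for three decades, then spend ~30% of each
    # decade's labor on direct research, roughly the policy described in the comment.
    print(simulate_labor_capacity([0.0, 0.0, 0.0, 0.3, 0.3, 0.3]))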
How Do AI Timelines Affect Giving Now vs. Later?

This is cool, thanks for posting :) How do you think this generalises to a situation where labor is the key resource rather than money?

I'm a bit more interested in the question 'how much longtermist labor should be directed towards capacity-building vs. 'direct' work (eg. technical AIS research)?' than the question 'how much longtermist money should be directed towards spending now vs. investing to save later?'

I think this is mainly because longtermism, x-risk, and AIS seem to be bumping up against the labor constraint much more than the money constraint. ... (read more)

2MichaelDickens4moThat's an interesting question, and I agree with your reasoning on why it's important. My off-the-cuff thoughts: Labor tradeoffs don't work in the same way as capital tradoffs because there's no temporal element. With capital, you can spend it now or later, and if you spend later, you get to spend more of it. But there's no way to save up labor to be used later, except in the sense that you can convert labor into capital and then back into labor (although these conversions might not be efficient, e.g., if you can't find enough talented people to do the work you want). So the tradeoff with labor is that you have to choose what to prioritize. This question is more about traditional cause prioritization than about giving now vs. later. This is something EAs have already written a lot about, and it's probably worth more attention overall than the question of giving (money) now vs. later, but I believe the latter question is more neglected and has more low-hanging fruit. The question of optimal giving rate might be irrelevant if, say, we're confident that the optimal rate is somewhere above 1%, we don't know where, but it's impossible to spend more than 1% due to a lack of funding opportunities. But I don't think we can be that confident that the optimal spending rate is that high. And even if we are, knowing the optimal rate still matters if you expect that we can scale up work capacity in the future. I'd guess >50% chance that the optimal spending rate is faster than the longtermist community[1] is currently spending, but I also expect the longtermist spending rate to increase a lot in the future due to increasing work capacity plus capital becoming more liquid—according to Ben Todd's estimate [https://forum.effectivealtruism.org/posts/zA6AnNnYBwuokF8kB/is-effective-altruism-growing-an-update-on-the-stock-of] , about half of EA capital is currently too illiquid to spend. [1] I'm talking about longtermism specifically and not all EA because the optimal spending rate
2Alex HT5moAlso this: https://longtermrisk.org/the-future-of-growth-near-zero-growth-rates/ [https://longtermrisk.org/the-future-of-growth-near-zero-growth-rates/]
Non-consequentialist longtermism

I haven't read it, but the name of this paper from Andreas at GPI at least fits what you're asking - "Staking our future: deontic long-termism and the non-identity problem"

1Nathan_Barnard6moHi Alex, the link isn't working
Is there evidence that recommender systems are changing users' preferences?

 Is The YouTube Algorithm Radicalizing You? It’s Complicated.

Recently, there's been significant interest among the EA community in investigating short-term social and political risks of AI systems. I'd like to recommend this video (and Jordan Harrod's channel as a whole) as a starting point for understanding the empirical evidence on these issues.

3reallyeli8moFrom reading the summary in this post, it doesn't look like the YouTube video discussed bears on the question of whether the algorithm is radicalizing people 'intentionally,' which I take to be the interesting part of Russell's claim.
Confusion about implications of "Neutrality against Creating Happy Lives"

I agree with this answer. Also, lots of people do think that temporal position (or something similar, like already being born) should affect ethics.

But yes OP, accepting time neutrality and being completely indifferent about creating happy lives does seem to me to imply the counterintuitive conclusion you state. You might be interested in this excellent emotive piece or section 4.2.1 of this philosophy thesis. They both argue that creating happy lives is a good thing.

Response to Phil Torres’ ‘The Case Against Longtermism’

I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is more “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?


Yep that is what I'm saying. I think I don't agree but thanks for explaining :)

Response to Phil Torres’ ‘The Case Against Longtermism’

Can you say a bit more about why the quote is objectionable? I can see why the conclusion 'saving a life in a rich country is substantially more important than saving a life in a poor country' would be objectionable. But it seems Beckstead is saying something more like 'here is an argument for saving lives in rich countries being relatively more important than saving lives in poor countries' (because he says 'other things being equal').

2Garrison9moThe main issue I have with this quote is that it's so divorced from the reality of how cost effective it is to save lives in rich countries vs. poor countries (something that most EAs probably know already). I understand that this objection is addressed by the caveat 'other things being equal', but it seems important to note that it costs orders of magnitude more to save lives in rich countries, so unless Beckstead thinks the knock-on effects of saving lives in rich countries are sufficient to offset the cost differences, it would still follow that we should focus our money on saving lives in poor countries.
3jtm9moI’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is more “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion? I think if you provide arguments that lead to a very troubling conclusion, then you should ensure that they’re very strongly supported, eg by empirical or historical evidence. Since Beckstead didn't do that (which perhaps is to be expected in a philosophy thesis), I think it would at the very least have been appropriate to recognise that the premises for the argument are extremely speculative. I also think the argument warrants some disclaimers – e.g., a warning that following this line of reasoning could lead to undesirable neglect of global poverty or a disclaimer that we should be very wary of any argument that leads to conclusions like 'we should prioritise people like ourselves.' Like Dylan Balfour said above, I am otherwise a big fan of this important dissertation; I just think that this quote is not a great look and it exemplifies a form of reasoning that we longtermists should be careful about.
Should I transition from economics to AI research?

There are also more applied AI/tech-focused economics questions that seem important for longtermists (eg. if GPI stuff seems too abstract for you)

1jackmalde9moYes this 80,000 Hours article [https://80000hours.org/articles/research-questions-by-discipline/#introduction] has some good ideas
Running an AMA on the EA Forum

Agree with Marisa that you'd be well suited to do an AMA

How can non-biologists contribute to wild animal welfare?

Also not CS and you may already know it: this EAG talk is about wild animal welfare research using economics techniques. Both authors of the paper discussed are economists, not biologists.

1evelynciara10moNope, I haven't seen this yet. Thanks for the link!
Were the Great Tragedies of History “Mere Ripples”?

Thanks for your comment, it makes a good point. My comment was hastily written and I think the argument you're referring to is weak, but not as weak as you suggest.

At some points the author is specifically critiquing longtermism the philosophy (not what actual longtermists think and do) eg. when talking about genocide. It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear. 

There are many longtermists that don't hold these views (eg. Will MacAskill is literally... (read more)

2Lukas_Finnveden10moAgreed. Yeah this seems right, maybe with the caveat that Will has (as far as I know) mostly expressed skepticism about this being the most influential century, and I'd guess he does think this century is unusually influential, or at least unusually likely to be unusually influential. And yes, I also agree that the quoted views are very extreme, and that longtermists at most hold weaker versions of them.
Ecosystems vs Projects in EA Movement Building

from 'Things CEA is not doing' forum post https://forum.effectivealtruism.org/posts/72Ba7FfGju5PdbP8d/things-cea-is-not-doing 

We are not actively focusing on:

...

  • Cause-specific work (such as community building specifically for effective animal advocacy, AI safety, biosecurity, etc.)
2Linda Linsefors10moThanks for the much improved source!
Were the Great Tragedies of History “Mere Ripples”?

I don’t have time to write a detailed and well-argued response, sorry. Here are some very rough quick thoughts on why I downvoted.  Happy to expand on any points and have a discussion.

In general, I think criticisms of longtermism from people who 'get' longtermism are incredibly valuable to longtermists.

One reason is that if the criticisms carry entirely, you'll save them from basically wasting their careers. Another reason is that you can point out weaknesses in longtermism or in their application of longtermism that they wouldn't have spotted themsel... (read more)

1philosophytorres10mo[Responding to Alex HT above:] I'll try to find the time to respond to some of these comments. I would strongly disagree with most of them. For example, one that just happened to catch my eye was: "Longtermism does not say our current world is replete with suffering and death." So, the target of the critique is Bostromism, i.e., the systematic web of normative claims found in Bostrom's work. (Just to clear one thing up, "longtermism" as espoused by "leading" longtermists today has been hugely influenced by Bostromism -- this is a fact, I believe, about intellectual genealogy, which I'll try to touch upon later.) There are two main ingredients of Bostromism, I argue: total utilitarianism and transhumanism. The latter absolutely does indeed see our world the way many religious traditions have: wretched, full of suffering, something to ultimately be transcended (if not via the rapture or Parousia then via cyborgization and mind-uploading). This idea, this theme, is so prominent in transhumanist writings that I don't know how anyone could deny it. Hence, if transhumanism is an integral component of Bostromism (and it is), and if Bostromism is a version of longtermism (which it is, on pretty much any definition), then the millennialist view that our world is in some sort of "fallen state" is an integral component of Bostromism, since this millennialist view is central to the normative aspects of transhumanism. Just read "Letter from Utopia." It's saturated in a profound longing to escape our present condition and enter some magically paradisiacal future world via the almost supernatural means of radical human enhancement. (Alternatively, you could write a religious scholar about transhumanism. Some have, in fact, written about the ideology. I doubt you'd find anyone who'd reject the claim that transhumanism is imbued with millennialist tendencies!)
7Lukas_Finnveden10moI haven't read the top-level post (thanks for summarising!); but in general, I think this is a weak counterargument. If most people in a movement (or academic field, or political party, etc) holds a rare belief X, it's perfectly fair to criticise the movement for believing X. If the movement claims that X isn't a necessary part of their ideology, it's polite for a critic to note that X isn't necessarily endorsed as the stated ideology, but it's important that their critique of the movement is still taken seriously. Otherwise, any movement can choose a definition that avoids mentioning the most objectionable part of their ideology without changing their beliefs or actions. (Similar to the motte-and-bailey fallacy [https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy]). In this case, the author seems to be directly worried about longtermists' beliefs and actions; he isn't just disputing the philosophy.
6Linch10moThanks, this comment saved me time/emotional energy from reading the post myself.

I had left this for a day and had just come back to write a response to this post but fortunately you've made a number of the points I was planning on making.

I think it's really good to see criticism of core EA principles on here, but I did feel that a number of the criticisms might have benefited from being fleshed out more fully.

OP made it clear that he doesn't agree with a number of Nick Bostrom's opinions but I wasn't entirely clear (I only read it the once and quite quickly, so it may be the case that I missed this) where precisely the main di... (read more)

I upvoted Phil's post, despite agreeing with almost all of AlexHT's response to EdoArad above. This is because I want to encourage good faith critiques, even those which I judge to contain serious flaws. And while there were elements of Phil's book that read to me more like attempts at mood affiliation than serious engagement with his interlocutor's views (e.g. 'look at these weird things that Nick Bostrom said once!'), on the whole I felt that there was enough effort at engagement that I was glad Phil took the time to write up his concerns. 

Two aspec... (read more)

AMA: Ajeya Cotra, researcher at Open Phil

I’d be keen to hear your thoughts about the (small) field of AI forecasting and its trajectory. Feel free to say whatever’s easiest or most interesting. Here are some optional prompts:

  • Do you think the field is progressing ‘well’, however you define ‘well’? 
  • What skills/types of people do you think AI forecasting needs?
  • What does progress look like in the field? Eg. does it mean producing a more detailed report, getting a narrower credible interval, getting better at making near-term AI predictions...(relatedly, how do we know if we're making progress?)
  • Can you make any super rough predictions like ‘by this date I expect we’ll be this good at AI forecasting’? 
5Ajeya10moHm, I think I'd say progress at this stage largely looks like being better able to cash out disagreements about big-picture and long-term questions in terms of disagreements about more narrow, empirical, or near-term questions, and then trying to further break down and ultimately answer these sub-questions to try to figure out which big picture view(s) are most correct. I think given the relatively small amount of effort put into it so far and the intrinsic difficulty of this project, returns have been pretty good on that front -- it feels like people are having somewhat narrower and more tractable arguments as time goes on. I'm not sure about what exact skillsets the field most needs. I think the field right now is still in a very early stage and could use a lot of disentanglement research [https://forum.effectivealtruism.org/posts/RCvetzfDnBNFX7pLH/personal-thoughts-on-careers-in-ai-policy-and-strategy#Disentanglement_research_is_needed_to_advance_AI_strategy_research__and_is_extremely_difficult] , and it's often pretty chaotic and contingent what "qualifies" someone for this kind of work. Deep familiarity with the existing discourse and previous arguments/attempts at disentanglement is often useful, and some sort of quantitative background (e.g. economics or computer science or math) or mindset is often useful, and subject matter expertise (in this case machine learning and AI more broadly) is often useful, but none of these things are obviously necessary or sufficient. Often it's just that someone happens to strike upon an approach to the question that has some purchase, they write it up on the EA Forum or LessWrong, and it strikes a chord with others and results in more progress along those lines.
7Aryeh Englander10moI know you asked Ajeya, but I'm going to add my own unsolicited opinion that we need more people with professional risk analysis backgrounds, and if we're going to do expert judgment elicitations as part of forecasting then we need people with professional elicitation backgrounds. Properly done elicitations are hard. (Relevant background: I led an AI forecasting project for about a year.)
Lessons from my time in Effective Altruism

Joey, are there unusual empirical beliefs you have in mind other than the two mentioned? Hits based giving seems clearly related to Charity Entrepreneurship's work - what other important but unusual empirical beliefs do you/CE/neartermist EAs hold? (I'm guessing hinge of history hypothesis is irrelevant to your thinking?)

6Joey1yI think the majority of unusual empirical beliefs that came to mind were more in the longtermist space. In some ways these are unusual at even a deeper level than the suggested beliefs e.g. I think EAs generally give more credence epistemically to philosophical/a priori evidence, Bayesian reasoning, sequence thinking, etc. If I think about unusual empirical beliefs Charity Entrepreneurship has as well, it would likely be something like the importance of equal rigor, focusing on methodology in general, or the ability to beat the charity market using research. In both cases these are just a couple that came to mind – I suspect there are a bunch more.
Can people be persuaded by anything other than an appeal to emotion?

My guess is that few EAs care emotionally about cost effectiveness and that they care emotionally about helping others a lot. Given limited resources, that means they have to be cost effective. Imagine a mother with a limited supply of food to share between her children. She doesn't care emotionally about rationing food, but she'll pay a lot of attention to how best to do rationing.

I do think there are things in the vicinity of careful reasoning/thinking clearly/having accurate beliefs that are core to many EAs' identities. I think those can be developed naturally to some extent, and don't seem like complete prerequisites to being an EA

Should Effective Altruists Focus More on Movement Building?

Thanks for writing this and contributing to the conversation :)

Relatedly, an “efficient market for ideas” hypothesis would suggest that if MB really was important, neglected, and tractable, then other more experienced and influential EAs would have already raised its salience.

I do think the salience of movement building has been raised elsewhere eg:

... (read more)
1aaronb501yThanks for all those references. Don't know how I missed the 80,000 page on the topic, but that's a pretty big strike against it being ignored. Regarding your second point, I largely agree but there are surely some MB interventions that don't require full-time generalists. For example, message testing and advertising (I assume) can be mostly outsourced with enough money.
A case against strong longtermism

You really don't seem like a troll! I think the discussion in the comments on this post is a very valuable conversation and I've been following it closely. I think it would be helpful for quite a few people for you to keep responding to comments

Of course, it's probably a lot of effort to keep replying carefully to things, so understandable if you don't have time :)

Introducing High Impact Athletes

Thanks! I appreciate it :)

It makes me feel anxious to get a lot of downvotes with no explanation so I really appreciate your comment.

Just to clarify: when you say "if that is a real tradeoff that a founder faces in practice, it is nearly always an indication the founder just hasn't bothered to put much time or effort into cultivating a diverse professional network", I think I agree, but this isn't always something the founder could have predicted ahead of time, and the founder isn't necessarily to blame. I think it can be very easy to 'accidentally' end... (read more)

4IanDavidMoss1yRe: the downvotes, I wish I could just say not to let them bother you, but the truth is they make me anxious too. Unfortunately there are a handful of EA Forum users who routinely strong-downvote posts and comments that have any whiff of a social/racial justice message.
9IanDavidMoss1yOh sure, and I didn't mean to imply otherwise. Lots of people have homogeneous networks through no fault of their own. But if that's the case for you and you're trying to do something for which having a diverse network would be helpful, then it's something you need to budget time and energy towards just as it would be the case for ensuring strong organizational infrastructure, funding, etc. So that's why I thought it was really valuable for you to point that out to Marcus, who seems to be getting an otherwise very promising project off the ground. :)
Introducing High Impact Athletes

Was this meant as a reply to my comment or a reply to Ben's comment?

I was just asking what the position was and made explicit I wasn't suggesting Marcus change the website.

3Linch1yThreading etiquette is confusing! It was unclear to me whether the right person to respond to was Ben, Marcus, or you. So I went for the most top-level comment that seemed reasonable. In retrospect I should have just commented on the post directly.
Introducing High Impact Athletes

Yep! I assumed this kind of thing was the case (and obviously was just flagging it as something to be aware of, not trying to finger-wag)

Introducing High Impact Athletes

I don't find anything wrong at all with 'saintly' personally, and took it as a joke. But I could imagine someone taking it the wrong way. Maybe I'd see what others on the forum think

Upside seems low, downside seems pretty high.

Introducing High Impact Athletes

It looks like all the founders, advisory team, and athletes are white or white-passing. I guess you're already aware of this as something to consider, but it seems worth flagging (particularly given the use of 'Saintly' for those donating 10% :/).

Some discussion of why this might matter here: https://forum.effectivealtruism.org/posts/YCPc4qTSoyuj54ZZK/why-and-how-to-make-progress-on-diversity-and-inclusion-in

Edit: In fact, while I think appearing all-white and implicitly describing some of your athletes as 'Saintly' are both acceptable PR risks, having the... (read more)

1Mati_Roy1yThat's awesome, good work! :)
6IanDavidMoss1yHi Alex, I want to voice my support both for you raising this in the first place and for the gentle, nonconfrontational way in which you did so. This was a good example of "calling in" a well-intentioned colleague, in my opinion. More generally, as a founder of several initiatives myself I've come to believe that prioritizing diversity, especially racial diversity, in the early stages of growth is quite important for projects that have an outward-facing mission and wide potential audience such as Marcus's. The reason it's more important than people often give it credit for is that the composition of a founding team has follow-on effects for who else it recruits, what networks it builds initial strength in, and even in some cases how it makes decisions about what programming to prioritize. Once those choices are made and the initial history of the organization is written, it becomes much harder (though not impossible) to "diversify" authentically after the fact. Of course I don't recommend sacrificing things like team cohesion or effectiveness for the sake of demographic diversity, but if that is a real tradeoff that a founder faces in practice, it is nearly always an indication the founder just hasn't bothered to put much time or effort into cultivating a diverse professional network. Again, for some kinds of work it might not be that important. For fundraising and visibility among a diverse worldwide community of athletes, it's essential.

Also, I would love to have a wide variety of athletes represented by HIA. As it's still very new I'm focusing outreach on those I have personal relationships with, which means tennis, which is predominantly white in the professional space at this point in time. I'm hoping that over time I can get in touch with a more diverse range of athletes from many different sports. 

2Marcus Daniell1yThis is a good point and not one I'd thought of before. Thank you. Re 'saintly', it is intended as a joke. Do you think it's more offensive than funny? Or not worth the risk? Re diversity, I can't help that I'm the founder and I'm white, but having a more diverse advisory board sounds good. Do you have any ideas as to who would be good advisors for this sort of thing? Important to note that all the advisors are completely pro bono.
Introducing High Impact Athletes

Some of the wording on the 'Take the Pledge' section seems a little bit off (to me at least!). Eg. saying a 1-10% pledge will 'likely have zero noticeable impact on your standard of living' seems misleading, and could give off the impression that the pledge is only for the very wealthy (for whom the statement is more likely to be true). I'm also not sure about the 'Saintly' categorisation of the highest giving level (10%). It could come across as a bit smug or saviour-ish. I'm not sure about the tradeoffs here though and obviously you have much more context than me.

Maybe you've done this already, but it could be good to ask Luke from GWWC for advice on tone here.

2Marcus Daniell1yI would argue that most people reading the website are very wealthy - living in a western country almost inevitably qualifies you as very wealthy. For the main target audience - successful professional athletes - a 10% pledge would not change quality of life one whit.
Introducing High Impact Athletes

I see you mention that HIA's recommendations are based on a suffering-focused perspective. It's great that you're clear about where you're coming from/what you're optimising for. To explore the ethical perspective of HIA further - what is HIA's position on longtermism?

(I'm not saying you should mention your take on longtermism on the website.)

9Linch1yWe all have different beliefs and intuitions about the world, including about how other people see the world. Compared to the rest of us, Marcus has a strong comparative advantage in both a) having an intuition for what messages work for professional athletes and would be easier for them to relate to, and more importantly, b) access to a network to test different messages. I would personally be excited if, rather than for us to debate at length of what will or won't be appealing for a hypothetical audience, for Marcus to just go out and experiment with different messages with the actual audience that he has. The results may or may not surprise us.
5Marcus Daniell1ySee below about casting the net - being an athlete myself and knowing many personally I think longtermism is too much of a stretch conceptually for most athletes at this point.
Introducing High Impact Athletes

This is really cool! Thanks for doing this :)

Is there a particular reason the charity areas are 'Global Health and Poverty' and 'Environmental Impact' rather than including any more explicit mention of animal welfare? (For people reading this - the environmental charities include the Good Food Institute and the Humane League along with four climate-focussed charities.)

4Louis_Dixon1yBy the way EA Funds now includes the Founders Pledge climate fund [https://founderspledge.com/funds/climate-change-fund] which I think is a bit more straightforward than the animal welfare argument
2Marcus Daniell1yHi Alex, thanks for your comments! I'll reply to each. I'm aiming to cast the net as widely as possible within the athlete community. To me this means mixing the novel (effective altruism) with the known. I think it is also valid to say that the animal welfare charities represented have a large impact on the environment.
The Case for Space: A Longtermist Alternative to Existential Threat Reduction

Welcome to the forum!

Have you read Bostrom's Astronomical Waste? He does a very similar estimate there. https://www.nickbostrom.com/astronomical/waste.html

I'd be keen to hear more about why you think it's not possible to meaningfully reduce existential risk.

1Giga1yI have not seen that, but I will check it out. As for the existential threat, it is for a few reasons, I will make a more detailed post about it later. First off, I believe very few things are existential threats to humanity itself. Humans are extremely resilient and live in every nook and cranny on earth. Even total nuclear war would have plenty of survivors. As far as I see it, only an asteroid or aliens could wipe us out unexpectedly. AI could wipe out humanity, however I believe it would be a voluntary extinction in that case. Future humans may believe AI has qualia, and is much more efficient at creating utility than biological life. I cannot imagine future humans being so stupid as to have AI connected to the internet and a robot army able to be hijacked by said AI at the same time. I do believe there is an existential threat to civilization, however it is not present yet, and we will be capable of self-sustaining colonies off Earth by the time is will arise(meaning that space acceleration would be a form of existential threat reduction). Large portions of Africa, and smaller portions of the Americas and Asia are not at a civilizational level that would make a collapse possible, however they will likely cross that threshold this century. If there is a global civilizational collapse, I do not think civilization would ever return. However, there are far too many unknowns as too how to avoid said collapse meaningfully. Want to prevent a civilization ending nuclear war? You could try to bolster the power of the weaker side to force a cold war. Or maybe you want to make the sides more lopsided so intimidation will be enough. However as we do not know which strategy is more effective, and they have opposite praxis, there is no way to know if you would be increasing existential threats or not. Lastly, most existential threat reduction is political by nature. Politics are also extremely unpredictable, and extremely hard to influence even if you know what you are do
What quotes do you find most inspire you to use your resources (effectively) to help others?

"Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea.

If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that ... (read more)

2MichaelA1y(In case any future readers are wondering, this quote is from Derek Parfit [https://www.goodreads.com/quotes/8575881-what-now-matters-most-is-how-we-respond-to-various] .)
Why we should grow One for the World chapters alongside EA student groups

Thanks for writing this! I and an EA community builder I know found it interesting and helpful.

I'm pleased you have a 'counterarguments' section, though I think there are some counterarguments missing:

  • OFTW groups may crowd out GWWC groups. You mention the anchoring effect on 1%, but there's also the danger of anchoring on a particular cause area. OFTW is about ending extreme poverty, whereas GWWC is about improving the lives of others (much broader)

  • OFTW groups may crowd out EA groups. If there's a OFTW group at a university, the EA group may have to

... (read more)
2Jack Lewars1yHi Alex - these are very good points and largely correct, I think - thanks for contributing them. I've added some thoughts and mitigations below:
  1. Yes, we definitely do anchor around poverty. I think this can be good 'scaffolding' to come into the movement; but sometimes it will anchor people there. It is worth noting, though, that global health and poverty is consistently the most popular cause area in the EA survey, so there are clearly other factors anchoring to this cause area - it's hard to say how much OFTW counterfactually increases this effect (and whether it counterfactually stops people from progressing beyond global health and poverty). In terms of mitigation for competing with GWWC - we are in close touch with them and both sides are working hard to foster collaboration and avoid competition.
  2. On point 2, our experience so far is that OFTW and EA groups actually coexist very well. I think (without any systematic evidence) some of this may be because a lot of EA groups don't prioritise donations, preferring to focus on things like career advice, and so OFTW chapters can sort of 'own' the donation space; sometimes, though, they just find a way to work alongside each other. I'm not sure it follows that we have to 'compete for altruistically motivated people' - in fact, I don't really see any reason why someone couldn't take the OFTW pledge and then carry on engaging with EA uninterrupted - but I agree that we could compete on this front. A lot seems to depend on OFTW's approach/message/ask. Maybe a virtue of OFTW is that we really only need people's attention for a short period to get them to take one action - so we aren't competing for their sustained attention, in a way that would crowd out EA programming. Indeed, we can actually be a funnel to get them to pay attention to this content - see for example our recent webinar with Toby Ord on x-risk, which attracte
What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

Thanks, that's helpful for thinking about my career (and thanks for asking that question Michael!) 

Edit: helpful for thinking about my career because I'm thinking about getting economics training, which seems useful for answering specific sub-questions in detail ('Existential Risk and Economic Growth' being the perfect example of this),  but one economic model alone is very unlikely to resolve a big question.

3Max_Daniel1yGlad it's helpful! I think you're very likely doing this anyway, but I'd recommend to get a range of perspectives on these questions. As I said, my own views here don't feel that resilient, and I also know that several epistemic peers disagree with me on some of the above.
Urgency vs. Patience - a Toy Model
  1. I think I've conflated patient longtermist work with trajectory change (with the example of reducing x-risk in 200 years' time being patient, but not trajectory change). This means the model is really comparing trajectory change with XRR. But trajectory change could be urgent (eg. if there was a lock-in event coming soon), and XRR could be patient.
    1. (Side note: There are so many possible longtermist strategies! Any combination of  is a distinct strategy. This is interesting as often p
... (read more)
4Anjay F1yRe 1. That makes a lot of sense now. My intuition is still leaning towards trajectory change interacting with XRR for the reason that maybe the best ways to reduce x-risks that appear after 500+ years is to focus on changing the trajectory of humanity (i.e. stronger institutions, cultural shift, etc.) But I do think that your model is valuable for illustrating the intuition you mentioned, that it seems easier to create a positive future via XRR rather than trajectory change that aims to increase quality. Re 2,3. I think that is reasonable and maybe when I mentioned the meta-work before, it was due to my confusion between GPR and trajectory change.
Should We Prioritize Long-Term Existential Risk?

Thanks for writing this, I like that it's short and has a section on subjective probability estimates. 

  1. What would you class as longterm x-risk (reduction) vs. nearterm? Is it entirely about the timescale rather than the approach? Eg. hypothetically very fast institutional reform could be nearterm, and doing AI safety field building research in academia could hypothetically be longterm if you thought it would pay off very late. Or do you think the longterm stuff necessarily has to be investment or institutional reform?
  2. Is the main crux for 'Long-term x-r
... (read more)
3MichaelDickens1yThanks for the questions!
  1. I don't have strong beliefs about what could reduce long-term x-risk. Longtermist institutional reform just seemed like the best idea I could think of.
  2. As I said in the essay, the lower the level of x-risk, the more valuable it is to reduce x-risk by a fixed proportion. The only way you can claim that reducing short-term x-risk matters more is by saying that it will become too intractable to reduce x-risk below a certain level, and that we will reach that level at some point in the future (if we survive long enough). I think this claim is plausible. But simply claiming that x-risk is currently high is not sufficient to prioritize reducing current x-risk over long-term x-risk, and in fact argues in the opposite direction.
  3. I mentioned this in my answer to #2 - I think it's more likely that reducing x-risk by a fixed proportion becomes more difficult as x-risk gets lower. But others (e.g., Yew-Kwang Ng [https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12318] and Tom Sittler [https://fragile-credences.github.io/ltf-paper/]) have used this assumption that reducing x-risk by a fixed proportion has constant difficulty.
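To make points 2 and 3 above concrete, here is a small toy illustration of my own (not taken from the essay, and using made-up numbers): if per-century extinction risk is r, the expected number of future centuries is roughly 1/r, so cutting r by a fixed proportion buys more expected centuries when r is already low.

    # Toy illustration (my own, not from the essay): expected future centuries ~ 1/r,
    # so a fixed proportional cut in per-century risk r is worth more when r is small.
    def extra_expected_centuries(r, cut=0.10):
        return 1 / (r * (1 - cut)) - 1 / r

    for r in (0.2, 0.02, 0.002):
        print(r, round(extra_expected_centuries(r), 2))
    # 0.2 -> ~0.56 extra expected centuries; 0.02 -> ~5.56; 0.002 -> ~55.56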
What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

This is really interesting and I'd like to hear more. Feel free to just answer the easiest questions:

Do you have any thoughts on how to set up a better system for EA research, and how it should be more like academia? 

What kinds of specialisation do you think we'd want - subject knowledge? Along different subject lines to academia? 

Do you think EA should primarily use existing academia for training new researchers, or should there be lots of RSP-type things?

What do you see as the current route into longtermist research? It seems like entry-level research roles are relatively rare, and generally need research experience. Do you think this is a good model?

[Off the top of my head. I don't feel like my thoughts on this are very developed, so I'd probably say different things after thinking about it for 1-10 more hours.]

[ETA: On a second reading, I think some of the claims below are unhelpfully flippant and, depending on how one reads them, uncharitable. I don't want to spend the significant time required for editing, but want to flag that I think my dispassionate views are not super well represented below.]

Do you have any thoughts on how to set up a better system for EA research, and how it sho
... (read more)
What (other) posts are you planning on writing?

I'd really like to see "If causes differ astronomically in EV, then personal fit in career choice is unimportant"

2kbog1yAssume that a social transition is expected in 40 years and the post transition society has 4x times as much welfare as a pre-transition society. Also assume that society will last for 1000 more years. Increasing the rate of economic growth by a few percent might increase our welfare pre-transition by 5% and move up the transition by 2 years. Then the welfare gain of the economic acceleration is (0.05*35)+(3*2)=8. Future welfare without the acceleration is 40+(4*1000)=4040, so a gain of 8 is like reducing 0.2% existential risk. Obviously the numbers are almost arbitrary but you should see the concepts at play. Then if you think about a longer run future then the tradeoff becomes very different, with existential risk being far more important. If society lasts for 1 million more years then the equivalent is 0.0002% X-risk.
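Writing kbog's toy numbers out as code may make the comparison easier to follow (the figures are copied from the comment above and, as kbog notes, are almost arbitrary):

    # kbog's toy comparison, written out (numbers copied from the comment above).
    pre_welfare = 1            # welfare per year before the transition
    post_welfare = 4           # welfare per year after the transition
    years_to_transition = 40
    post_transition_years = 1000

    # Faster growth: ~5% more welfare over the remaining pre-transition years,
    # plus bringing the transition forward 2 years (gaining 4 - 1 = 3 per year).
    growth_gain = 0.05 * 35 + (post_welfare - pre_welfare) * 2   # ~8

    # Baseline future welfare without the acceleration.
    baseline = pre_welfare * years_to_transition + post_welfare * post_transition_years  # 4040

    print(growth_gain / baseline)                     # ~0.002, i.e. like a ~0.2% x-risk reduction
    print(growth_gain / (post_welfare * 1_000_000))   # ~0.000002, i.e. ~0.0002% if society lasts 1M years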
Are there superforecasts for existential risk?

Thanks for the answer.

Will MacAskill mentioned in this comment that he'd 'expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view.'

You're a good forecaster, right? Does it seem right to you that a panel of good forecasters would come to something like Will's view, rather than the median FHI view?

8Linch1yI'm not sure. I mentioned as a reply [https://forum.effectivealtruism.org/posts/oPGJrqohDqT8GZieA/ask-me-anything?commentId=aZb3WgLmpfJJ7cNz7] to that comment that I was unimpressed with the ability of existing "good" forecasters to think about low-probability and otherwise out-of-distribution problems. My guess is that they'd change their minds if "exposed" to all the arguments, and specifically have views very close to the median FHI view, if "exposed" -> reading the existing arguments very carefully and put lots of careful thought into them. However, I think this is a very tough judgement call, and does seem like the type of thing that'd be really bad if we get it wrong! My beliefs here are also tightly linked to me thinking that the median FHI view is more likely to be correct than Will's view, and it is a well-known bias that people think their views are more common/correct than they actually are.
8Davidmanheim1yI'll speak for the consensus when I say I think there's not a clear way to decide if this is correct without actually doing it - and the outcome would depend a lot on what level of engagement the superforecasters had with these ideas already. (If I got to pick the 5 superforecasters, even excluding myself, I could guarantee it was either closer to FHI's viewpoints, or to Will's.) Even if we picked from a "fair" reference class, if I could have them spend 2 weeks at FHI talking to people there, I think a reasonable proportion would be convinced - though perhaps this is less a function of updating neutrally towards correct ideas as it is the emergence of consensus in groups. Lastly, I have tremendous respect for Will, but I don't know that he's calibrated particularly well to make a prediction like this. (Not that I know he isn't - I just don't have any reason to think he's spent much time working on this skillset.)
Are there superforecasts for existential risk?

Thanks, those look good and I wasn't aware of them

The Moral Value of Information - edited transcript

Yep - the author can click on the image and then drag from the corner to enlarge them (found this difficult to find myself)

4JJXWang1yThanks for the tip- have now edited!