All of Alex HT's Comments + Replies

I claim that you can get near the frontier of alignment knowledge in ~6 months to a year. 

How do you think people should do this?

I really appreciate you writing this. Getting clear on one's own reasoning about AI seems really valuable, but for many people, myself included, it's too daunting to actually do. 

If you think it's relevant to your overall point, I would suggest moving the first two footnotes (clarifying what you mean by short timelines and high risk) into the main text. 'Short timelines' sometimes means <10 years and 'high risk' sometimes means >95%.

I think you're expressing your attitude to the general cluster of EA/rationalist views around AI risk typified b... (read more)

4
elifland
2y
Thanks for pointing this out. I agree that I wasn't clear about this in the post. My hesitations have been around adopting views with timelines and risk level that are at least as concerning as the OpenPhil cluster (Holden, Ajeya, etc.) that you're pointing at; essentially views that seem to imply that AI and things that feed into it are clearly the most important cause area. I wouldn't go as far as no evidence at all given that my understanding is Eliezer (+ MIRI) was heavily involved in influencing the OpenPhil cluster's views, so it's not entirely independent, but I agree it's much weaker evidence for less extreme views. I was going to say that it seems like a big difference within our community, but both clusters of views are very far away from the median pretty reasonable person and the median AI researcher. Though I suppose the latter actually isn't far away on timelines (potentially depending on the framing?). It definitely seems to be in significant tension with how AI researchers and the general public / markets / etc. act, regardless of stated beliefs (e.g. I found it interesting how short the American public's timelines are, compared to their actions). Anyway, overall I think you're right that it makes a difference, but it seems like a substantive concern for both clusters of views. The conclusion I intend to convey is something like "I'm no longer as hesitant about adopting views which are at least as concerning as >50% of AGI/TAI/APS-AI within 30 years, and >15% chance of existential catastrophe this century", which as I referred to above seem to make AI clearly the most important cause area. Copying my current state on the object-level views from another recent post:
Answer by Alex HT · Jul 06, 2022 · 1
0
0

This seems like a good place to look for studies:

The research I’ve reviewed broadly supports this impression. For example:

  • Rieber (2004) lists “training for calibration feedback” as his first recommendation for improving calibration, and summarizes a number of studies indicating both short- and long-term improvements on calibration.4 In particular, decades ago, Royal Dutch Shell began to provide calibration for their geologists, who are now (reportedly) quite well-calibrated when forecasting which sites will produce oil.5
  • Since 2001, Hubbard Decisi
... (read more)
3
Tyner
2y
Thanks for the reply. First bullet: I read citation #4 and it describes improvement in a lab within a like domain (e.g. trivia), not across domains (e.g. trivia => world events), as far as I could tell. The Shell example is also within domain. The second bullet is the same info shared in Hubbard's book; it's not a controlled trial and he doesn't provide the underlying data. Unfortunately, I don't think any of this info is very persuasive for answering the question about cross-domain applicability.

Are these roles visa eligible, or do candidates need a right to work in the US already? (Or can you pay contractors outside of the US?)

1
Scott Emmons
2y
While the roles are not currently visa eligible, we can pay contractors outside of the United States!
Answer by Alex HT · Mar 03, 2022 · 5
0
0

[A quick babble based on your premise]

What are the best bets to take to fill the galaxies with meaningful value?

How can I personally contribute to the project of filling the universe with value, given other actors’ expected work and funding on the project?

What are the best expected-value strategies for influencing highly pivotal (eg galaxy-affecting) lock-in events?

What are the tractable ways of affecting the longterm trajectory of civilisation? Of those, which are the most labour-efficient?

How can we use our life’s work to guide the galaxies to better tra... (read more)

We think most of them could reduce catastrophic biorisk by more than 1% or so on the current margin (in relative[1] terms).

Imagine all six of these projects were implemented to a high standard. How robust do you think the world would be to catastrophic biorisk? Ie. how sufficient do you think this list of projects is?

The job application for the Campus Specialist programme has been published. Apologies for the delay.

Hi Elliot, thanks for your questions.

Is this indicative of your wider plans?/ Is CEA planning on keeping a narrow focus re: universities?

I’m on the Campus Specialist Manager team at CEA, which is a sub-team of the CEA Groups team, so this post does give a good overview of my plans, but it’s not necessarily indicative of CEA’s wider plans. 

As well as the Campus Specialist programme, the Groups team runs a Broad University Group programme staffed by Jessica McCurdy with support from Jesse Rothman. This team provides support for all university groups reg... (read more)

3
Chris Leong
2y
"Running a mini-conference every week (one group has done this already - they have coworking, seminar programmes, a talk, and a social every week of term, and it seems to have been very good for engagement, with attendance regularly around 70 people). I could imagine this being even bigger if there were even more concurrent ‘tracks’" Which group was this?
2
ElliotJDavies
2y
Interesting! I actually think the most interesting question was the one that was skipped: Regarding general strategy, which I understand you may not want to answer (but I hope someone will) - there really has to be some thought put into whether you are sending an inviting message to national group organisers. At the time we applied for national funding, both EA-infrastructure funds and CBG grants claimed not to be available to us (the EA Funds website contained out-of-date advice). Luckily, we applied anyway and were successful (with EA-Infrastructure funds) - although I am not sure how "close" the decision was on the EA-infrastructure funds' side. At the time I predicted our chance of success as being <50%, and we could have very easily not applied for that reason. A few months later I can see how national groups, including our own, are a vital piece of infrastructure for not only community building, but also donation collecting and the distribution of salaries. It's very interesting to me that CEA has no plans to accelerate this.

Thanks for this comment and the discussion it’s generated! I’m afraid I don’t have time to give as detailed response as I would like, but here are some key considerations:

  • In terms of selecting focus universities, we mentioned our methodology here (which includes more than just university rankings, such as looking at alumni outcomes like number of politicians, high net worth individuals, and prize winners).
  • We are supporting other university groups - see my response to Elliot below for more detail on CEA’s work outside Focus universities.
  • You can view our two
... (read more)

Thanks Vaidehi!

One set of caveats is that you might not be a good fit for this type of work (see what might make you a good fit above). For instance: 

  • This is a role with a lot of autonomy, so if you prefer more externally set structure, this role probably isn’t a good fit for you
  • If you find talking to people about EA ideas difficult or uncomfortable, this may be a bad fit
  • You might be a good fit for doing field building, but prefer doing so with another age range (e.g. mid career, high school)

Some other things people considering this path might w... (read more)

What factors do you think would have to be in place for some other people to set up some similar but different organisation in 5 years time?

I imagine this is mainly about the skills and experience of the team, but also interested in other things if you think that's relevant

6
Buck
3y
I think the main skillsets required to set up organizations like this are:  * Generic competence related to setting up any organization--you need to talk to funders, find office space, fill out lots of IRS forms, decide on a compensation policy, make a website, and so on. * Ability to lead relevant research. This requires knowledge of running ML research, knowledge of alignment, and management aptitude. * Some way of getting a team, unless you want to start the org out pretty small (which is potentially the right strategy). * It’s really helpful to have a bunch of contacts in EA. For example, I think it’s been really helpful for EA that I spent a few years doing lots of outreach stuff for MIRI, because it means I know a bunch of people who can potentially be recruited or give us advice. Of course, if you had some of these properties but not the others, many people in EA (eg me) would be very motivated to help you out, by perhaps introducing you to cofounders or helping you with parts you were less experienced with. People who wanted to start a Redwood competitor should plausibly consider working on an alignment research team somewhere (preferably leading it) and then leaving to start their own team. We’d certainly be happy to host people who had that aspiration (though we’d think that such people should consider the possibility of continuing to host their research inside Redwood instead of leaving).

This looks brilliant, and I want to strong-strong upvote!

What do you foresee as your biggest bottlenecks or obstacles in the next 5 years? Eg. finding people with a certain skillset, or just not being able to hire quickly while preserving good culture.

Buck
3y · 12
0
0

Thanks for the kind words!

Our biggest bottlenecks are probably going to be some combination of:

  • Difficulty hiring people who are good at some combination of leading ML research projects, executing on ML research, and reasoning through questions about how to best attack prosaic alignment problems with applied research.
  • A lack of sufficiently compelling applied research available, as a result of theory not being well developed enough.
  • Difficulty with making the organization remain functional and coordinated as it scales.

What if LessWrong is taken down for another reason? Eg. the organisers of this game/exercise want to imitate the situation Petrov was in, so they create some kind of false alarm

4
Peter Wildeford
3y
Last year the site looked very obviously nuked. If I see that situation, I will retaliate. If I see some other situation, I will use my best judgement.

An obvious question which I'm keen to hear people's thoughts on - does MAD work here? Specifically, does it make sense for the EA forum users with launch codes to commit to a retaliatory attack? The obvious case for it is deterrence. The obvious counterarguments are that the Forum could  go down for a reason other than a strike from LessWrong, and that once the Forum is down, it doesn't help us to take down LW (though this type of situation might be regular enough that future credibility makes it worth it)

 

Though of course it would be really bad for us to have to take down LW, and we really don't want to. And I imagine most of us trust the 100 LW users with codes not to use them :)

I know we're trying to remember when the US and USSR had their weapons pointed at each other but it feels more like the North and South islands of New Zealand are trying to decide whether to nuke each other!

Edit: Not even something so violent - just temporarily inconvenience each other

The question is whether precommitment would actually change behavior. In this case, anyone shutting down either site is effectively playing nihilist, and doesn't care, so it shouldn't.

In fact, if it does anything, it would be destabilizing - if "they" commit to pushing the button if "we" do, they are saying they aren't committed to minimizing damage  overall, which should make us question whether we're actually on the same side. (And this is a large part of why MAD only works if you are both selfish, and scared of losing.)

This is great!  I'm tentatively interested in groups trying outreach slightly before the start of term. It seems like there's a discontinuous increase in people's opportunity cost when they arrive at university - suddenly there are loads more cool clubs and people vying for their attention. Currently, EA groups are mixed in with this crowd of stuff. 

One way this could look is running a 1-2 week residential course for offer holders the summer before they start at university (a bit like SPARC or Uncommon Sense).  

To see if this is something a ... (read more)

2
mic
2y
Do you envision these activities before the start of the term as being virtual or in-person? I don't know how many people would be on-campus two weeks before the start of the semester. I think I would like to start email blasts slightly before the start of the semester though.
2
PeterSlattery
3y
Same. Useful also from an org impact perspective. E.g., over x people have viewed our posts (might be combined with website views etc)
2
NunoSempere
3y
Same.
3
DM
3y
Agreed, I'd love this feature! I also frequently rely on pageview statistics to prioritize which Wikipedia articles to improve.

I tentatively believe (ii), depending on some definitions. I'm somewhat surprised to see Ben and Darius implying it's a really weird view, which makes me wonder what I'm missing.

I don't want the EA community to stop working on all non-longtermist things. But the reason is because I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community, I don't mean indirect effects more broadly in the sense of 'better health in poor countries' --> '... (read more)

DM
3y · 11
0
0

I'm not sure what counts as 'astronomically' more cost effective, but if it means ~1000x more important/cost-effective I might agree with (ii).

This may be the crux - I would not count a ~ 1000x multiplier as anywhere near "astronomical" and should probably have made this clearer in my original comment. 

Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x.

All my comment was meant to say is that it seems hi... (read more)

Nice, thanks for these thoughts.

But there's no way to save up labor to be used later, except in the sense that you can convert labor into capital and then back into labor (although these conversions might not be efficient, e.g., if you can't find enough talented people to do the work you want). So the tradeoff with labor is that you have to choose what to prioritize. This question is more about traditional cause prioritization than about giving now vs. later. 

Ah sorry I think I was unclear. I meant 'capacity-building' in the narrow sense of 'getting m... (read more)

3
MichaelDickens
3y
I think we are falling for the double illusion of transparency: I misunderstood you, and the thing I thought you were saying was even further off than what you thought I thought you were saying. I wasn't even thinking about capacity-building labor as analogous to investment. But now I think I see what you're saying, and the question of laboring on capacity vs. direct value does seem analogous to spending vs. investing money. At a high level, you can probably model labor in the same way as I describe in OP: you spend some amount of labor on direct research, and the rest on capacity-building efforts that increase the capacity for doing labor in the future. So you can take the model as is and just change some numbers. Example: If you take the model in OP and assume we currently have an expected (median) 1% of required labor capacity, a rate of return on capacity-building of 20%, and a median AGI date of 2050, then the model recommends exclusively capacity-building until 2050, then spending about 30% of each decade's labor on direct research. One complication is that this super-easy model treats labor as something that only exists in the present. But in reality, if you have one laborer, that person can work now and can also continue working for some number of decades. The super-easy model assumes that any labor spent on research immediately disappears, when it would be more accurate to say that research labor earns a 0% return (or let's say a -3% return, to account for people retiring or quitting) while capacity-building labor earns a 20% return (or whatever the number is). This complication is kind of hard to wrap my head around, but I think I can model it with a small change to my program, changing the line in run_agi_spending that reads capital *= (1 - spending_schedule[y]) * (1 + self.investment_return)**10 to research_return = -0.03 capital *= spending_schedule[y] * ((1 + research_return)**10) + (1 - spending_schedule[y]) * ((1 + self.invest
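To make the modification described above concrete, here is a minimal, hypothetical sketch in Python of the decade-by-decade update MichaelDickens describes. The function name, parameter names, and example numbers are illustrative assumptions, not the actual program from the original post: labor capacity is split each decade between direct research (which earns a slightly negative return, reflecting attrition) and capacity-building (which compounds at a higher rate).

    # Illustrative sketch only -- not the original run_agi_spending code.
    def project_labor_capacity(initial_capacity,
                               spending_schedule,       # fraction of labor spent on direct research each decade
                               capacity_return=0.20,    # assumed annual return on capacity-building labor
                               research_return=-0.03):  # assumed annual "return" on research labor (attrition)
        capacity = initial_capacity
        for spend_fraction in spending_schedule:
            # Each decade, the research fraction decays slightly while the
            # capacity-building fraction compounds, mirroring the line quoted above.
            capacity *= (spend_fraction * (1 + research_return) ** 10
                         + (1 - spend_fraction) * (1 + capacity_return) ** 10)
        return capacity

    # Example: start at 1% of required labor capacity, build exclusively for three
    # decades, then spend 30% of each decade's labor on direct research.
    print(project_labor_capacity(0.01, [0.0, 0.0, 0.0, 0.3, 0.3]))

Note this sketch only projects capacity under a given schedule; the model in the original post additionally chooses the schedule against a target (median AGI) date, which is omitted here.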

This is cool, thanks for posting :) How do you think this generalises to a situation where labor is the key resource rather than money?

I'm a bit more interested in the question 'how much longtermist labor should be directed towards capacity-building vs. 'direct' work (eg. technical AIS research)?' than the question 'how much longtermist money should be directed towards spending now vs. investing to save later?'

I think this is mainly because longtermism, x-risk, and AIS seem to be bumping up against the labor constraint much more than the money constraint. ... (read more)

2
MichaelDickens
3y
That's an interesting question, and I agree with your reasoning on why it's important. My off-the-cuff thoughts: Labor tradeoffs don't work in the same way as capital tradeoffs because there's no temporal element. With capital, you can spend it now or later, and if you spend later, you get to spend more of it. But there's no way to save up labor to be used later, except in the sense that you can convert labor into capital and then back into labor (although these conversions might not be efficient, e.g., if you can't find enough talented people to do the work you want). So the tradeoff with labor is that you have to choose what to prioritize. This question is more about traditional cause prioritization than about giving now vs. later. This is something EAs have already written a lot about, and it's probably worth more attention overall than the question of giving (money) now vs. later, but I believe the latter question is more neglected and has more low-hanging fruit. The question of optimal giving rate might be irrelevant if, say, we're confident that the optimal rate is somewhere above 1%, we don't know where, but it's impossible to spend more than 1% due to a lack of funding opportunities. But I don't think we can be that confident that the optimal spending rate is that high. And even if we are, knowing the optimal rate still matters if you expect that we can scale up work capacity in the future. I'd guess >50% chance that the optimal spending rate is faster than the longtermist community[1] is currently spending, but I also expect the longtermist spending rate to increase a lot in the future due to increasing work capacity plus capital becoming more liquid—according to Ben Todd's estimate, about half of EA capital is currently too illiquid to spend. [1] I'm talking about longtermism specifically and not all EA because the optimal spending rate for neartermist causes could be pretty different.
2
Alex HT
3y
Also this: https://longtermrisk.org/the-future-of-growth-near-zero-growth-rates/
Answer by Alex HT · Jun 05, 2021 · 2
0
0

I haven't read it, but the name of this paper from Andreas at GPI at least fits what you're asking - "Staking our future: deontic long-termism and the non-identity problem"

1
Nathan_Barnard
3y
Hi Alex, the link isn't working 
Answer by Alex HT · Apr 13, 2021 · 6
0
0

 Is The YouTube Algorithm Radicalizing You? It’s Complicated.

Recently, there's been significant interest among the EA community in investigating short-term social and political risks of AI systems. I'd like to recommend this video (and Jordan Harrod's channel as a whole) as a starting point for understanding the empirical evidence on these issues.

3
Eli Rose
3y
From reading the summary in this post, it doesn't look like the YouTube video discussed bears on the question of whether the algorithm is radicalizing people 'intentionally,' which I take to be the interesting part of Russell's claim.

I agree with this answer. Also, lots of people do think that temporal position (or something similar, like already being born) should affect ethics.

But yes OP, accepting time neutrality and being completely indifferent about creating happy lives does seem to me to imply the counterintuitive conclusion you state. You might be interested in this excellent emotive piece or section 4.2.1 of this philosophy thesis. They both argue that creating happy lives is a good thing.

I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is more “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?


Yep that is what I'm saying. I think I don't agree but thanks for explaining :)

Can you say a bit more about why the quote is objectionable? I can see why the conclusion 'saving a life in a rich country is substantially more important than saving a life in a poor country' would be objectionable. But it seems Beckstead is saying something more like 'here is an argument for saving lives in rich countries being relatively more important than saving lives in poor countries' (because he says 'other things being equal').

3
Garrison
3y
The main issue I have with this quote is that it's so divorced from the reality of how cost effective it is to save lives in rich countries vs. poor countries (something that most EAs probably know already). I understand that this objection is addressed by the caveat 'other things being equal',  but it seems important to note that it costs orders of magnitude more to save lives in rich countries, so unless Beckstead thinks the knock-on effects of saving lives in rich countries are sufficient to offset the cost differences, it would still follow that we should focus our money on saving lives in poor countries. 
4
jtm
3y
I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is more “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion? I think if you provide arguments that lead to a very troubling conclusion, then you should ensure that they’re very strongly supported, eg by empirical or historical evidence. Since Beckstead didn't do that (which perhaps is to be expected in a philosophy thesis), I think it would at the very least have been appropriate to recognise that the premises for the argument are extremely speculative.  I also think the argument warrants some disclaimers – e.g., a warning that following this line of reasoning could lead to undesirable neglect of global poverty or a disclaimer that we should be very wary of any argument that leads to conclusions like 'we should prioritise people like ourselves.' Like Dylan Balfour said above, I am otherwise a big fan of this important dissertation; I just think that this quote is not a great look and it exemplifies a form of reasoning that we longtermists should be careful about.

There are also more applied AI/tech-focused economics questions that seem important for longtermists (eg if GPI stuff seems too abstract for you)

1
JackM
3y
Yes this 80,000 Hours article has some good ideas

Agree with Marisa that you'd be well suited to do an AMA

Answer by Alex HT · Feb 18, 2021 · 6
0
0

Also not CS and you may already know it: this EAG talk is about wild animal welfare research using economics techniques. Both authors of the paper discussed are economists, not biologists.

1
BrownHairedEevee
3y
Nope, I haven't seen this yet. Thanks for the link!

Thanks for your comment, it makes a good point. My comment was hastily written and I think the argument you're referring to is weak, but not as weak as you suggest.

At some points the author is specifically critiquing longtermism the philosophy (not what actual longtermists think and do) eg. when talking about genocide. It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear. 

There are many longtermists that don't hold these views (eg. Will MacAskill is literally... (read more)

2
Lukas Finnveden
3y
Agreed. Yeah this seems right, maybe with the caveat that Will has (as far as I know) mostly expressed skepticism about this being the most influential century, and I'd guess he does think this century is unusually influential, or at least unusually likely to be unusually influential. And yes, I also agree that the quoted views are very extreme, and that longtermists at most hold weaker versions of them.

from 'Things CEA is not doing' forum post https://forum.effectivealtruism.org/posts/72Ba7FfGju5PdbP8d/things-cea-is-not-doing 

We are not actively focusing on:

...

  • Cause-specific work (such as community building specifically for effective animal advocacy, AI safety, biosecurity, etc.)
2
Linda Linsefors
3y
Thanks for the much improved source!

I don’t have time to write a detailed and well-argued response, sorry. Here are some very rough quick thoughts on why I downvoted.  Happy to expand on any points and have a discussion.

In general, I think criticisms of longtermism from people who 'get' longtermism are incredibly valuable to longtermists.

One reason is that if the criticisms carry entirely, you'll save them from basically wasting their careers. Another reason is that you can point out weaknesses in longtermism or in their application of longtermism that they wouldn't have spotted themsel... (read more)

-1
philosophytorres
3y
[Responding to Alex HT above:] I'll try to find the time to respond to some of these comments. I would strongly disagree with most of them. For example, one that just happened to catch my eye was: "Longtermism does not say our current world is replete with suffering and death." So, the target of the critique is Bostromism, i.e., the systematic web of normative claims found in Bostrom's work. (Just to clear one thing up, "longtermism" as espoused by "leading" longtermists today has been hugely influenced by Bostromism -- this is a fact, I believe, about intellectual genealogy, which I'll try to touch upon later.) There are two main ingredients of Bostromism, I argue: total utilitarianism and transhumanism. The latter absolutely does indeed see our world the way many  religious traditions have: wretched, full of suffering, something to ultimately be transcended (if not via the rapture or Parousia then via cyborgization and mind-uploading). This idea, this theme, is so prominent in transhumanist writings that I don't know how anyone could deny it. Hence, if transhumanism is an integral component of Bostromism (and it is), and if Bostromism is a version of longtermism (which it is, on pretty much any definition), then the millennialist view that our world is in some sort of "fallen state" is an integral component of Bostromism, since this millennialist view is central to the normative aspects of transhumanism. Just read "Letter from Utopia." It's saturated in a profound longing to escape our present condition and enter some magically paradisiacal future world via the almost supernatural means of radical human enhancement. (Alternatively, you could write a religious scholar about transhumanism. Some have, in fact, written about the ideology. I doubt you'd find anyone who'd reject the claim that transhumanism is imbued with millennialist tendencies!)
7
Lukas Finnveden
3y
I haven't read the top-level post (thanks for summarising!); but in general, I think this is a weak counterargument. If most people in a movement (or academic field, or political party, etc) holds a rare belief X, it's perfectly fair to criticise the movement for believing X. If the movement claims that X isn't a necessary part of their ideology, it's polite for a critic to note that X isn't necessarily endorsed as the stated ideology, but it's important that their critique of the movement is still taken seriously. Otherwise, any movement can choose a definition that avoids mentioning the most objectionable part of their ideology without changing their beliefs or actions. (Similar to the motte-and-bailey fallacy). In this case, the author seems to be directly worried about longtermists' beliefs and actions; he isn't just disputing the philosophy.
6
Linch
3y
Thanks, this comment saved me time/emotional energy from reading the post myself.

I had left this for a day and had just come back to write a response to this post but fortunately you've made a number of the points I was planning on making.

I think it's really good to see criticism of core EA principles on here, but I did feel that a number of the criticisms might have benefited from being fleshed out more fully.

OP made it clear that he doesn't agree with a number of Nick Bostrom's opinions but I wasn't entirely clear (I only read it the once and quite quickly, so it may be the case that I missed this) where precisely the main di... (read more)

9
Tyle_Stelzig
3y
I upvoted Phil's post, despite agreeing with almost all of AlexHT's response to EdoArad above. This is because I want to encourage good faith critiques, even those which I judge to contain serious flaws. And while there were elements of Phil's book that read to me more like attempts at mood affiliation than serious engagement with his interlocutor's views (e.g. 'look at these weird things that Nick Bostrom said once!'), on the whole I felt that there was enough effort at engagement that I was glad Phil took the time to write up his concerns.  Two aspects of the book that I interpreted somewhat differently than Alex:  * The genocide argument that Alex expressed confusion about: I thought Phil's concern was not that longtermism would merely consider genocide while evaluating options, but that it seems plausible to Phil that longtermism (or a future iteration of it encountering different facts) could endorse genocide - i.e. that Phil is worried about genocide as an output of longtermism's decision process, not as an input. My model of Phil is that if he were confident that longtermism would always reject genocide, then he wouldn't be concerned merely that such possibilities are evaluated. Confidence: Low/moderate.  * The section describing utilitarianism: I read this section as merely aiming to describe an aspect of longtermism and to highlight features which might be wrong or counter-intuitive, not to actually make any arguments against the views he describes. This could explain Alex's confusion about what was being argued for (nothing) and feeling that intuitions were just being thrown at him (yes). I think Phil's purpose here is to lay the groundwork for his later argument that these ideas could be dangerous.  The only argument I noticed against utilitarianism comes later - namely, that together with empirical beliefs about the possibility of a large future it leads to conclusions that Phil rejects. Confidence: Low.  I agree with Alex that the book was not clea

I’d be keen to hear your thoughts about the (small) field of AI forecasting and its trajectory. Feel free to say whatever’s easiest or most interesting. Here are some optional prompts:

  • Do you think the field is progressing ‘well’, however you define ‘well’? 
  • What skills/types of people do you think AI forecasting needs?
  • What does progress look like in the field? Eg. does it mean producing a more detailed report, getting a narrower credible interval, getting better at making near-term AI predictions...(relatedly, how do we know if we're making progress?)
  • Can you make any super rough predictions like ‘by this date I expect we’ll be this good at AI forecasting’? 
5
Ajeya
3y
Hm, I think I'd say progress at this stage largely looks like being better able to cash out disagreements about big-picture and long-term questions in terms of disagreements about more narrow, empirical, or near-term questions, and then trying to further break down and ultimately answer these sub-questions to try to figure out which big picture view(s) are most correct. I think given the relatively small amount of effort put into it so far and the intrinsic difficulty of this project, returns have been pretty good on that front -- it feels like people are having somewhat narrower and more tractable arguments as time goes on. I'm not sure about what exact skillsets the field most needs. I think the field right now is still in a very early stage and could use a lot of disentanglement research, and it's often pretty chaotic and contingent what "qualifies" someone for this kind of work. Deep familiarity with the existing discourse and previous arguments/attempts at disentanglement is often useful, and some sort of quantitative background (e.g. economics or computer science or math) or mindset is often useful, and subject matter expertise (in this case machine learning and AI more broadly) is often useful, but none of these things are obviously necessary or sufficient. Often it's just that someone happens to strike upon an approach to the question that has some purchase, they write it up on the EA Forum or LessWrong, and it strikes a chord with others and results in more progress along those lines.
7
Aryeh Englander
3y
  I know you asked Ajeya, but I'm going to add my own unsolicited opinion that we need more people with professional risk analysis backgrounds, and if we're going to do expert judgment elicitations as part of forecasting then we need people with professional elicitation backgrounds. Properly done elicitations are hard. (Relevant background: I led an AI forecasting project for about a year.)

Joey, are there unusual empirical beliefs you have in mind other than the two mentioned? Hits based giving seems clearly related to Charity Entrepreneurship's work - what other important but unusual empirical beliefs do you/CE/neartermist EAs hold? (I'm guessing hinge of history hypothesis is irrelevant to your thinking?)

6
Joey
3y
I think the majority of unusual empirical beliefs that came to mind were more in the longtermist space. In some ways these are unusual at even a deeper level than the suggested beliefs e.g. I think EAs generally give more credence epistemically to philosophical/a priori evidence, Bayesian reasoning, sequence thinking, etc. If I think about unusual empirical beliefs Charity Entrepreneurship has as well, it would likely be something like the importance of equal rigor, focusing on methodology in general, or the ability to beat the charity market using research.  In both cases these are just a couple that came to mind – I suspect there are a bunch more.
Answer by Alex HT · Jan 02, 2021 · 6
0
0

My guess is that few EAs care emotionally about cost effectiveness and that they care emotionally about helping others a lot. Given limited resources, that means they have to be cost effective. Imagine a mother with a limited supply of food to share between her children. She doesn't care emotionally about rationing food, but she'll pay a lot of attention to how best to do the rationing.

I do think there are things in the vicinity of careful reasoning/thinking clearly/having accurate beliefs that are core to many EAs identities. I think those can be developed naturally to some extent, and don’t seem like complete prerequisites to being an EA

Thanks for writing this and contributing to the conversation :)

Relatedly, an “efficient market for ideas” hypothesis would suggest that if MB really was important, neglected, and tractable, then other more experienced and influential EAs would have already raised its salience.

I do think the salience of movement building has been raised elsewhere eg:

... (read more)
1
Aaron Bergman
3y
Thanks for all those references. Don't know how I missed the 80,000 page on the topic, but that's a pretty big strike against it being ignored. Regarding your second point, I largely agree but there are surely some MB interventions that don't require full-time generalists. For example, message testing and advertising (I assume) can be mostly outsourced with enough money. 

You really don't seem like a troll! I think the discussion in the comments on this post is a very valuable conversation and I've been following it closely. I think it would be helpful for quite a few people for you to keep responding to comments

Of course, it's probably a lot of effort to keep replying carefully to things, so understandable if you don't have time :)

Thanks! I appreciate it :)

It makes me feel anxious to get a lot of downvotes with no explanation so I really appreciate your comment.

Just to clarify when you say "if that is a real tradeoff that a founder faces in practice, it is nearly always an indication the founder just hasn't bothered to put much time or effort into cultivating a diverse professional network" I think I agree, but that this isn't always something the founder could have predicted ahead of time, and the founder isn't necessarily to blame. I think it can be very easy to 'accidentally' end... (read more)

4
IanDavidMoss
3y
Re: the downvotes, I wish I could just say not to let them bother you, but the truth is they make me anxious too. Unfortunately there are a handful of EA Forum users who routinely strong-downvote posts and comments that have any whiff of a social/racial justice message.
9
IanDavidMoss
3y
Oh sure, and I didn't mean to imply otherwise. Lots of people have homogeneous networks through no fault of their own. But if that's the case for you and you're trying to do something for which having a diverse network would be helpful, then it's something you need to budget time and energy towards just as it would be the case for ensuring strong organizational infrastructure, funding, etc. So that's why I thought it was really valuable for you to point that out to Marcus, who seems to be getting an otherwise very promising project off the ground. :)

Was this meant as a reply to my comment or a reply to Ben's comment?

I was just asking what the position was and made explicit I wasn't suggesting Marcus change the website.

3
Linch
3y
Threading etiquette is confusing! It was unclear to me whether the right person to respond to was Ben, Marcus, or you. So I went for the most top-level comment that seemed reasonable.  In retrospect I should have just commented on the post directly. 

Yep! I assumed this kind of thing was the case (and obviously was just flagging it as something to be aware of, not trying to finger-wag)

I don't find anything wrong at all with 'saintly' personally, and took it as a joke. But I could imagine someone taking it the wrong way. Maybe I'd see what others on the forum think

Upside seems low, downside seems pretty high.

It looks like all the founders, advisory team, and athletes are white or white-passing. I guess you're already aware of this as something to consider, but it seems worth flagging (particularly given the use of 'Saintly' for those donating 10% :/).

Some discussion of why this might matter here: https://forum.effectivealtruism.org/posts/YCPc4qTSoyuj54ZZK/why-and-how-to-make-progress-on-diversity-and-inclusion-in

Edit: In fact, while I think appearing all-white and implicitly describing some of your athletes as 'Saintly' are both acceptable PR risks, having the... (read more)

1
Mati_Roy
3y
That's awesome, good work! :)
6
IanDavidMoss
3y
Hi Alex, I want to voice my support both for you raising this in the first place and for the gentle, nonconfrontational way in which you did so. This was a good example of "calling in" a well-intentioned colleague, in my opinion. More generally, as a founder of several initiatives myself I've come to believe that prioritizing diversity, especially racial diversity, in the early stages of growth is quite important for projects that have an outward-facing mission and wide potential audience such as Marcus's. The reason it's more important than people often give it credit for is that the composition of a founding team has follow-on effects for who else it recruits, what networks it builds initial strength in, and even in some cases how it makes decisions about what programming to prioritize. Once those choices are made and the initial history of the organization is written, it becomes much harder (though not impossible) to "diversify" authentically after the fact. Of course I don't recommend sacrificing things like team cohesion or effectiveness for the sake of demographic diversity, but if that is a real tradeoff that a founder faces in practice, it is nearly always an indication the founder just hasn't bothered to put much time or effort into cultivating a diverse professional network. Again, for some kinds of work it might not be that important. For fundraising and visibility among a diverse worldwide community of athletes, it's essential.

Also, I would love to have a wide variety of athletes represented by HIA. As it's still very new I'm focusing outreach on those I have personal relationships with, which means tennis, which is predominantly white in the professional space at this point in time. I'm hoping that over time I can get in touch with a more diverse range of athletes from many different sports. 

2
Marcus Daniell
3y
This is a good point and not one I'd thought of before. Thank you.  Re 'saintly', it is intended as a joke. Do you think it's more offensive than funny? Or not worth the risk?  Re diversity, I can't help that I'm the founder and I'm white, but having a more diverse advisory board sounds good. Do you have any ideas as to who would be good advisors for this sort of thing? Important to note that all the advisors are completely pro bono. 

Some of the wording on the 'Take the Pledge' section seems a little bit off (to me at least!). Eg. saying a 1-10% pledge will 'likely have zero noticeable impact on your standard of living' seems misleading, and could give off the impression that the pledge is only for the very wealthy (for whom the statement is more likely to be true). I'm also not sure about the 'Saintly' categorisation of the highest giving level (10%). It could come across as a bit smug or saviour-ish. I'm not sure about the tradeoffs here though and obviously you have much more context than me.

Maybe you've done this already, but it could be good to ask Luke from GWWC for advice on tone here.

2
Marcus Daniell
3y
I would argue that most people reading the website are very wealthy - living in a western country almost inevitably qualifies you as very wealthy. For the main target audience - successful professional athletes - a 10% pledge would not change quality of life one whit. 

I see you mention that HIA's recommendations are based on a suffering-focused perspective. It's great that you're clear about where you're coming from/what you're optimising for. To explore the ethical perspective of HIA further - what is HIA's position on longtermism?

(I'm not saying you should mention your take on longtermism on the website.)

9
Linch
3y
We all have different beliefs and intuitions about the world, including about how other people see the world. Compared to the rest of us, Marcus has a strong comparative advantage in both a) having an intuition for what messages work for professional athletes and are easier for them to relate to, and, more importantly, b) having access to a network to test different messages. I would personally be excited if, rather than us debating at length about what will or won't be appealing to a hypothetical audience, Marcus just went out and experimented with different messages with the actual audience that he has. The results may or may not surprise us.
5
Marcus Daniell
3y
See below about casting the net - being an athlete myself and knowing many personally I think longtermism is too much of a stretch conceptually for most athletes at this point. 

This is really cool! Thanks for doing this :)

Is there a particular reason the charity areas are 'Global Health and Poverty' and 'Environmental Impact' rather than including any more explicit mention of animal welfare? (For people reading this - the environmental charities include the Good Food Institute and the Humane League along with four climate-focussed charities.)

4
Ben
3y
By the way EA Funds now includes the Founders Pledge climate fund which I think is a bit more straightforward than the animal welfare argument
2
Marcus Daniell
3y
Hi Alex, thanks for your comments! I'll reply to each. I'm aiming to cast the net as widely as possible within the athlete community. To me this means mixing the novel (effective altruism) with the known. I think it is also valid to say that the animal welfare charities represented have a large impact on the environment. 

Welcome to the forum!

Have you read Bostrom's Astronomical Waste? He does a very similar estimate there. https://www.nickbostrom.com/astronomical/waste.html

I'd be keen to hear more about why you think it's not possible to meaningfully reduce existential risk.

1
Giga
3y
I have not seen that, but I will check it out. As for the existential threat, it is for a few reasons; I will make a more detailed post about it later. First off, I believe very few things are existential threats to humanity itself. Humans are extremely resilient and live in every nook and cranny on earth. Even total nuclear war would have plenty of survivors. As far as I see it, only an asteroid or aliens could wipe us out unexpectedly. AI could wipe out humanity, however I believe it would be a voluntary extinction in that case. Future humans may believe AI has qualia, and is much more efficient at creating utility than biological life. I cannot imagine future humans being so stupid as to have AI connected to the internet and a robot army able to be hijacked by said AI at the same time. I do believe there is an existential threat to civilization, however it is not present yet, and we will be capable of self-sustaining colonies off Earth by the time it will arise (meaning that space acceleration would be a form of existential threat reduction). Large portions of Africa, and smaller portions of the Americas and Asia, are not at a civilizational level that would make a collapse possible; however, they will likely cross that threshold this century. If there is a global civilizational collapse, I do not think civilization would ever return. However, there are far too many unknowns as to how to avoid said collapse meaningfully. Want to prevent a civilization-ending nuclear war? You could try to bolster the power of the weaker side to force a cold war. Or maybe you want to make the sides more lopsided so intimidation will be enough. However, as we do not know which strategy is more effective, and they have opposite praxis, there is no way to know if you would be increasing existential threats or not. Lastly, most existential threat reduction is political by nature. Politics are also extremely unpredictable, and extremely hard to influence even if you know what you are do
Answer by Alex HT · Nov 18, 2020 · 22
0
0

"Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea.

If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that ... (read more)

2
MichaelA
3y
(In case any future readers are wondering, this quote is from Derek Parfit.)

Thanks for writing this! I and an EA community builder I know found it interesting and helpful.

I'm pleased you have a 'counterarguments' section, though I think there are some counterarguments missing:

  • OFTW groups may crowd out GWWC groups. You mention the anchoring effect on 1%, but there's also the danger of anchoring on a particular cause area. OFTW is about ending extreme poverty, whereas GWWC is about improving the lives of others (much broader)

  • OFTW groups may crowd out EA groups. If there's an OFTW group at a university, the EA group may have to

... (read more)
2
Jack Lewars
3y
Hi Alex - these are very good points and largely correct, I think - thanks for contributing them. I've added some thoughts and mitigations below: 1. Yes, we definitely do anchor around poverty. I think this can be good 'scaffolding' to come into the movement; but sometimes it will anchor people there. It is worth noting, though, that global health and poverty is consistently the most popular cause area in the EA survey, so there are clearly other factors anchoring to this cause area - it's hard to say how much OFTW counterfactually increases this effect (and whether it counterfactually stops people from progressing beyond global health and poverty). In terms of mitigation for competing with GWWC - we are in close touch with them and both sides are working hard to foster collaboration and avoid competition. 2. On point 2, our experience so far is that OFTW and EA groups actually coexist very well. I think (without any systematic evidence) some of this may be because a lot of EA groups don't prioritise donations, preferring to focus on things like career advice, and so OFTW chapters can sort of 'own' the donation space; sometimes, though, they just find a way to work alongside each other. I'm not sure it follows that we have to 'compete for altruistically motivated people' - in fact, I don't really see any reason why someone couldn't take the OFTW pledge and then carry on engaging with EA uninterrupted - but I agree that we could compete on this front. A lot seems to depend on OFTW's approach/message/ask. Maybe a virtue of OFTW is that we really only need people's attention for a short period to get them to take one action - so we aren't competing for their sustained attention, in a way that would crowd out EA programming. Indeed, we can actually be a funnel to get them to pay attention to this content - see for example our recent webinar with Toby Ord on x-risk, which attracted ~200 people, many of whom came from OFTW chapters. 3. Yes, fair. I'd just bear in mind, t

Thanks, that's helpful for thinking about my career (and thanks for asking that question Michael!) 

Edit: helpful for thinking about my career because I'm thinking about getting economics training, which seems useful for answering specific sub-questions in detail ('Existential Risk and Economic Growth' being the perfect example of this),  but one economic model alone is very unlikely to resolve a big question.

3
Max_Daniel
4y
Glad it's helpful! I think you're very likely doing this anyway, but I'd recommend to get a range of perspectives on these questions. As I said, my own views here don't feel that resilient, and I also know that several epistemic peers disagree with me on some of the above.