All of elifland's Comments + Replies

Thanks for organizing this! Tentatively excited about work in this domain.

I do think that generating models/rationales is part of forecasting as it is commonly understood (including in EA circles), and certainly don't agree that forecasting by definition means that little effort was put into it!
Maybe the right place to draw the line between forecasting rationales and “just general research” is asking “is the model/rationale for the most part tightly linked to the numerical forecast?” If yes, it's forecasting; if not, it's something else.


Thanks for clarifying! Would you consider OpenPhil worldview investigations repor... (read more)

5
BenjaminTereick
18d
I think it’s borderline whether reports of this type are forecasting as commonly understood, but would personally lean no in the specific cases you mention (except maybe the bio anchors report).

I really don’t think that this intuition is driven by the amount of time or effort that went into them, but rather the percentage of intellectual labor that went into something like “quantifying uncertainty” (rather than, e.g. establishing empirical facts, reviewing the literature, or analyzing the structure of commonly-made arguments).

As for our grantmaking program: I expect we’ll have a more detailed description of what we want to cover later this year, where we might also address points about the boundaries to worldview investigations.

Thanks for writing this up, and I'm excited about FutureSearch! I agree with most of this, but I'm not sure framing it as more in-depth forecasting is the most natural given how people generally use the word forecasting in EA circles (i.e. associated with Tetlock-style superforecasting, often aggregation of very part-time forecasters' views, etc.). It might, imo, be more natural to think of it as a need for in-depth research, perhaps with a forecasting flavor. Here's part of a comment I left on a draft.

However, I kind of think the framing of the essay

... (read more)
3
dschwarz
21d
Agreed Eli, I'm still working to understand where the forecasting ends and the research begins. You're right, the distinction is not whether you put a number at the end of your research project. In AGI (or other hard sciences) the work may be very different, and done by different people. But in other fields, like geopolitics, I see Tetlock-style forecasting as central, even necessary, for research. At the margin, I think forecasting should be more research-y in every domain, including AGI. Otherwise I expect AGI forecasts will continue to be used, while not being very useful.

Thanks Ozzie for chatting! A few notes reflecting on places I think my arguments in the conversation were weak:

  1. It's unclear what short timelines would mean for AI-specific forecasting. If AI timelines are short it means you shouldn’t forecast non-AI things much, but it’s unclear what it means about forecasting AI stuff. There’s less time for effects to compound but you have more info and proximity to the most important decisions. It does discount non-AI forecasting a lot though, and some flavors of AI forecasting.
  2. I also feel weird about the comparison I ma
... (read more)

Just chatted with @Ozzie Gooen about this and will hopefully release audio soon. I probably overstated a few things / gave a false impression of confidence in the parent in a few places (e.g., my tone was probably a little too harsh on non-AI-specific projects); hopefully the audio convo will give a more nuanced sense of my views. I'm also very interested in criticisms of my views and others sharing competing viewpoints.

Also want to emphasize the clarifications from my reply to Ozzie:

  1. While I think it's valuable to share thoughts about the value of differen
... (read more)
6
Ozzie Gooen
1mo
Audio/podcast is here: https://forum.effectivealtruism.org/posts/fsnMDpLHr78XgfWE8/podcast-is-forecasting-a-promising-ea-cause-area

Thanks Ozzie for sharing your thoughts!

A few things I want to clarify up front:

  1. While I think it's valuable to share thoughts about the value of different types of work candidly, I am very appreciative of both people working on forecasting projects and grantmakers in the space for their work trying to make the world a better place (and am friendly with many of them). As I maybe should have made more obvious, I am myself affiliated with Samotsvety Forecasting, and with Sage, which has done several forecasting projects. And I'm also doing AI forecasting research at
... (read more)
6
Ozzie Gooen
1mo
Thanks for the replies! Some quick responses.

First, again, overall, I think we generally agree on most of this stuff.

I agree to an extent. But I think there are some very profound prioritization questions that haven't been researched much, and that I don't expect us to gain much insight into from experimentation in the next few years. I'd still like us to do experimentation (if I were in charge of a $50Mil fund, I'd start spending it soon, just not as quickly as I would otherwise). For example:

* How promising is it to improve the wisdom/intelligence of EAs vs. others?
* How promising are brain-computer interfaces vs. rationality training vs. forecasting?
* What is a good strategy to encourage epistemic-helping AI, and where could philanthropists have the most impact?
* What kinds of benefits can we generically expect from forecasting/epistemics? How much should we aim for EAs to spend here?

We might be disagreeing a bit on what the bar for "valuable for EA decision-making" is. I see a lot of forecasting as like accounting - it rarely leads to a clear and large decision, but it's good to do, and steers organizations in better directions. I personally rely heavily on prediction markets for key understandings of EA topics, and people like Scott Alexander and Zvi seem to as well. I know less about the inner workings of OP, but the fact that they continue to pay for predictions on questions that are very much their own seems like a sign. All that said, I think that ~95%+ of Manifold and a lot of Metaculus is not useful at all.

I'm not sure how much to focus on OP's narrow choices here. I found it surprising that Javier went from governance to forecasting, and that previously it was the (very small) governance team that did forecasting. It's possible that if I evaluated the situation, and had control of it, I'd recommend that OP move marginal resources from forecasting to governance. But I'm a lot less interested in this question than I am in "is forecasting co

All views are my own rather than those of any organizations/groups that I’m affiliated with. Trying to share my current views relatively bluntly. Note that I am often cynical about things I’m involved in. Thanks to Adam Binks for feedback. 

Edit: See also child comment for clarifications/updates

Edit 2: I think the grantmaking program has a different scope than I was expecting; see this comment by Benjamin for more.

Following some of the skeptical comments here, I figured it might be useful to quickly write up some personal takes on forecastin... (read more)

Just chatted with @Ozzie Gooen about this and will hopefully release audio soon. I probably overstated a few things / gave a false impression of confidence in the parent in a few places (e.g., my tone was probably a little too harsh on non-AI-specific projects); hopefully the audio convo will give a more nuanced sense of my views. I'm also very interested in criticisms of my views and others sharing competing viewpoints.

Also want to emphasize the clarifications from my reply to Ozzie:

  1. While I think it's valuable to share thoughts about the value of differen
... (read more)

I feel like I need to reply here, as I'm working in this industry and am more inclined to defend it.

First, to be clear, I generally agree a lot with Eli on this. But I'm more bullish on epistemic infrastructure than he is.

Here are some quick things I'd flag. I might write a longer post on this issue later.

  1. I'm similarly unsure about a lot of existing forecasting grants and research. In general, I'm not very excited about most academic-style forecasting research at the moment, and I don't think there are many technical groups at all (maybe ~30 full time equivalents in the f
... (read more)
  1. So in the multi-agent slowly-replacing case, I'd argue that individual decisions don't necessarily represent a voluntary decision on behalf of society (I'm imagining something like this scenario). In the misaligned power-seeking case, it seems obvious to me that this is involuntary. I agree that it technically could be a collective voluntary decision to hand over power more quickly, though (and in that case I'd be somewhat less against it).
  2. I think emre's comment lays out the intuitive case for being careful / taking your time, as does Ryan's. I think the e
... (read more)

(edit: my point is basically the same as emre's)

I think there is very likely at some point going to be some sort of transition to a world where AIs are effectively in control. It seems worth it to slow down on the margin to try to shape this transition as best we can, especially slowing it down as we get closer to AGI and ASI. It would be surprising to me if making the transfer of power more voluntary/careful led to worse outcomes (or only led to slightly better outcomes such that the downsides of slowing down a bit made things worse).

Delaying the arrival ... (read more)

6
Matthew_Barnett
3mo
Two questions here:

1. Why would accelerating AI make the transition less voluntary? (In my own mind, I'd be inclined to reverse this sentiment a bit: delaying AI by regulation generally involves forcibly stopping people from adopting AI. Force might be justified if it brings about a greater good, but that's not the argument here.)
2. I can understand being "careful". Being careful does seem like a good thing. But "being careful" generally trades off against other values in almost every domain I can think of, and there is such a thing as too much of a good thing. What reason is there to think that pushing for "more caution" is better on the margin compared to acceleration, especially considering society's default response to AI in the absence of intervention?

If Scott had used language like this, my guess is that the people he was trying to convince would have completely bounced off of his post.

I mostly agree with this; I wasn't suggesting he include that specific type of language, just that the arguments in the post don't go through from the perspective of most leader/highly-engaged EAs. Scott has discussed similar topics on ACT here, but I agree the target audience was likely different.

I do think part of his target audience was probably EAs who he thinks are too critical of themselves, as I think he's written... (read more)

EDIT: Scott has admitted a mistake, which addresses some of my criticism:



(this comment has overlapping points with titotal's)

I've seen a lot of people strongly praising this article on Twitter and in the comments here but I find some of the arguments weak. Insofar as the goal of the post is to say that EA has done some really good things, I think the post is right. But I don't think it convincingly argues that EA has been net positive for the world.[1]

First: based on surveys, it seems likely that most (not all!) highly-engaged/leader EAs believe GCRs/longt... (read more)

seems arguably higher than .0025% extinction risk and likely higher than 200,000 lives if you weight the expected value of all future people >~100x that of current people.

If Scott had used language like this, my guess is that the people he was trying to convince would have completely bounced off of his post.

I interpreted him to be saying something like "look Ezra Klein et al., even if we start with your assumptions and reasoning style, we still end up with the conclusion that EA is good." 

And it seems fine to me to argue from the basis of someon... (read more)

These were the 3 snippets I was most interested in:

Under pure risk-neutrality, whether an existential risk intervention can reduce existential risk by more than 1.5 basis points per billion dollars spent determines whether it is an order of magnitude better than the Against Malaria Foundation (AMF).

If you use welfare ranges that are close to Rethink Priorities’ estimates, then only the most implausible existential risk intervention is estimated to be an order of magnitude more cost-effective than cage-free campaigns and the hypothetical shr... (read more)

3
DanielFilan
6mo
Thanks!
2
Vasco Grilo
9mo
Thanks! Corrected.

In an update on Sage introducing quantifiedintuitions.org, we described a pivot we made after a few months:

As stated in the grant summary, our initial plan was to “create a pilot version of a forecasting platform, and a paid forecasting team, to make predictions about questions relevant to high-impact research”. While we built a decent beta forecasting platform (that we plan to open source at some point), the pilot for forecasting on questions relevant to high-impact research didn’t go that well due to (a) difficulties in creating resolvable questions rele

... (read more)

Personally the FTX regrantor system felt like a nice middle ground between EA Funds and donor lotteries in terms of (de)centralization. I'd be excited to donate to something less centralized than EA Funds but more centralized than a donor lottery.

4
Linda Linsefors
1y
Maybe something like the S-process used by SFF? https://survivalandflourishing.fund/s-process It would be cool to have a grant system where anyone can list themselves as a fund manager, and donors can pick which fund managers' decisions they want to back with their donations. If I remember correctly, the S-process could facilitate something like that.

Which part of my comment did you find as underestimating how grievous SBF/Alameda/FTX's actions were? (I'm genuinely unsure)

3
Persona14246
1y
Sorry for the confusion, I was adding on to your comment. I agree with you obviously. It was more a statement on the forum over the past five-six days. 

Nitpick, but I found the sentence:

Based on things I've heard from various people around Nonlinear, Kat and Emerson have a recent track record of conducting Nonlinear in a way inconsistent with EA values [emphasis mine].

a bit strange in the context of the rest of the comment. If your characterization of Nonlinear is accurate, it would seem to be inconsistent with ~every plausible set of values and not just "EA values".

Appreciate the quick, cooperative response.

I want you to write a better post arguing for the same overall point if you agreed with the title, hopefully with more context than mine.

Not feeling up to it right now and not sure it needs a whole top-level post. My current take is something like (very roughly/quickly written):

  1. New information is currently coming in very rapidly.
  2. We should at least wait until the information comes in a bit slower before thinking seriously in-depth about proposed mitigations so we have a better picture of what went wrong. But "babbl
... (read more)
3
Persona14246
1y
Just for some perspective here, the DOJ could be pursuing SBF for wire fraud, which comes with a maximum sentence of twenty years. FTX's bankruptcy couldn't be construed as a mistake past the first day or so of last week, and this is still very generous. I find that this forum has consistently underestimated how grievous the actions taken by SBF, Alameda and FTX have been compared to the individuals I know who work in finance or crypto. https://fortune.com/crypto/2022/11/13/could-sam-bankman-fried-go-to-prison-for-the-ftx-disaster/

I thought I would like this post based on the title (I also recently decided to hold off for more information before seriously proposing solutions), but I disagree with much of the content.

A few examples:

It is uncertain whether SBF intentionally committed fraud, or just made a mistake, but people seem to be reacting as if the takeaway from this is that fraud is bad.

I think we can safely say at this point, with >95% confidence, that SBF basically committed fraud even if not technically in the legal sense (edit: but also seems likely to be fraud in the lega... (read more)

I don't say this often, but thanks for your comment!

This seems wrong, e.g. EA leadership had more personal context on Sam than investors. See e.g. Oli here with a personal account and my more abstract argument here.

Interesting! You have changed my mind on this. You clearly know more about this than I. I want you to write a better post arguing for the same overall point if you agreed with the title, hopefully with more context than mine.

The fact that we have such different pictures I think may be an effect of what I'm seeing on the forum. So many top le... (read more)

I’m not as sure about advisors, as I wrote here. Agree on recipients.

It's a relevant point but I think we can reasonably expect EA leadership to do better at vetting megadonors than Sequoia due to (a) more context on the situation, e.g. EAs should have known more about SBF's past than Sequoia and/or could have found it out more easily via social and professional connections, and (b) more incentive to avoid downside risks, e.g. the SBF blowup matters a lot more for EA's reputation than Sequoia's.

To be clear, this does not apply to charities receiving money from FTXFF, that is a separate question from EA leadership.

8
timunderwood
1y
You expect the people being given free cash to do a better job of due diligence than the people handing someone a giant cash pile? Not to mention that the Future Fund donations probably did more good for EA causes than the reputational damage is going to do harm to them (making the further assumption that this is actually net reputational damage, as opposed to a bunch of free coverage that pushes some people off and attracts some other people).
4
Nathan Young
1y
Also, to be pithy: If we are so f*****g clever as to know what risks everyone else misses and how to avoid them, how come we didn't spot that one of our best and brightest was actually a massive fraudster?
1
[anonymous]
1y
I think a) and b) are good points. Although there's also c) it's reasonable to give extra trust points to a member of the community who's just given you a not-insignificant part of their wealth to spend on charitable endeavours as you see fit. Note that I'm obviously not saying this implied SBF was super trustworthy on balance, just that it's a reasonable consideration pushing in the other direction when making the comparison with Sequoia who lacked most of this context (I do think it's a good thing that we give each other trust points for signalling and demonstrating commitments to EA).

Thanks for clarifying. To be clear, I didn't say I thought they were as bad as Leverage. I said "I have less trust in CEA's epistemics to necessarily be that much better than Leverage's, though I'm uncertain here"

I've read it. I'd guess we have similar views on Leverage, but different views on CEA. I think it's very easy for well-intentioned, generally reasonable people's epistemics to be corrupted via tribalism, motivated reasoning, etc.

But as I said above I'm unsure.

Edited to add: Either way, might be a distraction to debate this sort of thing further. I'd guess that we both agree in practice that the allegations should be taken seriously and investigated carefully, ideally by independent parties.

I agree that these can technically all be true at the same time, but I think the tone/vibe of comments is very important in addition to what they literally say, and the vibe of Arepo's comment was too tribalistic.

I'd also guess re: (3) that I have less trust in CEA's epistemics to necessarily be that much better than Leverage's, though I'm uncertain here (edited to add: tbc my best guess is it's better, but I'm not sure what my prior should be if there's a "he said / she said" situation, on who's telling the truth. My guess is closer to 50/50 than 95/5 in log odds at least).

Mea culpa for not being clear enough. I don't think handwavey statements from someone whose credibility I doubt have much evidential value, but I strongly think CEA's epistemics and involvement should be investigated - possibly including Vaughan's.

I find it bleakly humourous to be interpreted as tribalistically defending CEA when I've written gradually more public criticisms of them and their lack of focus - and honestly, while I don't understand thinking they're as bad as Leverage, I think they've historically probably been a counterfactual neg... (read more)

21
Caro
1y

I agree that the tone was too tribalistic, but the content is correct.

(Seems a bit like a side-topic, but you can read more about Leverage on this EA Forum post and, even more importantly, in the comments. I hope that's useful for you! The comments definitely changed my views - negatively - about the utility of Leverage's outputs and some cultural issues.)

I’m guessing I have a lower opinion of Leverage than you based on your tone, but +1 on Kerry being at CEA for 4 years making it more important to pay serious attention to what he has to say even if it ultimately doesn’t check out. We need to be very careful to minimize tribalism hurting our epistemics.

59
Caro
1y

For what it's worth, these different considerations can be true at the same time:

  1. "He may have his own axe to grind.": that's probably true, given that he's been fired by CEA.
  2. "Kerry being at CEA for four years makes it more important to pay serious attention to what he has to say even if it ultimately doesn’t check out.": it also seems like he may have particularly useful information and contexts.
  3. "He's now the program manager at a known cult that the EA movement has actively distanced itself from": it does seem like Leverage is shady and doesn't have a very
... (read more)

Does the "deliberate about current evidence" part include thinking a lot about AI alignment to identify new arguments or considerations that other people on Earth may not have thought of, or would that count as new evidence?

It seems like if that would not count as new evidence, then the team you described might be able to come up with much better forecasts than we have today, and I'd think their final forecast would be more likely to end up much lower or much higher than e.g. your forecast. One consequence of this might then be that your 90% confidence

... (read more)
2
WilliamKiely
1y
Thanks for the response! This clarifies what I was wondering well: I have some more thoughts regarding the following, but want to note up front that no response is necessary--I'm just sharing my thoughts out loud:

I agree there's a ton of irreducible uncertainty here, but... what's a way of putting it... I think there are lots of other strong forecasters who think this too, but might look at the evidence that humanity has today and come to a significantly different forecast than you. Like who is to say that Nate Soares and Daniel Kokotajlo's forecasts are wrong? (Though actually it takes a smaller likelihood ratio for you to update to reach their forecasts than it does for you to reach MacAskill's forecast.) Presumably they've thought of some arguments and considerations that you haven't read or thought of before. I think it wouldn't surprise me if this team deliberating on humanity's current evidence for a thousand years would come across those arguments or considerations (or some other ones) in their process of logical induction (to use a term I learned from MIRI that roughly means updating without new evidence) and ultimately decide on a final forecast very different than yours as a result.

Perhaps another way of saying this is that your current forecast may be 35% not because that's the best forecast that can be made with humanity's current evidence, given the irreducible uncertainty in the world, but rather because you don't currently have all of humanity's current evidence. Perhaps your 35% is more reflective of your own ignorance than the actual amount of irreducible uncertainty in the world.

Reflecting a bit more, I'm realizing I should ask myself what I think is the appropriate level of confidence that 3% is too low. Thinking about it a bit more, 90% actually doesn't seem that high, even given what I just wrote above. I think my main reason for thinking it may be too high is that 1000 years is a long time for a team of 100 reasonable people to think ab

Thanks Max! Was great seeing you as well. I did take some time off and was a bit more chill for a little while, blogging however much I felt like. I've been doing a lot better for the past 2 months.

2
MaxRa
2y
Nice, that’s good to hear. :)

The impact isn't coming from the literal $10M donation to OpenAI, it's coming from spearheading its founding.

See https://twitter.com/esyudkowsky/status/1446562238848847877

2
NunoSempere
2y
Yeah, I'm aware Yudkowsky thinks that, though I think I don't agree with it. In particular, it could be the case in expectation, but Yudkowsky seems like he speaks more of certainties.

“P(misalignment x-risk|AGI)”: Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI.

I'm guessing this definition is meant to separate misalignment from misuse, but I'm curious whether you are including either/both of these 2 cases as misalignment x-risk:

  1. AGI is deployed and we get locked into a great outcome by today's standards, but we get a world with <=1% of the value of "humanity's potential". So we sort of have an existential catastrophe, without a discrete cata
... (read more)
2
Nick_Beckstead
2y
1 - counts for purposes of this question 2 - doesn't count for purposes of this question (but would be a really big deal!)

In particular, I think many of the epistemically best EAs go into stuff like grant making, philosophy, general longtermist research, etc. which leaves a gap of really epistemically good people focusing full-time on AI. And I think the current epistemic situation in the AI alignment field (both technical and governance) is pretty bad in part due to this.

5
Linch
2y
Interestingly, I have the opposite intuition, that entire subareas of EA/longtermism are kinda plodding along and not doing much because our best people keep going into AI alignment. Some of those areas are plausibly even critical for making the AI story go well. Still, it's not clear to me whether the allocation is inaccurate, just because alignment is so important. Technical biosecurity and maybe forecasting might be exceptions though.

Thanks for clarifying. Might be worth making clear in the post (if it isn’t already, I may have missed something).

I mean EAs. I’m most confident about “talent-weighted EAs”. But probably also EAs in general.

In particular, I think many of the epistemically best EAs go into stuff like grant making, philosophy, general longtermist research, etc. which leaves a gap of really epistemically good people focusing full-time on AI. And I think the current epistemic situation in the AI alignment field (both technical and governance) is pretty bad in part due to this.

(I've only skimmed the post)

This seems right theoretically, but I'm worried that people will read this and think this consideration ~conclusively implies fewer people should go into AI alignment, when my current best guess is that the opposite is true. I agree sometimes people make the argmax vs. softmax mistake and there are status issues, but I still think not enough people proportionally go into AI for various reasons (underestimating risk level, it being hard/intimidating, not liking rationalist/Bay vibes, etc.).

2
Jan_Kulveit
2y
I'm a bit confused if by 'fewer people' / 'not enough people proportionally' you mean 'EAs'. In my view, while too few people (as 'humans') work on AI alignment, too large fraction of EAs 'goes into AI'.
4
Gavin
2y
Agree that this could be misused, just as the sensible 80k framework is misused, or as anything can be. Some skin in the game then: me and Jan both spend most of our time on AI.

Thanks Will!

My dad just sent me a video of the Yom Kippur sermon this year (relevant portion starting roughly here) at the congregation I grew up in. It was inspired by longtermism and specifically your writing on it, which is pretty cool. This updates me emotionally toward your broad strategy here, though I'm not sure how much I should update rationally.

12
Answer by elifland
Oct 05, 2022

Agree with other commenters that we shouldn't put too much weight on anecdotes, but just to provide a counter-anecdote to yours: I've been ~99% vegan for over 3 years and it seems like my thinking ability and intellectual output have, if anything, improved during that time.

My best guess is that it varies based on the person and situation, but for the majority of people (including probably me) a decently planned vegan diet has ~no effect on thinking ability.

9
Elizabeth
7mo
Could you say more about what "decently planned" means to you? I think this is where a lot of the dragons live. 

Is the UAT mentioned anywhere in the bio anchors report as a reason for thinking DL will scale to TAI? I didn't find any mentions of it with a quick ctrl-F through any of the 4 parts or the appendices.

5
jylin04
2y
Yes, it's mentioned on page 19 of part 4 (as point 1, and my main concern is with point 2b).

Yeah it's an EAF bug with crossposting linkposts from LessWrong. For now copy and paste the text into the browser and it will work, or click here.

1
Joel Becker
2y
Ah! Thank you! Didn't think you needed to copy literal text, rather than "Copy Link Address."

  • There was a vague tone of "the goal is to get accepted to EAG" instead of "the goal is to make the world better," which I felt a bit uneasy about when reading the post. EAGs are only useful insofar as they let community members do better work in the real world.
    • Because of this, I don't feel strongly about the EAG team providing feedback to people on why they were rejected. The EAG team's goal isn't to advise on how applicants can fill up their "EA resume." It's to facilitate impactful work in the world.
  • I remembered a comment that I really lik
... (read more)

Good idea, I made a very quick version. Anyone should feel free to update it.

Yeah I realized this when proofreading and left it as I thought it drove home my point well :p

I think a lot of the value of university is providing a peer group for social and intellectual development. To the extent you haven't found a great group of friends at university, I think you should actively try very hard to find a group that you enjoy spending time with, then lean into this. To the extent you can fill this need outside of university, it seems very reasonable to drop out.

This is the advice I'd give to my younger self: I didn't make many great friends in college and should have tried much harder to find my niche. But I don't think I was ready... (read more)

I agree that self study without a peer group is really problematic.

And yet so many people work from home alone.

This is surely solvable, and, I think, worth someone's attention in solving.

FYI: You can view community median forecasts for each question at this link. Currently it looks like:

Thanks for the suggestion and glad you found the forecasts helpful :)

I personally have a distaste for academic credentialist culture, so am probably not the best person to turn this into a more prestigious-looking report. I agree it might be valuable despite my distaste, so if anyone reading this is interested in doing so, feel free to DM me and I can probably help with funding and review if you have a good writing track record.

It sounds like at least part of your argument could be summarized as: "Will MacAskill underrates x-risk relative to most longtermists; The Precipice is a better introduction because it focuses on x-risk."

I don't have a strong view about the focus on x-risk in general. I care most about the lack of clarity on which areas are highest priority, and what I believe is a mistake in not focusing on AI enough (and the wrong emphasis within AI).

In the post I wrote:

One big difference between The Precipice and WWOTF that I’m not as sure about is the framing of reduci

... (read more)

I'm fine with other phrasings, and am also concerned about value lock-in and s-risks, though I think these can be thought of as a class of x-risks.

I'm not keen on classifying s-risks as x-risks because, for better or worse, most people really just seem to mean "extinction or permanent human disempowerment" when they talk about "x-risks." I worry that a motte-and-bailey can happen here, where (1) people include s-risks within x-risks when trying to get people on board with focusing on x-risks, but then (2) their further discussion of x-risks basically equates them with non-s-x-risks. The fact that the "dictionary definition" of x-risks would include s-risks doesn't solve this problem.

I do think it's more about whether you're doing things in such a way that if they knew why you were doing them, they'd mostly not be bothered (ie passing the red face test). But that doesn't really solve the problem that digital sentience is a weird reason to do a lot of things, and there are lots of things I endorse it being inappropriate to be too explicit about.

Of course this is a spectrum, and we shouldn't put up a public website listing all our beliefs including the most controversial ones or something like that (no one in EA is very close to this ext... (read more)

2
ChanaMessinger
2y
Oh, sorry, those were two different thoughts.  "digital sentience is a weird reason to do a lot of things" is one thing, where it's not most people's crux and so maybe not the first thing you say, but agree, should definitely come up, and separately, "there are lots of things I endorse it being inappropriate to be too explicit about", like the granularity of assessment you might be making of a person at any given time (though possibly more transparency about the fact that you're being assessed in a bunch of contexts would be very good!)

I think steps 1 and 2 in your chain are also questionable, not just 3-5.

  1. Want to maximize number of EAs 

Why do we want to maximize the number of EAs? This seems very non-obvious to me. Some people would add much more to the community than others via epistemics, culture, direct talent, etc. If we added enough of certain types of people to the community, especially too quickly, it could easily be net negative.

2. Use framings, arguments and examples that you don't think hold water but work at getting people to join your group [I don't think EAs do this, I'm g

... (read more)

I didn't mean to imply that. I think we very likely need to solve alignment at some point to avoid existential catastrophe (since we need aligned powerful AIs to help us achieve our potential), but I'm not confident that the first misaligned AGI would be enough to cause this level of catastrophe (especially for relatively weak definitions of "AGI").
