All of elifland's Comments + Replies

Does the 1-3% x-risk from bio include bio catastrophes mediated by AI (via misuse and/or misalignment)? And does it take ASI timelines into account?

Also, just comparing % x-risk seems to miss out on the value of shaping AI upside / better futures, s-risks + acausal stuff, etc. (also, are you counting AI-enabled coups / concentration of power?). And, relatedly, it misses the general heuristic of working on the thing that will be the dominant determinant of the future once developed (and which might be developed soon).

Ajeya
I'm largely deferring to ASB on these numbers, so he can potentially speak in more detail, but my guess is this includes AI-mediated misuse and accident (people using LLMs or bio design tools to invent nastier bioweapons and then either deliberately or accidentally releasing them), but excludes misaligned AIs using bioweapons as a tactic in an AI takeover attempt. Since the biodefenses work could also help with the latter, the importance ratio here is probably somewhat stacking the deck in favor of AI (though I don't think it's a giant skew, because bioweapons are just one path to AI takeover). ASB has pretty short ASI timelines that are broadly similar to mine and these numbers take that into account. If you feel moved by these things and are a good fit to work on them, that's a much stronger reason to work on AI over bio than most people have. But the vast bulk of generalist EAs working on AI are working on AI takeover and more mundane misuse stuff that feels like it's a pretty apples-to-apples comparison to bio.

There are virtually always domain experts who have spent their careers thinking about any given question, and yet superforecasters seem to systematically outperform them.

I don't think this has been established. See here

I would advise looking into plans that are robust to extreme uncertainty in how AI actually goes, and avoid actions that could blow up in your face if you turn out to be badly wrong. 

Seeing you highlight this now, it occurs to me that I basically agree with this w.r.t. AI timelines (at least on one plausible interpretation; my guess is that titotal could have a different meaning in mind). I mostly don't think people should take actions that blow up in their face if timelines are long (there are some exceptions, but overall I think long timelines are pl... (read more)

(edit: here is a more comprehensive response)

Thanks titotal for taking the time to dig deep into our model and write up your thoughts, it's much appreciated. This comment speaks for Daniel Kokotajlo and me, not necessarily any of the other authors on the timelines forecast or AI 2027. It addresses most but not all of titotal’s post.

Overall view: titotal pointed out a few mistakes and communication issues which we will mostly fix. We are therefore going to give titotal a $500 bounty to represent our appreciation.  However, we continue to disagree on th... (read more)

Manuel Allgaier
Side note: I appreciate that you actually sought out critiques with your bounty offer and took the time to respond and elaborate on your thinking here, thanks! 

While the model is certainly imperfect due to limited time and the inherent difficulties around forecasting AGI timelines, we still think overall it’s the “least bad” timelines model out there, and it’s the model that features most prominently in my overall timelines views. I think titotal disagrees, though I’m not sure which one they consider least bad.

I also would be interested in learning what the "least bad" model is. Titotal says:

In my world, you generally want models to have strong conceptual justifications or empirical validation with existing data be

... (read more)

I’m strongly in favor of allowing intuitive adjustments on top of quantitative modeling when estimating parameters.

We had a brief thread on this over on LW, but I'm still keen to hear why you endorse using precise probability distributions to represent these intuitive adjustments/estimates. I take many of titotal's critiques in this post to be symptoms of precise Bayesianism gone wrong (not to say titotal would agree with me on that).

ETA: Which, to be clear, is a question I have for EAs in general, not just you. :)

Centre for the Governance of AI does alignment research and policy research. It appears to focus primarily on the former, which, as I've discussed, I'm not as optimistic about. (And I don't like policy research as much as policy advocacy.)

I'm confused; the claim here is that GovAI does more technical alignment than policy research?

MichaelDickens
That's the claim I made, yes. Looking again at GovAI's publications, I'm not sure why I thought that at the time since they do look more like policy research. Perhaps I was taking a strict definition of "policy research" where it only counts if it informs policy in some way I care about. Right now it looks like my past self was wrong but I'm going to defer to him because he spent more time on it than I'm spending now. I'm not going to spend more time on it because this issue isn't decision-relevant, but there's a reasonable chance I was confused about something when I wrote that.

Would you be interested in making quantitative predictions on the revenue of OpenAI/Anthropic in upcoming years, and/or when various benchmarks like these will be saturated (and OSWorld, released since that series was created), and/or when various Preparedness/ASL levels will be triggered?

Want to discuss bot-building with other competitors? We’ve set up a Discord channel just for this series. Join it here.

 

I get "Invite Invalid"

christian
Thanks, I've created a new link which shouldn't expire and I've updated the post. 

How did you decide to target Cognition? 

IMO it makes much more sense to target AI developers who are training foundation models with huge amounts of compute. My understanding is that Cognition isn't training foundation models, and is more of a "wrapper" in the sense that they are building on top of others' foundation models to apply scaffolding, and/or fine-tuning with <~1% of the foundation model training compute. Correct me if I'm wrong.

Gesturing at some of the reasons I think that wrappers should be deprioritized:

  1. Much of the risks from scheming
... (read more)

Good question! I basically agree with you about the relative importance of foundation model developers here (although I haven’t thought too much about the third point you mentioned. Thanks for bringing it up.)

I should say we are doing some other work to raise awareness about foundation model risks - especially at OpenAI, given recent events - but not at the level of this campaign.

The main constraint was starting (relatively) small. We’d really like to win these campaigns, and we don’t plan to let up until we have. The foundation model developers are genera... (read more)

Thanks for organizing this! Tentatively excited about work in this domain.

I do think that generating models/rationales is part of forecasting as it is commonly understood (including in EA circles), and I certainly don't agree that forecasting by definition means that little effort was put into it!
Maybe the right place to draw the line between forecasting rationales and “just general research” is asking "is the model/rationale for the most part tightly linked to the numerical forecast?" If yes, it's forecasting; if not, it's something else.

 

Thanks for clarifying! Would you consider OpenPhil worldview investigations repor... (read more)

BenjaminTereick
I think it’s borderline whether reports of this type are forecasting as commonly understood, but would personally lean no in the specific cases you mention (except maybe the bio anchors report).

I really don’t think that this intuition is driven by the amount of time or effort that went into them, but rather the percentage of intellectual labor that went into something like “quantifying uncertainty” (rather than, e.g. establishing empirical facts, reviewing the literature, or analyzing the structure of commonly-made arguments).

As for our grantmaking program: I expect we’ll have a more detailed description of what we want to cover later this year, where we might also address points about the boundaries to worldview investigations.

Thanks for writing this up, and I'm excited about FutureSearch! I agree with most of this, but I'm not sure framing it as more in-depth forecasting is the most natural, given how people generally use the word forecasting in EA circles (i.e. associated with Tetlock-style superforecasting, often aggregation of very part-time forecasters' views, etc.). Imo it might be more natural to think of it as a need for in-depth research, perhaps with a forecasting flavor. Here's part of a comment I left on a draft.

However, I kind of think the framing of the essay

... (read more)
dschwarz
Agreed Eli, I'm still working to understand where the forecasting ends and the research begins. You're right, the distinction is not whether you put a number at the end of your research project. In AGI (or other hard sciences) the work may be very different, and done by different people. But in other fields, like geopolitics, I see Tetlock-style forecasting as central, even necessary, for research. At the margin, I think forecasting should be more research-y in every domain, including AGI. Otherwise I expect AGI forecasts will continue to be used, while not being very useful.

Thanks Ozzie for chatting! A few notes reflecting on places I think my arguments in the conversation were weak:

  1. It's unclear what short timelines would mean for AI-specific forecasting. If AI timelines are short it means you shouldn’t forecast non-AI things much, but it’s unclear what it means about forecasting AI stuff. There’s less time for effects to compound but you have more info and proximity to the most important decisions. It does discount non-AI forecasting a lot though, and some flavors of AI forecasting.
  2. I also feel weird about the comparison I ma
... (read more)

Just chatted with @Ozzie Gooen about this and will hopefully release audio soon. I probably overstated a few things / gave a false impression of confidence in the parent in a few places (e.g., my tone was probably a little too harsh on non-AI-specific projects); hopefully the audio convo will give a more nuanced sense of my views. I'm also very interested in criticisms of my views and others sharing competing viewpoints.

Also want to emphasize the clarifications from my reply to Ozzie:

  1. While I think it's valuable to share thoughts about the value of differen
... (read more)
Ozzie Gooen
Audio/podcast is here: https://forum.effectivealtruism.org/posts/fsnMDpLHr78XgfWE8/podcast-is-forecasting-a-promising-ea-cause-area

Thanks Ozzie for sharing your thoughts!

A few things I want to clarify up front:

  1. While I think it's valuable to share thoughts about the value of different types of work candidly, I am very appreciative of both people working on forecasting projects and grantmakers in the space for their work trying to make the world a better place (and am friendly with many of them). As I maybe should have made more obvious, I am myself affiliated with Samotsvety Forecasting, and Sage which has done several forecasting projects. And I'm also doing AI forecasting research at
... (read more)
Ozzie Gooen
Thanks for the replies! Some quick responses.

First, again, overall, I think we generally agree on most of this stuff.

I agree to an extent. But I think there are some very profound prioritization questions that haven't been researched much, and that I don't expect us to gain much insight from by experimentation in the next few years. I'd still like us to do experimentation (If I were in charge of a $50Mil fund, I'd start spending it soon, just not as quickly as I would otherwise). For example:

* How promising is it to improve the wisdom/intelligence of EAs vs. others?
* How promising are brain-computer-interfaces vs. rationality training vs. forecasting?
* What is a good strategy to encourage epistemic-helping AI, where philanthropists could have the most impact?
* What kinds of benefits can we generically expect from forecasting/epistemics? How much should we aim for EAs to spend here?

We might be disagreeing a bit on what the bar for "valuable for EA decision-making" is. I see a lot of forecasting like accounting - it rarely leads to a clear and large decision, but it's good to do, and steers organizations in better directions. I personally rely heavily on prediction markets for key understandings of EA topics, and see that people like Scott Alexander and Zvi seem to. I know less about the inner workings of OP, but the fact that they continue to pay for predictions that are very much for their questions seems like a sign. All that said, I think that ~95%+ of Manifold and a lot of Metaculus is not useful at all.

I'm not sure how much to focus on OP's narrow choices here. I found it surprising that Javier went from governance to forecasting, and that previously it was the (very small) governance team that did forecasting. It's possible that if I evaluated the situation, and had control of the situation, I'd recommend that OP moved marginal resources to governance from forecasting. But I'm a lot less interested in this question than I am, "is forecasting co

All views are my own rather than those of any organizations/groups that I’m affiliated with. Trying to share my current views relatively bluntly. Note that I am often cynical about things I’m involved in. Thanks to Adam Binks for feedback. 

Edit: See also child comment for clarifications/updates

Edit 2: I think the grantmaking program has different scope than I was expecting; see this comment by Benjamin for more.

Following some of the skeptical comments here, I figured it might be useful to quickly write up some personal takes on forecasting’s pr... (read more)


I feel like I need to reply here, as I'm working in the industry and defend it more.

First, to be clear, I generally agree a lot with Eli on this. But I'm more bullish on epistemic infrastructure than he is.

Here are some quick things I'd flag. I might write a longer post on this issue later.

  1. I'm similarly unsure about a lot of existing forecasting grants and research. In general, I'm not very excited about most academic-style forecasting research at the moment, and I don't think there are many technical groups at all (maybe ~30 full time equivalents in the f
... (read more)
  1. So in the multi-agent slowly-replacing case, I'd argue that individual decisions don't necessarily represent a voluntary decision on behalf of society (I'm imagining something like this scenario). In the misaligned power-seeking case, it seems obvious to me that this is involuntary. I agree that it technically could be a collective voluntary decision to hand over power more quickly, though (and in that case I'd be somewhat less against it).
  2. I think emre's comment lays out the intuitive case for being careful / taking your time, as does Ryan's. I think the e
... (read more)

(edit: my point is basically the same as emre's)

I think there is very likely at some point going to be some sort of transition to a world where AIs are effectively in control. It seems worth it to slow down on the margin to try to shape this transition as best we can, especially slowing it down as we get closer to AGI and ASI. It would be surprising to me if making the transfer of power more voluntary/careful led to worse outcomes (or only led to slightly better outcomes such that the downsides of slowing down a bit made things worse).

Delaying the arrival ... (read more)

Matthew_Barnett
Two questions here:

  1. Why would accelerating AI make the transition less voluntary? (In my own mind, I'd be inclined to reverse this sentiment a bit: delaying AI by regulation generally involves forcibly stopping people from adopting AI. Force might be justified if it brings about a greater good, but that's not the argument here.)
  2. I can understand being "careful". Being careful does seem like a good thing. But "being careful" generally trades off against other values in almost every domain I can think of, and there is such a thing as too much of a good thing. What reason is there to think that pushing for "more caution" is better on the margin compared to acceleration, especially considering society's default response to AI in the absence of intervention?

If Scott had used language like this, my guess is that the people he was trying to convince would have completely bounced off of his post.

I mostly agree with this; I wasn't suggesting he include that specific type of language, just that the arguments in the post don't go through from the perspective of most leader/highly-engaged EAs. Scott has discussed similar topics on ACT here, but I agree the target audience was likely different.

I do think part of his target audience was probably EAs who he thinks are too critical of themselves, as I think he's written... (read more)

EDIT: Scott has admitted a mistake, which addresses some of my criticism:



(this comment has overlapping points with titotal's)

I've seen a lot of people strongly praising this article on Twitter and in the comments here but I find some of the arguments weak. Insofar as the goal of the post is to say that EA has done some really good things, I think the post is right. But I don't think it convincingly argues that EA has been net positive for the world.[1]

First: based on surveys, it seems likely that most (not all!) highly-engaged/leader EAs believe GCRs/longt... (read more)

seems arguably higher than .0025% extinction risk and likely higher than 200,000 lives if you weight the expected value of all future people >~100x of that of current people.
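(An aside on how these two figures relate, on my reading rather than anything stated explicitly above: assuming roughly 8 billion people alive today,

\[
0.0025\% \times 8\times10^{9} \;=\; 2.5\times10^{-5} \times 8\times10^{9} \;=\; 200{,}000 \text{ expected current lives,}
\]

so a 0.0025% reduction in extinction risk and 200,000 lives saved are roughly interchangeable in expectation if only current people are counted, and weighting the expected value of future people at >~100x that of current people scales the x-risk side up by the same factor.)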

If Scott had used language like this, my guess is that the people he was trying to convince would have completely bounced off of his post.

I interpreted him to be saying something like "look Ezra Klein et al., even if we start with your assumptions and reasoning style, we still end up with the conclusion that EA is good." 

And it seems fine to me to argue from the basis of someon... (read more)

These were the 3 snippets I was most interested in:

Under pure risk-neutrality, whether an existential risk intervention can reduce more than 1.5 basis points per billion dollars spent determines whether the existential risk intervention is an order of magnitude better than the Against Malaria Foundation (AMF). 

If you use welfare ranges that are close to Rethink Priorities’ estimates, then only the most implausible existential risk intervention is estimated to be an order of magnitude more cost-effective than cage-free campaigns and the hypothetical shr... (read more)

DanielFilan
Thanks!
Vasco Grilo🔸
Thanks! Corrected.

In an update on Sage introducing quantifiedintuitions.org, we described a pivot we made after a few months:

As stated in the grant summary, our initial plan was to “create a pilot version of a forecasting platform, and a paid forecasting team, to make predictions about questions relevant to high-impact research”. While we built a decent beta forecasting platform (that we plan to open source at some point), the pilot for forecasting on questions relevant to high-impact research didn’t go that well due to (a) difficulties in creating resolvable questions rele

... (read more)

Personally the FTX regrantor system felt like a nice middle ground between EA Funds and donor lotteries in terms of (de)centralization. I'd be excited to donate to something less centralized than EA Funds but more centralized than a donor lottery.

Linda Linsefors
Maybe something like the S-process used by SFF? https://survivalandflourishing.fund/s-process It would be cool to have a grant system where anyone can list themselves as a fund manager, and donors can pick which fund managers' decisions they want to back with their donations. If I remember correctly, the S-process could facilitate something like that.
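To make the suggested setup a bit more concrete, here is a minimal toy sketch (hypothetical names and numbers, and deliberately not the actual SFF S-process mechanism): donors each pick a fund manager to back, and each manager's recommended grants are funded from the pool of donations backing that manager.

```python
# Toy sketch (hypothetical, not the actual SFF S-process): donors back fund
# managers, and each manager's grant recommendations are funded in proportion
# to the donations backing that manager.
from collections import defaultdict

donor_backing = {          # donor -> (chosen manager, amount donated)
    "donor_a": ("manager_1", 10_000),
    "donor_b": ("manager_2", 5_000),
    "donor_c": ("manager_1", 2_000),
}

manager_allocations = {    # manager -> {grantee: fraction of their pool}
    "manager_1": {"org_x": 0.7, "org_y": 0.3},
    "manager_2": {"org_y": 1.0},
}

# Pool donations by the manager each donor chose to back.
pools = defaultdict(float)
for manager, amount in donor_backing.values():
    pools[manager] += amount

# Each manager's pool is split across grantees per their chosen fractions.
grants = defaultdict(float)
for manager, split in manager_allocations.items():
    for grantee, fraction in split.items():
        grants[grantee] += pools[manager] * fraction

print(dict(grants))  # {'org_x': 8400.0, 'org_y': 8600.0}
```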

Which part of my comment did you find as underestimating how grievous SBF/Alameda/FTX's actions were? (I'm genuinely unsure)

Persona14246
Sorry for the confusion, I was adding on to your comment. I agree with you obviously. It was more a statement on the forum over the past five-six days. 

Nitpick, but I found the sentence:

Based on things I've heard from various people around Nonlinear, Kat and Emerson have a recent track record of conducting Nonlinear in a way inconsistent with EA values [emphasis mine].

A bit strange in the context of the rest of the comment. If your characterization of Nonlinear is accurate, it would seem to be inconsistent with ~every plausible set of values and not just "EA values".

Appreciate the quick, cooperative response.

I want you to write a better post arguing for the same overall point if you agreed with the title, hopefully with more context than mine.

Not feeling up to it right now and not sure it needs a whole top-level post. My current take is something like (very roughly/quickly written):

  1. New information is currently coming in very rapidly.
  2. We should at least wait until the information comes in a bit slower before thinking seriously in-depth about proposed mitigations so we have a better picture of what went wrong. But "babbl
... (read more)
Persona14246
Just for some perspective here,  the DOJ could be pursuing SBF for wire fraud, which comes with a maximum sentence of twenty years. FTX's bankruptcy couldn't be construed as a mistake past the first day or so of last week, and this is still very generous. I find that this forum has consistently underestimated how grievous the actions taken by SBF, Alameda and FTX have been compared to the individuals I know who work in finance or crypto.  https://fortune.com/crypto/2022/11/13/could-sam-bankman-fried-go-to-prison-for-the-ftx-disaster/

I thought I would like this post based on the title (I also recently decided to hold off for more information before seriously proposing solutions), but I disagree with much of the content.

A few examples:

It is uncertain whether SBF intentionally committed fraud, or just made a mistake, but people seem to be reacting as if the takeaway from this is that fraud is bad.

I think we can safely say at this point with >95% confidence that SBF basically committed fraud, even if not technically in the legal sense (edit: but also seems likely to be fraud in the lega... (read more)

I don't say this often, but thanks for your comment!

This seems wrong, e.g. EA leadership had more personal context on Sam than investors. See e.g. Oli here with a personal account and my more abstract argument here.

Interesting! You have changed my mind on this. You clearly know more about this than I. I want you to write a better post arguing for the same overall point if you agreed with the title, hopefully with more context than mine.

The fact that we have such different pictures I think may be an effect of what I'm seeing on the forum. So many top le... (read more)

I’m not as sure about advisors, as I wrote here. Agree on recipients.

It's a relevant point, but I think we can reasonably expect EA leadership to do better at vetting megadonors than Sequoia due to (a) more context on the situation, e.g. EAs should have known more about SBF's past than Sequoia and/or could have found it out more easily via social and professional connections, and (b) more incentive to avoid downside risks, e.g. the SBF blowup matters a lot more for EA's reputation than Sequoia's.

To be clear, this does not apply to charities receiving money from FTXFF; that is a separate question from EA leadership.

timunderwood
You expect the people being given free cash to do a better job of due diligence than the people handing someone a giant cash pile? Not to mention that the Future Fund donations probably did more good for EA causes than the reputational damage is going to do harm to them (making the further assumption that this is actually net reputational damage, as opposed to a bunch of free coverage that pushes some people off and attracts some other people).
Nathan Young
Also, to be pithy: If we are so f*****g clever as to know what risks everyone else misses and how to avoid them, how come we didn't spot that one of our best and brightest was actually a massive fraudster?
[anonymous]
I think a) and b) are good points. Although there's also c): it's reasonable to give extra trust points to a member of the community who's just given you a not-insignificant part of their wealth to spend on charitable endeavours as you see fit. Note that I'm obviously not saying this implied SBF was super trustworthy on balance, just that it's a reasonable consideration pushing in the other direction when making the comparison with Sequoia, who lacked most of this context (I do think it's a good thing that we give each other trust points for signalling and demonstrating commitments to EA).

Thanks for clarifying. To be clear, I didn't say I thought they were as bad as Leverage. I said "I have less trust in CEA's epistemics to necessarily be that much better than Leverage's, though I'm uncertain here".

I've read it. I'd guess we have similar views on Leverage, but different views on CEA. I think it's very easy for well-intentioned, generally reasonable people's epistemics to be corrupted via tribalism, motivated reasoning, etc.

But as I said above I'm unsure.

Edited to add: Either way, might be a distraction to debate this sort of thing further. I'd guess that we both agree in practice that the allegations should be taken seriously and investigated carefully, ideally by independent parties.

I agree that these can technically all be true at the same time, but I think the tone/vibe of comments is very important in addition to what they literally say, and the vibe of Arepo's comment was too tribalistic.

I'd also guess re: (3) that I have less trust in CEA's epistemics to necessarily be that much better than Leverage's, though I'm uncertain here (edited to add: tbc my best guess is it's better, but I'm not sure what my prior should be if there's a "he said / she said" situation, on who's telling the truth. My guess is closer to 50/50 than 95/5 in log odds at least).

Mea culpa for not being clear enough. I don't think handwavey statements from someone whose credibility I doubt have much evidential value, but I strongly think CEA's epistemics and involvement should be investigated - possibly including Vaughan's.

I find it bleakly humorous to be interpreted as tribalistically defending CEA when I've written gradually more public criticisms of them and their lack of focus - and honestly, while I don't understand thinking they're as bad as Leverage, I think they've historically probably been a counterfactual neg... (read more)

Caro

I agree that the tone was too tribalistic, but the content is correct.

(Seems a bit like a side-topic, but you can read more about Leverage on this EA Forum post and, even more importantly, in the comments. I hope that's useful for you! The comments definitely changed my views - negatively - about the utility of Leverage's outputs and some cultural issues.)

I’m guessing I have a lower opinion of Leverage than you based on your tone, but +1 on Kerry being at CEA for 4 years making it more important to pay serious attention to what he has to say even if it ultimately doesn’t check out. We need to be very careful to minimize tribalism hurting our epistemics.

Caro

For what it's worth, these different considerations can be true at the same time:

  1. "He may have his own axe to grind.": that's probably true, given that he's been fired by CEA.
  2. "Kerry being at CEA for four years makes it more important to pay serious attention to what he has to say even if it ultimately doesn’t check out.": it also seems like he may have particularly useful information and contexts.
  3. "He's now the program manager at a known cult that the EA movement has actively distanced itself from": it does seem like Leverage is shady and doesn't have a very
... (read more)

Does the "deliberate about current evidence" part include thinking a lot about AI alignment to identify new arguments or considerations that other people on Earth may not have thought of, or would that count as new evidence?

It seems like, if that would not count as new evidence, the team you described might be able to come up with much better forecasts than we have today, and I'd think their final forecast would be more likely to end up much lower or much higher than e.g. your forecast. One consequence of this might then be that your 90% confidence

... (read more)
WilliamKiely🔸
Thanks for the response! This clarifies what I was wondering well.

I have some more thoughts regarding the following, but want to note up front that no response is necessary--I'm just sharing my thoughts out loud:

I agree there's a ton of irreducible uncertainty here, but... what's a way of putting it... I think there are lots of other strong forecasters who think this too, but might look at the evidence that humanity has today and come to a significantly different forecast than you. Like who is to say that Nate Soares and Daniel Kokotajlo's forecasts are wrong? (Though actually it takes a smaller likelihood ratio for you to update to reach their forecasts than it does for you to reach MacAskill's forecast.) Presumably they've thought of some arguments and considerations that you haven't read or thought of before. I think it wouldn't surprise me if this team deliberating on humanity's current evidence for a thousand years would come across those arguments or considerations (or some other ones) in their process of logical induction (to use a term I learned from MIRI that roughly means updating without new evidence) and ultimately decide on a final forecast very different than yours as a result.

Perhaps another way of saying this is that your current forecast may be 35% not because that's the best forecast that can be made with humanity's current evidence, given the irreducible uncertainty in the world, but rather because you don't currently have all of humanity's current evidence. Perhaps your 35% is more reflective of your own ignorance than the actual amount of irreducible uncertainty in the world.

Reflecting a bit more, I'm realizing I should ask myself what I think is the appropriate level of confidence that 3% is too low. Thinking about it a bit more, 90% actually doesn't seem that high, even given what I just wrote above. I think my main reason for thinking it may be too high is that 1000 years is a long time for a team of 100 reasonable people to think ab

Thanks Max! Was great seeing you as well. I did take some time off and was a bit more chill for a little while, blogging however much I felt like. I've been doing a lot better for the past 2 months.

MaxRa
Nice, that’s good to hear. :)

The impact isn't coming from the literal $10M donation to OpenAI, it's coming from spearheading its founding.

See https://twitter.com/esyudkowsky/status/1446562238848847877

NunoSempere
Yeah, I'm aware Yudkowsky thinks that, though I think I don't agree with it. In particular, it could be the case in expectation, but Yudkowsky seems like he speaks more of certainties.

“P(misalignment x-risk|AGI)”: Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI.

I'm guessing this definition is meant to separate misalignment from misuse, but I'm curious whether you are including either/both of these 2 cases as misalignment x-risk:

  1. AGI is deployed and we get locked into a great outcome by today's standards, but we get a world with <=1% of the value of "humanity's potential". So we sort of have an existential catastrophe, without a discrete cata
... (read more)
Nick_Beckstead
1 - counts for purposes of this question
2 - doesn't count for purposes of this question (but would be a really big deal!)

In particular, I think many of the epistemically best EAs go into stuff like grant making, philosophy, general longtermist research, etc. which leaves a gap of really epistemically good people focusing full-time on AI. And I think the current epistemic situation in the AI alignment field (both technical and governance) is pretty bad in part due to this.

Linch
Interestingly, I have the opposite intuition, that entire subareas of EA/longtermism are kinda plodding along and not doing much because our best people keep going into AI alignment. Some of those areas are plausibly even critical for making the AI story go well. Still, it's not clear to me whether the allocation is inaccurate, just because alignment is so important. Technical biosecurity and maybe forecasting might be exceptions though.

Thanks for clarifying. Might be worth making clear in the post (if it isn’t already, I may have missed something).

I mean EAs. I’m most confident about “talent-weighted EAs”. But probably also EAs in general.

In particular, I think many of the epistemically best EAs go into stuff like grant making, philosophy, general longtermist research, etc. which leaves a gap of really epistemically good people focusing full-time on AI. And I think the current epistemic situation in the AI alignment field (both technical and governance) is pretty bad in part due to this.

(I've only skimmed the post)

This seems right theoretically, but I'm worried that people will read this and think this consideration ~conclusively implies fewer people should go into AI alignment, when my current best guess is the opposite is true. I agree sometimes people make the argmax vs. softmax mistake and there are status issues, but I still think not enough people proportionally go into AI for various reasons (underestimating risk level, it being hard/intimidating, not liking rationalist/Bay vibes, etc.).

Jan_Kulveit
I'm a bit confused whether by 'fewer people' / 'not enough people proportionally' you mean 'EAs'. In my view, while too few people (as 'humans') work on AI alignment, too large a fraction of EAs 'goes into AI'.
technicalities
Agree that this could be misused, just as the sensible 80k framework is misused, or as anything can be. Some skin in the game, then: Jan and I both spend most of our time on AI.

Thanks Will!

My dad just sent me a video of the Yom Kippur sermon this year (relevant portion starting roughly here) at the congregation I grew up in. It was inspired by longtermism and specifically your writing on it, which is pretty cool. This updates me emotionally toward your broad strategy here, though I'm not sure how much I should update rationally.

Answer by elifland

Agree with other commenters that we shouldn't put too much weight on anecdotes, but just to provide a counter-anecdote to yours, I've been ~99% vegan for over 3 years and it seems like my thinking ability and intellectual output have, if anything, improved during that time.

My best guess is that it varies based on the person and situation, but for the majority of people (including probably me) a decently planned vegan diet has ~no effect on thinking ability.

Elizabeth
Could you say more about what "decently planned" means to you? I think this is where a lot of the dragons live. 

Is the UAT mentioned anywhere in the bio anchors report as a reason for thinking DL will scale to TAI? I didn't find any mentions of it quickly ctrl-fing in any of the 4 parts or the appendices.

jylin04
Yes, it's mentioned on page 19 of part 4 (as point 1, and my main concern is with point 2b).

Yeah, it's an EAF bug with crossposting linkposts from LessWrong. For now, copy and paste the text into the browser and it will work, or click here.

Joel Becker
Ah! Thank you! Didn't think you needed to copy literal text, rather than "Copy Link Address."

  • There was a vague tone of "the goal is to get accepted to EAG" instead of "the goal is to make the world better," which I felt a bit uneasy about when reading the post. EAGs are only useful insofar as they let community members do better work in the real world.
    • Because of this, I don't feel strongly about the EAG team providing feedback to people on why they were rejected. The EAG team's goal isn't to advise on how applicants can fill up their "EA resume." It's to facilitate impactful work in the world.
  • I remembered a comment that I really lik
... (read more)

Good idea, I made a very quick version. Anyone should feel free to update it.
