All of Nick_Beckstead's Comments + Replies

Thanks for all of your hard work on EV, Will! I’ve really appreciated your individual example of generosity and commitment, boldness, initiative-taking, and leadership. I feel like a lot of things would happen more slowly, less ambitiously, or not at all if it weren’t for your ability to inspire others to dive in and act on the courage of their convictions. I think this was really important for Giving What We Can, 80,000 Hours, the Centre for Effective Altruism, the Global Priorities Institute, and your books. Inspiration, enthusiasm, and positivity from you have been a force-multiplier on my own work, and in the lives of many others I have worked with. I wish you all the best in your upcoming projects.

Thank you Max for your years of dedicated service at CEA. Under your leadership as Executive Director, CEA grew significantly, increased its professionalism, and reached more people than it had before. I really appreciate your straightforward but kind communication style, humility, and eagerness to learn and improve. I'm sorry to see you go, and wish you the best of luck in whatever comes next.

I didn't know you (Max) well but the comment above captures a lot of what was also my impression following CEA's progress from further away! Sorry to see you step back. Best wishes with taking more time for yourself and with future roles (if you plan to pursue them)!

Thanks, I think this is subtle and I don't think I expressed this perfectly.

> If someone uses AI capabilities to create a synthetic virus (which they wouldn't have been able to do in the counterfactual world without that AI-generated capability) and caused the extinction or drastic curtailment of humanity, would that count as "AGI being developed"?

No, I would not count this. 

I'd probably count it if the AI a) somehow formed the intention to do this and then developed the pathogen and released it without human direction, but b) couldn't yet produce as much economic output as full automation of labor.

3
finnhambly
1y
Okay great, that makes sense to me. Thank you very much for the clarification!

No official rules on that. I do think that if you have some back and forth in the comments, that's a way to make your case more convincing, so there's some edge there.

1 - counts for purposes of this question
2 - doesn't count for purposes of this question (but would be a really big deal!)

Thanks for this post! Future Fund has removed this project from our projects page in response.

Thanks for the feedback! I think this is a reasonable comment, and the main things that prevented us from doing this are:
(i) I thought it would detract from the simplicity of the prize competition, and would be hard to communicate clearly and simply
(ii) I think the main thing that would make our views more robust is seeing what the best arguments are for having quite different views, and this seems like it is addressed by the competition as it stands.

For simplicity on our end, I'd appreciate it if you had one post at the end that was the "official" entry, which links to the other posts. That would be OK!

Plausibility, argumentation, and soundness will be inputs into how much our subjective probabilities change. We framed this in terms of subjective probabilities because it seemed like the easiest way to crisply point at ideas which could change our prioritization in significant ways.

Thanks! The part of the post that was supposed to be most responsive to this on size of AI x-risk was this:

For "Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI." I am pretty sympathetic to the analysis of Joe Carlsmith here. I think Joe's estimates of the relevant probabilities are pretty reasonable (though the bottom line is perhaps somewhat low) and if someone convinced me that the probabilities on the premises in his argument should be much higher or lowe

... (read more)
7
Guy Raveh
2y
I think it's kinda weird and unproductive to focus a very large prize on things that would change a single person's views, rather than be robustly persuasive to many people. E.g. does this imply that you personally control all funding of the FF? (I assume you don't, but then it'd make sense to try to convince all FF managers, trustees etc.)

Do you believe that there is something already published that should have moved our subjective probabilities outside of the ranges noted in the post? If so, I'd love to know what it is! Please use this thread to collect potential examples, and include a link. Some info about why it should have done that (if not obvious) would also be welcome. (Only new posts are eligible for the prizes, though.)

This is more of a meta-consideration around shared cultural background and norms. Could it just be a case of allowing yourselves to update toward more scary-sounding probabilities? You have all the information already. This video from Rob Miles ("There's No Rule That Says We'll Make It") [transcript copied from YouTube] made me think along these lines. Aside from background culture considerations around human exceptionalism (inspired by religion) and optimism favouring good endings (Hollywood; perhaps also history to date?), I think there is also an i... (read more)

I think considerations like those presented in Daniel Kokotajlo's Fun with +12 OOMs of Compute suggest that you should have ≥50% credence on AGI by 2043.

Do you believe some statement of this form?

"FTX Foundation will not get submissions that change its mind, but it would have gotten them if only they had [fill in the blank]"

E.g., if only they had…

  • Allowed people to publish not on EA Forum / LessWrong / Alignment Forum
  • Increased the prize schedule to X
  • Increased the window of the prize to size Y
  • Advertised the prize using method Z
  • Chosen the following judges instead
  • Explained X aspect of their views better

Even better would be a statement of the form:

  • "I personally would compete in this prize competition, but only
... (read more)
1
Noah Scales
2y
I personally would compete in this prize competition, but only if I were free to explore: P(misalignment x-risk|AGI): Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to concentration of power derived from AGI technology.

You wrote: but this list does not include the conditional probability that interests me.

You wrote: This seems really motivating. You identify:
* global poverty
* animal suffering
* early death
* debilitating disease
as problems that TAI could help humanity solve. I will offer briefly that humans are sensitive to changes in their behaviors, at least as seen in advance, that deprive them of choices they have already made. We cause:
* global poverty through economic systems that support exploitation of developing countries and politically-powerless people (e.g., through corporate capitalism and military coups)
* animal suffering through widespread factory farming (enough to dominate terrestrial vertebrate populations globally with our farm animals) and gradual habitat destruction (enough to threaten the extinction of a million species)
* early death through lifestyle-related debilitating disease (knock-on effects of lifestyle choices in affluent countries now spread throughout the globe).
So these TAI would apparently resolve, through advances in science and technology, various immediate causes, with a root cause found in our appetite (for wealth, power, meat, milk, and unhealthy lifestyles). Of course, there are other reasons for debilitating disease and early death than human appetite. However, your claim implies to me that we invent robots and AI to either reduce or feed our appetites harmlessly.

Causes of global poverty, animal suffering, some debilitating diseases, and early human death are maintained by incentive structures that benefit a subset of the global population. TAI will apparently remove those incentive structures, but not by any mechanism that I

...if they had explained why their views were not moved by the expert reviews OpenPhil has already solicited.

In "AI Timelines: Where the Arguments, and the 'Experts,' Stand," Karnofsky writes:

Then, we commissioned external expert reviews.[7]

Speaking only for my own views, the "most important century" hypothesis seems to have survived all of this. Indeed, having examined the many angles and gotten more into the details, I believe it more strongly than before.

The footnote text reads, in part:

Reviews of Bio Anchors are here; reviews of Explosive Growth are here

... (read more)

I would have also suggested a prize for work that generally confirms your views, but with an argument you consider superior to your previous reasoning.

As it stands, this prize mirrors the publication bias toward printing research that claims something new rather than research that confirms previous results.

That would also resolve any bias baked into the process that compels people to convince you that you have to update, instead of figuring out what they actually think is right.

26
[anonymous]
2y

I really think you need to commit to reading everyone's work, even if it's an intern skimming it for 10 minutes as a sifting stage.

The way this is set up now, ideas proposed by unknown people in the community are unlikely to be engaged with, and so you won't read them.

Look at the recent Cause Exploration Prizes. Half the winners had essentially no karma/engagement and were not forecasted to win. If Open Philanthropy hadn't committed to reading them all, they could easily have been missed.

Personally, yes I am much less likely to write something and put effort in if I think no one will read it.

56
Linch
2y

I attach less than 50% credence to this belief, but probably more than to the existing alternative hypotheses:

FTX Foundation will not get submissions that change its mind, but it would have gotten them if only they had [fill in the blank]

Given 6 months or a year for people to submit to the contest rather than 3 months. 

I think forming coherent worldviews takes a long time, most people have day jobs or school, and even people who have the flexibility to take weeks/a month off to work on this full-time probably need some warning to arrange this with their w... (read more)

"FTX Foundation will not get submissions that change its mind, but it would have gotten them if only they had [broadened the scope of the prizes beyond just influencing their probabilities]"



Examples of things someone considering entering the competition would presumably consider out of scope are:

  • Making a case that AI misalignment is the wrong level of focus – even if AI risks are high it could be that AI risks and other risks are very heavily weighted towards specific risk factor scenarios, such as a global hot or cold war. This view is apparently expresse
... (read more)
3
David Johnston
2y
FTX Foundation might get fewer submissions that change its mind than they would have gotten if only they had considered strategic updates prizeworthy.

The unconditional probability of takeover isn’t necessarily the question of most strategic interest. There’s a huge difference between “50% AI disempowers humans somehow, on the basis of a naive principle of indifference” and “50% MIRI-style assumptions about AI are correct”*. One might conclude from the second that the first is also true, but the first has no strategic implications (the principle of indifference ignores such things!), while the second has lots of strategic implications. For example, it suggests “totally lock down AI development, at least until we know more” is what we need to aim for. I’m not sure exactly where you stand on whether that is needed, but given that your stated position seems to rely substantially on outside-view-type reasoning, it might be a big update.

The point is: middling probabilities of strategically critical hypotheses might actually be more important updates than extreme probabilities of strategically opaque hypotheses.

My suggestion (not necessarily a full solution) is that you consider big strategic updates potentially prizeworthy. For example: do we gain a lot by delaying AGI for a few years? If we consider all the plausible paths to AGI, do we gain a lot by hastening the development of the top 1% most aligned by a few years? I think it’s probably too hard to pre-specify exactly which strategic updates would be prizeworthy.

*By which I mean something like “more AI capability eventually yields doom, no matter what, unless it’s highly aligned”

You might already be planning on doing this, but it seems like you increase the chance of getting a winning entry if you advertise this competition in a lot of non-EA spaces. I guess especially technical AI spaces, e.g. labs, universities. Maybe also trying to advertise outside the US/UK. Given the size of the prize it might be easy to get people to pass on the advertisement among their groups. (Maybe there's a worry about getting flak somehow for this, though. And it also increases the overhead of needing to read more entries, though it sounds like you have some system... (read more)

Could you put some judges on the panel who are a bit less worried about AI risk than your typical EA would be? EA opinions tend to cluster quite strongly around an area of conceptual space that many non-EAs do not occupy, and it is often hard for people to evaluate views that differ radically from their own. Perhaps one of the superforecasters could be put directly onto the judging panel, pre-screening for someone who is less worried about AI risk.

"FTX Foundation will not get submissions that change its mind, but it would have gotten them if only they had [fill in the blank]"

"I personally would compete in this prize competition, but only if..."

Ehh, the above is too strong, but:

  • You would get more/better submissions if...
  • I would be more likely to compete in that if...

your reward schedule rewarded smaller shifts in proportion to how much they moved your probabilities (e.g., $X per bit). 

E.g., as it is now, if two submissions together move you across a threshold, it would seem as if:

  • neither gets a
... (read more)
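To make the "$X per bit" suggestion above concrete, here is a minimal sketch of one way it could be operationalized, assuming a "bit" means a factor-of-two shift in odds; the $10k figure and the 15%→35% example are purely illustrative, not anything proposed in the thread.

```python
import math


def bits_moved(p_old: float, p_new: float) -> float:
    """Absolute shift in log2-odds between a prior and a posterior probability.

    Under this convention, one "bit" corresponds to a doubling (or halving) of the odds.
    """
    def log_odds(p: float) -> float:
        return math.log2(p / (1 - p))

    return abs(log_odds(p_new) - log_odds(p_old))


# Purely illustrative: a hypothetical $10k-per-bit schedule applied to a submission
# that moves P(misalignment x-risk | AGI) from 15% to 35%.
reward_per_bit = 10_000
shift = bits_moved(0.15, 0.35)   # ~1.61 bits
payout = reward_per_bit * shift  # ~$16,100
print(f"{shift:.2f} bits moved -> ${payout:,.0f}")
```

One appeal of paying per log-odds shift is that two submissions which each move the estimate partway would both get rewarded, rather than only the one that happens to cross a threshold.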

… if only they had allowed people not to publish on EA Forum, LessWrong, and Alignment Forum :)

Honestly, it seems like a mistake to me not to allow other ways of submission. For example, some people may not want to publicly apply for a prize or be associated with our communities. An additional submission form might help with that.

19
rgb
2y

I think that the post should explain briefly, or even just link to, what a “superforecaster” is. And if possible explain how and why this serves an independent check.

The superforecaster panel is imo a credible signal of good faith, but people outside of the community may think “superforecasters” just means something arbitrary and/or weird and/or made up by FTX.

(The post links to Tetlock’s book, but not in the context of explaining the panel)

9
Zach Stein-Perlman
2y
Agree with Habryka: I believe there exist decisive reasons to believe in shorter timelines and higher P(doom) than you accept, but I don't know what your cruxes are.

I don't have anything great, but the best thing I could come up with was definitely "I feel most stuck because I don't know what your cruxes are". 

I started writing a case for why I think AI X-Risk is high, but I really didn't know whether the things I was writing were going to be hitting at your biggest uncertainties. My sense is you probably read most of the same arguments that I have, so our difference in final opinion is probably generated by some other belief that you have that I don't, and I don't really know how to address that preemptively. 

I might give it a try anyways, and this doesn't feel like a defeater, but in this space it's the biggest thing that came to mind.

There are some better processes that could be used for smaller groups of high-trust people competing with each other, but I think we don't really have a good process for this particular use case:

* Someone wants to publish something
* They are worried it might be an information hazard
* They want someone logical to look at it and assess that before they publish

I think it would be a useful service for someone to solve that problem. I am certainly feeling some pain from it right now, though I'm not sure how general it is. (I would think it's pretty general, especially in biosecurity, and I don't think there are good scalable processes in place right now.)

8
Fedor
2y
Hey, Lorenzo pointed me to this comment. I work in InfoSec.

The first step is defining what your threats are, and what you are trying to defend. I'll be blunt: if large, highly capable geopolitical powers actively want to get your highly valuable information, beyond passive bulk collection, then they will be able to get it. I don't quite know how to say this, but security is bad at what we do. If you want to keep something secret that they want as much as, say, nuclear secrets, then we don't know how to do that in a way that works with a high chance of success. If your information is sensitive and confidential, but nation-state actors only want it as much as, say, something that would cause a press scandal, then there is opportunity.

If you want to disclose infohazards safely, there's a lot to learn from whistleblower publisher orgs (like WikiLeaks) and CitizenLab. The cheap, usable option is for someone to have an otherwise unused phone and create a Protonmail and Signal account with it, then publish those on any https website (like this forum), and never forward the info from the phone. Publish the Protonmail PGP key, and make sure people email it either from Protonmail itself or, if they understand PGP, with PGP (so not normal Gmail). That gets everything to a device with minimal attack surface, and is reasonably user friendly.

If you have problems in this area, I can help.
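For readers unfamiliar with the PGP step described above, here is a minimal sketch of the sender's side, assuming the python-gnupg library, a local GnuPG install, and a hypothetical published key file named reviewer_pubkey.asc. It covers only the encryption step, not the operational-security measures (separate phone, Protonmail/Signal) that Fedor emphasizes.

```python
# Minimal sketch: encrypt a writeup to a reviewer's published PGP public key.
# Assumes GnuPG is installed and the python-gnupg package is available;
# "reviewer_pubkey.asc" and "infohazard_draft.md" are hypothetical file names.
import gnupg

gpg = gnupg.GPG()

# Import the reviewer's published public key.
with open("reviewer_pubkey.asc") as f:
    import_result = gpg.import_keys(f.read())
fingerprint = import_result.fingerprints[0]

# Encrypt the draft so only the holder of the matching private key can read it.
with open("infohazard_draft.md", "rb") as f:
    encrypted = gpg.encrypt_file(
        f, recipients=[fingerprint], always_trust=True,
        output="infohazard_draft.md.asc",
    )

print("OK" if encrypted.ok else f"Encryption failed: {encrypted.status}")
```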
8
Lorenzo Buonanno
2y
Probably missing something obvious, but could they either:
* PGP encrypt it with the reviewer's public key, and send it via email?
* Use an e2e encrypted messaging medium? (Don't know which are trustworthy, but I'm sure there's an expert consensus)
Or are those not user friendly enough? I think this is a solved problem in infosec (but am probably missing something)

We are very unsure on both counts! There are some Manifold Markets on the first question, though!

I do think articles wouldn't necessarily need to be that long to be convincing to us, and this may be a consequence of Open Philanthropy's thoroughness. Part of our hope for these prizes is that we'll get a wider range of people weighing in on these debates (and I'd expect less length there).

4
MichaelDickens
2y
The link doesn't work for me. What does work is going to http://manifold.markets/ and searching "future fund" (and this gives me the exact URL that you linked, so I'm not sure why the link doesn't work).
4
Greg_Colbourn
2y
There is also the feedback loop involving the Future Fund itself. As Michael Dickens points out here: I think it's much easier to argue that p(misalignment x-risk|AGI) >35% (or 75%) as things stand.

Thanks for the feedback! This is an experiment, and if it goes well we might do more things like it in the future. For now, we thought it was best to start with something that we felt we could communicate and judge relatively cleanly.

1
howdoyousay?
2y
Thanks for clarifying this is in fact the case, Nick. I get how setting a benchmark - in this case an essay's persuasiveness at shifting the probabilities you assign to different AGI / extinction scenarios - makes it easier to judge across the board. But as someone who works in this field, I can't say I'm excited by the competition or feel it will help advance things.

Basically, I don't know if this prize is incentivising the things which matter most. Here's why:
1. The focus is squarely on the likelihood of things going wrong against different timelines. It has nothing to do with the solutions space.
2. But solutions are still needed, even if the likelihood reduces / increases by a large amount, because the impact would be so high.
    1. Take Proposition 1: humanity going extinct or drastically curtailing its future due to loss of control of AGI. I can see how a paper which changes your probabilities from 15% to either 7% or 35% would lead to FTX changing the amount invested in this risk relative to other x-risks - this is good. However, I doubt it'd lead to a full-on disinvestment, let alone that you wouldn't still want to fund the best solutions, or be worried if the solutions to hand looked weak.
3. Moreover, capabilities advancements have rapidly changed priors of when AGI / transformative AI will be developed, and will likely continue to do so iteratively. Once this competition is done, new research could have shifted the dial again. The solutions space will likely be the same.
4. So long as the gap between capabilities and alignment advancements persists, solutions will more likely come from the AI governance space than the AI alignment research space just yet.
5. The solutions space is pretty sparse still in terms of governance of AI. But given the argument in 2), I think this is a big risk and one where further work should be stimulated. There's likely loads of value off the table, people sitting on ideas, especially people outside the EA community who have worked in governance / non-

Maybe you could talk about betting odds as if you're an observer outside this world or otherwise assume away (causal and acausal) influence other than through the payout.

Yes, the intention is roughly something like this.

Hi Seena! We'll post about it on our website if/when we do another open call. We'll also announce it on Twitter: https://twitter.com/ftxfuturefund

Please see our grants page: https://ftxfuturefund.org/our-grants/

Thanks for your comment! I wanted to try to clarify a few things regarding the two claims you see us as making. I agree there are major benefits to providing feedback to applicants. But there are also significant costs, and I want to explain why it's at least a non-obvious decision what the right choice is here.

On (1), I agree with Sam that it wouldn't be the right prioritization for our team right now to give detailed feedback to >1600 applications we rejected, and would cut into our total output for the year significantly. I think it could ... (read more)

3
Ferenc Huszár
2y
Thanks for the response, and for being open to improving your process; I agree with many of your points about the importance of scaling teams cautiously.

A model that I heard TripleByte used sounds interesting to me.

I wrote a comment about TripleByte's feedback process here; this blog post is great too. In our experience, the fear of lawsuits and PR disasters from giving feedback to rejected candidates was much overblown, even at a massive scale. (We gave every candidate feedback regardless of how well they performed on our interview.)

Something I didn't mention in my comment is that much of TripleByte's feedback email was composed of prewritten text blocks carefully optimized to be helpful and non-offe... (read more)

7
Tee
2y
Very much appreciate the considerate engagement with this. Wanted to flag that my primary response to your initial comment can be found here. All this makes a lot of sense to me. I suspect some people got value out of the presentation of this reasoning. My goal here was to bring this set of considerations to yours and Sam's attention and upvote its importance; hopefully it's factored into what is definitely a non-obvious and complex decision moving forward. Great to see how thoughtful you all have been, and thanks again!

We tend to do BOTECs (back-of-the-envelope calculations) when we have internal disagreement about whether to move forward with a large grant, or about whether to fund in a given area. But this is how we make only a minority of decisions.

There are certain standard numbers I think about in the background of many applications, e.g. how large I think different classes of existential risks are and modifiers for how tractable I think they are. My views are similar to Toby Ord's table of risks in The Precipice. We don't have standardized and carefully explained es... (read more)
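To illustrate the flavor of such a BOTEC, here is a minimal sketch with entirely made-up numbers; the baseline is merely the same order of magnitude as Ord's published estimate for unaligned AI in The Precipice, and nothing here reflects Future Fund's actual figures or method.

```python
# Illustrative back-of-the-envelope calculation (BOTEC) for a hypothetical grant.
# Every number below is made up for illustration; none are Future Fund's actual figures.

baseline_risk = 0.10        # assumed existential risk from the targeted source this century
                            # (same order of magnitude as Ord's ~1/10 for unaligned AI)
tractability = 0.3          # modifier: fraction of that risk plausibly addressable at all
relative_reduction = 1e-4   # hypothetical share of the addressable risk this grant removes
grant_cost = 2_000_000      # hypothetical grant size in dollars

risk_reduced = baseline_risk * tractability * relative_reduction
basis_points = risk_reduced * 10_000          # 1 basis point = 0.01 percentage points
cost_per_basis_point = grant_cost / basis_points

print(f"Estimated x-risk reduced: {basis_points:.3f} basis points")
print(f"Implied cost: ${cost_per_basis_point:,.0f} per basis point")
```

A figure like the final cost-per-basis-point line is the kind of quantity that could then be compared against a funding bar when there is internal disagreement.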

2
MichaelStJules
2y
Do you have standard numbers for net x-risk reduction (share or absolute) for classes of interventions you fund, too?

About 99% of applicants have received a decision at this point. The remaining 1% have received updates on when they should expect to hear from us next. Some of these require back-and-forth with the applicant and we can't unilaterally conclude the process with all the info we need. And in some of these cases the ball is currently in our court.

We will be reporting on the open call more systematically in our progress update, which we'll publish in a month or so.

Thanks for the thoughts, Irena! It's true that there are some proposals that did not receive decisions in 14 days and perhaps we should have communicated more carefully.

That said, I think if you look at the text on the website and compare it with what's happening, it actually matches pretty closely.

We wrote:

"We aim to arrive at decisions on most proposals within 14 days (though in more complex cases, we might need more time).

  • If your grant request is under $1 million, we understand it, we like it, and we don’t see potential for major downsides, it’ll probab
... (read more)
9
Writer
2y
The indication I got said that FTX would reach out "within two weeks", which meant by April 20. I haven't heard back since, though. I reached out eight days ago to ensure that my application or relevant e-mails haven't been lost, but I haven't received an answer. :( (I get that this is probably not on purpose, and that grant decisions take as long as they need to, but if I see an explicit policy of "we are going to reach out even if we haven't made a decision yet" then I'm left wondering if something has broken down somewhere and about what to do. It seems a good choice to try to reach out myself... and comment under this thread to provide a data point.)

Thanks for sharing your thoughts and concerns, Tee. I'd like to comment on application feedback in particular. It's true that we are not providing feedback on the vast majority of applications, and I can see how it would be frustrating and confusing to be rejected without understanding the reasons, especially when funders have such large resources at their disposal.

We decided not to give feedback on applications because we didn't see how to do it well and stay focused on our current commitments and priorities. We think it would require a large time investm... (read more)

Agree with this - it's impossible to give constructive feedback on thousands of applications. The decision is between not giving grants, or accepting that most grant applications won't get much feedback from us. We chose the latter.

Makes sense! We are aiming to post a progress update in the next month or so.

We're still finishing up about 30 more complicated applications (of ~1700 originally submitted). Then we're going to review the process, and share some of what we learned!

1
barkbellowroar
2y
Sounds good, thanks for responding Nick!

We don't know yet! We're finishing up about 30 more complicated applications (of ~1700 originally submitted), and then we're going to review the process and make a decision about this.

1
kris.reiser
2y
We were very excited about this new opportunity! Just checking in to see how/when the results would be communicated. We have our confirmation email with summary but haven't had any results yet.  Would an update on the progress of the submissions be possible? Thank you!  

"Over the coming two weeks, the FTX foundation will be making its decisions and likely disbursing $100m, possibly more."

Just wanted to quickly correct this. Though we aim to give out at least $100M this year, we are not aiming to do that in this first call for proposals.

2
Sanjay
2y
I was unsure about this, thanks for clarifying

We have a number of entities we can use to provide funding, and which we use depends on the exact circumstances. It could be our non-profit entity, FTX Foundation Inc; a DAF (donor-advised fund) of one of our board members; or something else if it's a for-profit investment. We will work with the people we support to find the best way for them to receive the funding.

2
Sanjay
2y
Thanks very much Nick. Is it possible to name one of the organisations providing the DAF? (e.g. is it National Philanthropic Trust, or Charities Aid Foundation, or whatever). Ideally if there's one in the UK, it would be great to name them, but failing that if you could provide the name of any of them off the top of your head, that would be great.

2000 Center St
Ste 400
Berkeley, CA 94704

Not from us, but please try to keep your answers brief. Not sure about Google!

1
Gab
2y
Thanks Nick! 

We don't know yet. We're going to see how this one goes and then decide after that.

The first open call will end March 21. We'll probably have more calls for proposals in the future, but we’re really not sure when, and this will depend in part on how this experiment goes.

1
Em_B
2y
Thank you for the information! 

Quick addition to this: For colleges and universities, indirect costs may not exceed 10% of the direct costs. On this front, Future Fund will mimic Open Philanthropy's indirect costs policy.


 

1
Aleksandar Bogdanoski
2y
Thanks, this is very helpful, Nick! We're planning on submitting a proposal from UC Berkeley; however, our research administration team needs some info regarding the FTX Foundation, such as its address, complete name, and charitable status in the Bahamas. Could you share it or direct us to where we can find this information?
1
Charles Tsai
2y
Hi Nick, Is the 10% limit just for colleges and universities? Or does it apply to other nonprofits as well? Thanks.

Thank you! 

We definitely include non-human sentient beings as moral patients. Future Fund focuses on humanity in our writing because we think the human trajectory is the main factor we can influence in order to benefit both humans and non-humans in the long run.

Yes, we're open to funding academic research relevant to our mission and/or areas of interest.

No, funding applications will not be made public.

Thanks for pointing this out; we didn’t know about this. I think the easiest solution would be for you to either (a) use a different google account or (b) create a new google account for this purpose.

You could also perhaps try not attaching any files and just sending us links to Google Docs set to "anyone with the link can view."

We’d be willing to fund professionals who submit an application directly.


 

If we're funding a for-profit organization to do something profitable, we'd like to receive equity. If you can arrange for that, we're all set.


 

FTX Foundation has funded some animal work in the past, and almost certainly will do so in the future. Future Fund won’t be funding animal welfare work except when we see a good case that it's one of the best ways to improve the longterm future. Basically James Ozden has it right.


 

There aren’t restrictions on multiple discrete applications.


 

1
Fielding Grasty
2y
Thank you, Nick.

We have a more robust interest in neglected existential risks, such as AI and bio. However, we think the issues discussed in our economic growth section are good from a longtermist POV, and we'd like to see what ideas people put forward.

Our areas of interest aren’t in order of priority, and there's internal disagreement about the order of priority.
