Thank you Max for your years of dedicated service at CEA. Under your leadership as Executive Director, CEA grew significantly, increased its professionalism, and reached more people than it had before. I really appreciate your straightforward but kind communication style, humility, and eagerness to learn and improve. I'm sorry to see you go, and wish you the best of luck in whatever comes next.
I didn't know you (Max) well but the comment above captures a lot of what was also my impression following CEA's progress from further away! Sorry to see you step back. Best wishes with taking more time for yourself and with future roles (if you plan to pursue them)!
Thanks, I think this is subtle and I don't think I expressed this perfectly.
> If someone uses AI capabilities to create a synthetic virus (which they wouldn't have been able to do in the counterfactual world without that AI-generated capability) and caused the extinction or drastic curtailment of humanity, would that count as "AGI being developed"?
No, I would not count this.
I'd probably count it if the AI a) somehow formed the intention to do this and then developed the pathogen and released it without human direction, but b) couldn't yet produce as much economic output as full automation of labor.
No official rules on that. I do think that if you have some back and forth in the comments that's a way to make your case more convincing, so some edge there.
1 - counts for purposes of this question
2 - doesn't count for purposes of this question (but would be a really big deal!)
Thanks for the feedback! I think this is a reasonable comment, and the main things that prevented us from doing this are:
(i) I thought it would detract from the simplicity of the prize competition, and would be hard to communicate clearly and simply
(ii) I think the main thing that would make our views more robust is seeing what the best arguments are for having quite different views, and this seems like it is addressed by the competition as it stands.
For simplicity on our end, I'd appreciate it if you had one post at the end that was the "official" entry, which links to the other posts. That would be OK!
Plausibility, argumentation, and soundness will be inputs into how much our subjective probabilities change. We framed this in terms of subjective probabilities because it seemed like the easiest way to crisply point at ideas which could change our prioritization in significant ways.
Thanks! The part of the post that was supposed to be most responsive to this on size of AI x-risk was this:
...For "Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI." I am pretty sympathetic to the analysis of Joe Carlsmith here. I think Joe's estimates of the relevant probabilities are pretty reasonable (though the bottom line is perhaps somewhat low), and if someone convinced me that the probabilities on the premises in his argument should be much higher or lower...
Do you believe that there is something already published that should have moved our subjective probabilities outside of the ranges noted in the post? If so, I'd love to know what it is! Please use this thread to collect potential examples, and include a link. Some info about why it should have done that (if not obvious) would also be welcome. (Only new posts are eligible for the prizes, though.)
This is more of a meta-consideration around shared cultural background and norms. Could it just be a case of allowing yourselves to update toward more scary-sounding probabilities? You have all the information already. This video from Rob Miles ("There's No Rule That Says We'll Make It") [transcript copied from YouTube] made me think along these lines. Aside from background culture considerations around human exceptionalism (inspired by religion) and optimism favouring good endings (Hollywood; perhaps also history to date?), I think there is also an i...
I think considerations like those presented in Daniel Kokotajlo's Fun with +12 OOMs of Compute suggest that you should have ≥50% credence on AGI by 2043.
Do you believe some statement of this form?
"FTX Foundation will not get submissions that change its mind, but it would have gotten them if only they had [fill in the blank]"
E.g., if only they had…
Even better would be a statement of the form:
...if they had explained why their views were not moved by the expert reviews OpenPhil has already solicited.
In "AI Timelines: Where the Arguments, and the 'Experts,' Stand," Karnofsky writes:
Then, we commissioned external expert reviews.[7]
Speaking only for my own views, the "most important century" hypothesis seems to have survived all of this. Indeed, having examined the many angles and gotten more into the details, I believe it more strongly than before.
The footnote text reads, in part:
...Reviews of Bio Anchors are here; reviews of Explosive Growth are here
I would also have suggested a prize for work that generally confirms your views, but with arguments you consider superior to your previous reasoning.
This prize structure mirrors the bias toward publishing research that claims something new rather than research that confirms previous findings.
That would also counteract a bias baked into the process as it stands: it compels people to try to convince you to update, rather than to figure out what they actually think is right.
I really think you need to commit to reading everyone's work, even if that just means an intern skimming each entry for 10 minutes as a sifting stage.
The way this is set up now, ideas proposed by unknown people in the community are unlikely to be engaged with, and so you won't read them.
Look at the recent Cause Exploration Prizes: half the winners had essentially no karma/engagement and were not forecasted to win. If Open Philanthropy hadn't committed to reading them all, they could easily have been missed.
Personally, yes I am much less likely to write something and put effort in if I think no one will read it.
I attach less than 50% credence to this belief, but probably higher than to the existing alternative hypotheses:
FTX Foundation will not get submissions that change its mind, but it would have gotten them if only they had [fill in the blank]
Given 6 months or a year for people to submit to the contest rather than 3 months.
I think forming coherent worldviews takes a long time, most people have day jobs or school, and even people who have the flexibility to take weeks or a month off to work on this full-time probably need some warning to arrange this with their w...
"FTX Foundation will not get submissions that change its mind, but it would have gotten them if only they had [broadened the scope of the prizes beyond just influencing their probabilities]"
Examples of things someone considering entering the competition would presumably consider out of scope are:
You might already be planning on doing this, but it seems like you'd increase the chance of getting a winning entry if you advertised this competition in a lot of non-EA spaces, especially technical AI spaces (e.g., labs, universities), and maybe also outside the US/UK. Given the size of the prize, it might be easy to get people to pass the advertisement along among their groups. (Maybe there's a worry about getting flack somehow for this, though. It also increases the overhead of needing to read more entries, though it sounds like you have some system...
Could you put some judges on the panel who are a bit less worried about AI risk than your typical EA would be? EA opinions tend to cluster quite strongly around an area of conceptual space that many non-EAs do not occupy, and it is often hard for people to evaluate views that differ radically from their own. Perhaps one of the superforecasters could be put directly onto the judging panel, pre-screened to be someone who is less worried about AI risk.
"FTX Foundation will not get submissions that change its mind, but it would have gotten them if only they had [fill in the blank]"
"I personally would compete in this prize competition, but only if..."
Ehh, the above is too strong, but:
your reward schedule rewarded smaller shifts in proportion to how much they moved your probabilities (e.g., $X per bit).
E.g., as it is now, if two submissions together move you across a threshold, it would seem as if:
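One way to make the "per bit" idea concrete is a sketch like the following (the per-bit rate and the probabilities are purely illustrative): measure each submission's effect as the shift in log-odds, so payouts are proportional to the size of the update and, because log-odds add, two submissions that jointly cross a threshold earn exactly what a single submission making the full move would.

```python
import math

def bits_moved(p_prior: float, p_posterior: float) -> float:
    """Absolute shift in log-odds between two probabilities, in bits."""
    odds = lambda p: p / (1 - p)
    return abs(math.log2(odds(p_posterior) / odds(p_prior)))

def payout(p_prior: float, p_posterior: float, dollars_per_bit: float) -> float:
    """Reward proportional to how far a submission moved the judges' credence."""
    return dollars_per_bit * bits_moved(p_prior, p_posterior)

# Additivity: two partial moves pay the same in total as one full move,
# so there are no threshold effects to game or to fall just short of.
half1 = payout(0.15, 0.25, 10_000)
half2 = payout(0.25, 0.35, 10_000)
full = payout(0.15, 0.35, 10_000)
assert abs((half1 + half2) - full) < 1e-9
```

The log-odds choice is what makes the scheme path-independent: only the start and end probabilities matter, not how many submissions it took to get there.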
… if only they had allowed people not to publish on EA Forum, LessWrong, and Alignment Forum :)
Honestly, it seems like a mistake to me not to allow other ways of submitting. For example, some people may not want to publicly apply for a prize or be associated with our communities. An additional submission form might help with that.
I think that the post should explain briefly, or even just link to, what a “superforecaster” is. And if possible explain how and why this serves an independent check.
The superforecaster panel is imo a credible signal of good faith, but people outside of the community may think “superforecasters” just means something arbitrary and/or weird and/or made up by FTX.
(The post links to Tetlock’s book, but not in the context of explaining the panel)
I don't have anything great, but the best thing I could come up with was definitely "I feel most stuck because I don't know what your cruxes are".
I started writing a case for why I think AI X-Risk is high, but I really didn't know whether the things I was writing were going to be hitting at your biggest uncertainties. My sense is you probably read most of the same arguments that I have, so our difference in final opinion is probably generated by some other belief that you have that I don't, and I don't really know how to address that preemptively.
I might give it a try anyways, and this doesn't feel like a defeater, but in this space it's the biggest thing that came to mind.
There are some better processes that would be used for some smaller groups of high-trust people competing with each other, but I think we don't really have a good process for this particular use case of:
* Someone wants to publish something
* They are worried it might be an information hazard
* They want someone logical to look at it and assess that before they publish
I think it would be a useful service for someone to solve that problem. I am certainly feeling some pain from it right now, though I'm not sure how general it is. (I would think it's pretty general, especially in biosecurity, and I don't think there are good scalable processes in place right now.)
We are very unsure on both counts! There are some Manifold Markets on the first question, though!
I do think articles wouldn't necessarily need to be that long to be convincing to us, and this may be a consequence of Open Philanthropy's thoroughness. Part of our hope for these prizes is that we'll get a wider range of people weighing in on these debates (and I'd expect less length there).
Thanks for the feedback! This is an experiment, and if it goes well we might do more things like it in the future. For now, we thought it was best to start with something that we felt we could communicate and judge relatively cleanly.
Maybe you could talk about betting odds as if you're an observer outside this world, or otherwise assume away (causal and acausal) influence other than through the payout.
Yes, the intention is roughly something like this.
Hi Seena! We'll post about it on our website if/when we do another open call. We'll also announce it on Twitter: https://twitter.com/ftxfuturefund
Thanks for your comment! I wanted to try to clarify a few things regarding the two claims you see us as making. I agree there are major benefits to providing feedback to applicants. But there are significant costs, too, and I want to explain why it's at least a non-obvious decision what the right choice is here.
On (1), I agree with Sam that it wouldn't be the right prioritization for our team right now to give detailed feedback on the >1,600 applications we rejected, and doing so would cut into our total output for the year significantly. I think it could ...
A model that I heard TripleByte used sounds interesting to me.
I wrote a comment about TripleByte's feedback process here; this blog post is great too. In our experience, the fear of lawsuits and PR disasters from giving feedback to rejected candidates was much overblown, even at a massive scale. (We gave every candidate feedback regardless of how well they performed on our interview.)
Something I didn't mention in my comment is that much of TripleByte's feedback email was composed of prewritten text blocks carefully optimized to be helpful and non-offe...
We tend to do BOTECs when we have internal disagreement about whether to move forward with a large grant, or when we have internal disagreement about whether to fund in a given area. But this is only how we make a minority of decisions.
There are certain standard numbers I think about in the background of many applications, e.g. how large I think different classes of existential risks are and modifiers for how tractable I think they are. My views are similar to Toby Ord's table of risks in The Precipice. We don't have standardized and carefully explained es...
About 99% of applicants have received a decision at this point. The remaining 1% have received updates on when they should expect to hear from us next. Some of these require back-and-forth with the applicant, so we can't unilaterally conclude the process until we have all the info we need. And in some of these cases the ball is currently in our court.
We will be reporting on the open call more systematically in our progress update, which we'll publish in a month or so.
Thanks for the thoughts, Irena! It's true that there are some proposals that did not receive decisions in 14 days and perhaps we should have communicated more carefully.
That said, I think if you look at the text on the website and compare it with what's happening, it actually matches pretty closely.
We wrote:
"We aim to arrive at decisions on most proposals within 14 days (though in more complex cases, we might need more time)."
Thanks for sharing your thoughts and concerns, Tee. I'd like to comment on application feedback in particular. It's true that we are not providing feedback on the vast majority of applications, and I can see how it would be frustrating and confusing to be rejected without understanding the reasons, especially when funders have such large resources at their disposal.
We decided not to give feedback on applications because we didn't see how to do it well and stay focused on our current commitments and priorities. We think it would require a large time investm...
Agree with this: it's impossible to give constructive feedback on thousands of applications. The choice is between not giving grants and accepting that most grant applications won't get much feedback from us. We chose the latter.
We're still finishing up about 30 more complicated applications (of ~1700 originally submitted). Then we're going to review the process, and share some of what we learned!
We don't know yet! We're finishing up about 30 more complicated applications (of ~1700 originally submitted), and then we're going to review the process and make a decision about this.
"Over the coming two weeks, the FTX foundation will be making its decisions and likely disbursing $100m, possibly more."
Just wanted to quickly correct this. Though we aim to give out at least $100M this year, we are not aiming to do that in this first call for proposals.
We have a number of entities we can use to provide funding, and which we use depends on the exact circumstances. It could be our non-profit entity FTX Foundation Inc or it could be a DAF of one of our board members or it could be something else if it's a for-profit investment. We will work with people we support to find the best way for them to receive the funding.
The first open call will end March 21. We'll probably have more calls for proposals in the future, but we’re really not sure when, and this will depend in part on how this experiment goes.
Quick addition to this: For colleges and universities, indirect costs may not exceed 10% of the direct costs. On this front, Future Fund will mimic Open Philanthropy's indirect costs policy.
Thank you!
We definitely include non-human sentient beings as moral patients. Future Fund focuses on humanity in our writing because we think the human trajectory is the main factor we can influence in order to benefit both humans and non-humans in the long run.
Thanks for pointing this out; we didn’t know about this. I think the easiest solution would be for you to either (a) use a different google account or (b) create a new google account for this purpose.
You could also perhaps try not attaching any files and just sending us links to Google Docs set to "anyone with the link can view."
If we're funding a for-profit organization to do something profitable, we'd like to receive equity. If you can arrange for that, we're all set.
FTX Foundation has funded some animal work in the past, and almost certainly will do so in the future. Future Fund won’t be funding animal welfare work except when we see a good case that it's one of the best ways to improve the longterm future. Basically James Ozden has it right.
We have a more robust interest in neglected existential risks, such as AI and bio. However, we think the issues discussed in our economic growth section are good from a longtermist POV, and we'd like to see what ideas people put forward.
Our areas of interest aren’t in order of priority, and there's internal disagreement about the order of priority.
Thanks for all of your hard work on EV, Will! I’ve really appreciated your individual example of generosity and commitment, boldness, initiative-taking, and leadership. I feel like a lot of things would happen more slowly or less ambitiously---or not at all---if it weren’t for your ability to inspire others to dive in and act on the courage of their convictions. I think this was really important for Giving What We Can, 80,000 Hours, Centre for Effective Altruism, the Global Priorities Institute, and your books. Inspiration, enthusiasm, and positivity from you has been a force-multiplier on my own work, and in the lives of many others that I have worked with. I wish you all the best in your upcoming projects.