I agree with most of the points in this post (AI timelines might be quite short; probability of doom given AGI in a world that looks like our current one is high; there isn't much hope for good outcomes for humanity unless AI progress is slowed down somehow). I will focus on one part where I think I disagree, which feels like a crux for me on whether advocating an AI pause (in its current form) is a good idea.
You write:
...But we can still have all the nice things (including a cure for ageing) without AGI; it might just take a bit longer than hoped. We d
I've wondered about this for independent projects and there's some previous discussion here.
See also the "shadows of the future" term that Michael Nielsen uses.
I think a general and theoretically sound approach would be to build a single composite game to represent all of the games together.
Yeah, I did actually have this thought but I guess I turned it around and thought: shouldn't an adequate notion of value be invariant to how I decide to split up my games? The linearity property on Wikipedia (reproduced below) even seems to be inviting us to split games up however we want.
And yeah, I agree that in the real world games will overlap and so there will be double counting going on by splitting games up. But if that's...
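For reference, here are the standard definitions being discussed (textbook statements, not anything from this thread): the Shapley value of player $i$ in a game $(N, v)$, and the linearity property.

$$\varphi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\bigl(v(S \cup \{i\}) - v(S)\bigr)$$

$$\varphi_i(v + w) = \varphi_i(v) + \varphi_i(w) \quad \text{(linearity, for games } v, w \text{ on the same player set)}$$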
I asked my question because the problem with infinities seems unique to Shapley values (e.g. I don't have this same confusion about the concept of "marginal value added"). Even with a small population, the number of cooperative games seems infinite: for example, there are an infinite number of mathematical theorems that could be proven, an infinite number of Wikipedia articles that could be written, an infinite number of films that could be made, etc. If we just use "marginal value added", the total value any single person adds is finite across all such co...
I don't think the example you give addresses my point. I am supposing that Leibniz could have also invented calculus, so . But Leibniz could have also invented lots of different things (infinitely many things!), and his claim to each invention would be valid (although in the real world he only invents finitely many things). If each invention is worth at least a unit of value, his Shapley value across all inventions would be infinite, even if Leibniz was "maximally unlucky" and in the actual world got scooped every single time and so did not inve...
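As a minimal sketch of the single-invention case (my own toy code, with a hypothetical value function: the invention is worth 1 unit and either inventor alone suffices):

```python
from itertools import permutations

# Toy two-player "invention" game: the invention is worth 1 unit of value,
# and either Newton or Leibniz alone would have been enough to produce it.
players = ["newton", "leibniz"]

def v(coalition):
    """Worth of a coalition: 1 if at least one inventor is present, else 0."""
    return 1.0 if coalition else 0.0

def shapley(players, v):
    """Shapley value = average marginal contribution over all player orderings."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition.add(p)
    return {p: t / len(orderings) for p, t in totals.items()}

print(shapley(players, v))  # {'newton': 0.5, 'leibniz': 0.5}
```

On this toy model Leibniz is credited 0.5 whether or not he is the one who actually publishes, which is the feature the rest of my comment is pushing on.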
Disagree-voting a question seems super aggressive and also nonsensical to me. (Yes, my comment did include some statements as well, but they were all scaffolding to present my confusion. I wasn't presenting my question as an opinion, as my final sentence makes clear.) I've been unhappy with the way the EA Forum has been going for a long time now, but I am noting this as a new kind of low.
What numerator and denominator? I am imagining that a single person could be a player in multiple cooperative games. The Shapley value for the person would be finite in each game, but if there are infinitely many games, the sum of all the Shapley values (adding across all games, not adding across all players in a single game) could be infinite.
Example 7 seems wild to me. If the applicants who don't get the job also get some of the value, does that mean people are constantly collecting Shapley value from the world, just because they "could" have done a thing (even if they do absolutely nothing)? If there are an infinite number of cooperative games going on in the world and someone can plausibly contribute at least a unit of value to any one of them, then it seems like their total Shapley value across all games is infinite, and at that point it seems like they are as good as one can be, all without having done anything. I can't tell if I'm making some sort of error here or if this is just how the Shapley value works.
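To spell out the arithmetic behind this worry: if someone's Shapley value is at least some fixed $\epsilon > 0$ in each of infinitely many games $v_1, v_2, \dots$, then summing across games gives

$$\sum_{k=1}^{\infty} \varphi_i(v_k) \geq \sum_{k=1}^{\infty} \epsilon = \infty.$$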
Do you know of any ways I could experimentally expose myself to extreme amounts of pleasure, happiness, tranquility, and truth?
I'm not aware of any way to expose yourself to extreme amounts of pleasure, happiness, tranquility, and truth that is cheap, legal, time efficient, and safe. That's part of the point I was trying to make in my original comment. If you're willing to forgo some of those requirements, then as Ian/Michael mentioned, for pleasure and tranquility I think certain psychedelics (possibly illegal depending on where you live, possibly unsafe,...
It may end up being that such intensely positive values are possible in principle and matter as much as intense pains, but they don’t matter in practice for neartermists, because they're too rare and difficult to induce. Your theory could symmetrically prioritize both extremes in principle, but end up suffering-focused in practice. I think the case for upside focus in longtermism could be stronger, though.
If by "neartermism" you mean something like "how do we best help humans/animals/etc who currently exist using only technologies that currently exist, ...
I think there are multiple ways to be a neartermist or longtermist, but "currently existing" and "next 1 year of experiences" exclude almost all effective animal advocacy we actually do and the second would have ruled out deworming.
Are you expecting yourself (or the average EA) to be able to cause greater quantities of intense pleasure than quantities of intense suffering you (or the average EA) can prevent in the next ~30 years, possibly considering AGI? Maybe large numbers of artificially sentient beings made to experience intense pleasure, or new drugs ...
I am worried that exposing oneself to extreme amounts of suffering without also exposing oneself to extreme amounts of pleasure, happiness, tranquility, truth, etc., will predictably lead one to care a lot more about reducing suffering compared to doing something about other common human values, which seems to have happened here. And the fact that certain experiences like pain are a lot easier to induce (at extreme intensities) than other experiences biases which values people end up caring about most.
Carl Shulman made a similar point in this post: "...
It may end up being that such intensely positive values are possible in principle and matter as much as intense pains, but they don’t matter in practice for neartermists, because they're too rare and difficult to induce. Your theory could symmetrically prioritize both extremes in principle, but end up suffering-focused in practice. I think the case for upside focus in longtermism could be stronger, though.
It's also conceivable that pleasurable states as intense as excruciating pains in particular are not possible in principle after refining our definitions...
Has Holden written any updates on outcomes associated with the grant?
Not to my knowledge.
I don't think that lobbying against OpenAI, or other adversarial action, would have been that hard.
It seems like once OpenAI was created and had disrupted the "nascent spirit of cooperation", even if OpenAI went away (like, the company and all its employees magically disappeared), the culture/people's orientation to AI stuff ("which monkey gets the poison banana" etc.) wouldn't have been reversible. So I don't know if there was anything Open Phil could have done to...
Eliezer's tweet is about the founding of OpenAI, whereas Agrippa's comment is about a 2017 grant to OpenAI (OpenAI was founded in 2015, so this was not a founding grant). It seems like to argue that Open Phil's grant was net negative (and so strongly net negative as to swamp other EA movement efforts), one would have to compare OpenAI's work in a counterfactual world where it never got the extra $30 million in 2017 (and Holden never joined the board) with the actual world in which those things happened. That seems a lot harder to argue for than what Elieze...
What textbooks would you recommend for these topics? (Right now my list is only “Linear Algebra Done Right”)
I would recommend not starting with Linear Algebra Done Right unless you already know the basics of linear algebra. The book does not cover some basic material (like row reduction, elementary matrices, solving linear equations) and instead focuses on trying to build up the theory of linear algebra in a "clean" way, which makes it enlightening as a second or third exposure to linear algebra but a cruel way to be introduced to the subject for the fi...
Many domains that people tend to conceptualize as "skill mastery, not cult indoctrination" also have some cult-like properties like having a charismatic teacher, not being able to question authority (or at least, not being encouraged to think for oneself), and a social environment where it seems like other students unquestioningly accept the teachings. I've personally experienced some of this stuff in martial arts practice, math culture, and music lessons, though I wouldn't call any of those a cult.
Two points this comparison brings up for me:
He was at UW in person (he was a grad student at UW before he switched his PhD to AI safety and moved back to Berkeley).
Setting expectations without making it exclusive seems good.
"Seminar program" or "seminar" or "reading group" or "intensive reading group" sound like good names to me.
I'm guessing there is a way to run such a group in a way that both you and I would be happy about.
The actual activities that the people in a fellowship engage in, like reading things and discussing them and socializing and doing giving games and so forth, don't seem different from what a typical reading club or meetup group does. I am fine with all of these activities, and think they can be quite valuable.
So how are EA introductory fellowships different from a bare reading club or meetup group? My understanding is that the main differences are exclusivity and the branding. I'm not a fan of exclusivity in general, but especially dislike it when there do...
I didn't. As far as I know, introductory fellowships weren't even a thing in EA back in 2014 (or if they were, I don't remember hearing about them back then despite reading a bunch of EA things on the internet). However, I have a pretty negative opinion of these fellowships so I don't think I would have wanted to start one even if they were around at the time.
(I tried starting the original EA group at UW in 2014. I'm no longer a student at UW and don't even live in the Seattle area currently.)
Seems like you found the Messenger group, which is the most active thing I am aware of. You've also probably seen the Facebook group and could try messaging some of the people there who joined recently.
I don't want to discourage you from trying, but here are some more details: I was unable to start an EA group at UW in 2014 (despite help from Seattle EA organizers). At the time I thought this was mainly due to my poor soci...
Did you run an introductory fellowship? (Probably not since introductory fellowships only really started/took off in 2018.) I've found a big difference with trying to start an EA group through discussion meetings vs an introductory fellowship—the latter has been much more successful. Introductory fellowships are the core of CEA's University Group Accelerator Program (previously called MVP Group Pilot Program, MVP standing for "minimum viable product").
Scott Garrabrant has discussed this (or some very similar distinction) in some LessWrong comments. There's also been a lot of discussion about babble and prune, which is basically the same distinction, except happening inside a single mind instead of across multiple minds.
There are already websites like Master How To Learn and SuperMemo Guru, the various guides on spaced repetition systems on the internet (including Andy Matuschak's prompt-writing guide which is presented in the mnemonic medium), and books like Make It Stick. If I were working on such a project I would try to more clearly lay out what is missing from these existing resources.
My personal feeling is that enough popularization of learning techniques is already taking place (though one exception I can think of is to make SuperMemo-style incremental reading more ...
(I read the non-blockquote parts of the post, skimmed the blockquotes, and did not click through to any of the links.)
It seems like the kind of education discussed in this post is exclusively mass schooling in the developing world, which is not clear from the title or intro section. If that's right, I would suggest editing the title/intro to be clearer about this. The reason is that I am quite interested in improving education so I was interested to read objections to my views, but I tend to focus on technical subjects at the university level so I feel like this post wasn't actually relevant to me.
For the past five years I have been doing contract work for a bunch of individuals and organizations, often overlapping with the EA movement's interests. For a list of things I've done, you can see here or here. I can say more about how I got started and what it's like to do this kind of work if there is interest.
What are your thoughts on chronic anxiety and DP/DR induced by psychedelics? Do you have an idea of how common this kind of condition is and how best to treat or manage it?
For me, I don't think there is a single dominant reason. Some factors that seem relevant are:
We currently have a donor who is funding everything. In the future, we intend for it to be a combination of 1) fundraising for specific ideas when they are identified and 2) fundraising for non-earmarked donations from people who trust our research and assessment process.
Another idea is to set up conditional AMAs, e.g. "I will commit to doing an AMA if at least n people commit to asking questions." This has the benefit of giving each AMA its own time (without competing for attention with other AMAs) while trying to minimize the chance of time waste and embarrassment.
In the April 2020 payout report, Oliver Habryka wrote:
I’ve also decided to reduce my time investment in the Long-Term Future Fund since I’ve become less excited about the value that the fund can provide at the margin (for a variety of reasons, which I also hope to have time to expand on at some point).
I'm curious to hear more about this (either from Oliver or any of the other fund managers).
Regardless of whatever happens, I've benefited greatly from all the effort you've put into your public writing on the fund, Oliver.
I am wondering how the fund managers are thinking more long-term about encouraging more independent researchers and projects to come into existence and stay in existence. So far as I can tell, there hasn't been much renewed granting to independent individuals and projects (i.e. granting for a second or third time to grantees who have previously already received an LTFF grant). Do most grantees have a solid plan for securing funding after their LTFF grant money runs out, and if so what do they tend to do?
I think LTFF is doing something valuable by giving pe...
The LTFF is happy to renew grants so long as the applicant has been making strong progress and we believe working independently continues to be the best option for them. Examples of renewals in this round include Robert Miles, who we first funded in April 2019, and Joe Collman, who we funded in November 2019. In particular, we'd be happy to be the #1 funding source of a new EA org for several years (subject to the budget constraints Oliver mentions in his reply).
Many of the grants we make to individuals are for career transitions, such as someone retrainin...
Yeah, I am also pretty worried about this. I don't think we've figured out a great solution to this yet.
I think we don't really have sufficient capacity to evaluate organizations on an ongoing basis and provide good accountability. Like, if a new organization were to be funded by us and then grow to a budget of $1M a year, I don't feel like we have the capacity to evaluate their output and impact sufficiently well to justify giving them $1M each year (or even just $500k).
Our current evaluation process feels pretty good for smaller projec...
Ok I see, thanks for the clarification! I didn't notice the use of the phrase "the MIRI method", which does sound like an odd way to phrase it (if MIRI was in fact not involved in coming up with the model).
MIRI and the Future of Humanity Institute each created models for calculating the probability that a new researcher joining MIRI will avert existential catastrophe. MIRI’s model puts it at between and , while the FHI estimates between and .
The wording here makes it seem like MIRI/FHI created the model, but the link in the footnote indicates that the model was created by the Oxford Prioritisation Project. I looked at their blog post for the MIRI model but it looks like MIRI wasn't involved in creating the model (although the post author...
Did you end up writing this post? (I looked through your LW posts since the timestamp of the parent comment but it doesn't seem like you did.) If not, I would be interested in seeing some sort of outline or short list of points even if you don't have time to write the full post.
I think the forum software hides comments from new users by default. You can go here (and click the "play" button) to search for the most recently created users. You can see that Nathan Grant and ssalbdivad have comments on this post that are only visible via their user page, and not yet visible on this post.
Edit: The comments mentioned above are now visible on this post.
So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.
Can you say how you came up with the "moving from 1% to 0.8%" part? Everything else in your comment makes sense to me.
So you think the hazard rate might go from around 20% to around 1%?
I'm not attached to those specific numbers, but I think they are reasonable.
That's still far from zero, and with enough centuries with 1% risk we'd expect to go extinct.
Right, maybe I shouldn't have said "near zero". But I still think my basic point (of needing to lower the hazard rate if growth stops) stands.
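To make the "enough centuries" point concrete (my own illustrative numbers, using the standard constant-hazard-rate formula): with a per-century extinction risk $r$, the chance of surviving $N$ centuries is

$$P(\text{survive } N \text{ centuries}) = (1 - r)^N,$$

so at $r = 0.01$ that is $0.99^{100} \approx 0.37$ after 100 centuries and $0.99^{500} \approx 0.007$ after 500, while at $r = 0.001$ it is still about $0.90$ after 100 centuries. That is the sense in which even a "low" constant rate eventually implies extinction unless the rate itself keeps being driven down.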
What's doing the work for you? Do you think the probability of anthropogenic x-risk with our current tech is close to zero? Or do you think that it's not but that if growth stopped we'd keep working on safety (say developing clean energy, improving relationships between US and China etc.) so that we'd eventually be safe?
I think the first option (low probability of x-risk with current technology) is driving my intuition.
Just to take some reasonable-seeming numbers (since I don't have numbers of my own): in The Precipice, Toby Ord estimates ~19% chance of
...Dustin Moskovitz has a relevant thread on Twitter
The timing of this AMA is pretty awkward, since many people will presumably not have access to the book or will not have finished reading the book. For comparison, Stuart Russell's new book was published in October, and the AMA was in December, which seems like a much more comfortable length of time for people to process the book. Personally, I will probably have a lot of questions once I read the book, and I also don't want to waste Toby's time by asking questions that will be answered in the book. Is there any way to delay the AMA or hold a second one at a later date?
Thanks for the comment! Toby is going to do a written AMA on the Forum later in the year too. This one is timed so that we can have video answers during Virtual EA Global.
I don't think you can add the percentages for "top or near top priority" and "at least significant resources". If you look at the row for global poverty, the percentages add up to over 100% (61.7% + 87.0% = 148.7%), which means the table is double counting some people.
Looking at the bar graph above the table, it looks like "at least significant resources" includes everyone in "significant resources", "near-top priority", and "top priority". For mental health it looks like "significant resources" has 37%, and "near-top priority" and "top priority" combined have 21.5% (shown as 22% in the bar graph).
So your actual calculation would just be 0.585 * .25 which is about 15%.
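For anyone following along, here is where those numbers come from (the 0.25 factor is from the parent calculation, which I'm taking as given; I'm just checking the arithmetic):

$$0.37 + 0.215 = 0.585, \qquad 0.585 \times 0.25 \approx 0.146 \approx 15\%.$$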
Stocking ~1 month of nonperishable food and other necessities
Can you say more about why 1 month, instead of 2 weeks or 3 months or some other length of time?
Also can you say something about how to decide when to start eating from stored food, instead of going out to buy new food or ordering food online?
I think that's one of the common ways for a post to be interesting, but there are other ways (e.g. asking a question that generates interesting discussion in the comments).
This has been the case for quite a while now. There was a small discussion back in December 2016 where some people expressed similar opinions. My guess is that 2015 is the last year the group regularly had interesting posts, but I might be remembering incorrectly.
How did you decide on "blog posts, cross-posted to EA Forum" as the main output format for your organization? How deliberate was this choice, and what were the reasons going into it? There are many other output formats that could have been chosen instead (e.g. papers, wiki pages, interactive/tool website, blog+standalone web pages, online book, timelines).
This was a very deliberate decision on our part. Our primary goal is to get EA decision-makers to make better decisions as a result of our work. We thought the most likely place these decision-makers would see our work is on the EA Forum. We also thought people would be more likely to read the work if we wrote it in an article that didn’t require clicking through to a further page or PDF, on the idea that the clickthrough rate to reports is pretty low. It’s also nice to be able to track our impact via some loose proxies like upvotes and the EA Forum prize.
...Follow-up question: Have you been happy with this choice so far? Are there ways the Forum could change such that you'd expect to get a lot more value out of posting research here?
The file wikieahuborg_w-20180412-history.xml contains the dump, which can be imported to a MediaWiki instance.
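A minimal sketch of how that import could be run, assuming a working local MediaWiki install (the paths below are placeholders; importDump.php and rebuildrecentchanges.php are MediaWiki's standard maintenance scripts):

```python
import subprocess
from pathlib import Path

# Placeholder locations; adjust to your own MediaWiki install and dump path.
MEDIAWIKI_DIR = Path("/var/www/mediawiki")
DUMP = Path("wikieahuborg_w-20180412-history.xml")

# importDump.php reads a MediaWiki XML dump from stdin and imports the revisions.
with DUMP.open("rb") as dump_file:
    subprocess.run(
        ["php", str(MEDIAWIKI_DIR / "maintenance" / "importDump.php")],
        stdin=dump_file,
        check=True,
    )

# Rebuild recent changes so the imported pages show up properly in the wiki UI.
subprocess.run(
    ["php", str(MEDIAWIKI_DIR / "maintenance" / "rebuildrecentchanges.php")],
    check=True,
)
```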
Re: The old wiki on the EA Hub, I'm afraid the old wiki data got corrupted, it wasn't backed up properly and it was deemed too difficult to restore at the time :(. So it looks like the information in that wiki is now lost to the winds.
I think a dump of the wiki is available at https://archive.org/details/wiki-wikieahuborg_w.
I was indeed simplifying, and e.g. probably should have said "global catastrophe" instead of "human extinction" to cover cases like permanent totalitarian regimes. I think some of the scenarios you mention could happen, but also think a bunch of them are pretty unlikely, and also disagree with your conclusion that "The bulk of the probability lies somewhere in the middle". I might be up for discussing more specifics, but also I don't get the sense that disagreement here is a crux for either of us, so I'm also not sure how much value there would be in continuing down this thread.