All of riceissa's Comments + Replies

I was indeed simplifying, and e.g. probably should have said "global catastrophe" instead of "human extinction" to cover cases like permanent totalitarian regimes. I think some of the scenarios you mention could happen, but also think a bunch of them are pretty unlikely, and also disagree with your conclusion that "The bulk of the probability lies somewhere in the middle". I might be up for discussing more specifics, but also I don't get the sense that disagreement here is a crux for either of us, so I'm also not sure how much value there would be in continuing down this thread.

I agree with most of the points in this post (AI timelines might be quite short; probability of doom given AGI in a world that looks like our current one is high; there isn't much hope for good outcomes for humanity unless AI progress is slowed down somehow). I will focus on one of the parts where I think I disagree and which feels like a crux for me on whether advocating AI pause (in current form) is a good idea.

You write:

But we can still have all the nice things (including a cure for ageing) without AGI; it might just take a bit longer than hoped. We d

... (read more)
2
Greg_Colbourn
6mo
How about an Option A.1: pause for a few years or a decade to give alignment a chance to catch up? At least stop at the red lights for a bit to check whether anyone is coming, even if you are speeding! I think this easily goes through, even for 1-10% p(doom|AGI), as it seems like ageing is basically already a solved problem or will be within a decade or so (see the video I linked to - David Sinclair; and there are many other people working in the space with promising research too).
6
titotal
6mo
Are you simplifying here, or do you actually believe that "utopia in our lifetime" or "extinction" are the only two possible outcomes given AGI? Do you assign a 0% chance that we survive AGI, but don't have a utopia in the next 80 years?

What if AGI stalls out at human level, or is incredibly expensive, or is buggy and unreliable like humans are? What if the technology required for utopia turns out to be ridiculously hard even for AGI, or substantially bottlenecked by available resources? What if technology alone can't create a utopia, and the extra tech just exacerbates existing conflicts? What if AGI access is restricted to world leaders, who use it for their own purposes? What if we build an unaligned AGI, but catch it early and manage to defeat it in battle? What if early, shitty AGI screws up in a way that causes a worldwide ban on further AGI development? What if we build an AGI, but we keep it confined to a box and can only get limited functionality out of it? What if we build an aligned AGI, but people hate it so much that it voluntarily shuts off? What if the AGI that gets built is aligned to the values of people with awful views, like religious fundamentalists? What if AGI wants nothing to do with us and flees the galaxy? What if [insert X thing I didn't think of here]?

IMO, extinction and utopia are both unlikely outcomes. The bulk of the probability lies somewhere in the middle.

I've wondered about this for independent projects and there's some previous discussion here.

See also the shadows of the future term that Michael Nielsen uses.

I think a general and theoretically sound approach would be to build a single composite game to represent all of the games together

Yeah, I did actually have this thought but I guess I turned it around and thought: shouldn't an adequate notion of value be invariant to how I decide to split up my games? The linearity property on Wikipedia even seems to be inviting us to just split games up in whatever manner we want.

And yeah, I agree that in the real world games will overlap and so there will be double counting going on by splitting games up. But if that's... (read more)

I asked my question because the problem with infinities seems unique to Shapley values (e.g. I don't have this same confusion about the concept of "marginal value added"). Even with a small population, the number of cooperative games seems infinite: for example, there are an infinite number of mathematical theorems that could be proven, an infinite number of Wikipedia articles that could be written, an infinite number of films that could be made, etc. If we just use "marginal value added", the total value any single person adds is finite across all such co... (read more)

I don't think the example you give addresses my point. I am supposing that Leibniz could have also invented calculus, so . But Leibniz could have also invented lots of different things (infinitely many things!), and his claim to each invention would be valid (although in the real world he only invents finitely many things). If each invention is worth at least a unit of value, his Shapley value across all inventions would be infinite, even if Leibniz was "maximally unlucky" and in the actual world got scooped every single time and so did not inve... (read more)
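
To make the worry concrete, here is a minimal worked version of the two-inventor case (the specific numbers are illustrative, not taken from the comments):

```latex
% One invention worth 1, which either of two people (Newton N, Leibniz L) could produce alone:
%   v(\emptyset) = 0, \quad v(\{N\}) = v(\{L\}) = v(\{N, L\}) = 1.
% Leibniz's marginal contribution is 1 when he comes first in an ordering and 0 otherwise, so
\[ \phi_L = \tfrac{1}{2}(1) + \tfrac{1}{2}(0) = \tfrac{1}{2}. \]
% If he is a credible co-inventor in games k = 1, 2, 3, ... each worth at least one unit, then
\[ \sum_{k=1}^{\infty} \phi_L^{(k)} \ \ge\ \sum_{k=1}^{\infty} \tfrac{1}{2} \ =\ \infty, \]
% even if in the actual world he gets scooped every single time.
```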

2
NunoSempere
1y
Assuming an infinite number of players. If there are only a finite number of players, there are only finitely many terms in the Shapley value calculation, and if each invention has finite value, that's finite.
2
MichaelStJules
1y
The Wikipedia page says: This means that there must be gains to distribute for anyone to get nonzero credit from that game, and that they in fact "collaborated" (although this could be in name only) to get any credit at all. Ignoring multiverses, infinitely many things have not been invented yet, but maybe infinitely many things will be invented in the future. In general, I don't think that Leibniz cooperated in infinitely many games, or even that infinitely many games have been played so far, unless you define games with lots of overlap and double counting (or you invoke multiverses, or consider infinitely long futures, or some exotic possibilities, and then infinite credit doesn't seem unreasonable). Furthermore, in all but a small number of games, he might make no difference to each coalition even when he cooperates, so get no credit at all. Or the credit could decrease fast enough to have a finite sum, even if he got nonzero credit in infinitely many games, as it becomes vanishingly unlikely that he would have made any difference even in worlds where he cooperates.
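
For the last point, one way the total can stay finite even across infinitely many games (illustrative symbols, not taken from the comment):

```latex
% If the credit Leibniz gets from the k-th game decays geometrically, say
\[ \phi_L^{(k)} \le C r^{k} \quad \text{for some } C > 0,\ 0 < r < 1, \]
% then the total across infinitely many games stays finite:
\[ \sum_{k=1}^{\infty} \phi_L^{(k)} \ \le\ \frac{C r}{1 - r} \ <\ \infty. \]
```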

Disagree-voting a question seems super aggressive and also nonsensical to me. (Yes, my comment did include some statements as well, but they were all scaffolding to present my confusion. I wasn't presenting my question as an opinion, as my final sentence makes clear.) I've been unhappy with the way the EA Forum has been going for a long time now, but I am noting this as a new kind of low.

What numerator and denominator? I am imagining that a single person could be a player in multiple cooperative games. The Shapley value for the person would be finite in each game, but if there are infinitely many games, the sum of all the Shapley values (adding across all games, not adding across all players in a single game) could be infinite.

2
Linch
1y
Hmm, I would guess that the number of realistic cooperative games in the world grows ~linearly (or some approximation[1]) with the number of people in the world, hence the denominator.

[1] I suppose if you think the growth is highly superlinear and there are ~infinity people, then Shapley values can grow to be ~infinite? But this feels like a general problem with infinities and not specific to Shapleys.

Example 7 seems wild to me. If the applicants who don't get the job also get some of the value, does that mean people are constantly collecting Shapley value from the world, just because they "could" have done a thing (even if they do absolutely nothing)? If there are an infinite number of cooperative games going on in the world and someone can plausibly contribute at least a unit of value to any one of them, then it seems like their total Shapley value across all games is infinite, and at that point it seems like they are as good as one can be, all without having done anything. I can't tell if I'm making some sort of error here or if this is just how the Shapley value works.

2
NunoSempere
1y
I agree that this is unintuitive. Personally, the part of this I like less is that it feels like people could cheat it by standing in line. But they can't cheat it! See this example: <http://shapleyvalue.com/?example=4>. You can't even cheat by noticing that something is impactful, and then self-modifying so that in the worlds where you were needed you would do it, because in the worlds where you would be needed, you wouldn't have done that modification (though there are some nuances here, like if you self-modify and there is some chance that you are needed in the future). Not sure if that addresses part of what you were asking about. I agree that SVs don't play nice with infinities, though I'm not sure whether there could be an extension which could handle them (for instance, looking at the limit of the Shapley value).
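
A small sketch of the "can't cheat by standing in line" point, using a toy game of my own rather than the example on shapleyvalue.com: a player whose presence never changes what any coalition can achieve (a "null player") gets exactly zero credit.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal contribution over all orderings."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before
    return {p: total / factorial(len(players)) for p, total in phi.items()}

# Toy game: the project is worth 1 only if the one person who can actually do the work
# participates; the two "bystanders" stand in line but never change any coalition's value.
players = ["worker", "bystander_1", "bystander_2"]
value = lambda coalition: 1.0 if "worker" in coalition else 0.0

print(shapley_values(players, value))
# -> {'worker': 1.0, 'bystander_1': 0.0, 'bystander_2': 0.0}
```

The contrast with Example 7, as described above, is presumably that the other applicants there genuinely could and would have done the job, so the value function does change when they are added to a coalition.
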
2
MichaelStJules
1y
In general, I don't think you should sum an individual's Shapley values across possible and maybe even actual games, because some actions the individual could take could be partially valuable in the same way in multiple games simultaneously, and you would double count value by summing. The sum wouldn't represent anything natural or useful in such cases. However, there may be specific sets of games where it works out, maybe when the value across games is in fact additive for the value to the world. This doesn't mean the games can't interact or compete in principle, but the value function for each game can't depend on the specific coalition set of any other game, but it can average over them.

I think a general and theoretically sound approach would be to build a single composite game to represent all of the games together, but the details could be tricky or unnatural, because you need to represent in which games an individual cooperates, given that they can only do so much in a bounded time interval.

1. Maybe you use the set of all players across all games as the set of players in the composite game, and cooperating in any game counts as cooperating in the composite game. To define the value function, you could model the distribution of games the players cooperate in conditional on the set of players cooperating in any game (taking an expected value). Then you get Shapley values the usual way. But now you're putting a lot of work into the value function.
2. Maybe you can define the set of players to be the product of the set of all players across all of the games and the set of games. That is, with a set I of individuals (across all games) and a set X of games, (i,x)∈I×X cooperates if and only if i cooperates in game x. Then you can define i's Shapley value as the sum of Shapley values over the "players" (i,x), ranging over the x. If you have infinitely many games in X, you get an infinite number of "players". There is work on games with infinitely many players (e
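
A rough sketch of the second approach above (players as individual-game pairs), under the strong simplifying assumption that the composite value is just the sum of the per-game values, i.e. the additive case mentioned at the start of the comment. The individuals, games, and payoffs here are made up for illustration.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Shapley values via the subset formula:
    phi_i = sum over coalitions S not containing i of |S|!(n-|S|-1)!/n! * (v(S + i) - v(S))."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for s in combinations(others, size):
                total += weight * (v(frozenset(s) | {i}) - v(frozenset(s)))
        phi[i] = total
    return phi

# Two toy games: "calculus" is worth 1 if anyone works on it; "wiki" pays 0.5 per contributor.
games = {
    "calculus": lambda coalition: 1.0 if coalition else 0.0,
    "wiki": lambda coalition: 0.5 * len(coalition),
}
individuals = ["newton", "leibniz"]

# Composite players are (individual, game) pairs, as in option 2 above.
composite_players = [(i, g) for i in individuals for g in games]

# Assumed additive case: the composite value is the sum of each game's value on its own members.
def composite_value(coalition):
    return sum(v({i for (i, g2) in coalition if g2 == g}) for g, v in games.items())

phi = shapley_values(composite_players, composite_value)

# An individual's total credit is the sum over the games they could cooperate in.
totals = {i: sum(val for (j, _), val in phi.items() if j == i) for i in individuals}
print(totals)  # roughly 1.0 each: 0.5 from "calculus" plus 0.5 from "wiki" (up to float rounding)
```

In this fully additive setup, each (individual, game) pair ends up with the same credit it would get in that game alone, so summing across games and solving the composite game agree; the harder cases are the non-additive interactions described in the comment.
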
2
riceissa
1y
Disagree-voting a question seems super aggressive and also nonsensical to me. (Yes, my comment did include some statements as well, but they were all scaffolding to present my confusion. I wasn't presenting my question as an opinion, as my final sentence makes clear.) I've been unhappy with the way the EA Forum has been going for a long time now, but I am noting this as a new kind of low.
4
Linch
1y
Presumably everything adds up to normality? Like you have a high numerator but also a high denominator. (But this is mostly a drive-by comment, I don't really understand Shapleys)

Do you know of any ways I could experimentally expose myself to extreme amounts of pleasure, happiness, tranquility, and truth?

I'm not aware of any way to expose yourself to extreme amounts of pleasure, happiness, tranquility, and truth that is cheap, legal, time efficient, and safe. That's part of the point I was trying to make in my original comment. If you're willing to forgo some of those requirements, then as Ian/Michael mentioned, for pleasure and tranquility I think certain psychedelics (possibly illegal depending on where you live, possibly unsafe,... (read more)

3
Ren Ryba
1y
Sure, makes sense. Thanks for your reply. If I wanted to prove or support the claim:

"given the choice between preventing extreme suffering and giving people more [pleasure/happiness/tranquility/truth], we should pick the latter option"

How would you recommend I go about proving or supporting that claim? I'd be keen to read or experience the strongest possible evidence for that claim. I've read a fair bit about pleasure and happiness, but for the other, less-tangible values (tranquility and truth) I'm less familiar with any arguments. It would be a major update for me if I found evidence strong enough to convince me that giving people more tranquility and truth (and pleasure and happiness in any practical setting, under which I include many forms of longtermism) could be good enough to forego preventing extreme suffering. This would have major implications for my current work and my future directions, so I would like to understand this view as well as I can in case I'm wrong and therefore missing out on something important.

It may end up being that such intensely positive values are possible in principle and matter as much as intense pains, but they don’t matter in practice for neartermists, because they're too rare and difficult to induce. Your theory could symmetrically prioritize both extremes in principle, but end up suffering-focused in practice. I think the case for upside focus in longtermism could be stronger, though.

If by "neartermism" you mean something like "how do we best help humans/animals/etc who currently exist using only technologies that currently exist, ... (read more)

7
MichaelStJules
1y
I think identical distributions for efficiency is a reasonable ignorance prior, ignoring direct intuitions and evidence one way or the other, but we aren't so ignorant that we can't make any claims one way or the other. The kinds of claims Shulman made are only meant to defeat specific kinds of arguments for negative skew over symmetry, like direct intuition, not to argue for positive skew. Given the possibility that direct intuition in this case could still be useful (and indeed skews towards negative being more efficient, which seems likely), contra Shulman, then without arguments for positive skew (that don't apply equally in favour of negative skew), we should indeed expect the negative to be more efficient. Furthermore, based on the arguments other than direct intuition I made above, and, as far as I know, no arguments for pleasure being more efficient than pain that don't apply equally in reverse, we have more reason to believe efficiencies should skew negative.

Also similar to gwern's comment, if positive value on non-hedonistic views does depend on things like reliable perception of the outside world or interaction with other conscious beings (e.g. compared to the experience machine or just disembodied pleasure) but bads don't (e.g. suffering won't really be any less bad in an experience machine or if disembodied), then I'd expect negative value to be more efficient than positive value, possibly far more efficient, because perception and interaction require overhead and may slow down experiences. However, similar efficiency for positive value could still be likely enough that the expected efficiencies are still similar enough and other considerations like their frequency dominate.

I think there are multiple ways to be a neartermist or longtermist, but "currently existing" and "next 1 year of experiences" exclude almost all effective animal advocacy we actually do and the second would have ruled out deworming.

Are you expecting yourself (or the average EA) to be able to cause greater quantities of intense pleasure than quantities of intense suffering you (or the average EA) can prevent in the next ~30 years, possibly considering AGI? Maybe large numbers of artificially sentient beings made to experience intense pleasure, or new drugs ... (read more)

I am worried that exposing oneself to extreme amounts of suffering without also exposing oneself to extreme amounts of pleasure, happiness, tranquility, truth, etc., will predictably lead one to care a lot more about reducing suffering compared to doing something about other common human values, which seems to have happened here. And the fact that certain experiences like pain are a lot easier to induce (at extreme intensities) than other experiences creates a bias in which values people care the most about.

Carl Shulman made a similar point in this post: "... (read more)

7
Timothy Chan
1y
I also have (moderate) depression and anxiety but I guess I wouldn't consider my experiences 'intense/extreme suffering' (although 'extreme amounts of suffering', as you've written, might make sense here). The kind of suffering that's experienced when, e.g. being eaten alive by predators, seems to me to be qualitatively different from the depression-induced suffering I experience. I somehow also 'got used to' depression-suffering after a while (probably independent of the anti-depressant effects) and also don't mind it as much as I did, but that numbness and somewhat bearable intensity doesn't seem to come with the 'more physical' causes of suffering.

It may end up being that such intensely positive values are possible in principle and matter as much as intense pains, but they don’t matter in practice for neartermists, because they're too rare and difficult to induce. Your theory could symmetrically prioritize both extremes in principle, but end up suffering-focused in practice. I think the case for upside focus in longtermism could be stronger, though.

It's also conceivable that pleasurable states as intense as excruciating pains in particular are not possible in principle after refining our definitions... (read more)

9
Ren Ryba
1y
I think this is a fair point, if you believe that pleasure can outweigh really awful suffering in practice. I do not currently believe this, for all practical purposes. Basically, my position is that these other human values - while somewhat valuable - are simply trivial in the face of the really awful suffering that is very common in our world. Do you know of any ways I could experimentally expose myself to extreme amounts of pleasure, happiness, tranquility, and truth? I'd be willing to expose myself to whatever you suggest, plus extreme suffering, to see if this changes my mind. Or we can work together to design a different experimental setup if you think that would produce better evidence.

Has Holden written any updates on outcomes associated with the grant?

Not to my knowledge.

I don't think that lobbying against OpenAI, other adversarial action, would have been that hard.

It seems like once OpenAI was created and had disrupted the "nascent spirit of cooperation", even if OpenAI went away (like, the company and all its employees magically disappeared), the culture/people's orientation to AI stuff ("which monkey gets the poison banana" etc.) wouldn't have been reversible. So I don't know if there was anything Open Phil could have done to... (read more)

3
Agrippa
2y
I don't mean to say anything pro DeepMind and I'm not sure there is anything positive to say re: DeepMind. I think that once the nascent spirit of cooperation is destroyed, you can indeed take the adversarial route. It's not hard to imagine successful lobbying efforts that lead to regulation -- most people are in fact skeptical of tech giants wielding tons of power using AI! Among other things known to slow progress and hinder organizations. It is beyond me why such things are so rarely discussed or considered. I'm sure that Open Phil and 80k's open cooperation with OpenAI has a big part in shaping narrative away from this kind of thing.

Eliezer's tweet is about the founding of OpenAI, whereas Agrippa's comment is about a 2017 grant to OpenAI (OpenAI was founded in 2015, so this was not a founding grant). It seems like to argue that Open Phil's grant was net negative (and so strongly net negative as to swamp other EA movement efforts), one would have to compare OpenAI's work in a counterfactual world where it never got the extra $30 million in 2017 (and Holden never joined the board) with the actual world in which those things happened. That seems a lot harder to argue for than what Elieze... (read more)

1
Agrippa
2y
Has Holden written any updates on outcomes associated with the grant?  I am not making this argument but certainly I am alluding to it. EA strategy (weighted by impact) has been to do things that in actuality accelerate timelines, and even cooperate with doing so under the "have a good person standing nearby" theory. I don't think that lobbying against OpenAI, other adversarial action, would have been that hard. But OpenPhil and other EA leadership of the time decided to ally and hope for the best instead. This seems off the rails to me.

What textbooks would you recommend for these topics? (Right now my list is only “Linear Algebra Done Right”)

I would recommend not starting with Linear Algebra Done Right unless you already know the basics of linear algebra. The book does not cover some basic material (like row reduction, elementary matrices, solving linear equations) and instead focuses on trying to build up the theory of linear algebra in a "clean" way, which makes it enlightening as a second or third exposure to linear algebra but a cruel way to be introduced to the subject for the fi... (read more)

Many domains that people tend to conceptualize as "skill mastery, not cult indoctrination" also have some cult-like properties like having a charismatic teacher, not being able to question authority (or at least, not being encouraged to think for oneself), and a social environment where it seems like other students unquestioningly accept the teachings. I've personally experienced some of this stuff in martial arts practice, math culture, and music lessons, though I wouldn't call any of those a cult.

Two points this comparison brings up for me:

  • EA seems unu
... (read more)

He was at UW in person (he was a grad student at UW before he switched his PhD to AI safety and moved back to Berkeley).

Setting expectations without making it exclusive seems good.

"Seminar program" or "seminar" or "reading group" or "intensive reading group" sound like good names to me.

I'm guessing there is a way to run such a group in a way that both you and I would be happy about.

The actual activities that the people in a fellowship engage in, like reading things and discussing them and socializing and doing giving games and so forth, don't seem different from what a typical reading club or meetup group does. I am fine with all of these activities, and think they can be quite valuable.

So how are EA introductory fellowships different from a bare reading club or meetup group? My understanding is that the main differences are exclusivity and the branding. I'm not a fan of exclusivity in general, but especially dislike it when there do... (read more)

2
mic
2y
I think the main difference is the commitment entailed by an introductory fellowship, due to having to apply and being accepted; you're expected to continue showing up to sessions and let your facilitator know if you can't make it. That way, attendance and enrollment are probably much higher than they would otherwise be. It doesn't have to be exclusive; many smaller groups accept everyone who applies. Based on some EA Forum comments I've read by Harvard EA members, you're right that the term "fellowship" is intended to "manufacture prestige". EA Oxford uses the term "seminar program" instead which I think gets the job done and is apparently less confusing to graduate students.

I didn't. As far as I know, introductory fellowships weren't even a thing in EA back in 2014 (or if they were, I don't remember hearing about them back then despite reading a bunch of EA things on the internet). However, I have a pretty negative opinion of these fellowships so I don't think I would have wanted to start one even if they were around at the time.

11
mic
2y

Interesting, could you say why you have a negative opinion of introductory fellowships?

(I tried starting the original EA group at UW in 2014. I'm no longer a student at UW and don't even live in the Seattle area currently.)

Seems like you found the Messenger group, which is the most active thing I am aware of. You've also probably seen the Facebook group and could try messaging some of the people there who joined recently.

I don't want to discourage you from trying, but here are some more details: I was unable to start an EA group at UW in 2014 (despite help from Seattle EA organizers). At the time I thought this was mainly due to my poor soci... (read more)

1
Alex Mallen
2y
I was wondering how Rohin tried starting the group. If he was doing it remotely, then it seems like that may have been a factor in why it failed the second time (because it would be hard to form a community). Thanks for suggesting messaging the people who most recently joined the UW EA Facebook group--I didn't think there were any new people, but there are a few!
13
mic
2y

Did you run an introductory fellowship? (Probably not since introductory fellowships only really started/took off in 2018.) I've found a big difference with trying to start an EA group through discussion meetings vs an introductory fellowship—the latter has been much more successful. Introductory fellowships are the core of CEA's University Group Accelerator Program (previously called MVP Group Pilot Program, MVP standing for "minimum viable product").

Scott Garrabrant has discussed this (or some very similar distinction) in some LessWrong comments. There's also been a lot of discussion about babble and prune, which is basically the same distinction, except happening inside a single mind instead of across multiple minds.

2
Ozzie Gooen
2y
Good find, I didn't see that discussion before. For those curious: Scott makes the point that it's good to separate "idea generation" from "vetted ideas that aren't wrong", and that it's valuable to have spaces where people can suggest ideas without needing them to be right. I agree a lot with this.

There are already websites like Master How To Learn and SuperMemo Guru, the various guides on spaced repetition systems on the internet (including Andy Matuschak's prompt-writing guide which is presented in the mnemonic medium), and books like Make It Stick. If I was working on such a project I would try to more clearly lay out what is missing from these existing resources.

My personal feeling is that enough popularization of learning techniques is already taking place (though one exception I can think of is to make SuperMemo-style incremental reading more ... (read more)

(I read the non-blockquote parts of the post, skimmed the blockquotes, and did not click through to any of the links.)

It seems like the kind of education discussed in this post is exclusively mass schooling in the developing world, which is not clear from the title or intro section. If that's right, I would suggest editing the title/intro to be clearer about this. The reason is that I am quite interested in improving education so I was interested to read objections to my views, but I tend to focus on technical subjects at the university level so I feel like this post wasn't actually relevant to me.

2
Aaron Gertler
3y
Fair comment; I've edited the title and the introduction.

For the past five years I have been doing contract work for a bunch of individuals and organizations, often overlapping with the EA movement's interests. For a list of things I've done, you can see here or here. I can say more about how I got started and what it's like to do this kind of work if there is interest.

Vipul Naik asked a similar question near the beginning of the pandemic.

What are your thoughts on chronic anxiety and DP/DR induced by psychedelics? Do you have an idea of how common this kind of condition is and how best to treat or manage it?

2
Dr. Matthew W. Johnson
3y
We don't know the population rate of how often these happen, but we do know that they happen. We have published multiple survey studies on such enduring negative psychological effects, e.g., https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5551678/ . This was survey work probing folks who claimed to have had a challenging experience, but among these there was a clear signal for some people to have long lasting disturbances. I've been approached by many such folks. I would note that we have not seen this in the modern or older eras of research when good screening and safety practices are in place.
2
Michael Pollan
3y
I don't know what DP/DR is, sorry. Episodes of anxiety are not uncommon, and some people using psychedelics outside of clinical trials have panic attacks; these are sometimes mis-diagnosed as psychotic breaks, because they can present similarly, but they usually pass. I can't stress enough the differences between using psychedelics "free range" and in a therapeutic setting, where even "bad trips" can prove so useful and constructive that they're typically referred to not as "bad" but "challenging trips." An experienced facilitator can divert a patient from a frightening episode, often by advising the patient to "surrender" to what's happening in the mind. The dissolution of one's ego can be terrifying unless you've been prepared to let it go, in which case it can be ecstatic. Set and setting is everything when it comes to psychedelics.

What do you think of the research chemicals scene (e.g. r/researchchemicals)?

1
Dr. Matthew W. Johnson
3y
Feel free to be more specific as there are many topics within this area. I'm hoping many of these will move to clinical research once the appropriate toxicology is conducted. Many appear to have fascinating effects with great potential. They are all slightly different biological probes with different receptor profiles, so the scientific potential from triangulating across compounds is enormous. For casual users, I certainly advise caution as many don't show the remarkable physiological profile of psilocybin, LSD, and DMT for healthy people. For many of these other compounds, you can't just take 5 or 10 times the dose and expect to live. So in that sense they are like most medications and even caffeine or alcohol. People need to realize that psilocybin, LSD, and DMT, and even mescaline, are pretty freakish in their relative physiological safety profile, and people shouldn't extrapolate that this will be true of other compounds. There have also been deaths from folks taking one compound and thinking it is another, and compounds drastically differ in their dose range.
30
Answer by riceissa
Mar 30, 2021

For me, I don't think there is a single dominant reason. Some factors that seem relevant are:

  • Moral uncertainty, both at the object-level and regarding metaethics, which makes me uncertain about how altruistic I should be. Forming a community around "let's all be altruists" seems like an epistemic error to me, even though I am interested in figuring out how to do good in the world.
  • On a personal level, not having any close friends who identify as an effective altruist. It feels natural and good to me that a community of people interested in the same thing
... (read more)

How is Nonlinear currently funded, and how does it plan to get funding for the RFPs?

We currently have a donor who is funding everything. In the future, we intend for it to be a combination of 1) fundraising for specific ideas when they are identified and 2) fundraising for non-earmarked donations from people who trust our research and assessment process.

Another idea is to set up conditional AMAs, e.g. "I will commit to doing an AMA if at least n people commit to asking questions." This has the benefit of giving each AMA its own time (without competing for attention with other AMAs) while trying to minimize the chance of time waste and embarrassment.

That one is linked from Owen's post.

In the April 2020 payout report, Oliver Habryka wrote:

I’ve also decided to reduce my time investment in the Long-Term Future Fund since I’ve become less excited about the value that the fund can provide at the margin (for a variety of reasons, which I also hope to have time to expand on at some point).

I'm curious to hear more about this (either from Oliver or any of the other fund managers).

18
[anonymous]
3y

Regardless of whatever happens, I've benefited greatly from all the effort you've put in your public writing on the fund Oliver. 

I am wondering how the fund managers are thinking more long-term about encouraging more independent researchers and projects to come into existence and stay in existence. So far as I can tell, there hasn't been much renewed granting to independent individuals and projects (i.e. granting for a second or third time to grantees who have previously already received an LTFF grant). Do most grantees have a solid plan for securing funding after their LTFF grant money runs out, and if so what do they tend to do?

I think LTFF is doing something valuable by giving pe... (read more)

3
gavintaylor
3y
Just to add a comment with regards to sustainable funding for independent researchers. There haven't previously been many options available for this, however, there are a growing number of virtual research institutes through which affiliated researchers can apply to academic funding agencies. The virtual institute can then administer the grant for a researcher (usually for much lower overheads than a traditional institution), while they effectively still do independent work. The Ronin Institute administers funding from US granters, and I am a Board member at IGDORE which can receive funding from some European granters. That said, it may still be quite difficult for individuals to secure academic funding without having some traditional academic credentials (PhD, publications, etc.). 

The LTFF is happy to renew grants so long as the applicant has been making strong progress and we believe working independently continues to be the best option for them. Examples of renewals in this round include Robert Miles, who we first funded in April 2019, and Joe Collman, who we funded in November 2019. In particular, we'd be happy to be the #1 funding source of a new EA org for several years (subject to the budget constraints Oliver mentions in his reply).

Many of the grants we make to individuals are for career transitions, such as someone retrainin... (read more)

Yeah, I am also pretty worried about this. I don't think we've figured out a great solution to this yet. 

I think we don't really have sufficient capacity to evaluate organizations on an ongoing basis and provide good accountability. Like, if a new organization were to be funded by us and then grow to a budget of $1M a year, I don't feel like we have the capacity to evaluate their output and impact sufficiently well to justify giving them $1M each year (or even just $500k). 

Our current evaluation process feels pretty good for smaller projec... (read more)

Ok I see, thanks for the clarification! I didn't notice the use of the phrase "the MIRI method", which does sound like an odd way to phrase it (if MIRI was in fact not involved in coming up with the model).

MIRI and the Future of Humanity Institute each created models for calculating the probability that a new researcher joining MIRI will avert existential catastrophe. MIRI’s model puts it at between and , while the FHI estimates between and .

The wording here makes it seem like MIRI/FHI created the model, but the link in the footnote indicates that the model was created by the Oxford Prioritisation Project. I looked at their blog post for the MIRI model but it looks like MIRI wasn't involved in creating the model (although the post author... (read more)

4
kokotajlod
3y
Thanks for this! It's been a long time since I wrote this so I don't remember why I thought it was from MIRI/FHI. I think it's because the guesstimate model has two sub-models, one titled "the MIRI method" and one titled "The community method (developed by Owen CB and Daniel Dewey)", who were at the time associated with FHI I believe. So I must have figured the first model came from MIRI and the second model came from FHI. I'll correct the error.

Did you end up writing this post? (I looked through your LW posts since the timestamp of the parent comment but it doesn't seem like you did.) If not, I would be interested in seeing some sort of outline or short list of points even if you don't have time to write the full post.

2
kokotajlod
3y
Thanks for following up. Nope, I didn't write it, but comments like this one and this one are making me bump it up in priority! Maybe it's what I'll do next.

I think the forum software hides comments from new users by default. You can see here (and click the "play" button) to search for the most recently created users. You can see that Nathan Grant and ssalbdivad have comments on this post that are only visible via their user page, and not yet visible on this post.

Edit: The comments mentioned above are now visible on this post.

2
Aaron Gertler
4y
Issa is correct about comments from new users being counted but hidden (until a moderator approves those users). Deleted comments also show up in the comment count for a brief time, though they get removed from the count eventually (otherwise, spam would create a lot more "ghost comments" that are currently visible).

So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.

Can you say how you came up with the "moving from 1% to 0.8%" part? Everything else in your comment makes sense to me.

1
trammell
4y
I'm just putting numbers to the previous sentence: "Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing." If "most" means "80%" there, then halting growth would lower the hazard rate from 1% to 0.8%.

So you think the hazard rate might go from around 20% to around 1%?

I'm not attached to those specific numbers, but I think they are reasonable.

That's still far from zero, and with enough centuries with 1% risk we'd expect to go extinct.

Right, maybe I shouldn't have said "near zero". But I still think my basic point (of needing to lower the hazard rate if growth stops) stands.
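
To spell out the arithmetic behind "with enough centuries with 1% risk we'd expect to go extinct", using the 1% and 0.8% per-century figures from this thread:

```latex
% With a constant per-century hazard rate \delta, the chance of surviving T centuries is
\[ P(\text{survival}) = (1 - \delta)^{T}. \]
% At \delta = 1\% per century, surviving 100 centuries has probability 0.99^{100} \approx 0.37;
% at \delta = 0.8\%, it is 0.992^{100} \approx 0.45.
% Either way this tends to zero as T grows, unless the hazard rate itself is eventually driven down.
```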

2
trammell
4y
Hey, thanks for engaging with this, and sorry for not noticing your original comment for so many months. I agree that in reality the hazard rate at t depends not just on the level of output and safety measures maintained at t but also on "experiments that might go wrong" at t. The model is indeed a simplification in this way. Just to make sure something's clear, though (and sorry if this was already clear): Toby's 20% hazard rate isn't the current hazard rate; it's the hazard rate this century, but most of that is due to developments he projects occurring later this century. Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing. So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.

What's doing the work for you? Do you think the probability of anthropogenic x-risk with our current tech is close to zero? Or do you think that it's not but that if growth stopped we'd keep working on safety (say developing clean energy, improving relationships between US and China etc.) so that we'd eventually be safe?

I think the first option (low probability of x-risk with current technology) is driving my intuition.

Just to take some reasonable-seeming numbers (since I don't have numbers of my own): in The Precipice, Toby Ord estimates ~19% chance of

... (read more)
1
Alex HT
4y
So you think the hazard rate might go from around 20% to around 1%? That's still far from zero, and with enough centuries with 1% risk we'd expect to go extinct. I don't have any specific stories tbh, I haven't thought about it (and maybe it's just pretty implausible idk).
5
BrownHairedEevee
4y
Unrolled for convenience. I have Twitter blocked using StayFocusd (which gives me an hour per day to view blocked websites), so reading it on a separate website allows me to take my time with it.

The timing of this AMA is pretty awkward, since many people will presumably not have access to the book or will not have finished reading the book. For comparison, Stuart Russell's new book was published in October, and the AMA was in December, which seems like a much more comfortable length of time for people to process the book. Personally, I will probably have a lot of questions once I read the book, and I also don't want to waste Toby's time by asking questions that will be answered in the book. Is there any way to delay the AMA or hold a second one at a later date?

Thanks for the comment! Toby is going to do a written AMA on the Forum later in the year too. This one is timed so that we can have video answers during Virtual EA Global.

2
Linch
4y
Strongly concur, as someone who preordered the book and is excited to read it.

I don't think you can add the percentages for "top or near top priority" and "at least significant resources". If you look at the row for global poverty, the percentages add up to over 100% (61.7% + 87.0% = 148.7%), which means the table is double counting some people.

Looking at the bar graph above the table, it looks like "at least significant resources" includes everyone in "significant resources", "near-top priority", and "top priority". For mental health it looks like "significant resources" has 37%, and "near-top priority" and "top priority" combined have 21.5% (shown as 22% in the bar graph).

So your actual calculation would just be 0.585 * .25 which is about 15%.
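
The arithmetic, written out using the figures already quoted above (the 0.25 factor is carried over from the comment being corrected):

```latex
% "At least significant resources" already contains the two higher tiers:
\[ 0.37 + 0.215 = 0.585, \]
% so the corrected calculation is
\[ 0.585 \times 0.25 \approx 0.146 \approx 15\%. \]
```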

4
Milan_Griffes
4y
Good point, thanks. I've edited my comment to correct the double-counting.

Stocking ~1 month of nonperishable food and other necessities

Can you say more about why 1 month, instead of 2 weeks or 3 months or some other length of time?

Also can you say something about how to decide when to start eating from stored food, instead of going out to buy new food or ordering food online?

7
eca
4y
(Based on feedback I've updated the doc to say "at least 1 month".) This is largely me aggregating numbers from people I respect, and my views are in flux (e.g. above). That said, I think it makes sense on a couple of grounds:

  • If you are below the age of 40 and/or have a mild case, this is ~enough food to ride out a self quarantine after showing symptoms (recovery time estimates vary widely, but I've seen 14-30 days).
  • This is also enough food to self quarantine for the estimated incubation period (5-14 days, with some reports of 20-25 days) if you think you might have been exposed.

My model is that there may be short-term food shocks as well, e.g. runs on grocery stores after more cases are discovered in the U.S. 1 month seems like probably 4x what you need for one of those. The way I view a food stock is to minimize the number of trips to high-risk places like grocery stores (as a young, healthy person). For people over 40 I don't think much more food, say 4 months, is crazy, because it might make more sense to completely self-isolate given higher mortality risk.

As for when it makes sense to start eating stocks instead of grocery shopping or going out, it's really hard to say. I personally plan on evaluating each public trip based on the logic of: how many people are infected in my area * adjustment for undertesting * number of people who have been to the location I am visiting in the last week * PPE safety likelihood. A lot of magic is happening in the adjustment for undertesting bit. I expect this to mean that I avoid crowded restaurants and grocery stores at peak hours starting nowish. My guess is I will choose to start eating my food stocks and only making rare large restock trips somewhere in the 100s of cases in the U.S., but I'm not sure. Would welcome any other ways to think about this.

I think that's one of the common ways for a post to be interesting, but there are other ways (e.g. asking a question that generates interesting discussion in the comments).

This has been the case for quite a while now. There was a small discussion back in December 2016 where some people expressed similar opinions. My guess is that 2015 is the last year the group regularly had interesting posts, but I might be remembering incorrectly.

2
Aaron Gertler
4y
By "interesting posts", do you mean original writing that hasn't been posted elsewhere first?

How did you decide on "blog posts, cross-posted to EA Forum" as the main output format for your organization? How deliberate was this choice, and what were the reasons going into it? There are many other output formats that could have been chosen instead (e.g. papers, wiki pages, interactive/tool website, blog+standalone web pages, online book, timelines).

This was a very deliberate decision on our part. Our primary goal is to get EA decision-makers to make better decisions as a result of our work. We thought the most likely place these decision-makers would see our work is on the EA Forum. We also thought people would be more likely to read the work if we wrote it in an article that didn’t require clicking through to a further page or PDF, on the idea that the clickthrough rate to reports is pretty low. It’s also nice to be able to track our impact via some loose proxies like upvotes and the EA Forum prize.

... (read more)

Follow-up question: Have you been happy with this choice so far? Are there ways the Forum could change such that you'd expect to get a lot more value out of posting research here?

wikieahuborg_w-20180412-history.xml contains the dump, which can be imported to a MediaWiki instance.
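
For anyone who wants to sanity-check the dump before importing it, here is a rough sketch using only the Python standard library (the filename is the one mentioned above; the actual import into a MediaWiki instance is normally done with MediaWiki's importDump.php maintenance script):

```python
import xml.etree.ElementTree as ET

DUMP = "wikieahuborg_w-20180412-history.xml"

def local_name(tag):
    # MediaWiki export XML is namespaced (the namespace version varies), so compare local tag names only.
    return tag.rsplit("}", 1)[-1]

titles = []
revisions = 0
for _, elem in ET.iterparse(DUMP, events=("end",)):
    name = local_name(elem.tag)
    if name == "title":
        titles.append(elem.text)
    elif name == "revision":
        revisions += 1
    elem.clear()  # drop parsed content to keep memory use down on a full-history dump

print(f"{len(titles)} pages, {revisions} revisions")
print("sample titles:", titles[:10])
```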

Re: The old wiki on the EA Hub, I'm afraid the old wiki data got corrupted, it wasn't backed up properly and it was deemed too difficult to restore at the time :(. So it looks like the information in that wiki is now lost to the winds.

I think a dump of the wiki is available at https://archive.org/details/wiki-wikieahuborg_w.

1
VPetukhov
4y
Unfortunately, no. The archive there contains only the html with the main page and some logos...