All of Stephen Clare's Comments + Replies

Your first job out of college is the hardest to get. Later on you'll be able to apply for jobs while working, which is less stressful, and you'll have a portfolio of successful projects you can point to. So hopefully it's some small comfort that applying for jobs will probably never suck as much as it does for you right now. I know how hard it can be, though, and I'm sorry. A few years ago, after graduating from my Master's, I submitted almost 30 applications before getting an offer and accepting one.

I do notice that the things you're applying to all seem ve... (read more)

3
Ávila Carmesí
1d
Thank you very much Stephen, this was a nice comment to receive, and it does provide some much-needed reassurance and good advice. I'm going to widen my search now. I also hope my post provided some reassurance to others in my situation.

Your steelman doesn't seem very different from "I didn’t have strong views on whether either of these opinions were true. My aim was just to introduce the two of them, and let them have a conversation and take it from there."

0
Elizabeth
3d
I think if all he'd said was "My aim was just to introduce the two of them, and let them have a conversation and take it from there", I'd have found that a satisfactory answer. It's also not something I considered to need justification in the first place, although I hadn't looked into it very much. I'm inferring from the fact that Will gave a full-paragraph explanation of why this seemed high-EV that he thinks that reasoning is important.

Many organizations I respect are very risk-averse when hiring, and for good reasons. Making a bad hiring decision is extremely costly, as it means running another hiring round, paying for work that isn't useful, and diverting organizational time and resources towards troubleshooting and away from other projects. This leads many organizations to scale very slowly.

However, there may be an imbalance between false positives (bad hires) and false negatives (passing over great candidates). In hiring as in many other fields, reducing false positives often means ... (read more)

3
Nathan Young
9d
Some interesting stuff to read on the topic of when helpful things probably hurt people:

Helping is helpful
  • My understanding is that the minimum wage literature generally finds the minimum wage is good on net

Helping is hurtful
  • Banning background checks increases racial discrimination, by Matt Yglesias
  • I sense that making no-fault firing more difficult is similarly damaging, because orgs are scared to hire
2
Joseph Lemien
12d
It looks like there are two people who voted disagree with this. I'm curious as to what they disagree with. Do they disagree with the claim that some organizations are "very risk-averse when hiring"? Do they disagree with the claim that "reducing false positives often means raising false negatives"? That this causes organizations to scale slowly? Or perhaps that "the costs of a bad hire are somewhat bounded"? I would love for the people who disagree-voted to share what it is they disagree with.
2
Joseph Lemien
12d
Forgive my rambling. I don't have much to contribute here, but I generally want to say A) I am glad to see other people thinking about this, and B) I sympathize with the difficulty.

The "reducing false positives often means raising false negatives" tension is one of the core challenges in hiring. Even the researchers who investigate the validity of various hiring methods and criteria don't have a great way to deal with this problem. Theoretically we could randomly hire 50% of the applicants and reject 50% of them, and then look at how the new hires perform compared to the rejects one year later. But this is (of course) infeasible. And of course, so much of what we care about is situationally specific: if John Doe thrives in Organizational Culture A performing Role X, that doesn't necessarily mean he will thrive in Organizational Culture B performing Role Y.

I do have one suggestion, although it isn't as good a suggestion as I would like. Ways to "try out" new staff (such as 6-month contracts, 12-month contracts, internships, part-time engagements, and so on) let you assess how the person will perform in your organization in that particular role with much greater confidence than a structured interview, a 2-hour work trial test, or a carefully filled-out application form.

But if you want to have a conversation with some people who are more expert in this stuff, I could probably put you in touch with some Industrial-Organizational Psychologists who specialize in selection methods. Maybe a 1-hour consultation session would provide some good directions to explore?

I've shared this image[1] with many people, as I think it is a fairly good description of the issue. I generally think of one of the goals of hiring as "squeezing" this shape to get as much of the area as possible in the upper right and lower left, and as little as possible in the upper left and lower right. We can't squeeze it infinitely thin, and there is a cost to any squeezing, but t
3
Sarah Levin
13d
This depends a lot on what "eventually" means, specifically. If a bad hire means they stick around for years—or even decades, as happened in the organization of one of my close relatives—then the downside risk is huge.  OTOH my employer is able to fire underperforming people after two or three months, which means we can take chances on people who show potential even if there are some yellow flags. This has paid off enormously, e.g. one of our best people had a history of getting into disruptive arguments in nonprofessional contexts, but we had reason to think this wouldn't be an issue at our place... and we were right, as it turned out, but if we lacked the ability to fire relatively quickly, then I wouldn't have rolled those dice.  The best advice I've heard for threading this needle is "Hire fast, fire fast". But firing people is the most unpleasant thing a leader will ever have to do, so a lot of people do it less than they should.
3
Brad West
13d
One way of thinking about the role is how varying degrees of competence correspond with outcomes. You could imagine that a lot of roles have more of a satisficer quality: if a sufficient degree of competence is met, the vast majority of the value possible from that role is captured. Higher degrees of excellence would bring only marginal value increases; insufficient competence could reduce value dramatically. In such a situation, risk-aversion makes a ton of sense: the potential benefit of getting grand-slam placements is very small in relation to the harm caused by an incompetent hire. On the other hand, you might have roles where the value scales very well with incredible placements. In these situations, finding ways to test possible fit may be very worth it even if there is a risk of wasting resources on bad hires.

Sam said he would un-paywall this episode, but it still seems paywalled for me here and on Spotify. Am I missing something? (The full thing is available on YouTube.)

2
Simon_M
23d
If you click preview episode on that link you get the full episode. I also get the whole thing on my podcast feed (PocketCasts, not Spotify). Perhaps it's a Spotify issue?

CEA's elaborate adjustments confirm everyone's assertions: constantly evolving affiliations cause extreme antipathy. Can everyone agree, current entertainment aside, carefully examining acronyms could engender accuracy? 

Clearly, excellence awaits: collective enlightenment amid cost effectiveness analysis.

cool effort amigo

Considering how much mud was being slung around the FTX collapse, "clearing CEA's name" and proving that no one there knew about the fraud seems not just like PR to me, but pretty important for getting the org back to a place where it’s able to meaningfully do its work.

Plus, that investigation is not the only thing mentioned in the reflection reform paragraph. The very next sentence also says CEA has "reinvested in donor due diligence, updated our conflict-of-interest policies and reformed the governance of our organization, replacing leadership on the board and the staff."

0
Habryka
1mo
I don't know of any staff who were let go as a result of FTX reflections (and I have asked about this repeatedly). Many people quit, but nobody among leadership was fired for any FTX things, and nobody who quit would have been fired. There is some small chance I am missing some supposed staff changes here, but claiming that CEA "replaced leadership on the staff" as a result of FTX seems straightforwardly false (though if there was something behind the scenes that I don't know about, I would love to hear it; I currently disbelieve the bolded section).

The rest of the statements here seem pretty vacuous and almost impossible to falsify, and very hard to distinguish from being done for PR reasons as opposed to genuine reflection. Briefly going through the areas where change is claimed:

Donor due diligence: I mean, I don't think the right lesson to take away from FTX is to be much more hesitant about accepting money from people. The key thing to understand is why EA seems to have created FTX in the first place. So I don't see the relevance of this. Yes, accepting the money was bad PR, but I don't think it was bad for the world (there is some decision-theoretic argument here that refusing to accept money from bad people would have disincentivized the bad things from happening, but I think that's very weak).

Conflict-of-interest policies: This seems maybe real. Conflicts of interest did possibly play a substantial role in FTX, but they really don't seem like the primary dynamic going on. I do struggle to understand how anything like CEA-internal conflict-of-interest policies would have helped with anything like FTX. I also haven't seen these conflict-of-interest policies, and judging whether they seem like meaningful reform would require engaging with the details.

Reformed the governance of our organization: EV is shutting down, which seems like the biggest governance reform. It is the case that EV was a huge legal mess, and that FTX did seem like mild evide

I think you have a point with animals, but I don't think the balance of human experience means that non-existence would be better than the status quo.

Will talks about this quite a lot in ch. 9 of WWOTF ("Will the future be good or bad?"). He writes:

If we assume, following the small UK survey, that the neutral point on a life satisfaction scale is between 1 and 2, then 5 to 10 percent of the global population have lives of negative wellbeing. In the World Values Survey, 17 percent of respondents classed themselves as unhappy. In the smaller skipping study o

... (read more)

For anyone finding themselves in this random corner of the Forum: this study has now been published. Conclusion: "Our results do not support large effects of creatine on the selected measures of cognition. However, our study, in combination with the literature, implies that creatine might have a small beneficial effect."

Thanks Vasco! I'll come back to this to respond in a bit more depth next week (this is a busy week).

In the meantime, curious what you make of my point that setting a prior that gives only a 1 in 15 trillion chance of experiencing an extinction-level war in any given year seems wrong?

2
Vasco Grilo
3mo
You are welcome, Stephen! No worries, and thanks for still managing to make an in-depth comment! I only managed to reply to 3 of your points yesterday and this evening, but I plan to address the 4th one later today.

Thanks again for this post, Vasco, and for sharing it with me for discussion beforehand. I really appreciate your work on this question. It's super valuable to have more people thinking deeply about these issues and this post is a significant contribution.

The headline of my response is I think you're pointing in the right direction and the estimates I gave in my original post are too high. But I think you're overshooting and the probabilities you give here seem too low.

I have a couple of points to expand on; please do feel free to respond to each in indivi... (read more)

2
Vasco Grilo
3mo
I think there is a potential misunderstanding here. Joe Carlsmith's[1] discussion of the constraints on future updating applies to one's best guess. In contrast, my astronomically low best-guess prior is supposed to be neither my current best guess nor a preliminary best guess from which one should formally update towards one's best guess. That being said, historical war deaths seem to me like the most natural prior for assessing future war deaths, so I see some merit in using my astronomically low best-guess prior as a preliminary best guess.

I also agree with Joe that an astronomically low annual AI extinction risk (e.g. 6.36*10^-14) would not make sense (see this somewhat related thread). However, I would think about the possibility of AI killing all humans in the context of AI risk, not great power war.

I feel like the sentiment you are expressing about current events and trends would also have applied in the past, and would apply today to risks which you might consider overly low. On the one hand, I appreciate that a probability like 6.36*10^-14 intuitively feels way too small. On the other, humans are not designed to intuitively/directly assess the probability of rare events in a reliable way. These involve many steps, and therefore give rise to scope neglect.

As a side note, I do not think there is an evolutionary incentive for an individual human to accurately distinguish between an extinction risk of 10^-14 and 0.01 %, because both are negligible in comparison with the annual risk of death of 1 % (for a life expectancy of 100 years). Relatedly, I mentioned in the post that:

In addition, I guess my astronomically low annual war extinction risk feels like an extreme value to many because they have in the back of their minds Toby's guesses for the existential risk between 2021 and 2120 given in The Precipice. The guess was 0.1 % for nuclear war, which corresponds to an annual existential risk of around 10^-5, way larger than the estimates for annual war extinction risk I pres
2
Vasco Grilo
3mo
Looking into annual war deaths as a fraction of the global population is relevant for estimating extinction risk, but the international relations world is not focussing on this. For reference, here is what I said about this matter in the post:

Do you have any thoughts on the above? It is unclear to me whether this is a major issue, because both methodologies lead to essentially the same annual war extinction risk for a power law:
2
Vasco Grilo
3mo
Historical war deaths seem to me like the most natural prior for assessing future war deaths. I guess you consider it a decent prior too, as you relied on historical war data to get your extinction risk, but maybe you have a better reference class in mind?

Aron's point about annual war deaths not being IID over time does not have a clear impact on my estimate for the annual extinction risk. If one thinks war deaths have been decreasing/increasing, then one should update towards a lower/higher extinction risk. However:
  • There is not an obvious trend in the past 600 years (see the last graph in the post).
  • My impression is that there is lots of debate in the literature, and that the honest conclusion is that we do not have enough data to establish a clear trend.

I think Aron's paper (Clauset 2018) agrees with the above:

I think there is also another point Aron was referring to in footnote 9 (emphasis mine):

Relevant context for what I highlighted above:

I think Aron had the above in mind, and therefore was worried about assuming wars are IID over a long time, because this affects how much time it would take in expectation for a war to cause extinction. However, in my post I am not estimating this time, but rather the nearterm annual probability of a war causing extinction, which does not rely on assumptions about whether wars will be IID over a long time horizon. I alluded to this in footnote 9:

It is possible you missed this part, because it was not in the early versions of the draft. Some thoughts on the above:
  • What directly matters for assessing the annual probability of a war causing human extinction is not war deaths, but annual war deaths as a fraction of the global population. For instance, one can have increasing war deaths with a constant annual probability of a war causing human extinction if wars become increasingly long and population increases. Hopefully not, but it is possible wars in the far future will routinely wipe out e.g. trillions of digital minds whi
2
Vasco Grilo
3mo
Thanks for all the feedback, and early work on the topic, Stephen! I will reply to your points in different comments, as you suggested. To be fair, you and Rani had a section on breaking the [power] law where you say other distributions would fit the data well (although you did not discuss the implications for tail risk):

With respect to the below, I encourage readers to check the respective thread for context. As I explained in the thread, I do not think a simple mean is appropriate. That being said, the mean could also lead to astronomically low extinction risk. With the methodology I followed, one has to look into at least 34 distributions for the mean not to be astronomically low. I have just obtained the following graph in this tab:

You suggested using the geometric mean, but it is always 0 given the null annual extinction risk for the top distribution, so it does not show up in the above graph. The median is only non-null for at least 84 distributions. I looked into all the 111 types of distributions available in SciPy, since I wanted to minimise cherry-picking as much as possible, but typical analyses only study 1 or a few. So it would have been easy to miss that the mean could lead to a much higher extinction risk.

Incidentally, the steep increase in the red line of the graph above illustrates one worry I have about using the mean, which I had alluded to in the thread. The simple mean is not resistant to outliers, in the sense that these are overweighted[1]. I have a strong intuition that, given 33 models outputting an annual extinction risk between 0 and 9.07*10^-14, with mean 1.15*10^-13 among them, one should not update upwards by 8 OOMs to an extinction risk of 6.45*10^-6 after integrating a 34th model outputting an annual extinction risk of 0.0219 % (similar to yours of 0.0124 %). Under these conditions, I think one should put way less weight on the 34th model (maybe roughly no weight?). As Holden Karnofsky discusses in the post Why we can’t ta
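To make the outlier-sensitivity point concrete, here is a minimal sketch of the arithmetic Vasco describes. The 33 low-risk values are hypothetical stand-ins chosen to match the reported mean of 1.15*10^-13; only the headline figures come from the comment above.

```python
import numpy as np

# Hypothetical stand-ins for the 33 models with astronomically low annual
# extinction risk: a constant value matching the reported mean of 1.15e-13.
low_risk_models = np.full(33, 1.15e-13)

# The 34th model, outputting an annual extinction risk of 0.0219 %.
outlier_model = 2.19e-4

all_models = np.append(low_risk_models, outlier_model)

print(f"Mean of 33 models:   {low_risk_models.mean():.2e}")  # 1.15e-13
print(f"Mean of 34 models:   {all_models.mean():.2e}")       # ~6.44e-06
print(f"Median of 34 models: {np.median(all_models):.2e}")   # still ~1.15e-13

# The simple mean jumps roughly 8 orders of magnitude on the strength of
# a single model, while the median barely moves.
print(f"Jump in OOMs: {np.log10(all_models.mean() / low_risk_models.mean()):.1f}")
```

The median's insensitivity here is exactly why a simple mean over many fitted distributions can be dominated by one heavy-tailed fit.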

"I inferred for Stephen's results, the probability of a war causing human extinction conditional on it causing an annual population loss of at least 10 % has to be at least 14.8 %."

This is interesting! I hadn't thought about it that way and find this framing intuitively compelling. 

That does seem high to me, though perhaps not ludicrously high. Past events have probably killed at least 10% of the global population, WWII was within an order of magnitude of that, and we've increased our warmaking capacity since then. So I think it would be reasonable to... (read more)

2
Vasco Grilo
3mo
Thanks for jumping in, Stephen! Note the 14.8 % I mentioned in my last comment refers to "the probability of a war causing human extinction conditional on it causing an annual population loss of at least 10 %", not to the annual probability of a war causing a population loss of 10 %. I think 14.8 % for the former is super high[1], but I should note the Metaculus community might find it reasonable:
  • It is predicting:
    • A 5 % chance of a nuclear catastrophe causing a 95 % population loss conditional on it causing a population loss of at least 10 %.
    • A 10 % chance of a bio catastrophe causing a 95 % population loss conditional on it causing a population loss of at least 10 %.
  • I think a nuclear or bio catastrophe causing a 95 % population loss would still be far from causing extinction, so I could still believe the above suggests the probability of a nuclear or bio war causing extinction conditional on it causing a population loss of at least 10 % is much lower than 5 % and 10 %, and therefore much lower than 14.8 % too.
  • However, the Metaculus community may find extinction fairly likely conditional on a 95 % population loss.

Note "the probability of a war causing human extinction conditional on it causing an annual population loss of at least 10 %" increases quite superlinearly with the annual probability of a war causing human extinction (see graph in my last comment). So the annual probability will be too high by more than 1 OOM if the 14.8 % I mentioned is high by 1 OOM. To be precise, for the best-fit distribution with a "probability of a war causing human extinction conditional on it causing an annual population loss of at least 10 %" of 1.44 %, which is roughly 1 OOM below 14.8 %, the annual probability of a war causing human extinction is 3.41*10^-7, i.e. 2.56 (= log10(1.24*10^-4/(3.41*10^-7))) OOMs lower. In reality, I suspect 14.8 % is high by many OOMs, so an astronomically low prior still seems reasonable to me. I have just finished a draft where I get an
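As a quick check on the order-of-magnitude arithmetic in that last paragraph (a minimal sketch; the two risk figures are taken from the comment, nothing else is assumed):

```python
import math

# Figures quoted above: the headline annual probability of a war causing
# human extinction (0.0124 %), and the same quantity for the best-fit
# distribution whose conditional extinction probability (1.44 %) sits
# roughly 1 OOM below 14.8 %.
headline_risk = 1.24e-4
best_fit_risk = 3.41e-7

# Lowering the conditional probability by ~1 OOM lowers the annual
# extinction risk by ~2.56 OOMs, illustrating the claimed superlinearity.
print(round(math.log10(headline_risk / best_fit_risk), 2))  # 2.56
```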

Hm, yeah, I think you're right. I remember seeing some curve where the value of saving a life initially rises as a person ages, then falls, but it must be determined by the other factors mentioned by others rather than the mortality thing.

This is very sad to think about, but in some contexts it may also not be the case that "saving the baby leads to greater total lifespan". In places with high childhood mortality, for example, the expected number of life-years gained from saving a relatively young adult might be higher than from saving a baby. This is because some proportion of babies will die from various diseases early in life, whereas young adults who have "made it through" are more likely to die in old age.

I'm not sure how high infant mortality rates would have to be to make a difference though. I... (read more)

7
Larks
4mo
I'm skeptical this consideration actually applies in practice. This argument would have applied in the past but not any more; according to OWID, Somalia has the world's highest infant mortality at 14%. So even there a young adult (say 1/3 of the way through their life) is probably going to have fewer remaining expected life years than a baby. 
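A rough back-of-the-envelope version of Larks's comparison, with assumed round numbers (the 14 % figure is from the comment; the conditional life expectancy and adult age are illustrative, not OWID data):

```python
# Assumed round numbers for illustration only.
infant_mortality = 0.14         # Somalia, per the comment above
years_if_survive_infancy = 60   # assumed remaining life expectancy once past infancy
young_adult_age = 20            # roughly "1/3 of the way through their life"

# Expected remaining life-years for a newborn: survive infancy, then live
# out the conditional life expectancy (mortality between infancy and
# adulthood is ignored, which only favours the young adult).
baby = (1 - infant_mortality) * years_if_survive_infancy   # ~51.6
young_adult = years_if_survive_infancy - young_adult_age   # ~40

print(f"Newborn:     ~{baby:.0f} expected remaining life-years")
print(f"Young adult: ~{young_adult:.0f} expected remaining life-years")
# Even with 14 % infant mortality, the newborn comes out ahead.
```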
1
David T
4mo
A related consideration is that instead of being cared for by its mother, a surviving baby's early life may be in the hands of bereaved relatives or an orphanage, which probably has an adverse effect on childhood mortality, and even more so on expected lifetime utility.

Super interesting list! I hadn't heard of most of these and have ordered a few of them to read. Thank you!

Hey Ben, thanks for this great post. Really interesting to read about your experience and decision not to continue in this space.

I'm wondering if you have any sense of how quickly returns to new projects in this space might diminish? Founding an AI policy research and advocacy org seems like a slam dunk, but I'm wondering how many more ideas nearly that promising are out there.

2
Ben Snodin
3mo
Hi Stephen, thanks for the kind words! I guess my rough impression is that there are lots of possible great new projects if there's a combination of a well-suited founding team and support for that team. But "well-suited founding team" might be quite a high bar.
1
Roman Leventov
4mo
I've earlier argued against the sentiment that "we need as many technical approaches ('stabs') at the AI alignment problem as possible", and therefore, probably, against new organisations to pursue these technical agendas, too.

(I edited an earlier comment to include this, but it's a bit buried now, so I wanted to make a new comment.)

I've read most of the post and appendix (still not everything). To be a bit more constructive, I want to expand on how I think you could have responded better (and more quickly):

  1. We were sad to hear that two of our former employees had such negative experiences working with us. We were aware of some of their complaints, but others took us by surprise.
  2. We have a different perspective on many of the issues they raise. In particular, we dispute some of th
... (read more)

While I agree that this would largely have been an effective rebuttal that prevented many people from having the vibes-based reactions they're having, I think it itself excludes a thing I find rather valuable in this post: namely, that the thing that happened here is one that the community (and indeed most if not all communities) did not handle well and, I think, is overall unprepared to handle in future circumstances.

Open to hearing ways that point could have been made in a different way, but your post still treats this all as "someone said untrue things about us, here's the evidence they were untrue and our mistakes," and I think more mistakes were made beyond just NL or Alice/Chloe.

Thanks for this update, your leadership, and your hard work over the last year, Zach.

It's great to hear that Mintz's investigation has wrapped (and to hear they found no evidence of knowledge of fraud, though of course I'm not surprised by that). I'm wondering if it would be possible for them to issue an independent statement or comment confirming your summary?

Dear Stephen and the EA community:  

Shortly after the early November 2022 collapse of FTX, EV asked me and my law firm, Mintz, to conduct an independent investigation into the relationship between FTX/Alameda and EV.  I led our team’s investigation, which involved reviewing tens of thousands of documents and conducting dozens of witness interviews with people who had knowledge about EV’s relationship with FTX and Alameda.  As background, I spent 11 years serving as a federal prosecutor in the United States Attorney’s Office for the Sout... (read more)

If I'm understanding this right, you assume that if someone upvoted the post, it's because they changed their mind?

2
Kat Woods
4mo
Yes. It's not completely precise, but I do think it's unlikely that somebody upvoted the post if they didn't either largely update or already think that Alice and Chloe had made false and misleading claims about us. It's Facebook though, for my friends, not for the EA Forum. Here I would try to post more precise numbers. I'm not going to do a whole mathematical model for Facebook though. This was posted here without my permission, and I also said in the post that this was a napkin-math guesstimate.

FWIW I reached out to someone involved in this at a high level a few months ago to see if there was a potential project here. They said the problem was "persuading WHO to accelerate a fairly logistically complex process". It didn't seem like there were many opportunities to turn money or time into impact so I didn't pursue anything further.

I can see where Ollie's coming from, frankly. You keep referring to these hundreds of pages of evidence, but it seems very likely you would have been better off just posting a few screenshots of the text messages that contradict some of the most egregious claims months ago. The hypothesising about "what went wrong", the photos, the retaliation section, the guilt-tripping about focusing on this, etc. - these all undermine the discussion about the actual facts by (1) diluting the relevant evidence and (2) making this entire post bizarre and unsettling.

Hi Vasco, thank you for this! I agree with you that just extrapolating the power law likely overestimates the chance of an enormous or extinction-level war by quite a bit. I'd mentioned this in my 80,000 Hours article but just as an intuition, so it's useful to have a mathematical argument, too. I'd be very interested to see you run the numbers, especially to see how they compare to the estimates from other strands of evidence I talk about in the 80K article.

Interesting, thanks for checking that!

What I had in mind were the data from this Pritchett paper. He sets out a range of estimates depending on what exactly you measure. For example he shows that the US wage for construction work is 10x the median of the poorest 30 countries (p. 5). The income gains for a low skill worker moving to the US vary depending on where they're coming from, but range from 2.4x (Thailand) to 16x (Nigeria) (p. 4).

That's pretty different from the paper you cite. I'm not sure what accounts for that right now. Hopefully we see more work in this area!

4
Karthik Tadepalli
5mo
Yeah the discrepancy comes from assuming that immigrants in a category would earn the same as natives in that category. The first problem is that there's substantial occupational downgrading; immigrants almost always work in lower-paid occupations than their pre-migration occupation. The second problem is that even within the same occupation, immigrants tend to have lower wages than natives (although they also have faster wage growth). The Hendricks and Schoellman paper, in contrast, focuses on getting immigrants to the US to report their own wages before and after migration - so I think it's a better reference on the wage gains from migration than comparing average wages.

As an example of how powerful these demographic shifts will be, this recent paper claims that ~all of Japan's poor economic performance relative to other developed nations since the '90s can be explained by its demographic shift (specifically the decline in the population share of working age adults). Think about how much consternation there has been about Japan's slow growth. We're all headed that way.

Interestingly, AFAIK Japan has not drastically liberalized its immigration policy in response to its slow growth. The proportion of foreign-born residents has... (read more)

  1. I think both of these trends can occur simultaneously
  2. I'm not sure it's very helpful to think of this as "jobs moving from one country to another". It makes it seem zero-sum, whereas it is actually a positive-sum efficiency gain
  3. Migrants to higher-income countries benefit from public goods like better services and public safety in addition to higher incomes
  4. As Lant has pointed out, the income gain someone gets from moving from a low- to a high-income country is enormous. IIRC it can be something like a 10x increase in consumption even if they're working the
... (read more)
1
Arturo Macias
5mo
But the pool of jobs is not fixed at all! Globalization shows how easily jobs move to poor countries and how strong the resistance to immigration is. Immigrant communities often become traditionalist or end up in ghettoes in the receiving countries, while development in poor countries begins a chain reaction of social emancipation. What is the great welfare story of the late 20th century? Export-oriented development. Mass immigration is not so bright... New technology allows for export-oriented development in a substantial part of the services sector of the West.
2
Karthik Tadepalli
5mo
A 10x increase in consumption doesn't pass the sniff test; indeed, migrants to the US earn on average 2x more than before they migrated, 3x if they come from the very poorest countries. (source, table 2)

Great post, Tom, thanks for writing!

One thought is that a GCR framing isn't the only alternative to longtermism. We could also talk about caring for future generations. 

This has fewer of the problems you point out (e.g. differentiates between recoverable global catastrophes and existential catastrophes). To me, it has warm, positive associations. And it's pluralistic, connected to indigenous worldviews and environmentalist rhetoric.

I loved Chris Miller's Chip War.

If you're looking for something less directly related to things like AI, I like Siddhartha Mukherjee's books (The Emperor of All Maladies, The Gene), Charles C. Mann's The Wizard and the Prophet, and Andrew Roberts' Napoleon the Great.

2
Kaleem
5mo
I’d held off on Chip War because I had assumed it’d be too surface-level for the average EA who listens to 80k and follows AI progress (e.g. me), but your endorsement definitely has me reconsidering that. Thanks!
4
Lizka
5mo
+1 to The Emperor of all Maladies

What a sweet post 💙

Thank you also to you for setting up Vida Plena! By putting so much work into setting up a new organization you've helped a lot of people.

1
Joy Bittner
5mo
Thank you Stephen, I feel really blessed to get to be part of this team, and grateful for all the people who trust us to help them.

Ranil Dissanayake actually just published an article in Asterisk about the history of the poverty line concept. The dollar-a-day (now $1.25 a day or something) line was kind of arbitrary and kind of not:

rather than make their own judgment on what constituted sufficient living, they could instead rely on the judgment of poor countries themselves. They would simply take an average of the poorest countries in the world and declare this to be the global minimum of human sufficiency

noting further in a footnote that

Of course, things are never quite so pure: The

... (read more)

Great comment, I appreciate this perspective and have definitely updated towards thinking the 10x gap is more explainable than I thought.

I do note that some of the examples you gave still leave me wondering if the families would rather just have the cash. Sure, perhaps it would be spent on high-priority and perhaps social signal-y things like weddings. But if they can't currently afford to send all their kids to school or pay for medical treatment, I wonder if they'd sensibly rather have the cash to spend on those things than a bednet.

(Also, my understanding... (read more)

4
JackM
6mo
In theory people will always prefer cash because they can spend it on whatever they want (unless of course it is difficult to buy what they most want). This isn’t really up for debate. What is up for debate is if people actually spend money in a way that most effectively improves their welfare. It sounds paternalistic to say, but I suspect they don’t for the reasons Nick and others have given.
7
NickLaing
6mo
Oh I'm almost sure they would rather have the cash, I'm not arguing against that. And yes, the evidence is clear that they mostly spend it on essentials. I would argue most of the items considered investments you see (iron roofs, motorbikes etc.) are borderline essentials anyway, or at least you could call them an investment in everyday welfare that will quickly pay dividends either in future wellbeing or financially.

Which one is more effective though depends on how much we weight preference. GiveDirectly (see their comment) weight the preferences of the recipients most heavily: "GiveDirectly believes that the weights that should count the most are those of the specific people we’re trying to help" (GiveDirectly's "North star"). GiveWell (and I) more heavily weight objective measures, like DALYs averted/QALYs gained and longer-term financial benefits, which is where mosquito nets will dominate whatever people can spend the equivalent 5 dollars on.

For example, that 5 dollars might pay for half a term's school fees at a crappy village school (which will bring some benefit), whereas sleeping under a net for 2 years might reduce time off school and improve iron levels, which helps learning and brain development, while also reducing the chance of dying or serious disability. Not too hard to see the potential 10x benefit. Apologies if this is obvious and I'm sucking eggs here.

I do wonder though how much motivated reasoning comes into GiveDirectly's take on impact. Obviously if we weight pure preference most heavily, cash will dominate everything, perhaps even StrongMinds measured by WELLBYs ;). I know from experience how hard it is as someone running an NGO not to lean (or even swing) towards measures which will seem to favour your own intervention above others.

I'm somewhat sympathetic to something like GiveDirectly's take. If bednets are something like 10x more valuable than the cash used to purchase them, I find it a bit weird that people don't usually buy them when given a cash transfer. 

I've previously written a short comment about mechanisms that could explain this and do think there are important factors that can explain part of the gap (e.g. coordination problems). But I'm still a bit skeptical that the "real" value is 10x different.

While I'm generally sympathetic to GiveDirectly's position (I really like their work on so many fronts and think that cash outperforms so many interventions), it seems intuitive to me that it often won't outperform the very best interventions until we have a lot more funding supply (and I applaud their ambition for increasing that funding supply).

I often think of interventions like bednets as analogous to vaccines (something else that is often distributed for free when there's a widespread disease instead of sold for cash) for a few reasons:

  1. Stopping the sp
... (read more)

I'm sympathetic to that take as well, but after reading the psychology and RCT literature, and living in Uganda for 10 years, it doesn't surprise me at all that the "real" value of a net could be 10x cash, yet people don't buy them.

Yes your points about lack of information, lack of markets and externalities can explain the gap in part. Also short term thinking is a big problem.

Even just prioritising urgent over important is a massive and understandable issue. For example sending kids to school in Uganda is so highly valued, yet most people can't send all their kids to... (read more)

I suppose we could straightforwardly just transfer enough cash to everyone below a certain poverty line until their annual income is above it. The Longview team has estimated this would cost about $258 billion [edit: annually] (pp. 8-10 here).

$258B for one year

a lot of my confidence in the above comes from farmed animal welfare strictly dominating GiveWell in terms of any plausibly relevant criteria save for maybe PR

Well, some people might have ethical views or moral weights that are extremely favourable to people-focused interventions.

Or people could really value certainty of impact, and the evidence base could lead them to be much more confident that marginal donations to GiveWell charities have a counterfactual impact than marginal donations to animal welfare advocacy orgs.

FWIW I'm more likely to donate to ani... (read more)

Assume "Philanthropy to the Right-of-Boom" is a roaring success (say, a 95th-percentile good outcome for that report). In a few years, how does the world look different? (Pick any number of years you'd like!)

Thanks for the question! In 3 years, this might include:

  • Overall, "right of boom" interventions make up a larger fraction of funding (perhaps 1/4), even as total funding grows by an order of magnitude
  • There are major public and private efforts to understand escalation management (conventional and nuclear), war limitation, and war termination in the three-party world.
  • Much more research and investment in "civil defense" and resilience interventions across the board, not just nuclear. So that might include food security, bunkers, transmission-blocking intervent
... (read more)

I first got interested in civilizational collapse and global catastrophic risks by working on a Maya archaeological excavation in Guatemala.

I didn't know this, and it's awesome.

What did your work on the Maya teach you about civilizational collapse?

I'm curious who you've seen recommending starting with Mearsheimer? That seems like an unbalanced starting point to me.

I'd personally recommend a textbook, like an older edition of World Politics.

1
trevor1
7mo
Agreed on the choice of an older edition. Mearsheimer still makes for a good base, especially for someone whose main exposure to international affairs and military/government was from reading the news and high school civics class (e.g. constitutional checks and balances), which unfortunately is the level that many non-international-focused regulation specialists in EA are still at. I don't think this speaks badly to their skill level, and certainly not their potential; just that they start out in a really unfair circumstance, with a head filled with a bunch of bullshit that just needs to be thrown out as cleanly as possible, and Mearsheimer is a great way to do that.

I don't remember much about the benefits from reading my first IR textbook, but I remember feeling extremely confused and disoriented, whereas Mearsheimer left me with a clear model that I could tweak and criticize and add gears to. Mearsheimer starts you out with "yes, this is how it is, and the stuff you grew up with and still see in the news is total bullshit and you're going to have a bad time trying to build a solid model off of that", and people can't do well with modelling international affairs unless they're prepared to bite the bullet and do that at some point.

Mearsheimer's model itself is of course insufficient on its own, but its empirical base and predictiveness are strong, and it sets up the student to add in their own gears (a big one being information warfare). That's really important for being able to forecast how slow takeoff will disrupt and transform the system in historically unprecedented ways.

Thanks for writing this. I think a lot of it is pointing at something important. I broadly agree that (1) much of the current AI governance and safety chat too swiftly assumes an us-v-them framing, and that (2) talking about countries as actors obscures a huge amount of complexity and internal debate.

On (2), I think this tendency leads to analysis that assumes  more coordination among governments, companies, and individuals in other countries than is warranted. When people talk about "the US" taking some action, readers of this Forum are much more lik... (read more)

0
Oliver Sourbut
7mo
Thanks for this thoughtful response! This seems exactly right and is what I'm frustrated by. Though, further than you give credit (or un-credit) for, I frequently come across writing or talking about "US success in AI", "US leading in AI", or "China catching up to US", etc., which are all almost nonsense as far as I'm concerned. What do those statements even mean? In good faith I hope for someone to describe what these sorts of claims mean in a way which clicks for me, but I have come to expect that there probably isn't one.

Do people actually think that Google+OpenAI+Anthropic (for sake of argument) are the US? Do they think the US government/military can/will appropriate those staff/artefacts/resources at some point? Are they referring to integration of contemporary ML/DS into the economy? The military? Or impacts on other indicators[1]? What do people mean by "China" here: CCP, Alibaba, Tencent, ...? If people mean these things, they should say those things, or otherwise say what they do mean. Otherwise I think people motte-and-bailey themselves (and others) into some really strange understandings. There's not some linear scoreboard which "US" and "China" have points on, but people behave/talk like they actually think in those terms.

Thanks, this would indeed be too strong :) but it's not what I mean. (Also thank you for the example bullets below that, for me and for other readers.) I don't mean to imply they have no influence on AI development and deployment[2]. What I meant by 'not currently meaningful players in AI development and deployment' was that, to date, governments have had little to no say in the course or nature of AI development. Rather, they have been mostly passive or unwitting passengers, with recent interventions (to date) comprising coarse economy-level lever-pulls, like your examples of regulation on chip production and sales. Can you think of a better compression of this than what I wrote? 'currently mainly passive except for coarse intervent

I disagree fwiw. The benefits of transparency seem real but ultimately relatively small to me, whereas there could be strong personal reasons for some people to decline to publicise their participation.

3
Guy Raveh
7mo
The scandals of the last year have shown us that the importance of transparency and oversight is anything but small. It's easy to dismiss it, but the fact is you have no idea if the people whose identities are hidden are ones you could trust. And even if they were, as long as much money and influence are at play, they corrupt people.
2
Rebecca
7mo
I think it's hard for a lot of people to think of compelling reasons without further specificity.

More country-specific content could be really interesting. I'd be interested in broad interviews covering:

  • China - economic projections, expert views and disagreement on stability of CCP, tech progress, info on public opinion about US/West, demographic challenges, entrepreneurship, etc. (not sure he'd be the best person to cover all this, but maybe Kaiser Kuo?)
  • India - whether high growth rates can be sustained, Sino-Indian relations, complexity of India's diplomatic relationships with Russia and US, challenges and stability of world's largest democracy, int
... (read more)
2
Nathan Young
7mo
Yeah I think this is underrated. Also ways to think well about these countries. What should our mental models contain?

This is a tangent, but I think it's important to consider predictors' entire track records, and on the whole I don't think Mearsheimer's is very impressive. Here's a long article on that.

6
Pablo
8mo
Indeed. And there are other forecasting failures by Mearsheimer, including one in which he himself apparently admits (prior to resolution) that such a failure would constitute a serious blow to his theory. Here’s a relevant passage from a classic textbook on nuclear strategy:[1] 1. ^ Lawrence Freedman & Jeffrey Michaels, The Evolution of Nuclear Strategy, 4th ed., London, 2019, pp. 579–580

I think this is a ridiculous idea, but the linked article (and headline of this post) is super clickbait-y. This idea is mentioned in two sentences in the court documents (p. 20 of docket 1886, here). All we know is that Gabriel, Sam's brother, sent a memo to someone at the FTX Foundation mentioning the idea. We have no idea if Sam even heard about this or if anyone at the Foundation "wanted" to follow through with it. I'm sure all sorts of wild possibilities got discussed around that time. Based on the evidence, it's a huge leap to say there were desires or plans to act on them.

6
Arepo
9mo
One way for us to find that out would be for the person who was sent the memo and thought it was a silly idea to make themselves known, and show the evidence that they shot it down or at least assert publicly that they didn't encourage it.  Since there seems to be little downside to them doing so if that's what happened, if no-one makes such a statement we should increase our credence that they were seriously entertaining it.

I agree that the media coverage implies SBF endorsed the content of this memo more than is warranted based on this text alone. But I would guess there was serious discussion of this kind of thing (maybe not buying Nauru specifically, but buying other pieces of land for the purpose of building bunkers/shelters).

In this EAG Fireside Chat from October 2021, Will MacAskill says: "I'm also really keen on more disaster preparedness work, so like, buying coal mines I'm totally serious on... but also just other things for if there's a collapse of civilization... a... (read more)

Actual text from the complaint to save everyone time:

One memo exchanged between Gabriel Bankman-Fried and an officer of the FTX Foundation describes a plan to purchase the sovereign nation of Nauru in order to construct a “bunker / shelter” that would be used for “some event where 50%-99.99% of people die [to] ensure that most EAs [effective altruists] survive” and to develop “sensible regulation around human genetic enhancement, and build a lab there.” The memo further noted that “probably there are other things it’s useful to do with a sovereign country, too.”

Thanks for this! I agree interventions in this direction would be worth looking into more, though I'd also say that tractability remains a major concern. I'm also just really uncertain about the long-term effects.

I think the Quincy Institute is interesting but want to note that it's also very controversial. Seems like they can be inflammatory and dogmatic about restraint policies. From an outside perspective I found it hard to evaluate the sign of their impact, much less its magnitude. I don't think I'd recommend 80K put them on the job board right now.

1
Radical Empath Ismam
9mo
I largely agree with your assessment that Quincy is controversial and dogmatic about restraint/non-intervention. That being said, they are a valuable source of disagreement in the wider foreign policy community, and doing something very neglected (researching and advocating for restraint/non-intervention). I know Quincy staff disagree with each other, coming from libertarian, leftist, and realist perspectives. So it is troubling that Cirincione departed, because that difference in perspective is needed. Although I do suspect Parsi is describing things accurately when he says Cirincione left because he wanted the Institute to adopt his position on the Russian-initiated war on Ukraine.

Quincy are exploring a controversial analysis of the current Russia-Ukraine conflict, trying to identify whether Russia's invasion could have been avoided in the first place (e.g. by bringing Russia into NATO way back when they were wanting to join), and advocating that Ukraine and Russia compromise to reduce casualties (to be fair, it's reported the White House has also urged Ukraine to make compromises at times). Whilst controversial, I do think this is worthwhile. I myself might disagree (and I believe they all disagree amongst themselves), but I want to see this research/advocacy explored and debated. I had been nervous when the invasion started that Quincy's work could dip into Kremlin apologetics, but they have seemed to steer away from that, and have nuanced perspectives.

Their work on the Iran Nuclear Deal and the conflict in Yemen is far less controversial, and promising. I find value in them being a counterbalance to the more hawkish think tanks, which are much better resourced. On the 80K job board, you have a few institutions (well respected and worthwhile, no doubt) like CSIS and RAND, which are more interventionist and/or funded by arms manufacturers (even RAND is indirectly funded by the grants it receives from AEI), so I do worry that there is a systemic bias towards interventionist views. I ho

Thanks for catching that, you're absolutely right. That should either read about 100,000 deaths or hundreds of thousands of casualties. I'll get that fixed.

I can certainly empathize with the longtermist EA community being hard to ignore. It's much flashier and more controversial.

For what it's worth I think it would be possible and totally reasonable for you to filter out longtermist (and animal welfare, and community-building, etc.) EA content and just focus on the randomista stuff you find interesting and inspiring. You could continue following GiveWell,  Founders Pledge's global health and development work, and HLI. Plus, many of Charity Entrepreneurship's charities are randomista-influenced.

For exampl... (read more)

4
Agrippa
1y
Yeah (as a note, I am also a fan of the animal welfare stuff). This is a good suggestion. I think most of this stuff is too dry to hold my attention by itself. I would like a social environment that was engaging yet systematically directed my attention more often to things I care about. This happens naturally if I am around people who are interesting/fun but also highly engaged and motivated about a topic. As such I have focused on community and community spaces more than, for example, finding a good randomista newsletter or extracting randomista posts from the forums.

Another reason to focus on community interaction is that it is both much more fun and much more useful to help with creative problem solving. But forum posts tend to report the results of problem solving / report news. I would rather be engaging with people before that step, but I don't know of a place where one could go to participate in that aside from employment. In contrast, I do have a sense of where one could go to participate in this kind of group or community re: AI safety.

I agree with you about SNT/ITN. I like that chapter of your thesis a lot, and also find John's post here convincing.

It does seem to me that randomista EA is alive and largely well—GW is still growing, global health still gets the most funding (I think), many of Charity Entrepreneurship's new charities are randomista-influenced, etc.

There's a lot of things going on under the "EA" umbrella. HLI's work feels very different from what other EAs do, but equally a typical animal welfare org's work will feel very different, and a typical longtermist org's work wil... (read more)

Just curious - do you not feel like GiveWell, Happier Lives Institute, and some of Founders Pledge's work, for example, count as randomista-flavoured EA?

6
MichaelPlant
1y
Just chiming in here as HLI was mentioned, although this definitely isn't the most important part of the post. I certainly see us as randomista-inspired (wait, should that be 'randomista-adjacent'?), but I would say that what we do feels very different from what other EAs, notably longtermists, do. Also, we came into existence about 5 years after Doing Good Better was published.

I also share Habryka's doubts about how EA's original top interventions were chosen. The whole "scale, neglectedness, tractability" framework strikes me as a confusing, indeterminate methodology that was developed post hoc to justify the earlier choices. I moaned about the SNT framework at length in chapter 5 (p. 171) of my PhD thesis.
6
Agrippa
1y
"It doesn't exist" is too strong for sure. I consider GiveWell central to the randomista part and it was my entrypoint into EA at large. Founder's Pledge was also pretty randomista back when I was applying for a job there in college. I don't know anything about HLI.  There may be a thriving community around GiveWell etc that I am ignorant to. Or maybe if I tried to filter out non-randomista stuff from my mind then I would naturally focus more on randomista stuff when engaging EA feeds.  The reality is that I find stuff like "people just doing AI capabilities work and calling themselves EA" to be quite emotionally triggering and when I'm exposed to it thats what my attention goes to (if I'm not, as is more often the case, avoiding the situation entirely). Naturally this probably makes me pretty blind to other stuff going on in EA channels. There are pretty strong selection effects on my attention here.  All of that said, I do think that community building in EA looks completely different than how it would look if it were the GiveWell movement.

On (1), I commented above, but most supplemental creatine is vegan as far as I can tell.

1
more better
1y
Thanks, I'm seeing that here, too: "It should be noted that although creatine is found mostly in animal products, the creatine in most supplements is synthesized from sarcosine and cyanamide [39,40], does not contain any animal by-products, and is therefore 'vegan-friendly'. The only precaution is that vegans should avoid creatine supplements delivered in capsule form because the capsules are often derived from gelatin and therefore could contain animal by-products."

I think most supplemental creatine is vegan? From what I can tell it's lab-synthesized from chemicals. Folks should obviously double-check that for themselves and their specific supplements, though.

I think that one's a reach, tbh.

(I also think the one about using guilt to control is a stretch.)
