
lilly

2464 karma · Joined Jan 2023


Comments (116)

I think it's a nice op-ed; I also appreciate the communication strategy here: anticipating that SBF's sentencing will reignite discussion of his ties to EA, and trying to elevate that discussion (in particular by highlighting the reforms EA has undertaken over the past 1.5 years).

First of all, kudos on writing an op-ed! I think it’s a good thing to do, and I think earning to give is a much better path than what most Ivy League grads wind up doing, so if you persuade a few people, that’s good.

My basic problem with the argument you make here (and with earning to give in general) is that some bad things tend to go along with “selling out” (as you put it), rendering it difficult to maintain one’s initial commitment to earning to give. Some worries I have about college students deciding to do this:

  1. Erosion of values. When your social group becomes full of Meta employees (vs. idealistic college students), you find a partner (who may or may not be EA), you have kids, and so on, your values shift, and it becomes easier to justify not donating. I have seen a lot of people become gradually less motivated to do good between the ages of 20 and 30. Committing to a career path in, e.g., global health limits the damage from this kind of value drift, since most global health jobs remain somewhat socially valuable even if your motivation wanes; committing to a career path in earning to give presents no such barrier.

  2. Relatedly, lifestyle creep occurs. As you get richer (and befriend your colleagues at Meta and so on), people start inviting you to expensive birthday dinners and on nice trips, and your ability to maintain a relatively frugal lifestyle can be compromised by the desire or pressure to buy nice things.

In other words, I think it’s harder to maintain your EA values when you’re earning to give vs. working at, e.g., an NGO. These challenges are then further compounded by:

  3. Selection bias. I suspect that the EA-interested people who are drawn to earning to give in the first place are more interested in having a bougie lifestyle (etc.) than the average EA who isn’t, and, correspondingly, I think they’re more likely to be affected by (1) and (2).

Again, I think this post is missing nuance; for example:

  1. Induction of fetal demise is done through a variety of means: different medications are given (e.g., digoxin, lidocaine, or KCl) via different routes (i.e., intra-fetal vs. intra-amniotic). (Given that lidocaine is a painkiller, I could see a different version of this post compellingly making the case that, to the extent clinicians have discretion in choosing which agents to use to induce fetal demise, they should prioritize ones that are likely to have off-target analgesic effects.)
  2. So, the link you post refers to a small minority of abortions, as it's only routine to inject the amniotic fluid (specifically) with potassium chloride (specifically) prior to the delivery of anesthesia in some second-trimester abortions.
  3. Potassium chloride is a medication that's routinely given via IV to replete potassium. The dose has a significant effect on how painful this is, as does the route of administration; people tolerate oral potassium fine. Importantly, the fetus is not even being given KCl intravenously (vs. intra-amniotically or intra-fetally), so it's hard for me to infer from "it is sometimes painful to get KCl via IV" that it would be painful for a fetus to get potassium via a different route. Correspondingly, I don't think the claim that it "inflames the potassium ions in the sensory nerve fibers, literally burning up the veins as it travels to the heart" applies here.

I don't have time to research this in depth, but am pretty sure this post is missing a lot of nuance about how anesthesia works in abortion. Importantly, because mother and fetus share a circulation, IV sedation that is given to the mother will—to some extent—sedate the fetus as well, depending on the specific regimen used. So it's not quite right to say "The fetus is administered a lethal injection with no anesthesia." Correspondingly, I think this post overstates the risk of fetal suffering associated with abortion. 

Yeah, I think this is basically right. EA orgs probably favor Profile 1 people because they've demonstrated more EA alignment, which means: (1) Profile 1 people will tend to be more familiar with EA orgs than Profile 2 people, so they may be better positioned to assess their fit for any given org/role; (2) conversely, EA orgs will tend to be more familiar with Profile 1 people, since they've been in the community for a while, so orgs may be better able to assess a prospective Profile 1 employee's fit; and (3) if a Profile 1 employee leaves or is fired, they'll be less inclined to trash/sue the EA org.

Favoring Profile 1 people because of (3) would be bad (and I hope orgs aren't explicitly or implicitly doing this!), but favoring them because of (1) + (2) seems pretty reasonable, even though there are downsides associated with this (e.g., bad norms are less likely to get challenged, insights/innovations from other spheres won't make it into EA, etc). 

That said, I think one thing your post misses is that there are a lot of people who are professionally closer to Profile 2 but are pretty embedded in EA (socially, academically, extracurricularly, etc.). And I think orgs also tend to favor these people, which may mitigate at least some of the aforementioned downsides of EA being an insular ecosystem (i.e., the "insights/innovations from other spheres" one, if not the "challenging norms" one).

A final piece of speculation: getting a job at an EA org is a lot more prestigious for EAs than it is for people outside of EA, and the career capital conferred by working at EA orgs has a much lower exchange rate outside of EA. As a result, it wouldn't shock me if top Profile 2 candidates are applying to EA jobs at much lower rates and are much less likely to take EA jobs they're offered. If this is the case, the discrepancy you're observing may not reflect an unwillingness of EA orgs to hire impressive Profile 2 candidates, but rather a lack of interest from Profile 2 candidates whose backgrounds are on par with the Profile 1 candidates'. 

I really appreciate your and @Katja_Grace's thoughtful responses, and wish more of this discussion had made it into the manuscript. (This is a minor thing, but I also didn't love that the response rate/related concerns were introduced on page 20 [right?], since it's standard practice—at least in my area—to include a response rate up front, if not in the abstract.) I wish I had more time to respond to the many reasonable points you've raised, and will try to come back to this in the next few days if I do have time, but I've written up a few thoughts here.

Note that we didn't tell them the topic that specifically.

I understand that, and think this was the right call. But there seems to be consensus that in general, a response rate below ~70% introduces concerns of non-response bias, and when you're at 15%—with (imo) good reason to think there would be non-response bias—you really cannot rule this out. (Even basic stuff like: responders probably earn less money than non-responders, and are thus probably younger, work in academia rather than industry, etc.; responders are more likely to be familiar with the prior AI Impacts survey, and all that that entails; and so on.) In short, there is a reason many medical journals have a policy of not publishing surveys with response rates below 60%; e.g., JAMA asks for >60%, less prestigious JAMA journals also ask for >60%, and BMJ asks for >65%. (I cite medical journals because their policies are the ones I'm most familiar with, not because I think there's something special about medical journals.)

Tried sending them $100 last year and if anything it lowered the response rate.

I find it a bit hard to believe that this lowered response rates (was this statistically significant?), although I would buy that it didn't increase response rates much, since I think I remember reading that the marginal effect of additional compensation on response rates falls off pretty quickly. I also appreciate that you're studying a high-earning group of experts, which makes it difficult to incentivize participation. That said, my reaction to this is: determine what the higher-order goals of this kind of project are, and adopt a methodology that aligns with them. I have a hard time believing that, at this price point, conducting a survey with a 15% response rate is the optimal methodology.

If you are inclined to dismiss this based on your premise "many AI researchers just don’t seem too concerned about the risks posed by AI", I'm curious where you get that view from, and why you think it is a less biased source.

My impression stems from conversations I've had with two CS professor friends about how concerned the CS community is about the risks posed by AI. For instance, last week, I was discussing the last AI Impacts survey with a CS professor (who has conducted surveys, as have I); I was defending the survey, and they were criticizing it for reasons similar to those outlined above. They said something to the effect of: the AI Impacts survey results do not align with my impression of people's level of concern based on discussions I've had with friends and colleagues in the field. And I took that seriously, because this friend is EA-adjacent; extremely competent, careful, and trustworthy; and themselves sympathetic to concerns about AI risk. (I recognize I'm not giving you enough information for this to be at all worth updating on for you, but I'm just trying to give some context for my own skepticism, since you asked.) 

Lastly, as someone immersed in the EA community myself, I think my bias is—if anything—in the direction of wanting to believe these results, but I just don't think I should update much based on a survey with such a low response rate.

I think this is going to be my last word on the issue, since I suspect we'd need to delve more deeply into the literature on non-response bias/response rates to progress this discussion, and I don't really have time to do that, but if you/others want to, I would definitely be eager to learn more.

I earn about $15/hour and donate much more than 1%. I don't think it's that hard to do this, and it seems weird to set such a low bar.

No, because the response rate wouldn't be 100%; even if it doubled to 30% (which I doubt it would), the cost would still be lower ($120k vs. the ~$138k apparently spent compensating participants).


I appreciate that a ton of work went into this, and the results are interesting. That said, I am skeptical of the value of surveys with low response rates (in this case, 15%), especially when those surveys are likely subject to non-response bias, as I suspect this one is, given: (1) many AI researchers just don’t seem too concerned about the risks posed by AI, so may not have opened the survey, and (2) those researchers would likely have answered the questions on the survey differently. (I do appreciate that the authors took steps to mitigate the risk of non-response bias at the survey level, and did not find evidence of this at the question level.)

I don’t find the “expert surveys tend to have low response rates” defense particularly compelling, given: (1) the loaded nature of the content of the survey (meaning bias is especially likely), (2) the fact that such a broad group of people were surveyed that it’s hard to imagine they’re all actually “experts” (let alone that they all have relevant expertise), (3) the fact that expert surveys often do have higher response rates (26% is a lot higher than 15%), especially when you account for the fact that it’s extremely unlikely other large surveys are compensating participants anywhere close to this well, and (4) the possibility that many expert surveys just aren’t very useful.

Given the non-response bias issue, I am not inclined to update very much on what AI researchers in general think about AI risk on the basis of this survey. I recognize that the survey may have value independent of its knowledge value—for instance, I can see how other researchers citing these kinds of results (as I have!) may serve a useful rhetorical function, given readers of work that cites this work are unlikely to review the references closely. That said, I don’t think we should make a habit of citing work that has methodological issues simply because such results may be compelling to people who won’t dig into them.

Given my aforementioned concerns, I wonder whether the cost of this survey can be justified (am I calculating correctly that $138,000 was spent just compensating participants for taking this survey, and that doesn’t include other costs, like those associated with using the outside firm to compensate participants, researchers’ time, etc?). In light of my concerns about cost and non-response bias, I am wondering whether a better approach would instead be to randomly sample a subset of potential respondents (say, 4,000 people), and offer to compensate them at a much higher rate (e.g., $100), given this strategy could both reduce costs and improve response rates.
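To make the arithmetic behind this explicit, here’s a rough back-of-the-envelope sketch. It only uses numbers from this thread: the ~$138,000 compensation figure I’m asking the authors to confirm, the proposed 4,000-person sample at $100 per respondent, and a hypothetical doubling of the 15% response rate.

```python
# Back-of-the-envelope cost comparison for the sampling strategy proposed above.
# All figures come from the comments in this thread; the 30% rate is hypothetical.

reported_compensation_spend = 138_000  # USD I estimate was spent compensating participants
sample_size = 4_000                    # proposed random sample of potential respondents
payment_per_respondent = 100           # proposed payment per completed response (USD)

for response_rate in (0.15, 0.30):     # observed rate vs. a hypothetical doubling
    respondents = int(sample_size * response_rate)
    cost = respondents * payment_per_respondent
    print(f"{response_rate:.0%} response rate: {respondents:,} respondents, ${cost:,}")

# 15% response rate: 600 respondents, $60,000
# 30% response rate: 1,200 respondents, $120,000
```

Even under the optimistic doubling, compensation costs would come in below the ~$138,000 figure (before counting the outside firm’s fees and researchers’ time), which is where the ~$120k number in my other comment comes from.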
