lilly

2507 karma

Somewhat unrelated, but since people are discussing whether this example is cherry-picked vs. reflective of a systemic problem with infrastructure-related grants, I'm curious about the outcome of another, much larger grant:

Has there been any word on what happened to the Harvard Square EA coworking space that OP committed $8.9 million to and that was projected to open in the first half of 2023?

I really enjoyed this series; thanks for writing it!

One piece of stylistic feedback on Anti-Philanthropic Misdirection: I think the piece's hostile tone—e.g., "Wenar is here promoting a general approach to practical reasoning that is very obviously biased, stupid, and harmful: a plain force for evil in the world"—will make your piece less persuasive to non-EA readers for two reasons. First, I suspect all the italics and adjectives will trigger readers' bias radars, making people who aren't already sympathetic to EA approach the piece more critically and less open-mindedly than they would have otherwise (e.g., if you had written: "Wenar promotes a general approach to practical reasoning that is both incorrect and harmful"). Second, it reads as hypocritical, since in the piece you criticize "the hostile, dismissive tone of many critics." (And unless readers have read Wenar's piece pretty closely and are pretty familiar with EA, they're not going to be well-positioned to assess whose hostility and dismissiveness are justified.) So, while I understand the frustration, and think the tone is in some sense warranted, I suspect the piece would be more effective at morally redirecting people if it read as more neutral/measured. The arguments speak for themselves. 

I think it's a nice op-ed; I also appreciate the communication strategy here—anticipating that SBF's sentencing will reignite discourse around SBF's ties to EA, and trying to elevate the discourse around that (in particular by highlighting the reforms EA has undertaken over the past 1.5 years). 

First of all, kudos on writing an op-ed! I think it’s a good thing to do, and I think earning to give is a much better path than what most Ivy League grads wind up doing, so if you persuade a few people, that’s good.

My basic problem with the argument you make here (and with earning to give in general) is that some bad things tend to go along with “selling out” (as you put it), rendering it difficult to maintain one’s initial commitment to earning to give. Some worries I have about college students deciding to do this:

  1. Erosion of values. When your social group becomes full of Meta employees (vs. idealistic college students), you find a partner (who may or may not be EA), you have kids, and so on, your values shift, and it becomes easier to justify not donating. I have seen a lot of people become gradually less motivated to do good between the ages of 20 and 30. Committing to a career path in, e.g., global health makes it harder for this value shift to be accompanied by a shift in the social value of one’s work (since most global health jobs are somewhat socially valuable), but committing to a career path in earning to give presents no such barrier.

  2. Relatedly, lifestyle creep occurs. As you get richer (and befriend your colleagues at Meta and so on), people start inviting you to expensive birthday dinners and on nice trips and stuff. And so your ability to maintain a relatively frugal lifestyle can be compromised by the desire/pressure to buy nice stuff.

In other words, I think it’s harder to maintain your EA values when you’re earning to give vs. working at, e.g., an NGO. These challenges are then further compounded by:

(3) Selection bias. I suspect that the group of EA-interested people who are drawn to earning to give in the first place are more interested in having a bougie lifestyle (etc.) than the average EA who isn’t drawn to earning to give. And, correspondingly, I think they’re more likely to be affected by (1) and (2).

Again, I think this post is missing nuance; for example:

  1. Induction of fetal demise varies in multiple respects: different medications are given (e.g., digoxin, lidocaine, or KCl) via different routes (i.e., intra-fetal vs. intra-amniotic). (Given that lidocaine is a painkiller, I could see a different version of this post compellingly making the case that to the extent clinicians have discretion in choosing which agents to use to induce fetal demise, they should prioritize ones that are likely to have off-target analgesic effects.)
  2. So, the link you post refers to a small minority of abortions, as it's only routine to inject the amniotic fluid (specifically) with potassium chloride (specifically) prior to the delivery of anesthesia in some second-trimester abortions.
  3. Potassium chloride is a medication that's routinely given via IV to replete potassium. The dose has a significant effect on how painful this is, as does the route of administration; people tolerate oral potassium fine. Importantly, the fetus is not even being given KCl intravenously (vs. intra-amniotically or intra-fetally), so it's hard for me to infer from "it is sometimes painful to get KCl via IV" that it would be painful for a fetus to get potassium via a different route. Correspondingly, I don't think the claim that it "inflames the potassium ions in the sensory nerve fibers, literally burning up the veins as it travels to the heart" applies.

I don't have time to research this in depth, but am pretty sure this post is missing a lot of nuance about how anesthesia works in abortion. Importantly, because mother and fetus share a circulation, IV sedation that is given to the mother will—to some extent—sedate the fetus as well, depending on the specific regimen used. So it's not quite right to say "The fetus is administered a lethal injection with no anesthesia." Correspondingly, I think this post overstates the risk of fetal suffering associated with abortion. 

Yeah, I think this is basically right. EA orgs probably favor Profile 1 people because they've demonstrated more EA alignment, meaning: (1) the Profile 1 people will tend to be more familiar with EA orgs than the Profile 2 people, so may be better positioned to assess their fit for any given org/role, (2) conversely, EA orgs will tend to be more familiar with Profile 1 people, since they've been in the community for a while, meaning orgs may be better able to assess a prospective Profile 1 employee's fit, and (3) if the Profile 1 employee leaves/is fired, they'll be less inclined to trash/sue the EA org.

Favoring Profile 1 people because of (3) would be bad (and I hope orgs aren't explicitly or implicitly doing this!), but favoring them because of (1) + (2) seems pretty reasonable, even though there are downsides associated with this (e.g., bad norms are less likely to get challenged, insights/innovations from other spheres won't make it into EA, etc). 

That said, I think one thing your post misses is that there are a lot of people who are closer to Profile 2 people (professionally) who are pretty embedded in EA (socially, academically, extracurricularly, etc.). And I think orgs tend to favor these people too, which may mitigate at least some of the aforementioned downsides of EA being an insular ecosystem (i.e., the insights-and-innovations-from-other-spheres one, if not the challenging-bad-norms one). 

A final piece of speculation: getting a job at an EA org is a lot more prestigious for EAs than it is for people outside of EA, and the career capital conferred by working at EA orgs has a much lower exchange rate outside of EA. As a result, it wouldn't shock me if top Profile 2 candidates are applying to EA jobs at much lower rates and are much less likely to take EA jobs they're offered. If this is the case, the discrepancy you're observing may not reflect an unwillingness of EA orgs to hire impressive Profile 2 candidates, but rather a lack of interest from Profile 2 candidates whose backgrounds are on par with the Profile 1 candidates'. 

I really appreciate your and @Katja_Grace's thoughtful responses, and wish more of this discussion had made it into the manuscript. (This is a minor thing, but I also didn't love that the response rate/related concerns were introduced on page 20 [right?], since it's standard practice—at least in my area—to include a response rate up front, if not in the abstract.) I wish I had more time to respond to the many reasonable points you've raised, and will try to come back to this in the next few days if I do have time, but I've written up a few thoughts here.

Note that we didn't tell them the topic that specifically.

I understand that, and think this was the right call. But there seems to be consensus that in general, a response rate below ~70% introduces concerns of non-response bias, and when you're at 15%—with (imo) good reason to think there would be non-response bias—you really cannot rule this out. (Even basic stuff like: responders probably earn less money than non-responders, and are thus probably younger, work in academia rather than industry, etc.; responders are more likely to be familiar with the prior AI Impacts survey, and all that that entails; and so on.) In short, there is a reason many medical journals have a policy of not publishing surveys with response rates below 60%; e.g., JAMA asks for >60%, less prestigious JAMA journals also ask for >60%, and BMJ asks for >65%. (I cite medical journals because their policies are the ones I'm most familiar with, not because I think there's something special about medical journals.)

Tried sending them $100 last year and if anything it lowered the response rate.

I find it a bit hard to believe that this lowered response rates (was this statistically significant?), although I would buy that it didn't increase response rates much, since I think I remember reading that response rates fall off pretty quickly as compensation for survey respondents increases. I also appreciate that you're studying a high-earning group of experts, making it difficult to incentivize participation. That said, my reaction to this is: determine what the higher-order goals of this kind of project are, and adopt a methodology that aligns with those goals. I have a hard time believing that at this price point, conducting a survey with a 15% response rate is the optimal methodology. 

If you are inclined to dismiss this based on your premise "many AI researchers just don’t seem too concerned about the risks posed by AI", I'm curious where you get that view from, and why you think it is a less biased source.

My impression stems from conversations I've had with two CS professor friends about how concerned the CS community is about the risks posed by AI. For instance, last week, I was discussing the last AI Impacts survey with a CS professor (who has conducted surveys, as have I); I was defending the survey, and they were criticizing it for reasons similar to those outlined above. They said something to the effect of: the AI Impacts survey results do not align with my impression of people's level of concern based on discussions I've had with friends and colleagues in the field. And I took that seriously, because this friend is EA-adjacent; extremely competent, careful, and trustworthy; and themselves sympathetic to concerns about AI risk. (I recognize I'm not giving you enough information for this to be at all worth updating on for you, but I'm just trying to give some context for my own skepticism, since you asked.) 

Lastly, as someone immersed in the EA community myself, I think my bias is—if anything—in the direction of wanting to believe these results, but I just don't think I should update much based on a survey with such a low response rate.

I think this is going to be my last word on the issue, since I suspect we'd need to delve more deeply into the literature on non-response bias/response rates to advance this discussion, and I don't really have time to do that, but if you/others want to, I would definitely be eager to learn more.

I earn about $15/hour and donate much more than 1%. I don't think it's that hard to do this, and it seems weird to set such a low bar.
