Jason

Working (6-15 years of experience)
7272 · Joined Nov 2022

Bio

I am an attorney in a public-sector position not associated with EA, though I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, along with some independent reading. I had occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . . 

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.

Posts
2

6 · Jason's Shortform · Jason · 3mo ago · 1m read

Comments
714

Topic Contributions
2

Jason · 1h · 31

You may want to note that the requirements for being an accredited investor are pretty significant:

https://www.sec.gov/education/capitalraising/building-blocks/accredited-investor

I wonder if it would be worthwhile/legal to add a different mechanism for the significant majority of people who don't qualify as accredited investors. I could "purchase" equity in exchange for the author's commitment to counterfactually give the appropriate portion of any winnings to a stated charity of my choice.

Manifund (or someone else) might be able to buy the impact certificates on my behalf and act as my agent. Other mechanisms are possible but would require more trust.
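
For concreteness, here's a toy sketch of the payout mechanics I have in mind (the names and numbers are hypothetical, not anything Manifund actually offers):

```python
# Toy model of the proposed workaround: instead of taking equity, a
# non-accredited donor pays the author up front, and the author commits
# to pass the matching share of any retro-funding "winnings" to a
# charity of the donor's choice. All names and figures are illustrative.
from dataclasses import dataclass

@dataclass
class DonationPledge:
    share: float       # fraction of winnings pledged (like an equity stake)
    charity: str       # charity designated by the pseudo-investor
    price_paid: float  # what the pseudo-investor paid the author up front

    def amount_owed(self, winnings: float) -> float:
        """Amount the author owes the designated charity."""
        return self.share * winnings

pledge = DonationPledge(share=0.05, charity="AMF", price_paid=500.0)
print(pledge.amount_owed(winnings=20_000.0))  # -> 1000.0 owed to AMF
```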

Jason · 1h · 31

In both cases, that's a nuclear power attacking a non-nuclear one. Contrast how Putin is being dealt with for doing Putin things -- no one is suggesting bombing Russia.

Jason · 2h · 31

Unless the non-state actor is operating off a ship in international waters, it's operating within a nation-state's boundaries, and bombing it would be a serious incursion on that nation-state's territorial sovereignty. There's a reason such incursions against a nuclear state have been off the table except in the most dire of circumstances.

The possibility of some actor having the financial and intellectual resources necessary to develop AGI without the acquiescence of the nation within which it operates seems rather remote. And the reference elsewhere to nuclear options probably colors this statement -- why discuss that if the threat model is some random terrorist cell?

Jason · 19h · 30

Also, one challenge with adjusting based on discussions with a country's government or health service is that you're going to lose some efficiencies/economies of scale. Each country has different priorities and resources, so different programs will be at the margin in each.

Jason · 19h · 20

Thanks for sharing, Tom! Could you say a little more about how you see the "classic" EA global health programs fitting into your paradigm? These programs tend to do one thing -- like hand out anti-malarial bednets -- and aim at doing that very well. EA funders try to be very careful not to fund things that the government (or a non-EA funder) would have otherwise funded. So that would suggest classic EA interventions are "marginal" rather than "core" in your framework. On the other hand, they have a very high return for each dollar invested, which suggests you might classify them as "core."

Jason · 1d · 64

Not an expert either, but it's safest to say the corporate-law question is nuanced and not free from doubt. It's pretty clear there's no duty to maximize short-term profits, though.

But we can surmise that most boards that allow the corporation to seriously curtail its profits -- at least its medium-term profits -- will get replaced by shareholders soon enough. So the end result is largely the same.

Jason · 2d · 84

I'd go a bit further. The proposed norm has several intended benefits: promoting fairness by not blindsiding the criticized organization, generating higher-quality responses, minimizing fire drills for organizations and their employees, etc. I think it is a good norm in most cases.

However, there are some circumstances in which the norm would not significantly achieve its intended goals. For instance, the rationale behind the norm will often have less force where the poster is commenting on the topic of a fresh news story. The organization already feels pressure to respond to the news story on a news-cycle timetable; the marginal burden of additionally having a discussion of the issue on the Forum is likely modest. If the media outlet gave the org a chance to comment on the story, the org should also not be blindsided by the issue.

Likewise, criticism in response to a recent statement or action by the organization may or may not trigger some of the same concerns as more out-of-the-blue criticism. Where the nature of the statement/action is such that the criticism was easily foreseeable, the organization should already be in a position to address it (and was not caught unawares by its own statement/action). This assumes, of course, that the criticism is not dependent on speculation about factual matters or the like.

Also, I think the point about a delayed statement being less effective at conveying a message goes both ways: if an organization says or does something today, people will care less about a poster's critical reaction posted eight days later than about a reaction posted shortly after the organization's action/statement.

Finally, there may also be countervailing reasons that outweigh the norm's benefits in specific cases.

Jason · 2d · 64

So is the number of comments here (5 at the time of this comment) vs. there (69).

Jason · 2d · 20

Thanks for these points! The idea that people care about more than their wellbeing may be critical here. I'm thinking of a simplified model with the following assumptions: lifetime wellbeing is normally distributed with a mean of 5 and SD of 2, wellbeing is constant through the lifespan, and the neutral point is 4 (shared by everyone).

Under these assumptions, AMF gets no "credit" (except for grief avoidance) for saving the life of a hypothetical person with wellbeing of 4. I'm really hesitant to say that saving that person's life doesn't morally "count" as a good because they are at the neutral point. On the one hand, the model tells me that saving this person's life doesn't improve total wellbeing. On the other hand, suppose I (figuratively) asked the person whose life was saved, and he said that he preferred his existence to non-existence and appreciated AMF saving his life. 

At that point, I think the WELLBY-based model might not be incorporating some important data -- the person telling us that he prefers his existence to non-existence would strongly suggest that saving his life had moral value that should indeed "count" as a moral good in the AMF column. His answers may not be fully consistent, but it's not obvious to me why I should fully credit his self-reported wellbeing but give zero credence to his view on the desirability of his continued existence. I guess he could be wrong to prefer his continued existence, but he is uniquely qualified to answer that question and so I think I should be really hesitant to completely discount what he says. And a full 30% of the population would have wellbeing of 4 or less under the assumptions.

Even more concerning, AMF gets significantly "penalized" for saving the life of a hypothetical person with wellbeing of 3 who also prefers existence to non-existence. And almost 16% of the population would score at least that low.
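
For concreteness, here's a quick sketch of where those percentages come from under the stated assumptions (this is just my toy model above, not HLI's actual methodology):

```python
# Quick check of the simplified model above: lifetime wellbeing
# ~ Normal(mean=5, sd=2), constant over the lifespan; neutral point 4.
from statistics import NormalDist

wellbeing = NormalDist(mu=5, sigma=2)
NEUTRAL = 4

# Fraction at or below the neutral point (no net WELLBY gain from
# saving their lives) and fraction at 3 or below (a net WELLBY loss):
print(f"wellbeing <= 4: {wellbeing.cdf(4):.1%}")  # ~30.9%
print(f"wellbeing <= 3: {wellbeing.cdf(3):.1%}")  # ~15.9%

# WELLBYs credited per year of life saved = wellbeing - neutral point,
# so saving a person at 3 scores -1 per year: a "penalty," not a gain.
for w in (5, 4, 3):
    print(f"wellbeing {w}: {w - NEUTRAL:+d} WELLBYs per year saved")
```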

Of course, the real world is messier than a quick model. But if you have a population where the neutral point is close to the population average and almost everyone prefers continued existence, you are going to have a meaningful number of cases where AMF gets very little, no, or even negative moral "credit" for saving the lives of people who want (or would want) their lives saved. That seems like a weakness, not a feature, of the WELLBY-based model to me.

Jason · 2d · 86

From HLI's perspective, it makes sense to describe how the moral/philosophical views one assumes affect the relative effectiveness of charities. They are, after all, a charity recommender, and donors are their "clients" in a sense. GiveWell doesn't really do this, which makes sense -- GiveWell's moral weights are weighted so heavily toward saving lives that it doesn't really make sense for them to investigate charities with other modes of action. I think it's fine to provide a bottom-line recommendation based on whatever moral/philosophical view a recommender feels is best-supported, but it's hardly obligatory.

We recognize donor preferences in that we don't create a grand theory of effectiveness and push everyone to donate to longtermist organizations, or animal-welfare organizations, or global health organizations depending on the grand theory's output. Donors choose among these for their own idiosyncratic reasons, but moral/philosophical views are certainly among the critical criteria for many donors. I don't see why that shouldn't be the case for interventions within a cause area that produce different kinds of outputs as well.

Here, I doubt most global-health donors -- either those who take advice from GiveWell or from HLI -- have finely tuned views on deprivationism, neutral points, and so on. However, I think many donors do have preferences that indirectly track some of those issues. For instance, you describe a class of donors who "want to give to mental health." While there could be various reasons for that, it's plausible to me that these donors place more emphasis on improving the experience of those who are alive (e.g., they give partial credence to Epicureanism) and/or on alleviating suffering. If they did assess and chart their views on the neutral point and philosophical view, I would expect them to end up more often at points where SM is ranked relatively higher than it would be for the average global-health donor. But that is just conjecture on my part.

One interesting aspect of thinking from the donor perspective is the possibility that survey results could be significantly affected by religious beliefs. If many respondents chose a 0 neutral point because their religious tradition led them to that conclusion, and you are quite convinced that the religious tradition is just wrong in general, do you adjust for that? Does not adjusting allow the religious tradition to indirectly influence where you spend your charitable dollar?

To me, the most important thing a charity evaluator/recommender does is clearly communicate what a donation accomplishes (on average) if given to the various organizations they identify -- X lives saved (plus smaller benefits), or the wellbeing of Y people improved by Z amount. That's the part the donor can't do themselves (without investing a ton of time and resources).

I don't think the neutral point is as high as 3. But I think it's fine for HLI to offer recommendations for people who do.
