Jason

10095 karma · Joined Nov 2022 · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I had occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . . 

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.

Posts
2

Comments
1116

Topic Contributions
2

A non-directed donation could, however, potentially enable a significant chain of donations. I think one could count all recipients in the chain if the non-directed donation is a but-for cause of their receiving livers, but one would need to include the costs to all donors as well.

I could see an argument for reasons-giving, at least from a checklist, on strong downvotes. Strong downvotes should be uncommon, so the extra few seconds to select a reason shouldn't lead to exhaustion.

Would donating prevent you from other opportunities in the future?

I'm assuming a somewhat looser standard than the norms for mediators generally, in light of the parties' presumed interest in an EA-associated mediator. However, in my view, the conflict standards for third-party neutrals are significantly higher than for just about any other role, and rightfully so.

I think having an E2Ger as benefactor is probably the best practicable answer to conflicts, although you would inherit all the conflicts of any major benefactor. I would probably not try to mediate any matter in which a reasonable person might question the impartiality of any major (over 10-20%?) funder of your work. Hopefully, you could find an E2Ger without many conflicts?

If you're dependent on a fund for more than 10-20%, I think that conflict would extend to all the fund managers in a position to vote on your grants, and the organizations that employ them. So taking money from a fund would probably preclude you from working on matters involving many of the major organizations. In my view, a reasonable person could question whether a mediator could be impartial toward Org X when someone from Org X had a vote on whether to renew one of your major grants [or a vote on a major grant you intended to submit].

Some of that is potentially waivable where both parties to the dispute have approximately equal power, but I do not think it would be appropriate to waive the potential appearance of influence where a significant power imbalance existed in favor of the funder.

One challenge you'll want to think about is how to demonstrate your effectiveness to your funder(s) while maintaining confidentiality of the parties (unless you obtain a waiver from them to disclose information to the funder(s)).

How would you maintain your independence as a third-party neutral? The two usual approaches to mitigating the risk, or at least the appearance, of partiality are that the disputants split the cost, or that the neutral is part of a large panel such that their livelihood isn't materially dependent on any one disputant's goodwill.

I think we agree on somewhat more than it seems at first glance. I don't think the current GiveWell top charities are the pinnacle of cost-effectiveness; I support further cause exploration and incubating the most promising ideas into charities; and I think it's quite possible for EA funders to miss important stuff.

The crux is that I don't think it's warranted to directly compare cost-effectiveness analyses based on a few weeks of desktop research, expert interviews, commissioned surveys, and quantitative modelling to evaluations of specific charities operating at scale, and I think your original post did just that with its allusions to scamming, to GiveWell charities as $1,000 liters of milk, and to being a sucker.

Although CEARCH is too young for us to retrospectively compare its analyses to the cost-effectiveness of launched charities, I think something like drug development is a good analogy. Lots of stuff looks great on paper, in situ, or even in animal models, only to fall apart entirely in the multi-phase human clinical trial process on the way to full approval. Comparing how a drug does in limited animal models to how another drug does in Phase III trials is comparing apples to oranges. Moreover, "risk the model/drug will fall apart in later phases" is distinct from "risk after Phase III trials that the model/drug will not work in a specific country/patient."

To be very clear, this is not a criticism of CEARCH -- as I see it, its job is to screen candidate interventions, not to bring them up to the level of mature, shovel-ready interventions. The next step would be either incubation or a deep dive on a specific target charity already doing this work. I would expect to see a ton of false positives, just as I would expect that from the earliest phases of drug development. It's worth it to find the next ten-figure drug / blockbuster EA intervention.

And these causes are pretty easy to find. CEARCH was started in 2022 and has already found 4 causes that are 10x GiveWell under my aforementioned pessimistic assumptions. CE and RP have found more. There are big funding gaps because there are many causes like this. There are many large world governments to lobby. We should aim to close the funding gaps as soon as possible, because that would help more people.

I think this should make you question your assumptions to some extent. GiveWell has evaluated tons of interventions over a number of years, and made significant grants for a number of them. If CEARCH has come up with 4 causes that are 10x top charities in ~a year with 2 FTEs, while GiveWell hasn't come up with anything better than 1x in many years with far more FTEs, what conclusion do we draw from that? I think it more likely that CEARCH is applying more generous assumptions than that GiveWell is badly screwing up its analysis of intervention after intervention. (And no one else, e.g., Founders Pledge, has been able to come up with clearly better interventions either, at least based on neartermist global health priorities.)

More generous assumptions come with the territory of early-stage CEAs, so I am not suggesting that is problematic given CEARCH's mission. But I think its analysis supports a conclusion of "we should incubate a charity pursuing this intervention," not "we should conclude that our GiveWell donations were very poor value and immediately divert tens of millions of dollars into sodium-reduction policy." In my view, your original post was relatively closer to the latter than your reply comment.

As for CE, it estimates that "starting a high-impact charity has the same impact as donating $200,000 to the most effective NGOs every year." That doesn't suggest a belief that a lot of its incubated charities are 10x+ GiveWell and able to absorb significant funding.

GiveWell has shown a willingness to fund policy work out of its All Grants Fund where it thinks the cost-effectiveness is there (cf. $7MM to the Centre for Pesticide Suicide Prevention for general support in January 2021, also for work on alcohol policy). So a general antipathy toward policy/lobbying work doesn't seem to explain what is going on here. Rather, I think there's a fundamental, difficult-to-resolve disagreement about the EV of lobbying/policy work. It's certainly possible that I -- and, it seems, most EA funders -- are simply wrong in our estimation on that point. But I don't think referring to the criterion-standard non-policy interventions as $1,000 liters of milk acknowledges that disagreement and the reasons for it.

If that were true, then all EAs seeking to maximize expected value would roughly agree on where to donate their money. Rather, we see the community being split into 4 main parts (global H&P, animals, existential risk, meta). Some people in EA simply don't and won't donate to some of these parts. This shows that at least a part of the community might donate to worse charities.

I think this is predominantly about the donor's values and ethical framework (e.g., the relative value of human vs. animal welfare, the extent to which future lives matter), although there are some strategic elements as well. I'm not aware of any reason to think the people who donate to global health are hostile to lobbying efforts if that is the most effective approach.

Probably as important as the quality of the advisor is their fee structure. For a lot of these questions, I believe you want a fee-only advisor, whose compensation is strictly an hourly rate and not based on commissions or assets under management. E.g.: for an unbiased answer to "donate now vs. invest and donate later," you don't want your advisor to have a financial interest in one of the outcomes!
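To see why the advisor's incentives matter, here is a minimal back-of-the-envelope sketch of the "donate now vs. invest and donate later" tradeoff. All the numbers (5% real return, a hypothetical ~7%/year decline in the cost-effectiveness of the best giving opportunities) are illustrative assumptions, not recommendations:

```python
# Hypothetical sketch: donate $10k today vs. invest it and donate in 10 years.
# The return rate and effectiveness-decay rate below are assumptions for
# illustration only.

def future_donation(amount, annual_return, years):
    """Value of an invested donation after `years` at `annual_return` (real)."""
    return amount * (1 + annual_return) ** years

donate_now = 10_000
invested = future_donation(donate_now, 0.05, 10)  # grows to ~$16,289

# If the best opportunities become ~7%/year less cost-effective as
# low-hanging fruit is picked, discount the later gift accordingly.
effectiveness_decay = 0.93 ** 10

impact_now = donate_now * 1.0
impact_later = invested * effectiveness_decay

print(f"Invested value after 10 years: ${invested:,.2f}")
print(f"Impact-equivalent dollars: now {impact_now:,.0f} vs later {impact_later:,.0f}")
```

Under these particular assumptions, the larger future gift still does less good than donating today; flip the assumed rates and the answer reverses. That sensitivity is exactly why an advisor who earns fees on assets under management is poorly placed to answer the question neutrally.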

Answer by Jason · Aug 11, 2023
Listing the jurisdiction or jurisdictions in which you might incorporate, as well as a general description of your intended purpose, would probably help.

Chaplains don't raise all of the same concerns here. They generally aren't getting above-market salaries (either for professional-degree holders generally, or compared to other holders of their degree), and there's a very large barrier to entry (in the US, often a three-year grad degree costing quite a bit of money). So there's much less incentive and opportunity for someone to grift into a chaplain position; chaplains tend to be doing it because they really believe in their work.

The PI scores the application from -5 to +5. 

Does the zero point have any specific meaning? Specifically, does a negative score convey a belief that the proposal has net-negative EV?
