HowieL

I work on strategy and content at 80k. Before that, I worked on global catastrophic risk at Open Phil. Comments here are my own views only, not my present or past employers', unless otherwise specified.


HowieL's Comments

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

Fwiw, I think you're both right here. If you were to hire a reasonably good lawyer to help with this, I suspect the default is they'd say what Habryka suggests. That said, I also do think that lawyers are trained to do things like remove vagueness from policies.

Basically, I don't think it'd be useful to hire a lawyer in their capacity as a lawyer. But, to the extent there happen to be lawyers among the people you'd consider asking for advice anyway, I'd expect them to be disproportionately good at this kind of thing.

[Source: I went to two years of law school but haven't worked much with lawyers on this type of thing.]

Long-Term Future Fund: April 2019 grant recommendations

You answer "no" to "Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?" and say "2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years."


I feel like I'd understand what you mean much better if you were up for giving some probabilities here, even if you can only give a range or the numbers are imprecise or unstable. There's a really big range within "likely" and I'd like some sense of where you are on that range.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

[Note - I endorse the idea of splitting it into two much more strongly than any of the specifics in this comment]

Agree that you shouldn't be quite as vague as the GW policy (although I do think you should put a bunch of weight on GW's precedent as well as Open Phil's).

Quick thoughts on a few benefits of staying at a higher level (none of which are necessarily conclusive):

1) It's not obviously less informative.

If somebody clicks on a conflict of interest policy wanting to figure out whether they generally trust the LTF and they see a bunch of stuff about metamours and psychedelics, that's going to end up incredibly salient to them without necessarily making them more informed about what they actually cared about. It can just be a distraction.

Like, let's say analogous institutions also have psychedelic-related COIs but just group them under "important social relationships" or something. Now, the LTF looks like that fund where all the staff are doing psychedelics with the grantees. I don't think anybody became more informed. (This is especially the case if the info is available *somewhere* for people who care about the details.)


2) Flexibility

It's just really hard to anticipate all of the relevant cases, and the principles you're using are the thing you might actually want to lock in anyway.


3) Giving lots of detail means lack of disclosure can send a lot of signal.

If the policy specifies exactly how close a friendship has to be with someone else in order to trigger a disclosure, then you end up forcing members to send all sorts of weird signals by *not* disclosing things (e.g. "I don't actually consider my friendship with person X that important"). This just gets complicated fast.

---

All that said, I think a lot of this just has to be determined by the level of disclosure and type of policy LTF donors are demanding. I've donated a bit and would be comfortable trusting something more general but also am probably not representative.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

I guess I think a private board might be helpful even with pretty minimal time input. I think you mostly want some people who seem unbiased to avoid making huge errors, as opposed to trying to get the optimal decision in every case. That said, I'm sympathetic to wanting to avoid the extra bureaucracy.

The comparison to the for-profit sector seems useful but I wouldn't emphasize it *too* much. When you can't rely on markets to hold an org accountable, it makes sense that you'll sometimes need an extra layer.

When for-profits start needing to achieve legitimacy that can't be provided by markets, they seem to start looking toward these kinds of boards, too (e.g. FB looking into governance boards).

That said, I don't have a strong take on whether this is a good idea.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

Having a private board for close calls also doesn't seem crazy to me.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

Hmm. Do you have to make it public every time someone recuses themself? If someone could non-publicly recuse themself, that would at least give them the option to avoid biasing the result without having to stick their past romantic lives on the internet.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

(Note that I'm not saying that recusal would necessarily be bad)

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

Wanted to +1 this in general although I haven't thought through exactly where I think the tradeoff should be.

My best guess is that the official policy should be a bit closer to the level of detail GiveWell uses to describe their policy than to the level of detail you're currently using. If you wanted to elaborate, one possibility might be to give some examples of how you might respond to different situations in an EA Forum post separate from the official policy.
