HowieL

I work on strategy and content at 80k. Before that, I worked on global catastrophic risk at Open Phil. Comments here are my own views only, not my present or past employers', unless otherwise specified.

Comments

Some thoughts on EA outreach to high schoolers

I'm very worried that staff at EA orgs (myself included) seem to know very little about Gen Z social media, and I'm really glad you're learning about this.

Some thoughts on EA outreach to high schoolers

I think it's especially dangerous to use this word when talking about high schoolers, particularly given the number of cult and near-cult groups that have arisen in communities adjacent to EA.

MichaelA's Shortform

"People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive"

I haven't read enough of your original research to know whether it applies in your case, but just flagging that most original research has a much narrower target audience than summaries/collections, so I'd expect fewer people to find it useful (and for a relatively broad summary of feedback to be biased against it).

That said, as you know, I think your summaries/collections are useful and underprovided.

Should surveys about the quality/impact of research outputs be more common?

This all seems reasonable to me though I haven't thought much about my overall take.

I think the details matter a lot for "Even among individual researchers who work independently, or whose org isn't running surveys, probably relatively few should run their own, relatively publicly advertised individual surveys"

A lot of people might get much of the value from a fairly small number of responses, which would minimise costs and negative externalities. I even think it's often possible to close a survey after a certain number of responses.

A counterargument is that the people who respond earliest might be unrepresentative. But for a lot of purposes, it's not obvious to me you need a representative sample. "Among the people who are making the most use of my research, how is it useful" can be pretty informative on its own.

Should surveys about the quality/impact of research outputs be more common?

[Not meant to express an overall view.] I don't think you mention respondents' time as a cost of these surveys, but I think it can be one of the main costs. There's also a risk of survey fatigue if EA researchers all double down on surveys.

Asking for advice

I find it off-putting, though I don't endorse my reaction, and overall I think the time savings mean I'm personally net better off when other people use it.

I think for me, it's about taking something that used to be a normal human interaction and automating it instead. Feels unfriendly somehow. Maybe that's a status thing?

An argument for keeping open the option of earning to save

Though there's a bit of a tradeoff: putting the money into a DAF/trust might alleviate some of the negative effects Ben mentioned, but it would also lose out on a lot of the benefits Raemon is going for.

An argument for keeping open the option of earning to save

[My own views here, not necessarily Ben’s or “80k’s”. I reviewed the OP before it went out but don’t share all the views expressed in it (and don’t think I’ve fully thought through all the relevant considerations).]

Thanks for the comment!

“You say you take (1) to be obvious, but I think that you’re treating the optimal percentage as kind of exogenous rather than dependent on the giving opportunities in the system.”

I mostly agree with this, and it makes the argument's force/applicability much weaker. Indeed, if EAs are spending a higher/lower proportion of their assets at some point in the future, that's prima facie evidence that the optimal allocation is higher/lower at that time.

(I do think a literal reading of the post is consistent with the optimal percentage varying endogenously but agree that it had an exogenous 'vibe' and that's important.)

“So the argument really feels like:
Maybe in the future the community will give to some places that are worse than this other place [=saving]. If you’re smarter than the aggregate community then it will be good if you control a larger slice of the resources so you can help to hedge against this mistake. This pushes towards earning.
I think if you don’t have reason to believe you’ll do better than the aggregate community then this shouldn’t get much weight; if you do have such reason then it’s legitimate to give it some weight. But this was already a legitimate argument before you thought about saving! It applies whenever there are multiple possible uses of capital and you worry that future people might make a mistake. I suppose whenever you think of a new possible use of capital it becomes a tiny bit stronger?”

I think this is a good point but a bit too strong, as I do think there's more to the argument than just the above. I feel pretty uncertain whether the below holds together and would love to be corrected, but I understood the post to be arguing something like:

i) For people whose assets are mostly financial, it's pretty easy to push the portfolio toward the now/later distribution they think is best. If this were also true for labour, and actors faced no other constraints/incentives, then I'd expect the community's allocation to reflect its aggregate beliefs about the optimum, so pushing away from that would constitute a claim that you know better.

ii) But, actors making up a large proportion of total financial assets may have constraints other than maximising impact, which could lead the community to spend faster than the aggregate of the community thinks is correct:

  • Large donors usually want to donate before they die (and Open Phil's donors have pledged to do so). (Of course, it's arguable whether this should be modeled as such a constraint or as a claim about optimal timing.)

  • Other holders of financial capital may not have enough resources to realistically make up for that.

iii) In an idealised ‘perfect marketplace’ holders of human capital would “invest” their labour to make up for this. But they also face constraints:

  • Global priorities research, movement/community building, and ‘meta’ can only usefully absorb a limited amount of labour.
  • Human capital can’t be saved after you die and loses value each year as you age.
  • [I’m less sure about this one and think it’s less important.] As career capital opportunities dry up when people age, it will become more and more personally costly for them to stay in career capital mode to ‘invest’ their labour. This might lead reasonable behaviour from a self-interested standpoint to diverge from what would create a theoretically optimal portfolio for the community.

This means that for the community to maintain the allocation it thinks is optimal, people may have to convert their labour into capital so that it can be ‘saved/invested.’ But most people don’t even know that this is an option (ETA: or at least it's not a salient one) and haven’t heard of earning to save. So pointing this out may empower the community to achieve its aggregate preferences, as opposed to being a way to undermine them.

“But at present I’m worried that this isn’t really a new argument and the post risks giving inappropriate prominence to the idea of earning to save (which I think could be quite toxic for the community for reasons you mention), even given your caveats.”

I agree this is a reasonable concern, and I was a bit worried about it too, since I think this is overall a small consideration in favor of earning to save, which I agree could be quite toxic. But the post does try to caveat a lot, and it seems good overall for there to be a forum where even minor considerations can be explored in a quick post, so I thought it was worth posting. (Fwiw, I think getting this reaction from you was valuable.)

I’m open to the possibility that this isn’t realistic, though. And something like “some considerations on earning to save” might have been a better title.

The academic contribution to AI safety seems large

If you want some more examples of specific research/researchers, a bunch of the grantees from FLI's 2015 AI Safety RFP are non-EA academics who have done some research in fields potentially relevant to mid-term safety.

https://futureoflife.org/ai-safety-research/
