All of HowieL's Comments + Replies

Open Thread: July 2021

Welcome! Glad you found us.

RyanCarey's Shortform

Yep - agree with all that, especially that it would be cool for somebody to look into the general question.

RyanCarey's Shortform

My impression is that a lot of her quick success was because her antitrust stuff tapped into progressive anti-Big-Tech sentiment. It's possible EAs could somehow fit into the biorisk zeitgeist, but otherwise I think it would take a lot of thought to figure out how an EA could replicate this.

Agreed that in her outlying case, most of what she's done is tap into a political movement in ways we'd prefer not to. But is that true for high performers generally? I'd hypothesise that elite academic credentials + policy-relevant research + willingness to be political is enough to get people into elite political positions (maybe a tier lower than hers, a decade later), but it'd be worth knowing how all the variables in these different cases contribute.

Intervention options for improving the EA-aligned research pipeline

Fair enough. I guess it just depends on exactly how broad or narrow a category Linch was gesturing at.

Linch (2mo): I think the crux to me is to what extent Allan's involvement in EAish AI governance is overdetermined. If, in a world with 75% less public writings on transformative AI of Bostrom's calibre, Allan would still be involved in EAish AI governance, then this would point against the usefulness of this step in the pipeline (at least with the Allan anecdote).
Intervention options for improving the EA-aligned research pipeline

I don't think Allan's really an example of this.

 

I think I’ve always been interested in computers and artificial intelligence. I followed Kasparov and Deep Blue, and it was actually Ray Kurzweil’s Age of Spiritual Machines, which is an old book, 2001 … It had this really compelling graph. It’s sort of cheesy, and it involves a lot of simplifications, but in short, it shows basically Moore’s Law at work and extrapolated ruthlessly into the future. Then, on the second y-axis, it shows the biological equivalent of computing capacity of the machine. It sho

... (read more)
MichaelA (2mo): I think that quote makes it sound like Allan already had a similar worldview and cause prioritisation to EA, but wasn't aware of or engaged with the EA community (though he doesn't explicitly say that), and so he still seems like sort-of an example. It also sounds like he wasn't actively and individually reached out to by a person from the EA community, but rather just found relevant resources himself and then reached out (to Bostrom). But that still seems like it fits the sort of thing Linch is talking about - in this case, maybe the "intervention (for improving the EA-aligned research pipeline)" was something like Bostrom's public writing and talks, which gave Allan a window into this community, which he then joined. And that seems like a good example of a field building intervention? (But that's just going from that quote and my vague knowledge of Allan.)
A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it

Fwiw, for mental health I'm not sure whether therapy is more likely to treat the 'root causes' than medications. You could have a model where some 'chemical thingie' that can be treated by meds is the root cause of mental illness and the actual cognitive thoughts treated by therapy are the symptoms. 

In reality, I'm not sure the distinction is even meaningful given all the feedback loops involved. 

How well did EA-funded biorisk organisations do on Covid?

I don't think most people would consider prevention a type of preparation. EA-funded biorisk efforts presumably did not consider it that way. And more to the point, I do not want to lump prevention together with preparation because I am making an argument about preparation that is separate from prevention. So it's not just about semantics, but about precision on which efforts did well or poorly.

I think it actually is common to include prevention under the umbrella of pandemic preparedness. For example, here's the Council on Foreign Relations' independent committ... (read more)

How well did EA-funded biorisk organisations do on Covid?

I think research into novel vaccine platforms like mRNA is a top priority. It's neglected in the sense that way more resources should be going into it, but also my impression[1] is that the USG does make up a decent proportion of funding for early-stage research into that kind of thing. So that's a sense in which the U.S.'s preparedness was probably good relative to other countries, though not in an absolute sense.

Here's an article I skimmed about the importance of govt (mostly NIH) funding for the development of mRNA vaccines. https://www.scientificamerican.com... (read more)

tessa (2mo): I second the impression that it's not that much of a surprise. For example, CEPI was founded with a goal of accelerating vaccine development against the WHO R&D Blueprint priority diseases [https://www.who.int/activities/prioritizing-diseases-for-research-and-development-in-emergency-contexts] and according to their R&D webpage [https://cepi.net/research_dev/technology/]: I think it was a surprise that non-self-amplifying mRNA vaccines work as well as they do (mRNA is more immunogenic than predicted, I guess, at least for COVID?). 18 months ago, I don't think I would have bet on mRNA platform vaccines as the future over DNA or adenovirus vaccines.
How well did EA-funded biorisk organisations do on Covid?

"effective pandemic response is not about preparation"
 

FYI - my impression is that pandemic preparedness is often defined broadly enough to include things like research into defensive technology (e.g. mRNA vaccines). It does seem like those investments were important for the response.

kbog (2mo): Hm, certainly the vaccine rollout was in hindsight the second most important thing after success or failure at initial lockdown and containment. It does seem to have been neglected by preparation efforts and EA funding before the pandemic, but that's understandable considering how much of a surprise this mRNA stuff was.
Which non-EA-funded organisations did well on Covid?

Several other people who work with them are connected to EA.

How well did EA-funded biorisk organisations do on Covid?

Note that Open Phil funded this project. https://www.nti.org/newsroom/news/nti-launch-global-health-security-index-new-grant-open-philanthropy-project/

How well did EA-funded biorisk organisations do on Covid?

In case anybody's curious: https://coronavirus.jhu.edu/map.html

How well did EA-funded biorisk organisations do on Covid?

I do think CHS should get some credit for arguing for taking pandemic response very seriously early on. For example, I think Tom had some tweets arguing for pulling out all the stops on manufacturing more PPE in January 2020. 

Note - I'm a bit biased since I was working on biorisk at Open Phil the first time Open Phil funded CHS.

How well did EA-funded biorisk organisations do on Covid?

Fwiw, my vague memory is that some other people at CHS, including Tom Inglesby (the director) did better than Adalja. I think Inglesby's Twitter was generally pretty sensible though I don't have time to go back and check. I'd guess that, like most experts, he was too pessimistic about travel restrictions, though. Maybe masks, too?

I do think CHS should get some credit for arguing for taking pandemic response very seriously early on. For example, I think Tom had some tweets arguing for pulling out all the stops on manufacturing more PPE in January 2020. 

Note - I'm a bit biased since I was working on biorisk at Open Phil the first time Open Phil funded CHS.

How well did EA-funded biorisk organisations do on Covid?

If you're referring to what I think you are, it was a different group at Hopkins.

Lukas_Gloor (2mo): Oh, you're probably right then!
EA Infrastructure Fund: May 2021 grant recommendations

If I had to pick two parts of it, they would be 3 and 4, but fwiw I got a bunch out of 1 and 2 over the last year for reasons similar to Max's.

Meta-EA Needs Models

Also seems relevant that both 80k and CEA went through YC (though I didn't work for 80k back then and don't know all the details).

Ben_West (4mo): Good point
What Makes Outreach to Progressives Hard

Indeed, IIRC, EAs tend to be more progressive/left-of-center than the general population. I can't find the source for this claim right now.

 

The 2019 EA Survey says:


"The majority of respondents (72%) reported identifying with the Left or Center Left politically and just over 3% were on the Right or Center Right, very similar to 2018."

https://forum.effectivealtruism.org/posts/wtQ3XCL35uxjXpwjE/ea-survey-2019-series-community-demographics-and#Politics

timunderwood (4mo): I think the survey is fairly strong evidence that EA has a comparative advantage in terms of recruiting left and center-left people, and should lean into that. The other side, though, is that the numbers show that there are a lot of libertarians (around 8 percent) and more 'center left' people who responded to the survey than there are 'left' people. There are substantial parts of SJ politics that are extremely disliked amongst most libertarians, and lots of 'center left' people. So while it might be okay from a recruiting and community-stability pov to not really pay attention to right-wing ideas, it is likely essential for avoiding community breakdown to maintain the current situation where this isn't a politicized space vis-à-vis left v. center-left arguments.

Probably the ideal approach is some sort of marketing segmentation where the people in Yale or Harvard EA communities use a different recruiting pitch and message that emphasizes the way that EA is a way to fulfill the broader aim of attacking global oppression, inequity and systemic issues, while people who are talking to Silicon Valley-inspired earn-to-give tech bros should keep with the current messages that seem to strongly resonate with them. More succinctly: Scott Alexander shouldn't change what he's saying, but a guy trying to convince Yale Law students to join up shouldn't sound exactly like Scott.

Epistemologically this suggests we should spend more time engaging with the ideas of people who identify as being on the right, since clearly this is very likely to be a bigger blindspot than ideas popular with people who are 'left wing'.
Cullen_OKeefe (5mo): Thanks!
Why I find longtermism hard, and what keeps me motivated

I figured some people might be interested in whether the orientation toward longtermism that Michelle describes above is common at EA orgs, so I wanted to mention that almost everything in this post could also be describing my personal experience. (I'm the director of strategy at 80,000 Hours.)

Some preliminaries and a claim

I think this request undermines how karma systems should work on a website. 'Only people who have engaged with a long set of prerequisites can decide to make this post less visible' seems like it would systematically prevent posts people want to see less of from being downvoted.

Milan_Griffes (5mo): My request applies to both positive and negative responses.
When you shouldn't use EA jargon and how to avoid it

Most native English speakers from outside of particular nerd cultures also would have no clue what it means.

What types of charity will be the most effective for creating a more equal society?

Fair enough.

Fwiw, the forum explicitly discourages unnecessary rudeness (and encourages kindness). I think tone is part of that and the voting system is a reasonable mechanism for setting that norm. But there's room for disagreement.

If the original poster came back and edited in response to feedback or said that the tone wasn't intentional, I'd happily remove my downvote.

What types of charity will be the most effective for creating a more equal society?

I downvoted this. "Please, if you disagree with me, carry your precious opinion elsewhere" reads to me as more than slightly rude and effectively an intentional insult to people who disagree with the OP and would otherwise have shared their views. I think it's totally reasonable to worry in advance about a thread veering away from the topic you want to discuss and to preempt that with a request to directly answer your question [Edited slightly] and I wouldn't have downvoted without the reference to other people's "precious views."

HaukeHillebrandt (10mo): I downvoted this comment for not addressing the central claim but only the tone. [https://en.wikipedia.org/wiki/File:Graham%27s_Hierarchy_of_Disagreement.svg] Getting the tone right can sometimes be challenging, especially for non-native speakers. [https://www.degruyter.com/view/journals/iral/46/3/article-p245.xml?language=en]
No More Pandemics: a lobbying group?

Lobbying v. grassroots advocacy

This is just semantics, but I think you probably don't want to call what you're proposing a "lobbying group." Lobbying usually refers to one particular form of advocacy (face-to-face meetings with legislators) and in many countries[1] it is regulated more heavily than other forms of advocacy.

(It's possible that in the UK, "lobbying group" means something more general, but it doesn't in the U.S.)

[1] This is true in the U.S., which I know best. Wikipedia suggests it's true in the EU but appears less tr... (read more)

RayTaylor (10mo): Yes, lobbying prevents charity / nonprofit registration in the USA, but advocacy doesn't.
Sanjay (10mo): This is very useful, thank you!
5,000 people have pledged to give at least 10% of their lifetime incomes to effective charities

I didn't actually become a member until after the wording of the pledge changed but I do vividly remember the first wave of press because all my friends sent me articles showing that there were some kids in Oxford who were just like me.

Learning about Giving What We Can (and, separately, Jeff and Julia) made me feel less alone in the world and I feel really grateful for that.

Michelle_Hutchinson (10mo): <3
80,000 Hours user survey closes this Sunday

Hi RandomEA,

Thanks for pointing this out (and for the support).

We only update the 'Last updated' field for major updates, not small ones. I think we'll rename it 'Last major update' to make this clearer.

The edit you noticed wasn't intended to indicate that we've changed our view on the effectiveness of existential risk reduction work. That paragraph was only meant to demonstrate how it’s possible that x-risk reduction could be competitive with top charities from a present-lives-saved perspective. The author decided w... (read more)

Thanks Howie.

Something else I hope you'll update is the claim in that section that GiveWell estimates that it costs the Against Malaria Foundation $7,500 to save a life.

The archived version of the GiveWell page you cite does not support that claim; it states the cost per life saved of AMF is $5,500. (It looks like earlier archives of that same page do state $7,500 (e.g. here), so that number may have been current while the piece was being drafted.)

Additionally, the $5,500 number, which is based on GiveWell's Aug. 2017 estimates (click here and ... (read more)

Long-Term Future Fund: April 2020 grants and recommendations

Not an expert but, fwiw, my impression is that this is more common in CS than philosophy and the social science areas I know best.

Some thoughts on EA outreach to high schoolers

I'm very worried that staff at EA orgs (myself included) seem to know very little about Gen Z social media and am really glad you're learning about this.

Milan_Griffes (5mo): I've found High Tea [https://hightea.substack.com/] to be a helpful resource for staying in touch with Gen Z trends.
Some thoughts on EA outreach to high schoolers

I think it's especially dangerous to use this word when talking about high schoolers, particularly given the number of cult and near-cult groups that have arisen in communities adjacent to EA.

MichaelA's Shortform

"People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive"

I haven't read enough of your original research to know whether it applies in your case, but just flagging that most original research has a much narrower target audience than the summaries/collections, so I'd expect fewer people to find it useful (and for a relatively broad survey to be biased against it).

That said, as you know, I think your summaries/collections are useful and underprovided.

MichaelA (1y): Good point. Though I guess I suspect that, if the reason a person finds my original research not so useful is just because they aren't the target audience, they'd be more likely to either not explicitly comment on it or to say something about it not seeming relevant to them. (Rather than making a generic comment about it not seeming useful.) But I guess this seems less likely in cases where:

* the person doesn't realise that the key reason it wasn't useful is that they weren't the target audience, or
* the person feels that what they're focused on is substantially more important than anything else (because then they'll perceive "useful to them" as meaning a very similar thing to "useful")

In any case, I'm definitely just taking this survey as providing weak (though useful) evidence, and combining it with various other sources of evidence.
Should surveys about the quality/impact of research outputs be more common?

This all seems reasonable to me though I haven't thought much about my overall take.

I think the details matter a lot for "Even among individual researchers who work independently, or whose org isn't running surveys, probably relatively few should run their own, relatively publicly advertised individual surveys"

A lot of people might get a lot of the value from a fairly small number of responses, which would minimise costs and negative externalities. I even think it's often possible to close a survey after a certain number of respon... (read more)

MichaelA (1y): Agreed. This sort of thing is part of why I wrote "relatively publicly advertised", and added "And maybe it doesn't hold for surveys sent out in a more targeted manner." But good point that someone could run a relatively publicly advertised survey and then just close it after a small-ish number of responses; I hadn't considered that option.
Should surveys about the quality/impact of research outputs be more common?

[Not meant to express an overall view.] I don't think you mention the time of the respondents as a cost of these surveys, but I think it can be one of the main costs. There's also risk of survey fatigue if EA researchers all double down on surveys.

MichaelA (1y): Strong upvote for two good points that, in retrospect, I feel should've been obvious to me! In light of those points as well as what I mentioned above, my new, quickly adjusted, bottom-line view would be that:

* People considering running these surveys should take into account that cost and that risk which you mention.
* I probably still think most EA research organisations should run such a survey at least once.
* In many cases, it may make the most sense to just send it to some particular group of people, or post it in some place more targeted to their target audience than the EA Forum as a whole. This would reduce the risk of survey fatigue somewhat, in that not all these surveys are being publicised to basically all EAs.
* In many cases, it may make sense for the survey to be even shorter than my one.
* In many cases, it may make sense to run the survey only once, rather than something like annually.
* Probably no/very few individual researchers who are working at organisations that are themselves running surveys should run their own, relatively publicly advertised individual surveys (even if it's at a different time to the org's survey).
  * This is because those individuals' surveys would probably provide relatively little marginal value, while still having roughly the same time costs and survey fatigue risk.
  * But maybe this doesn't hold if the org only does a survey once, and the researcher is considering running a survey more than a year later.
  * And maybe it doesn't hold for surveys sent out in a more targeted manner.
* Even among individual researchers who work independently, or whose org isn't running surveys, probably relatively few should run their own, relatively publicly advertised individual surveys.
  * The exceptions may tend to be those who wrote a large number of outputs, on a wide range of topics, for relatively broad aud
Asking for advice

I find it off-putting, though I don't endorse my reaction, and overall I think the time savings mean I'm personally net better off when other people use it.

I think for me, it's about taking something that used to be a normal human interaction and automating it instead. Feels unfriendly somehow. Maybe that's a status thing?

Denise_Melchin (1y): Very similar here. I wouldn't quite say unfriendly/status thing, but like a social interaction with a friend got sucked into commercialized business mode ("capitalism ate your friendships!" - definitely not my endorsed reaction, but feels kind of true).
Ben Garfinkel (1y): I would also like to come out of the woodwork as someone who finds Calendly vaguely annoying, for reasons that are entirely opaque to me. (Although it's also unambiguously more convenient for me when people send me Calendly links -- and, given the choice, I think I'd mostly like people to keep doing this.)
An argument for keeping open the option of earning to save

Though there's a bit of a tradeoff where putting the money into a DAF/trust might alleviate some of the negative effects Ben mentioned but also loses out on a lot of the benefits Raemon is going for.

An argument for keeping open the option of earning to save

[My own views here, not necessarily Ben’s or “80k’s”. I reviewed the OP before it went out but don’t share all the views expressed in it (and don’t think I’ve fully thought through all the relevant considerations).]

Thanks for the comment!

“You say you take (1) to be obvious, but I think that you’re treating the optimal percentage as kind of exogenous rather than dependent on the giving opportunities in the system.”

I mostly agree with this. The argument’s force/applicability is muc... (read more)

Owen_Cotton-Barratt (1y): Thanks for pulling this out, I think this is the heart of the argument. (I think it's quite valuable to show how the case relies on this, as it helps to cancel a possible reading where everyone should assume that they personally will have better judgement than the aggregate community.) I think it's an interesting case, and worth considering carefully. We might want to consider:

1. Whether this will actually lead to incorrect spending?
   * My central best guess is that there will be enough flow of other money into longtermist-aligned purposes that this won't be an issue in coming decades, but I'm quite uncertain about that.
2. What are the best options for mitigating it?
   * Earning to save is certainly one possibility, but we could also consider e.g. whether there are direct work opportunities which would have a significant effect of passing capital into the hands of future longtermists.
Owen_Cotton-Barratt (1y): Thanks for the thoughtful reply! On reflection I realise that in some sense the heart of my objection to the post was in vibe, and I think I was subconsciously trying to correct for this by leaning into the vibe (for my response) of "this seems wrongfooted".

I quite agree that it's good if even minor considerations can be considered in a quick post. I think the issue is that the tone of the post is kind of didactic, let-me-explain-all-these-things (and the title is "an argument for X", and the post begins "I used to think not-X"): combined these are projecting quite a sense of "X is solid", and while it's great that it had lots of explicit disclaimers about this just being one consideration etc., I don't think they really do the work of cancelling the tone for feeding into casual readers' gut impressions. For an exaggerated contrast, imagine if the post read like:

I think that would have triggered approximately zero of my vibe concerns. Alternatively I think it could have worked to have a didactic post on "Considerations around earning-to-save" that felt like it was trying to collect the important considerations (which I'm not sure have been well laid out anywhere, so there might not be a canonical sense of which arguments are "new") rather than particularly emphasise one consideration.
The academic contribution to AI safety seems large

If you want some more examples of specific research/researchers, a bunch of the grantees from FLI's 2015 AI Safety RFP are non-EA academics who have done some research in fields potentially relevant to mid-term safety.

https://futureoflife.org/ai-safety-research/

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

Fwiw, I think you're both right here. If you were to hire a reasonably good lawyer to help with this, I suspect the default is they'd say what Habryka suggests. That said, I also do think that lawyers are trained to do things like remove vagueness from policies.

Basically, I don't think it'd be useful to hire a lawyer in their capacity as a lawyer. But, to the extent there happen to be lawyers among the people you'd consider asking for advice anyway, I'd expect them to be disproportionately good at this kind of thing.

[Source: I went to two years of law school but haven't worked much with lawyers on this type of thing.]

Long-Term Future Fund: April 2019 grant recommendations

You say no to "Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?" and say "2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years."


I feel like I'd much better understand what you mean if you were up for giving some probabilities here even if there's a range or they're imprecise or unstable. There's a really big range within "likely" and I'd like some sense of where you are on that range.

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

[Note - I endorse the idea of splitting it into two much more strongly than any of the specifics in this comment]

Agree that you shouldn't be quite as vague as the GW policy (although I do think you should put a bunch of weight on GW's precedent as well as Open Phil's).

Quick thoughts on a few benefits of staying at a higher level (none of which are necessarily conclusive):

1) It's not obviously less informative.

If somebody clicks on a conflict of interest policy wanting to figure out if they generally trust the LTF and they see a bunch ... (read more)

Habryka (1y): This is a more general point that shapes my thinking here a bit, not directly responding to your comment. I feel like the thing that is happening here makes me pretty uncomfortable, and I really don't want to further incentivize this kind of assessment of stuff. A related concept in this space seems to me to be the Copenhagen Interpretation of Ethics [https://blog.jaibot.com/the-copenhagen-interpretation-of-ethics/]:

I feel like there is a similar thing going on with being concrete about stuff like sexual and romantic relationships (which obviously have massive consequences in large parts of the world). And maybe more broadly having this COI policy in the first place. My sense is that we can successfully avoid a lot of criticism by just not having any COI policy, or having a really high-level and vague one, because any policy we would have would clearly signal we have looked at the problem, and are now to blame for any consequences related to it.

More broadly, I just feel really uncomfortable with having to write all of our documents to make sense on a purely associative level. I as a donor would be really excited to see a COI policy as concrete as the one above, similarly to how all the concrete mistake pages on all the EA org websites make me really excited. I feel like making the policy less concrete trades off getting something right and as such being quite exciting to people like me, in favor of being more broadly palatable to some large group of people, and maybe making a bit fewer enemies. But that feels like it's usually going to be the wrong strategy for a fund like ours, where I am most excited about having a small group of really dedicated donors who are really excited about what we are doing, much more than being very broadly palatable to a large audience, without anyone being particularly excited about it.
Request for Feedback: Draft of a COI policy for the Long Term Future Fund

I guess I think a private board might be helpful even with pretty minimal time input. I think you mostly want some people who seem unbiased to avoid making huge errors as opposed to trying to get the optimal decision in every case. That said, I'm sympathetic to wanting to avoid the extra bureaucracy.

The comparison to the for-profit sector seems useful but I wouldn't emphasize it *too* much. When you can't rely on markets to hold an org accountable, it makes sense that you'll sometimes need an extra layer.

When for-profits start to need to... (read more)

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

Having a private board for close calls also doesn't seem crazy to me.

So, the problem here is that we are already dealing with a lot of time constraints, and I feel pretty doomy about having a group that has even less time than the fund already has be involved in this kind of decision-making.

I also have a more general concern where when I look at dysfunctional organizations, one of the things I often see are profusions of board upon boards, each one of which primarily serves to spread accountability around, overall resulting in a system in which no one really has any skin in the game and in which even very simple tasks o... (read more)

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

Hmm. Do you have to make it public every time someone recuses themself? If someone could nonpublicly recuse themself that at least gives them the option to avoid biasing the result but also not have to stick their past romantic lives on the internet.

Habryka (1y): Oh, no. To be clear, recusals are generally non-public. The document above should be more clear about that. Edit: Actually, the document above does just straightforwardly say:
Request for Feedback: Draft of a COI policy for the Long Term Future Fund

(Note that I'm not saying that recusal would necessarily be bad)

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

Wanted to +1 this in general although I haven't thought through exactly where I think the tradeoff should be.

My best guess is that the official policy should be a bit closer to the level of detail GiveWell uses to describe their policy than to the level of detail you're currently using. If you wanted to elaborate, one possibility might be to give some examples of how you might respond to different situations in an EA Forum post separate from the official policy.

Habryka (1y): Yeah, splitting it into two seems reasonable, one of which is linked more prominently and one that is here on the forum, though I do much prefer to be more concrete than the GiveWell policy. I guess I am kind of confused about the benefit of being vague and high-level here. It just seems better for everyone if we are very concrete here, and I kind of don't feel super great about writing things that are less informative, but make people feel better when reading them.