All of Howie_Lempel's Comments + Replies

No worries! It takes a couple clicks to get there.

Hey Larks,

[For transparency - I no longer work at EV.]

Yep - this was followed up on. Here are links to the pages for EV UK and EV US.

1
Larks
4mo
Thanks! I guess I was just bad at navigating the website.

This is correct. We ended up needing to resolve a couple of issues related to the inquiry before we can file. We’ve stayed in touch with the Charity Commission about the delay.

Additional restaurants I'd recommend:

  • Buddha Bodai (Chinatown) has my favorite vegetarian Chinese food. 
    • The non-vegetarian colleagues I introduced it to ended up ordering lunch there once/week for months. 
    • When I was in college, it was one of two restaurants that helped convince me I could survive as a vegetarian. (RIP Williamsburg's Foodswings, which had the world's best vegan buffalo wings).
  • Red Bamboo (Washington Square Park) has great vegan bbq wings and chicken. 
9
Rockwell
9mo
The age-old Bodhi on Mulberry vs. Buddha Bodai on Mott battle has officially made its way to the Forum!

I wanted to make some additional non-EA recommendations but don't want to blow up the comments section with non-EA stuff, so here's a thread for people to do that.

7
RyanCarey
9mo
Jazz:

  • Village Vanguard is by far my favourite jazz club in NYC. Generally there are two world-class sets per night. 
  • Other great jazz venues: Smalls, Blue Note, Dizzy’s, Birdland, Lincoln Center, etc. 

Non-dinner recs:

  • Breads Bakery: good chocolate babka 
  • Bo’s Bagels 
  • Tompkins Square Bagels 

Dinner recs (NB. I'm vegetarian, not vegan):

  • Bosino Ristorante: good Italian 
  • Dirt Candy: upscale, modern food. A non-tipping establishment. 
  • Lamalo: great mezzes and amazing bread 
  • Zou Zous: good Mediterranean 
  • Don Angie: good Italian food 
  • Dar525: good Mediterranean food in Brooklyn 
  • Hummus Market: good Mediterranean food in Brooklyn 


Thanks for sharing. I think it was brave and I appreciated getting to read this. I'm sorry you've had to go through this and am glad to hear you're feeling optimistic.

2
Devin Kalish
9mo
Thanks, I really appreciate it

This seems like an improvement to me. Thanks!

Feedback on a minor pain point for me: when I'm looking at quick takes on the front page and want to go to the permalink for the relevant take (e.g. to see all the discussion under it), I often look around for a big title to click on for a while before remembering that I'm supposed to click the icon in the top right, which is small, doesn't stand out much, and feels to me like it's somehow violating an implicit expectation I have about where to find this for this kind of content.

I have no clue whether this is... (read more)

2
quinn
10mo
The top right is a divergence from LessWrong, right? It used to be that clicking the timestamp would permalink, and I think LessWrong is still that way. 

Probably the best thing I got out of two years in law school. . .

Haha thanks Howie! I want to also give a shout-out to Amanda, who's been a leader on this work at Open Phil since 2018. And to the hundreds of EAs, including Jakub, who have done the hard work to turn funding into results for animals :)

I'd wondered about this. Doing this survey seems like a really useful contribution. Thanks!

2
Jamie Elsey
10mo
Much appreciated Howie, thanks!

I think this was a cool post and I'm excited to see this kind of discussion here. (I think it misses a bunch of advantages of small orgs, but it seems fine to have a post that's mostly about the disadvantages. Unfortunately I don't have time to write out my object-level thoughts here - just wanted to be clear that this comment is a "like," not a "(fully) agree.")

4
Ozzie Gooen
10mo
Thanks!

> I think it misses a bunch of advantages of small orgs

Yep. This topic could easily become a much larger post+project. My main priority was to represent this one side, as I've felt like this side of the discussion hasn't gotten as much attention.

One point I'll flag on the "advantages of small orgs": I've come across some specific problems at specific large organizations, but I'm not sure why exactly they exist. I hear people complain about legal/PR restrictions that come from larger organizations, but it's not clear to me whether funders should try to address that problem by encouraging small organizations or by improving the big ones. Some "big organization" challenges might be fundamental/inherent to big organizations, and some challenges exist for reasons we can fix. 

New grads are hired at L3, and almost everyone makes it to L4, typically within 2-3y. Most of them get to L5, typically 3-5y from then. L5 is a fine place to stay; getting promoted above that is harder, and most people don't. I was hired at L3 and got promoted to L6 after about 9y.

Looking at levels.fyi I see average total comp of:

  • L3: $182k
  • L4: $270k
  • L5: $357k
  • L6: $474k
  • L7: $657k

I think this is across the whole US, though, and while I can't get it to show me Bay Area numbers right now, my memory is they are about 30% higher?

But seems like I sh... (read more)

I wonder if Jack would be equally happy with the weaker claim that giving 10% is not advisable for the median American in their twenties. I'm not sure whether I'd agree even with that, but it seems more plausible to me than claiming it's not feasible.

4
Jack Lewars
10mo
Yes, this is more what I meant (although not sure this defuses the criticisms/disagreement)

And giving 10% could be not advisable (in the sense that it may not be the best possible use of the median 20s person's funds) but still superior to their counterfactual use of the funds.

Hey Bob - Howie from EV UK here. Thanks for flagging this! I definitely see why this would look concerning so I just wanted to quickly chime in and let you/others know that we’ve already gotten in touch with relevant regulators about this and I don’t think there’s much to worry about here.

The thing going on is that EV UK has an extended filing deadline (from 30 April to 30 June 2023) for our audited accounts,[1] which are one of the things included in our Annual Return. So back in April, we notified the Charity Commission that we’ll be filing our Annu... (read more)

[Only a weak recommendation.] I last looked at this >5 years ago and never read the whole thing. But FYI, Katja Grace wrote a case study on the Asilomar Conference on Recombinant DNA, which established a bunch of voluntary guidelines that have been influential in biotech. It includes an analogy to AI safety. (No need to pay me.)  https://intelligence.org/files/TheAsilomarConference.pdf

Hi, thanks for raising these questions. I wanted to confirm that Effective Ventures has seen this and is looking into it. We take our legal obligations seriously and have started an internal review to make sure we know the relevant facts.

Hi Matt - thanks for the suggestion. I agree that we should have a page like this. I’ve asked someone to take this on but we’ve got a lot of things to update at the moment so it won’t go up immediately. In the meantime, CEA’s team page has links to bios for most of the trustees here.

4
Larks
4mo
Hey Howie, over ten months later I still don't see anything on the website. (Unless I am just unusually bad at reading websites). Was this followed up on?
2
Matt Goodman
1y
Hi Howie, I'm getting back to this 3 months later. I don't think this feature has been added and I'd like to raise again that it would be good for transparency. The link to the CEA team page doesn't have bios for Tasha McCauley and Becca Kagan (who has since resigned from EVF, I guess it could be worth listing former board members). 

Thanks for the update on this! I don't think I'd heard about it.

"In 1993, he obtained a bachelor's degree in radio from Emerson College in Boston,[4] where one of his professors was the writer David Foster Wallace"

https://en.wikipedia.org/wiki/Bill_Burr

Yes — since the first week of the crisis, Nick and Will have been recused from the relevant discussions / decisions on the boards of both EV entities to avoid any potential conflict of interest. Staff in both EV entities were informed about that decision in mid-November.

4
Jason
1y
Thanks, Howie. That is reassuring. My suspicion is that FTX fallout may be the defining issue of EVF's corporate governance for the next few years, so having nearly half the board recused ain't ideal but is certainly better than non-recusal.

My guess is that Part II, trajectory changes will have a bunch of relevant stuff. Maybe also a bit of part 5. But unfortunately I don't remember too clearly.

It's been a while since I read it but Joe Carlsmith's series on expected utility might help some. 

1
Ryan Beck
2y
Thanks, I'll check that out!

[My impression. I haven't worked on grantmaking for a long time.] I think this depends on the topic, size of the grant, technicality of the grant, etc. Some grantmakers are themselves experts. Some grantmakers have experts in house. For technical/complicated grants, I think non-expert grantmakers will usually talk to at least some experts before pulling the trigger but it depends on how clearcut the case for the grant is, how big the grant is, etc.

I think parts of What We Owe the Future by Will MacAskill discuss this approach a bit.

1
Jordan Arel
2y
Mm, good point! I seem to remember something... do you remember which chapter(s), by chance?

Others, most of which I haven't fully read and not always fully on topic:

Much narrower recommendation for nearby problems is Overcoming Perfectionism (~a CBT workbook). 

I'd recommend to some EAs who are already struggling with these feelings (and know some who've really benefitted from it). (It's not precisely aimed at this but I think it can be repurposed for a subset of people.)

Wouldn't recommend to students recently exposed to EA who are worried about these feelings in future.

If you haven't come across it, a lot of EAs have found Nate Soares' Replacing Guilt series useful for this. (I personally didn't click with it but have lots of friends who did).

I like the way some of Joe Carlsmith's essays touch on this. 


FYI - subsamples of that survey were asked about this in other ways, which gave some evidence that "extremely bad outcome" was ~equivalent to extinction.


Explicit P(doom) = 5-10%

The levels of badness involved in that last question seemed ambiguous in retrospect, so I added two new questions about human extinction explicitly. The median respondent’s probability of x-risk from humans failing to control AI[1] was 10%, weirdly more than the median chance of human extinction from AI in general,[2] at 5%. This might just be because different people got these

... (read more)

Thanks for this! It was really useful and will save 80,000 Hours a lot of time.

I think the people responsible for EA Global admissions (including Amy Labenz, Eli Nathan, and others) have added a bunch of value to me over the years by making it more likely that a conversation or meeting with somebody at EA Global who I don’t already know will end up being productive. Making admissions decisions at EAG (and being the public face of an exclusive admissions policy) sounds like a really thankless job and I know a bunch of the people involved end up having to make decisions that make them pretty sad because they think it’s best for the wor... (read more)

I'm curious whether there's any answer AI experts could have given that would be a reasonably big update for you.

For example is there any level of consensus against ~AGI by 2070 (or some other date) that would be strong enough to move your forecast by 10 percentage points?

9
NunoSempere
2y
Note that we'd probably also look at the object level reasons for why they think that. E.g., new scaling laws findings could definitely shift our/my forecast by 10%.
7
elifland
2y
Fair question. I say little weight, but if it was far enough from my view I would update a little. My view also may not be representative of other forecasters, as is evident from Misha's comment.

From the original Grace et al. survey (and I think the more recent ones as well? but haven't read as closely), the ML researchers clearly had very incoherent views depending on the question being asked and the elicitation techniques, which I think provides some evidence they haven't thought about it that deeply and we shouldn't take it too seriously (some incoherence is expected, but I think they gave wildly different answers for HLMI (human-level machine intelligence) and full automation of labor). So I think I'd split up the thresholds by somewhat coherent vs. still very incoherent.

My current forecast for ~AGI by 2100 barring pre-AGI catastrophe is 80%. To move it to 70% based just on a survey of ML experts, I think I'd have to see something like one of:

1. ML experts still appear to be very incoherent, but are giving a ~10% chance of ~AGI by 2100 on average across framings.
2. ML experts appear to be somewhat coherent, and are giving a ~25% chance of ~AGI by 2100.

(but I haven't thought about this a lot; these numbers could change substantially on reflection or discussion/debate)

Good question. I think AI researchers' views inform/can inform me. A few examples from the recent NLP Community Metasurvey. I'll quote bits from this summary.

Few scaling maximalists: 17% agreed that "Given resources (i.e., compute and data) that could come to exist this century, scaled-up implementations of established existing techniques will be sufficient to practically solve any important real-world problem or application in NLP."

This was surprising and updated me somewhat against shorter timelines (and higher risk) as, for example, it clashes with ... (read more)

I definitely agree that takeaway would be a mistake. I think my view is more like "if the specifics of what MT says on a particular topic don't feel like they really fit your organisation, you should not feel bound to them, especially if you're a small organisation with an unusual culture or if their advice seems to clash with conventional wisdom from other sources, especially in Silicon Valley."

I'd endorse their book as useful for managers at any org. A lot of the basic takeaways (especially having consistent one on ones) seem pretty robust and it would be surprising if you shouldn't do them at all.

1
Richard Möhn
2y
I agree. Thanks for taking the time to hash this out with me!

Agree with a lot of this post. I lived in DC from 2008-2010 and various short periods before and after and overall I liked it (though I'd probably like it a bit less today and expect a lot of EAs to like it less than I did).

The features of DC that most affected me: -DC felt like a company town. This had advantages. I liked having tons of friends who were think tank analysts or worked on the Hill and were trying to change the world (though I suspect polarization has made the vibe a bit worse). It also had disadvantages. Relative to NYC (which I knew best at... (read more)

"I don't think they would put out material that fails to apply to them."

I think we mostly agree, but I don't think that's necessarily true. My impression is that they mainly study what's useful to their clients, and from what I can glean from their book, those clients are mostly big and corporate. I think small orgs might fall outside their main target audience.

+1 to Paul Graham's essays.

3
Richard Möhn
2y
Addendum – from https://www.manager-tools.com/2019/01/manager-tools-data-one-ones-part-1-hall-fame-guidance (WO3s = weekly one-on-ones; R&R = results and retention):

This is not directly relevant to the article above, but it's about one-on-ones, which are a core MT thing and which chapter 4 of the Effective Manager book is about.

Another excerpt, talking about their data in general: they often say that their guidance is for 90% of people 90% of the time. And their goal is: ‘Every manager effective, every professional productive.’

(Since I realize that I sound like a shill for MT, I'll say again that I'm not affiliated nor have any hidden agenda. It's just that my article refers to a lot of MT material and I'm trying to add evidence for their authority.)
1
Richard Möhn
2y
Makes sense. I'm a bit worried that people reading this will take away: ‘We're a small shop, therefore MT doesn't apply at all.’ This is not the case and I think Howie would agree. I've never worked at a big organization and MT still has helped me a lot. I've also read and listened to a ton of non-MT material on leadership, doing work, business, processes etc. So I could well be putting MT guidance in its proper context without being aware of it.

[Unfortunately didn't have time to read this whole post but thought it was worth chiming in with a narrow point.]

I like Manager Tools and have recommended it, but my impression is that some of their advice is better optimized for big, somewhat corporate organizations than for small startups and small nonprofits with an unusual amount of trust among staff. I'd usually recommend somebody pair MT with a source of advice targeted at startups (e.g. CEO Within, though the topics only partially overlap), so you know when the advice differs and can pick between them.

5
Richard Möhn
2y
Good point, thanks! Manager Tools usually explain their guidance in detail, which makes it adaptable to all kinds of organizations. And since MT itself is a small company with, I guess, an unusual amount of trust among staff, I don't think they would put out material that fails to apply to them. But I do agree that wider reading is necessary. Paul Graham's essays, for example, are a good counterpoint to MT's corporate emphasis, too.

Just making sure you saw Eli Nathan's comment saying that this year plus next year they didn't/won't hit venue capacity, so you're not taking anybody's spot.

3
Elika
2y
Thanks!! Good to know :)

tl;dr I wouldn't put too much weight on my tweet saying I think I probably wouldn't be working on x-risk if I knew the world would end in 1,000 years and I don't think my (wild) guess at the tractability of x-risk mitigation is particularly pessimistic.

***

Nice post. I agree with the overall message, as well as much of Ben's comment on it. In particular, I think emphasizing the significance of future generations, and not just reducing x-risk, might end up as a crux for how much you care about: a) how much an intervention reduces x-risk v. GCRs that are un... (read more)

2
elifland
2y
Thanks for clarifying, and apologies for making an incorrect assumption about your assessment on tractability. I edited your tl;dr and a link to this comment into the post.

I agree with Caleb that theoretical AIS, infinite ethics, and rationality techniques don't currently seem to be overprioritized. I don't think there are all that many people working full-time on theoretical AIS (I would have guessed less than 20). I'd guess less than 1 FTE on infinite ethics. And not a ton on rationality, either. 

Maybe your point is more about academic or theoretical research in general? I think FHI and MIRI have both gotten smaller over the last couple of years and CSER's work seems less theoretical. But you might still think there's... (read more)

6
Davidmanheim
2y
First, yes, my overall point was about academic and theoretical work in general, and yes, as you pointed out, in large part this relates to how object-level work on specific cause areas is undervalued relative to "meta" work - but I tried to pick even more concrete areas and organizations because I think that being more concrete was critical, even though it was nearly certain to draw more contentious specific objections. And perhaps I'm wrong, and the examples I chose aren't actually overvalued - though that was not my impression.

I also want to note that I'm more concerned about trajectory than numbers - putting aside intra-EA allocation of effort, if all areas of EA continue to grow, I think many get less attention than they deserve at a societal level. I think that the theoretical work should grow less than other areas, and far less than it seems poised to grow.

And as noted in another thread, regarding work on infinite ethics and other theoretical work, I got a very different impression at the recent GPI conference - though I clearly have a somewhat different view of what EAs work on compared to many others, since I don't ever manage to go to EAG. (Which they only ever have over the weekend, unfortunately.)

Relatedly, on rationality techniques, I see tons of people writing about them, and have seen people who have general funding spending lots of time thinking and writing about it (though I will agree there is less recently), but (despite knowing people who looked for funding) no one seems interested in funding more applied work on building out rationality techniques in curricula, or even analysis of what works.

Lastly, on your final point, my example was across the domains, but I do see the same when talking to people about funding for theoretical work on biosafety, compared to applied policy or safety work. But I am hesitant to give specific examples because the ones I would provide are things other people have applied for funding on, whereas the tw

Know that other people have gone through the disillusionment pipeline, including (especially!) very smart, dedicated, caring, independent-minded people who felt strong affinity for EA. Including people who you may have seen give talks at EA Global or who have held prestigious jobs at EA orgs.

Also, I think even people like this who haven't gone through the disillusionment pipeline are often a lot more uncertain about many (though not all) things than most newcomers would guess. 

Thanks for writing this post. I think it improved my understanding of this phenomenon and I've recommended reading it to others.

Hopefully this doesn't feel nitpicky but if you'd be up for sharing, I'd be pretty interested in roughly how many people you're thinking of:

"I know at least a handful of people who have experienced this (and I’m sure there are many more I don’t know)—people who I think are incredibly smart, thoughtful, caring, and hard-working, as well as being independent thinkers. In other words, exactly the kind of people EA needs. Typically, t... (read more)

1
Elizabeth
2y
GWWC is another source of data: 40% of EA survey takers who signed the pledge report not meeting their commitment (that year), and presumably the rate among non-survey-takers is much higher. I couldn't find direct data from Giving What We Can more recent than 2014.
2
Elizabeth
2y
I wonder if you could get rough numbers on this from EA Forum analytics? Look for people who used to post frequently and then dropped off, and then hand-check the list for people who are known to have stayed in the movement.

Before writing the post, I was maybe thinking of 3-5 people who have experienced different versions of this? And since posting I have heard from at least 3 more (depending how you count) who have long histories with EA but felt the post resonated with them.

So far the reactions I've got suggest that there are quite a lot of people who are more similar to me (still engage somewhat with EA, feel some distance but have a hard time articulating why). That might imply that this group is a larger proportion than the group that totally disengages... but the group that totally disengages wouldn't see an EA forum post, so I'm not sure :)

"My best guess is that I don't think we would have a strong connection to Hanson without Eliezer"

Fwiw, I found Eliezer through Robin Hanson.

2
Habryka
2y
Yeah, I think this isn't super rare, but overall still much less common than the reverse.
5
Akhil
2y
Yeah, it is a little bit of a counter-intuitive presentation. Basically, of the vaccines low-income countries did receive in 2021, they averted 180,300 (171,400–188,900) deaths. If LICs had achieved the WHO target of a 40% vaccination rate, they would have averted an additional 200,000 (187,900–211,900) deaths. 200,000 / 180,300 = 111%.

Agree they have a bunch of very obnoxious business practices. Just FYI, you can change a setting so nobody can see whose pages you look at.

I think Open Philanthropy has done some of this. For example:

The Open Philanthropy technical reports I've relied on have had significant external expert review. Machine learning researchers reviewed Bio Anchors; neuroscientists reviewed Brain Computation; economists reviewed Explosive Growth; academics focused on relevant topics in uncertainty and/or probability reviewed Semi-informative Priors.[2] (Some of these reviews had significant points of disagreement, but none of these points seemed to be cases where the reports contradicted a clear consensus of exp

... (read more)

Was this in the deleted tweet? The tweet I see is just him tagging someone with an exclamation point. I don't really think it would be accurate to characterise that as "Torres supports the 'voluntary human extinction' movement".

Yeah that does sell me a bit more on delegating choice.

I think that's an improvement though "delegating" sounds a bit formal and it's usually the authority doing the delegating. Would "deferring on views" vs "deferring on decisions" get what you want?

3
Owen Cotton-Barratt
2y
No, that doesn't work because epistemic deferring is also often about decisions, and in fact one of the key distinctions I want to make is when someone is deferring on a decision how that can be for epistemic or authority reasons, and how those look different. I agree it's slightly awkward that authorities often delegate, but I think that that's usually delegating tasks; "delegating choices" to me has much less connotation of a high-status person delegating to a low-status person. Although ... one of the examples of "deferring to authority" in my sense is a boss deferring to the authority of a subordinate after the subordinate has been tasked with making a decision, even though the boss disagrees and has the power to override it. With this example, "delegating choice" has very much the right connotation, and "deferring to authority" feels a bit of a stretch.

Thanks for writing this post. I think it's really valuable to distinguish the two types of deference and to push the conversation toward the question of when to defer, as opposed to how good deference is in general.

But I think "deferring to authority" is bad branding (as you worry about below), and I'm not sure your definition totally captures what you mean. I think it's probably worth changing, even though I haven't come up with great alternatives.

Branding. To my ear, deferring to authority has a very negative connotation. It suggests deferring to a preexistin... (read more)

2
Vaidehi Agarwalla
2y
A related post on the importance of delegating choice (though not framed as a trade-off between buying into a thing vs. doing it) is Jan Kulveit's What to do with people from a few years ago. 
4
Owen Cotton-Barratt
2y
Perhaps "deferring on views" vs "delegating choices" ?