All of Lorenzo's Comments + Replies

Moral trades for tax deductibility

Very happy to hear the project is still active!

Thank you so much for picking this up!

Kurzgesagt - The Last Human (Longtermist video)

Obligatory comment wondering about downvotes, did I mess up the numbers?

Kurzgesagt - The Last Human (Longtermist video)

Even without considering that, if we stay at ~140 million births per year, in 800 years 50% of all humans will have been born in our future.
And in ~7 millennia 90% of all humans will have been born in our future.
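The arithmetic behind those figures can be sketched as follows. This is only a rough check, assuming the widely cited Population Reference Bureau estimate of ~117 billion humans born to date (the post may use a slightly different total) and a constant ~140 million births per year:

```python
# Rough check: how long until a fraction f of all humans
# (past + future) will have been born in our future?
PAST_BIRTHS = 117e9       # assumed PRB estimate of humans born so far
BIRTHS_PER_YEAR = 140e6   # assumed constant birth rate

def years_until_fraction(f):
    # future / (past + future) = f  =>  future = past * f / (1 - f)
    future_births = PAST_BIRTHS * f / (1 - f)
    return future_births / BIRTHS_PER_YEAR

print(round(years_until_fraction(0.5)))  # → 836 years for 50%
print(round(years_until_fraction(0.9)))  # → 7521 years (~7.5 millennia) for 90%
```

These come out close to the round numbers in the comment (~800 years and ~7 millennia).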

Lorenzo · 7d · 0 points
Obligatory comment wondering about downvotes, did I mess up the numbers?
Nonprofit Boards are Weird

Lots of comments on Hacker News and LessWrong.

I also found the section of Charity Entrepreneurship's book on boards (Chapter 26: "Wisdom") really interesting, from a "CEO's" perspective.

They suggest having two separate "boards":

  • An informal "Advisory Board" that helps with advice, mentoring, domain expertise, signaling, fundraising and personal support; but has no legal power. Example terms: https://bit.ly/CSHterms
  • A formal Legal Board that handles finances, legal compliance, evaluates the executive team, and also provides advice
Yonatan Cale's Shortform

> Give me the obvious stuff

I expect that people who read shortforms on the EA Forum are not those who would give useful advice, and I think there are a lot of people who would be happy to give advice to someone with your skills.

Relatedly, given that "my own social circle is not worried or knowledgeable about AGI", might it make sense to spend time networking with people working on AI Safety and getting a feel for the needs and opportunities in the area, e.g. by joining discussion groups?

 

Still, random questions on plan A as someone not knowledgeable but worried about A... (read more)

Yonatan Cale · 11d · 2 points
I don't think it will help with the social aspect which I'm trying to point at

---

I think it's best if one person goes do the user research instead of each person like me bothering the AGI researchers (?) I'm happy to talk to any such person who'll talk to me and summarize whatever there is for others to follow, if I don't pick it up myself

---

Could be nice actually

---

I mean "figure out what AGI researchers need" [which is a "product" task] and help do that [which helps the community, rather than helping the research directly]

---

I'm in touch with them and basically said "yes", but they want full time and by default I don't think I'll be available, but I'm advancing and checking it
Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies

Thanks for the great post and for bringing this topic to the forum!

I found this and this recent talk by Easterlin on the topic very interesting (especially the first one).

I do wonder: if economic growth does not drive increased SWB at the population level, what does?
I'm really curious about your view on the topic, and whether it relates to HLI's research agenda.
[edit: I had somehow missed section 5 🤦, with some tentative suggestions]
[edit2: https://www.happierlivesinstitute.org/2022/07/01/2022-summer-research-fellows/ apparently it's the focus of one of the 202... (read more)

EA Dedicates

Discussion from 6 years ago: Against Segregating EAs gives reasons why having this binary distinction might be counterproductive, but comments suggest that even at the time it was useful.
The top comment even proposes "Dedicated EAs": is it a coincidence, or has the term been used elsewhere?

Why EAs should normalize using Glassdoor

I think that sending something generic like "I'm not optimistic about [org] impact", or even a very neutral review, can give some information without a significant litigation risk.

I would also consider an extremely positive review a useful signal, especially by ex-employees or ex-volunteers.

I think I would personally find it much more informative than a glassdoor review, after hearing a lot of very negative stories about Glassdoor (including yours).

Open Thread: June — September 2022

There's a link to "library" in the menu on the left of the homepage.

It links to https://forum.effectivealtruism.org/library which has a list of sequences at the bottom.

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

Another poster reached out and mentioned he was writing a post about this particular mistake, so I thought I'd leave the example up.

 

Please feel free to edit it; it will take me a while to actually post anything, as I'm still thinking about how to handle this issue in the general case. I think you can keep the interpretability by using mean(cost)/mean(QALY), but I still don't know how to handle the probability distribution.


I think that by not editing it we risk other people copying the mistake.
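To illustrate the interpretability point above, here is a hypothetical Monte Carlo sketch (the lognormal parameters are made up, not from any actual GiveWell model): mean(cost)/mean(QALY) keeps the familiar "dollars per QALY" reading, while mean(cost/QALY) gets inflated by the heavy right tail of 1/QALY.

```python
import numpy as np

# Illustrative lognormal samples for cost (dollars) and QALYs gained
rng = np.random.default_rng(0)
cost = rng.lognormal(mean=3.0, sigma=0.5, size=100_000)
qaly = rng.lognormal(mean=0.0, sigma=0.8, size=100_000)

# Ratio of means: interpretable as "total dollars per total QALY"
ratio_of_means = cost.mean() / qaly.mean()

# Mean of ratios: pulled up by samples where qaly is tiny (1/qaly explodes)
mean_of_ratios = (cost / qaly).mean()

print(ratio_of_means, mean_of_ratios)  # the second is noticeably larger
```

With independent lognormals the gap follows from Jensen's inequality: E[1/Q] > 1/E[Q], so E[C/Q] > E[C]/E[Q].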

NunoSempere · 14d · 3 points
Done.
CalebWithers · 6d · 7 points
Hello, I've recently taken over monitoring the donation swaps. There have historically been a handful of offers listed each month, but it looks like the system broke sometime over the past few weeks. Thanks to Oscar below for emailing to bring this to our attention. I'm sorry for the inconvenience to anyone who has been trying to use the service, and will hopefully provide a further update in the not-too-distant future!
Oscar Delaney · 15d · 1 point
Thanks, I had no idea! Early signs are that it is not active, but I will update this if I hear otherwise.
You don’t have to respond to every comment

I am shocked that the first good definition I found is on Urban Dictionary: https://www.urbandictionary.com/define.php?term=Epistemic%20Status

The epistemic status is a short disclaimer at the top of a post that explains how confident the author is in the contents of the post, how much reputation the author is willing to stake on it, what sorts of tests the thesis has passed.

Guy Raveh · 15d · 1 point
This seems like "how much I'm sure of this" then, isn't it?
Quantifying Uncertainty in GiveWell's GiveDirectly Cost-Effectiveness Analysis

why not R? Or write a package for R or Python? (I don't know what DSL means).

It's a Domain Specific Language: a programming language that's optimized for a specific domain.

Wikipedia lists some main advantages and disadvantages. I mostly agree with that section, and think in most cases we should go for an R, Python, or Apps Script/JS library or framework, but there are successful DSLs (e.g. LaTeX).
I'm curious what the target audience is for Squiggle; maybe professional forecasters who are not programmers?

Ozzie Gooen · 15d · 2 points
I'll write more of a post about it in a few weeks. Right now it's not really meant for external use; the API isn't quite stable. That said, you can see some older posts which give the main idea: https://www.lesswrong.com/s/rDe8QE5NvXcZYzgZ3
Uncertainty and sensitivity analyses of GiveWell's cost-effectiveness analyses

How did the chat go?

I wonder if porting GiveWell cost-effectiveness models to causal.app might make them more understandable

david_reinstein · 17d · 2 points
He was very positive about it and willing to move forward on it. I didn't/don't have the bandwidth to follow up as much as I'd like to, but maybe someone else could. (And I hope to turn back to this at some point.) I think this could be done in addition and in complement to HazelFire's [https://forum.effectivealtruism.org/posts/ycLhq4Bmep8ssr4wR/quantifying-uncertainty-in-givewell-s-givedirectly-cost] work. Note that the HazelFire effort is using the Squiggle language. I've been following up and encouraging them as well. I hope that we can find a way to leverage the best features of each of these tools, and also bring in domain knowledge.
On Deference and Yudkowsky's AI Risk Estimates

I believe Drexler is now giving the ballpark figure of 2013. My own guess would be no later than 2010…

 

I didn't see the "my own guess" part in the linked document (or the archived version), but it seems to be visible here; it was probably edited between 2001 and 2004. Mentioning it in case others are confused after trying to find the quote in context.

A database of effective productivity recommendations

Thanks for making this!

I would recommend uBlock Origin instead of Adblock Plus: it blocks more things, is faster, and just works better.

It's also the recommended ad blocker in the LW post you link to, so I'm not sure where the recommendation for Adblock Plus comes from.

Ben Williamson · 16d · 2 points
Changed!
Why the EA aversion to local altruistic action?

 How does one do something like the "improving institutions" with the little circle so that all the forum posts under that topic pop up?


It's just a link to the topic page, the same way you linked to a post. You can find all topics here: https://forum.effectivealtruism.org/topics/all (on the left of the homepage there is a "topics" link).

meta q: where one direct these types of getting starting tech support q's?

There's the Open Thread on the homepage, How to use the Forum, the Feature Suggestion Thread, or the support/feedback chat in the bottom right (at least ... (read more)

Why the EA aversion to local altruistic action?

There is a MASSIVE need for money, time and talent to develop more effective government operations.


You might be interested in the topic "Improving Institutional Decision Making" https://forum.effectivealtruism.org/topics/institutional-decision-making.
There's a lot of work in this area; e.g. recently an EA was running for Congress and there was a big campaign around it: https://forum.effectivealtruism.org/posts/Qi9nnrmjwNbBqWbNT/the-best-usd5-800-i-ve-ever-donated-to-pandemic-prevention

And posts like U.S. EAs Should Consider Applying to Join U.S. Diplomacy, ... (read more)

Who's hiring? (May-September 2022)

It's now possible to sort answers by date! \o.o/

Lorenzo didn't mention that he built the dang feature himself. Thanks Lorenzo. Also thanks Ryan for the suggestion.

Ryan Beck · 1mo · 6 points
Fantastic! Thank you!
Open Thread: Spring 2022

Hi Chris! You're probably already aware of this, but Founders Pledge and Giving Green are doing great research on this and might be worth contacting.

You might also be interested in the forum posts tagged climate change or climate engineering, and maybe contact their authors or some commenters that seem subject matter experts.

Good luck on the project!

Chris Dz · 1mo · 1 point
Good advice, thanks!
Open Thread: Spring 2022

Hi Locke!

is there a EA member directory?

I don't think there's a definition of "EA member". There is a list of users of this forum by location, a list of Giving What We Can pledgers, and some profiles on EA Hub. But many people very involved with the movement are not on any of these lists, and there are people on these lists who don't identify as "EA".

I'd be curious to learn more about why people participate in the movement. Perhaps there's a thread on that topic?

That's an interesting question; I would make one! You would get new answers, and maybe someone will... (read more)

Training a GPT model on EA texts: what data?

I honestly really don't know :/
I know it doesn't help you, but I would expect both blogs (and all the other stuff on the websites that's not in the blogs) to have some content aimed at a wider audience and some content that goes more into depth for a narrower audience.

What is the state of the art in EA cost-effectiveness modelling?

I would say that GiveWell's cost-effectiveness analyses are considered excellent (here is a guide from 2019), but they should be taken in context.
From https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/cost-effectiveness-models "we consider our cost-effectiveness numbers to be extremely rough."
"There are many limitations to cost-effectiveness estimates, and we do not assess charities only—or primarily—based on their estimated cost-effectiveness."

And this old blog post: https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-e... (read more)

Froolow · 1mo · 1 point
Thank you - really helpful additional information and very useful to have it confirmed that GiveWell are considered high quality models by the EA community. Really appreciate it.
Training a GPT model on EA texts: what data?

Does the EA community have the norm that these comments are public? I want to make sure the consent of participants is obtained.


That's a very good point, and I think it's definitely not the norm; I hadn't thought about text potentially getting leaked from the training set.

 

How should GiveWell blog and 80,000 hours blog weighted against each other?

What do you mean by weighted against each other? Do you mean compared to everything else, including the forum posts/comments?
I have no idea; I think the number of views might lead to a better representation of the wider commu... (read more)

JoyOptimizer · 1mo · 1 point
How much % of the training mix should be the GiveWell blog and how much should be the 80,000 hours blog? In other words, how many bytes of blog posts should be used from each, relative to the entire dataset? What kinds of posts are on each blog, and which best reflects the wider EA community, and which reflects the professional EA community? How can this be used to create a dataset? I also checked and neither blog has a direct view count measure-- some other proxy metric would need to be used.
Training a GPT model on EA texts: what data?

Some other resources that come to mind; I'm not sure if they would all be useful, and I'm probably forgetting tons:
 

 - https://forum.effectivealtruism.org/library
 - https://forum.effectivealtruism.org/topics/all
 - https://blog.givewell.org/ (maybe including comments)
   - Besides the blog, there's lots of other great stuff and links to documents around the GiveWell website; random samples: https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/comparing-moral-weights, https://docs.google.com/document/d/1ZKq-MNU-xtn_48u... (read more)

JoyOptimizer · 1mo · 1 point
Thanks for these sources. How should GiveWell blog and 80,000 hours blog weighted against each other? My instinct is to weight by the number of views. Does the EA community have the norm that these comments are public? I want to make sure the consent of participants is obtained.
Michael Nielsen's "Notes on effective altruism"

and not much less effective than donating 100%


Wouldn't it be roughly a tenth to half as effective?

Whereas choosing the wrong cause could cost orders of magnitude.

Stefan_Schubert · 1mo · 6 points
Fwiw, I think the logic is very different when it comes to direct work, and that phrasing it in terms of what fraction of one's time one donates isn't the most natural way of thinking about it.
Who's hiring? (May-September 2022)

I'm really surprised to read this, I think the 80k jobs board is awesome!

In what ways do you think this format is an improvement?

Milan_Griffes · 1mo · 9 points
Decentralized / less gatekept, postings can be voted on, more ability to customize contact info / next steps. (Nothing against the 80k board, which is also a valuable service.)
Who's hiring? (May-September 2022)

I don't know if there's an easy way to sort by newest, made a quick and dirty codepen using the forum API https://codepen.io/lorenzo-buonanno/full/xxYjvgW 

[This comment is no longer endorsed by its author]
Ryan Beck · 1mo · 1 point
Very cool, thank you!
How to determine distribution parameters from quantiles

Useful stuff! I was working on something similar months ago and ended up eyeballing things.

I think the links to the sheets are broken, though; they just link to this page.
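For anyone else who has been eyeballing this: here's a minimal stdlib-only sketch of the underlying idea for a lognormal. The function name and example quantiles are my own for illustration, not from the linked post:

```python
from math import log, exp
from statistics import NormalDist

def lognormal_from_quantiles(p1, q1, p2, q2):
    """Fit lognormal(mu, sigma) so that P(X <= q1) = p1 and P(X <= q2) = p2."""
    z1, z2 = NormalDist().inv_cdf(p1), NormalDist().inv_cdf(p2)
    # On the log scale, quantiles are linear in z: log(q) = mu + sigma * z
    sigma = (log(q2) - log(q1)) / (z2 - z1)
    mu = log(q1) - sigma * z1
    return mu, sigma

# Example: 10th percentile = 100, 90th percentile = 1000
mu, sigma = lognormal_from_quantiles(0.1, 100, 0.9, 1000)

# Check: the fitted distribution reproduces the target quantiles
print(exp(mu + sigma * NormalDist().inv_cdf(0.1)))  # ≈ 100
print(exp(mu + sigma * NormalDist().inv_cdf(0.9)))  # ≈ 1000
```

Two quantiles pin down the two parameters exactly here; distributions with more parameters generally need a numerical solver instead.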

Vasco Grilo · 1mo · 1 point
Sorry, and thanks! They are working now.
Will there be an EA answer to the predictable famines later this year?

I don't think people see the "right" answer as bednets even today.

GiveWell's Maximum Impact Fund and the very similar EA Global Development Fund seem to me to be more recommended choices, in part because of time-sensitive opportunities like the ones you mention (e.g. grants on COVID-19 in 2020).

As mentioned in another comment, the Maximum Impact Fund is also funding malnutrition treatment.

karthik-t · 1mo · 1 point
Sure, bednets was just a standin for systemic issues (including malnutrition). I think COVID was an outlier in EA responsiveness to a current crisis, and the more longstanding conventional wisdom is that emergencies are usually not neglected or tractable enough to be worth spending on compared to systemic issues. Doing Good Better definitely made that argument about the inefficacy of disaster relief.
Will there be an EA answer to the predictable famines later this year?

Thanks! Fixed (ironically, your link is also broken, because of the trailing ".")

I know other people who are interested in GiveWell's research on malnutrition; I think it would be super interesting if you added a comment with your eventual findings! (Or even another post.)

Will there be an EA answer to the predictable famines later this year?

You might also be interested in GiveWell's post on malnutrition: https://blog.givewell.org/2021/11/19/malnutrition-treatment/ 

Copying their conclusions:

Overall, we see malnutrition as a very promising area for funding and further research. It potentially offers $1 billion or more in funding opportunities at cost-effectiveness levels that are consistent with our top charities.

We have many open questions and a long road to go to answer them. We are currently investing significant energy into addressing our uncertainties, and we look forward to sharing m

... (read more)
Sebastian Schwiecker · 1mo · 1 point
Thanks a lot for this. Hope GiveWell will speed up their process (will contact them directly as well). Edit: Removed broken link.
Yglesias on EA and politics

If you tell the leading EA people that you are really only interested in helping save children’s lives in Africa, they will give you funny looks.


I'm really surprised to read this!
I guess this points strongly towards EA being just longtermism now, at least in how "the leading EA people" present it

timunderwood · 1mo · 4 points
It's definitely not just long termism - and at least before sbf's money started becoming a huge thing there is still an order of magnitude more money going to children in Africa than anything else. For that matter, I'm mostly longtermist mentally, and most of what I do (partly because of inertia, partly because it's easier to promote) is saving children in Africa style donations.
Does it make sense for EA’s to be more risk-seeking in earning to give?

This is something I'm thinking about for my personal situation, and I strongly agree with this comment (but don't have a lot of actual data to back this view).

Considering the subset of people that are donating everything after rent and food, this model might predict lower total donations for the higher variance distribution (I expect rent and food costs to increase once you have a more intense and higher risk job, because of opportunity costs).

But I think that in that case it's still very likely that choosing riskier options will have a much higher expecte... (read more)

EA Common App Development Further Encouragement

Hey Yonatan!

As another MVP idea, what would you think about monthly "who's hiring" and "who wants to be hired" forum threads, like they do on Hacker News?

I've heard mixed feelings about the HN version, and am curious to hear your perspective on copying it on this forum.

Yonatan Cale · 1mo · 6 points
I think the MVP-MVP would be doing this once (before we try to do it monthly) wdyt?
EA and the current funding situation

it's unreasonable to expect idea providers to also have to be the idea executors in the ideal impact marketplace

I could be wrong, but I think that most people think that the key bottleneck is "idea executors", not "idea providers". (E.g. I heard Charity Entrepreneurship has many intervention ideas, but even after extensive selection and training they are bottlenecked by finding enough founders).

So one shouldn't be surprised if they share a great idea but it doesn't get any traction, it seems to be the current state of things.

The biggest risk of free-spending EA is not optics or motivated cognition, but grift

It's likely that almost no one thinks about themselves as a grifter

I strongly agree with this, and think it's important to keep in mind

Almost everyone in EA is at least somewhat biased towards actions that will cause them to have more money and power (on account of being human)

I don't think this matches my (very limited) intuition.
I think that there is huge variance in how much different individuals in EA optimize for money/power/prestige. It seems to me that some people really want to "move up the career ladder" in EA orgs, and be the ones that have that ... (read more)

EA Creatives and Communicators Slack

Hi Matias! Did you send Jeroen_w a private message?

From the post:

How do you get in? Just send me a personal message here on the forum and introduce yourself. I won't be strict at all with accepting people. One sentence is enough. Please don't use the comment section for this as I see the comments less frequently than I see personal messages.

Where are the cool places to live where there is still *no* EA community? Bonus points if there is unlikely to be one in the future

I think ~99.9% of cities don't have in-person EA hangouts.
Maybe you can just find the best cities for you and only later filter out the few ones with an EA group?

You can also check https://forum.effectivealtruism.org/community for places to avoid

nananana.nananana.heyhey.anon · 2mo · 1 point
This isn’t my experience in the US anymore! Most major cities have an EA meetup or it feels inevitable to me that they soon will. EA is still small overall, but increasingly ubiquitous. It’s a credit to the success of movement growth. It’s also a bit overwhelming for me. See comment below; even Tulsa is likely to have an EA group soon!
EA and the current funding situation

I'm really curious as to why this is being downvoted (was at -2 when I originally wrote this comment, now it's at 0 with 7 votes), I find SoGive Grants interesting and relevant to the discussion.

Especially since More funder diversity is a main point of Luke's comment.

I don't know but FWIW my guess is some people might have perceived it as self-promotion of a kind they don't like.

(I upvoted Sanjay's comment because I think it's relevant to know about his agreement and about the plans for SoGive Grants given the context.)

EA and the current funding situation

At least one person earning to give (and not related to FTX) has a net worth of over a billion


Can't be Gary Wang, as he's related to FTX

Did Peter Thiel give "the keynote address at an EA conference"?

Yes the article is indeed full of strawmen and misleading statements.
But (not knowing anything about Torres) I felt the top comment was strongly violating the principle of charity when trying to understand the author's motivations.

I think the principle of charity is very important (especially when posting on a public forum), and saying that someone's true motivations are not the ones they claim should require extraordinary proof (which maybe is the case! I don't know anything about the history of this particular case).

Extraordinary proof? This seems too high to me. You need to strike the right balance between diagnosing dishonesty when it doesn't exist and failing to diagnose it when it does. Both types of errors have serious costs. Given the relatively high prevalence of deception among humans (see e.g. this book), I would be very surprised if requiring "extraordinary proof" of dishonesty produced the best consequences on balance.

That feels very uncharitable.

Phil isn't an unknown internet critic whose motivations are opaque; he is/was a well known person whose motivations and behaviour are known first-hand by many in the community. Perhaps other people have other motivations for disliking longtermism, but the question OP asked was about Phil specifically, and Linch gave the Phil specific answer.

dpiepgrass · 2mo · 7 points
Yeah, but who is speaking here? Beckstead? I don't know any "Beckstead"s. Phil Torres is claiming that The Longtermist Stance is "we should prioritise the lives of people in rich countries over those in poor countries", even though I've never heard EAs say that. At most Beckstead thinks so, though that's not what Beckstead said. What Beckstead said was provisional ("now seems more plausible to me") and not a call to action. Torres is trying to drag down discourse by killing nuance and saying misleading things. Torres' article is filled with misleading statements, and I have made longer and stronger remarks about it here [https://dpiepgrass.medium.com/hi-im-david-742267c29b19]. (Even so I'm upvoting you, because -6 is too harsh IMO)

OP asked a question about Torres specifically. I gave them my personal subjective impression of the best account I have about Torres' motivations. I'm not going to add a "and criticizing EA is often a virtuous activity and we can learn a lot from our critics and some of our critics may well be pure in heart and soul even if this particular one may not be" caveat to every one of my comments discussing specific criticisms of EA.

Demandingness and Time/Money Tradeoffs are Orthogonal

I agree it's very valuable, but how can it be a good signal and impossible to notice at the same time?

Jeff Kaufman · 21d · 5 points
Compare:

  • In this instance, someone demonstrated a virtue (I just saw them go out of their way to help a coworker)
  • They generally demonstrate a virtue (they never make ad hominem attacks)

Now, technically, these are really the same: even in the latter the signal is composed of individual observations. But they differ in that with the former each instance gives lots of signal (going out of your way is rare) while in the latter each instance gives very little signal (even someone pretty disagreeable is still going to spend most of their time not making ad hominem attacks). I'm interpreting Caroline as saying that when someone is practicing this virtue well you don't notice any individual instance of silence, and praise is generally something we do at the instance level. On the other hand, we can still notice that someone, over many opportunities, has consistently refrained from harmful speech. I agree, though, that it isn't a very good signal because of the difficulty in reception (less legible).
deep · 2mo · 6 points
Self-signaling value ain't something to sneeze at. Personally, a lot of my desire-for-demandingness is about reinforcing my identity as someone who's willing to make sacrifices in order to do good. ("reinforcing" meaning both getting good at that skill, and assuring myself that that's what I'm like :)