All of ShayBenMoshe's Comments + Replies

Why does GiveWell not provide lower and upper estimates for the cost-effectiveness of its top charities?

Not answering the question, but I would like to quickly mention a few of the benefits of having confidence/credible intervals or otherwise quantifying uncertainty. All of these comments are fairly general, and are not specific criticisms of GiveWell's work. 

  1. Decision making under risk aversion - Donors (large or small) may have different levels of risk aversion. In particular, some donors might prefer having higher certainty of actually making an impact at the cost of having a lower expected value. Moreover, (mostly large) donors could build a por
... (read more)
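To make this concrete, here is a minimal sketch of what a Monte Carlo credible interval for a cost-effectiveness estimate could look like. Every input below is made up for illustration; nothing here is a GiveWell figure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical lognormal inputs; every number is made up for illustration.
cost_per_intervention = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n)      # $ per intervention
effect_per_intervention = rng.lognormal(mean=np.log(0.002), sigma=0.5, size=n)  # outcomes per intervention

cost_effectiveness = cost_per_intervention / effect_per_intervention  # $ per outcome

lo, med, hi = np.percentile(cost_effectiveness, [5, 50, 95])
print(f"median ${med:,.0f} per outcome, 90% credible interval ${lo:,.0f}-${hi:,.0f}")
```

A risk-averse donor could then weigh the interval's upper end, not just the median, when comparing charities.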
1Vasco Grilo10d
Thanks for the feedback! I do think further quantifying the uncertainty would be valuable. That being said, for GiveWell's top charities, it seems that including/studying factors which are currently not being modelled is more important than quantifying the uncertainty of the factors which are already being modelled. For example, I think the effect on population size remains largely understudied [https://forum.effectivealtruism.org/posts/26t7nC7yJ7xspkaYz/what-is-the-cost-effectiveness-of-givewell-top-life-saving].
Critiques of EA that I want to read

Yeah, that makes sense, and is fairly clear selection bias. Since here in Israel we have a very strong tech hub and many people finishing their military service in elite tech units, I see the opposite selection bias, of people not finding too many EA (or even EA-inspired) opportunities that are of interest to them.

I failed to mention that I think your post was great, and I would also love to see (most of) these critiques fleshed out.

Critiques of EA that I want to read

The fact that everyone in EA finds the work we do interesting and/or fun should be treated with more suspicion.

I would like to agree with Aaron's comment and make a stronger claim - my impression is that many EAs around me in Israel, especially those coming from a strong technical background, don't find most direct EA-work very intellectually interesting or fun (ignoring its impact).

Speaking for myself, my background is mostly in pure math and in cyber-security research / software engineering. Putting aside managerial and entrepreneurial roles, it seems to... (read more)

3abrahamrowe2mo
That's interesting and makes sense — for reference I work in EA research, and I'd guess ~90%+ of the people I regularly engage with in the EA community are really interested / excited about EA ideas. But that percentage is heavily influenced by the fact that I work at an EA organization.
Announcing Alvea—An EA COVID Vaccine Project

I am extremely impressed by this, and this is a great example of the kind of ambitious projects I would love to see more of in the EA community. I have added it to the list on my post Even More Ambitious Altruistic Tech Efforts.

Best of luck!

Why and how to be excited about megaprojects

I completely agree with everything you said (and my previous comment was trying to convey a part of this, admittedly in a much less transparent way).

Why and how to be excited about megaprojects

I simply disagree with your conclusion - it all boils down to what we have at hand. Doubling the cost-effectiveness also requires work; it doesn't happen by magic. If you are not constrained by the supply of highly effective projects which can use your resources, sure, go for it. As it seems though, we have many more resources than current small-scale projects are able to absorb, and there are a lot of "left-over" resources. Thus, it makes sense to start allocating resources to some less effective stuff.

2MichaelA7mo
Doubling the cost effectiveness while maintaining cost absorbed, and doubling cost absorbed while maintaining cost effectiveness, would both take work (scaling without dilution/breaking is also hard). Probably one tends to be harder, but that’d vary a lot between cases. But if we could achieve either for free by magic, or alternatively if we assume an equal hardness for either, then doubling cost effectiveness would very likely be better, for the reason stated above. (And that’s sufficient for “literally the same” to have been an inaccurate claim.) I think that’s just fairly obvious. Like if you really imagine you could press a button to have either effect on 80k for free or for the same cost either way, I think you really should want to press the “more cost effective” button, otherwise you’re basically spending extra talent for no reason. (With the caveat given above. Also a caveat that absorbing talent also helps build their career capital - should’ve mentioned that earlier. But still that’s probably less good than them doing some other option and 80k getting the extra impact without extra labour.) As noted above, we’re still fairly constrained on some resources, esp. certain types of talent. We don’t have leftovers of all types of resources. (E.g. I could very easily swap from my current job into any of several other high impact jobs, but won’t because there’s only 1 me and I think my current job is the best use of current me, and I know several other people in this position. With respect to such people, there are left-over positions/project ideas, not left-over resources-in-the-form-of-people.)
Why and how to be excited about megaprojects

I agree with the spirit of this post (and have upvoted it) but I think it kind of obscures the really simple thing going on: the (expected) impact of a project is by definition the cost-effectiveness (also called efficiency) times the cost (or resources).
A 2-fold increase in one, while keeping the other fixed, is literally the same as having the roles reversed.
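Spelled out, with $E$ for cost-effectiveness and $C$ for cost (notation added here for clarity, not in the original comment):

$$
\text{Impact} = E \times C, \qquad (2E)\,C = E\,(2C) = 2\,(E \times C),
$$

so the two doublings are interchangeable as far as impact goes; the disagreement below is about what else each doubling consumes.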

The question then is what projects we are able to execute, that is, both come up with an efficient idea, and have the resources to execute it. When resources are scarce, you really want to squeeze as... (read more)

4MichaelA7mo
But doubling the cost also doubles the cost (in addition to impact), while doubling the cost-effectiveness doubles only the impact. That’s a pretty big difference! Like if we could either make 80k twice as big in terms of quality-adjusted employees while keeping impact per quality-adjusted employee constant, or do the inverse, we should very likely prefer the inverse, since that leaves more talent available for other projects. (I say “very likely” because, as noted in the post, it can be valuable to practice running big things so we’re more able to run other big things.) So I disagree that your simple summary of what’s going on is a sufficient and clear picture (though your equation itself is obviously correct). Separately, I agree with your second paragraph with respect to money, but mildly disagree with the final sentence specifically with respect to talent, or at least “vetted and trained” talent - that’s less scarce than it used to be, but still scarce enough that it’s not simply like there’s a surplus relative to projects that can absorb it. (Though more project ideas or early stage projects would still help us more productively absorb certain specific people, and I’d also say there’s kind of a surplus of less vetted and trained talent.)
Democratising Risk - or how EA deals with critics

I am not sure that there is actually a disagreement between you and Guy.
If I understand correctly, Guy says that in so far as a funder wants research to be conducted to deepen our understanding of a specific topic, they should not judge researchers based on their conclusions about the topic, but based on the quality and rigor of their work in the field and their contributions to the relevant research community.
This does not seem to conflict with what you said, as the focus is still on work on that specific topic.

Flimsy Pet Theories, Enormous Initiatives

I strongly agree with this post and its message.

I also want to respond to Jason Crawford's response. We don't necessarily need to move to a situation where everyone tries to optimize things as you suggest, but at this point it seems that almost no one tries to optimize for the right thing. I think even changing this for a few percent of entrepreneurial work or philanthropy could have a tremendous effect, without losing much of the creative spark people worry we might lose - or we might even gain more, as new directions open.

7Stefan_Schubert8mo
I disagree with Crawford's take. It seems to me that effective altruists have managed to achieve great things using that mindset in recent years - which is empirical evidence against his thesis.
Even More Ambitious Altruistic Tech Efforts

That's great, thanks!
I was aware of Anthropic, but not of the figures behind it.

Unfortunately, my impression is that most funding for such projects is centered around AI safety or longtermism (as I hinted in the post...). I might be wrong about this though, and I will poke around these links and names.

Relatedly, I would love to see OPP/EA Funds fund (at least a seed round or equivalent) such projects, unrelated to AI safety and longtermism, or hear their arguments against that.

Even More Ambitious Altruistic Tech Efforts

Thanks for clarifying Ozzie!
(Just to be clear, this post is not an attack on you or on your position, both of which I highly appreciate :). Instead, I was trying to raise a related point, which seems extremely important to me and which I have been thinking about recently, and to make sure the discussion doesn't converge to a single point.)

With regards to the funding situation, I agree that many tech projects could be funded via traditional VCs, but some might not be, especially those that are not expected to be very financially rewarding or that are very risky (a few examples t... (read more)

3Ozzie Gooen9mo
Thanks! I didn't mean to say it was, just was clarifying my position. Now that I think about it, the situation might be further along than you might expect. I think I've heard about small "EA-adjacent" VCs starting in the last few years.[1] There are definitely socially-good-focused VCs out there, like 50 Years [https://fiftyyears.com/] VC. Anthropic was recently funded for $124 Million as the first round. Dustin Moskovitz, Jaan Tallinn, and the Center for Emerging Risk Research all were funders (all longtermists). I assume this was done fairly altruistically. I think Jaan has funded several altruistic EA projects, including ones that wouldn't have made sense just on a financial level. https://pitchbook.com/profiles/company/466959-97?fbclid=IwAR040xC65lCV0ZW68DOXwI7K_RkSzyr7ZJa9HBs7R7C4ZkFGM5sC1Lec9Wk#team https://www.radiofreemobile.com/anthropic-open-ai-mission-impossible/?fbclid=IwAR3iC0B-EKFD40Hf7DXEedI_tzFgqypT7_Pf4jSiUhPeKbHq_xFawHc-rpA [1]: Sorry for forgetting the 1-2 right names here.
Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits

I wrote a response post Even More Ambitious Altruistic Tech Efforts, and I would love to spinoff relevant discussion there. The tl;dr is that I think we should have even more ambitious goals, and try to initiate projects that potentially have a very large direct impact (rather than focus on tools and infrastructure for other efforts).

Also, thanks for writing this post Ozzie. Despite my disagreements with your post, I mostly agree with your opinions and think that more attention should be steered towards such efforts.

CEA grew a lot in the past year

I just want to add, on top of Haydn's comment to your comment, that:

  1. You don't need the treatment and the control group to be of the same size, so you could, for instance, randomize among the top 300 candidates.

  2. In my experience, when there isn't a clear metric for ordering, it is extremely hard to make clear judgements. Therefore, I think that in practice it is very likely that, say, places 100-200 in their ranking will seem very similar.

I think that these two factors, combined with Haydn's suggestion to take the top candidates and exclude them from the study, make it very reasonable and very low-cost.
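For illustration, here is a minimal sketch of this design; the numbers are hypothetical, not Yale EA's. Admit the clear top outright per Haydn's suggestion, then randomize the hard-to-distinguish tier into (possibly unequal) treatment and control groups:

```python
import random

random.seed(42)

# Hypothetical ranked shortlist, best candidate first.
ranked = [f"candidate_{i:03d}" for i in range(1, 301)]

# Haydn's suggestion: admit the clear top outright, outside the study.
admitted_outright = ranked[:100]

# Ranks ~100-300 look very similar anyway (point 2), so randomize among them.
pool = ranked[100:300]
random.shuffle(pool)

# The groups need not be equal-sized (point 1).
treatment = pool[:80]    # admitted to the fellowship as part of the study
control = pool[80:120]   # not admitted; eventual engagement measured for both
```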

Has anyone done any work on how donating to lab grown meat research (https://new-harvest.org/) might compare to Giving Green's recommendations for fighting climate change?

Last August Stijn wrote a post titled The extreme cost-effectiveness of cell-based meat R&D about this subject.
Let me quote the bottom line (emphasis mine):

This means one euro extra funding spares 100 vertebrate land animals. Including captured and aquaculture fish (also fish used for fish meal for farm animals), the number becomes an order of magnitude higher: 1000 vertebrate animals saved per euro.
...
Used as carbon offsetting, cell-based meat R&D has a price of around 0,1 euro per ton CO2e averted.

In addition, as I wrote in a comment, I also did a back of the... (read more)

List of Under-Investigated Fields - Matthew McAteer

Thanks for linking this, this looks really interesting! If anyone is aware of other similar lists, or of more information about those fields and their importance (whether positive or negative), I would be interested in that.

2EdoArad2y
I think that is really really REALLY important, but not everyone agrees. You can find more information in this critical review [https://ssir.org/articles/entry/the_elitist_philanthropy_of_so_called_effective_altruism]. 😘
My Career Decision-Making Process

Thanks for detailing your thoughts on these issues! I'm glad to hear that you are aware of the different problems and tensions, and made informed decisions about them, and I look forward to seeing the changes you mentioned being implemented.

I want to add one comment about the How to plan your career article, if it's already mentioned. I think it's really great, but it might be a little bit too long for many readers' first exposure. I just realized that you have a summary on the Career planning page, which is good, but I think it might be too short. I fo... (read more)

Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement

Thanks for publishing negative results. I think that it is important to do so in general, and especially given that many other groups may have relied on your previous recommendations.

If possible, I think you should edit the previous post to reflect your new findings and link to this post.

2jessica_mccurdy2y
Ah thanks for the reminder! I meant to do this and forgot! :)
(Autistic) visionaries are not natural-born leaders

Thanks to Aaron for updating us, and thanks guzey for adding the clarification at the top of the post.

How EA Philippines got a Community Building Grant, and how I decided to leave my job to do EA-aligned work full-time

Thank you for writing this post, Brian. I appreciate your choices and would be interested to hear in the future (say in a year, and even after) how things worked out, how excited you will be about your work, and whether you will be able to sustain this financially.

I also appreciate the fact that you took the time to explicitly write those caveats.

2BrianTan2y
Hey Shay, thanks for the message! I agree on the value of writing an update towards the end of this year about how things worked out, how excited or not I still am about this work and about EA, and if our CB grant was renewed. Regarding whether I will be able to sustain this financially: even though I'm paid only 0.61 FTE, the funding is enough for my needs currently. I would like this funding to be increased to 0.8 or 1 FTE so I can be more comfortable with this long-term, but if not, I think I'll be willing to do this work full-time even at 0.61 FTE for 2-3 more years. I'm not too worried about my financial situation.
(Autistic) visionaries are not natural-born leaders

I meant the difference between using the two; I don't doubt that you understand the difference between autism and (lack of) leadership. In any case, this was not my main point, which is that the word autistic in the title does not help your post in any way, and spreads misinformation.

I do find the rest of the post insightful, and I don't think you are intentionally trying to start a controversy. If you really believe that this helps your post, please explain why (you haven't so far).

(Autistic) visionaries are not natural-born leaders

I don't understand how you can seriously not understand the difference between the two. Autism is a developmental disorder, which manifests itself in many ways, most of which are completely irrelevant to your post. Whereas being a "terrible leader", as you call them, is a personal trait which does not resemble autism in almost any way.

Furthermore, the word autistic in the title is not only completely speculative, but also does not help your case at all.

I think that by using that term so explicitly in your title, you spread misinformation, and with no good reason. I ask you to change the title, or let the forum moderators handle this situation.

Aaron Gertler2yModerator Comment12

Note from the lead moderator: We discussed a potential change to the post title, but no participants in the discussion thought that doing so was the right move. 

I personally found the title confusing and annoying for some of the reasons others have mentioned, but titles don't have to help the author's case (or even make sense). 

If a claim that someone had been diagnosed with a developmental disorder were being applied with no evidence to someone other than a public figure, it would clearly run afoul of our rules. But in this case, I don't think t... (read more)

Contrary to your insinuation, I never wrote that I don't understand the difference between those two. I was pointing out that Brian's argument applies to both "(autism)" and "terrible leaders".

My Career Decision-Making Process

Hey Arden, thanks for asking about that. Let me start by also thanking you for all the good work you do at 80,000 Hours, and in particular for the various pieces you wrote that I linked to in section 8 (General Helpful Resources).

Regarding the key ideas vs old career guide, I have several thoughts which I have written below. Because 80,000 Hours' content is so central to EA, I think that this discussion is extremely important. I would love to hear your thoughts about this Arden, and I will be glad if others could share their views as well, or even have a separate d... (read more)

7Ardenlk2y
Thanks for this quick and detailed feedback shaybenmoshe, and also for your kind words! Yes. We decided to go "ideas/information-first" for various reasons, which has upsides but also downsides. We are hoping to mitigate the downsides by having practical, career-planning resources more emphasised alongside Key Ideas. So in the future the plan is to have better resources on both kinds of things, but they'll likely be separated somewhat -- like here are the ideas [set of articles], and here are the ways to use them in your career [set of articles]. We do plan to introduce the ideas first though, which we think are important for helping people make the most of their careers. That said, none of this is set in stone. We became aware of the AI safety problem last year -- we've tried to deemphasise AI Safety relative to other work since to make it clearer that, although it's our top choice for most pressing problem and therefore what we'd recommend people work on if they could work on anything equally successfully, that doesn't mean that it's the only or best choice for everyone (by a long shot!). I'm hoping Key Ideas no longer gives this impression, and that our lists of other problems and paths might help show that we're excited about people working on a variety of things. Re: Longtermism, I think our focus on that is just a product of most people at 80k being more convinced of longtermism's truth/importance, so a longer conversation! I totally agree with this and think it's a problem with Key Ideas. We are hoping the new career planning process we've released can help with this, but also know that it's not the most accessible right now. Other things we might do: improve our 'advice by expertise' article, and try to make clear in the problems section (similar to the point about ai safety above) that we're talking about what is most pressing and therefore best to work on for the person who could do anything equally successfully, but that career capital and personal fit
3Kirsten2y
Strong upvoted this as I feel almost exactly the same way! I've tried the new 80k Google doc but liked the old career guide and career decision making tool a lot better.
My Career Decision-Making Process

Thanks for spelling out your thoughts, these are good points and questions!

With regards to potentially impactful problems in health. First, you mentioned anti-aging, and I wish to emphasize that I didn't try to assess it at any point (I am saying this because I recently wrote a post linking to a new Nature journal dedicated to anti-aging). Second, I feel that I am still too new to this domain to really have anything serious to say, and I hope to learn more myself as I progress in my PhD and work at KSM institute. That said, my impression (which is mostly b... (read more)

My Career Decision-Making Process

Thanks for your comment Michelle! If you have any other comments to make on my process (both positive and negative), I think that would be very valuable for me and for other readers as well.

Important Edit: Everything I wrote below refers only to technical cyber-security (and formal verification) roles. I don't have strong views on whether governance, advocacy or other types of work related to those fields could be impactful. My intuition is that these are indeed more promising than technical roles.

I don't see any particularly important problem that can be ... (read more)

My Career Decision-Making Process

This is a very good question and I have some thoughts about it.

Let me begin by answering about my specific situation. As I said, I have many years of experience in programming and cyber security. Given my background and connections (mostly from the army) it was fairly easy for me to find multiple companies I could work for as a contractor/part-time employee. In particular, in the past 3 years I have worked part-time in cyber security and had a lot of flexibility in my hours. Furthermore, I am certain that it is also possible to find such positions in more ... (read more)

5aogara2y
Thanks, that makes sense. Freelancing in software development and tech seems to me like a reasonable path to a well-paid part-time gig for many people. I wonder what other industries or backgrounds lend themselves towards these kinds of jobs. While this is fascinating, I’d be most interested in your views on AI for Good, healthcare, and the intersection between the two, as potential EA cause areas. Your views, as I understand them (and please correct me where I’m wrong): You see opportunity for impact in applying AI and ML techniques to solve real-world problems. Examples include forecasting floods and earthquakes, or analyzing digital data on health outcomes. You’re concerned that there might already be enough talented people working on the most impactful projects, thereby reducing your counterfactual impact, but you see opportunities for outsize impact when working on a particularly important problem or making a large counterfactual contribution as an entrepreneur. Without having done a fraction of the research you clearly have, I’m hopeful that you’re right about health. Anti-aging research and pandemic preparedness seem to be driving EA interest into healthcare and medicine more broadly, and I’m wondering if more mainstream careers in medical research and public health might be potentially quite impactful, if only from a near-term perspective. Would be interested in your thoughts on which problems are high impact, how to identify impactful opportunities when you see them, and perhaps the overall potential of the field for EA — as well as anything anyone else has written on these topics. AI for Good seems like a robustly good career path in many ways, especially for someone interested in AI Safety (which, as you note, you are not). Your direct impact could be anywhere from “providing a good product to paying customers” to “solving the world’s most pressing problems with ML.” You can make a good amount of money and donate a fraction of it. You’ll meet an ambit
Open Philanthropy: 2020 Allocation to GiveWell Top Charities

Thanks for cross-posting this, I probably wouldn't hear about this otherwise.

I am very interested in Open Phil's model regarding the best time to donate for such causes. If anyone is aware of similar models for large donors, I would love to hear about them.

My upcoming CEEALAR stay

Thanks for sharing that, that sounds like an interesting plan.

A while ago I was trying to think about potential ways to have a large impact via formal verification (after reading this post). I didn't give it much attention, but it looks like others and I don't see a case for this career path being highly impactful, though I'd love to be proven wrong. I would appreciate it if you could elaborate on your perspective on this. I should probably mention that I couldn't find a reference to formal verification in agent foundations (but I didn't really read it), and Va... (read more)

1quinn2y
Thanks for the comment. I wasn't aware of yours and Rohin's discussion on Arden's post. Did you flesh out the inductive alignment idea on lw or alignment forum? It seems really promising to me. I want to jot down notes more substantive than "wait until I post 'Going Long on FV' in a few months" today. FV IN AI SAFETY IN PARTICULAR As Rohin's comment suggests, both aiming proofs about properties of models toward today's type theories and aiming tomorrow's type theories toward ML have two classes of obstacles: 1. is it possible? 2. can it be made competitive? I've gathered that there's a lot of pessimism about 1, in spite of MIRI's investment in type theory and in spite of the word "provably" in CHAI's charter. My personal expected path to impact as it concerns 1. is "wait until theorists smarter than me figure it out", and I want to position myself to worry about 2.. I think there's a distinction between theories and products, and I think programmers need to be prepared to commercialize results. There's a fundamental question: should we expect that a theory's competitiveness can be improved one or more orders of magnitude by engineering effort, or will engineering effort only provide improvements of less than an order of magnitude? I think a lot depends on how you feel about this. Asya: Asya may not have been speaking about AI safety here, but my basic thinking is that if less primitive proof assistants end up drastically more competitive, and at the same time there are opportunities to convert results in verified ML into tooling, expertise in this area could gain a lot of leverage. FV IN OTHER PATHS TO IMPACT Rohin: It's not clear to me that grinding FV directly is as wise as, say, CompTIA certifications. From the expectation that FV pays dividends in advanced cybersec, we cannot conclude that FV is relevant to early stages of a cybersec path. Related: Information security careers for GCR reduction [https://forum.effectivealtruism.org/posts/ZJiCfwTy5dC4CoxqA/i
A Case Study in Newtonian Ethics--Kindly Advise

With regards to FIRE, I myself still haven't figured out how this fits with my donations. In any case, I think that giving money to beggars adds up to less than $5 per month in my case (and probably even less on average), but I guess that also depends on where you live etc.

A Case Study in Newtonian Ethics--Kindly Advise

I would like to reiterate Edo's answer, and add my perspective.

First and foremost, I believe that one can follow EA perspectives (e.g. donate effectively) AND be kind and helpful to strangers, rather than OR (repeating an argument I made before in another context).
In particular, I personally don't record giving a couple of dollars in my donation sheet, and it does not affect my EA-related giving (at least not intentionally).

Additionally, they constitute such a small fraction of my other spending that I don't notice them financially.
Despite that, I truly b... (read more)

2Lumpyproletariat2y
Thank you for the kind words and human connection--I don't want to reiterate word for word what I said under EdoArad's post, but I'd like to. It seems to me that separating the conversation and disordering it is a tradeoff upvote-style forums make, and I'm entirely unconvinced that such is worth it. Especially for a relatively small forum where everyone reading comments is reading all the way to the bottom anyway. My situation is a bit different than yours, I think. I don't feel a strong need to spend money on things; I don't anticipate my personal expenses ever rising above five hundred dollars a month unless I move somewhere with a higher cost of living--with the expectation that such would be a net gain. After I can consistently cover essential expenses without worry, I plan to use my money as effectively as I can (well, before that point too). In my case spending money on anything trades directly against becoming financially independent sooner and then donating the surplus. I also imagine that if I made a habit of charitable giving at this juncture, I'd notice it financially pretty quick. That said, your, EdoArad's, and DonyChristie's perspectives have helped me gain, well, perspective. I'll think about this more.
The effect of cash transfers on subjective well-being and mental health

I see, thanks for the teaser :)

I was under the impression that you have rough estimates for some charities (e.g. StrongMinds). Looking forward to seeing your future work on that.

2JoelMcGuire2y
Those estimates are still in the works, but stay tuned!
The effect of cash transfers on subjective well-being and mental health

Thanks for posting that. I'm really excited about HLI's work in general, and especially the work on the kinds of effects you are trying to estimate in this post!

I personally don't have a clear picture of how much $ / WELLBY is considered good (whereas GiveWell's estimates for their leading charities are around $50-100 / QALY). Do you have a table or something like that on your website, summarizing your results for charities you found to be highly effective, for reference?

Thanks again!

3JoelMcGuire2y
I realized my previous reply might have been a bit misleading so I am adding this as a bit of an addendum. There are previous calculations which include WELLBY-like calculations, such as Michael's comparison of StrongMinds to GiveDirectly in his 2018 Mental Health cause profile [https://forum.effectivealtruism.org/posts/XWSTBBH8gSjiaNiy7/cause-profile-mental-health] or in Origins of Happiness [https://www.jstor.org/stable/j.ctvd58t1t] / Handbook for WellBeing Policy Making in the UK [https://drive.google.com/file/d/1DgyUHWzGbDjKngaIblZduTTKxIu5TrW6/view?usp=sharing]. Why do we not compare our effects to these previous efforts? Most previous estimates looked at correlational effects and give no clear estimate of the total effect through time. An aside follows: An example of these results communicated well is Micah Kaats' thesis (which I think was related to HRI's WALY report [https://www.happinessresearchinstitute.com/waly]). They show the relationship of different maladies to life satisfaction and contextualize it with different common effects of life events. Moving from standard deviations to points on a 0-11 scale is a further difficulty. Something else worth noting is that different estimation methods can lead to systematically different effect sizes. In the same thesis, Kaats shows that fixed effects models tend to have lower effects. While this may make it seem as if the non-fixed-effects estimates are over-estimates, that's only if you "are willing to assume the absence of dynamic causal relationships [https://imai.fas.harvard.edu/research/files/FEmatchLong.pdf]" -- whether that's reasonable will depend on the outcome. As Michael did in his report with StrongMinds, and Clark et al. did for two studies (moving to a better neighborhood and building cement floors in Mexico -- p. 207) in Origins of Happiness, there have been estimates of cost effectiveness that take duration of effects into consideration, but they address only single studies. We wish to hav
7JoelMcGuire2y
Hello, Glad to hear you're excited! Unfortunately, we do not have a clear picture yet of how many WELLBYs per dollar is a good deal. Cash transfers are the first intervention we (and I think anyone) have analyzed in this manner. Figuring this out is my priority and I will soon review the cost effectiveness of other interventions which should give more context. To give a sneak peek, cataract surgery is looking promising in terms of cost effectiveness compared to cash transfers.
Have you ever used a Fermi calculation to make a personal career decision?

I recently made a big career change, and I am planning to write a detailed post on this soon. In particular, it will touch this point.

I did use a Fermi calculation to estimate my impact in my career options.
In some areas it was fairly straightforward (the problem is well defined, it is possible to meaningfully estimate the percentage of problem expected to be solved, etc.). However, in other areas I am clueless as to how to really estimate this (the problem is huge and it isn't clear where I will fit in, my part in the problem is not very clear, there ar... (read more)

Prioritization in Science - current view

I think another interesting example to compare to (which also relates to Asaf Ifergan's comment) is private research institutes and labs. I think they are much more focused on specific goals, and give their researchers different incentives than academia, although the actual work might be very similar. These kinds of organizations span a long range between academia and industry.

There are of course many such examples, some of which are successful and some probably not so much. Here are some examples that come to my mind: OpenAI, DeepMind, The Institute... (read more)

A new strategy for broadening the appeal of effective giving (GivingMultiplier.org)

I just wanted to say that I really like your idea, and at least at the intuitive level it sounds like it could work. Looking forward to the assessment of real-world usage!

Also, the website itself looks great, and very easy to use.

Hiring engineers and researchers to help align GPT-3

Thanks for the response.
I believe this answers the first part, why GPT-3 poses an x-risk specifically.

Did you or anyone else ever write what aligning a system like GPT-3 looks like? I have to admit that it's hard for me to even have a definition of being (intent) aligned for a system like GPT-3, which is not really an agent on its own. How do you define or measure something like this?

Paris-compliant offsets - a high leverage climate intervention?

Thanks for posting this!

Here is a link to the full report: The Oxford Principles for Net Zero Aligned Carbon Offsetting
(I think it's a good practice to include a link to the original reference when possible.)

2Ben2y
Oops meant to add that - it's now in the first paragraph!
Hiring engineers and researchers to help align GPT-3

Quick question - are these positions relevant as remote positions (not in the US)?

(I wrote this comment separately, because I think it will be interesting to a different, and probably smaller, group of people than the other one.)

7Paul_Christiano2y
Hires would need to be able to move to the US.
Hiring engineers and researchers to help align GPT-3

Thank you for posting this, Paul. I have questions about two different aspects.

In the beginning of your post you suggest that this is "the real thing" and that these systems "could pose an existential risk if scaled up".
I personally, and I believe other members of the community, would like to learn more about your reasoning.
In particular, do you think that GPT-3 specifically could pose an existential risk (for example, if it falls into the wrong hands or is scaled up sufficiently)? If so, why, and what is a plausible mechanism by which it poses an x-risk?

On a... (read more)

I think that a scaled up version of GPT-3 can be directly applied to problems like "Here's a situation. Here's the desired result. What action will achieve that result?" (E.g. you can already use it to get answers like "What copy will get the user to subscribe to our newsletter?" and we can improve performance by fine-tuning on data about actual customer behavior or by combining GPT-3 with very simple search algorithms.)

I think that if GPT-3 was more powerful then many people would apply it to problems like that. I'm conc... (read more)

Does using the mortality cost of carbon make reducing emissions comparable with health interventions?

At some point I tried to estimate this too and got similar results. This raised several points:

  1. I am not sure what the mortality cost of carbon actually measures:
    1. I believe that the cost of an additional ton of carbon depends on the total amount of carbon already released (for example, in a 1C warming scenario it is probably very different than in a 3.5C warming scenario).
    2. The carbon and its effect will stay there and affect people for some unknown time (could be indefinitely, could be until we capture it, or until we go extinct, or some other option). This
... (read more)
Keynesian Altruism

I agree that it isn't easy to quantify all of these.

Here is something you could do, which unfortunately does not take into account the changes in charities' operations at different times, but is quite easy to do (all of the figures should be in real terms); a rough sketch in code follows the list.

  1. Choose a large interval of time (say 1900 to 2020), and at each point (say every month or year), decide how much you invest vs how much you donate, according to your strategy (and others).
  2. Choose a model for how much money you have (for example, starting with a fixed amount, or say receiving a fixed amount
... (read more)
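A minimal sketch of such a backtest, illustrative only: the return series, income model, and strategies below are all placeholders, not real data.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2021)

# Placeholder real annual returns; a real backtest would plug in historical series.
returns = rng.normal(loc=0.05, scale=0.18, size=len(years))

def backtest(donate_fraction):
    """Each year: grow the invested pot, add income, donate a fraction per the strategy."""
    invested, donated = 0.0, 0.0
    for year, r in zip(years, returns):
        invested *= 1 + r            # investment return (real terms)
        invested += 1.0              # model: fixed real income each year
        give = donate_fraction(year) * invested
        invested -= give
        donated += give              # could be weighted by that era's marginal
                                     # cost-effectiveness, per Grayden's point below
    return donated + invested        # any remainder donated at the end

give_now = backtest(lambda y: 1.0)   # donate everything each year
patient = backtest(lambda y: 0.02)   # donate 2% per year, invest the rest
print(f"give now: {give_now:.1f}, patient: {patient:.1f}")
```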
Keynesian Altruism

Thanks for posting this, this is very interesting.

Did you by any chance try to model this? It would be interesting, for example, to compare different strategies and how they would work given past data.

3Grayden2y
I haven't. I think the key debate is whether the theory could work in practice, rather than whether the theory holds. In terms of modelling, I think it would be hard to quantify the benefits as the variables (in particular: (1) the cost of downsizing and then re-scaling an organisation, and (2) change in marginal CPLSE with respect to a change in GDP) are inherently difficult to measure. Do you have any thoughts about how we could do it?
Book Review: Deontology by Jeremy Bentham

Thanks for writing this! I really like the way you write, which I found both fun and light and, at the same time, highlighting the important parts vividly. I too was surprised to learn that this is the version of utilitarianism Bentham had in his mind, and I find the views expressed in your summary (Ergo) lovely too.

The extreme cost-effectiveness of cell-based meat R&D

I too was surprised when I first read your post. I find it reassuring that our estimates are not far from each other, although the models are essentially different. I suppose we both neglect some aspects of the problem, although both models are somewhat conservative.

I agree that it is probably the case that cell-based meat is very cost-effective at greenhouse gas reduction, and I would love to see more sophisticated models than ours.

Research Summary: The Subjective Experience of Time

Thank you for the eloquent response, and for the pointers to the parts of your posts relevant to the matter.

I think I understand your position, and I will dig deeper into your previous posts to get a more complete picture of your view. Thanks once more!

The extreme cost-effectiveness of cell-based meat R&D

Thanks for sharing your computation. This strongly resonates with a (very rough) back-of-the-envelope estimate I ran for the cost-effectiveness of the Good Food Institute; the guesstimate model is here: https://www.getguesstimate.com/models/16617. The result (which shouldn't be taken too literally) is $1.4 per ton CO2e (with a 90% CI of $0.05-$5.42).

I can give more details on how my model works, but very roughly, I try to estimate the amount of CO2e saved by clean meat in general, and then try to estimate how much earlier that will happen because of GFI. Again, this is very rough, and I'd love any input, or comparison to other models.
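For reference, here is a rough Python analogue of that model's structure. The distributions are placeholders loosely seeded with the figures Stijn quotes in the reply that follows, not the actual parameters of the linked Guesstimate model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Placeholder inputs: CO2e clean meat could save per year if it succeeds,
# how many years earlier GFI brings that forward, and a success probability.
co2e_saved_per_year = rng.lognormal(np.log(3.9e9), 0.7, n)  # tons CO2e / year
years_advanced = rng.lognormal(np.log(0.7), 0.7, n)         # years earlier due to GFI
p_success = 0.25                                            # chance clean meat succeeds at scale
gfi_spending = 200e6                                        # $ total

cost_per_ton = gfi_spending / (co2e_saved_per_year * years_advanced * p_success)
lo, med, hi = np.percentile(cost_per_ton, [5, 50, 95])
print(f"${med:.2f} per ton CO2e (90% CI ${lo:.2f}-${hi:.2f})")
```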

2Stijn2y
I'm surprised by the level of agreement between our assumptions. In your model, 200 M$ funding is required to advance clean meat by 0,7 years, whereas I assumed 100M$ and 1 year. You assume a lower greenhouse gas saving: 50% of the current 7,8 Gton CO2 emissions, whereas I assumed an increase in meat consumption in a business-as-usual scenario, and a reduction of 1 ton CO2 per vegan year, which means a reduction of around 10 Gton (assuming 10B people). On the other hand, you assumed a 25% probability of success, whereas I assumed 10%. But with more lognormal error distributions, you arrive at higher $/ton estimates. Here's my guesstimate: https://www.getguesstimate.com/models/16723
Research Summary: The Subjective Experience of Time

Thank you for writing this summary (and conducting this research project)!

I have a question. I am not sure what the standard terminology is, but there are (at least) two different kinds of mental processes: reflexes/automatic responses, and thoughts or experiences which span longer times. I am not certain which are more related to capacity for welfare, but I guess it is the latter. Additionally, I imagine that the experience of time is more relevant for the former. This suggests that maybe the two are not really correlated. Have you thought about this? Is my view of the situation flawed?

Thanks again!

9Jason Schukraft2y
Thanks, that's a great question! Welfare is constituted by those things that are non-instrumentally good or bad for the creature. Insofar as reflexes are unconscious, they probably are not non-instrumentally good or bad. (They are, of course, often instrumentally good; they help the creature get other things that are good for it.) Conscious experiences, on the other hand, are usually non-instrumentally good or bad. Experiences with a positive valence are non-instrumentally good; experiences with a negative valence are non-instrumentally bad. (Experiences that are perfectly neutral may not be non-instrumentally good or bad; experiences can also be instrumentally useful in a variety of ways.) Differences in the subjective experience of time—assuming they exist—are relevant to welfare (both realized welfare and capacity for welfare) because they reflect differences in the amount of experience a creature undergoes per unit of objective time. I write about the moral importance of the subjective experience of time in this part [https://forum.effectivealtruism.org/posts/qEsDhFL8mQARFw6Fj/the-subjective-experience-of-time-welfare-implications#Why_the_Subjective_Experience_of_Time_Matters] of the first post. You're right that there are other aspects of temporal perception that may not be directly relevant to welfare. We already know that there are differences in temporal resolution (roughly: the rate at which a perceptual system samples information about its environment) across species. Enhanced temporal resolution may, among other things, enable faster unconscious reflexes. Naturally, the speed of a creature's reflexes will indirectly contribute to its welfare, but those unconscious reflexes won't be part of what constitutes the creature's welfare. Whether or not there is a correlation between temporal resolution and the subjective experience of time is an open question, one that I explore in depth in the second post [https://forum.effectivealtruism.org/posts/DAKivjBpvQ
Some promising career ideas beyond 80,000 Hours' priority paths

As someone in the intersection of these subjects I tend to agree with your conclusion, and with your next comment to Arden describing the design-implementation relationship.

Edit 19 Feb 2022: I want to clarify my position, namely, that I don't see formal verification as a promising career path. As for what I write below, I both don't believe it is a very practical suggestion, and am not at all sold on AI safety.

However, while thinking about this, I did come up with a (very rough) idea for AI alignment, where formal verification could play a significant ... (read more)

Climate change donation recommendations

I agree with your main argument, but I think that the current situation is that we have no estimate at all, and this is bad. We literally have no idea if GFI averts 1 ton CO2e at $0.01 or at $1000. I believe having some very rough estimates could be very useful, and not that hard to do.

Also, I completely agree that splitting donations is a very good idea, and I personally do it (and in particular donated to both CATF and GFI in the past).
