shaybenmoshe


Comments

The effect of cash transfers on subjective well-being and mental health

I see, thanks for the teaser :)

I was under the impression that you had rough estimates for some charities (e.g. StrongMinds). Looking forward to seeing your future work on that.

The effect of cash transfers on subjective well-being and mental health

Thanks for posting that. I'm really excited about HLI's work in general, and especially the work on the kinds of effects you are trying to estimate in this post!

I personally don't have a clear picture of what $/WELLBY figure is considered good (whereas GiveWell's estimates for their leading charities are around 50-100 $/QALY). Do you have a table or something like that on your website, summarizing your results for charities you found to be highly effective, for reference?
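For concreteness, here is a minimal sketch of the kind of comparison I have in mind, using the standard definition of a WELLBY (a one-point change in life satisfaction, on a 0-10 scale, for one person for one year); all the numbers are hypothetical placeholders, not HLI's figures:

```python
# Minimal sketch of a cost-per-WELLBY calculation.
# One WELLBY = a one-point change in life satisfaction (0-10 scale)
# for one person for one year. All numbers below are hypothetical.

cost_per_person = 1000.0   # hypothetical program cost per recipient, in $
ls_effect = 0.1            # hypothetical life-satisfaction gain, in points
duration_years = 2.0       # hypothetical duration of the effect, in years

wellbys_per_person = ls_effect * duration_years          # 0.2 WELLBYs
cost_per_wellby = cost_per_person / wellbys_per_person   # $5,000 / WELLBY
print(f"${cost_per_wellby:,.0f} per WELLBY")
```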

Thanks again!

Have you ever used a Fermi calculation to make a personal career decision?

I recently made a big career change, and I am planning to write a detailed post on this soon. In particular, it will touch on this point.

I did use Fermi calculations to estimate my impact in my career options.
In some areas it was fairly straightforward (the problem is well defined, it is possible to meaningfully estimate the percentage of the problem expected to be solved, etc.). However, in other areas I am clueless as to how to really estimate this (the problem is huge, it isn't clear where I would fit in, my part in the problem is not well defined, there are too many other factors and actors, etc.).

In my case, I had 2 leading options, one of which was reasonably amenable to these kinds of estimates, and the other not so much. The interesting thing was that in the first case, my potential impact turned out to be around the same order of magnitude as earning to give (EtG), maybe a little more (though with a wide confidence interval).
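To make the shape of such a calculation concrete, here is a minimal sketch of a Fermi estimate of this kind; every number is a made-up placeholder for illustration, not my actual estimate:

```python
# A Fermi-style sketch comparing direct-work impact to earning to give (EtG).
# All numbers below are hypothetical placeholders, purely for illustration.

problem_value = 1e9      # hypothetical value of fully solving the problem, $
fraction_solved = 1e-3   # hypothetical fraction of the problem my work solves
p_success = 0.3          # hypothetical probability my contribution matters
career_years = 10        # hypothetical number of years in the role

direct_per_year = problem_value * fraction_solved * p_success / career_years

etg_per_year = 30_000    # hypothetical annual donations from EtG, $

print(f"Direct work: ~${direct_per_year:,.0f} / year")   # ~$30,000 / year
print(f"EtG:         ~${etg_per_year:,.0f} / year")
print(f"Ratio:        {direct_per_year / etg_per_year:.1f}x")
```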

All in all, I think this is a helpful method to gain some understanding of the things you can expect to achieve, though, as usual, these estimates shouldn't be taken too seriously in my opinion.

Prioritization in Science - current view

I think another interesting example to compare to (which also relates to Asaf Ifergan's comment) is private research institutes and labs. I think they are much more focused on specific goals, and give their researchers different incentives than academia, although the actual work might be very similar. These kinds of organizations span a wide spectrum between academia and industry.

There are of course many such examples, some of which are successful and some probably not so much. Here are some that come to mind: OpenAI, DeepMind, the Institute for Advanced Study, Bell Labs, the Allen Institute for Artificial Intelligence, MIGAL (Israel).

A new strategy for broadening the appeal of effective giving (GivingMultiplier.org)

I just wanted to say that I really like your idea, and at least at the intuitive level it sounds like it could work. Looking forward to the assessment of real-world usage!

Also, the website itself looks great and is very easy to use.

Hiring engineers and researchers to help align GPT-3

Thanks for the response.
I believe this answers the first part, why GPT-3 poses an x-risk specifically.

Did you or anyone else ever write up what aligning a system like GPT-3 looks like? I have to admit that it's hard for me to even define what being (intent) aligned means for a system like GPT-3, which is not really an agent on its own. How do you define or measure something like this?

Paris-compliant offsets - a high leverage climate intervention?

Thanks for posting this!

Here is a link to the full report: The Oxford Principles for Net Zero Aligned Carbon Offsetting
(I think it's a good practice to include a link to the original reference when possible.)

Hiring engineers and researchers to help align GPT-3

Quick question - are these positions open to remote applicants (not based in the US)?

(I wrote this comment separately, because I think it will be interesting to a different, and probably smaller, group of people than the other one.)
