Daniel_Eth

Comments

Digital people could make AI safer

You might be interested in my paper on this topic, where I also come to the conclusion that achieving WBE before de novo AI would be good:
https://informatica.si/index.php/informatica/article/view/1874

My first effective altruism conference: 10 learnings, my 121s and next steps

Go to EA conferences even if you don't think you are a good fit or 100% bought in to EA. It sparked my interest, sprouted ideas and I was able to tangibly help and share my experiences with others. I underestimated the value of my perspectives for others in different walks of life.

This resonated with me. For my first EA Global (back in 2016), I applied on a whim, attracted by a couple of the speakers and the fact that the conference was close to my hometown, but hesitant due to a few negative misperceptions I had about EA at the time. While there, I felt very much at home, and I've been heavily involved in EA ever since. Of course, not everyone will have the same experience, but my sense is there's a pretty wide range of surprising upsides from going to these sorts of conferences, and it's often worth going to at least one if you're uncertain.

Death to 1 on 1s

I've also found going for walks during 1-on-1s to be nice, to the point that I do this for the majority of my 1-on-1s (this also has the side benefit of reducing COVID risk).

Replicating and extending the grabby aliens model

The possibility of try-once steps allows one to reject the existence of hard try-try steps, but suppose very hard try-once steps.

  • I'm not seeing why this is. Why is that the case?

Because if (say) only 1 in 10^30 stars has a planet with just the right initial conditions to allow for the evolution of intelligent life, then that try-once step alone fully explains the Great Filter (the observable universe contains only around 10^22 to 10^24 stars, so we'd expect essentially no other intelligent life to arise), and we don't need to posit that any of the try-try steps are hard (of course, they still could be).
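
To make the arithmetic concrete, here's a minimal sketch in Python (the star count, the 1/10^30 figure, and the try-try probabilities are all illustrative assumptions, not estimates from the paper or the grabby aliens model):

```python
# Illustrative sketch: a single very hard try-once step can fully account for
# the Great Filter even if every try-try step is easy.
# All numbers below are assumptions chosen for illustration only.

N_STARS = 1e24                   # rough star count for the observable universe (assumption)
P_TRY_ONCE = 1e-30               # chance a star has a planet with the right initial conditions (assumption)
TRY_TRY_STEPS = [0.5, 0.5, 0.5]  # three "easy" try-try steps, none of them hard (assumption)

# Per-star probability of producing intelligent life is the product of all steps.
p_per_star = P_TRY_ONCE
for p in TRY_TRY_STEPS:
    p_per_star *= p

expected_civilizations = N_STARS * p_per_star
print(f"Expected civilizations: {expected_civilizations:.2e}")
# Prints ~1.25e-07: far below 1, so the Filter is explained without any hard try-try steps.
```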

A new media outlet focused in part on philanthropy

FWIW, I found the interview with SBF to be quite fair, and imho it presented Sam in a neutral-to-positive light (though perhaps a bit quirky). Teddy's more recent reporting/tweets about Sam also strike me as both fair and neutral to positive.

evelynciara's Shortform

Hmm, culturally YIMBYism seems much harder to pull off in suburbs/rural areas. I wouldn't be too surprised if the easiest theory of change here is to pass YIMBY-style energy policies at the state level, with most of the support coming from urbanites.

But sure, still probably worth trying.

evelynciara's Shortform

I thought YIMBYs were generally pretty in favor of this already? (Though not generally as high a priority for them as housing.) My guess is it would be easier to push the already existing YIMBY movement to focus on energy more, as opposed to creating a new movement from scratch.

Daniel_Eth's Shortform

Not just EA funds; I think (almost?) all random, uninformed EA donations would be much better than donations to an index fund covering all charities on Earth.

A Model of Patient Spending and Movement Building

if one wants longtermism to get a few big wins to increase its movement building appeal, it would surprise me if the way to do this was through more earning to give, rather than by spending down longtermism's big pot of money and using some of its labor for direct work

I agree – I think the practical implication is more "this consideration updates us towards funding/allocating labor towards direct work over explicit movement building" and less "this consideration updates us towards E2G over direct work/movement building".

A Model of Patient Spending and Movement Building

because of scope insensitivity, I don't think potential movement participants would be substantially more impressed by $2*N billions of GiveDirectly-equivalents of good per year vs just $N billions

Agree (though potential EAs may be more likely to be impressed by that stuff than most people), but I think the qualitative things we could accomplish would be impressive. For instance, if we funded a cure for malaria (or cancer, or ...), I think that would be more impressive than if we funded some people trying to cure those diseases but none of them succeeded. I also think people are more likely to be attracted to AI safety if it seems like we're making real headway on the problem.
