All of Elliot_Olds's Comments + Replies

A website to crowdsource research for Impact List

Impact List is building up a database of philanthropic donations from wealthy individuals, as a step in ranking the top ~1000 people by positive impact via donations. We're also building a database of info on the effectiveness of various charities.

It would be great if a volunteer could build a website with the following properties:

-It contains pages for each donor, and for each organization that is the target of donations.
-Pages for donors list every donation they've ever made, with the date, target organiza... (read more)
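For anyone interested in picking this up, here is a minimal sketch of the kind of data model the description above implies. The class and field names are illustrative assumptions for a volunteer to adapt, not a spec from the Impact List team:

```python
# Minimal illustrative data model for the crowdsourced research site.
# Names and fields are assumptions based on the description above, not a spec.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class Organization:
    """An organization that receives donations; may have an effectiveness estimate."""
    name: str
    cause_area: Optional[str] = None
    # Effectiveness estimate (e.g. relative to some baseline), if researched;
    # None means we'd fall back to a default value for its category.
    effectiveness: Optional[float] = None
    source_urls: list[str] = field(default_factory=list)


@dataclass
class Donation:
    """A single donation from a donor to an organization, with its citation."""
    donor_name: str
    organization: Organization
    amount_usd: float
    donated_on: date
    source_url: str  # citation so other volunteers can verify the entry


@dataclass
class Donor:
    """A wealthy individual whose page lists every known donation."""
    name: str
    donations: list[Donation] = field(default_factory=list)

    def total_donated(self) -> float:
        return sum(d.amount_usd for d in self.donations)
```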

1
NicoleJaneway
1y
I love this idea!! It would be amazing if it could sway the ~1000 individuals listed on the site; however, I suspect the true power is in increasing awareness of and engagement in effective giving. Super cool project.

Yes, me and a few others, but no one is on it full time yet. I plan to start working roughly full time on it in a month.

I recently posted the work items that I need help with in the discord: https://discord.gg/6GNre8U2ta


Thanks for the feedback!

Regarding (a), it doesn't seem clear to me that conditional on Impact List being wildly successful (which I'm interpreting as roughly the $110B over ten years case), we shouldn't expect it to account for more than 10% of overall EA outreach impact. Conditional on Impact List accounting for $110B, I don't think I'd feel surprised to learn that EA controls only $400B (or even $200B) instead of ~$1T. Can you say more about why that would be surprising? 

(I do think there's a ~5% chance that EA controls or has deployed $1T within te... (read more)

Yeah, it will be very time-intensive.

When we evaluate people who don't make the list, we can maintain pages for them on the site showing what we do know about their donations, so that a search would surface their page even if they're not on the list. Such a page would essentially explain why they're not on the list by showing the donations we know about, and which recipients we've evaluated vs. those we've assigned default effectiveness values based on their category.

I think we can possibly offload some of the research work on people who think we're wrong abo... (read more)

I think even on EA's own terms (apart from any effects from EA being fringe) there's a good reason for EAs to be OK with being more stressed and unhappy than people with other philosophies.

On the scale of human history we're likely in an emergency situation, one in which we have an opportunity to trade off the happiness of EAs for enormous gains in total well-being. It's similar to how, during a bear attack, you'd accept that you won't feel relaxed and happy while you try to mitigate the attack, but that period of stress is worth it overall. This is especially true if you believe we're in the hinge of history.

3
Locke
2y
Eh, this logic can be used to justify a lot of extreme action in the name of progress. Communists and Marxists have had a lot of thoughts about the "hinge of history" and used them to unleash terrible destruction on the rest of humanity.
3
Guy Raveh
2y
In contrast to a bear attack, you don't expect to know that the "period of stress" has ended during your lifetime. Which raises a few questions, like "Is it worth it?" and "How sure can we be that this really is a stress period?". The thought that we especially are in a position to trade our happiness for enormous gains for society, while not impossible, is dangerous in that it's very appealing regardless of whether it's true or not.

I also mention this in my response to your other comment, but in case others didn't see it there: my current best guess for how we can reasonably compare across cause areas is to use something like WALYs. For animals, my guess is we'll adjust WALYs with some measure of brain complexity.

In general the rankings will be super sensitive to assumptions. Through really high-quality research we might be able to reduce disagreements a little, but no matter what, there will still be lots of disagreement about assumptions.

I mentioned in the post that the d... (read more)

Agreed. I guess my intuition is that using WALYs for humans+animals (scaled for brain complexity), humans only, and longtermist beings will be a decent enough approximation for maybe 80% of EAs and over 90% of the general public. Not that it's the ideal metric for these people, but good enough that they'd treat the results as pretty important if they knew the calculations were done well.
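To make the comparison concrete, here is a rough sketch of how such a weighting might be computed. The specific species weights, function name, and numbers are made-up placeholders for illustration, not Impact List's actual methodology:

```python
# Rough sketch of comparing WALYs across the three moral-weight settings
# discussed above. The weights and numbers are illustrative placeholders.

# Hypothetical brain-complexity adjustment applied to animal WALYs.
BRAIN_COMPLEXITY_WEIGHT = {
    "human": 1.0,
    "chicken": 0.05,
    "fish": 0.01,
}

def weighted_walys(walys: float, species: str, moral_view: str) -> float:
    """Adjust raw WALYs under 'humans only', 'including animals', or 'longtermist'."""
    if moral_view == "humans only":
        return walys if species == "human" else 0.0
    if moral_view == "including animals":
        return walys * BRAIN_COMPLEXITY_WEIGHT.get(species, 0.0)
    if moral_view == "longtermist":
        # Assumption from the thread: in futures that go well, most long-run
        # beings are morally relevant, so no separate human/animal split.
        return walys
    raise ValueError(f"unknown moral view: {moral_view}")

# Example: 100 chicken WALYs count as 5 human-equivalent WALYs under
# 'including animals', and 0 under 'humans only'.
print(weighted_walys(100, "chicken", "including animals"))  # 5.0
print(weighted_walys(100, "chicken", "humans only"))        # 0.0
```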

2
david_reinstein
2y
Do you mean all three separately (humans, animals, potential people), or trying to combine them in the same rating? My impression was that separate ratings could work, but that if you combine them, ‘one of the three will overwhelm the others’.
2
david_reinstein
2y
My suspicion is that there will only be a very narrow and “lucky” range of moral and belief parameters where the three cause areas have cost-effectiveness within the same order of magnitude. But I should dig into this.

Awesome! (Ideopunk and I are chatting on discord and likely having a call tomorrow.)

Yeah, I've lately been considering just three options for moral weights: 'humans only', 'including animals', and 'longtermist', with the first two being implicitly neartermist. 

It seems like we don't need 'longtermist with humans only' and 'longtermist including animals', because if things go well, the bulk of the beings that exist in the long run will be morally relevant (if they weren't, we would have replaced them with more morally relevant beings).

3
david_reinstein
2y
But even within 'humans only' (say, weighted by 'probability of existing' ... or only those sure to exist), there are still difficult moral parameters, such as:
* suffering vs happiness
* death of a being with a strong identity vs suffering
* death of babies vs children vs adults
(Similar questions 'within animals' too.)

Hi Ben. I just read the transcript of your 80,000 Hours interview and am curious how you'd respond to the following:

Analogy to agriculture, industry

You say that it would be hard for a single person (or group?) acting far before the agricultural revolution or industrial revolution to impact how those things turned out, so we should be skeptical that we can have much effect now on how an AI revolution turns out.

Do you agree that the goodness of this analogy is roughly proportional to how slow our AI takeoff is? For instance if the first AGI ever created... (read more)

3
bgarfinkel
4y
I think there are a couple different bits to my thinking here, which I sort of smush together in the interview.

The first bit is that, when developing an individual AI system, its goals and capabilities/intelligence tend to take shape together. This is helpful, since it increases the odds that we'll notice issues with the system's emerging goals before they result in truly destructive behavior. Even if someone didn't expect a purely dust-minimizing house-cleaning robot to be a bad idea, for example, they'll quickly realize their mistake as they train the system. The mistake will be clear well before the point when the simulated robot learns how to take over the world; it will probably be clear even before the point when the robot learns how to operate door knobs.

The second bit is that there are many contexts in which pretty much any possible hand-coded reward function will either quickly reveal itself as inappropriate or be obviously inappropriate before the training process even begins. This means that sane people won't proceed in developing and deploying things like house-cleaning robots or city planners until they've worked out alignment techniques to some degree; they'll need to wait until we've moved beyond "hand-coding" preferences, toward processes that more heavily involve ML systems learning what behaviors users or developers prefer.

It's still conceivable that, even given these considerations, people will still accidentally develop AI systems that commit omnicide (or cause similarly grave harms). But the likelihood at least goes down. First of all, it needs to be the case that (a): training processes that use apparently promising alignment techniques will still converge on omnicidal systems. Second, it needs to be the case that (b): people won't notice that these training processes have serious issues until they've actually made omnicidal AI systems. I'm skeptical of both (a) and (b). My intuition, regarding (a), is that some method that involves lear
3
bgarfinkel
4y
I would say that, in a scenario with relatively "smooth" progress, there's not really a clean distinction between "narrow" AI systems and "general" AI systems; the line between "we have AGI" and "we don't have AGI" is either a bit blurry or a bit arbitrarily drawn. Even if the management/control of large collections of AI systems is eventually automated, I would also expect this process of automation to unfold over time rather than happening in a single go. In general, the smoother things are, the harder it is to tell a story where one group gets out way ahead of others. Although I'm unsure just how "unsmooth" things need to be for this outcome to be plausible.

I think that if there were multiple AGI or AGI-ish systems in the world, and most of them were badly misaligned (e.g. willing to cause human extinction for instrumental reasons), this would present an existential risk. I wouldn't count on them balancing each other out, in the same way that endangered gorilla populations shouldn't count on warring communities to balance each other out.

I think the main benefits of smoothness have to do with risk awareness (e.g. by observing less catastrophic mishaps) and, especially, with opportunities for trial-and-error learning. At least when the concern is misalignment risk, I don't think of the decentralization of power as a really major benefit in its own right: the systems in this decentralized world still mostly need to be safe.

I think it's plausible that especially general systems would be especially useful for managing the development, deployment, and interaction of other AI systems. I'm not totally sure this is the case, though. For example, at least in principle, I can imagine an AI system that is good at managing the training of other AI systems -- e.g. deciding how much compute to devote to different ongoing training processes -- but otherwise can't do much else.
6
bgarfinkel
4y
Hi Elliot,

Thanks for all the questions and comments! I'll answer this one in stages.

On your first question: I agree with this. To take the fairly extreme case of the Neolithic Revolution, I think that there are at least a few reasons why groups at the time would have had trouble steering the future.

One key reason is that the world was highly "anarchic," in the international relations sense of the term: there were many different political communities, with divergent interests and a limited ability to either coerce one another or form credible commitments. One result of anarchy is that, if the adoption of some technology or cultural/institutional practice would give some group an edge, then it's almost bound to be adopted by some group at some point: other groups will need to either lose influence or adopt the technology/innovation to avoid subjugation. This explains why the emergence and gradual spread of agricultural civilization was close to inevitable, even though (there's some evidence) people often preferred the hunter-gatherer way of life. There was an element of technological or economic determinism that put the course of history outside of any individual group's control (at least to a significant degree).

Another issue, in the context of the Neolithic Revolution, is that norms, institutions, etc., tend to shift over time, even if there aren't very strong selection pressures. This was even more true before the advent of writing. So we do have a few examples of religious or philosophical traditions that have stuck around, at least in mutated forms, for a couple thousand years; but this is unlikely in any individual case, and would have been even more unlikely 10,000 years ago. At least so far, we also don't have examples of more formal political institutions (e.g. constitutions) that have largely stuck around for more than a few thousand years either.

There are a couple reasons why AI could be different. The first reason is that -- under certain scenari