anonymous_ea

Comments

A Primer on the Symmetry Theory of Valence

Greg, I want to bring to your attention two comments that have been posted since your comment above:

1. Abby said the following to Mike:

Your responses here are much more satisfying and comprehensible than your previous statements, it's a bit of a shame we can't reset the conversation.

2. Another anonymous commenter (thanks to Linch for posting) highlights that Abby's line of questioning about EEG ultimately resulted in a response that she found satisfactory but did not have the expertise to evaluate further:

If they had given the response that they gave in one of the final comments in the discussion right at the beginning (assuming Abby would have responded similarly), the response to their exchange might have been very different, i.e. I think people would have concluded that they gave a sensible response and were talking about things that Abby didn't have expertise to comment on:

_______


Abby Hoskin: If your answer relies on something about how modularism/functionalism is bad: why is source localization critical for your main neuroimaging analysis of interest? If source localization is not necessary: why can't you use EEG to measure synchrony of neural oscillations?

Mike Johnson: The harmonic analysis we’re most interested in depends on accurately modeling the active harmonics (eigenmodes) of the brain. EEG doesn’t directly model eigenmodes; to infer eigenmodes we’d need fairly accurate source localization. It could be there are alternative ways to test STV without modeling brain eigenmodes, and that EEG could give us. I hope that’s the case, and I hope we find it, since EEG is certainly a lot easier to work with than fMRI.

Abby Hoskin: Ok, I appreciate this concrete response. I don't know enough about calculating eigenmodes with EEG data to predict how tractable it is.
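(As an aside for readers unfamiliar with the terminology: "modeling the brain's eigenmodes" is usually understood along the lines of connectome harmonics, i.e. taking eigenvectors of the Laplacian of a structural connectivity graph. Below is a minimal sketch under that assumption, with placeholder data; I'm not claiming this is QRI's exact pipeline.)

```python
import numpy as np

# Minimal sketch: brain "eigenmodes" as eigenvectors of the graph Laplacian of a
# structural connectivity matrix. All data here are random placeholders.
rng = np.random.default_rng(0)
n_regions = 64                                 # hypothetical parcellation size
weights = rng.random((n_regions, n_regions))
connectivity = (weights + weights.T) / 2       # symmetric weighted adjacency
np.fill_diagonal(connectivity, 0)

degree = np.diag(connectivity.sum(axis=1))
laplacian = degree - connectivity              # graph Laplacian L = D - A

# Eigenvectors are the spatial harmonics (eigenmodes), ordered from smooth,
# global patterns (small eigenvalues) to fine-grained ones (large eigenvalues).
eigenvalues, eigenmodes = np.linalg.eigh(laplacian)

# Estimating which harmonics are "active" means projecting a region-level
# activity pattern onto the modes -- this is where accurate source localization
# matters, since EEG sensors don't directly give per-region activity.
activity = rng.random(n_regions)               # placeholder activity vector
mode_amplitudes = eigenmodes.T @ activity
```

The last step is, as I understand it, where the source-localization requirement Mike mentions comes in.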

AI Timelines: Where the Arguments, and the "Experts," Stand

I appreciate you posting this picture, which I had not seen before. I just want to add that it was compiled in 2014, and some of the people in it have likely shifted their views since then.

Towards a Weaker Longtermism

Phil Trammell's point in Which World Gets Saved is also relevant:

It seems to me that there is another important consideration which complicates the case for x-risk reduction efforts, which people currently neglect. The consideration is that, even if we think the value of the future is positive and large, the value of the future conditional on the fact that we marginally averted a given x-risk may not be.

...

Once we start thinking along these lines, we open various cans of worms. If our x-risk reduction effort starts far "upstream", e.g. with an effort to make people more cooperative and peace-loving in general, to what extent should we take the success of the intermediate steps (which must succeed for the x-risk reduction effort to succeed) as evidence that the saved world would go on to a great future? Should we incorporate the fact of our own choice to pursue x-risk reduction itself into our estimate of the expected value of the future, as recommended by evidential decision theory, or should we exclude it, as recommended by causal? How should we generate all these conditional expected values, anyway?

Some of these questions may be worth the time to answer carefully, and some may not. My goal here is just to raise the broad conditional-value consideration which, though obvious once stated, so far seems to have received too little attention. (For reference: on discussing this consideration with Will MacAskill and Toby Ord, both said that they had not thought of it, and thought that it was a good point.) In short, "The utilitarian imperative 'Maximize expected aggregate utility!'" might not really, as Bostrom (2002) puts it, "be simplified to the maxim 'Minimize existential risk'".
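A compact way to state the conditional-value point (my notation, not Trammell's): let V be the value of the future and A_r the event that we marginally avert a given x-risk r. Then the claim is that

```latex
% V: value of the future; A_r: the event that we marginally avert x-risk r.
% The first quantity being large and positive does not guarantee the second is,
% and the second is what matters when evaluating an intervention against r.
\[
  \mathbb{E}[V] \gg 0
  \quad\not\Rightarrow\quad
  \mathbb{E}[V \mid A_r] \gg 0 .
\]
```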

What EA projects could grow to become megaprojects, eventually spending $100m per year?

I like this idea in general, but would it ever really be able to employ $100m+ annually? For comparison, GiveWell spends about $6 million per year, and CSET was set up with $55 million over 5 years ($11 million/year).

Linch's Shortform

Thanks. Going back to your original impact estimate, the bigger difficulty I have in swallowing it and the claims related to it (e.g. "the ultimate weight of small decisions you make is measured not in dollars or relative status, but in stars") is not the probabilities of AI or space expansion, but what seems to me to be a pretty big jump: from the potential stakes of a cause area (the value possible in a future without any existential catastrophes) to the impact that researchers working on that cause area might have.

Joe Carlsmith has a small paragraph articulating some of my worries along these lines elsewhere on the forum:

Of course, the possibly immense value at stake in the long-term future is not, in itself, enough to get various practically-relevant forms of longtermism off the ground. Such a future also needs to be adequately large in expectation (e.g., once one accounts for ongoing risk of events like extinction), and it needs to be possible for us to have a foreseeably positive and sufficiently long-lasting influence on it. There are lots of open questions about this, which I won’t attempt to address here.

Linch's Shortform

So is the basic idea that transformative AI not ending in an existential catastrophe is the major bottleneck on a vastly positive future for humanity? 

Linch's Shortform

Conditioning upon us buying the importance of work at MIRI (and if you don't buy it, replace what I said with CEA or Open Phil or CHAI or FHI or your favorite organization of choice), I think the work of someone sweeping the floors of MIRI is just phenomenally, astronomically important, in ways that are hard to comprehend intuitively.

(Some point estimates with made-up numbers: Suppose EA work in the next few decades can reduce existential risk from AI by 1%. Assume that MIRI is 1% of the solution, and that there are fewer than 100 employees of MIRI. Suppose variance in how well someone keeps MIRI clean affects research output by 10^-4 as much as an average researcher.* Then we're already at 10^-2 x 10^-2 x 10^-2 x 10^-4 = 10^-10 the impact of the far future. Meanwhile, there are 5 x 10^22 stars in the visible universe.)

Can you spell out the impact estimation you are doing in more detail? It seems to me that you first estimate how much a janitor at an org might impact the research productivity of that org, and then there's some multiplication related to the (entire?) value of the far future. Are you assuming that AI will essentially solve all issues and lead to positive space colonization, or something along those lines? 
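To spell out my own reading of the arithmetic, here is a sketch using the made-up numbers from the quoted estimate:

```python
# My reading of the quoted back-of-the-envelope estimate, with its made-up numbers.
ea_xrisk_reduction    = 1e-2   # EA work reduces existential risk from AI by 1%
miri_share            = 1e-2   # MIRI is 1% of that solution
per_employee_share    = 1e-2   # fewer than 100 employees, so ~1% of MIRI each
janitor_vs_researcher = 1e-4   # cleanliness affects output at 10^-4 of a researcher

fraction_of_far_future = (ea_xrisk_reduction * miri_share *
                          per_employee_share * janitor_vs_researcher)
stars_in_visible_universe = 5e22

print(fraction_of_far_future)                               # ~1e-10
print(fraction_of_far_future * stars_in_visible_universe)   # ~5e12 "stars' worth"
```

The chain up to fraction_of_far_future I can follow; it's the final multiplication by (something like) the entire value of an interstellar future that seems to carry the strongest assumptions, which is what my question above is getting at.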

Looking for more 'PlayPumps' like examples

I'm not sure Make-A-Wish is a good example, given the existence of this study. Quoting Dylan Matthews from Future Perfect on it (emphasis added):

The average wish costs $10,130 to fulfill. Given that Malaria Consortium can save the life of a child under 5 for roughly $2,000 (getting a precise figure is, of course, tough, but it’s around that), you could probably save four or five children’s lives in sub-Saharan Africa for the cost of providing a nice experience for a single child in the US. For the cost of the heartwarming Batkid stunt — $105,000 — you could save the lives of some 50-odd kids.

So that’s why I’ve been hard on Make-A-Wish in the past, and why effective altruists like Peter Singer have criticized the group as well.

But now I’m reconsidering. A new study in the journal Pediatric Research, comparing 496 patients at the Nationwide Children’s Hospital in Columbus, Ohio, who got their wishes granted to 496 “control” patients with similar ages, gender, and diseases, found that the patients who got their wishes granted went to the emergency room less, and were less likely to be readmitted to the hospital (outside of planned readmissions).

In a number of cases, this reduction in hospital admissions and emergency room visits resulted in a cost savings in excess of $10,130, the cost of the average wish. In other words, Make-A-Wish helped, and helped in a cost-effective way.

Draft report on existential risk from power-seeking AI

your other comment

This links to A Sketch of Good Communication, not whichever comment you were intending to link :)

Concerns with ACE's Recent Behavior

You know, this makes me think I know just how academia was taken over by cancel culture. 

Saying that academia has been taken over by cancel culture is a very strong statement. I definitely agree that there are some very concerning elements (one of the ones I find most concerning is the University of California diversity statements), but academia as a whole is quite big, and you may be jumping the gun quite a bit.
