MaxRa

Comments

The Importance-Avoidance Effect

Thanks, I could also relate to the general pattern. For example, during my PhD I tried really hard to find and work on the things that seemed most promising and to give it my all, because I wanted to do it as well as I could, but this was pretty stressful and I think it noticeably decreased the fun and my ability to let simple curiosity lead my research.

Share the load of the project with others. Get some trusted individuals to work with you.

This is a big one for me. Working with others on projects is usually much more fun and motivating to me.

Great Power Conflict

Thucydides' Trap by Graham Allison features a scenario of escalating conflict between the US and China in the South China Sea that I found very chilling. IIRC the scenario is just like you mentioned: each side making moves that are legitimate from its own perspective, protecting dearly held interests, drawing lines in the sand, and the outcome is escalation to war. The underlying theme is the conflict dynamics that arise when a reigning power is challenged by a rising power. You've probably seen the book mentioned; I found it very worth reading.

And you didn't mention cyber warfare, which is what pops into my mind immediately. I haven't looked into this, but I imagine the potential damage is very high, while proper international peace-supporting and de-escalating norms lag much further behind compared to physical conflicts.

Disentangling "Improving Institutional Decision-Making"

Really nice and useful exploration, and I really liked your drawings.

(a) Maybe the average/median institution’s goals are already aligned with public good

FWIW, I intuitively would've drawn the institution blob in your sketch higher, i.e. I'd have put fewer than (eyeballing) 30% of institutions in the negatively aligned space (maybe 10%?). In moments like this, including a quick poll in the forum post to get a picture of what others think would be really useful.

However, I don’t see a clear argument for how an abstract intervention that improves decision-making would also incidentally improve the value-alignment of an institution.

Other spontaneous ideas, besides choosing more representative candidates:

  • increased coherence of the institution could lead to an overall stronger link between its mandate and its actions
  • increased transparency and coherence could reduce corruption and rent-seeking

I know of efficient and technologically progressive institutions that seem extremely harmful, and of benign, well-meaning institutions that are slow to change and inefficient

Given what I said beforehand, I’d be interested in learning more about examples of harmful institutions that have generally high capacity.

How to get more academics enthusiastic about doing AI Safety research?

Perfect, so he appreciated it despite finding the accompanying letter pretty generic, and he thought he received it because someone (the letter listed Max Tegmark, Yoshua Bengio, and Tim O'Reilly, though w/o signatures) believed he'd find it interesting and that the book is important for the field. Pretty much what one could hope for.

And thanks for the work you put into getting them to take this more seriously; it would be really great if you could find more neuroscience people to contribute to AI safety.

How to get more academics enthusiastic about doing AI Safety research?

Interesting anyway, thanks! Did you by any chance notice whether he reacted positively or negatively to being sent the book? I was a bit worried it might be considered spammy. On the other hand, I remember reading that Andrew Gelman regularly gets sent copies of books he might be interested in so that he'll write a blurb or a review, so maybe it's just a thing that happens to scientists and one needn't be worried.

How to get more academics enthusiastic about doing AI Safety research?

Maybe one could send a free copy of Brian Christian's "The Alignment Problem" or Russell's "Human Compatible" to the office addresses of all AI researchers who might find it interesting?

How to get more academics enthusiastic about doing AI Safety research?

At least the novel the movie is based on seems to have had significant influence:

Kubrick had researched the subject for years, consulted experts, and worked closely with a former R.A.F. pilot, Peter George, on the screenplay of the film. George’s novel about the risk of accidental nuclear war, “Red Alert,” was the source for most of “Strangelove” ’s plot. Unbeknownst to both Kubrick and George, a top official at the Department of Defense had already sent a copy of “Red Alert” to every member of the Pentagon’s Scientific Advisory Committee for Ballistic Missiles. At the Pentagon, the book was taken seriously as a cautionary tale about what might go wrong.

https://www.newyorker.com/news/news-desk/almost-everything-in-dr-strangelove-was-true

How to get more academics enthusiastic about doing AI Safety research?

Another idea is replicating something like Hilbert's speech in 1900, in which he presented a list of 23 open problems in mathematics, which seems to have had considerable agenda-setting impact on the mathematical community. https://en.wikipedia.org/wiki/Hilbert's_problems

Doing this well for the field of AI might get some attention from AI scientists and funders.

How to get more academics enthusiastic about doing AI Safety research?

I wonder if a movie about realistic AI x-risk scenarios might have promise. I have it somewhere in the back of my mind that Dr. Strangelove may have inspired some people to work on the threat of nuclear war (the Wikipedia article is surprisingly sparse on the topic of the movie's impact, though).

When pooling forecasts, use the geometric mean of odds

Cool, that’s really useful to know. Can you also check how extremizing the odds with different parameters performs?
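
To illustrate what I mean by extremizing (this is just my own quick sketch, not something from the post): pool the forecasts via the geometric mean of odds, then raise the pooled odds to a power d, where d = 1 recovers the plain geometric mean and d > 1 pushes the aggregate away from 0.5. The function name and the example numbers below are made up for illustration, and probabilities are assumed to be strictly between 0 and 1.

```python
import numpy as np

def pool_geo_mean_odds(probs, extremize=1.0):
    """Pool probability forecasts via the geometric mean of odds,
    optionally extremizing by raising the pooled odds to a power."""
    probs = np.asarray(probs, dtype=float)
    odds = probs / (1 - probs)                   # probabilities -> odds
    pooled_odds = np.exp(np.mean(np.log(odds)))  # geometric mean of odds
    pooled_odds **= extremize                    # extremizing parameter d (d = 1: no extremizing)
    return pooled_odds / (1 + pooled_odds)       # back to a probability

# hypothetical forecasts from three forecasters
forecasts = [0.6, 0.7, 0.9]
print(pool_geo_mean_odds(forecasts))                 # plain geometric mean of odds
print(pool_geo_mean_odds(forecasts, extremize=2.5))  # extremized: pushed further from 0.5
```

One could then sweep the extremizing parameter over a grid and compare Brier or log scores on the same dataset.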
