All of MaxRa's Comments + Replies

The Importance-Avoidance Effect

Thanks, I could also relate to the general pattern. For example, during my PhD I really tried hard to find and work on the things that seemed most promising and to give it my all, because I wanted to do the work as well as I could. But this was pretty stressful, and I think it noticeably decreased the fun and my ability to let simple curiosity lead my research.

Share the load of the project with others. Get some trusted individuals to work with you.

This is a big one for me. Working with others on projects is usually much more fun and motivating to me.

2 · davidhartsough · 3d: Thank you for sharing (and reading)! Were you able to "share the load" (so to say) in some capacity with your PhD and research? In what ways do you effectively utilize this insight you've gained into your own social motivation? Do you tend to build teams and recruit people to help you with your projects in specific ways? How do you keep it fun and enjoyable for yourself and your friends?
Great Power Conflict

Thucydides's Trap by Graham Allison features a scenario of escalating conflict between the US and China in the South China Sea that I found very chilling. IIRC the scenario is just as you mentioned: each side making moves that are legitimate from its own perspective, protecting dearly held interests, drawing lines in the sand, and the outcome is escalation to war. The underlying theme is the conflict dynamics that arise when a reigning power is challenged by a rising power. You've probably seen the book mentioned; I found it very worth reading.

And you didn't mention cy... (read more)

1 · Zach Stein-Perlman · 3d: Thanks for your comment. US-China tension currently seems most likely to me to cause great power conflict, and cyber capabilities were mostly what I had in mind for "offense outpaces defense" scenarios. I think this post is more valuable if it's more general, though, and I don't know enough about US-China, cyber capabilities, or warfare to say much more specifically. I think understanding possible futures of cyber capabilities would be quite valuable. I would not be surprised to look back in 2030 or 2040 and say: But again, such work is not my comparative advantage (and, as a disclaimer for the above paragraph, I don't know what I'm talking about).
Disentangling "Improving Institutional Decision-Making"

Really nice and useful exploration, and I really liked your drawings.

(a) Maybe the average/median institution’s goals are already aligned with public good

FWIW, I intuitively would’ve drawn the institution blob in your sketch higher, i.e. I’d have put fewer than (eyeballing) 30% of institutions in the negatively aligned space (maybe 10%?). In moments like this, including a quick poll in the forum post to get a picture of what others think would be really useful.

However, I don’t see a clear argument for how an abstract intervention that improves decision-mak

... (read more)
2 · Lizka · 5d: Thank you for this comment! I won't redraw/re-upload this sketch, but I think you are probably right. That's a really good idea, thank you! I'll play around with that. Thank you for the suggestions! I think you raise good points, and I'll try to come back to this.
How to get more academics enthusiastic about doing AI Safety research?

Perfect, so he appreciated it despite finding the accompanying letter pretty generic, and he thought he received it because someone (the letter listed Max Tegmark, Yoshua Bengio and Tim O’Reilly, though w/o signatures) believed he’d find it interesting and that the book is important for the field. Pretty much what one could hope for.

And thanks for your work trying to get them to take this more seriously; it would be really great if you could find more neuroscience people to contribute to AI safety.

How to get more academics enthusiastic about doing AI Safety research?

Interesting anyway, thanks! Did you by any chance notice whether he reacted positively or negatively to being sent the book? I was a bit worried it might be considered spammy. On the other hand, I remember reading that Andrew Gelman regularly gets sent copies of books he might be interested in so that he'll write a blurb or review, so maybe it's just a thing that happens to scientists and one needn't be worried.

3 · steve2152 · 14d: See here [https://discourse.numenta.org/t/numenta-research-meeting-august-10-2020/7795], the first post is a video of a research meeting where he talks dismissively about Stuart Russell's argument, and then the ensuing forum discussion features a lot of posts by me trying to sell everyone on AI risk :-P (Other context here [https://www.lesswrong.com/posts/ixZLTmFfnKRbaStA5/book-review-a-thousand-brains-by-jeff-hawkins].)
How to get more academics enthusiastic about doing AI Safety research?

Maybe one could send a free copy of Brian Christian's "The Alignment Problem" or Russell's "Human Compatible" to the office addresses of all AI researchers who might find it potentially interesting?

3 · steve2152 · 14d: I saw Jeff Hawkins mention (in some online video) that someone had sent Human Compatible to him unsolicited but he didn't say who. And then (separately) a bit later the mystery was resolved: I saw some EA-affiliated person or institution mention that they had sent Human Compatible to a bunch of AI researchers. But I can't remember where I saw that, or who it was. :-(
How to get more academics enthusiastic about doing AI Safety research?

At least the novel the movie is based on seems to have had significant influence:

Kubrick had researched the subject for years, consulted experts, and worked closely with a former R.A.F. pilot, Peter George, on the screenplay of the film. George’s novel about the risk of accidental nuclear war, “Red Alert,” was the source for most of “Strangelove” ’s plot. Unbeknownst to both Kubrick and George, a top official at the Department of Defense had already sent a copy of “Red Alert” to every member of the Pentagon’s Scientific Advisory Committee for Ballistic M

... (read more)
1 · MaxRa · 15d: Maybe one could send a free copy of Brian Christian's "The Alignment Problem" or Russell's "Human Compatible" to the office addresses of all AI researchers who might find it potentially interesting?
How to get more academics enthusiastic about doing AI Safety research?

Another idea is replicating something like Hilbert's speech in 1900, in which he listed 23 open maths problems and which seems to have had some impact on agenda-setting for the whole scientific community. https://en.wikipedia.org/wiki/Hilbert's_problems

Doing this well for the field of AI might get some attention from AI scientists and funders.

How to get more academics enthusiastic about doing AI Safety research?

I wonder if a movie about realistic AI x-risk scenarios might have promise. I have somewhere in the back of my mind that Dr. Strangelove possibly inspired some people to work on the threat of nuclear war (the Wikipedia article is surprisingly sparse on the topic of the movie’s impact, though).

5 · steve2152 · 14d:
  • There was a 2020 documentary We Need To Talk About AI [https://www.imdb.com/title/tt7658158/]. All-star lineup of interviewees! Stuart Russell, Roman Yampolskiy, Max Tegmark, Sam Harris, Jurgen Schmidhuber, …. I've seen it, but it appears to be pretty obscure, AFAICT.
  • I happened to watch the 2020 Melissa McCarthy film Superintelligence [https://www.rottentomatoes.com/m/superintelligence] yesterday. It's umm, not what you're looking for. The superintelligent AI's story arc was a mix of 20% arguably-plausible things that experts say about superintelligent AGI, and 80% deliberately absurd things for comedy. I doubt it made anyone in the audience think very hard about anything in particular. (I did like it as a romantic comedy :-P )
  • There's some potential tension between "things that make for a good movie" and "realistic", I think.
2 · MaxRa · 15d: At least the novel the movie is based on seems to have had significant influence: https://www.newyorker.com/news/news-desk/almost-everything-in-dr-strangelove-was-true
When pooling forecasts, use the geometric mean of odds

Cool, that’s really useful to know. Can you also check how extremizing the odds with different parameters performs?

Method                              Brier    Log
metaculus_prediction                0.110    0.360
geo_mean_weighted                   0.115    0.369
extr_geo_mean_odds_2.5_weighted     0.116    0.387
geo_mean_odds_weighted              0.117    0.371
median_weighted                     0.121    0.381
mean_weighted                       0.122    0.393
geo_mean_unweighted                 0.128    0.409
geo_mean_odds_unweighted            0.130    0.410
extr_geo_mean_odds_2.5_unweighted   0.131    0.431
median_unweighted                   0.134    0.417
mean_unweighted                     0.138    0.439
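In case it's useful for readers, here is a minimal sketch of the pooling-and-extremizing method being compared above. This is not the script behind the table, and the function and parameter names are just illustrative; the extremization exponent corresponds to the 2.5 in the "extr_geo_mean_odds_2.5" rows.

```python
import numpy as np

def pool_geo_mean_odds(probs, weights=None, extremize=1.0):
    """Pool binary forecasts via the (weighted) geometric mean of odds.

    probs:     individual probabilities, each strictly between 0 and 1
    weights:   optional non-negative weights (defaults to equal weights)
    extremize: exponent applied to the pooled odds; values > 1 push the
               aggregate away from 0.5 (e.g. 2.5 as in the table above)
    """
    probs = np.asarray(probs, dtype=float)
    weights = np.ones_like(probs) if weights is None else np.asarray(weights, dtype=float)
    weights = weights / weights.sum()

    odds = probs / (1.0 - probs)
    pooled_odds = np.exp(np.sum(weights * np.log(odds)))  # weighted geometric mean of odds
    pooled_odds **= extremize                              # extremization step
    return pooled_odds / (1.0 + pooled_odds)

# Example: three forecasters, extremization parameter 2.5
print(pool_geo_mean_odds([0.6, 0.7, 0.8], extremize=2.5))  # ~0.90
```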
EA Forum feature suggestion thread

Yeah, just a feature which displays the comments from LessWrong crossposts would save me some clicking.

The Governance Problem and the "Pretty Good" X-Risk

If we create aligned superintelligence, how we use it will involve political institutions and processes. Superintelligence will probably be controlled by a state or a group of states. This is more likely the more AI becomes popularly appreciated and the more legibly powerful AI is created before the intelligence explosion.

 

It seems really useful to me to better understand how likely it is that states will end up calling the shots. I wonder if there are potential options for big tech to keep sovereignty over AI. I'd suspect a company would prefer staying in cont... (read more)

4 · Zach Stein-Perlman · 21d: Yes, absolutely. I think this largely depends on the extent to which political elites appreciate AI's importance; I expect that political elites will appreciate AI and take action in a few years, years before an intelligence explosion. I want to read/think/talk about this. While big tech companies will probably come up with more strategies, I'm skeptical about their ability to not be nationalized or closely supervised by states. In response to your specific suggestions:
  • I think states are broadly able to seize property in their territory. To secure autonomy, I think a corporation would have to get the government to legally bind itself. I can't imagine the US or China doing this. Perhaps a US corporation could make a deal with another government and move its relevant hardware to that state before the US appreciates AI or before the US has time to respond? That would be quite radical. Given the major national security implications of AI, even such a move might not guarantee autonomy. But I think corporations would probably have to move somehow to maintain autonomy if there was political will and a public mandate for nationalization.
  • I don't understand. But if the US and China appreciate AI's national security implications, they won't be distracted.
  • I don't understand "assembling . . . ability," but corporations intentionally making AI feel nonthreatening is interesting. I hadn't thought about this. Hmm. This might be a factor. But there's only so much that making systems feel nonthreatening can do. If political elites appreciate AI, then it won't matter whether currently-deployed AI systems feel nonthreatening: there will be oversight. It's also very possible that the US will have a Sputnik moment for AI and then there's strong pressure for a national AI project independent of the current state of private AI in the US.
The Governance Problem and the "Pretty Good" X-Risk

Really interesting post, thanks! Some random reactions.

"Pretty good" governance failure is possible. We could end up with an outcome that many or most influential people want, but that wiser versions of ourselves would strongly disapprove of. This scenario is plausibly the default outcome of aligned superintelligence: great uses of power are a tiny subset of the possible uses of power, the people/institutions that currently want great outcomes constitute a tiny share of total influence, and neither will those who want non-great outcomes be persuaded nor wi

... (read more)
3 · Zach Stein-Perlman · 21d: Thanks for your comments! I certainly agree that Earthly utopia won't happen; I just wrote that to illustrate how prosaic values would be disastrous in some circumstances. But here are some similar things that I think are very possible:
  • Scenarios where some choices that are excellent by prosaic standards unintentionally make great futures unlikely or impossible.
  • Scenarios where the choices that would tend to promote great futures are very weird by prosaic standards and fail to achieve the level of consensus necessary for adoption.
In retrospect, I should have thought and written more about failure scenarios instead of just risk factors for those scenarios. I expect to revise this post, and failure scenarios would be an important addition. For now, here's my baseline intuition for a "pretty good" future:
  • After an intelligence explosion, a state controls aligned superintelligence. Political elites
    • are not familiar with ideas like long reflection and indirect normativity,
    • do not understand why such ideas are important,
    • are constrained from pursuing such goals (or perhaps because opposed factions can veto such ideas), or
    • do not get to decide what to do with superintelligence because the state's decisionmaking system is bound by prior decisions about how powerful AI should be used (either directly, by forbidding great uses of AI, or indirectly, by giving decisionmaking power to groups unlikely to choose a great future)
  • So the state initially uses AI in prosaic ways and, roughly speaking, thinks of AI in prosaic ways. I don't have a great model of what happens to our cosmic endowment in this scenario, but since we're at the point where unwise individuals/institutions are empowered, the following all feel possible:
    • We optimize for something prosaic
    • We lock in a choice that disallows intentionally optimizing for anything
    • We enter a stable s
Gifted $1 million. What to do? (Not hypothetical)
Answer by MaxRa · Aug 30, 2021 · 29

Wow, that's really cool! One idea is to look at the reports from winners of donor lotteries. They are also more or less ordinary people who got to decide where to donate a lot of money and shared their process and learnings: https://forum.effectivealtruism.org/tag/donor-lotteries  

An Informal Review of Space Exploration

Thanks, I found this very interesting and well written and am glad you took a deeper look into it.

Growth and the case against randomista development

Just saw this on Marginal Revolution and wondered what people here make of it, e.g. whether the recent slowdown or instability in major countries like Nigeria, Ethiopia, and South Africa is a noticeable update for them against the promise of economic growth work in Africa.

One of the saddest stories of the year has gone largely unreported: the slowdown of political and economic progress in sub-Saharan Africa. There is no longer a clear path to be seen, or a simple story to be told, about how the world’s poorest continent might claw its way up to middle-income status

... (read more)
Report on Running a Forecasting Tournament at an EA Retreat

@Simon_Grimm and I also ended up organizing a forecasting tournament. It went really well and people seemed to like it a lot, so thanks for the inspiration and the instructions!

One thing we did differently

  • we hung posters for each question in the main hallway, because we thought it would make the forecasts more visible/present and it would be interesting to see what others write down on the posters as their forecasts - I would likely do this again, even though hammering all the numbers into an Excel sheet was some effort

Questions we used

1. Will the proba... (read more)

Analyzing view metrics on the EA Forum

Cool! Just in case you have the data quickly at hand, I'd have been interested in more than just the top three articles; maybe you could add the top ten? Also, maybe minutes would be a more intuitive unit, compared to something like 2,600 seconds.

1 · dmarti · 24d: Hi MaxRa, thanks for your question and input. We don't want to encourage too much of a comparison between posts to avoid giving the impression that some are "better" than others. Therefore, we'd prefer to let the post stand as it is.
What EA projects could grow to become megaprojects, eventually spending $100m per year?

I think you’re right. Even if the experts were paid really well for their participation, say $10k per year (maybe as a fixed sum, or in expectation given some incentive scheme), and you had on the order of 50 experts for each of 20(?) fields, you would end up with $10 million per year. But it probably wouldn’t even require that, as long as it’s prestigious and is set up well with enough buy-in. Paying for their judgement would make the latter easier, I suppose.
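For concreteness, the back-of-the-envelope arithmetic behind that figure (using the purely illustrative numbers above):

$$20 \text{ fields} \times 50 \text{ experts per field} \times \$10{,}000 \text{ per expert-year} = \$10{,}000{,}000 \text{ per year}$$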

What EA projects could grow to become megaprojects, eventually spending $100m per year?

Interesting; the Atlantic article didn't give this impression. I'd also be pretty surprised if, as part of the leadership team of a journalistic organization, you had to become essentially the cliché of a moderate politician. In my mind, you're mostly responsible for setting and living the norms you want the organization to follow, e.g.

  • epistemic norms of charitability, clarity, probabilistic forecasts, scout mindset
  • values like exploring neglected and important topics with a focus on having an altruistic impact? 

And then maybe being involved in hiring the people who have shown promise and fit?

1 · HStencil · 1mo: Yeah, I mean, to be clear, my impression was that Yglesias wished this weren't required and believed that it shouldn't be required (certainly, in the abstract, it doesn't have to be), but nonetheless, it seemed like he conceded that from a practical standpoint, when this is what all your staff expect, it is required. I guess maybe then the question is just whether he could "avoid the pitfalls from his time with Vox," and I suppose my feeling is that one should expect that to be difficult and that someone in his position wouldn't want to abandon their quiet, stable, cushy Substack gig for a risky endeavor that required them to bet on their ability to do it successfully. I think too many of the relevant causes are things that you can't count on being able to control as the head of an organization, particularly at scale, over long periods of time, and I'd been inferring that this was probably one of the lessons Yglesias drew from his time at Vox.
What EA projects could grow to become megaprojects, eventually spending $100m per year?

Thanks, I didn't see what he said about this. I just read an Atlantic article about it, and I don't see why it shouldn't be easy to avoid the pitfalls from his time with Vox, or why he wouldn't care a lot about starting a new project where he could offer a better way to do journalism.

Yglesias felt that he could no longer speak his mind without riling his colleagues. His managers wanted him to maintain a “restrained, institutional, statesmanlike voice,” he told me in a phone interview, in part because he was a co-founder of Vox. But as a relative moderate at

... (read more)
2 · HStencil · 1mo: Yeah, I guess the impression I had (from comments he made elsewhere — on a podcast, I think) was that he actually agreed with his managers that at a certain point, once a publication has scaled enough, people who represent its “essence” to the public (like its founders) do need to adopt a more neutral, nonpartisan (in the general sense) voice that brings people together without stirring up controversy, and that it was because he agreed with them about this that he decided to step down.
What EA projects could grow to become megaprojects, eventually spending $100m per year?

Urgent doesn't feel like the right word; the question to me is whether his contributions could be scaled up well with more money. I think his Substack deal is on the order of $300k per year, but maybe he could found and lead a new news organization, hire great people who want to work with him, and do more rational, informative, and world-improvy journalism?

2 · HStencil · 1mo: I would be extremely surprised if he had any interest in doing this, given what he’s said about his reasons for leaving Vox.
What EA projects could grow to become megaprojects, eventually spending $100m per year?

In the short term yes, but my vision was to see a news media organization under the leadership of a person like Kelsey Piper, one that is able to hire talented, reasonably aligned journalists to do great and informative journalism in the vein of Future Perfect. I'm not sure how scalable Future Perfect is under the Vox umbrella, and how freely it could scale up to its best possible form from an EA perspective.

What EA projects could grow to become megaprojects, eventually spending $100m per year?
Answer by MaxRa · Aug 07, 2021 · 53

Build up an institution that does what the IGM economic experts survey does for every scientific field, with paid editors, probabilistic forecasts, and maybe monetary incentives for the experts. https://www.igmchicago.org/igm-economic-experts-panel/

I like this idea in general, but would it ever really be able to employ $100m+ annually? For comparison, GiveWell spends about $6 million/year, and CSET was set up for $55m over 5 years ($11m/year).

5 · Nathan Young · 1mo: I would upvote if someone wrote a quick summary of this and a number of the other ideas which aren't immediately clear on first reading.
What EA projects could grow to become megaprojects, eventually spending $100m per year?
Answer by MaxRa · Aug 06, 2021 · 24

Hire ~5 film studios to each make a movie that concretely shows an AI risk scenario and at least roughly survives the rationalist-fiction sniff test. Goal: improve AI safety discourse, motivate more smart people to work on this.

4 · HaydnBelfield · 1mo: Hell yeah! Get JGL to star - https://www.eaglobal.org/speakers/joseph-gordon-levitt/
What EA projects could grow to become megaprojects, eventually spending $100m per year?
Answer by MaxRa · Aug 06, 2021 · 16

Take some EAs involved in public outreach and some journalists who made probabilistic forecasts of their own volition (Future Perfect people, Matt Yglesias, ?), and buy them their own news media organization to influence politics and raise the sanity and altruism waterline.

We could buy (a significant number of shares in) media companies themselves and shift their direction. Bezos bought the Washington Post for $250 million. Some are probably too big, like the New York Times at an $8 billion market cap and Fox Corporation at $20 billion.

3 · MichaelStJules · 1mo: Wouldn't they lose readers if they left their organizations? Is that what you mean? The fact that Future Perfect is at Vox gets Vox readers to read it.
5 · ChanaMessinger · 1mo: Matt makes lots of money on his independent Substack now, so that feels less urgent, but funding other things like Future Perfect in other news sources, as the Rockefeller Foundation does now, seems great.
What EA projects could grow to become megaprojects, eventually spending $100m per year?
Answer by MaxRa · Aug 06, 2021 · 15

Funding a "serious" prediction market.

Not sure if $100M is necessary or sufficient if you want many people or even multiple organizations to seriously work full-time on forecasting EA-relevant questions. Maybe it could also be used to spearhead the use of prediction markets in politics.

1 · samhbarton · 1mo: www.ideamarket.io is working on something that's in the same vein. It's not a prediction market, but seeks to use markets to identify credible/trustworthy sources. Disclaimer: I started working with Ideamarket a month ago.
4 · HStencil · 1mo: https://kalshi.com/
Is effective altruism growing? An update on the stock of funding vs. people

Great comment!

I’m trying to imagine what global development charities EAs who believe HBD donate to, and I’m having a hard time.

I don’t totally follow why "the belief that races differ genetically in socially relevant ways" would lead one not to donate to, for example, the Against Malaria Foundation or GiveDirectly. Assuming there is, for example, a (slightly?) lower average IQ, it seems to me that less malaria or more money will still do most of what one would hope for and what the RCTs say they do, even if you might expect (slightly?) lower economic growth potential and, in the longer term, (slightly?) less potential for the regions to become hubs of highly specialized skilled labor?

3 · SamiM · 1mo: I think you're right. I guess I took Gwen's comment at face value and tried to figure out how development aid will look different due to the "huge implications", which was hard.
How to Train Better EAs?

I used to listen to the podcast of a former Navy SEAL, and he argues that the idea of obedient drones is totally off for SEALs; I also got the impression they learn a lot of specialized skills for strategic warfare. Here's an article he wrote about this (I haven't read it myself): https://www.businessinsider.com/navy-seal-jocko-willink-debunks-military-blind-obedience-2018-6

[3-hour podcast]: Joseph Carlsmith on longtermism, utopia, the computational power of the brain, meta-ethics, illusionism and meditation

Really enjoyed listening to this. I relate a lot to your perspective on grounding value in our experiences and found Joseph's pushbacks really stimulating.

DeepMind: Generally capable agents emerge from open-ended play

Is there already a handy way to compare the computation costs that went into training, e.g. compared to GPT-3, AlphaZero, etc.?

5 · kokotajlod · 2mo: I would love to know! If anyone finds out how many PF-DAYs or operations or whatever were used to train this stuff, I'd love to hear it. (Alternatively: How much money was spent on the compute, or the hardware.)
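For what it's worth, here's a minimal sketch of the kind of comparison I had in mind, using the petaflop/s-day unit mentioned above. The GPT-3 number is the approximate training-compute figure reported in its paper and is only illustrative; I haven't seen a comparable figure for these DeepMind agents.

```python
# Converting a total training-compute estimate into petaflop/s-days (PF-days).
# One PF-day = 10^15 FLOP/s sustained for a day ≈ 8.64e19 FLOPs.

PF_DAY_FLOPS = 1e15 * 86_400

def to_pf_days(total_flops: float) -> float:
    """Convert a training-compute estimate in FLOPs to petaflop/s-days."""
    return total_flops / PF_DAY_FLOPS

gpt3_flops = 3.14e23  # approximate training compute reported for GPT-3 (175B)
print(f"GPT-3: ~{to_pf_days(gpt3_flops):,.0f} PF-days")  # roughly 3,600 PF-days
```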
What novels, poetry, comics have EA themes, plots or characters?

I also really enjoyed the unofficial sequel, Significant Digits. http://www.anarchyishyperbole.com/p/significant-digits.html

It's easy to make big plans and ask big questions, but a lot harder to follow them through.  Find out what happens to Harry Potter-Evans-Verres, Hermione, Draco, and everyone else once they grow into their roles as leaders, leave the shelter of Hogwarts, and venture out into a wider world of intrigue, politics, and war.  Not official.

"The best HPMOR continuation fic." -Eliezer Yudkowsky

Books and lecture series relevant to AI governance?

Thanks a lot for compiling this, I'm thinking about switching my career into AI governance and the lists in your Google Doc seem super useful!

Metaculus Questions Suggest Money Will Do More Good in the Future

Cool questions! I am a bit hesitant to update much:

  • they don’t seem to be too active, e.g. few comments, interest count at around 10 (can you see the number of unique forecasters somehow?)
  • the people doing the forecasts are probably EA-adjacent, and if they did something akin to a formal analysis, they would share it with the EA community, or at least in the comments, as it seems relatively useful to contribute this
3 · Charles Dillon · 2mo: For the first question, you can see under "community stats" the number of unique users, currently 28. For the second one you cannot see it on the page, I'm not sure why, but I'd guess it's a similar ratio (i.e. approx. half of the number of predictions).
On what kinds of Twitter accounts would you be most interested in seeing research?

I’ve seen the claim that economist Alex Tabarrok was significantly ahead of the curve on COVID issues. Would be interesting to see how he did or did not reach and convince the people involved in policy making. https://www.twitter.com/ATabarrok

1 · Miranda_Zhang · 2mo: Yes, I think Lizka might have mentioned him too. Good suggestion, thank you!
Increasing personal security of at-risk high-impact actors

Just saw this, which seems like a good step in the intended direction:

Canada will become one of the first countries to offer a dedicated, permanent pathway for human rights defenders, and will resettle up to 250 human rights defenders per year, including their family members, through the Government-Assisted Refugees Program.

https://www.canada.ca/en/immigration-refugees-citizenship/news/2021/07/minister-mendicino-launches-a-dedicated-refugee-stream-for-human-rights-defenders.html

Building my Scout Mindset: #1

Seconded, I'd really like to read more stream-of-thought inspections like this. It seems like a great practice and also like a cool way to understand other people's thought processes around difficult topics.

People working on x-risks: what emotionally motivates you?

Good question. I think I'm maybe a quarter of the way to being internally/emotionally driven to do what I can to prevent the worst possible AI failures, but re this:

Say I'm afraid of internalizing responsibility for working on important, large problems

I always thought it would be great if my emotional drives lined up more with the goals that I deliberately thought through as likely the most important. It would feel more coherent, it would give me more drive and focus on what matters, and it would downregulate things like some social motivations that I d... (read more)

Some AI Governance Research Ideas

Yeah. What I had in mind is that one might want to use a term that also emphasizes the potentially transformative impact AI companies will have, as in "We think your AI research might fit into the reference class of the Manhattan Project". And "socially beneficial" doesn't really capture this either for me. Maybe something in the direction of "risk-aware", "risk-sensitive", "farsighted", "robustly beneficial", "socially cautious"…

Edit: Just stumbled upon the word "stewardship" in the most recent EconTalk episode, used by a lecturer wanting to kindle a sense of stewardship over nuclear weapons in military personnel.

A do-gooder's safari

There is a poll on the Effective Altruism Polls Facebook group on the question "With which archetype(s) from Owen's post 'A do-gooder's safari' do you identify the most?"

https://www.facebook.com/groups/477649789306528/posts/1022081814863320/

Mauhn Releases AI Safety Documentation

I think it's great that you're trying to lead by example, and concrete ideas for how companies can responsibly deal with the potential of leading the development of advanced or even transformative AI systems are really welcome in my view. I skimmed three of your links and thought it all sounded basically sensible, though it will probably all look very different from this in practice, and I never want to put so much responsibility on anything called an "Ethics Board". (But I'm very basic in my thinking around strategic and gove... (read more)

1 · Berg Severens · 2mo: If it's not the ethics board that has the responsibility, it is either the company itself or the government (through legislation). The first one is clearly not the safest option. The main problem with the second one is that legislation has two problems: (1) it is typically too late before first incidents occur, which may be dramatic in this case, and (2) it is quite generic, which wouldn't work well in a space where different AGI organizations have completely different algorithms to make safe. Although an ethics board is not perfect, it can be tailor-made and still be free of conflict of interest.
I agree that it would be more desirable that AGI would be developed by a collaboration between governments or non-profit institutions. With my background, it was just a lot easier from a pragmatic perspective to find money through investors than through non-profit institutions.
Yes, the alignment system is still quite basic (although I believe that the concept of education would already solve a significant number of safety problems). The first gap we focus on is how to optimize the organizational safety structure, because this needs to be right from the start: it's really hard to convince investors to become a capped-profit company, for instance, if you're already making a lot of profits. The technical AI safety plan is less crucial for us in this phase, because the networks are still small and for classification only. It goes without saying that we'll put much more effort into a technical AI safety plan in the future.
I didn't ask yet for feedback, because of reasons of bandwidth: we're currently a team of three people with a lot to get done. We're happy to post this first version, but we also need to move forward technically. So, getting more feedback will be for 1-2 years or so.
Anki deck for "Some key numbers that (almost) every EA should know"

In case suggestions for new cards are still useful, just saw another useful number:

Q: What percentage of people across Europe think the world is getting better? [2015]
A: +/- 5%
Source: https://ourworldindata.org/optimism-pessimism

What are some key numbers that (almost) every EA should know?

Cool, really looking forward to adding them to my Anki deck!

Re: How many big power transitions ended in war

I had the work by Graham Allison in mind here; I'm not sure how set in stone it is, but I had the impression it is a sufficiently solid rough estimate:

(2) In researching cases of rising powers challenging ruling powers over the last 500 years, Allison and the Thucydides Trap Project at Harvard University found 12 of 16 cases resulted in war. 

Re: roughly how much they value their own time

I would do a card where people are able to fill in their number, ... (read more)

2 · Pablo · 3mo: Thanks!
The unthinkable urgency of suffering

I'm also a bit surprised; if I'm not mistaken, the post had negative karma at one point. People of course downvote for reasons other than controversy, e.g. from the forum's voting norms section:

“I didn’t find this relevant.”
“I think this contains an error.”
“This is technically fine, but annoying to read.”

But I'd be sad if people got the impression that posts like this, which reflect on altruistic motivations, are not welcome.

Yes, this was a bit puzzling for me. Good to see it redeemed a bit. I could see the post being disliked for a few reasons:

  • An image of EA as focused on suffering might be bad for the movement
  • It's preaching to the choir (which it definitely is)

Anyway, thanks for the reassuring comment!

The unthinkable urgency of suffering

Thanks for writing these words; they rekindled my deeply held desire to prevent intense suffering. It really is weird how quickly the desire fades into the background for me as well. In my case, one part is probably that my work and thinking nowadays are directed more at preserving what is good about humanity than at preventing the worst suffering, which was more of my focus when I thought more about global poverty and animal suffering.

9 · aaronb50 · 3mo: You're welcome and thanks for the comment. I too want to preserve what is good, but I can't help but think that EAs tend to focus too much on preserving the good instead of reducing the bad, in large part because we tend to be relatively wealthy, privileged humans who rarely if ever undergo terrible suffering.
WANBAM mentee applications are open until the end of July! 

It's Women and Non-Binary Altruism Mentorship. I also couldn't find it on the website, but googling turned it up.

What are the 'PlayPumps' of cause prioritisation?

Great share. Really hurt to read, oh man.

Here are some more details from the article that I found interesting, too:

Nonetheless, by the 1980s finding fault with high-yield agriculture had become fashionable. Environmentalists began to tell the Ford and Rockefeller Foundations and Western governments that high-yield techniques would despoil the developing world. As Borlaug turned his attention to high-yield projects for Africa, where mass starvation still seemed a plausible threat, some green organizations became determined to stop him there. "The environmenta

... (read more)
Exploring Existential Risk - using Connected Papers to find Effective Altruism aligned articles and researchers

Same here, thanks a lot for the post! Would be really cool if this leads to new connections in the growing field of longtermist academia.

What effectively altruistic inducement prize contest would you like to be funded?

Hmm, good question.

Recommending people to apply:

  • for an EA-related job, where the person ends up being seriously considered
  • to an EA Fund, where the applicant ends up getting funded

Forecasting-related:

  • writing comments on EA-relevant Metaculus questions that shift the distribution of forecasts by at least a certain amount
  • writing EA-relevant questions on Metaculus

Content-related:

  • incentivizing EA-related book reviews with a contest
1 · Mati_Roy · 3mo: Those are not inducement prizes.