All of D0TheMath's Comments + Replies

[Creative writing contest] Blue bird and black bird

Death of the author interpretation: currently there are few large EA-aligned organizations that were created by EAs. Much of the funding for EA-aligned projects just supports smart people who happen to be doing effective altruism.

The blue bird represents the EA community going to smart people, symbolized by the black bird, and asking why they’re working on what they’re working on. If the answer is a good one, the community / blue bird will pitch in and help.

Lizka (+2, 8d): I'm highly enjoying the "death of the author" interpretation (and even just its existence), thanks! :)
Hamish Huggard (+2, 9d): Oh nice. Socratic irony. I like it.
[Creative writing contest] Blue bird and black bird

I felt some cognitive dissonance at the small tree / lumberjack scene. Black Bird could have helped fight the lumberjack, then cut down the sprout. So it doesn’t map very well to actual catastrophic risk tradeoffs. I don’t know how to fix it though.

Matt_Sharp (+6, 11d): Yeah, and I don't think the example of the sprout maps particularly well to catastrophic risks in itself. If the sprout grows into a giant oak tree that is literally right next to their current tree, it seems like they could easily just move to the giant oak tree. It sounds like the 'giant oak' would eventually be bigger than their current tree, meaning more space per bird, allowing for more birds. Oh, and some birds eat acorns! In this case I think black bird could be making things worse for future birds.
Harrison D (+9, 11d): I did also initially think that it might be good to try to change the lumberjack instance, if possible, although it wasn't for the same reason: I just feel there is much more of a case to make that the lumberjack deserves a whole-of-community effort, since there is a plausible chance the extra bird could make a difference. But after considering this about the non-urgency of the sprout vs. the lumberjack, I especially feel it may not be the best example. Still, I understood the message/idea, and it's hard to know how non-EAs might react to the situation. Just something to keep in mind.
Needed: Input on testing fit for your career

This seems like it could be a very valuable resource, and I will totally use it.

Miranda_Zhang (+3, 1mo): Agreed! Most of my EA networking is geared towards answering this question.
In favor of more anthropics research

Ah, thanks. It was a while ago, so I guess I was misremembering.

In favor of more anthropics research

I haven't done significant research into the Doomsday argument, but I do remember thinking it seemed intuitively plausible when I first heard of it. Then I listened to this 80,000 Hours podcast, and the discussion of the Doomsday argument, if I remember correctly, convinced me it's a non-issue. But you may want to relisten to make sure I'm remembering correctly. Correction: I was not remembering correctly. They came away with the conclusion that more funding & research is needed in this space.

There may be good work to be done on formalizing the ... (read more)

smountjoy (+6, 1mo): I had the opposite takeaway from the podcast. Ajeya and Rob definitely don't come to a confident conclusion. Near the end of the segment, Ajeya says, referring definitely to the simulation argument but also I think to anthropics generally, …
[PR FAQ] Banner highlighting valuable EA resources

This seems cool. I think I'd learn quite a bit about what orgs & resources exist if this were implemented, but I also worry it may take up too much space and that I'll end up turning it off out of annoyance.

Narration: Improving the EA-aligned research pipeline: Sequence introduction

I said I wasn't going to publish this as a frontpage post, but I misclicked a button during the posting process. Sorry. It'd be nice if a moderator could take it off the frontpage.

What novels, poetry, comics have EA themes, plots or characters?

The only work I know of which is explicitly effective altruist is A Common Sense Guide to Doing the Most Good, but many works on r/rational share a similar philosophy as many EAs.

Narration: The case against “EA cause areas”

Thanks! This is really good feedback. One person saying something could mean anything, but two people saying the same thing is a much stronger signal that that thing is a good idea.

Aaron Gertler (+9, 2mo): I'll be a third person here: the narrations are nice, but are starting to clutter the front page. I'd recommend having one big post where you list all the narrations you've done, with links to the appropriate posts or comments. That post can have the "audio" tag so people find it when they look for audio, and it's a handy way for you to link to the full set of recordings at once if you want people to know about the resource.
Narration: The case against “EA cause areas”

Noted. I was worried it would get annoying, so thanks for confirming that worry. I’ll experiment with posting some not on the front-page, and see if they get significantly fewer listens.

michaelchen (+4, 2mo): I listen to a good amount of podcasts using the Pocket Casts app (or at least I did for a couple years, up until a few weeks ago when I realized that I find YouTube explanations of tech topics a lot more informative). But when I'm browsing the EA Forum, I'm not really interested in listening to podcasts, especially podcast versions of posts I've already read that I could easily re-read on the EA Forum. I think this is a cool project, but after the first couple of audio narration posts, which were good for generating awareness of this podcast, I don't think it's necessary to continue these top-level posts. I think it would still be worthwhile to experiment with not posting some episodes on the front page and seeing how that affects the number of listens.
Research into people's willingness to change cause *areas*?

Rethink Priorities' analysis of the 2019 EA survey concluded that 42% of EAs changed their cause area after joining the movement, that 57% of those changes were away from global poverty, and that 54% were towards the long-term future / catastrophic and existential risk reduction.

Rethink Priorities and Faunalytics also have a lot of content on how to do effective animal advocacy, which would likely be useful for your purposes.

This is probably not the extent of research that Rethink Priorities has on this issue, but it's what I could remember reading about.

Narration: We are in triage every second of every day

Yes, it's a linkpost to my podcast here, where others and I have been narrating selected forum posts.

Which EA forum posts would you most like narrated?

A narration of the newsletter, or the posts linked in the newsletter?

mawa (+2, 2mo): I meant just the newsletter itself, not the linked posts.
How do you communicate that you like to optimise processes without people assuming you like tricks / hacks / shortcuts?

Using Slate Star Codex's "Style Guide: Not Sounding Like An Evil Robot": instead of saying "I want to optimize X", you should instead say "I want to find the best way to do X".

Madhav Malhotra (+2, 2mo): I like the examples in the guide! Thank you for sharing that with me :-) I do say things that he mentioned, like males/females instead of men/women or 'a high probability of' instead of 'probably'. I'll start working on that!
How to explain AI risk/EA concepts to family and friends?

Perhaps try explaining by analogy, or providing examples of ways we’re already messing up.

Like the YouTube algorithm. It only maximizes the amount of time people spend on the platform, because (charitably) Google thought that would be a useful proxy for the quality of the content it provides. But instead, it ended up figuring out that if it showed people videos which convinced them of extreme political ideologies, it would then be easier to find videos that made them feel anger/happiness/sadness/other addictive emotions, which kept them on the platform. (A toy sketch of this failure mode follows below.)

This pa... (read more)
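As a toy illustration of the proxy-metric failure in the YouTube example above, here is a minimal sketch in Python. The titles, quality scores, and watch times are entirely made up, and this has nothing to do with YouTube's actual system; it only shows how a recommender told to maximize watch time drifts away from the quality we actually cared about.

```python
# Toy recommender: the designer cares about content quality, but the system
# is told to maximize watch time (a proxy). All numbers below are made up.

videos = [
    # (title, true_quality, avg_watch_minutes) -- hypothetical values
    ("calm explainer video",  0.9,  8),
    ("balanced news recap",   0.7,  6),
    ("outrage compilation",   0.2, 25),
    ("conspiracy deep-dive",  0.1, 40),
]

def recommend(catalog, score):
    """Return the video that maximizes the given scoring function."""
    return max(catalog, key=score)

# What we wanted the system to pick:
print("best by quality:   ", recommend(videos, score=lambda v: v[1])[0])
# What we actually told it to pick (the proxy metric):
print("best by watch time:", recommend(videos, score=lambda v: v[2])[0])
```

The two objectives select different videos; the system is doing exactly what it was told, and the mistake was in choosing the proxy.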

When should those who sign up expect to receive their acceptance/rejection?

Art Kiser (+2, 3mo): By this Friday.

I am testing comment functionality on Linux Mint OS, using the Firefox browser.

edit: seems like I can edit too.

The EA Forum Podcast is up and running

We've talked in private, but I figure I should publicly thank you for your offer of help.

edit: this is the thank you.

The EA Forum Podcast is up and running

Anchor submits the podcast to the various podcast platforms to get it listed on them. They say this takes a few business days to complete. In the meantime, you can use Ben Schifman's method.

[Link] Reading the EA Forum; audio content

For example, in the first month we launched them (July 2020), across the 3 different profiles, the detailed versions averaged 62% as many downloads as the short versions, and the audio versions averaged 6% as many downloads as the short versions.

This changes my estimate of how useful the EA forum podcast will be, so thanks for sharing your experience.

EdoArad (+4, 3mo): One important difference is that the EA Forum is a continuous stream and people probably mostly read posts via the frontpage feed, rather than looking directly for information (which is probably more the case for the skills profiles).
Which EA forum posts would you most like narrated?

Thanks for the correction. I read the intro to the first prize post (May 2020) on the tag page, and thought it meant that was the last one that would be published.

I thought there were more published between May 2020 and now, but for the last year time has felt pretty weird, so I figured I was misremembering.

EdoArad (+3, 3mo): Ah, I see! Yea, the way it's sorted makes it very confusing (it's based on the tag upvotes, which is rather irrelevant here).
Which EA forum posts would you most like narrated?

That's a great idea! I was disappointed, though, that they stopped doing these a year ago, so I tried to think of any similar 'best-of' lists I know of, remembered that the EA Forum Digest exists, and I'm probably going to read posts from it too.

EdoArad (+2, 3mo): The Forum Prize is ongoing; the most recent is for March [https://forum.effectivealtruism.org/posts/FzEf4EeXcnW8X4JXb/ea-forum-prize-winners-for-march-2021] (and I guess that the April edition should be out soon).
[Link] Reading the EA Forum; audio content

Nice! Have any preferences for what we use to coordinate together (Slack, Discord, Twitter, WhatsApp, …)? If not, then we can default to WhatsApp.

david_reinstein (+2, 3mo): If you are already on the EA Global Slack we could start a channel.
david_reinstein (+2, 3mo): WhatsApp (or Slack) is good. Email me at daaronr at gmail to coordinate?
[Link] Reading the EA Forum; audio content

Yes!!! Thank you for this! I absorb audio info much more easily & quickly than text, so this will be very helpful.

Edit: also, now that you mention it, I could probably record forum posts myself as well, as there are likely many others like me. Do you want to partner up to coordinate on which posts to read & add a measure of social enforcement?

I don’t have a lot of time in the next few days, but I should be much freer after the 5th.

david_reinstein (+2, 3mo): Yes, let's do it. Maybe even get a bigger group going.
Please Test/Share my New Vegan Video Game

Oh, this seems fun! I'll certainly be playtesting it in the coming days (it's also been added to my wish-list).

scottxmulligan (+3, 4mo): Thank you so much! Wishlists help a lot!

Ok. I'll do that.

That's annoying. The formatting is fixed (I had to transfer over to the WYSIWYG editor, since the <br> solution for markdown didn't work). I also don't know how to transfer it over to a regular post. Thanks for telling me about these problems.

Aaron Gertler (+2, 4mo): Mods aren't able to "transfer" posts in this way, either. I'd recommend just moving this post to a draft and reposting it as a non-Question with the same tag.
Getting money out of politics and into charity

If we successfully built this platform, would you consider using it? If your answer is “it depends”, what does it depend on?

I wouldn't use it, since I don't donate to campaigns, but I would certainly push all my more political friends and family members to use it.

What questions would you like to see forecasts on from the Metaculus community?

In The Precipice, Toby Ord gives estimated chances of various existential risks happening within the next 100 years. It'd be cool if we could get estimates from Metaculus as well, although it may be impossible to implement, as Tachyons would only be awarded when the world doesn't blow up.

niplav (+5, 1y): Well, there's the Ragnarök question series [https://www.metaculus.com/questions/2568/ragnar%25C3%25B6k-question-series-results-so-far/], which seems to fit what you're looking for.
The 80,000 Hours podcast should host debates

I like the idea of having people with different opinions discuss their disagreements, but I don't think they should be marketed as debates. That term doesn't have positive connotations, and seems to imply that there will be a winner/loser. Even if there is no official winner/loser, it puts the audience and the participants in a zero-sum mentality.

I think something more like an adversarial collaboration would be healthier, and I like that term more because it's not as loaded, and it's more up front about what we actually want the participants to do.

jackmalde (+1, 1y): I actually completely agree. I'm sort of against there being a winner and loser, because that might imply that the winner's side of the argument is now objectively better and should be adopted by EAs. I doubt anything will be 'settled' by a podcast episode, but it should hopefully identify points of contention and help us get closer to the truth.
How do i know a charity is actually effective

Thanks for the correction. Idk why I thought it was Toby Ord.

How do i know a charity is actually effective

I haven't read Will's book, so I'm not entirely sure what your background knowledge is.

Are you unsure about how to compare two different cause areas? For instance, do you accept that it's better to save the lives of 10 children than to fund a $30,000 art museum renovation project, but are unsure whether saving the lives of 10 children or de-worming 4,500 children is better?

In this case I suggest looking at QALYs and DALYs, which try to quantify the number of healthy years of life saved given estimates for how bad various diseases and d... (read more)
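As a quick illustration of the QALY/DALY comparison mentioned above, here is a minimal sketch in Python. Every cost and DALY figure is a made-up placeholder chosen only to mirror the examples in the comment, not a real GiveWell or WHO estimate.

```python
# Toy cost-effectiveness comparison in DALYs (disability-adjusted life years).
# All figures below are invented for illustration only.

interventions = {
    # name: (cost_usd, dalys_averted) -- hypothetical numbers
    "art museum renovation":      (30_000,   0),
    "deworming 4,500 children":   (30_000,  60),
    "saving 10 children's lives": (30_000, 300),  # assuming roughly 30 DALYs per young life
}

for name, (cost, dalys) in interventions.items():
    if dalys > 0:
        print(f"{name}: ${cost / dalys:,.0f} per DALY averted")
    else:
        print(f"{name}: no health benefit to compare")
```

The comparison only works once every option is expressed in the same unit of healthy life-years, which is the whole point of QALYs and DALYs.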

Thomas Kwa (+2, 1y): The person who broke down in tears during an interview is actually Derek Parfit, also an effective altruist. Source: http://bostonreview.net/books-ideas-mccoy-family-center-ethics-society-stanford-university/lives-moral-saints
FLI AI Alignment podcast: Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

This was a particularly informative podcast, and you helped me get a better understanding of inner alignment issues, which I really appreciate.

To check that I understand: the issue with inner alignment is that, as an agent gets optimized for a reward/cost function on a training distribution, and to do well the agent needs to have a good enough world model to determine that it's in (or could be undergoing) training, then if the training ends up creating an optimizer, it's much more likely that that optimizer's reward function is bad or a proxy, and if it's suffi

... (read more)
evhub (+2, 1y): Glad you enjoyed it! So, I think what you're describing in terms of a model with a pseudo-aligned objective pretending to have the correct objective is a good description of specifically deceptive alignment [https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/zthDPAjh9w6Ytbeks], though the inner alignment problem [https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/FkgsxrGf3QxhfLWHG] is a more general term that encompasses any way in which a model might be running an optimization process for a different objective than the one it was trained on. In terms of empirical examples, there definitely aren't good empirical examples of deceptive alignment right now, for the reason you mentioned, though whether or not there are good empirical examples of inner alignment problems in general is more questionable. There are certainly lots of empirical examples of robustness/distributional shift problems, but because we don't really know whether our models are internally implementing optimization processes or not, it's hard to really say whether we're actually seeing inner alignment failures. This post [https://www.alignmentforum.org/posts/2GycxikGnepJbxfHT/towards-an-empirical-investigation-of-inner-alignment] provides a description of the sort of experiment which I think would need to be done to really definitively demonstrate an inner alignment failure (Rohin Shah at CHAI also has a similar proposal here [https://docs.google.com/document/d/1G1a1XOiKoHLGpD03IZIqIatphPx5JNBE5WYMn1y3V0A/edit]).
How do you talk about AI safety?

While I haven't read the book, Slate Star Codex has a great review of Human Compatible. Scott says it speaks of AI safety, especially in the long-term future, in a very professional-sounding and not-weird way. So I suggest reading that book, or that review.


You could also list several different smaller scale AI-misalignment problems, such as the problems surrounding Zuckerberg and Facebook. You could say something like "You know how Facebook's AI is programmed to keep you on as long as possible, so often it will show you controversial cont... (read more)

rohinmshah (+3, 1y): Was going to recommend this as well (and I have read the book).
Terrorism, Tylenol, and dangerous information

Not entirely applicable to the discussion, but I just like talking about things like this and I finally found something tangentially related. Feel free to disregard.

"if you look at a period of sustained effort in staying on the military cutting edge, i.e. the Cold War, you won't see as many of these mistakes and you'll instead find fairly continuous progress"

The Cold War wasn't peacetime, though... there was continuous fighting by both sides: the Americans and Chinese in Korea, the Americans in Vietnam, and the Russians in Afghanistan.

One ... (read more)