All of michaelchen's Comments + Replies

Can human extinction due to AI be justified as good?

I can see an AI creating clones or other agents to help it achieve its goals. And they might all try to help each other survive to work toward that goal. But that doesn't necessarily mean helping each other have positive experiences. It could even involve a significant degree of punishment to shape actions to better achieve that goal, although I'm less sure about this.

Samuel Shadrach: Yup, the punishment point is definitely valid. I was just assuming that beings helping their clones is intrinsically morally valuable activity if each being has moral status, and answering based on that.
AGI Safety Fundamentals curriculum and application

I noticed that "Will humans build goal-directed agents?" was changed from being a required reading to Week 2 to being an optional reading. I don't disagree with this choice, as I didn't find the post very convincing, though I was rather fond of your post "AGI safety from first principles: Goals and Agency". However, now all the required readings for Week 2 essentially take for granted that AGI will have large-scale goals. Before I participated in AGI Safety Fundamentals in the first round this year, I never considered the possibility that AGI could be non-... (read more)

Help me understand this expected value calculation

Your calculation looks correct to me. (WolframAlpha confirms "10^52 * 1% * 1 billionth * 1 billionth * 1%" is 10^30.) It seems that Nick Bostrom is underestimating the expected value by a factor of 10^10.
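Written out, this is just the same arithmetic as the WolframAlpha query above, adding exponents:

$$10^{52} \times 10^{-2} \times 10^{-9} \times 10^{-9} \times 10^{-2} = 10^{52-2-9-9-2} = 10^{30}.$$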

AndreaSR: Thanks for your reply. I'm glad my calculation doesn't seem way off. Still, it feels like too obvious a mistake not to have been caught, if it indeed were a mistake...
JP Addison: A minor factor of ten billion 😉
Noticing the skulls, longtermism edition

I have a hard time seeing longtermism being at risk of embracing eugenics or racism. But it might be interesting to look at the general principles behind why people in the past advocated eugenics or racism—perhaps insufficient respect for individual autonomy—and try to learn from those more general lessons. Is that what you're arguing for in your post?

Davidmanheim: Yes. The ways that various movements have gone wrong certainly differ, and despite the criticism related to race, which I do think is worth addressing, I'm not primarily worried that longtermists will end up repeating specific failure modes (https://forum.effectivealtruism.org/posts/7Pxx7kSQejX2MM2tE/why-do-social-movements-fail-two-concrete-examples) - different movements fail differently.
Noticing the skulls, longtermism edition

These largely focus on the (indisputable) fact that avoiding X-risks can be tied to racist or eugenic historical precedents. This should be worrying;

I think most social movements can be traced to some sort of unsavory historical precedent. For example:

I provide these examples not to criticize these movements but because I think these ... (read more)

I think that ignoring historical precedent is exactly what Scott was pointing out we aren't doing in his post, and I think the vast majority of EAs think it would be a mistake to do so now.

My point was that we're aware of the skulls, and cautious. Your response seems to be "who cares about the skulls, that was the past. I'm sure we can do better now." And coming from someone who is involved in EA, hearing that view from people interested in changing the world really, really worries me - because we have lots of evidence from studies of organizational decision making and policy that ignoring what went wrong in the past is a way to fail now and in the future.

It seems pretty bizarre to me to say that these historical examples are not at all relevant for evaluating present-day social movements. I think it's incredibly important that socialists, for example, reflect on why various historical figures and states acting in the name of socialism caused mass death and suffering; likewise, any social movement should look at its past mistakes, harms, etc., and try to reevaluate its goals in light of that.

To me, the examples you give just emphasize the post's point — I think it would be hard to find someone who di... (read more)

My personal cruxes for working on AI safety

What does AI safety movement building look like? What sorts of projects or tasks does this involve? What are the relevant organizations where one could do AI safety movement building work?

Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

Reading your suggested required readings, S-risks: An introduction and S-risk FAQ, I don't get a clear sense of why s-risks are plausible or why the suggested interventions are useful. I like S-risks: Why they are the worst existential risks, and how to prevent them (EAG Boston 2017) a bit more for illustrating why they are plausible, and I've added it as an optional reading in the uni chapter intro program I'm running. Unfortunately, it doesn't give more than a cursory overview of how s-risks could be reduced. I'd be hesitant about making an s-risk readi... (read more)

GCRs mitigation: the missing Sustainable Development Goal

So I see https://www.povertyactionlab.org/initiative/crime-and-violence-initiative and https://www.poverty-action.org/topics/crime but based on a quick examination, I have no idea how cost-effective these interventions are. Does anyone have links providing an estimate of the cost-effectiveness of violence prevention?

Annotated List of Project Ideas & Volunteering Resources

I can't comment on the Google Doc version of this post. Can you add Impact CoLabs?

How to make the best of the most important century?

Figuring out how to stop AI systems from making extremely bad judgments on images designed to fool them, and other work focused on helping avoid the "worst case" behaviors of AI systems.

I haven’t seen much about adversarial examples for AI alignment. Besides https://www.alignmentforum.org/tag/adversarial-examples (which only has four articles tagged), https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment, and https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-robustness-and-adversarial-exa... (read more)

Holden Karnofsky: I'm not sure whether you're asking for academic literature on adversarial examples (I believe there is a lot) or for links discussing the link between adversarial examples and alignment (most topics about the "link between X and alignment" haven't been written about a ton). The latter topic is discussed some in the recent paper Unsolved Problems in ML Safety (https://arxiv.org/pdf/2109.13916.pdf) and in An overview of 11 proposals for building safe advanced AI (https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai).
Fighting Climate Change with Progressive Activism in the US: CEA

Is progressive activism for the climate more cost-effective than nonpartisan activism (such as by Citizens' Climate Lobby)?

Dan Stein: Hi Michael, thanks for the question! We haven't tried to do an in-depth analysis of Citizens' Climate Lobby, though we did do a shallow dive (https://www.givinggreen.earth/us-policy-change-researches/activism%3A-shallow-dives) on them last year. I think in theory it would be great if we could find an organization doing high-impact, centrist activism, but I haven't seen it. CCL is an interesting model and they have had a lot of success, but they have been really focused on a carbon tax, which doesn't seem to have much leverage in DC recently. So I think that blunts their effectiveness. That being said, a carbon tax just came up in the discussions for the first time in a while, so perhaps there is more potential in CCL's approach than I originally thought.
Introducing 'Playpumps Productivity System'

At least for me, I think daily accountability—and having to write a reflection if you fail to meet your goals—is a lot greater of an incentive than the threat of donating to PlayPumps a few months down the line.

The importance of optimizing the first few weeks of uni for EA groups

I'm surprised that retreats are low-effort to plan! What sorts of sessions do you run? What draws people in to attend?

Introducing EA to policymakers/researchers at the ECB

I think you'd get a lot more answers if you ask your question in the EA Groups Slack: https://efctv.org/groupslack

Giving What We Can's guide to talking about effective altruism has some good tips.

Inviting people to come to a nearby EA meetup or to apply to a locally hosted EA Fellowship sounds good.

One thing you could try to set up is a presentation about EA to someone in charge of philanthropy at the organization. I know someone who just gave a presentation like that at his internship company, though he wasn't successful in causing them to give to effect... (read more)

anonfornow: This is all very helpful - thank you!
Announcing riesgoscatastroficosglobales.com

Looks good! Some minor suggestions:

  • Remove "Made with Squarespace" in the footer
  • Add a favicon to the website
Jsevillamol: Done, thank you!
Lessons from Running Stanford EA and SERI

Hey Markus, I'm only getting started with organizing an EA group, but here are my thoughts:

  • I think 6 hours per week is enough time to sustain a reasonable amount of growth for a group, though I don't have enough personal experience to know. If you think funding would enable you to spend more time on community building, you can apply for an EA Infrastructure Fund grant. And you can always get Group Support Funding to cover expenses for things you think would help, such as snacks, flyers, books, etc.
  • I think the Intro EA Program is a surprisingly effective wa
... (read more)
markus_over: Thank you Michael!
  • I personally am definitely more time- than funding-constrained. Or maybe even "energy-constrained"? But maybe applying for funding would be something to consider when/if we find a different person to run the local group, maybe a student who could do this for 10h a week or so.
  • Regarding a fellowship: my bottlenecks here are probably "lack of a detailed picture of how to run such a thing (or what it even is exactly)" and "what would be the necessary concrete steps to get it off the ground". Advertising is surely very relevant, but secondary to these other questions for now.
  • On a slightly more meta level, I think one of the issues is that I don't have a good overview of the "action space" (or "moves") in front of me as an organizer of an EA local group. Running a fellowship appears to be a very promising move, but I don't really know how to make it. Other actions may be intro talks, intro workshops, concepts workshops, discussions, watching EAG talks together, game nights, talks in general, creating a website, setting up a proper newsletter instead of having a manually maintained list of email addresses, looking for a more capable group organizer, Facebook ads, flyers, posters, running giving games, icebreaker sessions, running a career club, coworking, 1-on-1s, meeting other local groups, reaching out to formerly-but-not-anymore-active members, and probably much more I'm not even thinking about. Maybe I'm suffering a bit from decision paralysis here and just doing any of these options would be better than my current state of "unproductive wondering what I should be doing"... :)
  • Will message you regarding a call, thanks for the offer!
You should write about your job

While I think a write-up of my experience as a web development intern wouldn't add much value compared to the existing web developer post, I'd be interested in writing a guide to getting a (top) software engineering internship/new grad position as a university student. (Not saying my past internships are top-tier!) I'm planning on giving an overview of (or at least linking to resources about) how to write a great resume, prepare behavioral interview answers, prepare for technical interviews with LeetCode-style or system design questions, and so on. ... (read more)

Aaron Gertler: Definitely sounds on-topic! (Just like these "job profile" posts are.) I appreciate that you plan to make use of the vast online literature on this topic, rather than writing a lot of original content. Even if your post doesn't cover much new ground, I've seen other Forum posts (https://forum.effectivealtruism.org/posts/YGBgWfLqeqCCWfnmK/hiring-process-and-takeaways-from-fish-welfare-initiative) that were among the best I've read on "generic" topics, so my prior is that we have a lot of good writers whose work is worth reading, even when it's not specific to EA.
Phil Torres' article: "The Dangerous Ideas of 'Longtermism' and 'Existential Risk'"

Phil Torres's tendency to misrepresent things aside, I think we need to take his article as an example of the severe criticism that longtermism, as currently framed, is liable to attract, and reflect on how we can present it differently. It's not hard to read this sentence on the first page of (EDIT: the original version of) "The Case for Strong Longtermism":

The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing pr

... (read more)
Aleks_K: FYI, this has already happened. The version you are linking to is outdated, and the updated version here (https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/) no longer contains this statement.
Narration: The case against “EA cause areas”

I listen to a fair number of podcasts using the Pocket Casts app (or at least I did for a couple of years, up until a few weeks ago when I realized that I find YouTube explanations of tech topics a lot more informative). But when I'm browsing the EA Forum, I'm not really interested in listening to podcasts, especially podcast versions of posts I've already read that I could easily re-read on the EA Forum. I think this is a cool project, but after the first couple of audio narration posts, which were good for generating awareness of this podcast, I don't think it... (read more)

D0TheMath: Thanks! This is really good feedback. One person saying something could mean anything, but two people saying the same thing is a much stronger signal that that thing is a good idea.
There will now be EA Virtual Programs every month!

It seems inconvenient if applicants potentially have to fill out the Virtual Programs application form too and receive a second acceptance/rejection decision—could we have just one application form for them to fill out and one acceptance/rejection decision notification? I was thinking that hopefully we could have something like the following process:

  • Have applicants apply through the EA Virtual Programs form, or have a form specific to our chapter which puts data into the EA Virtual Programs application database. (I don't know enough about Airtable to kno
... (read more)
Why I prioritize moral circle expansion over artificial intelligence alignment

In Human Compatible (2019), Stuart Russell advocates for AGI that follows preference utilitarianism, maximally satisfying the values of humans. As for animal interests, he seems to think they are sufficiently represented, since he writes that the AI will value them insofar as humans care about them. Reading this from Stuart Russell shifted me toward thinking that moral circle expansion probably does matter for the long-term future. It seems quite plausible (likely?) that AGI will follow this kind of value function, which does not directly care ... (read more)

EA cause areas are just areas where great interventions should be easier to find

I see this sentence as suggesting capitalizing on the (relative) popularity of anti-racism movements and trying to use society's interest in anti-racism toward genocide prevention.

freedomandutility: Yep, exactly that!
There will now be EA Virtual Programs every month!

The Introductory EA Virtual Program has been invaluable for getting enough people engaged in EA for me to be able to start a group at my university, and for that I'm extremely grateful to those who have helped organize it and develop the curriculum. If you're in the same position I was a few months ago, reasonably interested in starting an EA group but having difficulty finding enough people interested in EA, I'd highly recommend advertising the Introductory EA Program!

I'm interested in running a local in-person program at my university from September to O... (read more)

yiyang: Hi Michael! Yes, just direct people who are not able to join your local program to EA VP's website! And tell them to state in the application form that they want to be in a cohort with other people from the same uni. I spoke to Emma about this, so here's what I gathered: when we think about fellowships, we generally think about programs that are highly selective, are intensive, have funding, and have various supports and opportunities (example 1: https://en.wikipedia.org/wiki/Fellow, example 2: https://career.berkeley.edu/Resources/Fellow). It sounds misleading when we use the term "fellowship", and that's bad for EA's reputation, so we use "programs" instead. I didn't ask whether locally organised programs should also follow the same naming conventions, so I'm still clarifying this.
You are allowed to edit Wikipedia

I agree that people should edit a Wikipedia article directly or discuss on the talk page instead of complaining about it elsewhere. Leaving a comment on the talk page can be a quick way of helping shift the consensus on a controversial topic. In my experience, though, unless it's a very popular page, it's often the case that when someone leaves a comment on the talk page describing overall changes that they want made, no one responds and no changes are made. Or someone responds with an agreement or disagreement, and nothing happens. Thus, es... (read more)

I agree with this advice.

I think a simple way to get involved with Wikipedia is to "adopt" an article on an important topic you are familiar with but which is currently covered inadequately. This will allow you to see how your changes are received, develop a relationship with other editors who contribute regularly on that page, and experience the satisfaction of seeing the article (hopefully) improve over time in part thanks to your efforts.

Matsés - Are languages providing epistemic certainty of statements not of the interest of the EA community?

Something similar to putting explicit credence levels on claims is how Arbital embeds inline probability distributions: users can vote on a probability, and their votes contribute to the displayed distribution.

Shouldn't 'Effective Altruism' be capitalized?

I've seen a mix of some people capitalizing effective altruism, maybe more often in communications with a more general audience, and some people not capitalizing it. I generally try to leave it uncapitalized, following CEA policy, but sometimes I capitalize it when that makes it clearer that effective altruism is an actual Thing, not just altruism that is effective, and not a generic made-up compound term like, say, "efficient humanitarianism" or "impactful lovingkindness". For example, if I write in a self-introduction "I'm passionate about effective altruism"... (read more)

lukefreeman: Yeah, lowercase (other than in titles) is what helps ensure that "effective altruism" isn't seen as a single organisation.
Matsés - Are languages providing epistemic certainty of statements not of the interest of the EA community?

Anecdotally, it's quite tiring to put credence levels on everything. When I started my blog I began by putting a probability on all major claims (and even wrote a script to hide this behind a popup to minimise aesthetic damage). But I soon stopped.

Interesting! Could you provide links to some of these blog posts?

technicalities: Seems I did this in exactly 3 posts before getting annoyed: https://www.gleech.org/controversy/, https://www.gleech.org/genes-out/, https://www.gleech.org/anthropology/
gavintaylor's Shortform

According to Fleck's thesis, Matsés has nine past tense conjugations, each of which expresses the source of information (direct experience, inference, or conjecture) as well as how far in the past it was (recent past, distant past, or remote past). Hearsay and history/mythology are also marked in a distinctive way. For expressing certainty, Matsés has a particle, ada/-da, and a verb suffix, -chit, which mean something like "perhaps," and another particle, ba, which means something like "I doubt that..." Unfortunately for us, this doesn't seem more expressive than ... (read more)

Matsés - Are languages providing epistemic certainty of statements not of the interest of the EA community?

I'm pretty interested in linguistics, so after reading gavintaylor's comment which you linked to, I decided to read part of David Fleck's 2003 doctoral thesis on the Matsés language. For distinguishing levels of certainty, Matsés has a particle, ada/-da, and a verb suffix, -chit, which mean something like "perhaps," and another particle, ba, which means something like "I doubt that..."; English speakers already naturally express these distinctions of certainty. Something distinctive that Matsés speakers do, though, is that they always mark statements about... (read more)

mikbp: You blew my mind, thanks!
Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare

I initially thought that CEA here stood for Centre for Effective Altruism, and only later did I realize that it stood for cost-effectiveness analysis.

Introducing Rational Animations

I upvoted, but here are some comments I have. Looking at the titles of the first three videos, it wasn't clear how they related to rationality.

  • How Does Bitcoin Work? Simple and Precise 5
  • Why Your Life Is Harder When You Are Ugly | The (didn't notice the "Halo Effect" in the thumbnail at first)
  • Why You STRUGGLE to Finish on Time | The

So perhaps people downvoted based on first impressions that it doesn't seem that related to rationality?

I enjoyed the Bitcoin, halo effect, and planning fallacy videos, but I didn't think that the video "If You Want to F... (read more)

Writer: Thanks for this feedback. I did the actionable thing I could do and changed the titles of the first two animated videos.
Please Test/Share my New Vegan Video Game

How long does it take to play through this game?

scottxmulligan: It takes about 1-2 hours to complete.
Introducing Rational Animations

What topics are you thinking of making videos for in the future? (Or is this information reserved for Patreon supporters?)

You say "first three videos", but I can also see the following videos:

  • redstone neuron in minecraft!
  • minecraft 24 hour digital clock
  • etc.

I don't think that's a big deal though—you can keep them up and maybe they'll attract some more viewers to your channel.

Also, one month ago on your YouTube channel, you posted a link to your Patreon, but it actually links to https://www.patreon.com/rationalanima (which is dead) instead of https://www.patreon.... (read more)

2Writer5moBy "first three videos," I meant the first three animated videos. Pardon. There are old videos I mean to keep because they are generally on topic and either sort of historic (epic conway's game of life) or cute stuff from my adolescence, which was pretty good anyway. Not production-wise, but at least concept-wise. But most importantly most of my current public arrives from them. Thanks for the link heads-up.
On Sleep Procrastination: Going To Bed At A Reasonable Hour

Some things I do:

  • Set my computer to automatically shut down at certain times, such as 12:10am, 12:30am, and 1am. I chose the time of 12:10am because assignments are generally due at 11:59pm, so there's no need to stay up later. Since I'm using Windows, I can do this with Task Scheduler: https://www.techrepublic.com/article/how-to-schedule-a-windows-10-shutdown-for-a-specific-date-and-time/ (a rough scripted alternative is sketched after this list)
  • Set an alarm on my computer at 12am to remind me that my computer is going to shut down soon. This alarm plays only when the computer is awake (i.e., when I might be us
... (read more)
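Not part of the original comment: a minimal sketch of the same shutdown idea in Python instead of Task Scheduler, assuming Windows (its built-in shutdown command) and a Python install. The 12:10am cutoff is just the illustrative value from the bullet above.

```python
# Hypothetical sketch: schedule tonight's shutdown via Windows' built-in
# `shutdown` command instead of Task Scheduler. Run once at the start of an
# evening session; a pending shutdown can be cancelled with `shutdown /a`.
import datetime
import subprocess

CUTOFF = datetime.time(hour=0, minute=10)  # 12:10am, just after 11:59pm deadlines

def seconds_until(cutoff: datetime.time) -> int:
    """Seconds from now until the next occurrence of `cutoff`."""
    now = datetime.datetime.now()
    target = datetime.datetime.combine(now.date(), cutoff)
    if target <= now:
        target += datetime.timedelta(days=1)
    return int((target - now).total_seconds())

if __name__ == "__main__":
    delay = seconds_until(CUTOFF)
    # /s = shut down, /t = delay in seconds before doing so.
    subprocess.run(["shutdown", "/s", "/t", str(delay)], check=True)
```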
Concerns with ACE's Recent Behavior

I think this point from the Black VegFest 7 points of allyship (for the white vegan community) is reasonably straightforward:

White vegans/ARs will respect the sanctity of Black space and will not enter unless their presence is necessary. Black space is for the growth and betterment of Black people. Allyship and being accomplices begins with white people learning to respect Black space.

My understanding is that there can be spaces for only Black people to discuss, though white people can participate if necessary (presumably, if they are invited). Part of... (read more)

MichaelStJules: (I'm currently an intern for ACE, but speaking only for myself.) With the context from your edit that was omitted from the original post, I think it does make sense and is not absurd at all on its face, but the phrasing "simply by being white" was hyperbole (which does lend itself to misinterpretation, so better to avoid), and was explained by the claims that follow. I think the OP omitting this context was probably bad and misleading, although I don't think it was intended to mislead.

Nitpick: I really wish SJ-aligned people would clarify what they mean by "capitalism" in these contexts.

Concerns with ACE's Recent Behavior

I found this post to be quite refreshing compared to the previous one criticizing Effective Altruism Munich for uninviting Robin Hanson to speak. I’m not against “cancel culture” when it’s cancelling speakers for particularly offensive statements they’ve made in the past (e.g., Robin Hanson in my opinion, but let’s not discuss Robin Hanson much further since that’s not the topic of this post). Sometimes though, cancelling happens in response to fairly innocuous statements, and it looks like that’s what ACE has done with the CARE incident.

EA Debate Championship & Lecture Series

Sure, debate may involve a lot of techniques that are counterproductive to truth-seeking, and I wouldn't want people to write on the EA Forum like it's a debate, for example. However, I think there are many places where it would help to be able to convey more convincing arguments even if being more convincing doesn't improve truth-seeking—speaking with non-EAs about EA, for example.

I generally want us to use truth-seeking methods when engaging with outsiders as well. Of course, that isn't always possible, but I also really don't want us to have a reputation for using lots of rhetorical tricks to convince others (and generally think that doing so is pretty bad).

EA Debate Championship & Lecture Series

Are there recordings of the debates? I'd be interested in watching them.

Habryka: I would also be interested in this.
EA for Jews - Proposal and Request for Comment

If necessary, it might be good to frame the arguments from religious texts as connecting with traditional Jewish thought, not in a way that demands a belief (or lack of belief) in the literal accuracy of the Talmud—basically what (my understanding of) Reform Judaism does. It might be good to intersperse religious arguments with secular arguments as well.

BenSchifman: Absolutely -- this is my intention in both regards. First, in my ideal vision the website would have content that appeals to both religious and non-religious Jews. So in addition to highlighting or discussing traditional commentary on, say, tzedakah from the Tanakh and Talmud, I'd also like to highlight Jewish thought broadly related to social justice throughout history. Luckily there are thousands of years' worth of content to mine in both regards!
Contact with reality

Out there, though, you can meet real people, with their own rich and complex lives; you can make real friends, and be part of real relationships, communities, and institutions. You can wander cities with real history; you can hear stories about things that really happened, and tell them; you can stand under real skies, and feel the heat of a real sun. People out there are doing real science, and discovering real things. They’re barely beginning to understand the story they’re a part of, but they can understand. You can understand, too; you can be a part o

... (read more)
EA Birthday Posts: An Alternative to Fundraisers

I ended up making a post based on Kuhan's. It's somewhat shorter and has less of a focus on careers. I got 14 reactions (for reference, I have a bit over 600 friends on Facebook). I wonder if accompanying the post with a new profile picture would have been a good way to get more engagement, haha.

My birthday is today! For the past two years, I did birthday fundraisers, but this year, instead of gifts or donations, I’m asking for just five minutes of your attention.

In eighth grade, I discovered a social impact–oriented career site called 80000h

... (read more)
kuhanj: Ooh, I like the changing profile picture idea, can I add that to the post? (I'll give you credit of course)
Introducing Probably Good: A New Career Guidance Organization

I'm not a fan of the name "Probably Good" because:

  • if it's describing the advice, it seems like the advice might be pretty low-effort and not worth paying attention to
  • if it's describing the careers, it sounds like the careers recommended have a significant chance of having a negative impact, so again, not worth reading about