michaelchen

Comments

The importance of optimizing the first few weeks of uni for EA groups

I'm surprised that retreats are low-effort to plan! What sorts of sessions do you run? What draws people in to attend?

Introducing EA to policymakers/researchers at the ECB

I think you'd get a lot more answers if you ask your question in the EA Groups Slack: https://efctv.org/groupslack

Giving What We Can's guide to talking about effective altruism has some good tips.

Inviting people to come to a nearby EA meetup or to apply to a locally hosted EA Fellowship sounds good.

One thing you could try to set up is a presentation about EA to whoever is in charge of philanthropy at the organization. I know someone who recently gave a presentation like that at his internship company, though he wasn't successful in getting them to give to effective charities. It might still be worth a shot! The EA Hub has resources on introductory presentations, and you could talk to Jack Lewars from One for the World, since he has experience talking to corporations about effective giving.

Announcing riesgoscatastroficosglobales.com

Looks good! Some minor suggestions:

  • Remove "Made with Squarespace" in the footer
  • Add a favicon to the website

Lessons from Running Stanford EA and SERI

Hey Markus, I'm only getting started with organizing an EA group, but here are my thoughts:

  • I think 6 hours per week is enough time to sustain a reasonable amount of growth for a group, though I don't have enough personal experience to know for sure. If you think funding would enable you to spend more time on community building, you can apply to the EA Infrastructure Fund. And you can always get Group Support Funding to cover expenses for things you think would help, such as snacks, flyers, books, etc.
  • I think the Intro EA Program is a surprisingly effective way for newcomers to learn about EA at a reasonably deep level, so I would prioritize running a fellowship. You can advertise the fellowship broadly across mailing lists, Facebook groups, and group chats; the EA Hub's "Advertising Your EA Programs" page has advertising templates you can adapt. You can see Yale EA's fellowship page for some of the benefits for participants. If students are on campus and you have the time to facilitate discussions, running the fellowship on campus would be better for engagement than referring everyone to EA Virtual Programs. Facilitating takes about 1 hour per week per cohort if you have already done the readings in the Intro EA Program, and an extra 1.5 hours per week if you have not.
  • Marketing widely is probably quite helpful. Stanford EA and Brown EA have docs with marketing advice, which I can also send you. I think the Intro EA Program is better for outreach than regular weekly discussions; I'm currently thinking of using weekly discussions for people who have already completed the Intro EA Program but aren't planning to commit to another fellowship like the In-Depth EA Program.
  • I believe GDPR only applies to organizations processing personal data, not to private individuals like you handling it in a purely personal capacity.
  • I don't think you should be hesitant about inviting people to do things, highlighting the benefits so they feel motivated, and so on, but you can't push people into doing things.
  • Let me know if you'd like to set up a call and I can message you on the EA Forum.

You should write about your job

While I think a write-up of my experience as a web development intern wouldn't add much value compared to the existing web developer post, I'd be interested in writing a guide to getting a (top) software engineering internship or new-grad position as a university student. (Not saying my past internships are top-tier!) I'm planning to give an overview of, or at least link to resources about, how to write a great resume, how to prepare behavioral interview answers, how to prepare for technical interviews with LeetCode-style or system design questions, and so on. A lot has already been written about this on the general internet, so I would link heavily to those resources rather than reinvent the wheel, but I think it would be useful to have some practical job-seeking advice on the EA Forum to support each other's career success. Does that sound like it would be on-topic for the EA Forum?

Phil Torres' article: "The Dangerous Ideas of 'Longtermism' and 'Existential Risk'"

Phil Torres's tendency to misrepresent things aside, I think we need to take his article as an example of the severe criticism that longtermism, as currently framed, is liable to attract, and reflect on how we can present it differently. It's not hard to read this sentence from the first page of (EDIT: the original version of) "The Case for Strong Longtermism":

The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.

and conclude, as Phil Torres does, that longtermism means we can justify causing present-day atrocities for a slight, say 0.1%, increase in the subjective probability of a valuable long-term future. Thinking rationally, atrocities do not improve the long-term future, and longtermists care a lot about stability. But with the framing given by "The Case for Strong Longtermism", there is a small risk, higher than it needs to be, that future longtermists could be persuaded that atrocities are justified, especially given how subjective these probabilities are. How can we reframe or redefine longtermism so that, firstly, we reduce the risk of longtermism being used to justify atrocities, and secondly (and I think more pressingly), we reduce the risk that longtermism is generally seen as something that justifies atrocities?

It seems like this framing of longtermism is a far greater reputational risk to EA than, say, 80,000 Hours' over-emphasis on earning to give, which the organization apparently seriously regrets. I think "The Case for Strong Longtermism" should be revised so that it does not say things like "we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years" without detailing significant caveats. It's just a working paper, so it shouldn't be too hard for Greaves and MacAskill to revise. (EDIT: this has already happened, as Aleks_K has pointed out below.) If many more articles like Phil Torres's are written in other media in the near future, I would be very hesitant about using the term "longtermism". Phil Torres is someone who is sympathetic to effective altruism and to existential risk reduction, someone who believes "you ought to care equally about people no matter when they exist"; now imagine if the article were written by someone who isn't as sympathetic to EA.

(This really shouldn't affect my argument, but I do generally agree with longtermism.)

Narration: The case against “EA cause areas”

I listen to a good number of podcasts using the Pocket Casts app (or at least I did for a couple of years, up until a few weeks ago when I realized that I find YouTube explanations of tech topics a lot more informative). But when I'm browsing the EA Forum, I'm not really interested in listening to podcasts, especially podcast versions of posts I've already read and could easily re-read. I think this is a cool project, but after the first couple of audio narration posts, which were good for generating awareness of the podcast, I don't think it's necessary to continue these top-level posts. It would still be worthwhile to experiment with not posting some episodes on the front page and seeing how that affects the number of listens.

There will now be EA Virtual Programs every month!

It seems inconvenient if applicants potentially have to fill out the Virtual Programs application form as well and receive a second acceptance/rejection decision. Could we have just one application form for them to fill out and a single acceptance/rejection notification? I was thinking that we could hopefully have something like the following process:

  • Have applicants apply through the EA Virtual Programs form, or have a form specific to our chapter which feeds data into the EA Virtual Programs application database. (I don't know enough about Airtable to know whether this is possible or unrealistic; see the rough sketch after this list.)
  • Include a multiple-choice application question about whether they prefer in-person or virtual. I think we can assume by default that Georgia Tech applicants prefer to be with other Georgia Tech students—or at least that should help with building the community at Effective Altruism at Georgia Tech.
  • Tell EA Virtual Programs how many in-person cohorts we could have and the availability of the in-person facilitators. Perhaps the facilitators could fill out the regular EA Virtual Programs facilitator form but with some info about whether or not they can facilitate on-campus.
  • EA Virtual Programs assigns people to in-person or virtual cohorts.
  • Something extra that might be nice: If they were rejected due to limited capacity and their application answers were not bad, automatically offer them an option to be considered for the next round (for EA Georgia Tech, I'm thinking we'd have rounds in September–October and February–March).
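
For the first bullet, here's a rough sketch of the kind of integration I'm imagining, assuming EA Virtual Programs were willing to share API access to their application base. Everything below (the API key, base ID, table name, and field names) is a made-up placeholder, since I don't know how their Airtable is actually set up:

```python
# Rough sketch only: forward one chapter-specific application into a shared
# EA Virtual Programs Airtable base via Airtable's REST API.
# The API key, base ID, table name, and field names are hypothetical placeholders.
import requests

AIRTABLE_API_KEY = "keyXXXXXXXXXXXXXX"   # would have to be shared with organizers
BASE_ID = "appXXXXXXXXXXXXXX"            # hypothetical base ID
TABLE_NAME = "Applications"              # hypothetical table name

def submit_application(name: str, email: str, prefers_in_person: bool) -> None:
    """Create one application record in the shared Airtable table."""
    response = requests.post(
        f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}",
        headers={
            "Authorization": f"Bearer {AIRTABLE_API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "fields": {
                "Name": name,
                "Email": email,
                # Multiple-choice question from the second bullet above
                "Preferred format": "In person (Georgia Tech)" if prefers_in_person else "Virtual",
            }
        },
        timeout=10,
    )
    response.raise_for_status()

# Example call from whatever backend handles our chapter's form submissions:
submit_application("Example Applicant", "applicant@gatech.edu", prefers_in_person=True)
```

Whether this is a good idea obviously depends on whether EA Virtual Programs wants external chapters writing to their base; a shared form link or a simple data export might be easier for everyone.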

If it is true that people who are rejected tend not to reapply or engage with the EA group because they might feel discouraged, then it seems important to try to minimize how many people get a rejection from the Intro EA Program.

When we think about fellowships, we generally think about programs that are highly selective, are intensive, have funding, and have various supports and opportunities (example 1, example 2).

Interesting, I didn't realize before that "fellowship" had those connotations to such an extent! I mainly associated "fellowship" with its meaning in Christianity, haha, where it isn't selective or prestigious, just religious.

Why I prioritize moral circle expansion over artificial intelligence alignment

In Human Compatible (2019), Stuart Russell advocates for AGI that follows preference utilitarianism, maximally satisfying the values of humans. As for animal interests, he seems to think they are sufficiently represented, since he writes that they will be valued by the AI insofar as humans care about them. Reading this from Stuart Russell shifted me toward thinking that moral circle expansion probably does matter for the long-term future. It seems quite plausible (likely?) that AGI will follow this kind of value function, which does not directly care about animals, rather than broadly anti-speciesist values, since AI researchers are not generally anti-speciesist. In this case, moral circle expansion across the general population would be essential.

(Another factor is that Russell's reward modeling depends on occasionally receiving feedback from humans to learn their preferences, which is much more difficult to do with animals. Thus, under an approach similar to reward modeling, AGI developers probably won't bother to include animal preferences directly, given the extra work of figuring out how to get the AI to discern them. And how many AI researchers want to risk, say, mosquito interests overwhelming human interests?)
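
To make my worry concrete, here is a toy sketch of my own (not anything from Human Compatible or from actual reward modeling): if the objective aggregates only human preferences, animal welfare affects it only through whatever small weight humans happen to place on it, so severe animal suffering can be outweighed by mild human convenience.

```python
# Toy sketch (my own construction, not from Human Compatible): an objective
# that aggregates only human preferences, so animal welfare matters only
# through the weight each human happens to place on it.

def human_utility(own_pleasure: float, animal_welfare: float, caring_weight: float) -> float:
    """One human's utility: their own pleasure plus a (typically tiny)
    term for how much they care about animal welfare."""
    return own_pleasure + caring_weight * animal_welfare

def agi_objective(humans: list, animal_welfare: float) -> float:
    """Preference-utilitarian objective summed over humans only;
    animals have no direct weight of their own."""
    return sum(
        human_utility(h["pleasure"], animal_welfare, h["caring_weight"])
        for h in humans
    )

# Made-up numbers purely for illustration.
population = [{"pleasure": 1.0, "caring_weight": 0.0001} for _ in range(1000)]
status_quo = agi_objective(population, animal_welfare=-500.0)  # severe animal suffering

population_kinder = [{"pleasure": 0.9, "caring_weight": 0.0001} for _ in range(1000)]
no_factory_farms = agi_objective(population_kinder, animal_welfare=0.0)

print(status_quo, no_factory_farms)  # roughly 950 vs 900
# The scenario with severe animal suffering scores higher, because the only
# channel for animal interests is the humans' small caring_weight.
```

Broadly anti-speciesist values would amount to giving animal welfare its own term in the objective rather than routing it entirely through how much humans care.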

In comparison, if an AGI were designed to care only about the interests of people in, say, Western countries, that would instantly be widely decried as racist (at least in today's Western societies) and it would likely not be developed. So while moral circle expansion encompasses caring about people in other countries, I'm less concerned that large groups of humans will not have their interests represented in the AGI's values than I am about nonhuman animals.

It may be more cost-effective to take a targeted approach of increasing anti-speciesism among AI researchers and doing anti-speciesist AI alignment philosophy/research (e.g., working out in more detail how an AI following preference utilitarianism could also intrinsically care about animal preferences, and how to account for the preferences of digital sentience given that they can easily replicate and dominate preference calculations), but anti-speciesism among the general population still seems to be an important component of reducing the risk of a bad far future.
