If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)

If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)

If you're new to effective altruism, consider checking out the Motivation Series (a collection of classic articles on EA). You can also learn more about how the Forum works on this page.


I'm a 3rd-year undergraduate double majoring in electrical engineering and economics at the University of California, Davis (about 2 hours from the San Francisco Bay Area).

I've been thinking about effective altruism concepts all my life, but I only discovered the community in December 2020. After reading many EA articles and double-checking with my economics professor, today I decided to switch my post-graduation career plans from a master's degree in electrical engineering to a PhD in economics so I can work on global priorities research.

[This comment is no longer endorsed by its author]

That's awesome, congratulations!

Hi everybody! A slippery slope from 80,000 Hours podcasts has led me to this lovely community. Probably like a lot of people here, I'd been EA-sympathetic for a long time before realising that the EA community was a thing.

I'm not in a very 'EA-adjacent' job (if that's the term!) at the moment, and am starting to think about areas in which I might enjoy working, where I would value the work being done, and where I'd feel I was really contributing myself.

Very excited to start my journey of engaging more directly with all of you and the discussions being had here :)

Welcome to the EA Forum!

Thank you Khorton!

Welcome Lowry! I'm Brian from EA Philippines. I love 80,000 Hours' content and podcast too. I was in a similar position to you last year, in that I was in a non-EA job and wanted to see how I could have a more EA-aligned and more enjoyable career. Thankfully I now do EA-aligned work full-time (mainly through EA Philippines), but it does take a while before that can happen for a lot of people. And I think if people broaden the scope of what they consider to be "EA-adjacent" jobs, it's much more likely they'll get one (because we have a lot of EAs and too few jobs at EA orgs).

You or others new to the EA community can feel free to message me about your cause interests, skills, and career interests, and I may have useful advice to give or resources/organizations to point you to. I've read up a lot on EA and its various concepts and causes, such as global health and development, animal welfare, and some longtermist causes, so I can give some advice/resources there. :)

Please tag your posts! I've seen several new forum posts with no tags, and I add any tags I consider relevant. But it would be better if everyone added tags when they publish new posts. Also, please add tags to any post that you think is missing them.

Hi everyone! I have been interested in EA, and adjacent fields, for a little over a year now. So I thought it was time to register here.

I work in journalism, although not always on EA-related topics. As a side-project I also run a little newsletter about, among other issues, x-risk.

So I hope being here can help advance my thinking, and maybe even support me in doing more good.

Welcome to the Forum! 

I hope your experience here is good; let me know if there's anything I can help with. (I'm the lead moderator and work for CEA, which runs the site.)

Welcome to the Forum, Felix! It's good to have another journalist interested in EA (and hopefully writing about it in an informed way). I think there are relatively few of you.

It's cool that you have a newsletter on x-risk. Maybe you could consider cross-posting an upcoming or previous piece of yours to this Forum? The interview you had with Tom Chivers might be interesting to people interested in AI safety here.

You can include a short summary of the post and why you think people might want to read it when cross-posting. Just a suggestion in case you'd find more subscribers or readers here. You could also link to your newsletter and include a short bio of yourself in your Forum bio so people can find it that way. :)

Thank you for the welcome, and the encouragement!

I was already thinking about re-posting some interviews here, but was a bit worried about too much self-promotion, so glad you suggested it :)

No problem! Posting a few (1-3) interviews/issues first should be fine.

Hey all!

I'm studying for a bachelor's in Philosophy & Economics at Humboldt-Universität zu Berlin. I first read Singer's essay "Famine, Affluence, and Morality" in school and was impressed by the shallow pond argument. That was the start of my interest in practical ethics, and the EA movement aligns nicely with most of my views.

I'm still quite unsure about my future (apart from wanting to do good) and am currently struggling with procrastination and a missing sense of direction. Consequently, I'm especially interested in meeting EAs who are dealing with the same issues. One idea for dealing with procrastination is a pen pal, so if you're interested, feel free to message me :)

I have been lurking on this forum for a week, and you all seem like really nice, level-headed people who enjoy a good debate, so I'm very happy to join!

Hi Kottsiek, welcome to the Forum! Have you connected with someone from EA Berlin, such as Manuel Allgaier? Here's their website: https://ea-berlin.org/. You can also reach out to NEAD, which connects people interested in EA in Germany: https://ealokal.de/. You will likely be able to connect with EAs with a similar background, or at least in the same region/country as you, through EA Berlin or NEAD.

Regarding struggling with procrastination, I found Complice's Goal-Crafting Intensive workshop useful. It's a 5-hour event where you listen to and work through content with others to help you set and prioritize goals for yourself and come up with strategies to achieve them, among other topics. It only costs a minimum of $25. The next session isn't until April, but you can already book a spot ahead of time, and they can give you content to work through beforehand.

You might also like to read this EA Forum post about finding an accountability buddy to meet or chat with every week, to help you overcome procrastination: https://forum.effectivealtruism.org/posts/2RvpoWWQDiFpptpam/accountability-buddies-a-proposed-system-1. In the Complice event, they invite attendees to find an accountability buddy at the end.

You can also join the EA Life Coaching Exchange Facebook group and try to find an accountability buddy there. A couple of people in EA Philippines have found an accountability buddy/group through it. Hope this helps!

Thank you for the links. I signed up for the workshop. 

No problem!

Hello everyone!

I'm a 2nd-year Sociology & Social Anthropology student at the University of Edinburgh. I've joined this forum because some of my colleagues and I are interested in learning about what various participants in the EA 'movement' think about 'effectiveness' and the organisation as a whole.

We're doing ethnographic research, which means taking part in some activities alongside you, while talking to you directly in events, on forums, and in interviews. If you'd be interested in talking to me about your experiences and thoughts about effective altruism, please feel free to send me a private message and we can find a time to chat!

Hi Kate, welcome to the forum! Great to see someone with a sociology background in EA - there are relatively few of you in the movement. I'm glad that you're doing ethnographic research on people in the movement. I was a UI/UX designer before, so I've done some user research and qualitative interviews myself.

Another EA, Vaidehi Agarwalla, did something similar: she interviewed people in EA, particularly those who were looking to make a career transition or had just made one. Her undergraduate degree was also in sociology. You may be interested to read her sequence "Towards A Sociological Model of EA Movement Building", which I think is still unfinished but already has 2 articles in it.

I was wondering if you were planning to focus on a specific topic or demographic within EA for your ethnographic research? People in EA and their interests can be quite varied, so it might be worth scoping the research down rather than just asking to interview anyone in the movement. Just my two cents!

Also, if you haven't seen it yet, 80,000 Hours has a list here of research topics that people with a background in sociology can work on. You could consider researching one of these topics as a side project or uni project in the future.

Also, if you're interested in biosecurity, David Manheim had some biosecurity project ideas for people with a sociology/anthropology background. :)

Hello, if you experience #low-impact-angst, please join this Slack. We currently have 7 tech/programmer-type humans who met at EAGxVirtual last year. Come hang out! :)

Trying to figure out a career path... Ahhhhh. I've put together a career plan worksheet, and it really needs some feedback. Please comment if giving feedback on a career plan sounds fun. Thanks!

I definitely find this feeling relatable from my own career planning!

Inspired in part by your similar comment on another post, I've now made an open thread on the Forum for people to request and/or provide such feedback. And:

To get things going, I commit to reading and providing some feedback on at least 2 pages' worth of the documents from each of the first 5 people who comment to request feedback. (I might do more; I'll see how long this takes me.)

I'm pretty certain that some people on this forum get 2 karma on their comments immediately on posting them. Is this a thing?

I realise this is a petty and unimportant thing to think about, but I am slightly curious as to what's going on here.

I'm pretty sure the Forum uses the same karma vote-power as LessWrong.

Your observation is correct. How much karma your comments start off with depends on the amount of karma you have - unfortunately I don't know the minimum required to start off with 2 karma. The more karma you have, the weightier your strong upvotes become as well (mine are worth 7 karma; before I hit 2500 karma it was 6).

Here is the relevant section of the code: 

export const userSmallVotePower = (karma: number, multiplier: number) => {
  // Weak (normal) votes: worth 2 once you reach 1000 karma, otherwise 1
  if (karma >= 1000) { return 2 * multiplier }
  return 1 * multiplier
}

export const userBigVotePower = (karma: number, multiplier: number) => {
  // Strong votes: weight scales with total karma
  if (karma >= 500000) { return 16 * multiplier } // Thousand year old vampire
  if (karma >= 250000) { return 15 * multiplier }
  if (karma >= 175000) { return 14 * multiplier }
  if (karma >= 100000) { return 13 * multiplier }
  if (karma >= 75000) { return 12 * multiplier }
  if (karma >= 50000) { return 11 * multiplier }
  if (karma >= 25000) { return 10 * multiplier }
  if (karma >= 10000) { return 9 * multiplier }
  if (karma >= 5000) { return 8 * multiplier }
  if (karma >= 2500) { return 7 * multiplier }
  if (karma >= 1000) { return 6 * multiplier }
  if (karma >= 500) { return 5 * multiplier }
  if (karma >= 250) { return 4 * multiplier }
  if (karma >= 100) { return 3 * multiplier }
  if (karma >= 10) { return 2 * multiplier }
  return 1 * multiplier
}

In other words, you get a small-vote power of 2 at 1000 karma, and you can look at the thresholds above to see the multipliers for strong votes.
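To make that concrete, here's a quick sketch of what these functions would return for a few karma levels (this just calls the exported functions above; the example karma values are mine, picked for illustration):

// A user with 1,500 karma: weak votes are worth 2, strong votes worth 6
userSmallVotePower(1500, 1)  // => 2
userBigVotePower(1500, 1)    // => 6
// At 2,500 karma, a strong upvote is worth 7 (and a strong downvote -7)
userBigVotePower(2500, 1)    // => 7
userBigVotePower(2500, -1)   // => -7
// A brand-new user's votes are all worth 1
userSmallVotePower(0, 1)     // => 1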

What's multiplier?

And why is it equal to 1?

It's sometimes 1 (for upvotes) and sometimes -1 (for downvotes). Implementing it as a free variable was a bit easier than implementing it as a boolean, so we did that.

Ah, well you learn something new every day, thanks.

The size of your weak upvotes is also affected by your total karma, just more slowly. Every post starts with one weak upvote from its author.

Would a Discord server work better? It's a community platform that is easy to download and maintain. There are individual chats, group forums, and voice channels for all means of communication. With enough support, this can be set up quickly. Please upvote if this is something that sounds useful, and depending on support, a link will be posted on this post shortly. Keep in mind this Discord server could be used for all things EA, besides connecting individuals and providing an easy place to share documents and stories. Please provide feedback!

There are multiple Discord servers with some degree of EA activity. The biggest I'm aware of is "EA Corner" (invite link), which is quite active. Thanks for the reminder to add that to our "useful links" post!

The EA Forum serves a very different purpose from what Discord can accomplish; we want this to be a place where useful posts and discussions are available for decades to come -- a record of EA intellectual progress, as well as a community space for long-form discussion. Discord is great for live chat, but very poor for archiving material or crafting a "body of work".

(These open threads are the sort of thing one could replicate pretty well on Discord, but part of why they exist is for people to say hello as they enter the Forum community, so hosting them on a totally different platform would defeat the purpose.)

Can you embed a YouTube video in the EA Forum? If so, how?

Try pasting in a YouTube link. Note that this doesn't work if you've enabled the Markdown editor in your settings.

Ah... I prefer to use the Markdown editor, but I could switch to the rich text editor for this post.
