If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)


If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)


Open threads are also a place to share good news, big or small. See this post for ideas.


Just trying to get myself comfortable with posting on the forum, since I'm new to it.

I'm from Brazil (Rio Grande do Sul), I consider myself deeply concerned about ethics, and I believe there are analytical methods that can get us closer not only to ethical truths (be they objective or not) but also to the methods whereby we may abide by those truths. 

I have a medical degree and I'm currently taking an online MicroMasters in Statistics and Data Science at MITx. I plan to take part in public health research, though I'm pretty much open to change gears if presented with sufficient evidence to do so.

Thank you all for supporting the EA community!

Hi there! I'm new to the forum and thought I'd post just to break the ice and get comfortable. It's great to meet all of you! Looking forward to interesting conversations!

Hi! I joined the forum recently, and wanted to introduce myself. 

I am a Bachelor's student in Computer Science and Economics in the Eastern US. Throughout the years, I attempted to introduce effective altruism to my friends and classmates - when appropriate. The concept seemed to resonate especially well with students in engineering and finance, but ultimately the efforts rarely resulted in concrete changes. 

That problem got irreversibly stuck in my mind: Why do these people, who are both good and can intellectually see the net benefits of EA, find it difficult to engage with? Was it because we are students and stereotypically dislike spending any amount of money?

From what those people have done and said, the problem might lie in the perceived inaccessibility of EA (for example, the added research step of ensuring donations are used effectively discouraged many from taking action) and/or the perceived emotional distance of the results (for example, using evidence and logic to discard some altruistic causes in favor of others may have stripped away the emotional component of altruism, which seems to be its more traditional aspect).

I don't know why EA is not more prevalent or 'easy' to get into. I think it should be. But maybe it was my approach that was faulty; I have a lot to learn. So, I am here to learn more and do better, effectively. 

Hi there!

A quick thought about your quandary: I have been very puzzled by this throughout my time as an EA as well, and my best model for people who 1) intellectually understand EA but 2) don't act on it is that they are mostly signalling, which is super cheap to do. Taking real action (e.g. donating your hard-earned money that you could have spent on yourself) is much costlier.

Experience has also borne the following out for me: for people who don't intellectually (I'd go so far as to say intuitively) get EA, I think there is (almost) no hope of getting them on board. It seems deeply dispositional to me. This suggests a strategy of uncovering existing EAs who have never heard of the movement, rather than converting those who have heard of it but show resistance.

Just my two cents!

The barrier to action is definitely a big thing. When I was a student, I avoided donating money. I told myself I'd start donating when I got a job and started making good money. Then, when I did get a job, I procrastinated for another two years. 

The thing that convinced me to finally do it was joining a different online group where I tried to do a good deed every day. When I got that down, I got into the habit of doing good, which made me rethink EA. After some thought, I committed to try giving 10% just for a year. A month later, I made the Giving What We Can pledge. After I'd made the commitment I realised it wasn't that hard, and I felt a lot better about myself afterwards.

If I could go back in time, I think what I'd ask my past self to do is not to commit to donating 10%, but to commit to donating just 1% for a year. 1% is nothing, and anyone can do that - but once you start intuitively understanding that A) you feel better donating this money, and B) you really don't miss it, it's a lot easier to scale up. Going from 0 to 1 is a bigger step than from 1 to 10.

I still don't have a full solution, but I think that might be a place to begin.

Medium-time lurker, first-time commenter (I think)! I'll be posting a piece tomorrow or Friday about whether effective altruists should sign up for Oxford's COVID challenge study. Hoping to start a lively discussion!

Another great initiative in trying to make the Forum more friendly. Congrats!

I'm from São Paulo, Brazil. I joined the EA community in Dec 2016, and I'm trying my best to help the community grow well.

Hiiii!  I was drawn on to here by the creative writing contest! Hoping to participate... except the thing I'm trying to write will. not. behave. (It's driving me bananas.) That said, I love writing fiction. SOMETIMES it flows. And sometimes it lets me grapple with the way Reality is tangled up with so much ambiguity.

I end up bumping into EA and EA-interested people on Discords! ...and I heard about the creative writing contest on a really fun Discord server I'm on.
