I'm interested in effective altruism and longtermism broadly. The topics I'm interested in change over time; they include existential risks, climate change, wild animal welfare, alternative proteins, and longtermist global development.
A comment I've written about my EA origin story
Pronouns: she/her
Legal notice: I hereby release under the Creative Commons Attribution 4.0 International license all contributions to the EA Forum (text, images, etc.) to which I hold copyright and related rights, including contributions published before 1 December 2022.
"It is important to draw wisdom from many different places. If we take it from only one place, it becomes rigid and stale. Understanding others, the other elements, and the other nations will help you become whole." —Uncle Iroh
Thank you for posting this! I've been frustrated with the EA movement's cautiousness around media outreach for a while. I think that the overwhelmingly negative press coverage in recent weeks can be attributed in part to us not doing enough media outreach prior to the FTX collapse. And it was pointed out back in July that the top Google Search result for "longtermism" was a Torres hit piece.
I understand and agree with the view that media outreach should be done by specialists - ideally, people who deeply understand EA and know how to talk to the media. But Will MacAskill and Toby Ord aren't the only people with those qualifications! There's no reason they need to be the public face of all of EA - they represent one faction out of at least three. EA is a general concept that's compatible with a range of moral and empirical worldviews - we should be showcasing that epistemic diversity, and one way to do that is by empowering an ideologically diverse group of public figures and media specialists to speak on the movement's behalf. It would be harder for people to criticize EA as a concept if they knew how broad it was.
Perhaps more EA orgs - like GiveWell, ACE, and FHI - should have their own publicity arms that operate independently of CEA and promote their views to the public, instead of expecting CEA or a handful of public figures like MacAskill to do the heavy lifting.
I've gotten more involved in EA since last summer. Some EA-related things I've done over the last year:
Although I first heard of EA toward the end of high school (slightly over 4 years ago) and liked it, I had some negative interactions with the EA community early on that pushed me away from it. I spent the next 3 years exploring various social issues outside the EA community, but I had internalized EA's core principles, so I was constantly thinking about how much good I could be doing and which causes were the most important. I eventually became overwhelmed because "doing good" had become a big part of my identity but I cared about too many different issues. A friend recommended that I check out EA again, and despite some trepidation owing to my past experiences, I did. This time, my experience was overwhelmingly positive. The EAs I was interacting with were kind and open-minded, and they encouraged me to get involved, whereas before, I had encountered people who seemed more abrasive.
Now I'm worried about getting burned out. I check the EA Forum way too often for my own good, and I've been thinking obsessively about cause prioritization and longtermism. I talk about my current uncertainties in this post.
Disclosure: Possible conflict of interest here. I donated close to $900 to SPI in November 2024 based on information they shared with me privately about their work and confirmation of their activities from a mutual contact. This was a large portion of my giving last year, which may bias me towards wanting to believe they will have impact.
I appreciate that you folks did a review of SPI, but why publish this linkpost without a description?
Also, is there really no information you can share about SPI's work so far? This doesn't match my impression of the work they were up to. I'm happy to follow up with them about their progress and share what I find out here, provided that they don't object.
This is a pretty grounded take. The only thing that bugs me is this passage, in the introduction:
The national media spotlight focused on sweatshops in 1996 after Charles Kernaghan, of the National Labor Committee, accused Kathy Lee Gifford of exploiting children in Honduran sweatshops. He flew a 15-year-old worker, Wendy Diaz, to the United States to meet Kathy Lee. Kathy Lee exploded into tears and apologized on the air, promising to pay higher wages.
Should Kathy Lee have cried? Her Honduran workers earned 31 cents per hour. At 10 hours per day, which is not uncommon in a sweatshop, a worker would earn $3.10. Yet nearly a quarter of Hondurans earn less than $1 per day and nearly half earn less than $2 per day.
Wendy Diaz's message should have been, "Don't cry for me, Kathy Lee. Cry for the Hondurans not fortunate enough to work for you." Instead the U.S. media compared $3.10 per day to U.S. alternatives, not Honduran alternatives. But U.S. alternatives are irrelevant. No one is offering these workers green cards.
One of the lessons I draw from this issue is that people, particularly those living in extreme poverty, may have preferences different from our own and different from what we'd expect given our assumptions about them. As the piece points out, factory workers in developing countries generally "want most of their compensation in wages and little in health or safety improvements.... Employers will meet health and safety mandates by either laying off workers or by improving health and safety while lowering wages against workers' wishes." It's not our place to tell third-world workers that they should want improved working conditions more than they want higher wages.
By the same token, I feel that it's arrogant for the author to tell poor people like Wendy Diaz what they should think or say, especially when he is using them to make a point. The piece doesn't tell us what Wendy told Kathy Lee Gifford, her putative "employer," but it's likely that she was testifying about her lived experiences working in the factory. Expectations color our perception of our life circumstances, so her outlook could have been subsequently shaped by her being flown north to the United States and getting a glimpse of what was possible outside her home country of Honduras. I can only speculate, so I'm being cautious not to project my beliefs onto Wendy as if she "should think X" or "did in fact feel Y." My point is that activists have a duty to respect the autonomy of their subjects: nothing about us without us.
Thanks for sharing! This is probably an abuse of notation, but I clicked the check mark reaction as a note-to-self that I completed the survey, even though it typically means "agree".
Should we send this to our non-EA friends too?
Does anyone know what's going on with Apart Research's funding situation? I participated in one of their AI safety hackathons and it propelled me into the world of AIS research, so I'm sad to hear that they might be forced to shut down or downsize. They're trying to raise nearly a million dollars in the next month.
I can speak for myself: I want AGI, if it is developed, to reflect the best values we currently have (i.e. liberal values[1]), and I believe it's likely that an AGI system developed by an organization based in the free world (the US, EU, Taiwan, etc.) would embody better values than one developed by an organization based in the People's Republic of China. There is a widely held belief in science and technology studies that all technologies have embedded values; the most obvious way values could be embedded in an AI system is through its objective function. That said, it's unclear to me how much these values would actually differ between an AGI developed in a free country and one developed in an unfree one, because many of the AI systems the US government uses could also serve oppressive purposes (and arguably already do).
Holden Karnofsky calls this the "competition frame" - in which what matters most is who develops AGI. He contrasts this with the "caution frame", which focuses more on the risk of AGI being developed in a rushed, incautious way than on who ends up controlling it. Both frames seem valuable to me, but Holden warns that most people will gravitate toward the competition frame by default and neglect the caution frame.
Hope this helps!
Fwiw I do believe that liberal values can be improved on, especially in that they seldom include animals. But the foundation seems correct to me: centering every individual's right to life, liberty, and the pursuit of happiness.