Update, 12/7/21: As an experiment, we're trying out a longer-running Open Thread that isn't refreshed each month. We've set this thread to display new comments first by default, rather than high-karma comments.

If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)

If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)

Open threads are also a place to share good news, big or small. See this post for ideas.


Last year there were 2062 Frontpage posts and 82 Personal Blogposts. By default, Personal Blogposts are hidden from view — you have to search for them in All Posts or change your settings to view them.

By default, the home page only displays Frontpage Posts, which are selected by moderators as especially interesting or useful to people interested in doing good effectively. Personal posts have looser standards of relevance and may include topics that could lead to more emotive or heated discussion (e.g. politics), which are generally excluded fr

... (read more)

Just wanted to flag that AI scientist Timnit Gebru has written a tweet thread criticizing the AI safety field and the longtermist paradigm, quoting the Phil Torres Aeon essay. I would appreciate it if someone could put out a kind, thoughtful response to her thread. Since Gebru is a prominent, respected person in the mainstream AI ethics research community, inconsiderate responses to her thread (especially personal attacks) by EA community members run the risk of making the movement look bad.

evelynciara (4 karma, 1d): The thread arose from this related conversation [https://twitter.com/AmandaAskell/status/1484678621268615168] about sentient AIs being compared to people with disabilities (where everyone agreed that such analogies are harmful).
Julia_Wise (6 karma, 8h): Thanks for noting! Habiba responded: https://twitter.com/FreshMangoLassi/status/1485769468634710020
evelynciara (2 karma, 7h): I really like her response :)

Hello all. I'm Dave, I'm in my late 20s, and I've been in an existential crisis since I came across EA and related topics. I don't know what to do to help, since I don't have any degree and I don't live in a rich country, and also because I don't think there's much we can do in the long term. Namely, if we keep inventing these magic-like technologies, they will grant us power that no human being is wise enough to hold. I don't have anyone to talk to, and even if I did, I wouldn't want to destroy their sanity, as I'm already on my way to destroying mine. Any advice or perspectives would be appreciated. Thank you

You're not alone in finding these topics mind-boggling and distressing!

If you'd like to talk to people and there's not an EA group near you, you could join the EA Anywhere group: https://eahub.org/group/effective-altruism-anywhere-2/

There's also the EA Peer Support group: https://www.facebook.com/groups/ea.peer.support


I'm new here. My name's Carlos and I'm an anthropologist and social scientist looking for new career perspectives after my PhD. I would love to join a company or NGO to have a positive impact on the world. I'm interested in animal rights, fighting poverty and universal basic income. It's a pleasure to be here and to learn with you. Thanks for reading me


I am writing to say that I might be doing "moderately-high temporal resolution scrapes of some subset of EA Forum content".

This comment/notification is mainly for forum technical admin, and anyone interested in these scrapes or the potential products of such a project.

Precedents for this scraping include this post, this question, the existence of the API and its discussion here, and general open source/discussion principles, or something.

Feel free to discuss!

For more information, I've very quickly written rambling, verbose thoughts in a reply to this c... (read more)

Charles He (1 karma, 23d): Flagging some more technical points about the scraping above (verbose, quickly written):
* This scraping might be in the form of API calls that occur every few minutes. The burden of these calls seems small (?) relative to the mundane, everyday use of the API, e.g. see GreaterWrong [https://www.greaterwrong.com/] or Issa Rice's site [https://lw2.issarice.com/].
* Just to be super clear, I think the computing costs for the backend activity of these calls are probably <$1 a month.
* It seems there aren't rules/norms for rate limits, and there is some evidence that the EA Forum/LessWrong may not handle heavy use of API calls robustly:
    * Calls that seem sort of large are allowed. To me, these calls seem large compared to, say, the response limits and size limits of calls in the Gmail API and other commercial APIs I've used.
    * Pagination isn't supported in the API, and for many calls there aren't even date filters ("before:"/"after:") for me to approximate pagination.
* The API exposes certain information that isn't available in the front-end website. However, I am reluctant to elaborate because (1) this same information is available another way, so it's not quite a leak; (2) I'm a noob, but this was easy to find, which I think is a sign it's sanguine and maybe already used; (3) I don't want to just add a low-value ticket to someone's Kanban board; (4) I find this information interesting!
Other comments on the purpose (also verbose, quickly written):
* This "higher resolution" scraping might help answer interesting questions. I don't want to write details, mainly because I'm in the fun, initial 10%/ideation stage of a side project. In this stage, usually I see something shiny, like a batch of kittens in the neighborhood that need fostering, and the project ends.
* Not really related to high-frequency temporal scraping, but related to scraping in general: this is useful to get over certain li
NunoSempere (7 karma, 15d): Hey, I have a series of JS snippets that I've put some love into that might be of help; do reach out via PM.
Charles He (4 karma, 9d): Hi Nuño, this is generous of you. So I managed to stitch together a quick script in Python. This consists of GraphQL queries created per the post here [https://www.lesswrong.com/posts/LJiGhpq8w4Badr5KJ/graphql-tutorial-for-lesswrong-and-effective-altruism-forum] and Python requests/urllib3. If you have something interesting written up in JS, that would be cool to share! I guess you have much deeper knowledge of the API than I do. It was a bit of a hassle getting it packaged and running on AWS, with Lambda calls every few minutes. But I got it working! Now, witness the firepower of this fully armed and operational battlestation!
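For anyone curious what one cycle of such a scrape looks like, here is a minimal Python sketch in the spirit of the setup described above. The endpoint is the Forum's public GraphQL URL; the view name ("recentComments") and the result field names are assumptions based on the linked GraphQL tutorial, not a verified description of the current API.

```python
# Minimal sketch of one scrape cycle against the EA Forum GraphQL API.
# The view name and result fields below are assumptions taken from the
# linked GraphQL tutorial; check them against the live API before relying
# on this.

GRAPHQL_URL = "https://forum.effectivealtruism.org/graphql"

def build_query(limit=50):
    """Return a POST body asking for the most recent comments."""
    return {
        "query": """
        {
          comments(input: {terms: {view: "recentComments", limit: %d}}) {
            results { _id postId postedAt baseScore }
          }
        }
        """ % limit
    }

def extract_ids(response_json):
    """Pull comment IDs out of a GraphQL response body."""
    return [r["_id"] for r in response_json["data"]["comments"]["results"]]

# Sending the request would look like this (requires the `requests` package):
#   import requests
#   resp = requests.post(GRAPHQL_URL, json=build_query())
#   new_ids = extract_ids(resp.json())

# Demonstrated here on a canned response instead of a live call:
sample = {"data": {"comments": {"results": [
    {"_id": "abc123", "postId": "p1", "postedAt": "2022-01-25", "baseScore": 4},
]}}}
print(extract_ids(sample))
```

Run something like this every few minutes (e.g. from a Lambda, as described above), diff the returned IDs against the previous cycle's, and you have a crude high-temporal-resolution scrape.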

Pulse charity:

Effective altruism seems to focus like a laser on the most valuable problems for human suffering, but what if we extend the metaphor further, and to increase the impact make it a pulse laser? (Part of my inspiration was debt jubilees) I think this could have a few effects:

  • Many issues can be solved with large piles of cash that can't be solved with smaller ones, such as building a well vs importing water
  • On the donor side, it could be a Schelling point. Hey, those EA folks only come around every few years, now I can blow off other donors the
... (read more)
TianyiQ (1 karma, 4d): Currently, EA resources are not gained gradually year by year; instead, they're gained in big leaps (think of Open Phil and FTX). Therefore it might not make sense to accumulate resources for several years and give them out all at once. In fact, there is a call [https://forum.effectivealtruism.org/posts/ckcoSe3CS2n3BW3aT/] for megaprojects in EA, which echoes your points 1 and 3 (though these megaprojects are not expected to be funded by accumulating resources over the years, but by directly deploying existing resources). I'm not sure I understand your second point, though.

Hi, I'm new here. I am writing from Calgary, Canada. I'm a Ph.D. student in the area of Communication and Media Studies, interested in AI and media.

Charles He (1 karma, 1mo): That seems like really important and interesting work. Can you write a bit more here about anything that would help you in your "journey" in EA or elsewhere? Do you have any questions for anyone?

Anyone know how to embed links into text in the "User Profile" section?

So make it look like this:

Instead of this:

Just can't seem to do it!

Habryka (4 karma, 1mo): I think we maybe support Markdown in that textbox, so try using Markdown syntax.
JackM (2 karma, 1mo): Thanks. I ticked "Activate Markdown Editor" and tried the hyperlink syntax, but it comes out like this: Maybe I'm doing something wrong?
Aaron Gertler (4 karma, 1mo): You had a non-syntactical space between [LinkedIn] and your URL. I removed it. (Note that you don't need to turn on the Markdown editor to edit your bio; the bio is in Markdown no matter what.)
JackM (2 karma, 1mo): Thanks Aaron!
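For reference, the Markdown link syntax being discussed works like this (the URL here is just a placeholder):

```markdown
[LinkedIn](https://www.linkedin.com/in/example)    <- renders as a link
[LinkedIn] (https://www.linkedin.com/in/example)   <- the space breaks it: plain text
```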

As per this comment, "winter" doesn't feel like the best term for this time of year, given we have people from both hemispheres on the Forum.


Here is a Forum bug that has been bugging me since forever: my own comments show up as new comments, i.e., the post's comments bubble lights up in blue. But this shouldn't be the case; I already know that I left a new comment.

This summer, I became incredibly interested in effective altruism. As a high school student and someone from a low-income background, I felt like there were limited options for getting involved in EA.

I would love to start a project supporting the EA movement for high school/secondary students.  Here are my ideas!

1. A website similar to 80,000 Hours, covering career planning and how to plan your undergraduate studies to align with EA principles.

2. Hosting an EA conference for youth in a virtual format.

3. Having an EA council with mentorship from more established members in the space to work on the projects mentioned above and produce content.

ovidius (9 karma, 1mo): Hi! I know this is two weeks late, but I'm new to the forum, so I hope you'll forgive me. I'm also a high school student interested in EA, and I've found some ways to help out in the movement despite the limited options, which I'd be happy to talk more about. I'm really interested in your ideas, and also just in how many high schoolers lurk on this forum but (like me) find the high level of discourse a bit intimidating. I'd like to write a post intended to surface and connect with those high schoolers. Perhaps from there, we can work together on making 2 or 3 happen.
Charles He (2 karma, 1mo): Hi, I'm not an official or representative from EA or anything like that, but this sounds awesome! Your post is really welcome. Are you asking for help in any way? If so, just say so and people can help. By the way, yes, the discourse uses a lot of words, but a lot of the ideas are basically from high school; people are just familiar with writing with them. What really sets good EA apart is patience, listening, and perception, and the gradual development of good judgement. There are deep pools of talented people who don't write a lot. This is less obvious, but these people are valuable. You are too!
DavidXYu (2 karma, 1mo): I really appreciate the sentiment of this. I help run SPARC (https://sparc-camp.org/), and while the camp itself is meant to be a selective program, we want to support more broadly addressed initiatives too (if nothing else, they end up benefiting us anyway, because they encourage future good and aligned applications). SPARC can probably help on the level of ops support from alumni who may be interested, and a degree of funding that can at least make something like #2 happen.
ChanaMessinger (2 karma, 1mo): Cool! Peter McIntyre is working on things like #1 and might be interested in #2 and #3 as well. That doesn't mean you shouldn't try it on your own, but that might be someone to get in touch with!

Hello! I am slowly seeping into the Forum floorboards, dripping down the comments section, leaving meandering mumblings along an electronic thread. Most of my thoughts are obscure and dubiously specific. Expect errors; I do. And, I value dialogue not for compromise, but to send feelers out in all directions of the design-space. Those lateral extremes bind the constraints of good ideas, found only after pondering a few dozen flops! I'm glad to turn them around, to find any lucky inspirations. Most domains are a straight path up my alley; I follow specific problems into each arena, in turn.

Hi! I got a recommendation to join the Forum because of my reflections about what I should focus on in my career. Is it okay to write a post that doesn't make a specific proposition, but instead asks for advice and provides discussion points for commenters? Or should that be posted as a question?

Khorton (4 karma, 1mo): I'd probably use the question feature, but I'm sure either is fine. Looking forward to your post!

Had the chance to speak to venture capitalist, former poker pro and Effective Altruist Haseeb Qureshi about EA and Web3 - including earning to give and how crypto can facilitate effective giving. You can give it a read here: https://golem.foundation/2021/12/03/interview-HQureshi.html. 

SimonM (1 karma, 1mo): That link is broken for me.
Guy Raveh (1 karma, 16d): There's an extra dot at the end. Remove it and the link is fine.

Hi, I've been interested in EA for years, but I'm not a heavy hitter. I expect to give only tens of thousands of dollars over my life.

That said, I have a problem and I'd like some advice on how to solve it: I don't know whether to focus on short-term organizations like Animal Charity Evaluators and GiveWell, or long-term organizations like the Machine Intelligence Research Institute, Center for Reducing Suffering (CRS), Center on Long-Term Risk (CLR), Long-Term Future Fund, Clean Air Task Force, and so on. It feels like long-term organizations are a huge gamb... (read more)

Aaron Gertler (3 karma, 2mo): This is one of the hardest "big questions" in EA, and you've outlined what makes the question hard. You might want to wait another week or two: we have an annual post where people explain where they're giving and why. You can be notified when it goes up if you subscribe to the donation writeup tag [https://forum.effectivealtruism.org/tag/donation-writeup]. You can also see last year's version [https://forum.effectivealtruism.org/posts/SjxB2KHihsJMb6G4M/where-are-you-donating-in-2020-and-why] of that post. Maybe some of the explanations in these posts will help you figure out what point of view makes the most sense to you!
LoveAndPeaceAlways (1 karma, 2mo): Thank you for answering. I subscribed to that tag and I will take a closer look at those threads.
NunoSempere (6 karma, 2mo): Personally: 1. Bite all the bullets; uncertain but higher expected impact > certain but lower impact. 2. It's tricky to know how good longtermist organizations are compared to each other. In the past I would have said to just defer to the LTFF, but now I feel more uncertain.
LoveAndPeaceAlways (1 karma, 2mo): Thank you for answering. Your reasoning makes sense if long-term charities have a higher expected impact when taking into account the uncertainty involved.

Hi everyone! 

I was wondering if anyone had an opinion on whether it is more ethical to eat 100% grass-fed beef/lamb from trusted suppliers in Australia (i.e. CCTV in slaughterhouses and minimal transport) or to eat more tofu/beans?

The pros of tofu/beans are clearly that it does not require taking the life from a cow or lamb who wants to live (although note that it takes lots of meals  to cause the death of one cow), and also that it dramatically reduces carbon emissions. 

The pros of instead eating 100% grass-fed beef/lamb are that it ma... (read more)

TianyiQ (2 karma, 4d): From a consequentialist perspective, I think what matters more is how these options affect your psychology and epistemics (in particular, whether doing this will increase or decrease your speciesist bias, and whether doing this makes you uncomfortable), rather than the amount of suffering they directly produce or reduce. After all, your major impact on the world comes from your words and actions, not what you eat. That being said, I think non-consequentialist views deserve some consideration too, if only due to moral uncertainty. I'm less certain about what their implications are, though, especially when taking into account things like WAS. A few minor notes on your points: At least where I live, vitamin supplements can be super cheap if you go for pharmaceutical products instead of health products wrapped up in fancy packages. I'm taking 5 kinds of supplements simultaneously, and in total they cost me no more than (the RMB equivalent of) several dollars per month. It might be hard to hide it from your friends if you eat meat only when alone; all the time, people mindlessly say things they aren't supposed to say. Also, when your friends ask about your eating habits, you'll have to lie, which might be a bad thing even for consequentialists [https://forum.effectivealtruism.org/posts/CfcvPBY9hdsenMHCr/integrity-for-consequentialists].
utilitarian01 (1 karma, 2d): Might be irrelevant, but have you considered moving to the US for the increased salary?
TianyiQ (1 karma, 2d): Thanks for the suggestion, but I'm currently in college, so it's impossible for me to move :)
Lucas Lewit-Mendes (5 karma, 1mo): Update: I just came across this article [http://www.animalvisuals.org/projects/data/1mc], which suggests that harvesting/pasture deaths are probably higher for beef than for plants anyway, so it seems a pretty clear decision that being vegan is best in expectation!
Charles He (2 karma, 2mo): This is a really thoughtful and useful question. Most informed people agree that beef and dairy cows live the best lives of all factory-farmed animals, more so than pigs, and much, much more so than chickens. Further, as you point out, beef and dairy cows produce much more food per animal (or per suffering-weighted day alive). A calculator here can help make the above thoughts more concrete [https://reducing-suffering.org/how-much-direct-suffering-is-caused-by-various-animal-foods/]; maybe you have seen it. I think you meant prevents painful deaths? With this change, I don't know, but this seems plausible. (I think the amount of suffering depends on the land use and pesticides, but I don't know if the scientific understanding is settled, and this subtopic may be distracting.) I think you have a great question. Note that extreme suffering in factory farming probably comes from very specific issues, concentrated in a few types of animals (caged hens suffering to death [https://countinganimals.com/is-vegan-outreach-right-about-how-many-animals-suffer-to-death/#:~:text=Chickens%20arrive%20dead%20for%20a,days%20to%20reduce%20fecal%20contamination.] by the millions, and other graphic situations). This means that, if the assumptions in this discussion are true, and our concern is animal suffering, decisions like beef versus tofu, or even much larger dietary decisions, seem small in comparison.
Lucas Lewit-Mendes (6 karma, 2mo): Thanks Charles for your thoughtful response. I just wanted to note that I'm referring to 100% pasture-fed lamb/beef. I think it's very unlikely that it's ethically permissible to eat factory-farmed lamb/beef, even if it's less bad than eating chickens, etc. I'd also caution against eating dairy, since calves and mothers show signs of sadness [https://kb.rspca.org.au/knowledge-base/what-happens-to-bobby-calves/] when separated, although each dairy cow produces a lot [https://reducing-suffering.org/how-much-direct-suffering-is-caused-by-various-animal-foods/] of dairy (as you noted). Sorry, I probably could've worded this better, but my original wording was what I meant. My understanding is that crop cultivation for grains and beans causes painful wild animal deaths, but grass-fed cows/lambs do not eat crops and therefore, as far as I'm aware, do not cause wild animal deaths. I certainly agree with your conclusion that not eating factory-farmed chicken, pork, and eggs (and probably also fish) is the most important step! But I'd still like to do the very best with my own consumption.
Charles He (2 karma, 2mo): Everything you said is fair and valid and seems right to me. Thank you for your thoughtful choices and reasoning. Edit: I forgot you said entirely pasture/grass-fed beef, so this waives the thoughts below. A quibble: 1. It seems that beef and dairy cows both use feed, not just grass. Because eating dairy/beef requires more calories of feed (trophic levels), it is possible the amount of land needed for beef might be large compared to the land needed for soy. 2. Grass crops are a use of land that might have ambiguous effects on animal suffering. I don't know about either 1) or 2) above. I guess I am saying it is either good to be uncertain, or else get a good canonical source.

Just watched the new James Bond movie No Time to Die - the plot centers around a nanobot-based bioweapon developed by MI6 that gets stolen by international terrorists (if I'm understanding the plot correctly; it was confusing). Maybe someone can write a review of it that focuses on the EA themes?

I am the founder of Sanctuary Hostel, a unique cross-border, eco-friendly animal rescue/hostel/community garden project.

After taking a trip all over Mexico, I noticed the animals were not treated well there, so I decided to move there and build an animal rescue. After arriving, I decided a rescue was not enough. The existing rescues fail because they rely solely on donations, and they don't really solve the problem; they are a band-aid.

I felt community and worldwide involvement was needed, so I decided combining a hostel would help with that, as well as a communi... (read more)

Greetings. My name is Anna and I am a digital producer. I am glad that there are so many of us here :)

Hi guys, my name is Nathaniel and I'm new to this forum. I found out about EA a few months ago because I've been thinking in these terms my whole life (how to maximize positive output to the world) and it's great to see there's a whole community centered around that question. I'm studying an undergrad in sustainable energy engineering at SFU and I'm hoping to have a career somewhere in the intersection between this field and computer science (computational sustainability). I haven't done a lot of research into this yet but it seems like an area with so muc... (read more)

I'm Gabe Newman from Canada. My wife got involved in EA earlier this year and I've been skulking on the sidelines, reading and thinking. I'm almost 50 but also a student again as I am getting my MSW (little midlife crisis). I'm still trying to figure out where and how to apply my skill set. I have lots of experience with micro NGO projects which are sustainable but I'm not sure how easy they would be to study, so EA is a bit of a new way of thinking for me. I've typically enjoyed Keep It Simple Stupid projects. But lately I have had a couple incredible com... (read more)

CarolineJ (1 karma, 7d): Welcome! It seems like your skills in NGO management are very needed [https://forum.effectivealtruism.org/posts/a7CnLEmn3P2ACugxm/why-is-operations-no-longer-an-80k-priority-path] in EA projects! You could consider reading more about how to apply your expertise [https://80000hours.org/articles/advice-by-expertise/] to high-impact causes and see if you come across exciting opportunities to directly work in an NGO or be a consultant for different organizations.

It seems like there's been a proliferation of AI safety orgs recently; I'd like to see a forum post describing all of them so people can easily find out more about them and who's hiring.

Hi, I'm newish to EA and new (as of today) to the forum! I use she/her/hers pronouns and I'm a college freshman. I've recently been thinking a lot about how I can use my career to help. AI safety technical research seems like the best option for me from the couple hours of research I've done. I'm planning to donate all my disposable income to the EA meta fund. I'm really passionate about doing as much good as I can, and I'm excited to have found a community that shares that! My biggest stumbling block has recently been my mental health, so if anybody has resources/tips they want to share, I'd love to hear them (for reference, I am actively getting treatment, so no worries there)!

tessa (5 karma, 2mo): If you're looking for resources on mental health, you might enjoy some of the upvoted posts under the self-care tag [https://forum.effectivealtruism.org/tag/self-care?sortedBy=top], including Mental Health Resources Tailored for EAs [https://forum.effectivealtruism.org/posts/iHvvc9HHzSfHNGCHb/mental-health-resources-tailored-for-eas-wip] and Resources on Mental Health and Finding a Therapist [https://forum.effectivealtruism.org/posts/by38PwJNpqNWfc43G/resources-on-mental-health-and-finding-a-therapist].
Charles He (3 karma, 2mo): Similar to what Linch said, another useful perspective comes from this post [https://forum.effectivealtruism.org/posts/mMEBty3W3WkK7rgEH/your-time-might-be-more-valuable-than-you-think], which says the value of your time might be higher than you think. At the same time, your earnings are probably lower right now than they will be. With this perspective, you might be better off spending the money on yourself, given the personal needs you mentioned. For example, regular cleaning or relaxing travel probably helps many people's mental health. It is wonderful you are working to help others.
Linch (5 karma, 2mo): Welcome to the Forum! I think it's good to donate a bit of money to good causes to help build good virtues, but at your current life/career stage, you should probably focus on spending money in ways that make you better at doing good work later. See this blog post [https://meteuphoric.com/2013/03/30/the-value-of-time-as-a-student/] for some considerations.

Hi everyone!

My name is Holly, and I'm a 20-year-old freshman student in California. I first encountered the EA community at the International Youth Summit on Energy and Climate Change in Shenzhen, China, and found the Forum when I was looking for help navigating my future career path. I've been exploring and trying to understand the concept of effective altruism. I grew up in a highly self-interest-driven, bureaucratic environment, but I want to do good, help others, and make this world a better place. EA seems like a great opportunity for me.

I'm currently an Economics major, and I want to be an Econ professor in the future. (I just started down this path, which begins with getting a Ph.D., and I'm a little nervous since the road ahead is a bit unknown to me at this point. My math background is somewhat weak, and I've been trying to improve my skills.) I care about people, and I'd love to help them find happiness and the true meaning of their lives, as well as help them pick up the right mindset to understand the world and live better. This is what I want to do for my whole life.

Aaron Gertler (4 karma, 2mo): Greetings! You didn't mention whether you'd found an EA group near you; I'd recommend looking for one [https://eahub.org/groups/] if you haven't. It's easier to stay motivated and interested when some of your friends share your interests. Do you see this as something you'd be able to do as an economics professor? What is it that draws you to economics, specifically?

Hey, everyone. I don't post here often, and I'm not particularly knowledgeable about strong longtermism, but I've been thinking a bit about it lately and wanted to share a thought I haven't seen addressed yet. I was wondering whether it's reasonable. I'm not sure this is the right place, but here goes.

It seems to me that strong longtermism is extremely biased towards human beings.

In most catastrophic risks I can imagine (climate change, AI misalignment, and maybe even nuclear war* or pandemics**), it seems unlikely that earth would beco... (read more)

TianyiQ (1 karma, 4d): Great points! I agree that the longtermist community needs to better internalize the anti-speciesist beliefs that we claim to hold, and explicitly include non-humans in our considerations. On your specific argument that longtermist work doesn't affect non-humans:
* X-risks aren't the sole focus of longtermism. IMO work in the S-risk [https://s-risks.org/] space takes non-humans (including digital minds [https://forum.effectivealtruism.org/tag/digital-person/]) much more seriously, to the extent that human welfare is mentioned much less often than non-human welfare.
* I think X-risk work does affect non-humans. Linch's comment mentions one possible way, though I think we need to weigh the upsides and downsides more carefully. Another thing I want to add is that a misaligned AI can be a much more powerful actor than other earth-originating intelligent species, and may have a large influence on non-humans even after human extinction.
* I think we need to thoroughly investigate the influence of our longtermist interventions on non-humans. This topic is highly neglected relative to its importance.
Frank_R (2 karma, 2mo): I agree with Linch's comment, but I want to mention a further point. Let us suppose that the well-being of all non-human animals between now and the death of the sun is the most important value. This idea can be justified, since there are many more animals than humans. Let us suppose furthermore that the future of human civilization has no impact on the lives of animals in the far future. [I disagree with this point, since it might be possible that future humans abolish wild animal suffering, or, in the bad case, take wild animals with them when they colonize the stars and thus extend wild animal suffering.] Nevertheless, let us assume that we cannot have any impact on animals in the far future. In my opinion, the most logical thing would be to focus on the things that we can change (x-risks, animal suffering today, etc.) and to develop a stoic attitude towards the things we cannot change.
Linch (4 karma, 2mo): If humanity survives, we have a decent shot at reducing suffering in nature and spreading utopia throughout the stars. If humanity dies, but not all life, and some other species eventually evolves intelligence and then builds civilization, I think they might also have a shot at doing the same thing, but this is more speculative and uncertain, and seems to me to be a much worse bet than betting on humanity (flawed as we are).
bezurli (1 karma, 2mo): Thanks for the comment. I really hadn't considered colonizing the stars and bringing animals.
Linch (3 karma, 2mo): To be clear, I think it's more likely that utopia would not look like having animals in the stars. Digital minds [https://forum.effectivealtruism.org/tag/digital-person/] seem more likely, but I also think it's likely that the future will just be really weird, even weirder than digital minds.
acylhalide (2 karma, 2mo): Aren't all ethical principles/virtues by default biased towards human beings, except the ones that explicitly attempt to include animals in the moral circle? I assume most people value human lives more highly than animal lives, even within EA, and even if they believe society currently undervalues animal lives. Not that that makes it objectively right or wrong, of course; you're free to value animal lives as highly as human lives if that is something you are drawn to. P.S. Valuing animal lives highly doesn't mean human extinction is neutral; it is still a bad thing because it is a lot of lives lost, versus the counterfactual where no lives are lost. And if your ethics are total utilitarianism, what value you assign to animal lives doesn't even matter in this scenario, because it's the same number of lives lost. The lives not lost don't contribute to the delta. I personally don't find total utilitarianism intuitive, though; we are probably closer to log(total) maximisers.

Hey everyone, I'm also new to the forum and to EA as of summer 2021. I found EA mostly through Lex Fridman's old podcast with Will MacAskill, which I watched after being reminded of EA by a friend. Then I read some articles on 80,000 hours and was pretty convinced.

I'm a sophomore computer science student at the University of Washington. I'm currently doing research with UW Applied Math on machine learning for science and engineering. It seems like my most likely career is in research in AI or brain-computer interfacing, but I'm still deciding and have an a... (read more)

Hi everyone! I'm a longtime EA but I haven't spent much time on the EA Forum, so taking this opportunity to introduce myself.

Professionally, I'm an economist in California focused on tax and benefit policy. I'm the co-founder and CEO of PolicyEngine, a tech nonprofit whose product lets anyone reform the tax and benefit system and see the quantified impact on society and one's own household (we're live in the UK and working on a US model). I'm also the founder and president of the UBI Center, a think tank researching universal basic income policies. Outside of work, I'm a founding lead of Ventura County YIMBY, which advocates housing density, and I lead the Ventura chapter of Citizens' Climate Lobby, which advocates carbon dividends.

I previously spent most of my career as a data scientist at Google, where I first encountered EA when Google.org gave a grant to GiveDirectly in 2012. I then became active in Google's internal EA group, left Google in 2018, took the GWWC pledge in 2019 (which I wrote about here), and got a Master's in Development Economics from MIT in 2020, where I became involved in the MIT EA community. I give primarily to GiveDirectly and GiveWell, though as an avid l... (read more)

3Aaron Gertler2moWelcome, Max! I've been following you on Twitter for a long time, and I'm excited to see you on the site I help to run :-) If you want feedback before you publish your post, I offer that to everyone [https://forum.effectivealtruism.org/posts/ZeXqBEvABvrdyvMzf/editing-available-for-ea-forum-drafts] (though it's totally optional).

Hi, I'm new to the forum and wanted to introduce myself! I'm a product manager in the cybersecurity industry, located in Salt Lake City, UT. I'm currently looking for ways to make more of a positive impact, focused around 1) helping to build up the local EA community and 2) using my career.

I'm relatively early in my career so I have a lot of uncertainties around what cause area to work on and what my personal fit would be for different roles, so I'm trying to find lots of people to talk to in the EA community about product management, data science, or EA startups.

Happy to be here and excited to start contributing!

3Aaron Gertler2moHi there! You may have considered this already, but I'd recommend applying to speak with 80,000 Hours [https://80000hours.org/speak-with-us/]. They're a great starting point for finding others to talk to, and they accept a lot of applications ("roughly 40% of people who apply", and I'd guess that many of their rejections are because the applicant has never heard of EA and doesn't really "get" what 80K is about).
1Derek Brimley2moYep, should have mentioned I already applied for their 1-on-1 advice! Trying to cast as wide a net as possible. :)
3Max_Daniel2moWelcome! I guess there's a good chance you've already seen this, but just to make sure: some people think that careers in the info sec space can be very high-impact [https://forum.effectivealtruism.org/posts/ZJiCfwTy5dC4CoxqA/information-security-careers-for-gcr-reduction] .
3Derek Brimley2moThanks! Skimming that over, it does seem like a potentially good path. I know info sec is one of 80k's "potentially good options" but I've generally brushed it off, even though it might seem like a good fit on paper. I've really only been involved in the development/management of a few insider risk products, so my skillset isn't focused on expertise in traditional info sec, it's mostly generalist PM skills for software dev. I'm probably in a slightly better position than most to pursue that route, but not by much. I'll read it over more thoroughly, thanks for the pointer!

I noticed something at EAG London which I want to promote to someone's conscious attention. Almost no one at the conference was overweight, even though the attendees were mostly from countries with overweight and obesity rates ranging from 50-80% and 20-40% respectively. I estimate that I interacted with 100 people, of whom 2 were overweight. Here are some possible explanations; if the last one is true, it is potentially very concerning:

1. effective altruism is most common among young people, who have lower rates of obesity than the general population
2. effective altruism is correlated with veganism, which leads to generally healthy eating, which leads to lower rates of diseases including obesity
3. effective altruists have really good executive function, which helps resist the temptation of junk food
4. selection effects: something about effective altruism doesn't appeal to overweight people

It's clearly bad that EA has low representation of religious adherents and underprivileged minorities. Without getting into the issue of missing out on diverse perspectives, it's also directly harmful in that it limits our talent and donor pools. Churches receive over $50 billion in donatio... (read more)

7Will Bradshaw3moThe natural first step here is to check whether EA has lower rates of overweight/obesity than the demographics from which it primarily recruits. I can't speak much to the US, but in the European countries I've lived in overweight/obesity varies massively with socioeconomic status. My classmates at university were also mostly thin, as were all the scientists I've worked with (in several groups in several countries) over the years. And it's my reasonably strong impression that many other groups of highly-educated professionals have much lower rates of obesity than the population average. In general, I've tended to be the most overweight person in most of my social and work circles – and I'd describe my fat level over the past 10 years as, at worst, a little chubby. If it is the case that EA is representative of its source demographics on this dimension, that implies that it doesn't make all that much sense to focus on getting more overweight/obese people into the movement. Obviously, as with other demographic issues, we should be very concerned if we find evidence of the movement being actively unwelcoming to these people – but their rarity per se is not strong evidence of this. (EDIT: See also Khorton's comment for similar points.)
6Will Bradshaw3moIt's also probably worth noting that obesity levels in rich European countries are pretty dramatically lower [https://digital.nhs.uk/data-and-information/publications/statistical/statistics-on-obesity-physical-activity-and-diet/england-2020/part-3-adult-obesity-copy] than the US, which might skew perceptions of Americans at European conferences: I don't want to overstate this, since my memory of EA San Francisco 2019 was also generally thin. But it is probably something to remember to calibrate for.

I think there are extensions of (1) and (3) that could also be true, like "people at EA Global were particularly likely to be college-educated" and "people who successfully applied to EA Global are particularly willing to sacrifice today in order to improve the future"

EDIT: and just generally wealth leads to increased fitness I think - obesity is correlated with poverty and food insecurity in Western countries

3Nathan_Barnard2moI'm currently doing research on this! The big driver is age; income is pretty small comparatively, and the education effect goes away when you account for income and age. At least this is what I get from the raw Health Survey for England data lol.
5Linch3moFWIW I see a much higher percentage of overweight EAs in the Bay Area.
5Larks3moI'm skeptical of the comparability of your 2/100 and 50-80% numbers; being overweight as judged by BMI is consistent with looking pretty normal, especially if you have muscle. I would guess that more people would have technically counted as overweight than you'd expect using the typical informal meaning of the word. It could also be that obese people are less likely to want to do conference socializing, and hence EAG is not representative of the movement.
8Will Bradshaw3moWhile BMI as a measure of obesity is far from perfect, it mostly fails in a false negative direction. False positives are quite rare; you have to be really quite buff in order for BMI to tell you you're obese when you're not. That is to say, I believe BMI-based measures will generally suggest lower rates of obesity than by-eye estimation, not higher. https://examine.com/nutrition/how-valid-is-bmi-as-a-measure-of-health-and-obesity/ [https://examine.com/nutrition/how-valid-is-bmi-as-a-measure-of-health-and-obesity/]
2Larks3moThanks for sharing this, I guess it looks like I was wrong!
3Jay Bailey3moI still don't think you're wrong. Will is correct that someone with a BMI of 25 or lower being actually overweight is more likely than someone with a BMI over 25 being merely well-muscled, but that isn't the same as estimating by eye. The point, as I understand it, is that if you live in a country where most people are overweight, your understanding of what "overweight" is will naturally be skewed. If the average person in your home country has a BMI of 25-30, you'll see that subconsciously as normal, and therefore you could see plenty of mildly overweight people and not think they were overweight at all - only people at even higher BMIs would be identifiable as overweight to you.
8Will Bradshaw2moRelatively minor in this particular case, but: Please don't claim people said things they didn't actually say. I know you're paraphrasing, but to me the combination of "when he says" with quote marks strongly implies a verbatim quote. It's pretty important to clearly distinguish between those two things.
3Jay Bailey2moFair enough. I've edited it to remove the quotation marks.
2Will Bradshaw2moI agree "BMI gives lots of false negatives compared to more reliable measures of overweight" is not the same thing as "BMI is more prone to false negatives than by-eye estimation" – it could be that BMI underestimates overweight, but by-eye estimation underestimates it even more. It would be great to see a study comparing both BMI and by-eye estimation to a third metric (I haven't searched for this). But if BMI is more prone to false negatives, and less prone to false positives, than most people think, that still seems to me like prima facie evidence against the claim that the opposite (that by-eye will underestimate relative to BMI) is true.
6Pablo3moIs that so? From the way BMI is defined, one should expect a tendency to misclassify tall normal people as overweight, and short overweight people as normal—i.e. a bias in opposite directions for people on either end of the height continuum. This is because weight scales with the cube of height, but BMI is defined as weight / height².
4Will Bradshaw3moAfter reading around a bit, my understanding is that the height exponent was derived empirically – it was chosen to maximise the fit to the data (of weight vs height in lean subjects). (Here's a retrospective article [https://academic.oup.com/ndt/article/23/1/47/1923176] from the Wikipedia citations.) The guy who developed the index did this in the 19th century, so it may well be the case that we'd find a different exponent given modern data – but e.g. this study [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2082278/] finds an exponent of 1.96 for males and 1.95 for females, suggesting it isn't all that dumb. (This study [https://onlinelibrary.wiley.com/doi/abs/10.1002/ajpa.20107] finds lower exponents – bad for BMI but still not supporting a weight/height³ relationship.) I don't find this too surprising – allometry [https://en.wikipedia.org/wiki/Allometry] is complicated and often deviates from what a naive dimensional analysis would suggest. A weight/height³ relationship would only hold if tall people were isometrically scaled-up versions of short people; a different exponent implies that tall and short people have systematically different body shapes, which matches my experience. In any case, my claim above is based on empirical evidence, comparing obesity as identified with BMI to obesity identified by other, believed-to-be-more-reliable metrics – those studies find that false positives are rare. Examine.com is a good source, and its conclusions roughly match my impressions from earlier reading, albeit with rather higher rates of false negatives than I'd thought.
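[Editor's note: the height-exponent point in the exchange above can be checked with a quick sketch. All numbers are hypothetical and `body_index` is just an illustrative helper, not any standard formula beyond weight / height^exponent; exponent 2 gives ordinary BMI, exponent 3 is what naive isometric scaling would predict.]

```python
def body_index(weight_kg: float, height_m: float, exponent: float = 2.0) -> float:
    """Generalized body index: weight / height**exponent.

    exponent=2.0 is standard BMI; ~1.95 matches some empirical fits;
    3.0 is what purely isometric scaling would imply.
    """
    return weight_kg / height_m ** exponent

# Two hypothetical people with identical body shape, isometrically scaled
# (so weight is proportional to height cubed):
short_weight, short_height = 70.0, 1.60
tall_height = 1.90
tall_weight = short_weight * (tall_height / short_height) ** 3  # ~117.2 kg

bmi_short = body_index(short_weight, short_height)  # ~27.3
bmi_tall = body_index(tall_weight, tall_height)     # ~32.5

# Under exponent 2 (BMI), the taller person scores higher despite the
# identical build; under exponent 3 the two indices are exactly equal.
cubic_short = body_index(short_weight, short_height, exponent=3.0)
cubic_tall = body_index(tall_weight, tall_height, exponent=3.0)
```

This illustrates Pablo's point that a squared denominator penalizes tall people under isometric scaling; whether real populations scale isometrically is exactly what the empirical exponent estimates (~1.95-1.96) speak to.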

For those interested in the work Michael Kremer (Giving What We Can member and 2019 Nobel Laureate in Economics) and his spouse and fellow GWWC member Rachel Glennerster have done on COVID-19 vaccine supply, our team profiled one of their co-authors this week — Juan Camilo Castillo of UPenn. An excerpt is below / the link is here: https://innovationexchange.mayoclinic.org/market-design-for-covid-19-vaccines-interview-with-upenn-professor-castillo/


JCC: Michael Kremer had worked on groundbreaking pneumococcal vaccine research in the past. Early in 2020, h... (read more)

(Repost from Shortform because I didn't get an answer. Hope that's ok.)

The "Personal Blogposts" section has recently become swamped with [Event] posts.
Most of them are irrelevant to me. Is there a way to hide them in the "All Posts"-view?

3Ben_West2moThanks Tobias, we are aware of this issue and have a fix for it on our backlog. Unfortunately there isn't an easy way to filter out these posts in the interim.
7Aaron Gertler3moDoes this strike you as unusually threatening compared to other bugs that have been discovered in recent years? Headline aside, the article's tone seemed mild to me, and it looks like several organizations are taking steps to mitigate the issue. But my knowledge of computer security is rudimentary at best — do the stakes seem very high to you?