Open Thread: November 2021

by Aaron Gertler · 1 min read · 1st Nov 2021 · 48 comments


If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)


If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)


Open threads are also a place to share good news, big or small. See this post for ideas.


Hi, I'm newish to EA and new (as of today) to the forum! I use she/her/hers pronouns and I'm a college freshman. I've recently been thinking a lot about how I can use my career to help. AI safety technical research seems like the best option for me from the couple hours of research I've done. I'm planning to donate all my disposable income to the EA meta fund. I'm really passionate about doing as much good as I can, and I'm excited to have found a community that shares that! My biggest stumbling block has recently been my mental health, so if anybody has resources/tips they want to share, I'd love to hear them (for reference, I am actively getting treatment, so no worries there)!

Welcome to the Forum!

I'm planning to donate all my disposable income to the EA meta fund.

I think it's good to donate a bit of money to worthy causes as a way of building virtuous habits, but at your current life/career stage you should probably focus on spending money in ways that make you better at doing good work later.

See this blog post for some considerations.

I'm planning to donate all my disposable income to the EA meta fund. 

...

My biggest stumbling block has recently been my mental health, so if anybody has resources/tips they want to share, I'd love to hear them (for reference, I am actively getting treatment, so no worries there)!


Similar to what Linch said, another useful perspective comes from this post, which argues that the value of your time might be higher than you think. At the same time, your earnings are probably lower right now than they will be later in your career.

With this perspective, you might be better off spending the money on yourself, given the personal needs you mentioned. For example, paying for regular cleaning or relaxing travel probably helps many people's mental health.

It is wonderful you are working to help others.

It seems like there's been a proliferation of AI safety orgs recently; I'd like to see a forum post describing all of them so people can easily find out more about them and who's hiring.

Hi everyone!

My name is Holly, and I'm a 20-year-old freshman student in California. I first encountered the EA community at the International Youth Summit on Energy and Climate Change in Shenzhen, China, and found the forum when I was looking for help navigating my future career path. I've been exploring and trying to understand the concept of effective altruism: I grew up in a highly self-interest-driven, bureaucratic environment, but I want to do good, help others, and make this world a better place. EA would be a great opportunity for me.

I'm currently an Economics major, and I want to be an Econ professor in the future. (However, I've just started down this path, which means getting a Ph.D. first, and I find myself a little nervous since the road ahead is a bit unknown to me at this point. My math background is somewhat weak, and I've been trying to improve my skills.) I care about people, and I'd love to help them find happiness and the true meaning of their lives, as well as help them pick up the right mindset to understand the world and live better. This is what I want to do for my whole life.

Greetings!

You didn't mention whether you'd found an EA group near you, and I'd recommend looking for one if you haven't. It's easier to stay motivated and interested when some of your friends share your interests.

I care about people, and I'd love to help them find happiness and the true meaning of their lives, as well as help them to pick up the right mindset to understand the world and live better. 

Do you see this as something you'd be able to do as an economics professor? What is it that draws you to economics, specifically?

Hi, I'm new to the forum and wanted to introduce myself! I'm a product manager in the cybersecurity industry, located in Salt Lake City, UT. I'm currently looking for ways to make more of a positive impact, focused around 1) helping to build up the local EA community and 2) using my career.

I'm relatively early in my career, so I have a lot of uncertainty about which cause area to work on and what my personal fit would be for different roles. I'm trying to find lots of people in the EA community to talk to about product management, data science, or EA startups.

Happy to be here and excited to start contributing!

Hi there!

You may have considered this already, but I'd recommend applying to speak with 80,000 Hours. They're a great starting point for finding others to talk to, and they accept a lot of applications ("roughly 40% of people who apply", and I'd guess that many of their rejections are because the applicant has never heard of EA and doesn't really "get" what 80K is about).

Yep, should have mentioned I already applied for their 1-on-1 advice! Trying to cast as wide a net as possible. :)

Welcome! I guess there's a good chance you've already seen this, but just to make sure: some people think that careers in the info sec space can be very high-impact.

Thanks! Skimming that over, it does seem like a potentially good path. I know info sec is one of 80k's "potentially good options", but I've generally brushed it off, even though it might seem like a good fit on paper. I've really only been involved in the development/management of a few insider risk products, so my skillset isn't expertise in traditional info sec; it's mostly generalist PM skills for software dev. I'm probably in a slightly better position than most to pursue that route, but not by much. I'll read it over more thoroughly, thanks for the pointer!

Hi everyone! I'm a longtime EA but I haven't spent much time on the EA Forum, so taking this opportunity to introduce myself.

Professionally, I'm an economist in California focused on tax and benefit policy. I'm the co-founder and CEO of PolicyEngine, a tech nonprofit whose product lets anyone reform the tax and benefit system and see the quantified impact on society and one's own household (we're live in the UK and working on a US model). I'm also the founder and president of the UBI Center, a think tank researching universal basic income policies. Outside of work, I'm a founding lead of Ventura County YIMBY, which advocates housing density, and I lead the Ventura chapter of Citizens' Climate Lobby, which advocates carbon dividends.

I previously spent most of my career as a data scientist at Google, where I first encountered EA when Google.org gave a grant to GiveDirectly in 2012. I then became active in Google's internal EA group, left Google in 2018, took the GWWC pledge in 2019 (which I wrote about here), and got a Master's in Development Economics from MIT in 2020, where I became involved in the MIT EA community. I give primarily to GiveDirectly and GiveWell, though as an avid listener of the 80k Hours podcast (and soon-to-be-avid reader of the EA Forum!) I'm always interested in new cause areas.

I'm also working on a post on tax and benefit policy as an EA cause area, so I'm open to ideas here on that topic.

Welcome, Max! I've been following you on Twitter for a long time, and I'm excited to see you on the site I help to run :-)

If you want feedback before you publish your post, I offer that to everyone (though it's totally optional).

I noticed something at EAG London which I want to promote to someone's conscious attention. Almost no one at the conference was overweight, even though the attendees were mostly from countries with overweight and obesity rates of roughly 50-80% and 20-40% respectively. I estimate that I interacted with 100 people, of whom 2 were overweight. Here are some possible explanations; if the last one is true, it is potentially very concerning:

1. effective altruism is most common among young people, who have lower rates of obesity than the general population
2. effective altruism is correlated with veganism, which leads to generally healthy eating, which leads to lower rates of diseases including obesity
3. effective altruists have really good executive function, which helps resist the temptation of junk food
4. selection effects: something about effective altruism doesn't appeal to overweight people

It's clearly bad that EA has low representation of religious adherents and underprivileged minorities. Without getting into the issue of missing out on diverse perspectives, it's also directly harmful in that it limits our talent and donor pools. Churches receive over $50 billion in donations each year in the US alone, an amount that dwarfs annual outlays to all effective causes. I think this topic has been covered on the forum before from the religion and ethnicity angles, but I haven't seen it for other types of demographics.

If we're somehow limiting participation to the three-tenths of the population with a BMI under 25, are we needlessly keeping out seven-tenths of the people who might otherwise work to effectively improve the world?
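For a rough sense of how statistically surprising the 2-in-100 observation is: if attendees were an independent random draw from a population with even a 50% overweight rate (a strong simplifying assumption; conference attendance is anything but random sampling), the chance of seeing so few overweight people would be astronomically small. A minimal sketch:

```python
# Back-of-the-envelope check of how surprising "2 overweight in 100" is,
# under the (unrealistic) assumption of independent random sampling.
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Probability of 2 or fewer overweight people in a sample of 100,
# if the true background rate were 50%:
print(binom_cdf(2, 100, 0.50))  # ~4e-27
```

So whatever mix of explanations 1-4 is at work, the selection effects involved must be very strong.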

I think there are extensions of (1) and (3) that could also be true, like "people at EA Global were particularly likely to be college-educated" and "people who successfully applied to EA Global are particularly willing to sacrifice today in order to improve the future"

EDIT: and just generally, I think wealth leads to increased fitness: obesity is correlated with poverty and food insecurity in Western countries.

I'm skeptical of the comparability of your 2/100 and 50-80% numbers; being overweight as judged by BMI is consistent with looking pretty normal, especially if you have muscle. I would guess that more people would have technically counted as overweight than you'd expect using the typical informal meaning of the word.

It could also be that obese people are less likely to want to do conference socializing, and hence EAG is not representative of the movement.

While BMI as a measure of obesity is far from perfect, it mostly fails in a false negative direction. False positives are quite rare; you have to be really quite buff in order for BMI to tell you you're obese when you're not.

That is to say, I believe BMI-based measures will generally suggest lower rates of obesity than by-eye estimation, not higher.

https://examine.com/nutrition/how-valid-is-bmi-as-a-measure-of-health-and-obesity/

Is that so? From the way BMI is defined, one should expect a tendency to misclassify tall normal people as overweight, and short overweight people as normal—i.e. a bias in opposite directions for people on either end of the height continuum. This is because weight scales with the cube of height, but BMI is defined as weight / height². 
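To make the dimensional-analysis point concrete, here is a minimal Python sketch. It assumes perfectly isometric scaling (weight proportional to height cubed), which, as the replies below note, is not quite what the data show; the numbers are purely illustrative:

```python
# Illustrative only: how BMI = w / h^2 behaves if weight scaled with
# height cubed (isometric scaling), i.e. body shape held fixed.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

ref_height = 1.70                    # reference person, 1.70 m tall
ref_weight = 24.0 * ref_height ** 2  # chosen so their BMI is 24 (~69.4 kg)

for height in (1.55, 1.70, 1.85, 2.00):
    weight = ref_weight * (height / ref_height) ** 3  # same shape, rescaled
    print(f"{height:.2f} m  {weight:5.1f} kg  BMI {bmi(weight, height):4.1f}")

# 1.55 m   52.6 kg  BMI 21.9  <- shorter person reads as leaner
# 1.70 m   69.4 kg  BMI 24.0
# 1.85 m   89.4 kg  BMI 26.1  <- crosses the overweight cutoff of 25
# 2.00 m  113.0 kg  BMI 28.2  <- taller person reads as overweight
```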

After reading around a bit, my understanding is that the height exponent was derived empirically: it was chosen to maximise the fit to the data (of weight vs height in lean subjects). (Here's a retrospective article from the Wikipedia citations.)

The guy who developed the index did this in the 19th century, so it may well be the case that we'd find a different exponent given modern data – but e.g. this study finds an exponent of 1.96 for males and 1.95 for females, suggesting it isn't all that dumb. (This study finds lower exponents – bad for BMI but still not supporting a weight/height³ relationship.)

I don't find this too surprising – allometry is complicated and often deviates from what a naive dimensional analysis would suggest. A weight/height³ relationship would only hold if tall people were isometrically scaled-up versions of short people; a different exponent implies that tall and short people have systematically different body shapes, which matches my experience.

In any case, my claim above is based on empirical evidence, comparing obesity as identified with BMI to obesity identified by other, believed-to-be-more-reliable metrics – those studies find that false positives are rare. Examine.com is a good source, and its conclusions roughly match my impressions from earlier reading, albeit with rather higher rates of false negatives than I'd thought.

Thanks for sharing this, I guess it looks like I was wrong!

I still don't think you're wrong. Will is correct when he says that it is more likely someone with a BMI of 25 or lower is actually overweight than someone with a BMI of 25 or higher is just well-muscled, but that isn't the same as estimating by eye.

The point, as I understand it, is that if you live in a country where most people are overweight, your understanding of what "overweight" is will naturally be skewed. If the average person in your home country has a BMI of 25-30, you'll subconsciously see that as normal, and therefore you could see plenty of mildly overweight people and not think they were overweight at all; only people at even higher BMIs would be identifiable as overweight to you.

Will is correct when he says "It is more likely someone with a BMI of 25 or lower is actually overweight than someone with a BMI of 25 or higher is just well-muscled", but that isn't the same as estimating by eye.

Relatively minor in this particular case, but: Please don't claim people said things they didn't actually say. I know you're paraphrasing, but to me the combination of "when he says" with quote marks strongly implies a verbatim quote. It's pretty important to clearly distinguish between those two things.

Fair enough. I've edited it to remove the quotation marks.

I agree "BMI gives lots of false negatives compared to more reliable measures of overweight" is not the same thing as "BMI is more prone to false negatives than by-eye estimation" – it could be that BMI underestimates overweight, but by-eye estimation underestimates it even more. It would be great to see a study comparing both BMI and by-eye estimation to a third metric (I haven't searched for this).

But if BMI is more prone to false negatives, and less prone to false positives, than most people think, that still seems to me like prima facie evidence against the opposite claim (that by-eye estimation will underestimate overweight relative to BMI).

The natural first step here is to check whether EA has lower rates of overweight/obesity than the demographics from which it primarily recruits.

I can't speak much to the US, but in the European countries I've lived in overweight/obesity varies massively with socioeconomic status. My classmates at university were also mostly thin, as were all the scientists I've worked with (in several groups in several countries) over the years. And it's my reasonably strong impression that many other groups of highly-educated professionals have much lower rates of obesity than the population average.

In general, I've tended to be the most overweight person in most of my social and work circles – and I'd describe my fat level over the past 10 years as, at worst, a little chubby.

If it is the case that EA is representative of its source demographics on this dimension, that implies that it doesn't make all that much sense to focus on getting more overweight/obese people into the movement. Obviously, as with other demographic issues, we should be very concerned if we find evidence of the movement being actively unwelcoming to these people – but their rarity per se is not strong evidence of this.

(EDIT: See also Khorton's comment for similar points.)

It's also probably worth noting that obesity levels in rich European countries are dramatically lower than in the US, which might skew perceptions of Americans at European conferences.

I don't want to overstate this, since my memory of EA Global San Francisco 2019 is that attendees were also generally thin. But it is probably something to remember to calibrate for.

FWIW I see a much higher percentage of overweight EAs in the Bay Area.

Hey everyone, I'm also new to the forum and to EA as of summer 2021. I found EA mostly through Lex Fridman's old podcast with Will MacAskill, which I watched after being reminded of EA by a friend. Then I read some articles on 80,000 hours and was pretty convinced.

I'm a sophomore computer science student at the University of Washington. I'm currently doing research with UW Applied Math on machine learning for science and engineering. It seems like my most likely career is in research in AI or brain-computer interfacing, but I'm still deciding and have an appointment with 80,000 hours advising.

Something else I'm interested in is joining (and possibly building) an EA community at UW. To my knowledge, the group has mostly died away since COVID, but there may still be some remaining UW EAs to link up with.

Looking forward to engaging in discussion on the forum!

Just watched the new James Bond movie No Time to Die - the plot centers around a nanobot-based bioweapon developed by MI6 that gets stolen by international terrorists (if I'm understanding the plot correctly; it was confusing). Maybe someone can write a review of it that focuses on the EA themes?

I am the founder of Sanctuary Hostel, a unique cross-border, eco-friendly animal rescue / hostel / community garden project.

After taking a trip all over Mexico, I noticed the animals were not treated well there, so I decided to move there and build an animal rescue. After arriving, I decided a rescue was not enough. The existing rescues fail because they rely solely on donations, and they don't really solve the problem; they are a band-aid.

I felt community and worldwide involvement was needed, so I decided that combining the rescue with a hostel, as well as a community garden, would help with that.

Our focus is not rescue; it's education. We want to stop strays from existing, stop people from breeding animals, and stop people from abusing animals.

So in 2019 I moved to Rosarito, where I purchased some land, and I have been working towards building this concept. We are still very new, so we don't have many people on the team or much brand awareness. I am trying to learn a bit about fundraising, crowdfunding, and donations.

This is why I chose Mexico: roughly 70% of Mexico's 18 million dogs are abandoned and become strays, making it the worst country for pet abandonment in Latin America. Animals are treated more like property than pets, and they are often mistreated whether living in a home or on the street.

For those interested in the work Michael Kremer (Giving What We Can member and 2019 Nobel Laureate in Economics) and his spouse and fellow GWWC member Rachel Glennerster have done on COVID-19 vaccine supply: our team profiled one of their co-authors this week, Juan Camilo Castillo of UPenn. An excerpt is below; the link is here: https://innovationexchange.mayoclinic.org/market-design-for-covid-19-vaccines-interview-with-upenn-professor-castillo/

###

JCC: Michael Kremer had worked on groundbreaking pneumococcal vaccine research in the past. Early in 2020, he realized there would be a profound need for research into financing COVID-19 vaccines. He thus reached out to several people and put together a team of economists that included some of his former colleagues and some new people (such as myself).

At the start of our work, we saw that some of the hurdles that had to be cleared to develop a vaccine were no longer a problem, since phase I and II clinical trials were already underway for several vaccine candidates. However, we realized that it would not be easy to translate successful trials into large-scale vaccination quickly, since few steps had been taken to set in place the capacity to manufacture vaccines. So we focused our work on financing large-scale manufacturing capacity that would allow for quick, large-scale vaccination as soon as vaccine trials were successful.

I'm Gabe Newman from Canada. My wife got involved in EA earlier this year and I've been lurking on the sidelines, reading and thinking. I'm almost 50 but also a student again, as I'm getting my MSW (a little midlife crisis). I'm still trying to figure out where and how to apply my skill set. I have lots of experience with micro-NGO projects that are sustainable, but I'm not sure how easy they would be to study, so EA is a bit of a new way of thinking for me. I've typically enjoyed Keep It Simple, Stupid projects. But lately I have had a couple of incredibly complicated ideas.

First, I've been inspired by the discussion around megaprojects, and I was wondering if there has been any consideration of buying intellectual property rights. For example, if the AstraZeneca vaccine had been purchased by an EA org, it could have gotten to lower-income countries quickly rather than being hoarded by wealthy countries. A considerable number of lives would have been saved.

I appreciate that a lot of money is going into research for the next pandemic, but what about skipping that step and buying the rights to promising vaccines so they can go straight to generic production? Pharma gets its money back and a small profit, while low-resource settings can access life-saving medication. I'm sure there is a reason why this wouldn't work, but I don't know what it is.

Secondly, Bill Gates is currently the second-largest donor to the WHO (2018-2019 budget: https://www.who.int/about/finances-accountability/reports/results_report_18-19_high_res.pdf?ua=1). He is driving where funding is allocated and protecting intellectual property rights. I believe this is incredibly problematic. EA could have a seat at that table. Yes, it would cost a lot (in 2018/2019 Gates contributed $531 million), but even $100 million would make EA the 11th-largest financial donor to the World Health Organization!

I realize these are just ideas with no details, but I'd love to know if they're horrible ideas, because I can't get them out of my head. Perhaps I'm looking for a reality check.

Thanks!

(Repost from Shortform because I didn't get an answer. Hope that's ok.)

The "Personal Blogposts" section has recently become swamped with [Event] posts.
Most of them are irrelevant to me. Is there a way to hide them in the "All Posts" view?

Thanks Tobias, we are aware of this issue and have a fix for it on our backlog. Unfortunately there isn't an easy way to filter out these posts in the interim.

Hey, everyone. I don't post here often and I'm not particularly knowledgeable about strong longtermism, but I've been thinking a bit about it lately and wanted to share a thought I haven't seen addressed yet; I was wondering whether it's reasonable. I'm not sure this is the right place, but here goes.

It seems to me that strong longtermism is extremely biased towards human beings.

For most catastrophic risks I can imagine (climate change, AI misalignment, and maybe even nuclear war* or pandemics**), it seems unlikely that Earth would become uninhabitable for a long period or that all life on Earth would be disrupted.

Some of these events (e.g. climate change) could have significant short to medium term effects on all life on earth, but in the long run (after several million years?), I’d argue the impact on non-human animals would likely be negligible, since evolution would eventually find its way. So if this is right and you consider the very long term and value all lives (humans and other animals) equally, wouldn’t strong longtermism imply not doing anything?

Although I definitely am somewhat biased towards human beings and think existential risk is a very important cause, I wonder if this critique makes sense.

 

*Regarding nuclear war, I guess it would depend on the length and strength of the radioactivity, which is not a subject I’m familiar with.

**From what I've learned in the last year and a half, it wouldn't be easy for viruses (not sure about bacteria) to infect lots of different species (covid-19 doesn't seem to be a problem for other species).

Some of these events (e.g. climate change) could have significant short to medium term effects on all life on earth, but in the long run (after several million years?), I’d argue the impact on non-human animals would likely be negligible, since evolution would eventually find its way. So if this is right and you consider the very long term and value all lives (humans and other animals) equally, wouldn’t strong longtermism imply not doing anything?

If humanity survives, we have a decent shot of reducing suffering in nature and spreading utopia throughout the stars. 

If humanity dies, but not all life, and some other species eventually evolves intelligence and then builds civilization, I think they might also have a shot of doing the same thing, but this is more speculative and uncertain, and seems to me to be a much worse bet than betting on humanity (flawed as we are).

Thanks for the comment. I really hadn't considered colonizing the stars and bringing animals.

TBC, I think it's more likely that utopia would not look like having animals in the stars. Digital minds seem more likely, but also I think it's likely just that the future will be really weird, even weirder than digital minds.

I agree with Linch's comment, but I want to mention a further point. Let us suppose that the well-being of all non-human animals between now and the death of the sun is the most important value. This idea can be justified, since there are many more animals than humans.

Let us suppose furthermore that the future of human civilization has no impact on the lives of animals in the far future. [I disagree with this point since it might be possible that future humans abolish wild animal suffering or in the bad case they take wild animals with them when they colonize the stars and thus extend wild animal suffering.] Nevertheless, let us assume that we cannot have any impact on animals in the far future.

In my opinion, the most logical thing would be to focus on the things that we can change (x-risks, animal suffering today etc.) and to develop a stoic attitude towards the things we cannot change. 

Aren't all ethical principles/virtues biased towards human beings by default, except the ones that explicitly attempt to include animals in the moral circle?

I assume most people value human lives higher than animal lives, even within EA, and even if they believe society currently undervalues animal lives.

Not that that makes it objectively right or wrong, of course; you're free to value animal lives as highly as human lives if that is something you are drawn to.

P.S. Valuing animal lives highly doesn't mean human extinction is neutral; it is still bad because a lot of lives are lost versus the counterfactual where no lives are lost. And if your ethics are total utilitarianism, the value you assign to animal lives doesn't even matter in this scenario, because the number of animal lives is the same either way; the lives that aren't lost don't contribute to the delta. I personally don't find total utilitarianism intuitive, though; we are probably closer to log(total) maximisers.
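To spell out the cancellation argument in that P.S. (my own hedged notation, not the commenter's): write $w_h, w_a$ for per-life weights on humans and animals, and $N_h, N_a$ for the numbers of lives in the counterfactual. If extinction removes all human lives and leaves animal lives unchanged, then

```latex
\Delta V = \underbrace{(w_h N_h + w_a N_a)}_{\text{no extinction}}
         - \underbrace{(0 \cdot w_h + w_a N_a)}_{\text{extinction}}
         = w_h N_h .
```

The animal term appears in both scenarios and cancels, so the size of the loss doesn't depend on $w_a$. (And since $\log$ is a monotone transform, a log(total) maximiser ranks certain outcomes the same way as a total maximiser; the difference only shows up under uncertainty, where maximising expected log-value penalises variance.)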

Greetings. My name is Anna and I am a digital producer. I am glad that there are so many of us here :)

Hi guys, my name is Nathaniel and I'm new to this forum. I found out about EA a few months ago; I've been thinking in these terms (how to maximize my positive impact on the world) my whole life, and it's great to see there's a whole community centered around that question. I'm studying sustainable energy engineering as an undergrad at SFU, and I'm hoping to have a career somewhere in the intersection between this field and computer science (computational sustainability). I haven't done a lot of research into this yet, but it seems like an area with so much potential. I dream big and have thought a lot about how AI could be used to optimize permaculture setups and help transition our food system into decentralized farming co-ops, especially in the wake of climate change.

I'm also interested in animal rights activism and anti-capitalism! 

Does this strike you as unusually threatening compared to other bugs that have been discovered in recent years? Headline aside, the article's tone seemed mild to me, and it looks like several organizations are taking steps to mitigate the issue.

But my knowledge of computer security is rudimentary at best — do the stakes seem very high to you?
