All of Abby Babby's Comments + Replies

Larks' claims seem pretty easy to verify, and I think you failed to address all of them. 

  1. In 1965, UNRWA changed the eligibility requirements to be a Palestinian refugee to include third-generation descendants, and in 1982, it extended it again, to include all descendants of Palestine refugee males, including legally adopted children, regardless of whether they had been granted citizenship elsewhere. This is not how refugee status is determined for basically any other group. Interestingly, under this definition, the majority of the world's Jews would h
... (read more)
1 [anonymous]
I appreciate you taking the time to respond. I've replied to a few of your points below.

(1) I don't have objections to Israel's Law of Return per se. But I wanted to raise the point that Jews who cannot trace ties back to Israel in living memory, or have converted to Judaism, receive rights and opportunities that Palestinians whose parents/grandparents were expelled do not. If ancestral ties are valid grounds for some groups, why not for others? Do you agree this is a double standard?

(2) I think it's important to be specific and provide direct examples about "problematic and hateful" content in textbooks. As outsiders, we're often dealing with contested histories through second-hand accounts, and when history is politicized there are always profoundly different narratives. Someone seen as a martyr or hero by one group can be viewed as a terrorist by another - much like how Nelson Mandela or John Brown were perceived differently depending on time, place, and identity. I don't say this to excuse violence or antisemitism, but to note that perspective matters and it is hard to judge these claims without specific examples.

For example, several early Israeli political leaders were leaders of violent paramilitary groups denounced by some countries for terrorism. Menachem Begin was a leader of Irgun, and went on to become a Prime Minister of Israel and win a Nobel Peace Prize in 1978. Yitzhak Shamir was a leader of Lehi (the Stern Gang), which killed more than 100 Palestinians in Deir Yassin, and went on to become a Prime Minister of Israel. It is not hard for me to imagine that they are honored in some narratives as founders and leaders, and criticized harshly in other narratives for violent acts against civilians.

I'd be interested to know how these figures are discussed in Israeli curricula - just as I'd like to see concrete examples from Palestinian curricula - but I don't have firsthand knowledge of how either side teaches these histories. I would welcome suggesti
8
David Mathers🔸
"But UNRWA doesn't seem like a high integrity organization, and I seriously doubt donating to them is the best way to help the people of Gaza." Almost none of the things you cite are relevant to whether allowing UNRWA access is particularly likely to reduce the hunger currently in Gaza, relative to access for other aid agencies, which seems a very big part of what determines whether it is "the best way". I actually don't think donations to UNRWA will help, because there is no chance in hell of Israel letting them in, and it would be better to try to get them to let in MSF or some other aid agency instead, but that is a separate point. I guess you could hold that UNRWA are genuinely a major factor in keeping the conflict going, and that this means that marginal further funding for them has a non-negligible negative effect, but I think that is extremely implausible: Hamas would exist with or without UNRWA, and presumably whoever the major providers of schools in Gaza are, they will teach in a way roughly compatible with Hamas' demands and current Palestinian public opinion. I expect the marginal impact on the conflict of a donation to UNRWA, or of UNRWA access to Gaza to feed people for a few days, to be zero by any mechanism other than one that goes directly through the effects of more Gazans being fed by literally any organization. Out of interest, do you think Israel should do more to let in other aid organizations, like say MSF, than they are currently doing?

I don't think there are other orgs filling Arkose's niche and the data I've seen suggests Arkose was doing a good job. I know funders have good reasons for not sharing their takes on every grant rejection, but it's a shame there isn't more insight into this. 

Very interesting links, thanks for sharing!

Such an interesting read, thank you!

Thanks for clarifying! Really appreciate you engaging with this. 

Re: It takes a lot longer. It seems like it takes a lot of time for you to monitor the comments on this post and update your top level post in response. The cost of doing that after you post publicly, instead of before, is that people who read your initial post are a lot less likely to read the updated one. So I don't think you save a massive amount of time here, and you increase the chance other people become misinformed about orgs.

Re: Orgs can still respond to the post after it's publi... (read more)

Thanks for being thoughtful about this! Could you clarify what your cost benefit analysis was here? I'm quite curious!

I did it in my head and I haven't tried to put it into words so take this with a grain of salt.

Pros:

  • Orgs get time to correct misconceptions.

(Actually I think that's pretty much the only pro but it's a big pro.)

Cons:

  • It takes a lot longer. I reviewed 28 orgs; it would take me a long time to send 28 emails and communicate with potentially 28 people. (There's a good chance I would have procrastinated on this and not gotten my post out until next year, which means I would have had to make my 2024 donations without publishing this writeup first.)
  • Communica
... (read more)

I appreciate the effort you’ve put into this, and your analysis makes sense based on publicly available data and your worldview. However, many policy organizations are working on initiatives that haven’t been/can't be publicly discussed, which might lead you to make some incorrect conclusions. For example, I'm glad Malo clarified MIRI does indeed work with policymakers in this comment thread.

Tone is difficult to convey online, so I want to clarify I'm saying the next statement gently: I think if you do this kind of report--that a ton of people are reading ... (read more)

I think it's reasonable for a donor to decide where to donate based on publicly available data and to share their conclusions with others. Michael disclosed the scope and limitations of his analysis, and referred to other funders having made different decisions. The implied reader of the post is pretty sophisticated and would be expected to know that these funders may have access to information on initiatives that haven’t been/can't be publicly discussed.

While I appreciate why orgs may not want to release public information on all initiatives, the unavoida... (read more)

9
MichaelDickens
I spent a good amount of time thinking about whether I should do this and I read various arguments for and against it, and I concluded that I don't have that responsibility. There are clear advantages to running posts by orgs, and clear disadvantages, and I decided that the disadvantages outweighed the advantages in this case.

This is a really complex space with lots of moving parts; very cool to see how you've compiled/analyzed everything! Haven't finished going through your report yet, but it looks awesome :)

This looks so cool! Good luck!!!

This course sounds cool! Unfortunately there doesn't seem to be too much relevant material out there. 

This is a stretch, but I think there's probably some cool computational modeling to be done with human value datasets (e.g., 70,000 responses to variations on the trolley problem). What kinds of universal human values can we uncover? https://www.pnas.org/doi/10.1073/pnas.1911517117 

For digestible content on technical AI safety, Robert Miles makes good videos. https://www.youtube.com/c/robertmilesai

2
Geoffrey Miller
Abby - good suggestions, thank you. I think I will assign some Robert Miles videos! And I'll think about the human value datasets.

Thanks for the clarification, too many Carnegies! 

4
christian.r
Thanks! and agreed: https://www.carnegie.org/about/our-history/other-carnegie-organizations/ 

From what I understand, the MacArthur foundation was one of the main funders of nuclear security research, including at the Carnegie Endowment for International Peace, but they massively reduced their funding of nuclear projects and no large funder has replaced them.  https://www.macfound.org/grantee/carnegie-endowment-for-international-peace-2457/

(I've edited this comment, I got confused between the MacArthur foundation and the various Carnegie philanthropic efforts.) 

2
Vasco Grilo🔸
Thanks, Abby. I knew MacArthur had left the space, but not that Carnegie Endowment had recently decreased funding. In any case, I feel like discussions about nuclear risk funding often implicitly assume that a large relative decrease in philanthropic funding means a large increase in marginal cost-effectiveness, but this is unclear to me given it is only a small fraction of total funding. According to Founders Pledge's report on nuclear risk, "total philanthropic nuclear security funding stood at about $47 million per year ["between 2014 and 2020"]". So a 100 % reduction in philanthropic funding would only be a 1.16 % (= 0.047/4.04) relative reduction in total funding, assuming this is 4.04 G$, which I got from the mean of a lognormal distribution with 5th and 95th percentile equal to 1 and 10 G$, corresponding to the lower and upper bound guessed in 80,000 Hours’ profile on nuclear war.
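As a quick sanity check on the 4.04 G$ figure above (a sketch; it assumes the mean was obtained by fitting a lognormal whose 5th and 95th percentiles are the 1 and 10 G$ bounds, which is the standard way to turn those two percentiles into parameters):

```python
# Fit a lognormal to the 5th/95th percentiles (1 and 10 G$) and take its mean.
# Assumption: this percentile-to-parameters mapping is how the 4.04 G$ was derived.
import math
from statistics import NormalDist

p5, p95 = 1.0, 10.0                # G$, from 80,000 Hours' lower/upper bounds
z = NormalDist().inv_cdf(0.95)     # ~1.645, z-score of the 95th percentile

# Parameters of the underlying normal distribution of log(funding)
mu = (math.log(p5) + math.log(p95)) / 2
sigma = (math.log(p95) - math.log(p5)) / (2 * z)

mean = math.exp(mu + sigma**2 / 2)  # mean of the lognormal
print(round(mean, 2))               # -> 4.04

# Relative reduction if all $47M/yr of philanthropic funding disappeared
print(round(100 * 0.047 / mean, 2))  # -> 1.16 (%)
```

The numbers reproduce the comment's figures, so the stated fraction is internally consistent.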

This is so, so, so, wonderful! Thanks for organizing such a fantastic event, as well as sharing all this analysis/feedback/reflection. I want to go next year!!!!

3
Agustín Covarrubias 🔸
Hope to see you next year! 🤝

So glad somebody is finally fixing Swapcard!

1
Ivan Burduk
It's been a long haul, but we've finally convinced their CEO (a Pisces), that redeveloping their core architecture to support star-sign matching would be a good business decision.

Any plans to have this printed on t shirts?

This needs to be discussed internally, but I think a better description is Cooperative with EA (CEA).

These are great things to check! It's especially important to do this kind of due diligence if you're leaving your support network behind (e.g. moving country). Thanks for spelling things out for people new to the job market ❤️

Thanks so much for sharing this, Michelle! It's always strange to visit our past selves, remembering who we used to be and thinking about all of the versions of ourselves we chose not to become. 

I'm glad you became who you are now ❤️

3
Michelle_Hutchinson
<3

Hahaha, thanks for posting!! :)

This is a really interesting question! Unfortunately, it was posted a little too late for me to run it by the team to answer. Hopefully other people interested in this topic can weigh in here. This 80k podcast episode might be relevant? https://80000hours.org/podcast/episodes/michael-webb-ai-jobs-labour-market/

This is an interesting idea! I don't know the answer. 

Thanks for the interesting questions, but unfortunately, they were posted a little too late for the team to answer. Glad to hear writing them helped you clarify your thinking a bit!

On calls, the way I do this is not assume people are part of the EA community, and instead see what their personal mindset is when it comes to doing good. 

I think 80k advisors give good advice, so I hope people take it seriously but don't follow it blindly.

Giving good advice is really hard, and you should seek it out from many different sources. 

You also know yourself better than we do; people are unique and complicated, so if we give you advice that simply doesn’t apply to your personal situation, you should do something else. We are also flawed human beings, and sometimes make mistakes. Personally, I was miscalibrated on how hard it is to get technical AI safety roles, and I think I was overly optimisti... (read more)

Tricky, multifaceted question. So basically, I think some people obsess too much about intelligence and massively undervalue the importance of conscientiousness and getting stuff done in the real world. I think this leads to silly social competitions around who is smarter, as opposed to focusing on what’s actually important, i.e. getting stuff done. If you’re interested in AI Safety technical research, my take is that you should try reading through existing technical research; if it appeals to you, try replicating some papers. If you enjoy that, consider a... (read more)

Alex Lawsen, my ex-supervisor who just left us for Open Phil (miss ya 😭), recently released a great 80k After Hours Podcast on the top 10 mistakes people make! Check it out here: https://80000hours.org/after-hours-podcast/episodes/alex-lawsen-10-career-mistakes/ 

We had a great advising team chat the other day about “sacrificing yourself on the altar of impact”. Basically, we talk to a lot of people who feel like they need to sacrifice their personal health and happiness in order to make the world a better place. 

The advising team would actually prefer for people to build lives that are sustainable; they make enough money to meet their needs, they have somewhere safe to live, their work environment is supportive and non-toxic, etc. We think that setting up a lifestyle where you can comfortably work in the long... (read more)

I love my job so much! I talk to kind hearted people who want to save the world all day, what could be better? 

I guess people sometimes assume we meet people in person, but almost all of our calls are on Zoom. 

Also, sometimes people think advising is about communicating “80k’s institutional views”, which is not really the case; it’s more about helping people think through things themselves and offering help/advice tailored to the specific person we’re talking to. This is a big difference between advising and web content; the latter has to be aime... (read more)

Yeah, I always feel bad when people who want to do good get rejected from advising. In general, you should not update too much on getting rejected from advising. We decide not to invite people for calls for many reasons. For example, there are some people who are doing great work who aren’t at a place yet where we think we can be much help, such as freshmen who would benefit more from reading the (free!) 80,000 Hours career guide than speaking to an advisor for half an hour. 

Also, you can totally apply again 6 months after your initial applicatio... (read more)

Sudhanshu is quite keen on this, haha! I hope that at the moment our advisors are more clever and give better advice than GPT-4. But keeping my eye out for Gemini ;) Seriously though, it seems like an advising chat bot is a very big project to get right, and we don’t currently have the capacity.

This is pretty hard to answer because we often talk through multiple cause areas with advisees. We aren’t trying to tell people exactly what to do; we try to talk through ideas with people so they have more clarity on what they want to do. Most people simply haven’t asked themselves, “How do I define positive impact, and how can I have that kind of impact?” We try to help people think through this question based on their personal moral intuitions.  Our general approach is to discuss our top cause areas and/or cause areas where we think advisees could ... (read more)

Studying economics opens up different doors than studying computer science. I think econ is pretty cool; our world is incredibly complicated, but economic forces shape our lives. Economic forces inform global power conflict, the different aims and outcomes of similar sounding social movements in different countries, and often the complex incentive structures behind our world’s most pressing problems. So studying economics can really help you understand why the world is the way it is, and potentially give you insights into effective solutions. It’s often a ... (read more)

Mid-career professionals are great; you actually have specific skills and a track record of getting things done! One thing to consider is looking through our job board, filtering for jobs that need mid/senior levels of experience, and applying for anything that looks exciting to you. As of writing this answer, we have 392 jobs open for mid/senior level professionals. Lots of opportunities to do good :) 

It would be awesome if there were more mentorship/employment opportunities in AI Safety! Agree this is a frustrating bottleneck. Would love to see more senior people enter this space and open up new opportunities. Definitely the mentorship bottleneck makes it less valuable to try to enter technical AI safety on the margin, although we still think it's often a good move to try, if you have the right personal fit. I'd also add this bottleneck is way lower if you: 1. enter via more traditional academic or software engineer routes rather than via 'EA fellowshi... (read more)

1
Huon Porteous
To add on to Abby, I think it’s true of impactful paths in general, not just AI safety, that people often (though not always) have to spend some time building career capital without having much impact before moving across. I think spending time as a software engineer, or ML engineer before moving across to safety will both improve your chances, and give you a very solid plan B. That said, a lot of safety roles are hard to land, even with experience. As someone who hasn’t coped very well with career rejection myself, I know that can be really tough.

Our advising is most useful to people who are interested in or open to working on the top problem areas we list, so we’re certainly more likely to point people toward working on causes like AI safety than away from them. We don’t want all of our users focusing on our very top causes, but we have the most to offer advisees who want to explore work in the fields we’re most familiar with, which include AI safety, policy, biosecurity, global priorities research, EA community building, and some related paths. The spread in personal fit is also often larger t... (read more)

Thank you very much :)

I totally agree that more life experience is really valuable. For example, I recently updated my bio to reflect how I'm a mom (of two now, ahhhh!); somebody mentioned they booked in with me because they specifically wanted to chat with a parent, so it's great we have an advisor with that kind of experience on the team. If you have recommendations for experienced people who you think would be good advisors, feel free to shoot me a DM with names!

I agree with Jaime's answer about how alignment should avoid deception. (Catastrophic misgeneralization seems like it could fall under your alignment as capabilities argument.)

I sometimes think of alignment as something like "aligned with universal human values" more than "aligned with the specific goal of the human who programmed this model". One might argue there aren't a ton of universal human values. Which is correct! I'm thinking very basic stuff like, "I value there being enough breathable oxygen to support human life". 

Thanks for this very thorough write up. I appreciate this level of transparency on what's needed for two of our community's biggest grantmaking orgs!

I didn't even know you could make a table and then embed youtube videos within the table on EA Forum posts! Very cool. 

-1 [anonymous]
Thanks :)
[This comment is no longer endorsed by its author]
1 [anonymous]
I find this a strange feature of this forum, tbh. I don't think I've ever downvoted anything? But yeah, the best strategy is not to care, imo.

I'm interested to hear why you're asking this question. How would this affect your confidence in certain beliefs and the way you defer?

2
aprilsun
I've become much more familiar with EA; historically I considered the two communities to be similarly rational, and I thought the two were generally a lot more similar in their beliefs than I do now. So when I learn of a difference of opinion, I update my outside view and the extent to which I consider people the relevant experts. E.g., when I learn that Eliezer thinks pigs aren't morally relevant because they're not self-aware, I lose a bit of confidence in my belief that pigs are morally relevant and I become a bit less trustful that any alignment 'solutions' coming from the rationalist community would capture the bulk of what I care about.

I think individuals donating less than $1 million a year need very different advice than big donors moving millions a year (e.g., Dustin Moskovitz). 

If you are in the former category, any smart normal financial advisor can give good advice. It is hard to find smart retail financial advisors who aren't trying to sell you some random high-fee product, so it makes sense for you to collect recommendations. I just don't think they need to be EA aligned; lots of wealthy people ask these exact same questions with the goal of maximizing their donations to whatever their chosen cause is. 

Great to hear the water infrastructure is improving! Seems like a huge boost to quality of life :) 

The mystery of the beans continues though...

A lot of EAs are into mindfulness/meditation/enlightenment. You link to Clearer Thinking, and I consider Spencer Greenberg to be part of our community. If you want to get serious about tractable, scalable mental health interventions, SparkWave (also from Spencer Greenberg) has a bunch of very cool apps that focus on this. 

I'm personally not into enlightenment/awakening because meditation doesn't do much for me, and a lot of the "insights" I hear from "enlightened" people strike me as the sensation of insight more than the discovery of new knowledge. I... (read more)

5
Abby Babby

This is not central to the original question (I agree with you that poverty and preventable diseases are more pressing concerns), but for what it's worth, one shouldn't be all that nonplussed at how the “insights” one might hear from “enlightened” people sound more like the sensation of insight than the discovery of new knowledge. Most people who've found something worthwhile in meditation—and I'm speaking here as an intermediate meditator who's listened to many advanced meditators—would agree that progress/breakthroughs/the goal in meditation is not about gaining new knowledge, but rather, about seeing more clearly what is already here. (And doing so at an experiential level, not a conceptual level.)

3
Rebecca
I think Yanni actually works at SparkWave :)

Random thought: you mention it's not always easy to get clean drinking water. Is there anything in the water in Uganda that could become dangerous to consume if left sitting around for 12 hours? Maybe there are different bean soaking norms in Uganda compared to other countries because you get sick after consuming stagnant water there? (Bean soaking is the norm for other developing countries I'm aware of.)

Also, now I'm really hungry for beans ;)

7
NickLaing
Haha we ate beans just now (as we do a few nights a week). After soaking the beans, the water is discarded and the new water is boiled for an hour. I have fairly high confidence there are no major issues here. As a side note, more and more boreholes and protected springs (usually pipes coming out of the ground) are available around Uganda, and the national water piping system is spreading around cities. Development is real, and this has been one clear, positive improvement over the last few years here.

My hot take is, at the level of donations you're considering, your main consideration should be how impactful your actual job is/how impactful the job that you're pivoting into could be. Seems worth taking a hit on impact done right now if it allows you to become super high impact in the near future. 

[This comment is no longer endorsed by its author]
1
Peter Drotos 🔸
Thank you for taking a look and for the suggestions! Not saying I've tried super hard to talk these through with an advisor but my attempts did not get much attention so far. Completely agree that one should prioritize long-term impact. I'm just saying that in case of a temporary funding constraint, choosing not to donate may prevent other, at least equally promising candidates who are in need of funding from investing into their own careers.

This looks really interesting! Thanks for sharing with the forum!
