Thanks for clarifying! Really appreciate you engaging with this.
Re: It takes a lot longer. It seems like it takes a lot of time for you to monitor the comments on this post and update your top level post in response. The cost of doing that after you post publicly, instead of before, is that people who read your initial post are a lot less likely to read the updated one. So I don't think you save a massive amount of time here, and you increase the chance other people become misinformed about orgs.
Re: Orgs can still respond to the post after it's publi...
I did this in my head and haven't tried to put it into words, so take it with a grain of salt.
Pros:
(Actually, I think that's pretty much the only pro, but it's a big pro.)
Cons:
I appreciate the effort you’ve put into this, and your analysis makes sense based on publicly available data and your worldview. However, many policy organizations are working on initiatives that haven’t been (or can't be) publicly discussed, which might lead you to draw some incorrect conclusions. For example, I'm glad Malo clarified in this comment thread that MIRI does indeed work with policymakers.
Tone is difficult to convey online, so I want to clarify I'm saying the next statement gently: I think if you do this kind of report--that a ton of people are reading ...
I think it's reasonable for a donor to decide where to donate based on publicly available data and to share their conclusions with others. Michael disclosed the scope and limitations of his analysis, and noted that other funders have made different decisions. The implied reader of the post is pretty sophisticated and would be expected to know that these funders may have access to information on initiatives that haven’t been (or can't be) publicly discussed.
While I appreciate why orgs may not want to release public information on all initiatives, the unavoida...
This course sounds cool! Unfortunately there doesn't seem to be too much relevant material out there.
This is a stretch, but I think there's probably some cool computational modeling to be done with human value datasets (e.g., 70,000 responses to variations on the trolley problem). What kinds of universal human values can we uncover? https://www.pnas.org/doi/10.1073/pnas.1911517117
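For flavor, here's a toy sketch of what "uncovering universal values" could look like computationally. Everything below is made up for illustration: the variant names, the agreement rates, and the 1,000 simulated respondents bear no relation to the actual Moral Machine dataset, which is far richer.

```python
# Hypothetical sketch, not the real Moral Machine pipeline: given
# per-respondent yes/no judgments on trolley-style dilemma variants,
# flag variants with near-universal agreement, one crude way to
# operationalize "universal human values". All data is simulated.
import random

random.seed(0)

# Assumed variant names and underlying agreement rates (invented).
TRUE_RATES = {
    "spare_child": 0.95,
    "spare_adult": 0.70,
    "spare_pet": 0.30,
    "spare_group_of_5": 0.92,
}

# Simulate 1,000 respondents answering yes/no to each variant.
responses = {
    variant: [random.random() < rate for _ in range(1000)]
    for variant, rate in TRUE_RATES.items()
}

# Observed agreement rate per variant.
rates = {v: sum(r) / len(r) for v, r in responses.items()}

# Variants nearly everyone endorses.
near_universal = sorted(v for v, rate in rates.items() if rate > 0.85)
print(near_universal)
```

With real data you'd want clustering or ideal-point models rather than a flat threshold, but even this simple version shows the shape of the question: which judgments survive across the whole population?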
For digestible content on technical AI safety, Robert Miles makes good videos. https://www.youtube.com/c/robertmilesai
From what I understand, the MacArthur Foundation was one of the main funders of nuclear security research, including at the Carnegie Endowment for International Peace, but they massively reduced their funding of nuclear projects and no large funder has replaced them. https://www.macfound.org/grantee/carnegie-endowment-for-international-peace-2457/
(I've edited this comment; I got confused between the MacArthur Foundation and the various Carnegie philanthropic efforts.)
This is a really interesting question! Unfortunately, it was posted a little too late for me to run it by the team to answer. Hopefully other people interested in this topic can weigh in here. This 80k podcast episode might be relevant? https://80000hours.org/podcast/episodes/michael-webb-ai-jobs-labour-market/
I think 80k advisors give good advice, so I hope people take it seriously but don't follow it blindly.
Giving good advice is really hard, and you should seek it out from many different sources.
You also know yourself better than we do; people are unique and complicated, so if we give you advice that simply doesn’t apply to your personal situation, you should do something else. We are also flawed human beings, and sometimes make mistakes. Personally, I was miscalibrated on how hard it is to get technical AI safety roles, and I think I was overly optimisti...
Tricky, multifaceted question. So basically, I think some people obsess too much about intelligence and massively undervalue the importance of conscientiousness and getting stuff done in the real world. I think this leads to silly social competitions around who is smarter, as opposed to focusing on what’s actually important, i.e. getting stuff done. If you’re interested in AI Safety technical research, my take is that you should try reading through existing technical research; if it appeals to you, try replicating some papers. If you enjoy that, consider a...
We had a great advising team chat the other day about “sacrificing yourself on the altar of impact”. Basically, we talk to a lot of people who feel like they need to sacrifice their personal health and happiness in order to make the world a better place.
The advising team would actually prefer for people to build lives that are sustainable: they make enough money to meet their needs, they have somewhere safe to live, their work environment is supportive and non-toxic, etc. We think that setting up a lifestyle where you can comfortably work in the long...
I love my job so much! I talk to kind-hearted people who want to save the world all day. What could be better?
I guess people sometimes assume we meet people in person, but almost all of our calls are on Zoom.
Also, sometimes people think advising is about communicating “80k’s institutional views”, which is not really the case; it’s more about helping people think through things themselves and offering help/advice tailored to the specific person we’re talking to. This is a big difference between advising and web content; the latter has to be aime...
Yeah, I always feel bad when people who want to do good get rejected from advising. In general, you should not update too much on getting rejected from advising. We decide not to invite people for calls for many reasons. For example, there are some people doing great work who aren’t yet at a place where we think we can be of much help, such as freshmen who would benefit more from reading the (free!) 80,000 Hours career guide than speaking to an advisor for half an hour.
Also, you can totally apply again 6 months after your initial applicatio...
This is pretty hard to answer because we often talk through multiple cause areas with advisees. We aren’t trying to tell people exactly what to do; we try to talk through ideas with people so they have more clarity on what they want to do. Most people simply haven’t asked themselves, “How do I define positive impact, and how can I have that kind of impact?” We try to help people think through this question based on their personal moral intuitions. Our general approach is to discuss our top cause areas and/or cause areas where we think advisees could ...
Studying economics opens up different doors than studying computer science. I think econ is pretty cool; our world is incredibly complicated, and economic forces shape our lives. Economic forces inform global power conflict, the different aims and outcomes of similar-sounding social movements in different countries, and often the complex incentive structures behind our world’s most pressing problems. So studying economics can really help you understand why the world is the way it is, and potentially give you insights into effective solutions. It’s often a ...
Mid-career professionals are great; you actually have specific skills and a track record of getting things done! One thing to consider is looking through our job board, filtering for jobs that need mid/senior levels of experience, and applying for anything that looks exciting to you. As of me writing this answer, we have 392 jobs open for mid/senior level professionals. Lots of opportunities to do good :)
It would be awesome if there were more mentorship/employment opportunities in AI safety! Agree this is a frustrating bottleneck. Would love to see more senior people enter this space and open up new opportunities. The mentorship bottleneck definitely makes it less valuable on the margin to try to enter technical AI safety, although we still think it's often a good move to try if you have the right personal fit. I'd also add this bottleneck is way lower if you: 1. enter via more traditional academic or software engineer routes rather than via 'EA fellowshi...
Our advising is most useful to people who are interested in or open to working on the top problem areas we list, so we’re certainly more likely to point people toward working on causes like AI safety than away from them. We don’t want all of our users focusing on our very top causes, but we have the most to offer advisees who want to explore work in the fields we’re most familiar with, which include AI safety, policy, biosecurity, global priorities research, EA community building, and some related paths. The spread in personal fit is also often larger t...
I totally agree that more life experience is really valuable. For example, I recently updated my bio to reflect how I'm a mom (of two now, ahhhh!); somebody mentioned they booked in with me because they specifically wanted to chat with a parent, so it's great we have an advisor with that kind of experience on the team. If you have recommendations for experienced people who you think would be good advisors, feel free to shoot me a DM with names!
I agree with Jaime's answer about how alignment should avoid deception. (Catastrophic misgeneralization seems like it could fall under your alignment as capabilities argument.)
I sometimes think of alignment as something like "aligned with universal human values" more than "aligned with the specific goal of the human who programmed this model". One might argue there aren't a ton of universal human values. Which is correct! I'm thinking very basic stuff like, "I value there being enough breathable oxygen to support human life".
I think individuals donating less than $1 million a year need very different advice than big donors moving millions a year (e.g., Dustin Moskovitz).
If you are in the former category, any smart, ordinary financial advisor can give you good advice. That said, it is hard to find retail financial advisors who aren't trying to sell you some random high-fee product, so it makes sense to collect recommendations. I just don't think they need to be EA-aligned; lots of wealthy people ask these exact same questions with the goal of maximizing their donations to whatever their chosen cause is.
A lot of EAs are into mindfulness/meditation/enlightenment. You link to Clearer Thinking, and I consider Spencer Greenberg to be part of our community. If you want to get serious about tractable, scalable mental health interventions, SparkWave (also from Spencer Greenberg) has a bunch of very cool apps that focus on this.
I'm personally not into enlightenment/awakening because meditation doesn't do much for me, and a lot of the "insights" I hear from "enlightened" people strike me as the sensation of insight more than the discovery of new knowledge. I...
This is not central to the original question (I agree with you that poverty and preventable diseases are more pressing concerns), but for what it's worth, one shouldn't be all that surprised that the “insights” one hears from “enlightened” people sound more like the sensation of insight than the discovery of new knowledge. Most people who've found something worthwhile in meditation (and I'm speaking here as an intermediate meditator who's listened to many advanced meditators) would agree that progress/breakthroughs/the goal in meditation is not about gaining new knowledge, but rather about seeing more clearly what is already here. (And doing so at an experiential level, not a conceptual level.)
Random thought: you mention it's not always easy to get clean drinking water. Is there anything in the water in Uganda that could become dangerous to consume if left sitting around for 12 hours? Maybe there are different bean-soaking norms in Uganda compared to other countries because you get sick after consuming stagnant water there? (Bean soaking is the norm in the other developing countries I'm aware of.)
Also, now I'm really hungry for beans ;)
My hot take: at the level of donations you're considering, your main consideration should be how impactful your actual job is, or how impactful the job you're pivoting into could be. It seems worth taking a hit on impact right now if it allows you to become super high impact in the near future.
Larks' claims seem pretty easy to verify, and I think you failed to address all of them.
- In 1965, UNRWA changed the eligibility requirements for Palestinian refugee status to include third-generation descendants, and in 1982 it extended them again to include all descendants of Palestine refugee males, including legally adopted children, regardless of whether they had been granted citizenship elsewhere. This is not how refugee status is determined for basically any other group. Interestingly, under this definition, the majority of the world's Jews would h