I see two new relevant roles on the 80,000 Hours job board right now:
Here's an excerpt from Anthropic's job posting. It's looking for basic familiarity with deep learning and mechanistic interpretability, but mostly nontechnical skills.
In this role you would:
- Partner closely with the interpre…
You might want to share this project idea in the Effective Environmentalism Slack, if you haven't already done so.
Is the application form "EAGxBerkeley, India & Future Forum Organizing Team Expression of Interest" supposed to have questions asking about whether you're interested in organizing the Future Forum? I don't see any; I only see questions about EAGxBerkeley and EAGxIndia.
From my experience with running EA at Georgia Tech, I think the main factors are:
I think I was primarily concerned that negative information about the campaign could get picked up by the media. Thinking it over now, though, that motivation doesn't explain why I didn't post about highly visible negative news coverage (which the media would have already been aware of), or why I didn't raise concerns on a less publicly visible EA platform, such as Slack. Other reasons I didn't write up my concerns about Carrick's chances of being elected might have been that:
Thanks for the suggestion, just copied the critiques of the "especially useful" post over!
Before the election was decided, I agreed with the overall point that donating, phone banking, or door-knocking for the campaign seemed quite valuable. At the same time, I want to mention a couple of critiques I have (copied from my comment on "Some potential lessons from Carrick’s Congressional bid"):
Overall, I agree with Habryka's comment that "negative evidence on the campaign would be 'systematically filtered out'". Although I maxed out donations to the primary campaign and phone banked a bit for the campaign, I had a number of concerns about the campaign that I never saw mentioned in EA spaces. However, I didn't want to raise these concerns for fear that this would negatively affect Carrick's chances of winning the election.
Now that Carrick's campaign is over, I feel more free to write my concerns. These included:
I'd recommend cross-posting your critiques of the "especially useful" post onto that post — will make it easier for anyone who studies this campaign later (I expect many people will) to learn from you.
Another introductory post about why one may want to care about insect welfare: Does Insect Suffering Bug You? - Faunalytics (Jesse Gildesgame, 2016).
Recently, activists have started campaigning against silk because they believe the production process is cruel to silkworms. Many people respond to these campaigns with skepticism: who cares about silkworms? It’s easy to feel for the chinchillas, foxes, and other furry mammals used in fur clothing. But insects like silkworms are a harder sell. It seems crazy to grant moral consideration to a bug.
Nonetheless, t…
The Qualia Research Institute might be funding-constrained, but it's questionable whether it's doing good work; for example, see this comment about its Symmetry Theory of Valence.
I see, I thought you were referring to reading a script about EA during a one-on-one conversation. I don't see anything wrong with presenting a standardized talk, especially if you make it clear that EA is a global movement and not just a thing at your university. I would not be surprised if a local chapter of, say, Citizens' Climate Lobby, used an introductory talk created by the national organization rather than the local chapter.
I also misunderstood the original post as more like a "sales script" and less about talks. I'm also surprised that people find having scripts for intro talks creepy, but perhaps secular Western society is just extremely oversensitive here (which is a preference we should respect if it's our target audience!)
introducing people to EA by reading prepared scripts
Huh, I'm not familiar with this; can you post a link to an example script or message it to me?
I agree that reading a script verbatim is not great, and privately discussed info in a CRM seems like an invasion of privacy.
I get the impression many orgs set up to support EA groups have some version of this. Here are some I found on the internet:
Global Challenges Project has a "ready-to-go EA intro talk transcript, which you can use to run your own intro talk" here: https://handbook.globalchallengesproject.org/packaged-programs/intro-talks
EA Groups has "slides and a suggested script for an EA talk" here: https://resources.eagroups.org/events-program-ideas/single-day-events/introductory-presentations
To be fair, in both cases there is also some encouragement to adapt the talks…
Privately discussed info in a CRM seems like an invasion of privacy.
I've seen non-EA college groups do this kind of thing and it seems quite normal. Greek organizations track which people come to which pledge events, publications track whether students have hit their article quota to join staff, and so on.
Doesn't seem like an invasion of privacy for an org's leaders to have conversations like "this person needs to write one more article to join staff" or "this person was hanging out alone for most of the last event, we should try to help them feel more comfortable next time".
Would you be interested in supporting EA groups abroad to recruit local talent to work on impactful causes? I'm not sure what country you're from or what languages you're fluent in. But even if you only know English, it seems like you could potentially help with EA Philippines, EA India, EA University of Cape Town, EA Nigeria, or EA groups in the UK, US, and Australia. You can browse groups and get in contact with them through the EA Forum.
To get a sense of why this could be valuable, see "Building effective altruism - 80,000 Hours" and "A huge opportunity…
Besides distillation, another option to look into could be the Communications Specialist or Senior Communications Specialist contractor roles at the Fund for Alignment Research.
Could 80,000 Hours make it clear on their job board which roles they think are valuable only for career capital and aren't directly impactful? It could just involve adding a quick boilerplate statement to the job details, such as:
Relevant problem area: AI safety & policy
Wondering why we’ve listed this role?
We think this role could be a great way to develop relevant career capital, although other opportunities would be better for directly making an impact.
Perhaps this suggestion is unworkable for various reasons. But I think it's easy for people to think…
My bad, I meant to write "Part-time volunteering might not provide as much of an opportunity to build unique skills, compared to working full-time on direct work". Fixed.
Is it possible to have a 10% version of pursuing a high-impact career? Instead of donating 10% of your income, you would donate a couple of hours a week to high-impact volunteering. I've listed a couple of opportunities here. In my opinion, many of these would count as a high-impact career if you did them full-time.
I expect 10 people donating 10% of their time to be less effective than 1 person using 100% of their time because you don't get to reap the benefits of learning for the 10% people. Example: if people work for 40 years, then 10 people donating 10% of their time gives you 10 years with 0 experience, 10 with 1 year, 10 with 2 years, and 10 with 3 years; however, if someone is doing EA work full-time, you get 1 year with 0 exp, 1 with 1, 1 with 2, etc. I expect 1 year with 20 years of experience to plausibly be as good/useful as 10 with 3 years of experience…
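As a rough, hypothetical illustration of this compounding point (my own sketch, not from the comment above), assuming each year's output is simply proportional to the full-time-equivalent experience accumulated so far:

```python
# Hypothetical sketch: compare 10 people each giving 10% of their time with
# 1 person working full-time over a 40-year career, under the assumption
# (not from the original comment) that output in a given year is proportional
# to accumulated full-time-equivalent (FTE) years of experience.

CAREER_YEARS = 40

def total_output(num_people: int, fraction: float) -> float:
    """Sum experience-weighted output across all people and years."""
    output = 0.0
    for _ in range(num_people):
        experience = 0.0
        for _year in range(CAREER_YEARS):
            output += fraction * experience  # this year's contribution
            experience += fraction           # FTE experience gained this year
    return output

part_timers = total_output(num_people=10, fraction=0.1)  # 10 people at 10% time
full_timer = total_output(num_people=1, fraction=1.0)    # 1 person full-time

print(f"10 part-timers: {part_timers:.0f} units")  # ~78
print(f"1 full-timer:   {full_timer:.0f} units")   # 780
```

Both groups supply the same 40 FTE-years of labor, but under this (admittedly simplistic) linear-learning assumption the single full-timer produces roughly 10x the experience-weighted output, which matches the intuition in the comment above.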
most AI experts think advanced AI is much likelier to wipe out human life than climate change
I'm not sure this is true, unless you use a very restrictive definition of "AI expert". I would be surprised if most AI researchers saw AI as a greater threat than climate change.
Meta: This post was also cross-posted to LessWrong.
Companies and governments will find it strategically valuable to develop advanced AIs which are able to execute creative plans in pursuit of a goal, achieving real-world outcomes. Current large language models have a rich understanding of the world which generalizes to other domains, and reinforcement learning agents already achieve superhuman performance at various games. With further advancements in AI research and compute, we are likely to see the development of human-level AI this century. But for a wide variety of goals, it is often valuable to pursue…
But I am a bit at a loss as to why people in the AI safety field think it is possible to build safe AI systems in the first place. I guess as long as it is not proven that the properties of safe AI systems are mutually contradictory, you could assume it is theoretically possible. When it comes to ML, the best performance in practice is sadly often worse than the theoretical best.
To me, this belief that AI safety is hard or impossible would imply that AI x-risk is quite high. Then, I'd think that AI safety is very important but unfortunately intractable…
For what it's worth, even though I prioritize longtermist causes, reading
Maybe it depends on the cause area but the price I'm willing to pay to attract/retain people who can work on meta/longtermist things is just so high that it doesn't seem worth factoring in things like a few hundred pounds wasted on food.
made me fairly uncomfortable, even though I don't disagree with the substance of the comment, as well as
2) All misallocations of money within EA community building is lower than misallocations of money caused by donations that were wasted by donating t…
Free food and free conferences are somewhat standard among various non-EA university groups. It's easy to object to whether they're an effective use of money, but I don't think they're excessive except under the EA lens of maximizing cost-effectiveness. I think if we reframe EA university groups as being about empowering students to tackle pressing global issues through their careers, and avoid mentioning effective donations and free food in the same breath, then it's less confusing why there is free stuff being offered. (Besides apparent…
I've considered this before and I'm not sure I agree. If I'm at +10 utility for the next 10 years and afterwards will be at +1,000,000 utility for the following 5,000 years, I might just feel like skipping ahead to the +1,000,000 utility, simply out of impatience to feel even better.
Got it, I'm surprised by how little time it took to organize HEA's spring retreat. What programming was involved?
For me, the main value of retreats/conferences has been forming lots of connections, but I haven't become significantly more motivated to be more productive, impactful, or ambitious. I have a couple of questions which I think would help organizers decide whether they should run more retreats:
Yep, great questions -- thanks, Michael. To respond to your first thing, I definitely don't expect that they'll have those effects on everybody, just that they are much more likely to do so than pretty much any other standard EA group programming.
My experience with EA at Georgia Tech is that a relatively small proportion of people who complete our intro program participate in follow-up programs, so I think it's valuable to have content you think is important in your initial program instead of hoping that they'll learn it in a later program. I think plenty of Georgetown students would be interested in signing up for an AI policy/governance program, even if it includes lots of x-risk content.
As a community that values good epidemics
good epistemics?
Thanks for posting about this; I had no idea this was happening to a significant extent.
To the extent that the program is meant to provide an introduction to "catastrophic and existential risk reduction in the context of AI/ML", I think it should include some more readings on the alignment problem, existential risk from misaligned AI, transformative AI or superintelligence. I think Mauricio Baker's AI Governance Program has some good readings for this.
I think lack of diversity in EA is largely due to founder effects, and EA is working on this. There's an emerging effort to have EA outreach in more global south countries like India, the Philippines, and Mexico, and local EA community-builders are working hard on that.
For what it's worth, it seems to me that EA university groups have more racial and gender diversity than the broader EA movement, which I think is because they reach a broader base of people, compared to the type of people who randomly stumble across EA on the internet.
The EA community is ex…
Any chance the price could be reduced to lower the barrier to pre-ordering? It costs $30 on Amazon US for a hard copy, which is a lot to ask for.
you can usually find small online projects throughout the year
Where?
Red team: Argue that moral circle expansion is or is not an effective lever for improving the long-term future. Subpoint: challenge the claim that ending factory farming effectively promotes moral circle expansion to wild animals or digital sentient beings.
Related:
Red team: Certain types of prosaic AI alignment (e.g., arguably InstructGPT) promote the illusion of safety without genuinely reducing existential risk from AI, or are capabilities research disguised as safety research. (A claim that I've heard from EleutherAI, rather indelicately phrased, and would like to see investigated)
Red team: Is existential security likely, assuming that we avoid existential catastrophe for a century or two?
Some reasons that I have to doubt that existential security is the default outcome we should expect:
Red team: Is the expected value of extinction risk reduction positive?
Relevant articles:
I helped start WikiProject Effective Altruism a few months ago, but I think that the items on our WikiProject's to-do list are not as valuable as, say, organizing a local EA group at a top university, or writing a useful post on the EA Forum. One tricky thing about Wikipedia is that you have to be objective, so while someone might read an article on effective altruism and be like "wow, this is a cool idea", you can't engage them further. I also think that the articles are already pretty decent.
Richard Ngo recently wrote a post on careers in AI safety.
I think you could divide AI safety careers into six categories. I've written some quick tentative thoughts on how you could get started, though I'm certainly not an expert in this.
Can someone provide a more realistic example of partial alignment causing s-risk than SignFlip or MisconfiguredMinds? I don't see either of these as something that you'd be reasonably likely to get by, say, only doing 95% of the alignment research necessary rather than 110%.
I understand the post's claim to be as follows. Broadly speaking, EAs go for global health and development if they want to help people through rigorously proven interventions. They go for improving the long-term future if they want to maximize expected value. And they go for farmed animal welfare if they care a lot about animals, but the injustice of factory farming is a major motivation for why many EAs care about it. This makes a lot of sense to me and I wholeheartedly agree.
That said, I think the selection of the main three cause areas – global h…
It might be helpful to talk with Koki Ajiri from EA NYU Abu Dhabi, as they’re trying to do EA outreach in the UAE, though they’re focusing on English-speaking universities.
I’d definitely recommend applying for funding from the EA Infrastructure Fund! My main question is, how sure are you that you can get articles published in major newspapers?
By the way, it looks like you asked a duplicate question: Labeling cash transfers to solve charcoal-related problems? - EA Forum (effectivealtruism.org)
Great power conflict is generally considered an existential risk factor, rather than an existential risk per se – it increases the chance of existential risks like bioengineered pandemics, nuclear war, and transformative AI, or lock-in of bad values (Modelling great power conflict as an existential risk factor, The Precipice chapter 7).
I can define a new existential risk factor that could be as great as all existential risks combined – the fact that our society and the general populace do not sufficiently prioritize existential risks, for example. So no, I…
Is work at Credo AI targeted at trying to reduce existential risk from advanced AI (whether from misalignment, accident, misuse, or structural risks)?
Credo AI is not specifically targeted at reducing existential risk from AI. We are working with companies and policy makers who are converging on a set of responsible AI principles that need to be thought out better and implemented.
-
Speaking for myself now - I became interested in AI safety and governance because of the existential risk angle. As we have talked to companies and policy makers, it is clear that most groups do not think about AI safety in that way. They are concerned with ethical issues like fairness - either for moral reasons, or, more…
I think more readers still prefer print books to e-books. You might as well donate the books in both the physical and digital formats, and probably also as an audiobook.
It looks like libraries don't generally have an official way for you to donate print books virtually or to donate e-books, so I think you would have to inquire with them about whether you can make a donation and ask them to use that to buy specific books. Note that the cost of e-book licenses to libraries is many times the consumer sale price.
I really like this project idea! It's ambitious and yet approachable, and it seems that a lot of this work could be delegated to virtual personal assistants. Before starting the project, it seems that it would be valuable to quickly get a sense of how often EA books in libraries are read. For example, you could see how many copies of Doing Good Better are currently checked out, or perhaps you could nicely ask a library if they could tell you how many times a given book has been checked out.
Some quick thoughts:
- EA Virtual Programs should be fine in my opinion, especially if you think you have more promising things to do than coordinating logistics for a program or facilitating cohorts
- The virtual Intro EA Program only has discussions in English and Spanish. If group members would much prefer to have discussions in Hungarian instead, it might be useful for you to find some Hungarian-speaking facilitators.
- Like Jaime commented, if you're delegating EA programs to EA Virtual Programs, it's best for you to have some contact with participants, especi…