All of michaelchen's Comments + Replies

What's the value of creating my own fellowship program when I can direct people to the virtual programs?

Some quick thoughts:

  • EA Virtual Programs should be fine in my opinion, especially if you think you have more promising things to do than coordinating logistics for a program or facilitating cohorts
  • The virtual Intro EA Program only has discussions in English and Spanish. If group members would much prefer to have discussions in Hungarian instead, it might be useful for you to find some Hungarian-speaking facilitators.
  • Like Jaime commented, if you're delegating EA programs to EA Virtual Programs, it's best for you to have some contact with participants, especi
... (read more)
What does the Project Management role look like in AI safety?

I see two new relevant roles on the 80,000 Hours job board right now:

Here's an excerpt from Anthropic's job posting. It's looking for basic familiarity with deep learning and mechanistic interpretability, but mostly nontechnical skills.

In this role you would:

  • Partner closely with the interpre
... (read more)
The real state of climate solutions - want to help?

You might want to share this project idea in the Effective Environmentalism Slack, if you haven't already done so.

Apply to help run EAGxIndia, Berkeley, Singapore and Future Forum!

Is the application form "EAGxBerkeley, India & Future Forum Organizing Team Expression of Interest" supposed to have questions asking about whether you're interested in organizing the Future Forum? I don't see any; I only see questions about EAGxBerkeley and EAGxIndia.

2Vaidehi Agarwalla3d
Yes, I just added it - thanks for the flag!
Most students who would agree with EA ideas haven't heard of EA yet (results of a large-scale survey)

From my experience with running EA at Georgia Tech, I think the main factors are:

  • not prioritizing high-impact causes
  • not being interested in changing their career plans
  • lack of high-impact career opportunities that fit their career interests, or not knowing about them
  • not having the skills to get high-impact internships or jobs
Some potential lessons from Carrick’s Congressional bid

I think I was primarily concerned that negative information about the campaign could get picked up by the media. Thinking it over now though, that motivation doesn't make sense for not posting about highly visible negative news coverage (which the media would have already been aware of) or not posting concerns on a less publicly visible EA platform, such as Slack. Other factors for why I didn't write up my concerns about Carrick's chances of being elected might have been that:

  • no other EAs seemed to be posting much negative information about the campaign, a
... (read more)
Some potential lessons from Carrick’s Congressional bid

Thanks for the suggestion, just copied the critiques of the "especially useful" post over!

Why Helping the Flynn Campaign is especially useful right now

Before the election was decided, I agreed with the overall point that donating, phone banking, or door-knocking for the campaign seemed quite valuable. At the same time, I want to mention a couple of critiques I have (copied from my comment on "Some potential lessons from Carrick’s Congressional bid").

  • The post claims "The race seems to be quite tight. According to this poll, Carrick is in second place among likely Democratic voters by 4% (14% of voters favor Flynn, 18% favor Salinas), with a margin of error of +/- 4 percentage points." However, it declines to
... (read more)
Some potential lessons from Carrick’s Congressional bid

Overall, I agree with Habryka's comment that "negative evidence on the campaign would be 'systematically filtered out'". Although I maxed out donations to the primary campaign and phone banked a bit for the campaign, I had a number of concerns about the campaign that I never saw mentioned in EA spaces. However, I didn't want to raise these concerns for fear that this would negatively affect Carrick's chances of winning the election.

Now that Carrick's campaign is over, I feel more free to write my concerns. These included:

  • The vast majority of media coverage
... (read more)

I'd recommend cross-posting your critiques of the "especially useful" post onto that post — will make it easier for anyone who studies this campaign later (I expect many people will) to learn from you.

9Aaron Gertler8d
Thanks for sharing all of this! I'm curious about your fear that these comments would negatively affect Carrick's chances. What was the mechanism you expected? The possibility of reduced donations/volunteering from people on the Forum? The media picking up on critical comments? If "reduced donations" were a factor, would you also be concerned about posting criticism of other causes you thought were important for the same reason? I'm still working out what makes this campaign different from other causes (or maybe there really are similar issues across a bunch of causes). One thing that comes to mind is time-sensitivity: if you rethink your views on a different cause later, you can encourage more donations to make up for a previous reduction. If you rethink views on a political campaign after Election Day, it's too late. If that played a role, I can think of other situations that might exert the same pressure — for example, organizations running out of runway having a strong fundraising advantage if people are worried about dooming them. Not sure what to do about that, and would love to hear ideas (from anyone, this isn't specifically aimed at Michael).
Why should I care about insects?

Another introductory post about why one may want to care about insect welfare: Does Insect Suffering Bug You? - Faunalytics (Jesse Gildesgame, 2016).

Recently, activists have started campaigning against silk because they believe the production process is cruel to silkworms. Many people respond to these campaigns with skepticism: who cares about silkworms? It’s easy to feel for the chinchillas, foxes, and other furry mammals used in fur clothing. But insects like silkworms are a harder sell. It seems crazy to grant moral consideration to a bug.

Nonetheless, t

... (read more)
If EA is no longer funding constrained, why should *I* give?

The Qualia Research Institute might be funding-constrained, but it's questionable whether it's doing good work; for example, see this comment about its Symmetry Theory of Valence.

3Question Mark9d
Even if the Symmetry Theory of Valence turns out to be completely wrong, that doesn't mean that QRI will fail to gain any useful insight into the inner mechanics of consciousness. Andrew Zuckerman previously sent me this comment [https://forum.effectivealtruism.org/posts/sHHRneQDMyjREMmhE/?commentId=7TttJrcX82gNPBRni] on QRI's pathway to impact, in response to Nuño Sempere's criticisms of QRI. The expected value of QRI's research may therefore have a very high degree of variance. It's possible that their research will amount to almost nothing, but it's also possible that their research could turn out to have a large impact. As far as I know, there aren't any other EA-aligned organizations doing the sort of consciousness research that QRI is doing.
Bad Omens in Current Community Building

I see, I thought you were referring to reading a script about EA during a one-on-one conversation. I don't see anything wrong with presenting a standardized talk, especially if you make it clear that EA is a global movement and not just a thing at your university. I would not be surprised if a local chapter of, say, Citizens' Climate Lobby, used an introductory talk created by the national organization rather than the local chapter.

I also misunderstood the original post as being more like a "sales script" and less about talks. I'm likewise surprised that people find having scripts for intro talks creepy, but perhaps secular Western society is just extremely oversensitive here (which is a preference we should respect if it's our target audience!)

Bad Omens in Current Community Building

introducing people to EA by reading prepared scripts

Huh, I'm not familiar with this. Can you post a link to an example script or message it to me?

I agree that reading a script verbatim is not great, and privately discussed info in a CRM seems like an invasion of privacy.

I get the impression many orgs set up to support EA groups have some version of this. Here are some I found on the internet:

Global Challenges Project has a "ready-to-go EA intro talk transcript, which you can use to run your own intro talk" here: https://handbook.globalchallengesproject.org/packaged-programs/intro-talks

EA Groups has "slides and a suggested script for an EA talk" here: https://resources.eagroups.org/events-program-ideas/single-day-events/introductory-presentations

To be fair, in both cases there is also some encouragement to adapt the talks,... (read more)

Privately discussed info in a CRM seems like an invasion of privacy.

I've seen non-EA college groups do this kind of thing and it seems quite normal. Greek organizations track which people come to which pledge events, publications track whether students have hit their article quota to join staff, and so on.

Doesn't seem like an invasion of privacy for an org's leaders to have conversations like "this person needs to write one more article to join staff" or "this person was hanging out alone for most of the last event, we should try and help them feel more comfortable next time".

What are your recommendations for technical AI alignment podcasts?
  • AXRP
  • Nonlinear Library: Alignment Forum
  • Towards Data Science (the podcast has had an AI safety skew since 2020)
  • Alignment Newsletter Podcast
Volunteering abroad

Would you be interested in supporting EA groups abroad to recruit local talent to work on impactful causes? I'm not sure what country you're from or what languages you're fluent in. But even if you only know English, it seems like you could potentially help with EA Philippines, EA India, EA University of Cape Town, EA Nigeria, or EA groups in the UK, US, and Australia. You can browse groups and get in contact with them through the EA Forum.

To get a sense of why this could be valuable, see "Building effective altruism - 80,000 Hours" and "A huge opportunity... (read more)

2jwpieters19d
I'm not sure what your background is, but I agree that getting in touch with local EA groups is a great place to start. We (University of Cape Town) would certainly be happy to have somebody offer help
Comparative advantage does not mean doing the thing you're best at

Besides distillation, another option to look into could be the Communications Specialist or Senior Communications Specialist contractor roles at the Fund for Alignment Research.

There are currently more than 100 open EA-aligned tech jobs

Could 80,000 Hours make it clear on their job board which roles they think are valuable only for career capital and aren't directly impactful? It could just involve adding a quick boilerplate statement to the job details, such as:

Relevant problem area: AI safety & policy

Wondering why we’ve listed this role?

We think this role could be a great way to develop relevant career capital, although other opportunities would be better for directly making an impact.

Perhaps this suggestion is unworkable for various reasons. But I think it's easy for people to think... (read more)

Increasing Demandingness in EA

My bad, I meant to write "Part-time volunteering might not provide as much of an opportunity to build unique skills, compared to working full-time on direct work". Fixed.

Increasing Demandingness in EA

Is it possible to have a 10% version of pursuing a high-impact career? Instead of donating 10% of your income, you would donate a couple of hours a week to high-impact volunteering. I've listed a couple of opportunities here. In my opinion, many of these would count as a high-impact career if you did them full-time.

  • Organizing a local EA group
    • Or in-person/remote volunteering for a university EA group, to help with managing Airtable, handling operations, designing events, facilitating discussions, etc. Although I don't know that any local EA groups currently accept rem
... (read more)
3Holly Morgan21d
It is.
4martin_glusker1mo
I think in most cases, this doesn't look like using 10% of your time, but rather trading off an optimally effective career for a less effective career that improves along selfish dimensions such as salary, location, work/life balance, personal engagement, etc. This picture is complicated by the fact that many of these characteristics are not independent of effectiveness, so it isn't clean. Personal fit for a career is a good example of this because it's both selfish and you'll be better at your job if you find a career with relatively better fit.
3elifland1mo
Related: Scalably using labour [https://forum.effectivealtruism.org/topics/scalably-using-labour] tag and the concept of Task Y [https://forum.effectivealtruism.org/posts/uWWsiBdnHXcpr7kWm/can-the-ea-community-copy-teach-for-america-looking-for-task]

I expect 10 people donating 10% of their time to be less effective than 1 person using 100% of their time because you don't get to reap the benefits of learning for the 10% people. Example: if people work for 40 years, then 10 people donating 10% of their time gives you 10 years with 0 experience, 10 with 1 year, 10 with 2 years, and 10 with 3 years; however, if someone is doing EA work full-time, you get 1 year with 0 exp, 1 with 1, 1 with 2, etc. I expect 1 year with 20 years of experience to plausibly be as good/useful as 10 with 3 years of experience.... (read more)
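To make the arithmetic explicit, here is a minimal sketch (my own illustration, not part of the original comment) that tallies the experience level behind each delivered person-year under the comment's assumptions: a 40-year career, and the simplification that each year-equivalent of EA work adds one year of relevant experience.

```python
CAREER_YEARS = 40      # assumed working lifetime, from the comment
VOLUNTEERS = 10        # people each giving 10% of their time
TIME_FRACTION = 0.10

# Scenario A: each volunteer delivers 10% x 40 = 4 person-year-equivalents of work,
# at 0, 1, 2, and 3 years of accumulated EA-work experience.
years_each = int(CAREER_YEARS * TIME_FRACTION)           # 4
part_time = [exp for _ in range(VOLUNTEERS) for exp in range(years_each)]

# Scenario B: one person works full-time for 40 years,
# delivering one person-year at each experience level from 0 to 39.
full_time = list(range(CAREER_YEARS))

# Both scenarios deliver 40 person-years, but with very different experience behind them.
print(len(part_time), sum(part_time) / len(part_time))   # 40 person-years, mean experience 1.5 years
print(len(full_time), sum(full_time) / len(full_time))   # 40 person-years, mean experience 19.5 years
```

Under these assumptions the total labour delivered is the same, so the comparison turns entirely on how much you value the extra experience behind the full-timer's later years.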

[$20K In Prizes] AI Safety Arguments Competition

most AI experts think advanced AI is much likelier to wipe out human life than climate change

I'm not sure this is true, unless you use a very restrictive definition of "AI expert". I would be surprised if most AI researchers saw AI as a greater threat than climate change.

1AndrewDoris25d
I took that from a Kelsey Piper writeup here [https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment] , assuming she was summarizing some study: "Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction [https://www.fhi.ox.ac.uk/wp-content/uploads/Existential-Risks-2017-01-23.pdf]. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now." The hyperlink goes to an FHI paper that appears to just summarize various risks, so it's unclear what her source was on the "most." I'd be curious to know as well. She does stress the greater variance of outcomes and uncertainty surrounding AI - writing "Our predictions about climate change are more confident, both for better and for worse." - so maybe my distillation should admit that too.
[$20K In Prizes] AI Safety Arguments Competition

Companies and governments will find it strategically valuable to develop advanced AIs which are able to execute creative plans in pursuit of a goal achieving real-world outcomes. Current large language models have a rich understanding of the world which generalizes to other domains, and reinforcement learning agents already achieve superhuman performance at various games. With further advancements in AI research and compute, we are likely to see the development of human-level AI this century. But for a wide variety of goals, it is often valuable to pursue ... (read more)

How I failed to form views on AI safety

But I am a bit at a loss as to why people in the AI safety field think it is possible to build safe AI systems in the first place. I guess as long as it is not proven that the properties of safe AI systems are contradictory with each other, you could assume it is theoretically possible. When it comes to ML, the best performance in practice is sadly often worse than the theoretical best.

To me, this belief that AI safety is hard or impossible would imply that AI x-risk is quite high. Then, I'd think that AI safety is very important but unfortunately intractable. ... (read more)

1Ada-Maaria Hyvärinen1mo
I think you understood me in the same way my friend did in the second part of the prologue, so I apparently give this impression. But to clarify, I am not certain that AI safety is impossible (I think it is hard, though), and the implications of that depend a lot on how much power the AI systems will be given in the end, and on what part of the damage they might cause is due to them being unsafe and what part is due to, for example, misuse, like you said.
Free-spending EA might be a big problem for optics and epistemics

For what it's worth, even though I prioritize longtermist causes, reading

Maybe it depends on the cause area but the price I'm willing to pay to attract/retain people who can work on meta/longtermist things is just so high that it doesn't seem worth factoring in things like a few hundred pounds wasted on food.

made me fairly uncomfortable, even though I don't disagree with the substance of the comment, as well as

2) All misallocations of money within EA community building is lower than misallocations of money caused by donations that were wasted by donating t

... (read more)
Free-spending EA might be a big problem for optics and epistemics

Free food and free conferences are things that are somewhat standard among various non-EA university groups. It's easy to object to whether they're an effective use of money, but I don't think they're excessive except under the EA lens of maximizing cost-effectiveness. I think if we reframe EA university groups as being about empowering students to tackle pressing global issues through their careers, and avoid mentioning effective donations and free food in the same breath, then it's less confusing why there is free stuff being offered. (Besides apparent... (read more)

Nikola's Shortform

I've considered this before and I'm not sure I agree. If I'm at +10 utility for the next 10 years and afterwards will be at +1,000,000 utility for the following 5,000 years, I might just feel like skipping ahead to feeling +1,000,000 utility, simply from being impatient about getting to feel even better.

1Nikola2mo
I agree; to clarify, my claim assumes infinite patience.
University Groups Should Do More Retreats

Got it, I'm surprised by how little time it took to organize HEA's spring retreat. What programming was involved?

3Trevor Levin2mo
I should also note that we had a bit of a head start: I had organized the DC retreat one month earlier, so I had some recent experience; we had lots of excited EAs already, so we didn't even try to get professional EAs and decided casual hangouts were probably very high-value; and the organizing team basically had workshops ready to go. We also had it at a retreat center that provided food (though not snacks). If any of these were different, it would have taken much longer to plan.
3Trevor Levin2mo
It was very much an 80-20'd thing due to organizer capacity. The schedule was something like:

  • Friday evening: arrivals + informal hangouts + board games (e.g. Pandemic)
  • Saturday morning: opening session, hikes/informal hangouts
  • Saturday afternoon: three sessions, each with multiple options:
    • 1-on-1 walks, Updating Session, AI policy workshop
    • 1-on-1 walks, Concept Swap, forecasting workshop
    • 1-on-1 walks, AI policy workshop
  • Saturday evening: Hamming Circles, informal hangouts feat. hot tub and fire pit
  • Sunday morning: walks/hangouts
  • Sunday afternoon: career reflection, closing session, departure
University Groups Should Do More Retreats

For me, the main value of retreats/conferences has been forming lots of connections, but I haven't become significantly more motivated to be more productive, impactful, or ambitious. I have a couple of questions which I think would be helpful for organizers to decide whether they should be running more retreats:

  • How many hours does it take to organize a retreat?
  • To what extent can the value of a retreat be 80/20'd with a series of 1-on-1s? (Perhaps while taking a walk through a scenic part of campus) Would that save organizer time?
  • Do you have estimates as to how many participants have significant plan changes after a retreat?

Yep, great questions -- thanks, Michael. To respond to your first thing, I definitely don't expect that they'll have those effects on everybody, just that they are much more likely to do so than pretty much any other standard EA group programming.

  • Depends on the retreat. HEA's spring retreat (50 registrations, ~32 attendees) involved booking and communicating with a retreat center (which took probably 3-4 hours), probably 5-6 hours of time communicating with attendees, and like 2 hours planning programming. I ran a policy retreat in DC that was much more ti
... (read more)
Intro to AI/ML Reading Group at EA Georgetown!

My experience with EA at Georgia Tech is that a relatively small proportion of people who complete our intro program participate in follow-up programs, so I think it's valuable to have content you think is important in your initial program instead of hoping that they'll learn it in a later program. I think plenty of Georgetown students would be interested in signing up for an AI policy/governance program, even if it includes lots of x-risk content.

The Vultures Are Circling
As a community that values good epidemics

good epistemics?

Thanks for posting about this; I had no idea this was happening to a significant extent.

Intro to AI/ML Reading Group at EA Georgetown!

To the extent that the program is meant to provide an introduction to "catastrophic and existential risk reduction in the context of AI/ML", I think it should include some more readings on the alignment problem, existential risk from misaligned AI, transformative AI or superintelligence. I think Mauricio Baker's AI Governance Program has some good readings for this.

1Danny W.2mo
Thanks for your comment. Question -- Do you think it's worth introducing X risk (+ re areas) in this context? I ask this because we envision this reading group as a lead-in to an intro fellowship or other avenues of early stage involvement. Given this, we want to balance materials we introduce with limited time, while also making people curious about ideas discussed in the EA space.
Where is the Social Justice in EA?

I think lack of diversity in EA is largely due to founder effects, and EA is working on this. There's an emerging effort to have EA outreach in more global south countries like India, the Philippines, and Mexico, and local EA community-builders are working hard on that.

For what it's worth, it seems to me that EA university groups have more racial and gender diversity than the broader EA movement, which I think is because they reach a broader base of people, compared to the type of people who randomly stumble across EA on the internet.

The EA community is ex
... (read more)
Announcing What We Owe The Future

Any chance the price could be reduced to lower the barrier to pre-ordering? It costs $30 on Amazon US for a hard copy, which is a lot to ask for.

8Greg_Colbourn2mo
When I ordered on Amazon (UK), it said that I'd be charged the lowest price they offer between now and the launch date, so if there are any (further) discounts later on, I'll get them. I'm guessing this is the same for the US.
8abierohrig2mo
The price is set by the publisher, so for now there's nothing we can do to directly reduce the price of the book on Amazon, etc. (In the future, an EA publishing house could make sure EA books are priced more accessibly! See Fin Moorhouse's post here [https://www.finmoorhouse.com/writing/ea-projects#an-ea-publishing-house].) In the coming months, we'll likely partner with bookstores to do pre-order discounts (e.g. 25% off), and EA groups can use funding to purchase copies of the book for their members. So if pre-ordering now is price-prohibitive, there will be other opportunities later on to pre-order.
AI safety starter pack
you can usually find small online projects throughout the year

Where?

1mariushobbhahn2mo
There is no official place yet. Some people might be working on a project board; see the comments on my other post: https://forum.effectivealtruism.org/posts/srzs5smvt5FvhfFS5/there-should-be-an-ai-safety-project-board. Until then, I suggest you join the Slack I linked in the post and ask if anyone is currently searching. Additionally, if you are at any of the EAGs or other conferences, I recommend asking around. Until we have something more official, projects will likely only be accessible through these informal channels.
$100 bounty for the best ideas to red team

Red team: Argue that moral circle expansion is or is not an effective lever for improving the long-term future. Subpoint: challenge the claim that ending factory farming effectively promotes moral circle expansion to wild animals or digital sentient beings.

Related:

$100 bounty for the best ideas to red team

Red team: Certain types of prosaic AI alignment (e.g., arguably InstructGPT) promote the illusion of safety without genuinely reducing existential risk from AI, or are capabilities research disguised as safety research. (A claim that I've heard from EleutherAI, rather indelicately phrased, and would like to see investigated)

$100 bounty for the best ideas to red team

Red team: Is existential security likely, assuming that we avoid existential catastrophe for a century or two?

Some reasons that I have to doubt that existential security is the default outcome we should expect:

  • Even superintelligent aligned AI might be flawed and fail catastrophically eventually
  • Vulnerable world hypothesis
  • Society is fairly unstable
  • Unregulated expansion throughout the galaxy may reduce extinction risk but may increase s-risks, and may not be desirable
$100 bounty for the best ideas to red team

I helped start WikiProject Effective Altruism a few months ago, but I think that the items on our WikiProject's to-do list are not as valuable as, say, organizing a local EA group at a top university, or writing a useful post on the EA Forum. One tricky thing about Wikipedia is that you have to be objective, so while someone might read an article on effective altruism and be like "wow, this is a cool idea", you can't engage them further. I also think that the articles are already pretty decent.

Career Advice: Philosophy + Programming -> AI Safety

Richard Ngo recently wrote a post on careers in AI safety.

I think you could divide AI safety careers into six categories. I've written some quick tentative thoughts on how you could get started, but I'm not an expert in this for sure.

  • Software engineering: infrastructure, building environments, etc.
    • Do LeetCode/NeetCode and other interview prep and get referrals to try to get a really good entry-level software engineering job. Work in software engineering for a few years, try to get really good at engineering (e.g., being able to dive into a large, unfam
... (read more)
Mediocre AI safety as existential risk

Can someone provide a more realistic example of partial alignment causing s-risk than SignFlip or MisconfiguredMinds? I don't see either of these as something that you'd be reasonably likely to get by, say, only doing 95% of the alignment research necessary rather than 110%.

Brief Thoughts on "Justice Creep" and Effective Altruism

I understand the post's claim to be as follows. Broadly speaking, EAs go for global health and development if they want to help people through rigorously proven interventions. They go for improving the long-term future if they want to maximize expected value. And they go for farmed animal welfare if they care a lot about animals, but the injustice of factory farming is a major motivation for why many EAs care about it. This makes a lot of sense to me and I wholeheartedly agree.

That said, I think the selection of the main three cause areas – global h... (read more)

1Devin Kalish2mo
This is a good summary of my position. I also agree that a significant part of the reason for the three major cause areas is history, but think that this answers a slightly different question from the one I'm approaching. It's not surprising, from the outside, that people who want to do good, and have interests in common with major figures like Peter Singer, are more likely to get heavily involved with the EA movement than people who want to do good and have other values/interests. However, from the inside it doesn't give an account of why the people who do wind up involved with EA find the issue personally important; certainly the answer is unlikely to be "because it is important to Peter Singer". I'd count myself in this category, of people who share values with major figures in the movement, were in part selected for by the movement on this basis, and also, personally, care a very great deal about factory farming, more so than even cause areas I think might be more important from an EV perspective. This is as much an account of my own feelings that I think applies to others as anything else.
Funding request: Promoting EA in MENA

It might be helpful to talk with Koki Ajiri from EA NYU Abu Dhabi, as they’re trying to do EA outreach in the UAE, though they’re focusing on English-speaking universities.

I’d definitely recommend applying for funding from the EA Infrastructure Fund! My main question is, how sure are you that you can get articles published in major newspapers?

1Timothy_Liptrot2mo
| My main question is, how sure are you that you can get articles published in major newspapers? Good question. I'm uncertain. I would like to write up one piece now and try to get it in, as a check on the viability of the plan.
4Harrison Durland3mo
I also made this mistake a few days ago, as the site was telling me there was some kind of error when I tried posting.
Is transformative AI the biggest existential risk? Why or why not?

Great power conflict is generally considered an existential risk factor, rather than an existential risk per se – it increases the chance of existential risks like bioengineered pandemics, nuclear war, and transformative AI, or lock-in of bad values (Modelling great power conflict as an existential risk factor, The Precipice chapter 7).

I can define a new existential risk factor that could be as great as all existential risks combined – the fact that our society and the general populace do not sufficiently prioritize existential risks, for example. So no, I... (read more)

4evelynciara3mo
This is an interesting point, thanks! I tend not to distinguish between "hazards" and "risk factors" because the distinction between them is whether they directly or indirectly cause an existential catastrophe, and many hazards are both. For example:

  1. An engineered pandemic could wipe out humanity either directly or indirectly by causing famine, war, etc.
  2. Misaligned AI is usually thought of as a direct x-risk, but it can also be thought of as a risk factor because it uses its knowledge of other hazards in order to drive humanity extinct as efficiently as possible (e.g. by infecting all humans with botox-producing nanoparticles [https://www.lesswrong.com/posts/oKYWbXioKaANATxKY/soares-tallinn-and-yudkowsky-discuss-agi-cognition]).

Mathematically, you can speak of the probability of an existential catastrophe given a risk factor by summing up the probabilities of that risk factor indirectly causing a catastrophe by elevating the probability of a "direct" hazard:

Pr(extinction | great power war) = Pr(extinction | great power war, engineered pandemic) + Pr(extinction | great power war, transformative AI) + …

You can do the same thing with direct risks. All that matters for prioritization is the overall probability of catastrophe given some combination of risk factors.
Credo AI is hiring!

Is work at Credo AI targeted at trying to reduce existential risk from advanced AI (whether from misalignment, accident, misuse, or structural risks)?

Credo AI is not specifically targeted at reducing existential risk from AI. We are working with companies and policy makers who are converging on a set of responsible AI principles that need to be thought out better and implemented.

-

Speaking for myself now - I became interested in AI safety and governance because of the existential risk angle. As we have talked to companies and policy makers it is clear that most groups do not think about AI safety in that way. They are concerned with ethical issues like fairness - either for moral reasons, or, more ... (read more)

The Future Fund’s Project Ideas Competition

I think more readers still prefer print books to e-books. You might as well donate the books in both physical and digital formats, and probably also as an audiobook.

It looks like libraries don't generally have an official way for you to donate print books virtually or to donate e-books, so I think you would have to inquire with them about whether you can make a donation and ask them to use that to buy specific books. Note that the cost of e-book licenses to libraries is many times the consumer sale price.

The Future Fund’s Project Ideas Competition

I really like this project idea! It's ambitious and yet approachable, and it seems that a lot of this work could be delegated to virtual personal assistants. Before starting the project, it seems that it would be valuable to quickly get a sense of how often EA books in libraries are read. For example, you could see how many copies of Doing Good Better are currently checked out, or perhaps you could nicely ask a library if they could tell you how many times a given book has been checked out.
