Nov 4 - 10
Funding Strategy Week
Read and continue Funding Strategy Week's conversations here.
Nov 12 - 18
Marginal Funding Week
A week for organisations to explain what they would do with marginal funding. Read more.
Nov 18 - Dec 3
Donation Election
A crowd-sourced pot of funds will be distributed amongst three charities based on your votes. Find out more. $12,781 raised.
Dec 16 - 22
Pledge Highlight
A week to post about your experience with pledging, and to discuss the value of pledging. Read more.
Dec 23 - 31
Donation Celebration
When the donation celebration starts, you'll be able to add a heart to the banner showing that you've completed your annual donations.

Quick takes

I'd love to see an 'Animal Welfare vs. AI Safety/Governance Debate Week' happening on the Forum. The AI risk cause has grown massively in importance in recent years and has become a priority career choice for many in the community. At the same time, the Animal Welfare vs Global Health Debate Week demonstrated just how important and neglected the cause of animal welfare remains. I know several people (including myself) who are uncertain or torn about whether to pursue careers focused on reducing animal suffering or on mitigating existential risks from AI. It would help to have rich discussions comparing both causes' current priorities and bottlenecks, and a debate week would hopefully surface some useful crucial considerations.
EA in a World Where People Actually Listen to Us
I had considered calling the third wave of EA "EA in a World Where People Actually Listen to Us". Leopold's situational awareness memo has become a salient example of this for me. I used to sometimes think that arguments about whether we should avoid discussing the power of AI in order to avoid triggering an arms race were a bit silly and self-important, because obviously defense leaders aren't going to listen to some random internet charity nerds and change policy as a result. Well, they are listening, and they are changing policy. Let's hope it's for the better.
I think it's a shame the Nucleic Acid Observatory is getting so few votes. It is relatively cheap (~$2M/year) and is working on a unique intervention that, on the face of it, seems like it would be very important if successful. At least as far as I'm aware, there is no other (EA) org that explicitly has the goal of creating a global early warning system for pandemics. By the logic of it being valuable to put the first few dollars into something unique and neglected, I think it looks very good (although I would want to do more research if it got close to winning).
I'm working on a "who has funded what in AI safety" doc. Surprisingly, when I looked up Lightspeed Grants online (https://lightspeedgrants.org/) I couldn't find any list of what they funded. Does anyone know where I could find such a list?
Ten months ago I met Australia's Assistant Defence Minister about AI Safety because I sent him one email asking for a meeting. I wrote about that here. In total I sent 21 emails to politicians and had 4 meetings. AFAICT there is still no organisation with significant funding that does this as its primary activity. AI Safety advocacy is IMO still extremely low-hanging fruit. My best theory is that EAs don't want to do it or fund it because EAs are drawn to spreadsheets and Google Docs (advocacy isn't their comparative advantage). Hammers see nails, etc.