I strongly agree that being associated with EA in AI policy is increasingly difficult (as many articles and individuals' posts on social media can attest), in particular in Europe, DC, and the Bay Area.
I appreciate Akash's comment. At the same time, I understand the aim of this post is not to ask for people's opinions about what CEA's priorities should be, so I won't go into too much detail. I do want to highlight that I'm really excited for Zach Robinson to lead CEA!
With my current knowledge of the situation in three different jurisdictions,...
If the fallout from FTX has you concerned, it's worth looking inward at your own organization and potentially other orgs. Are there parallels, like a weak board, conflicts of interest, questionable incentives, or a lack of risk management and crisis planning? Is liquidity an issue, or are there unconventional approaches in management? These red flags warrant closer inspection.
I agree that these decisions are going in the right direction. I think their resignations should have come earlier, given the severity of the conflicts of interest with FTX and the problems with their judgment of the situation.
(I still appreciate Nick and Will as individuals and immensely value their contributions to the field.)
Thanks so much for your work, Will! I think this is the right decision given the circumstances, and that it will help EV move in a good direction. I know some mistakes were made, but I still want to recognize your positive influence.
I'm eternally grateful to you for getting me to focus on the question of "how to do the most good with our limited resources?".
I remember how I first heard about EA.
The unassuming flyer taped to the philosophy building wall first caught my eye: “How to do the most good with your career?”
It was October 2013, midterms week at Tufts Uni...
I've used the "Calm me" feature multiple times. I find it very easy to use during the day - taking just a few minutes off. I don't have panic attacks but found it helpful to have a tool to reduce stress. I found it especially helpful around the release of GPT-4 and dealing with lots of worries about the speed of AI progress then. After a couple of exercises, I could go back to work and focus again on my AI governance work with renewed resolve.
I'm very supportive of MindEase's growth and its focus on panic attacks, but honestly I found it very useful as a general "relaxing and calming down" app.
My quick initial research:
The UK's influence on DeepMind, a subsidiary of US-based Alphabet Inc., is substantial despite its parent company's origin. This control stems from DeepMind's location in the UK (jurisdiction principle), which mandates its compliance with the country's stringent data protection laws such as the UK GDPR. Additionally, the UK's Information Commissioner's Office (ICO) has shown it can enforce these regulations, as exemplified by a ruling on a collaboration between DeepMind and the Royal Free NHS Foundation Trust. The UK government's ...
I'm looking for insights on the potential regulatory implications this could have, especially in relation to the UK's AI regulation policies.
This post is beautiful, rational, and useful - thank you!
As the beginning of an answer to the question "What does a “realistic best case transition to transformative AI” look like?", we could perhaps say that a worthwhile intermediate goal is reaching a Long Reflection, in which we can use safe (probably narrow) AIs to help us build a Utopia for the years to come.
Congrats on launching cFactual; it sounds great!
Exploring how you can help launch small or mega projects could also be interesting. If we expect this century or decade to be "wild", the EA community will create many new organizations and projects to deal with new challenges. It would be great to help these projects have a solid ToC, governance structure, etc., from the beginning. I understand that these projects may be on a slightly longer timeline (e.g. "the first year of the creation of a new AI governance organization...") but it could be great. I'd personally feel more confident about launching a new large project if I had cFactual to help!
(However, it is very difficult to hire taxis to get there and back, which often takes 30 minutes.) Edit: people can wait up to an hour and a half for a taxi from Wytham, which isn't very practical.
I agree with Adam that it's better to host all attendees in one place during retreats.
However, I'm not sure how many bedrooms Wytham has. It could be that many attendees have to rent rooms outside Wytham anyway, which makes the deal worse.
Agreed that it would be very helpful to have a widely distributed survey about this, ideally with in-depth conversations. Quantitative and qualitative data seem to be lacking, while there seems to be a lot of anecdotal evidence. Wondering if CEA or RP could lead such work, or whether an independent organization should do it.
I would mainly like it to be easy to fill out so that the results are representative. I think it's pretty easy for surveys like this to end up only filled in by people with the strongest opinions.
In this case, it seems like a very good strategy for the world, too, in that it doesn't politicize one issue too much (like climate change has been in the US because it was tied to Democrats instead of both sides of the aisle).
More opportunities:
+1 for many more investigations and background checks for major donations, megaprojects, and associations with EA.
I agree that the tone was too tribalistic, but the content is correct.
(Seems a bit like a side-topic, but you can read more about Leverage on this EA Forum post and, even more importantly, in the comments. I hope that's useful for you! The comments definitely changed my views - negatively - about the utility of Leverage's outputs and some cultural issues.)
I've read it. I'd guess we have similar views on Leverage, but different views on CEA. I think it's very easy for well-intentioned, generally reasonable people's epistemics to be corrupted via tribalism, motivated reasoning, etc.
But as I said above I'm unsure.
Edited to add: Either way, might be a distraction to debate this sort of thing further. I'd guess that we both agree in practice that the allegations should be taken seriously and investigated carefully, ideally by independent parties.
For what it's worth, these different considerations can be true at the same time:
I agree that these can technically all be true at the same time, but I think the tone/vibe of comments is very important in addition to what they literally say, and the vibe of Arepo's comment was too tribalistic.
I'd also guess, re: (3), that I have less trust that CEA's epistemics are necessarily that much better than Leverage's, though I'm uncertain here (edited to add: to be clear, my best guess is that they're better, but I'm not sure what my prior should be in a "he said / she said" situation about who's telling the truth. My guess is closer to 50/50 than 95/5, in log odds at least).
Here's a report on Positive AI Economic Futures published by the World Economic Forum and supported by the Center for Human-Compatible AI (CHAI).
I think this is totally fair, and it's another reason not to do the Pentathlon: it's often most useful during the two weeks of the competition, but the habits often don't stick afterwards. If you want the habits to endure, I recommend setting up strong systems during the Pentathlon and holding yourself accountable for keeping them afterwards. For example, the Pentathlon's sleep target might lead you to set an alarm on your phone, your computer, or your programmed lightbulbs to go to bed on time. Set up a ta...
Yes! If people are interested in joining an "EA Forum" team, they can either coordinate here or via EA Forum DMs. Otherwise, join as an individual and specify you want to join an EA Forum team and we'll match you with others!
You can definitely join as an individual! You'll then be matched to an EA team (probably made of other individuals). Would love to have you!
For another counterargument to your point that some positions don't look attractive to people who are overqualified, here's Ben West's article. I personally think that making the position a challenge and a growth opportunity makes people more motivated and excited.
This was such a great post and I was nodding along throughout the whole article, except for the part about the importance of hiring people who are "strategically aligned".
I think you often need people at the top of the organization to deeply share the org's ethics and long-term goals; otherwise, you find yourself in very long debates about theories of change, which ultimately affect many of the decisions (I wonder if you have experienced this?). The exception is when you find non-EA but exceptional people who share EA goals while also h...
Thank you for writing this! I like the concept and word "Dedicate". This piece resonates a lot with me.
A small idea for a potentially high-impact consultancy: you may want to consider specializing in helping EAs figure out what physical health problems they have and recommending steps to address them. (I realize after writing this that you emphasized that you don't like clinical work much, so maybe the following isn't that useful.)
One of 80,000 Hours' pieces of advice is to take care of your physical health, and notably to avoid back issues.
...We were surprised to learn that the biggest risk to our productivity is probably back p
I found this post interesting!
I would highly recommend this book "Plays Well with Others: The Surprising Science Behind Why Everything You Know About Relationships Is (Mostly) Wrong".
Eric Barker (from the blog "Barking Up the Wrong Tree") gives advice based on hundreds of papers on the topic.
Thanks for this post! I've been wondering about how to think about this too.
Some burgeoning ideas:
I'll be able to do phone banking on Tuesday from 10am to 1pm PT - join then!
And I'm happy to help coordinate outside of this!
Lots of useful insights. At this point, I'm more on the side of doing this - that is, not fanning the flames.
"How should I respond to takes on EA that I disagree with?
Maybe not at all — it may not be worth fanning the flames.
If you do respond, it helps to link to a source for the counter-point you want to make. That way, curious people who see your interaction can follow the source to learn more."
Agree with this point. Jeffrey Ladish wrote "US Citizens: Targeted political contributions are probably the best passive donation opportunities for mitigating existential risk".
He says:
...Recently, I’ve surprised myself by coming to believe that donating to candidates who support policies which reduce existential risks is probably the best passive donation opportunity for US citizens. The main reason I’ve changed my mind is that I think highly aligned political candidates have a lot of leverage to affect policies that could impact the long-t
It's quite hard to know, and I don't know what the campaign team thinks about it.
There is a good article on Vox about the evidence base for these things: "Gerber and Green’s rough estimate is that canvassing can garner campaigns a vote for about $33, while volunteer phone-banking can garner a vote for $36 — not too different, especially when you consider how imprecise these estimates necessarily are." Not exactly what you asked, but it can give you a sense of direction.
I also think that helping Carrick would be super good!
Regarding phone banking, I wouldn't be that interested in paying volunteers. The most important factor in the effectiveness of the calls is that the caller is genuinely enthusiastic about the candidate - basically, if the caller is really engaged, the person on the other end of the line is three times more likely to be persuaded than if the caller isn't. (I don't have the specific paper this was in.) So it's great if you get your friends t...
The campaign can only use the first $2,900 in the primary campaign, but they can use the rest in the general election if they win the primary. If they don't win the primary, the options are either returning the remaining money to you or passing it along to another campaign.
Additional funding right now would finance better, more personal ads, which still work in these final days.
Since you've already given $2,900, I'd recommend:
It's about every Monday - the next one being in three days.
However, I recommend that you sign up even outside of these slots because there are still opportunities to do phone calls!
Kuhan is probably right. However, after speaking to someone on Team Carrick today, it seems like there is still room for funding for the campaign's ads, which are different from the PAC's ads and show more Carrick talking directly to people. So giving now still makes sense (for the next 48 hours) even though the effects are smaller than a few days ago.
Thanks for the suggestion - I'm definitely going to consider that. I'm a bit worried about feeding the troll... Maybe something more focused on why I think he's really a good candidate, and more detailed?
It is still useful to donate until Sunday!
I just talked with someone on Carrick's campaign, and they said there are still two more days for ads to be useful. The PAC has its own ads, but they don't show Carrick speaking. The campaign has better, more personable ads that people prefer, but it can't use the PAC's funding for them.
And we have another Fermi estimate of the ROI on a donation to Carrick’s campaign!
'There are 435 members of the House of Representatives. Let’s assume that the House as a whole holds 1/4 of...
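For anyone who wants to play with the structure of this kind of estimate, here's a minimal sketch in Python. Since the quoted estimate is truncated above, every number below is a hypothetical placeholder for illustration, not a figure from the actual analysis.

```python
# Sketch of a back-of-the-envelope estimate of the value a single marginal
# House seat might influence, and the expected value of a donation.
# ALL numbers are hypothetical placeholders, not the quoted estimate's figures.

house_members = 435
house_share_of_power = 0.25      # assumed: House holds 1/4 of federal influence
member_share = house_share_of_power / house_members

relevant_budget = 1e12           # assumed: dollars of spending the House influences
value_per_member = member_share * relevant_budget

win_prob_shift = 1e-6            # assumed: increase in win probability per donation
expected_value = value_per_member * win_prob_shift

print(f"Influence per member: ${value_per_member:,.0f}")
print(f"Expected value shifted per donation: ${expected_value:,.0f}")
```

The point of the sketch is just that the bottom line is a product of a few rough factors, so it's worth checking which factor your own estimate is most sensitive to.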
Here's an ITN-style analysis from an anonymous friend and me.
I'm genuinely curious: if you’re considering donating but haven’t yet, what are your key questions/cruxes?
Feel free to answer here and/or DM me.
I'm organizing a small group discussion around this and your questions would help direct the conversation. Also maybe some people can answer here!
If you're interested in chatting casually about this, please DM me :)
Why are you volunteering for the campaign?
I’m a volunteer for the campaign because
(Just a note that it's completely possible to be a bit silly when you're a kid - e.g. play with Barbie dolls, read romance novels, swoon over the next Bieber (or the next Taylor Swift), and fret about having too few shoes and dresses - and still be extremely attached to EA principles and dedicate one's life to EA ideas. I did all of the above, and I would describe myself as high-curiosity, high-openness, high-altruism, thoughtful, and really trying to be epistemically strong. I think it's OK to be a bit silly when you're a kid or a teenager... and also still a bit silly when you're an adult? I'm not sure that "silly stuff" is actually a deal-breaker for being ridiculously impactful later on.)
I also think that having friends, colleagues, and coaches who are very honest with you is extremely important, because invisible mistakes are sometimes especially hard to spot. Maybe you got a very fancy-looking new position, but it's "less good" than a more hidden, higher-impact job you could be doing. The rest of the world will tell you it's great, so you need honest friends, and you need to be in a sufficiently good mental space to receive their feedback.
Reviewing your decision on your own every X months and trying to make predictions about what "really good impact" looks like may be a good idea.
(Note: I have edited this comment after finding even more reasons to agree with Neel)
I find your answer really convincing - you've changed my mind!
A small contribution to your list: it seems like Indonesia faces a lot of natural disaster risks, like earthquakes, tsunamis, and volcanic eruptions.
Interesting! Before this study, I thought that EAs had two factors, Effectiveness and Altruism, but I defined Effectiveness differently. I thought it referred to something like "optimization mindset" and "being good at thinking quantitatively and strategically". This would probably have been a defining characteristic of people in business school.
So I updated my definition of "Effectiveness" in EA. I'd be curious to see more studies about other personality traits!
I think it's likely that optimization mindset and numeracy are important proto-EA factors; they just weren't measured here, as they're harder to capture through survey questions. Also, holding constant someone's overall agreement with EA, I'd predict that someone with an optimization mindset would have a higher average impact, and this outcome is more difficult to measure.
Thanks for this!
Typo: it's not "Stuart Russell on why he is co-founding Aligned AI" but Stuart Armstrong.
Could you explain a bit more what you mean by "confidence to forge our own path"? If the validity of claims made about AI safety is systematically attacked due to EA connections, I think there is a strong reason to worry about this. I find that it makes it harder for a number of people to have an impact on AI policy.
The costs of chasing good PR are larger than they first appear: at the start you're just talking about things differently, but soon enough it distorts your epistemics.
At the same time, these actions make less of a difference than you might expect. Some people are just looking for a reason to criticize you and will simply find a different one. People will still attack you based on what happened in the past.