All of Caro's Comments + Replies

Could you explain a bit more what you mean by "confidence to forge our own path"? I think if the validity of the claims made about AI safety is systematically attacked due to EA connections, there is a strong reason to worry about this. In my experience, it makes it more difficult for many people to have an impact on AI policy.

The costs of chasing good PR are larger than they first appear: at the start you're just talking about things differently, but soon enough it distorts your epistemics.

At the same time, these actions make less of a difference than you might expect. Some people are just looking for a reason to criticize you and will simply find a different one. People will still attack you based on what happened in the past.

I strongly agree that being associated with EA in AI policy is increasingly difficult (as many articles and individuals' posts on social media can attest), in particular in Europe, DC, and the Bay Area. 

I appreciate Akash's comment. At the same time, I understand that the purpose of this post is not to ask for people's opinions about what CEA's priorities should be, so I won't go into too much detail. I do want to highlight that I'm really excited for Zach Robinson to lead CEA!

With my current knowledge of the situation in three different jurisdictions,... (read more)

Caro · 4mo


If the fallout from FTX has you concerned, it's worth looking inward at your own organization and potentially other orgs. Are there parallels, like a weak board, conflicts of interest, questionable incentives, or a lack of risk management and crisis planning? Is liquidity an issue, or are there unconventional approaches in management? These red flags warrant closer inspection.

Caro · 6mo

I agree that these decisions go in the right direction. I think their resignations should have come earlier, given the severity of the conflicts of interest with FTX and the problems with their judgment of the situation.

(I still appreciate Nick and Will as individuals and immensely value their contributions to the field.)

Grumpy Squid · 6mo
I agree. The governance and decision-making of the EV boards is an important matter that shouldn’t be dismissed because of Will and Nick’s other contributions.

Thanks so much for your work, Will! I think this is the right decision given the circumstances, and that it will help EV move in a good direction. I know some mistakes were made, but I still want to recognize your positive influence.

I'm eternally grateful to you for getting me to focus on the question of "how to do the most good with our limited resources?".

I remember how I first heard about EA.

The unassuming flyer taped to the philosophy building wall first caught my eye: “How to do the most good with your career?”

It was October 2013, midterms week at Tufts Uni... (read more)

I've used the "Calm me" feature multiple times. I find it very easy to use during the day - it takes just a few minutes off. I don't have panic attacks but found it helpful to have a tool to reduce stress. I found it especially helpful around the release of GPT-4, when I was dealing with lots of worries about the speed of AI progress. After a couple of exercises, I could go back to work and focus again on my AI governance work with renewed resolve.

I'm very supportive of MindEase's growth and focus on panic attacks, but honestly I found it very useful as a general "relaxing and calming down" app.

Marta_Krzeminska · 8mo
Thank you for sharing! It's so motivating to hear from real-world users how the app was helpful (I'll pass this on to the full team) - and especially that it could provide peace of mind in the face of AI-related concerns. The good news is that we won't be losing the previous features and exercises. So, if there are any particular tools you found helpful in the past, you'll still be able to use them even after the pivot. :)


I'm looking for insights on the potential regulatory implications this could have, especially in relation to the UK's AI regulation policies.

  1. Given that DeepMind was a UK-based subsidiary of Alphabet Inc., does the UK still have the jurisdiction to regulate it after the merger with Google Brain? 
  2. On the other hand, what is the weight of the US regulation on DeepMind?

I appreciate any insights or resources you can share on this matter. I understand this is a complex issue, and I'm keen to understand it from various perspectives.
Caro · 10mo
My quick initial research: The UK's influence on DeepMind, a subsidiary of US-based Alphabet Inc., is substantial despite its parent company's origin. This control stems from DeepMind's location in the UK (the jurisdiction principle), which mandates its compliance with the country's stringent data protection laws, such as the UK GDPR. Additionally, the UK's Information Commissioner's Office (ICO) has shown it can enforce these regulations, as exemplified by a ruling on a collaboration between DeepMind and the Royal Free NHS Foundation Trust.

The UK government's interest in AI regulation and DeepMind's work with sensitive healthcare data further subject the company to UK regulatory oversight. However, the recent merger of DeepMind with Google Brain, an American entity, may reduce the UK's direct regulatory influence. Despite this, the UK can still affect DeepMind's operations via its general AI policy, procurement decisions, and data protection laws. Moreover, voices like Matt Clifford, the founder and CEO of Entrepreneur First, suggest a push for greater UK sovereign control over AI, which could influence future policy decisions affecting companies like DeepMind.

This post is beautiful, rational, and useful - thank you!

As the beginning of a reply to the question "What does a “realistic best case transition to transformative AI” look like?", we could perhaps say that a worthwhile intermediary goal is getting to a Long Reflection, where we can use safe (probably narrow) AIs to help us build a utopia for many years to come.

Congrats on launching cFactual; it sounds great!

Exploring how you can help launch small or mega projects could also be interesting. If we expect this century or decade to be "wild", the EA community will create many new organizations and projects to deal with new challenges.  It would be great to help these projects have a solid ToC, governance structure, etc., from the beginning. I understand that these projects may be on a slightly longer timeline (e.g. "the first year of the creation of a new AI governance organization...") but it could be great. I'd personally feel more confident about launching a new large project if I had cFactual to help!

Caro · 1y

(However, it is very difficult to get taxis to and from there, which often takes 30 minutes.) Edit: people can wait up to 1h30 for a taxi from Wytham, which isn't super practical.

I agree with Adam here that it's better to host all attendees in one place during retreats.

However, I am not sure how many bedrooms Wytham has. It could be that a lot of attendees have to rent rooms outside of Wytham anyway, which makes the deal worse.

Caro · 1y

Agreed that it would be very helpful to have a widely distributed survey about this, ideally with in-depth conversations. Quantitative and qualitative data seem to be lacking, while there seems to be a lot of anecdotal evidence. Wondering if CEA or RP could lead such work, or whether an independent organization should do it.

I would mainly like it to be easy to fill out so that the results are representative. I think it's pretty easy for surveys like this to end up only filled in by people with the strongest opinions.

Very excited about this competition! Is it still happening?

Oliver Z · 1y
Yup! The bounty is still ongoing. We have been awarding prizes throughout the duration of the bounty and will post an update in January detailing the results.

In this case, it seems like a very good strategy for the world, too, in that it doesn't politicize one issue too much (like climate change has been in the US because it was tied to Democrats instead of both sides of the aisle).

Answer by Caro · Nov 14, 2022

More opportunities:

  • The AI Safety Microgrant Round: "We are offering microgrants up to $2,000 USD with the total size of this round being $6,000 USD"; "We believe there are projects and individuals in the AI Safety space who lack funding but have high agency and potential."; "Fill out the form at Microgrant.ai by December 1, 2022."
  • Nonlinear Emergency Funding: "Some of you counting on Future Fund grants are suddenly finding yourselves facing an existential financial crisis, so, inspired by the Covid Fast Grants program, we’re trying something similar fo
... (read more)
Caro · 1y

+1 for way more investigations and background checks for major donations, megaprojects, and associations with EA.

Caro · 1y

I agree that the tone was too tribalistic, but the content is correct.

(Seems a bit like a side-topic, but you can read more about Leverage on this EA Forum post and, even more importantly, in the comments. I hope that's useful for you! The comments definitely changed my views - negatively - about the utility of Leverage's outputs and some cultural issues.)

I've read it. I'd guess we have similar views on Leverage, but different views on CEA. I think it's very easy for well-intentioned, generally reasonable people's epistemics to be corrupted via tribalism, motivated reasoning, etc.

But as I said above I'm unsure.

Edited to add: Either way, might be a distraction to debate this sort of thing further. I'd guess that we both agree in practice that the allegations should be taken seriously and investigated carefully, ideally by independent parties.

Caro · 1y

For what it's worth, these different considerations can be true at the same time:

  1. "He may have his own axe to grind.": that's probably true, given that he's been fired by CEA.
  2. "Kerry being at CEA for four years makes it more important to pay serious attention to what he has to say even if it ultimately doesn’t check out.": it also seems like he may have particularly useful information and context.
  3. "He's now the program manager at a known cult that the EA movement has actively distanced itself from": it does seem like Leverage is shady and doesn't have a very
... (read more)
RobBensinger · 1y
I really like this comment, and I agree with it.

I agree that these can technically all be true at the same time, but I think the tone/vibe of comments is very important in addition to what they literally say, and the vibe of Arepo's comment was too tribalistic.

I'd also guess, re: (3), that I have less trust that CEA's epistemics are necessarily much better than Leverage's, though I'm uncertain here (edited to add: to be clear, my best guess is that they're better, but I'm not sure what my prior should be if there's a "he said / she said" situation on who's telling the truth. My guess is closer to 50/50 than 95/5, in log odds at least).

elteerkers · 1y
Thank you!

I think this is totally fair and another reason not to do the Pentathlon: it is often particularly useful during the two weeks of the competition, but the habits often don't hold very well afterwards. If you want the habits to endure, I recommend setting up strong systems during the Pentathlon and holding yourself accountable for keeping them afterwards. For example, the Pentathlon's sleep target will possibly lead you to set an alarm on your phone, your computer, or your programmed lightbulbs, etc., to go to bed on time. Set up a ta... (read more)

Yes! If people are interested in joining an "EA Forum" team, they can either coordinate here or via EA Forum DMs. Otherwise, join as an individual and specify you want to join an EA Forum team and we'll match you with others!

You can definitely join as an individual! You'll then be matched to an EA team (probably made of other individuals). Would love to have you!

Caro · 2y

For another counterargument to your point that some positions don't look attractive to people who are overqualified, here's Ben West's article. I personally think that making the position a challenge and a growth opportunity makes people more motivated and excited.

Caro · 2y

This was such a great post and I was nodding along throughout the whole article, except for the part about the importance of hiring people who are "strategically aligned". 

I think that you often need people at the top of the organization to deeply share the org's ethics and long-term goals, otherwise you find yourself in very-long debates about theories of change, which ultimately affect a lot of the decisions (I wonder if you have experienced this?). The exception to this is when you find non-EA, but exceptional people who share EA goals while also h... (read more)

AnonymousThrowAway · 2y
I don't think we disagree much here, but where we do I'm trying to bottom out the cruxes... I think it's primarily risk appetite. I do agree though that the wrong hire can make things hellish, on many levels. But in my experience that's usually been less driven by what people thought was important and more so by the individual's characteristics, behaviours, degree of self-awareness, tendency towards defensiveness / self-protection vs. openness. Usually if it doesn't work out in terms of irreconcilably different views on a problem, people just agree to disagree and move on!

Perhaps we also have different things in our heads as meaningful signals of being a good leader for the org, and maybe different models of how a "signed up to doing good but not every EA doctrinal belief" person would operate. As mentioned in the post, how you (dis)agree is often the most important thing, which reflects what you're saying about flexible and open-minded people with their own perspectives.

I think I stand by the IIDM example, illustrating how you don't need to be signed up to every EA idea to add a lot of value to an organisation. I think it's similar for x-risk-oriented pandemic preparedness, AI risk, etc.: sometimes the most strategically sound thing to do would be more near-term, but those with a long-term orientation could not have that in their immediate view. Similarly for e.g. deciding which funders / partners to work with; skills / talent requirements within the team; etc.

(That said, if there's an instinctive feeling that an EA-adjacent / non-EA hire - senior or otherwise - could threaten organisational alignment, it's almost a recipe for unconscious ostracism and exclusion, almost in a self-fulfilling-prophecy kind of way. It's just very human to react negatively to someone who you feel is threatening. So yeah - another thing to reflect on if you are working in an EA org.)

Maybe another crux is how much those people are exceptions? As I argued in the
Niel_Bowerman · 2y
Nope.  

Thank you for writing this! I like the concept and word "Dedicate". This piece resonates a lot with me.

Answer by Caro · Jun 22, 2022

A small idea of a potentially high-impact consultancy: you may want to consider specializing in helping EAs figure out what physical health problems they have and recommending steps they can take to improve those. (I realize after writing this that you underlined that you don't like clinical work that much so maybe the following isn't that useful.)

One piece of advice from 80,000 Hours is to take care of your physical health, and notably to avoid back issues.

We were surprised to learn that the biggest risk to our productivity is probably back p

... (read more)
Joe91 · 2y
I think you're right that consultancy for EAs could be a good idea. However, I'm not particularly enthusiastic about ergonomics and posture, because I've yet to see strong evidence that they prevent pain. Other lines of evidence also suggest that traditional physiotherapy beliefs about back pain and other pain may be misguided. I think cardiovascular risk factor reduction (obesity, low physical activity) and proven injury prevention programs such as the FIFA 11+ for soccer would be more effective, but less relevant to EAs.

The Medical Mysteries Investigator sounds interesting, and I will keep an eye out for similar jobs. Thanks very much for your advice!

I found this post interesting!

I would highly recommend this book "Plays Well with Others: The Surprising Science Behind Why Everything You Know About Relationships Is (Mostly) Wrong". 

Eric Barker (from the blog "Barking Up the Wrong Tree") gives advice based on hundreds of papers on the topic. 

Thanks for this post! I've been wondering about how to think about this too.

Some burgeoning ideas: 

  • Maybe try to understand new people's moral priorities, e.g. understand if they 'score' high on "Expansive Altruism" and "Effectiveness-focused" scales. If they actually genuinely 'score' [1] high on those moral inclinations, I would tend to trust them more.
  • Maybe start a clearance process, in the sense of checking the background of people, etc. National security of countries also has to deal with this type of alignment problem.
  • Teach people how t
... (read more)

I'll be able to do phone banking on Tuesday from 10am to 1pm PT - join then!

And I'm happy to help coordinate outside of this! 

Lots of useful insights. At this point, I lean more toward this approach: not fanning the flames.

"How should I respond to takes on EA that I disagree with?

Maybe not at all — it may not be worth fanning the flames. 

If you do respond, it helps to link to a source for the counter-point you want to make. That way, curious people who see your interaction can follow the source to learn more."

Agree with this point.  Jeffrey Ladish wrote "US Citizens: Targeted political contributions are probably the best passive donation opportunities for mitigating existential risk". 

He says: 

Recently, I’ve surprised myself by coming to believe that donating to candidates who support policies which reduce existential risks is probably the best passive donation opportunity for US citizens. The main reason I’ve changed my mind is that I think highly aligned political candidates have a lot of leverage to affect policies that could impact the long-t

... (read more)

It's quite hard to know, and I don't know what the campaign team thinks about it.

There is a good article on Vox about the evidence base for these things: "Gerber and Green’s rough estimate is that canvassing can garner campaigns a vote for about $33, while volunteer phone-banking can garner a vote for $36 — not too different, especially when you consider how imprecise these estimates necessarily are." Not exactly what you asked, but it can give you a sense of direction.

I also think that helping Carrick would be super good!

Regarding phone banking, I wouldn't be that interested in paying for volunteers. The most important factor in the effectiveness of the calls is that the caller is genuinely enthusiastic about the candidate - basically, if the caller is really interested, the person on the other end of the line is about three times more likely to be convinced than if the caller isn't really enthusiastic about the candidate. (I don't have the specific paper this was in.) So it's great if you get your friends t... (read more)

The campaign can only use the first $2,900 in the primary campaign, but they can use the rest in the general election if they win the primary. If they don't win the primary, the options are either returning the remaining money to you or passing it along to another campaign. 

Additional funding right now would finance better, more personal ads that still work in these final days.

As you've already given $2,900, I'd recommend:

  • Going directly to Oregon and knocking on doors!
  • Phone banking
  • Talking to your Oregon friends who live in District 6 or have connections to it
  • Saying why you care about voting for Carrick on social media

It's about every Monday - the next one being in three days.

However, I recommend that you sign up even outside of these slots because there are still opportunities to do phone calls! 

Kuhan is probably right. However, after speaking to someone on Team Carrick today, it seems like there is still room for funding for the campaign's ads, which are different from the PAC's ads and show Carrick talking directly to people. So giving now still makes sense (for the next 48 hours), even though the effects are smaller than a few days ago.

Thanks for the suggestion; I'm definitely going to consider that. I'm a bit worried about feeding the troll... Maybe something more focused on why I think he's really a good candidate, and more detailed?

NotOtherwiseSpecified · 2y
Maybe these are obvious considerations, but seeing those reddit comments makes me wonder:

  • At what point does further ad spending become actively counterproductive by provoking people into voting for the competition, or into persuading others to?
  • Is it worth it for someone like Flynn or Bankman-Fried to communicate directly to the voting public, explicitly acknowledging how many ads there are from outside funders, and explaining why that seemed like a legitimate thing to do given what these outside funders felt to be the stakes? (At least I don't think I've seen such communication so far.) That might give people an alternative to the adversarial frame that they might otherwise default to.
John_Maxwell · 2y
I think there is no harm in setting up an alert in case there are more threads about him. The earlier you arrive in a thread, the greater the opportunity to influence the discussion. If people are going to be reading a negative comment anyways, I don't think there is much harm in replying, at least on reddit -- I don't think reddit tends to generate more views for a thread with more activity, the way twitter can. In fact, replying to the older threads on reddit could be a good way to test out messaging, since almost no one is reading at this point, but you might get replies from people who left negative comments and learn how to change their mind. I've had success arguing for minority positions on my local subreddit by being friendly, respectful, and factual.

Beyond that I'm really not sure; creating new threads could be a high-risk/high-reward strategy to use if he's falling in the polls. Maybe get him to do an AMA? My local subreddit's subscriber count is about 20% of the population of the city, and I've never seen a political candidate post there, even though there is lots of politics discussion. I think making an AMA saying what you've learned from talking to voters, and asking users what issues are most important to them, early in a campaign could be a really powerful strategy (edit: esp. if prearranged w/ subreddit moderators).

I don't know if there is a comparable subreddit for District 6 though, e.g. this subreddit only has about 1% of the city population according to Wikipedia, and it's mostly pretty pictures right now so they might not like it if you started talking about politics.

It is still useful to donate until Sunday!

I just talked with someone on Carrick's campaign and they said that there are still two more days during which ads are useful. The PAC has its own ads, but they don't show Carrick speaking. The campaign has better, more personable ads that people like better, but the campaign can't use the PAC's funding for those ads.

And we have another Fermi estimate of the ROI on a donation to Carrick’s campaign! 
 

'There are 435 members of the House of Representatives. Let’s assume that the House as a whole holds 1/4 of... (read more)

Here's an ITN-style analysis from an anonymous friend and me.
 

  1. Neglectedness. Very few candidates put pandemic preparedness and preserving future generations as their top priorities. Also, given how important it is to have flexible spending from small donors compared to big donors, we could say that $3k increases his effective level of funding by 0.1%.
  2. Tractability. A doubling in funding could increase the probability of the campaign winning by 10 percentage points (i.e. 30%→40%).
  3. Importance. The value of a win is worth $N
  4. The calculation. Then the value of a $3k donation
... (read more)
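The arithmetic being set up above can be sketched numerically. This is only an illustration of the Fermi estimate: the 0.1% funding share and the 10-percentage-point gain from doubling come from the comment, while the linear scaling assumption and the $1M stand-in for the win value $N are hypothetical choices made for the example.

```python
# Hypothetical sketch of the Fermi estimate; the linear scaling of
# win probability with funding is an assumption, not a claim.

def marginal_donation_value(funding_share, prob_gain_per_doubling, win_value):
    """Expected value of a marginal donation, assuming the probability
    gain scales linearly with the fractional increase in funding."""
    prob_gain = funding_share * prob_gain_per_doubling
    return prob_gain * win_value

# A $3k donation ~ 0.1% of funding; doubling funding adds ~10 percentage
# points to P(win); win value N set to an illustrative $1M.
value = marginal_donation_value(0.001, 0.10, 1_000_000)
print(round(value))  # roughly $100 of expected value per $3k donation
```

Under these numbers, a $3k donation buys about a 0.01-percentage-point increase in the win probability, and the result scales linearly with whatever value one assigns to $N.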

I'm genuinely curious: if you’re considering donating but haven’t yet, what are your key questions/cruxes?

Feel free to answer here and/or DM me. 

I'm organizing a small group discussion around this and your questions would help direct the conversation.  Also maybe some people can answer here!

If you're interested in chatting casually about this, please DM me :)
 

Austin · 2y
I have donated $2,900, and I'm on the fence about donating another $2,900. Primarily, I'm not sure what a marginal dollar to the campaign will accomplish -- is the campaign still cash-constrained? My very vague outsider sense is that the Flynn campaign has already blanketed the area with TV ads, so additional funding might not do that much, e.g. judging from local coverage from a somewhat hostile source.
Caro · 2y

Why are you volunteering for the campaign?

I’m a volunteer for the campaign because

  • I am convinced that Carrick is an exceptional candidate because of his track record of making big, impactful things happen. Examples: co-founding GovAI and CSET -- which are top references in AI governance --, saving potentially thousands of lives by clearing a roadblock to a nationwide vaccination program, and securing a court decision that reallocated over $1 billion to high impact health programs by manually going through over 1,000 pages of accounting documentation.
  • Also,
... (read more)
John_Maxwell · 2y
Have you thought about crossposting this to some local subreddits? I searched for Carrick's name on reddit and he seems to be very unpopular there. People are tired of his ads and think he's gonna be a shill for the crypto industry. Maybe you could make a post like "Why all of the Flynn ads? An explanation from a campaign volunteer".
Caro · 2y

(Just a note that it's completely possible to be a bit silly when you're a kid - e.g. play with Barbie dolls, read romance novels, swoon over the next Bieber (or the next Taylor Swift), and fret about having too few shoes and dresses - and still be extremely attached to EA principles and dedicate one's life to EA ideas. I did all of the above, and I would describe myself as high-curiosity, high-openness, high-altruism, thoughtful, and really trying to be epistemically strong. I think it's OK to be a bit silly when you're a kid or a teenager... and also still a bit silly when you're an adult? I'm not sure that "silly stuff" is actually a deal-breaker to being ridiculously impactful later on.)

DPiepgrass · 2y
Thanks for pointing that out! Of course, my actual worry is that she won't pick up on EA principles when the only EA in her environment is me. I hate to have to move to an overpriced EA hub city to provide more intellectual infrastructure, but it's on the table.
Caro · 2y

I also think that having friends, colleagues, and coaches who are very honest with you is extremely important, because invisible mistakes are sometimes especially hard to spot. Maybe you got a very fancy-looking new position, but it's way "less good" than a more hidden but higher-impact job. The rest of the world will tell you that it's great, so you need honest friends, and to be in a sufficiently good mental space to receive this feedback.

Reviewing your decision on your own every X months and trying to make predictions about what "really good impact" looks like may be a good idea.

(Note: I have edited this comment after finding even more reasons to agree with Neel)

I find your answer really convincing so you made me change my mind!

 

  1. On truth-seeking without the whole "EA is a question" framing: if someone made the case for existential risks using quantitative and analytical thinking, that would work. We should just focus on conveying these ideas in a rational and truth-seeking way.
  2. On cause-neutrality: Independently of what you said, you may make the case that the probability of finding a cause that is even higher impact than AI and
... (read more)

A small contribution to your list: it seems like Indonesia has a lot of natural disaster risks, like earthquakes, tsunamis, and volcanic eruptions.

Interesting! Before this study, I thought that EAs had two factors, Effectiveness and Altruism, but I defined Effectiveness differently. I thought it referred to something like "optimization mindset" and "being good at thinking quantitatively and strategically". This would probably have been a defining characteristic of people in business school. 

So I updated my definition of "Effectiveness" in EA. I'd be curious to see more studies about other personality traits! 

I think it's likely that optimization mindset and numeracy are important proto-EA factors, just ones that weren't measured here, as they're more difficult to measure through survey questions. Also, holding constant someone's overall agreement with EA, I'd predict that someone with an optimization mindset would have a higher average impact, though this outcome is more difficult to measure.

Thanks for this!

Typo: it's not "Stuart Russell on why he is co-founding Aligned AI" but Stuart Armstrong.
