All of ClaireZabel's Comments + Replies

Thanks for sharing this, Tom! I think this is an important topic, and I agree with some of the downsides you mention and think they're worth weighing highly; many of them are the kinds of things I was thinking of in this post of mine when I listed these anti-claims:

Anti-claims

(I.e. claims I am not trying to make and actively disagree with) 

  • No one should be doing EA-qua-EA talent pipeline work
    • I think we should try to keep this onramp strong. Even if all the above is pretty correct, I think the EA-first onramp will continue to appeal to lots of gr
... (read more)

Seriously. Someone should make a movie!

7
Linch
1y
Agreed, if this was on Netflix I'd probably watch it, and I'd potentially be pretty happy to contribute to a Patreon/Kickstarter/etc of such a movie in the works!

Very strongly agree, based on watching the career trajectories of lots of EAs over the past 10 years. I think focusing on what broad kinds of activities you are good at and enjoy, and what skills you have or are well positioned to obtain (within limits: e.g. "being a really clear and fast writer" is probably helpful in most cause areas, "being a great salsa dancer" maybe less so), and then thinking about how to apply them in the cause area you think is most important, is generally much more productive than trying to entangle that exploration with personal cause prio exercises.

Our impression when we started to explore different options was that one can’t place a trustee on a leave of absence; it would conflict with their duties and responsibilities to the org, and so wasn’t a viable route.

Isn't the point of being placed on leave in a case like this to (temporarily) remove the trustee from their duties and responsibilities while the situation is investigated, as their ability to successfully execute on their duties and responsibilities has been called into question? 

(I'm not trying to antagonize here – I'm genuinely trying to understand the decision-making of EA leadership better as I think it's very important for us to be as transparent as possible in this moment given how it seems the opacity around past decision-making contributed to... (read more)

Chiming in from the EV UK side of things: First, +1 to Nicole’s thanks :) 

As you and Nicole noted, Nick and Will have been recused from all FTX-related decision-making. And, Nicole mentioned the independent investigation we commissioned into that. 

Like the EV US board, the EV UK board is also looking into adding more board members (though I think we are slightly behind the US board), and plans to do so soon.  The board has been somewhat underwater with all the things happening (speaking for myself, it’s particularly difficult because a ... (read more)

2
Milan_Griffes
1y
Thanks, Claire. Can you comment on why Nick Beckstead and Will MacAskill were recused rather than placed on leaves of absence? 

My favorite is probably the movie Colossus: the Forbin Project. For this, would also weakly recommend the first section of Life 3.0. 

Hey Jack, this comment might help answer your question. 

Hi Claire,

Thanks for coming back to this comment.

I have heard it said that large funders often ask for a seat on the Board of charities they fund. I've never actually heard of a concrete example of this, but I'm happy to take it on faith.

What I'm more surprised about is that the funder would appoint someone to the Board who then assesses grant applications from that nonprofit. This is surely an unavoidable conflict of interest - the Board member has a direct interest in gaining the grant for the nonprofit, even if it's not in the grantor's best interests t... (read more)

That’s correct. It’s common for large funders of organizations to serve on the boards of organizations they support, and I joined the EVF board partly because we foresaw synergies between the roles (including me acting as grant investigator on EVF grants). Leadership at both organizations is aware that I am in both roles.

Also, though you didn’t ask: I don’t receive any compensation for my work as an EVF board member.

Hey, I wanted to clarify that Open Phil gave most of the funding for the purchase of Wytham Abbey (a small part of the costs were also committed by Owen and his wife, as a signal of “skin in the game”). I run the Longtermist EA Community Growth program at Open Phil (we recently launched a parallel program for EA community growth for global health and wellbeing, which I don’t run) and I was the grant investigator for this grant, so I probably have the most context on it from the side of the donor. I’m also on the board of the Effective Ventures Foundation (... (read more)

How much professional advice on the cost and resource requirements on refurbishing and maintaining the property did Owen obtain? I note this is a Grade 1 listed building.

Thank you Claire.

Just to understand fully: in your role at Open Phil in November 2021, you acted as the key decision-maker to award a grant of ~£15m to the Effective Ventures Foundation while simultaneously acting as a Director of the Effective Ventures Foundation (appointment confirmed on 18 July 2019).

Or have I misunderstood the role of "grant investigator" or some aspect of the timing?

1
Ramiro
1y
Thanks for finally providing an answer for this, but it's still unclear why Owen Cotton-Barratt [see the edit] said the donor wanted to remain anonymous. [EDIT: OCB didn't say such a thing. But it's still unclear (a) why he couldn't disclose the donor's identity, and (b) why he claimed that the funds were specifically for this purchase, implying that effective altruists couldn't spend the £15m any other way]
14
DMMF
1y

I appreciate it may be worthwhile for OP to fund the acquisition of a dedicated EA events space, but the shift from:

"we should fund a dedicated EA events space"

to

"we should specifically fund the purchase of Wytham Abbey"

is alarming given the obvious challenges with Wytham Abbey (both with the property and the COI issues).

If EVF or OP wanted to purchase a dedicated event space and solicited applications/proposals for it, given all of the stated concerns, I am confident Wytham Abbey would not have won. I think it is worthwhile for OP to reflect on what went wrong here.

This reads as though the approach to grant making was “is this positive EV” rather than “does this maximise EV”, which seems bad.

It's no concern of mine how OP spends its money, but since it's come up here: I don't think your cost estimate can be correct.

Firstly, OP doesn't have the asset, so its resale value is irrelevant to you.  It's all very well to say that proceeds would be used for EVF's general funding which would funge against OP's future grants, but (a) there doesn't seem to be anything stopping EVF from using the proceeds for some specific project which OP wouldn't otherwise fund and (b) it's possible to imagine a scenario in which OP ceases to fund EVF and there's n... (read more)

5
MaxRa
1y
Thanks for explaining the reasoning; it reads as very reasonable to me and I appreciate you taking the time! I wonder if you've changed your mind on communicating Open Phil's funding decisions going forward. Random thoughts from me:
  • IIRC the info on your grant pages is usually very sparse.
  • If you focus on communicating funding decisions that seem somewhat novel and large, it might not require so much extra time.
  • Better understanding your reasoning would increase the level of trust that interested and well-meaning outsiders have towards you and the EA community more broadly.
  • Understanding your thinking better might help potential grantees know what types of projects seem worth pitching to you.
And just a minor curiosity, would be really interested in hearing more about what kinds of issues to look out for when renting venues:

Hi Claire - thanks for the extra info here, which is very helpful.

Can you say whether you/Open Phil considered anything here to be a conflict of interest and if so how you managed that?

At a first glance, a trustee of EVF recommending a grant of £10m+ to EVF on behalf of their employer seems like a CoI.

Thanks for sharing this info, Claire! 

I think your team correctly concluded that in-person events are enormously valuable for people making big career changes, but running in-person events is expensive and super logistically challenging. I think logistics are somewhat undervalued in the EA community, e.g. I read a lot of criticism along the lines of, "Why don't community organizers or EAGs just do some extremely time-costly thing," without much appreciation for how hard it is to get things to happen.

From this perspective, lowering the barrier f... (read more)

-42
Tony Sinclair
1y
26
jai
1y

Given the massive decline in expected EA liquidity since the purchase, and the fact that the purchase was largely justified on the grounds that as a durable asset it could be converted back into liquid funds with minimal loss, why not sell it now?

3
Jeroen Willems
1y
I really appreciate this response, thank you! I would like to hear more about the grant page publishing process.

Not the intended audience, but as a US person who lives in the Bay Area, I enjoyed reading this really detailed list of what's often unusual or confusing to people from a specific different cultural context.

1
JoshYou
1y
I loved this Wikitravel article about American culture for this same reason.

I generally directionally agree with Eli Nathan and Habryka's responses. I also weak-downvoted this post (though felt borderline about that), for two reasons. 

(1) I would have preferred a post that tried harder to even-handedly discuss and weigh up upsides and downsides, whereas this mostly highlighted upsides of expansion, and (2) I think it's generally easier to publicly call for increased inclusivity than to publicly defend greater selectivity (the former will generally structurally have more advocates and defenders). In that context I feel worse a... (read more)

Quite. I was in that Stanford EA group, I thought Kelsey was obviously very promising and I think the rest of us did too, including when she was taking a leave of absence. 

No worries, appreciate ppl checking  :) 

As noted in the post, I got Scott's permission before posting this. 

7
ThomasW
2y
Yes, that's my mistake, sorry.

I strongly disagree with Greg. I think CFAR messed up very badly, but I think the way they messed up is totally consistent with also being able to add value in some situations. 

We have data I find convincing suggesting a substantial fraction of top EAs got value from CFAR. ~ 5 years have passed since I went to a CFAR workshop, and I still value what I learned and think it's been useful for my work. I would encourage other people who are curious to go (again, with the caveat that I don't know much about the new program), if they feel like they're in a ... (read more)

18
[anonymous]
2y

To build on Greg's example, I think in normal circumstances, if e.g. a school was linked with a summer camp for high schoolers, and the summer camp made the errors outlined in the post linked to, then the school would correctly sever ties with the summer camp.

The mistakes made seem to me to be outrageously bad - they put teenagers in the custody of someone they had lots of evidence was an unethical sociopath, and they even let him ask a minor to go to Burning Man with him, and after that still didn't ban him from their events (!). Although apparently l... (read more)

I don't find said data convincing re. CFAR, for reasons I fear you've heard me rehearse ad nauseam. But this is less relevant: if it were just 'CFAR, as an intervention, sucks' I'd figure (and have figured over the last decade) that folks don't need me to make up their own minds. The worst case, if that was true, is wasting some money and a few days of their time.

The doctor case was meant to illustrate that sufficiently consequential screw-ups in an activity can warrant disqualification from doing it again - even if one is candid and contrite about them. I ... (read more)

You said you wouldn’t tell anyone about your friend’s secret, but this seems like a situation where they wouldn’t mind, and it would be pretty awkward to say nothing…etc.

 

This isn't your main point, and I agree there's a lot of motivated cognition people can fall prey to. But I think this gets a bit tricky, because people often ask for vague commitments, that are different from what they actually want and intend. For example, I think sometimes when people say "don't share this" they actually mean something more like "don't share this with people that ... (read more)

3
Jeffrey Ladish
2y
I super agree it's important not to conflate "do you keep actually-thoughtful promises you think people expected you to interpret as real commitments" and "do you take all superficially-promise-like things as serious promises"! And while I generally want people to think harder about what they're asking for wrt commitments, I don't think going overboard on strict-promise interpretations is good. Good promises have a shared understanding between both parties. I think a big part of building trust with people is figuring out a good shared language and context for what you mean, including when making strong and weak commitments.

I wrote something related in my first draft but removed it since it seemed a little tangential, but I'll paste it here: "It’s interesting that there are special kinds of ways of saying things that hold more weight than other ways of saying things. If I say “I absolutely promise I will come to your party”, you will probably have a much higher expectation that I’ll attend than if I say “yeah I’ll be there”. Humans have fallible memory; they sometimes set intentions and then can’t carry through. I think some of this is a bit bad and some is okay. I don’t think everyone would be better off if every time they said they would do something they treated this as an ironclad commitment and always followed through. But I do think it would be better if we could move at least somewhat in this direction."

Which, based on your comment, I now think the thing to move toward is not just "interpreting commitments as stronger" but rather "more clarity in communication about what kinds of commitments are what type."

This seems really exciting, and I agree that it's an underexplored area. I hope you share resources you develop and things you learn to make it easier for others to start groups like this.

PSA for people reading this thread in the future: Open Phil is also very open to and excited about supporting AI safety student groups (as well as other groups that seem helpful for longtermist priority projects); see here for a link to the application form.

I used to agree more with the thrust of this post than I do, and now I think this is somewhat overstated. 

[Below written super fast, and while a bit sleep deprived]

An overly crude summary of my current picture is "if you do community-building via spoken interactions, it's somewhere between "helpful" and "necessary" to have a substantially deeper understanding of the relevant direct work than the people you are trying to build community with, and also to be the kind of person they think is impressive, worth listening to, and admirable. Additionally, be... (read more)

A lot of what Claire says rings true to me.

Just to focus on my experience:

  • 3-5% of time talking to key object level people feels very useful. I think I did too little of this after covid started and I stopped going to in-person conferences (and didn't set up a compensating set of meetings), and that was a mistake.
  • Considering my options going forward, I now have the opportunity to spend serious time learning about object level issues, but it usually seems like the best way to do that is just to speak to lots of people in the area and read about them, rather
... (read more)
8
Owen Cotton-Barratt
2y
Thanks, really appreciated this (strong upvoted for the granularity of data). To be very explicit: I mostly trust your judgement about these tradeoffs for yourself. I do think you probably get a good amount from social osmosis (such that if I knew you didn't talk socially a bunch to people doing direct work I'd be more worried that the 5-10% figure was too low); I almost want to include some conversion factor from social time to deliberate time. If you were going to get worthwhile benefits from more investment in understanding object-level things, I think the ways this would seem most plausible to me are:
  • Understanding not just "who is needed to join AI safety teams?", but "what's needed in people who can start (great) new AI safety teams?"
  • Understanding the network of different kinds of direct work we want to see, and how the value propositions relate to each other, to be able to prioritize finding people to go after currently-under-invested-in areas
  • Something about long-term model-building which doesn't pay off in the short term but you'd find helpful in five years' time
Overall I'm not sure if I should be altering my "20%" claim to add more nuance about degree of seniority (more senior means more investment is important) and career stage (earlier means more investment is good). I think something like that is probably more correct, but "20%" still feels like a good gesture as a default. (I also think that you just have access to particularly good direct-work people, which means you probably get some of the benefits of sync about what they need in more time-efficient ways than may be available to many people, so I'm a little suspicious of trying to hold up the Claire Zabel model as one that will generalize broadly.)

>It's fine to have professional facilitators who are helping the community-building work without detailed takes on object-level priorities, but they shouldn't be the ones making the calls about what kind of community-building work needs to happen

I think this could be worth calling out more directly and emphatically. I think a large fraction (idk, between 25 and 70%) of people who do community-building work aren't trying to make calls about what kinds of community-building work needs to happen.

8
Owen Cotton-Barratt
2y
Noticing that the (25%, 70%) figure is sufficiently different from what I would have said that we must be understanding some of the terms differently. My clause there is intended to include cases like: software engineers (but not the people choosing what features to implement); caterers; lawyers ... basically if a professional could do a great job as a service without being value-aligned, then I don't think it's making calls about what kind of community building needs to happen.

I don't mean to include the people choosing features to implement on the forum (after someone else has decided that we should invest in the forum), people choosing what marketing campaigns to run (after someone else has decided that we should run marketing campaigns), people deciding how to run an intro fellowship week to week (after someone else told them to), etc.

I do think in this category maybe I'd be happy dipping under 20%, but wouldn't be very happy dipping under 10%. (If it's low figures like this, it's less likely that they'll be literally trying to do direct work with that time vs just trying to keep up with its priorities.)

Do you think we have a substantive disagreement?
8
Owen Cotton-Barratt
2y
I guess I think there's a continuum of how much people are making those calls. There are often a bunch of micro-level decisions that people are making which are ideally informed by models of what it's aiming for. If someone is specializing in vegan catering for EA events then I think it's fine if they don't have models of what it's all in service of, because it's pretty easy for the relevant information to be passed to them anyway. But I think most (maybe >90%) roles that people centrally think of as community building have significant elements of making these choices.

I guess I'm now thinking my claim should be more like "the fraction should vary with how high-level the choices you're making are", and that I should provide some examples of reasonable points along that spectrum?

I put a bunch of weight on  decision theories which support 2. 

A mundane example: I get value now from knowing that, even if I died, my partner would pursue certain Claire-specific projects I value being pursued because it makes me happy to know they will get pursued even if I die. I couldn't have that happiness now if I didn't believe he would actually do it, and it'd be hard for him (a person who lives with me and who I've dated for many years) to make me believe that he actually would pursue them even if it weren't true (as well as seeming ske... (read more)

Thanks for this! Most of what you wrote here matches my experience and what I've seen grantees experience. It often feels weird and frustrating (and counter to econ 101 intuitions) to be like "idk, you just can't exchange money for goods and services the obvious way, sorry, no, you can't just pay more money to get out of having to manage that person and have them still do their work well" and I appreciate this explanation of why.
 

Riffing off of the alliance mindset point, one shift I've personally found really helpful (though I could imagine it backfiring for other people) in decision-making settings is switching from thinking "my job is to come up with the right proposal or decision" to "my job is to integrate the evidence I've observed (firsthand, secondhand, etc.) and reason about it as clearly and well as I'm able". 

The first framing made me feel like I was failing if other people contributed; I was "supposed" to get to the best decision, but instead I came to the wrong on... (read more)

This is a cool idea! It feels so much easier to me to get myself started reading a challenging text if there's a specified time and place with other people doing the same, especially if I know we can discuss right after. 

I'm interested in and supportive of people running different experiments with meta-meta efforts, and I think they can be powerful levers for doing good. I'm pretty unsure right now if we're erring too far in the meta and meta-meta direction (potentially because people neglect the meta effects of object-level work) or should go farther, but hope to get more clarity on that down the road. 

So to start, that comment was quite specific to my team and situation, and I think historically we've been super cautious about hiring (my sense is, much more so than the average EA org, which in turn is more cautious than the next-most-specific reference class org).

Among the most common and strongest pieces of advice I give grantees with inexperienced executive teams is to be careful about hiring (generally, more careful than I think they'd have been otherwise), and more broadly to recognize that differences in people's skills and interests lead to ... (read more)

2
weeatquince
2y
Really helpful. Good to get this broader context. Thank you!!

Thanks Miranda, I agree these are things to watch really closely for. 

Thanks Akash. I think you're right that we can learn as much from successes and well-chosen actions as mistakes, and also it's just good to celebrate victories. A few things I feel really pleased about (on vacation so mostly saying what comes to mind, not doing a deep dive): 

  • My sense is that our (published and unpublished) research has been useful for clarifying my picture of the meta space, and helpful to other organizations (and led to some changes I think are pretty promising, like increased focus on engaging high schoolers who are interested in lo
... (read more)

Thoughtful and well-informed criticism is really useful, and I'd be delighted for us to support it;  criticism that successfully changes minds and points to important errors is IMO among the most impactful kinds of writing. 

In general, I think we'd evaluate it similarly to other kinds of grant proposals, trying to gauge how relevant the proposal is to the cause area and how good a fit the team is to doing useful work. In this case, I think part of being a good fit for the work is having a deep understanding of EA/longtermism, having really strong epistemics, and buying into the high-level goal of doing as much good as possible.

I think a problem here is when people don't know if someone is being fully honest/transparent/calibrated or using more conventional positive-slanted discourse norms. E.g. a situation where this comes up sometimes is taking and giving references for a job applicant. I think the norm with references is that they should be very positive, and you're supposed to do downward adjustments on the positivity to figure out what's going on (e.g. noticing if someone said someone was "reliable" versus "extremely reliable"). If an EA gives a reference for a job applicant... (read more)

No, that's not what I'd say (and again, sorry that I'm finding it hard to communicate about this clearly). This isn't necessarily making a clear material difference in what we're willing to fund in many cases (though it could in some), it's more about what metrics we hold ourselves to and how that leads us to prioritize.  

I think we'd fund at least many of the scholarships from a pure cost-effectiveness perspective. We think they meet the bar of beating the last dollar, despite being on average less cost-effective than 80k advising, because 80k advisi... (read more)

Hm yeah, I can see how this was confusing, sorry!

I actually wasn't trying to stake out a position about the relative value of 80k vs. our time. I was saying that with 80k advising, the basic inputs per career shift are a moderate amount of funding from us and a little bit of our time and a lot of 80k advisor time, while with scholarships, the inputs per career shift are a lot of funding and a moderate amount of our time, and no 80k time. So the scholarship model is, according to me, more expensive in dollars per career shift, but less time-consuming of ded... (read more)

2
Ben Pace
2y
Thanks! The core thing I'm hearing you say is that the scholarships are the sort of thing you wouldn't fund on a cost-effectiveness metric and 80k is, but that on a time-effectiveness metric that changes it so that the scholarships are now competitive.

Agree. If possible, also, lots of private rooms people can grab for sensitive conversations, and/or places outside they can easily and pleasantly walk together, side-by-side, for same. 

1
Lizka
2y
Thanks for these comments! I agree: more nooks and quiet spaces would be great.

I haven't looked closely, but from a fairly-but-not-completely uninformed perspective, Tim's allocation of part of his donor lottery winnings to the Czech Association for Effective Altruism looks prescient and potentially unusually counterfactually impactful.

You should adjust your estimate, this only took me 1 minute :) 

2
WilliamKiely
2y
Updated, thanks! :)

[As is always the default, but perhaps worth repeating in sensitive situations, my views are my own and by default I'm not speaking on behalf of Open Phil. I don't do professional grantmaking in this area, haven't been following it closely recently, and others at Open Phil might have different opinions.]

I'm disappointed by ACE's comment (I thought Jakub's comment seemed very polite and even-handed, and not hostile, given the context, nor do I agree with characterizing what seems to me to be sincere concern in the OP just a... (read more)

I like this question :) 

One thing I've found pretty helpful in the context of my failures is to try to separate out (a) my intuitive emotional disappointment, regret, feelings of mourning, etc. (b) the question of what lessons, if any, I can take from my failure, now that I've seen the failure take place (c) the question of whether, ex ante, I should have known the endeavor was doomed, and perhaps something more meta about my decision-making procedure was off and ought to be corrected. 

I think all these things are valid and good to process, but I... (read more)

I’ll consider it a big success of this project if some people will have read Julia Galef's The Scout Mindset next time I check.

It's not out yet, so I expect you will get your wish if you check a bit after it's released :) 

The website isn't working for me, screenshot below:

3
omernevo
3y
Thanks for letting us know! If that's alright, I'll send you an email with some questions to figure out what the problem is...

Just a personal note, in case it's helpful for others: in the past, I thought that medications for mental health issues were likely to be pretty bad, in terms of side effects, and generally associated them with people in situations of pretty extreme suffering.  And so I thought it would only be worth it or appropriate to seek psychiatric help if I were really struggling, e.g. on the brink of a breakdown or full burn-out. So I avoided seeking help, even though I did have some issues that were bothering me.  In my experience, a lot of other people ... (read more)

Seconding this. My partner was spooked by seeing a family member on heavy-duty medications for a more serious mental health situation, so our vague impression was that antidepressants might really change who I was. I did need to try a couple meds and try different times of day, etc to deal with side effects, but at this point I have a med and dose that makes my life better and has very minor side effects.

As a second data point, my thought process was pretty similar to Claire's - I didn't really consider medication until reading Rob's post because I didn't think I was capital D depressed, and I'm really glad now that I changed my mind about trying it for mild depression. I personally haven't had any negative side effects from Wellbutrin, although some of my friends have. 

Scott's new practice, Lorien Psychiatry, also has some resources that I (at least) have found helpful. 

5
Julia_Wise
3y
I also like the writeups there. I was hoping I could refer community members to the actual practice, but Scott writes in a recent post: "Stop trying to sign up for my psychiatry practice. It says in three different places there that it's only currently open to patients who are transferring from my previous practice."

Also, I believe it's much easier to become a teacher for high schoolers at top high schools than a teacher for students at top universities, because most teachers at top unis are professors, or at least lecturers with PhDs, while even at fancy high schools, most teachers don't have PhDs, and I think it's generally just much less selective. So EAs might have an easier time finding positions teaching high schoolers than uni students of a given eliteness level. (Of course, there are other ways to engage people, like student groups, for which different dynamics are at play.) 

4
Jack Malde
3y
Very true, also teaching at top private schools doesn’t even require you to have gone through a teaching qualification (at least in the UK). They’re happy to hire anyone with a degree from a respected uni who has some aptitude for teaching. I have a feeling this might be quite an underrated path.

Huh, this is great to know. Personally, I'm the opposite: I find it annoying when people ask to meet and don't include a calendly link or similar. I am slightly annoyed by the time it takes to write a reply email and generate a calendar invite, and by the often greater overall back-and-forth and attention drain from having the issue linger.

Curious how anti-Calendly people feel about the "include a calendly link + ask people to send timeslots if they prefer" strategy. 

My feelings are both that it's a great app and yet sometimes I'm irritated when the other person sends me theirs.

If I introspect on the times when I feel the irritation, I notice I feel like they are shirking some work. Previously we were working together to have a meeting, but now I'm doing the work to have a meeting with the other person, where it's my job and not theirs to make it happen.

I think I expect some of the following asymmetries in responsibility to happen with a much higher frequency than with old-fashioned coordination:

  • I will book a time,
... (read more)
3
Kirsten
4y
Don't feel great about that, for the same reasons as before - it prioritizes your comfort and schedule over mine, which is kind of rude if you're asking me for a favour. But like other people, I don't necessarily endorse these feelings, and they're not super strong. It's fine for people to keep sending me calendly links.

Some people are making predictions about this topic here.

On that link, someone comments:

Berkeley's incumbent mayor got the endorsement of Bernie Sanders in 2016, and Gavin Newsom for 2020. Berkeley also has a strong record of reelecting mayors. So I think his base rate for reelection should be above 80%, barring a JerryBrownesque run from a much larger state politician.
https://www.dailycal.org/2019/08/30/berkeley-mayor-jesse-arreguin-announces-campaign-for-reelection/

I just wanted to say I thought this was overall an impressively thorough and thoughtful comment. Thank you for making it!

I’ve created a survey about barriers to entering information security careers for GCR reduction, with a focus on whether funding might be able to help make entering the space easier. If you’re considering this career path or know people that are, and especially if you foresee money being an obstacle, I’d appreciate you taking the survey/forwarding it to relevant people. 

The survey is here: https://docs.google.com/forms/d/e/1FAIpQLScEwPFNCB5aFsv8ghIFFTbZS0X_JMnuquE3DItp8XjbkeE6HQ/viewform?usp=sf_link. Open Philanthropy a... (read more)

[meta] Carl, I think you should consider going through other long, highly upvoted comments you've written and making them top-level posts. I'd be happy to look over options with you if that'd be helpful.
