Registering that this line of questioning (and volume of questions) strikes me as a bit off-putting/too intense.
If someone asked "What were the key concerns here, and how were they discussed?" [...] "what questions did you ask, and what were the key considerations/evidence?" about interactions I had years ago, I would feel like they were holding me to an unrealistic standard of memory or documentation.
(Although I do acknowledge the mood that these were some really important interactions. Scrutiny is an appropriate reaction, but I still find this off-putting.)
the EA community is not on track to learn the relevant lessons from its relationship with FTX.
In case it helps, here’s some data from Meta Coordination Forum attendees on how much they think the FTX collapse should influence their work-related actions and how much it has influenced their work-related actions:
I think in any world, including ones where EA leadership is dropping the ball or is likely to cause more future harm like FTX, it would be very surprising if they individually had not updated substantially.
As an extreme illustrative example, really just intended to get the intuition across: imagine that some substantial fraction of EA leaders were involved in large-scale fraud and planned to continue doing so (which, to be clear, I don't have any evidence of). Then of course the individuals would update a lot on FTX, but probably on the dimensions of "her...
Thanks! It's something very similar to the 'responsibility assignment matrix' (RACI) popularized by consultants, I think. But in this case BIRD is more about decisions (rather than tasks) and stands for Bound (set guidelines), Input (give advice), Responsible (do the bulk of the work thinking the decision through and laying out reasoning), and Decider (make the decision).
Interesting post!
these PF basically think being EA-aligned means you have to be a major pain in the ass to your grantees.
This surprised me. I wonder why this is the case. Maybe it stems from early GiveWell charity investigations, where they made the charities do a lot of reporting? My experience with EA grantmakers is that they're at least very aware of the costs of overhead and try hard to avoid red tape.
Fwiw, I don't know anybody actively promoting 'EA has to be all-or-nothing.' Like, there's not an insider group of EA thought leaders who have decided that "either you give 100% to AMF or you're not an EA." I'm not denying that that vibe can be there, though; I guess it may just be more of a cultural thing, or the product of a mostly maximizing philosophy.
Context: As part of the CEA events team, I help organize and attend a lot of events that draw EA thought leaders.
I'm glad you posted this! I like it :)
I hadn't heard of the moral saint thought experiment or the "one reason too many" objection; both of these are interesting.
Bumping a previous EA forum post: Key EA decision-makers on the future of EA, reflections on the past year, and more (MCF 2023).
This post recaps a survey about EA 'meta' topics (e.g., talent pipelines, community building mistakes, field-building projects, etc.) that was completed by this year's Meta Coordination Forum attendees. Meta Coordination Forum is an event for people in senior positions at community- and field-building orgs/programs, like CEA, 80K, and Open Philanthropy's Global Catastrophic Risk Capacity Building team. (The event has previously gon...
Thank you for posting this!
I find these questions and considerations really interesting, and I could see myself experimenting with researching questions of this sort for 3 months. As a relatively junior person who may end up doing this independently, though, I worry that none of the thinking/writing I'd do on one of these topics would actually change anything. Do you have any advice on making this type of research useful?
Any plans to address these would come from the individuals or orgs working in this space. (This event wasn't a collective decision-making body, and wasn't aimed at creating a cross-org plan to address these—it was more about helping individuals refine their own plans).
Re the talent development pipeline for AI safety and governance, some relevant orgs/programs I'm aware of off the top of my head include:
A quick note to say that I’m taking some time off after publishing these posts. I’ll aim to reply to any comments from 13 Nov.
FYI that this is the first of a few Meta Coordination Forum survey summaries. More coming in the next two weeks!
Consider keeping a running list of people you’d like to meet at the next EAG(x)
I used to be pretty overwhelmed when using Swapcard to figure out who I should meet at EAG(x)s. I still get kinda overwhelmed, but I've found it helpful to start identifying meetings I'd like to have many weeks before the event.
I do this by creating an easy-to-add-to list in the months leading up to the event. Then when I encounter someone who wrote an interesting post, or if a friend mentions someone scoping a project I'm excited about, I add them to this list. (I use the app t...
If you're willing to write up some of your on-the-ground perspective and it seems valuable, we'd be happy to share it with attendees!
Off the top of my head, I'm thinking things like:
I'll DM you with info on how to share such a write-up, if you're interested.
Random anecdata on a very specific instance: I worked with Drew to make a Most Important Century writing prize. I enjoyed our conversations and we worked well together — Drew was nice and ambitious. I heard concerns about Nonlinear at this time, but never had anything suspect come up.
I similarly had a positive experience with Drew creating The Econ Theory AI Alignment Prize. I was not prompted by anyone to comment on this.
I second this - I have met Drew at multiple conferences and my experiences with him have only been positive. He has always been very nice to talk to. I have also heard others in EA circles share their anecdotes about him, and nobody had anything bad to say whatsoever.
Appreciate you saying this, Michel. As you can imagine, it’s been rough. Perhaps accidentally, this post seems to often lump me in with situations I wasn’t really a part of.
To put it more strongly: I would like to make clear that I have never heard any claims of improper conduct by Drew Spartz (in relation to the events discussed in this post or otherwise).
Flagging the EA Opportunity Board to make sure it's on your radar. It focuses mostly on more junior, non-permanent positions and is pretty EA-focused.
Thanks for engaging with this post! I appreciate the seriousness with which you engage with the question “is this actually good?” If it’s not, I don’t want to celebrate this donation or encourage others to do the same; the answer to that question matters.
I think your arguments are plausible, and I agree with the comment you make elsewhere that the EA Forum should be a place for such (relatively) unfiltered pushback.
But in the absence of any inside view on this topic, and given the fair pushback Nick made, I still feel solid about holding the outside view that d...
Thanks Nick! Really appreciate you flagging this. I didn't intend it that way and hope the edit helps:
Maybe playing with their friends in the schoolyard, maybe spending time with their grandma, or maybe just kicking rocks, alone.
I'd be very curious whether there are historical case studies of how closely private corporations have stuck to voluntary commitments they made, and how long it took for more binding regulation to replace those commitments.
Pulling out highlights from the PDF of the voluntary commitments that AI companies agreed to:
...The following is a list of commitments that companies are making to promote the safe, secure, and transparent development and use of AI technology. These voluntary commitments are consistent with existing laws and regulations, and designed to advance a generative AI legal and policy regime. Companies intend these voluntary commitments to remain in effect until regulations covering substantially the same issues come into force. Individual companies may make additional
Cool initiative! I'm worried about a failure mode where these just stay in the EA blogosphere and don't reach the target audiences we'd most like to engage with these ideas (either because they're written in a language and style that isn't well received elsewhere, or because no active effort is made to share them with people who may be receptive).
Do you share this concern, and if so, do you have a sense of how to mitigate it?
College students interested in this may also want to check out remote government internships via VSFS! Here are the open positions.
This might be too loose a criterion for 'power-seeking', or at least for the version of power-seeking that has the negative connotations this post alludes to. By this criterion, a movement like Students for Sensible Drug Policy would be power-seeking.
Maybe it's just being successful at these things that makes the difference between generic power-se...
Agree. Something that clarified my thinking on this (I still feel pretty confused!) is Katja Grace's counterarguments to the basic AI x-risk case. In particular, the section on "Different calls to 'goal-directedness' don't necessarily mean the same concept" and the discussion of "pseudo-agents" clarified how there are other ways for agents to take actions than purely optimizing a utility function (which humans don't do).
I really agree with the thesis that EA orgs should make it clear what they are and aren't doing.
But I think specifically for EA orgs that try to be public-facing (e.g., an org focused on high-net-worth donors like Longview), there's a way that very publicly clarifying a transparent scope can cut against their branding/how they sell themselves. These orgs need to sell themselves at some level to be as effective as possible (it's what their competition is doing), and selling yourself and being radically transparent do seem to trade off at times (e.g.,...
Learned a new concept today! Pasting in the summary in case people don't click on the link but are still curious:
Moral injury is understood to be the strong cognitive and emotional response that can occur following events that violate a person's moral or ethical code.
...Potentially morally injurious events include a person's own or other people's acts of omission or commission, or betrayal by a trusted person in a high-stakes situation. For example, health-care staff working during the COVID-19 pandemic might experience moral injury because they perceive that
- Find Google Docs where people (whose judgement you respect) have left comments and an overall take on the promisingness of the idea. Hide their comments and form your own take. Compare. (To make this a faster process, pick a doc/idea where you have enough background knowledge to answer without looking up loads of things)
This is a good tip! Hadn't thought of this.
I like using Todoist's quick add feature (on mobile and desktop) to do this without having to open up the Gdoc and interrupt my workflow.
Not community growth per se, but I asked a similar question about community health that got some interesting answers: What are some measurable proxies for community health?
I like the idea of featuring well-packaged research questions, but I don't want to flood the board with them.
I am currently hiring a new director to execute on product improvement and outreach projects as I step into a more strategic advisor role. I'll sync with the new hire about featuring these research questions.
Seeing this late but appreciate the comment! I think this makes a valuable distinction I had oversimplified. Made some changes and will communicate this more clearly going forward.
+1 to a desire to read GPI papers but never actually having read any, because I perceive them to be big and academic at first glance.
I have engaged with them in podcasts that felt more accessible, so maybe there's something there.
Fair enough. Interesting to see people's different intuitions on this.