All of michel's Comments + Replies

Fair enough. Interesting to see people's different intuitions on this. 

4
AnotherAnonymousFTXAccount
8d
I understand that impression/reaction. As I mentioned, the intention behind offering a bunch of specific and varied questions is that they might prompt reflection from different angles, surfacing thoughts, memories, and perspectives that provide new insight or that MacAskill finds more comfortable sharing - not that each would be responded to in forensic detail. 

Registering that this line of questioning (and volume of questions) strikes me as a bit off-putting/ too intense. 

If someone asked me "What were the key concerns here, and how were they discussed?" [...] "what questions did you ask, and what were the key considerations/evidence?" about interactions I had years ago, I would feel like they were holding me to an unrealistic standard of memory or documentation.  

(Although I do acknowledge the mood that these were some really important interactions. Scrutiny is an appropriate reaction, but I still find this off-putting.) 

These seem pretty reasonable questions to me. 

the EA community is not on track to learn the relevant lessons from its relationship with FTX. 

In case it helps, here’s some data from Meta Coordination Forum attendees on how much they think the FTX collapse should influence their work-related actions and how much it has influenced their work-related actions:

  • On average, attendees thought the average MCF attendee should moderately change their work-related actions because of the FTX collapse (Mean of 4.0 where 1 = no change and 7 = very significant change; n = 39 and SD = 1.5)
  • On av
... (read more)
6
Jason
23d
At the risk of over-emphasizing metrics, it seems that at least some of these reforms could and probably should be broken down into SMART goals (i.e., those that are specific, measurable, achievable, relevant, and time-bound). Example: "Better vet risks from funders/leaders" might be broken down into sub-tasks like:
  1. Stratify roles and positions by risk level (critical, severe, moderate, etc.);
  2. Determine priorities for implementation and the re-vetting schedule;
  3. Develop adjudication guidelines;
  4. Decide who investigates and adjudicates suitability;
  5. Set measurable and time-bound progress indicators (e.g., the holders of 75% of Critical Risk roles/positions have been investigated and adjudicated by the end of 2025).
[Note: The specific framework above borrows from the framework for security clearances and public-trust background checks in the US government. Obviously things in EA would need to be different, and the risks are different, so this is meant as an example rather than a specific proposal on this point. Yet some of the core system needs would be at least somewhat similar.]
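As a purely illustrative sketch (hypothetical role names and numbers, not part of the proposal above), a time-bound indicator like the one in sub-task 5 could be tracked with something as simple as:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Role:
    name: str
    risk_tier: str   # e.g. "critical", "severe", "moderate"
    vetted: bool     # has the holder been investigated and adjudicated?

# Hypothetical data, purely for illustration.
roles = [
    Role("Major grantmaker", "critical", True),
    Role("Board member", "critical", False),
    Role("Local group organiser", "moderate", True),
]

def tier_progress(roles, tier):
    """Fraction of roles in a given risk tier whose holders have been vetted."""
    in_tier = [r for r in roles if r.risk_tier == tier]
    return sum(r.vetted for r in in_tier) / len(in_tier) if in_tier else 1.0

TARGET = 0.75                  # 75% of Critical Risk roles vetted...
DEADLINE = date(2025, 12, 31)  # ...by the end of 2025
progress = tier_progress(roles, "critical")
print(f"Critical-tier vetting: {progress:.0%} (target {TARGET:.0%} by {DEADLINE})")
```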

I think in any world, including ones where EA leadership is dropping the ball or is likely to cause more future harm like FTX, it would be very surprising if they individually had not updated substantially. 

As an extreme illustrative example, really just intended to get the intuition across, imagine that some substantial fraction of EA leaders are involved in large-scale fraud and plan to continue doing so (which, to be clear, I don't have any evidence of). Then of course the individuals would update a lot on FTX, but probably on the dimensions of "her... (read more)

Thanks! It's something very similar to the 'responsibility assignment matrix' (RACI) popularized by consultants, I think. But in this case BIRD is more about decisions (rather than tasks) and stands for Bound (set guidelines), Input (give advice), Responsible (do the bulk of the work thinking the decision through and laying out the reasoning), and Decider (make the decision). 
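To make the shape of this concrete, here's a minimal sketch (the decision and names are hypothetical, just to show how a BIRD assignment for a single decision might be written down):

```python
# One decision, with each BIRD role explicitly assigned (hypothetical example).
bird_assignment = {
    "decision": "Which city hosts next year's retreat?",
    "Bound": ["Alice"],          # sets guidelines/constraints for the decision
    "Input": ["Bob", "Carol"],   # give advice
    "Responsible": ["Dana"],     # does the bulk of the work thinking it through and laying out reasoning
    "Decider": ["Alice"],        # makes the final call
}

for role in ("Bound", "Input", "Responsible", "Decider"):
    print(f"{role}: {', '.join(bird_assignment[role])}")
```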

2
EffectiveAdvocate
1mo
Thank you! Seems like a valuable tool to learn! 

Interesting post!

these PF basically think being EA-aligned means you have to be a major pain in the ass to your grantees.

This surprised me. I wonder why this is the case? Maybe from early GiveWell charity investigations where they made the charities do a lot of reporting? My experience with EA grantmakers is that they're at least very aware of the costs of overhead and try hard to avoid red tape.

3
Kyle Smith
1mo
Yes I think this is an area of misconception that could be explored more and ultimately addressed.

Fwiw, I don't know anybody actively promoting 'EA has to be all-or-nothing.' Like, there's not an insider group of EA thought leaders who have decided that "either you give 100% to AMF or you're not an EA." I'm not denying that that vibe can be there though; I guess it may just be more of a cultural thing, or the product of a mostly maximizing philosophy.

Context: As part of CEA events team I help organize and attend a lot of events that draw EA thought-leaders. 

1
Kyle Smith
1mo
Yeah I am really only referring to a perception of all-or-nothing. And like you say, I think it is a product of a maximizing philosophy.  At the end of the day, it really just seems to be an EA marketing/outreach problem, and I think it is entirely addressable by the community. I think the paper idea I mention (discussing the perceived incompatibility of TBP and EA) could be a step in the right direction.

I'm glad you posted this! I like it :) 

I hadn't heard of the moral saint thought experiment or the "one reason too many"—both of these are interesting.

Cool that you're doing this for such a good cause. Good luck!

2
Emma Cameron
1mo
Hey, thanks! I hope you're doing well - I didn't realize you were at CEA now. :)

Bumping a previous EA forum post: Key EA decision-makers on the future of EA, reflections on the past year, and more (MCF 2023).

This post recaps a survey about EA 'meta' topics (eg., talent pipelines, community building mistakes, field-building projects, etc.) that was completed by this year's Meta Coordination Forum attendees. Meta Coordination Forum is an event for people in senior positions at community- and field-building orgs/programs, like CEA, 80K, and Open Philanthropy's Global Catastrophic Risk Capacity Building team. (The event has previously gon... (read more)

3
James Herbert
5mo
Glad you bumped this Michel, I was also surprised by how little attention it received. You requested feedback, so I hope the below is useful. High level: we've been working on our strategy for 2024. I was expecting these posts to be very, very helpful for this. However, for some reason, they've only been slightly helpful. Below I've listed a few suggestions for what might have made them more helpful (if this info is contained in the posts and I've missed it, I apologise in advance):
  1. Information to help us decide how to allocate our social change portfolio. (I.e., within meta work, how much of our resources should we spend on, e.g., lobbying, field building, social movement support, or network development?)
  2. Information to help us determine our community size goals (bigger or smaller). (The rowing vs steering q was helpful here, but I would have preferred something like 'What do you envision as the optimal size of the EA community in five years?' or 'For the upcoming year, where should our primary focus lie: expanding the EA community's size or enhancing the depth of engagement and commitment among existing members?')
  3. Information on talent bottlenecks. (The AIS survey was useful here, but we would have liked info about other fields. This Q from the Leader's Forum would have been useful.)
  4. Information to help us decide how to handle PR, or even just how much of a priority PR ought to be.
  5. Information to help us figure out what to do with all of those EAs who want to excel in using their careers to do good, but lack advice from the EA community on how to do this (because their talents don't match identified talent bottlenecks).
6
OllieBase
5mo
I agree. Of all of CEA's outputs this year, I think this could be the most useful for the community and I think it's worth bumping. It's our fault that it didn't get enough traction; it came out just before EAG and we didn't share it elsewhere.

Thank you for posting this!

I find these questions and considerations really interesting, and I could see myself experimenting with researching questions of this sort for 3 months. As a relatively junior person, though, who may end up doing this independently, I worry that none of the thinking/writing I'd do on one of these topics would actually change anything. Do you have any advice on making this type of research useful?  

2
Lukas Finnveden
5mo
I'll hopefully soon make a follow-up post with somewhat more concrete projects that I think could be good. That might be helpful. Are you more concerned that research won't have any important implications for anyone's actions, or that the people whose decisions ought to change as a result won't care about the research?

Thanks for putting on this event and sharing the takeaways!

Any plans to address these would come from the individuals or orgs working in this space. (This event wasn't a collective decision-making body, and wasn't aimed at creating a cross-org plan to address these—it was more about helping individuals refine their own plans). 

Re the talent development pipeline for AI safety and governance, some relevant orgs/programs I'm aware of off the top of my head include:

  • Arkrose
  • SERI MATS
  • Constellation
  • Fellowships like AI Futures
  • Blue Dot Impact
  • AI safety uni groups like MAIA and WAISI
  • ... and other programs mentioned on 80K
... (read more)

A quick note to say that I’m taking some time off after publishing these posts. I’ll aim to reply to any comments from 13 Nov.

FYI that this is the first of a few Meta Coordination Forum survey summaries. More coming in the next two weeks!

Consider keeping a running list of people you’d like to meet at the next EAG(x)

I used to be pretty overwhelmed when using Swapcard to figure out who I should meet at EAG(x)s. I still get kinda overwhelmed, but I've found it helpful to start identifying meetings I'd like to have many weeks before the event.

I do this by creating an easy-to-add-to list in the months leading up to the event. Then when I encounter someone who wrote an interesting post, or if a friend mentions someone scoping a project I'm excited about, I add them to this list. (I use the app t... (read more)

If you're willing to write up some of your on-the-ground perspective and it seems valuable, we'd be happy to share it with attendees!

Off the top of my head, I'm thinking things like:

  • What changes in community members' attitudes might 'leaders' not be tracking?
  • From your engagement with 'leaders' and community members, what seem to be the biggest misunderstandings?
  • What cheap actions could leaders take that might have a really positive influence on the community?

I'll DM you with info on how to share such a write-up, if you're interested.  

1
Rockwell
7mo
Thank you, Michel! I'm replying over DM.

Random anecdata on a very specific instance: I worked with Drew to make a Most Important Century writing prize. I enjoyed our conversations and we worked well together — Drew was nice and ambitious. I heard concerns about Nonlinear at this time, but never had anything suspect come up.

I similarly had a positive experience with Drew creating The Econ Theory AI Alignment Prize. I was not prompted by anyone to comment on this.

I second this - I have met Drew at multiple conferences and my experiences with him have only been positive. He has always been very nice to talk to. I have also heard others in EA circles share their anecdotes about him, and nobody has anything bad to say whatsoever. 

Appreciate you saying this, Michel. As you can imagine, it’s been rough. Perhaps accidentally, this post seems to often lump me in with situations I wasn’t really a part of.

To put it more strongly: I would like to make clear that I have never heard any claims of improper conduct by Drew Spartz (in relation to the events discussed in this post or otherwise).

Flagging the EA Opportunity Board to make sure it's on your radar. It's focused mostly on more junior, non-permanent positions and is pretty EA-focused.

0
omernevo
8mo
Thank you!

Thanks for engaging with this post! I appreciate the seriousness with which you engage with the question “is this actually good?” If it’s not, I don’t want to celebrate this donation or encourage others to do the same; the answer to that question matters.

I think your arguments are plausible, and I agree with the comment you make elsewhere that the EA Forum should be a place for such (relatively) unfiltered push back.

But in the absence of any inside view on this topic and the fair push back Nick made, I still feel solid about holding the outside view that d... (read more)

Thanks Grace! Feel free to share.

Thanks Nick! Really appreciate you flagging this. I didn't intend it that way and hope the edit helps: 

Maybe playing with their friends in the schoolyard, maybe spending time with their grandma, or maybe just kicking rocks, alone.

8
NickLaing
9mo
Nice one yeah that's great - if it was me I would perhaps even change kicking rocks to kicking a football ;).

It totally should be. Fixed, thanks :) 

Cool initiative! Heads up that I think one of the links in the TLDR is to a private forum draft (or at least it doesn't work for me)

See this post 

3
AmAristizabal
9mo
Thanks Michel! I think it should be fixed now :) 

I'd be very curious if there are historical case studies of how closely private corporations stuck to voluntary commitments they made, and how long it took for more binding regulation to replace those voluntary commitments.

Pulling out highlights from the PDF of the voluntary commitments that AI companies agreed to:

The following is a list of commitments that companies are making to promote the safe, secure, and transparent development and use of AI technology. These voluntary commitments are consistent with existing laws and regulations, and designed to advance a generative AI legal and policy regime. Companies intend these voluntary commitments to remain in effect until regulations covering substantially the same issues come into force. Individual companies may make additional

... (read more)

Cool initiative! I'm worried about a failure mode where these just stay in the EA blogosphere and don't reach the target audiences we'd most like to engage with these ideas (either because they're written with language and style that isn't well-received elsewhere, or because no active effort is made to share them with people who may be receptive).

Do you share this concern, and if so, do you have a sense of how to mitigate it? 

8
Jackson Wagner
10mo
Yeah, as a previous top-three winner of the EA Forum Creative Writing Contest (see my story here) and of Future of Life Institute's AI Worldbuilding contest (here), I agree that it seems like the default outcome is that even the winning stories don't get a huge amount of circulation. The real impact would come from writing the one story that actually does go viral beyond the EA community. But this seems pretty hard to do; perhaps better to pick something that has already gone viral (perhaps an existing story like one of the Yudkowsky essays, or perhaps expanding on something like a very popular tweet to turn it into a story), and try to improve its presentation by polishing it, and perhaps adding illustrations or porting to other mediums like video / audio / etc.

That is why I am currently spending most of my EA effort helping out RationalAnimations, which sometimes writes original stuff but often adapts essays & topics that have preexisting traction within EA. (Suggestions welcome for things we might consider adapting!)

Could also be a cool mini-project of somebody's, to go through the archive of existing rationalist/EA stories, and try and spruce them up with midjourney-style AI artwork; you might even be able to create some passable, relatively low-effort youtube videos just by doing a dramatic reading of the story and matching it up with panning imagery of midjourney / stock art?

On the other hand, writing stories is fun, and a $3000 prize pool is not too much to spend in the hopes of maybe generating the next viral EA story! I guess my concrete advice would be to put more emphasis on starting from a seed of something that's already shown some viral potential (like a popular tweet making some point about AI safety, or a fanfic-style spinoff of a well-known story that is tweaked to contain an AI-relevant lesson, or etc).
7
Daystar Eld
10mo
Absolutely. Part of the hope is that, if we can gather a good collection of stories, we can find ways to promote and publish some or all of them, whether through traditional publishing or audiobooks or youtube animated stories.
7
NickLaing
10mo
I would hope, given it's a "fable" writing contest, that almost by default these stories would be completely accessible to most of the general public, like the Yudkowsky classic "Sorting Pebbles Into Correct Heaps", or likely even less nerdy than that. But OP can clarify!  

College students interested in this may also want to check out remote government internships via VSFS! Here are the open positions

This might be too loose a criterion for 'power-seeking', or at least for the version of power-seeking that has the negative connotations this post alludes to. By this criterion, a movement like Students for Sensible Drug Policy would be power-seeking.

  1. They try to seed student groups and provide support for them.
  2. They have multiple buttons to donate on their webpage.
  3. They have things like a US policy council and explicitly mention policy change in their title.

Maybe it's just being successful at these things that makes the difference between generic power-se... (read more)

1
JoshuaBlake
10mo
I agree a more nuanced definition is probably required, or at least one that distinguishes acceptable from (possibly) unacceptable power-seeking. I think longtermism stands out for the amount of power it has and seeks relative to the number of members of the movement, and for the fact that there isn't much consensus (across wider society) around its aims. I've not fully thought this through, but I'd frame it around democratic legitimacy.

Agree. Something that clarified my thinking on this (I still feel pretty confused!) is Katja Grace's counterarguments to the basic AI x-risk case. In particular, the section on "Different calls to ‘goal-directedness’ don’t necessarily mean the same concept" and the discussion of "pseudo-agents" clarified how there are other ways for agents to take actions than purely optimizing a utility function (which humans don't do).

I really agree with the thesis that EA orgs should make it clear what they are and aren't doing.

But I think that specifically for EA orgs that try to be public-facing (e.g., an org focused on high-net-worth donors like Longview), very publicly clarifying a transparent scope can cut against their branding/how they sell themselves. These orgs need to sell themselves at some level to be as effective as possible (it's what their competition is doing), and selling yourself and being radically transparent do seem to trade off at times (e.g.,... (read more)

4
NickLaing
10mo
Thanks Michel - I understand the argument and I'm sure others will agree with you, but I'm not sure the tradeoff of selling yourself over being radically transparent is necessarily a good one to make. Part of what makes EA stuff so great, and a point of difference, I think, should be clear communication and transparency, even if it might appear to be an undersell or even slightly jarring to people not familiar with EA. And yes, I completely agree that even if orgs do choose to be a bit fluffy, vague, and shiny on their website, at least they can transparently lay out their scope right here :D.

Learned a new concept today! Pasting in the summary in case people don't click on the link but are still curious:

Moral injury is understood to be the strong cognitive and emotional response that can occur following events that violate a person's moral or ethical code.

Potentially morally injurious events include a person's own or other people's acts of omission or commission, or betrayal by a trusted person in a high-stakes situation. For example, health-care staff working during the COVID-19 pandemic might experience moral injury because they perceive that

... (read more)
  • Find Google Docs where people (whose judgement you respect) have left comments and an overall take on the promisingness of the idea. Hide their comments and form your own take. Compare. (To make this a faster process, pick a doc/idea where you have enough background knowledge to answer without looking up loads of things)

This is a good tip! Hadn't thought of this.

Thanks for sharing this! I think it's great you made this public.

  • As a result of AGI SF readings and other sporadic AI safety readings… 
    • … I feel more confident asking questions of people who know more than I do
      • I feel like I know the vocabulary, main threat scenarios, and rough approaches to solving the problem, such that I can situate new facts into existing taxonomies 
    • … I’m better able to tell when prominent people disagree / things have more texture
  • Some (self-)critiques 
    • Honestly, I thought content for some of the weeks was a bit weak if you just wanted an overview of the alignment problem (e.g., adversarial
... (read more)

I like using Todoist's quick-add feature (on mobile and desktop) to do this without having to open up the Gdoc and interrupt my workflow.

Sadly not, but would love to see someone take them on. 

Not community growth per se, but I asked a similar question about community health that got some interesting answers: What are some measurable proxies for community health? 

2
jackva
1y
Thanks for that, Michel! Did you implement any of the suggestions?

I like the idea of featuring well-packaged research questions, but I don't want to flood the board with them.

I am currently hiring a new director to execute on product improvement and outreach projects as I step into a more strategic advisor role. I'll sync with the new hire about featuring these research questions.

Seeing this late but appreciate the comment! I think this makes a valuable distinction I had oversimplified. Made some changes and will communicate this more clearly going forward.

+1 to a desire to read GPI papers but never having actually read any because I perceive them to be big and academic at first glance. 

I have engaged with them in podcasts that felt more accessible, so maybe  there's something there.

2
JackM
1y
Thanks. Did you find these summaries to be more accessible?