All of michel's Comments + Replies

Helpful post! Especially liked this comment with respect to how EA is defined and what a more helpful definition can include:

Most definitions of effective altruism are unsatisfying, because their goal is primarily persuasive: their purpose is to get you to become an effective altruist [...]

This post series defines effective altruism in a way that’s anthropological. What do effective altruists believe that other people tend not to believe? Why do they believe that? What do the vegans and the kidney donors, the AI safety researchers and the randomized controlled trial lovers, have in common?

Nice post. I've been vaguely thinking in this direction so it's helpful to have a name for this idea and some early food for thought.

I think this is really fair pushback, thanks! Skeptical coverage of AI development is legitimate. I think the way I wrote this over-implied that these articles are a failing of journalism; the marketing-hype claim is not baseless.

But I'm torn. I still think there's something off about current AI coverage, and this could be a valid reason to want more journalism on AI. Most articles seem to default to either full embrace of AI companies' claims or blanket skepticism, with relatively few spotlighting the strongest version of arguments on both sides of a debate... (read more)

"Most articles seem to default to either full embrace of AI companies' claims or blanket skepticism, with relatively few spotlighting the strongest version of arguments on both sides of a debate. "  Never agreed with anything as strongly in my life. Both these things are bad and we don't need to choose a side between them. And note that the issue here isn't about these things being "extreme". An article that actually tries to make a case for foom by 2027, or "this is all nonsense, it's just fancy autocomplete and overfitting on meaningless benchm... (read more)

Thank you! 

Also, I back your earlier sentiment that there's probably loads of other UI/UX improvements to be made, although of course I understand if those aren't a priority. 

It's been a few years since I worked on the opportunity board so I don't really remember the decision to add this filter. 

It's based on this database of EA orgs, which I haven't kept up to date. (I'm not sure if more recent opportunity board contributors have.) I borrowed the language from a similar database Michael A made. Worth noting that both databases were made before or during summer 2022, when EA was much less politicized.

Looking at it now, I also find this filter a bit weird and would probably advise removing it or making sure it's up to... (read more)

8
Sarah Cheng 🔸
Thanks Michel! Our team hasn't been keeping that field up-to-date either, so I've gone ahead and removed the filter from the Opportunity Board, and removed the field from the Orgs page as well. :) 

Thank you for taking this on! 

This project was started by motivated university students, sustained by grant funding (thanks Effective Altruism Infrastructure Fund! cc @calebp), and ultimately absorbed by a proper organization.

I like that arc. I think 'find an organizational home that doesn't depend on one-off grant funding and unusually dedicated/risk-tolerant organizers' is a worthy goal for early-stage projects to orient towards. (This can be done either by starting a more mature org on the basis of a good pilot project or by joining an existing org, like the board did here.)

I’m really grateful to have worked alongside you! I admire your enthusiasm and careful thinking and am excited to see what you work on next :)

Interesting that a typical AI journalist is valued more than a typical AI technical researcher (1.2x) and a typical AI policy person (1.7x).

5
Jason
How many of each are there? I wonder if journalists are uncommon enough that their marginal utility hasn't started sloping down too much yet

Just an FYI that most non-profits are legally constrained from doing many sorts of political advocacy work, I think.

Depends on the subsection, at least in the US. 501(c)(3) organizations are fairly limited on political activity. While 501(c)(4) organizations are less constrained, one does not get a tax break from donating to them. So you'll see some 501(c)(3)s have an associated 501(c)(4) organization.

National securitisation privileges extraordinary measures to defend the nation, often centred around military force and logics of deterrence/balance of power and defence. Humanity macrosecuritization suggests the object of security is to defend all of humanity, not just the nation, and often invokes logics of collaboration, mutual restraint and constraints on sovereignty.

I found this distinction really helpful. 

It reminds me of Holden Karnofsky's piece on How to make the best of the most important century (2021), in which he presents two contrasting f... (read more)

FYI, I was making a difficult career decision a few months ago and found this post helpful. Thanks for writing it!

Love it.

Your discussion of how ‘Green’ vibes can fall short of really looking consequences in the face reminds me of a quote:

“All that blood was never once beautiful, it was always just red.” - Kait Rokowski

Thanks for sharing this! I've really been liking your forum posts recently :) 

3
MathiasKB🔸
Thanks! Very kind of you to say that

Oh sorry, I missed this! I should have read that more closely before commenting.

Fair enough. Interesting to see people's different intuitions on this. 

6
AnotherAnonymousFTXAccount
I understand that impression/reaction. As I mentioned, the intention behind offering a bunch of specific and varied questions is that they might prompt reflection from different angles (different thoughts and memories, angles that provide new insight or that MacAskill finds more comfortable sharing), not that each would be responded to in forensic detail.

Registering that this line of questioning (and volume of questions) strikes me as a bit off-putting/too intense.

If someone asked "What were the key concerns here, and how were they discussed?" [...] "what questions did you ask, and what were the key considerations/evidence?" about interactions I had years ago, I would feel like they're holding me to an unrealistic standard of memory or documentation.

(Although I do acknowledge the mood that these were some really important interactions. Scrutiny is an appropriate reaction, but I still find this off-putting.) 

These seem pretty reasonable questions to me. 

the EA community is not on track to learn the relevant lessons from its relationship with FTX. 

In case it helps, here’s some data from Meta Coordination Forum attendees on how much they think the FTX collapse should influence their work-related actions and how much it has influenced their work-related actions:

  • On average, attendees thought the average MCF attendee should moderately change their work-related actions because of the FTX collapse (Mean of 4.0 where 1 = no change and 7 = very significant change; n = 39 and SD = 1.5)
  • On av
... (read more)
6
Jason
At the risk of over-emphasizing metrics, it seems that at least some of these reforms could and probably should be broken down into SMART goals (i.e., those that are specific, measurable, achievable, relevant, and time-bound). Example: Better vet risks from funders/leaders might be broken down into sub-tasks like (1) Stratify roles and positions by risk level (critical, severe, moderate, etc.); (2) Determine priorities for implementation and the re-vetting schedule; (3) develop adjudication guidelines; (4) decide who investigates and adjudicates suitability; (5) set measurable and time-bound progress indicators (e.g., the holders of 75% of Critical Risk roles/positions have been investigated and adjudicated by the end of 2025). [Note: The specific framework above borrows from the framework for security clearances and public-trust background checks in the US government. Obviously things in EA would need to be different, and the risks are different, so this is meant as an example rather than a specific proposal on this point. Yet, some of the core system needs would be at least somewhat similar.]

I think in any world, including ones where EA leadership is dropping the ball or is likely to cause more future harm like FTX, it would be very surprising if they individually had not updated substantially. 

As an extreme illustrative example, really just intended to get the intuition across, imagine that some substantial fraction of EA leaders are involved in large scale fraud and continue to plan to do so (which to be clear, I don't have any evidence of), then of course the individuals would update a lot on FTX, but probably on the dimensions of "her... (read more)

Thanks! It's something very similar to the 'responsibility assignment matrix' (RACI) popularized by consultants, I think. But in this case BIRD is more about decisions (rather than tasks) and stands for Bound (set guidelines), Input (give advice), Responsible (do the bulk of thinking the decision through and laying out reasoning), and Decider (make the decision).

2
EffectiveAdvocate🔸
Thank you! Seems like a valuable tool to learn! 

Interesting post!

these PF basically think being EA-aligned means you have to be a major pain in the ass to your grantees.

This surprised me. I wonder why this is the case? Maybe from early GiveWell charity investigations where they made the charities do a lot of reporting? My experience with EA grantmakers is that they're at least very aware of the costs of overhead and try hard to avoid red tape.

3
Kyle Smith
Yes I think this is an area of misconception that could be explored more and ultimately addressed.

Fwiw, I don't know anybody actively promoting 'EA has to be all-or-nothing.' Like, there's not an insider group of EA thought leaders who have decided that "either you give 100% to AMF or you're not an EA." I'm not denying that that vibe can be there though; I guess it may just be more of a cultural thing, or the product of a mostly maximizing philosophy.

Context: As part of the CEA events team I help organize and attend a lot of events that draw EA thought-leaders.

1
Kyle Smith
Yeah I am really only referring to a perception of all-or-nothing. And like you say, I think it is a product of a maximizing philosophy.  At the end of the day, it really just seems to be an EA marketing/outreach problem, and I think it is entirely addressable by the community. I think the paper idea I mention (discussing the perceived incompatibility of TBP and EA) could be a step in the right direction.

I'm glad you posted this! I like it :) 

I hadn't heard of the moral saint thought experiment or the "one reason too many" idea; both of these are interesting.

Cool that you're doing this for such a good cause. Good luck!

2
Emma Cameron🔸
Hey, thanks! I hope you're doing well - I didn't realize you were at CEA now. :)

Bumping a previous EA forum post: Key EA decision-makers on the future of EA, reflections on the past year, and more (MCF 2023).

This post recaps a survey about EA 'meta' topics (e.g., talent pipelines, community building mistakes, field-building projects, etc.) that was completed by this year's Meta Coordination Forum attendees. Meta Coordination Forum is an event for people in senior positions at community- and field-building orgs/programs, like CEA, 80K, and Open Philanthropy's Global Catastrophic Risk Capacity Building team. (The event has previously gon... (read more)

3
James Herbert
Glad you bumped this Michel, I was also surprised by how little attention it received. You requested feedback, so I hope the below is useful.

High level: we've been working on our strategy for 2024. I was expecting these posts to be very, very helpful for this. However, for some reason, they've only been slightly helpful. Below I've listed a few suggestions for what might have made them more helpful (if this info is contained in the posts and I've missed it I apologise in advance):

1. Information to help us decide how to allocate our social change portfolio. (I.e., within meta work, how much of our resources should we spend on, e.g., lobbying, field building, social movement support, or network development?)
2. Information to help us determine our community size goals (bigger or smaller). (The rowing vs steering q was helpful here, but I would have preferred something like 'What do you envision as the optimal size of the EA community in five years?' or 'For the upcoming year, where should our primary focus lie: expanding the EA community's size or enhancing the depth of engagement and commitment among existing members?')
3. Information on talent bottlenecks. (The AIS survey was useful here, but we would have liked info about other fields. This Q from the Leader's Forum would have been useful.)
4. Information to help us decide how to handle PR, or even just how much of a priority PR ought to be.
5. Information to help us figure out what to do with all of those EAs who want to excel in using their careers to do good, but lack advice from the EA community on how to do this (because their talents don't match identified talent bottlenecks).
6
OllieBase
I agree. Of all of CEA's outputs this year, I think this could be the most useful for the community and I think it's worth bumping. It's our fault that it didn't get enough traction; it came out just before EAG and we didn't share it elsewhere.

Thank you for posting this!

I find these questions and considerations really interesting, and I could see myself experimenting with researching questions of this sort for 3 months. As a relatively junior person who may end up doing this independently, though, I worry that none of the thinking/writing I'd do on one of these topics would actually change anything. Do you have any advice on making this type of research useful?

2
Lukas Finnveden
I'll hopefully soon make a follow-up post with somewhat more concrete projects that I think could be good. That might be helpful. Are you more concerned that research won't have any important implications for anyone's actions, or that the people whose decisions ought to change as a result won't care about the research?

Thanks for putting on this event and sharing the takeaways!

Any plans to address these would come from the individuals or orgs working in this space. (This event wasn't a collective decision-making body, and wasn't aimed at creating a cross-org plan to address these—it was more about helping individuals refine their own plans). 

Re the talent development pipeline for AI safety and governance, some relevant orgs/programs I'm aware of off the top of my head include:

  • Arkrose
  • SERI MATS
  • Constellation
  • Fellowships like AI Futures
  • Blue Dot Impact
  • AI safety uni groups like MAIA and WAISI
  • ... and other programs mentioned on 80K
... (read more)

A quick note to say that I’m taking some time off after publishing these posts. I’ll aim to reply to any comments from 13 Nov.

FYI that this is the first of a few Meta Coordination Forum survey summaries. More coming in the next two weeks!

Consider keeping a running list of people you’d like to meet at the next EAG(x)

I used to be pretty overwhelmed when using Swapcard to figure out who I should meet at EAG(x)s. I still get kinda overwhelmed, but I've found it helpful to start identifying meetings I'd like to have many weeks before the event.

I do this by creating an easy-to-add-to list in the months leading up to the event. Then when I encounter someone who wrote an interesting post, or if a friend mentions someone scoping a project I'm excited about, I add them to this list. (I use the app t... (read more)

If you're willing to write up some of your on-the-ground perspective and it seems valuable, we'd be happy to share it with attendees!

Off the top of my head, I'm thinking things like:

  • What changes in community members' attitudes might 'leaders' not be tracking?
  • From your engagement with 'leaders' and community members, what seem to be the biggest misunderstandings?
  • What cheap actions could leaders take that might have a really positive influence on the community?

I'll DM you with info on how to share such a write-up, if you're interested.

1
Rockwell
Thank you, Michel! I'm replying over DM.

Random anecdata on a very specific instance: I worked with Drew to make a Most Important Century writing prize. I enjoyed our conversations and we worked well together — Drew was nice and ambitious. I heard concerns about Nonlinear at this time, but never had anything suspect come up.

I similarly had a positive experience with Drew creating The Econ Theory AI Alignment Prize. I was not prompted by anyone to comment on this.

I second this - I have met Drew at multiple conferences and my experiences with him have only been positive. He has always been very nice to talk to. I have also heard others in EA circles share anecdotes about him, and nobody has anything bad to say whatsoever.

Appreciate you saying this, Michel. As you can imagine, it’s been rough. Perhaps accidentally, this post seems to often lump me in with situations I wasn’t really a part of.

To put it more strongly: I would like to make clear that I have never heard any claims of improper conduct by Drew Spartz (in relation to the events discussed in this post or otherwise).

Flagging the EA Opportunity Board to make sure it's on your radar. It focuses mostly on more junior, non-permanent positions and is pretty EA-focused.

0
omernevo
Thank you!

Thanks for engaging with this post! I appreciate the seriousness with which you engage with the question “is this actually good?” If it’s not, I don’t want to celebrate this donation or encourage others to do the same; the answer to that question matters.

I think your arguments are plausible, and I agree with the comment you make elsewhere that the EA Forum should be a place for such (relatively) unfiltered pushback.

But in the absence of any inside view on this topic and the fair pushback Nick made, I still feel solid about holding the outside view that d... (read more)

Thanks Grace! Feel free to share.

Thanks Nick! Really appreciate you flagging this. I didn't intend it that way and hope the edit helps: 

Maybe playing with their friends in the schoolyard, maybe spending time with their grandma, or maybe just kicking rocks, alone.

8
NickLaing
Nice one yeah that's great - if it was me I would perhaps even change kicking rocks to kicking a football ;).

It totally should be. Fixed, thanks :) 

Cool initiative! Heads up that I think one of the links in the TLDR is to a private forum draft (or at least it doesn't work for me)

See this post 

3
AmAristizabal
Thanks Michel! I think it should be fixed now :) 

I'd be very curious if there are historical case studies of how well private corporations stuck to voluntary commitments they made, and how long it took for more binding regulation to replace those voluntary commitments.

Pulling out highlights from the PDF of the voluntary commitments that AI companies agreed to:

The following is a list of commitments that companies are making to promote the safe, secure, and transparent development and use of AI technology. These voluntary commitments are consistent with existing laws and regulations, and designed to advance a generative AI legal and policy regime. Companies intend these voluntary commitments to remain in effect until regulations covering substantially the same issues come into force. Individual companies may make additional

... (read more)

Cool initiative! I'm worried about a failure mode where these just stay in the EA blogosphere and don't reach the target audiences we'd most like to engage with these ideas (either because they're written in a language and style that isn't well-received elsewhere, or because no active effort is made to share them with people who may be receptive).

Do you share this concern, and if so, do you have a sense of how to mitigate it?

8
Jackson Wagner
Yeah, as a previous top-three winner of the EA Forum Creative Writing Contest (see my story here) and of Future of Life Institute's AI Worldbuilding contest (here), I agree that it seems like the default outcome is that even the winning stories don't get a huge amount of circulation. The real impact would come from writing the one story that actually does go viral beyond the EA community. But this seems pretty hard to do; perhaps better to pick something that has already gone viral (perhaps an existing story like one of the Yudkowsky essays, or perhaps expanding on something like a very popular tweet to turn it into a story), and try to improve its presentation by polishing it, and perhaps adding illustrations or porting to other mediums like video / audio / etc.

That is why I am currently spending most of my EA effort helping out RationalAnimations, which sometimes writes original stuff but often adapts essays & topics that have preexisting traction within EA. (Suggestions welcome for things we might consider adapting!)

Could also be a cool mini-project of somebody's, to go through the archive of existing rationalist/EA stories, and try and spruce them up with midjourney-style AI artwork; you might even be able to create some passable, relatively low-effort youtube videos just by doing a dramatic reading of the story and matching it up with panning imagery of midjourney / stock art?

On the other hand, writing stories is fun, and a $3000 prize pool is not too much to spend in the hopes of maybe generating the next viral EA story! I guess my concrete advice would be to put more emphasis on starting from a seed of something that's already shown some viral potential (like a popular tweet making some point about AI safety, or a fanfic-style spinoff of a well-known story that is tweaked to contain an AI-relevant lesson, or etc).
7
Daystar Eld
Absolutely. Part of the hope is that, if we can gather a good collection of stories, we can find ways to promote and publish some or all of them, whether through traditional publishing or audiobooks or youtube animated stories.
7
NickLaing
I would hope, given it's a "fable" writing contest, that almost by default these stories would be completely accessible to most of the general public, like the Yudkowsky classic "Sorting Pebbles Into Correct Heaps", or likely even less nerdy than that. But OP can clarify!

College students interested in this may also want to check out remote government internships via VSFS! Here are the open positions
