I think this is really fair pushback, thanks! Skeptical coverage of AI development is legitimate. I think the way I wrote this over-implied that these articles are a failing of journalism; the marketing-hype claim is not baseless.
But I'm torn. I still think there's something off about current AI coverage, and this could be a valid reason to want more journalism on AI. Most articles seem to default to either full embrace of AI companies' claims or blanket skepticism, with relatively few spotlighting the strongest version of arguments on both sides of a debate...
"Most articles seem to default to either full embrace of AI companies' claims or blanket skepticism, with relatively few spotlighting the strongest version of arguments on both sides of a debate. " Never agreed with anything as strongly in my life. Both these things are bad and we don't need to choose a side between them. And note that the issue here isn't about these things being "extreme". An article that actually tries to make a case for foom by 2027, or "this is all nonsense, it's just fancy autocomplete and overfitting on meaningless benchm...
It's been a few years since I worked on the opportunity board so I don't really remember the decision to add this filter.
It's based on this database of EA orgs, which I haven't kept up to date. (I'm not sure if more recent opportunity board contributors have.) I borrowed the language from a similar database Michael A made. Worth noting that both databases were made before or during summer 2022, when EA was much less politicized.
Looking at it now, I also find this filter a bit weird and would probably advise removing it or making sure it's up to...
Thank you for taking this on!
This project was started by motivated university students, sustained by grant funding (thanks Effective Altruism Infrastructure Fund! cc @calebp), and now ultimately absorbed by a proper organization.
I like that arc. I think 'find an organizational home that doesn't depend on one-off grant funding and unusually dedicated/risk-tolerant organizers' is a worthy goal for early-stage projects to orient towards. (This can be done either by starting a more mature org on the basis of a good pilot project or by joining an existing org, like the board did here.)
Depends on the subsection, at least in the US. 501(c)(3) organizations are fairly limited in the political activity they can engage in. While 501(c)(4) organizations are less constrained, one does not get a tax break from donating to them. So you'll see some 501(c)(3)s maintain an associated 501(c)(4) organization.
National securitisation privileges extraordinary measures to defend the nation, often centred around military force and logics of deterrence/balance of power and defence. Humanity macrosecuritisation suggests the object of security is to defend all of humanity, not just the nation, and often invokes logics of collaboration, mutual restraint and constraints on sovereignty.
I found this distinction really helpful.
It reminds me of Holden Karnofsky's piece "How to make the best of the most important century?" (2021), in which he presents two contrasting f...
Registering that this line of questioning (and volume of questions) strikes me as a bit off-putting/too intense.
If someone asked "What were the key concerns here, and how were they discussed?" [...] "what questions did you ask, and what were the key considerations/evidence?" about interactions I had years ago, I would feel like they were holding me to an unrealistic standard of memory or documentation.
(Although I do acknowledge the mood that these were some really important interactions. Scrutiny is an appropriate reaction, but I still find this off-putting.)
the EA community is not on track to learn the relevant lessons from its relationship with FTX.
In case it helps, here’s some data from Meta Coordination Forum attendees on how much they think the FTX collapse should influence their work-related actions and how much it has influenced their work-related actions:
I think in any world, including ones where EA leadership is dropping the ball or is likely to cause more future harm like FTX, it would be very surprising if they individually had not updated substantially.
As an extreme illustrative example, really just intended to get the intuition across, imagine that some substantial fraction of EA leaders are involved in large-scale fraud and continue to plan to do so (which, to be clear, I don't have any evidence of); then of course the individuals would update a lot on FTX, but probably on the dimensions of "her...
Thanks! It's something very similar to the 'responsibility assignment matrix' (RACI) popularized by consultants, I think. But in this case BIRD is more about decisions (rather than tasks) and stands for Bound (set guidelines), Input (give advice), Responsible (do the bulk of thinking the decision through and laying out reasoning), and Decider (make the decision).
Interesting post!
these PF basically think being EA-aligned means you have to be a major pain in the ass to your grantees.
This surprised me, and I wonder why it's the case. Maybe it stems from early GiveWell charity investigations, where they made the charities do a lot of reporting? My experience with EA grantmakers is that they're at least very aware of the costs of overhead and try hard to avoid red tape.
Fwiw, I don't know anybody actively promoting 'EA has to be all-or-nothing.' Like, there's not an insider group of EA thought leaders who have decided that 'either you give 100% to AMF or you're not an EA.' I'm not denying that that vibe can be there, though; I guess it may just be more of a cultural thing, or the product of a mostly maximizing philosophy.
Context: As part of CEA's events team, I help organize and attend a lot of events that draw EA thought leaders.
Bumping a previous EA forum post: Key EA decision-makers on the future of EA, reflections on the past year, and more (MCF 2023).
This post recaps a survey about EA 'meta' topics (e.g., talent pipelines, community building mistakes, field-building projects) that was completed by this year's Meta Coordination Forum attendees. Meta Coordination Forum is an event for people in senior positions at community- and field-building orgs/programs, like CEA, 80K, and Open Philanthropy's Global Catastrophic Risk Capacity Building team. (The event has previously gon...
Thank you for posting this!
I find these questions and considerations really interesting, and I could see myself experimenting with researching questions of this sort for 3 months. As a relatively junior person who may end up doing this independently, though, I worry that none of the thinking/writing I'd do on one of these topics would actually change anything. Do you have any advice on making this type of research useful?
Any plans to address these would come from the individuals or orgs working in this space. (This event wasn't a collective decision-making body, and wasn't aimed at creating a cross-org plan to address these—it was more about helping individuals refine their own plans).
Re the talent development pipeline for AI safety and governance, some relevant orgs/programs I'm aware of off the top of my head include:
Consider keeping a running list of people you’d like to meet at the next EAG(x)
I used to be pretty overwhelmed when using Swapcard to figure out who I should meet at EAG(x)s. I still get kinda overwhelmed, but I've found it helpful to start identifying meetings I'd like to have many weeks before the event.
I do this by creating an easy-to-add-to list in the months leading up to the event. Then when I encounter someone who wrote an interesting post, or if a friend mentions someone scoping a project I'm excited about, I add them to this list. (I use the app t...
If you're willing to write up some of your on-the-ground perspective and it seems valuable, we'd be happy to share it with attendees!
Off the top of my head, I'm thinking things like:
I'll DM you with info on how to share such a write-up, if you're interested.
Random anecdata on a very specific instance: I worked with Drew to make a Most Important Century writing prize. I enjoyed our conversations and we worked well together; Drew was nice and ambitious. I heard concerns about Nonlinear at this time, but never had anything suspect come up.
Flagging the EA Opportunity Board to make sure it's on your radar. It focuses mostly on more junior, non-permanent positions and is pretty EA-focused.
Thanks for engaging with this post! I appreciate the seriousness with which you engage with the question “is this actually good?” If it’s not, I don’t want to celebrate this donation or encourage others to do the same; the answer to that question matters.
I think your arguments are plausible, and I agree with the comment you make elsewhere that the EA Forum should be a place for such (relatively) unfiltered pushback.
But in the absence of any inside view on this topic, and given the fair pushback Nick made, I still feel solid about holding the outside view that d...
Cool initiative! Heads up that I think one of the links in the TLDR is to a private forum draft (or at least it doesn't work for me).
See this post
Pulling out highlights from the PDF of the voluntary commitments that AI companies agreed to:
...The following is a list of commitments that companies are making to promote the safe, secure, and transparent development and use of AI technology. These voluntary commitments are consistent with existing laws and regulations, and designed to advance a generative AI legal and policy regime. Companies intend these voluntary commitments to remain in effect until regulations covering substantially the same issues come into force. Individual companies may make additional
Cool initiative! I'm worried about a failure mode where these just stay in the EA blogosphere and don't reach the target audiences we'd most like to engage with these ideas (either because they're written in a language and style that isn't well-received elsewhere, or because no active effort is made to share them with people who may be receptive).
Do you share this concern, and if so, do you have a sense of how to mitigate it?
College students interested in this may also want to check out remote government internships via VSFS! Here are the open positions
Helpful post! I especially liked this comment on how EA is defined and what a more helpful definition could include: