I believed for a while that public exposés are often a bad idea in EA, and the current Nonlinear drama certainly appears to be confirmatory evidence. I'm pretty confused about why other people's conclusions appear to be different from mine; this all seems extremely obvious to me.
Happy to end this thread here. On a meta-point, I think paying attention to nuance/tone/implicatures is a better communication strategy than retreating to legalese, but it does need practice. I think reflecting on one's own communicative ability is more productive than calling others irrational or being passive-aggressive. But it sucks that this has been a bad experience for you. Hope your day goes better!
(Clarification about my views in the context of the AI pause debate)
I'm finding it hard to communicate my views on AI risk. I feel like some people are responding to the general vibe they think I'm giving off rather than the actual content. Other times, it seems like people will focus on a narrow snippet of my comments/post and respond to it without recognizing the context. For example, one person interpreted me as saying that I'm against literally any AI safety regulation. I'm not.
For full disclosure, my views on AI risk can be loosely summarized as fol...
(COI note: I work at OpenAI. These are my personal views, though.)
My quick take on the "AI pause debate", framed in terms of two scenarios for how the AI safety community might evolve over the coming years:
AI safety becomes the single community that's the most knowledgeable about cutting-edge ML systems. The smartest up-and-coming ML researchers find themselves constantly coming to AI safety spaces, because that's the place to go if you want to nerd out about the models. It feels like the early days of hacker culture.
I'd like to constructively push back on this: The research and open-source communities outside AI Safety that I'm embedded in are arguably just as hands-on, if not more so, since their attitude towards deployment is usually more ....
I mentioned a few months ago that I was planning to resign from the board of EV UK: I’ve now officially done so.
Since last November, I’ve been recused from the board on all matters associated with FTX and related topics, which has ended up being a large proportion of board business. (This is because the recusal affected not just decisions that were directly related to the collapse of FTX, but also many other decisions for which the way EV UK has been affected by the collapse of FTX was important context.) I know I initially said that I’d wait for ther...
Thanks for all of your hard work on EV, Will! I’ve really appreciated your individual example of generosity and commitment, boldness, initiative-taking, and leadership. I feel like a lot of things would happen more slowly or less ambitiously---or not at all---if it weren’t for your ability to inspire others to dive in and act on the courage of their convictions. I think this was really important for Giving What We Can, 80,000 Hours, Centre for Effective Altruism, the Global Priorities Institute, and your books. Inspiration, enthusiasm, and positivity from you have been a force-multiplier on my own work, and in the lives of many others that I have worked with. I wish you all the best in your upcoming projects.
Politico just published a fairly negative article about EA and UK politics. Previously they’ve published similar articles about EA and Brussels.
I think EA tends to focus on the inside game, or narrow EA, and I believe this increases the likelihood of articles such as this. I worry that articles like this will make people in positions of influence less likely to want to be associated with EA, and that in the long run this will undermine efforts to bring about the policy changes we desire. Still, of course, this focus on the inside game is also pretty cost-eff...
Yes, I noticed that. Certain news organisations, which are trusted by an important subsection of the US population, often characterise progressive movements as uninformed mobs. That is clear. But if you define 'reputable' as 'those organisations most trusted by the general public', which seems like a reasonable definition, then, based on the YouGov analysis, Fox et al. is not reputable. But then maybe YouGov's method is flawed? That's plausible.
But we've fallen into a bit of a digression here. As I see it, there are four cruxes:
Some lawyers claim that there may be significant (though not at all ideal) whistleblowing protection for individuals at AI companies that don't fully comply with the Voluntary Commitments: https://katzbanks.com/wp-content/uploads/KBK-Law360-Despite-Regulation-Lag-AI-Whistleblowers-Have-Protections.pdf
"Should We Push For An AI Pause?" Might Be The Wrong Question
A quick thought on the recent discussion on whether pushing for a pause on frontier AI models is a good idea or not.
It seems obvious to me that within the next 3 years the top AI labs will be producing AI that causes large swaths of the public to push for a pause.
Is it therefore more prudent to ask the following question: when much of the public wants a pause, what should our (the EA community's) response be?
Interesting framing.
It's unclear to me how to integrate that theory with our decisions today, given how much the strategic situation is likely to have shifted in that time.
Have there ever been any efforts to try to set up EA-oriented funding organisations that focus on investing donations in such a way as to fund high-utility projects in very suitable states of the world? They could be pure investment vehicles that have high expected utility, but that lose all their money by some point in time in the modal case.
The idea would be something like this:
Given a fixed amount of money, to maximise utility, one has to decide (to first order) how much to spend on which causes and how to distribute that spending over time.
Howeve...
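To make the idea above concrete, here is a toy expected-utility comparison; the probabilities, payoffs, and utility multipliers are made up purely for illustration and are not from the post:

```python
# Illustrative comparison (made-up numbers): spend $1M now vs. put it into a
# state-contingent vehicle that pays out only in a rare world state where
# money is far more useful, and is worth nothing in the modal case.

spend_now_dollars = 1_000_000
baseline_utility_per_dollar = 1.0

p_payout_state = 0.10               # probability the "suitable" world state occurs
payout_if_state = 5_000_000         # vehicle's value in that state
utility_per_dollar_in_state = 4.0   # dollars assumed ~4x more useful in that state

u_spend_now = spend_now_dollars * baseline_utility_per_dollar
u_vehicle = p_payout_state * payout_if_state * utility_per_dollar_in_state

print(f"Utility from spending now:         {u_spend_now:,.0f}")
print(f"Expected utility from the vehicle: {u_vehicle:,.0f}")
# With these numbers the vehicle wins in expectation (2,000,000 vs 1,000,000)
# even though it ends up worthless 90% of the time.
```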
My overall impression is that the CEA community health team (CHT from now on) are well intentioned but sometimes understaffed and other times downright incompetent. It's hard for me to be impartial here, and I understand that their failures are more salient to me than their successes. Yet I endorse the need for change, at the very least including 1) removing people from the CHT who serve as advisors to any EA funds or hold other conflict-of-interest positions, 2) hiring HR and mental health specialists with credentials, 3) publicly clarifying their role ...
Why is it that I must return from 100% of EAGs with either covid or a cold?
Perhaps my immune system just sucks or it's impossible to avoid due to asymptomatic cases, but in case it's not: If you get a cold before an EAG(x), stay home!
For those who do this already, thank you!
The minute suffering I experience from the cold is not the real cost!
I'm probably an outlier, given that a lot of my work is networking, but I have had to cancel attending an event where I was invited to speak (and where I likely would have met at least a few people relevant to my work), cancel an in-person meeting (though I'll likely get a chance to meet them later), and reschedule a third.
The cold probably hit at the best possible time (right after two meetings in parliament); had it come sooner, it would have really sucked.
Additional...
Would newer people find it valuable to have some kind of 80,000 Hours career chatbot that had access to the career guide, podcast notes, EA Forum posts, job postings, etc., and then answered career questions? I'm curious whether it could be designed to be better than just a raw read of the career guide, or at least a useful add-on to it.
Potential features:
If anyone from 80k is reading this, I’d be happy to build this as a paid project.
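If it helps to picture the design, here is a minimal retrieval-augmented sketch. It is just an illustration of the idea: the document names are placeholders, the word-overlap scoring is a crude stand-in for embeddings, and the final LLM call is left hypothetical rather than tied to any real 80,000 Hours API.

```python
# Minimal sketch of a retrieval-augmented career chatbot: index source texts,
# pull the most relevant passages for a question, and hand them to an LLM as
# context. All source names and contents below are placeholders.

from collections import Counter

SOURCES = {
    "career_guide_ch1": "Your career is one of your biggest opportunities to have an impact...",
    "podcast_notes_ai": "Notes on an episode about careers in AI policy and safety...",
    "job_board_sample": "Example listing: research assistant role at a global health NGO...",
}

def score(query: str, text: str) -> int:
    """Crude relevance score: count shared lowercase words (a stand-in for embeddings)."""
    q_words = Counter(query.lower().split())
    t_words = Counter(text.lower().split())
    return sum((q_words & t_words).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k source passages that best match the question."""
    ranked = sorted(SOURCES.values(), key=lambda text: score(query, text), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble the context that a (hypothetical) LLM call would receive."""
    context = "\n---\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_prompt("How do I start a career in AI policy?"))
    # A real version would send this prompt to an LLM and return its answer.
```

The main design choice here is keeping retrieval separate from generation, so the career guide, podcast notes, and job postings could be re-indexed or expanded without touching the chat layer.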
(Pretty confident about the choice, but finding it hard to explain the rationale)
I have started using "member of the EA community" rather than "EAs" when I write publicly.
Previously I cared a lot less about using these terms interchangeably, mainly because referring to myself as an EA didn't seem inaccurate, it's quicker, and I didn't really see it as tying my identity closely to EA. But over time I have changed my mind for a few reasons:
Many people I would consider "EA" in the sense that they work on high impact causes, socially engage with other community members et...
I also try not to use "EA" as a noun. Alternatives I've used in different places:
I was watching this video yesterday https://www.youtube.com/watch?v=C25qzDhGLx8
It's a video about ageing and death, and how society has come to accept death as a positive thing that gives life meaning. CGP Grey goes on to explain that this is not true, which I agree with - I would still have a lot of meaning in my life if I stopped ageing. The impact of people living longer on society and politics is more uncertain, but I don't see it being a catastrophe - society has adapted to 'worse'.
The thing is that ageing can be seen as a disease that affec...
It seems prima facie plausible to me that interventions that save human lives do not increase utility on net, due to the animal suffering caused by saving human lives. Has anyone in the broader EA community looked into this? I'm not strongly committed to this view, but I'd be interested in seeing how people have reasoned about it.
See the meat-eater problem tag and the posts tagged with it. That being said, wild animal effects can complicate things.
Globally, there are around 20 billion farmed chickens alive at any moment, mostly factory farmed, so about 3 per human alive, higher in high-income countries and lower in low-income countries. There are also probably over 100 billion fish being farmed at any moment, so over 12 per human alive. See Šimčikas, 2020 for estimates.
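As a quick sanity check of the per-person figures (the ~8 billion world population here is my assumption, not stated in the comment):

```python
# Rough per-person ratios implied by the figures above.
humans = 8e9            # assumed world population
farmed_chickens = 20e9  # chickens alive at any moment
farmed_fish = 100e9     # farmed fish alive at any moment

print(farmed_chickens / humans)  # ~2.5 chickens per human alive
print(farmed_fish / humans)      # ~12.5 farmed fish per human alive
```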
Instead, I recommend: "My prior is [something], here's why".
I'm even more against "the burden of proof for [some policy] is on X" - I mean, what does "burden of proof" even mean in the context of policy? But hold that thought.
An example that I'm against:
"The burden of proof for vaccines helping should be on people who want to vaccinate, because it's unusual to put something in your body"
I'm against it because
Why doesn't this translate to AI risk?
"We should avoid building more powerful AI because it might kill us all" breaks down to
This sounds like someone should have the burden of proof of showing that near-future AI systems are (1) lethal and (2) powerful in a utility way, not just a trick but actually effective at real...
Wanted to give a shoutout to Ajeya Cotra (from OpenPhil) for her great work explaining AI stuff on a recent Freakonomics podcast series. Her explanations of both her work on the development of AI and her easy-to-understand predictions of how AI might progress from here were great; she was my favourite expert on the series.
People have been looking for more high-quality public communicators to get EA/AI safety stuff out there; perhaps Ajeya could be a candidate if she's keen?
I just came across this old comment by Wei Dai which has aged well, for unfortunate reasons.
I think a healthy dose of moral uncertainty (and normative uncertainty in general) is really important to have, because it seems pretty easy for any ethical/social movement to become fanatical or to incur a radical element, and end up doing damage to itself, its members, or society at large. (“The road to hell is paved with good intentions” and all that.)
Being able to agree and disagreevote on posts feels like it might be great. Props to the forum team.
My hope would be that it would allow people to decouple the quality of the post from whether they agree with it. Hopefully people could even feel better about upvoting posts they disagreed with (although based on the comments, that may be optimistic).
Perhaps combined with a possible tweak to what upvoting means (as mentioned by a few people): someone mentioned we could change "how much do you like this overall" to something that moves away from basing the reaction on emotion. I think someone suggested something like "Do you think this post adds value?" (That's just a rough stab at the alternative; I'm sure there are far better ones.)
I'm fairly disappointed with how much discussion I've seen recently that either doesn't bother to engage with ways in which the poster might be wrong, or only engages with weak versions. It's possible that the "debate" format of the last week has made this worse, though not all of the things I've seen were directly part of that.
I think that not engaging at all, and merely presenting one side while saying that's what you're doing, seems better than presenting and responding to counterarguments (but only the weak ones), which still seems better than strawmanning arguments that someone else has presented.