Quick takes

I'm fairly disappointed with how much discussion I've seen recently that either doesn't bother to engage with ways in which the poster might be wrong, or only engages with weak versions. It's possible that the "debate" format of the last week has made this worse, though not all of the things I've seen were directly part of that.

I think that not engaging at all, and merely presenting one side while saying that's what you're doing, seems better than presenting and responding only to weak counterarguments, which in turn seems better than strawmanning arguments that someone else has presented.

Linch
3d

I have believed for a while that public exposés are often a bad idea in EA, and the current Nonlinear drama certainly appears to be confirmatory evidence. I'm pretty confused about why other people's conclusions appear to be different from mine; this all seems extremely obvious to me.

2
Ben Stewart
4h
Things can be 'not the best', but still good. For example, let's say a systematic, well-run whistleblower organisation was the 'best' way, and compare it to 'telling your friends about a bad org'. 'Telling your friends' is not the best strategy, but it still might be good to do, or worth doing. Saying "telling your friends is not the best way" is consistent with this. Saying "telling your friends is a bad idea" is not consistent with this. I.e. 'bad idea' connotes much more than just 'sub-optimal, all things considered'.
2
Linch
3h
Sorry by "best" I was locally thinking of what's locally best given present limitations, not globally best (which is separately an interesting but less directly relevant discussion). I agree that if there are good actions to do right now, it will be wrong for me to say that all of them are bad because one should wait for (eg) a "systematic, well-run, whistleblower organisation."  For example, if I was saying "GiveDirectly is a bad charity for animal-welfare focused EAs to donate to," I meant that there are better charities on the margin for animal-welfare focused EAs to donate to. I do not mean that in the abstract we should not donate to charities because a well-run international government should be handling public goods provisions and animal welfare restrictions instead. I agree that I should not in most cases be comparing real possibilities against an impossible (or at least heavily impractical) ideal. Similarly, if I said "X is a bad idea for Bob to do," I meant there are better things for Bob to do with Bob's existing limitations etc, not that if Bob should magically overcome all of his present limitations and do Herculeanly impossible tasks. And in fact I was making a claim that there are practical and real possibilities that in my lights are probably better. Well clearly my choice of words on a quickly fired quick take at 1AM was sub-optimal, all things considered. Especially ex post. But I think it'd be helpful if people actually argued about the merits of different strategies instead of making inferences about my racism or lack thereof, or my rudeness or lack thereof. I feel like I'm putting a lot of work in defending fairly anodyne (denotatively) opinions, even if I had a few bad word choices.  After this conversation, I am considering retreating to more legalese and pre-filtering all my public statements for potential controversy by GPT-4, as a friend of mine suggested privately. I suspect this will be a loss for the EA forum being a place where peop

Happy to end this thread here. On a meta-point, I think paying attention to nuance/tone/implicatures is a better communication strategy than retreating to legalese, but it does need practice. I think reflecting on one's own communicative ability is more productive than calling others irrational or being passive-aggressive. But it sucks that this has been a bad experience for you. Hope your day goes better!

(Clarification about my views in the context of the AI pause debate)

I'm finding it hard to communicate my views on AI risk. I feel like some people are responding to the general vibe they think I'm giving off rather than the actual content. Other times, it seems like people will focus on a narrow snippet of my comments/post and respond to it without recognizing the context. For example, one person interpreted me as saying that I'm against literally any AI safety regulation. I'm not.

For full disclosure, my views on AI risk can be loosely summarized as fol... (read more)

Thanks, that seems like a pretty useful summary.

(COI note: I work at OpenAI. These are my personal views, though.)

My quick take on the "AI pause debate", framed in terms of two scenarios for how the AI safety community might evolve over the coming years:

  1. AI safety becomes the single community that's the most knowledgeable about cutting-edge ML systems. The smartest up-and-coming ML researchers find themselves constantly coming to AI safety spaces, because that's the place to go if you want to nerd out about the models. It feels like the early days of hacker culture. There's a constant flow of ideas and b
... (read more)

Re: Hacker culture

AI safety becomes the single community that's the most knowledgeable about cutting-edge ML systems. The smartest up-and-coming ML researchers find themselves constantly coming to AI safety spaces, because that's the place to go if you want to nerd out about the models. It feels like the early days of hacker culture.

I'd like to constructively push back on this: the research and open-source communities outside AI safety that I'm embedded in are arguably just as hands-on, if not more so, since their attitude towards deployment is usually more ... (read more)

-1
trevor1
17h
I think that the 2-scenario model described here is very important, and should be a foundation for thinking about the future of AI safety. However, I think that both scenarios will also be compromised to hell. The attack surface for the AI safety community will be massive in both scenarios - ludicrously massive in scenario #2, but nonetheless still nightmarishly large in scenario #1. Assessment of both scenarios revolves around how inevitable you think slow takeoff is. I think that some aspects of slow takeoff, such as intelligence agencies, already started around 10 years ago, and at this point just involve a lot of finger-crossing and hoping for the best.
12
NickLaing
1d
"hesitate to pay for ChatGPT because it feels like they're contributing to the problem" Yep that's me right now and I would hardly call myself a Luddite (maybe I am tho?) Can you explain why you frame this as an obviously bad thing to do? Refusing to help fund the most cutting edge AI company, which has been credited by multiple people with spurring on the AI race and attracting billions of dollars to AI capabilities seems not-unreasonable at the very least, even if that approach does happen to be wrong. Sure there are decent arguments against not paying for chat GPT, like the LLM not being dangerous in and of itself, and the small amount of money we pay not making a significant difference, but it doesn't seem to be prima-facie-obviously-net-bad-luddite behavior, which is what you seem to paint it as in the post.

I mentioned a few months ago that I was planning to resign from the board of EV UK: I’ve now officially done so.

Since last November, I’ve been recused from the board on all matters associated with FTX and related topics, which has ended up being a large proportion of board business. (This is because the recusal affected not just decisions that were directly related to the collapse of FTX, but also many other decisions for which the way EV UK has been affected by the collapse of FTX was important context.) I know I initially said that I’d wait for ther... (read more)


Thanks for all of your hard work on EV, Will! I’ve really appreciated your individual example of generosity and commitment, boldness, initiative-taking, and leadership. I feel like a lot of things would happen more slowly or less ambitiously---or not at all---if it weren’t for your ability to inspire others to dive in and act on the courage of their convictions. I think this was really important for Giving What We Can, 80,000 Hours, Centre for Effective Altruism, the Global Priorities Institute, and your books. Inspiration, enthusiasm, and positivity from you has been a force-multiplier on my own work, and in the lives of many others that I have worked with. I wish you all the best in your upcoming projects.

23
MaxDalton
13h
Thank you for all of your hard work over many years, Will. I've really valued your ability to slice through strategic movement-building questions, your care and clear communication, your positivity, and your ability to simply inspire massive projects off the ground. I think you've done a lot of good. I'm excited for you to look after yourself, reflect on what's next, and keep working towards a better world.
32
Sean_o_h
14h
Thank you for all your work, and I'm excited for your ongoing and future projects, Will; they sound very valuable! But I hope and trust you will be giving equal attention to your well-being in the near term. These challenges will need your skills, thoughtfulness, and compassion for decades to come. Thank you for being so frank - I know you won't be alone in having found this last year challenging mental-health-wise, and it can help to hear others be open about it.

Politico just published a fairly negative article about EA and UK politics. Previously they’ve published similar articles about EA and Brussels.

I think EA tends to focus on the inside game, or 'narrow EA', and I believe this increases the likelihood of articles such as this. I worry that such articles will make people in positions of influence less likely to want to be associated with EA, and that in the long run this will undermine efforts to bring about the policy changes we desire. Still, of course, this focus on the inside game is also pretty cost-eff... (read more)

0
JanPro
3d
No, not really; I am myself confused and wanted to provoke those who know more to reply and clarify. (Which James Herbert already did to some extent, and I hope more direct info will surface.)
4
Daniel_Eth
3d
So I notice Fox ranks pretty low on that list, but if you click through to the link, they rank very high among Republicans (second only to the Weather Channel). Fox definitely uses rhetoric like that. After Fox (among Republicans) are Newsmax and OAN, which similarly both use rhetoric like that. (And FWIW, I also wouldn't be super surprised to see somewhat similar rhetoric from the WSJ or Forbes, though probably said less bluntly.) I'd also note that left-leaning media uses somewhat similar rhetoric for conservative issues that are supported by large groups (e.g., Trumpism in general, climate denialism, etc.), so it's not just a one-directional phenomenon.

Yes, I noticed that. Certain news organisations, which are trusted by an important subsection of the US population, often characterise progressive movements as uninformed mobs. That is clear. But if you define 'reputable' as 'those organisations most trusted by the general public', which seems like a reasonable definition, then, based on the YouGov analysis, Fox et al. is not reputable. But then maybe YouGov's method is flawed? That's plausible.

But we've fallen into a bit of a digression here. As I see it, there are four cruxes:

  1. Does a focus on the inside g
... (read more)

Some lawyers claim that there may be significant (though not at all ideal) whistleblowing protection for individuals at AI companies that don't fully comply with the Voluntary Commitments: https://katzbanks.com/wp-content/uploads/KBK-Law360-Despite-Regulation-Lag-AI-Whistleblowers-Have-Protections.pdf

"Should We Push For An AI Pause?" Might Be The Wrong Question

A quick thought on the recent discussion on whether pushing for a pause on frontier AI models is a good idea or not.

It seems obvious to me that within the next 3 years the top AI labs will be producing AI that causes large swaths of the public to push for a pause. 

Is it therefore more prudent to ask the following question: when much of the public wants a pause, what should our (the EA community's) response be?

Interesting framing.

It's unclear to me how to integrate that theory with our decisions today given how much the strategic situation is likely to have shifted in that time.

1
Gerald Monroe
2d
Which public? Each country in this AI race has a different view on this, and some do not consult their public as much as others. The EA community ideally should take this into account. If the other countries aren't going to pause (and they will not), what should the USA do? (The historical precedent would be that AI progress stops being publicly discussed and all the current experts get drafted into secret labs, with the goal of reaching AGI first.)

Have there ever been any efforts to try to set up EA-oriented funding organisations that focus on investing donations in such a way as to fund high-utility projects in very suitable states of the world? They could be pure investment vehicles that have high expected utility, but that lose all their money by some point in time in the modal case.

The idea would be something like this:

For a given amount of dollars, to maximise utility to first order, one has to decide how much to spend on which causes and how to distribute that spending over time.

Howeve... (read more)
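
To make the trade-off concrete (a minimal sketch with made-up symbols and numbers, not taken from the take above): suppose a fund pays out only in a "suitable" state of the world that occurs with probability $p$, and a dollar deployed in that state does $m$ times as much good as a dollar donated today. To first order,

$$\mathbb{E}[\text{good per dollar held}] = p \cdot m > 1$$

is the condition for holding the fund to beat donating now. With, say, $p = 0.05$, the fund looks attractive whenever $m > 20$, even though it loses all its money in the modal ($1-p = 95\%$) case, as described above. Investment returns while waiting would further relax the condition.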

My overall impression is that the CEA community health team (CHT from now on) are well intentioned but sometimes understaffed and other times downright incompetent. It's hard for me to be impartial here, and I understand that their failures are more salient to me than their successes. Yet I endorse the need for change, at the very least including 1) removing people from the CHT who serve as advisors to any EA funds or hold other conflict-of-interest positions, 2) hiring HR and mental health specialists with credentials, 3) publicly clarifying their role ... (read more)

12
Jaime Sevilla
2d
I agree that this is very valuable. I would want them to be explicit about this role, and to be clear with community builders talking to them that they should treat it as talking to a funder.

To be clear, in the cases where I have felt uncomfortable it was not "X is engaging in sketchy behaviour, and we recommend not giving them funding" (my understanding is that this happens fairly often, and I am glad for it. CHT is providing a very valuable function here, which would otherwise be hard to coordinate. If anything, I would want them to be more brazen and ready to recommend against people based on less evidence than they do now).

It is more cases like "CHT staff think that this subcommunity would work better without central coordination, and this staff member is going to recommend against funding any coordinators going forward" or "CHT is pressuring me to make a certain choice, such as not banning a community member I consider problematic, and I am afraid that if I don't comply I won't get renewed" (I've learned of situations like these happening at least thrice).

It is difficult to orient yourself towards someone when you are not sure whether you should treat them as your boss or as a neutral third-party mediator. This is stressful for community builders.

Why is it that I must return from 100% of EAGs with either covid or a cold?

Perhaps my immune system just sucks or it's impossible to avoid due to asymptomatic cases, but in case it's not: If you get a cold before an EAG(x), stay home!

For those who do this already, thank you! 

5
AI Rights Activist
2d
I would strongly urge people to err on the side of attendance. The value of the connections made at EAGs and EAGxs far exceeds the risks posed by most communicable diseases, especially if precautions are taken, such as wearing a mask.  If you take seriously the value of connections, many of them could very well exceed the cost to save a life. Would you say that your avoiding a cold is worth the death of someone in the developing world, for instance? I think your request fails to take seriously the value of making connections within the EA community.

The minute suffering I experience from the cold is not the real cost!

I'm probably an outlier, given that a lot of my work is networking, but I have had to cancel attending an event where I was invited to speak (and where I likely would have met at least a few people relevant to my work), cancel an in-person meeting (though I will likely get a chance to meet them later), and reschedule a third.

The cold probably hit at the best possible time (right after two meetings in parliament); had it come sooner, it would have really sucked.

Additional... (read more)

9
joshcmorrison
4d
https://forum.effectivealtruism.org/posts/YNcjDoH6DaHzfhWGb/the-next-ea-global-should-have-safe-air 

Would newer people find it valuable to have some kind of 80,000 Hours career chatbot that had access to the career guide, podcast notes, EA Forum posts, job postings, etc., and then answered career questions? I'm curious whether it could be designed to be better than just a raw read of the career guide, or at least a useful add-on to it.

Potential features:

  • It could collect your conversation and convert most of it into an application for a (human) 1-on-1 meeting.
  • You could have a speech-to-text option to ramble all the things you’ve been thinking of.
  • ???

If anyone from 80k is reading this, I’d be happy to build this as a paid project.
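
A minimal sketch of what such a retrieval-backed chatbot could look like (hypothetical names throughout; the corpus entries are invented, the retrieval is naive keyword overlap rather than embeddings, and the LLM call is a stub rather than any specific API):

```python
# Sketch of a retrieval-backed career chatbot (illustrative only, not an 80k product).
# The "corpus" stands in for chunks of the career guide, podcast notes, forum posts, etc.
from dataclasses import dataclass


@dataclass
class Doc:
    title: str
    text: str


CORPUS = [
    Doc("Career guide: career capital",
        "Early on, prioritise transferable skills, connections and credentials."),
    Doc("Podcast notes: biosecurity careers",
        "Paths into biosecurity include policy, technical research and operations."),
]


def retrieve(query: str, corpus: list[Doc], k: int = 2) -> list[Doc]:
    """Naive keyword-overlap retrieval; a real version would use embeddings."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q_words & set(d.text.lower().split())))
    return scored[:k]


def call_llm(prompt: str) -> str:
    """Stub for whichever LLM API the builder chooses; returns a placeholder here."""
    return f"[LLM answer grounded in a {len(prompt)}-character prompt]"


def answer(query: str) -> str:
    # Ground the model's answer in the retrieved 80k-style material.
    context = "\n\n".join(f"{d.title}:\n{d.text}" for d in retrieve(query, CORPUS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)


if __name__ == "__main__":
    print(answer("How should I think about career capital early on?"))
```

A real version would swap in embedding-based retrieval over the actual 80k materials and a proper LLM client, and could log the conversation so it can be converted into a pre-filled 1-on-1 application, as suggested in the feature list above.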

(Pretty confident about the choice, but finding it hard to explain the rationale)

I have started using "member of the EA community" vs "EAs" when I write publicly.

Previously I cared a lot less about using these terms interchangeably, mainly because referring to myself as an EA didn't seem inaccurate, it's quicker, and I don't really see it as tying my identity closely to EA, but over time I have changed my mind for a few reasons:

Many people I would consider "EA" in the sense that they work on high impact causes, socially engage with other community members et... (read more)


I also try not to use "EA" as a noun. Alternatives I've used in different places:

  • "People in EA" (not much better, but hits the amorphous group of "community members plus other people who engage in some way" without claiming that they'd all use a particular label)
  • "People practicing EA" (for people who are actually taking clear actions)
  • "Community members"
  • "People" (for example, I think that posts like "things EAs [should/shouldn't] do" are better as "things people [should/shouldn't] do" — we aren't some different species, we are just people with feelings and
... (read more)
8
Gemma Paterson
12d
I started defaulting to saying "people trying to do EA" - less person-focused, more action-focused
10
David_Moss
12d
Fwiw, I am broadly an example of this category, which is partly why I raised the example: I strongly believe in EA and engage in EA work, but mostly don't interact with EAs outside professional contexts. So I would say "I am an EA", but would be less inclined to say "I am a member of the EA community", except insofar as this just means "believes in EA/does EA work".

I was watching this video yesterday https://www.youtube.com/watch?v=C25qzDhGLx8 

It's a video about ageing and death, and how society has come to accept death as a positive thing that gives life meaning. CGP Grey goes on to explain that this is not true, which I agree with - I would still have a lot of meaning in my life if I stopped ageing. The impact of people living longer on society and politics is more uncertain, but I don't see it being a catastrophe - society has adapted to 'worse'.

The thing is that ageing can be seen as a disease that affec... (read more)

It seems prima facie plausible to me that interventions that save human lives do not increase utility on net, due to the animal suffering caused by saving human life. Has anyone in the broader EA community looked into this? I'm not strongly committed to this, but I'd be interested in seeing what people have reasoned about this.

See the meat-eater problem tag and the posts tagged with it. That being said, wild animal effects can complicate things.

Globally, there are around 20 billion farmed chickens alive at any moment, mostly factory farmed, so about 3 per human alive, higher in high-income countries and lower in low-income countries. There are also probably over 100 billion fish being farmed at any moment, so over 12 per human alive. See Šimčikas, 2020 for estimates.

16
NickLaing
4d
Yeah, there have been sporadic musings about this on the forum. If you search "Vasco Grilo", he has a bit of interesting stuff on this. My broad personal opinion is that in Uganda, where I live at least, most animals (apart from battery hens) have net-positive lives, as they are not intensively farmed. Unfortunately this is changing fast. Because of this, I don't think saving lives in this country is going to increase net animal suffering (or not very much). Whether we should just optimise for utility anyway is obviously another question. Others will disagree; it's an interesting, if a bit dark, topic...

Against "the burden of proof is on X"

Instead, I recommend: "My prior is [something], here's why".

I'm even more against "the burden of proof for [some policy] is on X" - I mean, what does "burden of proof" even mean in the context of policy? But hold that thought.

 

An example that I'm against:

"The burden of proof for vaccines helping should be on people who want to vaccinate, because it's unusual to put something in your body"

I'm against it because 

  1. It implicitly assumes that vaccines should be judged as part of the group "putting something in your
... (read more)
1
titotal
4d
So, I'll give two more examples of how burden of proof typically gets used:

1. You claim that you just saw a unicorn ride past. I say that the burden of proof is on you to prove it, as unicorns do not exist (as far as we know).
2. As prime minister, you try to combat obesity by taxing people in proportion to their weight. I say that the burden of proof is on you to prove that such a policy would do more good than harm.

I think in both these cases, the statements made are quite reasonable. Let me try to translate the objections into your language:

1. My prior of you seeing a unicorn is extremely low, because unicorns do not exist (as far as we know).
2. My prior of this policy being a good idea is low, because most potential interventions are not helpful.

These are fine, but I'm not sure I prefer either of them. It seems like the other party can just say "well, my priors are high, so I guess both our beliefs are equally valid". I think "burden of proof" translates to "you should provide a lot of proof for your position in order for me or anyone else to believe you". It's a statement of what people's priors should be.
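
One way to make the "priors" reading concrete (a short aside; the numbers are made up for illustration): in odds form, Bayes' rule is

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \times \frac{P(E \mid H)}{P(E \mid \neg H)},$$

i.e. posterior odds = prior odds × likelihood ratio. If my prior odds that you saw a unicorn are around $10^{-6}$, your testimony needs a likelihood ratio on the order of $10^{6}$ before I end up anywhere near even odds. On this reading, "the burden of proof is on you" is shorthand for "the reasonable prior here is very low, so the evidence you supply has to be correspondingly strong" - which is compatible with the original suggestion to just state the prior and why.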

Why doesn't this translate to AI risk?

"We should avoid building more powerful AI because it might kill us all" breaks to

  1. No prior AI system has tried to kill us all
  2. We are not sure how powerful a system we can really make by scaling known techniques (and techniques adjacent to known ones) in the next 10-20 years. A system 20 years from now might not actually be "AGI"; we don't know.

This sounds like someone should have the burden of proof of showing that near-future AI systems are (1) lethal, (2) powerful in a utility way - not just a trick, but actually effective at real... (read more)

1
Azad Ellafi
4d
I've always viewed burden of proof as a dialectical tool. To say one has the burden of proof is to say that, if they meet the following set of necessary and jointly sufficient conditions:

1. You've made a claim.
2. You're attempting to convince another of the claim.

then they have the obligation in the discussion to provide justification for the claim. If (1) isn't the case, then of course you don't have any burden to provide justification. If (2) isn't the case (say, everyone already agrees with the claim, or someone just wants your opinion on something), it's not clear to me that you have some obligation to provide justification either.

On this account, it's not like burden-of-proof talk favors a side. And I'm not sure it implicitly assumes anything or is a conversation stopper. So maybe we can keep burden-of-proof talk by using this construal while also focusing more on explicit discussion of priors. Idk, just a thought I had while reading this.

Wanted to give a shoutout to Ajeya Cotra (from OpenPhil) for her great work explaining AI stuff on a recent Freakonomics podcast series. Her explanations of both her work on the development of AI and her easy-to-understand predictions of how AI might progress from here were great; she was my favourite expert on the series.

People have been looking for more high-quality public communicators to get EA/AI safety stuff out there; perhaps Ajeya could be a candidate, if she's keen?
 

I just came across this old comment by Wei Dai which has aged well, for unfortunate reasons.

I think a healthy dose of moral uncertainty (and normative uncertainty in general) is really important to have, because it seems pretty easy for any ethical/social movement to become fanatical or to incur a radical element, and end up doing damage to itself, its members, or society at large. (“The road to hell is paved with good intentions” and all that.)

5
Lukas_Gloor
4d
I think there's something off about the view that we need to be uncertain about morality to not become fanatic maniacs who are a danger to other people. It's perfectly possible to have firm/confident moral views that are respectful of other people having different life goals from one's own. Just don't be a moral realist utilitarian. The problem is moral realism + utilitarianism, not having confident takes on your morality. Another way to say this is that it seems dangerously fragile if the only reason one doesn't become a maniac is moral uncertainty. What if you feel like you're becoming increasingly confident about some moral view? It tends to happen to people.

Strong agree-- there are so many ways to go off the rails even if you're prioritizing being super humble and weak[1]

  1. ^

    "weak" i.e. in the usage "strong views weakly held" 

Being able to agree- and disagree-vote on posts feels like it might be great. Props to the forum team.

4
Habryka
5d
Looking forward to how it plays out! LessWrong made the intentional decision to not do it, because I thought posts are too large and have too many claims and agreement/disagreement didn't really have much natural grounding any more, but we'll see how it goes. I am glad to have two similar forums so we can see experiments like this play out. 

My hope would be that it would allow people to decouple the quality of the post from whether they agree with it or not. Hopefully people could even feel better about upvoting posts they disagreed with (although, based on comments, that may be optimistic).

Perhaps this could be combined with a possible tweak to what upvoting means (as mentioned by a few people): someone mentioned we could change "how much do you like this overall?" to something that moves away from basing the reaction on an emotion. I think someone suggested something like "Do you think this post adds value?" (That's just a rough hack at an alternative; I'm sure there are far better ones.)

4
Nathan Young
5d
I think another option is to have reactions on a paragraph level. That would be interesting.