JWS

1513 karma · Joined Jan 2023

Bio

Kinda pro-pluralist, kinda anti-Bay EA

I have come here to extend the principle of charity to bad criticisms of EA and kick ass. And I'm all out of charity.

(my opinions are fully my own, and do not represent the views of any close associates or the company I work for)

Posts
3


Comments
141

If this is derailing the convo here feel free to ignore, but what do you mean concretely by the distinction between "near-termist" and "long-termist" cause areas here? Back in spring 2022, when some pretty senior (though not decision-critical) politicians were making calls for NATO to establish a no-fly zone over Ukraine, preventing Nuclear Catastrophe seemed pretty "near-termist" to me?

I also suspect many if not most AIXR people in the Bay think that AI Alignment is a pretty "near-term" concern for them. Similarly, concerns about Shrimp Welfare have only been focused on the here-and-now effects and not the 'long run digital shrimp' strawshrimp I sometimes see on social media.

Is near-term/long-term thinking actually capturing a clean cause area distinction? Or is it just a 'vibe' distinction? I think your clearest definition is:

Many people are drawn to the clear and more palatable idea that we should devote our lives to doing the most good to humans and animals alive right now

But to me that's a philosophical debate, or a matter of perspective, right? Because looking at 80K's top 5 list, I could easily see individuals in each area making the argument that theirs is a near-term cause.

To be clear, I actually think I agree with a lot of what you say so I don't want to come off as arguing the opposite case. But when I see these arguments about near v long termism or old v new EA or bednets v scifi EA, it just doesn't seem to "carve nature at its joints" as the saying goes, and often leads to confusion as people argue about different things while using the same words.

JWS · 5d

Thanks for linking Dario's testimony. I actually found this extract which was closer to answering my question:

I wanted to answer one obvious question up front: if I truly believe that AI’s risks are so severe, why even develop the technology at all? To this I have three answers:

First, if we can mitigate the risks of AI, its benefits will be truly profound. In the next few years it could greatly accelerate treatments for diseases such as cancer, lower the cost of energy, revolutionize education, improve efficiency throughout government, and much more. 

Second, relinquishing this technology in the United States would simply hand over its power, risks, and moral dilemmas to adversaries who do not share our values. 

Finally, a consistent theme of our research has been that the best mitigations to the risks of powerful AI often also involve powerful AI. In other words, the danger and the solution to the danger are often coupled. Being at the frontier thus puts us in a strong position to develop safety techniques (like those I’ve mentioned above), and also to see ahead and warn about risks, as I’m doing today.

I know this statement would have been massively pre-prepared for the hearing, but I don't feel super convinced by it:

On his point 1), such benefits have to be weighed against the harms, both existential and not. But just as many parts of the xRisk story are speculative, so are many of the purported benefits of AI research. I guess Dario is saying 'it could' and not 'it will', but for me, if you want to "improve efficiency throughout government" you'll need political solutions, not technical ones.

Point 2) is the 'but China' response to AI Safety. I'm not an expert in US foreign policy strategy (funny how everyone is these days), but I'd note this response only works if you view the path to increasing capability as straightforward. It also doesn't work, in my mind, if you think there's a high chance of xRisk. Just because someone else might ignite the atmosphere doesn't mean you should too. I'd also note that Dario doesn't sound nearly as confident making this statement as he did talking about it with Dwarkesh recently.

Point 3) makes sense if you think the value of the benefits massively outweighs the harms, so that you solve the harms as you reap the benefits. But if those harms outweigh the benefits, or you incur a substantial "risk of ruin", then being at the frontier and expanding it further unilaterally makes less sense to me.

I guess I'd want the CEOs and those with power in these companies to actually be put under the scrutiny they deserve in the political sphere. These are important and consequential issues we're talking about, and I just get the vibe that the 'kid gloves' need to come off a bit in terms of oversight and scrutiny/scepticism.

I found this comment very helpful, Remmelt, so thank you. I think I'm going to respond to this comment via PM.

I'm open to there being new evidence on funding, but I'd also want to make a distinction between existential risk and longtermism as reasons for funding. I could reject the 'Astronomical Waste' argument and still think that preventing the worst impacts of Nuclear War/Climate Change from affecting the current generation holds massive moral value and deserves funding.

As for being a community builder, I don't have experience there, but I guess I'd make some suggestions/distinctions:

  • If you have a co-director for the community in question who is more AI-focused, perhaps split responsibilities along cause area lines
  • Be open about your personal position (i.e. being unpersuaded about the value of AI risk), but separate that from being a community builder, where you introduce the various major cause areas (including AI) and present the arguments for and against each

I don't think you should have to update or defer on your own views in order to be a community builder at all, and I'd encourage you to hold on to that feeling of being unconvinced.

Hope that helps! :)

JWS · 6d

I do not understand Dario's[1] thought process or strategy, really.

At a (very rough) guess, he thinks that Anthropic alone can develop AGI safely, and that they need money to keep up with OpenAI/Meta/any other competitors, because those competitors are going to cause massive harm to the world and can't be trusted to develop AGI safely?

If that's true then I want someone to hold his feet to the fire on that, in the style of Gary Marcus telling the Senate hearing that Sam Altman had dodged their question on what his 'worst fear' was - make him say it in an open, political hearing as a matter of record.

  1. Dario Amodei, Founder/CEO of Anthropic

The Vegan Society in the UK produces the VEG-1 supplement, which contains 100% of the adult NRV of iodine (150µg) along with other vitamins and nutrients that might be lacking on a vegan/vegetarian diet.

Thanks for sharing your take :)

EA have disregarded the possibility to degrowth the economy in rich countries without engaging the arguments

Do you have some references for this? Is the claim more that EA hasn't seen the degrowth arguments at all, or that it has and has dismissed them unjustifiably (in your opinion)?

The EA community has not addressed these reasons, just argued that economic growth is good and that degrowth in rich countries is anyway impossible.

Again, has the EA community made these arguments as opposed to a few individuals? I'm not sure I can think of a canonical source here.

The best example I can think of here is Growth and the case against randomista development - but the argument there is not that growth is good as an end in itself, but that it is the best route to increasing human welfare. Indeed, that post explicitly says that "economic growth is not all that matters. GDP misses many crucial determinants of human welfare".

***

Nevertheless, I do think your intuition is right that the 'degrowth' movement and the 'EA' movement are not friends, and are in fact often in opposition. But I think that's because both movements have a set of auxiliary ideological claims which are often in conflict. For example, Jason Hickel is one of the world's most prominent degrowthers, and he often argues that the charts showing global poverty declining are biased and incorrect (I think he's wrong); those charts are often a core part of the EA argument that problems of global health are tractable. But often this argument rests on an even more foundational argument about "is the world getting better or not", and so on, so what seems like an argument about 'is the current rate of GDP growth sustainable' actually turns out to be a deep argument about 'how should humanity live morally'.

JWS · 8d

I'm going to not directly answer your question, but if you do want a suggestion I'd recommend Stuart Russell's book Human Compatible. Very readable, includes AI history as well as arguments for being concerned about risk, and Russell literally (co)-wrote the textbook on AI so has impeccable credentials.

so I hear about AI and how all the funding and the community is now directed towards AI and how it is the most impactful thing.

Can I ask where you heard this from? Because the evidence we have is that this is not true in terms of funding. AI Safety has become seen as increasingly more impactful, but there's plenty of disagreement in the community about how impactful it actually is.

As an EA raised in the Oxford tradition, I have an urge to defer, but rationally I am not convinced.

Don't defer!! If you've done some initial research and reading, and you're not convinced, then it's absolutely fine to not be convinced!

Given that you've said that you're a non-technical EA who wants to do the most good but isn't inspired/convinced by AI, then don't force yourself into the field of AI! What field would you like to work in, what are your unique skills and experience? Then look at if you can apply them to any number of EA cause areas rather than technical AI safety.

Hi Remmelt, thanks for sharing these thoughts! I actually generally agree that mitigating and avoiding harms from AI should involve broad democratic participation rather than a narrow technical focus - it reminded me a lot of Gideon's post "We are fighting a shared battle". So view the questions below more as nitpicks, as I mostly agreed with the 'vibe' of your post.

AI companies have scraped so much personal data, that they are breaking laws.

Quick question for my understanding: do the major labs actually do their own scraping, or do other companies do the scraping, which the major AI labs pay for? I'm thinking of the use of Common Crawl to train LLMs here, for instance. It potentially might affect the legal angle, though that's not my area of expertise.

- Subtle regulatory capture of the UK's AI Safety initiatives.

Again, for my clarification, what do you think about this article? I have my own thoughts but would like to hear your take.

AI Ethics researchers have been supporting creatives, but lack funds. 
AI Safety has watched on, but could step in to alleviate the bottleneck.
Empowering creatives is a first step to de-escalating the conflict.

Thanks for the article; after a quick skim I'm definitely going to sit down and read it properly. My honest question is: do you think this is actually a step towards de-escalating the Ethics/Safety feud? Sadly, my own thoughts have become a lot more pessimistic over the last ~year or so, and I think asking the 'Safety' side to make a unilateral de-escalatory step is unlikely to actually lead to much progress.

(if the above questions aren't pertinent to your post here, happy to pick them up in DMs or some other medium)

Thanks for replying Greg. I have indeed upvoted/disagreevoted you here, because I really appreciate Forum voters explaining their reasoning even if I disagree.

  • Mainly, I think calling Nora's post "substantially negative EV for the future of the world" is tending towards the 'galaxy brain' end of EA that puts people off. I can't calculate that, and I think it's much more plausible that it provides the EA Forum with a well-written and knowledgeable perspective from someone who disagrees on alignment difficulty and on whether a pause is the best policy.
  • It's part of a debate series, so in my opinion it's entirely fine for it to be Nora's perspective. Her post is quite open that she thinks Alignment is going well, and I valued it a lot even if I disagreed with specific points in it. I don't think Nora is being intentionally wrong; those are just claims she believes that may turn out to be incorrect.
  • I recognise that you are a lot more concerned about AI x-risk than I am (not to say I'm not concerned though) and are a lot more sure about pursuing a moratorium. I suppose I'd caution against presupposing your conclusion is so correct that other views, such as Nora's, don't deserve a hearing in the public sphere. I think that's a really dangerous line of thought to go down. I think this is a place where a moral uncertainty framework could mitigate this line of thought, without necessarily watering down your commitment to prevent AI xRisk.