All of Gil's Comments + Replies

Gil
10
0
0

This type of thing is talked about from time to time. The unfortunate thing is that there aren't a ton of plausible interventions. The main tool we have to fight against authoritarianism in the US is lawsuits, and that's already being done; it's not a place where EA has a comparative advantage. The other big thing that people come up with is helping Democrats win elections, and there are people working on this, although (fortunately) elections are ultimately decided by the voters, and campaign tactics have limited effect at least at the national ... (read more)

Setting aside the substantive issues about how accurate this post is vs. the other one, I'll admit I'm very uncertain about how much we should avoid talking about partisan politics in AI forums, and how much doing so politicizes the debate vs. clarifies the stakes in ways that help us act more strategically.

Re "extremely toxic": most people who would see this post are left-wing; that is obvious.

I don't think that a word-for-word identical post where the author self-identified as an EA would be good. I think it would be less bad, and I might not clamor for the title to be changed.

The problem is that this post blew up on Twitter and a lot of people's image of EA was downgraded because of it. To me, that's very unfair; the post is wrong on the substance, it expresses an extremely unpopular opinion within EA, and the author doesn't even identify as an EA, so the post ... (read more)

IMO it's pretty outrageous to write a piece entitled "The EA case for [X]" when you yourself do not identify as an effective altruist and the [X] in question is extremely toxic to almost everyone on the outside. It's like if I wrote a piece "the feminist case for Benito Mussolini" where I made clear that I am not a feminist but feminists should be supporting Mussolini.

4
Larks
That seems not true to me? Trump and Kamala are roughly equally popular. I guess I don't share your intuition there. Obviously you should try to accurately represent feminist premises and draw sound inferences, and object-level criticisms would be very appropriate if you failed in this, but writing such a post itself seems fine to me if it passed the ideological Turing test. It reminds me of how students and lawyers often have to write arguments for something from the perspective of someone else, even if they don't believe it. It seems very strange to me to think that this post is bad, but a word-for-word identical post would be good if the author self-identified as an EA. The title is meant to describe the content of the post, and the post is about how EA premises might support Trump.

Could you please make the title "My case for Trump 2024" or even just "The case for Trump 2024"? It would be a more accurate description of this piece, and you are hurting EA's reputation a bit with the current title.

2
Larks
It seems like a fair title to me; the post is about arguing for Trump based specifically on EA premises. The phrase "the X case for Y" doesn't preclude there being an "X case for not Y".

I think it's worth noting that the two examples you point to are right-wing, which the vast majority of Silicon Valley is not. Right-wing tech people likely have more influence in DC, so that's not to say they're irrelevant, but I don't think they are representative of Silicon Valley as a whole.

2
Ozzie Gooen
I think Garry Tan is more left-wing, but I'm not sure. A lot of the e/acc community fights with EA, and my impression is that many of them are leftists. I think that the right-wing techies are often the loudest, but there are lefties in this camp too. (Honestly though, the right-wing techies and left-wing techies often share many of the same policy ideas. But they seem to disagree on Trump and a few other narrow things. Many of the recent Trump-aligned techies used to be more left-coded.)
Gil
27
7
1

I do want to make the point that how tied to EA you are isn’t really your choice. The reason it’s really easy for media outlets to tie EA to scientific racism is that there’s a lot of interaction with scientific racists and nobody from the outside really cares if events like this explicitly market themselves as EA events or not. Strong free speech norms enabling scientific racism have always been a source of tension for this community, and you can’t just get around that by not calling yourselves EA.

One thing Manifest could do is stop actively associating with EA — promoting their events and funding platforms on this forum, etc. etc.

Ok. Sorry about the tone of the last response, that came off more rude than I would have liked. I do find it unsettling or norm-breaking to withhold information like this, but I guess you have to do what they allow you to do. I remain skeptical.

I don’t think this is norm-breaking for the EA forum or general discourse (though I might still prefer people act differently).

Gil
5
11
7

This number is crazy low. It seems bad to make a Cause Area post on the forum that entirely rests on implausibly low numbers taken from some proprietary data that can’t be shared. You should at least share where you got this data and why we should believe it.

A few quick thoughts: 

Many arguments about the election’s tractability don’t hinge on the impact of donations. 

  • Donating is not the only way to contribute to the election. Here is a public page showing the results of a meta-analysis on the effectiveness of different uses of time to increase turnout (though the number used to estimate the cost-effectiveness of fundraising is not sourced here). The analysis itself is restricted, but people can apply to request access. 
  • Polling and historical data suggest this election has a good chance of b
... (read more)

The main questions in my mind are the extent to which public opinion (in the tech sphere and beyond) will swing against OpenAI in the midst of all this, and the extent to which it will matter. There's potential for real headway here - public opinion can be strong.

9
Geoffrey Miller
My sense is that public opinion has already been swinging against the AI industry (not just OpenAI), and that this is a good and righteous way to slow down reckless AGI 'progress' (i.e. the hubris of the AI industry driving humanity off a cliff).
1
Nathan Young
Along what axis might there be headway?

Love a good cost-effectiveness calculation.

Has anyone done a calculation of the (wild) animal welfare effects of climate change? Or is this so ungodly intractable that no one has dared attempt it?

Vasco Grilo🔸
Thanks! Me too. I am not aware of estimates of the impact of climate change on wild animal welfare. However, Brian Tomasik discussed many relevant factors qualitatively, and summarised his position as follows: "On balance, I'm extremely uncertain about the net impact of climate change on wild-animal suffering; my probabilities are basically 50% net good vs. 50% net bad when just considering animal suffering on Earth in the next few centuries (ignoring side effects on humanity's very long-term future)".

Brian also has many estimates of the years of wild animal lives affected by changing land use (e.g. a change from rainforest to crops). These can be converted to human years using welfare ranges, but then there is the super hard question of what the level of welfare of wild animals is relative to their welfare range. I have calculated the impact of saving lives on wild animal welfare assuming the lives of wild insects are as intense as those of broilers relative to their respective welfare ranges. "All in all, I can see the impact on wild animals being anything from negligible to all that matters in the nearterm". Despite this, I believe there is lots of uncertainty about whether wild animal welfare is positive or negative, so I did not include impacts on wild animals in my post.

In any case, if one thinks the impacts of climate change on humans may well be dominated by those on wild animals, interventions to help these will look better than ones to decrease GHG emissions. Likewise, if one thinks the impacts of saving human lives may be dominated by impacts on farmed animals, interventions to help these will look better than ones to decrease GHG emissions. So I believe interventions to help animals are better than ones to decrease GHG emissions under any worldview.
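For concreteness, here is a minimal sketch of the conversion Vasco describes, from animal-years affected to human-equivalent welfare-years via welfare ranges. Every number below is an illustrative assumption, not a figure from Brian Tomasik, Rethink Priorities, or Vasco's own analysis.

```python
# Illustrative sketch: converting wild-animal life-years affected by a land-use
# change into human-equivalent welfare-years using welfare ranges.
# All numbers are assumptions for illustration only.

animal_years_affected = 1e9   # assumed insect-years affected by the land-use change
welfare_range = 0.01          # assumed welfare range of the species relative to humans
intensity_fraction = 0.5      # assumed average intensity of welfare relative to that range
sign = -1                     # -1 if those lives are judged net-negative, +1 if net-positive

human_equivalent_years = sign * animal_years_affected * welfare_range * intensity_fraction
print(f"Human-equivalent welfare-years: {human_equivalent_years:.2e}")
```

As the comment notes, the intensity fraction (how good or bad wild animal lives are relative to their welfare range) is the genuinely hard input, and even its sign is disputed.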

Trump is anti-tackling pandemics except insofar as it implies he did anything wrong

Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he is particularly interested in.

I know politics is discouraged on the EA forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He's up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas; this may change under a Trump administration.

3
Nathan Young
I'd say it's 50/50 but sure. And while politics is discouraged, I don't think that your thing is really what's being discouraged.  
5
huw
Orthogonal to your post, that particular policy position seems out of character for him. He was very happy to tout Operation Warp Speed as president & encouraged people to get vaccinated (as well as privately being a germaphobe). I wonder what's motivating this specific statement?

Yes, I just would have emphasized it more. I sort of read it as "yeah, this is something you might do if you're really interested", while I would more say "this is something you should really probably do".

Mostly agreed, but I do think that donating some money, if you are able, is a big part of being in EA. And again this doesn’t mean reorienting your entire career to become a quant and maximize your donation potential.

1
James Herbert
Oh but I did put 'donate some money' in my 'hobby' list - or am I misunderstanding you?
Gil
1
1
1
1

All punishment is tragic, I guess, in that it would be a better world if we didn't have to punish anyone. I just don't think the fact that SBF on some level "believed" in EA (whatever that means, and if that is even true) - despite not acting in accordance with the principles of EA - is a reason that his punishment is more tragic than anyone else's.

Gil
11
8
5

This is just not true if you read about the case: he obviously knew he was improperly taking user funds and told all sorts of incoherent lies to explain it, and it's really disappointing to see so many EAs continue to believe he was well-intentioned. You can quibble about the length of the sentence, but he broke the law, and he was correctly punished for it.

Please note that my previous post took the following positions:

1. That SBF did terrible acts that harmed people.

2. That it was necessary that he be punished. To the extent that it wasn't implied by the previous comment, I clarify that what he did was illegal (EDIT: which would involve a finding of culpable mental states that would imply that his wrongdoing was no innocent or negligent mistake).

3. The post doesn't even take a position as to whether the 25 years is an appropriate sentence.

All of the preceding is consistent with the proposition that he also a... (read more)

Gil
10
3
0

Suppose someone were to convince you that the interventions GiveWell pursues are not the best way to improve "global capacity", and that a better way would be to pursue more controversial/speculative causes like population growth or long-run economic growth or whatever. I just don't see EA reorienting GHW-worldview spending toward controversial causes like this, ever. The controversial stuff will always have to compete with animal welfare and AI x-risk. If your worldview categorization does not always make the GHW worldview center on non-controversial stuf... (read more)

How are you defining global capacity, then? This is currently being argued in other replies better than I can, but I think there's a good chance that the most reasonable definition implies optimal actions very different from GiveWell's. Although I could be wrong.

I don't really think the important part is the metric - the important part is that we're aiming for interventions that agree with common sense and don't require accepting controversial philosophical positions (beyond rejecting pro-local bias, I guess).

2
Richard Y Chappell🔸
I don't have anything as precise as a definition, but something in the general vicinity of [direct effects on individual welfare + indirect effects on total productivity, which can be expected to improve future welfare]. It's not a priori that GiveWell is justified by any reasonable EA framework. It is, and should be, open to dispute. So I see the debate as a feature, not a bug, of my framework. (I personally do think it is justified, and try to offer a rough indication of why, here. But I could be wrong on the empirics. If so, I would accept the implication that the funding should then be directed differently.)

On the "important part", distinguish three steps: (i) a philosophical goal: preserving room for common sense causes on a reasonable philosophical basis; (ii) the philosophical solution: specifying that reasonable basis, e.g. valuing reliable improvements (including long-term flow-through effects) to global human capacity; (iii) providing a metric (GDP, QALYs, etc.) by which we could attempt to measure, at least approximately, how well we are achieving the specified value.

I'm not making any claims about metrics. But I do think that my proposed "philosophical solution" is important, because otherwise it's not clear that the philosophical goal is realizable (without moral arbitrariness).

Love the post, don't love the names given.

I think "capacity growth" is a bit too vague, something like "tractable, common-sense global interventions" seems better.

I also think "moonshots" is a bit derogatory, something like "speculative, high-uncertainty causes" seems better.

2
Richard Y Chappell🔸
Oops, definitely didn't mean any derogation -- I'm a big fan of moonshots, er, speculative high-uncertainty (but high EV) opportunities! [Update: I've renamed them to 'High-impact long-shots'.] I disagree on "capacity growth" though: that one actually has descriptive content, which "common-sense global interventions" lacks. (They are interventions to achieve what, exactly?)

This post is a great exemplar for why the term “AI alignment” has proven a drag on AI x-risk safety. The concern is and has always been that AI would dominate humanity like humans dominate animals. All of the talk about aligning AI to “human values” leads to pedantic posts like this one arguing about what “human values” are and how likely AIs are to pursue them.

7
Matthew_Barnett
Is there a particular part of my post that you disagree with? Or do you think the post is misleading? If so, how? I think there are a lot of ways AI could go wrong, and "AIs dominating humans like how humans dominate animals" does not exhaust the scope of potential issues.
Gil
22
3
0

Altman, like most people with power, doesn’t have a totally coherent vision for why him gaining power is beneficial for humanity, but can come up with some vague values or poorly-thought-out logic when pressed. He values safety, to some extent, but is skeptical of attempts to cut back on progress in the name of safety.

Hmm, I still don't think this response quite addresses the intuition. Various groups wield outsized political influence owing to their higher rates of voting - seniors, a lot of religious groups, people with post-grad degrees, etc. Nonetheless, they vote in a lot of uncompetitive races where it would seem their vote doesn't matter. It seems wrong that an individual vote of theirs has much EV in an uncompetitive race. On the other hand, it seems basically impossible to mediate strategy such that there is still a really strong norm of voting in competitive races but ... (read more)

Sorry, I shouldn't have used the phrase "the fact that". Rephrased, the sentence should say "why would the universe taking place in an incomputable continuous setting mean it's not implemented". I have no confident stance on whether the universe is continuous or not, just that I find the argument presented unconvincing.

That and/or acausal decision theory is at play for this current election.

I will say that I think most of this stuff is really just dancing around the fundamental issue, which is that the expected value of your single vote really isn't the best way of thinking about it. Your vote "influences" other people's votes, either through acausal decision theory or because of norms that build up (elections are repeated games, after all!).
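For reference, a minimal sketch of the "local causal EV of a single vote" that this thread is pushing back on; both inputs are invented for illustration and are not estimates from anyone in the discussion.

```python
# Standard local causal expected-value calculation for a single vote.
# Both numbers are made-up illustrative assumptions.

p_decisive = 1e-7        # assumed probability that your vote flips the election outcome
value_if_flipped = 1e10  # assumed value (in whatever units you like) of the better outcome winning

ev_of_vote = p_decisive * value_if_flipped
print(f"Local causal EV of one vote: {ev_of_vote:,.0f}")  # 1,000 in these made-up units
```

The comment's point is that this quantity leaves out the acausal and norm-building channels through which a vote can matter.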

6
Michael St Jules 🔸
I think the local causal expected value of your vote and most other things you do is actually a decent proxy, even if you accept acausal influence. The proxy probably doesn't work well if you include options to trade acausally (unless acausal trade is much weaker than non-trade-based acausal influence and so can be practically ignored).

1. I doubt your own voting has much causal impact on the voting behaviour of others.
2. I doubt there's much acausal influence from your voting on others locally on this Earth (and this quantum branch, under many-worlds).
3. And then everything gets multiplied with correlated agents across a multiverse, not just voting.

So if voting didn't look good on its local causal EV compared to other things you could do with that time, then I doubt it would look good on its acausal EV (suitably defined, in case of infinities).

I guess one caveat with acausal influence across a multiverse is that the agents you're correlated with could be voting on entirely different things, with totally different local stakes (and some voting for things you'd disagree with). But the same would be true for other things you do and your acausal influence over others for them. So, it's not clear this further favours voting in particular over other things.
2
Robi Rahman🔸
So you think your influence on future voting behavior is more impactful than your effect on the election you vote in?

I may go listen to the podcast if you think it settles this more, but on reading it I'm skeptical of Joscha's argument. It seems to skip the important leap from "implemented" to "computable". Why does the fact that our universe takes place in an incomputable continuous setting mean it's not implemented? All it means is that it's not being implemented on a computer, right?

2
Vasco Grilo🔸
Interesting point. I do not think we have any empirical evidence that the universe is:
  • Continuous, because all measurements have a finite sensitivity.
  • Infinite, because all measurements have a finite scale.

Claiming the universe is continuous or infinite requires extrapolating infinitely far from observed data. For example, to conclude that the universe is infinite, people usually extrapolate from the universe being pretty flat locally to it being perfectly flat globally. This is a huge extrapolation:
  • Modelling our knowledge about the local curvature as a continuous symmetrical distribution, even if the best guess is that the universe is perfectly flat locally, there is actually 0% chance it has zero local curvature, 50% it has negative, and 50% it has positive.
  • We do not know whether the curvature infinitely far away is the same as the local one.

In my mind, claiming the universe is perfectly flat and infinite based on it being pretty flat locally is similar to claiming that the Earth is flat and infinite based on it being pretty flat locally.

I think there’s a non-negligible chance we survive until the heat death of the sun or whatever, maybe even after, which is not well-modelled by any of this.

To clarify: the point of this parenthetical was to state reasons why a world without transhumanist progress may be terrible. I don't think animal welfare concerns disappear or even are remedied much with transhumanism in the picture. As long as animal welfare concerns don't get much worse, however, transhumanism changes the world either from good to amazing (if we figure out animal welfare) or terrible to good (if we don't). Assuming AI doesn't kill us, obviously.

Answer by Gil
23
10
1

I think the simplest answer is not that such a world would be terrible (except for factory farming and wild animal welfare, which are major concerns), but that a world with all these transhumanist initiatives would be much better.

8
Vasco Grilo🔸
Thanks for pointing that out. Just to elaborate a little, the table below from Newberry 2021 has some estimates of how valuable the future can be. Even if one does not endorse the total view, person-affecting views may be dominated by possibilities of large future populations of necessary people.
4
Hayven Frienby
How could AI stop factory farms (aside from making humans extinct)? I'm honestly interested in the connection there. If you're referring to cellular agriculture, I'm not sure why any form of AI would be needed to accomplish that.

I am glad somebody wrote this post. I often have the inclination to write posts like these, but I feel like advice like this is sometimes good and sometimes bad and it would be disingenuous for me to stake out a claim in any direction. Nonetheless, I think it’s a good mental exercise to explicitly state the downsides of comparative claims and the upsides of absolute claims, and then people in the comments will (and have) assuredly explain the opposite.

Gil
17
9
1

"...for most professional EA roles, and especially for "thought leadership", English-language communication ability is one of the most critical skills for doing the job well"

Is it, really? Like, this is obviously true to some extent. But I'm guessing that English communication ability isn't much more important for most professional EA roles than it is for, e.g., academics or tech startup founders. These places are much more diverse in native language than EA, I think.

8
Arepo
Yeah, I want thought leaders to be highly proficient in logic, statistics, and/or (preferably 'and') some key area of science, engineering, philosophy or social science. I really don't see any strong need for them to speak pristine English, as long as they can write clearly enough in their native language that someone can easily translate it.
Answer by Gil
0
0
0

How did he deal with two-envelope considerations in his calculation of moral weights for OpenPhil?

[This comment is no longer endorsed by its author]
2
CarlShulman
I have never calculated moral weights for Open Philanthropy, and as far as I know no one has claimed that. The comment you are presumably responding to began by saying I couldn't speak for Open Philanthropy on that topic, and I wasn't.
Gil
30
1
0

This consideration is something I had never thought of before and blew my mind. Thank you for sharing.

Hopefully I can summarize it (assuming I interpreted it correctly) in a different way that might help people who were as befuddled as I was. 

The point is that, when you give probabilistic weight to two different theories of sentience being true, you have to assign units to sentience in these different theories in order to compare them.

Say you have two theories of sentience that are similarly probable, one dependent on intelligence and one depend... (read more)
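A minimal numerical sketch of the two-envelope issue being summarized; the theories, credences, and weights below are invented for illustration and are not taken from the original post.

```python
# Two theories of sentience, each given 50% credence.
# Under theory A a chicken has 0.02 of a human's moral weight; under theory B, 0.5.
# The "expected" exchange rate depends on which species you fix as the unit.

p_a, p_b = 0.5, 0.5
chicken_per_human_a = 0.02  # illustrative assumption under theory A
chicken_per_human_b = 0.5   # illustrative assumption under theory B

# Normalizing in human units: expected chickens-per-human moral weight
expected_in_human_units = p_a * chicken_per_human_a + p_b * chicken_per_human_b      # 0.26

# Normalizing in chicken units: expected humans-per-chicken moral weight
expected_in_chicken_units = p_a / chicken_per_human_a + p_b / chicken_per_human_b    # 26.0

print(expected_in_human_units)        # implies a chicken is worth ~0.26 humans
print(1 / expected_in_chicken_units)  # implies a chicken is worth ~0.04 humans
```

Same credences and same theories, but the answer differs by a factor of roughly seven depending on the choice of unit; that is the two-envelope problem the comment describes.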

Gil
74
24
3

If OpenPhil’s allocation is really so dependent on moral weight numbers, you should be spending significant money on research in this area, right? Are you doing this? Do you plan on doing more of this given the large divergence from Rethink’s numbers?

Several of the grants we’ve made to Rethink Priorities funded research related to moral weights; we’ve also conducted our own research on the topic. We may fund additional moral weights work next year, but we aren’t certain. In general, it's very hard to guarantee we'll fund a particular topic in a future year, since our funding always depends on which opportunities we find and how they compare to each other — and there's a lot we don't know about future opportunities.

I unfortunately won’t have time to engage with further responses for now, but whenev... (read more)

Yeah, I think there's a big difference between how Republican voters feel about it and how their elites do. Romney is, uhh, not representative of most elite Republicans, so I'd be cautious there.

Gil
11
2
0

Do we have any idea how Republican elites feel about AI regulation?

This seems like the biggest remaining question mark, which will determine how much AI regulation we get. It's basically guaranteed that Republicans will have to agree to any AI regulation legislation, and Biden can't do too much without funding in legislation. Also, there's a very good chance Trump wins next year and will control executive AI safety regulation.

3
Odd anon
Copy-pasting something I wrote elsewhere: Also, Mitt Romney seemed to be very concerned about AI risk during the hearings, and I don't think he was at all alone among the Republicans present.

Politics is really important, so thank you for recognizing that and adding to the discussion about Pause.

But this post confuses me. You start by talking about how protests are stronger when they are centered on something people care about rather than simply policy advocacy. Which, I don't know if I agree with, but it's an argument that you can make. But then you shift toward advocating for regulation rather than a pause, which is also just policy advocacy, right? And I don't understand why you'd expect it to have better politics than a pause. Your points about needing companies to prove they are safe are pretty much the same points that Holly Elmore has been making, and I don't know why they apply better to regulation than to a Pause.

Reading this great thread on SBF's bio, it seems like his main problem was stimulants wrecking his brain. He was absurdly overconfident in everything he did, did not think things through, didn't sleep, and admitted to being deficient in empathy ("I don't have a soul"). Much has been written about deeper topics like naive utilitarianism and trust in response to SBF, but I wonder if the main problem might just be the drug culture that exists in certain parts of EA. Stimulants should be used with caution, and a guy like SBF probably should never have been using them, or at least nowhere near the amount he was getting.

7
Jason
Is "the amount he was getting" publicly known? I think we need to be really careful to distinguish self-medication or recreational use from legitimate medical use to [edit: avoid inadvertently criticizing] appropriate medical treatment. The Adderall and Emsam doses referenced in a recent court order are not inappropriate for the diagnoses provided, if the prescriber and patient know what they are doing. I'm also not aware of any significant risk of medical-level doses triggering erratic behavior, but havent looked at the literature specifically. (I don't encourage unauthorized use of controlled substances, but also don't want to discourage those who have mental health conditions from accessing appropriate treatment.)

I think the judgement calls used in coming up with moral weights have less to do with caring about animals and more to do with how much you think attributes like intelligence and self-awareness have to do with sentience. They're applied to animals, but I think they're really more neuroscience/philosophy intuitions. The people who have the strongest/most out-of-the-ordinary intuitions are MIRI folk, not animal lovers.

Yeah, I guess that makes sense. But uh.... have other institutions actually made large efforts to preserve such info? Which institutions? Which info?

9
[anonymous]
Huh, maybe not. Might be worth buying a physical copy of The Knowledge too (I just have). And if anyone's looking for a big project...
Gil
20
8
2

This might be a dumb question, but shouldn't we be preserving more elementary resources to rebuild a flourishing society? Current EA is kind of only meaningful in a society with sufficiently abundant resources to go into nonprofit work. It feels like there are bigger priorities in the case of sub-x-risk.

7
Aaron Bergman
I've definitely thought about this, and the short answer: it depends on who "we" is. A sort of made-up particular case I was imagining is "New Zealand is fine, everywhere else totally destroyed", because I think it targets the general class of situation most in need of action (I can justify this on its own terms but I'll leave it for now).

In that world, there's a lot of information that doesn't get lost: everything stored in the laptops and servers/datacenters of New Zealand (although one big caveat, and the reason I abandoned the website, is that I lost confidence that info physically encoded in e.g. a cloud server in NZ would be de facto accessible without a lot of the internet's infrastructure physically located elsewhere), everything in all its university libraries, etc. That is a gigantic amount of info, and seems to pretty clearly satisfy the "general info to rebuild society" thing. FWIW I think this holds if only a medium-size city were to remain intact; not certain if it's, say, a single town in Northern Canada; probably not a tiny fishing village, but in the latter case it's hard to know what a tractable intervention would be.

But what does get lost? Anything niche enough not to be downloaded on a random NZer's computer or in a physical book in a library. Not everything I put in the archive, to be sure, but probably most of it. Also, 21GB of the type of info I think you're getting at is in the "non EA info for the post apocalypse folder" because why not! :)
7
[anonymous]
That was my first thought, but I expect many other individuals/institutions have already made large efforts to preserve such info, whereas this is probably the only effort to preserve core EA ideas (at least in one place)? And it looks like the third folder - "Non-EA stuff for the post-apocalypse" - contains at least some of the elementary resources you have in mind here. But yeah, I'm much more keen to preserve arguments for radical empathy, scout mindset, moral uncertainty etc. than, say, a write-up of the research behind HLI's charity recommendations. Maybe it would also be good to have an even smaller folder within "Main content (3GB)" with just the core ideas; the "EA Handbook" (39MB) sub-folder could perhaps serve such a purpose in the meantime. Anyway, cool project! I've downloaded :)
Gil
17
6
0

I don't think points about timelines reflect an accurate model of how AI regulations and guardrails are actually developed. What we need is for Congress to pass a law ordering some department within the executive branch to regulate AI, e.g. by developing permitting requirements or creating guidelines for legal AI research or whatever. Once this is done, the specifics of how AI is regulated are mostly up to that executive branch, which can and will change over time.

Because of this, it is never "too soon" to order the regulation of AI. We may not know exactly ... (read more)

I think most of the animal welfare neglect comes from the fact that if people are deep enough into EA to accept all of its "weird" premises, they will donate to AI safety instead. Animal welfare is really this weird midway spot between "doesn't rest on controversial claims" and "maximal impact".

8
Aaron Bergman
Definitely part of the explanation, but my strong impression from interaction irl and on Twitter is that many (most?) AI-safety-pilled EAs donate to GiveWell and much fewer to anything animal related. I think ~literally except for Eliezer (who doesn’t think other animals are sentient), this isn’t what you’d expect from the weirdness model implied. Assuming I’m not badly mistaken about others’ beliefs and the gestalt (sorry) of their donations, I just don’t think they’re trying to do the most good with their money. Tbc this isn’t some damning indictment - it’s how almost all self-identified EAs’ money is spent and I’m not at all talking about ‘normal person in rich country consumption.’

On "End high-skilled immigration programs": The thing about big-brained stuff like this is it very rarely works. Consider:

What is p(doom|immigration restrictions)-p(doom|status quo immigration)? To that end: might immigration be useful in AI Safety research as well? 

What is E[utility from AI doom]-E[utility from not AI doom]? This also probably gets into all sorts of infinite ethics/Pascal's mugging issues.

How likely are you to actually change immigration laws like this?

What is the non-AI-related utility of immigration, before AI doom or assuming AI d... (read more)

6
ColdButtonIssues
"The other stuff seems more reasonable but if you're going to restrict immigrants' ability to work on AI you might as well restrict natives' ability to work on AI as well. I doubt that the former is much easier than the latter." This part of your comment I disagree on. There are specific provisions in US law to protect domestic physicians, immigrants on H1B visas have way fewer rights and are more dependent on their employers than citizen employees, and certain federal jobs or contractor positions are limited to citizens/permanent residents. I think this isn't outlandish, but certainly not hard. The end of high-skilled immigration won't happen, I agree. Even when RW populists actually win national elections, they don't do this. 
Gil
12
3
2

Let me make the contrarian point here that you don't have to build AGI to get these benefits eventually. An alternative, much safer approach would be to stop AGI entirely and try to boost human/biological intelligence with drugs or other biotech. Stopping AGI is unlikely to happen, and this biological route would take a lot longer, but it's worth bringing up in any argument about the risks vs. rewards of AI.

1
Karl von Wendt
I fully agree, see this post.

I am nervous about wading into partisan politics with AI safety. I think there's a chance that AI safety becomes super associated with one party due to a stunt like this, or, worse, becomes a laughing stock for both parties. Partisan politics is an incredibly adversarial environment, which I fear could undermine the currently unpolarized nature of AI safety.

Ooh, now this is interesting!

Running a candidate is one thing; actually getting coverage for this candidate is another. If we could get a candidate to actually make the debate stage in one of the parties, that would be a big deal, but that would also be very hard. The one person I can think of who could actually get on the debate stage is Andrew Yang, if there ends up being a Democratic primary (which I am not at all sure about). If I recall correctly, he has actually talked about AI x-risk in the past? Even if that's wrong, I know he has interacted with EA before, s... (read more)
