Michael_Cohen

Objections to Value-Alignment between Effective Altruists

Among the many things I agree with, the part I agree with most:

EAs give high credence to non-expert investigations written by their peers; they rarely publish in peer-reviewed journals and become increasingly dismissive of academia

I think a fair amount of the discussion of intelligence loses its bite if "intelligence" is replaced with what I take to be its definition: "the ability to succeed at a randomly sampled task" (for some reasonable distribution over tasks). But maybe you'd say that perceptions of intelligence in the EA community are only loosely correlated with intelligence in this sense?
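
As a sketch of how that definition could be formalized, with the task distribution $D$ and the success measure both assumptions on my part (in the spirit of Legg and Hutter's definition of universal intelligence):

$$\mathrm{Int}(\pi) \;=\; \mathbb{E}_{t \sim D}\big[\mathrm{success}(\pi, t)\big]$$

i.e. an agent $\pi$'s intelligence is its expected success on a task $t$ drawn from $D$.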

As for cached beliefs that people accept on faith from the writings of perceived-intelligent central figures, I can't identify any beliefs of mine that I couldn't defend myself (with the exception that I think many mainstream cultural norms are hard to improve on, so for a particular one, like "universities are the best institutions for producing new ideas", I can't necessarily defend it on the object level). But I'm pretty sure there aren't any beliefs I hold just because a high-status EA holds them. Of course, some high-status EAs have convinced me of some positions, most notably Peter Singer. But I don't think that mechanism of belief transmission within EA, i.e. object-level persuasion, runs afoul of your concerns about echo chambers.

But maybe you've had a number of conversations with people who appeal to "authority" in defending certain positions, which I agree would be a little dicey.

X-risk dollars -> Andrew Yang?
it doesn't follow that it's a good investment overall

Yes, it doesn't follow by itself--my point was only meant as a counterargument to your claim that the efficient market hypothesis precludes the possibility of political donations being a good investment.

X-risk dollars -> Andrew Yang?

Well, there are >100 million people who have to join some constituency (i.e. pick a candidate), whereas potential EA recruits aren't otherwise picking between a small set of cults, er, philosophical movements. Also, AI-PhD-ready people are in much shorter supply than, e.g., Iowans, and they'd be giving up much, much more than someone just casting a vote for Andrew Yang.

X-risk dollars -> Andrew Yang?
we've had two presidents now who actively tried to counteract mainstream views on climate-change, and they haven't budged climate scientists at all.

I have updated in your direction.

Of course, AI alignment is substantially more scientifically accepted and defensible than climate skepticism.

Yep.

You only mean this as a possibility in the future, if there is any point where AGI is believed to be imminent, right?

No, I meant starting today. My impression is that coalition-building in Washington is tedious work. Scientists agreed to avoid gene editing in humans well before it was possible (I think). In part, that might have made consensus easier, since the technology's remoteness meant fewer people were researching it to begin with. If AGI becomes a larger part of an established field, it seems much harder to build a consensus to stop pursuing it.

X-risk dollars -> Andrew Yang?

That is plausible. But "definitely" definitely wouldn't be called for when comparing Yang with Grow EA. How many EA people who could be sold on an AI PhD do you think could be recruited with $20 million?

X-risk dollars -> Andrew Yang?

The other thing is that in 20 years, we might want the president on the phone with very specific proposals. What are the odds they'll spend a weekend discussing AGI with Andrew Yang if Yang is a former president vs. if he isn't?

But as for what a president could actually do: create a treaty for countries to sign that bans research into AGI. Very few researchers are aiming for AGI anyway. Probably the best starting point would be to get the AI community on board with such a thing. It seems impossible today that a consensus could be built around it, but the presidency is a large pulpit. I'm not talking about making public speeches on the topic; I mean inviting the most important AI researchers to the White House to chat with Stuart Russell and some other folks. There are so many details to work out that we could go back and forth on, but that's one possibility for something that would be a big deal if it could be made to work.

X-risk dollars -> Andrew Yang?
If you're super focused on that issue, then it will definitely be better to spend your money on actual AI research, or on some kind of direct effort to push the government to consider the issue (if such an effort exists).

I am, and that's what I'm wondering. The "definitely" isn't so obvious to me. Another $20 million to MIRI vs. an increase in the probability of Yang's presidency by, let's say, 5%--I don't think it's clear cut. (And I think MIRI is the best place to fund research.)
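
To make that comparison explicit (the value terms here are placeholders of my own, not figures from the thread), the Yang donation wins whenever

$$0.05 \cdot V(\text{Yang presidency}) \;>\; V(\$20\text{M to MIRI}),$$

where $V(\cdot)$ is the expected x-risk reduction from each outcome; my claim is just that it's not obvious which side is larger.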

X-risk dollars -> Andrew Yang?
Is your claim that AI policy is currently talent-constrained, and having Yang as president would lead to more people working on it, thereby making it money-constrained?

No--just that there's perhaps a unique opportunity for cash to make a difference. Otherwise, it seems like orgs are struggling to spend money productively on AI policy. But that's just what I hear.

Can you elaborate on this?

First pass: power is good. Second pass: get practice doing things like autonomous weapons bans, build a consensus around getting countries to agree to international monitoring of software, get practice doing that monitoring, negotiate minimally invasive ways of doing this that protect intellectual property, devote resources to AI safety research (and normalize the field), and ten more things I haven't thought of.

X-risk dollars -> Andrew Yang?
Additionally, Morning Consult shows higher support than all other pollsters. The average for Steyer in early states is considerably less favorable.

Good to know.

Steyer is running ads with little competition

Really?

X-risk dollars -> Andrew Yang?

I am in general more trusting, so I appreciate this perspective. I know he's a huge fan of Sam Harris and has historically listened to his podcast, so I imagine he's heard Sam's thoughts (and maybe Stuart Russell's thoughts) on AGI.
