All of Michael_Cohen's Comments + Replies

What are the coolest topics in AI safety, to a hopelessly pure mathematician?

Here are some of mine to add to Vanessa's list.

One on imitation learning. [Currently an "accept with minor revisions" at JMLR]

One on conservatism in RL. A special case of Vanessa's infra-Bayesianism. [COLT 2020]

One on containment and myopia. [IEEE]

Objections to Value-Alignment between Effective Altruists

Among the many things I agree with, the part I agree with most:

EAs give high credence to non-expert investigations written by their peers, they rarely publish in peer-review journals and become increasingly dismissive of academia

I think a fair amount of the discussion of intelligence loses its bite if "intelligence" is replaced with what I take to be its definition: "the ability to succeed at a randomly sampled task" (for some reasonable distribution over tasks). But maybe you'd say that perceptions of intelligence in the EA community... (read more)
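For concreteness, here is a sketch of the kind of formalization I have in mind, in the spirit of Legg and Hutter's universal intelligence measure (the complexity-based weighting below is one candidate "reasonable distribution", not a commitment):

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

where $E$ is a class of environments (tasks), $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected return the agent $\pi$ achieves in $\mu$.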

But that mechanism for belief transmission within EA, i.e. object-level persuasion, doesn't run afoul of your concerns about echo-chamberism, I don't think.

Getting too little exposure to opposing arguments is a problem. Most arguments are informal, so they are not necessarily even valid, and even for the ones that are, we can still doubt their premises, because there may be other sets of premises that conflict with them but are at least as plausible. If you disproportionately hear arguments from a given community, you're more likely than otherwise to be biased towards the views of that community.
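A purely illustrative calculation (numbers invented): a valid argument only guarantees its conclusion in the event that all of its premises hold, so with $n$ independent premises each held at credence $c$, the argument by itself supports its conclusion with probability

$$c^{\,n}, \quad \text{e.g.} \quad 0.8^{3} \approx 0.51,$$

which is why doubting individually-plausible premises can still matter.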

X-risk dollars -> Andrew Yang?
it doesn't follow that it's a good investment overall

Yes, it doesn't by itself--my point was only meant as a counterargument to your claim that the efficient market hypothesis precluded the possibility of political donations being a good investment.

X-risk dollars -> Andrew Yang?

Well, there are >100 million people who have to join some constituency (i.e. pick a candidate), whereas potential EA recruits aren't otherwise picking between a small set of ~~cults~~ philosophical movements. Also, AI-PhD-ready people are in much shorter supply than, e.g., Iowans, and they'd be giving up much, much, much more than someone just casting a vote for Andrew Yang.

2 points · kbog · 3y
There are numerous minor, subtle ways that EAs reduce AI risk. Small in comparison to a research career, but large in comparison to voting. (Voting can actually be one of them.)
X-risk dollars -> Andrew Yang?
we've had two presidents now who actively tried to counteract mainstream views on climate change, and they haven't budged climate scientists at all.

I have updated in your direction.

Of course, AI alignment is substantially more scientifically accepted and defensible than climate skepticism.

Yep.

You only mean this as a possibility in the future, if there is any point where AGI is believed to be imminent, right?

No I meant starting today. My impression is that coalition-building in Washington is tedious work. Scientists agreed to avoid gene editing in... (read more)

2 points · kbog · 3y
FWIW I don't think that would be a good move. I don't feel like fully arguing it now, but main points (1) sooner AGI development could well be better despite risk, (2) such restrictions are hard to reverse for a long time after the fact, as the story of human gene editing shows, (3) AGI research is hard to define - arguably, some people are doing it already.
X-risk dollars -> Andrew Yang?

That is plausible. But "definitely" definitely wouldn't be called for when comparing Yang with "grow EA". How many EA people who could be sold on an AI PhD do you think could be recruited with $20 million?

2 points · kbog · 3y
I meant that it's definitely more efficient to grow the EA movement than to grow Yang's constituency. That's how it seems to me, at least. It takes millions of people to nominate a candidate.
X-risk dollars -> Andrew Yang?

The other thing is that in 20 years, we might want the president on the phone with very specific proposals. What are the odds they'll spend a weekend discussing AGI with Andrew Yang if Yang used to be president vs. if he didn't?

But as for what a president could actually do: create a treaty for countries to sign that bans research into AGI. Very few researchers are aiming for AGI anyway. Probably the best starting point would be to get the AI community on board with such a thing. It seems impossible today that consensus could be built about such a ... (read more)

4 points · kbog · 3y
You only mean this as a possibility in the future, if there is any point where AGI is believed to be imminent, right? Still, I think you are really overestimating the ability of the president to move the scientific community. For instance, we've had two presidents now who actively tried to counteract mainstream views on climate change, and they haven't budged climate scientists at all. Of course, AI alignment is substantially more scientifically accepted and defensible than climate skepticism. But the point still stands.
X-risk dollars -> Andrew Yang?
If you're super focused on that issue, then it will definitely be better to spend your money on actual AI research, or on some kind of direct effort to push the government to consider the issue (if such an effort exists).

I am, and that's what I'm wondering. The "definitely" isn't so obvious to me. Another $20 million to MIRI vs. an increase in the probability of Yang's presidency by, let's say, 5%--I don't think it's clear-cut. (And I think MIRI is the best place to fund research.)
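A back-of-the-envelope frame for that comparison, with every symbol hypothetical: write $\Delta_M$ for the reduction in x-risk from an extra \$20 million to MIRI, and $\Delta_Y$ for the reduction conditional on a Yang presidency. Then the donation beats the grant exactly when

$$0.05 \cdot \Delta_Y > \Delta_M,$$

and I don't see confident grounds for signing that inequality either way.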

2 points · kbog · 3y
What about simply growing the EA movement? That clearly seems like a more efficient way to address x-risk, and something where funding could be used more readily.
X-risk dollars -> Andrew Yang?
Is your claim that AI policy is currently talent-constrained, and having Yang as president would lead to more people working on it, thereby making it money-constrained?

No--just that there's perhaps a unique opportunity for cash to make a difference. Otherwise, it seems like orgs are struggling to spend money to make progress in AI policy. But that's just what I hear.

Can you elaborate on this?

First pass: power is good. Second pass: get practice doing things like autonomous weapons bans, build a consensus around getting countries to agree to intern... (read more)

X-risk dollars -> Andrew Yang?
Additionally, Morning Consult shows higher support than all other pollsters. The average for Steyer in early states is considerably less favorable.

Good to know.

Steyer is running ads with little competition

Really?

5 points · Michael_S · 3y
Yes. People aren't spending much money yet, because voters will mostly forget about it by the election.
X-risk dollars -> Andrew Yang?

I am in general more trusting, so I appreciate this perspective. I know he's a huge fan of Sam Harris and has historically listened to his podcast, so I imagine he's heard Sam's thoughts (and maybe Stuart Russell's thoughts) on AGI.

X-risk dollars -> Andrew Yang?

The stake of the public good in any given election is much larger than the stake of any given entity, so the correct amount for altruists to invest in an election should be much larger than for a self-interested corporation or person.
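A stylized version of that claim, with invented numbers and symbols: suppose a marginal dollar raises the preferred candidate's win probability by $\delta$, and the outcome is worth $V_{\text{firm}}$ to a self-interested firm but $V_{\text{soc}}$ to society as a whole, with $V_{\text{soc}} \gg V_{\text{firm}}$. Ignoring diminishing returns, each party should keep spending while a dollar buys more than a dollar of expected value:

$$\delta \cdot V_{\text{firm}} > 1 \quad \text{vs.} \quad \delta \cdot V_{\text{soc}} > 1,$$

so the altruistically optimal spend can exceed the self-interested one roughly in proportion to $V_{\text{soc}} / V_{\text{firm}}$.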

not that he single-handedly caused Trump's victory.

Didn't claim this.

This is naive.

Not sure what this adds.

5 points · Michael_Wiebe · 3y
Yes, you're right that altruists have a more encompassing utility function, since they focus on social instead of individual welfare. But even if altruists will invest more in elections than self-interested individuals, it doesn't follow that it's a good investment overall. Sorry for being harsh, but my honest first impression was "this makes EAs look bad to outsiders".
My current thoughts on MIRI's "highly reliable agent design" work

MIRI's current size seems to me to be approximately right for this purpose, and as far as I know MIRI staff don't think MIRI is too small to continue making steady progress.

My guess is that this intuition is relatively inelastic to MIRI's size. It might be worth trying to generate the counterfactual intuition here: would you think the same if MIRI were half its size or double its size? If that process outputs a similar intuition, it might be worth attempting to forget how many people MIRI employs in this area, and ask how many people should be working on a topic that by your est... (read more)