All of vaniver's Comments + Replies

Thoughts on whether we're living at the most influential time in history

What I'm saying is that if you believe that x-risk is 0.1%, then you think we're at least one in a million.

I think you're saying "if you believe that x-risk this century is 0.1%, then survival probability this century is 99.9%, and for total survival probability over the next trillion years to be 0.01%, there can be at most ~9200 centuries with risk that high over the next trillion years (0.999^9200 ≈ 0.0001), which means we're in (most generously) a one-in-a-million century, as a trillion years is 10 billion centuries, which divided by roughly ten thousand is a million." Does that seem right?
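The arithmetic here is easy to sanity-check directly. A minimal sketch (the 0.1% per-century risk and 0.01% total-survival figures are the assumptions from the comment, not independent estimates):

```python
import math

# Per-century survival if x-risk is 0.1% per risky century
per_century_survival = 0.999
# Target: total survival probability of 0.01% over a trillion years
total_survival_target = 0.0001

# Solve per_century_survival ** n = total_survival_target for n:
# n = log(target) / log(per-century survival)
n_risky_centuries = math.log(total_survival_target) / math.log(per_century_survival)
print(round(n_risky_centuries))  # ≈ 9206, close to the ~9200 figure above

# A trillion years is 10 billion centuries
centuries_in_trillion_years = 1e12 / 100
odds = centuries_in_trillion_years / n_risky_centuries
print(f"about 1 in {odds:,.0f}")  # roughly one in a million
```

So at most ~9,200 of the 10 billion centuries can carry 0.1% risk, which is where the "one-in-a-million century" conclusion comes from.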

Thoughts on whether we're living at the most influential time in history

Then, if the expected cost-effectiveness of the best opportunities varies substantially over time, there will be just one point in time at which your philanthropy will have the most impact, and you should try to max out your philanthropy at that time period, donating all your philanthropy at that time if you can.

Tho I note that the only way one would ever take such opportunities, if offered, is by developing a view of what sorts of opportunities are good that is sufficiently motivating to actually take action at least once every few decades.

For example, wh... (read more)

No More Pandemics: a grassroots group?
Now that the world has experienced COVID-19, everyone understands that pandemics could be bad

I found it somewhat surprising how quickly the pandemic was polarized politically; I am curious whether you expect this group to be partisan, and whether that would be a positive or negative factor.

[A related historical question: what were the political party memberships of members of environmental groups in the US across time? I would vaguely suspect that it started off more even than it is today.]

2Sanjay1y
As far as I'm aware (and it might be worth finding/doing some research to verify this?):
* The *response* to the pandemic is politicised, and more so in the US than elsewhere (or at least more so than in the UK, and probably elsewhere too)
* The view that pandemics are bad and we should prevent them if we can has bipartisan support
* Hence I think it's probably more straightforward for this group to be on the side of defeating pandemics, and not take sides politically

However, there's lots that I don't know about politics, especially in the US, so if someone knows more than me about this I'm happy to hear alternative views.
Some thoughts on the EA Munich // Robin Hanson incident
I felt confused about why I was presented with a fully general argument for something I thought I indicated I already considered.

In my original comment, I was trying to resolve the puzzle of why something would have to appear edgy instead of just having fewer filters, by pointing out the ways in which having unshared filters would lead to the appearance of edginess. [On reflection, I should've been clearer about the 'unshared' aspect of it.]

Some thoughts on the EA Munich // Robin Hanson incident
you didn't want to voice unambiguous support for the view that the comment wordings were in fact not easy to improve on given the choice of topic.

I'm afraid this sentence has too many negations for me to clearly point one way or the other, but let me try to restate it and say why I made a comment:

The mechanistic approach to avoiding offense is to keep track of the ways things you say could be interpreted negatively, and search for ways to get your point across while not allowing for any of the negative interpretations. This is a tax on saying a... (read more)

4Lukas_Gloor1y
Thanks, that makes sense to me now! The three categories are also what I pointed out in my original comment. Okay, so you cared mostly about the point about mind reading. This is a good point, but I didn't find your initial comment so helpful, because this point against mind reading didn't touch on any of the specifics of the situation. It didn't address the object-level arguments I gave: I felt confused about why I was presented with a fully general argument for something I thought I indicated I already considered.

If I read your comment as "I don't want to comment on the specific tweets, but your interpretation might be a bit hasty" – that makes perfect sense. But by itself, it felt to me like I was being strawmanned for not being aware of obvious possibilities. Similar to khorton, I had the impulse to say "What does this have to do with trolleys? Shouldn't we, if anything, talk about the specific wording of the tweets?" Because to me, phrases like "gentle, silent rape" seem obviously unnecessarily jarring even as far as twitter discussions about rape go. (And while one could try to defend this as just blunt or blithe, I think the reasoning would have to be disanalogous to your trolley or food examples, because it's not like it should be surprising to any Western person in the last two decades that rape is a particularly sensitive topic – very unlike the "changing animal food to vegan food" example you gave.)
Some thoughts on the EA Munich // Robin Hanson incident
Comparing trolley accidents to rape is pretty ridiculous for a few reasons:

I think you're missing my point; I'm not describing the scale, but the type. For example, suppose we were discussing racial prejudice, and I made an analogy to prejudice against the left-handed; it would be highly innumerate of me to claim that prejudice against the left-handed is as damaging as racial prejudice, but it might be accurate of me to say both are examples of prejudice against inborn characteristics, are perceived as unfair by the victims, and so on.

And so if y... (read more)

Some thoughts on the EA Munich // Robin Hanson incident
I'm a bit puzzled why it has to be edgy on top of just talking with fewer filters.

Presumably every filter is associated with an edge, right? Like, the 'trolley problem' is a classic of philosophy, and yet it is potentially traumatic for the victims of vehicular violence or accidents. If that's a group you don't want to upset or offend, you install a filter to catch yourself before you do, and when seeing other people say things you would've filtered out, you perceive them as 'edgy'. "Don't they know they ... (read more)

Now, I'm not saying Hanson isn't deliberately edgy; he very well might be.

If you're not saying that, then why did you make a comment? It feels like you're stating a fully general counterargument to the view that some statements are clearly worth improving, and that it matters how we say things. That seems like an unattractive view to me, and I'm saying that as someone who is really unhappy with social justice discourse.

Edit: It makes sense to give a reminder that we may sometimes jump to conclusions too quickly, and maybe you didn... (read more)

2Khorton1y
Comparing trolley accidents to rape is pretty ridiculous for a few reasons:
1. Rape is much more common than being run over by trolleys.
2. Rape is a very personal form of violence. I'm not sure anyone has ever been run over by a trolley on purpose in all of history.
3. If you're talking to a person about trolley accidents, they're very unlikely to actually run you over, no matter how cheerful they seem, because most people don't have access to trolleys. If you're talking to a man about rape and he thinks it's not a big deal, there's some chance he'll actually rape you. In some cases, the conversation includes an implicit threat.
Long-term investment fund at Founders Pledge
Benjamin Franklin, in his will, left £1,000 each to the cities of Boston and Philadelphia, with the proviso that the money should be invested for 100 years, with 25 percent of the principal to be invested for a further 100 years.

Also of note is that he gave conditions on the investments; the money was to be lent to married men under 25 who had finished an apprenticeship, with two people willing to co-sign the loan for them. So in that regard it was something like a modern microlending program, instead of just trying to maximize returns for ben... (read more)
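The power of the century-scale compounding Franklin was relying on is easy to illustrate. A sketch (the 5% annual return is an assumption chosen for illustration, not a figure from the post, and currency changes over two centuries are ignored):

```python
principal = 1_000.0  # Franklin's £1,000 bequest (per city)
rate = 0.05          # assumed annual return, for illustration only

# Value after the first 100 years of compounding
after_100y = principal * (1 + rate) ** 100

# 25% of the accumulated fund is then reinvested for a further 100 years
# (one reading of "25 percent of the principal")
after_200y = 0.25 * after_100y * (1 + rate) ** 100

print(f"after 100 years: £{after_100y:,.0f}")   # on the order of £130,000
print(f"after 200 years: £{after_200y:,.0f}")   # millions, from the 25% tranche alone
```

Even modest per-year returns, held for a century, multiply the original gift by a factor of over a hundred, which is why the loan conditions (and their effect on realized returns) matter so much to how the scheme actually performed.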

A list of good heuristics that the case for AI X-risk fails

Presumably there are two categories of heuristics, here: ones which relate to actual difficulties in discerning the ground truth, and ones which are irrelevant or stem from a misunderstanding. I think it seems bad that this list implicitly casts the heuristics as being in the latter category, and rather than linking to why each is irrelevant or a misunderstanding it does something closer to mocking the concern.

For example, I would decompose the "It's not empirically testable" heuristic into two different components. The first is something li... (read more)

4irving2y
Yes, the mocking is what bothers me. In some sense the wording of the list means that people on both sides of the question could come away feeling justified without a desire for further communication: AGI safety folk since the arguments seem quite bad, and AGI safety skeptics since they will agree that some of these heuristics can be steel-manned into a good form.
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA
I certainly don't think agents "should" try to achieve outcomes that are impossible from the problem specification itself.

I think you need to make a clearer distinction here between "outcomes that don't exist in the universe's dynamics" (like taking both boxes and receiving $1,001,000) and "outcomes that can't exist in my branch" (like there not being a bomb in the unlucky case). Because if you're operating just in the branch you find yourself in, many outcomes whose probability an FDT agent is trying ... (read more)

2RobBensinger2y
+1, I agree with all this.
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

Oh, an additional detail that I think was part of that conversation: there's only really one way to have a '0-error' state in a hierarchical controls framework, but there are potentially many consonant energy distributions that are dissonant with each other. Whether or not that's true, and whether each is individually positive valence, will be interesting to find out.

(If I had to guess, I would guess the different mutually-dissonant internally-consonant distributions correspond to things like 'moods', in a way that means they... (read more)

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

FWIW I agree with Buck's criticisms of the Symmetry Theory of Valence (both content and meta) and also think that some other ideas QRI are interested in are interesting. Our conversation on the road trip was (I think) my introduction to Connectome Specific Harmonic Waves (CSHW), for example, and that seemed promising to think about.

I vaguely recall us managing to operationalize a disagreement, let me see if I can reconstruct it:

A 'multiple drive' system, like PCT's hierarchical control system, has an easy time explaining independent des... (read more)
9MikeJohnson2y
I think this is a great description. "What happens if we seek out symmetry gradients in brain networks, but STV isn't true?" is something we've considered, and determining ground-truth is definitely tricky. I refer to this scenario as the "Symmetry Theory of Homeostatic Regulation" (mostly worth looking at the title image, no need to read the post).

I'm (hopefully) about a week away from releasing an update to some of the things we discussed in Boston, basically a unification of Friston/Carhart-Harris's work on FEP/REBUS with Atasoy's work on CSHW -- will be glad to get your thoughts when it's posted.
We Could Move $80 Million to Effective Charities, Pineapples Included

Thanks! Also, for future opportunities like this, probably the fastest person to respond will be Colm.

Against Modest Epistemology

But as I understand it, Eliezer regards himself as being able to do unusually well using the techniques he has described, and so would predict his own success in forecasting tournaments.

This is also my model of Eliezer; my point is that my thoughts on modesty / anti-modesty are mostly disconnected from whether or not Eliezer is right about his forecasting accuracy, and mostly connected to the underlying models of how modesty and anti-modesty work as epistemic positions.

How narrowly should you define the 'expert' group?

I want to repeat something to mak... (read more)

Against Modest Epistemology

I think with Eliezer's approach, superforecasters should exist, and it should be possible to be aware that you are a superforecaster. Those both seem like they would be lower probability under the modest view. Whether Eliezer personally is a superforecaster seems about as relevant as whether Tetlock is one; you don't need to be a superforecaster to study them.

I expect Eliezer to agree that a careful aggregation of superforecasters will outperform any individual superforecaster; similarly, I expect Eliezer to think that a careful aggregation of anti-modest ... (read more)

1Robert_Wiblin4y
OK so it seems like the potential areas of disagreement are:
* How much external confirmation do you need to know that you're a superforecaster (or have good judgement in general), or even the best forecaster?
* How narrowly should you define the 'expert' group?
* How often should you define who is a relevant expert based on whether you agree with them in that specific case?
* How much should you value 'wisdom of the crowd (of experts)' against the views of the one best person?
* How much to follow a preregistered process to whatever conclusion it leads to, versus change the algorithm as you go to get an answer that seems right?

We'll probably have to go through a lot of specific cases to see how much disagreement there actually is. It's possible to talk in generalities and feel you disagree, but actually be pretty close on concrete cases.

Note that it's entirely possible that non-modest contributors will do more to enhance the accuracy of a forecasting tournament because they try harder to find errors, but be less right than others' all-things-considered views, because of insufficient deference to the answer the tournament as a whole spits out. Active traders enhance market efficiency, but still lose money as a group.

As for Eliezer knowing how to make good predictions, but not being able to do it himself, that's possible (though it would raise the question of how he has gotten strong evidence that these methods work). But as I understand it, Eliezer regards himself as being able to do unusually well using the techniques he has described, and so would predict his own success in forecasting tournaments.