Emrik

1759 karma · Joined February 2021 · Norway

Bio

“In the day I would be reminded of those men and women,
Brave, setting up signals across vast distances,
Considering a nameless way of living, of almost unimagined values.”

How others can help me

I would greatly appreciate anonymous feedback, or just feedback in general. Doesn't have to be anonymous.

Comments (291)

Oh, this is excellent! I do a version of this, but I haven't paid enough attention to what I do to give it a name. "Blurting" is perfect.

I try to make sure to always notice my immediate reaction to something, so I can more reliably tell what my more sophisticated reasoning modules transform that reaction into. Almost all the search-process imbalances (eg. filtered recollections, motivated stopping, etc.) come into play during the sophistication, so it's inherently risky. But refusing to reason past the blurt is equally inadvisable.

This is interesting from a predictive-processing perspective.[1] The first thing I do when I hear someone I respect tell me their opinion is to compare that statement to my prior mental model of the world. That's the fast check. If it conflicts, I aspire to mentally blurt out that reaction to myself.

It takes longer to generate an alternative mental model (ie. sophistication) that is able to predict the world described by the other person's statement, and there's a lot more room for bias to enter via the mental equivalent of multiple comparisons. Thus, if I'm overly prone to conform, that bias will show itself after I've already blurted out "huh!" and made note of my prior. The blurt helps me avoid the failure mode of conforming and feeling like that's what I believed all along.

Blurting is a faster and more usefwl variation on writing down your predictions in advance.

  1. ^

    Speculation. I'm not very familiar with predictive processing, but the claim seems plausible to me on alternative models as well.

I disagree a little bit with the credibility of some of the examples, and want to double-click others. But regardless, I think this is a very productive train of thought and thank you for writing it up. Interesting!

And btw, if you feel like a topic of investigation "might not fit into the EA genre", and yet you feel like it could be important based on first-principles reasoning, my guess is that that's a very important lead to pursue. Reluctance to step outside the genre, and thinking that the goal is to "do EA-like things", is exactly the kind of dynamic that's likely to lead the whole community to overlook something important.

Some selected comments or posts I've written

  • Taxonomy of cheats, multiplex case analysis, worst-case alignment
  • "You never make decisions, you only ever decide between strategies"
  • My take on deference
  • Dumb
  • Quick reasons for bubbliness
  • Against blind updates
  • The Expert's Paradox, and the Funder's Paradox
  • Isthmus patterns
  • Jabber loop
  • Paradox of Expert Opinion
  • Rampant obvious errors
  • Arbital - Absorbing barrier
  • "Decoy prestige"
  • "prestige gradient"
  • Braindump and recommendations on coordination and institutional decision-making
  • Social epistemology braindump (I no longer endorse most of this, but it has patterns)

Other posts I like

  • The Goddess of Everything Else - Scott Alexander
    • “The Goddess of Cancer created you; once you were hers, but no longer. Throughout the long years I was picking away at her power. Through long generations of suffering I chiseled and chiseled. Now finally nothing is left of the nature with which she imbued you. She never again will hold sway over you or your loved ones. I am the Goddess of Everything Else and my powers are devious and subtle. I won you by pieces and hence you will all be my children. You are no longer driven to multiply conquer and kill by your nature. Go forth and do everything else, till the end of all ages.”
  • A Forum post can be short - Lizka
    • Succinctly demonstrates how often people goodhart on length or other irrelevant criteria like effort moralisation. A culture of appreciating posts for the practical value they add to you specifically would incentivise writers to pay more attention to whether they are optimising for expected usefwlness or just signalling.
  • Changing the world through slack & hobbies - Steven Byrnes
    • Unsurprisingly, there's a theme to what kind of posts I like. Posts that are about de-Goodharting ourselves.
  • Hero Licensing - Eliezer Yudkowsky
    • Stop apologising, just do the thing. People might ridicule you for believing in yourself, but just do the thing.
  • A Sketch of Good Communication - Ben Pace
    • Highlights the danger of deferring if you're trying to be an Explorer in an epistemic community.
  • Holding a Program in One's Head - Paul Graham
    • "A good programmer working intensively on his own code can hold it in his mind the way a mathematician holds a problem he's working on. Mathematicians don't answer questions by working them out on paper the way schoolchildren are taught to. They do more in their heads: they try to understand a problem space well enough that they can walk around it the way you can walk around the memory of the house you grew up in. At its best programming is the same. You hold the whole program in your head, and you can manipulate it at will.

      That's particularly valuable at the start of a project, because initially the most important thing is to be able to change what you're doing. Not just to solve the problem in a different way, but to change the problem you're solving."

I predict with high uncertainty that this post will have been very usefwl to me. Thanks!

Here's a potential missing mood: if you read/skim a post and you don't go "ugh that was a waste of time" or "wow that was worth reading"[1], you are failing to optimise your information diet and you aren't developing intuition for what/how to read.

  1. ^

    This is importantly different from going "wow that was a good/impressive post". If you're just tracking how impressed you are by what you read (or how useful you predict it is for others), you could be wasting your time on stuff you already know and/or agree with. Succinctly, you need to track whether your mind has changed--track the temporal difference.

[weirdness-filter: ur weird if you read m commnt n agree w me lol]

Doing private capabilities research seems not obviously net-bad for some subcategories of capabilities research. It constrains your expectations about how AGI will unfold, meaning you have a narrower target for your alignment ideas (incl. strategies, politics, etc.) to hit. The basic case: If an alignment researcher doesn't understand how gradient descent works, I think they're going to be less effective at alignment. I expect this to generalise for most advances they could make in their theoretical understanding of how to build intelligences. And there's no fundamental difference between learning the basics and doing novel research, as it all amounts to increased understanding in the end.
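(As a toy illustration of the kind of "basics" I mean, here's a minimal sketch of gradient descent on a one-parameter model; the data and numbers are arbitrary and nothing here is specific to AGI:)

```python
# Minimal gradient descent on a one-parameter linear model y ≈ w * x.
# Toy data and learning rate; purely illustrative.

def loss(w, data):
    # mean squared error of the prediction w * x against the target y
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    # analytic derivative of the loss with respect to w
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # roughly y = 2x
w, lr = 0.0, 0.05
for _ in range(200):
    w -= lr * grad(w, data)  # step against the gradient

print(w, loss(w, data))  # w ends up near 2
```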

That said, it would in most cases be very silly to publish about that increased understanding, and people should be disincentivised from doing so. 

(I'll delete this comment if you've read it and you want it gone. I think the above can be very bad advice to give some poorly aligned selfish researchers, but I want reasonable people to hear it.)

EA: We should never trust ourselves to do act utilitarianism, we must strictly abide by a set of virtuous principles so we don't go astray.

Also EA: It's ok to eat animals as long as you do other world-saving work. The effort and sacrifice it would take to relearn my eating patterns just isn't worth it on consequentialist grounds.


Sorry for the strawmanish meme format. I realise people have complex reasons for needing to navigate their lives the way they do, and I don't advocate aggressively trying to make other people stop eating animals. The point is just that I feel like the seemingly universal disavowal of utilitarian reasoning has been insufficiently vetted for consistency. If we claim that utilitarian reasoning can be blamed for the FTX catastrophe, then we should ask ourselves what else we should apply that lesson to; or we should recognise that FTX isn't a strong counterexample to utilitarianism, and we can still use it to make important decisions.

(I realised after I wrote this that the metaphor between brains and epistemic communities is less fruitfwl than it seems like I think, but it's still a helpfwl frame in order to understand the differences anyway, so I'm posting it here. ^^)


TL;DR: I think people should consider searching for giving opportunities in their networks, because a community that efficiently capitalises on insider information may end up doing more efficient and more varied research. There are, as you would expect, both problems and advantages to this, but it definitely seems good to encourage on the margin.

Some reasons to prefer decentralised funding and insider trading

I think people are too worried about making their donations appear justifiable to others. And what people expect will appear justifiable to others, is based on the most visibly widespread evidence they can think of.[1] It just so happens that that is also the basket of information that everyone else bases their opinions on as well. The net effect is that a lot less information gets considered in total.

Even so, there are very good reasons to defer to consensus among people who know more, not act unilaterally, and be epistemically humble. I'm not arguing that we shouldn't take these considerations into account. What I'm trying to say is that even after you've given them adequate consideration, there are separate social reasons that could make it tempting to defer, and we should keep this distinction in mind so we don't handicap ourselves just to fit in.

Consider the community from a bird's eye perspective for a moment. Imagine zooming out, and seeing EA as a single organism. Information goes in, and causal consequences go out. Now, what happens when you make most of the little humanoid neurons mimic their neighbours in proportion to how many neighbours they have doing the same thing?

What you end up with is a Matthew effect not only for ideas, but also for the bits of information that get promoted to public consciousness. Imagine ripples of information flowing in only to be suppressed at the periphery, way before they've had a chance to be adequately processed. Bits of information accumulate trust in proportion to how much trust they already have, and there are no well-coordinated checks that can reliably abort a cascade past a point.
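(A toy simulation of that dynamic, with made-up numbers: ideas gain endorsements in proportion to the endorsements they already have, while their underlying quality is never consulted.)

```python
import random

random.seed(0)

# Ten ideas start with one unit of "trust" each.
ideas = {i: 1 for i in range(10)}
quality = {i: random.random() for i in ideas}  # hidden, never looked at below

# At each step, a community member boosts an idea with probability
# proportional to the trust it already has -- a Matthew effect.
for _ in range(1000):
    pick = random.choices(list(ideas), weights=[ideas[i] for i in ideas])[0]
    ideas[pick] += 1

# Early luck, not quality, dominates the final trust distribution.
print(sorted(ideas.values(), reverse=True))
```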

To be clear, this isn't how the brain works. The brain is designed very meticulously to ensure that only the most surprising information gets promoted to universal recognition ("consciousness"). The signals that can already be predicted by established paradigms are suppressed, and novel information gets passed along with priority.[2] While it doesn't work perfectly for all things, consider just the fact that our entire perceptual field gets replaced instantly every time we turn our heads.

And because neurons have been harshly optimised for their collective performance, they show a remarkable level of competitive coordination aimed at making sure there are no informational short-circuits or redundancies.
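(A sketch of that filtering, under my own toy assumptions: each layer keeps a running prediction of its input and only passes upward the part it failed to predict.)

```python
# Each layer forwards only its prediction error ("surprise") and updates
# its prediction; well-predicted signals get suppressed. Toy numbers only.

def layer(signal, prediction, learning_rate=0.2):
    error = signal - prediction               # the surprise
    prediction = prediction + learning_rate * error
    return error, prediction                  # only the error goes upward

prediction = 0.0
for signal in [1.0, 1.0, 1.0, 1.0, 5.0, 1.0]:
    error, prediction = layer(signal, prediction)
    print(f"signal={signal:.1f}  passed upward={error:+.2f}")
# The repeated 1.0s fade towards zero; the novel 5.0 produces a large spike.
```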

Returning to the societal perspective again, what would it look like if the EA community were arranged in a similar fashion?

I think it would be a community optimised for the early detection and transmission of market-moving information--which in a finance context refers to information that would cause any reasonable investor to immediately make a decision upon hearing it. In the case where, for example, someone invests in a company because they're friends with the CEO and received private information, it's called "insider trading" and is illegal in some countries.

But it's not illegal for altruistic giving! Making funding decisions based on highly valuable information only you have access to is precisely the thing we'd want to see happening.

If, say, you have a friend who's trying to get time off from work in order to start a project, but no one's willing to fund them because they're a weird-but-brilliant dropout with no credentials, you may have insider information about their trustworthiness.  That kind of information doesn't transmit very readily, so if we insist on centralised funding mechanisms, we're unknowingly losing out on all those insider trading opportunities.

Where the architecture of the brain efficiently promotes the most novel information to consciousness for processing, EA has the problem where unusual information doesn't even pass the first layer.

(I should probably mention that there are obviously biases that come into play when evaluating people you're close to, and that could easily interfere with good judgment. It's a crucial consideration. I'm mainly presenting the case for decentralisation here, since centralisation is the default, so I urge you to keep some skepticism in mind.)


There is no way around having to make trade-offs here. One reason to prefer a central team of highly experienced grant-makers to be doing most of the funding is that they're likely to be better at evaluating impact opportunities. But this needn't matter much if they're bottlenecked by bandwidth--both in terms of having less information reach them and in terms of having less time available to analyse what does come through.[3]

On the other hand, if you believe that most of the relevant market-moving information in EA is already being captured by relevant funding bodies, then their ability to separate the wheat from the chaff may be the dominating consideration.

While I think the above considerations make a strong case for encouraging people to look for giving opportunities in their own networks, I think they apply with greater force to adopting a model like impact markets.

They're a sort of compromise between central and decentralised funding. The idea is that everyone has an incentive to fund individuals or projects where they believe they have insider information indicating that the project will show itself to be impactfwl later on. If the projects they opportunistically funded at an early stage do end up producing a lot of impact, a central funding body rewards the maverick funder by "purchasing the impact" second-hand.

Once a system like that is up and running, people can reliably expect the retroactive funders to make it worth their while to search for promising projects. And when people are incentivised to locate and fund projects at their earliest bottlenecks, the community could end up capitalising on a lot more (insider) information than would be possible if everything had to be evaluated centrally.
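(Toy numbers, mine and purely illustrative, for why the incentive works out for the maverick funder:)

```python
# Illustrative payoff for an insider funder under a retroactive-funding scheme.
seed_grant = 5_000        # early funding for a friend's project
p_success = 0.4           # insider estimate that the project pans out
retro_purchase = 20_000   # what the retro funder would pay for the impact

expected_value = p_success * retro_purchase - seed_grant
print(expected_value)     # 3000 > 0, so hunting for insider opportunities pays
```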

(There are of course, more complexities to this, and you can check out the previous discussions on the forum.)

 

  1. ^

    This doesn't necessarily mean that people defer to the most popular beliefs, but rather that even if they do their own thinking, they're still reluctant to use information that other people don't have access to, so it amounts to nearly the same thing.

  2. ^

    This is sometimes called predictive processing. Sensory information comes in and gets passed along through increasingly conceptual layers. Higher-level layers are successively trying to anticipate the information coming in from below, and if they succeed, they just aren't interested in passing it along.

    (Imagine if it were the other way around, and neurons were increasingly shy to pass along information in proportion to how confused or surprised they were. What a brain that would be!)

  3. ^

    As an extreme example of how bad this can get, an Australian study on medical research funding noted that the average grant proposal is "between 80 and 120 pages long and panel members are expected to read and rank between 50 and 100 proposals. It is optimistic to expect accurate judgements in this sea of excessive information." -- (Herbert et al., 2013)

    Luckily it's nowhere near as bad for EA research, but consider the Australian case as a clear example of how a funding process can be undeniably and extremely misaligned with the goal of producing good research.

Hm, I think you may be reading the comment from a perspective of "what actions do the symbols refer to, and what would happen if readers did that?" as opposed to "what are the symbols going to cause readers to do?"[1]

The kinds of people who are able to distinguish adequate vs inadequate good judgment shouldn't be encouraged to defer to conventional signals of expertise. But those are also disproportionately the people who, instead of feeling like deferring to Eliezer's comment, will respond "I agree, but..."

  1. ^

    For lack of a better term, and because there should be a term for it: Dan Sperber calls this the "cognitive causal chain", and contrasts it with the confabulated narratives we often have for what we do. I think it summons up the right image.

    When you read something, aspire to always infer what people intend based on the causal chains that led them to write that. Well, no. Not quite. Instead, aspire to always entertain the possibility that the author's consciously intended meaning may be inferred from what the symbols will cause readers to do. Well, I mean something along these lines. The point is that if you do this, you might discover a genuine optimiser in the wild. : )

Ideally, EigenTrust or something similar should be able to help with regranting once it takes off, no? : )
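(For anyone unfamiliar with EigenTrust: the core of it is power iteration over a normalised trust graph. A minimal sketch with made-up trust scores, not a real implementation:)

```python
import numpy as np

# Hypothetical local trust scores: row i says how much member i trusts each peer.
local_trust = np.array([
    [0.0, 0.7, 0.3],
    [0.5, 0.0, 0.5],
    [0.9, 0.1, 0.0],
])

# Normalise rows so each member's outgoing trust sums to 1.
C = local_trust / local_trust.sum(axis=1, keepdims=True)

# Pre-trusted distribution (e.g. a few vetted grantmakers) plus damping.
p = np.array([1.0, 0.0, 0.0])
alpha = 0.15

# Power iteration: trust propagates along the graph until it converges.
t = p.copy()
for _ in range(100):
    t_next = (1 - alpha) * C.T @ t + alpha * p
    if np.allclose(t_next, t, atol=1e-10):
        break
    t = t_next

print(t)  # global trust scores one could use to weight regranting decisions
```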

Really intrigued by the idea of debates! I was briefly reluctant about the concept at first, because what I associate with "debates" is usually from politics, religious disputes, debating contests, etc., where the debaters are usually lacking so much of the essential internal epistemic infrastructure that the debating format often just makes things worse. Rambly, before I head off to bed:

  • Conditional on it being good for EA to have more of a culture for debating, how would we go about practically bringing that about?
    • I wonder if EA Global features debates. I haven't seen any. It's mostly just people agreeing with each other and perhaps adding some nuance.
    • You don't need to have people hostile towards each other in order for it to qualify as "debate", but I do think one of the key benefits of debates is that the disagreement is visible.
    • For one, it primes the debaters to home in on disagreements, whereas perhaps the EA in-group is overly primed to find agreement with each other in order to be nice.
    • Making disagreements more visible will hopefwly dispel the illusion that EA as a paradigm is "mostly settled", and get people to question assumptions. This isn't always the best course of action, but I think it's still very needed on the margin, and I could get into why if asked.
    • If the debate (and the mutually-agreed-upon mindset of trying to find each other's weakest points) is handled well, it can make onlookers feel like head-on disagreeing is more ok. I think we're mostly a nice community, reluctant to step on toes, so if we don't see any real disagreements, we might start to feel like the absence of disagreement is the polite thing to do.
  • A downside risk is that debating culture is often steeped in the "world of arguments", or as Nate Soares put it: "The world is not made of arguments. Think not "which of these arguments, for these two opposing sides, is more compelling? And how reliable is compellingness?" Think instead of the objects the arguments discuss, and let the arguments guide your thoughts about them."
  • We shouldn't be adopting mainstream debating norms; they won't do anything for us. What I'm excited about is the idea of making spaces for good-natured visible disagreements where people are encouraged to attack each other's weakest points. I don't think that mindset comes about naturally, so it could make sense to deliberately make room for it.
  • Also, if you want people to debate you, maybe you should make a shortlist of the top things you feel would be productive to debate you on. : )