Emrik

In the day I would be reminded of those men and women,

Brave, setting up signals across vast distances,

Considering a nameless way of living, of almost unimagined values.

Comments

Emrik's Shortform

FWIW, I think personal information is very relevant to giving decisions, but I also think the meme "EA is no longer funding-constrained" perhaps lacks nuance that's especially relevant for people with values or perspectives that differ substantially from major funders.

Relevant: https://forum.effectivealtruism.org/posts/GFkzLx7uKSK8zaBE3/we-need-more-nuance-regarding-funding-gaps

Apply to attend an EA conference!

"and I was surprised to find I had ideas and perspectives that were unique/might not have surfaced in conversation had I not been there."

I think this is one of the reasons EAG (or other ways of informally conversing with regular EAs on EA-related things) can be extremely valuable for people. It lets you get epistemic and emotional feedback on how capable you are compared to a random EAG-sampled slice of the community. People who might have been underconfident (like you) update towards thinking they might be usefwl. That said, I think you're unusually capable, and that a lot of other people will update towards feeling like they're too dumb for EA.

But the value of increased confidence in people like you seems higher than the possible harm caused by people whose confidence drops. And there are reasons to expect online EA material to be a lot more intimidating, because it's heavily filtered for high status (incl. smart), so exposure to low-filter informal conversations at EAG probably causes higher confidence in people who haven't had a lot of low-filter informal exposure yet (so if that describes you, reader, you should definitely consider going). Personally, I have a history of feeling like everything I discover and learn is just a form of "catching up" to what everyone else already knows, so talking to people about my ideas has increased my confidence a lot.

Don't Be Bycatch

I'm really sorry I downvoted... I love the tone, I love the intention, but I worry about the message. Yes, less ambition and more love would probably make us suffer less. But I would rather try to encourage ambition by emphasising love for the ambitious failures. I'm trying to be ambitious, and I want to know that I can spiritually fall back on goodwill from the community because we all know we couldn't achieve anything without people willing to risk failing.

Deferring

Some (controversial) reasons I'm surprisingly optimistic about the community:

1) It's already bubbly, both geographically and across social networks, and it explores various paradigms.

2) The social status gradient is aligned with deference at the lower levels, and differentiation at the higher levels (to some extent). And as long as testimonial evidence/deference flows downwards (where they're likely to improve opinions), and the top-level tries to avoid conforming, there's a status push towards exploration and confidence in independent impressions.

3) As long as deference is mostly unidirectional (downwards in social status) there are fewer loops/information cascades (less double-counting of evidence), and epistemic bubbles are harder to form and easier to pop (from above). And social status isn't that hard to attain for conscientious smart people, I think, so smart people aren't stuck at the bottom where their opinions are under-utilised? Idk.

Probably more should go here, but I forget. The community could definitely be better, and it's worth exploring how to optimise it (any clever norms we can spread about trust functions?), so I'm not sure we disagree except you happen to look like the grumpy one because I started the chain by speaking optimistically. :3

Deferring

Thanks<3

Well, I've been thinking about these things precisely in order to make top-level posts, but then my priorities shifted because I ended up thinking that the EA epistemic community was doing fine without my interventions, and all that remained in my toolkit was cool ideas that weren't necessarily usefwl. I might reconsider it. :p

Keep in mind that in my own framework, I'm an Explorer, not an Expert. Not safe to defer to.

Deferring

This question is studied in veritistic social epistemology. I recommend playing around with the Laputa network epistemology simulation to get some practical model feedback to notice how it's similar and dissimilar to your model of how the real world community behaves. Here are some of my independent impressions on the topic:

  1. Distinguish between testimonial and technical evidence. The former is what you take on trust (epistemic deference, Aumann-agreement stuff), and the latter is everything else (argument, observation, math).
  2. Under certain conditions, there's a trade-off between the accuracy of crowdsourced estimates (e.g. surveys on AI risk) and the widespread availability of decision-relevant current best guesses (cf. simulations of the "Zollman effect").
  3. Personally, I think simulations plausibly underestimate the effect. Think of it like doing Monte-Carlo Tree Search over ideaspace, where we want to have a certain level of randomness to decide which branches of the tree to go down. And we arguably can't achieve that randomness if we get stuck in certain paradigms due to the Einstellung effect (sorry for jargon). Communicating paradigms can be destructive of underdeveloped paradigms.
  4. To increase the breadth of exploration over ideaspace, we can encourage "community bubbliness" among researchers (aka "small-world network"), where communication inside bubbles is high, and communication between them is limited. There's a trade-off between the speed of research progress (for any given paradigm) and the breadth and rigour of the progress. Your preference for how to make this trade-off could depend on your view of AI timelines.
  5. How much you should update on someone's testimony depends on your trust function relative to that person. Understanding trust functions is one of the most underappreciated leverage points for improving epistemic communities and "raising sanity waterlines", imo.
  6. If a community has a habit of updating trust functions naively (e.g. increase or decrease your trust towards someone based on whether they give you confirmatory testimonies), it can lead to premature convergence and polarisation of group beliefs. And on a personal level, it can indefinitely lock you out of areas in ideaspace/branches on the ideatree you could have benefited from exploring. [Laputa example] [example 2]
  7. Committing to only updating trust functions based on direct evidence of reasoning ability and sincerity, and never on object-level beliefs, can be a usefwl start. But all evidence is entangled, and personally, I'm ok with locking myself out of some areas in ideaspace because I'm sufficiently pessimistic about there being any value there. So I will use some object-level beliefs as evidence of reasoning-ability and sincerity and therefore use them to update my trust functions.
  8. Deferring to academic research can have the bandwidth problem[1] you're talking about, and this is especially a problem when the research has been optimised for non-EA relevant criteria. Holden's History is a good example: he shouldn't defer to expert historians on questions related to welfare throughout history, because most academics are optimising their expertise for entirely different things.
  9. Deferring to experts can also be a problem when experts have been selected for their beliefs to some extent. This is most likely true of experts on existential risk.
  10. Deferring to community members you think know better than you is fairly harmless if no one defers to you in turn. I think a healthy epistemic community has roles for people to play for each area of expertise.
    1. Decision-maker: If you make really high-stakes decisions, you should use all the evidence you can, testimonial or otherwise, in order to make better decisions.
    2. Expert: Your role is to be safe to defer to. You realise that crowdsourced expert beliefs provide more value to the community if you try to maintain the purity of your independent impressions, so you focus on technical evidence and you're very reluctant to update on testimonial evidence even from other experts.
    3. Explorer: If most of your contributions are novel ideas, perhaps consider taking risks by exploring neglected areas of ideaspace, at the cost of potentially making your independent impressions less accurate on average compared to the wisdom of the crowd.
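To make the naive-trust dynamic in points 5–6 concrete, here is a minimal toy simulation. This is my own sketch, not Laputa, and every parameter (learning rate, trust step, agreement threshold) is invented for illustration: agents move toward each other's reported credences in proportion to trust, and under naive updating, trust rises on confirmatory reports and falls otherwise.

```python
def simulate(credences, rounds=100, naive_trust=True):
    """Toy model (not Laputa; all parameters invented for illustration).

    Each round one agent reports their credence; every other agent moves
    toward the report in proportion to how much they trust the speaker.
    With naive_trust=True, trust rises when the report roughly matches
    the listener's own belief and falls when it doesn't.
    """
    beliefs = list(credences)
    n = len(beliefs)
    trust = [[0.5] * n for _ in range(n)]  # trust[i][s]: i's trust in speaker s

    for r in range(rounds):
        s = r % n  # round-robin speaker
        report = beliefs[s]
        for i in range(n):
            if i == s:
                continue
            # Move toward the report, weighted by trust in the speaker.
            beliefs[i] += 0.1 * trust[i][s] * (report - beliefs[i])
            if naive_trust:
                # Confirmatory testimony raises trust; disconfirming lowers it.
                step = 0.2 if abs(report - beliefs[i]) < 0.3 else -0.2
                trust[i][s] = min(1.0, max(0.0, trust[i][s] + step))
    return beliefs

start = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]
polarized = simulate(start, naive_trust=True)
consensus = simulate(start, naive_trust=False)
```

With fixed trust the six agents converge toward a single consensus; with naive trust updating, the two initial camps quickly stop trusting each other and stay polarized — a cartoon of the premature-convergence/polarization failure mode, and of how naive updating can lock you out of regions of ideaspace.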

Honestly, my take on the EA community is that it's surprisingly healthy. It wouldn't be terrible if EA kept doing whatever it's doing right now. I think it ranks unreasonably high in the possible ways of arranging epistemic communities. :p

  1. ^

    I like this term for it! It's better than calling it the "Daddy-is-a-doctor problem".

Sort forum posts by: Occlumency (Old & Upvoted)

Oh. It does mitigate most of the problem as far as I can tell. Good point Oo

Sort forum posts by: Occlumency (Old & Upvoted)

Oh, this is wonderfwl. But to be clear, Occlumency wouldn't be the front page. It would be one of several ways to sort posts when you go to /all posts. Oldie goldies is a great idea for the frontpage, though!

Sort forum posts by: Occlumency (Old & Upvoted)

I have no idea how feasible it is. But I made this post because I personally would like to search for posts like that to patch the most important missing holes in my EA Forum knowledge. Thanks for all the forum work you've done, the result is already amazing! <3

EA Forum feature suggestion thread
  1. Add a sorting option for Occlumency so people can find the posts with the most enduring value historically (sorting by total karma doesn't do it, because the influx of new forum users means a sharply increasing share of karma goes to newer posts).
  2. Add a tag for "outdated" that people can vote up or down, so that outdated but highly upvoted past posts don't continually mislead people (e.g. based on research that failed to replicate). I can't think of any posts atm, but if you can think of any, please mark them.
  3. Consider hiding authorship and karma for posts 24 hours after publication to decrease how sensitive final karma is to slight variations in initial conditions that are amplified by information cascades. I don't actually advocate doing this, I just recommend considering it to see if it makes sense to people who could know better. My intuition is that it's not worth the cost.
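One way the Occlumency suggestion could be sketched is as an age-normalized karma score. Everything here is hypothetical: the forum's actual ranking code isn't public to me, and the yearly karma-inflation rate is an invented placeholder, not a measured figure.

```python
def occlumency_score(karma, age_years, growth_per_year=1.5):
    """Hypothetical sketch: scale raw karma by an assumed karma-inflation rate.

    Older posts earned their karma when fewer voters were around, so
    multiply by growth_per_year ** age_years to put old and new posts
    on a comparable footing. The 1.5x/year rate is purely illustrative.
    """
    return karma * growth_per_year ** age_years

# Toy comparison: (title, raw karma, age in years)
posts = [("2015 classic", 80, 7.0), ("last week's hit", 200, 0.02)]
ranked = sorted(posts, key=lambda p: occlumency_score(p[1], p[2]), reverse=True)
```

Under this (assumed) inflation rate, the old 80-karma post outranks the new 200-karma post, which is the behavior the sorting option is after; the real rate could instead be estimated from median karma per cohort of posts.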