All of Emrik's Comments + Replies

"Two-factor" voting ("two dimensional": karma, agreement) for EA forum?

Big support!

  1. By making agreement a separate axis, people will feel safer upvoting something for quality/novelty/appreciation with less of a risk that it's confounded with agreement. Unpopular opinions that people still found enlightening should get marginally more karma. And we should be optimising for increased exposure to information that people can update on in either direction, rather than for exposure to what people agree with.[1]
  2. We now have an opinion poll included for every comment/post. This just seems like a vast store of usefwl-but-imperfect infor
... (read more)
Get-Out-Of-Hell-Free Necklace

I really, really like this idea. Do you know anyone (including yourself) who's made something like this? Perhaps even including a practical guide (e.g. how to store them, expiration dates for various drugs, where to get them)?

Having ketamine at hand could be really important for me, because I sometimes feel very strongly suicidal but still goal-aware enough to take actions to inhibit myself. It would also just be usefwl to have acute pain medication at hand for all sorts of reasons.

But it could use a more professional name if the idea is to take off. Something like "A Cute Necklace", indicating that it's for acute situations. Get it? This is a great marketing strategy, pwomise. :3

Don't Over-Optimize Things

If you think of thinking as generating a bunch of a priori datapoints (your thoughts) and trying to find a model that fits those data, we can use this to classify some overthinking failure-modes. These classes may overlap somewhat.

  1. You overfit your model to the datapoints because you underestimate their variance (regularization failure). (A toy sketch of this failure mode follows the excerpt.)
  2. Your datapoints may not compare very well to the real-world thing you're trying to optimize, so by underestimating their bias you may make your model less generalizable outside the training distribution (distribution mismatch).
  3. If
... (read more)
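A minimal sketch of failure mode 1 (my own construction, not from the original comment): fit a noisy set of "thought" datapoints with a modest model and an over-flexible one, and compare how each generalizes to the underlying truth.

```python
# A toy illustration of failure mode 1: treating noisy "thoughts" as
# exact datapoints and fitting them too closely. All numbers are
# arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def truth(x):
    # The real-world quantity being optimized is simple (linear) here.
    return 2 * x + 1

x_train = np.linspace(0, 1, 10)
y_train = truth(x_train) + rng.normal(scale=0.5, size=x_train.size)  # noisy thoughts

x_test = np.linspace(0, 1, 100)
y_test = truth(x_test)  # what generalization is measured against

for degree in (1, 9):  # modest model vs. over-flexible model
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

# The degree-9 fit matches the noisy datapoints almost exactly (tiny
# train error) yet tracks the underlying truth worse (larger test
# error): underestimating the variance of your thoughts -> overfitting.
```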
Depression and psychedelics - an anonymous blog proposal

I'd be very interested in a practical, informed, concise, and illegal guide for how to acquire and use psychedelics for depression. It could fit in the EA forum, but idk if they'd appreciate the "optics" (how it makes the EA forum look). You could try. LessWrong is less concerned about optics, however, and I think a lot of people would appreciate it.

Mental support for EA partners?

In my experience, EA is a somewhat dangerous philosophy because it's emotionally hard to keep one's eyes open to the problems of the world, while understanding what's possible to do about it, while also trying to understand one's own limitations. So mental health is something EAs struggle with a lot, but I think there are some misunderstandings that make it worse.

  1. Understand that, yes, indeed, we live in triage every second of every day. That's just unfortunately the world we live in.
  2. But being good does not mean you have to try to suffer in accordance with
... (read more)
1Hedgehog9d
Thank you - "it's emotionally hard to keep one's eyes open to the problems of the world, while understanding what's possible to do about it, while also trying to understand one's own limitations." This might be exactly what is underlying the problem: it is hard enough as an individual to find the balance between ambitions and attention to pressing issues, knowing that it is hard to make some difference or change, no matter how hard you try. I love my spouse for exactly that (among many other things!) - which makes it even more difficult for me to weigh in with other perspectives, or to even suggest that we leave the EA professional field outside our front door.
Product Managers: the EA Forum Needs You

I'm generally in favour of subdivisions, but there are many ways of doing it. E.g. you could have literal subforums, or just have more ways of sorting things. One idea is to split karma into subcategories. Get rid of karma as an indicator of "overall quality" of posts, and instead split into something like "quality_1", "quality_2", "quality_3", and have buttons for each category. The qualities could be any of "novelty", "altruism", "community", "urgent", "concise", etc. (A data-structure sketch follows the excerpt.)

The point is that karma can only capture something like the weighted average of the various... (read more)
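For concreteness, here's a minimal sketch of the proposal as a data structure (my own illustration using the axis names from the comment; nothing here reflects actual Forum internals):

```python
# Sketch: karma split into named quality axes instead of one scalar.
# Axis names are the examples from the comment; everything else is a
# hypothetical illustration, not Forum code.
from dataclasses import dataclass, field

@dataclass
class PostVotes:
    axes: dict = field(default_factory=lambda: {
        "novelty": 0, "altruism": 0, "community": 0, "urgent": 0, "concise": 0,
    })

    def vote(self, axis: str, strength: int = 1) -> None:
        self.axes[axis] += strength

    def collapsed_karma(self, weights: dict) -> float:
        # A single karma number is (at best) a fixed weighted average of
        # the axes -- the information the split would stop throwing away.
        return sum(weights[a] * v for a, v in self.axes.items())

post = PostVotes()
post.vote("novelty")
post.vote("novelty")
post.vote("concise")
print(post.axes)
print(post.collapsed_karma({a: 1 / 5 for a in post.axes}))  # 0.6
```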

6Ben_West8d
Thanks! LessWrong is currently experimenting with [https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards#e4x24Sp224NjMFmtm] multidimensional voting; if you haven't already, I would suggest trying that out and giving them feedback.
Impact markets may incentivize predictably net-negative projects

Aye, I updated. I was kinda dumb. The magical speculation model is probably not worth going for when end-buyers seem within reach.

2MakoYass10d
I think there's an argument for the thing you were saying, though... Something like... If one marketplace forbids most foundational AI public works, then another marketplace will pop up with a different negative externality estimation process, and it won't go away, and most charities and government funders still aren't EA and don't care about undiscounted expected utility, so there's a very real risk that that marketplace would become the largest one. I guess there might not be many people who are charitably inclined, and who could understand, believe in, and adopt impact markets, but also don't believe in tail risks. There are lots of people who do one of those things, but I'm not sure there are any who do all.
Impact markets may incentivize predictably net-negative projects

But afaik the theory of change of this project doesn't rely on altruistic "end buyers", it relies on profit-motivated speculation? At least, the aim is to make it work even in the worst-case scenario where traders are purely motivated by profit, and still have the trades generate altruistic value. Correct me if I'm wrong.

Update: If it wasn't clear, I was wrong. :p

4MakoYass11d
There might be a market for that sort of ultimately valueless token now (or several months ago? I haven't been following the NFT stuff), I'm not sure there will be for long.
8Linch11d
My understanding is that without altruistic end-buyers, the intrinsic value of impact certificates becomes zero and it's entirely a confidence game.
Impact markets may incentivize predictably net-negative projects

I don't think such a rule has a chance of surviving if impact markets take off?

  1. Added complexity to the norms for trading needs to pay for itself to withstand friction or else decay to its most intuitive equilibrium.
    1. Or the norm for punishing defectors needs to pay for itself in order to stay in equilibrium.
    2. Or someone needs to pay the cost of punishing defectors out of pocket for altruistic reasons.
  2. Once a collateral-charging market takes off, someone could just start up an exchange that doesn't demand collateral, and instead just charges a nominal fee that
... (read more)
4MakoYass11d
Traders would adopt a competitor without negative-externality mechanisms, but charities wouldn't, so there would be no end buyers there. I wouldn't expect that kind of vicious amoral competitive pressure between platforms to play out.
Stuff I buy and use: a listicle to boost your consumer surplus and productivity

So, based on my own understanding of the model here, wouldn't it make more sense to take ~1g/1g of each, considering diminishing returns for marginally more of each?

On the other hand, maybe EPA and DHA share an enzyme/receptor/pathway through the BBB (blood-brain barrier; or a shared bottleneck elsewhere) such that it's the ratio that determines how much of each actually gets through. In that case, we'd see inversely correlated absorption after a shared bottleneck is hit.

This study says

Unesterified DHA freely passes the BBB [39,61], and it appears that the... (read more)
EA Forum feature suggestion thread

A page for current contests/prizes, just like there's a page for events. Been quite a few of them lately, and they seem to (anecdotally) generate quite a bit of interest for writing usefwl things. 

The ones I know about:

  1. OpenPhil's Cause Exploration Prize
  2. EA Criticism Contest
  3. Retroactive Funding Contest
  4. Clearer Thinking's Regranting Program
  5. New Blog Prize

Quite a few on LessWrong that recently ended too. I expect there are more that I just haven't seen.

Oh, there's a Topic for it. Another thing I didn't have the bell set to the right colour on. Black! But uh,... (read more)

2JP Addison9d
Thanks, I'll make a note to think about ways to make the Topic more discoverable.
EA Forum feature suggestion thread

Oh. Yes, that would capture most of the value. I had subscribed to topics before, but I hadn't clicked the bell. It's supposed to be dark if I want it to send me emails, right?

Thanks!

3JP Addison17d
Yep
Steering AI to care for animals, and soon

Love this. It's one of the things on my "possible questions to think about at some point" list. My motivation would be

  1. Try to figure out what specific animals care about. (A simple sanity check here is to try to figure out what a human cares about, which is hard enough. Try expanding this question to humans from different cultures, and it quickly gets more and more complicated.)
  2. Try to figure out how I'm figuring out what animals care about. This is the primary question, because we want to generalize the strategies for helping beings that care about diffe
... (read more)
Steering AI to care for animals, and soon

This is one of the reasons I care about AI in the first place, and it's a relief to see someone talking about it. I'd love to see research on the question: "Conditional on the AI alignment problem being 'solved' to some extent, what happens to animals in the hundred years after that?"

Some butterfly considerations:

  1. How much does it matter for the future of animal welfare whether current AI researchers care about animals?
    1. Should responsible animal advocates consider trying hard to become AI researchers?
    2. If by magic we 'solve' AI by making it corrigible-to-a-c
... (read more)

So if an AI being aligned means that it cares about animals to the extent humans do, it could still be unaligned with respect to the animals' own values to the extent humans are mistaken about them (which we most certainly are).

I very much agree with this. This will actually be one of the topics I will research in the next 12 months, with Peter Singer.

EA Forum feature suggestion thread

An option to subscribe (notifications on email or otherwise) to search terms.

Currently I'm hesitant to even glance at the Frontpage because there are so many potentially interesting things I would eagerly read and get nerdsniped by. So looking at it predictably wastes my time when I know I should (for now) be concentrating on the topics I'm currently focusing on. But I do want to catch the forum post I'm most likely to benefit from. Hence I want to be able to customize what I get sent by email (or the bell top-right).

This is probably a better way to match ... (read more)

3JP Addison17d
Would you like to get notified of all posts that get tagged with some topic [https://forum.effectivealtruism.org/topics/all]? That might be the right way to get what you want here. You can do so by going to a topic, Moral Philosophy [https://forum.effectivealtruism.org/topics/moral-philosophy] say, subscribing to the topic and choosing to be notified.
Deference Culture in EA

I think there are several things wrong with the Equal Weight View, but I think this is the easiest way to see it:

Let's say I have a credence P(H|E₁) which I updated from a prior of P(H). Now I meet someone who A) I trust to be rational as much as myself, and B) I know started with the same prior as me, and C) I know cannot have seen the evidence that I have seen, and D) I know has updated on evidence independent of evidence I have seen.

They say P(H|E₂).

Then I can infer that they updated from P(H) to P(H|E₂) by multiplyi... (read more)
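A sketch of the inference the truncated comment seems to be setting up, using the odds form of Bayes (the numbers below are illustrative assumptions, not from the comment): under assumptions B-D, each posterior implies a likelihood ratio against the shared prior, and independent likelihood ratios multiply, so the rational aggregate generally lies outside the straight average that the Equal Weight View prescribes.

```python
# Sketch: pooling two independent updates via the odds form of Bayes,
# versus the Equal Weight View's straight averaging. The numbers are
# illustrative assumptions, not taken from the original comment.

def odds(p: float) -> float:
    return p / (1 - p)

def prob(o: float) -> float:
    return o / (1 + o)

prior = 0.5   # shared prior P(H) (assumption B)
mine = 0.9    # my posterior P(H|E1)
theirs = 0.8  # their posterior P(H|E2), from independent evidence (C, D)

# Each posterior implies a likelihood ratio relative to the shared prior.
lr_mine = odds(mine) / odds(prior)
lr_theirs = odds(theirs) / odds(prior)

# Independent evidence: likelihood ratios multiply onto the prior odds.
pooled = prob(odds(prior) * lr_mine * lr_theirs)

print(f"Equal Weight average: {(mine + theirs) / 2:.3f}")  # 0.850
print(f"Bayesian pooling:     {pooled:.3f}")               # ~0.973
```

Note that the pooled credence is more extreme than either individual credence, whereas equal-weight averaging always lands between them.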

Deference Culture in EA

Here are choice parts of my model of deference:

  1. Whether you should defer or not depends not only on your estimation of relative expertise but also on what kind of role you want to fill in the community, in order to increase the altruistic impact of the community.  I call it role-based social epistemology, and I really should write it longly at some point.
  2. You can think of the roles as occupying different points on the production possibilities frontier for the explore-exploit trade-off. If you think of rationality as an individual project, you might reas
... (read more)
4tamgent23d
I found this to be an interesting way to think about this that I hadn't considered before - thanks for taking the time to write it up.
Sort forum posts by: Occlumency (Old & Upvoted)

Looks like they updated to add something similar to this. ^^

Top (Inflation Adjusted): Posts with the highest karma relative to those posted around the same time.

3JP Addison5d
I thought I had already written this, but FYI this post was counterfactually responsible for the feature being implemented. (The idea had occurred to me already, but the timing was good to suggest this soon before a slew of work trials.)
Should large EA nonprofits consider splitting?

Not going to make any recommendation about splitting vs not splitting in any practical cases, since there are many tradeoffs here, but I think the arguments are interesting! I like the idea of smaller competitive units being more efficient in terms of finding the best fit for each role.

If you maximise the sum of two simultaneous dice rolls, it's going to take more rolls on average to reach a sum of at least some threshold compared to if you were allowed to roll each die separately. For the latter case, if you roll a high number on the first die, you... (read more)
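A quick simulation of the dice analogy (the threshold below is an arbitrary stand-in for the value lost from the excerpt):

```python
# Simulating the dice analogy: re-rolling both dice together vs. being
# allowed to keep each die separately. THRESHOLD is an arbitrary
# stand-in for the value lost from the excerpt (must be <= 12).
import random

TRIALS = 100_000
THRESHOLD = 11

def rolls_together() -> int:
    """Re-roll both dice every attempt until the pair sums to THRESHOLD."""
    rolls = 0
    while True:
        rolls += 2
        if random.randint(1, 6) + random.randint(1, 6) >= THRESHOLD:
            return rolls

def rolls_separately() -> int:
    """Keep each die's best value; only re-roll dice that can still improve."""
    best = [0, 0]
    rolls = 0
    while best[0] + best[1] < THRESHOLD:
        for i in range(2):
            if best[i] < 6:
                rolls += 1
                best[i] = max(best[i], random.randint(1, 6))
    return rolls

print("together:  ", sum(rolls_together() for _ in range(TRIALS)) / TRIALS)
print("separately:", sum(rolls_separately() for _ in range(TRIALS)) / TRIALS)
```

The "separately" strategy reaches the threshold in materially fewer rolls, matching the intuition that independently optimizable units hit a joint target faster than units that must be re-rolled together.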

Notes on impostor syndrome

This is excellent. Personally, (3) does everything for me. I don't need to think I'm especially clever if I think I'm ok being dumb. I'm not causing harm if I express my thoughts, as long as I give people the opportunity to ignore or reject me if they think I don't actually have any value to offer them. Here are some assorted personal notes on how being dumb is ok, so you don't need to be smart in order not to worry about it.

Exhibit A: Be conspicuously dumb as an act of altruism!

It must be ok to be dumber than average in a community, otherwise it will iter... (read more)
New Harvest, the nonprofit nucleus of cellular agriculture, is in crisis. Emergency town hall on 6/10.

The funding paradox: The more we trust EA fund-managers to make good decisions, the less inclined we are to think anyone publicly asking for money is worth giving money to. After all, the wise fund-managers would have funded them already if they were good.

It's related to the Expert's Paradox: An expert who's able to discern other experts should (on first-order) be more inclined to update on their signals because they have justified confidence in their signals being good. But if experts start updating strongly on each other's signals, now suddenly the signa... (read more)

1Quinn McHugh (he/him)1mo
Hi Emrik, My apologies. It appears the Zoom link was not copied over from our other events! We'll be sure to double check this for any future online happenings. Thanks for flagging.
Emrik's Shortform

FWIW, I think personal information is very relevant to giving decisions, but I also think the meme "EA is no longer funding-constrained" perhaps lacks nuance that's especially relevant for people with values or perspectives that differ substantially from major funders.

Relevant: https://forum.effectivealtruism.org/posts/GFkzLx7uKSK8zaBE3/we-need-more-nuance-regarding-funding-gaps

Apply to attend an EA conference!

"and I was surprised to find I had ideas and perspectives that were unique/might not have surfaced in conversation had I not been there."

I think this is one of the reasons EAG (or other ways of informally conversing with regular EAs on EA-related things) can be extremely valuable for people. It lets you get epistemic and emotional feedback on how capable you are compared to a random EAG-sampled slice of the community. People who might have been underconfident (like you) update towards thinking they might be usefwl. That said, I think you're unusually capab... (read more)

Don't Be Bycatch

I'm really sorry I downvoted... I love the tone, I love the intention, but I worry about the message. Yes, less ambition and more love would probably make us suffer less. But I would rather try to encourage ambition by emphasising love for the ambitious failures. I'm trying to be ambitious, and I want to know that I can spiritually fall back on goodwill from the community because we all know we couldn't achieve anything without people willing to risk failing.

Deferring

Some (controversial) reasons I'm surprisingly optimistic about the community:

1) It's already geographically and social-network bubbly and explores various paradigms.

2) The social status gradient is aligned with deference at the lower levels, and differentiation at the higher levels (to some extent). And as long as testimonial evidence/deference flows downwards (where they're likely to improve opinions), and the top-level tries to avoid conforming, there's a status push towards exploration and confidence in independent impressions.

3) As long as deference is... (read more)

Deferring

Thanks<3

Well, I've been thinking about these things precisely in order to make top-level posts, but then my priorities shifted because I ended up thinking that the EA epistemic community was doing fine without my interventions,  and all that remained in my toolkit was cool ideas that weren't necessarily usefwl. I might reconsider it. :p 

Keep in mind that in my own framework, I'm an Explorer, not an Expert. Not safe to defer to.

8Owen Cotton-Barratt2mo
On my impressions: relative to most epistemic communities I think EA is doing pretty well. Relative to a hypothetical ideal I think we've got a way to go. And I think the thing is good enough to be worth spending perfectionist attention on trying to make excellent.
Deferring

This question is studied in veritistic social epistemology. I recommend playing around with the Laputa network epistemology simulation to get some practical model feedback to notice how it's similar and dissimilar to your model of how the real world community behaves. Here are some of my independent impressions on the topic:

  1. Distinguish between testimonial and technical evidence. The former is what you take on trust (epistemic deference, Aumann-agreement stuff), and the latter is everything else (argument, observation, math).
  2. Under certain conditions, there'
... (read more)
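A toy illustration of why the testimonial/technical distinction in point 1 matters (my own minimal construction, far cruder than the Laputa simulation): if several agents' reports all trace back to one shared observation, a pooler who treats the reports as independent double-counts the evidence.

```python
# Toy model of double-counting testimonial evidence (my own minimal
# construction, not the Laputa simulation): five agents' reports all
# trace back to ONE shared observation.

def odds(p: float) -> float:
    return p / (1 - p)

def prob(o: float) -> float:
    return o / (1 + o)

prior = 0.5
lr_observation = 3.0  # the lone piece of technical evidence

# Every agent heard about the same observation, so all report the same posterior.
reports = [prob(odds(prior) * lr_observation)] * 5

# Correct update: the group only has one observation's worth of evidence.
correct = prob(odds(prior) * lr_observation)

# Naive pooling: treat each report as an independent update.
pooled_odds = odds(prior)
for r in reports:
    pooled_odds *= odds(r) / odds(prior)
naive = prob(pooled_odds)

print(f"correct posterior: {correct:.3f}")  # 0.750
print(f"naive pooling:     {naive:.3f}")    # ~0.996
```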
8Owen Cotton-Barratt2mo
[Without implying I agree with everything ...] This comment was awesome, super high density of useful stuff. I wonder if you'd consider making it a top level post?
Sort forum posts by: Occlumency (Old & Upvoted)

Oh. It does mitigate most of the problem as far as I can tell. Good point Oo

2Charles He2mo
Your idea is still viable and useful! There’s also valuable discussion and ideas that came up. IMO you deserve at least 80% of the credit for these, as they arose from your post.
Sort forum posts by: Occlumency (Old & Upvoted)

Oh, this is wonderfwl. But to be clear, Occlumency wouldn't be the front page. It would be one of several ways to sort posts when you go to /all posts. Oldie goldies is a great idea for the frontpage, though!

3Charles He2mo
Hmm, maybe we are talking about different things, but I think the /all posts page already breaks down posts by year. So that seems to mitigate a lot of the problem I think you are writing about (less so if within-year inflation is high)? I also think your post is really thoughtful, deep and helpful.
Sort forum posts by: Occlumency (Old & Upvoted)

I have no idea how feasible it is. But I made this post because I personally would like to search for posts like that to patch the most important holes in my EA Forum knowledge. Thanks for all the forum work you've done, the result is already amazing! <3

EA Forum feature suggestion thread
  1. Add a sorting option for Occlumency so people can find the posts with the most enduring value historically (sorting by total karma doesn't do it, because the influx of new forum users means sharply more karma gets allocated to newer posts; one possible mechanism is sketched after this excerpt).
  2. Add a tag for "outdated" that people can vote up or down, so that outdated but highly upvoted past posts don't continually mislead people (e.g. based on research that failed to replicate). I can't think of any posts atm, but if you can think of any, please mark them.
  3. Consider hiding authorship and karma for posts 24 h
... (read more)
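A sketch of one way suggestion 1 could be implemented (my own construction, not the Forum's actual algorithm): score each post's karma relative to the typical karma of posts published around the same time.

```python
# One possible "inflation-adjusted" ranking (a sketch of the idea, not
# the Forum's actual algorithm): score each post's karma relative to
# the median karma of posts published the same month. Data is made up.
from collections import defaultdict
from statistics import median

posts = [  # (title, month of publication, karma)
    ("old classic", "2019-03", 120),
    ("old average", "2019-03", 30),
    ("new hit", "2022-05", 260),
    ("new average", "2022-05", 90),
]

karma_by_month = defaultdict(list)
for _, month, karma in posts:
    karma_by_month[month].append(karma)

monthly_median = {m: median(ks) for m, ks in karma_by_month.items()}

# Rank by karma relative to contemporaries, so karma inflation from the
# growing user base doesn't bury older posts.
for title, month, karma in sorted(
    posts, key=lambda p: p[2] / monthly_median[p[1]], reverse=True
):
    print(f"{title}: {karma / monthly_median[month]:.2f}x contemporaneous median")
```

With this toy data the 2019 "old classic" outranks the higher-raw-karma 2022 "new hit", which is the behavior the suggestion is after.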
3JP Addison1mo
Thanks for the suggestions. Responding here rather than on the post. I like the "Occlumency" idea, and have been thinking along those lines. I've recorded it. I also like "outdated", and have passed it on to Topics lead Pablo. We've heard the third suggestion before. I personally lean in the direction that this is the right sort of thing to think about, but does not make for a good Forum experience. There might be other approaches like "ratio of upvotes to reads" that would serve the final purpose while being less disruptive.
Sort forum posts by: Occlumency (Old & Upvoted)

The users with the highest karma come from a range of different years, and the two highest joined in 2017 and 2019. I don't think it's too much of a problem.

Sort forum posts by: Occlumency (Old & Upvoted)

Good point! Edited the post to mention this.

Getting a feel for changes of karma and controversy in the EA Forum over time

Not sure how much it matters, but if you weight vote balances by forum activity during the month of publication, you aren't controlling for votes cast outside the month of publication. This means that older posts that have received a second wind of upvotes will be ranked higher.

Sort forum posts by: Occlumency (Old & Upvoted)

Experimental fine-tuning might be in order. But even without it, Occlumency has a different set of problems to Magic (New & Upvoted), so the option value is probably good.

As for outdated posts, there could be an "outdated" tag that anyone can add to posts and vote down or up. And anyone who uses it should be encouraged to link to the reason the post is outdated in the comments. Do you have any posts in mind?

Emrik's Shortform

(I no longer endorse this post.)

A way of reframing the idea of "we are no longer funding-constrained" is "we are bottlenecked by people who can find new cost-effective opportunities to spend money". If this is true, we should plausibly stop donating to funds that can't give out money fast enough anyway, and rather spend money on orgs/people/causes you personally estimate need more money now. Maybe we should up-adjust how relevant we think personal information is to our altruistic spending decisions.

Is this right? And are there any good public summar... (read more)

1Emrik1mo
FWIW, I think personal information is very relevant to giving decisions, but I also think the meme "EA is no longer funding-constrained" perhaps lacks nuance that's especially relevant for people with values or perspectives that differ substantially from major funders. Relevant: https://forum.effectivealtruism.org/posts/GFkzLx7uKSK8zaBE3/we-need-more-nuance-regarding-funding-gaps
1james.lucassen1mo
Hey, I really like this re-framing! I'm not sure what you meant to say in the second and third sentences tho :/
If EA is no longer funding constrained, why should *I* give?

Reframe the idea of "we are no longer funding-constrained" to "we are bottlenecked by people who can find new good things to spend money on". Which means you should plausibly stop donating to funds that can't give out money fast enough, and rather spend money on orgs/people/causes you personally estimate need more money now.

Are there any good public summaries of the collective wisdom fund managers have acquired over the years? If we're bottlenecked by people who can find new giving opportunities, it would be great to promote the related skills. And I want to read them.

EA can be hard: links for that

Idk, I like the attitudes found in "Pain is not the unit of Effort". Summarised: Effort is a dangerous proxy variable to maximise. And for most human beings, maximising impact means trying to have plenty of mental and practical slack in your life. If you feel like you're only putting in enough effort if you're at the brink of how much pain you can handle, you should probably try to find and test ways of getting out of that trap (like acquiring SSRIs to try). :)

Virtual Coworking

And for people who don't know what "gather town" means and wish to judge whether it could appeal to them as a place to cowork, you can read the forum post about it. :)

EA coworking/lounge space on gather.town

I looked at it for a bit, and it seems interesting! But announcing a move would be risky, given that we might lose people in the transition, so the difference in quality of the space would have to be sufficient to overcome that risk, and I'm not sure it is.

Also, if you have a Gather Town in Germany, we could link it via a portal; or alternatively you could copy the whole space into EAGT, and it could be linked via a door (like with EA Denmark's space). The latter option has the advantage that it benefits the larger community, encourages more intermingling bet... (read more)

What makes a statement a normative statement?

Oh, I like this. Seems good to have a word for it, because it's a set of constraints that a lot of us try to fit our morality into. We don't want it to have logical contradictions. Seems icky. Though it does make me wonder what exactly I mean by 'logical contradiction'. 

EA coworking/lounge space on gather.town

If cost is a problem, I could definitely contribute up to $200/month. But I expect if we get 25 concurrent users, I'm not the only one interested in funding the project. Having an online EA hub like that could be extremely valuable.

Open Thread: Spring 2022

Are there like some statistics on this forum? Particularly the distribution of votes over posts?

6Lorenzo2mo
Hi Emrik! Is this what you're looking for? https://effectivealtruismdata.com/#post-wilkinson-section https://www.effectivealtruismdata.com/#forum-scatter-section
Project: A web platform for crowdsourcing impact estimates of interventions.

I'm in favour of the project, but here's a consideration against: Making people in the community more confident about what the community thinks about a subject can be potentially harmfwl.

Testimonial evidence is the stuff you get purely because you trust another reasoner (Aumann-agreement fashion), and technical evidence is everything else (observation, math, argument).

Making people more aware of testimonial evidence will also make them more likely to update on it, if they're good Bayesians. But this also reduces the relative influence that technical evide... (read more)

2Max Clarke2mo
One of the approaches here is to A) require people to sign up, and B) not show people aggregated predictions until they have posted their own.
Free-spending EA might be a big problem for optics and epistemics

There are two opposing arguments: 1) you get more information about your friends than you get about strangers, and 2) you are more likely to be biased in favour of your friends.

Personally, I think it would be very hard to vet potential funding prospects over just a few talks, and the fact that I've "vetted" my friends over several years is a wealth of information that I would be foolish to ignore.

Our intuitions on this may diverge based on how likely we think it is that we've acquired exceptional friends. If you're imagining childhood friends or colle... (read more)

3freedomandutility2mo
I think for funding a project, most of the important and relevant information about a person who might run the project can be obtained from a detailed CV. I think most of the information that a funder could obtain about a friend which they couldn't also get from the friend's CV is their impression of difficult-to-accurately-evaluate things like personality traits. I place very little value on a funder's evaluation of these things because these things are inherently difficult to evaluate anyway and I expect their evaluation to be too heavily biased by their liking for their friend. Perhaps we disagree on the difficulty of evaluating personality traits, but I think we probably disagree on the extent to which liking someone as a friend is likely to bias your views on them. My view has long been that the bias is likely to be so large that funding applications should include CVs but not the names of people. I think many EAs feel like systems like these overvalue credentials, but that could easily be gotten round by excluding university names and focusing CVs more on 'track record of running cool projects'.
Free-spending EA might be a big problem for optics and epistemics

Being friends with someone is also a great way of learning about their capabilities, motivations and reliability, so I think it could be rational for rich funders to be giving grants to their friends moreso than strangers.

I disagree with you here. I think being friends with someone makes you quite likely to overestimate their capabilities / reliability etc. If there's psychology research available on how we evaluate people we know vs strangers, I'd love to read it.

Free-spending EA might be a big problem for optics and epistemics

FWIW, I think it'd be pretty hard (practically and emotionally) to fake a project plan that EA funders would be willing to throw money at. So my prior is that cheating is rare and an acceptable cost to being a high-risk funder. EA is not about minimising crime, it's about maximising impact, and before we crack down on funding we should check our motivations. I don't want anyone to change their high-risk strategy based on hearsay, but I do want our top funders to be on the lookout so that they might catch a possible problem before it becomes rampant.

I like ... (read more)

EA coworking/lounge space on gather.town

Cool! I'll try to stay online when I can. If you see me online, feel free to walk up to me and chat. I'll let you know if I'm too busy to talk. I'd like to know what other EAs are up to, and talk about what I'm up to.
