All of Moses's Comments + Replies

Do research organisations make theory of change diagrams? Should they?

I haven’t actually seen any examples of ToC diagrams from research orgs except the two shown above.

A good example of a ToC diagram is this old Leverage Research plan.

2 · Raemon · 9mo: https://forum.effectivealtruism.org/posts/JJuEKwRm3oDC3qce7/mentorship-management-and-mysterious-old-wizards
3 · Raemon · 2y: Alas, I started writing it and then was like "geez, I should really do any research at all before just writing up a pet armchair theory about human motivation." I wrote this Question Post [https://www.lesswrong.com/posts/Bnk6xJyhWZKMbT2ZZ/how-do-people-become-ambitious] to try to get a sense of the landscape of research. It didn't really work out, and since then I... just didn't get around to it.
EA Organization Updates: March 2020

FHI staff were asked to give advice at the highest level of government in the U.K. and the Czech Republic

Is there more info anywhere on the connection between FHI and the Czech govt?

Countering imposter syndrome

I think the first step, if you believe you're less competent than your colleagues believe you to be, is to find out who's wrong—you, them, or both? And are you wrong about your assessment of yourself, or about what your colleagues think of you, or both? Think about what questions you could ask or what metrics you could measure to answer these questions.

If it's your colleagues who are wrong, is it worth correcting them? They understand the risks; they know that recruitment is hit and miss. Is it your responsibility to protect them? You can live in fear of the

... (read more)
casebash's Shortform

Oh, I would've sworn that was already the case (with the understanding that, as you say, there is less volunteering involved, because with the "inner" movement being smaller, more selective, and with tighter/more personal relationships, there is much less friction in the movement of money, either in the form of employment contracts or grants).

If physics is many-worlds, does ethics matter?

So, to simplify your problem: I help someone, but somewhere else there is someone else who I wasn't able to help. Wat do?

You're in this precise situation regardless of quantum physics; I guarantee you won't be able to save everyone in your personal future light cone either. So I think that should simplify your question a bunch.

Why would this change your metaethical position? The reason you'd want to help someone else shouldn't change if I make you aware of some additional people somewhere whom you're not capable of helping.

1 · strangepoop · 2y: Interestingly, Eliezer claims here [https://www.greaterwrong.com/posts/qcYCAxYZT4Xp9iMZY/living-in-many-worlds#comment-yPFbqDTGR8nT6QBqe] that that is precisely what caused the change in his case. That's from more than ten years ago; I don't know whether that is still his position.
I find this forum increasingly difficult to navigate

Both here and on LW, I have /allPosts bookmarked, "Sorted by Daily"; that helps. I haven't used the front page in ages.

4 · Raemon · 2y: Strong upvoted mostly to make it easier to find this comment.
9 · jimrandomh · 2y: LessWrong has a sidebar which makes the link to All Posts much more prominent; it looks like EA Forum hasn't adopted that yet, but it would probably help.
I find this forum increasingly difficult to navigate

Just as a data point, I didn't read OP as an attack at all.

I also don't think that if you have overall negative feedback, you should necessarily have to come up with some good things to say as well, just to balance things out and "be nice". OP said what they wanted to say and it reads to me like valuable feedback, including the subtle undertone of frustration.

As a data point on the object level, I think that magic sorting makes sense on a website with intense traffic (HN, reddit), not on a site with a few posts a day.

What new EA project or org would you like to see created in the next 3 years?

Oh, I thought you were referring to some kind of legal costs. You mean the costs of vetting. Right. As has been noted: EA is vetting constrained, EA is network constrained.

But this is the case with employees as well, isn't it? It's just about vetting people in general.

One thing I notice, looking at the 80k job board, is that not that many EA(-adjacent) orgs are interested in remote workers.

2 · Ozzie Gooen · 2y: One difference is that orgs can share contractors, but not employees. For instance, my designer only spends around 1-3 hours per week, so has lots of time to help other groups, like EA groups. I'm thinking of low-time, skill-specific workers (the jobs in that list would all only require a few hours per month or similar)
What new EA project or org would you like to see created in the next 3 years?

The costs to set up contractor relationships are considerable

I'm curious, how does that work in the US? Why is contract work different in this regard from receiving services from any other type of supplier?

2 · Ozzie Gooen · 2y: There are typically many contractors to choose from, and it's difficult to evaluate their quality. For instance, if you want a virtual assistant, it may take several interviews and trials until you find one you like. I'm not sure what kinds of suppliers you are referring to. If it's simple things like, "buy paper from this company on Amazon", that's typically simpler. The reviews are more indicative of performance and there are often fewer alternatives.
Raemon's EA Shortform Feed

Hmm, it's not so much the classic rationalist trait of overthinking that I'm concerned about. It's more like…

First, when you do X, the brain has a pesky tendency to learn exactly X. If you set out to practice thinking, the brain improves at the activity of "practicing thinking". If you set out to achieve something that will require serious thinking, you improve at serious thinking in the process. Trying to try and all that. So yes, practicing thinking, but you can't let your brain know that that's what you're trying to achieve.

Second, "thinking for real" s

... (read more)
Raemon's EA Shortform Feed

Ah.

An important facet of the Middle of the Middle is that people don't yet have the agency or context needed to figure out what's actually worth doing, and a lot of the obvious choices are wrong.

This seems to me like two different problems:

Some people lack, as you say, agency. This is what I was talking about—they're looking for someone to manage them.

Other people are happy to do things on their own, but they don't have the necessary skills and experience, so they will end up doing something that's useless in the best case and actively harmful in the w

... (read more)
3 · Raemon · 2y: I think the actions that EA actually needs to be involved with doing also require figuring things out and building a deep model of the world. Meanwhile... "sufficiently advanced thinking looks like doing", or something. At the early stages, running a question hackathon requires just as much ops work and practice as running some other kind of hackathon. I will note that the default mode where rationalists or EAs sit around talking and not doing is a problem, but often that mode, in my opinion, doesn't actually rise to the level of "thinking for real." Thinking for real is real work.
Raemon's EA Shortform Feed

I think a big problem for EA is not having a clear sense of what mid-level EAs are supposed to do.

Funny—I think a big problem for EA is mid-level EAs looking over their shoulders for someone else to tell them what they're supposed to do.

7 · Raemon · 2y: So I actually draw an important distinction between "mid-level EAs", where there are three stages:
"The beginning of the Middle" – once you've read all the basics of EA, the thing you should do is... read more things about EA. There's a lot to read. Stand on the shoulders of giants.
"The Middle of the Middle" – ????
"The End of the Middle" – Figure out what to do, and start doing it (where "it" is probably some kind of ambitious project).
An important facet of the Middle of the Middle is that people don't yet have the agency or context needed to figure out what's actually worth doing, and a lot of the obvious choices are wrong. (In particular, mid-level EAs have enough context to notice coordination failures, but not enough context to realize why the coordination failures are happening, nor the skills to do a good job at fixing them. A common failure mode is trying to solve coordination problems when their current skillset would probably produce a net-negative result.) So yes, eventually, mid-level EAs should just figure out what to do and do it, but at EA's current scale, there are 100s (maybe 1000s) of people who don't yet have the right meta skills to do that.
Raemon's EA Shortform Feed

I'll take your invitation to treat this as an open thread (I'm not going to EAG).

before you're ready to tackle anything real ambitious... what should you do?

Why not tackle less ambitious goals?

4 · Raemon · 2y: What goals, though?

I'm going to speak for myself again:

I view our current situation as a fork in the road. Either very bad outcomes or very good ones. There is no slowing down. There is no scenario where we linger before the fork for decades or centuries.

As far as very bad outcomes, I'm not worried about extinction that much; dead people cannot suffer, at least. What I'm most concerned about is locking ourselves into a state of perpetual hell (e.g. undefeatable totalitarianism, or something like Christiano's first tale of doom, and then spreading that hell across the univers

... (read more)
Answer by Moses · Jun 01, 2019 · 10

If humanity wipes itself out, those wild animals are going to continue suffering forever.

If we only partially destroy civilization, we're going to set back the solution to problems like wild animal suffering until (if ever) we rebuild civilization. (And in the meantime, we will suffer as our ancestors suffered.)

If we nuke the entire planet down to bedrock or turn the universe into paperclips, that might be a better scenario than the first one in terms of suffering, but then all of the anthropic measure is confined to the past, where it suffers, and we're fo

... (read more)
5 · SiebeRozendal · 2y: Not forever. Only until the planet becomes too hot to support complex life (<1 billion years from now). Given that the universe can support life for 1-100 trillion years, this is a relatively short amount of suffering compared to what could be. And also only on our planet! Which is much less restricted than the suffering that can spread if humanity remains alive. (Although, as I write in my own answer, I don't think humanity would spread wild animals beyond the solar system.)
1 · jackmalde · 2y: Thanks for this. I do wonder about the prospect of 'solving' extinction risk. Do you think EAs who are proponents of reducing extinction risk now actually expect these risks to become sufficiently small that moving focus onto something like animal suffering would ever be justified? I'm not convinced they do, as extinction in their eyes is so catastrophically bad that any small reduction in probability would likely dominate other actions in terms of expected value. Do you think this is an incorrect characterisation?
Why do you downvote EA Forum posts & comments?

Most often I downvote posts when I'm reasonably confident that it would be a waste of time for others to open and read them (confused posts, off-topic, rambling, trivial, etc.)—my goal with voting is to make recommendations to others.

I rarely downvote comments, typically only when someone's not playing nice, but that's more on LW than here.

Meditation and Effective Altruism

I think it's more than a matter of the quantity of thinking; I think there's a qualitative difference in whether the underlying motive for even starting the train of thought is "I intend to do X, so I have to plan the steps that constitute X", or whether it's "X scares the fuck out of me and I have to avoid doing X in a way that the System 2 can rationalize to itself, so it's either (1) go stare in the fridge, (2) masturbate, (3) deep-clean the bathroom, or (4) start a google doc brainstorming all the concerns I should take into account when prioritizing the various sub-tasks of X. Hmm, 4 sounds like something System 2 would eat up, the absolute dumbass."

Meditation and Effective Altruism

Re: productivity—from personal experience, meditation also seems to help with overthinking. I think that Rationalists in particular have the nasty habit of endless intellectualizing about how to beat akrasia and get themselves to do X; it seems that as you meditate, the addiction to this mental movement fades and then it's not appealing anymore, so you go do X instead.

1 · robm_73@hotmail.com · 3y: Hi Moses, thanks for the comment. Totally agree with you here. There's a certain amount of thinking that is useful to consider things and make good decisions but the mind has a tendency to carry on thinking for a long time after that threshold of usefulness has been reached. After that, it can easily turn into over-analysing, doubt, worry and all sorts of other productivity-sapping stuff.
5 · Milan_Griffes · 3y: +1 My subjective experience of this has been something like moving from: "X is super intense & it's going to take all my collected will to even think about approaching it." to: "Oh X, that's pretty straightforward. I'll just do that now."
Meditation and Effective Altruism

Nice summary of the benefits, thanks.

To new practitioners, I would strongly suggest following much more detailed instructions than those given here; for example, I follow the meditation guide The Mind Illuminated, which I can wholeheartedly recommend. It will make your meditation more productive and more enjoyable.

6 · Aaron_Nesmith-Beck · 3y: I second the recommendation of The Mind Illuminated, just as wholeheartedly.
4 · Milan_Griffes · 3y: Finding a good teacher can be really helpful:
  • tighter feedback loops
  • less time wondering "Am I doing this right? Is this a thing at all?"
  • can help you avoid some common pitfalls that are hard to notice from the inside
What are people's objections to earning-to-give?
Answer by Moses · Apr 14, 2019 · 31

I'm not in a position where EtG would seem reasonable, but I can imagine the psychological obstacles which would arise if I was in that position. E.g.:

If you're one of the x-risk-oriented people (like me), rather than, say, global-poverty-oriented, your money wouldn't typically go to people who are much worse off than you, in Africa and elsewhere. It would typically go to support people like AI and generalist researchers, content creators, event organizers, and their support staff—people who are notably better off than you. They spend their days doing work

... (read more)

I wanted to write something similar. I saved up the money that I donated by buying cheaper food and living in cheaper places. It all felt a bit pointless when I saw that the orgs that I donated to spend some of that money on fancy offices in expensive areas. But if I remember correctly, it wasn't a big deal, as I continued donating to them. I thought that from a utilitarian POV it could be the right decision on their part.


I also want to say that I'm not sure that I now enjoy my job as a researcher at an EA org more than I enjoyed earning to give ... (read more)

3 · Matthew_Barnett · 3y: I strongly empathize with this framing.
Long-Term Future Fund: April 2019 grant recommendations

Yes, that helps, thanks. "Mediating" might be a word which would convey the idea better.

Long-Term Future Fund: April 2019 grant recommendations

Is there any resource (eg blogpost) for people curious about what "facilitating conversations" involves?

At the moment, not really.

There's the classic Double Crux post. Also, here's a post I wrote, that touches on one sub-skill (out of something like 50 to 70 sub-skills that I currently know). Maybe it helps give the flavor.

If I were to say what I'm trying to do in a sentence: "Help the participants actually understand each other." Most people generally underestimate how hard this is, which is a large part of the problem.

The good thing that I'm aiming for in a conversation is when "that absurd / confused thing that X-person... (read more)

$100 Prize to Best Argument Against Donating to the EA Hotel

I agree with Brendon that the Hotel should charge the tenants, and the tenants should seek their own funding.

If I was contemplating donating to the Hotel, the decision would hinge almost entirely on who is at the hotel and what they are working on. Moreover, I expect I would almost certainly want to tie my donation to a specific tenant/group of tenants, because I wouldn't a priori expect all of them to be good donation targets.

At this point, why would I not just fund the specific person directly? Better yet, why would I not donate to the EA Funds/CEA and l

... (read more)

I think this view as presented has an overly narrow focus. In terms of thinking of the expected value of the hotel and whether it's worth funding on the margin, it's useful to also consider:

  • The benefits of the in-person community in terms of support, feedback, motivation, productivity, collaboration, networking.
  • All the potential future value from future guests and iterating, expanding and franchising the model.
  • The effect its failing from a lack of funding would have on the likelihood of similar initiatives being started in the future.
  • The notion of
  • ... (read more)

I think this gets to the big flaw in the current appeal from a design perspective:

  1. The idea of the hotel is too new and cannot demonstrate impact on an aggregate scale (unlike, say, cash transfers) in an easy-to-understand way.

  2. Therefore people look for specific examples of what people are doing at the hotel to reassure them of the impact.

  3. But as there are numerically few residents so far, and the first residents had little competition to be accepted, many are not seen as competitive with what funders would independently decide to fund, so they don’t make

... (read more)
Severe Depression and Effective Altruism

I have several thoughts on this, but I only have time for one right now:

I'm not a psychiatrist, but I would suggest that the thoughts we have when we're mentally healthy are the valid ones, and the thoughts we have when we're depressed are the twisted, irrational ones.

I know that when you're depressed, it seems that you're seeing things more clearly, but I think that a psychiatrist would tell you that's not the case.

So if your healthy self feels okay about not performing up to your depressed self's standards, I would strongly suggest deferring to the healthy self (by postponing all decisions until you're healthy again).

SHOW: A framework for shaping your talent for direct work

It's been said that EA is vetting constrained, but in some deep sense it's more that EA (and the world) is constrained on the number of people who don't need to be told what to do.

Great, I feel less crazy when other people have the same thoughts as me. From my comment a week ago:

The high-profile EA orgs are not bottlenecked on "structure" or "network"; they're bottlenecked because there's a hundred people requiring management for every one person willing to manage others.

3 · Davidmanheim · 2y: I strongly feel this is incorrect. Coordination is incredibly expensive, is already a major pain point and source of duplication and wasted effort, and having lots of self-directed go-getters will make that worse.
SHOW: A framework for shaping your talent for direct work

Yes, makes sense.

EA should try to make people feel relevant if and only if they're doing good.

I would even say something like "iff they're making an honest attempt at doing good", because the kids are suffering from enough crippling anxiety as it is :)

5 · mike_mclaren · 3y: Regarding applying to EA organizations, I think we can simply say that the applicants are doing good by applying. Many of the orgs have explicitly said they want lots of applicants; the applicants aren't wasting the orgs' time, but helping them get better candidates (in addition to learning a lot through the process, etc.).
SHOW: A framework for shaping your talent for direct work

achieved their prominence

Aha! This made it click for me. I was confused by this whole issue where people can't get jobs at prestigious EA orgs. Something felt backwards about it.

Let's say you want to solve some problem in the world and you conclude that the most effective way for you to push on the problem is to take the open research position at organization X.

But you find out that there's someone even better for that position than you who will take it. Splendid! Now your hands are free to take the only slightly less effective position at organization

... (read more)
7 · toonalfrink · 3y: I don't think you read too much Robin Hanson; it clarifies a lot of things :) In some sense, I don't even think these people are wrong to be frustrated. You have to satisfy your own needs before you can effectively help others. One of these needs just happens to be the need to feel relevant. And like everything else, this is a systemic problem. EA should try to make people feel relevant if and only if they're doing good. If doing good doesn't get you recognition unless you're in a prestigious organisation, then we have to fix that.

I'm broadly sympathetic to this view, though I think another possibility is that people want to maximise personal impact, in a particular sense, and that this leads to optimising for felt personal impact more than actually optimising for amount of overall good produced.

For example, in the context of charitable donations, people seem to strongly prefer that their donation specifically goes to impact producing things rather than overhead that 'merely' supports impact producing things and that someone else's donation goes to cover the overhead. (Gneezy et al,

... (read more)
9 · Mappy · 3y: Yup. See @sdspikes comment above.
What to do with people?

I feel you could come to the same conclusions/prescriptions with a much simpler underlying framework:

In order to utilize human effort, someone must come up with some valuable activity to pipe that effort into. A manager/employer, roughly speaking.

Some people manage/employ themselves; they find something to pipe their efforts into on their own. Maybe they start a project, a charity, a startup, organize a local group or an event, what have you.

Some people are even willing to manage/employ other people: they come up with so many ideas of what to do that it ca

... (read more)
6 · Jan_Kulveit · 3y: In my view, without all the hierarchy stuff it is harder to see what to create, start, manage, delegate. I would be significantly more worried about the meme of "just go & do things & manage others" spreading than about the meme "figure out how to grow the structure".

Most people don't have the skills required to manage themselves, start their own org, organize their own event, etc; a large fraction of people need someone else to assign them tasks to even keep their own household running. Helping people get better at management skills (at least for managing themselves, though ability to manage others as well would be ideal) could potentially be very high-value. There don't seem to be many good resources on how to do this currently.

8 · Greg_Colbourn · 3y: For people who want to do more of this and not have to worry about runway, there is the EA Hotel [http://eahotel.org].
So you want to do operations [Part two] - how to acquire and test for relevant skills

Yes, that's right: I don't get the results. The result page is broken (the previous pages work fine).

So you want to do operations [Part two] - how to acquire and test for relevant skills

Just a heads up regarding the HEXACO personality test website that was mentioned: it seems to be broken right now, so instead of results, you get a bunch of lines like this:
Notice: Undefined offset: 3 in /home/hexaco/domains/hexaco.org/public_html/classes/Statistics.php on line 35

I didn't find any other HEXACO test online; did anyone else? (Or has the official website worked for anyone else?)

1 · eirine · 3y: Just to check, does this link work for you? http://hexaco.org/hexaco-online (Edit) Ah, sorry. So you don't get the results from the website?