FHI staff were asked to give advice at the highest level of government in the U.K. and the Czech Republic
Is there more info anywhere on the connection between FHI and the Czech govt?
I think the first step, if you believe you're less competent than your colleagues believe you to be, is to find out who's wrong—you, them, or both? And are you wrong about your assessment of yourself, or about what your colleagues think of you, or both? Think about what questions you could ask or what metrics you could measure to answer these questions.
If it's your colleagues who are wrong, is it worth correcting them? They understand the risks; they know that recruitment is hit and miss. Is it your responsibility to protect them? You can live in fear of the
...Oh, I would've sworn that was already the case (with the understanding that, as you say, there is less volunteering involved, because with the "inner" movement being smaller, more selective, and with tighter/more personal relationships, there is much less friction in the movement of money, either in the form of employment contracts or grants).
So, to simplify your problem: I help someone, but somewhere else there is someone else who I wasn't able to help. Wat do?
You're in this precise situation regardless of quantum physics; I guarantee you won't be able to save everyone in your personal future light cone either. So I think that should simplify your question a bunch.
Why would this change your metaethical position? The reason you'd want to help someone else shouldn't change if I make you aware of some additional people somewhere whom you're not capable of helping.
Both here and on LW, I have /allPosts bookmarked, "Sorted by Daily"; that helps. I haven't used the front page in ages.
Just as a data point, I didn't read OP as an attack at all.
I also don't think that if you have overall negative feedback, you should necessarily have to come up with some good things to say as well, just to balance things out and "be nice". OP said what they wanted to say and it reads to me like valuable feedback, including the subtle undertone of frustration.
As a data point on the object level, I think that magic sorting makes sense on a website with intense traffic (HN, reddit), not on a site with a few posts a day.
Oh, I thought you were referring to some kind of legal costs. You mean the costs of vetting. Right. As has been noted: EA is vetting constrained, EA is network constrained.
But this is the case with employees as well, isn't it? It's just about vetting people in general.
One thing I notice, looking at the 80k job board, is that not that many EA(-adjacent) orgs are interested in remote workers.
The costs to set up contractor relationships are considerable
I'm curious, how does that work in the US? Why is contract work different in this regard from receiving services from any other type of supplier?
Hmm, it's not so much the classic rationalist trait of overthinking that I'm concerned about. It's more like…
First, when you do X, the brain has a pesky tendency to learn exactly X. If you set out to practice thinking, the brain improves at the activity of "practicing thinking". If you set out to achieve something that will require serious thinking, you improve at serious thinking in the process. Trying to try and all that. So yes, practicing thinking, but you can't let your brain know that that's what you're trying to achieve.
Second, "thinking for real" s
...Ah.
An important facet of the Middle of the Middle is that people don't yet have the agency or context needed to figure out what's actually worth doing, and a lot of the obvious choices are wrong.
This seems to me like two different problems:
Some people lack, as you say, agency. This is what I was talking about—they're looking for someone to manage them.
Other people are happy to do things on their own, but they don't have the necessary skills and experience, so they will end up doing something that's useless in the best case and actively harmful in the w
...I think a big problem for EA is not having a clear sense of what mid-level EAs are supposed to do.
Funny—I think a big problem for EA is mid-level EAs looking over their shoulders for someone else to tell them what they're supposed to do.
I'll take your invitation to treat this as an open thread (I'm not going to EAG).
before you're ready to tackle anything real ambitious... what should you do?
Why not tackle less ambitious goals?
I'm going to speak for myself again:
I view our current situation as a fork in the road. Either very bad outcomes or very good ones. There is no slowing down. There is no scenario where we linger before the fork for decades or centuries.
As far as very bad outcomes, I'm not worried about extinction that much; dead people cannot suffer, at least. What I'm most concerned about is locking ourselves into a state of perpetual hell (e.g. undefeatable totalitarianism, or something like Christiano's first tale of doom, and then spreading that hell across the univers
...If humanity wipes itself out, those wild animals are going to continue suffering forever.
If we only partially destroy civilization, we're going to set back the solution to problems like wild animal suffering until (and if) we rebuild civilization. (And in the meantime, we will suffer as our ancestors suffered).
If we nuke the entire planet down to bedrock or turn the universe into paperclips, that might be a better scenario than the first one in terms of suffering, but then all of the anthropic measure is confined to the past, where it suffers, and we're fo
...Most often I downvote posts when I'm reasonably confident that it would be a waste of time for others to open and read it (confused posts, off-topic, rambling, trivial, etc.)—my goal with voting is to make recommendations to others.
I rarely downvote comments, typically only when someone's not playing nice, but that's more on LW than here.
I think it's more than a matter of the quantity of thinking; I think there's a qualitative difference in whether the underlying motive for even starting the train of thought is "I intend to do X, so I have to plan the steps that constitute X", or whether it's "X scares the fuck out of me and I have to avoid doing X in a way that the System 2 can rationalize to itself, so it's either (1) go stare in the fridge, (2) masturbate, (3) deep-clean the bathroom, or (4) start a google doc brainstorming all the concerns I should take into account when prioritizing the various sub-tasks of X. Hmm, 4 sounds like something System 2 would eat up, the absolute dumbass."
Re: productivity—from personal experience, meditation also seems to help with overthinking. I think that Rationalists in particular have the nasty habit of endless intellectualizing about how to beat akrasia and get oneself to do X; it seems that as you meditate, the addiction to this mental movement fades and it stops being appealing, so you go do X instead.
Nice summary of the benefits, thanks.
To new practitioners, I would strongly suggest following much more detailed instructions than those given here; for example, I follow the meditation guide The Mind Illuminated, which I can wholeheartedly recommend. It will make your meditation more productive and more enjoyable.
I'm not in a position where EtG would seem reasonable, but I can imagine the psychological obstacles which would arise if I were in that position. E.g.:
If you're one of the x-risk-oriented people (like me), rather than, say, global-poverty-oriented, your money wouldn't typically go to people who are much worse off than you, in Africa and elsewhere. It would typically go to support people like AI and generalist researchers, content creators, event organizers, and their support staff—people who are notably better off than you. They spend their days doing work
...I wanted to write something similar. I saved up the money that I donated by buying cheaper food and living in cheaper places. It all felt a bit pointless when I saw that the orgs I donated to spent some of that money on fancy offices in expensive areas. But if I remember correctly, it wasn't a big deal, as I continued donating to them. I thought that from a utilitarian POV it could be the right decision on their part.
I also want to say that I'm not sure that I now enjoy my job as a researcher at an EA org more than I enjoyed earning to give ...
Is there any resource (eg blogpost) for people curious about what "facilitating conversations" involves?
At the moment, not really.
There's the classic Double Crux post. Also, here's a post I wrote, that touches on one sub-skill (out of something like 50 to 70 sub-skills that I currently know). Maybe it helps give the flavor.
If I were to say what I'm trying to do in a sentence: "Help the participants actually understand each other." Most people generally underestimate how hard this is, which is a large part of the problem.
The good thing that I'm aiming for in a conversation is when "that absurd / confused thing that X-person...
I agree with Brendon that the Hotel should charge the tenants, and the tenants should seek their own funding.
If I were contemplating donating to the Hotel, the decision would hinge almost entirely on who is at the hotel and what they are working on. Moreover, I expect I would almost certainly want to tie my donation to a specific tenant/group of tenants, because I wouldn't a priori expect all of them to be good donation targets.
At this point, why would I not just fund the specific person directly? Better yet, why would I not donate to the EA Funds/CEA and l
...
I think this view as presented has an overly narrow focus. In terms of thinking of the expected value of the hotel and whether it's worth funding on the margin, it's useful to also consider:
I think this gets to the big flaw in the current appeal from a design perspective -
the idea of the hotel is too new and cannot demonstrate impact on an aggregate scale (unlike, say, cash transfers) in an easy-to-understand way.
Therefore people look for specific examples of what people are doing at the hotel to reassure them of the impact
But as there are numerically few residents so far, and the first residents had little competition to be accepted, many are not seen as competitive with what funders would independently decide to fund, so they don’t make
I have several thoughts on this, but I only have time for one right now:
I'm not a psychiatrist, but I would suggest that the thoughts we have when we're mentally healthy are the valid ones, and the thoughts we have when we're depressed are the twisted, irrational ones.
I know that when you're depressed, it seems that you're seeing things more clearly, but I think that a psychiatrist would tell you that's not the case.
So if your healthy self feels okay about not performing up to your depressed self's standards, I would strongly suggest deferring to the healthy self (by postponing all decisions until you're healthy again).
It's been said that EA is vetting constrained, but in some deep sense it's more that EA (and the world) is constrained on the number of people who don't need to be told what to do.
Great, I feel less crazy when other people have the same thoughts as me. From my comment a week ago:
The high-profile EA orgs are not bottlenecked on "structure" or "network"; they're bottlenecked because there's a hundred people requiring management for every one person willing to manage others.
Yes, makes sense.
EA should try to make people feel relevant if and only if they're doing good.
I would even say something like "iff they're making an honest attempt at doing good", because the kids are suffering from enough crippling anxiety as it is :)
achieved their prominence
Aha! This made it click for me. I was confused by this whole issue where people can't get jobs at prestigious EA orgs. Something felt backwards about it.
Let's say you want to solve some problem in the world and you conclude that the most effective way for you to push on the problem is to take the open research position at organization X.
But you find out that there's someone even better for that position than you who will take it. Splendid! Now your hands are free to take the only slightly less effective position at organization
...I don't think you read too much Robin Hanson; it clarifies a lot of things :)
In some sense, I don't even think these people are wrong to be frustrated. You have to satisfy your own needs before you can effectively help others. One of these needs just happens to be the need to feel relevant. And like everything else, this is a systemic problem. EA should try to make people feel relevant if and only if they're doing good. If doing good doesn't get you recognition unless you're in a prestigious organisation, then we have to fix that.
I'm broadly sympathetic to this view, though I think another possibility is that people want to maximise personal impact, in a particular sense, and that this leads to optimising for felt personal impact more than actually optimising for amount of overall good produced.
For example, in the context of charitable donations, people seem to strongly prefer that their donation specifically goes to impact-producing things rather than overhead that 'merely' supports impact-producing things, and that someone else's donation goes to cover the overhead. (Gneezy et al,
...I feel you could come to the same conclusions/prescriptions with a much simpler underlying framework:
In order to utilize human effort, someone must come up with some valuable activity to pipe that effort into. A manager/employer, roughly speaking.
Some people manage/employ themselves; they find something to pipe their efforts into on their own. Maybe they start a project, a charity, a startup, organize a local group or an event, what have you.
Some people are even willing to manage/employ other people: they come up with so many ideas of what to do that it ca
...Most people don't have the skills required to manage themselves, start their own org, organize their own event, etc.; a large fraction of people need someone else to assign them tasks just to keep their own household running. Helping people get better at management skills (at least at managing themselves, though the ability to manage others as well would be ideal) could potentially be very high-value. There don't seem to be many good resources on how to do this currently.
Just a heads up regarding the HEXACO personality test website that was mentioned: it seems to be broken right now, so instead of results, you get a bunch of lines like this:
`Notice: Undefined offset: 3 in /home/hexaco/domains/hexaco.org/public_html/classes/Statistics.php on line 35`
I didn't find any other HEXACO test online; did anyone else? (Or has the official website worked for anyone else?)
A good example of a ToC diagram is this old Leverage Research plan.