What’s wrong with “make a specific targeted suggestion for a specific person to do the thing, with an argument for why this is better than whatever else the person is doing?”, like Linch suggests?
This can still be hard, but I think the difficulty lives in the territory, and it's an achievable goal for someone who follows the EA Forum and pays attention to which organizations do what.
It seemed useful to dig into "what actually are the useful takeaways here?", to try and prompt some more action-oriented discussion.
The particular problems Elizabeth is arguing for avoiding:
I left off "Taxing Facebook" because it feels like the wrong name (since it's not really p...
Quick note: I don't think there's anything wrong with asking "are you an English speaker" for this reason, I'm just kinda surprised that that seemed like a crux in this particular case. Their argument seemed cogent, even if you disagreed with it.
The comments/arguments about the community health team mostly make me think something more like "it should change its name" than that it should be disbanded. I think it's good to have a default whisper network to report things to and surreptitiously check in with, even if they don't really enforce/police things. If the problem is that people have a false sense of security, I think there are better ways to avoid that problem.
Just maintaining the network is probably a fair chunk of work.
That said – I think one problem is that the comm-health team has multiple roles. I'm ho...
But a glum aphorism comes to mind: the frame control you can expose is not the true frame control.
I think it's true that frame control (or manipulation in general) tends to be designed to be hard to expose, but I think the actual issue here is more like: manipulation is generally harder to expose than it is to execute, so people trying to expose manipulation have to do a disproportionate amount of work.
Part of the reason I think it was worth Ben/Lightcone prioritizing this investigation is as a retroactive version of "evaluations."
Like, it is pretty expensive to "vet" things.
But if an org has practices that lead to people getting hurt (whether intentionally or not), and it's reasonably likely that those will eventually come to light, orgs are more likely to proactively put effort into avoiding that sort of outcome.
This is a pretty complex epistemic/social situation. I care a lot about our community having some kind of good process of aggregating information, allowing individuals to integrate it, and update, and decide what to do with it.
I think a lot of disagreements in the comments here and on LW stem from people having an implicit assumption that the conversation here is about "should [any particular person in this article] be socially punished?". In my preferred world, before you get to that phase there should be at least some period f...
I don't know about Jonas, but I like this more from the self-directed perspective of "I am less likely to confuse myself about my own goals if I call it talent development."
I do wanna note, I thought the experience of using the Google campus was much worse than at many other EAGs I've been to – having to walk 5-10 minutes over to another part of the campus, and hope that anyone else had shown up to the event I wanted to go to (which they often hadn't), eventually left me with a learned helplessness about trying to do anything.
Hmm, have there been applications that are like "what's your 50th percentile expected outcome?" and "what's your 95th percentile outcome?"
I listed those on an SFF application last year, although I can't remember if they asked for it explicitly. I think it's a good idea.
Note: the automatic audio for this starts with what sounds like some weird artifacts around the image title.
I think there's a reasonable case that, from a health perspective, many people should eat less meat. But "less meat" !== "no meat".
Elizabeth was pretty clear on her take being:
Most people’s optimal diet includes small amounts of animal products, but people eat sub-optimally for lots of reasons and that’s their right.
i.e. yes, the optimal diet is small amounts of meat (which is less than most people eat, but more than vegans eat).
The article notes:
...It’s true that I am paying more attention to veganism than I am to, say, the trad carnivore idiots, even
The argument isn’t about that at all, and I think most people would agree that nutrition is important.
It sounds like you're misreading the point of the article.
The entire point of this article is that there are vegan EA leaders who downplay or dismiss the idea that veganism requires extra attention and effort. It doesn't at all say "there are some tradeoffs, therefore don't be vegan" (it goes out of its way to say almost the opposite).
Whether costs are worth discussing doesn't depend on how large one cost is vs the other – it depends on whether the h...
Is there a word in the rest-of-the-world that means "everything that supports the core work and allows other people to focus on the core work?"
I hadn't looked into the details of the Windfall Clause's proposed execution, and assumed it was prescribing something closer to GiveDirectly than "the CEO gets to direct it personally." The latter does seem obviously bad.
The "disadvantaged background" thing does turn out to show up in the top several google results, so, does seem like a real thing, although I also had no idea until this moment and would have naively used the term "talent search" in the way you describe.
Another angle on this (I think this is implied by the OP, but wasn't quite stated outright):
All the community-norm posts are an input into effective altruism. The gritty technical posts are an output. If you sit around having really good community norms, but you never push forward the frontier of human knowledge relevant to optimizing the world, I think you're not really succeeding at effective altruism.
It is possible that frontier-of-human-knowledge posts should be paid for with money rather than karma, since karma just isn't well suited to rewarding them. But yeah, it seems like it distorts the onboarding experience of what people learn to do on the forum.
A related, important consideration when Lightcone arranged to buy the Rose Garden Inn (for reasons similar to Wytham Abbey) is that the Inn can also be resold if it turns out not to be as valuable. So thinking of this as "15 million spent" isn't really right here.
The Rose Garden Inn is even at a comparable price point to pressure-test against: it's in the same general ballpark of distance to most of the potential users, roughly the same price, within a factor of 2 in room count, etc., but way more run down and, as recent break-ins have shown, perhaps way more vulnerable to people just walking onto the premises and stealing construction materials as they work to fix it up.
I do think the Lightcone example is a large part of why I'm not up in arms about this. They've demonstrated in their existing somewhat s...
(it'd be handy to have a link in the opening paragraph so if I wanna avoid spoilers I can go do that easily)
I'm not sure what you're imagining, in terms of overall infrastructural updates here. But here's a post that is, in some sense, a followup to this one:
Where are you expecting to find your audience? (I feel surprisingly ignorant on how journal projects like this bootstrap their way into wider readership)
You've probably set your account to use Markdown, specifically. Go to your user settings, open "site customizations", and check that you don't have "use markdown" set.
While I agree with Vaidehi's comments on whether "value drift" is the right descriptor, I think it's true that the proportion of in-practice priorities has probably shifted.
As someone who endorses the overall shift towards longtermist priorities, I still do agree with this post. I think it's important people be thinking for themselves and not getting tugged along with social consensus.
My answer is that you should primarily be focused on saving, so that you have the financial freedom to pivot, change jobs, learn more, or found an organization. Previously, I recommended new EAs (esp. college students) give 1% and save at least 10% (so that they were building at least some concrete altruistic habits, while mostly focusing on building up slack).
I think this remains good practice in the current environment. (Giving 1% is somewhat a symbolic gift in the first place, and I think it's still a useful forcing function to think about which organizati...
(LW Developer here: there's a code update ready-to-ship that updates the /reviewVoting page to show the outcome. It's been a bit delayed in merging roughly because JP and I are in different timezones)
I definitely still stand by the overall thrust of this post, which I'd summarize as:
"The default Recommended EA Action should include saving up runway. It's more important to be able to easily switch jobs, or pivot into a new career, or absorb shocks while you try risky endeavors, than to donate 10%, especially early in your career. This seems true to me regardless of whether you're primarily earning to give, or hoping to do direct work, or aren't sure."
I'm not particularly attached to my numbers here. I think people need more runway than they think, and I...
I wrote a fairly detailed self-review of this post on the LessWrong 2019 Review last year. Here are some highlights:
This was among the most important things I read recently, thanks! (Mostly via reminding me "geez holy hell it's really hard to know things.")
That is helpful, thanks. I've been sitting on this post for years and published it yesterday while thinking generally about "okay, but what do we do about the mentorship bottleneck? how much free energy is there?", and "make sure that starting-mentorship is frictionless" seems like an obvious mechanism to improve things.
https://forum.effectivealtruism.org/posts/JJuEKwRm3oDC3qce7/mentorship-management-and-mysterious-old-wizards
In another comment you mention:
(One example would be the high levels of self-censorship required.)
I'm curious what the mechanism underlying the "required-ness" is. i.e. which of the following, or others, are most at play:
A related thing I'm wondering is whether you considered anything like "going out with a bang", where you tried... just not self-censoring, and... probably lo...
- you'd get voted out of office
No, not this one. I don't think there was anything I wanted to say that would have been harmful enough to turn the Eye of Sauron(*) upon me.
- there are costs imposed directly on you/people-close-to-you (i.e. stress)
Nah, any stress would have been a tertiary effect from...
- you'd lose support from your political allies that you need to accomplish anything
This was the big one. I was already a black sheep when I got voted into office; I had negative amounts of political capital within my party. I had to focus a ton of...
The issue isn't just the conflation, but missing a gear about how the two relate.
The mistake I was making, that I think many EAs are making, is to conflate different pieces of the moral model that have specifically different purposes.
Singer-ian ethics pushes you to take the entire world into your circle of concern. And this is quite important. But, it's also quite important that the way that the entire world is in your circle of concern is different from the way your friends and government and company and tribal groups are in your circle of concern.
In part...
Just wanted to throw up my previous exploration of a similar topic. (I think I had a fairly different motivation than you – namely I want young EAs to mostly focus on financial runway so they can do risky career moves once they're better oriented).
tl;dr – I think the actual Default Action for young EAs should not be giving 10%, but giving 1% (for self-signalling), and saving 10%.
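To make the arithmetic behind "save 10%" concrete, here's a minimal sketch (my own illustrative numbers and function names, not anything prescriptive) of how a savings rate translates into months of runway:

```python
# Rough sketch: months of runway accumulated from a given savings rate.
# Assumes spending is a fixed fraction of income and ignores interest;
# all numbers are illustrative, not recommendations.

def months_of_runway(savings_rate: float, expense_rate: float, months_saving: int) -> float:
    """Months of expenses covered after saving `savings_rate` of income
    for `months_saving` months, where monthly spending = expense_rate * income."""
    saved = savings_rate * months_saving  # total saved, in units of monthly income
    return saved / expense_rate           # converted to units of monthly expenses

# Saving 10% of income for 3 years, while spending 70% of income:
print(round(months_of_runway(0.10, 0.70, 36), 1))  # → 5.1
```

The point being: even a modest 10% savings rate buys several months of genuine optionality within a few years, which is the slack that makes risky career moves possible.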
I recently chatted with someone who said they've been part of ~5 communities over their life, and that all but one of them felt more like a "real community" than the rationalists do. So maybe there's plenty of good stuff out there and I've just somehow filtered it out of my life.
The "real communities" I've been part of are mostly longer-established, intergenerational ones. I think starting a community with almost entirely 20-somethings is a hard place to start from. Of course most communities started like that, but not all of them make it to being intergenerational.
Alas, I started writing it and then was like "geez, I should really do any research at all before just writing up a pet armchair theory about human motivation."
I wrote this Question Post to try to get a sense of the landscape of research. It didn't really work out, and since then I... just didn't get around to it.
Currently, there are only so many people who are looking to make friends, or hire at organizations, or start small-scrappy-projects together.
I think most EA orgs started out as a small scrappy project that initially hired people they knew well. (I think early-stage Givewell, 80k, CEA, AI Impacts, MIRI, CFAR and others almost all started out that way – some of them still mostly hire people they know well within the network, some may have standardized hiring practices by now)
I personally moved to the Bay about 2 years ago and shortly thereaft...
I expect to want to link this periodically. One thing I could use is clearer survey data about how often volunteering is useful, and when it is useful almost-entirely-for-PR reasons. People who are quite reluctant to think volunteering isn't useful will often say "My [favorite org] says they like volunteers!" (My background assumption is that their favorite org probably likes volunteers and needs to say so publicly, but primarily because of long-term-keeping-people-engaged reasons. But, I haven't actually seen reliable data here.)
I just donated to the first lottery, but FYI I found it surprisingly hard to navigate back to it, or link others to it. It doesn't look like the lottery is linked from anywhere on the site and I had to search for this post to find the link again.
The book The Culture Map explores these sorts of problems, comparing many cultures' norms and advising on how to bridge the differences.
In Senegal people seem less comfortable by default expressing disagreement with someone above them in the hierarchy. (As a funny example, I've had a few colleagues who I would ask yes-or-no questions and they would answer "Yes" followed by an explanation of why the answer is no.)
Some advice it gives for this particular example (at least in several 'strong hierarchy' cultures), is instead of a ...
Tying in a bit with Healthy Competition:
I think it makes sense (given my understanding of the views of the folk at 80k) for them to focus the way they are. I expect research to go best when it follows the interests and assumptions of the researchers.
But, it seems quite reasonable if people want advice for different background assumptions to... just start doing that research, and publishing. I think career advice is a domain that can definitely benefit from having multiple people or orgs involved, just needs someone to actually step up and do it.
I work for Habryka, so my opinion here should be discounted. (for what it's worth I think I have disagreed with some of his other comments this week, and I think your post did update me on some other things, which I'm planning to write up). But re:
this seems egregiously inaccurate to me. Two of the three journalists said some flavor of "it's complicated" on the topic of ...
I think it's worth pointing to the specifics of each, because I really don't think it's unreasonable to gloss as "all of whom disagreed."
This goes without saying.