Please submit more concrete ones! I added "poetic" and "super abstract" as an advantage and disadvantage for fire.
If the organization chooses to directly support the new researcher, then the net value depends on how much better their project is than the next-most-valuable project.
This is nit-picky, but if the new researcher proposes, say, the best project the org could support, that does not necessarily mean the org can no longer support the second-best project (the "next-most-valuable project"); rather, it might mean the sixth-best project becomes the seventh-best, which the org then cannot support.
In general, adding a new project to the pool of projects does not displace the next-most-valuable project; it displaces the marginal project that would otherwise just barely have been funded, so the net value should be measured against that marginal project.
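To make the displacement point concrete, here is a minimal sketch with hypothetical project values and an assumed budget of six projects; the numbers are illustrative only, not drawn from the post.

```python
# Toy illustration (hypothetical numbers): an org with budget for 6 projects
# compares the counterfactual value of funding a strong new proposal.

def funded(projects, capacity=6):
    """Return the top-`capacity` projects by value."""
    return sorted(projects, reverse=True)[:capacity]

# Values of the org's existing project options, ranked 1st..7th.
existing = [100, 90, 80, 70, 60, 50, 40]

before = funded(existing)            # funds the 1st- through 6th-best
after = funded(existing + [120])     # new proposal (value 120) enters the pool

# The new project displaces the previously marginal (6th-best) project,
# not the 2nd-best, so its net value is 120 - 50 = 70.
net_value = sum(after) - sum(before)
print(net_value)  # 70
```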
I'll be looking forward to hearing more about your work on whistleblowing! I've heard some promising takes about this direction. Strikes me as broadly good and currently neglected.
(Take my strong upvote, I think people downvoting you don't realize you are the author of the post haha)
When applicants requested feedback, did they do that in the application or by reaching out after receiving a rejection?
Is that lognormal distribution responsible for
the cost-effectiveness is non-linearly related to speed-up time.
If yes, what's the intuition behind this distribution? If not, why is cost-effectiveness non-linear in speed-up time?
Something I found especially troubling when applying to many EA jobs is the sense that I am p-hacking my way in. Perhaps I am never the best candidate, but the hiring process is sufficiently noisy that I can expect to be hired somewhere if I apply to enough places. This feels like I am deceiving the organizations that I believe in and misallocating the community's resources.
There might be some truth in this, but it's easy to take the idea too far. I like to remind myself:
Thanks for this! I would still be interested to see estimates of eg mice per acre in forests vs farms and I'm not sure yet whether this deforestation effect is reversible. I'll follow up if I come across anything like that.
I agree that the quality of life question is thornier.
Under CP and CKR, Zuckerberg would have given higher credence to AI risk purely on observing Yudkowsky’s higher credence, and/or Yudkowsky would have given higher credence to AI risk purely on observing Zuckerberg’s lower credence, until they agreed.
Should that say lower, instead?
Decreasing the production of animal feed, and therefore reducing crop area, which tends to: Increase the population of wild animals
Could you share the source for this? I've wondered about the empirics here. Farms do support wild animals (mice, birds, insects, etc), and there is precedent for farms being paved over when they shut down, which prevents the land from being rewilded.
Suppose someone is an ethical realist: the One True Morality is out there, somewhere, for us to discover. Is it likely that AGI will be able to reason its way to finding it?
What are the best examples of AI behavior we have seen where a model does something "unreasonable" to further its goals? Hallucinating citations?
What are the arguments for why someone should work in AI safety over wild animal welfare? (Holding constant personal fit etc)
At least we can have some confidence in the total weight of meat consumed on average by a Zambian per year and the life expectancy at birth in Zambia.
We should also think about these on the margin. Ie the lives averted might have been shorter than average and consumed less meat than average.
I imagine a proof (by contradiction) would work something like this:
Suppose you place > 1/x probability on your credences moving up by a factor of x. That outcome alone contributes > prior * x * 1/x = prior to the expectation of your future beliefs. With the remaining probability mass, can we anticipate enough evidence in the other direction for our beliefs to still satisfy conservation of expected evidence? The lowest our credence can go is 0, but even if we place all of the remaining < 1 - 1/x probability on 0, the expectation of our future beliefs would still exceed the prior, contradicting conservation of expected evidence.
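Written out as a short derivation (a sketch under the assumption x > 1, with p the current credence and q the probability assigned to the upward move):

```latex
% Sketch of the contradiction, assuming x > 1, with p the current credence
% and q > 1/x the probability assigned to the credence rising to at least x*p.
\[
  \mathbb{E}[p_{\mathrm{future}}]
    \;\ge\; q\,(x p) + (1 - q)\cdot 0
    \;>\; \frac{1}{x}\, x p
    \;=\; p ,
\]
% contradicting conservation of expected evidence, which requires
% \(\mathbb{E}[p_{\mathrm{future}}] = p\).
```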
I would endorse all of this based on experience leading EA fellowships for college students! These are good principles not just for public media discussions, but also for talking to peers.
Thanks for the thorough response! I agree with a lot of what you wrote, especially the third section on Epistemic Learned Helplessness: "Bayesianism + EUM, but only when I feel like it" is not a justification in any meaningful sense.
I agree that we can construct thought experiments (Pascal's Mugging, acausal trade) with arbitrarily high stakes to swamp commonsense priors (even without religious scenarios or infinite value, which are so contested I think it would be difficult to extract a sociological lesson from them).
I sti...
Thanks for the excellent post!
I think you are right that this might be a norm/heuristic in the community, but in the spirit of a "justificatory story of our epistemic practices," I want to look a little more at
4. When arguments lead us to conclusions that are both speculative and fanatical, treat this as a sign that something has gone wrong.
First, I'm not sure that "speculative" is an independent reason that conclusions are discounted, in the sense of a filter that is applied ex-post. In your 15AI thought experiment, for example, I think ...
Copy that. I removed "smash," but I'm leaving the language kind of ambiguous because my understanding of this strategy is that it's not restricted to conventional regulations, but instead will draw on every available tool, including informal channels.
Thanks for following up and thanks for the references! Definitely agree these statements are evidence; I should have been more precise and said that they're weak evidence / not likely to move your credences in the existence/prevalence of human consciousness.
a very close connection between an entity’s capacity to model its own mental states, and consciousness itself.
The 80k episode with David Chalmers includes some discussion of meta-consciousness and the relationship between awareness and awareness of awareness (of awareness of awareness...). Would recommend to anyone interested in hearing more!
They make the interesting analogy that we might learn more about God by studying how people think about God than by investigating God itself. Similarly we might learn more about consciousness by investigating how people think about it...
We trust human self-reports about consciousness, which makes them an indispensable tool for understanding the basis of human consciousness (“I just saw a square flash on the screen”; “I felt that pinprick”).
I want to clarify that these are examples of self-reports about consciousness and not evidence of consciousness in humans. A p-zombie would be able to report these stimuli without subjective experience of them.
They are "indispensable tools for understanding" insofar as we already have a high credence in human consciousness.
Oh I see! Ya, crazy stuff. I liked the attention it paid to the role of foundation funding. I've seen this critique of foundations included in some intro fellowships, so I wonder if it would also especially resonate with leftists who are fed up with cancel culture in light of the Intercept piece.
I don't think anything here attempts a representation of "the situation in leftist orgs"? But yes lol same
https://forum.effectivealtruism.org/posts/MCuvxbPKCkwibpcPz/how-to-talk-to-lefties-in-your-intro-fellowship?commentId=YwQme9B2nHoH6fXeo
This is a response to D0TheMath, quinn, and Larks, who all raise some version of this epistemic concern:
(1) Showing how EA is compatible with leftist principles requires being disingenuous about EA ideas -> (2) this recruits people who join solely based on framing/language -> (3) people join the community who don't really understand what EA is about -> (4) confusion!
The reason I am not concerned about this line of argumentation is that I don't think it attends to the ways people decide whether to become more involved in EA.
(2) In my experience, people a...
Ya maybe if your fellows span a broad political spectrum, then you risk alienating some and you have to prioritize. But the way these conversations actually go in my experience is that one fellow raises an objection, eg "I don't trust charities to have the best interests of the people they serve at heart." And then it falls to the facilitator to respond to this objection, eg "yes, PlayPumps illustrates this exact problem, and EA is interested in improving these standards so charities are actually accountable to the people they serve," etc.
My sense is...
I agree with quinn. I'm not sure what the mechanism is by which we end up with lowered epistemic standards. If an intro fellow is the kind of person who weighs reparative obligations very heavily in their moral calculus, then deworming donations may very well satisfy this obligation for them. This is not an argument that motivates me very much, but it may still be a true argument. And making true arguments doesn't seem bad for epistemics? Especially at the point where you might be appealing to people who are already consequentialists, just consequentialists with a developed account of justice that attends to reparative obligations.
Thanks for the reply! I'm satisfied with your answer and appreciate the thought you've put into this area :) I do have a couple follow-ups if you have a chance to share further:
I expected to hear about the value of the connections made at EAG, but I'm not sure how to think about the counterfactual here. Surely some people choose to meet up at EAG but in the absence of the conference would have connected virtually, for example?
I also wonder about the cause areas of the EA-aligned orgs you cited. Ie, I could imagine longtermist orgs that are more talen...
The second point implies more of a bright line than a scalar dynamic, which seems consistent with scope insensitivity over lower donation amounts. That is, we might expect scope insensitivity to equalize the perception of $1m and $5m, but once you hit $10m, you attract negative media coverage. If we restrict ourselves to donation sizes that allow us to fly under the radar of national media outlets, then the scope insensitivity argument may still bite.
I have no idea what the finances for the event looked like, but I'll assume the best case that CEA at least broke even.
The conference seemed extravagant to me. We don't need so much security or staff walking around to collect our empty cups. How much money was spent to secure an endless flow of wine? There were piles of sweaters left at the end; attendees could opt in with their sizes ahead of time to calibrate the order.
Particularly in light of recent concerns about greater funding, it would behoove us to consider the harms of an opu...
Hi — I’m Eli from the EA Global team. Thanks for your thoughts on this — appreciate your concerns here. I’ll try to chip in with some context that may be helpful. To address your main underlying point, my take is that EA Globals have incredibly high returns on investment — EA orgs and members of the community report incredibly large amounts of value from our events. For example:
Hi Ann! Congratulations on this excellent piece :)
I want to bring up a portion I disagreed with and then address another section that really struck me. The former is:
Of course, co-benefits only affect the importance of an issue and don’t affect tractability or neglectedness. Therefore, they may not affect marginal cost-effectiveness.
I think I disagree with this for two reasons:
Why is "people decide to lock in vast nonhuman suffering" an example of failed continuation in the last diagram?
Thanks, Agustín! This is great.