Endorsement by the Democratic congressional leadership. There are plenty of low-information voters who hardly follow politics but generally prefer Democrats to Republicans, so in the primary they are more likely to vote for the candidate endorsed by the people who aim to get a Democratic majority in Congress.
So now that it's over, can someone explain what the heck was up with SBF donating $6m to HMP in exchange for a $1m donation to Flynn? From an outside perspective it seems tailor-made to look vaguely suspicious and generate bad press, without seeming to produce any tangible benefits for Flynn or EA.
It seems like these observations could be equally well explained by Paul correctly having high credence in long timelines and giving advice that is appropriate in worlds where long timelines are true, without explicitly trying to persuade people of his views on timelines. Given that, I'm not sure there's any strong evidence that this is good advice to keep in mind when you actually do have short timelines.
It's probably the lizardman constant showing up again -- if ~5% of people answer randomly and <5% of the population are actually veg*ns, then many of the self-reported veg*ns will have been people who answered randomly.
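A quick back-of-the-envelope sketch of why (every number below is a placeholder assumption, not a survey figure):

```python
# Toy mixture model: random answerers can dominate a rare category.
true_veg_rate = 0.03       # assumed true share of veg*ns in the population
random_rate = 0.05         # assumed share answering randomly (lizardman constant)
p_random_says_veg = 0.5    # assumed chance a random answerer ticks "veg*n"

# Share of all respondents who report being veg*n
reported_veg = true_veg_rate * (1 - random_rate) + random_rate * p_random_says_veg
# Fraction of those self-reports that come from random answerers
noise_share = (random_rate * p_random_says_veg) / reported_veg
print(f"{noise_share:.0%} of self-reported veg*ns are noise")  # ~47% with these numbers
```

So even with a small random-answer rate, roughly half the self-reported veg*ns in this toy setup would be noise, which is enough to badly distort any follow-up questions asked only of that group.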
The next logical step is to evaluate the novel ideas, though, where a "cadre of uber-rational people" would be quite useful IMHO. In particular, a small group of very good evaluators seems much better than a large group of less epistemically rational evaluators who could be collectively swayed by bad reasoning.
Have you read this paper suggesting that there is no good evidence of a connection between climate change and the Syrian war? I found it quite persuasive.
What if rooms at the EA Hotel were charged at cost price by default, and you allocated "scholarships" based on a combination of need and merit, as many US universities do? This might avoid a negative feedback cycle (because you can retain the most exceptional people) while reducing costs and making the EA Hotel a less attractive target for unaligned people to take resources from.
With the charity structure we're setting up, charging cost price will also amount to a grant in the form of a partial subsidy. Charging anyone less than market rate (~double cost price) means they are a beneficiary of the charity. So in practice everyone will have to apply for a grant of free accommodation, board and stipend, and the amount given (total or partial subsidy) will depend on their need and merit.
Due to politicization, I'd expect reducing farm animal suffering/death to be much cheaper/more tractable per animal than reducing abortion is per fetus; choosing abortion as a cause area would also imperil EA's ability to recruit smart people across the political spectrum. I'd guess that saving a fetus would need to be ~100x more important in expectation than saving a farm animal for reducing abortions to be a potential cause area; in an EA framework, what grounds are there for believing that to be true?
Note: It would also be quite costly for EA as a movement to generate a better-researched estimate of the parameters due to the risk of politicizing the movement.
Reducing global poverty and improving farming practices lack philosophically attractive problems (for a consequentialist, at least), yet EAs work heavily on them all the same.
I think this comes from an initial emphasis on short-term, easily measured interventions (promoted by the "$x saves a life" meme, the drowning child argument, etc.) among the early cluster of EA advocates. Obviously, the movement has since branched out into cause areas that trade certainty and immediate benefit for the chance of higher impact, but these tend to be clustered in "...
What does it mean to be "pro-science"? In other words, what might a potential welfarist, maximizing, impartial, and non-normative movement that doesn't meet this criterion look like?
I ask because I don't have a clear picture of a definition that would be both informative and uncontroversial. For instance, the mainstream scientific community was largely dismissive of SIAI/MIRI for many years; would "proto-EAs" who supported them at that time be considered pro-science? I assume that excluding MIRI does indeed count as controversial, but then I don't have a clear picture of what activities/causes being "pro-science" would exclude.
edit: Why was this downvoted?
As a scientist, I consider science a way of learning about the world, and not what a particular group of people say. I think the article is fairly explicit about taking a similar definition of "science-aligned":
(i) the use of evidence and careful reasoning to work out...
(...)
- Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on careful rigorous argument and theoretical models as well as data.
There is usually a vast body of existing relevant work on a topic across va...
An example of what I had in mind was focusing more on climate change when running events like Raemon's Question Answering hackathons. My intuition says that it would be much easier to turn up insights like the OP than insights of "equal importance to EA" (however that's defined) in e.g. technical AI safety.
The answer to your question is basically what I phrased as a hypothetical before:
participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.
I was involved in EA at university for 2 years before coming to believe Catholicism is true, and it didn't seem like Church dogma conflicted with my pro-EA intuitions at all, so I've just stayed with it. It helped that I wasn't ever an EA for rigidly consequentialist reasons; I just wanted to help people and EA's analytical approach was a na...
I downvoted the post because I didn't learn anything from it that would be relevant to a discussion of C-GCRs (it's possible I missed something). I agree that the questions are serious ones, and I'd be interested to see a top level post that explored them in more detail. I can't speak for anyone else on this, and I admit I downvote things quite liberally.
Tl;dr: the moral framework of most religions is different enough from EA's to make this reasoning nonsensical; it's an adversarial move to try to change religions' moral frameworks, but there's potentially scope for religions to adopt EA tools.
Like I said in my reply to khorton, this logic seems very strange to me. Surely the veracity of the Christian conception of heaven/hell strongly implies the existence of an objective, non-consequentialist morality? At that point, it's not clear why "effectively doing the most good" in this man...
Thank you! I'm not sure, but I assume that I accidentally highlighted part of the post while trying to fix a typo, then accidentally e.g. pressed "ctrl-v" instead of "v" (I often instinctively copy half-finished posts into the clipboard). That seems like a pretty weird accident, but I'm pretty sure it was just user error rather than anything to do with the EA forum.
This doesn't seem like a great idea to me for two reasons:
1. The notion of explicitly manipulating one's beliefs about something as central as religion for non-truthseeking reasons seems very sketchy, especially when the core premise of EA relies on an accurate understanding of highly uncertain subjects.
2. Am I correct in saying the ultimate aim of this strategy is to shift religious groups' dogma from (what they believe to be) divinely revealed truth to [divinely revealed truth + random things EAs want]? I'm genuinely not sure if I interpreted the post correctly, but that seems like an unnecessarily adversarial move against a set of organized groups with largely benign goals.
Yeah, I don't think I phrased my comment very clearly.
I was trying to say that, if the Christian conception of heaven/hell exists, then it is highly likely that an objective non-utilitarian morality exists. It shouldn't be surprising that continuing to use utilitarianism within an otherwise Christian framework yields garbage results! As you say, a Christian can still be an EA, for most relevant definitions of "be an EA".
I'm fairly confident the Church does not endorse basing moral decisions on expected value analysis; that says absolutely nothing about the compatibility of Catholicism and EA. For example, someone with an unusually analytical mindset might see participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.
This example of a potentially impactful and neglected climate change intervention seems like good evidence that EAs should put substantially more energy towards researching other such examples. In particular, I'm concerned that the neglect of climate change has more to do with its lack of philosophically attractive problems relative to e.g. AI risk, and less to do with the marginal impact of working on the cause area.
It seems like that question would interact weirdly with expectations of future income: as a college student I donate ~1% of my expenses, but if I could only save one life, right now, I would probably try to take out a large, high-interest loan to donate a large sum. That depends on the availability of loans, risk aversion, expectations of future income, etc. much more than it does on my moral values.
Unfortunately I find it hard to give examples that are comprehensible without context that is either confidential or would take me a lot of time to describe. Very very roughly I'm often not convinced by the use of quantitative models in research (e.g. the "Racing to the Precipice" paper on several teams racing to develop AGI) or for demonstrating impact (e.g. the model behind ALLFED's impact which David Denkenberger presented in some recent EA Forum posts). OTOH I often wish that for organizational decisions or in direct feedback more q...
A few thoughts this post raised for me (not directed at OP specifically):
1. Does RAISE/the Hotel have a standardized way to measure the progress of people self-studying AI? If so, especially if it's been vetted by AI risk organizations, it seems like that would go a long way towards resolving this issue.
2. Does "ea organisations are unwilling to even endorse the hotel" refer to RAISE/Rethink Charity (very surprising & important evidence!), or other EA organizations without direct ties to the Hotel?
3. I would be curious what the margin...
2. RAISE very much does endorse the hotel (especially given that the founder works for and lives at the hotel, and the hotel was integral to their progress over the last 6 months). See e.g. here and here. We have no formal relationship with Rethink Charity (or Rethink Priorities in particular) - individuals at the hotel have applied for and got work from them independently.
3. The marginal cost of adding a new resident when already at ~75% capacity is ~£4k/yr.
6. I wonder too. I wonder also how different it would be if it was done after another 6-12 months of getting established.
Did the report consider increasing access to medical marijuana as an alternative to opioids? If so, what was the finding? (I didn't see any mention while skimming it.) My impression was that many leaders in communities affected by opioid abuse see access to medical marijuana as the most effective intervention. One (not particularly good) example
Are you saying there are groups who go around inflicting PR damage on generic communities they perceive as vulnerable, or that there are groups who are inclined to attack EA in particular, but will only do so if we are perceived as vulnerable (or something else I'm missing)? I'm having a hard time understanding the mechanism through which this occurs.
The law school example seems like weak evidence to me, since the topics mentioned are essential to practicing law, whereas most of the suggested "topics to avoid" are absolutely irrelevant to EA. Women who want to practice law are presumably willing to engage with these topics as a necessary step towards achieving their goal. However, I don't see why women who want to effectively do good would be willing to (or expected to) engage with irrelevant arguments they find uncomfortable or toxic.
If the topics to avoid are irrelevant to EA, it seems preferable to argue that these topics shouldn't be discussed because they are irrelevant than to argue that they shouldn't be discussed because they are offensive. In general, justifications for limiting discourse that appeal to epistemic considerations (such as bans on off-topic discussions) appear to generate less division and polarization than justifications that appeal to moral considerations.
I like the idea of profiting-to-give as a way to strengthen the community and engage people outside of the limited number of direct work EA jobs; however, I don't see how an "EA certification" effectively accomplishes this goal.
I do think there would be a place for small EA-run businesses in fields with:
Such a business might plausibly be able to donate at least as much money as its employees were previously donating individually by virtue of their competitive success in the...
That's a good point, but I don't think my argument was brittle in this sense (perhaps it was poorly phrased). In general, my point is that climate change amplifies the probabilities of each step in many potential chains of catastrophic events. Crucially, these chains have promoted war/political instability in the past and are likely to in the future. That's not the same as saying that each link in a single untested causal chain is likely to happen, leading to a certain conclusion, which is my understanding of a "brittle argument".
On the other hand, I think it's fair to say that e.g. "Climate change was for sure the primary cause of the Syrian civil war" is a brittle argument.
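To make the distinction concrete, here's a toy model (every probability is invented purely for illustration): betting on one fully specified chain is brittle, but a modest amplifier applied across many possible chains moves the aggregate risk noticeably.

```python
# Toy contrast: one specific causal chain vs. a "threat multiplier" over many chains.
base_step_p = 0.3        # assumed baseline probability that each link holds
amplifier = 1.2          # assumed factor by which climate change raises each link
links_per_chain = 3
n_chains = 20            # assumed number of independent potential chains

p_one_chain = (base_step_p * amplifier) ** links_per_chain
p_any_baseline = 1 - (1 - base_step_p ** links_per_chain) ** n_chains
p_any_amplified = 1 - (1 - (base_step_p * amplifier) ** links_per_chain) ** n_chains

print(f"one fully specified chain: {p_one_chain:.2f}")                   # ~0.05, still unlikely
print(f"any of {n_chains} chains, no amplifier: {p_any_baseline:.2f}")   # ~0.42
print(f"any of {n_chains} chains, amplified: {p_any_amplified:.2f}")     # ~0.62
```

With these made-up numbers, claiming any single chain is fairly unlikely to hold, but a 1.2x amplifier on every link across many chains raises the chance that at least one chain plays out from roughly 0.42 to roughly 0.62.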
I'd previously read that there was substantial evidence linking climate change --> extreme weather --> famine --> Syrian civil war (a major source of refugees). One example: https://journals.ametsoc.org/doi/10.1175/WCAS-D-13-00059.1 This paper claims the opposite though: https://www.sciencedirect.com/science/article/pii/S0962629816301822.
"The Syria case, the article finds, does not support ‘threat multiplier’ views of the impacts of climate change; to the contrary, we conclude, policymakers, commentators and scholars alike should exercise f...
Same