I would really appreciate further analysis of family planning as an intervention. Some specific questions I’d like to see tackled:
I don't think SEADS still exists; they haven't posted in a while, and their website is dead:
https://seads-ai.org/portfolio.html
>What I personally think is that those who are pledgees should consider donation matching as part of a prospective job's compensation as it is a permanent cost. (also would incentivise negotiation in that direction)
I'm not sure I understand. Are you suggesting that GWWC should include the donation match in the denominator, but not the numerator? Or include in both? Or are you not talking about GWWC at all here?
I'm giving to the EA Animal Welfare Fund.
https://funds.effectivealtruism.org/funds/animal-welfare
I thought this was likely among the best giving opportunities around, and was further persuaded by GWWC's investigation:
https://docs.google.com/document/d/1hqYNZ9zJfe3D_nyJ4b21J0IJs210upAXTw8fPWnYJe8/edit#heading=h.kiw67f2s2v90
You say "don't yet"... Are you aware of anyone working on a project to incorporate deontology or other non-utilitarian factors into cause prioritization?
because we don't yet have a way to give enough weight to subjective wellbeing, the value of self-determination, or justice
Do you have thoughts on giving now vs. later?
E.g. investing to give (https://www.founderspledge.com/research/investing-to-give)?
If you got Google stock options or grants in 2013 (I don't know if you did), those would have increased in value by about 800%, so could your giving go much further if delayed to take advantage of the gain? Or do you think of it some other way?
Thanks.
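For what it's worth, the trade-off in the question can be sketched numerically. This is a toy comparison with illustrative numbers only: the ~800% figure is from the question above, while the "impact growth rate" for giving earlier is a made-up assumption, not a claim about any real cause.

```python
# Toy giving-now-vs-later comparison. All rates are illustrative
# assumptions, not claims about actual returns or cause growth.
donation = 10_000            # dollars available in 2013

# "Increased about 800%" over ~9 years => ending value is 9x the start.
market_multiple = 9.0

# Suppose impact from giving earlier also compounds (movement growth,
# compounding health benefits, etc.) -- assumed 15%/year here.
impact_growth_rate = 0.15
years = 9

give_now_impact = donation * (1 + impact_growth_rate) ** years
give_later_impact = donation * market_multiple

# Which option wins depends entirely on the assumed rates.
print(f"give now:   {give_now_impact:,.0f}")
print(f"give later: {give_later_impact:,.0f}")
```

Under these particular made-up numbers, giving later comes out ahead, but a higher assumed impact-growth rate (or a higher extinction risk, as discussed below) flips the conclusion.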
Do you have thoughts on giving now vs. later?
The higher you think the risk of extinction is, the less valuable giving later looks: you probably do better giving now either to improve the lives of pre-extinction people or to reduce the risk of extinction.
Futures where we avoid extinction are likely pretty strange, and I think historical reasoning around growth patterns seems unlikely to apply well. I don't know how this goes overall, but it generally makes me more optimistic around capacity building (movement, governance, institutions, technology) than a...
Hi Joel,
I would love to do this but do not have the bandwidth right now. I believe that Froolow is also a health economist and may be available.
Cheers
Equally enthusiastic about your project, good luck. Would love to hear the answer to this though -- and also why the broad name? Would you ever move beyond factory farming?
Hi Saulius, thank you for the interesting post. When you consider wild animal interventions do you include wild-caught fish?
e.g.
https://forum.effectivealtruism.org/posts/tykEYESbJkqT39v64/directly-purchasing-and-distributing-stunning-equipment-to
Hi Tyner. This is one of the questions that I decided to not clarify in the article for the sake of conciseness, so thank you for asking.
Wild-caught fish die under human control, so working on killing them more humanely doesn't carry the complicated, uncertain consequences of WAW interventions that I discuss. Relative to WAW issues, it is easy to research and is unambiguously good if we can do it right. To me, it is precisely the kind of intervention we should focus on first, before tackling super complex WAW issues. So everything that I say abo...
Hi Edward,
You might be interested in the work of the Nonhuman Rights Project. They are attempting to establish the legal and political frameworks to ensure that animals (e.g. tigers) are treated well by people.
https://www.nonhumanrights.org
Thanks for writing this.
Maybe one way to address this would be separate posts? The first raises the problems, shares emotions. The second suggests particular actions that could help.
Is it this one?
https://forum.effectivealtruism.org/posts/bXP7mtkK6WRS4QMFv/are-bad-people-really-unwelcome-in-ea
This was another discussion of EA/FIRE
https://forum.effectivealtruism.org/posts/j2ccaxmHcjiwGDs9T/ea-vs-fire-reconciling-these-two-movements
Below is a link to last year's Philanthropy 50. It is US-only and ranks donors by amount given:
https://archive.ph/XFfEI
This sounds like a great project and I would really like to participate, but cannot make the commitment for that date span. Is there a good way to stay in the loop for future cohorts? Thanks!
Does "calibrated probability assessment" training work?
In "How to Measure Anything" chapter 5, Douglas Hubbard describes the training he provides to individuals and organizations that want to improve their skills. He provides a sample test which is based on general knowledge trivia, questions like
"What is the air distance from LA to NY?"
for which the student is supposed to provide a 90% confidence interval. There are also some true/false questions where you provide your level of confidence in the answer e.g.
"Napoleon was born ...
If you search the forum for the EAIF tag you can get some more details on past grants. I'm not sure if this gives you quite what you're looking for, or not.
https://forum.effectivealtruism.org/topics/effective-altruism-infrastructure-fund?sortedBy=magic
The reading-time estimates on LessWrong crossposts seem to be wrong. For example, this one says 1 minute but should (I would guess) say 5-10:
Doesn't really make sense to me and would lead to some very weird conclusions.
For example: I'm a manager, and one of my staff tells me this report takes a very long time because there are many manual steps, and that we could automate those steps using software X, which they used in a previous role and which costs $Y. By the logic of this maxim, I should ignore the proposed solution AND ignore the initial complaint.
Because they proposed a solution, now I should think it less likely that the report takes a very long time? Seems totally nonsensical (or I'm not understanding what you're actually saying).
The discussion on Erik Hoel's piece is here:
https://forum.effectivealtruism.org/posts/PZ6pEaNkzAg62ze69/ea-criticism-contest-why-i-am-not-an-effective-altruist
Hi Fai, I agree with whoever encouraged you to post more. I always enjoy and appreciate your stuff even when we don't 100% agree.
The sentence below is difficult to parse; what do you actually mean? That it was for economic reasons, that it was not for economic reasons, or something else entirely?
>Well, I personally did not have much hope in humanity's moral progress, until I recently got moderately convinced that it’s less likely than not that we abolished slavery mainly for economic reasons. And in case you think that it is impossible to h...
Expanding our exploitation of animals is a moral step backward. This does not seem like the kind of project EA people or organizations should be supporting.
>100 such ideas
Here's another with the same vibes:
https://forum.effectivealtruism.org/posts/3caZ7LhMsvsS7kRrz/hobbit-manifesto
A smaller change that I think would be beneficial is to eliminate strong upvotes on your own comments. I really don't see how those have a use at all.
Thanks for writing, I agree with a bunch of these.
As far as #14, is this something you've thought about trying to tackle at Rethink? I don't know of another org that would be better positioned...
Another organization that is spending some time on this is SoGive (sogive.org). They have impact assessments for groups like Planned Parenthood and Muslim Aid.
I can provide an anecdotal use case that is maybe not quite tackled in your write-up. My mother-in-law is a retired dentist who gives money to the American Dental Association every year. This strikes me as an ineffective choice, mostly because US dentists are typically quite wealthy. If I told her "forget all that, give your money to Humane League/Helen Keller/Intelligence.o...
This is my favorite criticism contest entry. The amount of actionable information is really great. I would love to see various organizations move to incorporate these methods, where applicable. Very nice use of visuals as well.
I know you said in a previous post that you are not involved in EA. I hope you'll consider staying involved after this criticism contest. It seems you have a lot of value you could add.
I don't really know how giving works for a very wealthy person, but it seems unlikely to me that they or someone on their staff would just look at the GiveWell site and be done. It seems much more likely that they would have a conversation with GiveWell staff or others, which would create an opportunity for more nuanced advice. So I really doubt it matters much for that scenario.
"If we had X amount of money we'd do this" page, with milestone targets?
That's a neat idea!
I took the AAC online course in 2021. I thought it was great. I learned a lot about animal advocacy, existing organizations, needed skills, potential roles...and made a bunch of animal-relevant connections on LinkedIn. I have subsequently recommended it to anyone who is interested in finding a career in animal advocacy. If that is you, and you're not sure what steps to take, definitely do the course!
Very interesting post, thank you for the research.
Based on your model, should Open Phil etc. be aiming for 50% research in every year? Or should it be aiming for a very high level of research funding now, knowing that it can take actions on better opportunities in the future? Maybe the research percentage by year should be something like 100%, 95%, 90% etc?
I missed that detail, thanks for pointing it out. To me this makes the case somewhat worse from a practical standpoint. If these people are already well placed in the GOP, then why would such a candidate run third-party rather than just as a Republican?
Thanks for posting this again, I'm excited about this project!
Does anyone know of a US-based charity that is supporting this initiative? That way I could get my employer's donation match.
Funding things you don't really believe in as a form of sabotage would damage the reputation and future trust of the funder and potentially EA as a whole.
Giving a larger platform (e.g. TV ads) to people with far-right ideas could make those ideas more mainstream, e.g. https://en.wikipedia.org/wiki/Overton_window
Seems like a bad idea.
Hi Rosie, great post!
When I looked at this briefly a year ago I flagged two organizations that seemed promising:
https://www.globaldentalrelief.org/
Both are more holistic than the specific interventions you looked at. Did you happen to look at either of these in your research?
Thanks!
Question - how did you select judges for your contest? How did you balance expertise with diversity?
Thanks!
The project is under development. In time, all being well, it will function as a workshop venue in Oxford.
>I would be curious to understand why you assign a probability of only 10 % to chickens, given moral patienthood, having a moral weight larger than 0.01.
Sorry, not sure I understand, my intention was to apply probability of moral patienthood at 95%, not 10%.
Great post!
FWIW I re-ran the model with two changes: (a) probability of moral patienthood = 0.95, and (b) value of a chicken compared to a human ranging between 1/100,000 and 1/100. Here are the results for the ratio between the cost-effectiveness of CCCW and MIF:
mu = 14.78
sigma = 66.75
5th percentile = 0.0698
median = 2.05
95th percentile = 62.0
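For anyone who wants to reproduce this kind of summary, here is a minimal Monte Carlo sketch. The distributions and parameters below are placeholder assumptions for illustration, not the actual model's inputs; only the 0.95 patienthood probability and the 1/100,000-1/100 moral-weight range come from the comment above.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Assumptions from the comment above:
p_moral_patient = 0.95
# Chicken-vs-human moral weight, log-uniform between 1/100,000 and 1/100.
moral_weight = np.exp(rng.uniform(np.log(1e-5), np.log(1e-2), size=n))
is_patient = rng.random(n) < p_moral_patient

# Placeholder "welfare gained per dollar" draws for each intervention --
# NOT the distributions used in the actual model.
cccw_effect = rng.lognormal(mean=0.0, sigma=1.0, size=n) * moral_weight * is_patient
mif_effect = rng.lognormal(mean=-5.0, sigma=0.5, size=n)

# Ratio of cost-effectiveness estimates, summarized like the stats above.
ratio = cccw_effect / mif_effect
for label, value in [
    ("mean", ratio.mean()),
    ("5th percentile", np.percentile(ratio, 5)),
    ("median", np.median(ratio)),
    ("95th percentile", np.percentile(ratio, 95)),
]:
    print(f"{label} = {value:.3g}")
```

With a heavy-tailed ratio like this, the mean sitting far above the median (as in the numbers above) is exactly what you'd expect.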
Hi Ann,
Some quibbles with your book list. Animal Liberation came out in 1975, not 2001.
https://www.goodreads.com/book/show/29380.Animal_Liberation
You overlooked The Scout Mindset, which came out in 2021.
https://www.goodreads.com/book/show/42041926-the-scout-mindset
Also,
>Essentially, neartermist causes served as an on-ramp to EA (and to longtermism). Getting rid of that on-ramp seems like a bad idea.
Do you worry at all about a bait-and-switch experience that new people might have?
Can you clarify the difference between these two paragraphs? They read the same to me, but I'm guessing I'm missing something here.
(1) i.e. reflective preferences are always prioritized over revealed preferences whenever they disagree, then the result is practically the same, and we may as well ignore non-reflective beings.
(2) However, if we instead allow continuous tradeoffs between reflective preferences and revealed preferences, optionally ignoring revealed preferences in an individual when their reflective preferences are available, then we can get continuous tradeoffs between human and nonhuman animal preferences.
Thank you for trying <3