Recent Discussion

We conducted this research on behalf of Equalia to evaluate the impact of their campaign to establish CCTV cameras in slaughterhouses. More information about Equalia's campaign is available on their website.


SUMMARY

Due to the desire to reduce animal welfare violations, CCTV cameras have been installed in slaughterhouses in a number of jurisdictions around the world. This has been driven by legal requirements (e.g. England, Israel), agreements between industry and government (e.g. the Netherlands), or retailer requirements (e.g. United States).

There have been no studies testing whether CCTV cameras actually deter violations of animal welfare regulations in slaughterhouses. Until a scientific study calculates the magnitude of the effect of CCTV on compliance with animal welfare regulations, it is difficult to be certain about the exact magnitude of CCTV's impact....

Thanks for the positive feedback :)

If you consider on-farm (rather than slaughterhouse) CCTV, the welfare benefits increase significantly, as you're monitoring a much longer period of each animal's life. However, the tractability of an on-farm CCTV campaign would probably* be much lower. Farmers often have closer, more personal relationships with their farms than slaughterhouse owners do with their slaughterhouses. Proposing to install CCTV on farms would likely trigger a lot of backlash from farmers (particularly given the common public image of the 'fami...

1 · Ren Springlea · 27m
Hey Fai! According to the crime research, the deterrent effects of CCTV depend on the slaughterhouse workers' perceived probability of detection, not the true probability of detection. So, in principle, it's possible for CCTV to have a meaningful deterrent effect even if the videos aren't watched 100% of the time. For example, a government could identify which slaughterhouses have the highest risk of non-compliance, focus on those slaughterhouses' footage, and then respond very quickly and clearly to any incidents it does detect. These visible, rapid responses would convey the impression to slaughterhouse workers that the feeds are being monitored, which would increase the perceived probability of detection even though not all feeds are watched.
1 · Ren Springlea · 30m
Thank you for the positive feedback :)

Ben Todd suggested back in 2019 that small donors might help organizations diversify their funding:

It’s also not healthy for an organisation to depend 100% on a single foundation for its funding. This means that until we have 3+ large foundations covering each organisation, small donors play a role in diversifying the funding base of large organisations. (Though note that you’re only providing this benefit if your grantmaking process is independent from the large donors.)

So... which organizations are looking for funding from small donors?

Open Philanthropy used to publish its suggestions, but their latest is from 2020. I imagine the funding landscape has changed since then.

(Also related: How should large donors coordinate with small donors?)

20 · Answer by KevinO · Sep 30, 2022

I'd guess that a lot of non-longtermist, non-EA-meta charities are more likely to be funding constrained and less likely to be topped up by FTX. I also suspect FTX isn't taking up all the opportunities for organizations to spend money, even for the ones it supports.

I suspect organizations with a research focus, such as Sentience Institute, ALLFED, and other answers on this post, are often happy to hire more researcher time with marginal donations.

Organizations that do marketing probably have room to spend more there, such as 80,000 Hours and Giving Wh...

5 · Answer by Peter Wildeford · 2h
Rethink Priorities - you can see our funding needs on our donate page [https://rethinkpriorities.org/donate]
10 · Answer by Joey · 2h
We try to keep a page with information (including room for funding numbers) for the organisations that get founded [https://www.charityentrepreneurship.com/our-charities] through Charity Entrepreneurship. Many of them are in a situation where marginal, small donors could make an impact.

Giving What We Can is excited to talk to your workplace or community group about effective giving this Giving Season.

If you bring the people, we’ll bring the content!

We’ve found that these kinds of engagements are successful for fundraising and for engaging people in the effective giving and effective altruism communities. In fact, our Head of Marketing, Grace, first heard about GWWC at a workplace talk!

You can register your interest in us running one of the following events for your group during November and December:

  • A talk on effective giving: 

A standard talk we usually give to groups and workplaces (options for 15min, 30min, 60min, new content is being developed right now).

  • A workshop on high-impact philanthropy: 

A new workshop format which aims to get people to reason about their approach to...

What's the minimum sized audience that you'd be happy to present to?


I came across this article from the Carnegie Council's Artificial Intelligence and Equality Initiative, and I can't help but feel that they misunderstand longtermism and EA. The article mentions the popularity of William MacAskill's new book "What We Owe the Future" and the case for considering future generations and civilization. I would recommend you read the article before reading my take below, but the Carnegie Council makes some common fallacious arguments against longtermism.

  1. They make it seem like in order to address longtermism, you have to completely ignore the present. I have never heard an EA argue for disregarding contemporary issues.
  2. They convey that longtermism requires you to "put all your eggs in one basket," the basket being longtermism and not today's problems.
  3. They argue that regulating AI will slow production. This is true, but letting an uncontrolled, rapidly accelerating, and risky technology like AI run unchecked could mean the end of humanity and mass suffering. The trade-off is therefore worthwhile, much as it is for regulating carbon emissions.

Will is promoting longtermism as a key moral priority, merely one of our priorities, not the sole priority. He'll say things like (heavily paraphrased from my memory) "we spend so little on existential risk reduction - I don't know how much we should spend, but maybe once we're spending 1% of GDP we can come back and revisit the question".

It's therefore disappointing to me when people write responses like this, responding to the not-widely-promoted idea that longtermism should be the only priority.

4MathiasKB4h
To me it seems they understood longtermism just fine and simply disagree with strong longtermism's conclusions. We have limited resources, and if you are a longtermist you think some to all of those resources should be spent ensuring the far future goes well. That means not spending those resources on pressing neartermist issues. If EAs, or in this case the UN, push for more government spending on the future, the question everyone should ask is where that spending should come from. If it's from our development aid budgets, that potentially means removing funding for humanitarian projects that benefit the world's poorest. This might be the correct call, but I think it's a reasonable thing to disagree with.
6Lauren Maria7h
I don't think the arguments are fallacious if you look at how strong longtermism is defined: positively influencing the future is not just a moral priority but the moral priority of our time. See general discussion here [https://www.centreforeffectivealtruism.org/longtermism] and in-depth discussion here [https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/]. Perhaps they should have made that distinction, since not all EAs take the strong longtermist view - including MacAskill himself, who doesn't seem certain.

Harry Truman once said: "It's amazing what you can accomplish if you don't care who gets the credit"

The Truman Prize is now live on the EA prize platform Superlinear. With a $100,000 prize pool, it awards $5,000-$10,000 prizes to Effective Altruists who decline credit in order to increase their impact, in ways that can't be publicized directly.

Theory of change: EA promotes caring about effectiveness over other goals like getting credit, but wanting credit or recognition for your work is natural. Rewarding people for maximizing impact over credit increases the health and future effectiveness of the community.

Example #1: Sam toils behind the scenes and makes a breakthrough on an important problem. Sam suggests the idea to, say, a political figure or other organization who can then take credit, because that leads...

Good suggestion! Updated the description.

Engaged with psychology or mental health? This is for you.

Impartial compassion. Rationality. Wellbeing. For a movement built on these values, EA likely underutilizes psychology professionals. Together with our supporters from the Global Priorities Institute, the Center for Effective Altruism, and the Happier Lives Institute, HIPsy aims to help people engaged with psychology or mental health maximize their impact. 

Vision

We will follow in the footsteps of the EA Consulting Network, High-Impact Medicine, and High-Impact Athletes. Accordingly, the goal of HIPsy shall be to increase the likelihood of high-impact decisions, make collaboration and information processes more effective, and reduce the risk of value drift for people engaged in psychology or mental health. 

Relevant resources shall be available, easy to access, and use: 

  • up-to-date high-quality information, 
  • career and work advice, 
  • networking and collaboration opportunities.

Psychological know-how shall...

4Geoffrey Miller10h
Inga - this sounds exciting and useful, and I'm happy to help with it however I can. I'm a psychology professor at U. New Mexico (USA) who's taught classes on 'The psychology of Effective Altruism', human emotions, human sexuality, intelligence, evolutionary psychology, etc., and I've written 5 books and lots of papers on diverse psych topics, the most relevant of which might be the ones on mental disorders (schizophrenia, depression, autism), psych research methods, individual differences (intelligence, personality traits, behavior genetics), consumer psychology, and moral psychology (e.g. evolutionary origins of altruism and virtue-signaling). I also did a bunch of machine learning research (neural networks, genetic algorithms, autonomous agents) back in the 90s, and I've been catching up on AI alignment research. Here's my Google Scholar page: https://scholar.google.com/citations?user=vEqE_rUAAAAJ&hl=en&oi=ao And my website: https://www.primalpoly.com/

That sounds great, Geoffrey! I will reach out to you.

2PeterSlattery13h
Thanks for your work everyone! I am excited to see this develop!

Tl;dr: Technical progress in DNA synthesis has outpaced regulatory safeguards against negligent and malevolent misuse of DNA synthesis technology. The need for intervention from the community has been on our collective radar for some time, yet coordination on the most pressing issues still looks underwhelming.

Epistemic status: Low - Master's degree in epidemiology and broad expertise in public sector/health policy, but <5 hours of reading on this issue, synthetic biology, bioterrorism, etc.

The conversation on biosecurity in the EA community appears to be somewhat infrequent, and when it does occur it strikes me as immature: short of the leadership and expertise proportionate to the imminence, scale, neglectedness, and tractability of the threat and its solutions, and short of the standard of analysis we expect from EAs in the global health and development space.

Background

The...

Dear all,

I hope you're well! Our new NGO Malengo, whose mission is to facilitate international educational migration, has had a good year: we more than doubled our program, and 18 young Ugandans are on their way to Germany to start their Bachelor's degrees. In the coming years we want to grow to many hundreds of students each year. 

We're now looking to grow our team: We're hiring a Country Director for Germany, and a Senior Program Manager — Africa. Both jobs have great conditions; e.g. we offer unlimited time off, and very competitive salaries. Here are the detailed descriptions: 

Country Director Germany: https://smrtr.io/bJKsh
Senior Program Manager — Africa: https://smrtr.io/bJrLK

More information about Malengo is on our website: https://malengo.org/

Please help us spread the word by sharing this widely! And please apply if you want to join us!

Many thanks and best wishes,

Johannes 

 

PS: Here is this year's cohort at a reception organized for them by the German Embassy in Kampala: