This is a special post for quick takes by Dave Cortright 🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Edgy and data-driven TED talk on how the older generations in America are undermining the youth. Worth a watch.
 

I'm proud to announce that the 5-minute animated short on mental health I wrote back in 2020 is finally finished! I'd love for you to watch it and let me know what you think (like, share…). It's currently "unlisted" while I wait to see how the production studio wants to release it publicly. But in the meantime I'm sharing it with my extended network.

I haven’t gone through this all the way, but I’ve loved Nicky Case’s previous explainers.
https://aisafety.dance/

The Onion: What To Know About The Collapse Of FTX

Q: What is the “effective altruism” philosophy Bankman-Fried practices?
A: A movement to allocate one’s money to where it can do the most benefit for oneself.

The Atlantic has a column called “Progress” by contributor Derek Thompson with the tag line: A special series focused on two big questions: How do you solve the world’s most important problems? And how do you inspire more people to believe that the most important problems can actually be solved?

Sounds a lot like EA to me.

Derek is holding virtual office hours on June 14

https://www.theatlantic.com/progress/

Derek is one of the earlier Giving What We Can members (2015, iirc) and has been interested in EA for a long time.

There will be an EAG Coaching meetup during EAGxVirtual.
Feel free to join if you are a coach, therapist, or anyone in a related personal development field!

Saturday, November 18
8 PM UTC / 3 PM Eastern / 12 PM Pacific / 7 AM Sydney

https://meet.google.com/eas-pyaa-gxk

Or dial: ‪(US) +1 405-356-8141‬ PIN: ‪225 495 585‬#
More phone numbers: 
https://tel.meet/eas-pyaa-gxk?pin=9017005535543

Thanks for running this!

We started a #role-coaches-and-therapists channel in the EA Everywhere Slack.

There will also be a meeting for coaches and therapists to talk about organizing and coordinating at the EASE monthly meeting on January 24, 2024. More details in the Slack.
 

Here's my experience donating bone marrow in the USA. I recommend all EAs in the USA sign up for the registry at Be The Match. You can decide whether or not to do it if you are asked. The odds of getting asked are 1 in 430.

Stand-up comedian in San Francisco spars with ChatGPT AI developers in the audience
https://youtu.be/MJ3E-2tmC60

[This comment is no longer endorsed by its author]

Seems like this is important, neglected, and possibly tractable. Is there anyone out there working on screening leaders for psychopathy?

Here's a framework I use for A or B decisions. There are 3 scenarios:

  1. One is clearly better than the other.
  2. They are both about the same.
  3. I'm not sure; more data is needed.

1 & 2 are easy. In the first case, choose the better one. In the second, choose the one your gut likes better (or use the "flip a coin" trick and notice whether you feel any resistance to the "winner"; that resistance is a great reason to go with the "loser").

It's the third case that's hard. It requires more research or more analysis. But here's the thing: that work has costs. You have to decide whether the opportunity cost of delving in is worth the investment to increase the odds of making the better choice.

My experience shows that—especially for people who lean heavily on logic and rationality like myself 😁—we tend to overweight "getting it right" at the expense of making a decision and moving on. Switching costs are often lower than you think, and failing fast is actually a great outcome. Unless you are sending a rover to Mars, where there is literally no opportunity to "fix it in post," I suggest you do a nominal amount of research and analysis, then make a decision and move on to other things in your life. Revisit as needed.
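The "is more research worth it?" judgment in scenario 3 can be framed as a back-of-the-envelope value-of-information check. Here's a minimal sketch in Python; the function name and all the numbers are hypothetical placeholders I made up for illustration, not anything from the framework itself:

```python
# Hedged sketch: a rough value-of-information check for scenario 3
# ("not sure; more data is needed"). All quantities are in the same
# made-up "units of value" and are illustrative assumptions only.

def worth_more_research(stakes, p_flip, research_cost):
    """Return True if extra research plausibly pays for itself.

    stakes:        value gap between the better and worse option
    p_flip:        chance the research changes which option you pick
    research_cost: time/money/attention the research would consume
    """
    expected_gain = stakes * p_flip  # expected value of the information
    return expected_gain > research_cost

# Low stakes, low chance of changing your mind: just decide and move on.
print(worth_more_research(stakes=100, p_flip=0.1, research_cost=50))
# High stakes (the "Mars rover" case): the analysis pays for itself.
print(worth_more_research(stakes=10_000, p_flip=0.3, research_cost=500))
```

The point of the sketch is just that when stakes or the chance of changing your mind are low, the expected gain rarely beats the cost of delay, which matches the "fail fast and revisit" advice.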

[cross-posted from a comment I wrote in response to Why CEA Online doesn’t outsource more work to non-EA freelancers]

Volunteers of America (VOA) Futures Fund Community Health Incubator

Accelerate your innovative business solution for community health disparities.

Volunteers of America (VOA), one of the nation's largest and most experienced nonprofit housing, health, and human service organizations, launched this first-of-its-kind incubator to accelerate social enterprises that improve quality, equity, and access to care for Medicaid and at-risk populations. Sponsored by the Humana Foundation, the VOA Community Health Incubator, powered by SEEP SPOT, supports early-stage entrepreneurs developing innovative products and services for equitable community health outcomes.

Leveraging VOA’s vast portfolio of assets – 16,000 employees, 400 communities, 22,000+ units of affordable housing, 15+ senior healthcare facilities, 1.5 million lives touched annually, hundreds of programs and service models – founders in this 12-week program will benefit from the expertise of and collaboration with the VOA network, gaining business training and tailored mentorship while tackling the most intractable community health disparities. This is a fully funded opportunity with non-dilutive grants and potential follow-on investment of up to $200,000.

APPLY BY APRIL 28

Update Jan 4: Fixed, thanks! 🙏🏼

@Centre for Effective Altruism, is there a reason this video—From Bednets to Mindsets: The Case for Mental Health in Effective Altruism | @Joy Bittner—is unlisted?

I had watched it not long after it aired from a direct link in Vida Plena's newsletter. I was trying to find it again, but because it's unlisted it doesn't show up in YouTube search results. I feel all EAG talks like this should be public to give as much exposure to the ideas as possible.

Thanks!

Thank you for flagging this!

We've now made this talk public. All EAGxVirtual 2023 talks were unlisted. I think (90% confidence) the team hadn't yet received confirmation from the speakers that they should post, and I'm just checking that with them. I've asked the team to post all the videos that they have received consent from the speaker to share.

I’m not affiliated with this, but I suspect it might be of interest to other folks 
https://www.humanflourishing.org/

Comedian Gary Gulman recommends GiveWell on Mike Birbiglia’s “Working It Out” podcast
 

Timestamp?

If you expand the description, you can click "Show transcript" and then search for "givewell".


Timestamp is 50:17

I’ve been working on a logical, science-based definition of the arbitrary race labels[1] we’ve assigned to humans. The most succinct definition I’ve come up with is evolutionary physiological acclimatization. Essentially, the bodies of the descendants of people living in an environment with specific climate attributes and trends will become more adapted to that environment. For example, the darker skin, larger noses, and bigger lips of people of African descent helped their ancestors survive in the intense sunlight and heat.[2] Ironically, we have migrated to parts of the world where our physiology is mismatched with the environment.

Race is fundamentally an artificial construct that helped people in positions of wealth and power protect their place. They chose visible superficial traits to make it easier to delineate the in-group (whom the law protects but does not bind) from the out-group (whom the law binds but does not protect).[3]

I believe it’s necessary to acknowledge what race was and what it actually is if we are to move to a world where we truly treat all people as equals.


Imposter syndrome is innately illogical. It presumes that everyone else either has poor judgment, or sees the truth but is going along with the deception that you aren’t capable of your current position. Poor judgment or “going along to get along” may be true of any given individual, but when you add up all of the people in the group you interact with, it is statistically improbable; it would require a Truman Show level of coordination to execute.
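The "statistically improbable" point can be made concrete with a toy calculation. This is a minimal sketch under the simplifying assumption that each colleague misjudges you independently; the 20% error rate is a hypothetical, deliberately generous number I chose for illustration:

```python
# Hedged sketch: if each colleague independently misjudges your competence
# with probability p, the chance that *everyone* is wrong shrinks
# geometrically with group size. p = 0.2 is a made-up, generous error rate.

p_individual_misjudgment = 0.2

for n_colleagues in (1, 5, 10, 20):
    p_all_wrong = p_individual_misjudgment ** n_colleagues
    print(f"{n_colleagues:2d} colleagues: P(all misjudge you) = {p_all_wrong:.2e}")
```

Even with a generous individual error rate, the probability that an entire group is simultaneously wrong about you collapses toward zero as the group grows, which is the Truman Show point in numbers.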

The antidote to imposter syndrome is trust. Trust in others to make fair and honest assessments of you and your capabilities. And trust in yourself and the objective successes you’ve achieved to reach your current place.

Ryuji Chua raises awareness of fish suffering

Loved Ryuji’s interview on The Daily Show. His nonjudgmental attitude towards those who still eat animals is a wonderful way to keep the conversation open and welcoming. A true embodiment of the "big tent" approach that benefits EA expansion.

I also watched his documentary How Conscious Can A Fish Be? It’s always hard for me to see animals suffering, but I also know I need to keep renewing that emotional connection to the cause so I don't drift towards apathy.


"This Request for Information (RFI) seeks input on how to best collect and integrate environmental health data into the All of Us Research Program dataset.

The All of Us Research Program seeks to accelerate health research and medical breakthroughs to enable individualized prevention, treatment, and care for all of us. To do this, the program will partner with one million or more participants nationwide and build one of the most diverse biomedical data resources of its kind. Researchers may leverage the All of Us platform for thousands of studies on a wide range of health conditions.

Diversity is one of the core values of the All of Us Research Program. The program aims to reflect the diversity of the United States and has a special focus on engaging communities that have been underrepresented in health research in the past. Participants are from different races, ethnicities, age groups, and regions of the country. They are also diverse in gender identity, sexual orientation, socioeconomic status, educational attainment, and health status. …"

https://rfi.grants.nih.gov/?s=625848a8fa2300004a006f22

Abigail Marsh’s 2016 TED talk on “Extraordinary Altruists”

It doesn’t look like anyone has posted this TED talk on extraordinary altruists who donate a kidney to a stranger. The thing that stood out for me was the movement away from ego and toward what could be called a non-dualistic perspective of humanity. I also detect a higher EQ—the ability to read and connect with others’ emotions—which requires them to be skillful at recognizing, connecting with, and regulating their own emotions.

What are your thoughts?

Devil’s Advocate: the only way to truly minimize suffering is to eliminate all life.

Of course, that also eliminates all serene existence too. I don't think anyone compassionate would advocate for this modest proposal.

Even QALYs don't really work; they don't capture what proportion of a given life was spent in suffering vs. serenity. It seems to me we should be trying to maximize QALYs and minimize DALYs, right?

[Apologies if this is a noob question; I looked around but couldn't find anything that satisfactorily addressed this.]

I'm a big fan of standup comedians. In many cases, they offer alternative viewpoints that challenge societal norms.

In that vein, Whitney Cummings has a Netflix special from July 2019 (Can I Touch It?) where the second half is about the growing market for more and more realistic sex dolls and touches on some AI safety issues. TBH, it lacks depth and nuance, but she's a comedian going for laughs, not meaningful discourse. Still, I post it here because I do appreciate seeing these issues come up in more mainstream culture.

Hypothesis: our planet’s ecosystem and the process of evolution necessarily have some inherent level (or range) of pain and suffering. Has anyone done an analysis of what this might be? Having this data from pre-civilization times would be most enlightening, as it would give an idea of what the system looks like without human meddling (whether intentional or not).

And yes, it will fluctuate. Earth’s previous mass extinction events involved a HUGE amount of suffering in a short period of time (like the first day after the Chicxulub impact). But what is the average baseline?

Good data visualization of record temperatures in US cities: https://pudding.cool/2022/03/weather-map/

Mental health org in India that follows the paraprofessional model
https://reasonstobecheerful.world/maanasi-mental-health-care-women/

#mental-health-cause-area

Humans are cucinivores—the conjecture all vegans should know 

The crux: humans are a new and unique type of eater: cucinivores. We evolved to eat cooked food.

More on this here:

More people should know about this conjecture, and for anyone looking for a vegan-related research project, I believe the world would benefit from more research in this area (which I suspect would support the conjecture).

I'm afraid I'm failing to connect the dots. How do you see this being related to veganism, and how do you see researching this making an impact?

One common argument against veganism is that humans are carnivores. We aren’t. We are designed to eat cooked food.

I see. My personal intuition is that it wouldn't convince many people. I mean, cooked food includes cooked meat. So, unfortunately, their argument that we have evolved to have meat in our diets still stands.

A good friend turned me onto The Telepathy Tapes. It presents some pretty compelling evidence that people who are neurodivergent can more easily tap into an underlying universal Consciousness. I suspect Buddhists and other Enlightened folks who spend the time and effort quieting their mind and letting go of ego and dualism can also. I'm curious what others in EA (self-identified rationalists for example) make of this…
