Since August 2020, I've been recording conversations with brilliant and insightful people in and adjacent to the effective altruism community. Below, I've shared a selection of these conversations, organized according to the topics they cover. All of these conversations can also be found on our podcast website or by searching for "Clearer Thinking" in just about any podcast app.
If there are other people you'd like to see me record conversations with, please nominate them in the comments! For each episode, I invite the guest to bring four or five "ideas that matter" that they're excited to talk about; the aim is then to have a fun, intellectual discussion exploring those ideas (rather than a standard interview).
Conversations related to major EA cause areas
Artificial intelligence and existential risks
What is "the precipice"? Which kinds of risks (natural or technological) pose the greatest threats to humanity specifically or to life on Earth generally in the near future? What other kinds of existential risks exist beyond mere extinction? What are the differences between catastrophic risks and existential risks? How serious is the threat of climate change on an existential scale? What are the most promising lines of research into the mitigation of existential risks? How should funds be distributed to various projects or organizations working on this front? What would a world with existential security look like? What is differential technological development? What is longtermism? Why should we care about what happens in the very far future?
Why is YouTube such a great way to communicate research findings? Why is AI safety (or alignment) a problem? Why is it an important problem? Why is the creation of AGI (artificial general intelligence) existentially risky for us? Why is it so hard for us to specify what we want in utility functions? What are some of the proposed strategies (and their limitations) for controlling AGI? What is instrumental convergence? What is the unilateralist's curse?
What kinds of catastrophic risks could drastically impact global food supply or large-scale electricity supply? What kinds of strategies could help mitigate or recover from such outcomes? How can we plan for and incentivize cooperation in catastrophic scenarios? How can catastrophic and existential risks be communicated more effectively to the average person? What factors cause people to cooperate or not in disaster scenarios? Where should we be spending resources right now to prepare for catastrophe? Why does it seem that governments are largely uninterested in these questions?
What is machine learning? What are neural networks? How can humans interpret the meaning or functionality of the various layers of a neural network? What is a transformer, and how does it build on the idea of a neural network? Does a transformer have a conceptual advantage over neural nets, or is a transformer basically the equivalent of neural nets plus a lot of computing power? Why have we started hearing so much about neural nets in just the last few years, even though they've existed conceptually for many decades? What kind of ML model is GPT-3? What learning sub-tasks are encapsulated in the process of learning how to autocomplete text? What is "few-shot" learning? What is the difference between GPT-2 and GPT-3? How big of a deal is GPT-3? Right now, GPT-3's responses are not guaranteed to contain true statements; is there a way to train future GPT or similar models to say only true things (or to indicate levels of confidence in the truthfulness of its statements)? Should people whose jobs revolve around writing or summarizing text be worried about being replaced by GPT-3? What are the relevant copyright issues related to text generation models? A website's "robots.txt" file or a "noindex" directive in its pages' meta tags tells web crawlers which content they may crawl or index; could a similar solution exist for writers, programmers, and others who want to limit or prevent their text from being used as training data for models like GPT-3? What are some of the scarier features of text generation models? What does the creation of models like GPT-3 tell us (if anything) about how and when we might create artificial general intelligence?
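For readers unfamiliar with the crawler conventions that question refers to, here is a minimal sketch of the two existing mechanisms (the path shown is illustrative only):

```
# robots.txt — a site-wide request that compliant crawlers skip certain paths
User-agent: *
Disallow: /drafts/
```

```html
<!-- Per-page alternative: a robots meta tag asking crawlers not to index this page -->
<meta name="robots" content="noindex">
```

Both are voluntary conventions that well-behaved crawlers honor; the open question in the episode is whether an analogous opt-out signal could exist for training-data collection.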
What is superintelligence? Can a superintelligence be controlled? Why aren't people (especially academics, computer scientists, and companies) more worried about superintelligence alignment problems? Is it possible to determine whether or not an AI is conscious? Do today's neural networks experience some form of consciousness? Are humans general intelligences? How do artificial superintelligence and artificial general intelligence differ? What sort of threats do malevolent actors pose over and above those posed by the usual problems in AI safety?
How does GiveWell's approach to charity differ from other charitable organizations? Why does GiveWell list such a small number of recommended charities? How does GiveWell handle the fact that different moral frameworks measure causes differently? Why has GiveWell increased its preference for health-related causes over time? How does GiveWell weight QALYs and DALYs? How much does GiveWell rely on a priori moral philosophy versus people's actual moral intuitions? Why does GiveWell have such low levels of confidence in some of its most highly-recommended charities or interventions? What should someone do if they want to be more confident that their giving is actually having a positive impact? Why do expected values usually tend to drop as more information is gathered? How does GiveWell think about second-order effects? How much good does the median charity do? Why is it so hard to determine how impactful charities are? Many charities report on the effectiveness of individual projects, but why don't more of them report on their effectiveness overall as an organization? Venture capitalists often diversify their portfolios as much as possible because they know that, even though most startups will fail, one unicorn can repay their investments many times over; so, in a similar way, why doesn't GiveWell fund as many projects as possible rather than focusing on a few high performers? Why doesn't GiveWell recommend more animal charities? Does quantification sometimes go too far?
Researchers in the Effective Altruism movement often view their work through a utilitarian lens, so why haven't they traditionally paid much attention to the psychological research into subjective wellbeing (i.e., people's self-reported levels of happiness, life satisfaction, feelings of purpose and meaning in life, etc.)? Are such subjective measures reliable and accurate? Or rather, which such measures are the most reliable and accurate? What are the pros and cons of using QALYs and DALYs to quantify wellbeing? Why is there sometimes a disconnect between the projected level of subjective wellbeing of a health condition and its actual level (e.g., some people can learn to manage and cope with "major" diseases, but some people with "minor" conditions like depression or anxiety might be in a constant state of agony)? What are some new and promising approaches to quantifying wellbeing? The EA movement typically uses the criteria of scale, neglectedness, and tractability for prioritizing cause areas; is that framework still relevant and useful? How do those criteria apply on a personal level? And how do those criteria taken together differ conceptually from cost-effectiveness? How effective are psychological interventions at improving subjective wellbeing? How well do such interventions work in different cultures? How can subjective wellbeing measures be improved? How can philosophers help us do good better?
How can we develop vaccines more quickly? What kinds of study designs are used (or could be used) during vaccine development? In pandemic situations, we need to roll out vaccines quickly; but even if we can develop and test a vaccine quickly and thoroughly, how confident can we be that there won't be long-term risks? Between ethics and pragmatics, which facet should communicators emphasize when trying to convince organizations and institutions to adopt certain vaccine development strategies? Informed consent is, of course, a hugely important requirement for using human volunteers in challenge trials; so if some people are informed, eager, and willing to volunteer their health and safety for such trials in order to aid vaccine development, then why aren't they being used more (if at all)? Since IRBs are often "all brakes and no gas", could they be given powers to accelerate research in addition to their current powers to slow or halt research? How can bioethics reviews be improved?
Are universities cults? Do charitable interventions like de-worming work? How much should we trust the conclusions of well-respected charity evaluators like GiveWell?
How can we encourage people to increase their critical thinking and reliance on evidence in the current information climate? What types of evidence "count" as valid, useful, or demonstrative? And what are the relative strengths and weaknesses of those types? Could someone reasonably come to believe just about anything, provided that they live through very specific sets of experiences? What does it mean to have a "naturalistic" epistemology? How does a philosophical disorder differ from a moral failure? Historically speaking, where does morality come from? Is moral circle expansion always good or praiseworthy? What sorts of entities deserve moral consideration?
How can people be more effective in their altruism? Is it better for people to give to good causes in urgent situations or on a regular basis? What causes people to donate to less effective charities even when presented with evidence that other charities might be more effective? We can make geographically distant events seem salient locally by (for example) showing them on TV, but how can we make possible future events seem more salient? How much more effective are the most effective charities than the average? How do altruists avoid being exploited (in a game-theoretic sense)? What sorts of norms are common in the EA community?
Conversations related to other EA-relevant topics
What are "scout" and "soldier" mindsets? How can we have productive disagreements even when one person isn't in scout mindset? Is knowing about good rationality habits sufficient to reason well? When do we naturally tend to be in scout mindset or soldier mindset? When is each mindset beneficial or harmful? Are humans "rationally irrational"? What are the two different types of confidence? What are some practical strategies for shifting our mindset in the moment from soldier to scout?
How can we apply the theory of measurement accuracy to human judgments? How can cognitive biases affect both the bias term and the noise term in measurement error? How much noise should we expect in judgments of various kinds? Is there reason to think that machines will eventually make better decisions than humans in all domains? How does machine decision-making differ (if at all) from human decision-making? In what domains should we work to reduce variance in decision-making? If machines learn to use human decisions as training data, then to what extent will human biases become "baked into" machine decisions? And can such biases be compensated for? Are there any domains where human judgment will always be preferable to machine judgment? What does the "fragile families" study tell us about the limits of predicting life outcomes? What does good decision "hygiene" look like? Why do people focus more on bias than noise when trying to reduce error? To what extent can people improve their decision-making abilities? How can we recognize good ideas when we have them? Humans aren't fully rational, but are they irrational?
This particular episode is unique in that we’ve also made a Thought Saver deck of flashcards to help you to learn or consolidate the key insights from it. You can see a sample of these below, but you can get the full deck by creating a Thought Saver account here or by clicking through to the end of the sample. And if you want to embed your own flashcards in EA Forum posts (like we did below), here’s a link to a LessWrong post that describes how to do that.
What does it mean to leave lines of retreat in social contexts? How can we make sense of the current state of the world? What happens when we run out of map? How does the book Elephant in the Brain apply to the above questions?
What's the best way to help someone who's going through a difficult situation? What are the four states of distress? What are "comfort languages"? How can we introduce more nuance into our everyday thinking habits? When gathering information and forming opinions, how do you know who to trust? What's the difference between intelligence and wisdom?
What is the Great Rationality Debate? What are axiomatic rationality and ecological rationality? How irrational are people anyway? What's the connection between rationality and wisdom? What are some of the paradigms in cognitive science? Why do visual representations of information often communicate their meaning much more effectively than other kinds of representations?
What's the best way to teach rationality? How do you communicate rationalist principles to people who aren't already interested in thinking more clearly? What has COVID taught us about how people typically make decisions and think about problems? Where and how can the rationalist community improve? Does rationalism have anything to say about (for example) exercise, spirituality, art, or other parts of the human experience that aren't typically addressed by rationalists? What are some positive aspects of social media (especially Twitter)? What's going on with recent dating trends? Has dating gotten harder in recent years? How many people does it take to make a pencil? Is there a case to be made for anti-antinatalism?
What is a "wamb"? What are the differences between wambs and nerds? When is it appropriate (or not) to decouple concepts from their context? What are some common characteristics of miscommunications between journalists and writers/thinkers in the EA and Rationalist communities? What are "crony" beliefs? How can you approach discussions of controversial topics without immediately getting labeled as being on one team or another? What sorts of quirks do members of the EA and Rationalist communities typically exhibit in social contexts?
What are the best strategies for improving ourselves? How are line managers useful? Why does Rob prefer long-form content for the 80,000 Hours podcast? What are the sorts of things humans value and why? In what ways do research ethics considerations fail to achieve their stated objectives? Why are prediction markets useful?
What is 80,000 Hours? What sorts of people should become entrepreneurs? How can you run cheap experiments on yourself? What are some beneficial modes of philosophical thinking?
What is metamodernism? How does metamodernism relate to spiral dynamics? What does it look like to apply a metamodern approach to large-scale problems? What are shadow traits, and what is shadow projection? What do our reactions to others' behavior tell us about ourselves? What's going on psychologically and physiologically when we relive past traumas? What dosages of psychedelics are most effective in a therapeutic context? How soon will psychedelic substances likely be decriminalized or legalized at the state and/or federal level in the United States? How can we enter into blissful, ecstatic, intense, or other less common psychological states without drugs or alcohol? What are the pros and cons of (especially intergenerational) co-living?
What is the illusion of explanatory depth? Are there forms of debate or dialogue that actually help people to change their minds (instead of stacking the incentives such that people feel forced to harden and defend their views)? What is epistemic "debt"? Should people avoid having opinions on things where they haven't thought deeply and carefully about all of the relevant considerations? How does one choose which experts to trust? What is "growth mindset"? How can social science be used to do good in the world?
What is the Internal Family Systems model? What kinds of information do our emotions give us? How many agents live in our heads? And, if there's more than one, how well do those agents cooperate? What is operant conditioning? What is attachment theory? How does parenting differ from animal training? Is decision theory able to unify many different psychological theories?
How can you live your best life? What's a good definition of "wisdom"? What are some possible taxonomies of life outcomes? What are some low-hanging fruit in the realm of self-improvement? What are some useful behavior change frameworks and techniques?
How can we become better leaders? How can we give better feedback to others? How can we be better listeners? How can we give good advice? How do startups (or even existing companies) build great products? What sorts of things do experts actually know? When is it useful to poll customers for feedback?
Philosophy and morality
What is utilitarianism? And what are the different flavors of utilitarianism? What are some alternatives to utilitarianism for people who find it generally plausible but who can't stomach some of its counterintuitive conclusions? For the times when people do use utilitarianism to make moral decisions, when is it appropriate to perform actual calculations (as opposed to making estimations or even just going with one's "gut")? And what is "utility" anyway?
What is the Moral Foundations Theory? Is the MFT a model that's only intended to describe human behavior and psychology, or does it also make claims about what's actually true about morality? Why does morality exist in the first place? How can the MFT be used to have better conversations across ideological and cultural divides? What (if anything) helps groups to cohere successfully as they increase in size? Why is internet communication especially hostile and prone to misunderstandings? What common mistakes do people make in in-person communication? What is OpenMind?
What are some of the challenges of defining utopia? What should a utopia look like? What are concrete versus sublime utopias? What are some of the failure modes related to various conceptions of utopia? Is it really that hard to create a shared, positive vision of the future? What is the value (or disvalue) of creating new people, especially in relation to the utopic or dystopic state of the world? What is "whole-hearted morality" versus "morality-as-taxes"? How can we encourage people to be more moral without harming them psychologically (e.g., by loading them down with guilt)? Which sorts of worldview changes are reversible? Where does clinging fit into the constellation of concepts like valuing, caring, envying, etc.? How does non-attachment differ from indifference? Is clinging always bad? Is philosophy making tangible progress as a field? Is philosophy's primary function to show us how our questions are confused rather than to give us direct answers to our questions? Has philosophy given us a clearer picture of what consciousness is or isn't?
What is normative hedonism? What's the difference between wanting something and wanting to want something? Should we only care about the experiences of conscious beings? What's wrong with moral discourse? Does philosophy ever actually make progress, or is it still only discussing the things that were discussed a thousand years ago? What is (or should be) the role of intuition in philosophy? Why should people study philosophy (especially as opposed to other disciplines)? What can we do to create more rationality or systematic wisdom in the world? How can we disagree better?
Why do "antagonistic" teachers exist in popular culture but not in the classroom? What happens to student outcomes when "antagonistic" learning is implemented in real classrooms? What is the Field Theory of Parenting? What are things that we can do for others but can't do for ourselves? How can we notice and utilize costly and unfakeable signals? What is the core definition of civilization? How can we influence others ethically? Is explicit communication always better than implicit?
How can we accelerate learning? Is spaced repetition the best way to absorb information over the long term? Do we always read non-fiction works with the goal of learning? What are some less common but perhaps more valuable types of information that can be put on flashcards? What sorts of things are worth remembering anyway? Why is it important to commit some ideas to memory when so much information is easily findable on the internet? What benefits are derived from being involved in all stages of a project pipeline from concept to execution (as opposed to being involved only in one part, like the research phase)? Why should more researchers be involved in para-academic projects? Where can one find funding for para-academic research?
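For readers new to spaced repetition, the core idea is simply that review intervals grow after each successful recall. A minimal sketch in Python (a simplified SM-2-style schedule for illustration, not any particular app's actual algorithm):

```python
def next_interval(prev_interval_days, ease=2.5, recalled=True):
    """One step of a simplified spaced-repetition schedule:
    each successful recall multiplies the review interval by an
    'ease' factor; a failed recall resets the card to one day."""
    if not recalled:
        return 1
    return max(1, round(prev_interval_days * ease))

# Intervals grow roughly geometrically across successful reviews:
interval = 1
schedule = []
for _ in range(5):
    interval = next_interval(interval)
    schedule.append(interval)
print(schedule)  # → [2, 5, 12, 30, 75]
```

The geometric growth is the point: a handful of reviews, spaced just before you would otherwise forget, can maintain a memory for months at very low ongoing cost.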
What's the best way to learn? Why is learning how to learn "the most important skill"? When should we explore, and when should we exploit? What are the merits and demerits of various models of governance? How should we think about the problems around free speech?
What is "The Index"? What are some benefits of externally compiling and organizing one's knowledge? When is spaced repetition useful? How can we co-opt our visual systems to boost memory? Would we all be more interested in producing an external personal knowledge base if we could feel on a visceral level how much information is constantly being forgotten? How and when should we move up and down the ladder of abstraction? What sorts of problems can be solved by simulation? What is a generative model (as opposed to a predictive model)? How can constraints improve creativity? How useful are credentials as a guide to how much a person knows and whether or not a person is "allowed" to have an opinion on a topic? What do credentials actually signal about a person? What are "fox" and "hedgehog" thinking? What is deugenesis?
Is scientific progress speeding up or slowing down? What are the best strategies for funding research? What is "para-academia," and what are the pros and cons of being a para-academic researcher? What are the feedback loops in politics that cause politicians and their constituents to react to each other?
What are "shed" and "cake" projects? And how can you avoid "shed" projects? What is the "jobs to be done" framework? What is the "theory of change" framework? How can people use statistics (or statistical intuition) in everyday life? How accurate are climate change models? How much certainty do scientists have about climate change outcomes? What are some promising strategies for mitigating and reversing climate change?
Should we trust social science research? What is the open science movement? What is the "file drawer" effect? How can common sense help social science dig itself out of the replicability crisis? Is social science in the West too focused on interventions for individuals? How useful is the Implicit Association Test? How useful is the concept of "grit"? How should journalists communicate confidence or skepticism about scientific results? What incentive structures stand in the way of honestly and openly critiquing scientific methods or findings?
How should math be taught in primary and secondary schools? How much is science denialism caused by statistics illiteracy or lack of statistical intuitions? What do p-values actually mean? Under what conditions should null results be published? What are some of the less well-known factors that may be contributing to the social science reproducibility crisis?
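On "what do p-values actually mean": a p-value is the probability, assuming the null hypothesis is true, of seeing a result at least as extreme as the one observed. A quick simulation (my own sketch, not from the episode) makes that definition concrete:

```python
import random

def simulated_p_value(observed_heads, flips=100, trials=10_000, seed=0):
    """Estimate the one-sided p-value for observing at least
    `observed_heads` heads in `flips` tosses of a fair coin,
    by simulating the null hypothesis (a fair coin) directly."""
    rng = random.Random(seed)
    at_least_as_extreme = 0
    for _ in range(trials):
        heads = sum(rng.random() < 0.5 for _ in range(flips))
        if heads >= observed_heads:
            at_least_as_extreme += 1
    return at_least_as_extreme / trials

# 60 heads out of 100 is fairly surprising under the null;
# the analytic value here is about 0.028.
p = simulated_p_value(60)
print(f"Estimated p-value: {p:.3f}")
```

Note what this number is not: it is not the probability that the null hypothesis is true, which is the most common misreading the statistical-literacy questions above gesture at.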
What is enlightenment? What are the different kinds or definitions of enlightenment? What was Aella's religious upbringing like, and why did she lose her faith? How did Aella get into sex work, and what has her career as a sex worker been like? How do we ask great questions, and what is Askhole?
Why should we meditate? What are the typical developmental stages as one progresses along the contemplative path? What does it mean to "hold an ontology loosely"? Are some meditative techniques inappropriate for some practitioners? Are there risks associated with meditation?
What's a good definition of meditation that cuts through all the dogma and differing methodology? What are the techniques, skills, and insights associated with meditation? How does meditation connect to religion and spirituality, and is meditation valuable without those components? And what is enlightenment?
What is "awakening"? What is a "stateless" state? What is nonduality? Why and how do some spiritual practitioners experience a dissolution of their sense of self? Do these altered or enlightened states require thousands of hours of practice to achieve, or are they always inside us, waiting to be noticed and accessed at any time? Can these states be accessed through a variety of paths and methods? Is there a certain kind of person that does better or worse at achieving these states?
Should pleasure and pain be measured on logarithmic scales? How might such a scale affect utilitarian calculations? How do harmonic energy waves in the brain correspond to states of (or intensities of) consciousness? What sorts of conclusions can we draw about brain states given the resolutions and sampling rates of tools like fMRI, EEG, and MEG? What is the symmetry theory of homeostatic regulation, and how does it connect to pleasure and pain? Are uncomfortable or confused mental states computationally useful to us? To what extent can the concepts of musical consonance and dissonance map onto energy states in the brain?
What are the advantages of viewing the mind through the multi-agent model as opposed to (say) the rational/optimizing agent model? What is the "global workspace" theory of consciousness? What's going on during concentration meditation according to the global workspace theory? If our brains are composed of multiple sub-agents, then what does it mean when I say, "I believe such-and-such"? Are beliefs context-dependent (i.e., you believe P in one context and not-P in a different context)? What effects do the various therapeutic modalities and meditation practices have on our beliefs? What are the advantages of transformational therapy over other approaches?
When (if ever) can suffering be good? Is there an optimal ratio of pleasure to pain? What is motivational pluralism? Can large, positive incentives be coercive? (For example, is it coercive to offer to pay someone enormous amounts of money to do something relatively benign or even painful or immoral?) How can moving from making judgments about a person's actions to making judgments about their character solve certain moral puzzles? Why do we sometimes make seemingly irrational judgments about the relative badness of certain actions? How does the level of controversy around an action factor into how much we publicly disapprove of it? What are the differences between compassion and empathy? Is antisocial personality disorder (AKA psychopathy or sociopathy) defined only by a lack of empathy? How have humans evolved (or not) to detect and mitigate the effects of others who feel no remorse? Is altruism especially vulnerable to remorseless people? What are the differences between narcissists and sociopaths?
Learning from history
What is progress? How do we (and should we) measure progress? What are the most important questions to ask in progress studies? What are the factors that lead to progress? Why has large-scale progress taken so long (i.e., why did we not see much progress until the Industrial Revolution)? Why did the Industrial Revolution, the Scientific Revolution, and the democratic revolutions all seem to begin within a relatively short span of each other? How can we prevent progress from slowing down, stopping, or even reversing? What factors have contributed to the slowing of progress in the last 50 years? What's the state of progress in nuclear energy? What is the history of attitudes towards progress? And why is it important for people to believe that progress is good?
What is "long" history? Why don't historians usually focus on what happened before recorded human history? What (if anything) is special about agriculture when it comes to the development of civilization? How far back does human civilization go, and why should we care? Have humans always been gardeners? What factors cause civilizations to crumble or thrive? Should we reboot standardized tests and college admissions every few decades so that measures don't become targets? Which destructive factors are particularly salient to modern human civilization? Why is there such a disconnect between our intuition that progress is inevitable and our knowledge that virtually all civilizations have collapsed in the past? In other words, what makes us think that we'll succeed where others have failed? How does a functional social institution differ from a failing one? What is the "great founder" theory?
What are the benefits of studying history? How do we find useful historical analyses? Can learning about history save us from repeating it? Is America decaying as a nation, empire, and/or leading world power? Generally speaking, what causes empires to fail? Is the aging and decay experienced by organic bodies analogous to the aging and decay experienced by an empire (or by any complex system, for that matter)? What are all the reasons organisms age, decay, and die? What are the most promising avenues of exploration in longevity research? What kind of stressors on our bodies are beneficial? How accurate is the efficient market hypothesis? What kinds of catalysts force a market to value assets at their "intrinsic" value? How rational are markets?
Why might it be the case that "all propositions about real interest rates are wrong"? What, if anything, are most economists wrong about? Does political correctness affect what economists are willing to write about? What are the biggest open questions in economics right now? Is there too much math in economics? How has the loss of the assumption that humans are perfectly rational agents shaped economics? Is Tyler's worldview unusual? Should people hold opinions (even loosely) on topics about which they're relatively ignorant? Why is there "something wrong with everything" (according to Cowen's First Law)? How can we learn how to learn from those who offend us? What does it mean to be a mentor? What do we know and not know about success? What is lookism? Why is raising someone else's aspirations a high-return activity?
How is the economy like a differential equation? Can the economy grow indefinitely? Are there economic attractor states? Or are economic outcomes chaotic and/or extremely sensitive to certain variables? What should we know about progress in genetic engineering? Can you (and should you) do genetic engineering in your garage? What are some common mistakes people make when thinking about AI? Should we expect AI abilities to converge in some domains and diverge in others? Why do we sometimes collectively forget important ideas? Have we as a species grown wiser over the course of our history? How can we form high-trust communities on the internet? In the context of social media, is ease of access at cross-purposes with membership screening and/or costs, or is it possible to have both? What should we make of ephemeral communities that appear briefly, do something huge, and then disappear (like the WallStreetBets subreddit phenomenon)? What are the various types of misinformation being used in the US, Russia, China, and elsewhere?
What is 80,000 Hours, and why is it so important? Does doing the most good in the world require being completely selfless and altruistic? What are the career factors that contribute to impactfulness? How should people choose among the various problem areas on which they could work? What sorts of long-term AI outcomes are possible (besides merely apocalyptic scenarios), and why is it so important to get AI right? How much should we value future generations? How much should we be worried about catastrophic and/or existential risks? Has the 80,000 Hours organization shifted its emphasis over time to longer-term causes? How many resources should we devote to meta-research into discovering and rating the relative importance of various problems? How important is personal fit in considering a career?
What are "forward-chaining" and "backward-chaining," and how do they connect with theory of change? What sorts of mental habits and heuristics prevent you from brainstorming ideas effectively? How can you harness feedback effectively to sharpen your ideas? From whom should you solicit feedback? How can you view your own products with fresh eyes? What are some common struggles people encounter when starting or changing careers, and how can they be overcome? Why are small experiments so under-used? How can we construct a sustainable work life? What are the best ways to rest and recover from overwork and burnout?
Is it okay for anyone to have opinions about marginalized communities even if they're not a part of those communities? Do people in marginalized groups have special knowledge (especially tacit knowledge) about their groups that can't be known or experienced from the outside? To what extent can we know and empathize with others' experiences regardless of differences in race, socioeconomic status, gender, sexual orientation, etc.? Do oppression and discrimination tend to be caused more by active bigotry or by mere lack of care and awareness? What information (if any) does intersectionality fail to capture about people? Is describing someone intersectionally an end in itself, or is it just a way of correcting (or over-correcting) for the suppression of marginalized voices? Should ideas be discussed absent their context or implications (see: decoupling norms vs. contextualizing norms)? To what extent should we focus on individuals versus groups when attempting to fix inequities? Are individuals or groups responsible for redressing the atrocities of their ancestors? Should people be "canceled" for their views (including their past views, even if their current views are different)? To what extent is the shifting of moral ground around social justice issues unpredictable and/or disorienting? How can democratic societies balance the need to debate difficult ideas with the risk of giving reprehensible ideas a platform? Should rules about offensiveness be enforced from the top down (e.g., by a government, a school administration, a company's board of directors, or even parents)? Is offense only "in the eye of the beholder"?
Why do liberals and conservatives disagree so vehemently? Why are things so polarized in the US right now? What are the core values held by liberals and conservatives? How much value does tradition have? Where and why do liberals and conservatives disagree about climate change? Where and why do liberals and conservatives disagree about free speech and political correctness?
Miscellaneous EA-adjacent topics
What's the current state of cryptocurrency? What are the good and bad aspects of crypto? To what extent have the promises of crypto panned out? How do blockchain and cryptocurrency even work anyway? What are "proof of work" and "proof of stake"? What are the differences between Bitcoin and Ethereum? What sorts of transactions are made easy or possible by the blockchain that are difficult or impossible to perform with traditional currencies? What are non-fungible tokens (NFTs)? What (if anything) prevents people from doing nefarious things with cryptocurrencies? What are some of the exciting, positive things coming up on the crypto horizon?
Are there more meaningful and ethical ways of honoring the dead than our traditional rituals? Why is it useful to adopt probabilistic thinking in our everyday lives? What sorts of things do we value intrinsically (i.e., that we would value even if they had no other benefits)? What do stories do well and not so well?
What is cryonics? And how does it work? What do we know right now about reversing death? And what would we have to learn to make resurrection from a cryogenically frozen state feasible? How much does cryonics cost? What incentives would future people have for reviving a cryo-frozen person? How likely is it that a cryo-frozen person will be brought back in the future? Why do people (even pro-cryonics people) "cryocrastinate" and put off considering cryonics for a later time? What sorts of risks are involved in being frozen and later revived? What philosophical and ethical issues are at stake with cryonics? Would a revived person be able to integrate into a future society? Why is there a stigma around cryonics in some cultures?
How can we improve art museums? Does aesthetics need something equivalent to the effective altruism movement? What is steelmanning? What are the most important social skills to learn, and how can we learn them? Can anybody become polyamorous? What does it take to succeed in a polyamorous relationship? Why do societies decay over time?
What are the various components of intelligence? How does intelligence relate to IQ? Can IQ be trained or improved? What is creativity, and how does it relate to intelligence? Can creativity be trained or improved? What is self-actualization, and how does it relate to Maslow's Hierarchy of Needs? What is transcendence?
Please note that the above is a partial list of recordings, focusing on just those people and topics most connected to effective altruism.
People in or adjacent to effective altruism with whom I've already recorded, but whose episodes haven't yet been released, include Aaron Hamlin, Alene Anello, Buck Shlegeris, Caitlin (Cate) Hall, Chris Chambers, Elizabeth Edwards-Appell, Eric Schwitzgebel, Habiba Islam, Jim Davies, Joscha Bach, Katja Grace, Leah Edgerton, Matt Goldenberg, Misha Glouberman, and Peter Hurford Wildeford. Please let me know who else I should record with! :)