All posts

New & upvoted

Today, 11 October 2024

Frontpage Posts

Quick takes

Future debate week topics?
1. Global health & wellbeing (including animal welfare) vs global catastrophic risks, based on Open Phil's classifications.
2. Neartermism vs longtermism.
3. Extinction risks vs risks of astronomical suffering (s-risks).
4. Saving 1 horse-sized duck vs saving 100 duck-sized horses.
I like the idea of going through cause prioritization together on the EA Forum.
I never found psychological hedonism (or motivational hedonism) very plausible, but I think it's worth pointing out that the standard version — according to which everyone is ultimately motivated only by their own pleasure and pain — is a form of psychological egoism and seems incompatible with sincerely being a hedonistic utilitarian or caring about others and their interests for their own sake (see https://www.britannica.com/topic/psychological-hedonism). More concretely, a psychological hedonist who cares about others, but only based on how it makes them feel, would prefer never to find out that they've caused harm or are doing less good than they could, if finding out wouldn't make them (eventually) feel better overall. They don't actually want to do good, they just want to feel like they're doing good. Ignorance is bliss. They could be more inclined to get in or stay in an experience machine, knowing they'd feel better even if it meant never actually helping anyone else. That being said, they might feel bad about it if they know they're in or would be in an experience machine. So, they might refuse the experience machine by following their immediate feelings and ignoring the fact that they'd feel better overall in the long run. This kind of person seems practically indistinguishable from someone who sincerely cares about others, but does so through and based on their feelings.
This year's Nobel prizes for Physics and for Chemistry went to computer scientists (among others). Previous prizes have stretched the discipline boundaries, e.g., the Economics prize for Ostrom (political science) and Kahneman (psychology). This is probably because the prize categories are not set optimally to achieve their goal, especially as the world has progressed. The current categories are: Physics, Chemistry, Physiology or Medicine, Literature, Economics (*slightly different prize), and Peace.
What would be the ideal categories, considering what the real world (not just EA) will latch onto? My quick take, in approximate order of importance to this goal:
1. Humanitarian work (policies, programs, innovation, and action; if feasible, to include animal welfare)
2. Peace (actual work towards international peace, not humanitarian stuff)
3. Governance and public policy
4. Reduction of global catastrophic risks (this one might be hard to sell), response to disasters and pandemics
5. Life sciences (including medicine)
6. Physical and climate science (including physics, chemistry, astronomy, geology, etc.)
7. Math, statistics, computer science, and AI
8. Technology and engineering
9. (Maybe) Social science (including economics)
10. (Maybe) Philosophy, journalism, culture, and communication
Hmm... that seems like too many; I need to pare down the list, maybe combine some of these.
New Webinar from Faunalytics: Bridging Conservative Values and Animal Advocacy. Faunalytics' latest study — Bridging U.S. Conservative Values And Animal Protection — can give advocates a framework both for working with conservative lawmakers to pass pro-animal laws and for crafting pro-animal messages that will resonate with the conservative public. In this panel, we will explore how to apply these findings to your work! First, learn about the study and the research from Faunalytics. Then, listen to our two guests — Max Broad from DC Voters for Animals and Roland Halpern from Colorado Voters for Animals — as they discuss how these ideas can be applied in their own interventions. Finally, you'll have a chance to ask any questions you have about political identity and animal protection! This webinar is ideal for political advocates, legislative advocates, or anyone who works with individuals from across the political spectrum, whatever the intervention. The study focused on U.S. conservatives, but the panel will likely be illuminating for understanding conservatives in other countries as well. Register here: Bridging Conservative Values and Animal Advocacy

Thursday, 10 October 2024

Frontpage Posts

Quick takes

I'm grateful for the articles @MichaelStJules writes on the forum. He seems to be motivated by a deep desire to understand what will benefit moral patients. For example, I particularly value his sequence on the impact of fishing on fish welfare (The moral ambiguity of fishing on wild aquatic animal populations and other articles)
I really like the vote and discussion thread, in part because I think that aggregating votes will improve our coordination and impact. Without trying to unpack my whole model of movement building: I think the community needs to understand itself (as a collective and as individuals) to have the most impact, and this approach may really help. EA basically operates on a "wisdom of wise crowds" principle, where individuals base decisions on researchers' and thinkers' qualitative data (e.g., forum posts and other outputs). However, at our current scale, quantitative data is much easier to aggregate and update on. For instance, in this case, we are now seeing strong evidence that most people in EA think that AW is undervalued relative to global development (as seems likely to be the outcome), along with who thinks what in relation to that claim and why. This is extremely useful for community and individual decision-making, and it would never have been captured in the prior system of coordinating via posts and comments. Many people may act on or reference this information when they seek funding, write response posts, or choose a new career option. In a world with just forum posts and no vote, these actions might otherwise not occur. In short, I'm very keen on this and on seeing more of it.
We've just shipped an update to the debate week banner, so you can expand the columns to see more of the distribution, as it's getting a bit squashed. You just have to click on one of the "+N" circles. (Feel free to downvote this quick take if it hangs around on the Frontpage for too long)
How does Animal Welfare/Global Health affect AI Safety? Very brief considerations. I think someone might build super strong AI in the next few years, and this could affect most of the value of the future. If true, I think it implies that the majority of any value from an intervention or cause area comes from how it affects whether AI goes well, even if that effect is very slight and indirect. Relatedly, I think whether AI goes well depends on whether states will be able to coordinate.
How do Animal Welfare interventions affect whether AI goes well?
– I think moral circle expansion is relevant.
– Helping reach climate targets seems relevant to international coordination.
– But I think Animal Welfare interventions also place a cost on society, such as by raising the price of food and increasing pressure on governments in high-income countries.
How do Global Health interventions affect whether AI goes well?
– I think they reduce the pressure on governments in LMICs and give them a safer society, which gives those governments slightly more room to come to peaceful international agreements.
– But they may also enable more people to contribute to AI, whether that be AI capabilities development, chip manufacture, or AI safety/governance.
Overall, I slightly lean towards global health being better. Perhaps RP's tools shed light on this (I haven't checked!).
With the debate week discussion thread getting so long, it is now a community service (even more than usual) to sort the comments by "new" and upvote/downvote or comment. Let's not let quality content get buried!

Topic Page Edits and Discussion

Wednesday, 9 October 2024

Frontpage Posts

Quick takes

Anyone know any earn-to-givers who might be interested in participating in an AMA during Giving Season? If a few are interested, it might be fun to experiment with an AMA panel, where Forum users ask questions and any of the AMA co-authors can respond (and the co-authors can disagree with each other). Why? Giving Season is, in my opinion, a really great time to highlight the earn-to-give work which is ongoing all year but is generally under-celebrated by the EA community. Plus, earn-to-givers might have good insights on how to pick donation targets during the donation election, and during Giving Season more generally.
I would advise being careful with RP's Cross-cause effectiveness tool as it currently stands, especially with regards to the chicken campaign. There appears to be a very clear conversion error which I've detailed in the edit to my comment here. I was also unable to replicate their default values from their source data, but I may be missing something. 
Re "pivotal questions"... Some thoughts on what The Unjournal (unjournal.org) can offer, cf existing EA-aligned research orgs (naturally, there are pros and cons) ... both in terms of defining and assessing the 'pivotal questions/claims', and in evaluating specific research findings that most inform these. 1. Non-EA-aligned expertise and engagement: We can offer mainstream (not-EA aligned) feedback and evaluation, consulting experts who might not normally come into this orbit. We can help engage non-EA academics in the priorities and considerations relevant to EAs and EA-adjacent orgs. This can leverage the tremendous academic/government infrastructure to increase the relevant research base. Our processes can provide 'outside the EA bubble' feedback and perhaps measure/build the credibility of EA-aligned work. 2. Depth and focus on specific research and research findings: Many EA ~research orgs focus on shallow research and comms. Some build models of value and cost-effectiveness targeted to EA priorities and 'axiology'. In contrast, Unjournal expert evaluations can dig deeply into the credibility of specific findings/claims that may be pivotal to these models. 3. Publicity, fostering public feedback and communication: The Unjournal is building systems for publishing and promoting our evaluations. We work to link these to the scholarly/bibliometric tools and measures people are familiar with. We hope this generates further feedback, public discussion, research, and application of this research.
I think that data ethics and drug ethics, together or as separate social functions, can save humanity at large.

Tuesday, 8 October 2024

Frontpage Posts

Quick takes

I was surprised to find that I felt slightly uncomfortable positioning myself on the 'animal welfare' side of the debate week scale. I guess I generally think of myself as more of a 'global health & development' person, and might have subconscious concerns about this as an implicit affiliational exercise (even though I very much like and respect a lot of AW folks, I guess I probably feel more "at home" with GHD)? Obviously those kinds of personal factors shouldn't influence our judgments about an objective question like the debate week question is asking. But I guess they inevitably do. I don't know if this observation is even worth sharing, but there it is, fwiw. I guess I'd just like to encourage folks to be aware of their personal biases and try to bracket them as best they can. (I'd like to think of all EAs as ultimately "on the same side" even when we disagree about particular questions of cause prioritization, so I feel kind of bad that I evidently have separate mental categories of "GHD folks" and "AW folks" as though it were some kind of political/coalitional competition.)
I want to once again congratulate the forum team on this voting tool. I think by doing this, the EA forum is at the forefront of internal community discussions. No communities do this well and it's surprising how powerful it is. 
The 80,000 Hours team just published that "We now rank factory farming among the top problems in the world." I wonder if this is a coincidence or if it was planned to coincide with the EA Forum's debate week? Combined with the current debate week's votes on where an extra $100 should be spent, these seem like nice data points to show to anyone who claims EA doesn't care about animals.
I'm not really focused on animal rights, nor do I spend much time thinking about it, so take this comment with a grain of salt. However, if I wanted to make the future go well for animals, I'd be offering free vegan meals in the Bay Area or running a conference there on how to ensure that the transition to advanced AI systems goes well for animals. Reality check: sorry for being harsh, but you're not going to end factory farming before the transition to advanced AI technologies. Max 1-2% chance of that happening. So the best thing to do is to ensure that this transition goes well for animals and not just humans. Anyway, that concludes my hot take.

Monday, 7 October 2024

Frontpage Posts

Quick takes

The real danger isn't just from AI getting better; it's from it getting good enough that humans start over-relying on it and offloading tasks to it. Remember that Petrov had automatic detection systems, too; he just independently came to the conclusion not to fire nukes back.

Sunday, 6 October 2024

Quick takes

EA needs more communications projects. Unfortunately, the EA Communications Fellowship and the EA Blog Prize shut down.[1] Any new project needs to be adapted to the new funding environment. If someone wanted to start something in this vein, I'd suggest something along the lines of AI Safety Camp: people would apply with a project to be project leads, and then folks could apply to those projects. Projects would likely run over a few months, part-time and remote.[2] Something like this would be relatively cheap, as it would be possible for someone to run it on a volunteer basis, but it might also make sense for there to be a paid organiser at a certain point. 1. ^ Likely due to the collapse of FTX. 2. ^ Despite the name, AI Safety Camp is now remote.
Does anyone know of a low-hassle way to invoice for services such that a third-party charity gets paid instead of me? It could well be an EA charity if that makes it easier. I'm hoping for something slightly more structured than "I'm not receiving any pay for my services, but I'm trusting you to donate X amount to this charity instead".
Thinking of trying to re-host innovationsinfundraising.org, which I stopped hosting maybe a year ago. Not sure I have the bandwidth to keep it updated as a ~living literature review, but the content might be helpful to people. You can see some of the key content on the Wayback Machine, e.g., the table of evidence/consideration of potential tools. Any thoughts/interest in using this or collaborating on a revival (focused on the effective giving part)? This, along with the barriers to effective giving, might (or might not) also be a candidate for Open Phil's living literature project. (The latter is still hosted and has some overlaps with @Lucius Caviola and @Stefan_Schubert's book.)

Saturday, 5 October 2024

Quick takes

I was going through Animal Charity Evaluators' reasoning behind which countries to prioritize (https://animalcharityevaluators.org/charity-review/the-humane-league/#prioritizing-countries) and I notice they judge countries with a higher GNI per capita as more tractable. This goes against my intuition, because my guess is your money goes further in countries that are poorer. And also because I've heard animal rights work in Latin America and Asia is more cost-effective nowadays. Does anyone have any hypotheses/arguments? This quick take isn't meant as criticism, I'm just informing myself as I'm trying to choose an animal welfare org to fundraise for this week (small, low stakes). When I have more time I'd be happy to do more research and contact ACE myself with these questions, but right now I'm just looking for some quick thoughts.

Friday, 4 October 2024

Quick takes

From Reuters: I sincerely hope OpenPhil (or Effective Ventures, or both; I don't know the minutiae here) sues over this. Read the reasoning for and details of the $30M grant here. The case for a legal challenge seems hugely overdetermined to me:
* Stop/delay/complicate the restructuring, and otherwise make life appropriately hard for Sam Altman
* Settle for a huge amount of money that can be used to do a huge amount of good
* Signal that you can't just blatantly take advantage of OpenPhil/EV/EA as you please without appropriate challenge
I know OpenPhil has a pretty hands-off ethos and vibe; this shouldn't stop them from acting with integrity when hands-on legal action is clearly warranted.
The Debate Week banner is under construction... get your takes ready for Monday morning! In the meantime, you can brush up on the reading list in the announcement post, or respond to this post with ideas for posts you'd like to see next week.
The CEA Online Team (which runs this Forum) has finalized our OKRs for this new half-quarter, and I've updated the public doc, so I'm reposting the link to the doc here.
In America, dining services influence far more meals than vegans' personal consumption. Aramark, Compass Group, etc. each serve billions of meals annually, and even the largest individual correctional facilities, hospital campuses, school districts, public universities, baseball stadiums, etc. each serve millions of meals annually to largely captive audiences.

Thursday, 3 October 2024

Quick takes

A note on mistakes and how we relate to them
(This was initially meant as part of this post,[1] but I thought it didn't make a lot of sense there, so I pulled it out.)
“Slow-rolling mistakes” are usually much more important to identify than “point-in-time blunders,”[2] but the latter tend to be more obvious. When we think about “mistakes”, we usually imagine replying-all when we meant to reply only to the sender, using the wrong input in an analysis, including broken hyperlinks in a piece of media, missing a deadline, etc. I tend to feel pretty horrible when I notice that I've made a mistake like this. I now think that basically none of my mistakes of this kind — I’ll call them “point-in-time blunders” — mattered nearly as much as other "mistakes" I've made by doing things like planning my time poorly, delaying for too long on something, setting up poor systems, or focusing on the wrong things.
This second kind of mistake — let’s use the phrase “slow-rolling mistakes” — is harder to catch; I think sometimes I'd identify them by noticing a nagging worry, or by having multiple conversations with someone who disagreed with me (and slowly changing my mind), or by seriously reflecting on my work or on feedback I'd received.
This is not a novel insight, but I think it was an important thing for me to realize. Working at CEA helped move me in this direction. A big factor in this, I think, was the support and reassurance I got from people I worked with. This was over two years ago, but I still remember my stomach dropping when I realized that instead of using “EA Forum Digest #84” as the subject line for the 84th Digest, I had used “...#85.” Then I did it AGAIN a few weeks later (instead of #89). I’ve screenshotted Ben’s (my manager’s) reaction.
I discussed some related topics in a short EAG talk I gave last year, and also touched on these topics in my post about “invisible impact loss”. [An image from that talk.]
1. ^ It was there because
A note on how I think about criticism
(This was initially meant as part of this post,[1] but while editing I thought it didn't make a lot of sense there, so I pulled it out.)
I came to CEA with a very pro-criticism attitude. My experience there reinforced those views in some ways,[2] but it also left me more attuned to the costs of criticism (or of some pro-criticism attitudes). (For instance, I used to see engaging with all criticism as virtuous, and have changed my mind on that.) My overall takes now aren’t very crisp or easily summarizable, but I figured I'd try to share some notes.
It’s generally good for a community’s culture to encourage criticism, but this is more complicated than I used to think. Here’s a list of things that I believe about criticism:
1. Criticism or critical information can be extremely valuable. It can be hard for people to surface criticism (e.g. because they fear repercussions), which means criticism tends to be undersupplied.[3] Requiring critics to present their criticisms in specific ways will likely stifle at least some valuable criticism. It can be hard to get yourself to engage with criticism of your work or things you care about. It’s easy to dismiss true and important criticism without noticing that you’re doing it.
→ Making sure that your community’s culture appreciates criticism (and earnest engagement with it), tries to avoid dismissing critical content based on stylistic or other non-fundamental qualities, encourages people to engage with it, and disincentivizes attempts to suppress it can be a good way to counteract these issues.
2. At the same time, trying to actually do anything is really hard.[4] Appreciation for doers is often undersupplied. Being in leadership positions or engaging in public discussions is a valuable service, but opens you up to a lot of (often stressful) criticism, which acts as a disincentive for being public. Psychological safety is important in teams (and communities), so it’s u
I've been looking at the numbers with regard to how many GPUs it would take to train a model with as many parameters as the human brain has synapses. The human brain has roughly 100 trillion synapses, and they are sparse and very efficiently connected; a standard AI model fully connects every neuron in a given layer to every neuron in the previous layer, which is less efficient. The H100 has 80 GB of VRAM, so assuming each parameter is 32 bits (4 bytes), you can fit about 20 billion parameters per GPU. So you'd need roughly 5,000 GPUs just to fit a single instance of a human-brain-sized model in memory. If you assume inefficiencies and the need to keep other data (activations, optimizer state, etc.) in memory as well, you could ballpark another order of magnitude, so something like 50,000 might be needed. For comparison, it's widely believed that OpenAI trained GPT-4 on about 10,000 A100s that Microsoft let them use from their Azure supercomputer, most likely the one listed as third most powerful in the world by the Top500 list. Recently, though, Microsoft and Meta have both moved to acquire GPU fleets in the 100,000 range, and Elon Musk's X.ai recently managed to get a 100,000-H100 supercomputer online in Memphis. So, in theory at least, we are nearly at the point where they could fit a human-brain-sized model in memory for training. However, keep in mind that training such a model would take a ton of compute time; I haven't done the calculations for FLOPS yet, so I don't know if it's feasible. Just some quick back-of-the-envelope analysis.
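A minimal sketch of the arithmetic above; the synapse count, bytes per parameter, and 10x overhead factor are rough illustrative assumptions, not measured values:

```python
# Back-of-the-envelope estimate: GPUs needed to hold a model with
# as many parameters as the human brain has synapses.
# All inputs are rough assumptions for illustration only.

synapses = 100e12          # ~100 trillion synapses, treated as parameters
bytes_per_param = 4        # 32-bit (fp32) parameters
h100_vram_bytes = 80e9     # 80 GB of VRAM per H100
overhead_factor = 10       # ballpark for activations, optimizer state, inefficiency

params_per_gpu = h100_vram_bytes / bytes_per_param          # ~20 billion
gpus_to_fit_weights = synapses / params_per_gpu             # ~5,000
gpus_with_overhead = gpus_to_fit_weights * overhead_factor  # ~50,000

print(f"Parameters per GPU: {params_per_gpu:.2e}")
print(f"GPUs to fit weights alone: {gpus_to_fit_weights:,.0f}")
print(f"GPUs with ~10x overhead: {gpus_with_overhead:,.0f}")
```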
When EA Lund does tabling at student association fairs, one thing that's gotten a laugh out of some people is having two plates of cookies people can take from. One of them gets a sticky saying "this cookie saves one (1) life", and the other gets a sticky saying "this cookie saves 100 lives!" 

Wednesday, 2 October 2024

Frontpage Posts

Quick takes

I'd love to see the EA Forum add a section titled "Get Involved" or something similar. There is the groups directory, but that's only one of many ways that folks can get more involved, from EAGx conferences and Virtual Programs to 80,000 Hours content/courses and donating.
I quickly wrote up some rough project ideas for ARENA and LASR participants, so I figured I'd share them here as well. I am happy to discuss these ideas and potentially collaborate on some of them.
Alignment Project Ideas (Oct 2, 2024)
1. Improving "A Multimodal Automated Interpretability Agent" (MAIA)
Overview: MAIA (Multimodal Automated Interpretability Agent) is a system designed to help users understand AI models by combining human-like experimentation flexibility with automated scalability. It answers user queries about AI system components by iteratively generating hypotheses, designing and running experiments, observing outcomes, and updating hypotheses (see the illustrative sketch after this list). MAIA uses a vision-language model (GPT-4V, at the time) backbone equipped with an API of interpretability experiment tools. This modular system can address both "macroscopic" questions (e.g., identifying systematic biases in model predictions) and "microscopic" questions (e.g., describing individual features) with simple query modifications. This project aims to improve MAIA's ability to answer either macroscopic or microscopic questions on vision models.
2. Making "A Multimodal Automated Interpretability Agent" (MAIA) work with LLMs
MAIA is focused on vision models, so this project aims to create a MAIA-like setup, but for the interpretability of LLMs. Given that this would require creating a new setup for language models, it would make sense to come up with simple interpretability benchmark examples to test MAIA-LLM. The easiest way to do this would be to either look for existing LLM interpretability benchmarks or create one based on interpretability results we've already verified (it would be ideal to have a ground truth). Ideally, the examples in the benchmark would be simple, but new enough that the LLM has not seen them in its training data.
3. Testing the robustness of Critique-out-Loud Reward (CLoud) Models
Critique-out-Loud reward models are reward models that can reason explici
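For intuition on idea 1, here is a minimal, hypothetical sketch of the hypothesize-experiment-update loop that the MAIA overview describes. None of these class or function names correspond to MAIA's actual API, and the "experiment" is just mocked evidence; it is a sketch of the loop structure, not an implementation of MAIA.

```python
# Hypothetical sketch of an automated-interpretability agent loop in the
# spirit of MAIA: propose hypotheses about a model component, run
# experiments with interpretability tools, and update credences.
# All names are placeholders, not MAIA's real API.

from dataclasses import dataclass, field


@dataclass
class Hypothesis:
    description: str
    confidence: float  # agent's current credence in the hypothesis


@dataclass
class InterpretabilityAgent:
    query: str  # e.g., "What does unit 42 in layer 7 detect?"
    hypotheses: list = field(default_factory=list)

    def generate_hypotheses(self):
        # In MAIA this step is done by a vision-language model backbone;
        # here we just seed a single placeholder hypothesis.
        self.hypotheses = [Hypothesis("responds to curved edges", 0.5)]

    def run_experiment(self, hypothesis):
        # Placeholder for calling interpretability tools (e.g., showing
        # synthetic inputs and reading off activations). Returns mock evidence.
        return {"supports": True, "strength": 0.2}

    def update(self, hypothesis, evidence):
        # Nudge the credence up or down depending on the evidence.
        delta = evidence["strength"] if evidence["supports"] else -evidence["strength"]
        hypothesis.confidence = min(1.0, max(0.0, hypothesis.confidence + delta))

    def answer(self, max_rounds=3, threshold=0.8):
        self.generate_hypotheses()
        for _ in range(max_rounds):
            for h in self.hypotheses:
                self.update(h, self.run_experiment(h))
            best = max(self.hypotheses, key=lambda h: h.confidence)
            if best.confidence >= threshold:
                return best.description
        return max(self.hypotheses, key=lambda h: h.confidence).description


if __name__ == "__main__":
    agent = InterpretabilityAgent(query="What does unit 42 in layer 7 detect?")
    print(agent.answer())
```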
It's important to think about the meta-level incentives and factors in the policy space that might get in the way of having an impact (such as making AI safer). One I heard today: policy people thrive in moments of regulatory uncertainty, while regulatory uncertainty is bad for companies.
