Exciting resource, and well presented! I'm digging into the insecticide section now. Some of the research into numbers of individuals, prevalence of insecticides, biggest actors, and off target effects is also useful for grounding biodiversity impact estimations. Thanks to all the researchers for their hard work on this project.
I think it's probably true that animal advocates under-rate how weird things might be with TAI, but I am not convinced that this would significantly change how resources are allocated:
Hi, I'm trying to understand your call to action.
I'm confused why donors "should not give to Founder’s Pledge or Giving Green’s climate fund until charities that engage in nuclear advocacy are no longer part of their recommended charities lists." It sounds like you are mainly saying that nuclear is ineffective. You also believe funding nuclear efforts might worsen outcomes by displacing renewables. Are you saying it is a significant enough backfire so as to negate the effectiveness of the rest of the fund? Or is this just a way to say that "it would be mor...
Thanks Tom. I'm sure that's true in theory, but in practice RP is at the public forefront of the animal welfare work in the way that they aren't in other work. That's not to diminish other work, more to say that in the public sphere, the moral weights, cause prioritization work and surveys on community preferences point heavily in the direction of animal welfare.
So I might weakly disagree with your "in practice" claim. This might not be intentional or even bad if it's pushing animal welfare work more to the forefront.
Matthew, thank you for engaging critically with our work. We weren't contacted for comment before publication, which led to some mischaracterizations of our strategy as well as the case for nuclear-oriented climate philanthropy. The TLDR for everyone who stops reading here is that I think – while it cites a long list of specific things I agree with – Matthew’s post:
(a) misunderstands the actual foci of our work,
(b) makes a set of claims against nuclear that are much weaker than they seem and, crucially,
(c) does not address and thus does not refute t...
A late comment to say that I don't think RP takes the view that any given cause area is more important than another, either philosophically or in practice. Our GHD team produces a steady stream of – I think – interesting and helpful reports. Perhaps this perception stems from the fact that a lot of our GHD work is not public (for various reasons), or simply that people don't engage with it as much as they might have in the past.
I could report 50 % for 68 and 69 eyewitnesses, but this does not necessarily imply I am insensitive to small changes in the number of eyewitnesses. In practice, I would be reporting my best guess rounded to the closest multiple of 0.1 or so. So I believe the reported value being exactly the same would only mean my best guesses differ by less than 10 pp, not that they are exactly the same. I would say the mean of the (rounded) reported best guesses for a given number of eyewitnesses tends to the (precise) underlying best guess as the number of reports increa...
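The rounding point here can be sketched in a few lines (a hypothetical illustration: the underlying best guesses are made up, and the 0.1 granularity follows the comment):

```python
def reported(best_guess, step=0.1):
    # Report the precise best guess rounded to the nearest multiple of `step`.
    return round(best_guess / step) * step

# Hypothetical underlying best guesses differing by less than 10 pp:
p_68 = 0.48  # with 68 eyewitnesses
p_69 = 0.52  # with 69 eyewitnesses

# Both produce the identical reported value of 50 %:
print(reported(p_68), reported(p_69))  # 0.5 0.5
```

So identical reports are compatible with the underlying best guesses differing, as long as the difference is smaller than the reporting granularity.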
This was a fun read, thanks for sharing!
I agree with you that cash benchmarking is a helpful, relatively intuitive metric. But I also think that all benchmarks in our space sometimes provide a veneer of precision, when ultimately there are a bunch of non-trivial subjective beliefs that help build out even a quantitative-looking cash benchmark. Concretely, how much do we weigh a life relative to hard cash? This is not an easy question, and I worry sometimes that cash benchmarking makes people believe it is some kind of purely objective metric.
Executive summary: The authors argue that AI systems should sometimes act as “good citizens” by proactively taking uncontroversial, context-sensitive prosocial actions beyond user instructions, and that this can yield large societal benefits without significantly increasing takeover risk if carefully designed.
Key points:
Executive summary: The author argues that under deep AI timeline uncertainty, you should choose career strategies by expected value across scenarios—often favoring paths with higher upside in longer timelines—while balancing learning, limited deference to experts, and acting despite uncertainty.
Key points:
The reasons you mentioned for gathering strong evidence not being possible (or being very difficult) apply to some extent to efforts increasing human welfare, but humans have probably still made progress on increasing human welfare over the past 200 years or so? Can one be confident similar progress cannot be extended to non-humans?
I agree research can backfire. However, at least historically, doing research on the sentience of animals, and on how to increase their welfare has mostly been beneficial for the target animals?
I don't think they are trying to convert the EA community into something else - they are pretty clearly creating separate spaces for their movement/community. [1]
Describing their post as using "applause lights" seems at best uncharitable, and "absolute nonsense" is just rude. There are several well-received posts on the forum around "[a]ugmenting decision-making with meditative (e.g. mindfulness) [practices]" like this one and this one. It's fine to dislike their principles, but I think it's worth making an effort to be encouraging when fellow altruis...
Some questions here are whether starting with 50-50 as precise probabilities is reasonable, and whether the general approach of assigning 50-50 as precise probabilities is reasonable.
If, when looking at the scenario, you would have done something like "wow, that's so complicated and I'm clueless, so 50-50", then your reaction almost certainly would have been the same if the example originally included one extra eyewitness in favour of one side. But then this tells you your initial way to assign credences was insensitive to this small difference. And yet after the initial a...
That's awesome, Denis. Glad you are liking the content from Consultants for Impact.
Would love to talk about Effective Giving Ireland. Feel free to message me, and we can find a time. We did a successful campaign for EA Germany last year.
I agree with you. EA needs to market itself better. Why are there more people into many niche hobbies than there are EAs? I believe the answer is marketing and advertising. EA and EA orgs need to get themselves out there in a meaningful way. The ability to do so has never been more accessible.
In some cases, we can't gather strong enough evidence, say because:
To clarify on donations - I think if you still want to give to individual climate charities recommended by Giving Green, or to research operations, that's fine. The alternative protein space I think is likely the best bet. I just don't think the FP or GG funds are particularly impactful right now.
I strongly disliked this post for reasons that I'm not sure how to articulate. It seems to be advocating for a sort of lack of grounding in cost-effectiveness, which is the thing that makes EA good. Or maybe my issue is that this post advocates for things that are difficult to disagree with ("full-spectrum knowing"; "wisdom"), without acknowledging tradeoffs (why do EAs allegedly not put enough priority on full-spectrum knowing?) or saying anything concrete about how EAs could do more good.
[edited to be more polite]
Those are great points, thanks, I think you are right. On the other hand, I think my argument was that if the "science is solved" and cultivated meat became cheaper and more environmentally friendly, I don't see the current state of factory farming as a stable equilibrium situation: I don't think it is reasonable to expect an indefinite protection of a more expensive, more polluting and less worker friendly economic sector in favour of another. Eg, a ban might be feasible, but it may not be sustainable in decade-long time horizons.
Could you comment on the sense of "should" you have in mind in this post?
I think your core thesis is something like "it would be more socially efficient for AI systems to have prosocial drives". (I lean agree.)
But then sometimes you write as though the implication is "AI companies should unilaterally implement more prosocial drives in their systems". And this feels much less obvious to me.
If the purchasers of AI services prefer them to not have prosocial drives, then this could be imposing values on the consumers (which might ultimately have the effe...
Just commenting on the likelihood of a full-EU ban being low. FWIW I don't think it is currently more likely to happen than not, but I think you are underestimating the risk.
To block a regulation or legislation in the European Union under qualified majority voting rules, a "blocking minority" must be formed by at least four member states and represent more than 35% of the EU population.
Italy and Hungary have already banned cultivated meat. The Romanian Senate has also approved a ban (although I don't think it has been implemented). France and Aus...
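The blocking-minority rule above is simple enough to sketch as a check (the coalitions and population shares below are made-up placeholders, not official Eurostat figures):

```python
def is_blocking_minority(population_shares):
    # Rule described above: at least four member states together
    # representing more than 35 % of the EU population.
    return len(population_shares) >= 4 and sum(population_shares.values()) > 0.35

# Hypothetical coalitions (shares of EU population, illustrative only):
print(is_blocking_minority({"A": 0.13, "B": 0.02, "C": 0.04, "D": 0.17}))  # True
print(is_blocking_minority({"A": 0.19, "B": 0.15, "C": 0.13}))             # False: only 3 states
```

The point being that a handful of mid-sized member states can clear both thresholds, so a few national bans are not a negligible signal.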
Thanks Tobias, some good threads to pull here!
Yes, the question of whether int/a is a subset of EA, overlapping, or something totally different has been a big point of discussion, and we haven't found a clean answer.
You are right that EA in some sense already contains a lot of the things int/a is excited about (especially in terms of the official written principles being quite broad), but perhaps the real difference is what is emphasized in practice.
For example:
...Effective altruism doesn't take a position on whether we are in conflict with the natural unfo
I would simply say the expected mass is practically (not exactly) the same given the evidence available to me, and consider gathering additional evidence depending on how much I expected this to change future decisions. Likewise for altruistic interventions among which comparisons of the expected change in welfare feel very arbitrary.
I'd also be keen to get your response to this (and also this, if you have the time.)
I have replied to both comments.
I think there's a lot that could change if you very seriously weighed others' actual or possible direct impressions/intuitions without heavily privileging your own, before we even get into the question of precise vs imprecise credences. Epistemic modesty is going to do a lot of work first.
Thanks for elaborating on this. I imagine I could arrive to different (practical) priorities if I changed my mind about the topics you listed. At the same t...
If your team's work is worth doing, it's worth doing as an org
When a few people are doing good work together, the question of whether to formally incorporate into an organization can feel like a distraction from doing the actual work. Why take time away from your exciting research project to create an org? There are some real up-front costs to incorporating – dealing with bureaucracy, legal overhead, governance obligations – but I think the benefits of doing so are usually greater and underappreciated.
I already agreed with the premise before reading the article but I really enjoyed reading that! A lovely, funny, and concise article summarising the strengths and limitations of cash benchmarking.
The post avoids (perhaps deliberately to keep the tone light!) giving a name to one of the reasons why certain people are reticent to give cash to others, which I would describe as a kind of condescending paternalism e.g. 'I know better than them what's good for them'.
On a day to day you might encounter this kind of thinking with people who might be ok...
Yeah, the future described in this post isn't particularly "weird", per se, it's just using the assumption that every technology that has been hypothetically proposed for the future will be created by ASI soon after AGI arrives.
I think the future will be a lot more unpredictable than this. Analogously, I can imagine someone from 1965 being very confused about a future where immensely powerful computers can fit in your pocket, but human spaceflight had gone no further than the moon. It's very hard to predict in advance the constraints and shortcomings of future technology, or the practical and logistical factors that affect what is achieved.
Thanks for writing this!
You're describing integral altruism as broader than EA, but if I understand you correctly, it's also narrower in many ways. Some examples:
Letting go of the need to control everything and transcending the frame that we are in conflict with the natural unfolding of the universe. This also means emphasising collective action over individual heroism.
–> Effective altruism doesn't take a position on whether we are in conflict with the natural unfolding of the universe. EAs emphasise collective actions vs. individual heroism to various ...
Given how specific his predictions were, I think he did pretty darn well for 2 years ago. Besides perhaps the important China race dynamic @huw brought up, which was a central part of his thesis.
Thanks Jan, I appreciate the pushback.
Just wanted to flag the group is heavily selected for belief alignment with something like "EA/Constellation/Trajan House" views
As an event focused on x-risk, yes, I think this is fair.
"AI enabled human takeovers" was promoted as agenda to prioritize in multiple widely read memos by high statues people in the community (which the organisers prioritized in the reading list).
It's true that:
Have you considered that the reason these policies are not increasing AI usage is that AI usage is not particularly useful for many applications? Particularly when it comes to something like animal advocacy, I'm struggling to think of many things you'd actually need a full model subscription for (rather than just asking the occasional question to a free model).
I think the original policies are fine: they let people evaluate and decide for themselves how useful AI models are, and adjust strategies accordingly. Trying to pressure people to use AI beyond this level is going to make your team less effective.
Over on my blog, I wrote about prediction models, replacement value, and how I was taught about saving lives for pennies on the pound.
So long Mo Salah, and thanks for all the lives you saved.
"Death in a Shallow Pond": A new-ish book on the 'drowning child' thought experiment and EA
TIL about this book: Death in a Shallow Pond: A Philosopher, A Drowning Child, and Strangers in Need, published September 2025, by David Edmonds. I can't find it mentioned on the Forum but apologies if I've missed it. I haven't read it, but according to the blurb, it discusses 'the experiences and world events that led Singer to make his radical case and how it moved some young philosophers to establish the Effective Altruism movement, which tries to optimize philant...
Hi Michael.
It seems bad if we're basing how to do the most good on whims and biases.
I agree. However, in cases where priors are playing a crucial role, one should simply prioritise gathering more evidence until there is reasonable convergence about what to do (among a given group of people, for a particular decision)?
A great post. I agree - nuclear advocacy just isn't all that effective in a world where costs of renewables and batteries have fallen so much and continue to fall.
I think more widely, what is judged "the most effective climate philanthropy intervention" will shift rapidly over time due to technological/economic/societal progress on climate and it's going to be a constant scramble to keep up with that. This is different to the situation GiveWell is in, and GiveWell have far more money for their analysis operations than Giving Green do.
I encourage continued ...
Thanks for the great post, Gregory. Do you have any thoughts on the sequence "The challenge of unawareness for impartial altruist action guidance" from @Anthony DiGiovanni 🔸?
...Yet across my forecasts (on topics including legislation in particular countries, election results, whether people remain in office, and property prices - _all _of which I know very little about), I do somewhat better than the median forecaster, and substantially better than chance (Brier ~ 0.23). Crucially, the median forecaster also almost always does better than chance too (~ 0.32
Separately, here's Claude's direct reply to your specific points in case you're curious (sorry, I don't have enough of a developed inside-view take to respond myself!):
...On "China don't have any frontier labs, only labs which distill other models": this is probably too strong. DeepSeek introduced genuine architectural innovations (Multi-head Latent Attention, fine-grained MoE) that Epoch AI characterises as real advances, not just distillation. That said, the distillation question is genuinely debated: OpenAI has alleged it, and Chinese labs scraped millions of Cl
Thanks for reviewing and raising this! You're right that the US/China dynamics are central to Situational Awareness's thesis and we underemphasised them. We've now added a dedicated China/US section with its own tab and three expandable cards, evaluating his specific sub-predictions on infrastructure (7nm chips, power, Middle East), algorithms and open source, and strategic dynamics. Would value your review of the updated version if you have time!
How organisations with low AI usage can and should be using it more
There is a lot of discussion about how everyone should be using AI more, and efforts to increase use and literacy. So far in the animal advocacy spaces where I work, I’ve seen the following efforts to increase usage:
The above has made a real dent in AI usage, but much less than we should be aiming for given ...
This is really nice, I really like it. Millenarianism feels all too easy to reach for in AI risk—as you note, there is a subtle self-satisfaction in predicting the end of the world that we have to be careful not to use as a crutch. In the world where we succeed, it will have been important to have done so pro-socially for the world after to have any chance of being worth living in.
The epoch of superintelligence will not result in any meaningful improvements in animal welfare. Previous epochs of humanity, marked by transformative advancements such as the industrial and digital revolutions, have failed to yield meaningful improvements in animal welfare. If anything, these shifts created novel pathways for animal exploitation or rendered existing models, such as animal husbandry, vastly more lethal and efficient through the rise and continued development of factory farming. Unless there is a massive societal-level dietary shift, which ...
Many exciting ideas here - thanks very much for sharing.
From an initial read, there seem to be a lot of similarities to the approach taken by Ambitious Impact's Charity Entrepreneurship Program. Would you be able to share a little bit of context about how this program compares and contrasts to Ambitious Impact, aside from, of course, the geographic focus on Latin America?
Aside from financial donations, are there other ways to support these organizations? I'm thinking in particular in terms of providing advisory support or connections.
Fina...
My best guess about which of 2 identical objects has a larger mass in expectation will be arbitrary if their mass only differs by 10^-6 kg, and I have no way of assessing this small difference. However, this does not mean the expected mass of the 2 objects is fundamentally incomparable.
I worry you're reifying "expectations" as something objective here. The relative actual masses of the objects are clearly comparable. But if you subjectively can't compare them, then they're indeed incomparable "in expectation" in the relevant sense.
Hmm. Not a super well-thought out take here, but it seems to me that Situational Awareness’ biggest crux is around whether an arms race dynamic would develop between the U.S. and China, and he lays out a few specific ways in which that might happen.
I don’t see any evidence of such an arms race taking place. China don’t have any frontier labs, only labs which distill other models. They haven’t yet produced a capable chip and seem at least a few years to half a decade off (much slower than Aschenbrenner’s predictions). They haven’t waged a state-sponsored cy...
Our ancestors had less insight into the trade they were making than we do about our own situation. That's true.
Yet they still made the trade, and in hindsight, was it a bad trade to make? I disagree with people like Jared Diamond who argue that the agricultural revolution was the "worst mistake in the history of the human race". It certainly had some very negative consequences. But like most people, I think the agricultural revolution was still a good thing overall, despite the fact that it carried enormous negative side effects.
I suspect the transit...
I am extremely uncertain on this point. While there is a possibility that an aligned AI could be immensely beneficial for animals, I believe this is an outcome we absolutely cannot take for granted.
Broadly speaking, it is difficult to assess such a scenario without knowing the specific form an 'aligned' AI will take and what a world where humans coexist with an AGI or ASI will actually look like. As some have pointed out, if this AI were to simply 'lock in' current human values indefinitely, it would likely be really bad for animals.
It seems probable, howe...
It depends on the case. Do you think my answer to the above should influence which interventions I prioritise? My current top recommendations are research on i) the welfare of soil animals and microorganisms, and ii) comparisons of (expected hedonistic) welfare across species and digital systems. Could you see these changing if I thought EVs were imprecise instead of precise at a fundamental level?
I think there's a lot that could change if you very seriously weighed others' actual or possible direct impressions/intuitions without heavily privile...
Just as our ancestors experienced before us, we face the prospect of losing the world we know in exchange for material progress and prosperity.
Your ancestors who adopted agriculture did so because they thought that they and their children would get to eat the bread, not that they were sowing the seeds of their own destruction. If they had known that planting crops would lead to invasion and replacement they likely would not have done it. This rather large dis-analogy makes me think your use of the word 'just' is a bit of a stretch here.
My definition of going well for humans: for the existing population, there would be a re-allocation of resources. Food and water would be rationally distributed based on basic needs, and once basic needs are all met, based on wealth (or the ability to generate progress for the society).
With this as a premise, I think 1) there would be no need for factory farming, and 2) the welfare of all, including animals, would be rebalanced.
For point 2), I very much think the lack of welfare of animals is reflective of the lack of welfare in humans themselves.
Hi Ben.
Now imagine that, after considering all of this evidence, you learn a new fact: it turns out that there were actually 69 eyewitnesses (rather than 68) testifying that Smith did it. Does this make it the case that you should now be more confident in S than J? That, if you had to choose right now who to send to jail, it should be Smith? I think not.
One should update towards a higher chance of Smith having committed the crime. However, if one was around 50 % confident that Smith committed the crime before the update, an update much smaller than 50 pp wil...
Just wanted to flag the group is heavily selected for belief alignment with something like "EA/Constellation/Trajan House" views, and "AI enabled human takeovers" was promoted as an agenda to prioritize in multiple widely read memos by high-status people in the community (which the organisers prioritized in the reading list).
I dislike the "echo chambre" effect where the steps are:
- invite people partially based on alignment with the idea cluster
- tell them to read memos advocating something written by some of the most central people in the cluste...
I agree with the points you make in the 1st 3 paragraphs of your comment.
Would you take a direct impression that came from your brain — from an inscrutable process, prone to cognitive biases of various kinds, and whose reliability you can at best verify by track records in limited domains where feedback is practical, and where track records may not generalize well across tasks and domains — as better evidence than a direct impression from another person's brain, with access to the same objective external evidence?
Not necessarily. It depends on whi...
I think you've simplified the problem too much. There can be special cases where we can use symmetry and just take simple averages, but many practical cases are not like that. Indeed, that's the point of the distinction between complex and simple cluelessness in the first place.
I think, ideally, we should look for and exploit as much evidential symmetry as possible, but I don’t think we'll always find enough of it to land on a unique precise distribution, I'd guess in principle impossible in many cases (probably almost all cases of intervention and cause a...
Thanks for your reply!
I definitely take your point about "I used a narrow definition of AGI because I think that's where actionable analysis can be made, but I agree it's not necessarily enough." – I think I could have worded that better.
What I meant was that I think the world I discuss is plausible and we can get some actionable analysis from it, which can get us some way to identifying what actions may be more robust across different scenarios. (I agree we wouldn't want to discuss scenarios that are impossible.)
It seems the difference in our vi...
Interesting I never thought of that. So a "hedge" for direct impact careers?
I'm feeling some hesitation here - people who take direct impact careers often already implicitly donate by taking a big pay cut, and I'd see it as a portfolio play - as a movement we will have plenty of impact, even if not all direct "bets" work out.
But maybe I am not fully understanding this. For sure it feels like there is something there... can you maybe give a more specific example for when/how you'd use it?
Thanks for following up, Anthony.
My best guess about which of 2 identical objects has a larger mass in expectation will be arbitrary if their mass only differs by 10^-6 kg, and I have no way of assessing this small difference. However, this does not mean the expected mass of the 2 objects is fundamentally incomparable. Likewise, my best guess about which of 2 actions increases welfare more in expectation may be arbitrary without this implying that their expected change in welfare is incomparable.
I am not sure it matters whether one endorses precise expecte...
Hi Michael.
Do you think it's reasonable for two people with all of the same evidence to disagree on precise probabilities and expected values?
It depends on what is included in "all of the same evidence". If 2 people had exactly the same evidence about everything, including internal states about the plausibility of the probabilities, they would be the same people, and therefore would agree on everything. In practice, different people share some evidence, but start with different priors, and therefore do not have to agree on precise probabilities and expecte...
How would you choose the distributions for the model weights in a way that's not itself arbitrary? E.g. how do you choose their forms and parameters in a way that's not arbitrary?
I agree the distributions for the model weights would be arbitrary to some extent. However, I think probability density functions (PDFs) should be precise at a fundamental level, which implies precise expected values (EVs). If 2 PDFs feel exactly as plausible, I would simply use the mean between them.
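A minimal sketch of "use the mean between them": treat the two equally plausible PDFs as a 50-50 mixture, whose expected value is the average of the two expected values. The normal shapes and parameters below are illustrative assumptions, not anything from the thread:

```python
import random

mu1, mu2, sigma = 0.2, 0.6, 0.1  # hypothetical parameters of the two candidate PDFs

def sample_mixture():
    # Draw from the 50-50 mixture of the two candidate normal distributions.
    mu = mu1 if random.random() < 0.5 else mu2
    return random.gauss(mu, sigma)

random.seed(0)
n = 100_000
empirical_mean = sum(sample_mixture() for _ in range(n)) / n
# The mixture's mean sits at (mu1 + mu2) / 2 = 0.4:
print(abs(empirical_mean - 0.4) < 0.01)  # True
```

The same averaging carries through to any decision based on EVs, since expectation is linear in the mixture weights.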
I am not sure it matters whether one endorses precise EVs or not. In practice, I...
I worry I'm too pessimistic in general, but the world economy (and general living standards) have improved significantly over time, and farmed animal welfare seems to be a lot worse. That seems to be evidence to me that amazing technological progress won't be sufficient for animal welfare progress.
Does WAW dwarf FAW in expectation?
Yes
Most animals are wild animals, so the answer to this question should focus on them.
Not necessarily, because S-risks may be more important in expectation (e.g. a malevolent or vindictive ASI tiles the universe with extremely energy-efficient animal-like beings of pure suffering).
The poll defines "probably" as 70% chance. In this post, I wrote that I thought there was a ~70% chance that AGI would go well for animals.
I guess that means I believe there's a 50% chance that there's a 70% chance that AI goes well for animals? So I should vote in the exact middle of the spectrum?
However, the same goes for comparisons among the expected mass of seemingly identical objects with a similar mass if I can only assess their mass using my hands, but this does not mean their mass is incomparable.
I don't exactly understand what argument you're making here.
My core argument in the post is: Take any intervention X. We want to weigh up its impact for all sentient beings across the cosmos, where this "weighing up" is aggregation over all hypotheses. Now suppose we want to force ourselves to compare X with inaction, i.e., say either UEV(do X) >...
If you and me and all of humanity gets killed by AI and turned into paperclips, that would be an unprecedented moral catastrophe. If the AIs that killed all of us stay around and enjoy having more paperclips, that is still extremely bad. The very act of killing us makes these AIs not a worthy successor of the human species.
This suggests that proposing to pause AI today is like proposing to pause electricity in 1880
The prospect of AI killing all of us makes these very different. Yes, in both cases a pause will probably slow GDP growth. But humans should be willing to accept lower GDP if this notably reduces the chance of all humans being killed.
Do you think it's reasonable for two people with all of the same evidence to disagree on precise probabilities and expected values? If so, how would you justify picking your own precise probabilities over someone else's, if you think theirs are just as defensible?
Or would you just average yours and theirs in some way to get a new distribution? How?
And how far would you go, if you consider all the defensible precise probability distributions anyone could assign (whether or not anyone actually does so)? How do you weigh them all if there are infi...
for example we've aligned some ai to winning at chess and now they're better than any human
Chess bots are narrow AI, not general AI, which makes the situation very different. We don't know how to align an ASI to the goal of winning at chess. The most likely outcome would be some sort of severe misalignment—for example, maybe we think we trained the ASI to win at chess, but what actually maximizes its reward signal is the checkmate position, so it builds a fleet of robots to cut down every tree in the world to build trillions of chess sets and arranges e...
Hmm, that opens up a lot of interesting conversation threads. I actually think some goals will be easier to align ai towards than others, for example we've aligned some ai to winning at chess and now they're better than any human. Obviously that kind of goal is much simpler than any values framework that would be worth aligning agi to, but I think sentientist values would be easier to instill than "human values" (although not in the case of LLMs, I think they're already basically "aligned" with human values and we now need to shift them towards caring mor...
This is a very sad post to read for me. because I think it's obvious the AI x animals field needs to expand extremely quickly. I also agree that it's tiny currently and the funding situation is also constrained for now (have heard this will change from some important people, but it's not changing fast enough to grow a movement). I feel we're in a bit of a loop currently where some funders want to support impactful projects in this space but aren't seeing enough of those and the movement builders are really struggling to get funds to get more track record. ...
OK, I think I get what you're saying now. I think the statement "most current alignment work is going towards aligning AI with human values" is not true. Alignment work is primarily about how to point ASI at any goal at all without there being catastrophic unintended consequences.
It sounds to me like you're saying the structure of the problem is like this:
and these are two totally separate problems, and alignment researchers are working on #1 to the exclusion of #2. Wh...
[ETA: I posted a revised version of this essay here.]
AI pause advocates often say they are pro-technology and pro-economic growth, and that they simply make one exception for AI because of its unique risks. But this reasoning will grow less credible over time as AI comes to account for a larger and larger share of economic growth.
Simple growth models predict that AI capable of substituting for human labor will raise economic growth rates by an order of magnitude or more. If that's right, then AI will eventually be driving the vast majority of technological...
As far as I know, most current alignment work goes towards aligning AI with human values. If that's successful, then yay for us; but if we worked towards aligning AI with sentientist values (along the lines of "evidence, reason, and compassion for all sentient beings"), then we would also be in the group of valued beings. If people think that would go well for us, then I think it would make sense to think about ways to redirect more research towards aligning AI with all sentient beings, rather than just human values.
For example, humans. We are som...
I think you've made a lot of good points. But solutions to factory farming are broader than just cultivated meat. Plant-based meat is already much closer in cost and more acceptable to consumers. And the source of protein could come from fungus, bacteria, leaf protein, seaweed, etc., though those are probably not as acceptable as regular plants. It's also possible that AGI could help engineer a meat substitute that actually tastes better than animal meat, perhaps by triggering sweet taste buds without actually having appreciable sugar.
I think a second important question is whether "if AGI goes well for animals, it'll go well for humans". I think that's extremely likely, but I'm much more doubtful about it going well for animals if it goes well for humans.
We are animals, so AGI going well for animals allows us to instill at least somewhat simpler values in AI. But many people will want humans to be privileged in AI values, which is not only more likely to exclude non-humans, but is also a potentially more complicated goal and therefore more likely to fail.
I haven't figured out how to organize my thoughts well, so forgive me if this is unclear or disjointed.
I used a narrow definition of AGI because I think that's where actionable analysis can be made
Even assuming AI reaches your narrow definition of AGI and then stops advancing, AGI would still radically change the economic and political environment. The bits in this post about behavior of regulators become irrelevant if regulators are replaced by AI.
I don't think it's fair to say (e.g.) "it's hard to predict how politics/government will change, so I'll ...
Thanks for the comment, Mal.
Unfortunately I think this post is otherwise not very relevant, mainly because no one uses ContraPest [...]
[...]
- Since I'm not sure Evolve works either, it may be that any fertility control product used in the future has an entirely different formulation, making speculation here quite difficult.
Do you agree that replacing the rodenticide bait with ContraPest may impact soil animals much more than rodents, given my estimate that it decreases cropland by 0.413 m²-years per initial rodent? If so, how much smaller do you think the change in c...
I'll give it a try!
Open question: would it be useful to frame this as "impact insurance" for people in impactful careers?
As in:
- I expect this goal to be good for the world (say ~100 WELLBYs in expectation)
- If I don't achieve this goal, then I definitely owe something that's at least comparably good for the world (say ~$200 to PureEarth, since I've chosen WELLBYs)
- (or maybe at least half as good)
I think using it this way could help people who have a real hesitation between impactful work and impactful donations.
"Goes well for humans" (i.e. for a very long time) worlds are mostly worlds where AGI is fully theoretically and empirically aligned with a CEV-shaped alignment target, which for me logically requires animal welfare. (I also currently believe those worlds to be implausible because no company seems focused on this.)
I struggle to imagine any deliberative or reflective-preference oriented process that does not give the right answer to the animal welfare question. If it doesn't care about non-human animals, then it means animals are not sentient, or that the CEV...
Many pessimistic predictions about AGI or ASI tend to paint the picture of a superhuman agent with an extreme maximisation mindset powered by some unsophisticated version of rationalist principles, which would lead it to commit unspeakable acts of violence (e.g. the paperclip problem: the AI starts killing every form of life in order to save energy that could otherwise be used to make more paperclips).
This, to me, seems somewhat antithetical to the very notion of intelligence.
Surely, a truly 'superior' agent would be able to question the goal of tu...