A research project by
Kayode Adekoya | Economist | AI Safety and Animal Welfare Advocate
Olukoyaolukayode7477@gmail.com
Submitted to:
Electricsheep Futurekind Winter Fellowship, 2025/2026
Abstract
Artificial intelligence (AI) is increasingly understood as a general-purpose technology (GPT) with transformative, cross-sectoral effects (Bresnahan & Trajtenberg, 1995; Helpman, 1998). While contemporary AI governance and alignment debates primarily focus on human-centered risks such as bias, transparency, and existential safety (OECD, 2019; Russell, 2019; UNESCO, 2021), comparatively little attention has been given to how AI systems reshape human–animal relations. This omission is particularly consequential in the Global South, where rapid agricultural modernization, biodiversity vulnerability, and institutional capacity constraints intersect (FAO, 2022; World Bank, 2020). Drawing on general-purpose technology theory, AI alignment scholarship, and animal political theory (Donaldson & Kymlicka, 2011; Nussbaum, 2006; Singer, 1975), this paper develops a multispecies governance framework for AI deployment. Using empirical illustrations from livestock digitization and wildlife AI systems, it argues that AI may unintentionally scale animal suffering in the absence of institutional safeguards.
The paper concludes by outlining policy recommendations for integrating animal welfare into AI governance regimes across the Global South.
1. Introduction
Artificial intelligence is rapidly transforming economic systems, institutional infrastructures, and governance architectures worldwide. As a general-purpose technology, AI exhibits economy-wide applicability, continuous technical improvement, and complementarities with other innovations (Bresnahan & Trajtenberg, 1995; Helpman, 1998). These characteristics position AI as a structural force capable of reshaping agricultural systems, conservation practices, and production models.
Current AI governance frameworks emphasize fairness, accountability, transparency, and safety (OECD, 2019; UNESCO, 2021). Alignment discourse further explores long-term risks and the problem of aligning AI systems with human values (Bostrom, 2014; Russell, 2019). However, these frameworks remain implicitly anthropocentric. They focus on human stakeholders while largely neglecting welfare implications for non-human animals.
Animal ethics scholarship has long challenged strictly anthropocentric moral boundaries. Utilitarian approaches emphasize the moral relevance of sentience (Singer, 1975), capabilities-based theories extend justice to non-human beings (Nussbaum, 2006), and political models conceptualize animals as members of moral communities (Donaldson & Kymlicka, 2011). Yet these perspectives remain marginal within AI governance debates.
This omission is particularly significant in the Global South, where AI adoption intersects with expanding livestock sectors and biodiversity governance challenges (FAO, 2022; World Bank, 2020). In such contexts, AI may function as a productivity multiplier within industrial animal systems, potentially accelerating intensification without parallel ethical oversight.
This paper advances three core arguments:
1. AI, as a general-purpose technology, amplifies structural scaling effects in animal production and conservation systems.
2. Existing AI governance frameworks insufficiently integrate non-human moral subjects.
3. Governance asymmetries in the Global South make multispecies integration urgently necessary.
2. AI as a General-Purpose Technology
General-purpose technologies are defined by their pervasiveness, continuous improvement, and innovation complementarities (Bresnahan & Trajtenberg, 1995; Helpman, 1998). Historically, technologies such as electricity and information technology reshaped entire economic systems through downstream innovation effects.
AI demonstrates these properties through deployment across agriculture, finance, healthcare, logistics, and environmental monitoring. In agricultural systems specifically, AI enables automated environmental controls, feed optimization algorithms, predictive disease detection, and biometric livestock monitoring (Berckmans, 2017; FAO, 2022).
Precision livestock farming technologies can improve disease detection and reduce certain mortality risks (Berckmans, 2017). However, these same systems also enable higher stocking densities, tighter confinement management, and accelerated production cycles. From a GPT perspective, the relevant governance concern is not merely productivity gains but the structural dynamics being amplified (Bresnahan & Trajtenberg, 1995).
If AI is deployed within high-intensity confinement systems, it may scale both efficiency and suffering simultaneously. Welfare science emphasizes that behavioral restriction, stocking density, and environmental stress are core determinants of animal well-being (Fraser, 2008). Absent explicit welfare safeguards, AI optimization systems may prioritize output metrics while externalizing welfare costs.
3. Anthropocentrism in AI Governance
Major AI governance frameworks center human interests. The OECD AI Principles emphasize human rights and democratic values (OECD, 2019). UNESCO’s Recommendation on the Ethics of Artificial Intelligence similarly frames ethical obligations primarily in relation to human dignity and societal well-being (UNESCO, 2021). Long-term AI safety discourse focuses on preventing existential or catastrophic risks to humanity (Bostrom, 2014; Russell, 2019).
While these concerns are normatively important, they create a structural blind spot. AI systems can be technically aligned with human economic objectives while generating unintended harms for non-human sentient beings. Because welfare impacts on animals are rarely included in algorithmic impact assessments, they remain external to optimization targets and regulatory review processes.
Animal political theory argues that animals affected by human institutions are members of shared political communities (Donaldson & Kymlicka, 2011). If AI systems restructure agricultural or ecological environments inhabited by animals, those animals become affected stakeholders in governance terms. Excluding them from regulatory consideration perpetuates anthropocentric bias in technological oversight.
4. The Global South Context
The Global South presents distinctive governance conditions shaped by rapid economic transformation and institutional capacity constraints. Livestock sectors are expanding in response to urbanization and rising protein demand (FAO, 2022). Digital economy initiatives aim to accelerate modernization and productivity (World Bank, 2020).
AI-enabled agricultural systems, including disease analytics, automated climate control, and data-driven breeding optimization, are increasingly introduced into emerging markets (FAO, 2022). While these technologies may reduce mortality and improve operational efficiency, they also facilitate higher-density production models. In regulatory environments where animal welfare oversight is limited, productivity gains may outpace ethical safeguards.
Wildlife conservation technologies further illustrate AI’s normative plasticity. AI-driven monitoring systems can strengthen anti-poaching enforcement and biodiversity tracking (Sandbrook et al., 2013). Yet these technologies also introduce governance challenges related to data ownership, surveillance, and prioritization of species.
The Global South thus functions as a critical site where AI’s developmental narrative intersects with ethical governance gaps. Without institutional integration of welfare standards, AI diffusion may structurally entrench intensified animal systems under the banner of modernization.
5. Toward a Multispecies AI Governance Framework
A multispecies AI governance framework requires integration across technological, moral, and institutional dimensions.
First, policymakers must recognize AI’s scaling dynamics as a general-purpose technology (Bresnahan & Trajtenberg, 1995). Governance must anticipate second-order structural effects rather than evaluating isolated deployments.
Second, moral scope expansion is necessary. Sentience-based ethics (Singer, 1975), capabilities approaches (Nussbaum, 2006), and political models of animal inclusion (Donaldson & Kymlicka, 2011) provide normative foundations for integrating animals into governance consideration.
Third, institutional capacity building in the Global South is critical. Cross-ministerial coordination between agricultural, veterinary, environmental, and digital economy authorities can prevent siloed regulation. International development initiatives supporting digital transformation (World Bank, 2020) should incorporate welfare safeguards into technology transfer processes.
Without such integration, AI adoption may unintentionally amplify systemic harms at scale.
6. Conclusion and Policy Recommendations
Artificial intelligence’s status as a general-purpose technology implies that its governance cannot be limited to immediate performance metrics or isolated deployments (Bresnahan & Trajtenberg, 1995). GPTs reshape institutional logics, production structures, and incentive systems over time. In agricultural and ecological contexts across the Global South, AI systems are increasingly embedded in livestock intensification, disease surveillance, and wildlife management infrastructures (FAO, 2022). Without anticipatory governance, these systems risk scaling structural harms alongside efficiency gains.
If AI deployment prioritizes productivity maximization without embedding welfare constraints, it may entrench high-density confinement models at unprecedented scale. Welfare science demonstrates that stocking density, behavioral restriction, and environmental stress are core determinants of animal suffering (Fraser, 2008). Simultaneously, animal political theory argues that animals affected by institutional systems are members of shared moral communities and therefore legitimate subjects of governance consideration (Donaldson & Kymlicka, 2011; Nussbaum, 2006). Integrating these insights into AI governance requires institutional redesign rather than rhetorical inclusion.
6.1 Multispecies AI Impact Assessments
First, regulatory systems should expand algorithmic impact assessments to include multispecies welfare metrics. Existing AI governance frameworks typically evaluate risks related to bias, privacy, and human rights (OECD, 2019; UNESCO, 2021). However, agricultural AI systems materially alter the lived environments of animals.
Multispecies AI Impact Assessments would require developers and deployers of agricultural AI systems to evaluate:
– Changes in stocking density enabled by automation
– Behavioral restriction indicators
– Mortality and morbidity trends
– Long-term structural scaling incentives
Such assessments would function analogously to environmental impact reviews but extend explicitly to sentient welfare outcomes. Importantly, this approach does not prohibit AI adoption; rather, it introduces welfare-sensitive guardrails during early diffusion stages.
6.2 Welfare-Embedded Algorithmic Design
Second, governance frameworks should require welfare metrics to be embedded directly within optimization systems. AI systems in livestock production often optimize feed conversion ratios, growth rates, and mortality reduction (Berckmans, 2017). However, if welfare indicators are excluded from objective functions, they become externalities.
Regulatory standards could require that AI-driven agricultural management systems integrate measurable welfare proxies, such as space allocation thresholds, behavioral diversity markers, and stress detection analytics, into algorithmic decision-making. This aligns with value alignment theory’s insight that objective specification determines system behavior (Russell, 2019). By embedding welfare constraints at the design stage, policymakers can prevent purely output-driven optimization from dominating system performance.
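As an illustrative sketch of how such welfare proxies could enter an objective function directly rather than remaining externalities, consider the following. All metric names, weights, and thresholds here are hypothetical and not drawn from any deployed precision livestock system; the point is the structural form, in which welfare violations reduce the score a management policy receives:

```python
# Illustrative sketch of a welfare-constrained objective for a hypothetical
# livestock-management optimizer. All metric names and thresholds are
# invented for illustration, not drawn from any real system.

def welfare_constrained_objective(
    productivity: float,          # e.g. normalized feed-conversion score, 0-1
    stocking_density: float,      # animals per square metre
    behavioral_diversity: float,  # proxy index, 0 (restricted) to 1 (rich)
    stress_index: float,          # sensor-derived proxy, 0 (calm) to 1 (acute)
    max_density: float = 0.5,     # hypothetical space-allocation threshold
    min_diversity: float = 0.6,   # hypothetical behavioral-diversity floor
    welfare_weight: float = 2.0,  # how strongly welfare terms shape the score
) -> float:
    """Score a candidate management policy.

    Welfare proxies enter the objective directly, so violations lower
    the score instead of being externalized.
    """
    # Penalize crowding beyond the space-allocation threshold.
    density_penalty = max(0.0, stocking_density - max_density)
    # Penalize behavioral restriction below the diversity floor.
    diversity_penalty = max(0.0, min_diversity - behavioral_diversity)
    # Stress is penalized proportionally.
    return productivity - welfare_weight * (
        density_penalty + diversity_penalty + stress_index
    )

# A high-output but crowded, stressful configuration scores worse than a
# moderately productive, welfare-compliant one:
intensive = welfare_constrained_objective(0.95, 0.9, 0.3, 0.6)
moderate = welfare_constrained_objective(0.75, 0.4, 0.7, 0.1)
```

The design choice matters more than the specific numbers: once welfare terms sit inside the objective, any optimizer maximizing it must trade productivity against welfare explicitly, which is the regulatory leverage point the paragraph above describes.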
6.3 Cross-Ministerial Institutional Coordination
Third, governance of multispecies AI systems requires institutional coordination across ministries traditionally operating in silos. Agricultural ministries regulate productivity and food security; environmental agencies oversee biodiversity; digital economy authorities promote technological innovation. AI’s cross-sectoral nature collapses these distinctions.
Formal inter-ministerial review bodies or task forces should evaluate large-scale AI agricultural deployments. This structure would prevent regulatory fragmentation, particularly in emerging economies where institutional capacity is uneven (World Bank, 2020). Coordinated governance ensures that digital transformation initiatives do not inadvertently undermine animal welfare or ecological resilience.
6.4 Capacity Building and Technical Literacy
Fourth, effective governance requires technical literacy within veterinary and regulatory institutions. Many Global South jurisdictions face resource and expertise constraints in evaluating AI systems (FAO, 2022). Without sufficient capacity, welfare standards risk remaining aspirational rather than enforceable.
International development programs supporting digital transformation should incorporate AI auditing training for veterinary authorities and agricultural inspectors. This includes technical understanding of machine learning systems, data pipelines, and performance metrics. Building local expertise reduces dependency on foreign technology providers and strengthens sovereign regulatory oversight.
6.5 Procurement and Market Incentives
Fifth, public procurement policies can function as leverage points. Governments in emerging economies frequently subsidize or co-finance agricultural modernization programs. Conditioning procurement approval or public funding on compliance with welfare-integrated AI standards would create market incentives for ethical system design.
Such mechanisms align with broader responsible innovation strategies in technology governance (OECD, 2019). By embedding welfare criteria into funding eligibility, states can shape market behavior without imposing blanket prohibitions.
6.6 Regional Governance Cooperation
Finally, regional coordination across Global South jurisdictions is essential to prevent regulatory arbitrage. If welfare-integrated AI standards vary widely between neighboring countries, production may shift toward weaker regulatory environments.
Regional economic communities and agricultural unions could develop harmonized guidelines for welfare-sensitive AI deployment. Cooperative governance increases bargaining power in negotiations with multinational technology firms and reduces incentives for standards competition.
Expanding AI governance beyond anthropocentric frameworks is not anti-development. Rather, it reflects an anticipatory governance strategy consistent with the structural power of general-purpose technologies (Bresnahan & Trajtenberg, 1995). AI systems reshape not only productivity metrics but lived environments across species boundaries.
The Global South stands at a formative moment in AI diffusion. Early institutional design choices will determine whether AI entrenches intensified confinement systems or supports welfare-sensitive modernization. Integrating multispecies safeguards into governance frameworks today can prevent structural lock-in tomorrow.
Ethical foresight, institutional integration, and capacity investment are therefore not peripheral concerns. They are central to ensuring that AI’s transformative potential aligns with a broader conception of justice, one that recognizes the multispecies character of technologically mediated economies.
References
Arts, K., van der Wal, R., & Adams, W. M. (2015). Digital technology and the conservation of nature. Ambio, 44(Suppl. 4), 661–673.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Helpman, E. (Ed.). (1998). General purpose technologies and economic growth. MIT Press.
OECD. (2019). OECD principles on artificial intelligence. OECD Publishing.
Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Viking.
Singer, P. (1975). Animal liberation. HarperCollins.
UNESCO. (2021). Recommendation on the ethics of artificial intelligence. UNESCO.
World Bank. (2020). Digital Economy for Africa initiative. World Bank.
