Patient Longtermism as a benchmark
Meta: I haven’t seen this framing spelt out in these terms and think it’s a useful way of integrating considerations raised by patient longtermism into one overall EA worldview.
The considerations elucidated by patient longtermism, namely that our resources can “go further” in the future, are important. There is an analogy here to Singer’s drowning child argument, which says that, all else equal, you shouldn’t prefer helping someone who is spatially close to you over someone who is spatially far away. In other words, when evaluating different altruistic actions, you should only consider their “impact potential” and not, for example, your geographical distance from the moral patient. In Singer’s case, inequalities in global levels of development mean that money can go further (i.e. have more altruistic impact) abroad. In the case of patient longtermism, interest rates being higher than the rate at which creating additional welfare becomes more expensive over time means that money can go further in the future.
Personally, I feel generally very happy to defer to future beings’ judgement about what is best to do, since knowledge and wisdom are likely to have increased by then. Because of that (and abstracting from some other complications, some of which I will touch on later), I am happy to invest resources today in a way that lets them accumulate over time such that, eventually, future beings have more resources at hand for doing good, according to their judgement of how best to do that.
This is why I think estimates based on considerations of patient longtermism can usefully function as a benchmark against which to compare present-day altruistic actions. [1]
(Of course, all of this still abstracts away from a lot of real-world complexity, some of which is decision-relevant. A benchmark of this kind therefore ought to be used with care, as one among many inputs that weigh on one’s decision.)
[1] An early example of this is Philip Trammell’s calculation (see “Discounting for Patient Philanthropists” or his 80,000 Hours interview), which says that if interest rates continue to be higher than the rate at which creating additional welfare becomes more expensive, then in approximately 279 years, giving the invested money to rich people in the developed world would (still) create more welfare than giving the initial amount of money to the world’s poorest today.
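To make the underlying arithmetic concrete, here is a minimal sketch (the rates and the cost-effectiveness multiplier are illustrative numbers I picked for the example, not Trammell’s actual parameters): money invested at real rate r buys welfare whose cost grows at rate g, so investing beats giving now until the cost-effectiveness edge of today’s best targets is eaten up.

```python
from math import log

# Minimal sketch with illustrative numbers (not Trammell's actual parameters).
# Invested money grows as (1+r)^t while the cost of creating a unit of welfare
# grows as (1+g)^t. If today's best giving target is `multiplier` times more
# cost-effective than future targets, investing catches up after t years where
# (1+r)^t / (1+g)^t = multiplier.
def breakeven_years(r: float, g: float, multiplier: float) -> float:
    return log(multiplier) / (log(1 + r) - log(1 + g))

print(breakeven_years(r=0.05, g=0.02, multiplier=100))  # ~159 years
```

The exact horizon is very sensitive to the assumed gap between r and g, which is part of why such figures are best read as benchmarks rather than forecasts.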
Below, I briefly discuss some motivating reasons, as I see them, to foster more interdisciplinary thought in EA. This includes ways EA's current set of research topics might have emerged for suboptimal reasons.
More EA-relevant interdisciplinary research: why?
The ocean of knowledge is vast. But the knowledge commonly referenced within EA and longtermism represents only a tiny fraction of this ocean.
I argue that EA's knowledge tradition is skewed for reasons that include, but are not limited to, the epistemic merit of those bodies of knowledge. There are good reasons for EA to focus on certain areas:
Direct relevance (e.g. if you're trying to do good, it seems clearly relevant to look into philosophy a bunch; if you're trying to do good effectively, it seems clearly relevant to look into economics, among others, a bunch; if you came to think that existential risks are a big deal, it is clearly relevant to look into bioengineering, international relations, etc.)
Evidence of epistemic merit (e.g. physics has more evidence for epistemic merit than psychology, which in turn has more than astrology; in other words, beliefs gathered from different fields are more or less likely to pay rent, or to be explanatorily virtuous)
However, some of the reasons we’ve ended up with our current foci may not be as good:
The partly arbitrary way academic disciplines have been carved up
Inferential distances between knowledge traditions that hamper the free diffusion of knowledge between disciplines and schools of thought
Having a skewed knowledge base is problematic. There is a significant likelihood that we are missing out on insights or perspectives that might critically advance our undertaking. We don’t know what we don’t know. We have every reason to expect that we have blind spots.
***
I am interested in the potential value and challenges of interdisciplinary research.
Neglectedness
(Academic) incentives make it harder for transdisciplinary thought to flourish, resulting in what I expect to be an undersupply thereof. One way of thinking about this undersupply of interdisciplinary thought is in terms of "market inefficiencies". For one, individual actors are incentivised (because it’s less risky) to work on topics that are already recognised as interesting by the community (“exploitation”), as opposed to venturing into new bodies of knowledge that might or might not prove insightful (“exploration”). What is “already recognized as valuable by the community”, however, is only partly determined by epistemic considerations, and partly shaped by path-dependencies.
For two, “markets” are insufficiently liquid and thus tend to fail where we cannot easily specify what we want. This is generally true for intellectual work, but it is likely even more true for domain-scanning/epistemic-translation (DS/ET) work, because the relatively siloed structure of academia adds additional “transaction costs” to attempts to communicate across disciplinary boundaries.
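To make the first inefficiency concrete, here is a toy sketch (the payoff numbers are invented for illustration): a risk-averse individual researcher can rationally prefer an established topic even when venturing into a new domain is better in expectation, which is exactly the gap a community-level mechanism would need to close.

```python
# Toy model with invented numbers: an "established topic" pays off for sure,
# while an unexplored domain usually yields nothing but occasionally a big
# insight. Expected value favours exploration, yet a risk-averse individual
# (concave utility) still prefers to exploit.
SAFE = [(1.0, 1.0)]                    # (probability, payoff): certain modest payoff
RISKY = [(0.9, 0.0), (0.1, 15.0)]      # mostly nothing, rarely a large insight

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u=lambda x: x ** 0.5):  # concave u => risk aversion
    return sum(p * u(x) for p, x in lottery)

print(expected_value(RISKY), expected_value(SAFE))      # 1.5 vs 1.0: exploring wins in expectation
print(expected_utility(RISKY), expected_utility(SAFE))  # ~0.39 vs 1.0: the individual still exploits
```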
One way to reduce these inefficiencies is to improve the interfaces between disciplines. "Domain scanning" and "epistemic translation" are precisely about creating such interfaces. Their purpose is to identify knowledge that is concretely relevant to a given target domain and to make that knowledge accessible to thinkers entrenched in the "vocabulary" of that target domain. A useful interface between political philosophy and computer science, for example, might require a mathematical formalization of central ideas such as justice.
Challenges
At the same time, doing interdisciplinary research well is challenging. For example, interdisciplinary research can only be as valuable as a researcher's ability to identify knowledge relevant to their target domain, or as a research community's quality assurance and error correction mechanisms. Phenomena like citogenesis or motivatiogenesis are manifestations of these difficulties.
There have been various attempts at overcoming these incentive barriers: for example, the Santa Fe Institute, whose organizational structure completely disregards scientific disciplines; the ARPAs, which have a similar flavour; the field of cybernetics, which proposed an inherently transdisciplinary view of regulatory systems; or the recent surge in the literature on “mental models” (e.g. here or here).
A closer inspection of such examples - how far they were successful and how they went about it - might bear some interesting insights. I don't have the capacity to properly pursue such case studies in the near future, but it's definitely on my list of potentially promising (side) projects.
If readers are aware of other examples of innovative approaches trying to solve this problem that might make for insightful case studies, I’d love to hear them.
I think RAND is a good case study for interdisciplinary approaches to problem solving, though I'm biased. The key there - as in industry and most places other than academia, but unlike Santa Fe and the ARPAs - is a focus on solving concrete, specific problems regardless of the tools used.
Also, big +1 to cybernetics, which is an interesting case study for two reasons: first, because of what worked, and second, because of how it was supplanted/co-opted into narrow disciplines and largely fizzled out as its own thing.
The below provides definitions and explanations of "domain scanning" and "epistemic translation", in an attempt to add further gears to how interdisciplinary research works.
Domain scanning and epistemic translation
I suggest understanding domain scanning and epistemic translation as a specific type of research that either plays (or ought to play) an important role as part of a larger research process, or can be usefully pursued as “its own thing”.
Domain Scanning
By domain scanning, I mean the activity of searching through diverse bodies and traditions of knowledge with the goal of identifying insights, ontologies or methods relevant to another body of knowledge or to a research question (e.g. AI alignment, longtermism, EA).
I call source domains those bodies of knowledge that insights are drawn from. The body of knowledge we are trying to inform through this approach is the target domain. A target domain can be as broad as an entire field or subfield, or as narrow as a specific research problem (in which case I often use the term target problem instead).
Domain scanning isn’t about comprehensively surveying the entire ocean of knowledge, but instead about selectively scouting for “bright spots” - domains that might importantly inform the target domain or problem.
An important rationale for domain scanning is the belief that model selection is a critical part of the research process. By model selection, I mean the way we choose to conceptualize a problem at a high level of abstraction (as opposed to, say, working out the details given a certain model choice). In practice, however, this step often doesn’t happen at all, because most research happens within a paradigm that is already “in the water”.
As an example, say an economist wants to think about a research question related to economic growth. They will think about how to model economic growth and make choices according to the shape of their research problem. They might, for example, decide between an endogenous and an exogenous growth model, and make other modeling choices at a similar level of abstraction. However, those choices happen within an already comparatively limited space of assumptions - in this case, neoclassical economics. It's at this higher level of abstraction that I think we're often not sufficiently looking beyond a given paradigm. Like fish in the water.
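For concreteness, the two textbook options just mentioned (standard reduced forms, not tied to any particular paper) differ mainly in where growth comes from:

$$Y_t = K_t^{\alpha}(A_t L_t)^{1-\alpha}, \qquad \dot{A}_t/A_t = g \quad \text{(exogenous: the growth rate } g \text{ is simply assumed)}$$

$$Y_t = A K_t \quad \text{(endogenous “AK” model: growth emerges from capital accumulation itself)}$$

Note that both forms still presuppose the deeper neoclassical frame of optimizing agents in equilibrium; it is that shared frame, rather than the choice between the two equations, that the paragraph above is pointing at.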
Neoclassical economics, as an example, is based on assumptions such as agents being rational and homogeneous, and the economy being an equilibrium system. Those are, in fact, not straightforward assumptions to make, as heterodox economists have in recent years slowly been bringing to the attention of the field. Complexity economics, for example, drops the above-mentioned assumptions, which helps broaden our understanding of economics in ways I think are really important. Notably, complexity economics is inspired by the study of non-equilibrium systems in physics, and its conception of heterogeneous and boundedly rational agents comes from fields such as psychology and organizational studies.
Research within established paradigms is extremely useful a lot of the time, and I am not suggesting that an economist who tackles their research question from a neoclassical angle is necessarily doing something wrong. However, this type of research can only ever make incremental progress. As a research community, we have a strong interest in fostering, at a structural level, the quality of interdisciplinary transfer.
The role of model selection is particularly important for pre-paradigmatic fields (examples include AI safety and complexity science). Here, a willingness to test different frameworks for conceiving of a given problem seems particularly valuable in expectation: converging too early on one specific way of framing the problem risks locking in the burgeoning field prematurely. Pre-paradigmatic fields can often appear fairly chaotic, unorganized and unprincipled (“high entropy”). While this is sometimes evidence against the epistemic merit of a research community, I tend to abstain from holding it against emerging fields: since the variance of outcomes is higher, the potential upsides are higher too. (Of course, one’s overall judgement of the promise of an emerging paradigm will also depend on more than just this factor.)
Epistemic Translation
By epistemic translation, I mean the activity of rendering knowledge commensurable between different domains. In other words, epistemic translation refers to the intellectual work necessary to i) understand a body of knowledge, ii) identify its relevance for your target domain/problem, and iii) render relevant conceptual insights accessible to (the research community of) the target domain, often by integrating them into it.
Epistemic translation isn’t just about translating one vocabulary into another or merely sharing factual information. It’s about expanding the concept space of the target domain by integrating new conceptual insights and perspectives.
The world is complicated, and we are at any one time working with fairly simple models of reality. By analogy: when I look at a three-dimensional cube, I can only see part of the cube at any one time. By taking different perspectives on the same cube and putting them together - an exercise one might call “triangulating reality” - I can develop an increasingly accurate understanding of the cube. The box inversion hypothesis by Jan Kulveit is another, AI-alignment-specific example of what I’m thinking about.
I think something like this is true for understanding reality at large, be it orders of magnitude more difficult than the cube example suggests. Domain scanning is about seeking new perspectives on your object of inquiry, and epistemic translation is required for integrating these numerous perspectives with one another in an epistemically faithful manner.
In the case of translation between technical and non-technical fields - say, translating central notions of political philosophy into game-theoretic or CS language - the major obstacle to epistemic translation is formalization. A computer scientist might well be aware of, say, the depth of discourse on topics like justice or democracy. But that doesn’t yet mean they can integrate this knowledge into their own research or engineering. Formalization is central to creating useful disciplinary interfaces, and close to no resources are spent on systematically speeding up this process.
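As a toy illustration of what such formalization can look like (envy-freeness is a standard fair-division notion, used here only as an example, not as a claim about how justice should be formalized), one facet of “justice” becomes something a program can check:

```python
from typing import List

# Toy formalization of one facet of justice: an allocation is "envy-free" if no
# agent values another agent's bundle more than their own.
# valuations[i][j] = agent i's value for item j; allocation[i] = items agent i gets.
def is_envy_free(valuations: List[List[float]], allocation: List[List[int]]) -> bool:
    def bundle_value(agent: int, bundle: List[int]) -> float:
        return sum(valuations[agent][item] for item in bundle)
    n = len(valuations)
    return all(
        bundle_value(i, allocation[i]) >= bundle_value(i, allocation[j])
        for i in range(n) for j in range(n)
    )

# Two agents, three items: agent 0 receives item 0, agent 1 receives items 1 and 2.
print(is_envy_free([[5, 2, 2], [1, 3, 3]], [[0], [1, 2]]))  # True: neither agent envies the other
```

Once a notion is pinned down this way, it can plug directly into mechanism design or ML objectives; the hard (and contested) part is deciding which formalization faithfully captures the philosophical concept.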
Somewhere in between domain scanning and epistemic translation, we could talk about “prospecting”: providing epistemic updates on how valuable a certain source domain is likely to be. This involves some scanning and some translation work (hence “in between the two”), and would serve as a community mechanism for coordinating what the community pays attention to.
Context: (1) Motivations for fostering EA-relevant interdisciplinary research; (2) "domain scanning" and "epistemic translation" as a way of thinking about interdisciplinary research
List of fields/questions for interdisciplinary AI alignment research
The following list of fields and leading questions could be interesting for interdisciplinary AI alignment research. I started to compile this list to provide some anchorage for evaluating the value of interdisciplinary research for EA causes, specifically AI alignment.
Some comments on the list:
Some of these domains are likely already very much on the radar of some people; others are more speculative.
In some cases I have a decent idea of concrete lines of questioning that might be interesting; in other cases all I do is gesture broadly that “something here might be of interest”.
I don’t mean this list to be comprehensive or authoritative. On the contrary, this list is definitely skewed by domains I happened to have come across and found myself interested in.
While this list is specific to AI alignment (/safety/governance), I think the same rationale applies to other EA-relevant domains and I'd be excited for other people to compile similar lists relevant to their area of interest/expertise.
Very interested in hearing thoughts on the below!
Target domain: AI alignment/safety/governance
Evolutionary biology
Evolutionary biology seems to have a lot of potentially interesting things to say about AI alignment. Just a few examples include:
The relationship between environment, agent, and evolutionary paths (which e.g. relates to the role of training environments)
Niche construction as an angle on embedded agency
The nature of intelligence
Linguistics and Philosophy of language
Lots of things relevant to better understanding the nature and origin of (general) intelligence.
Sub-domains such as semiotics could, for example, have relevant insights on topics like delegation and interpretability.
Cognitive science and neuroscience
Examples include Minsky’s Society of Mind (“The power of intelligence stems from our vast diversity, not from any single, perfect principle”), Hawkins’s A Thousand Brains (the role of reference frames for general intelligence), Friston et al.’s predictive coding/predictive processing (in its most ambitious versions, a near-universal theory of all things cognition, perception, comprehension and agency), and many more.
Information theory
Information theory is hardly news to the AI alignment idea space. However, there might still be value on the table from deeper dives or more out-of-the-ordinary applications of its insights. One example might be this paper on The Information Theory of Individuality.
Cybernetics/Control Systems
Cybernetics seems straightforwardly relevant to AI alignment. Personally, I’d love to have a piece of writing synthesising the most exciting intellectual developments in cybernetics, done by someone with awareness of where the AI alignment field currently stands.
Complex systems studies
What does the study of complex systems have to say about robustness, interoperability, or emergent alignment? It also offers insights into, and methodology for approaching, self-organization and collective intelligence, which is particularly interesting in multi-multi scenarios.
Heterodox schools of economic thinking
These schools of thought try to reimagine the economy/capitalism and (political) organization, e.g. through decentralization and self-organization, by working on antitrust, or by trying to understand the potentially radical implications of digitalization for the fabric of the economy. Complexity economics, for example, can help us understand the out-of-equilibrium dynamics that shape much of our economy and lives.
History of political thought
The richness of the history of political thought is astonishing; the most obvious connections might be ideas related to social choice or principles of governance. (A dense while also high-quality overview is offered by the podcast series History of Ideas.) The crux in making the depth of political thought available and relevant to AI alignment is formalization, which seems extremely undersupplied in current academia, for very similar reasons to those I’ve argued above.
Management and organizational theory, institutional economics and institutional design
These fields talk, for example, about desiderata for institutions like robustness (e.g. here), or about how to understand and deal with institutional path-dependencies (e.g. here).