
Note: This is not a detailed account of the tripartite theory of knowledge or the Gettier cases, but rather a brief overview of both. My research on this topic is not exhaustive, so feel free to let me know if I have missed anything critical or misrepresented something.

The questions of how we know what we know, and how we deem what we know to be "knowledge", have formed a core part of epistemology - the philosophical branch concerned with the nature and origin of knowledge. More than that, these questions have found their way into several other disciplines and research fields, including effective altruism.

The core principles of EA involve making decisions and taking actions based on the best available evidence and reasoning in order to do the most good. This directly ties into epistemology and the questions of how we know what we know, because effective altruists need to ensure that their knowledge about what actions are most effective is well-founded, true, and justified. 

The tripartite theory of knowledge is traditionally traced back to the ancient Greek philosopher Plato, and is now a fundamental idea in philosophy. The theory sets out the conditions that must be met for something to be classified as "knowledge", and for someone to truly "know" something.

Belief - Truth - Justification

The three conditions of the tripartite theory of knowledge are as follows: belief, truth, and justification. 

  1. Belief: You must genuinely believe the proposition.
  2. Truth: The proposition must correspond to reality.
  3. Justification: There must be solid evidence or reasons supporting the belief.
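Purely as an illustration, the three conditions can be sketched as a simple conjunction of predicates. The `Proposition` class and field names below are my own invention for this post, not standard epistemological notation:

```python
from dataclasses import dataclass

@dataclass
class Proposition:
    believed: bool   # condition 1: the agent genuinely believes it
    true: bool       # condition 2: it corresponds to reality
    justified: bool  # condition 3: it rests on solid evidence or reasons

def is_knowledge_jtb(p: Proposition) -> bool:
    # The tripartite (JTB) account: knowledge requires all three conditions at once.
    return p.believed and p.true and p.justified

# A true belief without justification - e.g. a lucky guess - does not qualify.
lucky_guess = Proposition(believed=True, true=True, justified=False)
print(is_knowledge_jtb(lucky_guess))  # False
```

The point of the sketch is only that the conditions are jointly necessary: drop any one of the three, and the tripartite theory says you do not have knowledge.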

To illustrate how these conditions lead us to knowledge of something, I will use an example in the context of effective altruism. Suppose you believe that Charity A effectively reduces malaria in a specific region. To know this, the three criteria must be met.

  1. Belief: You must genuinely believe that Charity A effectively reduces malaria.
  2. Truth: This belief must correspond to reality - meaning that Charity A must actually be effective in reducing malaria.
  3. Justification: Your belief must be supported by solid evidence, such as data from trials showing a significant reduction in malaria cases due to the charity's intervention, independent evaluations, or testimony from health experts who can attest to the positive impact of the charity's work.

For example, you might come across a reputable report stating that Charity A has reduced malaria incidence by 50% in the region over the past year. You also see data from health organizations corroborating these findings. Moreover, personal accounts from those in the affected areas might confirm the positive changes. With this evidence, your belief is not only true but also justified, fulfilling the tripartite theory's criteria. Thus, you can confidently claim to know that Charity A is effectively reducing malaria in the region - which also ensures that your support and donations are well-placed. 

According to the tripartite theory of knowledge, false beliefs or justifications do not constitute knowledge. For example, suppose you believe that Charity A effectively reduces malaria based on a report you read. If the data in the report were inaccurate or fabricated, your justification for believing in the charity's effectiveness would be false, meaning it does not count as knowledge. Similarly, if Charity A's interventions turned out to be ineffective despite your belief, this belief would not constitute knowledge, as genuine knowledge cannot be based on falsehoods.

Limitations of the Tripartite Theory of Knowledge 

Gettier Cases

While the tripartite theory of knowledge was long widely accepted, the philosopher Edmund Gettier identified exceptions to the claim that a justified true belief always amounts to knowledge; these exceptions are known as Gettier cases. A common example is the stopped-clock scenario. Suppose you look at a clock that shows 3 PM and form the belief that it is 3 PM. If the clock has actually stopped but coincidentally shows the correct time, your belief is both true and justified, yet it seems wrong to say you "know" the time: the justification is flawed - it rests on a broken clock - even though it is not, strictly speaking, false.
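To make the tension explicit, here is a small sketch of the stopped-clock case in the same spirit as the theory's three conditions. The variable names are mine and purely illustrative; the point is that each condition evaluates to true, yet the verdict clashes with the intuition that you do not know the time:

```python
# Stopped-clock Gettier case: the clock stopped at 3 PM,
# and you happen to glance at it at exactly 3 PM.
believed = True    # you form the belief "it is 3 PM"
true = True        # by sheer coincidence, it really is 3 PM
justified = True   # reading a normally reliable clock is ordinarily good evidence

jtb_verdict = believed and true and justified
print(jtb_verdict)  # True: JTB classifies this as knowledge,
                    # yet intuitively you do not know the time
```

The gap Gettier exposes is that the justification, while not false, is only accidentally connected to the truth of the belief.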


In addition, there are further limitations to the tripartite theory of knowledge, a couple of which I have listed below.

  • Justification Adequacy - The theory does not specify what constitutes adequate justification, so whether something counts as sufficient evidence or reason for a belief can be quite subjective. As a result, the theory may not provide a universal standard for knowledge. 
  • Reliance on Belief - The theory assumes that belief is a necessary component of knowledge. However, some argue that certain types of knowledge (for instance, skills such as knowing how to ride a bike or cook a dish) may not require an explicit belief.

I do feel that further limitations can be identified from the theory, so feel free to add to this list.

