
A Case For Giving Money To Those Who Are Bad With Money

This is a short lesson, but one I return to again and again — a lesson I learned from Rutger Bregman in his book Utopia for Realists.

The Assumption

It’s easy to believe that poor people are poor because they make bad decisions.

Firstly, this assumption helps us make sense of why poverty exists, because we know from experience that bad decisions lead to bad outcomes. It also gets us off the hook for living happily alongside poverty; after all, we are where we are in life because we have made better decisions than those who are not doing as well…

“Poverty is a personality defect.” — Margaret Thatcher

Secondly, we see evidence of poor people making bad decisions. We see homeless people being unproductive, we see poor teenagers dropping out of high school at a higher rate, and we see crime and anti-social behavior concentrating in low socio-economic regions. However, as discussed in Saving Lives Reduces Over-Population, correlation isn’t causation, and sometimes a counter-intuitive cause is at play.

The (counter-intuitive) Reality

While bad decisions can lead to poverty, it turns out that people in poverty may also make worse decisions because they are poor.

This post continues at nonzerosum.games with interactive elements that can't be replicated here on the forum, please visit the site for the full experience.

Comments

FWIW the study on scarcity priming that you cite on your website has failed to replicate.

Thanks for your comment. I've been aware that this perspective is prevalent, but I haven't actually seen examples where replication of the same study has been attempted; I have only seen some that introduce other major factors that one would expect to influence results. The link you sent me criticises priming in a broad way, pointing to heuristics like the effect being too large to be believable, which seems a pretty subjective judgment.

The link specifically criticises Danny Kahneman for using priming in small studies to make large generalisations, and in Kahneman's response he makes a fairly good rebuttal. The one thing he concedes is the small size of the studies he used, which is not the case for the priming research cited in this post, which involved a series of studies with several hundred participants each.

I appreciate that I might be wrong to have confidence in these studies, given the widely held opinion that priming studies are not reliable, but I have yet to see specific attempts to replicate these particular studies that have failed.

The link I sent also discusses an article that meta-analyzed replications of studies using scarcity priming. The meta-analysis includes a failed replication of a key study from the Mani et al (2013) article you discuss in your post.

The Mani article itself has the hallmarks of questionable research practices. It's true that each experiment has about 100 participants, but given that these participants are split across 4 conditions, this is the bare minimum for the standard at that time (n = 20-30 per group). The main results also have p-values between .01 and .05, which is an indicator of p-hacking. And yes, the abnormally large effect sizes are relevant. An effect as large as is claimed by Mani et al (d = .88-.94) should be glaringly obvious. That's close to the effect size for the association between height and weight (r = .44 -> d = .98).
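As a side note, the r-to-d conversion mentioned above can be checked with the standard formula d = 2r / √(1 − r²); this sketch simply applies that formula to the cited height-weight correlation (the function name is mine, not from any particular library):

```python
import math

def r_to_d(r: float) -> float:
    """Convert a correlation coefficient r to Cohen's d,
    using the standard conversion d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r ** 2)

# Height-weight association cited above: r = .44
print(round(r_to_d(0.44), 2))  # → 0.98
```

So a correlation of .44 does indeed correspond to d ≈ .98, in the same range as the effects Mani et al report.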

And more generally, at this point the default view should be that priming studies are not credible. One shouldn't wait for a direct failed replication of any particular study; there's enough indirect evidence that the whole approach is beset by bad practices.

Thanks for providing that link Nathan, that does seem to significantly undermine the Mani et al study. While I agree with you that at this point priming studies should, by default, be seen as not credible, it helped a lot (in terms of convincing me personally) to see a study specifically designed to replicate the Mani et al study.

Do you find any evidence for the conclusions in the post credible? Or are you aware of more credible studies that would support the argument the post makes? You seem to know your stuff, so I don't want to waste your time, but I would value your input on whether the post's position is tenable at all given the available evidence.

I don't want to be posting nonsense, so depending on the evidence available I would either rewrite it with more reliable evidence, or take it down.

I'm familiar with psychology. But the causes and consequences of poverty are beyond my expertise.

In general, I think the case for alleviating poverty doesn't need to depend on what it does to people's cognitive abilities. Alleviating poverty is good because poverty sucks. People in poverty have worse medical care, are less safe, have less access to quality food, etc. If someone isn't moved by these things, then saying it also lowers IQ is kind of missing the point.

Another theme in your post is that those in poverty aren't to blame, since it was the poverty that caused them to make their bad decisions. I think a stronger case can be made by pointing to the fact that people don't choose where they're born. (And this fact doesn't depend on any dubious psychology studies.) For someone in Malawi, it will be hard to think about saving for retirement when you make $5/day.
