Ways the world is getting better
Click the banner to add a piece of good news

Quick takes

tobytrem
FAQ: “Ways the world is getting better” banner

The banner will only be visible on desktop. If you can't see it, try expanding your window. It'll be up for a week.

How do I use the banner?

  1. Click on an empty space to add an emoji,
  2. Choose your emoji,
  3. Write a one-sentence description of the good news you want to share,
  4. Link an article or forum post that gives more information.

If you'd like to delete your entry, click the cross that appears when you hover over it. It will be deleted for everyone.

What kind of stuff should I write?

Anything that qualifies as good news relevant to the world's most important problems. For example, Ben West's recent quick takes (1, 2, 3). Avoid posting partisan political news, but the passage of relevant bills and policies is on topic.

Will my entry be anonymous?

All submissions are displayed without your Forum name, so they are ~anonymous to users. However, usual moderation norms still apply (additionally, we may remove duplicates or borderline trollish submissions; this is an experiment, so we reserve the right to moderate heavily if necessary).

Ask any other questions you have in the comments below. Feel free to DM me with feedback or comments.
Commonly cited prevalence estimates are often wrong. Two examples: snakebites, and my experience reading the Long Covid literature. Both institutions like the WHO and the academic literature appear to be incentivized to exaggerate. I think the Global Burden of Disease might be a more reliable source, but I have not looked into it. I advise everyone using prevalence estimates to treat them with some skepticism and to look up the source.
EAGxUtrecht (July 5-7) is now inviting applicants from the UK (alongside other Western European regions that don't currently have an upcoming EAGx).[1] Apply here! Ticket discounts are available and we have limited travel support. Utrecht is very easy to get to: you can fly or take the Eurostar to Amsterdam, and then every 15 minutes there's a direct train to Utrecht, which takes only 35 minutes (and costs €10.20).

[1] Applicants from elsewhere are encouraged to apply, but the bar for getting in is much higher.
In my latest post I talked about whether unaligned AIs would produce more or less utilitarian value than aligned AIs. To be honest, I'm still quite confused about why many people seem to disagree with the view I expressed, and I'm interested in engaging more to get a better understanding of their perspective. At the least, I thought I'd write a bit more about my thoughts here, and clarify my own views on the matter, in case anyone is interested in trying to understand my perspective.

The core thesis I was trying to defend is the following view:

My view: It is likely that, by default, unaligned AIs—AIs that humans are likely to actually build if we do not completely solve key technical alignment problems—will produce comparable utilitarian value to humans, both directly (by being conscious themselves) and indirectly (via their impacts on the world). This is because unaligned AIs will likely both be conscious in a morally relevant sense and share human moral concepts, since they will be trained on human data.

Some people seem to merely disagree with my view that unaligned AIs are likely to be conscious in a morally relevant sense. And a few others have a semantic disagreement with me in which they define AI alignment in moral terms, rather than as the ability to make an AI share the preferences of the AI's operator.

But beyond these two objections, which I feel I understand fairly well, there's also significant disagreement about other questions. Based on my discussions, I've attempted to distill the following counterargument to my thesis, which I fully acknowledge does not capture everyone's views on this subject:

Perceived counter-argument: The vast majority of utilitarian value in the future will come from agents with explicitly utilitarian preferences, rather than those who incidentally achieve utilitarian objectives. At present, only a small proportion of humanity holds partly utilitarian views. However, as unaligned AIs will differ from humans across numerous dimensions, it is plausible that they will possess negligible utilitarian impulses, in stark contrast to humanity's modest (but non-negligible) utilitarian tendencies. As a result, it is plausible that almost all value would be lost, from a utilitarian perspective, if AIs were unaligned with human preferences.

Again, I'm not sure if this summary accurately represents what people believe. However, it's what some seem to be saying. I personally think this argument is weak. But I feel I've had trouble making my views very clear on this subject, so I thought I'd try one more time to explain where I'm coming from. Let me respond to the two main parts of the argument in some detail:

(i) "The vast majority of utilitarian value in the future will come from agents with explicitly utilitarian preferences, rather than those who incidentally achieve utilitarian objectives."

My response: I am skeptical of the notion that the bulk of future utilitarian value will originate from agents with explicitly utilitarian preferences. This clearly does not reflect our current world, where the primary sources of happiness and suffering are not the result of deliberate utilitarian planning. Moreover, I do not see compelling theoretical grounds to anticipate a major shift in this regard.

I think the intuition behind the argument here is something like this: In the future, it will become possible to create "hedonium"—matter that is optimized to generate the maximum amount of utility or well-being.
If hedonium can be created, it would likely be vastly more important than anything else in the universe in terms of its capacity to generate positive utilitarian value. The key assumption is that hedonium would primarily be created by agents who have at least some explicit utilitarian goals, even if those goals are fairly weak. Given the astronomical value that hedonium could potentially generate, even a tiny fraction of the universe's resources being dedicated to hedonium production could outweigh all other sources of happiness and suffering. Therefore, if unaligned AIs would be less likely to produce hedonium than aligned AIs (due to not having explicitly utilitarian goals), this would be a major reason to prefer aligned AI, even if unaligned AIs would otherwise generate comparable levels of value in all other respects.

If this is indeed the intuition driving the argument, I think it falls short for a straightforward reason. The creation of matter-optimized-for-happiness is more likely to be driven by the far more common motives of self-interest and concern for one's inner circle (friends, family, tribe, etc.) than by explicit utilitarian goals. If unaligned AIs are conscious, they would presumably have ample motives to optimize for positive states of consciousness, even if not for explicitly utilitarian reasons.

In other words, agents optimizing for their own happiness, or the happiness of those they care about, seem likely to be the primary force behind the creation of hedonium-like structures. They may not frame it in utilitarian terms, but they will still be striving to maximize happiness and well-being for themselves and those they care about. And it seems natural to assume that, with advanced technology, they would optimize pretty hard for their own happiness and well-being, just as a utilitarian might optimize hard for happiness when creating hedonium.

In contrast to the number of agents optimizing for their own happiness, the number of agents explicitly motivated by utilitarian concerns is likely to be much smaller. Yet both forms of happiness will presumably be heavily optimized. So even if explicit utilitarians are more likely to pursue hedonium per se, their impact would likely be dwarfed by the efforts of the much larger group of agents driven by more personal motives for happiness-optimization. Since both groups would be optimizing for happiness, the fact that hedonium is similarly optimized for happiness doesn't seem to provide much reason to think it would outweigh the utilitarian value of more mundane, and far more common, forms of utility-optimization.

To be clear, I think it's totally possible that there's something about this argument that I'm missing, and there are a lot of potential objections I'm skipping over. But on a basic level, I mostly just lack the intuition that what we should care about, from a utilitarian perspective, is the existence of explicit utilitarians in the future, for the aforementioned reasons. The fact that our current world isn't well described by the idea that what matters most is the number of explicit utilitarians strengthens my point here.

(ii) "At present, only a small proportion of humanity holds partly utilitarian views. However, as unaligned AIs will differ from humans across numerous dimensions, it is plausible that they will possess negligible utilitarian impulses, in stark contrast to humanity's modest (but non-negligible) utilitarian tendencies."
My response: Since only a small portion of humanity is explicitly utilitarian, the argument's own logic suggests that there is significant potential for AIs to be even more utilitarian than humans, given the relatively low bar set by humanity's limited utilitarian impulses. While I agree we shouldn't assume AIs will be more utilitarian than humans without specific reasons to believe so, it seems entirely plausible that factors like selection pressures for altruism could lead to this outcome. Indeed, commercial AIs seem to be selected to be nice and helpful to users, which (at least superficially) seems "more utilitarian" than the default (primarily selfish) impulses of most humans. The fact that humans are only slightly utilitarian should mean that even small forces could cause AIs to exceed human levels of utilitarianism.

Moreover, as I've said previously, it's probable that unaligned AIs will possess morally relevant consciousness, at least in part due to the sophistication of their cognitive processes. They are also likely to absorb and reflect human moral concepts as a result of being trained on human-generated data. Crucially, I expect these traits to emerge even if the AIs do not share human preferences.

To see where I'm coming from, consider how humans are routinely "misaligned" with each other, in the sense of not sharing each other's preferences, and yet still share moral concepts and a common culture. For example, an employee can share moral concepts with their employer while having very different consumption preferences. This is pretty much how I think we should primarily think about unaligned AIs that are trained on human data and shaped heavily by techniques like RLHF or DPO.

Given these considerations, I find it unlikely that unaligned AIs would completely lack any utilitarian impulses whatsoever. However, I do agree that even a small risk of this outcome is worth taking seriously. I'm simply skeptical that such low-probability scenarios should be the primary factor in assessing the value of AI alignment research.

Intuitively, I would expect the arguments for prioritizing alignment to be more clear-cut and compelling than "if we fail to align AIs, then there's a small chance that these unaligned AIs might have zero utilitarian value, so we should make sure AIs are aligned instead". If low-probability scenarios are the strongest considerations in favor of alignment, that seems to undermine the robustness of the case for prioritizing this work. While it's appropriate to consider even low-probability risks when the stakes are high, I'm doubtful that small probabilities should be the dominant consideration in this context. I think the core reasons for focusing on alignment should probably be more straightforward and less reliant on complicated chains of logic than this type of argument suggests.

In particular, as I've said before, I think it's quite reasonable to believe we should align AIs to humans for the sake of humans. In other words, it's perfectly reasonable to admit that solving AI alignment might be a great thing for ensuring human flourishing in particular. But if you're a utilitarian, and not particularly attached to human preferences per se (i.e., you're non-speciesist), I don't think you should be highly confident that an unaligned AI-driven future would be much worse than an aligned one, from that perspective.
I've recently updated our Announcement on the future of Wytham Abbey: since that announcement, we have decided that we will use some of the proceeds on Effective Ventures' general costs.


Recent discussion

Just as the 2022 crypto crash had many downstream effects for effective altruism, so could a future crash in AI stocks have several negative (though hopefully less severe) effects on AI safety.

Why might AI stocks crash?

The most obvious reason AI stocks might crash is that...


Executive summary: A potential crash in AI stocks, while not necessarily reflecting long-term AI progress, could have negative short-term effects on AI safety efforts through reduced funding, shifted public sentiment, and second-order impacts on the AI safety community.

Key points:

  1. AI stocks, like Nvidia, have a significant chance of crashing 50% or more in the coming years based on historical volatility and typical patterns with new technologies.
  2. A crash could occur if AI revenues fail to grow fast enough to meet market expectations, even if capabilities con
... (read more)
SummaryBot commented on Shareholder Activism

TL;DR: Shareholder activism has shown a great deal of promise, with inherent strengths such as an automatic mechanism for being heard and for getting demands met. This contrasts with other methods of advocacy, where these must be earned. It has already seen some...


Executive summary: Shareholder activism has shown promise as an effective advocacy tool for animal welfare causes, with some successes already, and opportunities exist to expand its use if done carefully in coordination with existing groups.

Key points:

  1. Shareholder activism leverages partial ownership of companies to achieve reforms, with increasing use and effectiveness in recent years.
  2. Key requirements include owning a certain amount of stock, dedicating staff time for advocacy, and having legal assistance to navigate procedures.
  3. Shareholder resolutions typi
... (read more)

Shrimp Welfare Project (SWP) produced this report because we believe it could have significant informational value to the movement, rather than because we anticipate SWP directly working on a shrimp paste intervention in the future. We think a new project focused on shrimp...


Executive summary: The shrimp paste industry, which relies heavily on wild-caught Acetes shrimps, raises significant animal welfare concerns that warrant further research and potential interventions to reduce suffering.

Key points:

  1. Acetes shrimps are likely the most utilized aquatic animal for food globally, with trillions harvested annually for shrimp paste production in Southeast Asia.
  2. Shrimp paste production involves sun-drying, grinding, and fermenting the shrimp, and is deeply rooted in the region's cultural heritage and cuisine.
  3. Small coastal communities
... (read more)

TL;DR: This is a written version of a talk given at EAG Bay Area 2023. It claims university EA community building can be incredibly impactful, but there are important pitfalls to avoid, such as being overly zealous, overly open, or overly exclusionary. These pitfalls...


Executive summary: University EA community building can be highly impactful, but important pitfalls like being overly zealous, open, or exclusionary can make groups less effective and even net negative.

Key points:

  1. University groups can help talented students have effective careers by shaping their priorities and connections at a pivotal time.
  2. Being overly zealous or salesy about EA ideas can put off skeptical truth-seekers and create an uncritical group.
  3. Being overly open and not prioritizing the most effective causes wastes limited organizer time and misrepr
... (read more)

Geography has much to do with your economic status, including access to knowledge and opportunities. I am from Zambia and was privileged to stumble upon the Effective Altruism community that aligns with SOME of my values and the kind of life I want to live. There are a ...


Update #2 

A lot is happening even when you think nothing is happening.

  • This is huge and maybe unrelated, but I am happy I was invited to take the Charity Entrepreneurship test task 1. I don't know if I will succeed, but it's been a very enlightening experience, and I got to explore this project as a long-term option. I have come a long way reading about EA, interacting with fewer than five so far, and talking about EA at my Toastmasters meetings. This is an important note because if you are new to the EA community or believe you are "a normie," there i
... (read more)

Open Philanthropy[1] recently shared a blog post with a list of some cool things accomplished in 2023 by grantees of their Global Health and Wellbeing (GHW) programs (including farm animal welfare). The post “aims to highlight just a few updates on what our...


Executive summary: Open Philanthropy highlights impactful projects from their 2023 Global Health and Wellbeing grantees, spanning areas such as air quality monitoring, vaccine development, pain research, and farm animal welfare.

Key points:

  1. Dr. Sachchida Tripathi deployed 1,400 low-cost air quality sensors in rural India to improve data and encourage stakeholder buy-in for interventions.
  2. The Strep A Vaccine Global Consortium (SAVAC) is accelerating the development and implementation of strep A vaccines, which could prevent over 500,000 deaths per year.
  3. Dr. All
... (read more)
ezrah commented on tobytrem's quick take


Loved it as well

Thank you!

Thanks Jakub!

Summary

  1. Guided self-help involves self-learning psychotherapy, plus regular, short contact with an advisor (e.g. Kaya Guides, AbleTo). Unguided self-help removes the advisor (e.g. Headspace, Waking Up, stress-relief apps).
  2. It's probably 7× (2.5–12×) more cost-effective
    1. It
...

Thank you so much! Your criticism has helped me identify a few mistakes, and I think it can get us closer to clarity. The main difference between our models is around who counts as a 'beneficiary', or what it means to 'recruit' someone.

The main thing I want to focus on is that you're predicting a cost per beneficiary that would be nearly 50% recruitment. I don't think that passes the smell test. The main difference is you're only counting the staff time for active participants, but even with modest dropout, we'd expect the vast majority of staff time to go to... (read more)
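
To illustrate the point about dropout and staff time, here is a minimal sketch, with all numbers and names hypothetical (not taken from either cost model): if staff time is spent on every enrollee but only completers are counted as beneficiaries, the staff cost per beneficiary is inflated by the inverse of the completion rate.

```python
# Toy illustration (hypothetical numbers): counting staff time only for
# active participants understates the staff cost attributable to each
# completed beneficiary once dropouts are accounted for.

def staff_cost_per_completer(enrolled: int, completion_rate: float,
                             staff_hours_per_enrollee: float,
                             hourly_cost_usd: float) -> float:
    """Total staff cost across all enrollees, spread over completers only."""
    completers = enrolled * completion_rate
    total_staff_cost = enrolled * staff_hours_per_enrollee * hourly_cost_usd
    return total_staff_cost / completers

# Hypothetical: 1,000 enrollees, 40% completion, 2 staff-hours each, $5/hour
print(round(staff_cost_per_completer(1000, 0.40, 2.0, 5.0), 2))  # 25.0 USD per completer
```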

This announcement was written by Toby Tremlett, but don’t worry, I won’t answer the questions for Lewis.

Lewis Bollard, Program Director of Farm Animal Welfare at Open Philanthropy, will be holding an AMA on Wednesday 8th of May. Put all your questions for him on this thread...


Hello @LewisBollard and @MichaelStJules, thank you for your replies. Some answers to your considerations: 

  1. If replacement by aquaculture is the logic, you could make the same argument against fighting land animal production, since that too would concentrate consumption on aquatic animals, as people do not even know whether the fish they eat is caught or farmed. Certainly this would mean more lives killed or in suffering, in the same way? Shall we stop talking about transitioning away from land animals? Certainly not. 
  2. You mention the unlikeliness of promoting a ban on fishing. Although
... (read more)
Vasco Grilo
Thanks, Lewis! I gave a bad example, because 100 M$ is a significant fraction of the amount granted in farm animal welfare over the number of years respecting the budget allocation. I also assumed an elasticity of 1, but I can see something like 0.5 would be more reasonable. So my corrected statement would be something like: a new animal welfare donor granting 10 M$ in a similar way to Open Phil (i.e. not just an increase of 10 M$ in funding, which may be poorly allocated) would decrease Open Phil funding in expectation by 5 M$. However, I see your replies to points 1 to 3 would also apply, such that the elasticity may be closer to 1, and therefore one would not need to worry about Open Phil decreasing funding to animal welfare.
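
To make the arithmetic in the comment above explicit, here is a minimal sketch, assuming a simple linear crowding-out model in which the expected displacement equals the elasticity times the new grant (the function name and structure are illustrative, not from the original comment; the figures are the comment's own):

```python
# Minimal sketch of the crowding-out arithmetic, assuming a linear model:
# expected decrease in Open Phil funding = elasticity * new donor's grant.

def expected_displacement_musd(new_grant_musd: float, elasticity: float) -> float:
    """Expected reduction in Open Phil funding (in M$) for a given new grant."""
    return elasticity * new_grant_musd

# Figures from the comment: a 10 M$ grant and an elasticity of 0.5
print(expected_displacement_musd(10, 0.5))  # 5.0 M$ expected decrease
```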
YvesBon
I have in mind several different examples of cultural strategies that are well known in France, but probably less so (or not at all) in the US.

- One very effective cultural strategy is that of Paris Animaux Zoopolis / Projet Animaux Zoopolis (https://zoopolis.fr), which deals with wild animals (not RWAS) and liminal animals, but also recreational fishing and farmed fish for restocking rivers, and which, by changing the public's image of animals (e.g. rats), undoubtedly has a general cultural impact that changes the public's view of animals. PAZ uses the cultural (media) impact of its battles to put pressure on political figures (mayors, MPs) and achieve greater cultural effectiveness or even new laws (its other objective). I've written a description of the work of this association, and how it uses cultural struggle very effectively to bring about concrete, sometimes legislative, changes: https://docs.google.com/document/d/1Cj2w9xd9vNjNBGuTpe_WNjIi816E_2roIx_FnK6cvug/edit

- A cultural strategy that would be more effective if it had a bit more funding: the organisation of the World Days for the End of Fishing (and Fish Farming: https://end-of-fishing.org) and for the End of Speciesism (https://end-of-speciesism.org), in which about a hundred organisations from all five continents participate each year (Africa is still poorly represented), and whose aim is to penetrate the culture of the animal advocacy movement by proposing that it take part in these World Days and that, once a year (while waiting for something better!), it adopt a discourse centred either on the denunciation of speciesism or on the question of aquatic animals (fish, shrimps), elements that the movement hardly takes into account spontaneously. The strategy is cost-effective (one full-time staff member can reach, twice a year, hundreds of organisations, some of which will then carry out campaigns), but suffers from its limitations: one full-time staff member can't organise each year more than two da