EARadio is a new podcast made up of material relevant to effective altruists. There are a lot of great talks online, such as those from the 2013 Effective Altruism Summit and Giving What We Can. But this content is spread out over multiple websites and often available only as video.

EARadio packages this material into a convenient podcast.


EARadio was created by Chris Calabro and Patrick Brinich-Langlois. If you have suggestions for materials to add, please let us know in the comments.


This is great. Are there any more talks on career choice that you could add?

80,000 Hours has only a few videos on its YouTube channel that are more than ten minutes long.

Is there anything you had in mind? I don't know of any other talks that are relevant and whose permissions would be easy to secure.

P.S. I just realized that the audio quality of Toby Ord's talk is very bad.

P.P.S. If anyone would like to take over this project, let me know.

If you'd like I can give a go at cleaning up the audio of Ord's talk.

And by "give a go" I mean run it through a few filters to see if it can go from "very bad" to "passable".

Sure, that would be very helpful. Boris did that for a couple of other files. I'll upload any cleaned-up audio sent my way. New audio is also welcome!

E-mail is probably the best way to get in touch: pbrinichlanglois@gmail.com

And sorry for the delayed reply! I didn't see your comment.

By turning these talks into a podcast, Chris and Patrick have created a valuable and economical resource: people can learn without a visual component, which makes EA content accessible to blind and low-vision listeners and to anyone on a low-speed internet connection.

"Error establishing a database connection" for http://earad.io/

Is there another website where I can access this content?

Great work Patrick and Chris!
