Monday, 22 April 2024



Quick takes

CEA is hiring someone to lead the EA Global program. CEA's three flagship EAG conferences facilitate tens of thousands of highly impactful connections each year, helping people build professional relationships, apply for jobs, and make other critical career decisions. The role comes with a large amount of autonomy and plays a central part in shaping a key piece of the effective altruism community's landscape. See more details and apply here!
Quote from VC Josh Wolfe:

> Biology. We will see an AWS moment where instead of you having to be a biotech firm that opens your own wet lab or moves into Alexandria Real Estate, which, you know, specializes in hosting biotech companies in all these different regions proximate to academic research centers, you will be able to just take your experiment and upload it to the cloud, where there are cloud-based robotic labs. We funded some of these. There's one company called Stratios.
>
> There's a ton that are gonna come on the wave, and this is exciting because you can be a scientist on the beach in the Bahamas, pull up your iPad, and run an experiment. The robots are performing 90% of the activity: pouring something from one beaker into another, running a centrifuge, and then handling the data that comes off of that.
>
> And this is the really cool part. Then the robot and the machines will actually say to you, "Hey, do you want to run this experiment but change these 4 parameters or these variables?" And you just click a button "yes," as though it's reverse prompting you, and then you run another experiment. So the implication here is that the boost in productivity for science, for the generation of truth, of new information, of new knowledge: that to me is the most exciting thing. And the companies that capture that, forget about the societal dividend, I think are gonna make a lot of money.

https://overcast.fm/+5AWO95pnw/46:15
I noticed that many people write a lot, not only on forums but also on personal blogs and Substack. This is sad: competent and passionate people are writing in places that get very few views. I am one of those people too. But honestly, magazines and articles are stressful and difficult, and forums are so huge that, even if they have a messaging function, it is hard to reach a transparent state where each person can fully recognize their own epistemic status. I'm interested in a collaborative blog, similar to the early Overcoming Bias. I believe that many bloggers and writers need help and that we can help each other. Is there anyone who wants to join me?
Has anyone seen an analysis that takes seriously the idea that people should eat some fruits, vegetables, and legumes over others based on how much animal suffering each causes? E.g., don't eat fruit X, eat fruit Y instead, because X is harvested in way Z, which kills more [insert plausibly sentient creature].
The catchphrase I walk around with in my head regarding the optimal strategy for AI safety is something like: creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (*we already have AGI). I thought it might be useful to spell that out.