New & upvoted


Posts tagged community

Quick takes

Why are April Fools jokes still on the front page? On April 1st, you expect to see April Fools' posts and know you have to be extra cautious when reading strange things online. However, April 1st was 13 days ago, and there are still two April Fools' posts on the front page. I think they should be clearly labelled as April Fools' jokes so people can more easily differentiate EA weird stuff from EA weird stuff that's a joke. Sure, if you check the details you'll see that things don't add up, but we all know most people just read the title or the first few paragraphs.
Could it be more important to improve human values than to make sure AI is aligned? Consider the following (which is almost definitely oversimplified):

                          Aligned AI       Misaligned AI
Humanity: good values     Utopia           Extinction
Humanity: neutral values  Neutral world    Extinction
Humanity: bad values      Dystopia         Extinction

For clarity, let's assume dystopia is worse than extinction. This could be a scenario where factory farming expands to an incredibly large scale with the aid of AI, or a bad AI-powered regime takes over the world. Let's also assume a neutral world is equivalent to extinction.

The table shows that aligning AI can be good, bad, or neutral: the value of alignment depends entirely on humanity's values. Improving humanity's values, however, is always good. The only clear case where aligning AI beats improving humanity's values is if there is no scope to improve our values further. An ambiguous case is when humanity already has positive values, in which case both improving values and aligning AI are good options, and it isn't immediately clear to me which wins.

The key takeaway is that improving values is robustly good, whereas aligning AI isn't: alignment is bad if we have negative values. I would guess that we currently have pretty bad values, given how we treat non-human animals, and alignment is therefore arguably undesirable. In this simple model, improving values would become the overwhelmingly important mission. Or perhaps ensuring that powerful AI doesn't end up in the hands of bad actors becomes overwhelmingly important (again, rather than alignment).

This analysis doesn't consider the moral value of AI itself. It also assumes that misaligned AI necessarily leads to extinction, which may not be accurate (perhaps it can also lead to dystopian outcomes?). I doubt this is a novel argument, but what do y'all think?
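One way to make the "robustly good" claim concrete is to put illustrative numbers on the matrix above and compare the two interventions. The sketch below is not from the quick take itself; the utilities are arbitrary assumptions for illustration only (extinction normalised to 0). Under these assumptions, moving values up a step never decreases utility, while aligning AI decreases it whenever values are bad.

```python
# Illustrative sketch only: arbitrary utilities assigned to the quick take's matrix.
outcomes = {
    # (values, aligned?) -> utility, with extinction normalised to 0
    ("good", True): 100,      # utopia
    ("good", False): 0,       # extinction
    ("neutral", True): 0,     # neutral world (assumed equivalent to extinction)
    ("neutral", False): 0,    # extinction
    ("bad", True): -100,      # dystopia (assumed worse than extinction)
    ("bad", False): 0,        # extinction
}

def effect_of_alignment(values: str) -> int:
    """Change in utility from aligning AI, holding humanity's values fixed."""
    return outcomes[(values, True)] - outcomes[(values, False)]

def effect_of_improving_values(values: str, aligned: bool) -> int:
    """Change in utility from improving values one step, holding alignment fixed."""
    ladder = ["bad", "neutral", "good"]
    i = ladder.index(values)
    if i == len(ladder) - 1:
        return 0  # no scope to improve further
    return outcomes[(ladder[i + 1], aligned)] - outcomes[(values, aligned)]

for v in ["good", "neutral", "bad"]:
    print(f"values={v:7s}  alignment effect: {effect_of_alignment(v):+d}")
for v in ["bad", "neutral"]:
    for a in (True, False):
        print(f"values={v:7s} aligned={a}  improve-values effect: "
              f"{effect_of_improving_values(v, a):+d}")
```

With these made-up numbers, the improve-values effect is never negative, while the alignment effect is +100, 0, or -100 depending on humanity's values, which is the asymmetry the quick take is pointing at.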
The TV show Loot, in Season 2 Episode 1, introduces an SBF-type character named Noah Hope DeVore, a billionaire wunderkind who invents "analytic altruism", which uses an algorithm to determine "the most statistically optimal ways" of saving lives and naturally comes up with malaria nets. However, Noah is later arrested by the FBI for wire fraud and various other financial offenses.
Many organisations I respect are very risk-averse when hiring, and for good reasons. Making a bad hiring decision is extremely costly, as it means running another hiring round, paying for work that isn't useful, and diverting organisational time and resources towards trouble-shooting and away from other projects. This leads many organisations to scale very slowly. However, there may be an imbalance between false positives (bad hires) and false negatives (passing over great candidates). In hiring, as in many other fields, reducing false positives often means increasing false negatives. Many successful people have stories of being passed over early in their careers. The costs of a bad hire are obvious, while the costs of passing over a great hire are counterfactual and never observed. I wonder whether, in my past hiring decisions, I've properly balanced the risk of rejecting a potentially great hire against the risk of making a bad hire. One reason to think we may be too risk-averse, in addition to the salience of the costs, is that the benefits of a great hire could grow to be very large, while the costs of a bad hire are somewhat bounded, since a bad hire can eventually be let go.
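To illustrate the asymmetry this quick take describes, here is a back-of-the-envelope sketch. The probabilities and dollar figures are my own assumptions, not the author's; the point is only that a bounded downside and a large upside can make a "risky" hire positive in expectation even when a bad outcome is more likely than a great one.

```python
# Hypothetical numbers for a borderline candidate (assumptions for illustration only).
p_great = 0.10                 # chance the candidate turns out great
p_bad = 0.30                   # chance the candidate turns out to be a bad hire
cost_bad_hire = 50_000         # bounded cost: rerun hiring round, wasted salary, management time ($)
benefit_great_hire = 500_000   # potentially much larger, compounding benefit ($)

expected_value = p_great * benefit_great_hire - p_bad * cost_bad_hire
print(f"Expected value of taking a chance: ${expected_value:,.0f}")
# With these numbers the gamble is worth +$35,000 in expectation,
# even though a bad outcome is three times as likely as a great one.
```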
From a utilitarian perspective, it would seem there are substantial benefits to accurate measures of welfare. I was listening to Adam Mastroianni discuss the history of trying to measure happiness and life satisfaction, and it was interesting to find a level of stability across the decades. Could it really be that increases in material wealth do not result in huge objective increases in happiness and satisfaction for humans? It would seem the efforts to increase GDP and improve standard of living beyond the basics may be misdirected. Furthermore, it seems it would be extremely helpful for policy creation to have an objective unit like a util. We could compare human and animal welfare directly, and genetically engineer animals to increase their utils. Even if such efforts were not super successful, it would seem very valuable merely to improve objective measures of wellbeing by, say, 10%.


Recent discussion

This is a Book Review & Summary of The Art of Gathering: How We Meet and Why It Matters by Priya Parker. 

Rating: 4/5

I've pulled the main insights and actionable recommendations from each chapter, so someone can orient themselves to the main upshots of the book quickly, and potentially identify which chapters they'd like to dig deeper into if they'd like to learn more but don't have the time to read the whole book. I hope this can be useful for EA group organizers, and plan to release a post soon applying these insights to EAGs. 

Review

Overall, I really liked it, mostly because it showed me that organizing is not fundamentally up to "luck" or things out of your hands, but is rather something that you can make go better as an organizer. While it’s weak on evidence (you kind of have to take Priya at her word for a lot of this), much of it resonated with my own experience of...

niplav commented on Peter Eckersley (1979-2022)

Security engineer, digital rights activist, AI safety and policy researcher. Beloved in these communities.

Eckersley is most famous as an advocate/developer at the intersection of tech and legal activism. His work on the Let's Encrypt free certificate authority, HTTPS Everywhere...


Due to the sudden work of unsung heroes, he was cryopreserved despite not having been signed up at the time of his deänimation.

With AI Impacts, we’re pleased to announce an essay competition on the automation of wisdom and philosophy. Submissions are due by July 14th. The first prize is $10,000, and there is a total of $25,000 in prizes available. 

Submit an entry

The full announcement text is reproduced here:

Background

AI is likely to automate more and more categories of thinking with time.

By default, the direction the world goes in will be a result of the choices people make, and these choices will be informed by the best thinking available to them. People systematically make better, wiser choices when they understand more about issues, and when they are advised by deep and wise thinking.

Advanced AI will reshape the world, and create many new situations with potentially high-stakes decisions for people to make. To what degree people will understand these situations well enough to make wise choices remains to...


Why are April Fools jokes still on the front page? On April 1st, you expect to see April Fools' posts and know you have to be extra cautious when reading strange things online. However, April 1st was 13 days ago and there are still two posts that are April Fools posts on...


Dissenting view: like everywhere else on the internet, when you encounter something really crazy you sometimes have to look at the publication date. I trust readers can do that.


Author: Leonard Dung

Abstract: Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically came without systematic arguments in support. This paper provides an argument that AI will lead to the permanent disempowerment...


I want to encourage more papers like this, and more efforts to lay out an entire argument for x-risk.

That being said, the arguments are fairly unconvincing. For example, the argument for premise 1 completely skips the step where you sketch out an actual path for AI to disempower humanity if we don't voluntarily give up. "AI will be very capable" is not the same thing as "AI will be capable of conquering all of humanity with 100% certainty"; you need a joining argument in the middle.

Matthew_Barnett
I read most of this paper, albeit somewhat quickly, and skipped a few sections. I appreciate how clear the writing is, and I want to encourage more AI risk proponents to write papers like this to explain their views. That said, I largely disagree with the conclusion and several lines of reasoning within it. Here are some of my thoughts (although these are not my only disagreements):

* I think the definition of "disempowerment" is vague in a way that fails to distinguish between e.g. (1) "less than 1% of world income goes to humans, but they have a high absolute standard of living and are generally treated well" vs. (2) "humans are in a state of perpetual impoverishment and oppression due to AIs and generally the future sucks for them".
  * These are distinct scenarios with very different implications (under my values) for whether what happened is bad or good.
  * I think (1) is OK, and I think it's more-or-less the default outcome from AI, whereas I think (2) would be a lot worse and I find it less likely.
  * By not distinguishing between these things, the paper allows for a motte-and-bailey in which they show that one (generic) range of outcomes could occur, and then imply that it is bad, even though both good and bad scenarios are consistent with the set of outcomes they've demonstrated.
* I think this quote is pretty confused and seems to rely partially on a misunderstanding of what people mean when they say that AGI cognition might be messy: "Second, even if human psychology is messy, this does not mean that an AGI’s psychology would be messy. It seems like current deep learning methodology embodies a distinction between final and instrumental goals. For instance, in standard versions of reinforcement learning, the model learns to optimize an externally specified reward function as best as possible. It seems like this reward function determines the model’s final goal. During training, the model learns to seek out things which are instrumentally relevant to this

Wouldn't it be more efficient to create new programs within existing charities rather than starting new charities?

Has someone written about this?

I've particularly noticed that there are many small charities working on animal advocacy.

(I can imagine both options have pros and cons, similar to startups vs big tech)

cc: @Joey


* What are the facts around Sam Bankman-Fried and FTX about which all parties agree?
* What was the nature of Will's relationship with SBF?
* What things, in retrospect, should've been red flags about Sam or FTX?
* Was Sam's personality problematic? Did he ever really believe in EA principles? Does he lack empathy? Or was he on the autism spectrum? Was he naive in his application of utilitarianism?
* Did EA intentionally install SBF as a spokesperson, or did he put himself in that position of his own accord?
* What lessons should EA leaders learn from this? What steps should be taken to prevent it from happening again?
* What should EA leadership look like moving forward?
* What are some of the dangers around AI that are not related to alignment?
* Should AI become the central (or even the sole) focus of the EA movement?

 

the Clearer Thinking podcast is aimed more at people in or related to EA, whereas

...