New & upvoted


Quick takes

Marcus Daniell appreciation note

@Marcus Daniell, cofounder of High Impact Athletes, came back from knee surgery and is donating half of his prize money this year. He projects raising $100,000. Through a partnership with Momentum, people can pledge to donate for each point he gets; he has raised $28,000 through this so far. It's cool to see this, and I'm wishing him luck for his final year of professional play!
harfe · 8h
FHI shut down yesterday: https://www.futureofhumanityinstitute.org/
Why are April Fools' jokes still on the front page? On April 1st, you expect to see April Fools' posts and know you have to be extra cautious when reading strange things online. However, April 1st was 13 days ago and there are still two April Fools' posts on the front page. I think they should be clearly labelled as April Fools' jokes so people can more easily differentiate EA weird stuff from EA weird stuff that's a joke. Sure, if you check the details you'll see that things don't add up, but we all know most people just read the title or the first few paragraphs.
I am not confident that another FTX-level crisis is less likely to happen, other than that we might all say "oh, this feels a bit like FTX".

Changes:
* Board swaps. Yeah, maybe good, though many of the people who left were very experienced. And it's not clear whether there are due diligence people (which seems to be what was missing).
* Orgs being spun out of EV and EV being shuttered. I mean, maybe good, though it feels like it's swung too far. Many mature orgs should run on their own, but small orgs do have many replicable features.
* More talking about honesty. Not really sure this was the problem. The issue wasn't the median EA, it was in the tails. Are the tails of EA more honest? Hard to say.
* We have now had a big crisis, so it's less costly to say "this might be like that big crisis". Though notably this might also be too cheap - we could flinch away from doing ambitious things.
* Large orgs seem slightly more beholden to comms/legal to avoid saying or doing the wrong thing.
* OpenPhil is hiring more internally.

Non-changes:
* Still very centralised. I'm pretty pro-elite, so I'm not sure this is a problem in and of itself, though I have come to think that elites in general are less competent than I thought before (see FTX and the OpenAI crisis).
* Little discussion of why or how the affiliation with SBF happened despite many well-connected EAs having a low opinion of him.
* Little discussion of what led us to ignore the base rate of scamminess in crypto and how we'll avoid that in future.
Could it be more important to improve human values than to make sure AI is aligned?

Consider the following (which is almost definitely oversimplified):

                             Aligned AI       Misaligned AI
Humanity: good values        Utopia           Extinction
Humanity: neutral values     Neutral world    Extinction
Humanity: bad values         Dystopia         Extinction

For clarity, let's assume dystopia is worse than extinction. This could be a scenario where factory farming expands to an incredibly large scale with the aid of AI, or a bad AI-powered regime takes over the world. Let's assume a neutral world is equivalent to extinction.

The table above shows that aligning AI can be good, bad, or neutral. The value of alignment depends precisely on humanity's values. Improving humanity's values, however, is always good.

The only clear case where aligning AI beats improving humanity's values is if there isn't scope to improve our values further. An ambiguous case is whenever humanity has positive values, in which case both improving values and aligning AI are good options and it isn't immediately clear to me which wins.

The key takeaway here is that improving values is robustly good whereas aligning AI isn't - alignment is bad if we have negative values. I would guess that we currently have pretty bad values, given how we treat non-human animals, and alignment is therefore arguably undesirable. In this simple model, improving values would become the overwhelmingly important mission. Or perhaps ensuring that powerful AI doesn't end up in the hands of bad actors becomes overwhelmingly important (again, rather than alignment).

This analysis doesn't consider the moral value of AI itself. It also assumes that misaligned AI necessarily leads to extinction, which may not be accurate (perhaps it can also lead to dystopian outcomes?). I doubt this is a novel argument, but what do y'all think?


Recent discussion

The people arguing against stopping (or pausing) either have long timelines or low p(doom).

The tl;dr is the title. Below I try to provide a succinct summary of why I think this is the case (read just the headings on the left for a shorter summary).

Timelines are short

The...

Continue reading
Greg_Colbourn · 2h
I see in your comment on that post, you say "human extinction would not necessarily be an existential catastrophe" and "So, if advanced AI, as the most powerful entity on Earth, were to cause human extinction, I guess existential risk would be negligible on priors?". To be clear: what I'm interested in here is human extinction (not any broader conception of "existential catastrophe"), and the bet is about that.


Agreed.

Greg_Colbourn · 2h
See my comment on that post for why I don't agree. I agree nuclear extinction risk is low (but probably not that low)[1]. ASI is really the only thing that is likely to kill every last human (and I think it is quite likely to do that, given it will be way more powerful than anything else[2]).

[1] To be clear, global catastrophic / civilisational collapse risk from nuclear is relatively high (these often get conflated with "extinction").

[2] Not only do I think it will kill every last human, I think it's quite likely it will wipe out all known carbon-based life.


Author: Leonard Dung

Abstract: Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically came without systematic arguments in support. This paper provides an argument that AI will lead to the permanent disempowerment...

Continue reading
Matthew_Barnett · 1h
I'm a bit surprised you haven't seen anyone make this argument before. To be clear, I wrote the comment last night on a mobile device, and it was intended to be a brief summary of my position, which perhaps explains why I didn't link to anything or elaborate on that specific question.

I'm not sure I want to outline my justifications for my view right now, but my general impression is that civilization has never had much central control over cultural values, so it's unsurprising if this situation persists into the future, including with AI. Even if we align AIs, cultural and evolutionary forces can nonetheless push our values far.

Does that brief explanation provide enough of a pointer to what I'm saying for you to be ~satisfied? I know I haven't said much here, but I kind of doubt my view on this issue is so rare that you've literally never seen someone present a case for it.
Ryan Greenblatt · 41m
Where the main counterargument is that now the groups in power can be immortal and digital minds will be possible. See also: AGI and Lock-in

I have some objections to the idea that groups will be "immortal" in the future, in the sense of never changing, dying, or rotting, and persisting over time in a roughly unchanged form, exerting consistent levels of power over a very long time period. To be clear, I do think AGI can make some forms of value lock-in more likely, but I want to distinguish a few different claims:

(1) Is a future value lock-in likely to occur at some point, especially not long after human labor has become ~obsolete?

(2) Is lock-in more likely if we perform, say, a century more of ... (read more)

Summary

  • In this post, I hope to inspire other Effective Altruists to focus more on donation and commiserate with those who have been disappointed in their ability to get an altruistic job.
  • First, I argue that the impact of having a job that helps others is complicated. In
...
Continue reading

Is it harder to find an EA job if you are from LATAM, considering there are more EA opportunities in the USA and Europe?

I'm starting my search as a Project Management Professional in EA Jobs. 

I'll give it a try!

David_Moss · 1h
Our 2022 survey offers further illustration of this. Only 10% of respondents have earning to give as their career plan. And that masks a stark divide: less than 6% of highly engaged EAs plan to pursue earning to give, compared to closer to 16% of less engaged EAs.
Julia_Wise · 3h
I admire your drive to help others! I do think early in my life I underweighted shopping around because I was so focused on frugality (and it's easy to be discouraged when job searches take a long time). Best wishes as you explore the options.
Ben Millwood commented on harfe's quick take 2h ago
harfe · 8h

FHI shut down yesterday: https://www.futureofhumanityinstitute.org/

Continue reading

more discussion at forum post

For the disagree voters (I didn't agreevote either way) -- perhaps a more neutral way to phrase this might be:

Oxford and/or its philosophy department apparently decided that continuing to be affiliated with FHI wasn't in its best interests. It seems this may have developed well before the Bostrom situation. Given that, and assuming EA may want to have orgs affiliated with other top universities, what lessons might be learned from this story? To the extent that keeping the university happy might limit the org's activities, when is accepting that compromise worth it?

I also didn't vote but would be very surprised if that particular paper - a policy proposal for a biosecurity institute in the context of a pandemic - was an example of the sort of thing Oxford would be concerned about affiliating with (I can imagine some academics being more sceptical of the FHI's other research topics). Social science faculty academics write papers making public policy recommendations on a routine basis, many of them far more controversial.

The postmortem doc says "several times we made serious missteps in our communications with other parts of the university because we misunderstood how the message would be received" which suggests it might be internal messaging that lost them friends and alienated people. It'd be interesting if there are any specific lessons to be learned, but it might well boil down to academics being rude to each other, and the FHI seems to want to emphasize it was more about academic politics than anything else.

How bad would it be to cause human extinction?

'If we do not soon destroy ourselves', write Carl Sagan and Richard Turco, 'but instead survive for a typical lifetime of a successful species, there will be humans for another 10 million years or so. Assuming that our lifespan...

Continue reading

If additional human lives have no value in themselves, that implies that the government would have more reason to take precautionary measures against a virus that would kill most of us than one that would kill all of us, even if the probabilities were equal.

Maybe I'm misunderstanding, but if

  • we totally discounted what happens to future/additional people (even stronger than no reason to create them), and only cared about present/necessary people, and
  • killing everyone/extinction means killing all present/necessary people (not extinction in the future) and no o
... (read more)
Matthew Rendall · 1h
Thanks! Perhaps I haven't grasped what you're saying. In my example, if the first virus mutates, it'll be the one that kills more people--17 billion. If the second virus mutates, the entire human population dies at once from the virus, so only 8 billion people die in toto.  On either wide or narrow person-affecting views, it seems like we have to say that the first outcome--seven billion deaths and then ten million deaths a year for the next millennium--is worse than the second (extinction). But is that plausible? Doesn't this example undermine person-affecting views of either kind?
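
(A quick check of the arithmetic behind the 17 billion figure, on my reading of the example: the first virus kills 7 of the 8 billion people alive and then 10 million people per year for the next 1,000 years.)

$$\underbrace{7 \times 10^{9}}_{\text{immediate deaths}} + \underbrace{10^{7}\,\text{/year} \times 1000\ \text{years}}_{=\,10^{10}\ \text{later deaths}} = 1.7 \times 10^{10} = 17\ \text{billion},$$

compared with $8 \times 10^{9} = 8$ billion deaths if the second virus kills everyone at once.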
Matthew Rendall · 1h
Actually, I guess that on a narrow person-affecting view, the first outcome would not be worse than the second, because plausibly a pandemic of this kind would affect the identities of subsequent generations. Assuming the lives of the people who died were still worth living, while the first virus would be worse for people--because it would kill ten billion more of them--it would not, for the most part, be worse for particular people. But that seems like the wrong kind of reason to conclude that A is better than B.

Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI’s closure. The abstract and an excerpt follow.


Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse

...
Continue reading
finm · 1h

I think it is worth appreciating the number and depth of insights that FHI can claim significant credit for. In no particular order:

... (read more)
Pablo · 3h
There is a list by Sandberg here. (The other items in that post may also be of interest.)
Richard Y Chappell · 4h
This is really sad news. I hope everyone working there has alternative employment opportunities (far from a given in academia!).

I was shocked to hear that the philosophy department imposed a freeze on fundraising in 2020. That sounds extremely unusual, and I hope we eventually learn more about the reasons behind this extraordinary institutional hostility. (Did the university shoot itself in the financial foot for reasons of "academic politics"?)

A minor note on the forward-looking advice: "short-term renewable contracts" can have their place, especially for trying out untested junior researchers. But you should be aware that it also filters out mid-career academics (especially those with family obligations) who could potentially bring a lot to a research institution, but would never leave a tenured position for a short-term one. Not everyone who is unwilling to gamble away their academic career is thereby a "careerist" in the derogatory sense.

Summary

  1. Many views, including even some person-affecting views, endorse the repugnant conclusion (and very repugnant conclusion) when set up as a choice between three options, with a benign addition option.
  2. Many consequentialist(-ish) views, including many person-affecting
...
Continue reading
Kaspar Brandner · 9h
Let's replace A with A' and A+ with A+'. A' has welfare level 4 instead of 100, and A+' has, for the original people, welfare level 200 instead of 101 (for a total of 299). According to your argument we should still rule out A+' because it's less fair than Z. Even though the original people get 196 points more welfare in A+' than in A'. So we end up with A' and a welfare level of 4. That seems highly incompatible with ethics being about affecting persons.
MichaelStJules · 2h
Dasgupta's view makes ethics about what seems unambiguously best first, and then about affecting persons second. It's still person-affecting, but less so than necessitarianism and presentism. It could be wrong about what's unambiguously best, though; e.g. maybe we should reject full aggregation and prioritize larger individual differences in welfare between outcomes, so that A+' (and maybe A+) looks better than Z.

Do you think we should be indifferent in the nonidentity problem if we're person-affecting? I.e. between creating a person with a great life and a different person with a marginally good life (and no other options). For example, we shouldn't care about the effects of climate change on future generations (maybe beyond a few generations ahead), because future people's identities will be different if we act differently. But then also see the last section of the post.

In the non-identity problem we have no alternative which doesn't affect a person, since we don't compare creating a person with not creating them; we compare creating a person with creating a different person. Not creating one isn't an option. So we have non-present but necessary persons, or rather: a necessary number of additional persons. Then even person-affecting views should arguably say that, if you create one anyway, a great one is better than a marginally good one.

But in the case of comparing A+ and Z (or variants) the additional people can't be treated as necessary because A is also an option.