Quick takes

Cullen
I am not under any non-disparagement obligations to OpenAI. It is important to me that people know this, so that they can trust any future policy analysis or opinions I offer. I have no further comments at this time.
This is a cold take that's probably been said before, but I thought it bears repeating occasionally, if only for the reminder: the longtermist viewpoint has gotten a lot of criticism for prioritizing "vast hypothetical future populations" over the needs of "real people" alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it's flawed because it lacks the love or duty or "ethics of care" or concern for justice that leads people to alternatives like mutual aid and political activism.

My go-to reaction to this critique has become something like: "Well, you don't need to prioritize vast abstract future generations to care about pandemics or nuclear war; those are very real things that could, with non-trivial probability, face us in our lifetimes." I think this response has taken hold in general among people who talk about X-risk, and that probably makes sense for pragmatic reasons: it's a very good rebuttal to the "cold and heartless utilitarianism / Pascal's mugging" critique. But I think it unfortunately neglects the critical point that longtermism, when taken really seriously — at least the sort of longtermism that MacAskill writes about in WWOTF, or Joe Carlsmith writes about in his essays — is full of care and love and duty. Reading the thought experiment that opens the book, about living every human life in sequential order, reminded me of this.

I wish there were more people responding to the "longtermism is cold and heartless" critique by making the case that no, longtermism at face value is worth preserving because it's the polar opposite of heartless. Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but whom we'll never meet, is an extraordinary act of empathy and compassion — one that's way harder to access than the empathy and warmth we might feel for our neighbors by default. It's the ultimate act of care. And it's definitely concerned with justice.

(I mean, you can also find longtermism worthy because of something something math and cold utilitarianism. That's not out of the question. I just don't think it's the only way to reach that conclusion.)
[PHOTO] I sent 19 emails to politicians, had 4 meetings, and now I get emails like this. There is SO MUCH low-hanging fruit in just doing this for 30 minutes a day (I would do it, but my LTFF funding does not cover this). Someone should do this!
Remember: EA institutions actively push talented people into the companies making the world-changing tech that the public has said THEY DON'T WANT. This is where the next big EA PR crisis will come from (50%). Except this time it won't just be the tech bubble.
Yesterday Greg Sadler and I met with the President of the Australian Association of Voice Actors. Like us, they've been lobbying for more and better AI regulation from government. I was surprised how much overlap we had in concerns and potential solutions:

1. Transparency and explainability of AI model data use (concern)
2. Importance of interpretability (solution)
3. Mis/disinformation from deepfakes (concern)
4. Lack of liability for the creators of AI if any harms eventuate (concern + solution)
5. Unemployment without safety nets for Australians (concern)
6. Rate of capabilities development (concern)

They may even support the creation of an AI Safety Institute in Australia. Don't underestimate who could be allies moving forward!

Recent discussion

I. Introduction and a prima facie case

It seems to me that most (perhaps all) effective altruists believe that:

  1. The global economy’s current mode of allocating resources is suboptimal. (Otherwise, why would effective altruism be necessary?)
  2. Individuals and institutions can
...

Even more reason to think that transitioning to socialism is not tractable - some people will fight against it like hell!

mhendric
I am similarly unenthused about the weird geneticism. Insofar as somewhat more altruism in the economy is the aim, sure, why not! I'm not opposed to that, and you may think that e.g. giving pledges or Founders Pledge are already steps in that direction. But that seems different from what most people think of when you say socialism, which they associate with ownership of the means of production, or very heavy state interventionism and a planned economy. It feels a tiny bit motte-and-bailey-ish.

To give a bit of a hooray for the survey numbers: at the German unconference, I organized a fishbowl-style debate on economic systems. I was pretty much the only person defending a free market economy, with maybe 3-5 people silently supportive and a good 25 or so folks arguing for strong interventionism and socialism. I think this is pretty representative of the German EA community at least, so there may be country differences.
Ebenezer Dukakis
Thanks for the response, upvoted. OP framed socialism in terms of resource reallocation. ("The global economy's current mode of allocating resources is suboptimal" was a key point, which, yes, sounded like advocacy for a command economy.) I'm trying to push back on millenarian thinking that 'socialism' is a magic wand which will improve resource allocation. If your notion of 'socialism' is favorable tax treatment for worker-owned cooperatives or something, that could be a good thing if there's solid evidence that worker-owned cooperatives achieve better outcomes, but I doubt it would qualify as a top EA cause.

Here in EA, GiveDirectly (cash transfers for the poor) is considered a top EA cause. It seems fairly plausible to me that if the government cut a bunch of non-evidence-backed school and work programs and did targeted, temporary direct cash transfers instead, that would be an improvement.

I'm skimming the post you linked and it doesn't look especially persuasive. Inferring causation from correlation is notoriously difficult, and these relationships don't look particularly robust. (Interesting that r^2 = 0.29 appears to be the only correlation coefficient specified in the article -- that's not a strong association!)

As an American, I don't particularly want America to move in the direction of a Nordic-style social democracy, because Americans are already very well off. In 2023, the US had the world's second-highest median income adjusted for cost of living, right after Luxembourg. From a poverty-reduction perspective, the US government should be focused on effective foreign aid and facilitating immigration. Similarly, from a global poverty-reduction perspective, we should be focused on helping poor countries. If "socialism" tends to be good for rich countries but bad for poor countries, that suggests it is the wrong tool for reducing global poverty.
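A quick back-of-the-envelope on why that figure counts as weak (my own arithmetic, not a number from the linked article): the coefficient of determination is the square of the correlation coefficient, so

$$r = \sqrt{0.29} \approx 0.54$$

i.e. the fitted relationship accounts for roughly 29% of the variance in the outcome and leaves the other ~71% unexplained.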

Two years ago, we ran a survey for everyone interested in improving humanity's longterm prospects. The results of that survey have now been shared with over 150 organisations and individuals who have been hiring or looking for cofounders.

Today, we’re running a similar survey for everyone interested in working on reducing catastrophic risks from AI. We're focusing on AI risks because:

  • We've been getting lots of headhunting requests for roles in this space.
  • It's our current best guess at the world's most pressing problem.
  • Many people are motivated to reduce AI risks without buying into longtermism or effective altruism.

We’re interested in hearing from anyone who wants to contribute to safely navigating the transition to powerful AI systems — including via operations, governance, engineering, technical research, and field-building. This includes people already working at AI safety or EA organisations...


TL;DR

I searched for other lists of biosecurity newsletters specifically and didn't find one that suited my needs, so I made one! Please leave a comment with any other newsletters that I missed so that I can add them. I hope you find something useful in this list...


A new biosecurity-relevant newsletter (which Anemone and I put together) is GCBR Organization Updates. Every few months, we'll ask organizations doing impactful work to reduce GCBRs to share their current projects, recent publications, and any opportunities for collaboration.

 [memetic status: stating directly despite it being a clear consequence of core AI risk knowledge because many people have "but nature will survive us" antibodies to other classes of doom and misapply them here.]

Unfortunately, no.[1]

Technically, “Nature”, meaning the fundamental physical laws, will continue. However, people usually mean forests, oceans, fungi, bacteria, and generally biological life when they say “nature”, and those would not have much chance competing against a misaligned superintelligence for resources like sunlight and atoms, which are useful to both biological and artificial systems.

There’s a thought that comforts many people when they imagine humanity going extinct due to a nuclear catastrophe or runaway global warming: Once the mushroom clouds or CO2 levels have settled, nature will reclaim the cities. Maybe mankind...


Summary: This post argues that brain preservation via fluid preservation could potentially be a cost-effective method to save lives, meriting more consideration as an EA cause. I review the current technology, estimate its cost-effectiveness under various assumptions, and...

Frank_R
The difference is that if you are biologically dead, there is nothing you can do to prevent a malevolent actor from uploading your mind. If you are terminally ill and pessimistic about the future, you can at least choose cremation. I am not saying that there should be no funding for brain preservation, but personally I am not very enthusiastic, since there is the danger that we will not solve the alignment problem.

I'm not sure I understand the scenario you are discussing. In your scenario, it sounds like you're positing a malevolent non-aligned AI that would forcibly upload and create suffering copies of people. Obviously, this is an almost unfathomably horrific hypothetical scenario which we should all try to prevent if we can. One thing I don't understand about the scenario you are describing is why this forcible uploading would only happen to people who are legally dead and preserved at the time, but not anyone living at the time. 

(Apologies for errors or sloppiness in this post, it was written quickly and emotionally.)

Marisa committed suicide earlier this month. She suffered for years from a cruel mental illness, but that will not be her legacy – her legacy will be the enormous amount of suffering she...


I cried when I read this. What an absolutely miserable thing to have happened.

andrewpei
I am shocked and saddened. I did not know Marisa well, but we were in the same EA Anywhere discussion group for several months. As you said, she was quite funny, and I enjoyed talking with her and hearing her ideas.
Ozzie Gooen
I've known Marisa for a few years and had the privilege of briefly working with her. I was really impressed by her drive and excitement. She seemed deeply driven and was incredibly friendly to be around.  This will take me some time to process. I'm so sorry it ended like this.  She will be remembered.

A brief overview of recent OpenAI departures (Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Pavel Izmailov, William Saunders, Ryan Lowe, Cullen O'Keefe[1]). Will add other relevant media pieces below as I come across them.


Some quotes perhaps worth highlighting...

Larks
Kelsey suggests that OpenAI may be admitting defeat here: https://twitter.com/KelseyTuoc/status/1791691267941990764

What about for people who’ve already resigned?

jimrandomh
The language shown in this tweet says: It's a trick! Departing OpenAI employees are then offered a general release which meets the requirements of this section and also contains additional terms.

What a departing OpenAI employee needs to do is have their own lawyer draft, execute, and deliver a general release which meets the requirements set forth. Signing the separation agreement is a mistake, and rejecting the separation agreement without providing your own general release is a mistake.

I could be misunderstanding this; I'm not a lawyer, just a person reading carefully. And there's a lot more agreement text that I don't have screenshots of. Still, I think the practical upshot is that departing OpenAI employees may be being tricked, and this particular trick seems defeatable to me. Anyone leaving OpenAI really needs a good lawyer.

Introduction

When trying to persuade people that misaligned AGI is an X-risk, it’s important to actually explain how such an AGI could plausibly take control. There are generally two types of scenario laid out, depending on how powerful you think an early AGI would be. ...


For reference, here is a seemingly nice summary of Fearon's "Rationalist Explanations for War" by David Patel.