
Vaipan

145 karma · Joined Dec 2022 · Working (0-5 years)

Participation (5)

  • Completed the In-Depth EA Virtual Program
  • Attended an EA Global conference
  • Attended an EAGx conference
  • Attended more than three meetings with a local EA group
  • Received career coaching from 80,000 Hours

Comments (53)

The board did great; I'm very happy we had Tasha and Helen on board to make AI safety concerns prevail.

What I've been saying from the start is that this opinion isn't what I've seen in Twitter threads within the EA/rationalist community (I don't give much credence to tweets, but I can't deny the role they play in the AI safety cultural framework), or even on the EA Forum, Reddit, etc. Quite the opposite, actually: people were advocating for Altman's return and heavily criticizing the board for its decision (I don't agree with the shadiness surrounding the board's decision, but I nevertheless think it was a good one).

Don't trust loose-cannon individuals? Don't revere a single individual and trust him with deciding the fate of such an important org?

It's what I've seen. Happy to be wrong. It's an impression--I didn't record in a notebook every time someone supported Altman, but I've read it quite a lot; just like you, I can't prove it.

I'm happy to be wrong--not sure downvoting me to hell will make the threats mentioned in my quick take go away, though.

But absolutely--and yet a big part of the EA community seems to be pro-Altman! That was my point; I might not have been clear enough. Thanks for calling attention to this.

Reading MacAskill's AMA from four years ago about what would kill EA, I can't help but find his predictions chillingly realistic!

  1. The brand or culture becomes regarded as toxic, and that severely hampers long-run growth. (Think: New Atheism) = The OpenAI reshuffling and the general focus on AI safety have increased mainstream public wariness of EA
  2. A PR disaster, esp among some of the leadership. (Think: New Atheism and Elevatorgate) = The SBF debacle!
  3. Fizzle - it just ekes along, but doesn’t grow very much, loses momentum and goes out of fashion = This one hasn't happened yet, but it's the obvious structural risk and could still occur

When will we learn? I feel we haven't taken the lessons from SBF seriously, given what happened at OpenAI and the split in the community over support for Altman and his crazy projects. Also, as a community builder who talks to a lot of people and does outreach, I hear a lot of harsh criticism of EA ('self-obsessed tech bros wasting money'), and while it's easy to think these people speak out of ignorance, ignoring the criticism won't make it go away.

I would love to see more worry and more action around this. 

Interesting! This is very similar reasoning to what CE suggested from the start; nice to see it moving forward and receiving more financial support.

Your insights have been incredibly valuable. I'd like to share a few thoughts that might offer a balanced perspective going forward.

It's worth approaching the call for increased funding critically. While animal welfare and global health organizations might express similar needs, the current emphasis on AI risks often takes center stage. There's a clear desire for more support within these organizations, but it's important for OpenPhil and private donors to assess these requests thoughtfully to ensure they are genuinely justified.

The observation that AI safety professionals anticipate more attention within Effective Altruism for AI safety than for AI governance confirms a suspicion I've had. There seems to be a tendency among AI safety experts to prioritize their field above others, urging that resources be redirected solely to AI safety. It's crucial to remain cautious about such suggestions. Given the current landscape in AI safety--characterized by disagreements among professionals and limited demonstrable impact--pursuing such a high-risk strategy might not be the most prudent choice.

When I've raised with AI safety experts the possibility that five years of significant investment in the wrong direction could yield minimal progress, their answer often revolves around the need to explore diverse approaches. However, that stance seems to diverge considerably from the principles embraced within Effective Altruism. I can understand why a community builder might feel uneasy about a strategy that, after five years of intense investment, offers little tangible progress and potentially detracts from other pressing causes.

This phrase is highly sexist and doesn't mean anything, especially since the demographics have barely changed (from 26% to 29% women--I wouldn't call that a shift in demographics). And what is it even supposed to mean, that women cannot use strong quantitative evidence? I don't need to say how ridiculous that is.

I don't see the point of this text. It doesn't touch on anything specific, remaining very vague about what the 'old values' are. The point about charities is also surprising given OpenPhil's switch to GCR, which means funding fewer and fewer neartermist charities (human-focused ones, at least; animal charities might get more funding given the current call for that).

Not only is it killing--if the fetus is sentient, it's likely quite painful as well.

So what? Do we forbid abortion and condemn women to bear these children? Or should we instead talk about policies to ensure we no longer need abortions--that is, making contraceptives widely available and free, and educating men and women from the youngest age about the need for effective contraception?

You talk about the risk of conception being known--men know, but some nevertheless pay very little attention to the consequences. So should we find binding ways to force men to care?

I hope this conversation sounds as interesting as regulating women's bodies in the first place, because it's a conversation we must have if we start talking about removing women's ability to choose.
