Holly_Elmore

5745 karma

Sequences

Improving rodent welfare by reducing rodenticide use
The Rodenticide Reduction Sequence

Comments (284)

It's not EA because it's for anyone who wants to pause AI for any reason, and it does not require sharing all the EA principles. It's just about pausing AI, and it's a coalition.

I personally still identify with EA principles and I came to my work at PauseAI through them, but I increasingly dislike the community and find it a drag on my work. That, combined with PauseAI being open to all comers, makes me want distance from the community myself and a healthy separation between PauseAI and EA. More and more I think the cost of remaining engaged with EA is too high because of how demanding EAs are and how little they contribute to what I'm doing.

I strongly relate to the philosophy here and I’m thrilled CEA is going to continue to be devoted to EA principles. EA’s principles will always be dear to me and a big part of my morality, but I’ve felt increasingly alienated from the community as it seemed to become only about technical AI Safety. I ended up going in my own direction (PauseAI is not an EA org) largely because the community was so reluctant to consider new approaches to AI Safety and Open Phil refused to fund it, a development that shocked and saddened me. I hope CEA will show strong leadership to keep the spirit of constant reevaluation of how we can do good alive. Imo having a preference as a community for only knowledge work and only associating with elite circles, as happened with technical AI Safety, is antithetical to EA scouty impact-focused thinking.

Huh, it shows me that it's available to anyone with the link. Here it is again in case that helps: https://docs.google.com/document/d/1HiYMG2oeZO8krcCMEHlfAtHGTWuVDjUZQaPU9HMqb_w/edit?usp=sharing

Haven't always loved the SummaryBot summaries but this one is great

Agree, and my experience was also free of racism, although I only went to one session (my debate with Brian Chau) and otherwise had free-mingling conversations. It's possible the racist people just didn't gravitate to me.

I would never have debated Brian Chau for a podcast or video, because I don't think it's worthwhile and don't want to platform his org and its views more broadly. But Manifest was a great space where people who are sympathetic to his views are actually open to hearing PauseAI's case in response. I think conferences like that, with a strong emphasis on free speech and free exchange, are valuable.

Thank you :) (I feel I should clarify that I'm lacto vegetarian now, at first as the result of a moral trade, but now that that's fallen apart I'm not sure if it's worth it to go back to full vegan.)

I agree! The focus on alignment is contingent on (now obsolete) historical thinking about the issue, and it's time to update. The alignment problem is harder than we thought, AGI is closer at hand than we thought, and no one was taking seriously how undemocratic pivotal act thinking was, even if it had been possible for MIRI to solve the alignment problem by themselves. Now that the problem is nearer, it's clearer to us and to everyone else, so it's more possible to get government solutions implemented that both prevent AI danger and give us more time to work on alignment (if that is possible), rather than pursuing alignment as the only way to head off AI danger.

But there are scarce resources, and at some point hard decisions really do have to be made. The condemnation of triage is not fair because it dodges the brute reality that you can't always find a magic third solution that's positive-sum. We have to work on all aspects of the problem: creating more options, creating more supply, and deciding how to prioritize when there isn't enough for everyone.

A friend advised me to provide some context: I had spent maybe 6 hours helping Mikhail with his moratorium-related project (a website that I was going over for clarity as a native English speaker) and perhaps an additional 8 hours over the last few months answering questions about the direction I had taken with the protests. Mikhail had a number of objections that required a lot of labor on my part to address to his satisfaction, and he usually did not accept my answers when I gave them but continued to argue with me, either directly or by insisting I didn't really understand his argument or was contradicting myself somehow.

After enough of this, I did not think it was worth my time to engage further (EDIT: on the general topic of this post, protest messaging for 2/12; we continued to be friends and talk about other things), and I told him a few weeks before the 2/12 protest that I had made my decisions and didn't need any more of his input. He may have had useful info that I didn't get out of him, and that's a pity, because there are a few things I would absolutely have done differently if I had realized at the time (such as removing language implying OpenAI was being hypocritical, which didn't apply once I realized we were only talking about the usage policies changing, but which didn't register to me as needing to be updated when I corrected the press release). Still, I would make the same call again about how to spend my time.

I will not be replying to replies on this comment.
