All of Evan_Gaensbauer's Comments + Replies

I'm the admin of that Facebook group. Do you have any suggestions for what I/we could do to help it serve its purpose of helping EAs better?

2
Arepo
1mo
I think there's some kind of network effect? It's more attractive to post there the more people post there. Such things are hard to get going, and maybe need constant input. Maybe you could ask someone on the EAAnywhere channel Jeroen mentioned to pin a comment linking to it/add it to the description.

When I first checked this post, it had 0 karma but 4 votes, meaning about half of those who voted on it downvoted it. I could understand why, except that nobody explained their reasoning.

To explain it myself: it's typically preferred that introductory posts like this be submitted as personal posts, or as comments in an open thread, rather than as frontpage posts. Especially in light of how eagerly you seek to contribute to the community, it was rude that nobody who disliked your post bothered taking a moment to explain why. In light of that, I've strongly upvote... (read more)

Thanks for the consideration, though I knew you weren't referring to me. What I meant is that, while lurkers or bad-faith actors may in theory have motives including profiteering, in practice there isn't enough incentive to put that much effort into profiteering that way.
I'm not aware of there being enough money in that kind of task for anyone to bother with the effort, compared to other ways they could make money.

Those most motivated to weaponize damaging info on EA have proven themselves savvy enough to acquire that kind of info without bothering wi... (read more)

I'm considering writing a reply to one or more of Bordelon's reports. Aiding others who might want to do so is one of the main reasons I shared the document. Given my understanding that POLITICO is widely read by policymakers in DC, another reason I shared it is so that more EAs are at least aware of how they're being perceived in DC, for better or worse.

If I wind up writing a response, I'm not sure where I might publish it, though the EA Forum would likely be one platform. Other than EAs, it could serve as a resource to be shared with those outside of EA.

I'm not aware of any reason to suspect any such person might be profiteering. By itself, that seems unlikely to be lucrative enough to justify the effort.

3
trevor1
2mo
Oh, sorry, by profiteers I was referring to people like forum lurkers and hostile open source researchers, not you at all.  My thinking was that this plan works fine with or without funding so long as someone (e.g. you) coordinates it, but it can't be open-source on EAforum or Lesswrong because the bad guys (not journalists, the other bad guys) would get too much information out of it.

Summary: As EA remains a poorly understood movement in DC, and POLITICO is a publication that may be widely read by policymakers, it's worth those in EA and AI safety being aware of how the perception of their efforts is being shaped, whether for better or worse. Facilitating that is one of the main reasons I shared this document. Journalists who spin conspiracy theories about EA tend to do so regardless of content like this on the EA Forum. Those journalists who would bother to be at all accurate will probably be apt enough to check this comment for... (read more)

all I know about Politico is that a while ago they published an article on EA that seemed to be written in pretty bad faith.

Do you remember whether:

  1. that article was published in the last 6 months, or before that?
  2. the article covered an area EA has influence over, other than AI safety or biosecurity?

If it was published in the last 6 months, it's probably one of the ones I've listed. If it was published before then, it may have been one about the connections between Sam Bankman-Fried and FTX, and EA. If it was something else, it's one of their articles that most EAs wouldn't be aware of.

I'm sorry to hear that joining Alameda under the impression it would be the opposite of what it became, as happened to so many others, is the reason you quit the company you had founded. I didn't know you stopped working at the company you founded to work at Alameda.

At the time, I thought you had maybe sold your stake in it or something, so you'd have a lot of cash to donate while moving on to an opportunity to do way more good in some other job, on the advice of 80k or whoever. I thought it seemed strange you had left the company you founded then, gi... (read more)

4
Ben_West
2mo
Thanks Evan, I appreciate the kind words! And yeah, in retrospect staying at HeF would almost certainly have been more valuable, but oh well...

This was an acerbic and bitter comment I made as a reference to the fake MIRI strategy update in 2022 from Eliezer, the notorious "Death with Dignity" post. I've thought about this for a few days and I'm sorry I made that nasty comment.

I was considering deleting or retracting it, though I've decided against that. The fact my comment has a significantly net negative karma score seems like punishment enough. Retracting the comment now probably wouldn't change that anyway.

I've decided against deleting or retracting this comment because its reception seems like a u... (read more)

3
Malo
1mo
FWIW, I found this last bit confusing. In my experience chatting with folk, regardless of how much they agree with or like MIRI, they usually think MIRI is quite candid and honest in its communication. (TBC, I do think the "Death with Dignity" post was needlessly confusing, but that's not the same thing as dishonest.)

I appreciate the pivot to a better-devised and merely pessimistic strategy on MIRI's part, as opposed to a deceptively dignified and misrepresentative resignation to death.

Every aspect of that summary of how MIRI's strategy has shifted seems misleading or inaccurate to me.

Welcome to the EA Forum! Thanks for sharing!

I've known Kat Woods for as long as Eric Chisholm has. I first met Eric several years before either of us first got involved in the EA or rationality communities. I had a phone call with him a few hours ago letting him know that this screencap was up on this forum. He was displeased you didn't let him know yourself that you started this thread.

He is extremely busy for the rest of the month. He isn't on the EA Forum either. Otherwise, I don't speak for Eric. I've also made my own reply in the comment thread Eric started on Eliezer's Facebook post. I'm assum... (read more)

I just posted on the Facebook wall of another effective altruist:

 Hey, I really appreciate everything you do for the effective altruism community! Happy birthday! 

We would all greatly benefit from expressing our gratitude like this to each other more often.

I have a half-finished draft post about how effective altruists shouldn't be so hostile to newcomers to EA from outside the English-speaking world (i.e., primarily the United States and Commonwealth countries). In addition to English not being their first language, especially for younger people or students who don't have as much experience, there are the problems of mastering the technical language of a particular field, as well as the jargon unique to EA. That can be hard for even many native English speakers.

LessWrong and the rationality community are dis... (read more)

I read almost all of the comments on the original EA Forum post linking to the Time article in question. If I recall correctly, Will made a quick comment that he would respond to these kinds of details when he was at liberty to do so. (Edit: he made that point even more clearly in this shortform post he wrote a few months ago. https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=ACDPftuESqkJP9RxP)

I assume he will address these concerns you've mentioned here at the same time he provides a fuller retrospective on the FTX collapse and its fallout.

Upvoted. Thanks for clarifying. The conclusion to your above post was ambiguous to me, though I now understand.

The rest of us can help by telling others that Will MacAskill is seeking to divest himself of this reputation, whenever we see or hear someone talking about him as if he still wants to be that person (not that he ever did, as evidenced by his statement above, a sentiment I've seen him express in years past).

Please send me links to posts with those arguments you've made, as I've not read them, though my guess would be that you haven't convinced anyone because some of the greatest successes in EA started out so small. I remember the same kind of skepticism being widely expressed about some projects like that.

Rethink Priorities comes to mind as one major example. The best example is Charity Entrepreneurship. It was not only one of those projects whose potential scalability was doubted; it keeps incubating successful non-profit EA startups across almost every EA-affiliated cause. CE's cumulative track record might be the best empirical argument against the broad applicability of your position here to the EA movement.

4
Elizabeth
8mo
Your comment makes the most sense to me if you misread my post and are responding to exactly the opposite of my position, but maybe I'm the one misreading you.

One way my perspective has changed on this during the last few years is that I now advise others not to give much weight to a single point of feedback. Especially for those who've told me only one or two people have discouraged them from be(com)ing a researcher, I tell them not to stop trying in spite of that, even when the person giving the discouraging feedback is in a position of relative power or prestige.

The last year seems to have proven that the power or prestige someone has gained in EA is a poor proxy for how much weight their judgment should be given ... (read more)

Okay, that's awesome. I appreciate it. I'd like to see more inspirational or personal posts like this on the EA Forum in the future, actually, so this kind of post personally speaks to me as well.

For years now, much ink has been spilled about the promise and peril of the portents, for effective altruism, of dank memes. Many have singled me out as the person best suited to speak to this controversy. I've heard, listened, and taken all such sentiments to heart. This is the year I've opted to finally complete a full-spectrum analysis of the role of dank memes as a primary form of outreach and community-building.

This won't be a set of shitposts on other social media websites. This will be a sober evaluation of dank EA memes, composed of at least one... (read more)

I don't understand this post. It seems, based on the title, that these are individuals who may have died of covid or related complications, but that's not clarified. Most of the people listed were ageing, so it's easy to imagine they may have incidentally died during the pandemic for other reasons.

Most of these people are ones effective altruists would find inspiring, though a few of them appear to just be your favourite musicians. I'm guessing overall they're people you personally find inspiring who died during the pandemic, for r... (read more)

4
Gavin
8mo
Yes, they all died of or with Covid; yes, you guess right that they inspire me. Besides my wanting to honour them, the point of the post was to give a sense of vertigo and concreteness to the pandemic then ending. At the time of writing, a good guess for the total excess deaths was 18 million - a million people for each of those named here. The only way to begin to feel anything appropriate about this fact is to use exemplars.

In hindsight, the number of weirdness points we have can be increased. This is especially true if some of the supporters of causes at one point considered weird later become very wealthy and patronize that niche, unusual cause to the tune of tens of millions of dollars. 

On the other hand, as much as the pool of weirdness points is theoretically unlimited in size, it's still hard in practice to increase the number of available weirdness points at an arbitrary rate. It's still very possible to spend one's weirdness points too quickly, and hurt one's reputation in the process, so the reservoir of weirdness points should still be drawn on wisely.

Several people we thought deeply shared our values have been charged with conducting one of the biggest financial frauds in history (one of whom has pled guilty).


Update July 2023

As of now, two other former executives at FTX and/or Alameda Research have pled guilty. The three who have pled guilty are the three former execs known as the most complicit in SBF's alleged crimes. That the three of them have pled guilty while SBF is still denying all charges seems like it will be one of the most interesting parts of his upcoming trial.

In terms of Elon Musk specifically, I feel like it affirms what most of us already thought of his relationship with AI safety (AIS). Even among billionaire technologists conscious of AIS who achieved fame and fortune in Silicon Valley, Musk is an ambitious and exceptionally impactful personality. This of course extends to all his business ventures, philanthropy and political influence.

Regardless of whether it's ultimately positive or negative, I expect the impact of xAI, including for AIS, will be significant. What the quality of the impact will be ... (read more)

I haven't watched a recording of the debate yet, though I intend to, and I feel like I'm familiar enough with the arguments on both sides that I may not learn much. This review was informative. It helped me notice that the greatest learning value from the debate may be what it reveals about the nature of public dialogue on, and understanding of, AGI risk.

I agree with all the lessons you've drawn from this debate and laid out at the end of this post. I've got other reactions lengthy enough I may make them into their own top-level posts, though here's some quicker feedback.

... (read more)

I'm writing up some other responses in reaction to this post, though I've noticed a frustrating theme across my different takes. There are better and worse arguments against the legitimacy of AGI risk as a concern, and it's maddening that LeCun and Mitchell mostly stuck to making the worse arguments.

Do you intend to build on this by writing about the roles played by those other dynamics you mentioned but didn't write about in this post?

tl;dr I thought it could potentially be a good idea for Dustin Moskovitz to take his use of dank memes to make himself and EA more relatable even further, by becoming a notorious edgelord like Elon Musk, or like Biden in the form of Dark Brandon. I've since concluded it's unnecessary, especially given downside risks like inadvertently causing a toxic cult of personality around an unwitting Dustin.

Sometimes I use shortform posts as a notepad for unrefined thoughts that I might refine later. I posted the last couple on mobile which maybe has... (read more)

There were contests in the recent past. They haven't effected much practical change. My impression from within effective altruism is that they were appreciated as an intellectual exercise, but that there isn't faith that another contest like that will provoke the desired reforms.

Some of the public criticism of EA I saw a few months ago was that the criticism contest was meant only to attract the kind of criticism the leadership of EA would want to hear. That criticisms of EA on a fundamental level were relegated to outside media was taken as a sign EA-sp... (read more)

I thought more this morning about my shortform post from yesterday (https://forum.effectivealtruism.org/posts/KfwFDkfQFQ4kAurwH/evan_gaensbauer-s-shortform?commentId=SjzKMiw5wBe7bGKyT) and I've changed my mind about much of it. I expected my post to be downvoted because most people would perceive it as a stupid and irrelevant take. Here are some reasons I disagree now, though I couldn't guess whether anyone downvoted my post because they took my take seriously but still thought it sucked.

  1. I've concluded that Dustin Moskovitz shouldn't go full Dark Brandon

... (read more)
4
Linch
9mo
Can you be a bit more precise about what you mean? Even though I'm well aware of the Dark Brandon meme, I still don't know for sure what you're referring to.

I've long had animal welfare, especially wild animal welfare, as one priority in EA, among others. I also have a background of involvement in animal welfare and environmental movements independent of EA. My experience is that environmentalists tend to be at least mildly more conscientious about wild animal welfare than the typical animal welfarist.

That doesn't mean that the typical environmentalist cares more about wild animal welfare than the typical animal welfarist. Typically, both such kinds of people tend not to care much about wild anima... (read more)

tl;dr It'd take more digging to identify which charities would be the best candidates for donations to execute this kind of effort, though it seems likely the right charities can be identified among those already in the orbit of the global health & development arm of EA.

I'm not aware which organizations, if any, launched in the last few years independent of the pandemic have already received major funding from EA-affiliated donors and would be ready to receive more of the same funding to rapidly roll out a pro... (read more)

I'm really glad that this new book can focus on the last ~50 years of sociological changes since the last one; detailed research on the phenomena of large numbers of people being slow to update their thinking on easily-provable moral matters is broadly applicable to global health and existential risk as well.

This is an excellent point. Not that this statement necessarily implies ignorance of other recent developments in animal welfare research originating in effective altruism, though sociology is far from the only field from which dramatic transformations in... (read more)

Thank you for your comment. I appreciate that from you especially as someone with a specialized focus on AI safety. 

Sometimes I have the impression that perhaps most of those working in AI safety perceive only a small minority in the field as seriously considering the risk of adverse impacts on non-human life from advancing AI. I suspect it might even be a majority of those working in AI safety who afford some level of consideration to what impact advancing AI may have on other lifeforms. If that's true, it may not be common knowledg... (read more)

I've barely read this post yet, though just based on seeing it I want to express my excitement that Peter Singer has opted to post again on the EA Forum for the first time in almost 9 years. 

I'm not even aware whether Singer personally logged into the EA Forum then or whether it was a user account made on his behalf for posterity. Maybe Singer hasn't even logged in to publish this post.

It could be copy he authorized some site admin or assistant to post on his behalf. That'd be okay too. If Peter Singer himself is really, personally reading the other resp... (read more)

I can't overstate how much the UX and UI for the EA Forum on mobile sucks. It sucks so much. I know the Online Team at CEA is endlessly busy and I don't blame anyone for this, though the UX/UI on mobile for the EA Forum is abysmal.

The dynamics of social status will cause some problems in ways unique to the EA community, though my experience is that the same is true for any organized group of people. I've never encountered an organized group that doesn't face the general problem of navigating those dynamics at the expense of making progress toward shared goals. This problem may be universal due to human nature, though how much of an adverse impact it has can be managed in organizations with:

1. A standard set of principles and protocols for... (read more)

I'm just seeing this post now and I haven't read it yet, though I'll offer my initial impression based on the title alone, because I think it will be informative. I'm reacting to just the title because I'm aware of how a lot of other people will be triggered by it before they get to the post's contents.

The claim that we are in the midgame, specifically, as opposed to the endgame, is contentious and controversial. That's such a hot take that for a lot of people the only takeaway from this post was the fact that y... (read more)

I worry that the doomer levels are so high EAs will be frozen into inaction and non-EAs will take over from here. This is the default outcome, I think.

On one hand, as I got at in this comment, I'm more ambivalent than you about whether it'd be worse for non-EAs to take more control over the trajectory of AI alignment.

On the other hand, one reason why I'm ambivalent about effective altruists (or rationalists) retaining that level of control is that I'm afraid the doomerism may become an endemic or terminal disease for the EA community. AI alignm... (read more)

There's no cavalry coming - we are the cavalry. 

It's ambiguous who this "we" is. It obscures the fact that there are overlapping and distinct communities within AI alignment as an umbrella movement. There have also been increasing concerns that a couple of the communities serving as nodes in that network, namely rationality and effective altruism, are becoming more trouble than they're worth. This has been coming from effective altruists and rationalists themselves.

I'm aware of, and have been part of, increasingly frequent conversations that AI safety and... (read more)

As others noted, the post also made a bunch of specific claims that others can disagree with as opposed to saying vague things or hedging a lot, which I also appreciate (see also epistemic legibility). 

Thank you for acknowledging this and emphasizing the specific claims being made. I'm guessing you didn't mean to cast aspersions through a euphemism. I'd respect your not being as explicit about it, if that is part of what you meant here.

For my part, though, I think you're understating how much of a problem those other posts are, so I feel obliged t... (read more)

(Importantly, from my understanding, this isn’t OpenAI being evil or anything like that—OpenAI would love to hire more alignment researchers, but there just aren’t many great researchers out there focusing on this problem.)

Thank you for emphasizing that you're not implying OpenAI is evil only because some practices at OpenAI may be inadequate. I feel like I shouldn't have to thank you for that, though I do, just to emphasize how backwards the thinking and discourse in the AI safety/alignment community often is when a pall of fear and paranoia is cast on all AI capa... (read more)

I have so far gotten the same impression: that making RLHF work as a strategy, by iteratively and gradually scaling it in a very operationally secure way, seems like maybe the most promising approach. My viewpoint remains the one you've expressed: as much as RLHF++ has going for it in a relative sense, it leaves a lot to be desired in an absolute sense in light of the alignment/control problem for AGI.

Overall, I really appreciate how well this post condenses in detail what is increasingly common knowledge a... (read more)

At this point, I'll hear out the gameplan to align AGI from any kind of normie SEAL team. We're really scraping the bottom of the barrel right now. 

[epistemic status: half-joking]

There’s no secret elite SEAL team coming to save the day.

Are there any organized groups of alignment researchers who serve as a not-so-secret, normal civilian equivalent of a SEAL team trying their best to save the day, while also trying to make no promises of being some kind of elite, hyper-competent super-team?

2
Evan_Gaensbauer
1y
At this point, I'll hear out the gameplan to align AGI from any kind of normie SEAL team. We're really scraping the bottom of the barrel right now. 

I think if your mood or morale is low, there are better ways to cheer yourself up than to look for memes on the EA Forum.

Especially because there is already a group for that.

Agreed and upvoted. Here is the blurb I've put at the top of my post.

Disclaimer: I've written this as a reference post to cite in other, forthcoming posts making an empirical case for why and how non-violent methods of reducing the risk of the destruction of global civilization have in practice been more effective than violent methods. This post is in no way meant to endorse violent action to reduce any potential existential risk. It cannot and should not be used as a reference in support of promoting any such violent agenda.
