All of John_Maxwell's Comments + Replies

I think critics see it as a "sharp left turn" in the AI Alignment sense, where the longtermist values were there all along but were much more dormant while EA was less powerful.

Not necessarily a deliberate strategy though -- my model is that EA started out fairly cause-neutral, people had lots of discussions about the best causes, and longtermist causes gradually emerged as the best.

E.g. in 2012 Holden Karnofsky wrote:

I consider the general cause of "looking for ways that philanthropic dollars can reduce direct threats of global catastrophic risks, pa

... (read more)
8
Jeff Kaufman
1y
I think a lot of people moved from "I agree others matter regardless of where, or when, they are but figuring out how to help people in the future isn't very tractable" to "ok, now I see some ways to do this, and it's important enough that we really need to try". Or maybe this was just my trajectory (2011, 2018, 2022) and I'm projecting a bit...

You said you were looking for "when the ideas started gathering people". I do suspect there's an interesting counterfactual where in-person gathering wasn't a major part of the EA movement. I can think of some other movements where in-person gathering is not focal. In any case, I'm not hung up on the distinction, it just seemed worth mentioning.

The "effective altruism" tag on LessWrong has lots of early EA discussion. E.g. here is a comment from Anna Salamon explaining Givewell to Eliezer Yudkowsky in early 2009.

My sense is that early EA summits were pretty important -- here are videos from the first EA Summit in 2013.

early EA summits were pretty important

The first EA summit was the one you linked in summer 2013, so it just wasn't early enough.

(You could argue that it was important for the movement's growth)

I think the "fulltime job as a scientist" situation could be addressed with an "apply for curation" process, as outlined in the second half of this comment.

Thanks a lot for writing this post!

Personal experience: When I tried a vegan diet, I experienced gradually decreasing energy levels and gradually increasing desire for iron-rich animal products (hamburgers). My energy levels went back to normal when I went ahead and ate the hamburgers.

So, I'm really excited about the potential of nutritional investigations to improve vegan diets!

For bivalvegans, note that some bivalves are rich in heme iron (heme iron, from animals, is more easily absorbed than the non-heme iron found in plants).

Again, personal experienc... (read more)

Thanks for all your hard work, Megan.

I'm reminded of this post from a few months ago: Does Sam make me want to renounce the actions of the EA community? No. Does your reaction? Absolutely.

And this point from a post Peter Wildeford wrote: "I think criticism of EA may be more discouraging than it is intended to be and we don't think about this enough."

In theory, the EA movement isn't about us as EAs. It's about doing good for others. But in practice, we're all humans, and I think it's human nature to have an expectation of recognition/gratitude when we've ... (read more)

I wonder if a good standard rule for prizes is that you want a marketing budget which is at least 10-20% the size of the prize pool, for buying ads on podcasts ML researchers listen to or subreddits they read or whatever. Another idea is to incentivize people to make submissions publicly, so your contest promotes itself.

Title: Prizes for ML Safety Benchmark Ideas

Author: Joshc, Dan H

URL: https://forum.effectivealtruism.org/posts/jo7hmLrhy576zEyiL/prizes-for-ml-safety-benchmark-ideas

Why it's good: Benchmarks have been a big driver of progress in AI. Benchmarks for ML safety could be a great way to drive progress in AI alignment, and get people to switch from capabilities-ish research to safety-ish research. The structure of the prize looks good: They're offering a lot of money, there are still over 6 months until the submission deadline, and all they're asking for is a br... (read more)

4
John_Maxwell
1y
I wonder if a good standard rule for prizes is that you want a marketing budget which is at least 10-20% the size of the prize pool, for buying ads on podcasts ML researchers listen to or subreddits they read or whatever. Another idea is to incentivize people to make submissions publicly, so your contest promotes itself.

There are hundreds of startup incubators and accelerators -- is there a particular reason you like Entrepreneur First?

6
DavidNash
1y
I know a few people who have gone through EF and have said good things about their program. Also one of the founders has interest in EA and has written about it in his blog.

Interesting points.

I think we had a bunch of good shots of spotting what was going on at FTX before the rest of the world, and I think downplaying Sam's actual involvement in the community would have harmed that.

I could see this going the other way as well. Maybe EAs would've felt more free to criticize FTX if they didn't see it as associated with EA in the public mind. Also, insofar as FTX was part of the "EA ingroup", people might've been reluctant to criticize them due to tribalism.

I also think that CEA would have very likely approved any reques

... (read more)

I think it would be terrible if EA updated from the FTX situation by still giving fraudsters a ton of power and influence, but now just don't publicly associate with them.

I don't think fraudsters should be given power and influence. I'm not sure how you got that from my comment. My recommendation was made in the spirit of defense-in-depth.

I can see how a business founder trying to conceal their status as an EA might create an adversarial relationship, but that's not what I suggested.

Put it another way: SBF claimed he was doing good with lots of fanfar... (read more)

3
Habryka
1y
But your recommendation of defense-in-depth would I think have made this situation substantially worse. I think the best saving throws we had in this situation was people scrutinizing Sam and his activities, not trying to hide his involvement with stuff.  I think we had a bunch of good shots of spotting what was going on at FTX before the rest of the world, and I think downplaying Sam's actual involvement in the community would have harmed that.  I also think that CEA would have very likely approved any request by Sam to be affiliated with the movement, so your safeguard would have I think just differentially made it harder for the higher-integrity people (who CEA sadly tends to want to be associated less with, due to them by necessity also having more controversial beliefs) to actually be affiliated with EA, without helping much with the Sam/FTX case.

Our laws are the end result of literally thousands of years of experimentation

The distribution of legal cases involving technology over the past 1000 years is very different than the distribution of legal cases involving technology over the past 10 years. "Law isn't keeping up with tech" is a common observation nowadays.

a literal random change to the status quo

How about we revise to "random viable legislation" or something like that. Any legislation pushed by artists will be in the same reference class as the "thousands of years of experimen... (read more)

...their regulations will probably not, except by coincidence, be the type of regulations we should try to install.

A priori, I'd expect a randomly formulated AI regulation to be about 50% likely to be an improvement on the status quo, since the status quo wasn't selected for being good for alignment.

Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave.

I don't see good arguments supporting this point. I tend to think the opposite -- building a coalition to pass a regulation now makes i... (read more)

5
Matthew_Barnett
1y
I don't agree. It's true that the status quo wasn't selected for being good for alignment directly, but it was still selected for things that are arguably highly related to alignment. Our laws are the end result of literally thousands of years of experimentation, tweaking, and innovation in the face of risks. In that time, numerous technologies and threats have arisen, prompting us to change our laws and norms to adapt. To believe that a literal random change to the status quo has a 50% chance of being beneficial, you'd likely have to believe that AI is so radically outside the ordinary reference class of risks that it is truly nothing whatsoever like we have ever witnessed or come across before. And while I can see a case for AI being highly unusual, I don't think I'd be willing to go that far. Building a coalition now makes it easier to pass other similar regulations later, but it doesn't necessarily make it easier to switch to an entirely different regulatory regime.  Laws and their associated bureaucracies tend to entrench themselves. Suppose that as a result of neo-luddite sentiment, the people hired to oversee AI risks in the government concern themselves only with risks to employment, ignoring what we'd consider to be more pressing concerns. I think it would be quite a lot harder to fire all of them and replace them with people who care relatively more about extinction, than to simply hire right-minded people in the first place. I think it might be worth quoting Katja Grace from a few days ago, Likewise, I think actual quantities of data here might matter a lot. I'm not confident at all that arbitrarily restricting 98% of the supply of data won't make the difference between successful and unsuccessful alignment, relative to allowing the full supply of data. I do lean towards thinking it won't make that difference, but my confidence is low, and I think it might very easily come down to the specific details of what's being allowed and what's being restric

Suppose you saw a commercial on TV. At the end of the commercial a voice says "brought to you by Effective Altruism". The heart-in-lightbulb logo appears on screen for several seconds.

I actually did hear of a case of a rando outside the community grabbing a Facebook page for "Effective Altruism", gaining a ton of followers, and publishing random dubious stuff.

You can insist EA isn't a brand all you want, but someone still might use it that way!

I'm not super attached to getting permission from CEA in particular. I just like the idea of EAs starting more ... (read more)

2
jasonk
1y
My suggestion would be to have no process other than general social  sanctions. I don't think it makes sense to make any person or entity an authority over "effective altruism" any more than it would make sense to name a particular person or entity an authority over the appropriate use of "Christian" or "utilitarian". I believe you're introducing a new kind of connection when you talk about usage of the heart-in-lightbulb image. I couldn't tell you who originally produced that image, but I assume it was connected to CEA. I agree that using an image with strong associations with a particular organization that created it might morally require someone to check in with the organization even if the image wasn't copyrighted. I believe effective altruism benefits strongly from the push and pull of different thinkers and organizations as they debate its meaning and what's effective. Some stuff people do will seem obviously incongruous with the concept and in such cases it makes sense for people to express social disapproval (as has been done in the past).

With recent FTX news, EA has room for more billionaire donors. For any proposed EA cause area, a good standard question to ask is: "Could this be done as a for-profit?" Quoting myself from a few years ago:

There are a few reasons I think for-profit is generally preferable to non-profit when possible:

  • It's easier to achieve scale as a for-profit.
  • For-profit businesses are accountable to their customers. They usually only stay in business if customers are satisfied with the service they provide. Non-profits are accountable to their donors. The impression
... (read more)
5
Habryka
1y
This seems like the opposite lesson to learn from me. I think it would be terrible if EA updated from the FTX situation by still giving fraudsters a ton of power and influence, but now just don't publicly associate with them.  This seems like it creates an even more adversarial relationship to the public, and I don't think would have made this situation much better (the vast majority of the damage of this situation is because Sam stole $8 billion of customer deposits, was actually a quite influential EA, and in some sense was an important leader, not because he was publicly associated with EA).
6
jasonk
1y
  I strongly disagree with the idea that CEA (or any person or entity) should have that kind of ownership over "effective altruism". It's not a brand, but a concept whose boundaries are negotiated by a wide variety of actors.

However, that doesn't really change my point that usually the reason a new idea seems wacky and strange is because it's wrong.

I think seeming wacky and strange is mainly a function of difference, not wrongness per se.

I'd argue that the best way to evaluate the merits of a wacky idea is usually to consider it directly. And discussing wacky ideas is what brings them from half-baked to fully-baked.

If you can find a good way to count up the historical reference class of "wacky and strange ideas being explored by highly educated contrarians" and quantify the... (read more)

2
titotal
1y
I mean, we can start with this list here. I guarantee you there are highly educated people who buy into pretty much every conspiracy on that list. It's not at all hard to find, for example, engineers who think 9/11 was an inside job. Ted Kaczynski was a mathematics professor, etc, you get the point. The list of possible wrong beliefs outnumbers the list of possible correct beliefs by many orders of magnitude. That stands for status quo opinions as well, but they have the advantage of withstanding challenges and holding for a longer period of time. That's the reason that if someone claims they've come up with a free energy machine, it's okay to dismiss them, unless you're feeling really bored that day. Now, EA is exploring status quo ideas that are much less tested and firm than physics, so finding holes is much easier and worthwhile, and so I agree that strange ideas are worth considering. But most of them are still gonna be wrong, because they are untested.

Interesting argument!

I'm not fully persuaded, because I think we're dealing with heterogeneous sub-populations.

Consider the statement "As a non-EA, I believe that EA funders don't allocate enough capital to funding development econ research". I don't think we can conclude from this statement that the opposite is true, and EA funders allocate too much capital to development econ research.

The heterogeneous subpopulations perspective suggests that people who think development econ research is the most promising cause may be self-selecting out of the "dedicat... (read more)

My sense is if you look at "wacky and strange ideas being explored by highly educated contrarians" as a historical reference class, they've been important enough to be worth paying attention to. I would put pre-WWW discussion & exploration of hypermedia in this category, for instance. And the first wiki was a rather wacky and strange thing. I think you could argue that the big ideas underpinning EA (RCTs, veganism, existential risk) were all once wacky and strange. (Existential risk was certainly wacky and strange about 10-15 years ago.)

3
titotal
1y
I think it's good to discuss wacky and strange ideas, because on the occasions where they actually are true, it can lead to great things.  A lot of great movements and foundations are built on disruptive ideas that were strange at the time but obvious in retrospect.  However, that doesn't really change my point that usually the reason a new idea seems wacky and strange is because it's wrong. And if you glorify the rare victories too much, you might start forgetting the many, many failures, leading towards a bias for accepting ideas that are somewhat half-baked. 

One extremely under-rated impact of working harder is that you learn more. You have sub-linear short-term impact with increasing work hours because of things like burnout, or even just using up the best opportunities, but long-term you have super-linear impact (as long as you apply good epistemics) because you just complete more operational cycles and try more ideas about how to do the work.

Working more hours could help learning in the sense of helping you collect data faster. But if you want to learn from the data you already have, I'd suggest working... (read more)

Variant: "EA funds should do small-scale experiments with mechanisms like quadratic voting and prediction markets, that have some story for capturing crowd wisdom while avoiding both low-info voting and single points of failure. Then do blinded evaluation of grants to see which procedure looks best after X years."

1
Guy Raveh
1y
I support experimenting with voting mechanisms, and strongly oppose putting prediction markets in there.

One consideration is for some of those names, their 'conversation' with EA is already sorta happening on Twitter. The right frame for this might be whether Twitter or a podcast is a better medium for that conversation.

You could argue podcasts don't funge against tweets. I think they might -- I think people are often frustrated and want to say something, and a spoken conversation can be more effective at making them feel heard. See The muted signal hypothesis of online outrage. So I'd be more concerned about e.g. giving legitimacy to inaccurate criticis... (read more)

You make good points, but there's no boolean that flips when "sufficient quantities of data [are] practically collected". The right mental model is closer to a multi-armed bandit IMO.
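For what I mean by the bandit framing, here's a minimal epsilon-greedy sketch (the "arms" and their success rates are invented for illustration): you keep acting on your current best estimate while still collecting evidence, rather than waiting for a decision-grade dataset:

```python
# Minimal epsilon-greedy multi-armed bandit sketch.
# Arm names and hidden success rates are fabricated purely for illustration.
import random

true_rates = {"talk_to_journalists": 0.40, "respond_in_writing": 0.55}  # hidden "truth"
estimates = {arm: 0.0 for arm in true_rates}
counts = {arm: 0 for arm in true_rates}
EPSILON = 0.1  # fraction of decisions spent exploring at random

for _ in range(1000):
    if random.random() < EPSILON:
        arm = random.choice(list(true_rates))    # explore
    else:
        arm = max(estimates, key=estimates.get)  # exploit current best estimate
    reward = 1 if random.random() < true_rates[arm] else 0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running average

print(estimates, counts)
```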

Great points.

There's an unfortunate dynamic which has occurred around discussions of longtermism outside EA. Within EA, we have a debate about whether it's better to donate to nearterm vs longterm charities. A lot of critical outsider discussion on longtermism ends up taking the nearterm side of our internal debate: "Those terrible longtermists want you to fund speculative Silicon Valley projects instead of giving to the world's poorest!"

But for people outside EA, nearterm charity vs longterm charity is generally the wrong counterfactual. Most people ou... (read more)

7
BrownHairedEevee
1y
Yeah, it's the narcissism of small differences. If we're gonna emphasize our diversity more, we should also emphasize our unity. The narrative could be "EA is a framework for how to apply morality, and it's compatible with several moral systems."

In terms of understanding the causal effect of talking to journalists, it seems hard to say much in the absence of an RCT.

Someone ought to flip a coin for every interview request, in order to measure (a) the causal effect of accepting an interview on probability of article publication, and (b) the direction of any effects on article accuracy, fairness, and useful critique.

(That was meant as a bit of a joke, but I would honestly be delighted to see a bunch of articles about EA which include sentences like "Person X did not offer any comment because we weren... (read more)
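Joke or not, here's a toy sketch of the estimate such a coin-flip design would produce (all numbers are fabricated, just to show the arithmetic of comparing publication rates across the two arms):

```python
# Fabricated simulation of the coin-flip experiment on interview requests.
import random

records = []
for _ in range(60):                          # hypothetical number of interview requests
    accepted = random.random() < 0.5         # the coin flip
    publish_prob = 0.9 if accepted else 0.7  # assumed publication rates, made up
    published = random.random() < publish_prob
    records.append((accepted, published))

def publication_rate(rows):
    rows = list(rows)
    return sum(1 for _, pub in rows if pub) / len(rows)

treated = [r for r in records if r[0]]
control = [r for r in records if not r[0]]
effect = publication_rate(treated) - publication_rate(control)
print(f"estimated effect of accepting an interview on publication: {effect:+.2f}")
```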

It is a joke, but it's an appropriate one.

EA has a pathology of insisting that we defer to data even in situations where sufficient quantities of data can't be practically collected before a decision is necessary.

And that is extremely relevant to EA's media problem.

Say it takes 100 datapoints over 10 years to make an informed decision. During that time:

  • The media ecosystem, the character of the discourse, the institutions (there are now prediction markets involved btw), and the dominant moral worldviews of the audience have completely changed, you no longer n
... (read more)
2
Arepo
1y
That might at least be a good way of establishing a lower bound for EV from talking to journalists.

I think almost everyone I know who has taken up requests to be interviewed about some community-adjacent thing in the last 10 years has regretted their choice, not because they were punished by the community or something, but because the journalists ended up twisting their words and perspective in a way that both felt deeply misrepresentative and gave the interviewee no way to object or correct anything.

Do you have thoughts about the idea of creating a thread on a site like the EA Forum or Less Wrong where someone takes questions from the media and responds ... (read more)

I think something like that is a better idea. Or separately, for people to just write up their takes in comments and posts themselves. I've been reasonably happy with the outcomes of me doing that during this FTX thing. I think I've been quoted in one or two articles, and I think those quotes have been fine.

Is there somewhere we can see how the winners of donor lotteries have been donating their winnings?

Thanks for all your hard work in EA!

I think you (and lots of other EAs who feel the same way you do) are totally correct that you don't deserve the response you've been seeing to the FTX situation. You deserve a huge pat on the back for doing so much for the world.

Separately, I also agree with these paragraphs Oliver wrote a few days ago, and I'm (tentatively) glad that there's been more criticism than usual on the forum right now (even if it's ultimately unrelated to FTX):

I do think it is indeed really sad that people fear reprisal for disagreement. I

... (read more)

This is a really important point. It might make sense to talk to journalists in order to contextualize what you said on the EA Forum -- or to ask them not to use something!

Answering in writing should help with the "foot in mouth" problem. You can ask them to send questions, and say you don't promise to answer all of them.

A journalist reached out to me recently and this is basically what I did; no regrets so far at least.

IMO "try to respond in writing" should be standard advice when dealing with journalists. Past that, I remember a Less Wrong user once created a (public) thread specifically for taking journalist questions; that seems like a good way to discourage misrepresentation.

2
Greg_Colbourn
1y
I really like the idea of asking for a public written thread for Q & A from a journalist to avoid misrepresentation.

Any chance we can get an interview with Nishad or Caroline? I feel like their answers would be a lot more informative in terms of what EA should take away from all this.

17
Linch
1y

I assume that their lawyers are strongly encouraging them to not say anything.

Fair enough!

You're correct that the EA Forum isn't as democratic as "one person one vote". However, it is one of the more democratic institutions in EA, so provides evidence re: whether moving in a more democratic direction would've helped.

I'd be interested if people can link any FTX criticism on reddit/Facebook prior to the recent crisis to see how that went. In any case, "one person one vote" is tricky for EA because it's unclear who counts as a "citizen". If we start deciding grant applications on the basis of reddit upvotes or Facebook likes, that creates a cash incentive for vote brigades.

1
Achim
1y
I think most democratic systems don't work that way - it's not that people vote on every single decision; democratic systems are usually representative democracies where people can try to convince others that they would be responsible policymakers, and where these policymakers then are subject to accountability and checks and balances. Of course, in an unrestricted democracy you could also elect people who would then become dictators, but that just says that you also need democrats for a democracy, and that you may first need fundamental decisions about structures.
3
Charlie_Guthmann
1y
You can see who likes things on Facebook, and Reddit isn't especially used. You can actually see democratic voting on the tree of tags (weird that I can't find the same option for the forum itself...), but you still run into the issue that people might upvote/downvote posts that have more upvotes in general.

Not saying I disagree with this, but it may be worth noting that "democracy" as an alternative didn't exactly do great either -- Stuart Buck wrote this comment, and it got downvoted enough that he deleted it.

Indeed. I actually am inclined to agree that more democracy in distributing funds and making community decisions is safer overall and prevents bad tail risks, and I think Zoe Cremer's suggestions should be taken seriously, but let's remember that democracy in recent years has given us Modi, Bolsonaro, Trump, Duterte and Berlusconi as leaders of countries with millions of citizens, on the basis of millions of votes, and that Hitler did pretty well in early 1930s German elections. Democracy is not just "not infallible" but has led to plausibly bad decis... (read more)

2
GideonF
1y
This post is merely asking questions of those currently in power, not saying any specific form of greater internal democracy is a good thing (I know you acknowledge that the post is doing this as well, but thought I would reiterate :-)!). Moreover, because of the karma system, the EA Forum is hardly democratic either!

I agree dense housing would help. Another idea is more group houses. It seems that there's an excess of big houses in the US right now: https://www.wsj.com/articles/a-growing-problem-in-real-estate-too-many-too-big-houses-11553181782

More thoughts on roommates as a solution for loneliness in this post I wrote: How to Make Billions of Dollars Reducing Loneliness. (Have learned more about the topic since writing that post; can share if people are interested)

A small probability of a big future win. The world today has lots of governments, but they seem to mostly follow a very small number of basic governance templates. At some point, there will be new states with new Constitutions - maybe via space settlements, maybe via collapse of existing states, etc. - but I expect these moments to be few and far between. A significant literature and set of experts on "ideal governance" could lead to a radically different kind of state government, potentially with radically different policies that the rest of the world co

... (read more)

Holden Karnofsky has some interesting thoughts on governance:

One theme is that good governance isn't exactly a solved problem. IMO EA should use a mix of approaches: copying best practices for high-stakes scenarios, and pioneering new practices for lower-stakes scenarios. (For example, setting up a small fund to be distributed according to some experimen... (read more)

(Upvoted)

Events are not evidence to the truth of philosophical positions.

Are you sure? How about this position from Richard Chappell's post?

(3) Self-effacing utilitarian: Ex-utilitarian, gave up the view on the grounds that doing so would be for the best.

Psychological effects of espousing a moral theory are empirical in nature. Observations about the world could cause a consequentialist to switch to some other theory on consequentialist grounds, no?

Not sure there's a clean division between moral philosophy and moral psychology.

I agree hastily jum... (read more)

4
Ben Auer
1y
My understanding is that the self-effacing utilitarian is not strictly an 'ex-utilitarian', in that they are still using the same types of rightness criteria as a utilitarian (at least with respect to world-states). Although they may try to deceive themselves into actually believing another theory, since this would better achieve their rightness criterion, that is not the same as abandoning utilitarianism on the basis that it was somehow refuted by certain events. In other words, as you say, they're switching theories "on consequentialist grounds". Hence they're still a consequentialist in the sense that is philosophically important here.

I'd be interested to know if there's any psychological research on how niceness and being ethical may be related.

For example, prior to the FTX incident, I didn't usually give money to beggars, on the grounds that it was ineffective altruism. But now I'm starting to wonder if giving money to beggars is an easy way to cultivate benevolence in oneself, and cultivating benevolence in oneself is an important way to improve as an EA.

Does walking past beggars & rehearsing reasons why you won't give them money end up corroding your character over time, such t... (read more)

I'd be interested to know if there's any psychological research on how niceness and being ethical may be related.

 

There is a plethora of research on the subject, including a growing body of evidence which suggests we are born with a sense of compassion, empathy, and fairness. Paul Bloom has done some amazing research with babies at the Yale psych lab, and more recently the University of Washington published a study suggesting altruism is innate. 

A brief overview of Paul Bloom's work: 

The Moral Life of Babies, Yale Psychology Professor Paul B... (read more)

5
Jay Bailey
1y
I don't think that not giving beggars money corrodes your character, though I do think giving beggars money improves it. This can easily be extended from "giving beggars money" to "performing any small, not highly effective good deed". Personally, it was getting into a habit of doing regular good deeds, however small or "ineffective" that moved me from "I intellectually agree with EA, but...maybe later" to "I am actually going to give 10% of my money away". I still actively look for opportunities to do small good deeds for that reason - investing in one's own character pays immense dividends over time, whether EA-flavored or not, and is thus a good thing to do for its own sake.  

Thanks!

I'm not sure I share your view of that post. Some quotes from it:

...he just believed it was really important for humanity to make space settlements in order for it to survive long-term... From what I could tell, [my professor] probably spend less than 10 hours seriously figuring out if space settlements would actually be more valuable to humanity than other alternatives.

...

Take SpaceX, Blue Origin, Neurolink, OpenAI. Each of these started with a really flimsy and incredibly speculative moral case. Now, each is probably worth at least $10 Bil

... (read more)
7
Arepo
1y
I'm not familiar enough with the case of Andrew Carnegie to comment and I agree on the point of political tribalism. The other two are what bother me.  On the professor, the problem is there explicitly: you omitted a key line 'I tried asking for his opinion on existential threats', which is a strongly EA-identifying approach, and one which many people feel is too simplistic. Eg see Gideon Futurman's EAGx Rotterdam talk when it's up - he argues the way EAs think about x-risk is far too simplified, focusing on single-event narratives, ignoring countless possible trajectories that could end in extinction or similar any one of which is vanishingly unlikely, but which collectively we should take much more seriously. Whether or not one agrees with this view, it seems to me to be one a smart person could reasonably hold, and shows that by asking someone 'his opinion on existential threats, and which specific scenarios these space settlements would help with', you're pigeonholing them into EA-aligned specific-single-event way of thinking. As for Elon Musk, I think the same problem is there implicitly: he's written a paper called 'Making Humans a Multiplanetary Species', spoken extensively on the subject and spent his life thinking that it's important, and while you could reasonably disagree with his arguments, I don't see any grounds for dismissing them as 'really flimsy and incredibly speculative' without engagement, unless your reason for doing so is 'there exists a pool of important research which contradicts them and which I think is correct'. There are certainly plenty of other smart people who think as he does, some of them EAs (though maybe that doesn't contribute to my original complaint). Since there's  a very clear mathematical argument that it's harder to kill all of a more widespread and numerous civilisation, to say that the case is 'really flimsy', you basically need to assume the  EA-aligned narrative that AI is highly likely to kill us all.

I like how Hacker News hides comment scores. Seems to me that seeing a comment's score before reading it makes it harder to form an independent impression.

I fairly frequently find myself thinking something like: "this comment seems fine/interesting and yet it's got a bunch of downvotes; the downvoters must know something I don't, so I shouldn't upvote". If others also reason this way, the net effect is herd behavior? What if I only saw a comment's score after voting/opting not to vote?

Maybe quadratic voting could help, by encouraging everyone to focus t... (read more)

Perhaps ditch the "Your intellectual contributions are poorly regarded" thread; at best, it is unsupported & off-topic

Morale is low right now and senior EA figures are occupied and some have come under direct criticism, whether justified or not. In this environment, it's difficult to communicate or express leadership. Only the CEA community health team seems to be taking the initiative, which must be very difficult and this is heroic.

In this situation there is often gardening of the online space that tends to be performed by marginal actors. LW and MIRI have been left mostly unscathed by the FTX disaster, and now, Eliezer and Rob B (professional communicator employed by MI... (read more)

Seems plausible, I think it would be good to have a dedicated "translator" who tries to understand & steelman views that are less mainstream in EA.

Wasn't sure about the relevance of that link?

(from phone) That was an example of an ea being highly upvoted for dismissing multiple extremely smart and well meaning people's life's work as 'really flimsy and incredibly speculative' because he wasn't satisfied that they could justify their work within a framework that the ea movement had decided is one of the only ones worth contemplating. As if that framework itself isn't incredibly speculative (and therefore if you reject any of its many suppositions, really flimsy)

I'm not sure what you mean by "the principles have little room for errors in implementing them".

That quote seems scarily plausible.

EDIT: Relevant Twitter thread

3
Sharmake
1y
Specifically, I was saying that wrong results would come up if you failed in one of the steps of reasoning, and there's no self-correction mechanism for bad reasoning like Sam Bankman-Fried was doing.

I think your first paragraph provides a potential answer to your second :-)

There's an implicit "Sam fell prey to motivated reasoning, but I wouldn't do that" in your comment, which itself seems like motivated reasoning :-)

(At least, it seems like motivated reasoning in the absence of a strong story for Sam being different from the rest of us. That's why I'm so interested in what people like nbouscal have to say.)

4
Sharmake
1y
So you think there's too much danger of cutting yourself and everyone else via motivated reasoning, a la Dan Luu's "Normalization of Deviance", and the principles have little room for errors in implementing them, is that right? Here's a link to it: https://danluu.com/wat/ And a quote:

Well that's the thing -- it seems likely he didn't see his actions as contradicting those principles. Suggesting that they're actually a dangerous set of principles to endorse, even if they sound reasonable. That's what's really got me thinking.

I wonder if part of the problem is a consistent failure of imagination on the part of humans to see how our designs might fail. Kind of like how an amateur chess player devotes a lot more thought to how they could win than how their opponent could win. So if the principles Sam endorsed are at all recoverable, ma... (read more)

3
Sharmake
1y
My guess is standard motivated reasoning explains why he thought he wasn't in violation of his stated principles. Question, but why do you think the principles were dangerous, exactly? I am confused about the danger you state.

Thanks for the reply!

In terms of public interviews, I think the most interesting/relevant parts are him expressing willingness to bite consequentialist/utilitarian bullets in a way that's a bit on the edge of the mainstream Overton window, but I believe would've been within the EA Overton window prior to recent events (unsure about now). BTW I got these examples from Marginal Revolution comments/Twitter.

  • This one seems most relevant -- the first question Patrick asks Sam is whether the ends justify the means.

  • In this interview, search for "So why then

... (read more)

This one is tricky, because it seems bad to tell people who already experience Chidi Anagonye-style crippling self-doubt that they should self-doubt even more.

EA self-doubt has always seemed weirdly compartmentalized to me. Even the humblest of people in the movement is often happy to dismiss considered viewpoints by highly intelligent people on the grounds that it doesn't satisfy EA principles. This includes me - I think we are sometimes right to do so, but probably do so far too much nonetheless.

3
Simon Bazelon
1y
What's interesting about this interview clip though is that he seems to explicitly endorse a set of principles that directly contradict the actions he took! 

I'm curious if you (or any other "SBF skeptic") has any opinion regarding whether his character flaws should've been apparent to more people outside the organizations he worked at, e.g. on the basis of his public interviews. Or alternatively, were there any red flags in retrospect when you first met him?

I'm asking because so far this thread has discussed the problem in terms of private info not propagating. But I want to understand if the problem could've been stopped at the level of public info. If so that suggests that a solution of just getting bette... (read more)

One of the biggest lessons I learned from all of this is that while humans are quite good judges of character in general, we do a lot worse in the presence of sufficient charisma, and in those cases we can't trust our guts, even when they're usually right. When I first met SBF, I liked him quite a bit, and I didn't notice any red flags. Even during the first month or two of working with him, I kind of had blinders on and made excuses for things that in retrospect I shouldn't have.

It's hard for me to say about what people should have been able to detect fro... (read more)

Trying to brainstorm... I noticed this tweet from CZ, which states:

We gave support before, but we won't pretend to make love after divorce. We are not against anyone. But we won't support people who lobby against other industry players behind their backs.

Maybe SBF can hire an apology coach (if that exists? I might know someone kinda like that actually -- but someone SBF knows is probably better) and find it in his heart to apologize to CZ for "lobbying against other industry players behind their backs", and anything else he may have done that CZ resen... (read more)

5
Vincent van der Holst
1y
Even if that has extremely low odds of working that seems like it could be worth a try. Egos have caused many catastrophes before.

Maybe he could get together with a few wealthy friends?

11
Sabs
1y

Why? To light 5 billion on fire because....?

When Full Tilt Poker collapsed in 2011 after it turned out they also had not segregated customer funds, Pokerstars bought them out and made their depositors whole. But Pokerstars did this because they were getting kicked out of the US market by the regulators and needed to buy some goodwill so they'd be let back in the event of eventual regulatory change (which is slowly happening, state by state). No one actually has a meaningful incentive to save FTX unless either a) you want to curry favour with crypto regulators ... (read more)
