All of Yitz's Comments + Replies

That's a fair point; I'm reconsidering my original take.

This was a really fun read; thanks for helping put it together!!

Yitz · 1y

+1 from me.

 I was talking about the whole situation with my parents, and they mentioned that their local synagogue experienced a very similar catastrophe, with the community's largest funder turning out to be a con-man. Everybody impacted had a lot of soul-searching to do, but ultimately in retrospect, there was really nothing they could or should have done differently—it was a black-swan event that hasn't repeated in the quarter of a century or so since it happened, and there were no obvious red flags until it was too late. Yes, we can always find de... (read more)

Same here; this is really helping me understand the (at least perceived) narrative flow of events.

Thank you for sharing; I can understand why you might be feeling burnt out!! I've been in a workplace environment that reminds me of this, and especially if you care about the people and projects there... it's painful.

Strongly agree that moving forward we should steer away from such organizational structures; much better that something bad is aired publicly before it has a chance to become malignant.

This seems probable to me; thanks for sharing a good-faith explanation.

+1 to this, can attest I've done the same, and immediately regretted it lol

Curious what you think about screenshots like this one, which I've now seen in a few different places.

Rafael Harth · 1y
It's lowered my confidence, though it could have various mundane explanations, the simplest being that it was taken before he became a vegan; if so, I feel bad about speculating.
Yitz · 1y

This is a fair critique imo; I'm updating against SBF having used EA for sociopathic reasons. That being said, I'm only slightly updating towards him having used EA ideology as his main motivator to commit fraud, as that still may very well not be the case.

𝕮𝖎𝖓𝖊𝖗𝖆 · 1y
For the record, I did not believe this to be the case (and had argued as much extensively on Twitter). Even a naive utilitarian calculus doesn't justify risking the funds of over a million customers, the FTX Future Fund, the reputation and public goodwill of the EA community, and the entirety of FTX itself to try and bail out Alameda (if that is indeed what happened).

That said, an EA in crypto I trust has told me that if Alameda went under, FTX would have gone down with it, so it may have been a case of "lose literally everything" or gamble customer funds to try and save Alameda (and by extension FTX). If Alameda's bad bets were going to drag FTX under if SBF let it fail, then it's possible that the trade was:

* Lose Alameda Research, FTX (and its customer funds), the FTX Future Foundation, and all future donations, vs.
* Gamble customer funds to try and save all of the above (if you win), or lose all of the above plus the reputation and public goodwill of the Effective Altruism community (if you lose).

Then the utilitarian calculus is very different. I'm not trying to argue that SBF committed fraud due to EA ideology, but it's no longer as implausible as it first seemed to me. At least it may not be the case that SBF had the option to just let Alameda go under and keep FTX/its customer funds; it's not clear that the funds of over a million customers would have been preserved even if FTX had not gambled them.

(The above argument is speculative and based on secondhand explanations of crypto dynamics I don't understand very well; it may be completely wrong.)
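A toy expected-value comparison may make the shape of this argument concrete. All numbers below are hypothetical placeholders chosen purely for illustration, not claims about the actual probabilities or stakes involved:

```python
# Toy model of the claimed dilemma, with made-up numbers.
# Option 1: let Alameda fail. On this account, FTX (and its customer
# funds) are lost anyway, along with all future donations.
value_let_fail = -1.0   # normalize "lose everything" to -1

# Option 2: gamble customer funds, winning with some probability p.
p_win = 0.3             # hypothetical win probability
value_win = 0.0         # everything preserved
value_lose = -1.2       # everything lost, plus EA's reputation

ev_gamble = p_win * value_win + (1 - p_win) * value_lose
print(f"EV(let Alameda fail) = {value_let_fail:.2f}")
print(f"EV(gamble funds)     = {ev_gamble:.2f}")
# With these placeholder numbers the gamble has the higher expected
# value (-0.84 > -1.00), which is the sense in which the utilitarian
# calculus changes if letting Alameda fail would have sunk FTX anyway.
```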

I'll be honest: I've been putting judgement based on his (apparent) lifestyle on hold, as I've seen some anecdotes/memes floating around Twitter suggesting that he may not have been honest about his veganism/other lifestyle choices. I don't know enough about that situation to discern the actual truth of the matter, so it's possible I've been subject to misinformation there (also, I scrolled past it quickly on Twitter, and it's possible it was some meta-ironic meme or something). If there is (legitimate) evidence he was actually faking it, that would make me update strongly in the other direction, of course.

Rafael Harth · 1y
If there is solid evidence that he was lying about being vegan, I'll change my position completely. That'd be a much worse sign than just not being vegan in the first place. (But as you say, the fact that some people on Twitter hinted at it isn't convincing.)

I got a free textbook (that cost some insane amount of money on Amazon) once from a professor when I asked if he could share a copy of his work for reference in a video game I was making at the time. I don't know if that counts, but seems worth mentioning.

It certainly isn't a good outcome for EA either way, and I don't want us prematurely absolving ourselves of any responsibility we may end up holding. I just want to be as clear-thinking about this as possible, so we can best mend ourselves moving forward.

Yitz · 1y

Thanks for this; it's a nicely compact summary of a really messy situation that I can quickly share if necessary.

I have a sticker by my bed reading 'What Would SBF Do?' (from EAG SF 2022) (I should probably remove that)

Maybe don't remove that—this seems emblematic of a category of mistakes worth remembering, if only so we don't repeat it.

Yitz · 1y

+1 on this. It is painfully clear that we need to radically improve our practices relating to due diligence moving forward.

Clare_Diane · 1y
Thank you, Yitz! Glad you found it interesting! :)

This has created a potentially dangerous mismatch in public perception between what the more serious AI safety researchers think they're doing (e.g. reducing X risk from AGI), and what the public thinks AI safety is doing (e.g. developing methods to automate partisan censorship, to embed woke values into AI systems, and to create new methods for mass-customized propaganda).

This is the crux of the problem, yes. I don’t think this is because of a “conservative vs liberal” political rift though; the left is just as frustrated by, say, censorship of sex e... (read more)

Geoffrey Miller · 2y
Yep, fair enough. I was trying to dramatize the most vehement anti-censorship sentiments in a US political context, from one side of the partisan spectrum. But you're right that there are plenty of other anti-censorship concerns from many sides, on many issues, in many countries.

Yes, the consequences are probably less severe in this context, which is why I wouldn't consider this a particularly strong argument. Imo, it's more important to understand this line of thinking for the purpose of modeling outsiders' reactions to potential censorship, as this seems to be how people irl are responding to OpenAI et al.'s policy decisions.

I would also like to emphasize again that sometimes regulation is necessary, and I am not against it on principle, though I do believe it should be used with caution; this post is critiquing the details of how we are implementing censorship in large models, not so much its use in the first place.

There's nothing in this section about why censoring model outputs to be diverse/not use slurs/not target individuals or create violent speech is actually a bad idea.

The argument in that section was not actually an object-level one, but rather an argument from history and folk deontological philosophy (in the sense that "censorship is bad" is a useful, if not perfect, heuristic used in most modern Western societies). Nonetheless, here are a few reasons why what you mentioned could be a bad idea: Goodhart's law, the Scunthorpe Problem, and the gene... (read more)
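To make one of those concrete: the Scunthorpe Problem is what happens when a naive substring filter flags innocent text. A minimal sketch (the word list and function here are just a toy example, not any real moderation system):

```python
# Naive substring-based profanity filter, of the kind that famously
# blocked residents of Scunthorpe from registering online accounts.
BANNED_SUBSTRINGS = ["cunt"]

def naive_filter(text: str) -> bool:
    """Return True if the text is flagged as offensive."""
    lowered = text.lower()
    return any(bad in lowered for bad in BANNED_SUBSTRINGS)

print(naive_filter("Scunthorpe United won yesterday"))  # True: a false positive
print(naive_filter("hello world"))                      # False
```

Smarter, context-aware filters reduce this failure mode, but the general difficulty of mechanically classifying speech is the underlying point.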

Karthik Tadepalli · 2y
I think this makes a lot of sense for algorithmic regulation of human expression, but I still don't see the link to algorithmic expression itself. In particular I agree that we can't perfectly measure the violence of a speech act, but the consequences of incorrectly classifying something as violent seem way less severe for a language model than for a platform of humans.

Came across this post today—I assume the bounty has been long-closed by now?

WilliamKiely · 2y
I also just came across this. Will DM the author to reply here.

Thanks, I think I somehow missed some of those!

Thanks for the clarification! I might try to do something on the Orthogonality thesis if I get the chance, since I think that tends to be glossed over in a lot of popular introductions.

Answer by Yitz · Aug 05, 2022

My perspective on the issue is that by accepting the wager, you are likely to become far less effective at achieving your terminal goals (since even if you can discount higher-probability wagers, there will eventually be a lower-probability one that you won’t be able to think your way out of, and will thus have to entertain on principle), and to become vulnerable to adversarial attacks, leading to actions which in the vast majority of possible universes are losing moves. If your epistemics require that you spend all your money on projects that will, for all inten... (read more)
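A quick arithmetic sketch of why "there will eventually be a lower-probability wager you can't escape" (all numbers here are hypothetical):

```python
# Pascal's-mugging dynamic: if promised payoffs can grow without bound,
# then for ANY probability threshold, some quoted payoff is large
# enough that the wager dominates your ordinary plans in expected value.
baseline_ev = 100.0  # hypothetical EV of your normal, non-wager plans

for p in [1e-3, 1e-6, 1e-9]:  # ever-smaller credences you assign
    required_payoff = baseline_ev / p
    print(f"p = {p:g}: any promised payoff above {required_payoff:,.0f} "
          f"beats the baseline in expected value")
# No matter how low the probability, a large enough promised payoff
# wins the naive calculation -- which is the adversarial-attack
# vulnerability described above.
```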

ColdButtonIssues · 2y
I think this is true for some people, but not for most people. Religion seems helpful for happiness, health, having a family, etc., which are some of the most common terminal goals out there.
Transient Altruist · 2y
This argument is one that makes intuitive sense, and of course I am no exception to that intuition. However, intuition is not the path to truth; logic is. Unless you can provide a logic-founded reason why an almost-certain loss with a minuscule chance of a huge win is worse than an unlikely loss with a probable win, I can't accept the argument.

Question—is $20,000 awarded to every entry which qualifies under the rules, or is there one winner selected among the pool of all who submit an entry?

ThomasW · 2y
I edited the title to say "$20k in bounties" to make it more clear. From the original text: "This doesn't mean each person who submits an entry gets $2,000. We will award this to entries that meet a high bar for quality (roughly, material that we would actually be interested in using for outreach)."

This is really exciting! I’m glad there are so many talented people on the case, and hope the good news will only grow from here :)

I strongly agree with you on points one and two, though I’m not super confident on point three. For me the biggest takeaway is that we should be putting more effort into attempts to instill “false” beliefs which are safety-promoting and self-stable.

Greg_Colbourn · 1y
I could see this backfiring. What if instilling false beliefs later led to the meta-belief that deception is useful for control?

Thanks for making this document public, it’s an interesting model! I am slightly concerned this could lead to reduced effectiveness within the organization due to reduced communication, which could plausibly cause more net harm in EV than the increased risk of infohazard leakage. I assume you’ve done that cost/benefit analysis already of course, but thought it’s worth mentioning just in case.

We are in the process of reaching out to individuals and we will include them after they confirm. If you have suggestions for individuals to include please add a com... (read more)

There is a very severe potential downside if many funders think in this manner, which is that it will discourage people from writing about potentially important ideas. I’m strongly in favor of putting more effort and funding into PR (disclaimer that I’ve worked in media relations in the past), but if we refuse to fund people with diverse, potentially provocative takes, that’s not a worthwhile trade-off, imo. I want EA to be capable of supporting an intellectual environment where we can ask about and discuss hard questions publicly without worrying about being excluded as a result. If that means bad-faith journalists have slightly more material to work with, then so be it.

Not a bad idea! I’d love to try to actually test this hypothesis—my hunch is that it will do worse at prediction in most areas, but there may be some scenarios where thinking things through from a narrative perspective could provide otherwise hard-to-reach insight.

I was personally unaware of the situation until reading this comment thread, so can confirm.

My brother was recently very freaked out when I asked him to pose a set of questions that he thinks an AI wouldn’t be able to answer, and GPT-3 gave excellent-sounding responses to his prompts.

Seconding this—I’m definitely personally curious what such a chart would look like!

I don’t think that would imply that nothing really matters, since reducing suffering and maximizing happiness (as well as good ol’ “care about other human beings while they live”) could still be valid sources of meaning. In fact, ensuring that we do not become extinct too early would be extremely important to ensure the best possible fate of the universe (that being a quick and painless destruction or whatever), so just doing what feels best at the moment probably would not be a great strategy for a True Believer in this hypothetical.

I’m really excited about this, and look forward to participating! Some questions—how will you determine which submissions count as “Winners” vs “runners up” vs “honorable mentions”? I’m confused what the criteria for differentiating categories are. Also, are there any limits as to how many submissions can make each category?

I didn't focus on it in this post, but I genuinely think that the most helpful thing to do involves showing proficiency in achieving near-term goals, as that both allows us to troubleshoot potential practical issues, and allows outsiders to evaluate our track record. Part of showing integrity is showing transparency (assuming that we want outside support), and working on neartermist causes allows us to more easily do that.

Fair enough; I didn’t mean to imply that $100M is exactly the amount that needs to be spent, though I would expect it to be near the lower bound of what he would have to spend (on projects with clear, measurable results) if he wants to become known as “that effective altruism guy” rather than “that cryptocurrency guy.”

It's hard to imagine him not being primarily seen as a crypto guy while he's regularly going to Congress to talk about crypto, and lobbying for a particular regulatory regime. Gates managed this by not running Microsoft any more; it might take a similarly big change in circumstances to get there for SBF.

Within the domain of politics (and to a lesser degree, global health), PR impact makes an extremely large difference in how effective you’re able to be at the end of the day. If you want, I’d be happy to provide data on that, but my guess is you’d agree with me there (please let me know if that isn’t the case). As such, if you care about results, you should care about PR as well. I suspect that your unease mostly lies in the second half of your response—we should do things for “direct, non-reputational reasons,” and actions done for reputational reasons wo... (read more)

Other than the donations towards helping Ukraine, I’m not sure there’s any significant charity on the linked page that will have really noticeable effects within a year or two. For what I’m talking about, there needs to be an obvious difference made quickly—it also doesn’t help that those are all pre-existing charities under other people’s names, which makes it hard to say for sure that it was SBF’s work that made the crucial difference even if one of them does significantly impact the world in the short term.

If it was just me (and maybe a few other similar-minded people) in the universe however, and if I was reasonably certain it would actually do what it said in the label, then I may very well press it. What about you, for the version I presented for your philosophy?

AnaDoe · 2y
Teo sums it up pretty well here: https://forum.effectivealtruism.org/posts/JnHeeTGAohMFxNbGK/peacefulness-nonviolence-and-experientialist-minimalism

Excellent question! I wouldn’t, but only because of epistemic humility—I would probably end up consulting with as many philosophers as possible and seeing how close we can come to a consensus decision regarding what to practically do with the button.

Answer by Yitz · May 29, 2022

I'm not sure if you're still actively monitoring this post, but the Wikipedia page on the Lead-crime hypothesis (https://en.wikipedia.org/wiki/Lead%E2%80%93crime_hypothesis) could badly use some infographics!! My favorite graph on the subject is this one (from https://news.sky.com/story/violent-crime-linked-to-levels-of-lead-in-air-10458451; I like it because it shows this isn't just localized to one area), but I'm pretty sure it's under copyright unfortunately.

Love this newsletter, thanks for making it :)

One possible “fun” implication of following this line of thought to its extreme conclusion would be that we should strive to stay alive and improve science to the point at which we are able to fully destroy the universe (maybe by purposefully paperclipping, or instigating vacuum decay?). Idk what to do with this thought, just think it’s interesting.

Anthony Fleming · 2y
Side note: I love that "paperclipping" is a verb now.
PaulCousens · 2y
That's an interesting way of looking at it. That view seems nihilistic, and like it could lead to hedonism, since if our only purpose is to make sure we completely destroy ourselves and the universe, nothing really matters.

Thanks for the post—it was really amazing talking with you at the conference :)

Milan.Patel · 2y
Was great to meet and hang out with you Yitz!

We already know that we can create net positive lives for individuals

Do we know this? Thomas Ligotti would argue that even most well-off humans live in suffering, and it’s only through self-delusion that we think otherwise (not that I fully agree with him, but his case is surprisingly strong).

PaulCousens · 2y
That is a good point. I was actually considering that when I was making my statement. I suspect self-delusion might be at the core of the belief of many individuals who think their lives are net positive. In order to adapt to/avoid great emotional pain, humans might self-delude when faced with the question of whether their life is overall positive.

Even if it is not possible for human lives to be net positive, my first counterargument would still hold for two different reasons. First, we'd still be able to improve the lives of other species. Second, it would still be valuable to prevent the much more negative lives that might arise if other kinds of humans were allowed to evolve in our absence.

It might be difficult to ensure our extinction was permanent. Even if we took care to make ourselves extinct in a way we somehow wouldn't come back from, it's possible that within, say, a billion years the universe would change in such a way as to make the spark of life that would lead to humans happen again. Cosmological and extremely long processes might undo any precautions we took. Alternatively, maybe different kinds of humans that would evolve in our absence would be more capable of having positive lives than we are.

I don't think I am familiar with anything by Thomas Ligotti. I'll look into his work.

If you could push a button and all life in the universe would immediately, painlessly, and permanently halt, would you push it?

AnaDoe · 2y
Would you cleanse the whole universe with that utilitronium shockwave, which is a no less relevant thought experiment pertaining to CU?

I think it’s okay to come off as a bit insulting in the name of better feedback, especially when you’re unlikely to be working with them long-term.

Buck · 2y

If you come across as insulting, someone might tell everyone they talk to for the next five years that you're an asshole, which might make it harder for you to do other things you'd hoped to do.

Jeroen Willems · 2y
I agree, and like I said, I'm sure those sentences can be massively improved. I'd rather have my feelings a little hurt than remain in the dark as to why a grant didn't get accepted.
Yitz · 2y

my best guess is that more time delving into specific grants will only rarely actually change the final funding decision in practice

Has anyone actually tested this? It might be worthwhile to record your initial impressions on a set number of grants, then deliberately spend X amount of time researching them further, and calculate how often the further research changes your mind.
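A minimal sketch of how such a self-experiment could be tallied (the records and field names below are purely hypothetical illustrations, not real grant data):

```python
# For each grant, log the snap judgment made on first read and the
# final decision after a fixed block of deliberate further research.
records = [
    {"grant": "A", "initial": "fund",   "after_research": "fund"},
    {"grant": "B", "initial": "reject", "after_research": "fund"},
    {"grant": "C", "initial": "fund",   "after_research": "fund"},
    {"grant": "D", "initial": "reject", "after_research": "reject"},
]

changed = sum(r["initial"] != r["after_research"] for r in records)
rate = changed / len(records)
print(f"Further research changed {changed}/{len(records)} decisions ({rate:.0%}).")
# A rate near zero suggests the extra research time rarely changes
# outcomes; a high rate suggests the snap judgments are unreliable
# and the deeper dives are earning their keep.
```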
