+1 from me.
I was talking about the whole situation with my parents, and they mentioned that their local synagogue experienced a very similar catastrophe, with the community's largest funder turning out to be a con-man. Everybody impacted had a lot of soul-searching to do, but ultimately in retrospect, there was really nothing they could or should have done differently—it was a black-swan event that hasn't repeated in the quarter of a century or so since it happened, and there were no obvious red flags until it was too late. Yes, we can always find de...
Curious what you think about screenshots like this one, which I've now seen in a few different places.
I'll be honest, I've been putting judgement based on his (apparent) lifestyle on hold, as I've seen some anecdotes/memes floating around Twitter suggesting that he may not have been honest about his veganism/other lifestyle choices. I don't know enough about that situation to determine the actual truth of the matter, so it's possible I've been subject to misinformation there (also, I scrolled past it quickly on Twitter, and it's possible it was some meta-ironic meme or something). If there is (legitimate) evidence he was actually faking it, that would make me update strongly in the other direction, of course.
This has created a potentially dangerous mismatch in public perception between what the more serious AI safety researchers think they're doing (e.g. reducing X risk from AGI), and what the public thinks AI safety is doing (e.g. developing methods to automate partisan censorship, to embed woke values into AI systems, and to create new methods for mass-customized propaganda).
This is the crux of the problem, yes. I don’t think this is because of a “conservative vs liberal” political rift though; the left is just as frustrated by, say, censorship of sex e...
Yes, the consequences are probably less severe in this context, which is why I wouldn't consider this a particularly strong argument. Imo, it's more important to understand this line of thinking for the purpose of modeling outsiders' reactions to potential censorship, as this seems to be how people irl are responding to OpenAI et al.'s policy decisions.
I would also like to emphasize again that sometimes regulation is necessary, and I am not against it on principle, though I do believe it should be used with caution; this post is critiquing the details of how we are implementing censorship in large models, not so much its use in the first place.
There's nothing in this section about why censoring model outputs to be diverse/not use slurs/not target individuals or create violent speech is actually a bad idea.
The argument in that section was not actually an object-level one, but rather an argument from history and folk deontological philosophy (in the sense that "censorship is bad" is a useful, if not perfect, heuristic in most modern Western societies). Nonetheless, here are a few reasons why what you mentioned could be a bad idea: Goodhart's law, the Scunthorpe Problem, and the gene...
My perspective on the issue is that by accepting the wager, you are likely to become far less effective at achieving your terminal goals (since even if you can discount higher-probability wagers, there will eventually be a lower-probability one that you won’t be able to think your way out of and thus have to entertain on principle), and become vulnerable to adversarial attacks, leading to actions which in the vast majority of possible universes are losing moves. If your epistemics require that you spend all your money on projects that will, for all inten...
Thanks for making this document public, it’s an interesting model! I am slightly concerned it could lead to reduced effectiveness within the organization due to reduced communication, which could plausibly cause more net harm in EV than it prevents via the reduced risk of infohazard leakage. I assume you’ve done that cost/benefit analysis already, of course, but thought it worth mentioning just in case.
...We are in the process of reaching out to individuals and we will include them after they confirm. If you have suggestions for individuals to include please add a com
There is a very severe potential downside if many funders think in this manner, which is that it will discourage people from writing about potentially important ideas. I’m strongly in favor of putting more effort and funding into PR (disclaimer that I’ve worked in media relations in the past), but if we refuse to fund people with diverse, potentially provocative takes, that’s not a worthwhile trade-off, imo. I want EA to be capable of supporting an intellectual environment where we can ask about and discuss hard questions publicly without worrying about being excluded as a result. If that means bad-faith journalists have slightly more material to work with, then so be it.
I don’t think that would imply that nothing really matters, since reducing suffering and maximizing happiness (as well as good ol’ “care about other human beings while they live”) could still be valid sources of meaning. In fact, ensuring that we do not become extinct too early would be extremely important for securing the best possible fate of the universe (that being a quick and painless destruction or whatever), so just doing what feels best at the moment probably would not be a great strategy for a True Believer in this hypothetical.
I’m really excited about this, and look forward to participating! Some questions: how will you determine which submissions count as “winners” vs “runners-up” vs “honorable mentions”? I’m confused about what the criteria for differentiating the categories are. Also, are there any limits on how many submissions can make it into each category?
I didn't focus on it in this post, but I genuinely think that the most helpful thing to do involves showing proficiency in achieving near-term goals, as that both allows us to troubleshoot potential practical issues and allows outsiders to evaluate our track record. Part of showing integrity is showing transparency (assuming that we want outside support), and working on neartermist causes allows us to more easily do that.
Fair enough; I didn’t mean to imply that $100M is exactly the amount that needs to be spent, though I would expect it to be near the lower bound of what he would have to spend (on projects with clear, measurable results) if he wants to become known as “that effective altruism guy” rather than “that cryptocurrency guy”.
It's hard to imagine him not being primarily seen as a crypto guy while he's regularly going to Congress to talk about crypto, and lobbying for a particular regulatory regime. Gates managed this by not running Microsoft any more; it might take a similarly big change in circumstances to get there for SBF.
Within the domain of politics (and to a lesser degree, global health), PR impact makes an extremely large difference in how effective you’re able to be at the end of the day. If you want, I’d be happy to provide data on that, but my guess is you’d agree with me there (please let me know if that isn’t the case). As such, if you care about results, you should care about PR as well. I suspect that your unease mostly lies in the second half of your response—we should do things for “direct, non-reputational reasons,” and actions done for reputational reasons wo...
Other than the donations towards helping Ukraine, I’m not sure there’s any significant charity on the linked page that will have really noticeable effects within a year or two. For what I’m talking about, there needs to be an obvious difference made quickly—it also doesn’t help that those are all pre-existing charities under other people’s names, which makes it hard to say for sure that it was SBF’s work that made the crucial difference even if one of them does significantly impact the world in the short term.
I'm not sure if you're still actively monitoring this post, but the Wikipedia page on the Lead-crime hypothesis (https://en.wikipedia.org/wiki/Lead%E2%80%93crime_hypothesis) could badly use some infographics!! My favorite graph on the subject is this one (from https://news.sky.com/story/violent-crime-linked-to-levels-of-lead-in-air-10458451; I like it because it shows this isn't just localized to one area), but I'm pretty sure it's under copyright unfortunately.
One possible “fun” implication of following this line of thought to its extreme conclusion would be that we should strive to stay alive and improve science to the point at which we are able to fully destroy the universe (maybe by purposefully paperclipping, or instigating vacuum decay?). Idk what to do with this thought, just think it’s interesting.
my best guess is that more time delving into specific grants will only rarely actually change the final funding decision in practice
Has anyone actually tested this? It might be worthwhile to record your initial impressions on a set number of grants, then deliberately spend X amount of time researching each one further, and calculate how often the further research makes you change your mind.
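For concreteness, here's a minimal sketch of the kind of tally I have in mind (the data and verdict labels are made up for illustration):

```python
# Hypothetical log: one (initial verdict, verdict after further research) pair per grant.
records = [
    ("fund", "fund"),
    ("don't fund", "don't fund"),
    ("fund", "don't fund"),    # further research flipped the decision
    ("fund", "fund"),
    ("don't fund", "fund"),    # flipped again
]

changed = sum(1 for initial, final in records if initial != final)
print(f"Further research changed the decision in {changed / len(records):.0%} of cases")
```

Even a rough number like this would tell you whether the extra research time is paying for itself.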
That's a fair point; I'm reconsidering my original take.