All of WilliamKiely's Comments + Replies

This is horrifying! A friend of the author just shared this, along with a recently published Business Insider article that links to this post:

https://www.businessinsider.com/dangerous-surgery-stop-blushing-side-effects-ruined-life-no-emotions-2024-2

I'm curious whether you or the other past participants you know who had a good experience with AISC are in a position to help fill the funding gap AISC currently has. Even if you (collectively) can't fully fund the gap, I'd see your donations as a pretty strong signal that AISC is worth funding. Or, if you don't donate because you prefer other giving opportunities instead (whether in AIS or other cause areas), I'd find that valuable to know too.

4
Linda Linsefors
3mo
From Lucius Bushnaq (full comment here: This might be the last AI Safety Camp — LessWrong):

But on the other hand, I've regularly met alumni who tell me how useful AISC was for them, which convinces me AISC is clearly very net positive.

Naive question, but does AISC have enough such past alumni that you could meet your current funding need by asking them for support? It seems like they'd be in the best position to evaluate the program and know that it's worth funding.

7
Linda Linsefors
3mo
We have reached out to them and gotten some donations. 

Nevertheless, AISC is probably about ~50x cheaper than MATS

~50x is a big difference, and I notice the post says:

We commissioned Arb Research to do an impact assessment
One preliminary result is that AISC creates one new AI safety researcher per around $12k-$30k USD of funding. 

Multiplying that number (which I'm agnostic about) by 50 gives $600k-$1.5M USD. Does your ~50x still seem accurate in light of this?

I'm guessing that what Marius means by "AISC is probably about ~50x cheaper than MATS" is that AISC is probably ~50x cheaper per participant than MATS.

Our cost per participant is $0.6k - $3k USD

50 times this would be $30k-$150k per participant.
I'm guessing that MATS is around $50k per person (including stipends).
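A quick sanity check of that arithmetic, as a minimal sketch: the AISC figures are quoted from the post, while the ~$50k MATS figure is my own guess rather than an official number.

```python
# Rough sanity check of the cost comparison above.
# AISC figures are quoted from the post; the MATS ~$50k/participant
# figure is my own guess, not an official number.

aisc_cost_per_participant = (600, 3_000)          # USD, low and high estimates
aisc_cost_per_new_researcher = (12_000, 30_000)   # USD, Arb's preliminary estimate
mats_cost_per_participant_guess = 50_000          # USD, guess including stipends

# If "~50x cheaper" refers to cost per participant:
print([50 * c for c in aisc_cost_per_participant])      # [30000, 150000] -- brackets the ~$50k guess
# If it instead referred to cost per new researcher, MATS would need to cost:
print([50 * c for c in aisc_cost_per_new_researcher])   # [600000, 1500000]
```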


Here's where the $12k-$30k USD comes from:

Dollar cost per new researcher produced by AISC

  • The organizers have proposed $60–300K per year in expenses. 
  • The number of non-RL participants of programs has increased from 32 (AISC4) to 130
... (read more)

I'm a big fan of OpenPhil/GiveWell popularizing longtermist-relevant facts via sponsoring popular YouTube channels like Kurzgesagt (21M subscribers). That said, I just watched two of their videos and found a mistake in one[1] and took issue with the script-writing in the other one (not sure how best to give feedback -- do I need to become a Patreon supporter or something?):

Why Aliens Might Already Be On Their Way To Us

My comment:

9:40 "If we really are early, we have an incredible opportunity to mold *thousands* or *even millions* of planets according to ou

... (read more)
8
Holly Morgan
3mo
Found it! https://www.youtube.com/user/Kurzgesagt > click on "and 7 more links" in the little bio > click on "View email address" > do the CAPTCHA (I've also DM'd it to you)

I also had a similar experience making my first substantial donation before learning that non-employer counterfactual donation matches existed.

It was the only donation I regretted, since by delaying it 6 months I could have doubled the amount of money I directed to the charity at no extra cost to me.

6
Neil Warren
3mo
That's an interesting anecdote! I donated for the first time a few days ago, and did not know "Giving Tuesday" existed, so I'm one of today's lucky 10,000. I really hope organisations like GWWC that help funnel money to the right charities engage in tricks like this; not investing your money immediately, but finding various opportunities to increase the pot. It would probably be worth the time and money at GWWC to centralize individual discoveries like this, and have a few people constantly looking out for opportunities. The EA forum only partially solves this. 

Great point, thanks for sharing!

While I assume that all long-time EAs learn that employer donation matching is a thing, we'd do well as a community to ensure that everyone learns about it before donating a substantial amount of money, and clearly that's not the case now.

Reminds me of this insightful XKCD: https://xkcd.com/1053/

For each thing 'everyone knows' by the time they're adults, every day there are, on average, 10,000 people in the US hearing about it for the first time.

4
WilliamKiely
3mo
I also had a similar experience making my first substantial donation before learning that non-employer counterfactual donation matches existed. It was the only donation I regretted, since by delaying it 6 months I could have doubled the amount of money I directed to the charity at no extra cost to me.

Thanks for sharing about your experience.

I see 4 people said they agreed with the post and 3 disagreed, so I thought I'd share my thoughts on this. (I was the 5th person to give the post Agreement Karma, which I endorse with some nuance added below.)

I've considered going on a long hike before and like you I believed the main consideration against doing so was the opportunity cost for my career and pursuit of having an altruistic impact.

It seemed to me that clearly there was something else I could do that would be better for my career and altruistic impact ... (read more)

3
Emily Grundy
5mo
Thanks for sharing this! I really appreciated hearing your personal experience and perspective on this. I agree that it's important to consider the realistic counterfactual (maybe that term is already implying 'realistic', but just wanted to emphasise it). There's definitely a world in which I could have spent six months doing something that was even better for my career on the whole. But, whether I actually knew what that alternative was or would have actually done it is a different story. Your message that almost everything is suboptimal is also really insightful. I agree, and think that trying to pursue the 'optimal' path can lead to some anxiety (e.g., "What if I'm not doing the best thing I could be doing?") and sometimes away from action (e.g., "I'm going to say no to this opportunity, because I can imagine something being better / more impactful"). I obviously still think it's worth considering impact and weighing different options against each other, but while always keeping in mind what's realistic (and that what you choose might not be optimal in the ideal world). Thanks again for the reflections, William.
2
Jon
5mo
This is a very interesting point of view.  I also noticed that there were some disagree-votes. There is so much context to these individual choices, and I would be interested in hearing which specific points people disagree with.   

I'll also add that I didn't like the subtitle of the video: "A case for optimism".

A lot of popular takes on futurism topics seem to me to focus on being optimistic or pessimistic, but whether one is optimistic or pessimistic about something doesn't seem like the sort of thing one should argue for. It seems a little like writing the bottom line first.

Rather, people should attempt to figure out what the actual probabilities of different futures are and how we are able to influence the future to make certain futures more or less probable. From there it's just... (read more)

3
Jelle Donders
8mo
Agreed. In a pinned comment of his he elaborates on why he went for the optimistic tone: It seems melodysheep went for a more passive "it's plausible the future will be amazing, so let's hope for that" framing over a more active "a great, terrible, or nonexistent future are all possible, so let's do what we can to avoid the latter two" framing. A bit of a shame, since it's this call to action where the impact is to be found.

I've been a fan of melodysheep since discovering his Symphony of Science series about 12 years ago.

Some thoughts as I watch:

- Toby Ord's The Precipice and his 16 percent estimate of existential catastrophe (in the next century) are cited directly

- The first part of the script seems heavily-inspired by Will MacAskill's What We Owe the Future
- In particular there is a strong focus on non-extinction, non-existentially catastrophic civilization collapse, just like in WWOTF

- 12:40 "But extinction in the long-term is nothing to fear. No species survives forever. ... (read more)

2
Iyngkarran Kumar
8mo
Kurzgesagt script + Melody Sheep music and visuals = great video about the long-term future. Someone should get a collab between the two going.
3
WilliamKiely
8mo
I'll also add that I didn't like the subtitle of the video: "A case for optimism". A lot of popular takes on futurism topics seem to me to focus on being optimistic or pessimistic, but whether one is optimistic or pessimistic about something doesn't seem like the sort of thing one should argue for. It seems a little like writing the bottom line first. Rather, people should attempt to figure out what the actual probabilities of different futures are and how we are able to influence the future to make certain futures more or less probable. From there it's just a semantic question whether having a certain credence in a certain kind of future makes one an optimistic or a pessimist. If one sets out to argue for being an optimist or pessimist, that seems like it would just introduce a bias into one's thinking, where once one identifies as e.g. an optimist, they'll have trouble updating their beliefs about the probability that the future will be good or bad to various degrees. Paul Graham says Keep Your Identity Small, which seems very relevant.

That is, I wasn’t viscerally worried. I had the concepts. But I didn’t have the “actually” part.

For me, having a concrete picture of the mechanism for how AI could actually kill everyone never felt necessary for viscerally believing that AI could kill everyone.

And I think this is because ever since I was a kid, long before hearing about AI risk or EA, the long-term future that seemed most intuitive to me was a future without humans (or post-humans).

The idea that humanity would go on to live forever and colonize the galaxy and the universe and l... (read more)

Thinking out loud about credences and PDFs for credences (is there a name for these?):

I don't think "highly confident people bare the burden of proof" is a correct way of saying my thought necessarily, but I'm trying to point at this idea that when two people disagree on X (e.g. 0.3% vs 30% credences), there's an asymmetry in which the person who is more confident (i.e. 0.3% in this case) is necessarily highly confident that the person they disagree with is wrong, whereas the the person who is less confident (30% credence person) is not necessarily highly ... (read more)

I just got notified that my December 7th test donation was matched. This is extremely unexpected to me, and leads me to believe I got my forecast wrong and that the EA community actually could have gotten ~$1M matched this year with the donation trade scheme I had in mind.



By "messaged" do you mean you got an email, Facebook notification, or something else?

I'm not sure. I think you are the first person I heard of saying they got matched. When I asked in the EA Facebook group for this on December 15th if anyone got matched, all three people who responded (including myself) reported that they were double-charged for their December 15th donations. Initially we assumed the second receipt was a match, but then we saw that Facebook had actually just charged us twice. I haven't heard anything else about the match since then and just assumed I didn't get matched.

2
david_reinstein
1y
Here's what it looked like (I cut out the information about the fundraiser I donated to):

Neat! The cover jacket could use a graphic designer, in my opinion. It's also slotted under engineering? Am I missing something?

Throughout the story I was wondering why Larry was advocating for this at a town meeting rather than finding someone to help turn his idea into a reality (like a Sarah Fletcher or an entrepreneurial friend), so I'm glad that was the punchline.

I felt a [...] profound sense of sadness at the thought of 100,000 chickens essentially being a rounding error compared to the overall size of the factory farming industry.

Yes, about 9 billion chickens are killed each year in the US alone, or about 1 million per hour. So 100,000 chickens are killed every 6 minutes in the US (and every 45 seconds globally). Still, it's a huge tragedy.
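A quick check of those rates, as a minimal sketch: the ~9 billion/year US figure is from the comment above, while the ~70 billion/year global figure is my assumption, backed out from the "every 45 seconds" claim.

```python
# Quick check of the slaughter-rate arithmetic above.
# The ~9 billion/year US figure is from the comment; the ~70 billion/year
# global figure is my assumption, backed out from the "every 45 seconds" claim.

us_per_year = 9e9
global_per_year = 70e9

us_per_hour = us_per_year / (365 * 24)                                   # ~1.03 million per hour
minutes_per_100k_us = 100_000 / us_per_year * 365 * 24 * 60              # ~5.8 minutes
seconds_per_100k_global = 100_000 / global_per_year * 365 * 24 * 3600    # ~45 seconds

print(round(us_per_hour), round(minutes_per_100k_us, 1), round(seconds_per_100k_global))
# 1027397 5.8 45
```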

This is a great point, thanks. Part of me thinks basically any work that increases AI capabilities probably accelerates AI timelines. But it seems plausible to me that advancing the frontier of research accelerates AI timelines much more than other work that merely increases AI capabilities, and that most of this frontier work is done at major AI labs.

If that's the case, then I think you're right that my using a prior for the average project to judge this specific project (as I did in the post) is not informative.

It would also mean we could tell a story ab... (read more)

I replied on LW:

Thanks for the response and for the concern. To be clear, the purpose of this post was to explore how much a typical, small AI project would affect AI timelines and AI risk in expectation. It was not intended as a response to the ML engineer, and as such I did not send it or any of its contents to him, nor comment on the quoted thread. I understand how inappropriate it would be to reply to the engineer's polite acknowledgment of the concerns with my long analysis of how many additional people will die in expectation due to the project accel

... (read more)

I only play-tested it once (in-person with three people with one laptop plus one phone editing the spreadsheet) and the most annoying aspect of my implementation of it was having to record one's forecasts in a spreadsheet from a phone. If everyone had a laptop or their own device it'd be easier. But I made the spreadsheet to handle games (or teams?) of up to 8 people, so I think it could work well for that.

I don't operate with this mindset frequently, but thinking back to some of the highest impact things I've done I'm realizing now that I did those things because I had this attitude. So I'm inclined to think it's good advice.

I love Wits & Wagers! You might be interested in Wits & Calibration, a variant I made during the pandemic in which players forecast the probability that each numeric range is 'correct' (closest to the true answer without being greater than it) rather than bet on the range that is most probable (as in the Party Edition) or highest EV given payout-ratios (regular Wits & Wagers). The spreadsheet I made auto-calculates all scores, so players need only enter their forecasts and check a box next to the correct answer.

I created the variant because I t... (read more)
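For anyone curious how a round of the variant might be scored, here is a minimal sketch. The comment doesn't describe the spreadsheet's actual scoring rule, so this assumes a simple Brier-style (quadratic) rule, and the example answer bins are hypothetical.

```python
# Minimal sketch of scoring one Wits & Calibration question.
# The spreadsheet's real scoring rule isn't described above; this assumes a
# Brier-style (quadratic) rule, and the example bins are hypothetical.

def correct_range(range_lows, true_answer):
    """The 'correct' range is the one closest to the true answer without exceeding it."""
    eligible = [low for low in range_lows if low <= true_answer]
    return max(eligible) if eligible else None

def brier_score(forecast_probs, range_lows, true_answer):
    """Lower is better: squared error of each probability vs. the 0/1 outcome."""
    winner = correct_range(range_lows, true_answer)
    return sum((p - (1.0 if low == winner else 0.0)) ** 2
               for low, p in zip(range_lows, forecast_probs))

range_lows = [0, 10, 25, 50, 100]              # lower bounds of the answer bins
player_probs = [0.05, 0.15, 0.50, 0.25, 0.05]  # one player's forecast (sums to 1)
print(correct_range(range_lows, 42))           # 25
print(round(brier_score(player_probs, range_lows, 42), 3))  # 0.34
```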

3
JohnW
1y
That looks really cool, thanks for sharing! Do you think it would work well in a large group setting? It seems like a good halfway-house between the standard Wits&Wagers and a forecasting tournament.

I second this.

FWIW I read from the beginning through What actually is "value-alignment"? then decided it wasn't worth reading further and just skimmed a few more points and the conclusion section. I then read some comments.

IMO the parts of the post I did read weren't worth reading for me, and I doubt they're worth reading for most other Forum users either. (I strong-downvoted the post to reflect this, though I'm late to the party, so my vote probably won't have the same effect on readership as it would have if I had voted on it 13 days ago.)

Hi Devon, FWIW I agree with John Halstead and Michael PJ re John's point 1.

If you're open to considering this question further, you may be interested in knowing my reasoning (note that I arrived at this opinion independently of John and Michael), which I share below.

Last November I commented on Tyler Cowen's post to explain why I disagreed with his point:

I don't find Tyler's point very persuasive: Despite the fact that the common sense interpretation of the phrase "existential risk" makes it applicable to the sudden downfall of FTX, in actuality I think fo

... (read more)

Here's your updated list: https://forum.effectivealtruism.org/posts/SQBYHEWBTB2krA9kk/what-we-owe-the-future-updated-media-list

I'd recommend editing this post with a link to the updated post at the top of it.

Great list! 5 of the 6 "Other Items" are YouTube videos.

Thanks for sharing!

Forewarning: I have not read your post (yet).

I argue that moral offsetting is not inherently immoral

(I'm probably just responding to a literal interpretation of what you wrote rather than the intended meaning, but just in case and to provide clarity:) I'm not aware of anyone who argues that offsetting itself is immoral (though EAs have pointed out Ethical offsetting is antithetical to EA).

Rather, the claim that I've seen some people make is that (some subset of) the  actions that would normally be impermissible (like buying factory farmed animal produ... (read more)

1
Ariel Pontes
1y
As you imagined, the blog post does respond to your argument. If you don't think the response is satisfactory, I'd be curious to hear your thoughts :)

To add, I and some other EAs were recently recruited to an INFER Forecasting Tournament by Manuel Carranza, a pro-forecaster on INFER from Mexico City, which I thought was cool. (His EA Forum profile)

5
[anonymous]
1y
HAHAHAAH Thank you, Will! 😊😊😊😊😊

The main downside to everyone strong-upvoting themselves by default, in my view, is that it punishes new users (or those with lower karma and thus weaker strong-upvotes) too much. Maybe this isn't that important of a factor?

5
RyanCarey
1y
To me, that sounds like a feature, not a bug, given how the influx of users has degraded average post quality recently.

As to whether voting on overall karma for one's own comment should be eliminated, I would prefer deactivating voting over a default strong-upvote; however, a third option that I think might be better would be to default to a normal upvote and disable strong-upvoting on one's own comments.

A fourth option (that I think I'd prefer the most) would be to retain the ability to strong upvote one's own comments while making the default for everyone normal-upvote or no-upvote (to preserve the ability to self-boost unusually important comments). Some other mechanism would be n... (read more)

2
Habryka
1y
I think the key problem, both for upvoting and agreement-voting, is that it hurts much more to have your comments in the negatives than it feels good to have your comments in the positives (and indeed, whenever I see a negative number, it feels really harsh and it does give me a sense that the community overall disapproves or disagrees with the content). I think usually when a discussion is heated, I prefer the equilibrium where the two primary discussion partners have votes that cancel each other out, instead of an equilibrium where just all the comments are in the negatives. This includes the case where the person you are responding to is strong-downvoting your comment, and then I think it can make sense to strong-upvote your comment, in order to not give the false impression that there is a consensus against your comment. I don't currently know a good way to handle this. I also dislike the recent change to disagreement-voting for that reason, and would prefer a world where we also make agreement-votes automatically self-apply, since my brain definitely parses a discussion with everything in the negatives on agreement voting as "there is consensus against this" as opposed to "there are two people disagreeing".
2
RyanCarey
1y
The third proposal seems fine to me, but the fourth is complex, and still rewards users who strong-upvote their own comments as much as the rules allow.

I strongly agree about eliminating the ability to agree/disagree-vote on one's own comment. I expect everyone to agree with what they write by default unless e.g. they say they're playing devil's advocate. Giving people the option to agree-vote on their own comment just adds unnecessary uncertainty by making it so people can't tell if an agreement vote on a comment is coming from the author or another user.

Perhaps it's not clear whether adding agreement karma to posts is positive on net, but I think it would be worth adding for a month as an experiment.

A counter-consideration is that many voters on the Forum may still not understand the difference between overall karma and agreement karma. Inconclusive weak evidence: this answer got 3 overall karma with 22 votes (at some point it was negative) and 18 agreement karma with 20 votes:

(It's inconclusive evidence because while the regular karma downvotes surprised me, people could have had legitimate reaso... (read more)

1
Pato
1y
I agree that maybe people don't get it (like kinda me) but I think both things, posts and comments, should have it or neither.

Add Agreement Karma to posts.

This comment suggesting this feature got 32 Agreement with 9 votes:

2
WilliamKiely
1y
Perhaps it's not clear whether adding agreement karma to posts is positive on net, but I think it would be worth adding for a month as an experiment. A counter-consideration is that many voters on the Forum may still not understand the difference between overall karma and agreement karma. Inconclusive weak evidence: this answer got 3 overall karma with 22 votes (at some point it was negative) and 18 agreement karma with 20 votes: (It's inconclusive evidence because while the regular karma downvotes surprised me, people could have had legitimate reasons for not liking the meta-answer and downvoting it. My suspicion though is that at least some people downvoted this in an attempt to "Disagree" vote in the poll.)

Then I would have read it more as a friendly "I'm new to this and sceptical and X and Y - what's going on with those?" and less as a "I'm sceptical, you clearly have no idea what you're talking about"

Ah, I'm really sorry I didn't clarify this!

For the record, you're clearly an expert on WELLBYs and I'm quite new to thinking about them.

My initial exposure to HLI's WELLBY approach to evaluating interventions was the post Measuring Good Better and this post is only my second time reading about WELLBYs. I also know very little about subjective wellbeing surveys... (read more)

Here are two lists:

Additionally you might look at which orgs/people the Survival and Flourishing Fund has granted money to (I'm not sure if the SFF itself accepts donations), and consider individuals without nonprofit status that need funding, as they may be especially negle... (read more)

1
callum
1y
Brilliant, thanks!

Thank you very much for taking the time to write this detailed reply, Michael! I haven't read the To WELLBY or not to WELLBY? post, but definitely want to check that out to understand this all better.

I also want to apologize for my language sounding overly critical/harsh in my previous comment. E.g. Making my first sentence "This post didn't address my concerns related to using WELLBYs..." when I knew full well that wasn't what the post was intending to address was very unfair of me.

I know you've put a lot of work into researching the WELLBY approach and a... (read more)

1
MichaelPlant
1y
Hello William, Thanks for saying that. Yeah, I couldn't really understand where you were coming from (and honestly ended up spending 2+ hours drafting a reply). On reflection, we should probably have done more WELLBY-related referencing in the post, but we were trying to keep the academic side light. In fact, we probably need to recombine our various scratchings on the WELLBY and put them onto a single page on our website - it's been a lower priority than doing the object-level charity analysis work. If you're doing the independent impression thing again, then, as a recipient, it would have been really helpful to know that. Then I would have read it more as a friendly "I'm new to this and sceptical and X and Y - what's going on with those?" and less as a "I'm sceptical, you clearly have no idea what you're talking about" (which was more-or-less how I initially interpreted it... :) )

There's "longtermism" as the group of people who talk a lot about x-risk, AI safety and pandemics because they hold some weird beliefs here

Interesting--When I think of the group of people "longtermists" I think of the set of people who subscribe to (and self-identify with) some moral view that's basically "longtermism," not people who work on reducing existential risks. While there's a big overlap between these two sets of people, I think referring to e.g. people who reject caring about future people as "longtermists" is pretty absurd, even if such people ... (read more)

3
Neel Nanda
1y
Yeah, this feels like the crux; my read is that "longtermist EA" is a term used to encompass "holy shit, x-risk" EA too.

119 'Going', 685 'Interested' on the Facebook RSVPs, nice!

Could you clarify what the "We’ll also hear from our community members on where they donate and why!" part consists of during the main event?

Specifically, I see that there's more opportunity to talk about this topic in the Gathertown event after the main event, but I'm curious if event attendees will get an opportunity to share where they donated and why during the main event, or if the content on this during the main event is going to consist of something pre-planned from already selected-members o... (read more)

3
Giving What We Can
1y
Also, between FB, LinkedIn and the EA Forum, ~1,000 people have responded to the event!
3
Giving What We Can
1y
We already have a bunch of community members involved in the main event (they'll be sharing about their giving onscreen) but there will be a chance for everyone to discuss during the gathertown event afterwards! We'll also be prompting people to let us know in the chat during the YouTube event!

Thanks for finding and sharing that quote. I agree that it doesn't fully entail Matt's claim, and would go further to say that it provides evidence against Matt's claim.

In particular, SBF's statement...

At what point are you out of ways for the world to spend money to change? [...] [I]t’s unclear exactly what the answer is, but it’s at least billions per year probably, so at least 100 billion overall before you risk running out of good things to do with money.

... makes clear that SBF was not completely risk neutral.

At the end of the excerpt Rob says "So you... (read more)

Thanks for the reply, Neel.

First I should note that I wrote my previous comment on my phone in the middle of the night when I should have been asleep long before, so I wasn't thinking fully about how others would interpret my words. Seeing the reaction to it, I see that the comment didn't add value as written and I probably should have just waited to write it later, when I could unambiguously communicate what bothered me about it at length (as I do in this comment).

To clarify, I agree with you and Yglesias that most longtermists are working on things like pre... (read more)

4
Neel Nanda
1y
No worries! I appreciate the context and totally relate :) (and relate with the desire to write a lot of things to clear up a confusion!) For your general point, I would guess this is mostly a semantic/namespace collision thing? There's "longtermism" as the group of people who talk a lot about x-risk, AI safety and pandemics because they hold some weird beliefs here, and there's longtermism as the moral philosophy that future people matter a lot. I saw Matt's point as saying that the "longtermism" group doesn't actually need to have much to do with the longtermism philosophy, and that thus it's weird that they call themselves longtermists. Because they are basically the only people working on AI X-risk and thus are the group associated with that worldview, and try hard to promote it. Even though this is really an empirical belief and not much to do with their longtermism. I mostly didn't see his post as an attack or comment on the philosophical movement of longtermism. But yeah, overall I would guess that we mostly just agree here?

That was my reaction. Also I had assumed that John had probably sent this post to the Bulletin and that it would help him get the desired retraction/apology if this post had more karma, so I was tempted to upvote the post to help with that.

(But despite the temptation I originally abstained from voting due to not wanting to promote more Torres-related content, then strong-downvoted after reading Neel's comment and seeing another front-page post responding to (IMO problematic) journalism (Rob Wiblin's post responding to Matt Yglesias's post re SBF and risk neutrality) that also wasn't the sort of content I want to fill up the Forum.)

I didn't disagreement-karma your comment, but do want to note that I think it would likely help to at least partially solve the problem.

E.g. (Largely due to your original comment, but also in part due to feeling similarly to you independently first) I strong-downvoted the OP despite strongly agreeing with it and feeling very grateful to John for doing such a thorough job dealing with and responding to Torres and bad journalism related to EA.

I don't always downvote in cases like this--I usually just abstain from voting--but if there was an agreement button... (read more)

2
Neel Nanda
1y
Actually that's a fair point, I somewhat retract my above comment. I think that in general, if I agree vote a comment I also up vote it. But I do vibe with the idea that I'd be more comfortable downvoting posts like this if I could also agree vote.

Also in the Yglesias post Rob wrote the OP in response to, Yglesias misrepresents SBF's view then cites the 80k podcast as supporting this mistaken view when in fact it does not. That's just bad journalism.

Until very recently, for example, I thought I had an unpublishable, off-the-record scoop about his weird idea that someone with his level of wealth should be indifferent between the status quo and a double-or-nothing bet with 50:50 odds.

There's no way that is or ever has been SBF's view. I don't buy it and think Yglesias is just misrepresenting SBF's... (read more)

5
davidc
1y
It still doesn't fully entail Matt's claim, but the content of the interview gets a lot closer than that description. You don't need to give it a full listen, I've quoted the relevant part: https://forum.effectivealtruism.org/posts/THgezaPxhvoizkRFy/clarifications-on-diminishing-returns-and-risk-aversion-in?commentId=ppyzWLuhkuRJCifsx

I just went down a medium-sized rabbit hole of Matthew Yglesias's Substack posts related to EA/longtermism and have to say I'm extremely disappointed by the quality of his posts.

I can't comment on them directly to give him feedback because I'm not a subscriber, so I'm sharing my reaction here instead.

e.g. This one has a clickbait title and doesn't answer the question in the post, nor argue that the titular question assumes a false premise, which makes the post super annoying: https://www.slowboring.com/p/whats-long-term-about-longtermism

But after reading Will MacAskill’s book “What We Owe The Future” and the surge of media coverage it generated, I think I’ve talked myself into my own corner of semi-confusion over the use of the name “longtermist” to describe concerns related to advances in artificial intelligence. Because at the end of the day, the people who work in this field and who call themselves “longtermists” don’t seem to be motivated by any particularly unusual ideas about the long term. And it’s actually quite confusing to portray (as I have previously) their main message in te

... (read more)

Perhaps posts should have agreement karma like comments do, so we can signal that we agree with John's post without making it more prominent on the Forum (which as you said is generally a waste of EAs' attention).

2
Neel Nanda
1y
I would be pro this! Though in practice I expect this to not solve the problem - I think the standard reaction is to feel outraged/righteously indignant and upvote this kind of post in a show of support/solidarity.

Fair enough. I agree that the current title feeling a bit adversarial is only a minor cost.

I've realized that my main reason for not liking the title is that the post doesn't address my concerns about the WELLBY approach, so I don't feel like the post justifies the title's recommendation to "give WELLBYs" rather than "give well" (whether that means GiveWell or give well on some other basis).

On a meta-note, I'm reluctant to down-vote Julian's top comment (I certainly wouldn't want it to have negative karma), but it is a bit annoying that the (now-lengthy) t... (read more)
