I'm curious whether you, or other past participants you know who had a good experience with AISC, are in a position to help fill AISC's current funding gap. Even if you (collectively) can't fully fund the gap, substantial alumni support would be a pretty strong signal that AISC is worth funding. Or, if you donate but prefer other giving opportunities instead (whether in AIS or other cause areas), I'd find that valuable to know too.
But on the other hand, I regularly meet alumni who tell me how useful AISC was for them, which convinces me AISC is clearly very net positive.
Naive question, but does AISC have enough such past alumni that you could meet your current funding need by asking them for support? It seems like they'd be in the best position to evaluate the program and know whether it's worth funding.
Nevertheless, AISC is probably about ~50x cheaper than MATS
~50x is a big difference, and I notice the post says:
We commissioned Arb Research to do an impact assessment.
One preliminary result is that AISC creates one new AI safety researcher per around $12k-$30k USD of funding.
Multiplying that number (which I'm agnostic about) by 50 gives $600k-$1.5M USD. Does your ~50x still seem accurate in light of this?
I'm guessing that what Marius means by "AISC is probably about ~50x cheaper than MATS" is that AISC is probably ~50x cheaper per participant than MATS.
Our cost per participant is $0.6k - $3k USD
50 times this would be $30k - $150k per participant.
I'm guessing that MATS costs around $50k per person (including stipends).
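To make the comparison explicit, here's a minimal back-of-the-envelope sketch (assuming the AISC figures quoted above and my rough $50k guess for MATS; nothing here comes from either program's actual budget):

```python
# Back-of-the-envelope check of the "~50x cheaper" claim (all USD).
# AISC figures are as quoted above; the MATS figure is my rough guess.
aisc_cost_per_participant = (600, 3_000)  # "$0.6k - $3k USD"
mats_cost_per_person_guess = 50_000       # including stipends (assumption)

low = mats_cost_per_person_guess / aisc_cost_per_participant[1]
high = mats_cost_per_person_guess / aisc_cost_per_participant[0]
print(f"MATS/AISC cost ratio per participant: {low:.0f}x - {high:.0f}x")
# -> roughly 17x - 83x, a range that brackets the ~50x figure
```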
Here's where the $12k-$30k USD comes from:
...Dollar cost per new researcher produced by AISC
- The organizers have proposed $60–300K per year in expenses.
- The number of non-RL participants of programs has increased from 32 (AISC4) to 130...
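For what it's worth, here's a minimal sketch of how a figure in that range could fall out of those expense numbers. Note that the new-researcher counts below are purely my own illustrative assumptions, not numbers from Arb's assessment:

```python
# Illustrative only: how "$12k-$30k per new researcher" could arise from
# "$60-300K per year in expenses". Researcher counts are my assumptions.
low_expenses, high_expenses = 60_000, 300_000      # quoted annual expense range (USD)
low_new_researchers, high_new_researchers = 5, 10  # hypothetical new researchers/year

print(low_expenses / low_new_researchers)    # $12,000 per new researcher
print(high_expenses / high_new_researchers)  # $30,000 per new researcher
```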
I'm a big fan of OpenPhil/GiveWell popularizing longtermist-relevant facts by sponsoring popular YouTube channels like Kurzgesagt (21M subscribers). That said, I just watched two of their videos; I found a mistake in one[1] and took issue with the script-writing in the other (not sure how best to give feedback -- do I need to become a Patreon supporter or something?):
Why Aliens Might Already Be On Their Way To Us
My comment:
...9:40 "If we really are early, we have an incredible opportunity to mold *thousands* or *even millions* of planets according to ou
I also had a similar experience: I made my first substantial donation before learning that non-employer counterfactual donation matches existed.
It's the only donation I've regretted, since by delaying it six months I could have doubled the amount of money I directed to the charity at no extra cost to myself.
Great point, thanks for sharing!
While I assume that all long-time EAs learn that employer donation matching is a thing, we'd do well as a community to ensure that everyone learns about it before donating a substantial amount of money, and clearly that's not the case now.
Reminds me of this insightful XKCD: https://xkcd.com/1053/
For each thing 'everyone knows' by the time they're adults, every day there are, on average, 10,000 people in the US hearing about it for the first time.
Thanks for sharing about your experience.
I see 4 people said they agreed with the post and 3 disagreed, so I thought I'd share my thoughts on this. (I was the 5th person to give the post Agreement Karma, which I endorse with some nuance added below.)
I've considered going on a long hike before, and like you, I believed the main consideration against doing so was the opportunity cost to my career and my pursuit of altruistic impact.
It seemed to me that clearly there was something else I could do that would be better for my career and altruistic impact ...
I'll also add that I didn't like the subtitle of the video: "A case for optimism".
A lot of popular takes on futurism topics seem to me to focus on being optimistic or pessimistic, but whether one is optimistic or pessimistic about something doesn't seem like the sort of thing one should argue for. It seems a little like writing the bottom line first.
Rather, people should attempt to figure out what the actual probabilities of different futures are and how we are able to influence the future to make certain futures more or less probable. From there it's just...
I've been a fan of melodysheep since discovering his Symphony of Science series about 12 years ago.
Some thoughts as I watch:
- Toby Ord's The Precipice and his ~16 percent estimate of existential catastrophe (in the next century) are cited directly
- The first part of the script seems heavily inspired by Will MacAskill's What We Owe the Future
- In particular, there is a strong focus on non-extinction, non-existentially-catastrophic civilization collapse, just like in WWOTF
- 12:40 "But extinction in the long-term is nothing to fear. No species survives forever. ...
That is, I wasn’t viscerally worried. I had the concepts. But I didn’t have the “actually” part.
For me, having a concrete picture of the mechanism by which AI could actually kill everyone never felt necessary for viscerally believing that AI could kill everyone.
And I think this is because ever since I was a kid, long before hearing about AI risk or EA, the long-term future that seemed most intuitive to me was a future without humans (or post-humans).
The idea that humanity would go on to live forever and colonize the galaxy and the universe and l...
Thinking out loud about credences and PDFs for credences (is there a name for these?):
I don't think "highly confident people bare the burden of proof" is a correct way of saying my thought necessarily, but I'm trying to point at this idea that when two people disagree on X (e.g. 0.3% vs 30% credences), there's an asymmetry in which the person who is more confident (i.e. 0.3% in this case) is necessarily highly confident that the person they disagree with is wrong, whereas the the person who is less confident (30% credence person) is not necessarily highly ...
I just got notified that my December 7th test donation was matched. This is extremely unexpected to me, and leads me to believe I got my forecast wrong and that the EA community actually could have gotten ~$1M matched this year with the donation trade scheme I had in mind.
I'm not sure. I think you're the first person I've heard of who got matched. When I asked in the EA Facebook group for this on December 15th whether anyone got matched, all three people who responded (including myself) reported that they were double-charged for their December 15th donations. Initially we assumed the second receipt was a match, but then we saw that Facebook had actually just charged us twice. I haven't heard anything else about the match since then and had just assumed I didn't get matched.
Neat! The cover jacket could use a graphic designer, in my opinion. Also, it's slotted under engineering? Am I missing something?
Throughout the story I was wondering why Larry was advocating for this at a town meeting rather than finding someone to help turn his idea into a reality (like a Sarah Fletcher or an entrepreneurial friend), so I'm glad that was the punchline.
I felt a [...] profound sense of sadness at the thought of 100,000 chickens essentially being a rounding error compared to the overall size of the factory farming industry.
Yes, about 9 billion chickens are killed each year in the US alone, or about 1 million per hour. So 100,000 chickens are killed every 6 minutes in the US (and every 45 seconds globally). Still, it's a huge tragedy.
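A quick check of that arithmetic (the ~70 billion/year global figure is my own assumption, since only the US number is quoted above):

```python
# Sanity-checking the slaughter-rate arithmetic. The US figure is as
# quoted; the global figure is my rough assumption.
us_per_year = 9e9
global_per_year = 70e9  # assumption, not from the comment above

hours_per_year = 365.25 * 24
print(us_per_year / hours_per_year)  # ~1.0 million/hour in the US

us_per_minute = us_per_year / (hours_per_year * 60)
print(100_000 / us_per_minute)       # ~5.8 minutes per 100k chickens, US

global_per_second = global_per_year / (hours_per_year * 3600)
print(100_000 / global_per_second)   # ~45 seconds per 100k chickens, global
```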
This is a great point, thanks. Part of me thinks basically any work that increases AI capabilities probably accelerates AI timelines. But it seems plausible to me that advancing the frontier of research accelerates AI timelines much more than other work that merely increases AI capabilities, and that most of this frontier work is done at major AI labs.
If that's the case, then I think you're right that my using a prior for the average project to judge this specific project (as I did in the post) is not informative.
It would also mean we could tell a story ab...
...Thanks for the response and for the concern. To be clear, the purpose of this post was to explore how much a typical, small AI project would affect AI timelines and AI risk in expectation. It was not intended as a response to the ML engineer, and as such I did not send it or any of its contents to him, nor comment on the quoted thread. I understand how inappropriate it would be to reply to the engineer's polite acknowledgment of the concerns with my long analysis of how many additional people will die in expectation due to the project accel
I only play-tested it once (in person, with three people sharing one laptop plus one phone to edit the spreadsheet), and the most annoying aspect of my implementation was having to record one's forecasts in the spreadsheet from a phone. If everyone had a laptop or their own device, it'd be easier. But I made the spreadsheet to handle games (or teams?) of up to 8 people, so I think it could work well for that.
I don't operate with this mindset frequently, but thinking back to some of the highest impact things I've done I'm realizing now that I did those things because I had this attitude. So I'm inclined to think it's good advice.
I love Wits & Wagers! You might be interested in Wits & Calibration, a variant I made during the pandemic in which players forecast the probability that each numeric range is 'correct' (closest to the true answer without being greater than it) rather than bet on the range that is most probable (as in the Party Edition) or highest EV given payout-ratios (regular Wits & Wagers). The spreadsheet I made auto-calculates all scores, so players need only enter their forecasts and check a box next to the correct answer.
I created the variant because I t...
I second this.
FWIW I read from the beginning through What actually is "value-alignment"? then decided it wasn't worth reading further and just skimmed a few more points and the conclusion section. I then read some comments.
IMO the parts of the post I did read weren't worth reading for me, and I doubt they're worth reading for most other Forum users as well. (I strong-downvoted the post to reflect this, though I'm late to the party, so my vote probably won't have the same effect on readership as it would have if I had voted on it 13 days ago).
Hi Devon, FWIW I agree with John Halstead and Michael PJ re John's point 1.
If you're open to considering this question further, you may be interested in knowing my reasoning (note that I arrived at this opinion independently of John and Michael), which I share below.
Last November I commented on Tyler Cowen's post to explain why I disagreed with his point:
...I don't find Tyler's point very persuasive: Despite the fact that the common sense interpretation of the phrase "existential risk" makes it applicable to the sudden downfall of FTX, in actuality I think fo
Here's your updated list: https://forum.effectivealtruism.org/posts/SQBYHEWBTB2krA9kk/what-we-owe-the-future-updated-media-list
I'd recommend editing this post with a link to the updated post at the top of it.
Forewarning: I have not read your post (yet).
I argue that moral offsetting is not inherently immoral
(I'm probably just responding to a literal interpretation of what you wrote rather than the intended meaning, but just in case and to provide clarity:) I'm not aware of anyone who argues that offsetting itself is immoral (though EAs have pointed out Ethical offsetting is antithetical to EA).
Rather, the claim that I've seen some people make is that (some subset of) the actions that would normally be impermissible (like buying factory farmed animal produ...
To add, I and some other EAs were recently recruited to an INFER Forecasting Tournament by Manuel Carranza, a pro-forecaster on INFER from Mexico City, which I thought was cool. (His EA Forum profile)
The main downside to everyone strong-upvoting themselves by default, in my view, is that it punishes new users (or those with lower karma and thus weaker strong-upvotes) too much. Maybe this isn't that important a factor?
As to whether voting on overall karma for one's own comment should be eliminated, I would prefer deactivating voting over a default strong-upvote; however, a third option that I think might be better would be to default to a normal upvote and disable strong-upvoting one's own comment.
A fourth option (which I think I'd prefer most) would be to retain the ability to strong-upvote one's own comments while making the default for everyone a normal upvote or no upvote (to preserve the ability to self-boost unusually important comments). Some other mechanism would be n...
I strongly agree about eliminating the ability to agree/disagree-vote on one's own comment. I expect everyone to agree with what they write by default unless e.g. they say they're playing devil's advocate. Giving people the option to agree-vote on their own comment just adds unnecessary uncertainty, since people can't tell whether an agreement vote on a comment is coming from the author or another user.
It's not clear whether adding agreement karma to posts would be positive on net, but I think it would be worth adding for a month as an experiment.
A counter-consideration is that many voters on the Forum may still not understand the difference between overall karma and agreement karma. Inconclusive weak evidence: this answer got 3 overall karma from 22 votes (at one point it was negative) and 18 agreement karma from 20 votes:
(It's inconclusive evidence because, while the regular karma downvotes surprised me, people could have had legitimate reaso...
Add Agreement Karma to posts.
This comment suggesting this feature got 32 Agreement with 9 votes:
Then I would have read it more as a friendly "I'm new to this and sceptical about X and Y - what's going on with those?" and less as a "I'm sceptical; you clearly have no idea what you're talking about."
Ah, I'm really sorry I didn't clarify this!
For the record, you're clearly an expert on WELLBYs and I'm quite new to thinking about them.
My initial exposure to HLI's WELLBY approach to evaluating interventions was the post Measuring Good Better and this post is only my second time reading about WELLBYs. I also know very little about subjective wellbeing surveys...
Here are two lists:
Additionally you might look at which orgs/people the Survival and Flourishing Fund has granted money to (I'm not sure if the SFF itself accepts donations), and consider individuals without nonprofit status that need funding, as they may be especially negle...
Thank you very much for taking the time to write this detailed reply, Michael! I haven't read the To WELLBY or not to WELLBY? post, but definitely want to check that out to understand this all better.
I also want to apologize for my language sounding overly critical/harsh in my previous comment. E.g. making my first sentence "This post didn't address my concerns related to using WELLBYs...", when I knew full well that wasn't what the post was intending to address, was very unfair of me.
I know you've put a lot of work into researching the WELLBY approach and a...
There's "longtermism" as the group of people who talk a lot about x-risk, AI safety and pandemics because they hold some weird beliefs here
Interesting -- when I think of "longtermists" as a group, I think of the set of people who subscribe to (and self-identify with) some moral view that's basically "longtermism," not people who work on reducing existential risks. While there's a big overlap between these two sets of people, I think referring to e.g. people who reject caring about future people as "longtermists" is pretty absurd, even if such people ...
119 'Going', 685 'Interested' on the Facebook RSVPs, nice!
Could you clarify what the "We’ll also hear from our community members on where they donate and why!" part consists of during the main event?
Specifically, I see that there's more opportunity to talk about this topic in the Gathertown event after the main event, but I'm curious whether event attendees will get an opportunity to share where they donated and why during the main event, or whether the content on this during the main event will consist of something pre-planned from already-selected members o...
Thanks for finding and sharing that quote. I agree that it doesn't fully entail Matt's claim, and would go further to say that it provides evidence against Matt's claim.
In particular, SBF's statement...
At what point are you out of ways for the world to spend money to change? [...] [I]t’s unclear exactly what the answer is, but it’s at least billions per year probably, so at least 100 billion overall before you risk running out of good things to do with money.
... makes clear that SBF was not completely risk neutral.
At the end of the excerpt Rob says "So you...
Thanks for the reply, Neel.
First, I should note that I wrote my previous comment on my phone in the middle of the night, when I should have been asleep long before, so I wasn't thinking fully about how others would interpret my words. Seeing the reaction to it, I see that the comment didn't add value as written, and I probably should have just waited to write it later, when I could unambiguously communicate what bothered me about it at length (as I do in this comment).
To clarify, I agree with you and Yglesias that most longtermists are working on things like pre...
That was my reaction too. Also, I had assumed that John had probably sent this post to the Bulletin and that more karma would help him get the desired retraction/apology, so I was tempted to upvote the post to support that.
(But despite the temptation, I originally abstained from voting because I didn't want to promote more Torres-related content, then strong-downvoted after reading Neel's comment and seeing another front-page post responding to (IMO problematic) journalism (Rob Wiblin's post responding to Matt Yglesias' post re SBF and risk neutrality) that also wasn't the sort of content I want to fill up the Forum.)
I didn't disagreement-karma your comment, but do want to note that I think it would likely help to at least partially solve the problem.
E.g., largely due to your original comment (but also in part because I independently felt similarly first), I strong-downvoted the OP despite strongly agreeing with it and feeling very grateful to John for doing such a thorough job dealing with and responding to Torres and bad journalism related to EA.
I don't always downvote in cases like this -- I usually just abstain from voting -- but if there was an agreement button...
Also, in the Yglesias post that Rob wrote the OP in response to, Yglesias misrepresents SBF's view and then cites the 80k podcast as supporting this mistaken view, when in fact it does not. That's just bad journalism.
Until very recently, for example, I thought I had an unpublishable, off-the-record scoop about his weird idea that someone with his level of wealth should be indifferent between the status quo and a double-or-nothing bet with 50:50 odds.
There's no way that is or ever has been SBF's view. I don't buy it and think Yglesias is just misrepresenting SBF's...
I just went down a medium-sized rabbit hole of Matthew Yglesias's EA/longtermism-related Substack posts, and I have to say I'm extremely disappointed by their quality.
I can't comment on them directly to give him feedback because I'm not a subscriber, so I'm sharing my reaction here instead.
E.g. this one has a clickbait title and neither answers the titular question nor argues that it assumes a false premise, which makes the post super annoying: https://www.slowboring.com/p/whats-long-term-about-longtermism
...But after reading Will MacAskill’s book “What We Owe The Future” and the surge of media coverage it generated, I think I’ve talked myself into my own corner of semi-confusion over the use of the name “longtermist” to describe concerns related to advances in artificial intelligence. Because at the end of the day, the people who work in this field and who call themselves “longtermists” don’t seem to be motivated by any particularly unusual ideas about the long term. And it’s actually quite confusing to portray (as I have previously) their main message in te
Perhaps posts should have agreement karma like comments do, so we can signal that we agree with John's post without making it more prominent on the Forum (which as you said is generally a waste of EAs' attention).
Fair enough. I agree that the current title feeling a bit adversarial is only a minor cost.
I've realized that my main reason for not liking the title is that the post doesn't address my concerns about the WELLBY approach, so I don't feel like the post justifies the title's recommendation to "give WELLBYs" rather than "give well" (whether that means GiveWell or give well on some other basis).
On a meta-note, I'm reluctant to down-vote Julian's top comment (I certainly wouldn't want it to have negative karma), but it is a bit annoying that the (now-lengthy) t...
This is horrifying! A friend of the author just shared this along with a just-published Business Insider article that links to this post:
https://www.businessinsider.com/dangerous-surgery-stop-blushing-side-effects-ruined-life-no-emotions-2024-2