What are your personal thoughts on Manifold vs Metaculus in terms of...
I) Time spent
II) Joy of use
III) Value of information gained
Great listen, I enjoyed this a lot!
Kudos to Luisa, who does a really good job of acting as a "Watson", asking the follow-up questions that listeners might have. Several times in this podcast I was happy with her summaries or clarifying questions, even though I suspect she often already knew the answers.
I would be surprised if the effect from the lack of a pledge drive ran on into February and March 2023, though. The YoY comparison here is against 12 months earlier, i.e., Jan 2023 vs. Jan 2022, etc.
Emm, sorry, what? Out of 8,000 GWWC pledgers, who have pledged to give at least 10%, very few earn $1M?
This is a great post!
I assume that you are, but better safe than sorry: are you discussing this with Chris Lloyd at Good Impressions, who's currently "investigating whether paid ads can be an effective fundraising tool" for EA organizations?
Thank you, Eda, for posting this. This must be a horrible situation to be in, and I am so sorry for the losses and suffering.
Could you please give more pointers on why these organizations were chosen? While you can't vouch for their effectiveness, I take it you're very comfortable that they are doing relevant work and have a solid track record of similar activity? (To be extra clear, this is not criticism, just an attempt to understand the extent of the efforts.)
At Ge Effektivt (a Swedish effective-donations platform) we wrote a blog post about this, partly because we get questions from donors about how to approach the current crisis, but also for SEO purposes and to help more people discover EA/effective charities. We mentioned some organizations we were comfortable naming, but as I've also seen Ahbap recommended elsewhere, I'd be happy to extend or replace the charities we currently name.
Best of luck in the fundraising efforts!
I listened to it while doing other stuff, so this might not be 100% accurate.
To my understanding, Tegmark appears for 10 minutes, giving a standard AI-risk spiel. I think the angle relevant to the podcast is the risk of power being concentrated in the hands of a few, hence some accusations of big tech capturing AI conferences, etc.
There's a short segue about COVID where Tegmark says he felt the discussion was so toxic that he couldn't talk about it openly in some work environments for fear of repercussions.
As a Swede who is somewhat familiar with the publication Expo, I would put the risk of that document being a forgery at <5%. They are specifically known for their investigative journalism, and I would be very surprised if they screwed up something that basic.
Also, wouldn't FLI's behavior be extremely strange if that document actually were a forgery? Claiming forgery would be the go-to defense, rather than what they are doing now.
I agree with this; there's both a communication issue and a memory-hogging issue with each new Slack workspace you bring in.
So many conversations you're in include a "Yeah, I think I'm in that Slack workspace, not sure", since a few of them look alike.
That aside, I applaud the creation and hope to contribute.
Thanks for sharing this, Linch. I found it a useful complement to the marginal grant thresholds post, which I recommend to those who enjoyed this one.
Thanks, Joel, for your thoughtful comment, which I'd like to build on.
I was thinking about how we can get funders to make calculated bets on projects that have been discarded elsewhere, and get rewarded when they're proven right. Isn't AI Safety Impact Markets trying to solve some of the adverse-selection issues through that kind of mechanism? Sorry for the lack of depth, but I think others can weigh in better.