All of iporphyry's Comments + Replies

Some possible bugs: 

*When I click on the "listen online" option it seems broken (using this on a mac)

*When I click on the "AGI safety fundamentals" courses as podcasts, they take me to the "EA forum curated and popular" podcast. Not sure if this is intentional, or if they're meant to point to a podcast containing just the course

2
peterhartree
10mo
Thanks! Now fixed.

Certainly not deliberately. I'll try to read it more carefully and update my comment.

8
Linch
2y
Thanks. I've retracted my comment since I think it's too harsh. <3

Fair enough. I admit that I skimmed the post quickly, for which I apologize, and part of this was certainly a knee-jerk reaction to even considering Leverage as a serious intellectual project rather than a total failure as such, which is not entirely fair.  But I think maybe a version of this post I would significantly prefer would first explain your interest in Leverage specifically: that while they are a particularly egregious failure of the closed-research genre, it's interesting to understand exactly how they failed and how the idea of a fast, les... (read more)

Edit: I mostly retract this comment. I skimmed and didn't read the post carefully (something one should never do before leaving a negative comment) and interpreted it as "Leverage wasn't perfect, but it is worth trying to make Leverage 2.0 work or have similar projects with small changes". On rereading, I see that Jeff's emphasis is more on analyzing and quantifying the failure modes than on salvaging the idea. 

That said, I just want to point out that (at least as far as I understand it), there is a significant collection of people within and around E... (read more)

4
Kerry_Vaughan
2y
I can’t comment on whether rumors like this still persist in the EA community, but to the degree that they do, I think there is now a substantial amount of available information that allows for a more nuanced picture of the organization and the people involved. Two of the best, in my view, are Cathleen’s post and our Inquiry Report. Both posts are quite lengthy, but as you seem passionate about this topic, they may nevertheless be worth reading. I think it’s fair to say that the majority of people involved in Leverage would strongly disagree with your characterization of the organization. As someone who works at Leverage and was friends with many of the people involved previously, I can say that your characterization strongly mismatches my experience.
8
Jeff Kaufman
2y
Note that Leverage 2.0 is a thing, and seems to be taking a very different approach towards the history of science, with regular public write-ups: https://www.leverageresearch.org/history-of-science
7
Linch
2y
It seems like you're misreading Jeff's post. Perhaps deliberately. I would prefer it if people on this forum did this less.

attempting similar things in the future

I intended this a bit more broadly than you seem to have interpreted it; I'm trying to include exploratory research groups in general.

gain any value from it (other than as a cautionary tale)

That is essentially what this post is: looking in detail at one specific way I think things went wrong, and thinking about how to avoid this in the future.

I expect tradeoffs around how much you should prioritize external communication will continue to be a major issue for research groups!

I enjoyed this post a lot! 

I'm really curious about your mention of the "schism" pattern because I both haven't seen it and I sort of believe a version of it. What were the schism posts? And why are they bad? 

I don't know if what you call "schismatics" want to burn the commons of EA cooperation (which would be bad), or if they just want to stop the tendency in EA (and really, everywhere) of people pushing for everyone to adopt convergent views (the focus of "if you believe X you should also believe Y" which I see and dislike in EA, versus "I don'... (read more)

5
Gavin
2y
It seems bad in a few ways, including the ones you mentioned. I expect it to make longtermist groupthink worse, if (say) Kirsten stops asking awkward questions under (say) weak AI posts. I expect it to make neartermism more like average NGO work. We need both conceptual bravery and empirical rigour for both near and far work, and schism would hugely sap the pool of complements. And so on. Yeah the information cascades and naive optimisation are bad. I have a post coming on a solution (or more properly, some vocabulary to understand how people are already solving it). DMed examples.

I like hypothesis generation, and I particularly like that in this post a few of the points are mutually exclusive (like numbers 7 and 10), which should happen in a hypothesis-generation post. However, this list, as well as the topic, feels lazy to me, in the sense of needing much more specificity in order to generate more light than heat.

I think my main issue is the extremely vague use of "quality" here. It's ok to use vague terms when a concept is hard to define, but in this case it feels like there are more useful ways to narrow it down. For example you c... (read more)

2
Thomas Kwa
2y
I agree that this list is "lazy", and I'd be excited about someone doing a better analysis.

I'm one of the people who submitted a post right before the deadline of the criticism contest. FWIW I think number 6 is off base. In my case, the deadline felt like a Schelling point. My post was long and kind of technical, and I didn't have any expectation of getting money - though having a fake deadline was very helpful and I would probably not have written it without the contest. I don't think that any of the posts that got prizes were written with an expectation of making a profit. They all looked like an investment of multiple hours by talented people... (read more)

But I agree with your meta-point that I implicitly assumed SSA together with my "assumption 5", and that SSA might not follow from the other assumptions.

Thanks! I didn't fully understand what people meant by that and how it's related to various forms of longtermism. Skimming the linked post was helpful to get a better picture.

Thanks for the links!  They were interesting and I'm happy that philosophers, including ones close to EA, are trying to grapple with these questions. 

I was confused by SIA, and found that I agree with Bostrom's critique of it much more than with the argument itself. The changes to the prior it proposes seem ad hoc, and I don't understand how to motivate them. Let me know if you know how to motivate them (without a posteriori arguments that they - essentially by definition - cancel the update terms in the DA). It also seems to me to quickly lead t... (read more)
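(A minimal sketch, not part of the original comment, of the cancellation referred to above, using the standard toy setup: N is the total number of observers ever, n is one's birth rank, and P(N) is the prior.)

```latex
% Standard toy presentation of the Doomsday argument and the SIA cancellation
% (illustrative sketch; not taken from the original comment).
\begin{align*}
\text{SSA: } & P(n \mid N) = \tfrac{1}{N} \quad (n \le N)
  \;\Rightarrow\; P(N \mid n) \propto \tfrac{P(N)}{N}
  \quad \text{(posterior shifts toward small } N\text{: the Doomsday update)} \\
\text{SIA: } & P_{\mathrm{SIA}}(N) \propto N \, P(N)
  \;\Rightarrow\; P_{\mathrm{SIA}}(N \mid n) \propto N \, P(N) \cdot \tfrac{1}{N} = P(N)
  \quad \text{(the Doomsday update cancels, essentially by construction)}
\end{align*}
```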

4
iporphyry
2y
But I agree with your meta-point that I implicitly assumed SSA together with my "assumption 5", and that SSA might not follow from the other assumptions.

I'm trying to optimise something like "expected positive impact on a brighter future conditional on being the person that I am with the skills available to/accessible for me".

If this is true, then I think you would be an EA. But from what you wrote it seems that you have a relatively large term in your philosophical objective function (as opposed to your revealed objective function, which for most people gets corrupted by personal stuff) on status/glory. I think the question determining your core philosophy would be which term you consider primary. For exa... (read more)

1
𝕮𝖎𝖓𝖊𝖗𝖆
2y
I plan to seek status/glory through making the world a better place. That is, my desire for status/prestige/impact/glory is interpreted through an effective altruistic like framework. "I want to move the world" transformed into "I want to make the world much better". "I want to have a large impact" became "I want to have a large impact on creating a brighter future". I joined the rationalist community at a really impressionable stage. My desire for impact/prestige/status, etc. persisted, but it was directed at making the world better.

If this is not answered by the earlier statements, then it's incoherent/inapplicable. I don't want to have a large negative impact, and my desire for impact/prestige cannot be divorced from the context of "a much brighter world". My EV is personally making the world a brighter place.

I don't think this is coherent either. I don't view them as a means to an end of helping people. But I don't know how seeking status/glory by making the world a brighter place could possibly be reducing my expected value? It feels incoherent/inapplicable.

This is true, and if I'm not an EA, I'll have to accept it. But it's not yet clear to me that I'm just "very EA adjacent" as opposed to "fully EA". And I do want to be an EA I think. I might modify my values in that direction (why I said I'm not "yet" vegan as opposed to not vegan).

I think a lot of people miss the idea that "being an EA" is a different thing from being "EA adjacent"/"in the EA community"/ "working for an EA organization" etc. I am saying this as someone who is close to the EA community, who has an enormous amount of intellectual affinity, but does not identify as an EA. If the difference between the EA label and the EA community is already clear to you, then I apologize for beating a dead horse.

It seems from your description of yourself like you're actually not an Effective Altruist in the sense of holding a signific... (read more)

3
𝕮𝖎𝖓𝖊𝖗𝖆
2y
I have a significantly consequentialist world view. I am motivated by the vision of a much better world. I am trying to create such a better world. I want to devote my career to that project. I'm trying to optimise something like "expected positive impact on a brighter future conditional on being the person that I am with the skills available to/accessible for me".

The ways I perceive that I differ from EAs are:

* Embracing my desire for status/prestige/glory/honour
* I'm not impartial to my own welfare/wellbeing/flourishing
* I'm much less willing to undertake personal hardship (frugality, donating the majority of my income, etc.) and I think this is fine
* I'm not (currently) vegan

I want to say that I'm not motivated by altruism. But people seem to be imagining behaviour/actions that I oppose/would not take, and I do want to create a brighter future. And I'm not sure how to explain why I want a much brighter future in a way that isn't altruistic.

* A much (immensely) better world is possible
* We can make that happen

The "we should make that happen" feels like an obvious conclusion. Explaining the why draws blanks. Rather than saying I'm not altruistic, I think it's more accurate to say that I'm less willing to undertake significant personal hardship and I'm more partial to my own welfare/flourishing/wellbeing. Maybe that makes me not EA, but I was under the impression that I was simply a non-standard EA.

I like that you admit that your examples are cherry-picked. But I'm actually curious what a non-cherry-picked track record would show. Can people point to Yudkowsky's successes? What did he predict better than other people? What project did MIRI generate that either solved clearly interesting technical problems or got significant publicity in academic/AI circles outside of rationalism/EA? Maybe instead of a comment here this should be a short-form question on the forum.

I like that you admit that your examples are cherry-picked. But I'm actually curious what a non-cherry-picked track record would show. Can people point to Yudkowsky's successes?

While he's not single-handedly responsible, he led the movement to take AI risk seriously at a time when approximately no one was talking about it, which has now attracted the interest of top academics. This isn't a complete track record, but it's still a very important data point. It's a bit like if he were the first person to say that we should take nuclear war seriously, and then five years later people are starting to build nuclear bombs and academics realize that nuclear war is very plausible.

I have some serious issues with the way the information here is presented which make me think that this is best shared as something other than an EA forum post. My main issues are:

  1. This announcement is in large part a promotion for the Fistula Foundation, with advertising-esque language. It would be appropriate in an advertising banner of an EA-aligned site but not on the forum, where critical discussion or direct information-sharing is the norm.
  2. It includes the phrase that Fistula Foundation is "widely regarded as one of the most effective charities in the
... (read more)
4
gogreatergood
2y
1. I am using the same language here that I present this project to the media and to others with. I thought this would be beneficial. You are seeing the same thing that the general public sees, except (I hope) with a lot of background info and links to explain my thinking.
2. My language in that is not weaselly, because it links to a page that shows exactly what I'm stating. It is indeed widely regarded as one of the most effective charities in the world, by (as the linked page shows) The Life You Can Save, CharityWatch, Great Nonprofits, GuideStar, and Charity Navigator. Do you have any evidence that Fistula Foundation is NOT widely regarded to be one of the most effective charities in the world? Maybe you don't think it is one of the most effective? But it's widely regarded to be, and by some prominent and well-regarded third parties.
3. Your link here is exactly the same link that I put; I think you missed that. Yes, I agree that it may not be an upper-elite-ranked charity, and that's why I linked to the same page. However, within this link that we both posted, they do state: "We think that Fistula Foundation may be in the range of cost-effectiveness of our current top charities. However, this estimate is highly uncertain for a number of reasons." It seems to be well within a good range of high effectiveness. But if you are a stickler for elite effectiveness only, a great case can be made for that, to NOT donate to them, and fair enough. We seem in general agreement here. I'm not sure what I'm stating that's false. It did not make the GiveWell cut after they looked into them. I agree.

+ I am in no way whatsoever affiliated with Fistula Foundation. Why do you think so? If you are going to donate less to them in the future, based just on the wording of this post from a random person you don't know, and not based on the evidence of the work that they do, I'm not sure I follow your reasoning.
+ I do hope you join me in skinny dipping on t
7
Amber Dawn
2y
fwiw I disagree with this. People often 'advertise' or argue for things on the Forum - e.g. promoting some new EA project, saying 'come work for us at X org!', or arguing strongly that certain cause areas should be considered. The main difference with this post is that the language is more 'advertising-esque' than normal - but this seems to me an aesthetic consideration. I'm not sure what would be gained by OP rewriting it with more caveats.

Re "one of the most effective charities", OP does immediately justify this in the bullet points below - it's recommended by The Life You Can Save, and GiveWell says it 'may be in the range of cost-effectiveness of our top charities'.

"Cardinal" and "Ordinal" denote a certain extremely crude way in economics in which different utility functions can still be compared in certain cases. They gesture at a very important issue in EA which everybody who thinks about it encounters: that different people (/different philosophies) have different ideas of the good, which correspond to different utility functions.

But the two terms come from math and are essentially only useful in theoretical arguments. In practical applications they are extremely weak to the point of being essentially meaningless ... (read more)

0
Barracuda
2y
The term comes from economics (it was created by Pareto, who pioneered the field of micro-economics...)

I like this criticism, but I think there are two essentially disjoint parts here that are being criticized. The first is excess legibility, i.e., the issue of having explicit metrics and optimizing to the metrics at all. The second is that a few of the measurements that determine how many resources a group gets/how quickly it grows are correlated with things that are, at best, not inherently valuable and, at worst, harmful.

The first problem seems really hard to me: the legibility/autonomy trade-off is an age-old problem that happens in politics, business... (read more)

I think this is a great post! It addresses a lot of my discomfort with the EA point of view, while retaining the value of the approach. Commenting in the spirit of this post.

I want to point out two things that I think work in Eric's favor in a more sophisticated model like the one you described. 

First, I like the model that impact follows an approximately log-normal distribution. But I would draw a different conclusion from this. 

It seems to me that there is some set of current projects, S (this includes the project of "expand S"). They have impact given by some variable that I agree is closer to log normal than normal. Now one could posit two models: one idealized model in which people know (and agree on) magnitue of impac... (read more)
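(An illustrative sketch, not from the original comment and with made-up parameters, of the point that a log-normal impact distribution concentrates most of the total impact in a handful of projects, far more than a normal distribution would.)

```python
# Illustrative only: parameters are invented for the sketch, not taken from the comment.
import numpy as np

rng = np.random.default_rng(0)
n_projects = 1_000

# Log-normal impacts: log-impact is normally distributed.
lognormal_impacts = rng.lognormal(mean=0.0, sigma=2.0, size=n_projects)

# Normal impacts with the same mean/spread, truncated at zero for comparison.
normal_impacts = np.clip(
    rng.normal(loc=lognormal_impacts.mean(), scale=lognormal_impacts.std(), size=n_projects),
    0, None,
)

def top_share(impacts: np.ndarray, k: int = 10) -> float:
    """Fraction of total impact contributed by the k highest-impact projects."""
    return np.sort(impacts)[-k:].sum() / impacts.sum()

print(f"log-normal: top 10 of {n_projects} projects -> {top_share(lognormal_impacts):.0%} of total impact")
print(f"normal:     top 10 of {n_projects} projects -> {top_share(normal_impacts):.0%} of total impact")
```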

I agree with you that EA outreach to non-Western cultures is an important and probably neglected area — thank you for pointing that out! 

There are lots of reasons to make EA more geographically (and otherwise) diverse, and also some things to be careful about, given that different cultures tend to have different ethical standards and discussion norms. See this article about translation of EA into Mandarin. Something to observe is that outreach is very language and culture-specific. I generally think that international outreach is best done in a granul... (read more)

There seems to be very little precedent for someone founding new successful universities, partially because the perceived success of a university is so dependent on pedigree. There is even less precedent for successful "themed" universities, and the only ones I know of that have attained mainstream success (not counting women's universities or black universities, which are identity-based rather than movement-based) are old religious institutions like Saint John's or BYU. I think a more realistic alternative would be to buy something like EdX or a competing onli... (read more)

3
So-Low Growth
2y
IIRC, OpenPhil are funding EA-ish academics to produce online courses. I think the old Peter Singer one on Coursera/edX did pretty well.
3
RyanCarey
2y
Major philanthropists have successfully started universities before, e.g. Carnegie, Rockefeller, Vanderbilt. More recently, major institutes have been started, e.g. IAS, various think tanks. I agree there is a formidable "moat" of prestige that would be hard to overcome, but the prospect is not one to be ruled out entirely.

I think the Christian Science Monitor's popularity and reputation makes Christian Scientists (note: totally different from Scientologists) significantly more respectable than they would be otherwise. 

From Britannica: 

The Christian Science Monitor, American daily online newspaper that is published under the auspices of the Church of Christ, Scientist. Its original print edition was established in 1908 at the urging of Mary Baker Eddy, founder of the church, as a protest against the sensationalism of the popular press. The Monitor became famous for

... (read more)

This is a nice project, but as many people point out this seems a bit fuzzy for a "FAQ" question. If it's an ongoing debate within the community, it seems unlikely to have a good 2-minute answer for the public. There's probably a broader consensus around the idea that if you commit to any realistic discount scheme, you see that the future deserves a lot more consideration than it is getting in the public and the academic mainstreams, and I wonder whether this can be phrased as a more precise question. I think a good strategy for public-facing answers would be to compare climate change (where people often have a more reasonable rate of discount) to other existential risks.

1
james
2y
That's reasonable - thanks for sharing! We might try and shake it up if we do a future round; will need to think about it.

Disclaimer: this is an edited version of a much harsher review I wrote at first. I have no connection to the authors of the study or to their fields of expertise, but I am someone who enjoyed the paper critiqued here and in fact think it very nice and very conservative in terms of its numbers (the current post claims the opposite). I disagree with this post and think it is wrong in an obvious and fundamental way, and therefore should not be in the decade review, in the interest of not posting wrong science. At the same time it is well-written and exhibits a good ... (read more)

I think that it's not always possible to check that a project is "best use, or at least decent use" of its resources. The issue is that these kinds of checks are really only good on the margin. If someone is doing something that jumps to a totally different part of the Pareto manifold (like building a colony on Mars or harnessing nuclear fission for the first time), conventional cost-benefit analyses aren't that great. For example, a standard post-factum justification of the original US space program is that it accelerated progress in materials science and ... (read more)

4
Ozzie Gooen
2y
Space is a particularly complicated area with respect to EV. I imagine that a whole lot of the benefit came from "marketing for science+tech", and that could be quantified easily enough. As for the advancements they made in materials science and similar, I'm still not sure these were enough to justify the space program on their own. I've heard a lot of people make this argument to defend NASA, and I haven't seen them refer to simple cost/benefit reports. Sure, useful tech was developed, but that doesn't tell us that, by spending the money on more direct measures, we couldn't have had even more useful tech. It also takes the careers of thousands of really smart, hard-working, and fairly altruistic scientists and engineers. This is a high cost!

VCs support very reckless projects. If they had their way, startups would often be more ambitious than the entrepreneurs desire. VCs are trying to optimize money, similar to how I recommend we try to optimize social impact. I think that prioritization can and should often result in us having more ambitious projects, not less.