iporphyry

Bad Omens in Current Community Building

I like this criticism, but I think there are two essentially disjoint parts here that are being criticized. The first is excess legibility, i.e., the issue of having explicit metrics and optimizing to the metrics at all. The second is that a few of the measurements that determine how many resources a group gets and how quickly it grows are correlated with things that are, at best, not inherently valuable and, at worst, harmful.

The first problem seems really hard to me: the legibility/autonomy tension is an age-old problem in politics, business, and science, and seems to involve a genuine trade-off between organizational efficiency and the ability to capitalize on good but unorthodox ideas and individuals.

The second seems more accessible (though still hard), and reasonably separable from the first. Here I see a couple of things you flag (other than legibility/"corporateness" by itself) as parameters that positively contribute to growth but negatively contribute to EA's ability to attract intellectually autonomous people. The first is "fire-and-brimstone" style arguments, where EA outreach tends to be all-or-nothing: "you either help save the sick children or you burn in Utilitarian Hell." The second is common-denominator messaging that is optimized to build community (slogans, manufactured community and sense of purpose; things that attract people like Bob in your thought experiment) but not optimized to appeal to meta-level thinkers who understand the reasoning behind the slogans. Both are vaguely correlated with EA having commonalities with religious communities, so I'm going to borrow the adjective "pious" to refer to ideas and individuals for which these factors are salient.

I like that you are pointing out that a lot of EA outreach is, in one way or another, an "appeal to piety", and this is possibly bad. There might be a debate about whether this is actually bad and to what extent (e.g., the Catholic church is inefficient, but the sheer volume of charity it generates is nothing to sneer at), but I think I agree with the intuition that this is suboptimal, and that by Goodhart's law, if pious people are more likely to react to outreach, eventually they will form a supermajority. 

I don't want to devalue the criticism that legibility is in itself a problem, and particularly ugh-y to certain types of people (e.g. to smart humanities majors). But I think that the problem of piety can be solved without giving up on legibility, and instead by using better metrics that are more entangled with the real world. This is something I believed before this post, so I might be shoe-horning it in here: take this with a grain of salt.

But I want to point out that organizations that are constantly evaluated on some measurable parameter don't necessarily end up excessively pious. A sports team can't survive on team spirit alone; a software company will not see any profit if it only hires people who fervently believe in its advertising slogans. So maybe a solution to the problem of appeals to piety is to, as you say, reduce the importance of the metric of "HEA" generation in determining funding, clout, etc., and replace it with other hard-to-fake metrics that are less correlated with piety and more correlated with actually being effective at what you do.

I haven't thought much about what the best metrics would be and am probably not qualified to make recommendations, but just for plausibility's sake, here are a couple of examples of things that I think would be cool (if not necessarily realistic):

  1. First, it would be neat (though potentially expensive) if there were a yearly competition between teams of EAs (maybe student groups, or maybe something at a larger level) to use a funding source to create an independent real-world project and have their impact in QALYs judged by an impartial third party.
  2. Second, I think it would be nice to create "intramural" versions of existing competitions, such as the fiction contest, Scott Alexander's book review contest, various super-forecasting contests, etc., and grade university groups on success (relative to past results). If something like this is implemented, I'd also like to see the focus of things like the fiction competition move away from "good messaging" (which smacks of piety) and towards "good fiction that happens to have an EA component, if you look hard enough".

I think that if the funding culture becomes more explicitly focused on concentration of talent and on real-world effects, and less on sheer numbers or uncritical mission alignment, then outreach will follow suit and some of the issues you raise will be addressed.

My bargain with the EA machine

I think this is a great post! It addresses a lot of my discomfort with the EA point of view, while retaining the value of the approach. Commenting in the spirit of this post.

My bargain with the EA machine

I want to point out two things that I think work in Eric's favor in a more sophisticated model like the one you described. 

First, I like the model in which impact follows an approximately log-normal distribution. But I would draw a different conclusion from this.

It seems to me that there is some set of current projects, S (this includes the project of "expand S"). They have impact given by some variable that I agree is closer to log-normal than normal. Now one could posit two models: an idealized model in which people know (and agree on) the magnitudes of impacts, and a second, more realistic model, where impact is extremely uncertain, with standard deviation on the same order as the potential impact. In the idealized model, you would maximize impact by working on the most impactful project, and get comparatively much less impact by working on a random project you happen to enjoy. But in the realistic world with very large uncertainty, you would maximize expected value by working on a project on the fuzzy Pareto frontier of "potentially very impactful projects", but within this set you would prioritize projects in which you have the largest competitive advantage (which I think is also log-distributed to a large extent). Presumably "how much you enjoy a subject" is correlated with "how much log advantage you have over the average person", which makes me suspicious of the severe impact/enjoyment trade-off in your second graph.

I think a strong argument against this point would be to claim that the log difference between individual affinities is much smaller than the log difference between impacts. I intuitively think this is likely, but I expect that the much better knowledge people have of their comparative strengths, compared with the (vastly uncertain) guesses about impact, will counteract this. Here I would enjoy an analysis of a model of impact vs. personal competitive advantage that takes both of these things into account.
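Since I'm asking for such a model, here is a minimal Monte Carlo sketch (in Python) of the kind of thing I have in mind. All distributions and parameters are invented for illustration and calibrated to nothing: impact estimates are noisy at roughly the same scale as the impacts themselves, personal advantage is known exactly, and a "stay on the fuzzy frontier, then optimize for fit" strategy is compared against naively chasing the single highest impact estimate.

```python
# A toy Monte Carlo sketch of the model above. All distributions and parameters
# are invented for illustration; nothing is calibrated to real projects.
import numpy as np

rng = np.random.default_rng(0)
n_projects, n_trials = 1000, 2000

def one_trial():
    true_log = rng.normal(0, 2, n_projects)            # true log10 impact of each project
    est_log = true_log + rng.normal(0, 2, n_projects)  # estimate, noise on the same scale as the signal
    advantage = rng.normal(0, 1, n_projects)            # personal log10 advantage, assumed known exactly

    # Strategy A: chase the single highest impact estimate, ignoring personal fit.
    a = np.argmax(est_log)
    # Strategy B: restrict to the fuzzy frontier (top decile of estimates),
    # then pick the project with the best personal fit inside it.
    frontier = np.flatnonzero(est_log >= np.quantile(est_log, 0.9))
    b = frontier[np.argmax(advantage[frontier])]

    # Realized log10 impact = true project impact plus personal advantage.
    return true_log[a] + advantage[a], true_log[b] + advantage[b]

results = np.array([one_trial() for _ in range(n_trials)])
print("mean realized log10 impact, chase-the-estimate:", round(results[:, 0].mean(), 2))
print("mean realized log10 impact, frontier-then-fit: ", round(results[:, 1].mean(), 2))
```

With these invented numbers the frontier-then-fit strategy comes out ahead, because the single top estimate is mostly noise; shrinking the estimation noise or the spread in personal advantage flips the comparison back toward chasing the estimate, which is exactly the trade-off I'd want a more careful model to quantify.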

Another point, somewhat orthogonal to the discussion of "how enjoyable is the highest-impact job" but indirectly related to Eric's point, is the nonlinearity of effort.

Namely, there is a certain amount of nonlinearity in how "amount of time dedicated to cause X" translates into "contribution to X". There is some superlinearity at low levels (where at first most of your work goes into gaining domain knowledge and experience), and some sublinearity at high levels (where you run the risk of burnout, as well as of saturation if you choose a narrow topic). Because of the sublinearity at high levels, I think it makes sense for most people to have at least two "things" they do.
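To make the shape of this claim concrete, here is a throwaway numerical illustration (the curve is invented purely for the example, not estimated from anything): with an S-shaped contribution curve, concentrating all your hours wins when total effort is small, but splitting across two causes wins once the first cause starts to saturate.

```python
# Invented S-shaped "hours -> contribution" curve: superlinear at first
# (learning curve), saturating later. Purely illustrative numbers.
import numpy as np

def contribution(hours):
    # logistic curve: slow start, steep middle, flat top
    return 1.0 / (1.0 + np.exp(-(hours - 20.0) / 5.0))

for total_hours in (25, 60):
    one_cause = contribution(total_hours)
    split = contribution(0.7 * total_hours) + contribution(0.3 * total_hours)
    print(f"{total_hours} hours -> one cause: {one_cause:.2f}, 70/30 split: {split:.2f}")
```

At 25 hours the 70/30 split hasn't gotten either cause over the learning hump, so concentration wins; at 60 hours the single cause has saturated and the split comes out well ahead.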

If you buy this, I think it makes a lot of sense to make your second "cause" some version of "have fun" (or related things like "pursue personal growth for its own sake"). There are three reasons I believe this. First, this is a neglected cause: unless you're famous or rich, no one else will work on it, which means that no one else will even try to pick the low-hanging fruit. Second, it's a cause where you are an expert and, from your position, the payoff is easy to measure and unambiguous. And third, if you are genuinely using a large part of your energy to have a high-impact career, being someone who has fun (and, on a meta level, being a community that actively encourages people to have fun) will make others more likely to follow your career path.

I should caveat the third point: there are bad/dangerous arguments along similar lines that result in people convincing themselves that they are being impactful by being hedonistic, or by pursuing their favorite pet project. People are rationalizers and love coming up with stories that say "making myself happy is also the right thing to do". But while this is something to be careful of, I don't think it makes arguments of this type incorrect.

Where is the Social Justice in EA?

I agree with you that EA outreach to non-Western cultures is an important and probably neglected area — thank you for pointing that out! 

There are lots of reasons to make EA more geographically (and otherwise) diverse, and also some things to be careful about, given that different cultures tend to have different ethical standards and discussion norms. See this article about translation of EA into Mandarin. One thing to observe is that outreach is very language- and culture-specific. I generally think that international outreach is best done in a granular manner — not just "outreach to all non-Western cultures" or "outreach to all the underprivileged". So I think it would be wonderful for someone to post about how best to approach outreach in Malawi, but the content might be extremely different from writing about outreach in Nigeria.

So: if you're interested in questions like this, I think it would be great if someone were to choose a more specific question and research it! (And I appreciate that your post points out a real gap.)

On a different note, I think that the discussion around your post would be more productive if you used terms other than "social justice." Similarly, I think that the dearth of the phrase "social justice" on the EA Forum is not necessarily a sign of a lack of desire for equity and honesty. There are many things about the "social justice" movement that EAs have become wary of. For instance, my sense is that the conventional paradigm of the contemporary Western elite is largely based on false or unfalsifiable premises. I'd guess that this makes EAs suspicious when they hear "social justice" — just as they're often wary of certain types of sociology research (things like "grit," which don't replicate) or of psychosexual dynamics and other bits of Freud's now-debunked research.

At the same time (just as with Freudianism), a lot of the core observations that the modern social justice paradigm makes are true and extremely useful. It is profoundly obvious, both from statistics and from the anecdotal evidence of virtually any woman, that pretty much every mixed-gender workplace has an unacceptable amount of harassment. There is abundant evidence that, e.g., non-white Americans experience some level of racism, or at least are treated differently, in many situations.

Given this, here are some things that I think it would be useful to do:

  1. Make the experience of minorities within EA more comfortable and safe.
  2. Continue seriously investigating translating EA concepts to other cultural paradigms (or conversely, translating useful ideas from other cultural paradigms into EA). (See also this article.)
  3. Take some of the more concrete/actionable pieces of the social justice paradigm and analyze/harmonize them with the more consequentialist/science-based EA philosophy (with the understanding that an honest analysis sometimes finds cherished ideas to be false).

I think the last item is definitely worth engaging with more, especially with people who understand and value the social justice paradigm. Props if you can make progress on this!

What brand should EA buy? If we had to buy one.

There seems to be very little precedent for founding successful new universities, partly because the perceived success of a university is so dependent on pedigree. There is even less precedent for successful "themed" universities, and the only ones I know of that have attained mainstream success (not counting women's universities or historically black universities, which are identity-based rather than movement-based) are old religious institutions like Saint John's or BYU. I think a more realistic alternative would be to buy something like EdX or a competing online course platform (a MOOC provider) and give it an EA slant. The success of programs like EdX is much more recent, and much more "buyable", since it is just a matter of either licensing the right content or hiring famous academics to give independent courses.

What brand should EA buy? If we had to buy one.

I think the Christian Science Monitor's popularity and reputation make Christian Scientists (note: totally different from Scientologists) significantly more respectable than they would be otherwise.

From Britannica: 

The Christian Science Monitor, American daily online newspaper that is published under the auspices of the Church of Christ, Scientist. Its original print edition was established in 1908 at the urging of Mary Baker Eddy, founder of the church, as a protest against the sensationalism of the popular press. The Monitor became famous for its thoughtful treatment of the news and for the quality of its long-range, comprehensive assessments of political, social, and economic developments. It remains one of the most respected American newspapers. Headquarters are in Boston.

So I would try to buy a dying newspaper, or another media source. Alternatively (and more likely), I would found a new newspaper with a name like "San Francisco Herald" and try to attract a core of editors from a dying media source. 

Bounty for your best 2 minute answer to an EA 'frequently asked question'

This is a nice project, but as many people have pointed out, this seems a bit fuzzy for a "FAQ" question. If it's an ongoing debate within the community, it seems unlikely to have a good 2-minute answer for the public. There's probably a broader consensus around the idea that if you commit to any realistic discount scheme, you see that the future deserves a lot more consideration than it is getting in the public and academic mainstreams, and I wonder whether this can be phrased as a more precise question. I think a good strategy for public-facing answers would be to compare climate change (where people often have a more reasonable discount rate) to other existential risks.
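To give a sense of the arithmetic behind "any realistic discount scheme" (the rates below are chosen purely for illustration): even with exponential discounting, harms a century or two out retain a non-trivial fraction of the weight of present-day harms, unless you pick a rate well above what people implicitly apply to climate change.

```python
# Present-value weight of a harm t years in the future under exponential
# discounting, exp(-rate * t). Rates and horizons are illustrative placeholders.
import math

for rate in (0.005, 0.01, 0.02):
    weights = {t: round(math.exp(-rate * t), 3) for t in (50, 100, 200)}
    print(f"discount rate {rate:.1%}: {weights}")
```

At anything like a 0.5–1% rate of pure time preference, a harm 100 years out still counts for a third or more of a harm today, which is the kind of gap between stated discounting and actual public attention that a more precise FAQ question could target.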

The Fermi Paradox has not been dissolved

Disclaimer: this is an edited version of a much harsher review I wrote at first. I have no connection to the authors of the study or to their fields of expertise, but I am someone who enjoyed the paper critiqued here and in fact think it is very nice and very conservative in terms of its numbers (the current post claims the opposite). I disagree with this post and think it is wrong in an obvious and fundamental way, and therefore it should not be in the decade review, in the interest of not promoting incorrect science. At the same time, it is well written and exhibits a good understanding of most parts of the relevant model, and a less extreme (and less wrong :) version of this post would pass muster with me. In particular, I think that the criticism

However, since this parameter is capped at 1, while there is no lower limit to the long tail of very low estimates for fl, in practise this primarily has the effect of reducing the estimated probability of life emerging spontaneously, even though it represents an additional pathway by which this could occur.

is very valid, and a model taking this into account would have a correspondingly higher credence in "life is common" scenarios. However, the authors of the paper being criticized are explicitly thinking about the likelihood of "life is not common" scenarios (which a very naive interpretation of the Drake equation would claim are all but impossible), and here this post is deeply flawed.

The essential beef of the author of the post (henceforth OP) with the authors of the paper (henceforth Sandberg et al) concerns their value of fl, or more precisely the standard deviation they assign to the log-uncertainty in fl, the probability of abiogenesis (abiogenesis being the event wherein random, non-replicating chemical processes create the first replicating life). A very rough explanation of the lower end of this range (in the log-uncertainty model which Sandberg et al use and to which OP subscribes) is the probability of abiogenesis occurring on a given habitable planet via the best currently known model. Note that this is very much not the probability of abiogenesis itself, since there can be many other mechanisms which produce abiogenesis far more frequently than the best currently known model. The beautiful conceit of this paper (and the field it belongs to) is the idea that, absent a model for a potentially very large or very small number (in this case the probability of abiogenesis, or, in the larger paper, the probability of the emergence of life on a given planet), our best rough treatment of our uncertainty is that it is more or less log-uniformly distributed between the largest and smallest "theoretically possible" values (so a number between 10^-30 and 10^-40 is roughly as likely as a value between 10^-40 and 10^-50, provided these numbers are within the "theoretically possible" range; the difference between "log uniform" and "log normal" is irrelevant to a first approximation). The exact definition of "theoretically possible" is complicated, but in the case of abiogenesis the largest theoretically possible value of fl (as of any other probability) is 1, while the smallest possible value is the probability of abiogenesis given the best currently known mechanisms. The model is not perfect, but it is by far the best we have for predicting the lower tail of such distributions, i.e., in this case, the likelihood of the cosmos being mostly devoid of intelligent life. (Note that the model doesn't tell us this probability is close to 1! Just that it isn't close to 0.)
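To make the log-uniform reasoning concrete, here is a minimal numerical sketch. The bounds (10^-30 to 1) and the planet count are crude placeholders standing in for the actual distributions in Sandberg et al, so only the qualitative conclusion matters: a prior that is uniform in the exponent puts substantial mass on "life is rare" scenarios without making them anywhere near certain.

```python
# Placeholder log-uniform prior on the per-planet probability of abiogenesis.
# Bounds and planet count are illustrative, not the values used by Sandberg et al.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
log10_fl = rng.uniform(-30, 0, n)   # exponent drawn uniformly: each decade equally likely
n_planets = 1e10                    # rough placeholder for habitable planets in the galaxy

p_empty = (1.0 - 10.0 ** log10_fl) ** n_planets   # chance no other planet gets life, per sample
print("prior mass on fl > 1e-3 ('life is common'):", (log10_fl > -3).mean())
print("expected P(galaxy otherwise lifeless):", round(p_empty.mean(), 2))
```

This is exactly the shape of the paper's headline claim: not that we are alone, but that "we are alone" keeps a large chunk of probability once the uncertainty in fl is treated honestly.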

Now, the best theoretically feasible model for abiogenesis currently known is the so-called RNA world model, which is analyzed in supplement 1 of Sandberg et al. Essentially, the only sure-fire mechanism for abiogenesis we know of is spontaneously generating the genome of an archaebacterium, which has hundreds of thousands of base pairs, and which would put the probability of abiogenesis at under 10^-100,000 (insanely small). However, we are fairly confident both that a much smaller self-replicating RNA sequence would be possible in certain conducive chemical environments (the putative RNA world), and that there is some redundancy in how to generate a near-minimal self-replicating RNA sequence (so you don't have to get every base pair right). The issue is that we don't know how small the smallest such sequence is, nor how much redundancy there is in choosing it. By the nature of log uncertainty, if we want the lowest value in the range of uncertainties (what OP and Sandberg et al call the log standard deviation) we should take the most pessimistic reasonable estimates. These are attempted in the previously mentioned supplement, though rather than actually taking pessimistic values, Sandberg et al rather liberally assume a very general model of self-replicating RNA formation, with their lower bound based on assumptions about protein folding (rather than a more restrictive model assuming low levels of redundancy, which I would have chosen, and which would have put the value of fl significantly lower even than the Sandberg et al paper does: they explicitly say that they are trying to be conservative). Still, they estimate a value of fl equal to or lower than 10^-30 under the current best model. In order to argue for a 10^-2 result while staying within the log-normal model, OP would have to convince me of some drastic additional knowledge: either a proof, beyond all reasonable doubt, that an RNA chain shorter than the average protein is capable of self-replication, or that there is a lot of redundancy in how self-replicating RNA can form, so that a chemical "RNA soup" would naturally tend towards self-replication under certain conditions. Both of these are plausible theories, but since such mechanisms for abiogenesis are not currently known to exist, assuming they work when setting your lower bound on the log probability is precisely not how log uncertainty works. In this way OP is, quite simply, wrong. Therefore, as incorrect science, I do not recommend this post for the decade review.
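Continuing the placeholder sketch from above, the crux of the disagreement is simply where the lower end of the log-uniform range sits. Shifting the floor from something like 10^-30 up to the 10^-2 that OP effectively argues for wipes out the "life is rare" scenarios entirely, which is why the justification for that floor carries all the weight (again, the numbers below are illustrative, not the paper's).

```python
# How the prior mass on "life arises on fewer than 1 in 10^9 habitable planets"
# depends on the lower end of the log-uniform range. Illustrative values only.
import numpy as np

def p_rare(log10_floor, threshold=-9, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    log10_fl = rng.uniform(log10_floor, 0, n)
    return (log10_fl < threshold).mean()

print("floor 10^-30 (RNA-world-style pessimism):", p_rare(-30))  # roughly 0.7
print("floor 10^-2  (what OP's argument needs): ", p_rare(-2))   # exactly 0
```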

Flimsy Pet Theories, Enormous Initiatives

I think that it's not always possible to check that a project is the "best use, or at least decent use" of its resources. The issue is that these kinds of checks are really only good on the margin. If someone is doing something that jumps to a totally different part of the Pareto manifold (like building a colony on Mars or harnessing nuclear fission for the first time), conventional cost-benefit analyses aren't that useful. For example, a standard after-the-fact justification of the original US space program is that it accelerated progress in materials science and computer science in a way that paid off the investment even if you don't believe that manned space exploration is worthwhile. Whether or not you agree with this (and I doubt this counterfactual can be quantified with any confidence), I don't think that the people who were working on it would have been able to make this argument convincingly at the time. I imagine that if you had run a cost-benefit analysis at the time, it would have found that a better investment would be to put money into incremental materials research. But without the challenge of having to develop insulators, etc., that work in space, there would plausibly have been fewer new materials discovered.

I think there is an important difference here between SpaceX and Facebook, since SpaceX is an experiment that just burns private money if it fails to have a long-term payoff, whereas Facebook is a global institution whose negative aspects harm billions of people. There's also a difference between something like Mars exploration, which is a simple and popular idea that's expensive to implement, and more kooky vanity projects, which consist of rich people imagining that being rich also makes them able to solve hairy problems that more qualified people have failed to solve for ages (an example that comes to mind, which thankfully doesn't have billions of dollars riding on it, is Wolfram's project to solve physics: https://blog.wolfram.com/2021/04/14/the-wolfram-physics-project-a-one-year-update/). I think that many big ambitious initiatives by billionaires are somewhere in between kooky ego trip and genuinely original/Pareto-optimal experiment, but it seems important to recognize that these are different things. Given this point of view, along with the general belief that large systems tend to err on the side of being conservative, I think that it's at least defensible to support experiments like SpaceX or Zuckerberg's big Newark school project, even when (as with the Newark project) they end up not being successful.