All of Linch's Comments + Replies

(Even) More Early-Career EAs Should Try AI Safety Technical Research

(Maybe obvious point, but) there just aren't that many people doing longtermist EA work, so basically every problem will look understaffed, relative to the scale of the problem. 

To fund research, or not to fund research, that is the question

I think the model setup, or at least the clarifications around it, needs tweaking. Namely, you're assuming that the main reason we may discontinue a researched-to-be-positive intervention is intrinsic time preference. But I think it's much more likely that, over enough time, there will be distributional shift/generalizability issues with old studies.

For one example, if we're all dead, a lot of studies are kind of useless. For another example, studies on the cost-effectiveness of (e.g.) malaria nets and deworming pills become increasingly out-of-distribution as (thankfully!) malaria and intestinal-worm burdens decrease worldwide, perhaps in the future approaching zero.
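To make the contrast concrete, here is a minimal sketch (my own illustrative framing with made-up numbers, not the post's actual model) in which the value of acting on a study decays both from pure time preference and from an assumed annual hazard that its findings stop generalizing:

```python
import math

def discounted_value(annual_value, years, time_pref, generalizability_hazard):
    """Illustrative only: total value of continuing an intervention for `years`,
    discounted by pure time preference plus an assumed annual hazard that the
    original study no longer generalizes (e.g. worm burdens fall, or the world
    changes out from under the study)."""
    rate = time_pref + generalizability_hazard
    return sum(annual_value * math.exp(-rate * t) for t in range(years))

# Hypothetical numbers: even with zero time preference, a 5% annual chance that
# the study stops applying substantially reduces the value of relying on it.
print(discounted_value(annual_value=1.0, years=30, time_pref=0.0,
                       generalizability_hazard=0.05))
print(discounted_value(annual_value=1.0, years=30, time_pref=0.03,
                       generalizability_hazard=0.05))
```

Under this toy framing the generalizability hazard behaves like an extra discount rate, which is why conflating it with intrinsic time preference changes the interpretation (and plausibly the magnitude) of the model's conclusions.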

(Even) More Early-Career EAs Should Try AI Safety Technical Research

if you have good evidence that your shape rotator abilities aren’t reasonably strong — e.g., you studied reasonably hard for SAT Math but got <700 (90th percentile among all SAT takers).[6]

This is really minor, but I think there's a weird conflation of spatial-visual reasoning and mathematical skills in this post (and related memespaces like roon's). This very much does not fit my own anecdotal experiences*, and I don't think this is broadly justified in psychometrics research. 

*FWIW, I've bounced off of AISTR a few times. I usually attribute the d... (read more)

3 · Lukas_Finnveden · 2d
I agree. Anecdotally, among people I know, I've found aphantasia to be more common among those who are very mathematically skilled. (Maybe you could have some hypothesis that aphantasia tracks something slightly different than other variance in visual reasoning. But regardless, it sure seems similar enough that it's a bad idea to emphasize the importance of "shape rotating". Because that will turn off some excellent fits.)
Some unfun lessons I learned as a junior grantmaker

I’d still be interested in hearing how the existing level of COIs affects your judgement of EA epistemics.

I'm confused by this. My inside view guess is that this is just pretty small relative to other factors that can distort epistemics. And for this particular problem, I don't have a strong coherent outside view because it's hard to construct a reasonable reference class for what communities like us with similar levels of CoIs might look like.

Impact markets may incentivize predictably net-negative projects

Ironically, the person I mentioned in my previous comment is one of the main players at Anthropic, so your second paragraph doesn't give me much comfort.

I don't understand your sentence/reasoning here. Naively this should strengthen ofer's claim, not weaken it.

2 · Arepo · 4d
Why? The less scrupulous one finds Anthropic in their reasoning, the less weight a claim that Wuhan virologists are 'not much less scrupulous' carries.
Some unfun lessons I learned as a junior grantmaker

Here's my general stance on integrity, which I think is a superset of issues with CoI. 

As noted by ofer, I also think investments are structurally different from grants. 

2 · aogara · 9d
This is a great set of guidelines for integrity. Hopefully more grantmakers and other key individuals will take this point of view. I’d still be interested in hearing how the existing level of COIs affects your judgement of EA epistemics. I think your motivated reasoning critique of EA is the strongest argument that current EA priorities do not accurately represent the most impactful causes available. I still think EA is the best bet available for maximizing my expected impact, but I have baseline uncertainty that many EA beliefs might be incorrect because they’re the result of imperfect processes with plenty of biases and failure modes. It’s a very hard topic to discuss, but I think it’s worth exploring (a) how to limit our epistemic risks and (b) how to discount our reasoning in light of those risks.
On Deference and Yudkowsky's AI Risk Estimates

Speaking for myself, I was interested in a lot of the same things in the LW cluster (Bayes, approaches to uncertainty, human biases, utilitarianism, philosophy, avoiding the news) before I came across LessWrong or EA. The feeling is much more like "I found people who can describe these ideas well" than "oh these are interesting and novel ideas to me." (I had the same realization when I learned about utilitarianism...much more of a feeling that "this is the articulation of clearly correct ideas, believing otherwise seems dumb").

That said, some of the ideas ... (read more)

Why EAs should normalize using Glassdoor

I think an ideal social norm would be only reviewing an organization after you are no longer working with them and thus can review the full experience.

This implies that a nontrivial fraction of employees would leave, which seems true of some EA orgs and not others (and I think the difference is non-random for pretty obvious reasons).

5 · JP Addison · 9d
Yeah, I expect employees who leave to be a pretty biased sample, and the bias to have different strengths depending on the org. Now, maybe that bias is weaker than the bias to try to make your own org look good. I'd be more optimistic about a central group sending a survey to a sample of employees and telling them to take it seriously for the benefit of the world. Probably reduces much of the bias of a CEO sending out a link in slack and asking everyone to review.
2 · Benjamin_Todd · 4d
+1, I think that's a good framing of being a dedicate. (And also doesn't say that doing good is all that matters, but rather that it's your most important and driving life project.)
On Deference and Yudkowsky's AI Risk Estimates

I'm a bit confused by both this post and the comments about questions like at what level/time the deference happens.

Speaking for myself, if an internet rando wrote a random blog post called "AGI Ruin: A List of Lethalities," I probably would not read it.  But I did read Yudkowsky's post carefully and thought about it nontrivially, mostly due to his track record and writing ability (rather than e.g. because the title was engaging or because the first paragraph was really well-argued).

Impact markets may incentivize predictably net-negative projects

Fair, though many EAs are probably in positions where they can talk to other billionaires (especially with >5 hours of planning), and probably chose not to do so. 

Impact markets may incentivize predictably net-negative projects

In 2015, when I was pretty new to EA, I talked to a billionaire founder of a company I worked at and tried to pitch them on it. They seemed sympathetic but empirically it's been 7 years and they haven't really done any EA donations or engaged much with the movement. I wouldn't be surprised if my actions made it at least a bit harder for them to be convinced of EA stuff in the future.

In 2022, I probably wouldn't do the same thing again, and if I did, I'd almost certainly try to coordinate a bunch more with the relevant professionals first. Certainly the current generation of younger highly engaged EAs seemed more deferential (for better or worse) and similar actions wouldn't be in the Overton window.

5 · ofer · 11d
Unless ~several people in EA had an opportunity to talk to that billionaire, I don't think this is an example of the unilateralist's curse (regardless of whether it was net negative for you to talk to them).
Impact markets may incentivize predictably net-negative projects

My understanding is that without altruistic end-buyers, the intrinsic value of impact certificates becomes zero and it's entirely a confidence game.

On Deference and Yudkowsky's AI Risk Estimates

Thank you, this clarification makes sense to me! 

On Deference and Yudkowsky's AI Risk Estimates

This critique strikes me as about as sensible as digging up someone's old high-school essays and critiquing their stance on communism or the criminal justice system. I want to remind any reader that this is an opinion from 1999, when Eliezer was barely 20 years old. I am confident I can find crazier and worse opinions for every single leadership figure in Effective Altruism, if I am willing to go back to what they thought while they were in high-school. To give some character, here are some things I believed in my early high-school years

This is really mino... (read more)

Oh, hmm, I think this is just me messing up the differences between the U.S. and German education systems (I was 18 and 19 in high school, and enrolled in college when I was 20). 

I think the first quote on nanotechnology was actually written in 1996 (though maybe updated in 1999), which would put Eliezer at ~17 years old when he wrote it. 

The second quote was I think written in more like 2000, which would put him more in the early college years, and I agree that it seems good to clarify that. 

Seven ways to become unstoppably agentic

Sure that makes more sense to me. I was previously reading "few" as 2-4 times, and was thinking that's way too few times to be asking for help from coworkers total in a week, but a bit too high to be asking (many) specific senior people for help each year.

Seven ways to become unstoppably agentic

My guess is that people should ask their friends/colleagues/acquaintances for help with things a few times a week, and ask senior people they don't know for help with things a few times a year.

Is this a few times each person, or a few times total? It's hard for me to tell because either seems slightly off to me.

I meant like maybe 3-15 times total ("few" was too ambiguous to be a good word choice).

Writing that out maybe I want to change it to 3-30 (the top end of which doesn't feel quite like "a few"). And I can already feel how I should be giving more precise categories // how taking what I said literally will mean not doing enough asking in some important circumstances, even if I stand by my numbers in some important spiritual sense.

Anyway I'm super interested to get other people's guesses about the right numbers here. (Perhaps with better categories.)

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

Have you considered that deworming may be a perpetual need while influencing a decision that motivates a sustainable systemic change [is] a permanent solution? This could justify spending on advocacy, in general.

It's an interesting hypothesis, but I don't think deworming is a perpetual need? I don't think I took deworming pills growing up, and I doubt most Forum readers did. 

Framed another way, I don't think we should have a strong prior belief that if we subsidize health interventions for X years, this means they'll need to be continuously subsidized by t... (read more)

1 · brb243 · 13d
That is true, infrastructure can be built and infections eliminated. That is echoed by the WHO [https://www.who.int/docs/default-source/ntds/soil-transmitted-helminthiases/school-deworming-at-a-glance-2003.pdf?sfvrsn=a93eff88_4] (only some schools are recommended treatment while "sanitation and access to safe water [and] hygiene" can reduce transmission).

I possibly underestimated the facility of reducing contamination and overestimated the inevitability of avoiding contaminants. According to the above sources, sanitation, hygiene, and refraining from using animal fertilizer can reduce contamination. Further [https://my.clevelandclinic.org/health/articles/14072-hookworm-disease], wearing shoes and refraining from spending time in possibly contaminated water reduces the risk of infection by avoiding contaminants. Thus, only people that cannot reasonably avoid spending time in water, such as water-intensive crop farmers, who are in areas with high infection prevalence are at risk, while solutions in other situations are readily available. Since farming can be automated, risk can be practically eliminated entirely.

I agree. According to [https://www.youtube.com/watch?v=UmZ3GmAHsZw&t=2140s] Dr. Gabby Liang, the SCI Foundation currently works on "capacity development within the [program country] ministries." This suggests that international assistance can be eventually phased out. It can also be that most health programs develop capacity and thus make lasting changes, even if they are not specifically targeted at that. Policy changes that are a result of organized advocacy, on the other hand, may be temporary/fragile, also since they are not based on the institution's reasoning or research. So, I could agree with the greater effectiveness of SCI than the cited letter writing (but still would need more information to have either perspective).
A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

Basically, there's a big difference between "OP made a mistake because they over/underrated X" and "OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants."

The synthesis position might be something like "some subset of OP made a mistake because they were subconsciously politically or PR motivated and unintentionally made sub-optimal grants."

I think this is a reasonable candidate hypothesis, and should not be that much of a surprise, all things considered. We're all human.

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

Yeah I mean, no kidding. But it's called Open Philanthropy. It's easy to imagine there exists a niche for a meta-charity with high transparency and visibility. It also seems clear that Open Philanthropy advertises as a fulfillment of this niche as much as possible and that donors do want this.

I don't understand this point. Can you spell it out? 

From my perspective, Open Phil's main legible contribution is a) identifying great donation opportunities, b) recommending Cari Tuna and Dustin Moskovitz to donate to such opportunities, and c) building up an ... (read more)

3 · Agrippa · 5d
Sorry I did not realize that OP doesn't solicit donations from non megadonors. I agree this recontextualizes how we should interpret transparency. Given the lack of donor diversity, tho, I am confused why their cause areas would be so diverse.
7 · Ozzie Gooen · 14d
In fairness, the situation is a bit confusing. Open Phil came from GiveWell, which is meant for external donors. In comparison, as Linch mentioned, Open Phil mainly recommends donations just to Good Ventures (Cari Tuna and Dustin Moskovitz). My impression is that OP's main concern is directly making good grants, not recommending good grants to other funders. Therefore, a large amount of public research is not particularly crucial.

I think the name is probably not quite ideal for this purpose. I think of it more like "Highly Effective Philanthropy"; it seems their comparative advantage / unique attribute is much more their choices of focus and their talent pool, than it is their openness, at this point.

If there is frustration here, it seems like the frustration is a bit more "it would be nice if they could change their name to be more reflective of their current focus", than "they should change their work to reflect the previous title they chose".
Linch's Shortform

A general policy I've adopted recently as I've gotten more explicit* power/authority than I'm used to is to generally "operate with slightly to moderately more integrity than I project explicit reasoning or cost-benefit analysis would suggest." 

This is primarily for epistemics and community epistemics reasons, but secondarily for optics reasons.

I think this almost certainly does risk leaving value on the table, but on balance it is better than the potential alternatives: 

  • Just following explicit reasoning likely leads to systematic biases "
... (read more)
A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

A lot of people, myself included, had relatively weak priors on the effects of marginal imprisonment on crime, and were subsequently convinced by the Roodman report. It might be valuable for people interested in this or adjacent cause areas to commission a redteaming of the Roodman report, perhaps by the City Journal folks?

4 · will_c · 16d
That's an interesting idea. It seems like an effort that would require a lot of subject-matter expertise, so your idea to commission the CJ folks makes sense. I do wonder if cause areas that rely on academic fields which we have reason to believe may be ideologically biased would generally benefit from some red-teaming process.
Apply to join SHELTER Weekend this August

I'm very excited about this and there's a ~70% chance I will be interested in attending assuming it makes sense for me to do so!

The dangers of high salaries within EA organisations

I don't know how much credit/inspiration this should really give people. As you note, the other conditions for EA org work are often better than at external jobs (though this is far from universal). And as you allude to in your post, there are large quality-of-life improvements from working on something that genuinely aligns with my values. At least naively, for many people (myself included) it is selfishly worth quite a large salary cut to do this. Many people both in and outside of EA also take large salary cuts to work in government and academia, sometimes with less direct alignment with their values, and often with worse direct working conditions.

Steering AI to care for animals, and soon

Thanks for the explanation. My impression is that 

  • a lot of the animal activism -> animal agriculture lobby connection is adversarial, so this will be an unusually bad way to do outreach to them.
  • agricultural lobby -> government efforts in AI safety also feels a bit weak to me. I'd be more excited about transferring of efforts/learnings from biosecurity lobbying, or prediction markets, or maybe even global health and development lobbying.
Expected ethical value of a career in AI safety

Thanks, this makes sense!

I do appreciate you (and others) thinking clearly about this, and your interest in safeguarding the future.

Expected ethical value of a career in AI safety

One issue here with some of the latter numbers is that a lot of the work is being done by the expected value of the far future being very high, and (to a lesser extent) by us living in the hinge of history. 

Among the set of potential longtermist projects to work on (e.g. AI alignment, vs. technical biosecurity, or EA community building, or longtermist grantmaking, or AI policy, or macrostrategy), I don't think the present analysis of very high ethical value (in absolute terms) should be dispositive in causing someone to choose a career in AI alignment.
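As a toy sensitivity check on that point (entirely hypothetical numbers, not taken from the post): when the bottom line is roughly "probability your career averts catastrophe times the value of the far future," varying the far-future term moves the answer almost one-for-one; that is the sense in which the far-future assumption is doing a lot of the work.

```python
# Toy sensitivity sketch with hypothetical numbers (not the post's model):
# expected ethical value ~= P(career averts catastrophe) * value of the far future.
p_avert = 1e-7  # assumed marginal probability that one career averts catastrophe
for far_future_value in (1e12, 1e15, 1e18):  # assumed far-future value, arbitrary units
    expected_value = p_avert * far_future_value
    print(f"far future worth {far_future_value:.0e} -> career EV {expected_value:.0e}")
```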

4 · Jordan Taylor · 19d
Yes, that is true. I'm sure those other careers are also tremendously valuable. Frankly I have no idea if they're more or less valuable than direct AI safety work. I wasn't making any attempt to compare them (though doing so would be useful). My main counterfactual was a regular career in academia or something, and I chose to look at AI safety because I think I might have good personal fit and I saw opportunities to get into that area.
Steering AI to care for animals, and soon

Also, you are assuming an erroneous dynamic. Animal welfare is important for AI safety not only because it enables it to acquire diametrically different impact but also since it provides a connection to the agriculture industry, a strategic sector in all nations. Once you have the agri lobbies on board, you speak with the US and Europe, at least, about safety sincerely.

Can you spell out the connection?

3 · brb243 · 18d
Animal welfare 1) relates to animal agriculture, 2) which relates to the agricultural industry lobby, which 3) influences the government.

1) Animal welfare - animal agriculture. Animal welfare advocates form connections with producers in a way in which they have certain influence over their decisions, considering that producers may choose to lose profit for uncertain return.

2) Animal agriculture - agriculture lobby. Companies of various sizes join associations that lobby governments. For example, the chicken producer Mountaire spent [https://www.opensecrets.org/industries/indus.php?Ind=A] almost $6m on lobbying last fiscal year.[1]

3) Agriculture lobby - government. Companies can facilitate introductions of the welfare advocates' wider network through their connections, attach technological and AI safety to their dialogues, or try to get extra benefits by relating their interests to the catchy AI safety topic. The introduction of AI safety through a trusted network[2] can motivate the government to internalize interest in AI safety. Then, AI safety research would be sought out and thus better tailored to current needs and accepted/implemented.

Further, animal welfare advocates develop generalizable skills, know-how, and capacity to influence national and regional decisionmaking. For example, an animal welfare org in Poland influenced a politician soon after their election by a skillful mention of him adhering to various promises but just not being able to summon the welfare one. They also conducted research on him having a pet etc. and tailored the appeal specifically while taking advantage of the timing. These skills can be less developed in AI safety, where academic paper writing, perhaps less accessible to politicians, is prioritized. So, animal welfare can be beneficial to AI safety also in more direct political advocacy.

1. ^ Note that most investments go to conservatives, which is a group po
What are EA's biggest legible achievements in x-risk?

EAs have legible achievements in x-risk-adjacent domains (e.g. a highly cited COVID paper in Science, and Reinforcement Learning from Human Feedback, which was used to power stuff like InstructGPT), and illegible achievements in stuff like field-building and disentanglement research.

However, the former doesn't have a clean connection to actually reducing x-risk, and the latter isn't very legible.

So I think it is basically correct that we have not done legible things to reduce object-level x-risk like cause important treaties to be signed, ban gain-of-function res... (read more)

1 · acylhalide · 19d
Thanks for your reply! This makes sense. Linked post at the end was useful and new for me.
Open Thread: Spring 2022

Up until recently, the vast majority of EA donations came from Open Philanthropy, so you can look at their grants database to get a pretty good sense.

Stephen Clare's Shortform

https://docs.google.com/spreadsheets/d/1vew8Wa5MpTYdUYfyGVacNWgNx2Eyp0yhzITMFWgVkGU/edit#gid=0

needs to grant access.

We might also want to praise users who have a high ratio of highly upvoted comments to posts

One thing that confuses me is that the karma metric probably already massively overemphasizes rather than underemphasizes the value of comments relative to posts. Writing 4 comments that have ~25 karma each probably provides much less value (and certainly takes me much less effort) than writing a post that gets ~100 karma.

Michael_Wiebe's Shortform

I think for moderate to high levels of x-risk, another potential divergence is that while both longtermism and non-longtermism axiologies will lead you to believe that large scale risk prevention and mitigation is important, specific actions people take may be different. For example:
 

  • non-longtermism axiologies will all else equal be much more likely to prioritize non-existential GCRs over existential
  • mitigation (especially worst-case mitigation) for existential risks is comparatively more important for longtermists than for non-longtermists.

Some of the... (read more)

2 · Michael_Wiebe · 20d
Agreed, that's another angle. NTs will only have a small difference between non-extinction-level catastrophes and extinction-level catastrophes (eg. a nuclear war where 1000 people survive vs one that kills everyone), whereas LTs will have a huge difference between NECs and ECs.
Seems impossible to find any EA meetups in SF

You may benefit from joining the Facebook group Bay Area Effective Altruists, which does list some events. It's easier to find events if you're willing to widen your search space to include East Bay and South Bay, though still not amazing. 

For as long as I've been here, the bay area has had substantially fewer publicly accessible EA events than other EA hubs. Most EAs coming to the area find connections via pre-existing personal or professional networks.

This is a Known Issue, but relatively few people try very hard to resolve it. Or more precisely, many pe... (read more)

5 · Chris Leong · 20d
Seems like this should be resolvable by recruiting a new person or two to organise these events. If these people go on to better opportunities, well then they could be replaced again. Constantly replacing people would be some effort, but could be worthwhile if it’s a pipeline to high impact opportunities.
High Impact Medicine, 6 months later - Update & Key Lessons

I think your two comments here are well-argued, internally consistent, and strong. However, I think I disagree with 

As, to a first approximation, reality works in first-order terms

in the context of EA career choice writ large, which I think may be enough to flip the bottom-line conclusion. 

I think the crux for me is that if the differences in object-level impact across people/projects are high enough, then for anybody whose career or project is not in the small subset of the most impactful careers/projects, their object-level impacts will ... (read more)

Most problems fall within a 100x tractability range (under certain assumptions)

I'm curious whether people have thoughts on whether this analysis of problem-level tractability also applies to personal fit. I think many of the arguments here naively seem like they should apply to personal fit as well. Yet many people (myself included) make consequential career- and project-selection decisions based on strong intuitions of personal fit.

This article makes a strong argument that it'd be surprising if tractability (but not importance, or to a lesser degree neglectedness) can differ by >2 OOMs. In a similar vein, I think it'd also ... (read more)

Lifeguards

Minor, but 

Many readers will be familiar with Peter Singer’s Drowning Child experiment:

Should be 

Peter Singer's Drowning Child thought experiment.  

A  "Drowning Child experiment" will be substantially more concerning

What’s the theory of change of “Come to the bay over the summer!”?

You got a lot of flak for this post, and I think many of the dissenting comments were good (I strongly upvoted the top one). I also think some specific points could be better argued, and it'd be quite valuable to have a deeper understanding of the downside risks and where the bottom-line advice is not applicable.

Nonetheless, I think I should mention publicly that I broadly agree with this post. I think the post advances a largely correct bottom-line conclusion, and for the right reasons. I think many EAs in positions to do so, for example undergrads/grad... (read more)

5 · Vaidehi Agarwalla · 21d
I was going to comment pretty much exactly the same thing, thanks for doing the hard work for me :) I think part of what is missing here for me is a bit of the context beforehand:
  • who is saying come to the bay? it seems like this message is shared in specific circles
  • what factors fell into place for you to have a positive experience, where others may not have, e.g. the kinds of thing Chana points out, and Joseph Lemien's comment
  • (this one is unfair, since it's a bit out of scope for the post) how one might actually go about going to the bay. I think it's pretty fuzzy and unclear how to navigate that space unless you already know the relevant people
The dangers of high salaries within EA organisations

I think the reasoning is sound. One caveat on the specific numbers/phrasing:

So whilst I think it's true for some EAs that EA jobs offer slightly less pay [emphasis mine] relative to their other options

To be clear, many of us originally took >>70% pay cuts to do impactful work, including at EA orgs. EA jobs pay more now, but I imagine being paid <50% of what you'd otherwise earn elsewhere is still pretty normal for a fair number of people in meta and longtermist roles.

2 · James Ozden · 23d
Thanks for the correction - I'll edit this in the comment above as I agree my phrasing was too weak. Apologies as I didn't mean to underplay the significance of the pay cut and financial sacrifice yourself and others took - I think it's substantial (and inspiring).
The dangers of high salaries within EA organisations

I agree with the rest of your comparisons but I think this one is suspect:

Compare the salaries of ETG EAs with non-ETG EAs that are otherwise as similar as possible, e.g. a quant researcher at Jane Street vs one at Redwood Research. Usually, I think the ETG EAs earn more.

"Pure" ETG positions are optimized for earning potential, so we should expect them to be systematically more highly paid than other options. 

The dangers of high salaries within EA organisations

There's one example comparison here and to clarify I think this is most true for more meta/longtermist organisations, as salaries within animal welfare (for example) are still quite low IMO[...] Rethink Priorities 

Please note that Rethink Priorities, where I work, has the same salary band across cause areas.

4 · James Ozden · 23d
Ah yes that's definitely fair, sorry if I was misrepresenting RP! I wasn't referring to intra-organisation when I made that comment, but I was thinking more across organisations like The Humane League / ACE vs 80K/CEA.
Holly_Elmore's Shortform

What's so weird to me about this is that EA has the clout it does today because of these frank discussions. Why shouldn't we keep doing that?

I think the standard thing is for many orgs and cultures to start off open and transparent and move towards closedness and insularity. There are good object-level reasons for the former, and good object-level reasons for the latter, but taken as a whole, it might just be better viewed as a lifecycle thing rather than a matter of principled argument.

Open Phil is an unusually transparent and well-documented example in my mind (though perhaps this is changing again in 2022).

6 · Holly_Elmore · 12d
I can see good reasons for individual orgs to do that, but way fewer for EA writ large to do this. I'm with Rob Bensinger on this [https://twitter.com/robbensinger/status/1538911393671356416?s=20&t=h9isAoR0789TYGaNcPBnMQ] .
AGI Ruin: A List of Lethalities

At least for me, I thought we should avoid talking about the pivotal act stuff through a combination of a) this is obviously an important candidate hypothesis but seems bad to talk about because then the Bad Guys will Get Ideas, and b) other people who're better at math/philosophy/alignment presumably know this and are privately considering it in detail, so I have only so much to contribute here.

b) is plausibly a dereliction of duty, as is my relative weighting of the terms, but at least in my head it wasn't (isn't?) obvious to me that it was wrong for me not to spend a ton of time thinking about pivotal acts.

9 · RobBensinger · 25d
I think that makes sense as a worry, but I think EAs' caution and reluctance to model-build and argue about this stuff has turned out to do more harm than good, so we should change tactics. (And we very probably should have done things differently from the get-go.)

If you're worried that it's dangerous to talk about something publicly, I'd start off by thinking about it privately and talking about it over Signal with friends, etc. Then you can progress to contacting more EAs privately, then to posting publicly, as it becomes increasingly clear "there's real value in talking about this stuff" and "there's not a strong-enough reason to keep quiet". Step one in doing that, though, has to be a willingness to think about the topic at all, even if there isn't clear public social proof that this is a normal or "approved" direction to think in.

I think a thing that helps here is to recognize how small the group of "EA leaders and elite researchers" is, how divided their attention is between hundreds of different tasks and subtasks, and how easy it is for many things to therefore fall through the cracks or just-not-happen.
Jobs at EA-organizations are overpaid, here is why

I upvoted this even though I strongly disagree with it. 

  • I think this is an underrated point in the current discourse
  • I expect it to be systematically unsaid or underrated for pretty obvious incentive-related reasons.
  • The language was not overly flowery/emotional/blame-oriented

____

(However, for other readers, just in case it needs saying: please make your own independent assessment of whether this post is overall worthwhile*). 

*One thing I dislike is overcorrection for "oh no I might be biased for liking/not liking this post, so I can't downvote it"

What’s the theory of change of “Come to the bay over the summer!”?

(strongly upvoted because I think this is a clean explanation of what I think is an underrated point at the current stage, particularly among younger EAs).

Breaking Up Elite Colleges

I agree with others here that it's not clear whether undifferentiated scientific progress is good or bad at the current margin. 

However, assuming scientific progress is good, I'm also not convinced that breaking up elite colleges will increase scientific progress. Some counterpoints:

  • Having the smartest people in the same room might increase net scientific progress
  • Giving them resources is probably good
  • (less certain) there might be increasing returns to scale, like maybe having 100 supersmart people in the same place is better than 20 places with 5 supe
... (read more)
2 · MaxRa · 21d
I suppose all your points would be satisfied as long as the breaking up of colleges happens in a (to me) pretty reasonable way, e.g. by not forcing the new colleges to stay small and non-elite? I understood the main benefit of this to be to remove the current possibly suboptimal college administrations and to replace them with better management that avoids current problems.
Breaking Up Elite Colleges

Scientific progress has been the root of so much progress, I think we should have a strong prior that more of it is good!

See discussion here.

The Strange Shortage of Moral Optimizers

Also cost-effectiveness analyses in general, of which only a subset is in EA.

What does ‘one dollar of value’ mean?

I don't use this framing very often because I think it confuses more than enlightens, but I roughly mean something similar to #3:

 3. I value this action roughly equivalently to "EA coffers" increasing by ~$10k. 

3 · Jelle Donders · 16d
Agreed, this appears to be the most neutral interpretation. Since the marginal value of increasing "EA coffers" depends on what EA as a whole spends its money on, it could function as a pretty useful metric for intuitively communicating value across cause areas imo. A disadvantage might be that it's not a very concrete metric, unlike something like the QALY. Additionally, someone needs to have a somewhat accurate understanding of what the funding distribution in EA looks like (and what the funding at the margin is being used on!) for this metric to make any sense.
"Big tent" effective altruism is very important (particularly right now)

Personally, I primarily downvote posts/comments where I generally think "reading this post/comment will on average make forum readers be worse at thinking about this problem than if they didn't read this post/comment, assuming that the time spent reading this post/comment is free."

I basically never strong downvote posts unless it's obvious spam or otherwise an extremely bad offender in the "worsens thinking" direction. 
