All of Owen Cotton-Barratt's Comments + Replies

Deferring

Using votes to push a post towards the score we think it should be at sounds worse than just individually voting according to some threshold of how good/helpful/whatever a post needs to be? I'm worried about zero-sum (so really negative-sum, because of the effort) attempts to move karma around, where different people are pushing in different directions and it's hard to know how to interpret the results, compared to people straightforwardly voting without regard to others' votes.

At least, if we should be voting to push things towards our best guess I think the karma system should be reformed to something that plays nice with that -- e.g. each individual gives their preferred score, and the displayed karma is the median.
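As a minimal sketch of the kind of reform I have in mind (purely illustrative; the function name and numbers are hypothetical, not anything the Forum actually implements):

```python
# Illustrative only: median-based karma aggregation, where each voter submits
# the score they think the post should end up at.
from statistics import median

def displayed_karma(preferred_scores: list[float]) -> float:
    """Return the displayed karma as the median of voters' preferred scores."""
    if not preferred_scores:
        return 0.0
    return median(preferred_scores)

# E.g. three voters who think the post deserves 2, 10, and 40 karma:
print(displayed_karma([2, 10, 40]))  # -> 10
```

One nice property of the median here is that a single voter over- or under-voting strategically can't drag the displayed score past the next voter's preferred value.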

4 calebp 3d
(I think that the pushing-towards-a-score thing wasn't a crux in downvoting; I think there are lots of reasons to downvote things that aren't harmful, as outlined in the 'how to use the forum' post / moderator guidelines.) I think that karma is supposed to be a proxy for the relative value that a post provides. I'm not sure what you mean by zero-sum here, but I would have thought that the control-system-type approach is better, as the steady-state values will be pushed towards the mean of what users see as the true value of the post. I think that this score plus the total number of votes is quite easy to interpret. The everyone-voting-independently thing performs poorly when some posts have many more views than others (so it seems to be tracking something more like how many people saw it and liked it, rather than whether the post is high quality). I think I misunderstand your concern, but the control-system approach seems, on the surface, much better to me; I am keen to find the crux here, if there is one.
Some potential lessons from Carrick’s Congressional bid

Although early fundraising could be correlated with success rather than causing it, if it's an indicator of who can generate support from the electorate.

(I'd be pretty confident there's an effect like this but don't know how strong, and haven't tried to understand if the article you're quoting from tries to correct for it.)

Deferring

I sort of guess the second thing? Although I never downvoted, I at least felt a little defensive and negative about "tone-deaf, indeed chilling", and didn't upvote despite having found your comment useful!

(I've now noticed the discrepancy and upvoted it)

Deferring

On my impressions: relative to most epistemic communities I think EA is doing pretty well. Relative to a hypothetical ideal I think we've got a way to go. And I think the thing is good enough to be worth spending perfectionist attention on trying to make excellent.

Some (controversial) reasons I'm surprisingly optimistic about the community:

1) It's already geographically and social-network bubbly and explores various paradigms.

2) The social status gradient is aligned with deference at the lower levels, and differentiation at the higher levels (to some extent). And as long as testimonial evidence/deference flows downwards (where they're likely to improve opinions), and the top-level tries to avoid conforming, there's a status push towards exploration and confidence in independent impressions.

3) As long as deference is... (read more)

Deferring

Communication channels which allow for lots of information and context to flow back and forth between people. e.g. if I read an article and then go to enact the plan described in the article, that's low-bandwidth. If I sit down with the author for three hours and interrogate them about the reasoning and ask what they think about my ideas for possible plan variations, that's high-bandwidth. 

Deferring

I vibe with the sentiment "particularly uncomfortable with people in the meta space deferring to authority", but I think it's too strong. e.g. I think it's valuable for people to be able to organize big events, and delegate tasks among the event organizers. 

Maybe I'm more like "I feel particularly uncomfortable with people in the meta space deferring without high bandwidth, and without explicit understanding that that's what they're doing".

5 Vaidehi Agarwalla 9d
I think the important thing with delegation, which Howie pointed out, is that in the example you gave of event organising there is a social contract between the volunteer / volunteer manager or employer / contractor, where I'd expect that in the process of choosing to sign up for this job, the person makes a decision based on their own thinking (or epistemic deference) to contribute to this event - I think this is what you mean by high bandwidth? If so, I feel in agreement with the statement: "I feel particularly uncomfortable with people in the meta space delegating choice without high bandwidth, and without explicit understanding that that's what they're doing"
Deferring

I'm fine with junior ops people at an AI org being not really at all bought into the specific research agenda.

I'm fine with senior technical people not being fully bought in -- in the sense that maybe they think if it were up to them a different agenda would be slightly higher value, or that they'd go about things a slightly different way. I think we should expect that people have slightly different takes, and don't get the luxury of ironing all of those differences out, and that's pretty healthy. (Of course I like them having a go at discussing differences of opinion, but I don't think failure to resolve a difference means that they need to adopt the party line or go find a different org.)

2 Vaidehi Agarwalla 9d
That makes sense, and feels mostly in line with what I would imagine. Maybe this is a small point (since there will be many more junior than senior roles in the long run): I feel like the senior group would likely join an org for many other reasons than deference to authority (e.g. not wanting to found an org themselves, wanting to work with particular people they feel they could get a good work environment from, or because of epistemic deference). It seems like in practice those would be much stronger motivating reasons than authority, and I'm having a hard time picturing someone doing this in practice.
Deferring

[Without implying I agree with everything ...]

This comment was awesome, super high density of useful stuff. I wonder if you'd consider making it a top level post?

5 Emrik 9d
Thanks<3 Well, I've been thinking about these things precisely in order to make top-level posts, but then my priorities shifted because I ended up thinking that the EA epistemic community was doing fine without my interventions, and all that remained in my toolkit was cool ideas that weren't necessarily usefwl. I might reconsider it. :p Keep in mind that in my own framework, I'm an Explorer, not an Expert. Not safe to defer to.
Deferring

No, that doesn't work, because epistemic deferring is also often about decisions; in fact one of the key distinctions I want to make is that when someone is deferring on a decision, that can be for epistemic or authority reasons, and those look different.

I agree it's slightly awkward that authorities often delegate, but I think that that's usually delegating tasks; "delegating choices" to me has much less connotation of a high-status person delegating to a low-status person.

Although ... one of the examples of "deferring to authority" in my sense is a ... (read more)

2 Vaidehi Agarwalla 9d
Just to make sure I understand correctly: is "delegating choice" "delegating a choice (of an action to be made)"? If so, I think this is a much better phrase, at least, than deferring to authority, and would even propose editing the OP to suggest this as an alternative phrase / address this so that others don't get the wrong impression - based on our conversation it seems we have more agreement than I would have guessed from reading the OP alone.
2 HowieL 9d
Yeah that does sell me a bit more on delegating choice.
Deferring

Perhaps "deferring on views" vs "delegating choices" ?

2 HowieL 9d
I think that's an improvement though "delegating" sounds a bit formal and it's usually the authority doing the delegating. Would "deferring on views" vs "deferring on decisions" get what you want?
Deferring

"Deferring to experts" carries the wrong meaning, I think? At least to me that sounds more like epistemic deferring.

An alternative to "deferring to authority" a couple of people have suggested to me is "delegating", which I sort of like (although maybe it's confusing if one of the paradigm examples is delegating to your boss).

8 Vaidehi Agarwalla 9d
In light of the other discussions, delegating choice seems better than deferring to experts.
Deferring

Interesting, thanks.

So, my immediate reaction is that I can feel that kind of concern, but I think "see the truth, obey your leaders" is exactly the kind of dynamic I'm worried about! & then I'm trying to help avoid it by helping to disambiguate between epistemic deferring and deferring to authority (because conflating them is where I think a lot of the damage comes from).

So then I'm wondering if I've made some bad branding decisions (e.g. should I have used a different term for what I called "deferring to authority"? It's meant to evoke that someo... (read more)

6 MichaelPlant 9d
Okay, well, just to report that what you said by way of clarification was reassuring but not what I picked up originally from your post! I agree with Vaidehi below that an issue was a lack of specificity, which led to me reading it as a pretty general comment. Reading your other comments, it seems what you're getting at is a distinction between trusting that someone is right without understanding why vs just following their instructions. I agree that there's something there: to e.g. run an organisation, it's sometimes impractical or unnecessary to convince someone of your entire worldview vs just ask them to do something.

FWIW, what I see lots of in EA, what worries me, and what I was hoping your post would be about, is that people defer so strongly to community leaders that they refuse to even engage with object-level arguments against whatever it is that community leaders believe. To draw from a personal example, quite often when I talk about measuring wellbeing, people will listen and then say something to the effect of "what you say seems plausible, I can't think of any objections, but I'm going to defer to GiveWell anyway". Deferring may have a time and a place, but presumably we don't want deference to this extent.

(fwiw I upvoted this post, because I thought it raised a lot of interesting points that are worth discussing, despite disagreeing with some bits).

In sum: I think your post sometimes lacks specificity which makes people think you're talking more generally than (I suspect) you are.

  1. Who exactly are you proposing doesn't buy into the agenda? This is left vague in your post. Are you envisioning 20% of people? 50%? What kinds of roles are these folks in? Is it only junior-level non-technical roles, or even mid-managers doing direct work?

Those details matter because ... (read more)

5 Vaidehi Agarwalla 9d
"Deferring to experts" might be a less loaded term. Also defining what experts are especially for a lot of EA fields that are newer and less well established could help.
Deferring

Yeah I briefly alluded to this but your explanation is much more readable (maybe I'm being too terse throughout?).

My take is "this dynamic is worrying, but seems overall less damaging than deferral interfering with belief formation, or than conflation between epistemic deferring and deferring to authority".

1 calebp 12d
I think I roughly agree, although I haven't thought much about the epistemic vs authority deferring thing before. Idk if you were too terse, it seemed fine to me. That said, I would have predicted this would be around 70 karma by now, so I may be poorly calibrated on what is appealing to other people.
Deferring

I think this is getting downvotes and I'm curious whether this is because:

  1. People are disagreeing with the conclusions?
  2. It's poorly explained/confusing?
  3. Something about tone is rubbing people the wrong way?
  4. Something else?
7 MichaelPlant 10d
[Writing in a personal capacity, etc.] I found this post tone-deaf, indeed chilling, when I read it, in light of the current dynamics of the EA movement. I think it's the combination of:

(1) lots of money appearing in EA [https://forum.effectivealtruism.org/posts/cfdnJ3sDbCSkShiSZ/ea-and-the-current-funding-situation] (with the recognition this might be a big problem for optics and epistemics [https://forum.effectivealtruism.org/posts/HWaH8tNdsgEwNZu8B/free-spending-ea-might-be-a-big-problem-for-optics-and] and there are already 'bad omens' [https://forum.effectivealtruism.org/posts/xomFCNXwNBeXtLq53/bad-omens-in-current-community-building])

(2) the central bits of EA seeming to obviously push an agenda (EA being 'just longtermism' [https://forum.effectivealtruism.org/posts/LRmEezoeeqGhkWm2p/is-ea-just-longtermism-now-1] now, with CEA's CEO, Max Dalton, indicating their content will be "70-80% longtermism"; CEA's Julia Wise is suggesting people shouldn't talk to high net worths themselves, but should funnel them towards LongView [https://forum.effectivealtruism.org/posts/zgHWeMBPnCMvdoZvz/ea-will-likely-get-more-attention-soon])

(3) this post then saying people should defer to authority.

Taken in isolation, these are somewhat concerning. Taken together, they start to look frightening - of the flavour, "join our elite society, see the truth, obey your leaders". I am pretty sure anyone reading this will agree that this is not how we want EA either to be or to be perceived to be. However, things do seem to be moving in that direction, and I don't think this post helped - sorry, Owen, I am sure you wrote it with the best of intentions. But the road to hell, pavements, etc.
Why Helping the Flynn Campaign is especially useful right now

Thanks! Constructive suggestions about good things to do seem great.

I think Carrick is getting a lot of support from a combination of making crucial issues like pandemic preparedness priorities, and also benefiting from reputation networks here (so people are justifiably confident that he isn't going to be in it for himself or giving out political favours, which is just a super-important dimension). It's certainly plausible that McLeod-Skinner's campaign is a great opportunity to help out with, but my personal impression is that you haven't (yet) made a co... (read more)

Why Helping the Flynn Campaign is especially useful right now

I might well be wrong (in which case hopefully someone will correct me), but my understanding is that your second $2,900 would be restricted for use in the general election so couldn't be spent on the remainder of the primary. (Earlier in the campaign there were indirect benefits of total money raised, but it's probably a bit late in the day for those.)

EA and the current funding situation

FWIW I've been trying to fly business class for transatlantic flights for a few years for these reasons. I think it's an unusually big effect size for me because otherwise long-haul flights play badly with my chronic fatigue and can cost me effectively >1 day, but I expect that many people would get a few hours' worth of extra productive time (I take advantage of both the lie-flat bed and the good work environment for writing that doesn't need internet).

I've felt weird about expensing it so mostly just been paying for it myself (I don't have many other bi... (read more)

What are the coolest topics in AI safety, to a hopelessly pure mathematician?

Cool!

My opinionated takes for problem solvers:

(1) Over time we'll predictably move in the direction from "need theory builders" to "need problem solvers", so even if you look around now and can't find anything, it might be worth checking back every now and again.

(2) I'd look at ELK now for sure, as one of the best and further-in-this-direction things.

(3) Actually many things have at least some interesting problems to solve as you get deep enough. Like I expect curricula teaching ML to very much not do this, but if you have mastery of ML and are trying to a... (read more)

6 Jenny K E 17d
The point about checking back in every now and then is a good one; I had been thinking in more binary terms and it's helpful to be reminded that "not yet, maybe later" is also a possible answer to whether to do AI safety research. I like logic puzzles, and I like programming insofar as it's like logic puzzles. I'm not particularly interested in e.g. economics or physics or philosophy. My preferred type of problem is very clear-cut and abstract, in the sense of being solvable without reference to how the real world works. More "is there an algorithm with time complexity Y that solves math problem X" than "is there a way to formalize real-world problem X into a math problem for which one might design an algorithm." Unfortunately AI safety seems to be a lot of the latter!
What are the coolest topics in AI safety, to a hopelessly pure mathematician?

To me they feel like pre-formal math? Like the discussion of corrigibility gives me a tingly sense of "there's what on the surface looks like an interesting concept here, and now the math-y question is whether one can formulate definitions which capture that and give something worth exploring".

(I definitely identify more with the "theory builder" of Gowers's two cultures.)

3 Jenny K E 18d
Ah, that's a good way of putting it! I'm much more of a "problem solver."
3 Max_Daniel 18d
(Terry Tao's distinction between 'pre-rigorous', 'rigorous', and 'post-rigorous' maths [https://terrytao.wordpress.com/career-advice/theres-more-to-mathematics-than-rigour-and-proofs/] might also be relevant.)

The second and third strike me as useful ideas and kind of conceptually cool, but not terribly math-y; rather than feeling like these are interesting math problems, the math feels almost like an afterthought. (I've read a little about corrigibility before, and had the same feeling then.) The first is the coolest, but also seems like the least practical -- doing math about weird simulation thought experiments is fun but I don't personally expect it to come to much use.

Thank you for sharing all of these! I sincerely appreciate the help collecting data about how existing AI work does or doesn't mesh with my particular sensibilities.

Should we buy coal mines?

Makes sense; thanks for flagging. I'm tempted to conclude "robustly a bad idea".

Maybe the parameter that I can most imagine someone pushing on to make it look better is that I'm assuming 5% of mineable coal will stay in the ground on default trajectories, and you might think it would be significantly less than that. I don't think this would make it look better than generic clean energy R&D, but it's not impossible (my cost-effectiveness estimate is >1000x below where I'd put the threshold for interventions I'm excited about, so it seems pretty much impossible for it to reach that if my calc is currently skewing optimistic in places).

Most problems fall within a 100x tractability range (under certain assumptions)

Thanks, really like this point (which I've kind of applied many times but not had such a clean articulation of).

I think it's important to remember that the log returns model is essentially an ignorance prior[1]. If you understand things about the structure of the problem at hand you can certainly depart from it. e.g. when COVID emerged, nobody had spent any time trying to find and distribute a COVID vaccine. But it will be obvious that going from $1 million to $2 million spent won't give you an appreciable chance of solving the problem (since there's no wa... (read more)
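For concreteness, here's the basic shape of a log-returns model; this is my own minimal formulation for illustration, not necessarily the exact one in the post under discussion:

```latex
% Minimal log-returns model (illustrative assumption): value from cumulative
% spending S, with a single constant a > 0.
V(S) = a \log S
\quad \Rightarrow \quad
V(2S) - V(S) = a \log 2 \quad \text{for every } S .
```

Under this prior every doubling of spend buys the same increment of value, whether you go from $1 million to $2 million or from $100 million to $200 million; structural knowledge like the COVID example above is exactly what licenses overriding that default.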

Should we buy coal mines?

Thanks John. I happen to have done a BOTEC on this; I'll post it here b/c this seems like a canonical place for conversation. It's pretty scrappy, and you shouldn't feel obliged to respond, but I'd be interested to know if you think it's going wrong anywhere (I think my bottom-line is slightly more negative than your "might be a good option for donors who would otherwise have lower impact").

  • What does success look like?
    • Large amounts of fossil fuels remain in the ground but accessible for re-industrialization in the event of civilizational collapse
    • Mostly tar
... (read more)
4 Timothy_Liptrot 20d
What's a BOTEC?

FWIW, your calculation still seems optimistic to me, e.g. assuming quite a high elasticity (cost of coal is not such an important part of the cost of producing electricity with coal) and, if I understand your reasoning correctly, a fairly high chance of additionality (by default, coal is in structural decline globally).

5 John G. Halstead 22d
Thanks for this. Yeah, I haven't crunched the numbers on cost per microdoom and probs don't have time to go through your calcs.
Longtermist EA needs more Phase 2 work

Yeah. I don't have more understanding of the specifics than are given on that grant page, and I don't know the theory of impact the grantmakers had in mind, but it looks to me like something that's useful because it feeds into future strategy by "our crowd", rather than because it will have outputs that put the world in a generically better position.

Longtermist EA needs more Phase 2 work

Here's a quick and dirty version: I took the OP grant database, and for grants from the last 9 months categorized them first by whether they seemed to be motivated by longtermist considerations, and then, for the ones I marked as yes, by what phase they seemed to be.

Of 24 grants I thought were unambiguously longtermist, there was 1 I counted as unambiguously Phase 2. There were a couple more I thought might well be Phase 2, and a handful which might be Phase 2 (or have Phase 2 elements) but I was sort of sceptical, as well as three more which were unambiguously... (read more)

2 Benjamin_Todd 1mo
Super helpful, thank you! Just zooming in on the two biggest ones. One was CSET, where I think I understand why it's Phase 1. The other is this one: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/massachusetts-institute-of-technology-ai-trends-and-impacts-research-2022 Is this Phase 1 because it's essentially an input to future AI strategy?
Against immortality?

Thanks for voicing the frustration!

I regarded the post not really as a point about cause prioritization (I agree longevity research doesn't get much attention, and I think possibly it should get more), but about rhetoric. "Defeating death" seems to be a common motif e.g. in various rationalist/EA fic, or the fable of the dragon tyrant. I just wanted some place which assembled the arguments that make me feel uneasy about that rhetoric. I agree that a lot of my arguments are not really novel intellectual contributions (just "saying the obvious things") and t... (read more)

2 Will Bradshaw 1mo
Thanks, Owen! I do feel quite conflicted about my feelings here, appreciate your engagement. :) Yeah, I agree with this -- ultimately it's on those of us more on the pro-immortality side to make the case more strongly, and having solid articulations of both sides is valuable. Also flagging that this... ...seems roughly right to me.
Against immortality?

I certainly feel like it's a very stakesy decision! This is somewhere where a longtermist perspective might be more hesitant to take risks that jeopardize the entire future to save billions alive today.

I also note that your argument applies to past cases too. I'm curious in what year you guess it would first have been good to grant people immortality?

(As mentioned in the opening post, I'm quite confused about what's good here.)

5 Matt Brooks 1mo
I agree, it feels like a stakesy decision! And I'm pretty aligned with longtermist thinking; I just think that "entire future at risk due to totalitarian lock-in due to removing death from aging" seems really unlikely to me. But I haven't really thought about it too much, so I guess I'm really uncertain here, as we all seem to be. I kind of reject the question due to 'immortality', as that isn't the decision we're currently faced with (unless you're only interested in this specific hypothetical world). The decision we're faced with is: do we speed up anti-aging efforts to reduce age-related death and suffering? You can still kill (or incapacitate) people that don't age; that's my whole point about great minds vs. dictators. But to consider the risks in the past vs today: before the internet and modern society/technology/economy it was much much harder for great minds to coordinate against evils in a global sense (thinking of the Cultural Revolution as you mentioned). So my "great-minds counter dictators" theory doesn't hold up well in the past, but I think it does in modern times. The population 200 years ago was 1/8 what it is today and growing much more slowly, so the premature deaths you would have prevented per year with anti-aging would have been far fewer than today, so you get less benefit. The general population's sense of morals and demand for democracy is improving, so I think the tolerance for evil/totalitarianism is dropping fairly quickly. So you'd have to come up with an equation with at least the following:
- How many premature deaths you'd save with anti-aging
- How likely, and in what numbers, people in general will oppose totalitarianism
- If there was opposition, how easily the global good could coordinate to fight totalitarianism
- If there was coordinated opposition, whether their numbers/resources would outweigh the numbers/resources of totalitarianism
- If the coordinated opposition was to fail, how long this totalitarian society would last (could it last f
Against immortality?

I agree that probably you'd be fine starting today, and it's a much safer bet than starting 1,000 years ago, but is it a safer bet than waiting say another 200 years?

I'd be concerned about dictators inciting violence against precisely the people they most perceive as threats. e.g. I don't know the history of the Cultural Revolution well, but my impression is that something like this happened there.

1 Matt Brooks 1mo
The thing that's hard to internalize (at least I think) is that by waiting 200 years to start anti-aging efforts you are condemning billions of people to an early death with a lifespan of ~80 years. You'd have to convince me that waiting 200 years would reduce the risk of totalitarian lock-in so much that it offsets billions of lives that would be guaranteed to "prematurely end". Totalitarian lock-in is scary to think about and billions of people's lives ending prematurely is just text on a screen. I would assume that the human brain can easily simulate the everyday horror of a total totalitarian world. But it's impossible for your brain to digest even 100,000,000 premature deaths, forget billions and billions.
Against immortality?

It's a good point that by default you'd be extending all the great minds too. Abstractly I was tracking this, but I like calling it out explicitly.

& I agree with the trend that we're improving over time. But I worry that if we'd had immortality for the last thousand years maybe we wouldn't have seen the same degree of improvement over time. The concern is if someone had achieved global dictatorship maybe that would have involved repressing all the smart good people, and preventing coordination to overthrow them.

9 Matt Brooks 1mo
But we're not debating if immortality over the last thousand years would have been better or not, we're looking at current times and then estimating forward, right? (I agree a thousand years ago immortality would have been much much riskier than starting today) In today's economy/society great minds can instantly coordinate and outnumber the dictators by a large margin. I believe this trend will continue and that if you allow all minds to continue the great minds will outgrow the dictator minds and dominate the equation. Dictators are much more likely to die (not from aging) than the average great mind (more than 50x?). This means that great minds will continue to multiply in numbers and resources while dictators sometimes die off (from their risky lifestyle of power-grabbing). Once there are 10,000 more brilliant minds with 1,000x more resources than the evil dictators how do you expect the evil dictator to successfully power grab a whole country/the whole world?
Against immortality?

Good questions! I could give answers but my error bars on what's good are enormous.

(I do think my post is mostly not responding to whether longevity research is good, but to what the appropriate attitudes/rhetoric towards death/immortality are.)

Against immortality?

Re. term limits on jobs, I think this is a cool idea. But I don't know that I'd expect that to be implemented, which makes me want to disambiguate between the questions:

  1. Would the ideal society have immortality?
  2. Would immortality make our society better?

My guesses would be "yes" to 1, and a very tentative "no" to 2. Of course if there were a now-or-never moment of choosing whether to get immortality, one might still like to have it now, but it seems like maybe we'd ideally wait until society is mature enough that it can handle immortality well before granting it.

4 Linch 1mo
I don't have well-formed views on these questions myself, but yeah, I think #2 is a more important question than #1 right now.
Against immortality?

I hadn't seen this discussion, thanks! I find the dictator data somewhat reassuring, but only somewhat. That's because I care not about the average-case dictator, but about the tail of dictators having power for a long time. And if say 2% of dictators are such that they'd effectively work out how to have an ironclad grasp of their country that would persist for 1000+ years, I don't really expect our data to be rich enough to be able to pull out that structure.

When thinking about the tail of dictators don't you also have to think of the tail of good people with truly great minds you would be saving from death? (People like John von Neumann, Benjamin Franklin, etc.)

Overall, dictators are in a very tough environment with power struggles and backstabbing, lots of defecting, etc. while great minds tend to cooperate, share resources, and build upon each other.

 Obviously, there are a lot more great minds doing good than 'great minds' wishing to be world dictators. And it seems to trend in the right direction. Com... (read more)

Longtermist EA needs more Phase 2 work

Thanks for this; it made me notice that I was analyzing Chris's work more in far mode and Redwood's more in near mode. Maybe you're right about these comparisons. I'd be interested to understand whether/how you think the adversarial training work could most plausibly be directly applied (or if you just mean "fewer intermediate steps till eventual application", or something else).

UK University Admissions Support Programme

Nice! I'm particularly excited by the emphasis on support with choosing degree courses. I think this is important and really underprovided in general.

Have you thought about framing the programme as about helping people who want to have a positive impact with their work rather than about helping young EAs? I'm a little worried about community effects if "joining EA" comes to be perceived as a way to get generic boosts to one's career (and that people who join in circumstances where they didn't really let themselves think about why they might not want to will be worse long-term contributors than if they had space to think clearly about it). But maybe I'm missing some advantages of framing in terms of EAs.

5 Hannah Rowberry 1mo
To update, the EA terminology was actually becoming problematic already for referrals for students engaged with outreach orgs, so I've updated the language both within the post and related materials to be more inclusive - many thanks for your insights on this Owen!
4 Hannah Rowberry 1mo
Thanks Owen! And thank you for raising this, it’s been something I’ve been thinking on, and I agree a broader ‘positive impact’ framing could be better on a number of levels, and likely something I’d be looking to do if I scale to a larger project. My current reasoning is more simply about practicalities for a small-scale project (just me & my spare time!), with recruiting & screening students more broadly being less operationally feasible at this point.
Longtermist EA needs more Phase 2 work

Re. Gripe #3 (/#3.5): I also think AI stuff is super important and that we're mostly not ready for Phase 2 stuff. But I'm also very worried that a lot of work people do on it is kind of missing the point of what ends up mattering ...

So I think that AI alignment etc. would be in a better place if we put more effort into Phase 1.5 stuff. I think that this is supported by having some EA attention on Phase 2 work for things which aren't directly about alignment, but affect the background situation of the world and so are relevant for how well AI goes. Having t... (read more)

Longtermist EA needs more Phase 2 work

Re. Gripe #2: I appreciate I haven't done a perfect job of pinning down the concepts. Rather than try to patch them over now (I think I'll continue to have things that are in some ways flawed even if I add some patches), I'll talk a little more about the motivation for the concepts, in the hope that this can help you to triangulate what I intended:

  • I think that there's a (theoretically possible) version of EA which has become sort of corrupt, and continues to gather up resources while failing to deploy them for good ends
    • I think keeping a certain amount of P
... (read more)
1 Aaron_Scher 1mo
Thanks for the clarification! I would point to this recent post [https://www.lesswrong.com/posts/Ai28GyiB3GpGGoaWs/ineffective-altruism] on a similar topic to the last thing you said.
Longtermist EA needs more Phase 2 work

Jeff's comment (and my reply) covers ~50% of my response here, with the remaining ~50% splitting as ~20% "yeah you're right that I probably have a sampling bias" and ~30% "well we shouldn't be expecting all the Phase 2 stuff to be in this explicitly labelled core, but it's a problem that it's lacking the Phase 1.5 stuff that connects to the Phase 2 stuff happening elsewhere ... this is bad because it means meta work in the explicitly labelled parts is failing to deliver on its potential".

Longtermist EA needs more Phase 2 work

Yeah I did mean "longtermist EA", meaning "stuff that people arrived at thinking was especially high priority after taking a long hard look at how most of the expected impact of our actions is probably far into the future and how we need to wrestle with massive uncertainty about what's good to do as a result of that".

I was here imagining that the motivation for working on Wave wasn't that it seemed like a top Phase 2 priority from that perspective. If actually you start with that perspective and think that ~Wave is one of the best ways to address it, then ... (read more)

3 lincolnq 1mo
Thanks. I definitely can't count Wave in that category because longtermism wasn't a thing on my radar when Wave was founded. Anyway, I missed that in your original post and I think it somewhat invalidates my point; but only somewhat.
Longtermist EA needs more Phase 2 work

I agree with quite a bit of this. I particularly want to highlight the point about combo teams of drivers and analytical people — I think EA doesn't just want more executors, but more executor/analyst teams that work really well together. I think that because of the lack of feedback loops on whether work is really helpful for longterm outcomes we'll often really need excellent analysts embedded at the heart of execute-y teams. So this means that as well as cultivating executors we want to cultivate analyst types who can work well with executors.

Longtermist EA needs more Phase 2 work

That seems archetypically Phase 1 to me? (There's a slight complication about the thing being recruited to not quite being EA)

But I also think most people doing Phase 1 work should stay doing Phase 1 work! I'm making claims about the margin in the portfolio.

4 Peter S. Park 1mo
Thanks so much for your comment, Owen! I really appreciate it. I was under the impression (perhaps incomplete!) that your definition of "phase 2" was "an action whose upside is in its impact," and "phase 1" was "an action whose upside is in reducing uncertainty about what is the highest impact option for future actions." I was suggesting that I think we already know that recruiting people away from AI capabilities research (especially into AI safety) has a substantially high impact, and this impact per unit of time is likely to improve with experience. So pondering without experientially trying it is worse for optimizing its impact, for reducing uncertainty.
Longtermist EA needs more Phase 2 work

I meant to include both (A) and (B) -- I agree that (A) is a bottleneck right now, though I think doing this well might include some reasonable fraction of (B).

Longtermist EA needs more Phase 2 work

One set of examples is in this section of another post I just put up (linked from a footnote in this post), but that's pretty gestural / not complete.

I think that for lots of this alignment work there's an ambiguity about how much to count the future alignment research community as part of "longtermist EA" which creates ambiguity about whether the research is itself Phase 2. I think that Redwood's work is Phase 1, but it's possible that they'll later produce research artefacts which are Phase 2. Chris Olah's old work on interpretability felt like Phase 2 ... (read more)

3 Buck 1mo
FWIW I think that compared to Chris Olah's old interpretability work, Redwood's adversarial training work feels more like phase 2 work, and our current interpretability work is similarly phase 2.
Time-Time Tradeoffs

I like this. I touched on some similar themes in these old notes on "neutral hours" around different energy levels being more or less productive for different kinds of work, but I didn't really get to the "value of time varies a lot with opportunities", and I think you're right that that's an important part of the puzzle.

Democratising Risk - or how EA deals with critics

I feel like there's just a crazy number of minority views (in the limit a bunch of psychoses held by just one individual), most of which must be wrong. We're more likely to hear about minority views which later turn out to be correct, but it seems very implausible that the base rate of correctness is higher for minority views than majority views.

On the other hand I think there's some distinction to be drawn between "minority view disagrees with strongly held majority view" and "minority view concerns something that majority mostly ignores / doesn't have a view on".

That is a fair point. Departures from global majority opinion still seem like a pretty weak 'fire alarm' for being wrong. Taking a position that is e.g. contrary to most experts on a topic would be a much greater warning sign.

Supporting Video, Audio, and other non-text media on the Forum

Intuitively I'm pretty interested in the possibility of supporting more formats in service of serious discourse (e.g. having a place to share recordings of conversations that others might benefit from), and pretty uninterested in extra formats for the sake of driving more engagement ... there's a middle ground of "driving engagement with serious discourse" which I'm not sure what to feel about.

Truthful AI

If this looks like an issue, one could distinguish speech acts (which are supposed to meet certain standards) from the outputs of various transparency tools (which hopefully meet some standards of accuracy, but might be based on different standards).

Truthful AI

The idea is that one statement which is definitely false seems a much more egregious violation of truthfulness than e.g. four statements each only 75% true.

Raising it to a power >1 is a factor correcting for this. The choice of four is a best guess based on thinking through a few examples and how bad things seemed, but I'm sure it's not an optimal choice for the parameter.
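A toy version of the comparison, with an assumed scoring rule just to show why a power above 1 helps (this is my illustration, not necessarily the paper's exact formula):

```latex
% Toy scoring (assumption): penalize each statement by (1 - truthfulness)^k and sum.
\text{With } k = 1:\quad 1^1 = 1 \quad \text{vs.} \quad 4 \times 0.25^1 = 1
\quad \text{(the two cases look equally bad)}
\\
\text{With } k = 4:\quad 1^4 = 1 \quad \text{vs.} \quad 4 \times 0.25^4 \approx 0.016
\quad \text{(the single outright falsehood dominates)}
```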

Truthful AI

The distinction I'm drawing is that "cannot spread it to you" is ambiguous between whether it's shorthand for:

  1. Cannot (in any circumstances) spread it to you
  2. Cannot (as a rule of thumb) spread it to you

Whereas I think that "can never spread it to you" or "absolutely cannot spread it to you" are harder to interpret as being shortenings of 2.
