All of OscarD's Comments + Replies

Great post, and an interesting counterfactual history!

Hooray for moral trade.

Evolutionary debunking arguments feel relevant re the causal history of our beliefs.

One thing I have heard is that having long-ish application stages provides value by getting more people to think about relevant topics (I have heard this from at least two orgs, I think). E.g. having several hundred people spend an hour writing a paragraph about an AI safety topic might be good simply by virtue of getting more people to think more about the area. I haven't seen a write-up weighing up the pros and cons of this though, and I agree it can be bad for applicants.

Nice post!

We might then expect a lot of powerful attempts to change prevailing ‘human’ values, prior to the level of AI capabilities where we might have worried a lot about AI taking over the world. If we care about our values, this could be very bad. 

This seems like a key point to me, that it is hard to get good evidence on. The red stripes are rather benign, so we are in luck in a world like that. But if the AI values something in a more totalising way (not just satisficing with a lot of x's and red stripes being enough, but striving to make all hum... (read more)

Hmm true, I think I agree that this means the dynamics I describe matter less in expectation (because the positional goods-oriented people will be quite marginal in terms of using the resources of the universe).

Good point re aesthetics perhaps mattering more, and about people dis-valuing inequality and therefore not wanting to create a lot of moderately good lives lest they feel bad about having amazing lives and controlling vast amounts of resources.

Re "But I don't think ..." in your first paragraph, I am not sure what if anything we actually disagree about. I think what you are saying is that there are plenty of resources in our galaxy, and far more beyond, for all present people to have fairly arbitrarily large levels of wealth. I agree, and I am also saying that people may want to keep it roughly that way, rather than creating heaps of people and crowding up the universe.

3
Lukas Finnveden
16d
There might not be any real disagreement. I'm just saying that there's no direct conflict between "present people having material wealth beyond what they could possibly spend on themselves" and "virtually all resources are used in the way that totalist axiologies would recommend".

Nice, good idea and well implemented!

Given that wastewater is good for getting samples from lots of people at once without needing ethics clearance, but worse for respiratory pathogens, how feasible is airborne environmental DNA sampling? I have never looked into it; I just remember hearing someone give a talk about their work on this, I think related to this paper: https://www.sciencedirect.com/science/article/pii/S096098222101650X

I assume it is just hard to get the quantity of nucleic acids we would want from the air.

Flagging this for @Conrad K... (read more)

3
Jeff Kaufman
16d
Thanks for the feedback! The values in the list aren't drawn from a parametrized distribution, they're the observed values in a small study. Done! Fixed! This was due to me not testing on monitors that had that aspect ratio. Whoops! Fixed by allowing you to scroll that section.
3
ljusten
17d
We've done a fairly thorough investigation into air sampling as an alternative to wastewater at the NAO. We currently have a preprint on the topic here and a much more in-depth draft we hope to publish soon. 
2
Conrad K.
17d
Thanks for the tag @OscarD, this is awesome! I'd basically hoped to build this but then additionally convert incidence at detection to some measure of expected value based on the detection architecture (e.g. as economic gains or QALYs). Something way too ambitious for me at the time haha, but I am still thinking about this. I definitely want to play with this in way more detail and look into how it's coded, will try and get back with hopefully helpful feedback here.

Thanks for writing this up! Have you spoken to Christian Ruhl or anyone else at Founders Pledge about this work? I think FP would be interested in and benefit from this.

1
Conrad K.
18d
Thank you! Yes I've been in touch with Christian Ruhl :)

I downvoted because there are lots of questions lumped in together without enough motivation and cohesion for my liking, and compared to e.g. the moral weights project the engagement with these subtle issues feels more flippant than serious.

Nice post! Re the competitive pressures, this seems especially problematic in long-timelines worlds where TAI is really hard to build, as (toy model) if company A spends all its cognitive assets on capabilities (including generating profit to fund this research), while company B spends half its cognitive assets at any given time on safety work with no capabilities overflows, then if there is a long time over which this exponential growth continues, company A will likely reach the lead even if it starts well behind. Whereas if there is a relatively smaller ... (read more)
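A minimal numerical sketch of the toy model above (all starting positions and growth rates are hypothetical, chosen purely for illustration):

```python
# Toy model sketch (hypothetical numbers): company A spends all its cognitive
# assets on capabilities; company B starts ahead but diverts half its assets
# to safety, so its capabilities compound at roughly half A's rate.
a, b = 1.0, 2.0   # assumed starting capabilities: B begins with a 2x lead
growth = 0.5      # assumed per-period growth rate from a full capabilities focus

for t in range(1, 21):
    a *= 1 + growth        # A reinvests everything in capabilities
    b *= 1 + growth / 2    # B spends half its assets on safety instead
    if a > b:
        print(f"A overtakes B in period {t}")  # period 4 with these numbers
        break
```

With a short race (few periods before TAI), B keeps its head start; only in long-timelines worlds does the compounding difference let A pull ahead, which is the contrast being drawn here.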

Exciting! Why the relocation from Switzerland to the UK? The fact that there are more EA/X-risk projects already in London seems like both a pro (more networking and community opportunities, better access to mentors) and a con (less differentiation with other projects like ERA and MATS, less neglected than mainland Europe fellowships).

Feel free to not reply if you deliberately don't want to make this reasoning public.

6
Tobias Häberli
22d
Hi Oscar, thanks for the question! To clarify, only the fellowship has moved to the UK, not our entire organisation. We've thought a lot about the pros and cons of moving from Switzerland and largely agree with your points.[1] The main driver for our decision was Switzerland's comparatively small GCR network. We see the fellowship as an opportunity to immerse fellows in a rich intellectual environment, which London’s – and especially LISA’s – GCR ecosystem offers. Our experience of running fellowships outside of established hubs suggests that fellowships alone are not a great vehicle to build a new GCR hub due to their seasonal nature and limited ability to retain people long-term. Nevertheless, we do see significant value in diversification and are considering future projects outside established GCR hubs for this reason. Hope this explains our thinking, happy to answer more questions.

1. ^ Mentor access isn't a huge concern for us, since we expect most mentor-mentee interactions to happen virtually either way.

My guess now of where we most disagree is regarding the value of a world where AIs disempower humanity and go on to have a vast technologically super-advanced, rapidly expanding civilisation. I think this would quite likely be ~0 value, since we don't really understand consciousness at all, and my guess is that AIs aren't yet conscious and that if we relatively quickly get to TAI in the current paradigm they probably still won't be moral patients. As a sentientist I don't really care whether there is a huge future if humans (or something sufficiently related to h... (read more)

3
Matthew_Barnett
24d
Thanks. I disagree with this for the following reasons:
1. AIs will get more complex over time, even in our current paradigm. Eventually I expect AIs will have highly sophisticated cognition that I'd feel comfortable calling conscious, on our current path of development (I'm an illusionist about phenomenal consciousness so I don't think there's a fact of the matter anyway).
2. If we slowed down AI, I don't think that would necessarily translate into a higher likelihood that future AIs will be conscious. Why would it?
3. In the absence of a strong argument that slowing down AI makes future AIs more likely to be conscious, I still think the considerations I mentioned are stronger than the counter-considerations you've mentioned here, and I think they should push us towards trying to avoid entrenching norms that could hamper future growth and innovation.

To red-team a strawman of your (simulated) argument: what about the Pascalian and fanatical implications across evidentially cooperating large worlds? I think we need some Bayesian, anthropic reasoning, lots of squiggle notebooks, and perhaps a cross-cause cost-effectiveness model to get to the bottom of this!

Thanks, interesting idea. I think I mostly disagree: I would like to see AI progress specifically slowed/halted while continuing to have advances in space exploration, biology, nuclear power, etc., and I hope that if we later get safe TAI we won't have become too anti-technology/anti-growth to expand a lot. But I hadn't thought about this before and there probably is something to this, I just think it is most likely swamped by the risks from AI. It is a good reason to be careful in pause AI type pitches to be narrowly focused on frontier AI models rather than tech... (read more)

2
Matthew_Barnett
24d
Of those technologies, AI seems to be the only one that could be transformative, in the sense of sustaining dramatic economic growth and bringing about a giant, vibrant cosmic future. In other words, it seems you're saying we should slow down the most promising technology -- the only technology that could actually take us to the future you're advocating for -- but make sure not to slow down the less promising ones. The fact that people want to slow down (and halt!) precisely the technology that is most promising is basically the whole reason I'm worried here -- I think my argument would be much less strong if we were talking about slowing down something like nuclear power.

It's important to be clear about what we mean when we talk about the risks from AI. Do you mean:
1. The risk that AI could disempower humanity in particular?
2. The risk that AI could derail a large, vibrant cosmic civilization?

I think AI does pose a large risk in the sense of (1), but (2) is more important from a total utilitarian perspective, and it doesn't seem particularly likely to me that AIs pose a large risk in the sense of (2) (as the AIs themselves, after disempowering humanity, would presumably go on to create a big, vibrant civilization).

If you care about humanity as a species in particular, I understand the motive behind slowing down AI. On the other hand, if you're a total utilitarian (or you're concerned about the present generation of humans who might otherwise miss out on the benefits of AI), then I'm not convinced, as you seem to be, that the risks from AI outweigh the considerations that I mentioned. Again, the frontier AI models, in my view, are precisely what is most promising from a pro-growth perspective. So if you are worried about EAs choking off economic growth and spurring cosmic NIMBYism by establishing norms against growth, it seems from my perspective that you should be most concerned about attempts to obstruct frontier AI research.

That makes sense, yes perhaps there are some fanaticism worries re my make-the-future-large approach even more so than x-risk work, and maybe I am less resistant to fanaticism-flavoured conclusions than you. That said, I think not all work like this need be fanatical - e.g. improving international cooperation and treaties for space exploration could be good in more frames (and bad in some frames you brought up, granted).

I don't know lots about it, but I wonder if you prefer more of a satisficing decision theory where we want to focus on getting a decent out... (read more)

Thanks for this really thoughtful engagement! I expected this would not be a take particularly to your liking, but your pushback is stronger than I thought, this is useful to hear. Perhaps I failed to realise how controversial and provocative these ideas would be after playing with them myself and with a few relatively similar people. Onto the substance:

  • That makes sense to me that the analogy is a bit weak, I think I mostly agree. I think the strongest part of the analogy to me is less the NIMBYs themselves and more who is politically empowered (a smaller
... (read more)
2
Sarah Weiler
24d
First two points sound reasonable (and helpfully clarifying) to me! I share the guess that scope sensitivity and prioritarianism could be relevant here, as you clearly (I think) endorse these more strongly and more consistently than I do; but having thought about it for only 5-10 minutes, I'm not sure I'm able to exactly point at how these notions play into our intuitions and views on the topic - maybe it's something about me ignoring the [(super-high payoff of larger future)*(super-low probability of affecting whether there is a larger future) = (there is good reason to take this action)] calculation/conclusion more readily?

That said, I fully agree that "something being very important and neglected and moderately tractable (like x-risk work) isn't always enough for it to be the 'best' ". To figure out which option is best, we'd need to somehow compare their respective scores on importance, neglectedness, and tractability... I'm not sure actually figuring that out is possible in practice, but I think it's fair to challenge the claim that "action X is best because it is very important and neglected and moderately tractable" regardless.

In spite of that, I continue to feel relatively confident in claiming that efforts to reduce x-risks are better (more desirable) than efforts to increase the probable size of the future, because the former is an unstable precondition for the latter (and because I strongly doubt the tractability and am at least confused about the desirability of the latter).

I think my stance on this example would depend on the present state of the company. If the company is in really dire straits, I'm resource-constrained, and there are more things that need fixing now than I feel able to easily handle, I would seriously question whether one of my employees should go thinking about making best-case future scenarios the best they can be[1]. I would question this even more strongly if I thought that the world and my company (if it survives) will cha
Answer by OscarD · Mar 28, 2024

My understanding is you are unsupportive of earning-to-give. I agree the trappings of expensive personal luxuries are both substantively bad (often) and poor optics. But the core idea that some people are very lucky and have the opportunity to earn huge amounts of money which they can (and should) then donate, and that this can be very morally valuable, seems right to me. My guess is that regardless of your critiques of specific charities (bednets, deworming, CATF) you still think there are morally important things to do with money. So what do you think of ETG - why is the central idea wrong (if you indeed think that)?

2
huw
1mo
I thought he spelled out his ETG criticism quite clearly in the article, so I’ll paraphrase what I imbibed here. I think he would argue that, for the same person in the same job, donating X% of their money is a better thing. However, the ETG ethos that has hung around in the community promotes seeking out extremely high-paying jobs in order to donate even more money. These jobs often bring about more harms in turn (both in an absolute sense and possibly also to the point that ETG is net-negative, for example in the case of SBF), especially if we live in an economic system that rewards behaviour that profits off negative externalities.

I was disappointed GiveDirectly wasn't mentioned, given that it seems to be more what he would favour. The closing anecdote about the surfer-philosopher donating money to Bali seems like a proto-GiveDirectly approach, but presumably a lot less efficient without the infrastructure to do it at scale.

2
huw
1mo
I think his take on GiveDirectly is likely to be very similar—he would point to the fraud and note that neither they nor any of their evaluators took into account the harms caused by the beneficiaries of that fraud in their calculations. And I don’t think that that would be an unfair criticism (if delivered with a bit less snark).
2
Arden Wiese
1mo
Same, Oscar! I hope to ask him about this

Thanks for sharing, it sucks that you went through this (and sucks that the moths went through this :( ). As uncomfortable as thinking about these topics is, I am glad to be part of a community of people who take ethics seriously and try to act with compassion and consideration. Let's hope market forces take effect and enough people inquiring about low-suffering ways to kill insects creates a market for companies to offer this :)

Nice!

I think this makes good sense as a toy theoretical model, and updates me some way towards these conclusions, but not very far because this sort of armchair theorising (while valuable and fun) is hard to get accurate for something as messy and empirical as this, as you note. So I think if someone were to investigate this further the key steps would be to:

  • look at empirical literature, or conduct primary research, on pleasure/pain symmetry and whether this holds (maybe this would be intractable though)
  • do some more involved population dynamics modelling,
... (read more)

Thanks for writing this! I agree that bioanchors is still worth engaging with and revisiting, given how important it has been and continues to be.

I like the overall approach of trying to quantify how much different criticism would update the 1e41 estimate. I don't feel well-placed to comment on the thermodynamic approach part, but if it works roughly as you outline this seems like an important robustness check for the evolution anchor.

I left a bunch of more minor comments in the report.

I think this is a good framing! And I think I am happy to bite this bullet and say that for the purposes of deciding what to do it matters relatively little whether my action being effective relies on systems of humans acting predictably (like polio vaccine deliverers getting paid to do their job) or natural forces (atmospheric physics for a climate geoengineering intervention). Whereas regarding what is a virtuous attitude to have, yes probably it is good to foreground the many (sometimes small) contributions of other humans that help our actions have their desired impacts.

Yes I think that makes sense. I think for me the area where I am most sympathetic to your collective rationality approach is voting, where as you noted elsewhere the 80K narrow consequentialist approach is pretty convoluted. Conversely, the Categorical Imperative, universalisability perspective is very clear that voting is good, and thinking in terms of larger groups and being part of something is perhaps helpful here. So yes while I still generally prefer the counterfactual perspective, I am probably not fully settled there.

I suppose in theory being part ... (read more)

Yes, I think this issue of how many people you need to get on board with the vision/goals to make some change happen is key (and perhaps a crux). I agree the number of people needed to implement a change might be huge (all the farm workers making changes for various animal welfare things) but think we probably don't need to get all of them to care a lot more about nonhumans to get the job done. So in my view often a small-ish set of people advocate for/research/fund/plan some big change, and then lots of people implement it because they are told to/paid to.

Makes sense, I think I don't know enough to continue this line of reasoning that sensibly!

On 2, I like this point about the distribution being shaped by the choices of others; I think it is quite true that if more people cared about impact it would be a lot harder to counterfactually achieve very high-impact actions (because there would be so much 'competition' with other impact seekers). Reminiscent of how financial markets are pretty efficient because so many people are seeking to make money trading - I think if a similar number of people were looking to succeed in the 'impact market' there wouldn't be these super cost-effective low-hanging fr... (read more)

Cool, great you had a go at this! I have not had a look at your new code yet (and am not sure I will) but if I do and I have further comments I will let you know :)

Good to hear! Yes, I imagine having 50+ comments, many of them questioning/pushing-back, could be a bit overwhelming; from my perspective, and I am guessing for others as well, it is fine and reasonable if you choose not to engage now (or ever). Putting this essay out into the world has already been a useful contribution to the discourse, I think :)

Finally, I really hope you do choose to stay at least somewhat involved in ~EA things; as you say, having the added intellectual diversity is valuable, I think. You are probably the sometimes-critic of EA conventions/dogmas whose views I am most moved by.

2
Sarah Weiler
1mo
Thanks a lot for taking the time to read the essay and write up those separate thoughts in response!! I'll get to the other comments over the next week or so, but for now: thank you for adding that last comment. Though I really (!) am grateful for all the critical and thought-provoking feedback from yourself and others in this comment thread, I can't deny that reading the appreciative and encouraging lines in that last response is also welcome (and will probably be one of the factors helping me to keep exercising a critical mind even if it feels exhausting/confusing at times) :D 

Re the polio vaccine, I don't know much about it, but I think the inventors probably do deserve a lot of credit! Yes, lots and lots of people were needed to manufacture and distribute many vaccine doses, but I think the counterfactual is illustrative: the workers driving the trucks and going door to door and so forth seem very replaceable to me, and it is hard to imagine a great vaccine being invented but then not being rolled out because no-one is willing to take a job as a truck driver distributing the doses. Whereas if the inventors didn't invent it, ma... (read more)

2
Sarah Weiler
1mo
[The thoughts expressed below are tentative and reveal lingering confusion in my own brain. I hope they are somewhat insightful anyways.]

Completely agree! The concept of counterfactual analysis seems super relevant to explaining how and why some of my takes in the original post differ from "the mainstream EA narrative on impact". I'm still trying to puzzle out exactly how my claims in "The empirical problem" link to the counterfactual analysis point - do I think that my claims are irrelevant to a counterfactual impact analysis? do I, in other words, accept and agree that impact between actions/people differs by several magnitudes when calculated via counterfactual analysis methods? how can I best name, describe, illustrate, and maybe defend the alternative perspective on impact evaluations that seems to inform my thinking in the essay and in general? what role does and should counterfactual analysis play in my thinking alongside that alternative perspective?

To discuss with regards to the polio example: I see the rationale for claiming that the vaccine inventors are somehow more pivotal because they are less easily replaceable than all those people performing supportive and enabling actions. But just because an action is replaceable doesn't mean it's unimportant. It is a fact that the vaccine discovery could not have happened and would not have had any positive consequences if the supporting & enabling actions had not been performed by somebody. I can't help myself, but this seems relevant and important when I think about the impact I as an individual can have; on some level, it seems true to say that as an individual, living in a world where everything is embedded in society, I cannot have any meaningful impact on my own; all effects I can bring about will be brought about by myself and many other people; if only I acted, no meaningful effects could possibly occur. Should all of this really just be ignored when thinking about impact evaluations and my personal d

I think elitism and inequality are real worries - I think it is lamentable but probably true that some people's lives will have far greater instrumental effects on the world than others. (But this doesn't change their intrinsic worth as an experiencer of emotions and haver of human connections.)

So I agree that there is a danger of thinking too much of oneself as some sort of ubermensch do-gooder, but the question of to what extent impact varies by person or action is separate.

2
Sarah Weiler
1mo
I think that makes sense and is definitely a take that I feel respect (and gratitude/hope) for. Even after a week of reflecting on the empirical question - do some people have magnitudes higher impact than others? - and the conceptual question - which impact evaluation framework (counterfactual, Shapley value attribution, something else entirely) should we use to assess levels of impact? -, I remain uncertain and confused on my own beliefs here (see more in my comment on the polio vaccine example above). So I'm not sure what my current response to your claim "[it's] probably true that some people's lives will have far greater instrumental effects on the world than others" is or should be.

Footnote 5 predicted perfectly the sort of thing I was going to say in response. You probably know more economics than I do, but I feel like there are some models of how markets work that quite successfully predict macro behaviour of systems without knowing all the local individual factors? E.g. re your suggestion that nurses are a large fraction of the 'highest impact' career paths, I think we could run some decent calculations about the elasticity of the nursing labour market to find how many more nurses there will overall be if I decide to be a nurse in... (read more)

2
Sarah Weiler
1mo
You're right that you're more optimistic than me for this one. I don't think we have good models of that kind in economics (or: I haven't come across such models; I have tried to look for them a little bit but am far from knowing all modeling attempts that have ever been made, so I might have missed the good/empirically reliable ones). I do agree that "we can make, in some cases, simple models that accurately capture some important features of the world" - but my sense is that in the social sciences (/ whenever the object of interest is societal or human), the features we are able to capture accurately are only a (small) selection of the ones that are relevant for reasonably assessing something like "my expected impact from taking action X." And my sense is also that many (certainly not all!) people who like to use models to improve their thinking on the world over-rely on the information they gain from the model and forget that these other, model-external features also exist and are relevant for real-life decision-making.

I do not agree that there are vast differences in value among those actions and strategies that have crossed the bar of having a significant positive impact on the world

(emphasis added)

Perhaps this is a strawman of your position, but it sounds a bit like you want to split actions into basically three buckets: negative, approximately neutral, and significantly positive. This seems unhelpful to me, for several reasons:

  • I think it is uncontroversial that at least on the negative side of the scale some actions are vastly worse than others, e.g. a mass murder or
... (read more)
1
Sarah Weiler
1mo
Agreed! I share the belief that there are huge differences in how bad an action can be and that there's some relevance in distinguishing between very bad and just slightly bad ones. I didn't think this was important to mention in my post, but if it came across as suggesting that we basically should only think in terms of three buckets, I clearly communicated poorly - I agree that this would be too crude.

Strongly agreed! I strongly share the worry that identifying neutral actions would be extremely hard in practice - took me a while to settle on "bullshit jobs" as a representative example in the original post, and I'm still unsure whether it's a solid case of "neutral actions". But I think for me, this uncertainty reinforces the case for more research/thinking to identify actions with significantly positive outcomes vs actions that are basically neutral. I find myself believing that dividing actions into "significantly positive" vs "everything else" is epistemologically more tractable than dividing them into "the very best" vs "everything else". (I think I'd agree that there is a complementary quest - identifying very bad actions and roughly scoring them on how bad they would be - which is worthwhile pursuing alongside either of the two options mentioned in the last sentence; maybe I should've mentioned this in the post?)

I think I disagree mostly for epistemological reasons - I don't think we have much access to that information at a finer-grained scale; based on that, giving up on finding such information wouldn't be a great loss because there isn't much to lose in the first place. I think I might also disagree from a conceptual or strategic standpoint: my thinking on this - especially when it comes to catastrophic risks, maybe a bit less for global health & development / poverty - tends to be more about "what bundle of actions and organisations and people do we need for the world to improve towards a state that is more sustainable and exhibits higher wellbeing (/

An overarching thought, not responding to any particular quote from you: I think lots of people in the world (the vast majority in fact!) don't really think about impartial altruistic impact, let alone maximising it. If this is right, I think it would be a priori not so surprising if there are lots of high-impact opportunities left on the table by most people, waiting for ~EAs to action. Perhaps the clearest case here is something like shrimp or insect welfare. By some lights at least this is very high impact, but it makes sense it wasn't already being worked on because primarily only people with an ~EA mindset would be interested in it.

2
Sarah Weiler
1mo
[The thoughts expressed below are tentative and reveal lingering confusion in my own brain. I hope they are somewhat insightful anyways.]

This seems on-point and super sensible as a rough heuristic (not a strict proof) when looking at impact through a counterfactual analysis that focuses mostly on direct effects. But I don't know if and how it translates to different perspectives of assessing impact. If there never were high impact opportunities in the first place, because impact is dispersed across the many actions needed to bring about desired consequences, then it doesn't matter whether a lot or only a few people try to grab these opportunities from the table - because there would be nothing to grab in the first place.

Maybe the example helps to explain my thinking here (?): If we believe that shrimp/insect welfare can be improved significantly by targeted interventions that a small set of people push for and implement, then I think your case for it being a high impact opportunity is much more reasonable than if we believe that actual improvements in this area will require a large-scale effort by millions of people (researchers, advocates, implementers, etc). I think most desirable change in the world is closer to the latter category.*

*Kind of undermining myself: I do recognise that this depends on what we "take for granted" and I tentatively accept that there are many concrete decision situations where it makes sense to take more for granted than I am inclined to do (the infrastructure we use for basically everything, many of the implementing and supporting actions needed for an intervention to actually have positive effects, etc), in which case it might be possible to consider more possible positive changes in the world to fall closer to the former category (the former category ~ changes in the world that can be brought about by a small group of individuals).

Wow, great essay Sarah; I found it very thought-provoking and relevant.

I have lots of things to say, I will split them into separate comments in case you want to reply to specific parts (but feel free to reply to none of it, especially given I see you have a dialogue coming soon). Or we can just discuss it all on our next call :) But I thought I would write them down while I remember.

2
OscarD
1mo
Finally, I really hope you do choose to stay at least somewhat involved in ~EA things; as you say, having the added intellectual diversity is valuable, I think. You are probably the sometimes-critic of EA conventions/dogmas whose views I am most moved by.
3
OscarD
1mo
Re the polio vaccine, I don't know much about it, but I think the inventors probably do deserve a lot of credit! Yes, lots and lots of people were needed to manufacture and distribute many vaccine doses, but I think the counterfactual is illustrative: the workers driving the trucks and going door to door and so forth seem very replaceable to me, and it is hard to imagine a great vaccine being invented but then not being rolled out because no-one is willing to take a job as a truck driver distributing the doses. Whereas if the inventors didn't invent it, maybe it would be years or decades before someone else did.

But I can think of a case where inventors should get far less credit: if there is a huge prize for developing a vaccine, then quite likely lots of teams will try to do it, and if you are the winning team you might have only accelerated it by a few months. So in this case maybe the people who made/funded the prize get a lot of the credit.

I really like your inclusion of people who have influenced us in thinking about how to apportion credit. For me personally, my parents sometimes muse that despite all the great things they have done directly, parenting my brother and me well may be the single biggest 'impact' of their lives. Of course it is hard to guess, but this seems at least plausible, and I think parenting (and more broadly supporting/mentoring/caring for other people) is really valuable!
2
OscarD
1mo
I think elitism and inequality are real worries - I think it is lamentable but probably true that some people's lives will have far greater instrumental effects on the world than others. (But this doesn't change their intrinsic worth as an experiencer of emotions and haver of human connections.) So I agree that there is a danger of thinking too much of oneself as some sort of ubermensch do-gooder, but the question of to what extent impact varies by person or action is separate.
1
OscarD
1mo
Footnote 5 predicted perfectly the sort of thing I was going to say in response. You probably know more economics than I do, but I feel like there are some models of how markets work that quite successfully predict macro behaviour of systems without knowing all the local individual factors? E.g. re your suggestion that nurses are a large fraction of the 'highest impact' career paths, I think we could run some decent calculations about the elasticity of the nursing labour market to find how many more nurses there will overall be if I decide to be a nurse in some particular place. Me being a nurse increases labour supply, marginally reducing wages in expectation, reducing the number of other people who choose to be nurses; this effect may be quite different in different professions, e.g. if there is a cap of X places in some government medical certification program and lots of people apply, as with medical school in India, then joining that profession may increase the total supply of doctors very little. So I suppose I am still more optimistic than you that we can make, in some cases, simple models that accurately capture some important features of the world.
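One hedged way to make that back-of-envelope reasoning concrete (this uses the textbook partial-equilibrium displacement formula; all elasticity numbers are made up for illustration):

```python
# Sketch of the labour-supply displacement point (illustrative numbers only).
# A standard partial-equilibrium result: one marginal entrant raises total
# employment by roughly e_d / (e_d + e_s), where e_d and e_s are the absolute
# wage elasticities of labour demand and supply.
def net_extra_workers(e_demand: float, e_supply: float) -> float:
    """Expected net increase in total workforce per marginal entrant."""
    return e_demand / (e_demand + e_supply)

# Flexible labour market: partial displacement via slightly lower wages.
print(net_extra_workers(e_demand=0.5, e_supply=1.5))   # 0.25 net extra nurses

# Oversubscribed capped program (like the medical school example): queued
# replacements behave like a very elastic supply, so one extra entrant
# barely changes the total number of doctors.
print(net_extra_workers(e_demand=0.5, e_supply=50.0))  # ~0.01 net extra doctors
```

The exact numbers don't matter; the point is just that the size of this effect is an empirical, profession-specific quantity one could actually estimate.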
2
OscarD
1mo
(emphasis added)

Perhaps this is a strawman of your position, but it sounds a bit like you want to split actions into basically three buckets: negative, approximately neutral, and significantly positive. This seems unhelpful to me, for several reasons:
  • I think it is uncontroversial that at least on the negative side of the scale some actions are vastly worse than others, e.g. a mass murder or a military coup of a democratic leader, compared to more 'everyday' bads like being a grumpy boss.
  • It feels pretty hard to know which actions are neutral, for many of the reasons you say that the world is complex and there are lots of flow-through effects and interactions.
  • Identifying which positive actions are significantly so versus insignificantly so feels like it just loses a lot of information compared to a finer-grained scale.
1
OscarD
1mo
An overarching thought, not responding to any particular quote from you: I think lots of people in the world (the vast majority in fact!) don't really think about impartial altruistic impact, let alone maximising it. If this is right, I think it would be a priori not so surprising if there are lots of high-impact opportunities left on the table by most people, waiting for ~EAs to action. Perhaps the clearest case here is something like shrimp or insect welfare. By some lights at least this is very high impact, but it makes sense it wasn't already being worked on because primarily only people with an ~EA mindset would be interested in it.

Thanks for writing this! Indeed counterfactuals are hard. I have also joined a large EA org (Rethink Priorities) and so far agree it is useful. I think a possible failure mode for me is that I am a bit risk-averse, and also just really like working with EAs, so I'm guessing if in X months/years time I have the option to go off and start/do something by myself or with a small group I might be reluctant to leave a nice, comfortable, convenient, EA org like RP. But I agree there are lots of advantages to working at an established org, at least for a while at the start of my career.

Woohoo, wonderful news and thanks for your efforts!

CHERI is also planning to run this year I believe, for anyone looking to do non-AI projects (I am not involved with CHERI).

Nice! I realised that I can't think of the last time I received low-quality criticism (but can think of a moderate amount of fairly high-quality criticism), so I am probably quite lucky in that regard, as my work/writing thus far has either been privately shared or public but not very provocative. (Of course the flipside is that having more people engage with one's writing is one way to increase impact.)

I hadn't heard the "idea inoculation" term before - that does seem like a useful framing. I wonder if that is part of the explanation for some of the AI safety/x... (read more)

Good on you for being courageous and scout-minded enough to shut this down (and to start it in the first place)! I hope you find great projects to move onto.

I quite like the summary bot, and think it would often be useful (particularly for posts without author-written summaries) to read the summary first before deciding to read the whole post. Of course, it is easy to scroll all the way down, read the summary, and then decide whether to read the post. But humans are lazy and to make the user experience as frictionless as possible, how about the AI-written summary goes at the top, above the post? Not everyone would like this, so I think there should be an option for each user whether they want the summary at th... (read more)

Nice! I was surprised that present-day harms were not more front of mind for respondents (e.g. job losses, AI pornography, and racial and gender bias were far below preventing catastrophic outcomes). Interesting.

What updates are you thinking of? Gemini 1.5?

5
Chris Leong
2mo
Yep, that's the main one, but to a lesser extent Sora being ahead of schedule + realising what this means for AI agents. It's less about my median timeline moving down, but more about the tail end not extending out as far.

Thanks for writing up a version for the forum, and congrats on finishing your thesis!

I thought this was useful and clearly written. I particularly liked the discussion of the tension between BWC Articles IV and X, which I hadn't thought about. And very interesting re your detailed digging into IGSC companies and that many of them don't take it very seriously. Shows gov regulation is more important, perhaps. It would be wild if the companies were actually contractually obliged not to deny dangerous orders in some cases!! I know next to nothing about la... (read more)

5
Isaac Heron
2mo
Thanks Oscar. I appreciate your comment about genetic sequences, which I have now edited throughout to refer to 'physical genetic sequences'. Yes I have reached out to Braden Leach, once in the middle of my research and again once I had finished the dissertation, although I haven't heard from him since. I have been thinking about getting in contact with Piers Millet about my ideas for IBBIS for a while, so an introduction with him would probably be a good idea.

Thanks for sharing, yes motivational benefits do seem important too!

Nice, I didn't know about some of these, good to take stock after an eventful year! I am so used to GPT-4 and integrating it into my work and life that it is weird to think it has been around such a short length of time ...

Thanks, good points; I don't think we disagree directionally, perhaps just on how important some of these effects are. It feels like a very difficult epistemic problem to determine how much of the relative absence of bioweapons use is attributable to the BWC - I know roughly nothing about exploding bullets and the like, but maybe they are just more useful than bioweapons for most belligerents? And therefore are used more irrespective of how strong the relevant treaties are. But yes, agree that these aspects still provide some value :)

4
Davidmanheim
4mo
Yeah, I don't think there's a ton of benefit in trading hypotheticals and counterfactuals here, especially because I don't think much of anyone's intuitions will be conveyed clearly, but I do think it's worth noting that it's not obvious to me that the convention didn't have a large counterfactual impact over the past 50 years.

Thanks, useful thoughts; I think I roughly agree with you and will change this. I suppose the tradeoff I was facing with the title (not that I spent any time weighing up different options consciously) is between brevity, accuracy, and interestingness. I think the more complete title would be something like 'Updating weakly against the Biological Weapons Convention being as important to work on as I thought'. I think I will change the title to 'Reflections on the BWC' so that people who only see the title don't get a negative vibe (I agree we want peopl... (read more)

Thanks for writing this, I thought it was moving and beautifully written. I think the world would be a lot better if more people showed this sort of radical empathy.
