OscarD

799 karma · Joined Apr 2021 · Working (0-5 years) · Brisbane QLD, Australia

Comments (144)

Hmm true, I think I agree that this means the dynamics I describe matter less in expectation (because the positional goods-oriented people will be quite marginal in terms of using the resources of the universe).

Good point re aesthetics perhaps mattering more, and about people disvaluing inequality and therefore not wanting to create a lot of moderately good lives, lest they feel bad about having amazing lives and controlling vast amounts of resources.

Re "But I don't think ..." in your first paragraph, I am not sure what if anything we actually disagree about. I think what you are saying is that there are plenty of resources in our galaxy, and far more beyond, for all present people to have fairly arbitrarily large levels of wealth. I agree, and I am also saying that people may want to keep it roughly that way, rather than creating heaps of people and crowding up the universe.

Nice, good idea and well implemented!

Given that wastewater is good for getting samples from lots of people at once without needing ethics clearance, but is worse for respiratory pathogens, how feasible is airborne environmental DNA sampling? I have never looked into it; I just remember hearing someone give a talk about their work on this, I think related to this paper: https://www.sciencedirect.com/science/article/pii/S096098222101650X

I assume it is just hard to get the quantity of nucleic acids we would want from the air.

Flagging this for @Conrad K. - this seems like perhaps a better version of what you were considering building last year? If you have time you might have useful thoughts/suggestions.

I played around with the simulator a bit but didn't find anything too counterintuitive. I noticed various minor suboptimal things; depending on what you want to do with the simulator, some of these may not be worth changing:

  • I found having many values in the relative abundance box for nasal swabs a bit confusing and harder to manage as a user. Why not just specify a distribution with some parameters rather than listing lots of possible values drawn from that distribution? (See the sketch after this list.)
  • The line is not monotonic as it should be here, seemingly because the simulation hits 30% of the population and then stops. Maybe rather than have the line go back to 0, just stop it when it hits 30%, or have it plateau at 30%?
  • There were some issues with the sizing of the graph for me (I am using Chrome on Windows 11). At 100% zoom, part of the x-axis label and the y-axis numbers are cut off, and the problem becomes worse if for whatever reason you run lots of scenarios, in which case the whole bottom half of the graph disappears.
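To illustrate the distribution suggestion in the first bullet: a minimal hypothetical sketch (my own, not the simulator's actual interface or internals) of taking a couple of lognormal parameters and sampling relative-abundance values from them, rather than asking the user to enumerate values. The distribution choice and all parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Current approach (as I understand it): the user supplies an explicit list of values.
explicit_values = [1e-7, 3e-7, 1e-6, 5e-6, 1e-5]  # made-up example values

# Suggested approach: the user supplies distribution parameters and the
# simulator draws samples itself.
median = 1e-6   # assumed median relative abundance
gsd = 3.0       # assumed geometric standard deviation
sampled_values = rng.lognormal(mean=np.log(median), sigma=np.log(gsd), size=1000)

print(np.percentile(sampled_values, [5, 50, 95]))
```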

Thanks for writing this up! Have you spoken to Christian Ruhl or anyone else at Founders Pledge about this work? I think FP would be interested in and benefit from this.

I downvoted because there are lots of questions lumped together without enough motivation and cohesion for my liking, and compared to e.g. the moral weights project, the engagement with these subtle issues feels more flippant than serious.

Nice post! Re the competitive pressures, this seems especially problematic in long-timelines worlds where TAI is really hard to build. As a toy model: if company A spends all its cognitive assets on capabilities (including generating profit to fund this research), while company B spends half its cognitive assets at any given time on safety work with no capabilities spillover, then if this exponential growth continues for a long time, company A will likely take the lead even if it starts well behind. Whereas if a relatively small amount of cognitive assets is ever deployed before TAI, safety-oriented companies being in the lead should be the dominant factor, and safety-ignoring companies wouldn't be able to catch up even by 'defecting'.
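As a rough numeric illustration of this toy model (my own sketch; the growth rate, starting assets, and crossover formula are assumptions for illustration, not numbers from the post):

```python
import math

# Toy model from the comment above: company A reinvests all of its cognitive
# assets into capabilities, so its assets grow at rate r; company B reinvests
# only half (the other half goes to safety work with no capabilities spillover),
# so it grows at roughly r/2. All values here are made up for illustration.

r = 0.5              # assumed capabilities growth rate for company A (per year)
a0, b0 = 1.0, 10.0   # assumed starting cognitive assets: B begins 10x ahead

def assets(x0: float, rate: float, t: float) -> float:
    """Cognitive assets after t years of continuous exponential growth."""
    return x0 * math.exp(rate * t)

# Crossover time: a0 * exp(r*t) = b0 * exp((r/2)*t)  =>  t = 2 * ln(b0/a0) / r
t_cross = 2 * math.log(b0 / a0) / r
print(f"A overtakes B after ~{t_cross:.1f} years")

for t in range(0, 21, 5):
    print(f"t={t:>2}y  A={assets(a0, r, t):8.1f}  B={assets(b0, r/2, t):8.1f}")
```

With these illustrative numbers, A overtakes B after roughly nine years despite starting 10x behind; if TAI arrives well before that crossover, B's head start dominates instead.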

Exciting! Why the relocation from Switzerland to the UK? The fact that there are more EA/X-risk projects already in London seems like both a pro (more networking and community opportunities, better access to mentors) and a con (less differentiation from other projects like ERA and MATS, less neglected than mainland Europe fellowships).

Feel free to not reply if you deliberately don't want to make this reasoning public.

My guess now of where we most disagree is regarding the value of a world where AIs disempower humanity and go on to have a vast, technologically super-advanced, rapidly expanding civilisation. I think this would quite likely be ~0 value, since we don't really understand consciousness at all, and my guess is that AIs aren't yet conscious, and that if we relatively quickly get to TAI in the current paradigm they probably still won't be moral patients. As a sentientist I don't really care whether there is a huge future if humans (or something sufficiently related to humans, e.g. digital people created as our successors after we carefully study consciousness for a millennium and become very confident they have morally important experiences) aren't in it.

So yes, I agree frontier AI models are where the most transformative potential lies, but I would prefer to get there far later, once we understand alignment and consciousness far better (while other, less important tech progress continues in the meantime).
