CarlShulman

Comments

Artificial Suffering and Pascal's Mugging: What to think?

And once I accept this conclusion, the most absurd-seeming conclusion of them all follows. As the computing power devoted to training these utility-improved agents increases, the utility produced grows exponentially (since more computing power means more digits to store the rewards). On the other hand, the impact of all other attempts to improve the world (e.g. by improving our knowledge of artificial sentience so we can more efficiently promote their welfare) grows at only a polynomial rate with the amount of resources devoted to these attempts. Therefore, running these training runs is the single most impactful thing that any rational altruist should do. Q.E.D.


If you believed in wildly superexponential impacts from more compute, you'd be correspondingly uninterested in what could be done with the limited computational resources of our day, since a Jupiter Brain playing with big numbers could be 2^(10^40) times as big a deal as an ordinary life today, rather than merely 10^40 times as big. And likewise for influencing more computation-rich worlds that are simulating us.
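
To spell out the arithmetic behind that comparison (a rough sketch with illustrative assumptions, not anything from the original post): suppose value scales exponentially in the number of digits available to store rewards, and the number of digits scales linearly with compute. Then multiplying compute by a factor $k$ raises value to the $k$-th power rather than multiplying it by $k$:

$$
V(C) = 2^{\,b(C)}, \qquad b(kC) = k\,b(C) \;\;\Longrightarrow\;\; V(kC) = 2^{\,k\,b(C)} = V(C)^{k},
$$

so a Jupiter Brain with $k = 10^{40}$ times our compute would be on the order of $2^{10^{40}}$ times as big a deal (normalizing $b(C) = 1$ bit), dwarfing the "merely" $10^{40}$-fold difference a linear-in-compute view would assign.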

The biggest upshots (beyond ordinary 'big future' arguments) of superexponential-with-resources utility functions are a greater willingness to take risks and to care about tail scenarios with extreme resources, although that's bounded by 'leaks' in the framework (e.g. the aforementioned influence on simulators with hypercomputation), and a greater valuation of futures per unit of computation (e.g. it makes welfare in sims like ours, conditional on the simulation hypothesis, less important).

I'd say that ideas of this sort, like infinite ethics, are reason to develop a much more sophisticated, stable, and well-intentioned society (one that can more sensibly address complex issues affecting an important future, including these), but they don't make the naive action you describe desirable even given certainty in a superexponential model of value.

Towards a Weaker Longtermism

FWIW, my own views are more like 'regular longtermism' than 'strong longtermism,' and I would agree with Toby that existential risk should be a global priority, not the global priority. I've focused my career on reducing existential risk, particularly from AI, because there seems to be a substantial chance of it materializing in my lifetime, the stakes are enormous, and the area is extremely neglected. I probably wouldn't have gotten into it when I did if I didn't think doing so was much more effective than GiveWell top charities at saving current human lives, and that it outperformed even more on metrics like cost-benefit in dollars.

Longtermism as such (as one of several moral views commanding weight for me) plays the largest role for things like refuges that would prevent extinction but not catastrophic disaster, or leaving seed vaults and knowledge for apocalypse survivors. And I would say longtermism provides good reason to make at least modest sacrifices for that sort of thing (much more than the ~0 current world effort), but not extreme fanatical ones.

There are definitely some people who are fanatical strong longtermists, but a lot of people who are made out to be such treat it as an important consideration, not one held with certainty or with overwhelming dominance over all other moral frames and considerations. In my experience, one cause of this is that if you write about implications within a particular worldview, people assume you place 100% weight on it, when the correlation between the two is a lot less than 1.

I see the same thing happening with Nick Bostrom, e.g. his old Astronomical Waste article explicitly explores things from a totalist view where existential risk dominates via long-term effects, but also from a person-affecting view where it is balanced strongly by other considerations like speed of development. In Superintelligence he explicitly prefers not making drastic sacrifices of existing people for tiny proportional (but immense absolute) gains to future generations, while also saying that future generations are neglected and a big deal in expectation.

 

Economic policy in poor countries

Alexander Berger discusses this at length in a recent 80,000 Hours podcast interview with Rob Wiblin.

What grants has Carl Shulman's discretionary fund made?

Last update is that they are, although there were coronavirus-related delays.

What is an example of recent, tangible progress in AI safety research?

Focusing on empirical results:

Learning to summarize from human feedback was good, for several reasons.

I liked the recent paper empirically demonstrating objective robustness failures hypothesized in earlier theoretical work on inner alignment.

 

Help me find the crux between EA/XR and Progress Studies

Side note: Bostrom does not hold or argue for placing 100% weight on total utilitarianism, such that one would take overwhelming losses on other views for tiny gains by total utilitarian lights. In Superintelligence he specifically rejects an example of an extreme tradeoff of that magnitude (not reserving one galaxy's worth of resources out of millions for humanity/existing beings, even if posthumans would derive more wellbeing from a given unit of resources).

I also wouldn't actually accept a 10 million year delay in tech progress (and the death of all existing beings who would otherwise have enjoyed extended lives from advanced tech, etc.) for a 0.001% reduction in existential risk.

Help me find the crux between EA/XR and Progress Studies

By that token, most particular scientific experiments or contributions to political efforts may be such: e.g. if there is a referendum to pass a pro-innovation regulatory reform and science funding package, a given donation or staffer in support of it is very unlikely to counterfactually tip it into passing, although the expected value and average returns could be high, and the collective effort has a large chance of success.
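
A toy expected-value calculation illustrates the point (the numbers here are made up purely for illustration): if a referendum whose passage would be worth on the order of $\$10^{10}$ has, say, $10^7$ marginal supporters, each with roughly a $10^{-7}$ chance of being the counterfactual tipping point, then

$$
\mathbb{E}[\text{value per supporter}] \approx 10^{-7} \times \$10^{10} = \$1{,}000,
$$

even though any individual supporter almost certainly makes no counterfactual difference to the outcome.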
