All of harsimony's Comments + Replies

Thanks for posting this, this seems like valuable work.

I'm particularly interested in using MLOSS to intentionally shape AI development. For example, could we identify key areas where releasing particular MLOSS can increase safety or extend the time to AGI?

Finding ways to guide AI development towards narrow and simple AI models can extend AI timelines, which is complementary to safety work:

https://www.lesswrong.com/posts/BEWdwySAgKgsyBzbC/satisf-ai-a-route-to-reducing-risks-from-ai

In your opinion, what traits of a particular piece of MLOSS determine whether it increases or decreases risk?

Ok, and any advice for reaching out to trusted-but-less-prestigious experts? It seems unlikely that reaching out to e.g. Kevin Esvelt will generate a response!

5
Linch
2y
I think someone like Esvelt (and also Greg, who personally answered in the affirmative) will probably respond. Even if they are too busy to do a call, they'll know the appropriate junior-level people to triage things to. 

Great post, I really appreciate an in-depth review of research on reducing sleep need.

I wrote some arguments for why reducing sleep is important here:

https://harsimony.wordpress.com/2021/02/05/why-sleep/

I also submitted a cause exploration app:

https://harsimony.wordpress.com/2022/07/14/cause-exploration-prize-application/

Your post includes substantially more research than mine and I would encourage you to reformat it and submit it to Open Phil's Cause Exploration Prize. I'm happy to help you with edits or combine our efforts!

2
JohnBoyle
2y
Thanks!  It's great to see that other people have discovered FNSS research and had similar thoughts about the implications.  I have indeed submitted the post for the Cause Exploration Prize.  At this point I feel reluctant to make significant changes to the post, but if there are major/glaring issues, those could be worth pointing out.

This kind of thing could be made more sophisticated by making fines proportional to the harm done, requiring more collateral for riskier projects, or setting up a system to short sell different projects. But simpler seems better, at least initially.

Have you thought about whether it could work with a more free market, and not necessarily knowing all of the funders in advance?

Yeah, that's a harder case. Some ideas:

  • People undertaking projects could still post collateral on their own (or pre-commit to accepting a fine under certain conditions). This kin

... (read more)
4
Ofer
2y
I don't think that short selling would work. Suppose a net-negative project has a 10% chance of ending up beneficial, in which case its certificates will be worth $1M (otherwise the certificates will end up worth $0). The certificates are therefore worth $100K today in expectation. If someone shorts the certificates as if they were worth less than that, they will lose money in expectation.
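
To spell out the arithmetic (a small worked example using the numbers above; the short-sale price $p$ is my own notation):

$$E[\text{value}] = 0.1 \times \$1\text{M} + 0.9 \times \$0 = \$100\text{K}$$

A short seller who sells at some price $p < \$100\text{K}$ and later settles at the realized value has expected profit $p - \$100\text{K} < 0$, so the short loses money in expectation.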

This kind of thing could be made more sophisticated by making fines proportional to the harm done

I was thinking of this. Small funders could then potentially buy insurance from large funders in order to allow them to fund projects that they deem net positive even though there's a small risk of a fine that would be too costly for them.

I proposed a simple solution to the problem:

  1. For a project to be considered for retroactive funding, participants must post a specific amount of money as collateral.
  2. If a retroactive funder determines that the project was net-negative, they can burn the collateral to punish the people that participated in it. Otherwise, the project receives its collateral back.

This eliminates the "no downside" problem of retroactive funding and makes some net-negative projects unprofitable.

The amount of collateral can be chosen adaptively. Start with a small amount and ... (read more)
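
As a minimal sketch of the mechanism in steps 1 and 2 above (the numbers and function name are illustrative, not from the original proposal):

```python
# Minimal sketch: how posted collateral changes the expected profit of a
# project seeking retroactive funding. All numbers are illustrative.

def expected_profit(p_good: float, payout_if_good: float,
                    collateral: float) -> float:
    """Retro payout if the project is judged net-positive; the collateral
    is burned if it is judged net-negative."""
    return p_good * payout_if_good - (1 - p_good) * collateral

# Without collateral, even a project that is only 10% likely to be
# net-positive is a free option on retroactive funding:
print(expected_profit(p_good=0.1, payout_if_good=1_000_000, collateral=0))
# 100000.0

# A sufficiently large collateral makes the same gamble unprofitable:
print(expected_profit(p_good=0.1, payout_if_good=1_000_000, collateral=150_000))
# -35000.0
```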

Crypto's inability to impose debts or enact substantial punishments beyond slashing stakes is a huge limitation, and I would like it if we didn't have to swallow that (i.e., if we could just operate in the real world, with non-anonymous impact traders who can be held accountable for more assets than they'd be willing to lock in a contract).

Given enough of that, we would be able to implement this by just having an impact cert that's implicated in a catastrophe turn into debt/punishment, and we'd be able to make that disincentive a lot more proportional to the s... (read more)

Related: requiring some kind of insurance that pays out when a certificate becomes net-negative.

Suppose we somehow have accurate positive and negative valuations of certificates. We could have insurers sell put options on certificates and require them to maintain a portfolio with positive overall impact. (So an insurer needs to buy certificates of positive impact to offset the negative impact they've taken on.)

Ultimately what's at stake for the insurer is probably some collateral they've put down, so it's a similar proposal.
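
A minimal sketch of that portfolio constraint (all names and numbers here are hypothetical, not from the proposal):

```python
# Sketch of the insurance idea above: an insurer may sell a put on a
# certificate only if their book stays net-positive in impact afterwards.

from dataclasses import dataclass, field

@dataclass
class Insurer:
    # Signed impact of positions held; positive certificates offset the
    # negative impact the insurer has agreed to absorb via puts.
    positions: list = field(default_factory=list)

    def portfolio_impact(self) -> float:
        return sum(self.positions)

    def sell_put(self, worst_case_impact: float) -> bool:
        """Take on the put only if the portfolio stays net-positive even in
        the worst case (the certificate turns out net-negative)."""
        if self.portfolio_impact() + worst_case_impact < 0:
            return False  # refuse: would leave a net-negative book
        self.positions.append(worst_case_impact)
        return True

insurer = Insurer(positions=[50.0])  # holds +50 impact in positive certificates
print(insurer.sell_put(-30.0))       # True: 50 - 30 >= 0
print(insurer.sell_put(-30.0))       # False: 20 - 30 < 0, must buy more offsets
```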

4
Emrik
2y
I don't think such a rule has a chance of surviving if impact markets take off?

1. Added complexity to the norms for trading needs to pay for itself to withstand friction, or else decay to its most intuitive equilibrium.
   1. Or the norm for punishing defectors needs to pay for itself in order to stay in equilibrium.
   2. Or someone needs to pay the cost of punishing defectors out of pocket for altruistic reasons.
2. Once a collateral-charging market takes off, someone could just start up an exchange that doesn't demand collateral, and instead charges a nominal fee that doesn't disincentivise risky investments but would still make them money. Traders would defect to this market if it's more profitable for them.

(To be clear, I think I'm very pro GoodX's project here; I'm just skeptical of the collateral suggestion.)
7
Owen Cotton-Barratt
2y
Nice, that's pretty interesting. (It's hacky, but that seems okay.) It's easy to see how this works in cases where there's a single known-in-advance funder that people are aiming to get retro funding from (evaluated in five years, say). Have you thought about whether it could work with a more free market, and not necessarily knowing all of the funders in advance?

I make a slightly different anti-immortality case here:

https://harsimony.wordpress.com/2020/11/27/is-immortality-ethical/

Summary: In a population at steady state, extended lifespan means taking resources away from other potential people, so technology for extended life may not be ethical in that case. Because we are not at steady state, this does not argue against working on life extension technology today.

One reason people make this claim is that many models of economic growth depend on population growth. Like you noted, there are lots of other ways to grow the economy by making each individual more productive (lower poverty, more education, automating tasks, more focus on research, etc.).

But crucially, all of these measures have diminishing returns. Let's say that in the future everyone on earth has a PhD, is highly productive, and works in an important research field. In this case, the only way to continue growing the economy is through population growth, sinc... (read more)

7
Henry Howard
2y
Thanks for sharing this. Definitely agree with him that it can't go on forever: "Many of the sources of growth historically — including rising educational attainment, rising research intensity, and declining misallocation — are inherently limited and cannot go on forever." Also agree when he says: "We are a long way from hitting any constraint that we have run out of people to hunt for new ideas" We're way off everyone having PhDs. I'll hold off worrying about declining birth rates for a few decades.

Thanks for writing this. Great to see people encouraging a sustainable approach to EA!

I want to tell you that taking care of yourself is what’s best for impact. But is it?

I claim that this is true:

  • Finding personal fulfillment is a positive result in and of itself.
  • It's important to prioritize personal needs, otherwise you will not be in a good position to help others (family, friends, charity, etc.).
  • Ensuring one's relationship with EA is sustainable can actually lead to more impact over the long run (though this shouldn't be people's primary goal, pe
... (read more)

These are all true, but (as Julia alludes to) not necessarily enough to establish that the conclusion we really want to believe is the correct one.

(Of course, we don't live in the most inconvenient world, so wanting to believe a conclusion is only some evidence against its veracity, not necessarily decisive evidence.)

I think another possible route around gambling restrictions on prediction markets is to ensure all proceeds go to charity, with the winners choosing which charity to donate to. I wrote about this more here:

https://forum.effectivealtruism.org/posts/d43f6HCWawNSazZqb/charity-prediction-markets
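
As a rough sketch of how settlement might work under that rule (the pro-rata payout and all names here are my own assumptions, not from the linked post):

```python
# Hypothetical sketch of the charity prediction market payout rule described
# above: stakes are pooled, and winners direct the whole pot to charities of
# their choice (no cash leaves the system, sidestepping gambling restrictions).

def settle_market(stakes: dict[str, float],
                  winners: dict[str, str]) -> dict[str, float]:
    """Split the pooled stakes among the winners' chosen charities, pro rata."""
    pot = sum(stakes.values())
    winner_total = sum(stakes[w] for w in winners)
    payouts: dict[str, float] = {}
    for trader, charity in winners.items():
        share = stakes[trader] / winner_total * pot
        payouts[charity] = payouts.get(charity, 0.0) + share
    return payouts

stakes = {"alice": 60.0, "bob": 40.0}          # opposite sides of one question
print(settle_market(stakes, {"alice": "AMF"}))  # {'AMF': 100.0}: alice won
```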

2
gvst
2y
Great idea! Makes me think it would be interesting to see a political prediction market where the winnings go to your preferred candidate in the race. Not sure if that would have a positive impact, but it would be cool to study. Edit: Just read your post and see that you discuss this haha

I have noticed that few people hold the view that we can readily reduce AI-risk. Either they are very pessimistic (they see no viable solutions so reducing risk is hard) or they are optimistic (they assume AI will be aligned by default, so trying to improve the situation is superfluous).

Either way, this would argue against alignment research, since alignment work would not produce much change.

Strategically, it's best to assume that alignment work does reduce AI-risk: the costs are asymmetric, since doing too much alignment work is better than doing too little and causing a catastrophe.

Though I am not super familiar with the research, it seems that in general more indirect democracy functions better, because voters have little incentive to cast informed votes, whereas representatives are incentivized to make informed decisions on voters' behalf.

I think the book 10% Less Democracy can point you to relevant research on this topic. It was discussed briefly on MR here.

You may also want to check out Caplan's The Myth of the Rational Voter for research along similar lines.

2
VictorSintNicolaas
2y
Thanks for your thoughts! On letting voters cast more informed votes, there's a whole movement around deliberation. You can argue that it increases costs for voters, but I think the trade-off is unclear and likely context-dependent. The books are great pointers, will have a look at the research referenced!

Great post!

To reiterate what AppliedDivinityStudies said, I would love to hear more about proposed solutions to this problem. For example, what do you think of this paper on preventing supervolcanic eruptions?

Interventions that may prevent or mollify supervolcanic eruptions

7
Mike Cassidy
3y
Thanks! As the authors put it in that paper: I think this is right, and until we can competently model how a magma will respond to any interventions we might attempt, it's perhaps too risky to do at the moment. Nevertheless, volcanologists have gone the other way and completely dismissed this whole concept of intervention. Personally, I think it would be very worthwhile to investigate this concept in the lab and with numerical models; after all, humans have drilled directly into magma reservoirs by mistake while looking for geothermal energy (~4 times, in fact!) with limited negative consequences. So the knowledge we could gain by drilling into magmas (one of the links I shared in the conclusions) would be highly informative for the coming decades of volcano science. At the moment, there are far less risky options we could pursue to mitigate the risk in the short term; for instance, we haven't even identified all the volcanoes capable of climate-altering eruptions, or determined how best to monitor them (many will not even be monitored, especially in resource-poor, volcano-rich countries like Indonesia and the Philippines).

Of course, EA funds can do all of these things, and I appreciate the work they are doing.

I think it is important to be explicit about the structure of EA funds, meta-charities, and charitable foundations: they typically involve pooling money from many donors and putting funding decisions in the hands of a few people. This is not a criticism! It makes a lot of sense to turn these decisions over to knowledgeable, committed specialists in the EA community. This approach likely improves the impact of people's donations over the counterfactual where people give ... (read more)

2
Emrik
3y
Strongly agree with trying to think about creative ways of effectively pooling wisdom together to make better donations collectively. Prediction markets, as you point out, are an example of a really high-impact related idea, so there might be more in the vicinity. I should've made that clear from the start. I just don't think this particular suggestion (Iterative Public Donation) beats what we've currently got. But, as I've said, I like the way you think. :)

I agree that the EA funds (and meta-charities like GiveWell) are great opportunities to give and can help balance the flow of donations going to different charities. But I don't think that these funds have entirely solved the collective action problem in charitable giving. Rather, they aggregate money from many donors and turn over funding decisions to a handful of experts. These experts are doing great work, and I really respect them, but it doesn't hurt to consider how we might do things even better!

If we really did have a system for small donors to coo... (read more)

1
Emrik
3y
I'm a bit confused about your view here. Why can't EA Funds, with enough money, fund specific research projects and new charitable organizations? Why can't they "work with mega-donors (...) to pursue much larger projects"? Also, it seems like you have more faith than I do in the collective wisdom of many non-experts, compared to a team of experts whose job is to work on these questions full-time. Do you think a donation would be better allocated by votes from 1000 average EAs who each spend 2 hours on research, or by a team of 10 highly experienced EAs who each spend 200 hours?

Which of your writings (including things like blog posts) do you consider most important for making the world a better place? Assuming many people agreed to deeply consider your arguments on one topic, what would you have them read?

9
Jason Brennan
3y
I am tempted to say the stuff on open borders and immigration, because the welfare effects of increased immigration are much higher than anything else I've worked on. But realistically, it's difficult to change people's minds even when you give them overwhelming evidence.  The work I did with Peter Jaworski on taboo markets seems persuasive to most people who encounter it. If people followed our advice, we'd save tens of thousands of lives per year in the US. But then the issue is that even if you agree with us, it's not like you can personally legalize kidney markets or other needed markets. That's kind of the problem with much of my work. It's about politics, institutions, and policy. Even when there's good advice, it's not like readers have the power to act on it, and the people in power have little incentive to do what's right. 

Wonderful idea, it looks great so far.

I appreciate that the list of charities one can donate to is relatively restricted since this prevents people from publicly donating to highly political charities for signalling purposes.

I also like that there is a dashboard showing how your donations are being spent.

One thing I find a little strange is the "lives saved" total (whereas the "CO2 Reduced" total seems perfectly normal to me). I don't have a good reason for this; it's just a personal feeling. Perhaps instead show the total spent or fraction spent on different cause areas, rather than asserting the overall impact of the donations?