All of rk's Comments + Replies

rk · 5y · 3

Seems right, though I don't know to what extent Paul's view is representative of OpenAI's overall view.

rk · 5y · 12

Paul Christiano has a notion of competitiveness, which seems relevant. _Directions and desiderata for AI control_ seems to be the place it's stated most clearly.

The following quote (emphasis in the original) is one of the reasons he gives for desiring competitiveness, and seems to be in the same ballpark as the reason you gave:

You can’t unilaterally use uncompetitive alignment techniques; we would need global coordination to avoid trouble. If we _don’t know how to build competitive benign AI, then users/designers of AI systems have to compromise_ effic

... (read more)
Eli Rose · 5y · 5
Thanks for the link. So I guess I should amend my impression of Paul's and OpenAI's goal to: "create AGI, make sure it's aligned, and make sure it's competitive enough to become widespread."
rk · 5y · 1

This seems like an important worry. I've updated the main post to state that I'm now unclear whether reports are good or bad (because it seems like most of the effect comes from how others use the information in the reports, and it's unclear to me whether they will mostly improve or worsen their judgement).

I do think that (a) people will discount lottery winners at least a bit relative to donors of the same size and (b) it's good to introduce input on funding evaluation from someone with errors that are (relatively) uncorrelated with major funding bodies' errors.

rk · 5y · 1

It's plausible that the use of the funds will be worse when the winner writes a report. Do you also think that reports change others' giving either negligibly or negatively?

Jan_Kulveit · 5y · 2
It's hard to estimate. Winning the lottery likely amplifies the voice of the winner, but the effect may be conditional on how much credibility the winner had beforehand. So far, the lottery winners have been highly trusted people working in central organizations. Overall, I would estimate with 90% confidence that the indirect effect on giving by other individual donors is within 3x the size of the direct effect, with an unclear sign. There is significant competition for the attention (and money) of individual donors.
rk · 5y · 1

I guess it depends on the details of the returns to scale for donors. If there are returns to scale across the whole range of possible pot sizes, then as long as one person who would do lots of work/has good judgment joins the donor lottery, we should be excited about less conscientious people joining as well.

To be more concrete, imagine the amount of good you can do with a donation goes with the square of the donation. Let's suppose one person who will be a good donor joins the lottery with $1. Everyone else in the lottery will make a neutral

... (read more)
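To make the toy model concrete, here is a minimal sketch (my construction, not from the original comment) under its stated assumptions: the good done by a donation of size d is d^2, one donor with good judgment stakes $1, and every other participant's counterfactual grant is neutral (zero expected good).

```python
# Toy model: good(d) = d**2, one "good donor" stakes $1, all other
# participants' grants are neutral, and the winner directs the whole pot.

def expected_good(pot_size: float, good_donor_stake: float = 1.0) -> float:
    """Expected good from the good donor's stake under good(d) = d**2."""
    win_probability = good_donor_stake / pot_size
    return win_probability * pot_size ** 2  # simplifies to stake * pot_size

for pot in [1, 10, 100, 1_000]:
    print(f"pot ${pot}: expected good = {expected_good(pot):.0f}")
```

Under these assumptions the expected good is linear in the pot size, so every neutral dollar added to the pot raises the value of the good donor's ticket, which is the sense in which convex returns make extra participants welcome.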
CarlShulman · 5y · 4
Except that the pot size isn't constrained by the participation of small donors: the CEA donor lottery has fixed pot sizes guaranteed by large donors, and the largest donors could be ~risk-neutral over lotteries with pots of many millions of dollars. So there is no effect of this kind, and there is unlikely ever to be one except at ludicrously large scales (where one could use derivatives or the like to get similar effects).
rk · 5y · 2

I don't have to if it doesn't seem worth the opportunity cost

Thanks for highlighting that in this comment. I don't think I made it prominent enough in the post itself.

rk · 5y · 1

Sorry, I didn't communicate what I meant well there.

It might be the case that DALYs somewhat faithfully track both (a) the impact of conditions on subjective wellbeing and (b) the impact of conditions on economic contribution, even if they're not explicitly intended to track (b). It might also be the case that efforts to extend DALYs to more faithfully track (a) for things that are worse than death would mean that they tracked (b) less well in those cases.

Then, it could be the case that it's better to stick with the current way of doing things.

I don't actu

... (read more)
rk · 5y · 3

I agree that this seems important.

If I remember/understand correctly, the normal instruments fail to deliver useful answers for very bad conditions. For example, if you administer a survey asking how many years of healthy life the survey-taker thinks a year in which they suffer X is worth, very bad conditions generate incredibly broad ranges of answers.

Some people say those years are valueless (so just at 0), some say they have huge disvalue (so they'd rather die now than face one year with the condition and then the rest of their life in good health), and some say tha

... (read more)
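As background for the time trade-off methods Derek mentions below, here is a hedged sketch of how such survey answers are often converted into health-state values; the worse-than-dead formula shown is one conventional variant, assumed here for illustration rather than taken from this thread.

```python
# Hedged sketch of time trade-off (TTO) scoring; exact formulas vary by
# study. Better than dead: x years healthy judged equal to t years in the
# state gives value x / t (between 0 and 1). Worse than dead, in one
# conventional variant: (t - x) years in the state followed by x years
# healthy, versus immediate death; value -x / (t - x), unbounded below.

def tto_value(x: float, t: float, worse_than_dead: bool = False) -> float:
    if not worse_than_dead:
        return x / t
    return -x / (t - x)

print(tto_value(4, 10))                        # 0.4: a moderate condition
print(tto_value(5, 10, worse_than_dead=True))  # -1.0
print(tto_value(9, 10, worse_than_dead=True))  # -9.0: extreme disvalue
```

The unbounded negative branch is one mechanical reason answers for very bad states spread so widely, as described above.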
Derek · 5y · 3
I’m working with Paul Dolan on a report related to this topic, and the need to value states worse than dead (SWD) is our only major point of disagreement. He gives some kind of justification for his views on p26 of this report, but I find it extremely unconvincing. But to be fair, nobody has proposed a particularly good method for dealing with SWD. Most attempts have used versions of the time trade-off, with limited success – for a slightly outdated review, see Tilling et al., 2012. In this Facebook post I’ve listed a few options for achieving the related but more modest objective of determining the point in happiness scales that is equivalent to death.
drbrake · 5y · 1
Thanks for the additional readings. I think Paul Dolan is asking the right questions. I am disappointed that after a promising initial discussion eight years ago, Holden doesn't seem to have spoken again on the subject, and to the best of my knowledge there is still no way on GiveWell to put different weights on "impact" to give different results. I don't understand your last paragraph, though. DALYs don't seem to measure economic effects on others at all, so if you do start to consider them, wouldn't that be a big argument for making some DALYs negative?
rk · 5y · 0

"I downvoted and the claims or arguments were a reason why"

rk · 5y · 0

"I downvoted and something else stylistic was a reason why"

rk · 5y · 0

"I downvoted and the picture was a reason why"

rk · 5y · 0

"I downvoted and the section headers were a reason why"

rk · 5y · 1

This has gotten a few downvotes. My best guesses as to the cause are the section headers and the picture, but I'm not sure. So I'm going to add four subcomments to this: "I downvoted and the section headers were a reason why", "I downvoted and the picture was a reason why", "I downvoted and something else stylistic was a reason why", and "I downvoted and the claims or arguments were a reason why". If you downvote, I'd be grateful if you indicated a reason why (either by commenting or by voting on one of those subcomments).

DavidNash · 5y · 4
I didn't vote either way, but the picture, emojis, and multiple disclaimers throughout are probably not needed. Also, some of the ideas aren't explored much and would be better either fleshed out or not included. Here is a post on a similar topic that could be a good style to copy.
rk · 5y · 0
"I downvoted and the claims or arguments were a reason why"
rk · 5y · 0
"I downvoted and something else stylistic was a reason why"
rk · 5y · 0
"I downvoted and the picture was a reason why"
rk · 5y · 0
"I downvoted and the section headers were a reason why"
rk · 5y · 5

unfortunately I wasn’t able to include a table of contents

On the GreaterWrong version of the EA Forum, there's an automatically generated TOC. So that's an option for people who would strongly prefer a table of contents.


I have been feeling the siren song of agent-based models recently (they seem like a natural move in a lot of cases, because we are actually modelling agents), but your criticisms of them reminded me that they often don't pay for their complexity in better predictions. It seems quite a general and useful point, and perhaps could be extracted to a st

... (read more)
Max_Daniel · 5y · 3
Not really, I'm afraid. I'd expect that, due to the risk of inadvertent negative impacts and the large improvements from weeding out obviously suboptimal options, a pure lottery will rarely be a good idea. How much effort to expend beyond weeding out clearly suboptimal options seems to me likely to depend on contextual information specific to the use case. I'm not sure how much there is to be said in general except for platitudes along the lines of "invest time into explicit evaluation until the marginal value of information has diminished sufficiently".
rk · 5y · 1

I split out the comments into areas of concern over on LessWrong. I think it would be a bit too noisy to duplicate that over here, but do feel free to bring up any issues!