
matthewp

217 karma · Joined May 2017 · Posts: 6 · Comments: 35 (sorted by new)

My quick answer would be: since writing the comment, I've noticed plenty of people made first contact via HPMOR :D

I still don't know the answer though. I'd guess a startupy algorithm to answer this might look like:

  1. identify the audience (is it local folks, all peeps on the web, 'agenty' people?) and desired outcomes (more active community members, or just spreading the concepts)
  2. find channels to first reach that audience (go viral on TikTok, or guest lecture at Stanford)
  3. funnel into a broader learning programme (is it a MOOC, a YT playlist?)

But obvs this is a pretty involved effort and perhaps something one would go for a grant for :o

Do you know if there are any orgs in the UK housing Ukrainian refugees?

> How difficult should we expect AI alignment to be?

With many of the AI questions, one needs to reason backwards rather than pose the general question.

Suppose we all die because unaligned AI. What form did the unaligned AI take? How did it work? Which things that exist now were progenitors of it, and what changed to make it dangerous? How could those problems have been avoided, technically? Organisationally?

I don't see how useful alignment research can be done quite separately from capabilities research. Otherwise what we'll get will be people coming in at the wrong time with a bunch of ideas that lack technical purchase.

Similarly, the questions about what applications we'll see first are already hinted at in capabilities research.

That being the case, it will take someone way more than a year of effort to upskill, because they actually need to understand something about capabilities work.

As someone with a mathematical background, I see a claim about a general implication (the Repugnant Conclusion, RC) arising from Total Utilitarianism. I ask 'what is Total Utilitarianism?' I understand 'add up all the utilities'. I ask 'what would the utility functions have to look like for the claim to hold?' The answer is, 'quite special'.

I don't think any of us should be comfortable with not checking that the claim works at a gears level. The claim here being, approximately, that the RC is implied under Total Utilitarianism regardless of the choice of utility function, which is false, as demonstrated above.
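For concreteness, a minimal sketch of the kind of counterexample I mean (the crowding-style utility function here is mine, purely for illustration):

```python
# Total Utilitarianism: W = sum of individual utilities.
# Illustrative counterexample: each person's utility depends on the
# population size n, e.g. a fixed resource pool R shared among n people.

R = 100.0  # total resources (purely illustrative)

def utility(n: int) -> float:
    """Per-person utility when n people share the pool (always positive)."""
    return R / n

def total_utility(n: int) -> float:
    return n * utility(n)

# Total utility is capped at R no matter how many lives are added, so a
# huge population of lives 'barely worth living' can never beat a small,
# well-off one on total utility: the mere-addition step of the RC fails.
print(total_utility(10))      # 100.0
print(total_utility(10_000))  # 100.0
```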

> This subdiscipline treats distributions of wellbeing across individuals in different hypothetical worlds as a given input, and seeks to find a function that outputs a plausible ranking of those worlds. 

If you'd be interested in formalising what this means, I could try to show that either the formalisation is uninteresting or that some form of my counterexamples to the RC still holds.

Thanks for the considered reply :)

The crux, I think, lies in "is not meant to be sensitive to how resources are allocated or how resources convert to wellbeing." I guess the point established here is that it is, in fact, sensitive to these parameters.

In particular, if one takes this 'total utility' approach of adding up everyone's individual utility, we have to ask what each individual's utility is a function of.

It seems easy to argue that the utility of existing individuals will be affected by expanding or contracting the total pool of individuals. There will be opposing forces of division of scarce resources vs network effects etc., unless such interactions are ruled out by stipulation.
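As an illustrative functional form (my own, nothing canonical):

$$u_i(n) = \frac{R}{n} + c\,\log n$$

The first term captures the division of scarce resources and the second a network effect. Total utility is then $R + c\,n\log n$, so whether adding people caps the total or grows it without bound depends entirely on $R$ and $c$.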

A way the argument above could be taken down would be writing down some example of a utility function, plugging it into the total utility calculation, and showing the RC does hold, then pointing out that the function comes from a broad class which covers most situations of practical interest.
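A minimal sketch of that defence, under the assumption (which does all the work) that per-person welfare can be stipulated independently of population size:

```python
# If per-person welfare w is fixed independently of population size n,
# total utility n * w grows without bound and the RC does go through:
def total_utility(n: int, w: float) -> float:
    return n * w

print(total_utility(100, 10.0))    # 1000.0: a small, well-off world
print(total_utility(10_000, 0.2))  # 2000.0: the RC-style world wins
```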

If the best defence is indeed just pointing out that it's true for a narrow range of assumptions, my reaction will be like, "OK, but that means I don't have to pay much attention whenever it crops up in arguments because it probably doesn't apply."

Well, on the basis of the description in the SEP article:

> The idea behind this view is that the value of adding worthwhile lives to a population varies with the number of already existing lives in such a way that it has more value when the number of these lives is small than when it is large

It's not the same thing, since above we're saying that each individual's utility is a function of the whole setup. So when you add new people you change the existing population's utilities. The SEP description instead sounds like changing only what happens at the margin.
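Schematically (my notation, not the SEP's), the variable value view only discounts the marginal term as lives are added:

$$V(n) = \sum_{k=1}^{n} g(k), \quad g \text{ decreasing}$$

whereas in the setup above every existing term changes when $n$ does:

$$W(n) = \sum_{i=1}^{n} u_i(n)$$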

The main argument above is more or less technical, rather than 'verbal'. And reliance on verbal argument is pretty much the root of the original issue.

In the event someone else said something similar some other time, there's still value in a rederivation from a different starting position. I'm not so much concerned with getting credit for coming up with an idea as with encountering instances of this issue less frequently.

I think it's more a comment that the number of academics 'excited' about AIS would increase as the number of venues for publication grew.

This doesn't seem to have been said, so I will: $1m is enough to live off as an endowment. You can use it to work your entire life on any cause you want, and then donate as much of it in your will as you wish.
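A rough sketch of the arithmetic (the 4% figure is the usual 'safe withdrawal rate' heuristic, not a guarantee):

```python
endowment = 1_000_000      # USD
withdrawal_rate = 0.04     # common safe-withdrawal heuristic
annual_income = endowment * withdrawal_rate
print(annual_income)       # 40000.0 per year
```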

Answer by matthewp · Sep 04, 2021

Upvoted because I think that this should not be downvoted without comment. However, I think OP will get more engagement and generate a fuller response here if:

  • The arguments that there is a mass extinction going on are summarized rather than linked to.
  • Some facts about funding levels are given, e.g. how much £ goes to eco / wildlife / conservation charities overall versus EA causes. Otherwise some may respond that funding from EA sources is not required, or that there are more neglected priorities.
  • The case for impact is made, including why biodiversity loss is worse than other causes the community focuses on. E.g. there will be those happy to concede that it is bad, but meaningless in the event of x-risks coming to pass.

Note: I am sympathetic generally to the need for a diversity of causes; I'm just pointing out some elements I'd expect to see in an argument which proved persuasive.
