Arepo

Thanks! I wrote a first draft a few years ago, but I wanted an approach that leaned on intuition as little as possible if at all, and ended up thinking my original idea was untenable. I do have some plans on how to revisit it and would love to do so once I have the bandwidth.

I remember finding it really irritating to get started. I just gave you the max karma I can on your two comments. Hopefully that will get you across the threshold.

Sorry, I should say either that or imply some at-least-equally-dramatic outcome (e.g. favouring immediate human extinction in the case of most person-affecting views). Though I also think there are convincing interpretations of such views on which they still favour some sort of shockwave, since they would seek to minimise future suffering throughout the universe, not just on this planet.

more on such appearances here

I'll check this out if I ever get around to finishing my essay :) Off the cuff though, I remain immensely sceptical that one could usefully describe 'preference as basically an appearance of something mattering, being bad, good, better or worse' in such a way that such preferences could be:

a. detachable from consciousness, and

b. unambiguous in principle, and

c. grounded in any principle that is universally motivating to sentient life (which I think is the big strength of valence-based theories)

Notice that Shulman does not say anything about AI consciousness or sentience in making this case. Here and throughout the interview, Shulman de-emphasizes the question of whether AI systems are conscious, in favor of the question of whether they have desires, preferences, interests. 

I'm a huge fan of Shulman in general, but on this point I find him quasi-religious. He once sincerely described hedonistic utilitarianism as 'a doctrine of annihilation' on the grounds (I assume) that it might advocate tiling the universe with hedonium - ignoring that preference-based theories of value either reach the same conclusions or have a psychopathic disregard for the conscious states sentient entities do have. I've written more about why here.

So “existential catastrophe” probably shouldn't just mean "human extinction". But then it's surprisingly slippery as a concept. Existential risk is the risk of existential catastrophe, but it's difficult to give a neat and intuitive definition of “existential catastrophe” such that “minimise existential catastrophe” is a very strong guide for how to do good. Hilary Greaves discusses candidate definitions here.

 

Tooting my own trumpet, I did a lot of work on improving the question x-riskers are asking in this sequence.

METR is hiring ML engineers and researchers to drive these AI R&D evaluations forward.

 

These links both say the respective role is now closed.

I think this is a reasonable take in its own right, but it sits uncomfortably with Caleb Parikh's statement in a critical response to the Nonlinear Fund that 'I think the current funders are able to fund things down to the point where a good amount of things being passed on are net negative by their lights or have pretty low upside.'

I'm not aware of anybody who was convinced

 

While I'm also sceptical of this type of grant, I think this sort of comment fundamentally misunderstands marketing, which is what it sounds like this game essentially was. I'd be hard pressed to name anyone who made a decision based on a single advert, yet thousands of companies pay vast sums of money to produce them.

When your reach is high enough (and 450 unique visitors in 11 days is a very large reach by comparison to, say, a 2-year-old intro video by Robert Miles, which has 150k total views to date), even an imperceptibly small nudge can have a huge effect in expectation.
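To put the expected-value point in rough numbers, here's a minimal sketch; every figure in it is a hypothetical placeholder chosen for illustration, not a number from the grant or the video above.

    # Minimal expected-value sketch of the marketing argument above.
    # All numbers are hypothetical placeholders, not real figures.
    reach = 100_000          # hypothetical number of people exposed to the advert
    p_change = 1e-4          # hypothetical chance one exposure changes a decision
    value_per_change = 1.0   # hypothetical value (arbitrary units) of one changed decision

    expected_value = reach * p_change * value_per_change
    print(expected_value)    # 10.0 - a non-trivial expected effect from a tiny per-person nudge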

The stated behaviour sounds like grounds for:

  • opening an investigation, 
  • ensuring they got written statements from Altman on concerns they thought he might be dishonest about, comparing them to the actual facts, and then giving him concrete requirements to improve his behaviour,
  • and perhaps (if it's compatible with an investigation) publicly expressing concerns and calling out Altman for his behaviour. 

If none of that worked, they could publicly call for his resignation and, if he didn't give it, make the difficult decision of whether to oust him on nonspecific grounds or collectively resign as the board.

Choosing instead to fire him to the complete shock of other employees and the world at large still seems like such a deeply counterproductive path that it inclines me towards scepticism of her subsequent justification and toward the interpretation of bad faith Peter presented in this comment.

Fwiw I share Jack's impression that EA retreats are substantially more valuable. In fact, I'd go much further - I think the 'connections' metric is inherently biased towards events where you are encouraged to have a bunch of amiable and forgettable conversations with a large number of people.

For me the qualitative difference is extreme. I've been to maybe 5 EAG(x) events, and I think, after 6 months, the number of people I would actually reach out to ask for a favour has basically dropped to zero. Conversely, I went to one retreat about a decade ago, and still consider basically everyone I met there a friend - someone I would be happy to see, and pleased to be able to offer meaningful support to.

Inasmuch as you can metricify this kind of thing, I think for a fair comparison we really need both more nuance on how people should/do interpret 'favour' - which can mean anything from 'give a text introduction to a mutual acquaintance' to something out of the Godfather - and something like the integral of strength of favour you're willing to ask over time.

Having said all of the above, I do also find EAGx Virtual distinctly good. A lot of that is the lower time/energy cost to me as a participant, but there are some subtle benefits too: in the Gather Town careers fair (which I'm very biased on, having set up the GT), it's much easier to openly eavesdrop on what the people ahead of you in the 'queue' are saying to each other than it is in a noisy room, which often means the person at the stand can effectively answer the questions of 3+ people at once, making the whole thing much more efficient.

I strongly suspect there's a lot of room to play around with the online format in other ways that would similarly take advantage of it being online, rather than trying to mimic the processes you'd expect to find at a physical EAG.
