anormative


Comments

I've often found it hard to tell whether an ideology/movement/view has just found a few advocates among a group, or whether it has totally permeated that group.  For example, I'm not sure that Srinivasan's politics have really changed recently or that it would be fair to generalize from his beliefs to all of the valley. How much of this is actually Silicon Valley's political center shifting to e/acc and the right, as opposed to people just having the usual distribution of political beliefs (in addition to a valley-unspecific decline of the EA brand)? 

I'm pretty confident that a majority of the population will soon have very negative attitudes towards big AI labs.

Can you elaborate on what makes you so certain about this? Do you think that the reputation will be more like that of Facebook or that of Big Tobacco? Or will it be totally different?

Will it be focused on GCRs again?

It feels like in the past, more considerateness might have led to fewer hard discussions about AI or even animal welfare.

I think factory-farmed animals are the better example here. It can be pretty hurtful to tell someone you think a core facet of their life (meat eating) has been a horrendous moral error, just as was slavery or genocide. It seems we all feel fine putting aside the considerateness consideration when the stakes are high enough.

Can you elaborate on what you mean by “the EA-offered money comes with strings?”

But the AIs-with-different-values – even: the cooperative, nice, liberal-norm-abiding ones – might not even be sentient! Rather, they might be mere empty machines. Should you still tolerate/respect/etc them, then?

The flavor of discussion here on AI sentience that follows what I've quoted above always reminds me of, and I think is remarkably similar to, the content of this scene from the Star Trek: The Next Generation episode "The Measure of a Man." It's a courtroom drama-style scene in which Data, an android, is threatened by a scientist who wants to make copies of Data and argues he's property of the Federation. Patrick Stewart, playing Jean-Luc Picard, defending Data, makes an argument similar to Joe's.

You see, he's met two of your three criteria for sentience, so what if he meets the third. Consciousness in even the smallest degree. What is he then? I don't know. Do you? (to Riker) Do you? (to Phillipa) Do you? Well, that's the question you have to answer. Your Honour, the courtroom is a crucible. In it we burn away irrelevancies until we are left with a pure product, the truth for all time. Now, sooner or later, this man or others like him will succeed in replicating Commander Data. And the decision you reach here today will determine how we will regard this creation of our genius. It will reveal the kind of a people we are, what he is destined to be. It will reach far beyond this courtroom and this one android. It could significantly redefine the boundaries of personal liberty and freedom, expanding them for some, savagely curtailing them for others. Are you prepared to condemn him and all who come after him to servitude and slavery?

This seems awesome! Thanks for sharing.

We’ve tried to do research-based meetings in the past, but we’ve found that people tend to just focus on debating abstract or shallow topics, and we haven’t been able to sufficiently incentivize diving into the more nitty-gritty details or really digging for cruxes. This might not have worked for us because we tried to have too much control over the research process, or because we presented the activity as a debate, or maybe because of the makeup of our group.

Some questions: Did all of the meetings go well? Did you notice any of the issues I mentioned (if not, any idea why)? How many people did you do this with? Were they all post-Intro fellows or selected for in some other way? How much progress did you make on the questions?

I'm not suggesting this in any serious way, and I don't know anything about Bregman or this organization, but an interesting thought comes to mind. I've often heard people ask something along the lines of "should we rebrand EA?", to which the answer is "maybe, but that's probably not feasible." If this organization is truly so good at growth, is based on the same core principles EA is based on (it might not be, beyond the shallow "OOO"), and hasn't been tainted or tarnished by SBF etc., then prima facie it might not be so bad for the EA brand to recede and for currently-EA individuals and institutions to transition to SMA (SoMA?) ones.

Edit: it's SfMA, I realize now, but I care too much about my bad pun to change it...

FWIW, the "deals and fairness agreement" section of this blogpost by Karnofsky seems to agree about (or at least discuss) trade between different worldviews :

It also raises the possibility that such “agents” might make deals or agreements with each other for the sake of mutual benefit and/or fairness.

Methods for coming up with fairness agreements could end up making use of a number of other ideas that have been proposed for making allocations between different agents and/or different incommensurable goods, such as allocating according to minimax relative concession; allocating in order to maximize variance-normalized value; and allocating in a way that tries to account for (and balance out) the allocations of other philanthropists (for example, if we found two worldviews equally appealing but learned that 99% of the world’s philanthropy was effectively using one of them, this would seem to be an argument – which could have a “fairness agreement” flavor – for allocating resources disproportionately to the more “neglected” view). The “total value at stake” idea mentioned above could also be implemented as a form of fairness agreement. We feel quite unsettled in our current take on how best to practically identify deals and “fairness agreements”; we could imagine putting quite a bit more work and discussion into this question.

Different worldviews are discussed as being incommensurable here (in which case maximizing expected choice-worthiness doesn't work). My understanding, though, is that the (somewhat implicit but more reasonable) assumption being made is that under any given worldview, philanthropy in that worldview's preferred cause area will always win out in utility calculations, which makes the sort of deals proposed in "A flaw in a simple version of worldview diversification" not possible/useful.
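To illustrate what I mean with a deliberately toy sketch (my numbers are arbitrary, and this only covers the simplest kind of dollar-for-dollar swap between two causes, not the richer deals in the linked posts):

```python
# Toy illustration (my own, not from either linked post): two worldviews, each of
# which values its preferred cause far above the other's. Under that assumption, a
# plain "my bucket funds your cause, your bucket funds mine" swap can't leave both
# worldviews better off by their own lights at any exchange rate.

# Utility per dollar, by worldview and cause. Numbers are made up.
values = {
    "global_health_worldview": {"global_health": 10.0, "animal_welfare": 0.1},
    "animal_welfare_worldview": {"global_health": 0.01, "animal_welfare": 100.0},
}

def gain_from_swap(x: float) -> tuple[float, float]:
    """Worldview A shifts $1 from global health to animal welfare;
    worldview B shifts $x from animal welfare to global health.
    Returns (change in A's utility, change in B's utility)."""
    a = values["global_health_worldview"]
    b = values["animal_welfare_worldview"]
    delta_a = (a["animal_welfare"] - a["global_health"]) + x * (a["global_health"] - a["animal_welfare"])
    delta_b = (b["animal_welfare"] - b["global_health"]) - x * (b["animal_welfare"] - b["global_health"])
    return delta_a, delta_b

for x in (0.5, 1.0, 2.0):
    da, db = gain_from_swap(x)
    print(f"exchange rate {x}: worldview A gains {da:+.2f}, worldview B gains {db:+.2f}")

# A gains only when x > 1, B gains only when x < 1, so at every exchange rate at
# least one worldview loses by its own lights (both merely break even at x = 1).
```

The mutually beneficial trades in the "flaw" post rely on marginal cost-effectiveness differing across time or circumstances; if each worldview's preferred cause always dominates its own calculations, that wedge never appears.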
