Thanks for replying!
Rest of my comment is conditional on EA folks having a consensus for >50% by 2040. I personally don't actually believe this yet, but let's imagine a world where you and I did. This discussion would still be useful if one day it is year X and we know >50% odds by year X + 20.
And there are other causes where we can have a positive effect without directly competing for resources
I do feel it competes for resources though. Attention in general is a finite resource.
- People learning of EA have a finite attention span and can ...
Single data point but I think my life trajectory could look different if I believed >50% odds of AGI by 2040, versus >50% odds by 2100. And I can't be the only one who feels this way.
On the 2100 timeline I can imagine trusting other people to do a lot of the necessary groundwork to build the field globally till it reaches peak size. On the 2040 timeline every single month matters and it's not obvious the field will grow to peak size at all. So I'd feel much more compelled to do field-building myself. And so would others, even if they have poor persona...
These are real factors and major reasons people are not ambitious.
Some level of financial stability is prudent. Beyond that you just have to care about the goal more than securing complete stability. Lots of people don't stay at early-stage startups because they can get good enough pay at later stage ones.
Burnout is real, and there are tips you will find (on this site as well as elsewhere). Many people don't reduce their odds of burnout to zero (I don't know if they even can), but you can reduce them by some amount.
Imposter syndrome and aversion to rejec...
Appreciate your attempts to steer rather than mutiny! Hope they work out well for you.
Not sure why this is on EAF rather than LW or maybe AF, but anyway
One obvious answer is that the LW community and mods tend to defer to Yudkowsky more than the EAF community does.
(This doesn't settle whether the deference is good or bad, but this difference is a fact about reality, I think.)
I understand, thanks again
Thanks for replying. I'm not sure this satisfies the criteria of "legible" as I was imagining it, since I buy most of the AI risk arguments and still feel poorly equipped to evaluate how important this was. But I do not have sufficient ML knowledge; perhaps it was legible to people with sufficient knowledge.
P.S. If it doesn't take too much of your time I would love to know if there's any discussion on why this was significant for x-risk, say, on the alignment forum. I found the paper and OpenAI blogpost but couldn't find discussion. (If it will take time I totally understand, I will try finding it myself.)
Thanks for the reply. This makes a lot of sense! I will look into Bridgewater.
Oh okay I see. Thank you for responding!
You have shifted my opinion a bit, it's no longer obvious to me where exactly the line between private and public information should be drawn here.
I can see why releasing information about personal incompetence for instance might be unusual in some cultures; I'm not sure why you can't build a culture where releasing such information is accepted.
You're right OpenPhil not being a grant recommendation service changes things a bit.
I think I would still be keen on seeing more research (or posts or comments etc.) that tries...
Good to hear it shifted your opinion!

> I can see why releasing information about personal incompetence for instance might be unusual in some cultures; I'm not sure why you can't build a culture where releasing such information is accepted.

I agree it's possible, but think it's a ton of work! Intense cultural change is really tough. Imagine an environment, for instance, where we had a public ledger of, for every single person:
There would...
However, it would likely be very awkward and unprofessional to actually release this information publicly.
IMO this is a norm that should urgently change. AFAIK 80k hours and CEA have admitted mistakes and changes in cause prioritisation before, including the whole global health to longtermist shift.
Willingness to admit mistakes is table stakes for good epistemics.
In this case, the "mistakes" are often a list of things like, "This specific organization was much worse than we thought. The founders happened to have issues A, B, and C, which really hurt the organization's performance." Releasing such information publicly is a big pain and something our culture is not very well attuned to. If OP basically announced information like, "This small group we funded is terrible, in large part because their CEO, George, is very incompetent", that would be very unusual and there would likely be a large amount of resis...
Interesting link, thanks for sharing! I can think of counterpoints to that assumption too, but won't discuss them here as it's off-topic. (I mean, I agree it is a real effect; its strength can vary, I think.)
As an undergrad who has considered doing movement-building despite probably being a better fit for direct research, you're basically right.
The biggest reason for not doing direct research, though, is the shortage of mentorship. Expecting someone to write their own research agendas, apply for funding, and work solo straight after undergrad is a tall order. There is no clear feedback loop, and there is less clearly visible scope for upskilling, which as someone junior you intrinsically tend to consider valuable. It is easier to take the lower-risk path, which is to apply f...
I see. That statement of yours brought to my mind questions like:
- can we assume (most) criminals are rational actors? is there evidence?
- how do you compare say, years in prison versus benefits from getting away with a theft in an EV calculation?
- how do you consider personal circumstances? (a poor thief gets more utility than a rich thief)
- does the punitive justice system currently do any calculations of this sort?
I don't think any of this discussion is essential to the piece, it's just that that line caught my eye as smuggling in a whole perspective.
Not a complete answer, but some subquestions this question invokes for me:
- How much difference does lack of mentorship make for junior researchers? Are they likely to quit EA research altogether and do something else, likely to take more years to reach the same capability level, or likely to simply fail to reach the same level of capability despite trying?
- Split the above question by more minimal mentorship versus deeper mentorship. I have a feeling deeper mentoring relationships can be very valuable (I could be wrong).
- Mentorship h...
If digital minds have moral status comparable to biological ones, will this matter as much?
a) Digital minds are more matter- and energy-efficient, so we can have many more of them than biological minds.
b) If we deploy transformative AI that is aligned, we can likely get digital minds eventually if we want. (At least under naturalist worldviews.)
c) Digital minds optimised for any metric probably do not look identical to either human or animal minds. Examples of metrics include any function of capability, energy-efficiency, hedonic capacity or neural correlates of consciousness (if we find such a thing).
Criminals should be punished in proportion to an estimate of the harm they have caused, times a factor to account for a less than 100% chance of getting caught, to ensure that crimes are not worth it in expectation.
Where is this perspective coming from? I googled this and it seems to be referring to rational choice theory in criminology. How well supported is this theory?
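To make concrete the expected-value logic I read that line as assuming - a minimal sketch, with made-up numbers and a hypothetical function name:

```python
def deterrent_punishment(harm, p_caught):
    """Punishment per the quoted rule: harm times a factor (1 / p_caught)
    accounting for a less-than-100% chance of getting caught."""
    if not 0 < p_caught <= 1:
        raise ValueError("p_caught must be in (0, 1]")
    return harm / p_caught

# A theft causing 1000 units of harm, caught 1 in 4 times, would get a
# punishment of 4000. Expected punishment (0.25 * 4000) then equals the
# harm, so the crime is not worth it in expectation.
print(deterrent_punishment(1000, 0.25))
```

Note this only deters an actor whose expected benefit is at most the harm caused, and who actually weighs these quantities - which is exactly the rational-choice assumption I'm questioning above.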
My concern, however, is that this is not EA, or at least not EA as embodied by its fundamental principles as explored in my piece.
And I think your last para is mostly valid too.
Thanks this definitely seems helpful!
Thanks for this anecdote!
Given the scarcity of such successes, I think people here would be interested in hearing a longer-form version of this. Just wanted to suggest it!
Thanks for your reply! I'll see if I can convince people using this.
(Also, a very small point: the PDF title says "Insert your title here", when viewed on Chrome at least.)
Thanks for your reply!
This makes sense. Linked post at the end was useful and new for me.
Would recommend adding something attention-catching or interesting from the special issue if you can. It will increase the likelihood that someone decides it's worth their time to go through it.
One mental move you can make to avoid this totalisation is to frame "doing the most good" not as the terminal goal of your life, but as an instrumental goal to your terminal goal which is "be happy/satisfied/content". Doing good is obviously one of the things that brings you satisfaction (if it didn't you obviously would not do it), but it isn't the only thing.
Accepting this frame risks ending up not doing good at all, because there are plenty of people who are happy without doing much good for others (at least as measured by an EA lens). Which may...
Interesting suggestion, although my first reaction is it feels a bit like handing things over to Moloch. Like, I would rather replace a bad judge of what is infohazardous content with a good judge, than lose our ability to keep any infohazards private at all.
There's also a similar discussion on having centralised versus decentralised grantmaking for long-term future stuff. People pointed out the unilateralist's curse as a reason to keep it centralised.
I am normally super in favour of more decentralisation and pluralism of thought, but having seen a bunch of info...
Just as a datapoint:
I had once explicitly posted on LW asking what pivotal act you could take by querying a misaligned oracle AI, under the assumption that you want to leak as few bits of information about the world to the AI as possible. The reasoning being that with less data it would have less ability to break out of its box, even if it failed to answer your query.
LW promptly downvoted the question heavily, so I assumed it's taboo.
Ignoring inside views on specific topics, yes we should have that prior. But having inside views on both AI risk and stable totalitarianism (without use of AGI), I'm personally leaning towards net negative currently.
Safety work on AI risk, biorisk, or stable totalitarianism doesn't seem as limited by the wealth civilisation as a whole has as by the number of people who agree and care enough to direct funds, attention, or energy to such causes.
a) It isn't obvious that undifferentiated scientific progress is net good, or that any interventions to speed it up are. AI capabilities will also be sped up, which seems net bad. Even in futures where AI risk is not an issue (we don't build an AGI), scientific progress being sped up might bring closer the possibility of stable totalitarianism, and black balls our society is currently ill-prepared to tackle.
b) EA disproportionately originates and hires from elite colleges. IMO this means you might have to come up with really good arguments if you want to convince them. Just wished to keep you aware.
Trying to downplay rather than promote the visibility of AI risk feels to me at gut level like a losing strategy.
There are real benefits to having more people convinced about AI risk, such as more funding and talent dedicated to the problem. As you mention, there are downsides if you fail to convince someone. There are ways to get a higher upside-to-downside ratio when spreading ideas - like only focusing on newly graduated AI researchers. But if the ideas ever hit the mainstream, it will be hard to control who does or doesn't get exposed to them. What we terminally...
Might be worth making this distinction more prominent in the post! I didn't notice it on first (brief) read either.
Small point, but the linked tweet in your last para doesn't come across as someone who feels EAs are morally arrogant, at least if I read the thread without any other context. He's both appreciative and critical of EA, and his criticisms seem mostly about the actual work rather than the attitudes or traits of the people involved.
I missed this, my bad.
It still invalidates the specific critique - but yeah if Pablo's point is just about "quality of critique" then this doesn't really invalidate that.
The 80k hours article on voting does not say "don't vote". Do at least link the article in your post!
Pablo is quoting a 10-year-old comment; the 80k article you link was published in 2020.
What is the benefit of using an AI here?
I'm not sure how to upload a doc on my phone, so I'm just copy-pasting the content.
If this isn't valuable for the forum I'm also happy to take it down (either the comment or the post)
This is mostly a failed attempt at doing anything useful, that I nevertheless wish to record.
See also: https://forum.effectivealtruism.org/posts/AiH7oJh9qMBNmfsGG/institution-design-for-exponential-technology
There is a lot of complexity to understanding how tech changes the landscape of power and how bureaucracies can or should function. A lot of this complexity is specific to th...
Thanks for your reply. I did in fact realise the same thing later on! I could send you my attempt or link it here if you're interested.
I suggested a similar idea here, although my post wasn't as clear:
Also, I think [person with a viewpoint] needs to be narrowed down to the few most popular viewpoints in the world, or at least the developed world; otherwise this becomes a very large task.
Not sure why you got downvoted. First para is valid, second seems a bit off context. (Like yes, it's related but is it related enough to actually further the goals of the OP?)
Understood. Sorry for harshness in my original comment then.
Note: OP has assured me the following is not a concern. Haven't deleted the comment so the discussion is still visible.
I'm using a large heading because I feel the cost of EA supporting fraudulent projects can be high; I don't usually do this.
As someone who has previously worked in the cryptocurrency space, I don't think your stablecoin is likely to maintain its peg long-term unless it has significant underlying reserves - USD or other assets. Significant here means at least 40-50% backing, although it is much better if it is higher. You ma...
I see. Thanks for this.
Peter Thiel's address to EA in 2013, San Francisco:
I see. Rather than being "the" keynote speaker in 2013, there were four keynote speakers, of which Thiel was one (the others were Peter Singer, Jaan Tallinn, and Holden Karnofsky).
My concern about people and animals having net-negative lives has been related to what’s happening with my own depression.
+1 on this part.
I feel like this is a very common failure mode - people look at the happiness or suffering prevalent in their own personal life and extrapolate it to whether the average person is net positive, zero or negative utility.
For what it's worth, my experience hasn't matched this. I started becoming concerned about the prevalence of net-negative lives during a particularly happy period of my own life, and have noticed very little correlation between the strength of this concern and the quality of my life over time. There are definitely some acute periods where, if I'm especially happy or especially struggling, I have more or less of a system-1 endorsement of this view. But it's pretty hard to say how much of that is a biased extrapolation, versus just a change in the size of my empathy gap from others' suffering.
You're right, there's some difficulty in figuring out which opinions we want to poll EAs on and present data for. And in how much outsiders would trust that data, as opposed to perceptions they may have obtained through other means. I didn't fully think this through when I was posting.
re: PR department
I think it would be useful to have PR, although I'm not entirely sure what policies they should take, or how insider versus outsider conversations should be fragmented. Some people have raised issues with the EA forum too, because it is too open...
Hmm, maybe I can call it a prior, but from the view of the outsider. Say an outsider with far-left stances comes and talks to 5 EAs, and finds 1 person who shares their far-left opinions and 1 person who has stances they find repulsive. Or maybe it's not 5 EAs in person they meet, but 5 articles or blog posts they read. They're now going to assume that roughly 20% of EAs are far-left and 20% have opinions they find repulsive, and that extrapolation becomes their "prior".
(And sure the ideal thing to do is talk to more than 5 people but not everyone has time or interest for that, which is why I might want EAs to instead present this data to them explicitly.)
I see - this is a valid point.
I was thinking of reporting survey results (of EA opinions on non-EA stances) - do you think it is hard to conduct surveys objectively?
You're right, "bias" may not have been the best word for what I wanted to say; it's somewhat negatively coded.
I'd be keen to know more what you mean by "what directs one to arrive at such a position".
2 makes sense.
Regarding 3, I completely agree. I think you can present this nuance in the intro pages too. Like: here is the current distribution of EA opinions on your favourite movement X, but if you feel you can convince us on X, we're open to change.
Thank you for this! This makes sense, and this could be added to intro pages.
I also still think it's useful to list out (in intro pages) systemic changes that other movements support that we don't support (or at least don't have consensus support for inside of EA).