Recent Discussion

Epistemic status: Just a thought that I have, nothing too rigorous

The reason longtermism is so enticing (to me at least) is that the existence of so many future lives hangs in the balance right now. It just seems like a pretty good deed to me to bring 10^52 people (or whatever the real number turns out to be) into existence.

This hinges on the belief that utility scales linearly with the number of QALYs, so that twice as many people are also twice as morally valuable. My belief in this was recently shaken by the following thought experiment:

***

You are a traveling EA on a trip to St. Petersburg. In a dark alley, you meet a Demon with the ability to create Universes and a serious gambling addiction....

[Daniel Kokotajlo has a great sequence on the topic.](https://forum.effectivealtruism.org/s/MJKgevWYc6digKLux) I think the second post is going to be most relevant.

In my mind that’s no more a challenge to longtermism than general relativity (or the apparent position of stars around the sun during an eclipse) was a challenge to physics. But everyone seems to have their own subtly different take on what longtermism is. 🤷

Harrison Durland · 8h
Whenever your expected value calculation relies on infinity—especially if it relies on the assumption that an infinite outcome will only occur when given infinite attempts—your calculation is going to end up screwy. In this case, though, an infinite outcome is impossible: as others have pointed out, the EV of infinitely taking the bet is 0. Relatedly, I think that at some point moral uncertainty might kick in and save the day.
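To see why, here is a minimal sketch of the arithmetic, assuming a generic double-or-nothing structure with made-up parameters (a multiplier k with win probability p per round; the exact numbers in the original thought experiment may differ):

```python
# Illustrative parameters only: each accepted bet multiplies the universe's
# value by k with probability p and destroys it otherwise.
p, k = 0.6, 3.0  # hypothetical; chosen so that p * k > 1

for n in (1, 10, 100, 1000):
    expected_value = (p * k) ** n  # finite-n expected value grows without bound
    prob_not_ruined = p ** n       # ...but the chance of keeping anything shrinks to 0
    print(f"n={n:4d}  EV={expected_value:.3g}  P(not ruined)={prob_not_ruined:.3g}")

# Accepting forever means P(not ruined) -> 0, so the policy of always taking
# the bet ends with nothing almost surely, even though every finite-n EV is huge.
```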
PaulCousens · 11h
In David Deutsch's The Beginning of Infinity: Explanations That Transform the World there is a chapter in which he discusses many aspects of infinity. He also talks about the hypothetical scenario that David Hilbert proposed of an infinity hotel with infinite guests, infinite rooms, etc. I don't know which parts of the hypothetical scenario are Hilbert's original idea and which are Deutsch's modifications/additions/etc.

In the hypothetical infinity hotel, to accommodate a train full of infinitely many passengers, all existing guests are asked to move to the room whose number is double their current room number. All the odd-numbered rooms then become available for the new guests, and there are as many odd-numbered rooms (infinitely many) as even-numbered rooms (infinitely many).

If infinitely many trains arrive, each filled with infinitely many passengers, every existing guest in room n is instructed to move to room n(n+1)/2, and the nth passenger from the mth train is sent to room n + n^2 + (n - m)/2. (I don't know if I wrote those equations correctly; I have the audiobook and don't know how they are written.)

All of the hotel guests' trash will disappear into nowhere if the guests are given these instructions: within a minute, bag up your trash and pass it to the room one number higher than your own. If you receive a bag of trash within that minute, pass it on in the same manner within half a minute; if you receive a bag within that half minute, pass it on within a quarter of a minute, and so on. Furthermore, if a guest accidentally put something they value in the trash, they will not be able to retrieve it after the two minutes are up; if they somehow could, accounting for the retrieval would involve an infinite regress.

Some other things about infinity that he notes in the chapter...
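For concreteness, here is a short sketch of one standard scheme that accomplishes the same thing: existing guests move to the triangular-number rooms, and the new arrivals fill the gaps between consecutive triangular numbers, one diagonal of (train, passenger) pairs per gap. This is a reconstruction of a textbook-style assignment, not necessarily the exact formulas Deutsch gives.

```python
# A sketch of one standard Hilbert-hotel room assignment (assumed for
# illustration; not necessarily the formulas from Deutsch's book).

def room_for_existing_guest(n: int) -> int:
    """Existing guest in room n moves to the nth triangular number: 1, 3, 6, 10, ..."""
    return n * (n + 1) // 2

def room_for_passenger(train: int, passenger: int) -> int:
    """Passenger `passenger` of train `train` (both 1-indexed) gets a room strictly
    between two consecutive triangular numbers, so arrivals never collide with
    each other or with the relocated guests."""
    diagonal = train + passenger - 1  # which diagonal of the (train, passenger) grid
    return diagonal * (diagonal + 1) // 2 + passenger

# Quick finite sanity check that assignments never collide in a small window.
if __name__ == "__main__":
    guests = {room_for_existing_guest(n) for n in range(1, 2000)}
    arrivals = {room_for_passenger(m, n) for m in range(1, 40) for n in range(1, 40)}
    assert len(arrivals) == 39 * 39     # arrivals are pairwise distinct
    assert guests.isdisjoint(arrivals)  # and never displace a relocated guest
    print("no collisions in this finite window")
```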

Most of us have a problem with motivation.  Effective Altruism suggests that we can do substantial good through acting altruistically.  Yet, it is hard for an individual to motivate himself to act as well as reason requires.

 

Normative judgements and psychological motivation

A normative value judgement about how best to act is a separate matter from the psychological question of motivation.[1]  Effective Altruism answers the normative question of what to do but perhaps has given less attention to motivation. 

Effective Altruism reasons from a perspective that values sentient welfare impartially and suggests that it is normatively good for individuals to make substantial donations to effective charities, be vegan and direct their careers towards the common good.  We believe these actions are normatively good from the point of view of the...

> This should recognise that more reliable motivation comes from norm-following rather than from individual willpower.

I think this is right, and it becomes both more true and more important when the positive impacts you might have are distant in time, space, or both. If you're doing something to help your local community, you should be able to see the impact yourself fairly quickly, and willpower could well be the best thing to get you out picking litter or whatever. This falls down a bit if your beneficiaries are halfway around the world, in the future, or both.

TL;DR:

UCLA EA ran an AI timelines retreat for community members interested in pursuing AI safety as a career. Attendees sought to form inside views on the future of AI based on an object-level analysis of current AI capabilities.

We highly recommend other university groups hold similar small (<15-person), object-level-focused retreats. We tentatively recommend other organizers hold AI timelines retreats, with caveats discussed below.

Why did we run the UCLA EA AI Timelines Retreat?

Most people in the world do not take AI risk seriously. On the other hand, some prominent members of our community believe we have virtually no chance of surviving this century due to misaligned AI. These are wild-seeming takes with massive implications. We think that assessing AI risk should be a serious and thoughtful endeavor. We sought...

One quick question about your post -- you mention that some in the community think there is virtually no chance of humanity surviving AGI and cite an April Fool's Day post. (https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy) I'm not sure if I'm missing some social context behind this post, but have others claimed that AGI is basically certain to cause an extinction event in a non-joking manner?


The waste of human resources caused by poor selection procedures should pain the professional conscience of I–O psychologists.[1]

 

Short Summary: I think that EA organizations can do hiring a lot better.[2] If you are involved in hiring, you should do these three things: avoid unstructured interviews, train your interviewers to do structured interviews, and have a clear idea of what your criteria are before you start to review and filter applicants. Feel free to jump to the “Easy steps you should implement” section if you don’t want to go through all the details.

Intro

If we view a hiring process as an attempt to predict which applicant will be successful in a role, then we want to be able to predict this as accurately as we can. That is what this...

While I'm familiar with the literature on hiring, particularly on unstructured interviews, I think EA organizations should give serious consideration to the possibility that they can do better than average. In particular, the literature is correlational, not causal, has major selection biases, and is certainly not as broadly applicable as its authors claim.

From Cowen and Gross's book Talent, which I think captures the point I'm trying to make well:
> Most importantly, many of the research studies pessimistic about interviewing focus on unstructured interview...

Joseph Lemien · 9h
TL;DR: I agree with you. It is complicated and ambiguous, and I wish it were more clear-cut.

Regarding GMA tests, my loosely held opinion at the moment is that there is a big difference between 1) GMA being a valid predictor, and 2) having a practical way to use GMA in a hiring process. All the journal articles seem to point toward 1, but what I really want is 2. I suppose we could simply require that all applicants take a test from Wonderlic/GMAT/SAT, but I'm wary of the legal risks and the biases, two topics about which I lack the knowledge to give any confident recommendations. That is roughly why my advice is "only use these if you have really done your research to make sure it works in your situation." I'm still exploring the area and haven't yet found anything that gives me confidence, but I'm assuming there have to be solutions other than "just pay Wonderlic to do it."

I strongly agree with you. I'll echo a previous idea I wrote about: the gap between "this is valid" and "here are the details of how to implement this" seems fairly large. If I were a researcher, I assume I'd have mentors and more senior researchers I could bounce ideas off of, or who could point me in the right direction, but learning about these topics as an individual without that kind of structure is strange: I mostly just search on Google Scholar and use forums to ask more experienced people.
Joseph Lemien · 9h
Regarding structured versus unstructured interviews, I was just introduced to the 2016 update yesterday and skimmed through it. I, too, was very surprised to see that there was so little difference. While I want to be wary of over-updating from a single paper, I do want to read the Rethinking the validity of interviews for employment decision making [https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Oh%2C+Postlethwaite%2C+%26+Schmidt%2C+2013&btnG=] paper so that I can look at the details.

This year I've started using three remote personal/executive assistants for my work projects. They have been awesome and super useful, so I thought I'd write a guide to help others get started with remote assistants.

If working with a remote assistant doesn't work out for you, I think you'll lose around £300 and 12 hours of your time over one month. But if it does work well, you have a lot to gain - my estimate is that my assistants save me around 20-30 hours a month.

Ways in which my remote assistants have helped me

  • We have a remote assistant who does all our events logistics work, including sourcing and booking venues, booking transport and catering, and handling
...

I don't think my particular VAs have more capacity, but I believe Virtalent has other VAs ready to match with clients.

It is unclear to me whether I've just gotten lucky. But with Virtalent you can switch VAs and the minimum commitment is very low, which is why I think the best strategy is just to try.

Holly Morgan · 7h
Fancy Hands [https://www.fancyhands.com/] "is a team of US-based virtual assistants". Comments I've heard on them from a couple of EAs: ... And then I wasn't sure if this EA has actually used Fancy Hands, but they summarised it thus: ...

When I was doing more PA work for EAs myself, I briefly tried experimenting with re-delegating anonymised tasks to Upwork [https://www.upwork.com/hire/us/], but I couldn't find any takers for the first task I tried. Another EA I know uses them for PA tasks though.

Re assistant headhunters, one EA recommended US-based Pocketbook Agency... ...and another EA said...
Holly Morgan · 7h
Mati Roy [https://www.linkedin.com/in/matiroy/] is an EA with some US-timezone-friendly VAs: https://bit.ly/PantaskServices (on the website [https://www.pantask.com/] it says "We hire mainly in North America and Europe" but I think they still generally prefer to share the Google doc). [Edit: And before anyone wastes time on CampusPA [https://www.campuspa.com/] - another EA-run PA agency that I sometimes hear mentioned - I'm pretty sure they're dead now.]

I have previously encountered EAs who have beliefs about EA communication that seem jaded to me. These are either "Trying to make EA seem less weird is an unimportant distraction, and we shouldn't concern ourselves with it" or "Sounding weird is an inherent property of EA/EA cause areas, and making it seem less weird is not tractable, or at least not without compromising important aspects of the movement." I would like to challenge both of these views.

“Trying to make EA seem less weird is unimportant”

As Peter Wildeford explains in this LessWrong post:

> People take weird opinions less seriously. The absurdity heuristic is a real bias that people -- even you -- have. If an idea sounds weird to you, you're less likely to try and believe it, even

...
RedStateBlueState · 7h
It's kind of funny for me to hear about people arguing that weirdness is a necessary part of EA. To me, EA concepts are so blindingly straightforward ("we should try to do as much good with donations as possible", "long-term impacts are more important than short-term impacts", "even things that have a small probability of happening are worth tackling if they are impactful enough") that you have to actively modify your rhetoric to make them seem weird.

Strongly agree with all of the points you brought up - especially on AI Safety. I was quite skeptical for a while until someone gave me an example of AI risk that didn't sound like it was exaggerated for effect, to which my immediate reaction was "Yeah, that seems... really scarily plausible".

It seems like there are certain principles that have a 'soft' and a 'hard' version - you list a few here. The soft ones are slightly fuzzy concepts that aren't objectionable, and the hard ones are some of the tricky outcomes you come to if you push them. Taking a couple of your examples:

Soft: We should try to do as much good with donations as possible

Hard: We will sometimes guide time and money away from things that are really quite important, because they're not the most important

 

Soft: Long-term impacts are more important than short-term impac...

Ines · 8h
Hm, yeah, I see where you're coming from. Changed the phrasing.

This post includes some great follow-up questions for the future. Has anything been posted regarding these follow-up questions?

Agrippa · 4h
As far as I can tell, liberal nonviolence is a very popular norm in EA. At the same time, I really cannot think of anything more mortally violent I could do than to build a doomsday machine. Even if my doomsday machine is actually a 10%-chance-of-doomsday machine, or 1%, etc. (nobody even thinks it's lower than that). How come this norm isn't kicking in? How close to completion does the 10%-chance-of-doomsday machine have to be before gentle kindness is not the prescribed reaction?
Agrippa · 5h
My favorite thing about EA has always been the norm that in order to get cred for being altruistic, you actually are supposed to have helped people. This is a great property: it aligns incentives. But now, re: OpenAI, I so often hear people say that gentle kindness is the only way; if you are openly adversarial, they will just do the opposite of what you want even more. So much for aligning incentives.