Shortform Content [Beta]

Michael_Wiebe's Shortform

FTX Future Fund says they support "ambitious projects to improve humanity's long-term prospects". Does it seem weird that they're unanimously funding neartermist global health interventions like lead elimination?

Will MacAskill:

Lead Exposure Elimination Project. [...] So I saw the talk, I made sure that Clare was applying to  [FTX] Future Fund. And I was like, “OK, we’ve got to fund this.” And because the focus [at FTX] is longtermist giving, I was thinking maybe it’s going to be a bit of a fight internally. Then it came up in the Slack, and everyone w

... (read more)

LEEP is lead by a very talented team of strong "neartermist" EAs. 

In the real world and real EA, a lot of interest and granting can be dependent on team and execution (especially given the funding situation). Very good work and leaders are always valuable. 

Casting everything into some longtermist/neartermist thing online seems unhealthy.

This particular comment seems poorly written (what does "unanimously" mean?) and seems to pull on some issue, but it just reads that everyone likes MacAskill, everyone likes LEEP and so decided to make a move.

Sophia's Shortform

The goal of this short-form post: to outline what I see as the key common ground between the “big tent” versus “small and weird” discussions that have been happening recently and to outline one candidate point of disagreement.  

Tl;dr:

  • Common ground:
    • Everyone really values good thinking processes/epistemics/reasoning transparency and wants to make sure we maintain that aspect of the existing effective altruism community
    • Impact is fat-tailed
    • We might be getting a lot more attention soon because of our increased spending and because of the August release of
... (read more)
rohinmshah's Shortform

There's been a few posts recently about how there should be more EA failures, since we're trying a bunch of high-risk, high-reward projects, and some of them should fail or we're not being ambitious enough.

I think this is a misunderstanding of what high-EV bets look like. Most projects do not either produce wild success or abject failure, there's usually a continuity of outcomes in between, and that's what you hit. This doesn't look like "failure", it looks like moderate success.

For example, consider the MineRL BASALT competition that I organized. The low-... (read more)

Agrippa's Shortform

https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support 

To me at this point the expected impact of the EA phenomena as a whole is negative. Hope we can right this ship, but things really seem off the rails.

2MichaelDickens1d
Eliezer said something similar, and he seems similarly upset about it: https://twitter.com/ESYudkowsky/status/1446562238848847877 [https://twitter.com/ESYudkowsky/status/1446562238848847877] (FWIW I am also upset about it, I just don't know that I have anything constructive to say)

Eliezer's tweet is about the founding of OpenAI, whereas Agrippa's comment is about a 2017 grant to OpenAI (OpenAI was founded in 2015, so this was not a founding grant). It seems like to argue that Open Phil's grant was net negative (and so strongly net negative as to swamp other EA movement efforts), one would have to compare OpenAI's work in a counterfactual world where it never got the extra $30 million in 2017 (and Holden never joined the board) with the actual world in which those things happened. That seems a lot harder to argue for than what Elieze... (read more)

2Agrippa2d
This post includes some great follow up questions for the future. Has anything been posted re: these follow up questions?
Michael_Wiebe's Shortform

What is the definition of longtermism, if it now includes traditional global health interventions like reducing lead exposure?

Will MacAskill says (bold added):

Well, it’s because there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear. Whereas something in this broad longtermist area — like reducing people’s exposure to lead, improving brain and other health development

... (read more)
Agrippa's Shortform

As far as I can tell liberal nonviolence is a very popular norm in EA. At the same time I really cannot thing of anything more mortally violent I could do than to build a doomsday machine. Even if my doomsday machine is actually a 10%-chance-of-doomsday machine or 1% or etcetera (nobody even thinks it's lower than that). How come this norm isn't kicking in? How close to completion does the 10%-chance-of-doomsday machine have to be before gentle kindness is not the prescribed reaction? 

Agrippa's Shortform

My favorite thing about EA has always been the norm that in order to get cred for being altruistic, you actually are supposed to have helped people. This is a great property, just align incentives. But now re: OpenAI I so often hear people say that gentle kindness is the only way, if you are openly adversarial then they will just do the opposite of what you want even more. So much for aligning incentives.

quinn's Shortform

Stem cell slowdown and AI timelines

My knowledge of christians and stem cell research in the US is very limited, but my understanding is that they accomplished real slowdown. 

Has anyone looked to that movement for lessons about AI? 

Did anybody from that movement take a "change it from the inside" or "build clout by boosting stem cell capabilities so you can later spend that clout on stem cell alignment" approach? 

Puggy Knudson's Shortform

Carrick Flynn lost the nomination, and over $10 million dollars from EA aligned individuals went to support his nomination.

So these questions may sound pointed:

There was surely a lot of expected value in having an EA aligned thinker in congress supporting pandemic preparedness, but there were a lot of bottlenecks that he would have had to go through to make a change.

He would have been one of hundreds of congresspeople. He would have had to get bills passed. He would have had to win enough votes to make it past the primary. He would have had to have his pol... (read more)

I think seeing the attacks that he's captured by crypto interests was useful, in that future EA political forays will know that attack is coming and be able to fend it off better. Worth $11 mil in itself, probably not, but the expected value was already pretty high (a decent probability of having someone in congress who can champion bills no one disagrees with but doesn't want to spend time and effort on) so this information gained is helpful and might make either future campaigns more successful or alternatively dissuade future spending in this area. Definitely good to try once, we'll see how it plays out in the long run. We didn't know he'd lose until he lost!

James Montavon's Shortform

https://www.nytimes.com/2022/05/14/opinion/sunday/rich-happiness-big-data.html

This article from Seth Stephens-Davidowitz describes a paper (here) that examines who are the people in the top 0.1% of earners in the US, making at least $1.58 million per year. It  was interesting to me in that many of those people were not high-status jobs, but rather owning unsexy businesses such as a car dealership or a beverage distribution operation. Obviously, this has implications for how we structure society, but it could also be a good thing to keep in mind for th... (read more)

An interesting thought, but I think this overlooks the fact that wealth is heavy tailed. So it is (probably) higher EV to have someone with a 10% shot at their tech startup getting huge than one person with a 100% chance of running a succesful plumbing company.

Stefan_Schubert's Shortform

"Write a Philosophical Argument That Convinces Research Participants to Donate to Charity"

Has this every been followed up on? Is their data public?

AABoyles's Shortform

I recently experienced a jarring update on my beliefs about Transformative AI. Basically, I thought we had more time (decades) than I now believe we will (years) before TAI causes an existential catastrophe. This has had an interesting effect on my sensibilities about cause prioritization. While I applaud wealthy donors directing funds to AI-related Existential Risk mitigation, I don't assign high probability to the success of any of their funded projects. Moreover, it appears to me that there is essentially no room for additional funds in kinds of denomin... (read more)

Consider s-risk:

From your comment, I understand that you believe the funding situation is strong and not limiting for TAI, and also that the likely outcomes of current interventions is not promising. 

(Not necessarily personally agreeing with the above) given your view, I think one area that could still interest you is "s-risk". This also relevant for your interests in alleviating massive suffering. 

I think talking with CLR, or people such as Chi there might be valuable (they might be happy to speak if you are a personal donor).

 

Leadership de... (read more)

2Zach Stein-Perlman3d
If there's at least a 1% chance that we don't experience catastrophe soon, and we can have reasonable expected influence over no-catastrophe-soon futures, and there's a reasonable chance that such futures have astronomical importance, then patient philanthropy [https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/] is quite good in expectation. Given my empirical beliefs, it's much better then GiveDirectly. And that's just a lower bound; e.g., investing in movement-building might well be even better.
james.lucassen's Shortform

Question for anyone who has interest/means/time to look into it: which topics on the EA  forum are overrepresented/underrepresented? I would be interested in comparisons of (posts/views/karma/comments) per (person/dollar/survey interest) in various cause areas. Mostly interested in the situation now, but viewing changes over time would be great!

My hypothesis [DO NOT VIEW IF YOU INTEND TO INVESTIGATE]:

I expect longtermism to be WILDLY, like 20x, overrepresented. If this is the case I think it may be responsible for a lot of the recent angst about the relationship between longtermism and EA more broadly, and would point to some concrete actions to take.

1Kevin Lacker5d
Even a brief glance through posts indicates that there is relatively little discussion about global health issues like malaria nets, vitamin A deficiency, and parasitic worms, even though those are among the top EA priorities.
Ben Garfinkel's Shortform

(Disclaimer: The argument I make in this short-form feels I little sophistic to me. I’m not sure I endorse it.)

Discussions of AI risk, particular risks from “inner misalignment,” sometimes heavily emphasize the following observation:

Humans don’t just care about their genes: Genes determine, to a large extent, how people behave. Some genes are preserved from generation-to-generation and some are pushed out of the gene-pool. Genes that cause certain human behaviours (e.g. not setting yourself on fire) are more likely to be preserved. But people don’t care

... (read more)
21Rohin Shah5d
The actual worry with inner misalignment style concerns is that the selection you do during training does not fully constrain the goals of the AI system you get out; if there are multiple goals consistent with the selection you applied during training there's no particular reason to expect any particular one of them. Importantly, when you are using natural selection or gradient descent, the constraints are not "you must optimize X goal", the constraints are "in Y situations you must behave in Z ways", which doesn't constrain how you behave in totally different situations. What you get out depends on the inductive biases of your learning system (including e.g. what's "simpler"). For example, you train your system to answer truthfully in situations where we know the answer. This could get you an AI system that is truthful... or an AI system that answers truthfully when we know the answer, but lies to us when we don't know the answer in service of making paperclips. (ELK [https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge] tries to deal with this setting.) When I apply this point of view to the evolution analogy it dissolves the question / paradox you've listed above. Given the actual ancestral environment and the selection pressures present there, organisms that maximized "reproductive fitness" or "tiling the universe with their DNA" or "maximizing sex between non-sterile, non-pregnant opposite-sex pairs" would all have done well there (I'm sure this is somehow somewhat wrong but clearly in principle there's a version that's right), so who knows which of those things you get. In practice you don't even get organisms that are maximizing anything, because they aren't particularly goal-directed, and instead are adaption-executers rather than fitness-maximizers [https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers] . I do think that once you inhabit this way of thinking abo
5Ben Garfinkel5d
I think that's well-put -- and I generally agree that this suggests genuine reason for concern. I suppose my point is more narrow, really just questioning whether the observation "humans care about things besides their genes" gives us any additional reason for concern. Some presentations seem to suggest it does. For example, this introduction [https://www.lesswrong.com/posts/AHhCrJ2KpTjsCSwbt/inner-alignment-explain-like-i-m-12-edition] to inner alignment concerns (based on the MIRI mesa-optimization paper) says: And I want to say: "On net, if humans did only care about maximizing inclusive genetic fitness, that would probably be a reason to become more concerned (rather than less concerned) that ML systems will generalize in dangerous ways." While the abstract argument makes sense, I think this specific observation isn't evidence of risk. -------------------------------------------------------------------------------- Relatedly, something I'd be interested in reading (if it doesn't already exist?) would be a piece that takes a broader approach to drawing lessons from the evolution of human goals - rather than stopping at the fact that humans care about things besides genetic fitness. My guess is that the case of humans is overall a little reassuring (relative to how we might have expected generalization to work), while still leaving a lot of room for worry. For example, in the case of violence: People who committed totally random acts of violence presumably often failed to pass on their genes (because they were often killed or ostracized in return). However, a large portion of our ancestors did have occasion for violence. On high-end estimates, our average ancestor may have killed about .25 people. This has resulted in most people having a pretty strong disinclination to commit murder; for most people, it's very hard to bring yourself to murder and you'll often be willing to pay a big cost to avoid committing murder. The three main reasons for concern, t

I suppose my point is more narrow, really just questioning whether the observation "humans care about things besides their genes" gives us any additional reason for concern.

I mostly go ¯\_(ツ)_/¯ , it doesn't feel like it's much evidence of anything, after you've updated off the abstract argument. The actual situation we face will be so different (primarily, we're actually trying to deal with the alignment problem, unlike evolution).

I do agree that in saying " ¯\_(ツ)_/¯  " I am disagreeing with a bunch of claims that say "evolution example implies misa... (read more)

evelynciara's Shortform

I think some of us really need to create op-eds, videos, etc. for a mainstream audience defending longtermism. The Phil Torres pieces have spread a lot (people outside the EA community have shared them in a Discord server I moderate, and Timnit Gebru has picked them up) and thus far I haven't seen an adequate response.

NunoSempere's Shortform

Infinite Ethics 101: Stochastic and Statewise Dominance as a Backup Decision Theory when Expected Values Fail

First posted on nunosempere.com/blog/2022/05/20/infinite-ethics-101 , and written after one too many times encountering someone who didn't know what to do when encountering infinite expected values.

In Exceeding expectations: stochastic dominance as a general decision theory, Christian Tarsney presents stochastic dominance (to be defined) as a total replacement for expected value as a decision theory. He wants to argue that one decision is only ratio... (read more)

5Kevin Lacker6d
You could discount utilons - say there is a “meta-utilon” which is a function of utilons, like maybe meta utilons = log(utilons). And then you could maximize expected metautilons rather than expected utilons. Then I think stochastic dominance is equivalent to saying “better for any non decreasing metautilon function”. But you could also pick a single metautilon function and I believe the outcome would at least be consistent. Really you might as well call the metautilons “utilons” though. They are just not necessarily additive.
2Charles He6d
A monotonic transformation like log doesn’t solve the infinity issue right? Time discounting (to get you comparisons between finite sums) doesn’t preserve the ordering over sequences. This makes me think you are thinking about something else?

Monotonic transformations can indeed solve the infinity issue. For example the sum of 1/n doesn’t converge, but the sum of 1/n^2 converges, even though x -> x^2 is monotonic.

Ben Garfinkel's Shortform

The existential risk community’s relative level of concern about different existential risks is correlated with how hard-to-analyze these risks are. For example, here is The Precipice’s ranking of the top five most concerning existential risks:

  1. Unaligned artificial intelligence[1]
  2. Unforeseen anthropogenic risks (tied)
  3. Engineered pandemics (tied)
  4. Other anthropogenic risks
  5. Nuclear war (tied)
  6. Climate change (tied)

This isn’t surprising.

For a number of risks, when you first hear about them, it’s reasonable to have the reaction “Oh, hm, maybe that could be a ... (read more)

Related:

The uncertainty and error-proneness of our first-order assessments of risk is itself something we must factor into our all-things-considered probability assignments. This factor often dominates in low-probability, high- consequence risks—especially those involving poorly understood natural phenomena, complex social dynamics, or new technology, or that are difficult to assess for other reasons. Suppose that some scientific analysis A indicates that some catastrophe X has an extremely small probability P(X) of occurring. Then the probability that A

... (read more)
Emrik's Shortform

A way of reframing the idea of "we are no longer funding-constrained" is "we are bottlenecked by people who can find new cost-effective opportunities to spend money". If this is true,  we should plausibly stop donating to funds that can't give out money fast enough anyway, and rather spend money on orgs/people/causes you personally estimate needs more money now. Maybe we should up-adjust how relevant we think personal information is to our altruistic spending decisions.

Is this right? And are there any good public summaries of the collective wisdom fun... (read more)

FWIW, I think personal information is very relevant to giving decisions, but I also think the meme "EA is no longer funding-constrained" perhaps lacks nuance that's especially relevant for people with values or perspectives that differ substantially from major funders.

Relevant: https://forum.effectivealtruism.org/posts/GFkzLx7uKSK8zaBE3/we-need-more-nuance-regarding-funding-gaps

1james.lucassen7d
Hey, I really like this re-framing! I'm not sure what you meant to say in the second and third sentences tho :/
Michael_Wiebe's Shortform

How to make the long-term future go well: get every generation to follow the rule "leave the world better off than it was under the previous generation".

NegativeNuno's Shortform

I recently read a post which:

  • I thought was treating the reader like an idiot
  • I thought was below-par in terms of addressing the considerations of the topic it broached
  • I would nonetheless expect to be influential, because [censored]

Normally, I would just ask if they wanted to get a comment from this account. Or just downvote it and explain my reasons for doing so. Or just tear it apart. But today, I am low on energy, and I can't help but feel: What's the point? Sure, if I was more tactful, more charismatic, and glibber, I might both be able to explain ... (read more)

4MaxRa8d
Possible solution: I imagine some EAs would be happy to turn a rambly voice message about your complaints into a tactful comment now and then.

I have a few drafts which could use that, send me a message if you feel like doing that.

Load More