All of Tobias Häberli's Comments + Replies

"Profits for investors in this venture [ETA: OpenAI] were capped at 100 times their investment (though thanks to a rule change this cap will rise by 20% a year starting in 2025)."


I stumbled upon this quote in this recent Economist article [archived] about OpenAI. I couldn't find any additional good source that supports the claim, so this might not be accurate. The earliest mention of the claim I could find is from January 17th 2023, although it only talks about OpenAI "proposing" the rule change.

If true, this would make the profit cap less meaningful, es... (read more)

I've talked to some people who are involved with OpenAI secondary markets, and they've broadly corroborated this.

One source told me that after a specific year (didn't say when), the cap can increase 20% per year, and the company can further adjust the cap as they fundraise.

1
trevor1
4mo
As of January 2023, the institutional markets were not predicting AGI within 30 years.
  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. 

Would the information in this quote fall under any of the Freedom... (read more)

1
Tony Barrett
5mo
Yes, I expect that the government would aim to protect the reported information (or at least key sensitive details) as CUI or in another way that would be FOIA-exempt.

As far as I understand, the plan is for it to be a (sort of?) national/governmental institute.[1] The UK government has quite a few scientific institutes. It might be the first of its kind in the world.

  1. ^

    In this article from early October, the phrasing implies that it would be tied to the UK government:

    Sunak will use the second day of Britain's upcoming two-day AI summit to gather “like-minded countries” and executives from the leading AI companies to set out a roadmap for an AI Safety Institute, according to five people familiar with the government’s

... (read more)
1
SebastianSchmidt
5mo
Having thought more about it, I think the AI safety institute might be a continuation of the UK Frontier AI Taskforce. I don't know anything about the object-level output of the Taskforce but they've certainly managed to put together a great bunch of people as advisors and contributors (Yoshua Bengio, Paul Christiano, etc.). Very excited to see what comes out of this.

Thanks for the context, didn't know that!

SBF was additionally charged with bribing Chinese officials with $40 million. Caroline Ellison testified in court that they sent a $150 million bribe.

10
bern
5mo

True, true. But that charge is part of next year's trial and is a helluva lot more straightforward than the 7 charges in this one. (And it shouldn't have featured in this trial—"She called this a bribe earlier in her testimony, a comment stricken from the record by Judge Kaplan. The judge also instructed the jury to disregard this comment." says Blockworks)

2
[comment deleted]
6mo
3
Ramiro
6mo
OMG thanks for this. My bad. I edited the original to account for this.

My hope and expectation is that neither will be focused on EA

I'd be surprised [p<0.1] if EA was not a significant focus of the Michael Lewis book – but agree that it's unlikely to be the major topic. Many leaders at FTX and Alameda Research are closely linked to EA. SBF often, and publicly, said that effective altruism was a big reason for his actions. His connection to EA is interesting both for understanding his motivation and as a storytelling element. There are Manifold prediction markets on whether the book would mention 80'000h (74%), Open Phil... (read more)

Update: the court ruled SBF can't make reference to his philanthropy

5
Ben_West
6mo
Yeah, "touches on EA but isn't centred on it" is my modal prediction for how major stories will go. I expect that more minor stories (e.g. the daily "here's what happened on day n of the trial" story) will usually not mention EA. But obviously it's hard to predict these things with much confidence.

Yeah, unfortunately I suspect that "he claimed to be an altruist doing good! As part of this weird framework/community!" is going to be substantial part of what makes this an interesting story for writers/media, and what makes it more interesting than "he was doing criminal things in crypto" (which I suspect is just not that interesting on its own at this point, even at such a large scale).

(Not sure if this is within the scope of what you're looking for. )
I'd be excited about having something like a roundtable with people who have been through 80'000h advising – talking about how their thinking about their career has changed, advice for people in a similar situation, etc. I'd imagine this could be a good fit for 80k After Hours?

On Microsoft Edge (the browser) there's a "read aloud" option that offers a range of natural voices for websites and PDFs. It's only slightly worse than speechify and free – and can give a glimpse of whether $139/year might be worth it for you.

I think that a very simplified ordering for how to impress/gain status within EA is:

Disagreement well-justified ≈ Agreement well-justified >>> Agreement sloppily justified > Disagreement sloppily justified

Looking back on my early days of interacting with EAs, I generally couldn't present well-justified arguments, and I did feel pressure to agree on shaky epistemic grounds. Because I sometimes disagreed nevertheless, I suspect that some parts of the community were less accessible to me back then.

I'm not sure about what hurdles to overcome if you w... (read more)

As far as I understand, the paper doesn't disagree with this and an explanation for it is given in the conclusion:

Communication strategies such as the ‘funnel model’ have facilitated the enduring perception amongst the broader public, academics and journalists that ‘EA’ is synonymous with ‘public-facing EA’. As a result, many people are confused by EA’s seemingly sudden shift toward ‘longtermism’, particularly AI/x-risk; however, this ‘shift’ merely represents a shift in EA’s communication strategy to more openly present the movement’s core aims.

2
Chris Leong
8mo
Interesting. Seems from my perspective to be a shift towards AI, followed by a delayed update on EA’s new position, followed by further shifts towards AI.

The low number of human-shrimp connections may be due to the attendance dip in 2020. Shrimp understandably have a difficult relationship with dips.

1
Jeffrey Kursonis
1y
Hahahah, awesome!

This is the kind of comment strong upvotes are for.

There is a comprehensive process in place... it is a cohesive approach to aligning font, but thank you for the drama!

Insiders know that EA NYC has ambitious plans to sprout a whole network of Bodhi restaurants. To those who might criticize this blossoming "bodhi count," let's not indulge in shaming their gastronomic promiscuity. After all, spreading delicious vegan dim sum and altruism is something we can all savour.

3
MeganNelson
1y
We wouldn't be the first uh, social movement to spread our message via BBQ seitan.

I find the second one more readable. 

Might be due to my display: If I zoom into the two versions, the second version separates letters better.



But you're also right, that we'll get used to most changes :)

I find the font to be less readable and somewhat clunky. 
Can't quite express why it feels that way. It reminds me of display scaling issues, where your display resolution doesn't match the native resolution.

Can't quite express why it feels that way. It reminds me of display scaling issues, where your display resolution doesn't match the native resolution.

Now that you mention it, I feel a bit the same. It might be that we just need to get used to it, but maybe it's the font-weight?

Is the second one better? I just changed font-weight from 450 to 400

I'm not really sure if the data suggests this.

The question is rather vague, making it difficult to determine the direction of the desired change. It seems to suggest that longtermists and more engaged individuals are less likely to support large changes in the community in general. But both groups might, on average, agree that change should go in the 'big tent' direction.

Although there are statistically significant differences in responses to "I want the community to look very different"  between those with mild vs. high engagement, their average resp... (read more)

The only source for this claim I've ever found was Emile P. Torres's article What “longtermism” gets wrong about climate change

It's not clear where they got the information about an "enormous promotional budget of roughly $10 million" from. I'm not saying that it's untrue, but it's also unclear why Torres would have this information.

The implication is also that the promotional spending came out of EA pockets. But part of it might have been promotional spending by the book publisher.

ETA: I found another article by Torres that discusses the claim in a bit mor... (read more)

That "floated" is so weasely!

If I remember correctly, Claude had limited public deployment roughly a month before the Google investment, and roughly 2 months after their biggest funder (FTX) went bankrupt.

Thanks for getting back to me and providing more context. 

I do agree that Churchill was probably surprised by Roosevelt's use of the term because it was not in the official communiqué. Trying to figure out how certain historical decisions were influenced is very challenging.

The way you describe the events strikes me as a very strong claim; it requires a lot of things to be true beyond the term being used accidentally:

Accidentally called for unconditional surrender of the Japanese, leading to the eventual need for the bomb to be dropped. (p.35)

Based on the... (read more)

In EA, the roles of "facilitator" and "attendee" may not be as straightforward as they appear to be in AR. From personal experience, there are many influential people in the EA community who do not hold designated roles that overtly reveal their power. Their influence/soft power only becomes apparent once you get a deeper understanding of how community members interrelate and how information is exchanged. On the other hand, someone who is newly on a Community Building grant may have more power on paper than in reality.

I agree with the need for a policy. I... (read more)

This is currently at 14 agree votes and the same question for Will MacAskill is at -13 disagree votes.

I'd be curious whether this is mainly because Nick Beckstead was the CEO and therefore carried more responsibility, or whether there are other considerations.

The most recent Scott Alexander post seems potentially relevant to this discussion.

The following long section is about what OpenAI could be thinking – and might also translate to Anthropic. (The rest of the post is also worth checking out.)

Why OpenAI Thinks Their Research Is Good Now, But Might Be Bad Later

OpenAI understands the argument against burning timeline. But they counterargue that having the AIs speeds up alignment research and all other forms of social adjustment to AI. If we want to prepare for superintelligence - whether solving the technical c

... (read more)
-2
sergia
1y
This analysis seems to consider only future value, ignoring current value. How does it address current issues, like the ones here? https://forum.effectivealtruism.org/posts/bmfR73qjHQnACQaFC/call-to-demand-answers-from-anthropic-about-joining-the-ai?commentId=ZxxC8GDgxvkPBv8mK Why does a small, secretive group of people who plan some sort of "world AI revolution" that brings "UBI" (without much of a plan for how exactly) consider itself "good" by default? I'm one of those who was part of this secretive group of people before, only to see how much there is on the outside. Not everyone thinks that what currently exists is "good by default". Goodness comes from participation, listening, and talking to each other – not necessarily from some moral theory. I call for discussing this plan with the larger public. I think it will go well, and I have evidence for this if you're interested. Thank you.

The report suggests that Roosevelt's supposedly accidental use of the term "unconditional surrender" and his subsequent failure to back down played a significant role in shaping the strategy that led to the dropping of the atomic bombs on Japan. I found this claim hard to believe – and after some research, I think it's probably not correct.

Quite amazingly, the term ‘unconditional’ only entered into the Allied demands due to a verbal mistake made by Roosevelt when reading a joint statement in a live broadcast in January 1943, a fact that he later admitted. Ch

... (read more)
1
Toby_Ord
1y
You may be right, and these sources do make it less clear. I haven't looked at the original sources and, as with most parts of my report, am following the eminent nuclear historian Richard Rhodes, who marshals some pretty convincing evidence that it was accidental. On page 521 of The Making of the Atomic Bomb (which I cited in that paragraph of my report): I don't have time to follow up on all the sources Rhodes used to construct this passage, but it does sound like there is some remaining mystery here. We have direct quotes from Roosevelt and Churchill saying it was accidental, but some other evidence which might contradict that.

I understand that downvotes can be hurtful – but afaik the post has been up for 45min, so maybe it would be a good idea to wait a bit before reading too much into the reaction/non-reaction? 

 

I agree that it's not well embedded into the book. However,  I'm not sure it has to be.

In most of Western Europe, abortion is not a significant political issue. For example, polling consistently finds around 86% of people in the UK think that "Women should have the right to an abortion" and only around 5% of people think that they shouldn't. Given that the readers of WWOTF likely hold even more progressive views, it may be sufficient to make a brief mention of the topic and move on.

It is possible to interpret the book's emphasis on the value of future... (read more)

Thank you for your response – I think you make a great case! :) 

I very much agree that Pascal's Mugging is relevant to longtermist philosophy,[1] for similar reasons to what you've stated – like that there is a trade-off between high existential risk and a high expected value of the future.[2]

I'm just pretty confused about whether this is the point being made by Philosophy Tube. The Pascal's mugging in the video has as its astronomical upside that "Super Hitler" is not born – because his birth would mean that "the future is doomed". She doesn't ... (read more)

1
tobycrisford
1y
I should admit at this point that I didn't actually watch the Philosophy Tube video, so can't comment on how this argument was portrayed there! And your response to that specific portrayal of it might be spot on. I also agree with you that most existential risk work probably doesn't need to rely on the possibility of 'Bostromian' futures (I like that term!) to justify itself. You only need extinction to be very bad (which I think it is), you don't need it to be very very very bad. But I think there must be some prioritisation decisions where it becomes relevant whether you are a weak longtermist (existential risk would be very bad and is currently neglected) or a strong longtermist (reducing existential risk by a tiny amount has astronomical expected value). This is also a common line of attack that EA is getting more and more, and I think the reply "well yeah, but you don't have to be on board with these sci-fi-sounding concepts to support work on existential risk" is a reply that people are understandably more suspicious of if they think the person making it is on board with these more sci-fi-like arguments. It's like when a vegan tries to make the case that a particular form of farming is unnecessarily cruel, even if you're ok with eating meat otherwise. It's very natural to be suspicious of their true motivations. (I say this as a vegan who takes part in welfare campaigns.)

Love this type of research, thank you very much for doing it!

I'm confused about the following statement:

While carp and salmon have lower scores than pigs and chickens, we suspect that’s largely due to a lack of research.

Is this a species-specific suspicion? Or does a lower amount of (high-quality) research on a species generally reduce your welfare range estimate? 
On average I'd have expected the welfare range estimate to stay the same with increasing evidence, but the level of certainty about the estimate to increase. 

If you have reason to belie... (read more)

6
Bob Fischer
1y
Great question, Tobias. Yes, less research on a species generally reduces our welfare range estimate. I agree with you that it would be better, in some sense, to have our confidence increase in a fixed estimate rather than having the estimates themselves vary. However, we couldn't see how to do that without invoking either our priors (which we don't trust) or some other arbitrary starting point (e.g., neuron counts, which we don't trust either). In any case, that's why we frame the estimates as placeholders and give our overall judgments separately: vertebrates at 0.1 or better, the vertebrates themselves within 2x of one another, and the invertebrates within 2 OOMs of the vertebrates.

Moonshot EA Forum Feature Request 

It would be awesome to be able to opt-in for "within-text commenting" (similar to what happens when you enable commenting in a google doc) when posting on the EA Forum. 

Optimally those comments could also be voted on.

9
JP Addison
1y
I have good news for you! LessWrong has developed this feature. You can access the feature by going to your settings and checking "opt-in to experimental features." You might think that this will lead to a "party-of-1" dynamic, but due to the way it's implemented (check out the above post), quoted text in comments will lead to side comments for you.

I recently heard the Radio Bostrom audio version of the Unilateralist's Curse after only having read it before. Something about the narration made me think that it lends itself very well to an explainer video. 

[Edit after months: While I still believe these are valid questions, I now think I was too hostile, overconfident, and not genuinely curious enough.] One additional thing I’d be curious about:

You played the role of a messenger between SBF and Elon Musk in a bid for SBF to invest up to $15 billion of (presumably mostly his own) wealth in an acquisition of Twitter. The stated reason for that bid was to make Twitter better for the world. This has worried me a lot over the last weeks. It could have easily been the most consequential thing EAs have ever done and th... (read more)

It could have easily been the most consequential thing EAs have ever done and there has – to my knowledge – never been a thorough EA debate that signalled that this would be a good idea.

I don't think EAs should necessarily require a community-wide debate before making major decisions, including investment decisions; sometimes decisions should be made fast, and often decisions don't benefit a ton from "the whole community weighs in" over "twenty smart advisors weighed in".

But regardless, seems interesting and useful for EAs to debate this topic so we can form ... (read more)

I think it's supposed to be Peter Thiel (right) and Larry Page (top) in the cover photo. They are mentioned in the article, are very rich, and to me look more like the drawings.

3
[anonymous]
1y
Thanks for pointing this out. I agree. I updated my comments. 

Release shocking results of an undercover investigation ~2 weeks before the vote. Maybe this could have led to a 2-10% increase?


My understanding is that they did try to do this with an undercover investigation report on poultry farming. But it was only in the news for a very short time, and I'm guessing it didn't have a large effect.

A further thing might have helped:

  • Show clearly how the initiative would have improved animal welfare. 
    The whole campaign was a bit of a mess in this regard.  In the "voter information booklet" the only clearly understand
... (read more)
1
Jonas V
1y
Excellent points, thank you!

I spent an hour looking into evidence for the quote you posted. While I think the phrasing is inaccurate, I'd say the gist of the quote is true. For example, it's pretty understandable that people jump from "Emile Torres says that Nick Beckstead supports white supremacy" to "Emile Torres says that Nick Beckstead is a white supremacist".

White Supremacy:
In a public Facebook post you link to this public Google Doc where you call a quote from Nick Beckstead "unambiguously white-supremacist".

You reinforce that view in a later tweet:
https://twitt... (read more)

Tobias, I think you are absolutely correct. But I will note that this is a well-worn pattern:

Given a long list of tweets and articles that make it quite obvious that Torres is deliberately and repeatedly construing everything ever written or said by longtermists in order to make them appear maximally sinister and dangerous and racist, Torres protests that they have never actually written the sentence "Toby Ord is a white supremacist".

Rather, Torres is using the scholarly definition of white supremacy, not the every day definition. In this way there's alway... (read more)

Toby Ord touches on that in The Precipice.
For example here (at 11:40)

But the same study also found that only 41% of respondents from the general population placed AI becoming more intelligent than humans into the 'first 3 risks of concern' out of a choice of 5 risks. 
Only for 12% of respondents was it the biggest concern. 'Opinion leaders' were again more optimistic – only 5% of them thought AI intelligence surpassing human intelligence was the biggest concern.

Question: "Which of the potential risks of the development of artificial intelligence concerns you the most? And the second most? And the third most?"
Option 1: T
... (read more)

I recently found a Swiss AI survey that indicates that many people do care about AI.
[This is only very weak evidence against your thesis, but might still interest you 🙂.]

Sample size:
Population – 1245 people
Opinion Leaders – 327 people [from the economy, public administration, science and education]

The question: 
"Do you fear the emergence of an "artificial super-intelligence", and that robots will take power over humans?"

From the general population, 11% responded "Yes, very", and 37% responded "Yes, a bit". 
So, half of the respondents (that expre... (read more)

1
Tobias Häberli
2y
But the same study also found that only 41% of respondents from the general population placed AI becoming more intelligent than humans into the 'first 3 risks of concern' out of a choice of 5 risks.  Only for 12% of respondents was it the biggest concern. 'Opinion leaders' were again more optimistic – only 5% of them thought AI intelligence surpassing human intelligence was the biggest concern. Question: "Which of the potential risks of the development of artificial intelligence concerns you the most? And the second most? And the third most?" Option 1: The risks related to personal security and data protection. Option 2: The risk of misinterpretation by machines. Option 3: Loss of jobs. Option 4: Artificial intelligence that surpasses human intelligence. Option 5: Others
2
Olivia Addy
2y
These are interesting findings! It would be interesting to see if these kinds of results are similar elsewhere.

From a welfarist perspective, and under the assumption that going vegan/vegetarian isn't an option, one challenge might be: 
"Should we promote grass-fed beef consumption instead?"

A very rough estimate (might be off by orders of magnitude):

  • Cows have at least 400'000 kcal according to this back-of-the-envelope calculation.
  • A large mussel has maybe 20 kcal according to the USDA.

I'm super uncertain if I'm comfortable with giving mussels approx. 1/20'000 the moral worth compared to cows. Even after reading, for example, this blog post arguing The Ethical Ca... (read more)

Substitution is unclear. In my experience it's very clear that scallop is served as a main-course protein in contexts where the alternative is clearly fish, or most often shrimp. So insofar as substitution occurs, we'd mainly see substitution of shrimp and fish.

However, it is not clear how much substitution of meat in fact occurs at all as supply increases. People generally seem to like eating meat and meat-like stuff. I don't know the data here, but meat consumption is globally on the rise.

Nice analysis – thank you for posting!

While I agree that bivalves are very likely at most minimally sentient, I'd feel more comfortable with people promoting bivalve aquaculture at scale if the downside risks were clearer to me.

Do you have any sense of exactly how unlikely it is that bivalves suffer?

5
Timothy Chan
2y
Brian Tomasik wrote this analysis of bivalve suffering. I think it offers some good reasons not to conclude that it's super unlikely. It might be that how much weight/likelihood to place on bivalve suffering is ultimately quite subjective though (e.g., I think I would place more weight on it than as expressed in the article because of different intuitions about how much different processes matter as evidence of suffering).
9
Agrippa
2y
https://www.animal-ethics.org/snails-and-bivalves-a-discussion-of-possible-edge-cases-for-sentience/#:~:text=Many%20argue%20that%20because%20bivalves,bivalves%20do%20in%20fact%20swim I found this discussion interesting. To me it seems like they feel aversion -- not sure how that is any different from suffering -- so it is just a question of "how much?". 
6
Tobias Häberli
2y
From a welfarist perspective, and under the assumption that going vegan/vegetarian isn't an option, one challenge might be:  "Should we promote grass-fed beef consumption instead?" A very rough estimate (might be off by orders of magnitude): * Cows have at least 400'000 kcal according to this back-of-the-envelope calculation. * A large mussel has maybe 20 kcal according to the USDA. I'm super uncertain if I'm comfortable with giving mussels approx. 1/20'000 the moral worth compared to cows. Even after reading, for example, this blog post arguing The Ethical Case for Eating Oysters and Mussels. [Edit: If bivalves mainly substitute fish, then this challenge might be missing the issue.]
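A minimal sketch of the arithmetic behind that 1/20'000 figure, assuming the two calorie estimates quoted above and treating calories as the only axis of substitution:

```python
# Rough calorie figures quoted above (both are estimates, possibly off by a lot)
cow_kcal = 400_000   # back-of-the-envelope estimate for one cow
mussel_kcal = 20     # rough USDA figure for one large mussel

# How many mussels it would take to replace the calories from one cow
mussels_per_cow = cow_kcal / mussel_kcal
print(mussels_per_cow)  # 20000.0 -> hence the ~1/20'000 moral-worth threshold
```

On this framing, promoting mussels over grass-fed beef only looks worse if a single mussel matters morally more than roughly 1/20'000 of a cow.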

That's very cool!

Does it adjust the karma for when the post was posted? 
Or does it adjust for when the karma was given/taken?

For example:
The post with the highest inflation-adjusted karma was posted in 2014; it had 70 upvotes out of 69 total votes in 2019 and now sits at 179 upvotes out of 125 total votes. Does the inflation adjustment consider that the average size of a vote after 2019 was around 2?
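A rough check of that "around 2" guess, assuming the vote counts quoted above are accurate and ignoring any removed or changed votes:

```python
# Karma and total vote counts for the same post at two points in time (figures quoted above)
karma_2019, votes_2019 = 70, 69
karma_now, votes_now = 179, 125

# Implied average point value of the votes cast after 2019
avg_new_vote = (karma_now - karma_2019) / (votes_now - votes_2019)
print(round(avg_new_vote, 2))  # ~1.95, i.e. "around 2"
```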

3
JP Addison
2y
It's just the posted-at date of the post, and makes no attempt to adjust for when the karma was given. So if something has stayed a classic for much longer than its peers, it will rank highly on this metric.

How well does this represent your views to people unfamiliar with it as a term in population ethics?

It might sound as if you're an EA only concerned about affecting persons (as in humans, or animals with personhood).

3
freedomandutility
2y
Very badly, probably, but I was assuming that most EAs will be familiar with the term.

Would it be possible for the usernames to be searchable inside the forum's search function but not searchable through other search engines (e.g. Google)? Afaik it should at least be possible for the user page/profile not to be indexed.

And would it help with these problems?

9
Sarah Cheng
2y
Yes, currently you can contact us and our team can hide your profile page from search engines. We are considering allowing users to do this themselves. However, there are ways to view profile info that are outside the profile itself, so we need to be careful about how we communicate this feature.

It might be the combination of small funding and local knowledge about people's skills that is valuable. For example, funding a person that is (currently) not impressive to grantmakers but impressive if you know them and their career plans deeply.

0
Ruby
2y
I bet that if they are impressive to you (and your judgment is reasonable), you can convince grantmakers at present.

This hasn't been implemented yet – was it forgotten about, or just not worth it?

2
Habryka
2y
Oh, I think the functionality is currently net-positive. I was just commenting on the technical difficulty of implementing it if the EA Forum thought it was worth the change.

This might be the best intervention EAs could work on because it is making a lot of future economists extremely happy!

"This chance of a better world is only slightly out of reach; out of reach because the best minds of our generation have not been directed towards a life of drugs." 

Thanks for this beautiful piece of sophistry!

Oh, very exciting – looking forward to attending a Forum workshop! :)

2
Lizka
2y
Awesome, looking forward to seeing you at one! 

Some quick ideas:

Existential Jackpot
Existential Boon
Surprising Societal Boon
Unanticipated Societal Windfall
Major Unexpected Gains
Unexpected Supergains
White Swan Event [I just checked, that already has a different meaning.]
 

4
Linch
2y
I think I like "existential boon" the most, though "boon" does not convey nearly the same strength of effect as catastrophe.