- Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.
Would the information in this quote fall under any of the Freedom...
As far as I understand, the plan is for it to be a (sort of?) national/governmental institute.[1] The UK government has quite a few scientific institutes. It might be the first of its kind in the world.
In this article from early October, the phrasing implies that it would be tied to the UK government:
Sunak will use the second day of Britain's upcoming two-day AI summit to gather “like-minded countries” and executives from the leading AI companies to set out a roadmap for an AI Safety Institute, according to five people familiar with the government’s
SBF was additionally charged with bribing Chinese officials with $40 million. Caroline Ellison testified in court that they sent a $150 million bribe.
True, true. But that charge is part of next year's trial and is a helluva lot more straightforward than the 7 charges in this one. (And it shouldn't have featured in this trial—"She called this a bribe earlier in her testimony, a comment stricken from the record by Judge Kaplan. The judge also instructed the jury to disregard this comment." says Blockworks)
My hope and expectation is that neither will be focused on EA
I'd be surprised [p<0.1] if EA was not a significant focus of the Michael Lewis book – but I agree that it's unlikely to be the major topic. Many leaders at FTX and Alameda Research are closely linked to EA. SBF often, and publicly, said that effective altruism was a big reason for his actions. His connection to EA is interesting both for understanding his motivation and as a storytelling element. There are Manifold prediction markets on whether the book will mention 80'000h (74%), Open Phil...
Yeah, unfortunately I suspect that "he claimed to be an altruist doing good! As part of this weird framework/community!" is going to be substantial part of what makes this an interesting story for writers/media, and what makes it more interesting than "he was doing criminal things in crypto" (which I suspect is just not that interesting on its own at this point, even at such a large scale).
(Not sure if this is within the scope of what you're looking for. )
I'd be excited about having something like a roundtable with people who have been through 80'000h advising – talking about how their thinking about their career has changed, advice for people in a similar situation, etc. I'd imagine this could be a good fit for 80k After Hours?
On Microsoft Edge (the browser) there's a "read aloud" option that offers a range of natural voices for websites and PDFs. It's only slightly worse than Speechify and free – and can give you a glimpse of whether $139/year would be worth it for you.
I think that a very simplified ordering for how to impress/gain status within EA is:
Disagreement well-justified ≈ Agreement well-justified >>> Agreement sloppily justified > Disagreement sloppily justified
Looking back on my early days interacting with EAs, I generally couldn't present well-justified arguments, so I felt pressure to agree on shaky epistemic grounds. Because I sometimes disagreed nevertheless, I suspect that some parts of the community were less accessible to me back then.
I'm not sure about what hurdles to overcome if you w...
As far as I understand, the paper doesn't disagree with this and an explanation for it is given in the conclusion:
Communication strategies such as the ‘funnel model’ have facilitated the enduring perception amongst the broader public, academics and journalists that ‘EA’ is synonymous with ‘public-facing EA’. As a result, many people are confused by EA’s seemingly sudden shift toward ‘longtermism’, particularly AI/x-risk; however, this ‘shift’ merely represents a shift in EA’s communication strategy to more openly present the movement’s core aims.
The low number of human-shrimp connections may be due to the attendance dip in 2020. Shrimp have understandably a difficult relationship with dips.
There is a comprehensive process in place... it is a cohesive approach to aligning font, but thank you for the drama!
Insiders know that EA NYC has ambitious plans to sprout a whole network of Bodhi restaurants. To those who might criticize this blossoming "bodhi count," let's not indulge in shaming their gastronomic promiscuity. After all, spreading delicious vegan dim sum and altruism is something we can all savour.
I find the second one more readable.
Might be due to my display: If I zoom into the two versions, the second version separates letters better.
But you're also right, that we'll get used to most changes :)
I find the font to be less readable and somewhat clunky.
Can't quite express why it feels that way. It reminds me of display scaling issues, where your display resolution doesn't match the native resolution.
Now that you mention it, I feel a bit the same. It might be that we just need to get used to it, but maybe it's the font-weight?
Is the second one better? I just changed font-weight from 450 to 400
I'm not really sure if the data suggests this.
The question is rather vague, making it difficult to determine the direction of the desired change. It seems to suggest that longtermists and more engaged individuals are less likely to support large changes in the community in general. But both groups might, on average, agree that change should go in the 'big tent' direction.
Although there are statistically significant differences in responses to "I want the community to look very different" between those with mild vs. high engagement, their average resp...
The only source for this claim I've ever found was Emile P. Torres's article What “longtermism” gets wrong about climate change.
It's not clear where they got the information about an "enormous promotional budget of roughly $10 million" from. I'm not saying that it's untrue, but it's also unclear why Torres would have this information.
The implication is also that the promotional spending came out of EA pockets. But part of it might have been promotional spending by the book's publisher.
ETA: I found another article by Torres that discusses the claim in a bit mor...
If I remember correctly, Claude had limited public deployment roughly a month before the Google investment, and roughly 2 months after their biggest funder (FTX) went bankrupt.
Thanks for getting back to me and providing more context.
I do agree that Churchill was probably surprised by Roosevelt's use of the term because it was not in the official communiqué. Trying to figure out how certain historical decisions were influenced is very challenging.
The way you describe the events strikes me as a very strong claim – it requires a lot of things to be true beyond the term being used accidentally:
Accidentally called for unconditional surrender of the Japanese, leading to the eventual need for the bomb to be dropped. (p.35)
Based on the...
In EA, the roles of "facilitator" and "attendee" may not be as straightforward as they appear to be in AR. From personal experience, there are many influential people in the EA community who do not hold designated roles that overtly reveal their power. Their influence/soft power only becomes apparent once you get a deeper understanding of how community members interrelate and how information is exchanged. On the other hand, someone who is newly on a Community Building grant may have more power on paper than in reality.
I agree with the need for a policy. I...
This is currently at 14 agree votes and the same question for Will MacAskill is at -13 disagree votes.
I'd be curious whether this is mainly because Nick Beckstead was the CEO and therefore carried more responsibility, or whether there are other considerations.
The most recent Scott Alexander Post seems potentially relevant to this discussion.
The following long section is about what OpenAI could be thinking – and might also translate to Anthropic. (The rest of the post is also worth checking out.)
...Why OpenAI Thinks Their Research Is Good Now, But Might Be Bad Later
OpenAI understands the argument against burning timeline. But they counterargue that having the AIs speeds up alignment research and all other forms of social adjustment to AI. If we want to prepare for superintelligence - whether solving the technical c
The report suggests that Roosevelt's supposedly accidental use of the term "unconditional surrender", and his subsequent refusal to back down from it, played a significant role in shaping the strategy that led to the atomic bombs being dropped on Japan. I found this claim hard to believe – and after some research, I think it's probably not correct.
...Quite amazingly, the term ‘unconditional’ only entered into the Allied demands due to a verbal mistake made by Roosevelt when reading a joint statement in a live broadcast in January 1943, a fact that he later admitted. Ch
I understand that downvotes can be hurtful – but afaik the post has been up for 45min, so maybe it would be a good idea to wait a bit before reading too much into the reaction/non-reaction?
I agree that it's not well embedded into the book. However, I'm not sure it has to be.
In most of Western Europe, abortion is not a significant political issue. For example, polling consistently finds around 86% of people in the UK think that "Women should have the right to an abortion" and only around 5% of people think that they shouldn't. Given that the readers of WWOTF likely hold even more progressive views, it may be sufficient to make a brief mention of the topic and move on.
It is possible to interpret the book's emphasis on the value of future...
Thank you for your response – I think you make a great case! :)
I very much agree that Pascal's Mugging is relevant to longtermist philosophy,[1] for similar reasons to what you've stated – like that there is a trade-off between high existential risk and a high expected value of the future.[2]
I'm just pretty confused about whether this is the point being made by Philosophy Tube. Pascal's mugging in the video has as an astronomical upside that "Super Hitler" is not born - because his birth would mean that "the future is doomed". She doesn't ...
Love this type of research, thank you very much for doing it!
I'm confused about the following statement:
While carp and salmon have lower scores than pigs and chickens, we suspect that’s largely due to a lack of research.
Is this a species-specific suspicion? Or does a lower amount of (high-quality) research on a species generally reduce your welfare range estimate?
On average I'd have expected the welfare range estimate to stay the same with increasing evidence, but the level of certainty about the estimate to increase.
If you have reason to belie...
Moonshot EA Forum Feature Request
It would be awesome to be able to opt-in for "within-text commenting" (similar to what happens when you enable commenting in a google doc) when posting on the EA Forum.
Optimally those comments could also be voted on.
I recently heard the Radio Bostrom audio version of the Unilateralist's Curse after only having read it before. Something about the narration made me think that it lends itself very well to an explainer video.
[Edit after months: While I still believe these are valid questions, I now think I was too hostile, overconfident, and not genuinely curious enough.] One additional thing I’d be curious about:
You played the role of a messenger between SBF and Elon Musk in a bid for SBF to invest up to $15 billion of (presumably mostly his own) wealth in an acquisition of Twitter. The stated reason for that bid was to make Twitter better for the world. This has worried me a lot over the last weeks. It could have easily been the most consequential thing EAs have ever done and th...
It could have easily been the most consequential thing EAs have ever done, and there has, to my knowledge, never been a thorough EA debate signalling that this would be a good idea.
I don't think EAs should necessarily require a community-wide debate before making major decisions, including investment decisions; sometimes decisions should be made fast, and often decisions don't benefit a ton from "the whole community weighs in" over "twenty smart advisors weighed in".
But regardless, seems interesting and useful for EAs to debate this topic so we can form ...
I think that it's supposed to be Peter Thiel (right) and Larry Page (top) in the cover photo. They are mentioned in the article, are very rich and look to me more like the drawings.
Release shocking results of an undercover investigation ~2 weeks before the vote. Maybe this could have led to a 2-10% increase?
My understanding is that they did try this, with an undercover investigation report on poultry farming. But it was only in the news for a very short time, and I'm guessing it didn't have a large effect.
A further thing might have helped:
I spent an hour looking into evidence for the quote you posted. While I think the phrasing is inaccurate, I'd say the gist of the quote is true. For example, it's pretty understandable that people jump from "Emile Torres says that Nick Beckstead supports white supremacy" to "Emile Torres says that Nick Beckstead is a white supremacist".
White Supremacy:
In a public Facebook post you link to this public Google Doc, where you call a quote from Nick Beckstead "unambiguously white-supremacist".
You reinforce that view in a later tweet:
https://twitt...
Tobias, I think you are absolutely correct. But I will note that this is a well-worn pattern:
Given a long list of tweets and articles that make it quite obvious that Torres is deliberately and repeatedly misconstruing everything ever written or said by longtermists to make them appear maximally sinister, dangerous, and racist, Torres protests that they have never actually written the sentence "Toby Ord is a white supremacist".
Rather, Torres is using the scholarly definition of white supremacy, not the everyday definition. In this way there's alway...
But the same study also found that only 41% of respondents from the general population ranked AI becoming more intelligent than humans among the 'first 3 risks of concern' out of a choice of 5 risks.
Only for 12% of respondents was it the biggest concern. 'Opinion leaders' were again more optimistic – only 5% of them thought AI intelligence surpassing human intelligence was the biggest concern.
I recently found a Swiss AI survey that indicates that many people do care about AI.
[This is only very weak evidence against your thesis, but might still interest you 🙂.]
The question:
"Do you fear the emergence of an 'artificial super-intelligence', and that robots will take power over humans?"
From the general population, 11% responded "Yes, very", and 37% responded "Yes, a bit".
So, half of the respondents (that expre...
From a welfarist perspective, and under the assumption that going vegan/vegetarian isn't an option, one challenge might be:
"Should we promote grass-fed beef consumption instead?"
A very rough estimate (might be off by orders of magnitude):
I'm super uncertain if I'm comfortable with giving mussels approx. 1/20'000 the moral worth compared to cows. Even after reading, for example, this blog post arguing The Ethical Ca...
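To illustrate the kind of breakeven reasoning behind a number like 1/20'000, here is a toy sketch. The per-animal meat yields are rough placeholder assumptions of mine, not figures from the comment above:

```python
# Toy breakeven sketch: how many mussels substitute for one cow's
# worth of food? Yield figures are rough placeholder assumptions.
MEAT_PER_ANIMAL_KG = {"cow": 200.0, "mussel": 0.02}

def animals_per_kg(species: str) -> float:
    """Animals used per kg of edible meat."""
    return 1.0 / MEAT_PER_ANIMAL_KG[species]

# Mussels needed per cow-equivalent amount of food:
breakeven_ratio = animals_per_kg("mussel") / animals_per_kg("cow")
print(breakeven_ratio)  # 10000.0
```

Under these placeholder numbers, mussels come out ahead (in animals-used terms) only if one mussel counts for less than roughly 1/10'000 of a cow – which is exactly the kind of ratio a moral-worth figure like 1/20'000 is probing.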
Substitution is unclear. In my experience it's very clear that scallop is served as a main-course protein in contexts where the alternative is clearly fish, or most often shrimp. So insofar as substitution occurs, we'd mainly see substitution for shrimp and fish.
However, it is not clear how much substitution of meat in fact occurs at all as supply increases. People generally seem to like eating meat and meat-like products. I don't know the data here, but meat consumption is globally on the rise.
Nice analysis – thank you for posting!
While I agree that bivalves are very likely at most minimally sentient, I'd feel more comfortable with people promoting bivalve aquaculture at scale if the downside risks are clearer to me.
Do you have any sense of exactly how unlikely it is that bivalves suffer?
That's very cool!
Does it adjust the karma for when the post was posted?
Or does it adjust for when the karma was given/taken?
For example:
The post with the highest inflation-adjusted karma was posted in 2014: it had 70 karma from 69 total votes in 2019 and now sits at 179 karma from 125 total votes. Does the inflation adjustment consider that the average size of a vote after 2019 was around 2?
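The difference between the two readings could be sketched like this (the per-year average vote sizes and function names are hypothetical, purely to illustrate the question):

```python
# Two readings of "inflation-adjusted karma". The per-year average
# vote sizes below are made-up illustrative values.
AVG_VOTE_SIZE = {2014: 1.0, 2019: 1.0, 2023: 2.0}

def adjust_by_post_date(karma: float, post_year: int) -> float:
    """Deflate total karma by the average vote size in the year
    the post was published."""
    return karma / AVG_VOTE_SIZE[post_year]

def adjust_by_vote_date(karma_chunks: list) -> float:
    """Deflate each chunk of karma by the average vote size in the
    year those votes were cast. `karma_chunks` is (karma, year) pairs."""
    return sum(k / AVG_VOTE_SIZE[year] for k, year in karma_chunks)

# The 2014 post above: 70 karma by 2019, 109 more karma since.
print(adjust_by_post_date(179, 2014))                  # 179.0
print(adjust_by_vote_date([(70, 2019), (109, 2023)]))  # 124.5
```

Adjusting by post date leaves the 2014 post's karma untouched, while adjusting by vote date discounts the recent, larger votes – which is why the two methods can rank old posts quite differently.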
How well does this represent your views to people unfamiliar with it as a term in population ethics?
It might sound as if you're an EA only concerned about affecting persons (as in humans, or animals with personhood).
Would it be possible for the usernames to be searchable inside the forum's search function but not searchable through other search engines (e.g. Google)? Afaik it should at least be possible for the user page/profile not to be indexed.
And would it help with these problems?
It might be the combination of small funding and local knowledge about people's skills that is valuable. For example, funding a person who is (currently) not impressive to grantmakers but impressive if you know them and their career plans deeply.
This might be the best intervention EAs could work on because it is making a lot of future economists extremely happy!
"This chance of a better world is only slightly out of reach; out of reach because the best minds of our generation have not been directed towards a life of drugs."
Thanks for this beautiful piece of sophistry!
Some quick ideas:
Existential Jackpot
Existential Boon
Surprising Societal Boon
Unanticipated Societal Windfall
Major Unexpected Gains
Unexpected Supergains
White Swan Event [I just checked, that already has a different meaning.]
I stumbled upon this quote in this recent Economist article [archived] about OpenAI. I couldn't find any other good source that supports the claim, so it might not be accurate. The earliest mention of the claim I could find is from January 17th, 2023, although it only talks about OpenAI "proposing" the rule change.
If true, this would make the profit cap less meaningful, es...
I've talked to some people who are involved with OpenAI secondary markets, and they've broadly corroborated this.
One source told me that after a specific year (didn't say when), the cap can increase 20% per year, and the company can further adjust the cap as they fundraise.