New & upvoted

Posts tagged community

Quick takes

Jason (3d)
The government's sentencing memorandum for SBF is here; it is seeking a sentence of 40-50 years. As is typical for DOJ in high-profile cases, it is well-written and well-done. I'm not just saying that because it makes many of the same points I identified in my earlier writeup of SBF's memorandum. E.g., p. 8 ("doubling down" rather than walking away from the fraud); p. 43 ("paid in full" claim is highly misleading) [page cites are to the numbers at the bottom of the page, not to the PDF page #].

EA-adjacent material: There's a snarky reference to SBF's charitable donations "(for which he still takes credit)" (p. 2) in the intro, and the expected hammering of SBF's memo for attempting to take credit for donations paid with customer money (p. 95). There's a reference to SBF's "idiosyncratic . . . beliefs around altruism, utilitarianism, and expected value" (pp. 88-89). This leads to the one surprise theme (for me): the need to incapacitate SBF from committing additional crimes (pp. 87, 90). Per the feds, "the defendant believed and appears still to believe that it is rational and necessary for him to take great risks including imposing those risks on others, if he determines that it will serve what he personally deems a worthy project or goal," which contributes to his future dangerousness (p. 89).

For predictors: Looking at sentences where the loss was > $100MM and the method was Ponzi/misappropriation/embezzlement, there's a 20-year, two 30-years, a bunch of 40-years, three 50-years, and three 100+-years (pp. 96-97).

Interesting item: The government has gotten about $3.45MM back from political orgs, and the estate has gotten back ~$280K (pp. 108-09). The proposed forfeiture order lists recipients, and seems to tell us which ones returned monies to the government (Proposed Forfeiture Order, pp. 24-43).

Life Pro Tip: If you are arrested by the feds, do not subsequently write things in Google Docs that you don't want the feds to bring up at your sentencing. Jotting down the idea that "SBF died for our sins" as some sort of PR idea (p. 88; source here) is particularly ill-advised.

My Take: In Judge Kaplan's shoes, I would probably sentence at the high end of the government's proposed range. Where the actual loss will likely be several billion, and the loss would have been even greater under many circumstances, I don't think a consequence of less than two decades' actual time in prison would provide adequate general deterrence -- even where the balance of other factors was significantly mitigating. That would imply a sentence of ~25 years after a prompt guilty plea. Backsolving, that gets us a sentence of ~35 years without credit for a guilty plea. But the balance of other factors here is aggravating, not mitigating. Stealing from lots of ordinary people is worse than stealing from sophisticated investors. Outright stealing by someone in a fiduciary role is worse than accounting fraud to manipulate stock prices. We also need to adjust upward for SBF's post-arrest conduct, including trying to hide money from the bankruptcy process, multiple attempts at witness tampering, and perjury on the stand. Stacking those factors would probably take me over 50 years, but like the government I don't think a likely-death-in-prison sentence is necessary here.
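One way to reconstruct the backsolving arithmetic above, as a rough sketch: assume federal defendants serve roughly 85% of the nominal sentence after good-conduct credit, and that a prompt guilty plea is worth a discount on the order of 25-30%. Both assumptions are the sketch's, not the post's.

```python
# Back-of-envelope reconstruction of the sentencing arithmetic (a sketch,
# not legal analysis). Assumptions: ~85% of a nominal federal sentence is
# actually served after good-conduct credit; a prompt guilty plea earns
# roughly a 25-30% discount. Both figures are rough assumptions.

target_actual_time = 20      # years of real prison time judged necessary
served_fraction = 0.85       # share of the nominal sentence actually served

plea_sentence = target_actual_time / served_fraction
print(f"Nominal sentence after a prompt plea: ~{plea_sentence:.0f} years")

plea_discount = 0.28         # assumed plea discount within the 25-30% range
no_plea_sentence = plea_sentence / (1 - plea_discount)
print(f"Nominal sentence without plea credit: ~{no_plea_sentence:.0f} years")
# Prints ~24 and ~33 years, in the neighborhood of the ~25 and ~35 above.
```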
Anyone else ever feel a strong discordance between emotional response and cognitive worldview when it comes to EA issues? Like emotionally I'm like "save the animals! All animals deserve love and protection and we should make sure they can all thrive and be happy with autonomy and evolve toward more intelligent species so we can live together in a diverse human-animal utopia, yay big tent EA…" But logically I'm like "AI and/or other exponential technologies are right around the corner and make animal issues completely immaterial. Anything that detracts from progress on that is a distraction and should be completely and deliberately ignored. Optimally we will build an AI or other system that determines maximum utility per unit of matter, possibly including agency as a factor and quite possibly not, so that we can tile the universe with sentient simulations of whatever the answer is." OR, a similar discordance between what was just described and the view that we should also co-optimize for agency, diversity of values and experience, fun, decentralization, etc., EVEN IF that means possibly locking in a state where ~99.9999+% of possible utility goes unrealized. Very frustrating. I usually try to push myself toward my rational conclusion of what is best, with a wide berth for uncertainty and epistemic humility, but it feels depressing, painful, and self-dehumanizing to do so.
I've loved seeing all the Draft Amnesty posts on the Forum so far! Some really great stuff has been posted (and I'll highlight that when I write a retrospective). Posting this quick take as a reminder that people who are considering posting for Draft Amnesty can run a draft past me for quick feedback. Just DM me.
You may know will.i.am as the frontman of The Black Eyed Peas, but his interests beyond music have taken him down a fascinating path at the intersection of creativity and technology. In a recent podcast, he discussed his thoughts on AI and the creative process with host Adam Grant. Some key points:

  • Adam notes that the most creative people are often the worst at explaining their ideas, because creativity requires divergent, non-linear thinking while explanation favors convergence and linearity.
  • The podcast features some impressive live wordplay and freestyling from will.i.am. His verbal creativity is on full display.
  • Interestingly, will.i.am now hosts a radio show with an AI co-host named Fiona. He shares his hopes about the future of AI in entertainment and creativity.
  • will.i.am and Adam debate what AI can and can't do for human creativity. No definitive answers, but a great discussion nonetheless.

I didn't previously associate will.i.am with the AI scene, but he clearly has an innovative and forward-thinking perspective to share. Worth a listen for anyone interested in the intersection of AI and creativity. Listen here | Read the transcript here


Recent discussion

Lex Fridman posts timestamped transcripts of his interviews. It's an 83-minute read here and a 115-minute watch on YouTube.

It's neat to see Altman's side of the story. I don't know whether his charisma is more like +2SD or +5SD above the average American (concept origin: planecrash; charisma likely doesn't follow a normal distribution), and I only have a vague grasp of what kinds of shenanigans +5SDish types can do when they pull out the stops in face-to-face interactions, so maybe you'll prefer to read the transcript over watching the video (although those tactics largely involve reading and responding to your facial expression and body language on the fly, not projecting their own).
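For intuition on those SD figures: taking a normal distribution at face value, despite the caveat above that charisma likely doesn't follow one, the tail areas work out as below (a quick sketch, nothing more than standard normal tail probabilities):

```python
from scipy.stats import norm

# Tail areas of a standard normal: what fraction of people would sit at
# or above +2 and +5 standard deviations if the trait were normal?
for sd in (2, 5):
    p = norm.sf(sd)   # survival function: P(X >= sd)
    print(f"+{sd}SD: P = {p:.3g} (~1 in {1 / p:,.0f})")
# +2SD: P = 0.0228 (~1 in 44); +5SD: P = 2.87e-07 (~1 in 3,488,556)
```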

If you've missed it, Gwern's side of the story is here.

Lex Fridman (00:01:05) Take me through the OpenAI board saga that started on Thursday, November 16th, maybe Friday, November 17th for you.

Sam Altman (00:01:13) That was

...
Yanni Kyriacos posted a Quick Take 1h ago

What would be the pros and cons of adding a semi-hidden-but-permanent Hot Takes section to the Forum? All of my takes are Hot and due to time constraints I would otherwise not post at all. Some would argue that someone like me should not post Hot Takes at all. Anyway, in true lazy fashion here is ChatGPT on the pros and cons:

Pros:

  • Encourages diverse perspectives and stimulates debate.
  • Can attract more engagement and interest from users.
  • Provides a platform for expressing unconventional or controversial ideas.
  • Fosters a culture of intellectual curiosity and open discourse within the community.

Cons:

  • May lead to increased polarization and conflict within the community.
  • Risk of spreading misinformation or poorly researched opinions.
  • Could divert attention from more rigorous and evidence-based discussions.
  • Potential for reputational damage if controversial opinions are associated with the forum.

TL;DR: Someone should probably write a grant to produce a spreadsheet/dataset of past instances where people claimed a new technology would lead to societal catastrophe, with variables such as "multiple people working on the tech believed it was dangerous." (A possible schema is sketched below.)

Slightly longer...

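One hypothetical shape for such a dataset, as a minimal sketch: every column name and the sample row below are invented for illustration; the post itself only names the "insiders believed it was dangerous" variable.

```python
import csv
from io import StringIO

# Illustrative schema for the proposed dataset of historical
# tech-catastrophe claims. All columns and the sample row are
# hypothetical, not from the post.
SAMPLE = StringIO(
    "technology,claimed_catastrophe,year_of_claim,"
    "insiders_believed_dangerous,catastrophe_materialized,sources\n"
    "recombinant DNA,engineered pandemics,1975,True,False,"
    "Asilomar conference records\n"
)
for row in csv.DictReader(SAMPLE):
    print(row["technology"], "->", row["claimed_catastrophe"],
          "| insiders worried:", row["insiders_believed_dangerous"])
```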

I'm not sure. IMHO a major disaster is happening with the climate. Essentially, people have a false belief that there is some kind of set-point, and that after a while the temperature will return to that, but this isn't the case. Venus is an extreme example of an Earth-like planet with a very different climate. There is nothing in physics or chemistry that says Earth's temperature could not one day exceed 100 °C.

It's always interesting to ask people how high they think sea-level might rise if all the ice melted. This is an uncontroversial calculation which involves no modelling - just looking at how much ice there is, and how much sea-surface area there is. People tend to think it would be maybe a couple of metres. It would actually be 60 m (200 feet). That will take time, but very little time on a cosmic scale, maybe a couple of thousand years. 
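The naive version of that calculation, as a rough sketch: the ice volumes and ocean area below are approximate standard figures, and simply dividing meltwater volume by ocean area ignores floating ice that already displaces water and other corrections, which is roughly why published estimates land nearer 60-70 m than this figure.

```python
# Naive "all the ice melts" sea-level calculation (a rough sketch).
# Figures are approximate standard values, not from the comment.
antarctica_km3 = 26.5e6      # Antarctic ice sheet volume, ~km^3
greenland_km3  = 2.9e6       # Greenland ice sheet volume, ~km^3
ocean_area_km2 = 3.61e8      # ocean surface area, ~km^2
ice_to_water   = 0.917       # ice is less dense than liquid water

meltwater_km3 = (antarctica_km3 + greenland_km3) * ice_to_water
rise_m = meltwater_km3 / ocean_area_km2 * 1000   # km -> m
print(f"Naive rise: ~{rise_m:.0f} m")            # ~75 m before corrections
```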

Right now, if anything, what we're seeing is worse than the average prediction. The glaciers and ice sheets are melting faster. The temperature is increasing faster. Etc. Feedback loops are starting to become powerful. There's a real chance that the Gulf Stream will stop or reverse, which would be a disaster for Europe, ironically freezing us as a result of global warming...

Among serious climate scientists, the feeling of doom is palpable. I wouldn't say they are exaggerating. But we, as a global society, have decided that we'd rather have our oil and gas and steaks than prevent the climate disaster. The US seems likely to elect a president who makes it a point of honour to support climate-damaging technologies, just to piss off the scientists and liberals. 

Venus is an extreme example of an Earth-like planet with a very different climate. There is nothing in physics or chemistry that says Earth's temperature could not one day exceed 100 °C.
[...]
[Regarding ice melting -- ] That will take time, but very little time on a cosmic scale, maybe a couple of thousand years.

I'll be blunt: remarks like these undermine your credibility. But regardless, I just don't have any experience or contributions to make on climate change, other than re-emphasizing my general impression that, as a person who cares a lot about existential risk and has talked to various other people who also care a lot about existential risk, there seems to be very strong scientific evidence suggesting that extinction from climate change is unlikely.


Acronyms & abbreviations that I've come across during my time in EA. Big thanks to the CEA team for their help compiling this.

Are there any I've misunderstood or missed? Please add more in the comments!

Acronym | Meaning
:))) | big smile
~ | approximately / roughly
80k | 80,000 Hours
...
Bella (18h)
This was cool to read — a number of these I didn't know! :D 'Crux' has a quasi-formal definition when used by EA/rat types. I think your definition might be good enough for navigating discussions where the word is used, but I think crux (as formally defined) is a cool/useful concept :)

Thanks @Bella! I added "crux" to the list and linked the article you shared.

Authors of linked report: Josh Rosenberg, Ezra Karger, Avital Morris, Molly Hickman, Rose Hadshar, Zachary Jacobs, Philip Tetlock[1]

Today, the Forecasting Research Institute (FRI) released “Roots of Disagreement on AI Risk: Exploring the Potential and Pitfalls of Adversarial...

ramekin (3h)
I'd be interested in an investigation and comparison of the participants' Big Five personality scores. As with the XPT, I think it's likely that the concerned group is higher on the dimensions of openness and neuroticism, and these persistent personality differences caused their persistent differences in predictions. To flesh out this theory a bit more (a rough sketch of the proposed analysis follows after this comment):

  • Similar to the XPT, this project failed to find much difference between the two groups' predictions for the medium term (i.e., through 2030) - at least, not nearly enough disagreement to explain the divergence in their AI risk estimates through 2100. So to explain the divergence, we'd want a factor that (a) was stable over the course of the study, and (b) would influence estimates of x-risk by 2100 but not nearer-term predictions.
  • Compared to the other forecast questions, the question about x-risk by 2100 is especially abstract; generating an estimate requires entering far mode to average out possibilities over a huge set of complex possible worlds. As such, I think predictions on this question are uniquely reliant on one's high-level priors about whether bizarre and horrible things are generally common or are generally rare - beyond those priors, we really don't have that much concrete to go on.
  • I think neuroticism and openness might be strong predictors of these priors:
    • I think one central component of neuroticism is a global prior on danger.[1] Essentially: is the world essentially a safe place where things are fundamentally okay? Or is the world vulnerable?
    • I think a central component of openness to experience is something like "openness to weird ideas"[2]: how willing are you to flirt with weird/unusual ideas, especially those that are potentially hazardous or destabilizing to engage with? (Arguments that "the end is nigh" from AI probably fit this bill, once you consider how many religious, social, and political movements have deployed similar arguments to attract followers throughout history.)

On a slight tangent from the above: I think I might have once come across an analysis of EAs' scores on the Big Five scale, which IIRC found that EAs' most extreme Big Five trait was high openness. (Perhaps it was Rethink Charity's annual survey of EAs as e.g. analyzed by ElizabethE here, where [eyeballing these results] on a scale from 1-14, the EA respondents scored an average of 11 for openness, vs. less extreme scores on the other four dimensions?)

If EAs really do have especially high average openness, and high openness is a central driver of high AI x...
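For concreteness, here is a minimal sketch of the comparison being proposed, assuming (hypothetically) that per-participant trait scores and x-risk estimates were available. All variable names and numbers below are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical data: openness scores (1-14 scale, as in the survey
# mentioned above) and 2100 x-risk estimates for the "concerned" and
# "skeptic" groups. All numbers are invented.
rng = np.random.default_rng(0)
concerned_openness = rng.normal(11.0, 1.5, size=30)
skeptic_openness   = rng.normal(9.5, 1.5, size=30)

# Did the groups differ on the trait? (Welch's t-test)
t, p = stats.ttest_ind(concerned_openness, skeptic_openness,
                       equal_var=False)
print(f"Openness difference: t = {t:.2f}, p = {p:.3f}")

# Does the trait predict individual x-risk estimates? (rank correlation,
# since x-risk estimates are heavily skewed)
openness = np.concatenate([concerned_openness, skeptic_openness])
xrisk = rng.beta(1, 20, size=60)          # invented x-risk estimates
rho, p = stats.spearmanr(openness, xrisk)
print(f"Openness vs. x-risk: Spearman rho = {rho:.2f}, p = {p:.3f}")
```

The same template would apply to neuroticism, or to any of the other Big Five dimensions.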

I am watching Episode 419 of the podcast with the guest Sam Altman. It is around 2 hours long, and I plan to watch it completely. I am 30 minutes in, and I don't think he is going to reveal exactly why he was fired.

Feel free to provide spoilers for the rest of the episode, or to let me know whether it contains some hint. I will finish it regardless, because of my itch to keep myself updated; I'm planning to watch it all over the weekend. I do not feel very comfortable while watching the episode. Did anyone else feel that way?


This is a draft amnesty post.

Summary

  • It seems plausible that fetuses can suffer from 12 weeks of gestational age, and quite reasonable that they can suffer from 24 weeks.
  • Some late-term abortion procedures seem like they might cause a fetus excruciating suffering.
  • Over 35,000 of
...
Henry Howard (4h)
Sounds very difficult when deadly drugs like fentanyl, midazolam, and propofol can easily be injected through an intravenous line. You can't get an IV line on a baby in utero; I think that's why injection into the heart is done in that case.
lilly (4h)
I don't have time to research this in depth, but am pretty sure this post is missing a lot of nuance about how anesthesia works in abortion. Importantly, because mother and fetus share a circulation, IV sedation that is given to the mother will—to some extent—sedate the fetus as well, depending on the specific regimen used. So it's not quite right to say "The fetus is administered a lethal injection with no anesthesia." Correspondingly, I think this post overstates the risk of fetal suffering associated with abortion. 

This description of labor induction abortion says:

The skin on your abdomen is numbed with a painkiller, and then a needle is used to inject a medication (digoxin or potassium chloride) through your abdomen into the fluid around the fetus or the fetus to stop the heartbeat.

That sounds like local anesthesia for the mother, which from what I understand is achieved through an injection that numbs the tissue in a specific area rather than through an IV drip. So I don't think this protocol would have any anesthetic effect on the fetus, though I'm not a medical ...

This is a (late!) Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. 

Commenting and feedback guidelines: 

  1. This draft lacks the polish of a full post, but the content is almost there. The kind of constructive feedback you would normally put on a Forum post is very welcome. 

Epistemic status: Tentative — I have thought about this for some time (~2 years) and have firsthand experience, but have done minimal research into the literature.

TL;DR: Language learning is probably not the best use of your time. Some exceptions might be (1) learning English as a non-native speaker, (2) if you are particularly adept at learning languages, (3) if you see it as leisure and so minimize opportunity costs, (4) if you are aiming at regional specialist roles (e.g., China specialist) and are playing the long game, and ...


Key Takeaways

  • The evidence that animal welfare dominates in neartermism is strong.
  • Open Philanthropy (OP) should scale up its animal welfare allocation over several years to approach a majority of OP's neartermist grantmaking.
  • If OP disagrees, they should practice reasoning
...

Agreed. I disagree with the general practice of capping the probability distribution over animals' sentience at 1x that of humans'. (I wouldn't put much mass above 1x, but it should definitely be more than zero mass.)

MichaelDickens (4h)

It seems to me that the naive way to handle the two envelopes problem (and I've never heard of a way better than the naive way) is to diversify your donations across two possible solutions to it (a toy numeric version follows below):

  • donate half your (neartermist) money on the assumption that you should use ratios to fixed human value
  • donate half your money on the assumption that you should fix the opposite way (e.g., fruit flies have fixed value)

Which would suggest donating half to animal welfare and probably half to global poverty. (If you let moral weights be linear with neuron count, I think that would still favor animal welfare, but you could get global poverty outweighing animal welfare if moral weight grows super-linearly with neuron count.) Plausibly there are other neartermist worldviews you might include that don't relate to the two envelopes problem; e.g., an "only give to the most robust interventions" worldview might favor GiveDirectly. So I could see an allocation of less than 50% to animal welfare.
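A toy numeric version of that diversification strategy, as a sketch: the moral-weight hypotheses and cost-effectiveness figures below are all invented for illustration, not MichaelDickens's numbers.

```python
# Toy two-envelopes diversification (all numbers invented).
# Two moral-weight hypotheses, equal credence: a chicken is worth
# one-millionth of a human, or equal to a human.
hypotheses = [1e-6, 1.0]   # chicken:human moral-weight ratio R
C = 1000.0                 # chickens helped per dollar (animal charity)
H = 0.01                   # humans helped per dollar (poverty charity)

# Normalization A: fix human value at 1 and average R.
# Animal charity delivers C * E[R] human-equivalents per dollar.
ev_R = sum(hypotheses) / 2                          # ~0.5
best_A = "animal" if C * ev_R > H else "poverty"    # 500 > 0.01 -> animal

# Normalization B: fix chicken value at 1 and average 1/R.
# Poverty charity delivers H * E[1/R] chicken-equivalents per dollar.
ev_invR = sum(1 / r for r in hypotheses) / 2        # ~500000.5
best_B = "animal" if C > H * ev_invR else "poverty" # 1000 < 5000 -> poverty

# Diversify: commit half the budget to each normalization's winner.
allocation = {"animal": 0.0, "poverty": 0.0}
allocation[best_A] += 0.5
allocation[best_B] += 0.5
print(allocation)   # {'animal': 0.5, 'poverty': 0.5}
```

The point of the construction is that E[R] and E[1/R] can each favor a different option when the ratio is very uncertain (here E[R] favors animals while E[1/R] favors humans), so splitting the budget hedges across the two normalizations.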
It seems to me that the naive way to handle the two envelopes problem (and I've never heard of a way better than the naive way) is to diversify your donations across two possible solutions to the two envelopes problem: * donate half your (neartermist) money on the assumption that you should use ratios to fixed human value * donate half your money on the assumption that you should fix the opposite way (eg fruit flies have fixed value) Which would suggest donating half to animal welfare and probably half to global poverty. (If you let moral weights be linear with neuron count, I think that would still favor animal welfare, but you could get global poverty outweighing animal welfare if moral weight grows super-linearly with neuron count.) Plausibly there are other neartermist worldviews you might include that don't relate to the two envelopes problem, e.g. a "only give to the most robust interventions" worldview might favor GiveDirectly. So I could see an allocation of less than 50% to animal welfare.