Shortform Content [Beta]


Cross-posted from Facebook.

Sometimes I hear people counselling humility say something like: "This question has stumped the best philosophers for centuries/millennia. How could you possibly hope to make any progress on it?" While I concur that humility is frequently warranted, and that in many specific cases that injunction is reasonable [1], I think the framing is broadly wrong.


In particular, using geologic time rather than anthropological time hides the fact that there probably weren't that many people actively thinking about these issues, e... (Read more)

saulius (1d): Honest question: are there examples of philosophical problems that were solved in the last 50 years? And I mean solved by doing philosophy, not by doing mostly unrelated experiments (like this one [https://www.thevintagenews.com/2018/06/18/riddle/]). I imagine that even if some philosophers felt they had answered a question, others would dispute it. More importantly, the solution would likely be difficult to understand and hence of limited value. I'm not sure I'm right here.
saulius (1d): After a bit more googling I found this [https://philosophy.stackexchange.com/a/3149], which maybe shows that there have been philosophical problems solved recently. I haven't read about that specific problem, though, and it's difficult to imagine a short paper solving the hard problem of consciousness.

Is there a "scientific method"?

If you learned about science in school, or read the Wikipedia page on the scientific method, you might have encountered the idea that there is a single thing called "The Scientific Method." Formulations vary, but it typically involves generating hypotheses, making predictions, running experiments, evaluating the results, and then submitting them for peer review.

The idea is that all scientists follow something like this method.

The idea of there being a “scientific method” exists for

... (Read more)

If we run any more anonymous surveys, we should encourage people to pause and consider whether they are contributing productively or just venting. I'd still be in favour of sharing all the responses, but I have enough faith in my fellow EAs to believe that some would take this to heart.

I want to write a post saying why Aaron and I* think the Forum is valuable, which technical features currently enable it to produce that value, and what other features I’m planning on building to achieve that value. However, I've wanted to write that post for a long time and the muse of public transparency and openness (you remember that one, right?) hasn't visited.

Here's a more mundane but still informative post, about how we relate to the codebase we forked off of. I promise the space metaphor is necessary. I don't know whether to apo... (Read more)

Appreciation post for Saulius

I realized recently that the same author who wrote the corporate commitments post and the misleading cost-effectiveness post also wrote all three of these excellent posts on neglected animal welfare concerns that I remembered reading:

Fish used as live bait by recreational fishermen

Rodents farmed for pet snake food

35-150 billion fish are raised in captivity to be released into the wild every year

For the first, he got this notable comment from OpenPhil's Lewis Bollard. An honorable mention goes to this post, which I also remember... (Read more)

Also, I feel that as the author I get more credit than is due; it's more of a team effort. Other staff members of Rethink Charity review my posts, help me to select topics, and make sure that I have to worry about nothing else but writing. And in some cases posts get a lot of input from other people. E.g., Kieran Greig was the one who pointed out the problem of fish stocking to me, and he then gave extensive feedback on the post. My CEE of corporate campaigns benefited tremendously from talking with many experts on the subject who generously shared their k

... (Read more)
saulius (13d): Thanks JP! I feel I should point out that it's now basically my full-time job to write for the EA Forum, which is why there are quite a few posts by me :)

Some efforts to improve scientific research:

https://www.replicationmarkets.com - A prediction market for the replicability of studies.

https://www.darpa.mil/program/systematizing-confidence-in-open-research-and-evidence - A DARPA project with the goal of giving a confidence level to results in social and behavioural studies.

Createquity was an initiative to help make the world a better place by better understanding the arts.

In 2013 they wrote an interesting blog post on what EA implies about the importance of their work.

In the recent 80k podcast, Vitalik and Rob talked about how future de-urbanisation might lead to a lower risk of catastrophe from nuclear explosions and biohazards.

This seems like a very interesting argument for lowering the importance of biorisk reduction work. It seems plausible that in 20 years, advances in communication technologies could allow people to easily work remotely, advances in energy (say, solar) could allow people to live off the grid, and advances in additive manufacturing (3D printing) and in agriculture could perhaps allow small communities to... (Read more)

Posting this on shortform rather than as a comment because it feels more like personal musing than a contribution aimed at the audience of the original post.

Things I'm confused about after reading Will's post, Are we living at the most influential time in history?:

What should my prior be about the likelihood of being at the hinge of history? I feel really interested in this question, but haven't even fully read the comments on the subject. TODO.

How much evidence do I have for the Yudkowsky-Bostrom framework? I'd like to get bette... (Read more)

Following on from the Does climate change deserve more attention within EA post from earlier in the year, I have compiled a 'watch-list' of climate interventions and focus areas that I would really like to:

  1. Share with you all
  2. Get feedback on areas I may have missed
  3. Get resources for tracking some of the gaps

Please check it out!

Lead with the punchline when writing to inform

The convention in a lot of public writing is to mirror the style of writing for profit, which is optimized for attention. In a co-operative environment, you instead want to optimize for conveying your point quickly, and only to the people who benefit from hearing it. We should identify ways in which these goals conflict; the most valuable pieces might look different from what we think of when we think of successful writing.

  • Consider who doesn't benefit from your article, and whether you can help them filter themselves out.
  • Conside
... (Read more)

I agree that there are different incentives for cooperative writing than for clickbait-y news in particular, and I agree with your recommendations. That said, I think many community writers may undervalue making their content more goddamn readable. Scott Alexander is verbose and often spends paragraphs getting to the start of his point, but I end up with a better understanding of what he's saying by virtue of being fully interested.

All in all though, I'd recommend people try to write like Paul Graham more than either Scott Alexander or an int... (Read more)

Tip: if you want a way to view Will's AMA answers despite the long thread, you can see all his comments on his user profile.

I suspect that it could be impactful to study, say, a master's in AI or computer science even if you don't really need it. University provides one of the best opportunities to meet and deeply connect with people in a particular field, and I'd be surprised if you couldn't persuade at least a couple of people of the importance of AI safety without really trying. On the other hand, if you went in with the intention of networking as much as possible, I think you could have much more success.

On the incentives of climate science

Alright, the title sounds super conspiratorial, but I hope the content is just boring. Epistemic status: speculative; somewhat confident that the dynamic exists.

Climate science as published by the IPCC tends to

1) Be pretty rigorous

2) Not spend much effort on the tail risks

My model is that they do this because of the incentives created by what they're trying to accomplish.

They're in a politicized field, where the methodology is combed over and mistakes are harshly criticized. Also, they want to show enough da... (Read more)

In a building somewhere, tucked away in a forgotten corner, there are four clocks. Each is marked with a symbol: the first with a paperclip, the second with a double helix, the third with a trefoil, and the fourth with a stormcloud.

As you might expect from genre convention, these are not ordinary clocks. In fact, they started ticking when the first human was born, and when they strike midnight, a catastrophe occurs. The type depends on the clock, but what is always true is that the disaster kills at least one in ten.

The times currently remaining on the clocks a... (Read more)

I really like seeing problems presented like this. It makes them easier to understand.

One of the vague ideas spinning around in my head is that, in addition to EA, which is a fairly open, loosely co-ordinated, big-tent movement with several different cause areas, there might also be value in a more selective, tightly co-ordinated, narrow movement focusing just on the long-term future. Interestingly, this would be an accurate description of some EA orgs, with the key difference being that these orgs tend to rely on paid staff rather than volunteers. I don't have a solid idea of how this would work, but just thought I'd put this... (Read more)

Oh, I would've sworn that was already the case (with the understanding that, as you say, there is less volunteering involved, because, with the "inner" movement being smaller, more selective, and built on tighter/more personal relationships, there is much less friction in the movement of money, whether in the form of employment contracts or grants).

What is the global burden of menopause?

Symptoms include hot flushes, difficulty sleeping, vaginal irritation or pain, headaches, and low mood or anxiety. These symptoms normally last around five years, although 10% of women experience them for up to 12 years.

I couldn't find a disability-adjusted life year (DALY) disability weight for menopause. I'd imagine it might have a similar impact to mild depression, which in 2004 was assigned a weight of 0.140.

Currently, about 200 million people are going through menopause, 80% of whom are experiencing symptoms. I'd expect this to increase

... (Read more)
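For a rough sense of scale, here is a minimal back-of-envelope sketch using the figures above; the 0.140 disability weight is the assumption made in the post (borrowed from mild depression), not an official rating for menopause.

```python
# Back-of-envelope estimate of the annual burden, using the figures quoted above.
people_in_menopause = 200e6   # ~200 million people currently going through menopause
symptomatic_share = 0.80      # ~80% of them experience symptoms
disability_weight = 0.140     # assumed, by analogy with mild depression (2004 rating)

# Years lived with disability accrued over each year that symptoms persist:
yld_per_year = people_in_menopause * symptomatic_share * disability_weight
print(f"{yld_per_year / 1e6:.1f} million DALYs per year")   # ~22.4 million
```

Both the disability weight and the symptomatic share are rough, so this is an order-of-magnitude figure at best.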

I emailed four menopause researchers to get their views on the best way to help women suffering from menopause symptoms. Two have responded so far. Both suggested charities they are affiliated with.

The first suggested the North American Menopause Society. It seems quite reputable. It focuses on the education of women and health professionals in North America. I'm sure there's a lot of work to be done there, but it seems pretty unlikely to do more good than healthcare in the developing world.

The second suggested the International Menopause Society. It's bee

... (Read more)

One way that x-risk outreach is done outside of EA is by evoking the image of some sort of countdown to doom. There are 12 years until climate catastrophe. There are two minutes on the Doomsday clock, etc.

However, in reality, instead of doomsday being some fixed point in time on the horizon that we know about, all the best-calibrated experts have is a probability distribution smeared over a wide range of times, mostly sitting on "never", which means that simply taking the median time doesn't work.

And yet! The doomsday clock, so evocative! And I would l... (Read more)
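To make the "just take the median" problem concrete, here is a toy sketch with entirely made-up numbers (not anyone's actual forecast): once more than half the probability mass sits on "never", the median time to catastrophe is itself "never", so a countdown clock has nothing sensible to display.

```python
import random
import statistics

random.seed(0)

P_NEVER = 0.6  # made-up probability that no such catastrophe ever happens

def sample_catastrophe_year():
    """One draw from a toy forecast; float('inf') stands for 'never'."""
    if random.random() < P_NEVER:
        return float("inf")
    return 2020 + random.expovariate(1 / 200)  # otherwise, roughly the next few centuries

samples = [sample_catastrophe_year() for _ in range(100_000)]

# A countdown clock wants a single time to display, e.g. the median -- but with most
# of the mass on "never", the median itself is "never".
print(statistics.median(samples))  # inf

# The forecast still carries real information; it just isn't a single date.
p_by_2100 = sum(year <= 2100 for year in samples) / len(samples)
print(f"P(catastrophe by 2100) = {p_by_2100:.0%}")
```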

On Paths and Problems

Abstract: "Using reasoning and evidence to do the most good"; the fundamental idea of Effective Altruism. But how does one accomplish that? In this article I'll discuss why I believe the focus of EA so far has been largely misplaced, and how we can tackle the subject better.

Disclaimer: My knowledge of how things actually get done by the people who read this forum is rather limited. The points made in this article are based on reading what the site recommends and skimming the forum.

Let's call the point in time we... (Read more)

In 2017, 80k estimated that $10M of extra funding could solve 1% of AI xrisk (todo: see if I can find a better stock estimate for the back of my envelope than this). Taking these numbers literally, this means that anyone who wants to buy AI offsets should, today, pay $1G*(their share of the responsibility).

There are 20,000 AI researchers in the world, so if they're taken as being solely responsible for the totality of AI xrisk, the appropriate Pigouvian AI offset tax is $45,000 per researcher hired per year. This is large but not overwhelmingly so... (Read more)
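A minimal sketch of the arithmetic as I read it, taking the post's $10M-per-1% and 20,000-researcher figures at face value; the structure of the calculation is my reconstruction, and the gap between the naive result and the post's $45,000 figure presumably reflects rounding or an adjustment not visible in the excerpt.

```python
# Back-of-envelope reconstruction of the AI-offset arithmetic above.
# Inputs are taken from the post; the structure is my guess at the intended calculation.
cost_per_percent = 10e6                      # $10M "solves 1%" of AI x-risk (2017 80k estimate, per the post)
total_offset_cost = cost_per_percent * 100   # naive linear extrapolation to 100%: $1G
n_ai_researchers = 20_000                    # rough worldwide count, per the post

# If researchers are treated as solely responsible, each one's share of the offset is:
per_researcher = total_offset_cost / n_ai_researchers
print(f"${per_researcher:,.0f} per researcher")  # $50,000; the post quotes ~$45,000
```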

Load More