There's also the possibility that a maximum doesn't exist.
Suppose you had a one-shot utility machine, where you simply punch in a number, and the machine will generate that many utils and then self-destruct. The machine has no limit on the number of utils it can generate. How many utils do you select?
"Maximise utility" has no answer to this, because there is no maximum.
In real life, we have a practically infinite number of actions available to us. There might be a sense in which, due to field quantisation and finite negentropy, there are technically only finitely many actions ...
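The problem with the machine can be stated formally (a minimal sketch, taking the payoff of punching in $n$ to be $u(n) = n$):

```latex
u(n) = n \;\; \forall n \in \mathbb{N}
\quad\Longrightarrow\quad
\sup_{n} u(n) = \infty
\;\text{ but }\;
\nexists\, n^{*} \text{ with } u(n^{*}) \ge u(n) \;\forall n
```

The supremum exists only as a limit, so "pick the utility-maximising action" picks out no action.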
I had a crack at doing the Fermi Paradox calculations in vanilla JS as a benchmark. It took maybe 5 minutes to build reusable probabilistic estimation functions from scratch. On that basis, it doesn't look to me like it would be worth the effort of learning a new syntax.
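A minimal sketch of what reusable probabilistic estimation functions in vanilla JS might look like (the function names and the 90%-interval convention here are my own illustration, not the actual code):

```javascript
// Illustrative sketch, not the deployed code.
// Sample from a log-normal distribution specified by a 90% confidence
// interval [lo, hi] on the quantity being estimated.
function lognormal(lo, hi) {
  const mu = (Math.log(lo) + Math.log(hi)) / 2;
  const sigma = (Math.log(hi) - Math.log(lo)) / (2 * 1.645); // 90% CI is about ±1.645σ
  const u = 1 - Math.random(); // in (0, 1], avoids log(0)
  const z = Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * Math.random()); // Box-Muller
  return Math.exp(mu + sigma * z);
}

// Monte Carlo product of uncertain factors, Drake-equation style.
// `factors` is an array of [lo, hi] 90% intervals.
function estimate(factors, n = 10000) {
  const samples = [];
  for (let i = 0; i < n; i++) {
    samples.push(factors.reduce((acc, [lo, hi]) => acc * lognormal(lo, hi), 1));
  }
  samples.sort((a, b) => a - b);
  return {
    p5: samples[Math.floor(0.05 * n)],
    median: samples[Math.floor(0.5 * n)],
    p95: samples[Math.floor(0.95 * n)],
  };
}
```

Usage is a one-liner per model, e.g. `estimate([[1e11, 4e11], [0.01, 0.5]])` multiplies two uncertain factors and reports quantiles.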
However, what took me almost all day was trying to get a nice visualisation of the probability distribution I came up with. I would like to be able to zoom and pan, hover over different x-values to get the PDF or CDF as a function of x, and maybe vary model parameters by dragging sliders....
Yeah, hard to know what to do with that. I'll make it clear in the post that it is an acknowledged mistake that has been apologised for.
I have a few novel ideas about how to make infinite ethics problems go away (by solving or dissolving them, depending on your perspective), but they would take hours or days to write down. How valuable would it be for me to do this?
If you agree that EA should:
Please upvote this comment (see the last paragraph of the post).
Thank you and good points.
but then the policy suggestions seem to endorse every criticism. (Maybe you do agree with all of them?)
I guess what I was attempting was to steelman all of the criticisms I heard, trying to come up with a version I do agree with.
I will change the title to "Be more respectful of climate change work"
If you agree EA should:
Please upvote this comment (see the last paragraph of the post).
Love it. The doggos are goddamn adorable.
Two issues:
Great post. 👍 I vibe with this.
Obvious suggestion, but have you tried looking for a Steve Jobs? Like through a founder dating thing? Or posting on this forum? Or emailing the 80k people?
Full of flaws? Yes. Cringe? Yes. 2-3 times longer than it should be? Yes.
Overrated? Only slightly. There are some great dramatisations of dry academic ideas (similar to Taleb), and the philosophy is plausibly life changing.
Got some nice feedback, but no clear signal that it was genuinely useful, so I've quietly dropped it for now.
And yet, this is a great contribution to EA discourse, and it's one that a "smart" EA couldn't have made.
You have identified a place where EA is failing a lot of people by being alienating. Smart people often jump over hurdles and arrive at the "right" answer without even noticing them. These hurdles have valuable information. If you can get good at honestly communicating what you're struggling with, then there's a comfy niche in EA for you.
"What obstacles are holding you back from changing roles or cofounding a new project?"
Where's the option for "Cofounding a project feels big and scary and it's hard to know where to begin or if I'm remotely qualified to try"?
I'm aggregating and visualising EA datasets on https://www.effectivealtruismdata.com/.
I haven't yet implemented data download links, but they should be done within a week.
I only included karma from posts you're the first author of.
So the missing karma is probably from comments or second author posts.
Not a philosopher, but I have overlapping interests.
Thanks for the suggestion. I don't have a super clear idea of what the main issues/chunks actually are at the moment, but I'll work towards that.
Very cute. 🙂
I'm curious about your thinking on colour symbolism. On the one hand, ravens are smart and crafty, so "black bird = smart/strategic bird" makes sense. But on the other hand, blue is kinda an EA colour, so at first I thought the blue bird would represent EA. Why did you choose to make the lay-bird a blue bird?
Thank you. I have corrected the mistake.
The relationship between Lindy, Doomsday, and Copernicus is as follows:
This is brilliant!
I think we can actually do an explicit expected-utility and value-of-information calculation here:
It just occurred to me that you don't actually need to convert the forecaster's odds to bits. You can just take the ceiling of the odds themselves:
This is more useful for calibration in the low-confidence range.
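To illustrate the difference (my own sketch, not from the BitBets post): the bits-based rule effectively rounds odds up to the next power of two, while taking the ceiling of the raw odds gives finer gradations at low confidence:

```javascript
// Illustrative comparison of two ways to discretise reported odds.
// Bits-based: round odds up to the next power of two.
function oddsToNextPowerOfTwo(odds) {
  return Math.pow(2, Math.ceil(Math.log2(odds))); // e.g. 2.5:1 -> 4:1
}
// Ceiling-based: round odds up to the next integer (finer at low confidence).
function oddsCeiling(odds) {
  return Math.ceil(odds); // e.g. 2.5:1 -> 3:1
}
```

At odds of 5:1, the bits version jumps to 8:1 while the ceiling version stays at 5:1, and the gap widens as confidence grows.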
Additional note: BitBets is a proper scoring rule, but not strictly proper. If you report odds rounded up to the next power of two, you will achieve the same scores in expectation.
Thanks for the insightful comments.
One other thought I've had in this area is "auctioning" predictions, so that whoever is willing to assign the most bits of confidence to their prediction (the highest "bitter") gets exclusive payoffs. I'm not sure what this would be useful for though. Maybe you award a contract for some really important task to whoever is most confident they will be able to fulfill the contract.
Lol. Not bad for 60% joking.
PS, here's the code actually deployed: https://hamishhuggard.com/misc/fermi.html