All of Hmash's Comments + Replies

Lol. Not bad for 60% joking.

PS, here's the code actually deployed: https://hamishhuggard.com/misc/fermi.html

I'd love to enter a competition like this.

There's also the possibility that a maximum doesn't exist.

Suppose you had a one-shot utility machine, where you simply punch in a number, and the machine will generate that many utils then self-destruct. The machine has no limit in the number of utils it can generate. How many utils do you select?

"Maximise utility" has no answer to this, because there is no maximum.

In real life, we have a practically infinite number of actions available to us. There might be a sense in which, due to field quantisation and finite negentropy, there are technically finitely many actions ... (read more)


I had a crack at doing the Fermi Paradox calculations using vanilla JS for benchmarking. Took maybe 5 minutes to build reusable probabilistic estimation functions from scratch. On that basis, it doesn't look to me like it would be worth the effort of learning a new syntax.
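
For concreteness, here's a minimal sketch of the kind of reusable estimation helpers I mean, in vanilla JS (the factor ranges and the log-uniform sampling are illustrative assumptions, not the code actually deployed at fermi.html):

```js
// Sample a value whose order of magnitude is uniform between lo and hi.
function logUniform(lo, hi) {
  const logLo = Math.log(lo);
  const logHi = Math.log(hi);
  return Math.exp(logLo + Math.random() * (logHi - logLo));
}

// Run a model many times and collect the samples.
function monteCarlo(model, n = 100000) {
  const samples = new Array(n);
  for (let i = 0; i < n; i++) samples[i] = model();
  return samples;
}

// Drake-style product of uncertain factors (illustrative ranges only).
const samples = monteCarlo(() =>
  logUniform(1, 100) *    // star formation rate
  logUniform(0.1, 1) *    // fraction of stars with planets
  logUniform(1e-3, 1) *   // fraction of planets that develop life
  logUniform(1e-5, 1) *   // fraction that develop civilisation
  logUniform(1e2, 1e9)    // civilisation lifetime (years)
);

// Summarise with a few quantiles.
samples.sort((a, b) => a - b);
const quantile = (p) => samples[Math.floor(p * (samples.length - 1))];
console.log({ p10: quantile(0.1), median: quantile(0.5), p90: quantile(0.9) });
```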

However, what took me almost all day was trying to get a nice visualisation of the probability distribution I came up with. I would like to be able to zoom and pan, hover over different x-values to get the PDF or CDF as a function of x, and maybe vary model parameters by dragging sliders.... (read more)

5
NunoSempere
2y
I like the chutzpa. Up to Ozzie, but most likely not.

Yeah, hard to know what to do with that. I'll make it clear in the post that it is an acknowledged mistake that has been apologised for.

I have a few novel ideas about how to make infinite ethics problems go away (by solving or dissolving them, depending on your perspective), but they would take hours or days to write down. How valuable would it be for me to do this?

1
Pato
2y
Oh, wait, I thought Infinite Ethics included all moral math with infinities, like Pascal's Mugging. Honestly, I personally think we should focus on AI and community building, and everything else seems almost irrelevant.

Third option:

I object to the very existence of this survey

No, infinite ethics is not a serious problem and doesn't deserve criticism.

Yes, infinite ethics is a serious problem and deserves criticism.

If you agree that EA should:

Be more accommodating of people who want to work on climate change

Please upvote this comment (see the last paragraph of the post).

I ended up significantly reworking the section. Any feedback on the new version?

4
Gavin
2y
lgtm

Thank you and good points.

"but then the policy suggestions seem to endorse every criticism. (Maybe you do agree with all of them?)"

I guess what I was attempting was to steelman all of the criticisms I heard, trying to come up with a version I do agree with.

I will change the title to "Be more respectful of climate change work"

Great, thank you.

I will update the bullet point with a link to your comment.

If you agree EA should:

Have more quiet spaces at conferences

Please upvote this comment (see the last paragraph of the post).

If you agree EA should:

Have better mental health support

Please upvote this comment (see the last paragraph of the post).

If you agree EA should:

Have more money transparency

Please upvote this comment (see the last paragraph of the post).

If you agree EA should:

Be more positive

Please upvote this comment (see the last paragraph of the post).

If you agree that EA should:

Give more attention to EA outsiders

Please upvote this comment (see the last paragraph of the post).

If you agree that EA should:

Be more human / emotional

Please upvote this comment (see the last paragraph of the post).

Love it. The doggos are goddamn adorable.

Two issues:

  • Robert Miles' audio quality didn't seem great.
  • The video felt too long and digressive. By about halfway I had to take a break to stop my brain from overheating. Also by about halfway I had lost track of what the original point was and how it had led to the current point. I think it would've worked better broken up into at least 3 shorter videos, each with its own hook and punchy finish.
1
Writer
2y
Hey, thanks for the feedback here.  Regarding Rob Miles' audio: is there anything more specific you have to say about it? I want to improve the audio aspect of the videos, but the last one seemed better than usual to me on that front. If you could pinpoint any specific thing that seemed off, that would be helpful to me.

Great post. 👍 I vibe with this.

Obvious suggestion, but have you tried looking for a Steve Jobs? Like through a founder dating thing? Or posting on this forum? Or emailing the 80k people?

Full of flaws? Yes. Cringe? Yes. 2-3 times longer than it should be? Yes.

Overrated? Only slightly. There are some great dramatisations of dry academic ideas (similar to Taleb), and the philosophy is plausibly life-changing.

Got some nice feedback, but no clear signal that it was genuinely useful, so I've quietly dropped it for now.

And yet, this is a great contribution to EA discourse, and it's one that a "smart" EA couldn't have made.

You have identified a place where EA is failing a lot of people by being alienating. Smart people often jump over hurdles and arrive at the "right" answer without even noticing them. These hurdles have valuable information. If you can get good at honestly communicating what you're struggling with, then there's a comfy niche in EA for you.

"What obstacles are holding you back from changing roles or cofounding a new project?"

Where's the option for "Cofounding a project feels big and scary and it's hard to know where to begin or if I'm remotely qualified to try"?

Answer by Hmash, Oct 28, 2021

I'm aggregating and visualising EA datasets on https://www.effectivealtruismdata.com/. 

I haven't yet implemented data download links, but they should be done within a week.

I only included karma from posts you're the first author of. 
So the missing karma is probably from comments or second author posts.

2
Peter Wildeford
2y
Oh yeah, must be due to comment karma.
Answer by Hmash, Sep 20, 2021

Not a philosopher, but I have overlapping interests.

  1. I'm not sure what you mean here. What's RDM? Robust decision making? So you'd want to formalise decision making in terms of the Bayesian or frequentist interpretation of probability?
  2. Again, I'm not sure what "maximising ambition" means? Could you expand on this?
  3. How would you approach this? Surveys? Simulations? From a probability perspective I'm not sure that there's anything to say here. You choose a prior based on symmetry/maximum-entropy/invariance arguments, then if observations give you more informati
... (read more)
1
tcelferact
3y
Thanks for your suggestions! Some answers:

  1. Robust decision making. And yes, pretty much, I was thinking of the interpretations covered here: https://plato.stanford.edu/entries/probability-interpret.
  2. I think formalizing this properly would be part of the task, but if we take the Impact, Neglectedness, Tractability framework, I'm roughly thinking of a decision-making framework that boosts the weight given to impact and lowers the weight given to tractability.
  3. I was roughly thinking of an analysis of the approach used by exceptional participants in forecasting tournaments like Tetlock's. Most of them seem to be doing something Bayesian in flavor, if not strictly Bayesian updating, and with impressive results. I suspect that could have interesting implications for how we understand (the relation of subjectivity to) a Bayesian interpretation of probability.

Thanks for the suggestion. I don't have a super clear idea of what the main issues/chunks actually are at the moment, but I'll work towards that.

Very cute. 🙂

I'm curious about your thinking on colour symbolism. On the one hand, ravens are smart and crafty, so "black bird = smart/strategic bird" makes sense. But on the other hand, blue is kinda an EA colour, so at first I thought the blue bird would represent EA. Why did you choose to make the lay-bird a blue bird?

3
D0TheMath
3y
Death of the author interpretation: currently there are few, large, EA-aligned organizations which were created by EAs. Much of the funding for EA aligned projects just supports smart people who happen to be doing effective altruism. The blue bird represents the EA community going to smart people, symbolized by the black bird, and asking why they’re working on what they’re working on. If the answer is a good one, the community / blue bird will pitch in and help.
6
Lizka
3y
To be honest, I didn't think very hard about the names. The thought process was roughly: 1) I want to make a story whose characters are birds, and I could have a smart black bird. 2) Incidentally, I like that it doesn't have to be technical or complicated--- there are birds you can call "blackbirds," and there are birds you can call "bluebirds," so 3) I'll call my characters "black bird" and "blue bird." And I liked the colors this suggested, so that didn't veto the decision. :)  In any case, I'm glad you liked it, thanks! 

Thank you. I have corrected the mistake.

The relationship between Lindy, Doomsday, and Copernicus is as follows:

  • The "Copernican Principle" is that "we" are not special. This is a generalisation of how the Earth is not special: it's just another planet in the solar system, not the centre of the universe.
  • In John Gott's famous paper on the Doomsday Argument, he appeals to the Copernican Principle to assert "we are also not special in time", meaning that we should expect ourselves to be at a typical point in the history of humanity.
  • The "most typical" point in history is exactly in the middle. Thus your best guess of the longevity of humanity is twice its current age: Lindy's Law. 

This is brilliant!

I think we can actually do an explicit expected-utility and value-of-information calculation here:

  • Let one five-star book = one util.
  • Each book's quality can be modelled as a rate $p$ of producing stars.
  • The star rating you give a book is the sum of 5 Bernoulli trials with rate $p$.
  • The book will produce $p$ utils of value per read in expectation.
  • To estimate $p$, sum up the total stars awarded $s$ and total possible stars $n$.
  • The probability distribution over $p$ is then a Beta distribution with parameters given by $s$ and $n$ (see the sketch below).
... (read more)
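
A quick vanilla-JS sketch of the estimate above (assuming a uniform Beta(1, 1) prior, so the posterior over $p$ is $\mathrm{Beta}(s+1,\, n-s+1)$; the function name is illustrative):

```js
// Estimate a book's quality p from its star ratings, modelling each rating as
// the sum of 5 Bernoulli(p) trials and assuming a uniform Beta(1, 1) prior.
function bookPosterior(ratings) {
  const starsAwarded = ratings.reduce((a, b) => a + b, 0); // s
  const starsPossible = 5 * ratings.length;                // n
  return {
    alpha: starsAwarded + 1,                 // Beta posterior parameters
    beta: starsPossible - starsAwarded + 1,
    // Posterior mean of p = expected utils per read (one five-star book = one util).
    expectedUtilsPerRead: (starsAwarded + 1) / (starsPossible + 2),
  };
}

console.log(bookPosterior([5, 4, 5, 3])); // four hypothetical ratings of one book
```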
1
MaxRa
3y
Cool idea! Sent you a message.

It just occurred to me that you don't actually need to convert the forecaster's odds to bits: you can just take the ceiling of the odds themselves, which is more useful for calibrating in the low-confidence range.

Additional note: BitBets is a proper scoring rule, but not strictly proper. If you report odds that are rounded up to the next power of two, you will achieve the same score in expectation.

Thanks for the insightful comments.

One other thought I've had in this area is "auctioning" predictions, so that whoever is willing to assign the most bits of confidence to their prediction (the highest "bitter") gets exclusive payoffs. I'm not sure what this would be useful for though. Maybe you award a contract for some really important task to whoever is most confident they will be able to fulfill the contract.

2
NunoSempere
3y
The auctioning scheme might not end up being proper, though
4
MaxRa
3y
@Simon_Grimm and I ended up also organizing a forecasting tournament. It went really well, people seemed to like it a lot, so thanks for the inspiration and the instructions!

One thing we did differently:

  • We hung posters for each question in the main hallway, because we thought it would make the forecasts more visible/present, and it would be interesting to see what others write down on the poster as their forecast. I would likely do this again, even though hammering all the numbers into an Excel sheet was some effort.

Questions we used:

  1. Will the probability of Laschet becoming the next German chancellor be higher than 50% on Hypermind on Sunday, 3pm?
  2. Will more than 30 people make a forecast in this tournament?
  3. Will one person among the participants do the Giving What We Can pledge during the weekend?
  4. During scoring on Sunday before dinner, will two randomly chosen participants report having talked to each other during the weekend? Needs to be more than "Hi"!
  5. At the end of Sunday's lunch, will there be leftover food in the pots?
  6. At Sunday's breakfast at 9am, will more than 10 people wear an EA shirt?
  7. Will a randomly chosen participant have animal suffering as their current top cause?
  8. Will a randomly chosen participant have risks associated with AI as their top cause?
  9. At breakfast on Sunday at 9am, will more than half have read at least half of Doing Good Better?
  10. Will there be more packages of Queal than Huel at Sunday's breakfast?
  11. Will there be any rain up until Sunday, 5pm?
  12. Will you get into the top 3 forecasters for all other questions except this one?
  13. Will participants overall overestimate their probability of getting into the top three?
  14. Will more people arrive from the South of Germany than from the North?
  15. On average, did people pay more than the standard fee of 100€?