All of hrosspet's Comments + Replies

I believe improving (group) epistemics outside of our bubble is an important mission. So it's great that you're working with policy makers!

Hi Niplav, thanks for your work! I've been thinking about doing the same, so you saved me quite some time :)

I made a pull request suggesting a couple of small changes and bug fixes to make it more portable and usable in other projects.

For other readers this might be the most interesting part: I created a Jupyter notebook that loads all the datasets and shows a preview of each. It should now be really simple to start working with the data, or just to see whether it's relevant for you at all.
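
A minimal sketch of what such a preview notebook boils down to (the data directory and CSV format here are my assumptions for illustration, not the actual layout of the repository):

```python
from pathlib import Path

import pandas as pd

DATA_DIR = Path("data")  # assumed download location, not the repo's actual layout


def preview_datasets(data_dir: Path, n_rows: int = 5) -> None:
    """Load every CSV in data_dir and print a short preview of each."""
    for csv_path in sorted(data_dir.glob("*.csv")):
        df = pd.read_csv(csv_path)
        print(f"=== {csv_path.name}: {len(df)} rows, {len(df.columns)} columns ===")
        print(df.head(n_rows))


preview_datasets(DATA_DIR)
```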

If you'd like to collaborate on this further I might add support for Manifo... (read more)

2
niplav
1y
Thank you! I'll review the pull request later today, but it looks quite useful :-) Not sure how much free time I can spend on the forecasting library, but I'll add the sources to the post.

I'd also add that virtues and deontologically right actions are the result of memetic evolution, and as such can be thought of as precomputed actions or habits that have proven beneficial over time and thus have high expected value.

2
Martin (Huge) Vlach
1y
Were slavery and oppression propagated by the same "memetic evolution" mechanics, though?

Not all conscious experiences are created equal.

Pursuing the ends Tyler talks about helps cultivate higher-quality conscious experiences.

Not sure how seriously you mean this, but news should be both important and surprising (i.e. carry new information content). I mean, you could post this a couple of times, as for many non-EA people this news might be surprising, but you shouldn't keep posting it indefinitely, even though it remains true.
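
To make the "new information content" point concrete, here's a toy sketch (my own illustration, not part of the original comment): the information content, or surprisal, of an event with probability p is -log2(p), so news that everyone already expects carries roughly zero information.

```python
import math


def surprisal_bits(p: float) -> float:
    """Information content (in bits) of an event with probability p."""
    return -math.log2(p)


print(surprisal_bits(0.5))   # 1.0 bit: a fair coin flip is genuinely news
print(surprisal_bits(0.99))  # ~0.014 bits: almost nothing new
# As p -> 1 (everyone already knows it), the surprisal -> 0, which is why
# reposting the same true news eventually stops being news.
```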

Thanks for sharing, will take a look!

This is my list of existing prediction markets (and related things like forecasting platforms) in case anyone wants to add what's missing:

https://www.metaculus.com/
https://polymarket.com/
https://insightprediction.com/
https://kalshi.com/
https://manifold.markets/
https://augur.net/
https://smarkets.com/

Interesting experiment!

One argument against the predictive power of stories is that many stories evolved as cautionary tales, which means that if they work, they will have zero predictive accuracy. That would also possibly fit this particular scenario.

I don't want to push you into feeling more guilty, but honestly I don't think directing the profit towards charities can offset the harm if the purchase is wasteful. In this case I'd focus more on the core problem, i.e. what need of yours is behind the shopping binges and why they help you, rather than trying to patch their consequences.

3
Vincent van der Holst
2y
It's obviously better not to buy stuff you don't need, but when you do need something, buying it from guided companies is a better option than buying from traditional companies, simply because they create impact through their donations. If you buy 100 USD worth of stuff online and 5 USD goes to effective climate charities, in most cases you would be offsetting more CO2 than was generated in the supply chain of those products. I think the key word in Aswasse's message is "necessary". I agree it's probably not too healthy if people buy even more because they no longer feel guilty buying stuff they don't need.

My experience from a big tech company: ML people are so deep in the technical and practical everyday issues that they don't have the capacity (nor the incentive) to form their own ideas about the further future.

I've heard people say that it's so hard to make ML do anything meaningful that they just can't imagine it doing something like recursive self-improvement. AI safety in these terms means making sure the ML model performs as well in deployment as in development.

Another trend I noticed, but I don't have much data for it, is that the somewhat old... (read more)

Not sure if I understand the text correctly, but the reasoning seems off to me. E.g.:

Expected value calculations don't seem to faithfully represent a person's internal sense of conviction that an outcome occurs. Or else opportunities with small chances of success would not attract people.

Isn't the exact opposite true? Don't opportunities with small chances of success attract people exactly because of (subconscious) expected value calculations?
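
As a toy illustration of the kind of calculation I mean (the numbers are entirely made up):

```python
# Hypothetical expected-value comparison: a long shot can beat a safe bet.
p_moonshot, value_moonshot = 0.01, 10_000_000  # 1% chance of a huge payoff
p_safe, value_safe = 0.95, 50_000              # near-certain modest payoff

ev_moonshot = p_moonshot * value_moonshot  # 100_000
ev_safe = p_safe * value_safe              # 47_500

print(ev_moonshot > ev_safe)  # True: the low-probability option wins on EV
```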

1
Noah Scales
2y
Thank you for your comment. I think that in some cases, subconscious calculations of expected value motivate actions. But I don't think that expected value calculations faithfully (reliably or consistently) represent a person's degree of conviction (or confidence) that an outcome occurs given the person's actions. In particular, I suggest alternatives at work when people claim that their decision is to choose an expected value of very low probability and very high value:

* that those people would have fun during the pursuit and so choose the pursuit of the outcome
* that those people have a hidden (benign) agenda behind their pursuit of the outcome
* that those people have an idea of what they could (dubiously) achieve that seems real and therefore, somehow, likely, or even certain.

What I don't think they have is a strong expectation that they will fail. We are not wired to meaningfully pursue outcomes that we believe, really believe, will not occur. What they should have, if their probability estimate of success gets really low, is a true expectation that their efforts will fail. In some cases they don't have that expectation because subconsciously they have a simple, clear, and vivid idea of that unlikely outcome. They pursue it even though the pursuit is high cost and the outcome is virtually impossible.

The problem is that sometimes you can see that a process is actually continuous only ex post. I think I saw this argument in Yudkowsky's writing: sometimes you just don't know which variable to observe, so a discontinuous event surprises you, and only after that do you realize you should have been observing X, which would have made the process seem continuous.

Answer by hrosspet · Jun 03, 2022

I'm looking for a cofounder / ML researcher / ML engineer for a new FTX-funded project related to prediction markets and large language models!

The long-term vision is to improve our decision making as humanity. We aim to do that by using AI to improve how prediction markets work. See the full role description: https://bit.ly/3zg5UFm

A little bit about me.

That something is very unlikely doesn't mean it's unimaginable. The goal of imagining and exploring such unlikely scenarios is that with a positive vision we can at least attempt to make them more likely. Without a positive vision, only catastrophic scenarios are left. That's, I think, the main motivation for FLI to organize this contest.

I agree, though, that the base assumptions stated in the contest make it hard to come up with a realistic image.

7
Czynski
2y
A positive vision which is false is a lie. No vision meeting the contest constraints is achievable, or even desirable as a post-AGI target. There might be some good fiction that comes out of this, but it will be as unrealistic as Vinge's Zones of Thought setting. Using it in messaging would be at best dishonest, and, worse, probably self-deceptive.

During the 2 hours of reading and skimming through the relevant blog posts I was able to come up with 2 strategies (and no counter-examples so far). They seem quite intuitive and easy to come up with, so I'm wondering what I got wrong about the ELK problem or the contest...

Due to my low confidence in my understanding, I don't feel comfortable submitting these strategies, as I don't want to waste the ARC team's time.

My background: ML engineer (~7 years of experience), some previous exposure to AGI and AIS research and computer security.

ARC would be excited for you to send a short email to elk@alignmentresearchcenter.org with a few bullet points describing your high level ideas, if you want to get a sense for whether you're on the right track / whether fleshing them out would be likely to win a prize.

I am pretty confident that ARC would want you to submit those strategies, especially given your background. Even if both trivially fail, it seems useful for them to know that they did not seem obviously ruled out to you by the provided material.

Thank you for a thought provoking post! I enjoyed it a lot.

I also find the "innovation as mining" hypothesis intuitive. I'd just add that innovation gets harder for humans, but we don't know whether this holds in general (think AI). Our mental capacity has been roughly constant since ancient Greece, but there is more and more previous work to understand before one can come up with something new. This might not be true for AI, if their capacity scales.

On the other hand there is a combinatorial explosion of facts that you can combine to come up with an i... (read more)
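
To put a rough number on that combinatorial explosion (my own toy illustration, not from the original comment): the number of ways to combine k facts out of a knowledge base of n grows as n choose k.

```python
import math

# Count k-fact combinations as the knowledge base grows.
for n in (10, 100, 1_000):
    print(n, math.comb(n, 2), math.comb(n, 3))
# 10 45 120
# 100 4950 161700
# 1000 499500 166167000
```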

I think governments are not aware of the stop button problem; they think that, in case of emergency, they can just shut down the company / servers running the AGI by force. That's what happened in the past with digital currencies (which Jackson Wagner mentions here as a plausible member of the same reference class as AGI for governments) before Bitcoin: they either failed on their own or, if successful, were shut down by the government (https://en.wikipedia.org/wiki/Digital_currency#History).

Daniel Schmachtenberger. Look up some of his YouTube interviews; I especially like the one with Lex Fridman (https://youtu.be/hGRNUw559SE). He's a very thoughtful, yet humble person. His approach is very multi-disciplinary, systems-level, and holistic. For me he is a role model for how he combines world-knowledge and self-knowledge, and for how clearly he is able to articulate his ideas, which I think are very EA-compatible (he mentions EA from time to time, but I haven't heard any endorsement from him). Yet he goes further than what is discussed within EA, e.g.... (read more)

Very interesting read, thanks for publishing this!

I am curious what qualified as "having longtermist experience" for you?

2
Clifford
2y
Glad to hear! Roughly, this would mean having worked in a relevant area (e.g. bio, AI safety) for at least 1-2 years and being able to contribute in some capacity to that field. To be clear, some ideas would require a lot more experience - this is just a rough proxy.

a meaningful retrospective is much easier to come by than for, say, the Covid pandemic.

Agreed, but we have this rare example of Dominic Cummings, the chief adviser to Boris Johnson during the pandemic, being thoroughly interviewed about the UK's response to the pandemic. For me it was extremely interesting to peek under the hood of UK government departments and see their failure modes. If you enjoyed the CS report, you might enjoy this one, too.

https://parliamentlive.tv/event/index/d919fbc9-72ca-42de-9b44-c0bf53a7360b