Founder / ML researcher / ML engineer
Working (6-15 years of experience)
54 · Joined Aug 2019


I'm a future ex-Apple ML engineer ;) with some research and entrepreneurial background. I've been following AI technical alignment research, but I'm also interested in the bigger problem, which I call Human alignment. I'm starting a new project that aims to tackle one aspect of this bigger problem. I'm a long-term member of the Czech EA and LW community and have attended a CFAR workshop.

How others can help me

I'm looking for a cofounder / ML researcher / ML engineer for my new FTX-funded project! See the role description:


I'd also add that virtues and deontologically right actions are results of a memetic evolution, and as such can be thought of as precomputed actions or habits that have proven to be beneficial over time and thus have high expected value.

Not all conscious experiences are created equal.

Pursuing those ends Tyler talks about helps cultivate higher quality conscious experiences.

Not sure how seriously you mean this, but news should be both important and surprising (= have new information content). You could post this a couple of times, since for many non-EA people this news might be surprising, but you shouldn't keep posting it indefinitely, even though it remains true.

Thanks for sharing, will take a look!

This is my list of existing prediction markets (and related things like forecasting platforms), in case anyone wants to add what's missing.

Interesting experiment!

One argument against the predictive power of stories is that many stories evolved as cautionary tales, which means that if they work, they will have zero predictive accuracy. That could also fit this particular scenario.

I don't want to push you into feeling more guilty, but honestly I don't think directing the profit towards charities can offset the harm if the purchase is wasteful. In this case I'd focus more on the core problem, i.e. what need of yours is behind the shopping binges and why they help you, rather than trying to patch the consequences.

My experience from a big tech company: ML people are so deep in everyday technical and practical issues that they don't have the capacity (or incentive) to form their own ideas about the further future.

I've heard people say that it's so hard to make ML do something meaningful that they just can't imagine it doing something like recursive self-improvement. AI safety in these terms means making sure the ML model performs as well in deployment as in development.

Another trend I've noticed, though I don't have much data for it: the somewhat older generation (35+) is mostly interested in the technical problems and doesn't feel much responsibility for how the results are used, whereas the 25–35 generation cares much more about the future. I see similarities with climate change awareness, although the generational boundaries might differ.

Not sure if I understand the text correctly, but the reasoning seems off to me. E.g.:

Expected value calculations don't seem to faithfully represent a person's internal sense of conviction that an outcome occurs. Or else opportunities with small chances of success would not attract people.

Isn't the exact opposite true? Don't opportunities with small chances of success attract people exactly because of (subconscious) expected value calculations?
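A minimal sketch of the point, with purely illustrative numbers (none of these figures come from the comment): a long shot with a small probability of success can still have a higher expected value than a safe alternative, which would explain its attraction.

```python
# Illustrative expected-value comparison (hypothetical numbers):
# a risky venture with a 1% chance of a large payoff vs. a guaranteed option.
p_success = 0.01           # small chance the venture succeeds
payoff = 10_000_000        # value if it succeeds
safe_value = 80_000        # guaranteed alternative

ev_risky = p_success * payoff  # expected value of the long shot: 100_000
print(ev_risky)                # 100000.0
print(ev_risky > safe_value)   # True: the long shot wins in expectation
```

Under these (assumed) numbers the small-probability option dominates in expectation, so chasing it is consistent with, not contrary to, expected value reasoning.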

The problem is that sometimes you can see that a process is continuous only ex post. I think I saw this argument in Yudkowsky's writing: sometimes you just don't know which variable to observe, so a discontinuous event surprises you, and only afterwards do you realize you should have been observing X, which would have made the process seem continuous.

I'm looking for a cofounder / ML researcher / ML engineer for a new FTX-funded project related to prediction markets and large language models!

The long-term vision is to improve humanity's decision-making. We aim to do that by using AI to improve how prediction markets work. See the full role description:

A little bit about me.
