Ex-Apple ML engineer with a research and entrepreneurial background. I work on technical AI alignment research, but I'm also interested in the bigger problem, which I call Human alignment. I'm starting a new project that aims to tackle one aspect of this bigger problem. I'm a long-term member of the Czech EA and LW community and have attended a CFAR workshop.
Hi Niplav, thanks for your work! I've been thinking about doing the same, so you saved me quite some time :)
I made a pull request suggesting a couple of small changes and bug fixes to make it more portable and usable in other projects.
For other readers, this might be the most interesting part: I created a Jupyter notebook that loads all the datasets and shows a preview of each. So now it should be really simple to start working with the data, or just to see whether it's relevant for you at all.
If you'd like to collaborate on this further, I might add support for Manifold Markets data and the Autocast dataset, as that's what I've been working with up till now.
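For readers curious what such a preview step looks like, here is a minimal sketch in plain Python. This is not the notebook's actual code (which presumably loads the real dataset files, likely with pandas); the `preview` helper and the inline sample data are hypothetical stand-ins.

```python
# Minimal sketch of a dataset-preview step: read a CSV and show the header
# plus the first few rows. The sample data below is a hypothetical stand-in
# for a real forecasting dataset file.
import csv
import io

def preview(csv_text, n=3):
    """Return the header and the first n data rows of CSV text."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    rows = [row for _, row in zip(range(n), reader)]
    return header, rows

sample = "question,probability\nWill X happen?,0.7\nWill Y happen?,0.4\n"
header, rows = preview(sample)
print(header)  # ['question', 'probability']
print(rows)    # [['Will X happen?', '0.7'], ['Will Y happen?', '0.4']]
```

For real files you would open each dataset path and pass its contents through the same helper, which is enough to judge at a glance whether a dataset is relevant for you.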
Not sure how seriously you mean this, but news should be both important and surprising (i.e., carry new information). You could post this a couple of times, since for many non-EA people this news might be surprising, but you shouldn't keep posting it indefinitely, even though it remains true.
Thanks for sharing, will take a look!
This is my list of existing prediction markets (and related things like forecasting platforms), in case anyone wants to add what's missing:
https://www.metaculus.com/
https://polymarket.com/
https://insightprediction.com/
https://kalshi.com/
https://manifold.markets/
https://augur.net/
https://smarkets.com/
I don't want to push you into feeling more guilty, but honestly I don't think directing the profits towards charities can offset the harm if the purchase itself is wasteful. In this case I'd focus more on the core problem, i.e. what need of yours is behind the shopping binges and why they help you, rather than trying to patch the consequences.
My experience from a big tech company: ML people are so deep in the technical and practical everyday issues that they don't have the capacity (or the incentive) to form their own ideas about the further future.
I've heard people say that it's so hard to make ML do anything meaningful that they just can't imagine it doing something like recursive self-improvement. AI safety, in these terms, means making sure the ML model performs as well in deployment as in development.
Another trend I've noticed, though I don't have much data for it: the somewhat older generation (35+) is mostly interested in the technical problems and doesn't feel much responsibility for how the results are used, whereas the 25-35 generation cares much more about the future. I'm noticing similarities with climate change awareness, although the generational boundaries might differ.
I believe improving (group) epistemics outside of our bubble is an important mission. It's great that you are working with policy makers!