Habryka

19002 karma · Joined

Bio

Project lead of LessWrong 2.0; I often help the EA Forum with various technical issues. If something is broken on the site, there's a good chance it's my fault (sorry!).

Comments (1204) · Topic contributions (1)

I feel like the prediction markets themselves are best modeled as derivative markets, and then you are talking about second-order derivative markets here. But IDK, mostly sounds like semantics.

I was reading the Charity Commission report on EV and came across this paragraph: 

During the inquiry the charity took the decision to reach a settlement agreement in relation to the repayment of funds it received from FTX in 2022. The charity made this decision following independent legal advice they had received. The charity then notified the Commission once this course of action had been taken. The charity returned $4,246,503.16 USD (stated as £3,340,021 in its Annual Report for financial year ending 30 June 2023). The Commission had no involvement in relation to the discussions and ultimate settlement agreement to repay the funds.

This seems directly in conflict with the settlement agreement between EV and FTX, which Zachary Robinson summarized as:

First, we’re pleased to say that both Effective Ventures UK and Effective Ventures US have agreed to settlements with the FTX bankruptcy estate. As part of these settlements, EV US and EV UK (which I’ll collectively refer to as “EV”) have between them paid the estate $26,786,503, an amount equal to 100% of the funds the entities received from FTX and the FTX Foundation (which I’ll collectively refer to as “FTX”) in 2022.

These two amounts differ hugely. My guess is that this is because most of the FTX funds were received by EV US, which isn't covered by the Charity Commission report? But curious whether I am missing something.
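
For concreteness, here is the implied split if the Commission figure covers only EV UK's repayment. This is just my own arithmetic on the two quoted numbers, not anything stated in either report:

```typescript
// Implied split, assuming the Charity Commission figure covers only EV UK's repayment.
const totalRepaidUsd = 26_786_503;    // EV UK + EV US combined, per Zachary Robinson's summary
const evUkRepaidUsd = 4_246_503.16;   // per the Charity Commission report
const impliedEvUsUsd = totalRepaidUsd - evUkRepaidUsd;
console.log(impliedEvUsUsd.toFixed(2)); // "22539999.84", i.e. roughly 84% of the total via EV US
```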

The other people working on Lightcone + LW and I are pretty interested in working in this space (LW + the AI Alignment Forum put us, IMO, in a great position to get ongoing feedback from users on our work in the space, and we've also collaborated a good amount with Manifold historically). However, we currently don't have funding for it, which is our biggest bottleneck for working on this.

AI engineering tends to be particularly expensive in terms of talent and capital expenditures. If anyone knows of funders who might be interested in supporting this kind of work, letting me know would be greatly appreciated.

Yeah, I don't think this is a crazy take. I disagree with it based on having thought about it for many years, but yeah, I agree that it could make things better (though I don't expect it would, and expect it would instead make things worse).

Yes: that if we send people to Anthropic with the aim of "winning an AI arms race", this will make it more likely that Anthropic starts to cut corners. Indeed, that is very close to the reasoning that caused OpenAI to exist, and that seems to have caused it to cut lots of corners.

That sounds like the way OpenAI got started.

Oh, I quite like the idea of having the AI score the writing on different rubrics. I've been thinking about how to better use LLMs on LW and the AI Alignment Forum, and I hadn't considered rubric scoring so far; I might give it a shot as a feature to integrate.
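
To gesture at the shape I have in mind (purely a sketch, using the OpenAI Node client and some made-up rubric names; nothing we've actually built):

```typescript
import OpenAI from "openai";

// Hypothetical rubrics; the actual set would need a lot more thought.
const rubrics = ["clarity", "reasoning transparency", "novelty"];

// Ask the model to score a draft from 1-10 on each rubric, returning JSON.
async function scoreDraft(draft: string): Promise<Record<string, number>> {
  const client = new OpenAI();
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content: `Score the user's draft from 1-10 on each of these rubrics: ${rubrics.join(", ")}. Reply with a JSON object mapping rubric name to score.`,
      },
      { role: "user", content: draft },
    ],
    response_format: { type: "json_object" },
  });
  return JSON.parse(response.choices[0].message.content ?? "{}");
}
```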

That's an interesting idea, I hadn't considered that!

Yeah, I've considered this a bunch (especially after my upvote strength on LW went up to 10, which really limits the number of people in my reference class). 

I think a full multi-selection UI would be hard, but a user setting on your profile that lets you set your upvote strength to any number between 1 and your current maximum vote strength would be less convenient but much easier UI-wise. It would still require somewhat involved changes to the way votes are stored (we currently have an invariant that guarantees you can recalculate any user's karma from nothing but the vote table, and this would introduce a new dependency into that calculation, which would have some reasonably big performance implications).
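
To illustrate the invariant I mean (a toy sketch with made-up field names, not the real LW schema):

```typescript
// Toy sketch of the invariant (made-up field names, not the real LW schema).
interface Vote {
  documentAuthorId: string;
  power: number; // fully determined by data in the vote table itself
}

// Karma is a pure function of the vote table: replay all votes, nothing else needed.
function recalculateKarma(votes: Vote[]): Map<string, number> {
  const karma = new Map<string, number>();
  for (const vote of votes) {
    karma.set(vote.documentAuthorId, (karma.get(vote.documentAuthorId) ?? 0) + vote.power);
  }
  return karma;
}

// A user-configurable vote strength breaks this unless the chosen strength is written
// onto each vote row at cast time; otherwise recalculation has to start consulting the
// user-settings table as well (the new dependency, and the performance cost).
```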
