Habryka

18969 karma · Joined

Bio

Project lead of LessWrong 2.0; I often help the EA Forum with various site issues. If something is broken on the site, there's a good chance it's my fault (sorry!).

Comments (1201)

Topic contributions (1)

The other people working on Lightcone + LW and I are pretty interested in working in this space (LW + the AI Alignment Forum put us, IMO, in a great position to get ongoing feedback from users on our work in this space, and we've also collaborated a good amount with Manifold historically). However, we currently don't have funding for it, which is our biggest bottleneck to working on this.

AI engineering tends to be particularly expensive in terms of both talent and capital expenditures. If anyone knows of funders who might be interested in supporting this kind of work, I'd greatly appreciate hearing about them.

Yeah, I don't think this is a crazy take. I disagree with it, having thought about it for many years, but I agree that it could make things better (though I don't expect it would; I expect it would instead make things worse).

Yes, that if we send people to Anthropic with the aim of "winning an AI arms race," this will make it more likely that Anthropic will start to cut corners. Indeed, that is very close to the reasoning that caused OpenAI to exist, and it seems to have caused OpenAI to cut lots of corners.

That sounds like the way OpenAI got started.

Oh, I quite like the idea of having the AI score the writing on different rubrics. I've been thinking about how to better use LLMs on LW and the AI Alignment Forum, and I hadn't considered rubric scoring so far; I might give it a shot as a feature to integrate.
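For what it's worth, here is a minimal sketch of what rubric scoring could look like, assuming a generic chat-completion client; the rubric list and the `scoreOnRubrics` helper are hypothetical illustrations, not an existing LW/AF feature:

```typescript
// Hypothetical sketch of LLM rubric scoring. The `llm` client is assumed to
// expose a generic `complete(prompt) -> Promise<string>` method.

interface RubricScore {
  rubric: string;
  score: number;        // 1-10
  justification: string;
}

// Example rubrics only; the actual set would be chosen by the site.
const RUBRICS = ["clarity", "reasoning transparency", "novelty"];

async function scoreOnRubrics(
  llm: { complete: (prompt: string) => Promise<string> },
  postBody: string,
): Promise<RubricScore[]> {
  const prompt = [
    "Score the following post on each rubric from 1 to 10, with a one-sentence justification per rubric.",
    `Rubrics: ${RUBRICS.join(", ")}`,
    'Respond with JSON only: [{"rubric": string, "score": number, "justification": string}, ...]',
    "---",
    postBody,
  ].join("\n");

  const raw = await llm.complete(prompt);
  return JSON.parse(raw) as RubricScore[];
}
```

The scores could then be shown to authors before posting, or used as one input among many for moderation, rather than as a gate on their own.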

That's an interesting idea, I hadn't considered that!

Yeah, I've considered this a bunch (especially after my upvote strength on LW went up to 10, which really limits the number of people in my reference class). 

I think a full multi-selection UI would be hard, but a profile setting that lets you set your upvote strength to any number between 1 and your current vote strength would be less convenient yet much easier UI-wise. It would still require somewhat involved changes to the way votes are stored (we currently have an invariant that guarantees you can recalculate any user's karma from nothing but the vote table, and this would introduce a new dependency into that invariant with some reasonably big performance implications).
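To make the storage question concrete, here is a minimal sketch of one possible approach, assuming the chosen power is written onto each vote row at vote time; all type and field names here are illustrative and not the actual LessWrong schema:

```typescript
// Illustrative sketch only; the types and field names are hypothetical, not
// the real LessWrong schema.

interface Vote {
  userId: string;
  documentId: string;
  // Written at vote time. Storing the power on the vote row is one way to keep
  // the invariant that karma is recomputable from stored votes alone.
  power: number;
}

interface VoterSettings {
  maxPower: number;      // determined by the voter's karma
  chosenPower?: number;  // optional override from the hypothetical profile setting
}

// Clamp any override into [1, maxPower] so nobody exceeds their earned strength.
function effectivePower(settings: VoterSettings): number {
  const chosen = settings.chosenPower ?? settings.maxPower;
  return Math.max(1, Math.min(chosen, settings.maxPower));
}

// Karma recalculation still reads only stored votes.
function recalculateKarma(votes: Vote[], documentsByAuthor: Set<string>): number {
  return votes
    .filter((v) => documentsByAuthor.has(v.documentId))
    .reduce((sum, v) => sum + v.power, 0);
}
```

One trade-off under this assumption is that a vote's power is fixed at vote time, so later changes to the setting (or to the voter's karma) wouldn't retroactively change past votes.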

(I care quite a bit about votes being anonymous, so will generally glomarize in basically all situations where someone asks me about my voting behavior or the voting behavior of others, sorry about that)

My guess is LW both bans and rate-limits more. 

Academia before the mid-20th century was a for-profit enterprise. It did not receive substantial government grants and was indeed often very tightly intertwined with the development of industry (much more so than today).

Indeed, the degree to which modern academia operates on a grant basis and has adopted more of the trappings of the nonprofit space is one of the primary factors in my model of its dysfunctions.

Separately, I think the contribution of militaries to industrial and scientific development is overrated, though that also would require a whole essay to go into.
