Ozzie Gooen

10184 karma · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences: 1

Ambitious Altruistic Software Efforts

Comments: 929

Topic contributions: 4

Quick point, but I think this title is overstating things. "Is AI Hitting a Wall or Moving Faster Than Ever?" sounds like it's presupposing that the answer is extreme, when the truth is almost always somewhere in between.

I've seen a lot of media pieces use this naming convention ("Edward Snowden: Hero or Villain?"), and I'd generally recommend against it going forward.

That's interesting. But I agree that VC is a blessing and a curse. I'm hesitant to rely too much on VC-backed infrastructure, in a similar way that I'm hesitant to rely on small-independent-project infrastructure. I wish we had better mechanisms for this sort of thing; it could provide a lot of value if more projects like this had incentive-compatible ways of making money.

Not a biggie, I mainly just found it confusing. 

Huh, this is neat!

My "Forum Personality" is "Beloved Online Karma Farmer"? This confuses me a bit - "karma farmer" typically refers to a fairly pejorative role, from what I can tell. Just FYI, this strikes me as saying, "We noticed you're using semi-questionable methods to technically gain karma, but it's not as bad as it normally would be." Is this meant as a semi-playful dig, without as negative a spin? Was there a system that determined I gained karma in over-rated ways? Sorry, I'm newish to some of this terminology.

Oh interesting. Can you explain more about what you mean, and how this would work? I think there are a lot of ways this sort of thing could be done. 

Good to hear! Do let us know if there are any frustrations you have or improvements you'd like to see!

This space can move somewhat quickly. I just looked into Marimo - seems interesting. It was announced about a year ago and seems to be run as an independent project by two people.

I think it's easy to get burned by jumping on neat new projects. Previously, I've had people argue that we should have been deep into the Julia ecosystem, or at one point, the OCaml ecosystem (OWL seemed neat for a few years, but then the lead developers left). We were also excited about ReasonML / ReScript, but that sort of fizzled out.

We started Squiggle over 3 years ago and published the first main version, with the editor, 2 years ago. We then wrote about why we didn't think Python made sense at that point.

I'd flag that "Squiggle AI", despite the name, is fairly language-independent. Most of the software and learnings would allow us to change languages without too much difficulty (until/unless we really get into the details of composability). AI is also often good at translating between languages. We think we could have it optionally or only output Python, if that's a feature users would want later on, or if we think that's best.

All that said, I appreciate the suggestion. Looking back, I don't think we made the wrong move, but we'll keep our eyes on new technologies like this. Right now we work well with Squiggle - the UI / UX is very optimized for this kind of estimation, and it's very easy for us to customize and interact with. But it's definitely a lot of work, and one of these options might wind up good enough to be worth the effort and risk of transferring to.

If you'd prefer, feel free to leave questions for Squiggle AI here, and I'll run the app on them and respond with the results.

I agree, I've also been thinking about this. I think there's a great deal of interesting work here, to try to put together better terminology. 

My guess is that it would be difficult to change all dialogue using this vocabulary anytime soon, but even shifting some of the research dialogue could go a long way. 

Thanks for summarizing!

It strikes me that the above criticisms don't really seem consequentialist / hedonic-utilitarian-focused. I'm curious if other criticisms are, or if some of these are intended as such (via some more complex logic like, "Acting in the standard-morality way will wind up being good for consequentialist reasons in some roundabout way").

More generally, those specific objections strike me as very weak. I'd expect and hope that people at Open Philanthropy and GiveWell would have better objections. 