Ariel G.

Mechanical Engineer
102 karma · Joined Jun 2022 · Working (0-5 years)



Mechatronics engineer, recently quit my job at a medical robotics startup (semi-autonomous eye surgery robot).

Semi-active in the LW space (attended LWCW in Berlin last year).

Currently in the AI Safety Camp, working on technical standards for JTC21, as part of the EU AI Act.


I'm not sure I agree with the conclusion but I like the overall analysis, I think it is very useful.

I'm confused: we make this caution compromise all the time, for example in medical trial ethics. Can we go faster? Sure, but the risks are higher. Yes, that can mean some people won't get a treatment that is developed a few years too late.

Another, closer example is gain-of-function research. The point is, we could do a lot, but we chose not to; AI should be no different.

It seems to me that this post is a little detached from real-world caution considerations, even if it isn't making an incorrect point.

Answer by Ariel G. · Jan 18, 2023

It might be better to look for a professor doing interesting/relevant research, rather than a specific PhD program.

Ah right, I had that thought but wasn't sure, makes sense!

From my playing with it, ChatGPT uses complex language even when told not to. In Notion, there's an AI assistant (GPT-3 based) with a "simplify writing" feature; the outputs were still pretty verbose, with overly long sentences. Soon though, sure!


Well said, though I think your comment could use that advice :) Specific phrases/words I noticed: "reign in," "tendancy," "bearing in mind," "inhibit," "subtlety," "IQ-signal" (?).

I'm non-native and I do know these words, but I'm at mostly native level at this point (I spent half my life in an English-speaking country); I think many non-native speakers won't be as familiar.

Answer by Ariel G. · Dec 28, 2022

I came across this some time ago through LessWrong -

might be what you're looking for :)

As a (semi-humorous) devil's advocate: if we applied existential risk/longtermist ideas to non-human animals, couldn't the animal lives on Mars still be net positive, since they would help their respective species flourish for billions of years and reduce the risk of extinction that comes with living only on Earth?

I'm not sure I take this seriously yet, but it's interesting to think about.

This was really well written! I appreciate the concise and to the point writing style, as well as a summary at the top.

Regarding the arguments, I think they make sense to me, although this is where the whole discussion of longtermism tends to stay pretty abstract, since we can't actually put real numbers on it.

For example, in the spirit of your example: does working on AI safety at MIRI prevent extinction, while assuming a sufficiently great future, compared to, say, working on AI capabilities at OpenAI? (That is, maybe a misaligned AI could cause a greater future?)

I don't think it's actually possible to do a real calculation in this case, so we make the (reasonable) base assumption that a future with aligned AI is better than a future with misaligned AI, and go from there.

Maybe I am overly biased against longtermism either way, but in this example it seems to me that the problem you mention isn't really a real-world worry, only a theoretically possible Pascal's mugging.

Having said that, I still think it is a good argument against strong longtermism.


While this clarification of misinformation is important, I don't think it takes away from the main message of the post. It's important to remember that even with a culture of rationality, there are times when we won't have enough information to say what happened (unlike in Scott's case), and for that reason Maya's post is very relevant and I am glad it was shared.

It also doesn't seem appropriate to describe this post as "calling out". While it's legitimate to fear reputations being damaged by unsubstantiated claims, this post doesn't strike me as doing that.
