Cinera

2229 karma · Joined Feb 2020 · Pursuing a graduate degree (e.g. Master's) · Seeking work
www.lesswrong.com/posts/68mLJEfYj3WkTJyd4/about-me-cinera-s-home-page

Bio


Theoretical Computer Science MSc student at the University of [Redacted] in the United Kingdom.

I'm an aspiring alignment theorist; my research vibes are descriptive formal theories of intelligent systems (and their safety properties) with a bias towards constructive theories.

I think it's important that our theories of intelligent systems remain rooted in the characteristics of real-world intelligent systems; we cannot develop adequate theory from the null string as input.

How others can help me

I'm looking for UK-based organisations that would accept students for a 30+ week placement (starting June to September) to do theoretical AI safety research.

How I can help others

Reach out to me about AI existential safety. I'm willing to discuss, brainstorm, review and pick holes in ideas, etc.

Posts (22)

Comments (75)

For context, I'm black (Nigerian in the UK).

I'm just going to express my honest opinions here:

The events of the last 48 hours (slightly) raised my opinion of Nick Bostrom. I was very relieved that Bostrom did not compromise his epistemic integrity by expressing more socially palatable views that are contrary to those he actually holds.

I think it would be quite tragic to compromise on honestly/accurately reporting our beliefs, when the situation calls for it, just to fit in better. I'm very glad Bostrom did not do that.

As for the contents of the email itself: while very distasteful, it was written in a particular context to be deliberately offensive, and Bostrom regretted and apologised for it at the time. I don't think it's useful or valuable to judge him on the basis of an email he sent a few decades ago as a student. The Bostrom who sent the email did not reflectively endorse its contents, and the current Bostrom does not either.

I'm not interested in a discussion on race & IQ, so I deliberately avoided addressing that.

Sad to hear this happened, but it seems the situation was irrecoverable; the organisation had already been effectively dead for a while before it officially shuttered.

Glad for this post and all the comments.

Thanks, yeah.

My main hesitancy about this is that I probably want to go for a PhD, but I can only get the Graduate visa once, and I may want to use it after completing the PhD.

But I've come around to the idea that it may be better to use it now, pursue a PhD afterwards, and try to secure employment before completing my programme so I can transfer to the Skilled Worker visa.

Immigration is such a tight constraint for me.

My next career steps after I'm done with my TCS Master's are primarily bottlenecked by "what allows me to remain in the UK" and then by "what keeps me on track to contribute to technical AI safety research".

What I would like to do for the next 1–2 years ("independent research" / "further upskilling to get into a top ML PhD program") is not all that viable a path given my visa constraints.

Above all, I want to avoid wasting N more years by taking a detour through software engineering again just to get visa sponsorship.

[I'm not conscientious enough to pursue AI safety research/ML upskilling while managing a full-time job.]

I might just try to see if I can pursue a TCS PhD at my current university and do TCS research that I think would be valuable for theoretical AI safety.

The main drawback is that I'd have to spend N more years in <city>, when I was really hoping to move down to London.

Advice very, very welcome.

[Not sure who to tag.]

FWIW, I mostly read the core message of this post as: "You should start an AI safety lab. What are you waiting for? ;)"

The post felt to me like it was debunking reasons people might feel they aren't qualified to start an AI safety lab.

I don't think this was the primary intention though. I feel like I came away with that impression because of the Twitter contexts in which I saw this post referenced.

This is a good post.

There are counterarguments about how the real world is a much richer and more complex environment than chess (e.g. consider that a superintelligence can't beat knowledgeable humans at tic-tac-toe, but that doesn't say anything interesting).

However, I don't really feel compelled to elaborate on those counterarguments because I don't genuinely believe them and don't want to advocate a contrary position for the sake of it.

[Crossposting from LessWrong]


I wouldn't be able to start until October (I'm a full-time student; I might be working on my thesis over the summer and have at least one exam to write); should I still apply?

I am otherwise very interested in the SERI MATS program and expect to be a strong applicant in other ways.

I notice that I am surprised and confused.

I'd have expected Holden to contribute much more to AI existential safety as CEO of Open Philanthropy (career capital, comparative advantage, specialisation, etc.) than via direct work.

I don't really know what to make of this.

That said, it sounds like you've given this a lot of deliberation and have a clear plan/course of action.

I'm excited about your endeavours in the project!
