jacquesthibs

Comments

Training a GPT model on EA texts: what data?

I just scraped the EA Forum for you. It contains metadata too: authors, score, votes, date_published, text (post contents), and comments.

Here’s a link: https://drive.google.com/file/d/1XA71s2K4j89_N2x4EbTdVYANJ7X3P4ow/view?usp=drivesdk
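In case it's useful, here's a minimal sketch of how the dump could be loaded for a quick look. This assumes a single JSON file of post records; the filename and exact format below are placeholders, so adjust to whatever the linked file actually is:

```python
import json

import pandas as pd

# Load the scraped EA Forum dump. Assumes a single JSON array of post
# records; adjust the reader if the file is JSON Lines, CSV, etc.
with open("ea_forum_posts.json") as f:  # hypothetical filename
    posts = json.load(f)

df = pd.DataFrame(posts)

# Fields included in the scrape: authors, score, votes, date_published,
# text (post contents), and comments.
print(df[["authors", "score", "votes", "date_published"]].head())
```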

Good luck.

Note: We just released a big dataset of AI alignment texts. If you’d like to learn more about it, check out our post here: https://www.lesswrong.com/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai

EA will likely get more attention soon

Great points, here’s my impression: 

Meta-point: I am not suggesting we do anything about this or that we start insulting people and losing our temper (my comment is not intended to be prescriptive). That would be bad and it is not the culture I want within EA. I do think it is, in general, the right call to avoid fanning the flames. However, my first comment is meant to point at something that is already happening: many people uninformed about EA are not being introduced to it in a fair and balanced way, and first impressions matter. And lastly, I did not mean to imply that Torres’ stuff is the worst we can expect. I am still reading Torres’ writing with an open mind to take away the good criticism (while keeping the entire context in mind).

Regarding the articles: he writes by telling the general story in a way that makes it obvious he knows a lot about EA and was involved in the past, but then he bends the truth as much as possible so that the reader leaves with a misrepresentation of EA and of what EAs really believe and act on. Since this is a pattern in his writing, it’s hard not to believe he does this deliberately: it gives him plausible deniability, since what he says is often not “wrong”, but it is bent to the point that the reader ends up inferring things that are false.

To me, in the case of his latest article, you could come away with the impression that Bostrom and MacAskill (as well as EA as a whole) think the whole world should stop spending any money on philanthropy that helps anyone in the present (and that if it does, the money should go only to those who are privileged). The uninformed reader can come away with the impression that EA doesn’t actually care about human lives. The way he writes gives him credibility with the uninformed because it’s not just an all-out attack where his intentions are obvious to the reader.

Whatever you want to call it, this does not seem good faith to me. I welcome criticism of EA and longtermism, but this is not criticism.

*This is a response to both of your comments.

What are your recommendations for technical AI alignment podcasts?

Aside from those already mentioned:


The Inside View has a couple of alignment relevant episodes so far.

These two episodes of Machine Learning Street Talk.

The FLI podcast has some relevant episodes.

EA will likely get more attention soon

One thing that may backfire with the slow rollout of talking to journalists is that people who mean to write about EA in bad faith will be the ones at the top of the search results. If you search something like “ea longtermism”, you might find bad faith articles many of us are familiar with. I’m concerned we are setting ourselves up to give people unaware of EA a very bad faith introduction.

Note: when I say “bad faith” here, it may just be a matter of semantics and of how some people are interpreting the term. I think I might not have the vocabulary to articulate what I mean by “bad faith.” I actually agree with pretty much everything David has said in response to this comment.

AI Alignment YouTube Playlists

Saving for potential future use. Thanks!

Transcripts of interviews with AI researchers

Fantastic work. And thank you for transcribing!

The AI Messiah

If anything, this is a claim people have been bringing up on Twitter recently: the parallels between EA and religion. It’s certainly something we should be aware of, since whether or not “blind faith” can be good in religion, we don’t actually want it within EA. I could explain why I think AI risk is different from the messiah thing, but Rob Miles explains it well here: 

Given limited information (but information nonetheless), I think AI risk could lead to serious harm or to none at all, and it’s worth hedging our bets on this cause area (among others). This feels different from choosing to have blind faith in a religion, but I can see why outsiders think this. Though we can be victims of post-rationalization, I think religious folks have reasons to believe in a religion. I think some people might gravitate towards AI risk as a way to feel more meaning in their lives (or something like that), but my impression is that this is not the norm.

At least in my case,  it’s like, “damn we have so many serious problems in the world and I want to help with them all, but I can’t. So, I’ll focus on areas of personal fit and hedge my bets even though I’m not so sure about this AI thing and donate what I can to these other serious issues.”

2021 AI Alignment Literature Review and Charity Comparison

Avast is telling me that the following link is malicious: 

Ding's China's Growing Influence over the Rules of the Digital Road describes China's approach to influencing technology standards, and suggests some policies the US might adopt.  #Policy

I’m Offering Free Coaching for Software Developers in the EA community

Who am I? Until recently, I worked as a data scientist in the NLP space. I'm currently preparing for a new role, but unsure if I want to:

  1. Work as a machine learning engineer for a few years, then either transition to alignment, found a startup/org, or continue working as an ML engineer.
  2. Try to get a role as close to alignment as possible.

When I first approached Yonatan, I told him that my goal was to become "world-class in ml within 3 years" in order to make option 1 work. My plan involved improving my software engineering skills, since that was something I felt I was lacking. I told him my plan for improving my skills, and he basically told me I was going about it all wrong. In the end, he said I should seek mentorship ASAP from someone who has an incentive to help me improve my programming skills (via weekly code reviews). I had subconsciously avoided this approach because my experiences with mentorship were less than stellar. I took a role with the promise that I would be mentored and, in the end, I was the one doing all the mentoring...

Anyway, after a few conversations with Yonatan, it became clear that seeking mentorship would be at least 10X more effective than my initial plan.

Besides helping me change my approach to becoming a better programmer (and to improving in general), our chats have helped me steer my career in a better direction. Yonatan is good at helping you avoid spouting vague, bad arguments for why you want to do x.

I'm still in the middle of the job search process, so I will update this comment in a few months once the dust has settled. For now, I need to go; things have changed recently and I need to get in touch with Yonatan for feedback. :)

I highly recommend this service. It is lightyears ahead of a lot of other "advice" I've found online.

Potential EA NYC Coworking Space

I'd be interested in this if I moved to NYC. I'm currently at the very beginning of preparing for interviews, and since I'm not sure where I'll land yet, I won't answer the survey. Definitely a great idea, though. The decently sized EA community in NYC is one of the reasons it's my top choice for a place to move.
