Tristan Williams

AI Policy RA @ CAIP
678 karma · Joined Sep 2022 · Seeking work · Madrid, España

Bio


I hope you've smiled today :) 

I really want to experience and learn about as much of the world as I can, and pride myself on working to become a sort of modern-day Renaissance man, a bridge builder between very different people if you will. Some not-commonly-seen-in-the-same-person things: I've slaughtered pigs on my family farm and become a vegan, done HVAC (manual labor) work and academic research, and been a member of both the Republican and Democratic clubs at my university. 

Discovering EA has been one of the best things to happen to me in my life. I think I likely share something really important with all the people who consider themselves under this umbrella. EA can be a question, sure, but more than that, I hope EA can be a community, one that really works towards making the world a little better than it was. 

Below are some random interests of mine. I'm happy to connect over any of them, or over anything EA; please feel free to book a time that's open on my Calendly.

  • Philosophy (anything Plato is up my alley, though I'm most interested in ethical and political texts)
  • Psychology (not a big fan of psychotropic medication; also writing a paper on an interesting, niche brand of therapy called logotherapy that analyses its overlap with religion, and thinking about how religion, specifically Judaism, could itself be considered a therapeutic practice)
  • Music (Lastfm, Spotify, Rateyourmusic; have deep interests in all genres but especially electronic and indie, have been to Bonnaroo and have plans to attend more festivals)
  • Politics (especially American)
  • Drug Policy (currently reading Drugs Without the Hot Air by David Nutt)
  • Gaming (mostly League these days, but shamefully still Fortnite and COD from time to time)
  • Cooking (have been a head chef, have experience working with vegan food too and like to cook a lot)
  • Photography (recently completed the text portion of a project on community with older people, arguing that the way we treat the elderly in the US is fairly alarming)
  • Meditation (specifically mindfulness, which I have both practiced and studied in my RA work, where I tried to set forth a categorization scheme for the meditative literature)
  • Home (writing a book on different conceptions of it and how relationships intertwine, with a fairly long side endeavor into what forms of relationships should be open to us)
  • Speaking Spanish (I'm going to Spain for a year to teach English classes, because I want to speak Spanish fluently)
  • Traveling (have hit a fair bit of Europe and the US, as well as some random other places like Morocco)
  • Reading (I think I currently have over 200 books to read, and have been struggling to get through fantasy recently, finding myself continually pulled toward non-fiction, largely due to EA reasoning I think)

How others can help me

I've done some RA work in AI Policy now, so I'd be eager to continue that in a more permanent position (or at least for a longer funded period), and any help bettering myself (e.g. how can I do research better?) or finding a position like that would be much appreciated. Otherwise, I'm on the lookout for any good opportunities in the EA Community Building or General Longtermism Research space, so again, any help upskilling or breaking into those spaces would be wonderful. 

Of much lower importance, I'm still not sure what cause area I'd like to go into, so if you have any information on the following, especially as to a career in it, I'd love to hear about it: general longtermism research, EA community building, nuclear, AI governance, and mental health.

How I can help others

I don't have domain expertise by any means, but I have thought a good bit about AI policy and next best steps there, which I'd be happy to discuss (e.g. how bad is the risk from AI misinformation really?). Beyond EA-related things, I have deep knowledge in Philosophy, Psychology, and Meditation, and can potentially help with questions generally related to those disciplines. I would say the best thing I can offer is a strong desire to dive deeper into EA, preferably with others who are also interested. I can also offer my experience with personal cause prioritization, and help others on that journey (as well as connect with those trying to find work).

Comments (131)

Would love to hear more about what you didn't like, but the other piece sounds like it's worth checking out, I'll try to give it a read soon! 

Ah, I searched for a post a while back but didn't find anything; it might have been because I searched for the full book title, not sure, but thanks for flagging. 

Thanks for the kind message Jon :) I actually have the third one sitting as a draft right now, and hadn't put the last little bit in because I wasn't sure of the value, but I'll go ahead and finish that one up. I'd love to continue reviewing, but I'd have to see if they ship to other countries; I picked up the first three at an EAGx. I highly encourage you to leave your thoughts on them as a comment here, I think that could be helpful if you're interested! 

Sorry, I must have missed this earlier, but yes, we linked to your shortform in the post; it's a really great example of the positive upshot of doing this sort of work! Have you done any writeups on how you went about the meetings yet? Feels like that might also be worthwhile. 

The dropdowns appear to be broken; I had to go to the Cold Takes version.

I tried to explain why you may not want to put it that way, i.e. that there's perhaps an issue of framing here, and you first reply "but the statement is true" and essentially miss the point.

I'll briefly respond to one other point, but then want to reframe this, because the confusion here seems unproductive to me (I'm not sure where our views differ, and I don't think our responses are helping to clarify that for one another). The original comment was expressing a view like "using the phrase 'EAs are out' is probably a bad way to frame this". You responded "but it's literally true" and then went on to talk about how discussing this seems important for EA. But no one's implying it's not important for us to discuss. The argument is not "let's not talk about their relations to EA"; it's a framing thing. So I think you're either mistaken about what the claim is here, or you just wrote this in a somewhat confusing manner, starting to talk about something new and unrelated to the original point in your second paragraph.

To reframe: I'd perhaps want you to think on a question: what does it mean for us to be concerned that EAs are no longer on the board? Untangling why we care, and how we can best represent that, was the goal of my comment. To this end, I found the bits where you expand on your opinions on Toner and the board generally to be helpful.

I think the general point is that this makes sense from a charitable perspective, but is open to a fair degree of uncharitable impressions as well. When you say "EAs are out", it sounds like we want some of our own on the inside, as opposed to just sensible, safety-concerned people.

It kind of implies EAs are uniquely able to conduct the sort of safety-conscious work we want, when really (I think) what we care about as a community is having anyone on there who can serve as a ready tonic to for-profit pressures.

What succinct way to put this is better? "Safety is out" feels slightly better, but it's still making some sort of claim that safety is our unique province. So idk, maybe we just need slightly longer expressions here, like "Helen Toner and Tasha McCauley have done really great work, and without their concern for safety we're worried about future directions of the board" or something like that.

(The other two paragraphs of yours focus, somewhat confusingly, on the idea that labeling people as EAs is necessary for considering the impact of this on EA (and on their ability to govern in EA), which I think is best discussed as its own separate point.)

What sort of goals might be common to human level AIs emerging in different contexts for different purposes? Why wouldn't these AIs (in this situation where they're developed slowly and in an embedded context) have just as much diversity in goals as humans do? Or is the argument that they, at an incredibly high level, are just going to end up wanting things more similar to other AIs than humans?

I think this series is out of order, at least relative to the release order on your website. The intro here ("In this more recent series, I've been trying to...") indicates to me that this is not the start but rather an article from somewhere in the middle, and that I should stick with the release order when reading the series. 

Why exactly is this obvious?

If you think humans (or our descendants) have billions of years ahead of us, you should think that we are among the very earliest humans, which makes it much more plausible that our time is among the most important. 

I understand the basic thrust: actions taken earlier have a greater chance of causing significant downstream effects because there are simply more years in play. But is that all there is to the claim?
