All of Algo_Law's Comments + Replies

"I think the second view is basically correct for policy in general, although I don't have a strong view yet of how it applies to AI governance specifically. One thing that's become clear to me as I've gotten more involved in institution-focused work and research is that large governments and other similarly impactful organizations are huge, sprawling social organisms, such that I think EAs simultaneously underestimate and overestimate the amount of influence that's possible in those settings."

 

This is a problem I've spoken often about, and I'm curren... (read more)

Good post, thank you.

"or other such nonsense that advocates never taking on risks even when the benefits clearly dominate"

An important point to note here - the people who suffer the risks and the people who reap the benefits are very rarely the same group. Deciding whether to use an unsafe AI system (whether presently or in the far future) on the basis of a risk/benefit analysis goes wrong so often because one man's risk is another man's benefit.

Example: The risk of lung damage from traditional coal mining compared to the industrial value of the coal is a very different risk/reward analysis for the miner and the mine owner. Same with AI.

This would be an interesting approach to generating charitable donations. I would caution, though, that this seems to me (a non-USA person, so take this with a grain of salt) to skirt a little close to some laws surrounding fundraising, so I'd definitely check that out first. One man's charitable fundraising could be another man's false representation!

Still though, interesting thought :)

This is helpful. My entire career revolves around "conceal, but don't mislead" and even I'm still learning where lines are. Thank you for this post.

This was a great event which I followed very closely indeed. It generated so much interesting exploration of different areas and I learned so much about the world I live in.

I just want to add that the idea of having 'good-faith' submission prizes was a fantastic addition, and really helped level the playing field for people who otherwise might not have been able to contribute. I heard from a couple of people that they may not have been able to submit without them. I'd love to see more of these in similar contests in future.

I understand there are some people in the early stages of exploring this, though for the life of me I can't remember who. The Law and Longtermism Slack channel, which is run by the Legal Priorities Project, may be a good starting point, as I understand some people have found people for this there before.

1
effectiveutils
2y
Thank you for the pointer; I've referenced this post in the Slack channel.

You raise some fair points, but there are others I would disagree with. Just because there isn't a popular argument that AGI risk affects underprivileged people the most doesn't make it untrue. I can't think of a transformative technology in human history that didn't impact people more the lower down the social strata you go, and AI thus far has not only followed this trend but greatly exacerbated it. Current AI harms are overwhelmingly targeted at these groups. I can't think of any reason why much more powerful AI such as AGI w... (read more)

3
Karthik Tadepalli
2y
People are concerned about AGI because it could lead to human extinction or civilizational collapse. That really seems like it affects everyone. It's more analogous to nuclear war. If there was a full scale global nuclear war, being privileged would not help you very much. Besides, if you're going to make the point that AI is just like every other issue in affecting the most vulnerable, then you haven't explained why people don't care about AI risk. That is, you haven't identified something unique about AI. You could apply the same argument to climate change, to pandemic risk, to inequality. All of these issues disproportionately affect the poor, yet all of them occupy substantially more public discussion than AI. What makes AI different?

Good points. An important point to bear in mind, though, is that once again well-roundedness, volunteer work, hobbies, etc. are all related to factors apart from motivation/ability. Generally, people from wealthier backgrounds have many more of these on their resume than people from poorer backgrounds, because they could afford to take part in the hobbies, could afford to work for free, etc. Lots of supposedly 'academic filtering' is actually just socioeconomic filtering with extra steps.

Great post! I'm gonna throw out two spicy takes.

Firstly, I don't think it's so much that people don't care about AI Safety; I think it's largely that who cares about a threat is highly related to whom it affects. Natural disasters etc. affect everyone relatively (though not exactly) equally, whereas AI harms overwhelmingly affect the underprivileged and vulnerable - people who are vastly underrepresented in both EA and in wider STEM/academia, who are less able to collate and utilise resources, and who are less able to raise alarms. As a result, AI Safety is a field wh... (read more)

1
Karthik Tadepalli
2y
The first argument seems suspect on a few levels.

1. No argument about AGI risk that I've seen argues that it affects the underprivileged most. In fact, arguments emphasize how every single one of us is vulnerable to AI and that AI takeover would be a catastrophe for all of humanity. There is no story in which misaligned AI only hurts poor/vulnerable people.
2. The representation argument doesn't make sense, as it would imply that EA, as a pretty undiverse space, would not care about AGI risk. That is not the case. Moreover it would imply that there are many suppressed advocates for AI safety among social activists and leaders of underprivileged groups. That is definitely not the case.

This would be very helpful. It's often confusing for the applicant, as they have no idea what to change/work on. For me, I've been rejected by every EA fellowship I've ever applied for (woop woop, high score) but I don't know how to improve. Twice, orgs have legit emailed me out of the blue saying they like my blog/forum content and asking me to apply, and then rejected me. I have no idea what stage I failed at. Was my application poorly written? Were my research suggestions poor? Is it a CV issue? Am I over- or under-qualified? Who knows. Certainly not me. So I'm... (read more)

The audience section has a wide scope, which is useful, but just to confirm - would doing an explainer for AI Safety topics/issues affecting other areas, with those fields as the audience, count?

E.g. producing content on AI Safety risks in economics for economists? Or producing content on AI Safety topics in medicine for doctors?

I think you're right in your third paragraph. I lead a small group far outside of a 'hub', and I'd find this really useful as a way of being immersed in a fully EA environment for some time. It wouldn't so much be a case of pulling me FT to Prague, but more a chance to spend time in an EA environment that is different from my home one. That's largely what was behind my own application, anyway.

No problem RE timescale of reply! Thank you for such a detailed and thoughtful one :)

I really enjoyed this post, thank you for writing it. I'm commenting from an AI law and policy centric view, so this comment is mainly aimed at that angle.

I agree with much of your post, but I want to highlight that there is a need for social scientists in some areas of AI Safety research. I have worked on a few projects for the UK government around AI Safety, helping to build legal, regulatory, and mitigation strategies in the AI Safety field. This is often part of an interdisciplinary team. A few of us are usually sociologists which, with me having a mix... (read more)

Preface: There is every possibility that I have misinterpreted or misread the post. Please do let me know if this is the case and I will rescind this comment.

This is an interesting post, thank you for making it. It must have taken a lot of effort, and it's always a bold move putting culture- and community-related thought pieces out there, because they're often not 'safe' topics. Though I enjoyed many of your past posts, I don't agree with this one for a couple of reasons.

Firstly, I think the term 'elitism' is used too broadly in some areas. Eg. sometimes you... (read more)

2
James Lin
2y
Thanks for taking the time to write this response! We really appreciate the feedback. A couple of points:

1. On the first and second point, I agree that we could have been much more rigorous about the specifics of "what we mean by elitism." We mostly mean elite institutions and organizations, which we used interchangeably with elite environments (e.g. having worked at SpaceX, or having studied at MIT). Sometimes (maybe even often?), the best in the field won't be from an 'elite' institution (e.g. Ramanujan). I agree that elite institutions =/= best talent. The claim that we're making is that elite institutions correlate very strongly with fairly great talent depending on the situation. We mention in the post that elite selection can systematically miss very great people, especially for traits like agency or risk-aversion (entrepreneurial types).
2. "this is less valuable to draw from as on average these people will have faced fewer obstacles and gained less life experience than equally able peers from different socioeconomic brackets." I agree that equally able peers from different socioeconomic brackets could likely be better, for many of the reasons you stated. But the question is how to find these peers? If by equally able, you mean that those students attend the same institutions and the only difference is that they are from a lower socioeconomic bracket, we don't disagree. "You mention earlier traits like 'leadership' and 'agency'" It's hard to speak about these things without concrete numbers, and there's no doubt that leadership is also formed in people without access to elite environments. On agency, I agree with you. We explicitly mentioned it as a trait that isn't correlated much with elite environments.
3. On the last point, I've clarified our point and edited the original post. The claim is that people with the affordance to focus a lot of time on EA tend to skew towards people with the privilege to do so.

I've purposely built my engagement with EA in line with the principles you wrote:

 

  1. Social - have non-EA friends. Ideally have some be local. Talk about other stuff with them, mostly.
  2. Financial - do not rely on EA funding sources for income that you couldn't do without. Don't apply for EA jobs.
  3. Emotional - do not have a unidimensional sense of self-worth that boils down to "how am I scoring on a vague, amorphous impact scale as envisioned in EA terms"

In this way I think I'm fairly permanently 'out' of EA. But I think I get the best of both worlds like this... (read more)

3
Justis
2y
Seconded!

Finally. An EA fellowship that cannot reject me.  Victory is mine!

Joking aside, I really like a lot of the questions here. It's also worth bearing in mind that a lot of the categories can overlap with each other, which is another big bonus I see. Ideally, would these discussions take place on a single megathread, or across multiple smaller ones? Perhaps stickied topics in each tag?

1
brb243
2y
Yes! Thank you. I think maybe there can be some organized page of summaries that people going through a 'fellowship' can update - so an aspect of the Wiki. Otherwise, just writing a comment, or a comment on a comment, can be a good way to demonstrate that one thought about the topics. Or, forming several narratives of the articles can be nice (the activity where anyone writes the next sentence).

Thank you for pointing out the overlap. I can come up only with organization according to a vector space where the elements are the extent to which the article relates to specific topics, but it would be nice to have something with better flow and with paths (with intersections) that would lead one to go for a bit at a time.

A megathread would not solve the organization issue and could feel like the thoughts developed are not being utilized. Multiple smaller threads can be cool, but mostly for questions that are actually advanced by discussion or for those that can be interesting to get opinions on (not e.g. asking someone to rephrase main points). Stickied questions under tags may be a solution - also, once a question is somewhat resolved, or opinions at the time gathered, it can be replaced.

Yeah, I'm going to agree with JeffreyK here and say that you could definitely pursue philosophy as an interdiscipline. Tbh, a PhD is a philosophy degree in every area - my law PhD is technically a philosophy degree. I'm doing a lot of philosophy in it despite not having a prior interest in the area, because it's required for new knowledge.

If you have an interest area such as law, tech, animal rights, etc, you can always combine that with philosophy. That way you get to be a philosopher whilst also doing something you enjoy doing.

Also, be aware that EA ... (read more)

1
jessefrances
2y
True. I do agree that those two did have a lot of help along the way.

This is a fantastic and much-needed post with loads of consideration behind it. It's really great to see, especially with a small but growing legal and legal-adjacent community in EA.

I took a similar route to this but in the UK, so in the spirit of adding context for those who aren't US-based, I'm going to cover some points in this comment. It's not agreeing or disagreeing with anything you said in your post, just providing a little bit of info from a different area for anyone from there :)

I did my undergrad in AI, formally the BSc (Hons) Computer Science with Artificial I... (read more)

1
[anonymous]
2y
Luke, thanks so much for sharing these perspectives—so helpful to have a UK perspective for a topic like this one that comes with so many jurisdictional differences. (I suspect some U.S.-based contributors who are still making tuition or loan payments may have winced at your observation that “you can’t really go broke by going to law school” in the UK…) Cheers!

This is a really great, high-quality look at this area. Thank you all so much for writing it, especially in such an easy to read way that doesn't sacrifice any detail.

One bit I really like about this is it addresses a major blind spot in AI safety:

"AGI is not necessary for AI to have long-term impacts. Many long-term impacts we consider could happen with "merely" comprehensive AI services, or plausibly also with non-comprehensive AI services (e.g. Sections 3.2 and 5.2)."

I feel that Section 4 is an area that current AI Safety research really neglects, w... (read more)

1
Sam Clarke
2y
Thanks, I'm glad this was helpful to you!

If you enjoyed some of the issues raised in Weapons of Math Destruction (which I really enjoyed, as it's an AI book written by an actual developer that focuses on the social issues), you may enjoy going down the regulation/policy rabbit hole. None of these are EA books, but I think that's important, and in some ways it makes them better due to a wider viewpoint.

- Algorithmic Regulation by Karen Yeung and Martin Lodge
This is a great, user-friendly intro to algorithmic regulation, especially because it also explores the how and more importantly why of regulation e... (read more)

1
Joseph Lemien
2y
Lovely! Thank you so much for the recommendations. All three of these are books I've never heard of before. Much appreciated.

I'm not sure I can think of a single example from history or nature where a more advanced species/culture had power over a less advanced/adapted one, and where that ended up well for the underling.

This is really fantastic news, and a much-needed area of work. I did my master's degree in space law and it was such an intriguing area of governance. The current state of space governance and space law is the equivalent of trying to run modern society using only the laws of the Roman Empire. I think if people knew how outdated our international agreements are, and how unsuitable they are for modern space governance (let alone expansion, extraction, and colonisation!), they'd be a lot more panicked about getting space governance right.

Very excited to see the publication of your research agenda.

This is a really interesting piece of research. It is certainly a good omen for access to justice, both directly and indirectly. 

The issue of balancing privacy with transparency is an interesting one, and one I've done a lot of work in within Criminal Justice. It's never an easy decision to make, and I had never considered what good training material it would make for privacy-centred LLMs.

I'm still not completely sold on LFAI, but I agree that this is a promising factor in bringing it from theory to a more experimental basis.

Pre-Warning: Please don't read any of this as a criticism of those people who fit the super-intelligent, hard-degree-at-Oxford mould. If you're that person, you're awesome - this isn't critical of you - and this comment is directed at exploring the strengths of other paths :)

Tl;dr at bottom of post for the time-constrained :)

This was a really interesting post to read. I wrote a slightly controversial piece a little while back that highlighted that 'top' Universities like Oxford, Cambridge, Stanford have a lot of class and wealth restrictions s... (read more)

9
Olivia Addy
2y
Thank you so much for this comment! The points you made about the community needing different types of skills are great and I totally agree...your comment (and lots of others) has definitely helped open my mind up and made me think a bit more about ways in which I could be useful here...even if it's outside the traditional view I had of what an EA is...so thank you for that!!

Thank you for such an informative and well-thought-out reply. I appreciate you taking the time :)

I think you raise some good points here, and yes I have personally found getting access to money much easier than with most other orgs. I still do think that there may be an unintentional chilling effect on people from rent-seeker discourse, but I think we can both agree with @Levin that perhaps using a different term related to good and bad faith may be a good avenue to pursue.

All in all I think you do raise really good points both in the original post and in this reply, but do also think it's worth being mindful, as always, of unintended consequences :)

That's a good point, about community organisers being kind of a filter. I like to think I'd know if someone was looking to extract profit. To be honest we usually have the other problem. I've heard a few times before from people that they 'don't want to take the p*ss', and I have to convince them it's alright to stay at a 2-star instead of a 1-star! I think the groups function well because it's (in theory for me; it's never happened yet) possible to tell when someone's shifty. So I agree with that point.

I do still think though that too much focus on the discours... (read more)

2
tlevin
2y
I think you're probably right that there are elitism risks depending on how it's phrased. Seems like there should be ways to talk about the problem without sounding alienating in this way. Since I'm claiming that the focus really should just be on detecting insincerity, I think a good way to synthesize this would just be to talk about keeping an eye out for insincerity rather than "rent-seeking" per se.

That's an interesting tie-in to the 'burnout' discourse we've been seeing lately that I had not even considered.

It's something I would be willing to write if others wanted to read it, unless the original poster would rather do it.

Please do - at a minimum you could post what you've already written as a comment, but if you have more to say I'd be interested.

I'm actually going to reply to my own comment here with the cardinal sin of thinking of another point after hitting 'post', but not wanting to disrupt the flow of the original comment!

I believe there IS a case to be made for teaching organisers how to better spend funds smartly. I have been to larger EA events before where I've thought to myself 'this could have been done at half the price'.  Maybe it's the fact I grew up in an environment where you had to make every penny stretch as far as possible, but it blew me away when another group leader menti... (read more)

I think this is a good guide, and thank you for writing it. I found the bit on how to phrase event advertising particularly helpful.

One thing I would like to elaborate on is the 'rent-seekers' bit. I'm going to say something that disagrees with a lot of the other comments here. I think we need to be careful about how we approach such 'rent-seeking' conversations. This isn't a criticism of what you wrote, as you explained it really well, but more of a trend I've noticed recently in EA discourse and this is a good opportunity to mention it. 

It's importa... (read more)

0
[anonymous]
2y

I agree that it's very important to continue using EA money to enable people who otherwise wouldn't be able to participate in EA to do so, and it certainly sounds like in your case you're doing this to great effect on socioeconomic representation. And I agree that the amount of funding a group member requests is a very bad proxy for whether they're rent-seeking. But I don't agree with several of the next steps here, and as a result, I think the implication — that increased attention to rent-seeking in EA is dangerous for socioeconomic inclusion — is wrong.... (read more)

6
Nathan Young
2y
Agreed. I'm gonna channel my inner Ollie Base here and say "it's the EAG team's job to accept and pay for those they think will create the most value by attending". I think currently if you get accepted, go - go joyfully and enjoy the city you go to. I went to the zoo on the Sunday of EAG Prague. Some of my flights were paid for by CEA because I was cash-strapped at the time. I could have decided that was an inappropriate use of the time, but I think it made me enjoy the EAG more, I still talked to lots of people, and I would be more likely to fly to another EAGx.

Signalling matters, yes, but counterfactual impact is more important. If someone applies to an EAG partly for the holiday, then as long as they intend to take the EAG seriously and are honest on their application, more power to them. CEA can read their application and accept them if they want.
1
Algo_Law
2y
I'm actually going to reply to my own comment here with the cardinal sin of thinking of another point after hitting 'post', but not wanting to disrupt the flow of the original comment!

I believe there IS a case to be made for teaching organisers how to better spend funds smartly. I have been to larger EA events before where I've thought to myself 'this could have been done at half the price'. Maybe it's the fact I grew up in an environment where you had to make every penny stretch as far as possible, but it blew me away when another group leader mentioned to me they don't negotiate costs with vendors! Like haggle on price for room fees, food etc. Some find it distasteful, and I get that, but a lot could be saved.

Also, some events can be unnecessarily ostentatious. Like do we really need a room with this much gold and antique clocks? You could have rented a soviet-style office room at half the price like 2 miles away. Then again, it's very easy for me to criticise others given my near-zero large-scale event planning experience. Maybe there are other factors I'm not considering.

That said, maybe give group leaders some books on negotiation or on frugality tips. That may help a range of the issues highlighted in this post.
9
Kirsten
2y
This is a great comment and I think would make a good standalone Forum post - I'd certainly like to link to it.

I face enormous challenges convincing people of this. Many people don't see, for example, widespread AI-empowered human rights infringements as an 'existential catastrophe' because they don't directly kill people, and as a result it falls between the cracks of AI safety definitions - despite being a far more plausible threat than AGI, considering it's already happening. Severe curtailments to humanity's potential still firmly count as an existential risk in my opinion.

I've often thought that paying automated, narrow-AI systems such as warehouse bots or factory robots a wage (even though they're not sentient or anything) would help with many of the issues ahead of us with increased general automation. As employment goes down (less tax money) and unemployment (voluntary or otherwise), and therefore social welfare spending, goes up, it creates a considerable strain. Paying automated systems a 'wage' which can then be taxed might help alleviate that. It wouldn't be a wage, obviously, more like an ongoing fee for using such ... (read more)
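To make the fiscal logic concrete, here is a minimal toy model in Python. All figures (salary, tax rates, welfare cost) and function names are my own hypothetical assumptions for illustration, not numbers from the comment:

```python
# Toy model of the 'automation wage' idea above. All numbers and names are
# hypothetical assumptions for illustration, not figures from the comment.

def lost_revenue(salary: float, income_tax_rate: float, welfare_cost: float) -> float:
    """Annual fiscal gap created when one taxed job is automated away."""
    return salary * income_tax_rate + welfare_cost

def automation_fee_needed(salary: float, income_tax_rate: float,
                          welfare_cost: float, fee_tax_rate: float) -> float:
    """Notional robot 'wage' whose tax receipts would close that gap."""
    return lost_revenue(salary, income_tax_rate, welfare_cost) / fee_tax_rate

if __name__ == "__main__":
    gap = lost_revenue(salary=30_000, income_tax_rate=0.20, welfare_cost=8_000)
    fee = automation_fee_needed(30_000, 0.20, 8_000, fee_tax_rate=0.25)
    print(f"Annual fiscal gap per displaced worker: £{gap:,.0f}")  # £14,000
    print(f"Taxable robot 'wage' needed at a 25% rate: £{fee:,.0f}")  # £56,000
```

Under these made-up numbers, the taxed robot 'wage' would need to be set well above the displaced salary, since the fee's tax rate has to recover both the lost income tax and the added welfare cost.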

2
Shakked Noy
2y
Economists have thought a bit about automation taxes (which is essentially what you're suggesting). See, e.g., this paper.

"There's some vague sense that current-day concerns (like algorithmic bias) are not really AI Safety research. Although I've talked to some who think addressing these issues first is key in building towards alignment. 

 

Now don't go setting me off about this topic! You know what I'm like. Suffice to say, I think combatting social issues like algorithmic bias is potentially the only way to realistically begin the alignment process - building transparency etc. But that's a conversation for another post :D

Frances, your posts are always so well laid out with just the right amount of ease-of-reading colloquialism and depth of detail. You must teach me this dark art at some point!

As for the content of the post itself, it's funny that recently the two big criticisms of longtermism in EA are that EA is too longtermist and that EA isn't longtermist enough! I've always thought that means it's about right, haha. You can't keep everyone happy all of the time.

I'm one of those people you mention who only really interacts with the longtermist side of E... (read more)

4
frances_lorenz
2y
Luke, thank you for always being so kind :)) I very much appreciate you sharing your thoughts!!

"sometimes people exclude short-term actions because it's not 'longtermist enough'"

That's a really good point on how we see longtermism being pursued in practice. I would love to investigate whether others are feeling this way. I have certainly felt it myself in AI Safety. There's some vague sense that current-day concerns (like algorithmic bias) are not really AI Safety research. Although I've talked to some who think addressing these issues first is key in building towards alignment. I'm not even totally sure where this sense comes from, other than that fairness research is really not talked about much at all in safety spaces. Glad you brought this up as it's definitely important to field/community building.

No problem!

Absolutely, I can see what you mean. Personal lens counts for a lot and people can run away with ideas of badness. Things are rarely as bad as people criticise them for being, and EA is no different. Yeah it has a few issues here and there but the media and Twitter can often make these issues look far, far worse than they actually are. I can totally see where you're coming from.

I think this idea has some merit in itself, but it would be a lot more complex in practice. Some other replies have covered that and I have nothing useful to add, so I won't. One thing I would say is that we have to acknowledge that some of the criticism you list is pretty genuine. We aren't a perfect community, and this does impact our activities. Some examples from what Timnit et al discussed on Twitter:
 

 'EA and longtermism are claimed to be divorced from the lived experience of common people'

The majority of EA's base is in the ... (read more)

That's a very good point. I still feel there could be more contests, grants, orgs etc. in this area, but you're right that there are resources there and some serious knowledge at those orgs. Perhaps talent, not funding, is the main bottleneck we need to address. The two may be interrelated to an extent.

It's really frustrating to see so much governance talent at law conferences but very little within EA working on Longtermist issues. I think it's a mixture of a lack of outreach in those industries and the fact EA's reputation has taken a couple of... (read more)

A whole range of things, from elements on the 'fancy spreadsheet' side such as recidivism and predpol to the more complex elements surrounding evidential aspects. I am aware none of these are close to AGI, given current AI's hyper-specialism, but the point of that paragraph isn't about the AI itself but about how humans and organisations have been shown to use AI (or automation software, if you're more comfortable with that phrase). When the first actual AGI is developed, it is likely to be in a very well-funded lab - a lab likely under control of a... (read more)

You're right that those situations aren't impossible, but governance doesn't have to be an end goal - it can be a process. Even helping to govern current AI efforts will shape the field, much as regulation has shaped the nuclear field.

Thanks! I don't have concrete data; this has just been from my experience interacting with others within EA's field, at EAGs, and checking out resource lists/experiencing the courses.
 

There's one list here, for example, that shows current AI Safety resources and the spread of major AI works within EA, such as books. In addition there is a tool here, though its CompSci focus might not be because the field skews that way, but because the creator felt CompSci was most relevant, which I acknowledge.

That said, I was pleasantly surprised that in The A... (read more)

That's a really important point to know, actually. I'm glad you told me that. I was always scared that if I went over the limit, it might get revoked, or I might get rejected in future for 'taking the mick'. It's good to know it's not as strict. I tend to ask for lower amounts because if I get rejected, it's catastrophic, so I'd rather suffer more and raise my odds than risk it.

I got £500 approved to go to Oxford and London split between them (£300 and £200), but in future I might ask for more.

It's becoming clear in this thread that a lot of the probl... (read more)

2
Charles He
2y
Note that the recent docs on EAG travel (which at least cover through EAGx Prague, but may not cover beyond that) suggest that requesting cover for travel does not negatively affect your chances of acceptance. (Note that this doesn't necessarily give a person cover in the situation of ex-post going over budget, or cover reputational effects—but you should be able to push through any issues by being good and impactful.)

Another perspective is that for many people, like myself, we can take that 20% chance of no reimbursement (in other careers/institutions, if not EA), and by doing this, we can get comfortable and "learn the system". On the other hand, you can't with just five pounds in your bank account. So you put yourself in a tough situation, and then blow past three meetings. So there are three people who might think you are less promising, because of your own hardship. It is costly to be poor. Not everyone knows that feeling of fear and social stigma. I am furious at this situation you had to experience.

Disclaimer: I believe the things I said in this thread, but I detest giving an impression of "virtue" or clout, and I am also uncertain about the value or system effects of any action here from EA. So I add that I don't really know the answer or fully agree with everything you wrote in your blog. From my personal perspective, there seem to be many constituencies in EA who need satisfying already. I don't want to add unnecessary noise, such as making CEA, who are already up to their eyeballs in work and other considerations, think about this issue in an unnatural way. I am very privileged.

Thanks! Yes, I am sure some parts are misinterpreted or just down to my own experience, but tbh EA as an org tries super hard to be inclusive so they're probably working on it. Let me know when you next hit up an EAG and I'll come say hi. My girlfriend is a paramedic student too, so winner winner chicken dinner RE any future medical cost concerns :) She didn't charge me when I broke my ankle that one time, anyway ;)

Sorry to hear about your experience Joe, and thanks Julia for the heads up on procedures for everyone.

Regarding the socioeconomic background community element of EA, I feel the same. I started a blog lately and my first post was a post about my own socioeconomic challenges in EA, as well as some of the socioeconomic bottlenecks we face. It may be interesting to you to see you're not the only one:  https://legal-longtermist.ghost.io/why-eas-talent-bottleneck-is-a-barrier-of-its-own-making/. If it helps, reading this let me know I wasn't the only one ei... (read more)

1
joe k
2y
Really enjoyed reading that post, thanks for sharing! I'm happy you commented on this, and I also feel better after receiving the DMs about relatable experiences. I hope the issue you bring up on inadvertent filters on socioeconomic status is evaluated carefully by some people in the EA group!

I had a similar experience. I couldn't afford both nights in a hotel, so I slept on the Megabus on Friday night and chose to stay in a hotel for Saturday night as my most important meetings were Sunday morning and I wanted to be fresh.

Obviously didn't sleep on the bus because it keeps stopping and the chairs are designed not to be slept in - not helped by the fact the bus was apparently being driven by Colin McRae. So I had worked all day Friday, been awake all night (but safe and warm!) on the bus, and then had an entire day of conference on the Saturday.... (read more)

Hi Luke — sorry to hear about all of this! I work on the EA Global team and I can confirm that we definitely definitely don't want you sleeping on the bus! Please apply for more travel/accommodation funding next time if it'd be useful; it definitely won't affect your chances, and we won't reject you for 'taking advantage of us'!

For folks who need it, funding is also available up-front (rather than having to wait to be reimbursed), with an option to return extra money should you have any leftover.

3
Charles He
2y
Conditional on you already receiving reimbursement of costs and approval to EAG, I'm 80% sure that some marginal increases over your limit, like an extra hotel within reasonable tube distance (even at last-minute London prices), would be approved without issue.

Also, I will just flat out say that if someone has a situation like this in the future, where payment for a hotel would be this key, please contact me. There are things that will reimburse this.

By the way, note that I currently request and receive reimbursement of travel costs to conferences. Also, I intend to push my luck over the approved limit for at least one event in the future (it seems like I undershot my estimates, things have gotten pricier, and there are valuable peripheral events before/after the conference that increase stay time).

I agree with you, and with John and the OP. I have had exactly the same experience of the Longtermist community pushing away Phase 2 work as you have - particularly in AI Alignment. If it's not purely technical or theoretical lab work, then the funding bodies have zero interest in funding it, and the community has barely more interest than that in discussing it. This creates a feedback loop of focus.

For example, there is a potentially very high-impact opportunity in the legal sector right now to make a positive difference in AI Alignment. There are curr... (read more)

This is a good post. At present I'm happy being a public EA, but I was recently recommended for my first public appointment - I wonder if any of my posts will ever be held up for me to justify? Perhaps used as evidence of my character, beliefs, or bias. Also, not everything ages well. Look how LGBTQ+ language has changed just in a decade. What is acceptable verbiage today is hateful tomorrow, and vice versa. I just write my comments with that in mind for the moment and try to be the best person I can, but it does indeed mean there are risks attached. It's a co... (read more)

This is a really valuable idea and is certainly an area we should research more heavily. I have some brief thoughts on the 'pros' and some ideas that aren't so much 'cons' as 'areas for further exploration (AFFE)'. The AFFE list will be longer due to the explanation necessary, not because there's more AFFE than Pros :)

Pros:

  • Law tends to be quite a precise field, which lends itself to CompSci more than many other areas
  • Law (generally) evolves to reflect society's current moral beliefs and values
  • Law has a huge focus on 'unintended consequences' which is a big
... (read more)
3
Cullen
2y
Thanks a ton for your substantive engagement, Luke! I'm sorry it took so long to respond, but I highly value it.

Yeah, definitely agree that this is tricky and should be analyzed more (especially drawing on the substantial existing literature about moral permissibility of lawbreaking, which I haven't had the time to fully engage in).

Yeah, I do think there's an interesting thing here where LFAI would make apparent the existing need to adopt some jurisprudential stance about how to think about the evolution of law, and particularly of predicted changes in the law. As an example of how this already comes up in the US, judges sometimes regard higher courts' precedents as bad law, notwithstanding the fact that the higher court has not yet overruled it. The addition of AI into the mix—as both a predictor of and possible participant in the legal system, as well as a general accelerator of the rate of societal change—certainly threatens to stretch our existing ways of thinking about this. This is also why I'm worried about asymmetrical use of advanced AI in legal proceedings. See footnote 6. (And yes, the US[1] is also common law. :-) )

Definitely agree. I think the practical baby step is to develop the capability of AI to interpret and apply any given legal system. But insofar as we actually want AIs to be law-following, we obviously need to solve the jurisdictional and choice of law questions, as a policy matter. I don't think we're close to doing that—even many of the jurisdictional issues in cyber are currently contentious. And as I think you allude to, there's also a risk of regulatory arbitrage, which seems bad.

[1] Except the civil law of Louisiana, interestingly.