
Brandon Riggs

Fair question. Some of the numbers I've been hearing can wipe out a (high) yearly salary well within a month, or even days.

To go one layer deeper: I generally "back" money spent on someone working on AIS full time for a year, and think some good will probably come out of that. Although hiring may happen quickly, it seems that at least some thought goes into which positions need to be filled before the job posting goes out.

However, at the level of individual experiments, I think the scrutiny is much lower, and potentially nonexistent.

There seem to be plausible arguments for paying market rate to retain top talent (although you may disagree with them), but I don't really think there's an argument for spending huge sums on experiments without even double-checking whether there's a way they can reduce x-risk.

At what level of compute spending (if any) will AI Safety research stop being considered effective altruism?

Of course, saving humanity from misaligned AI could be argued to be close to priceless. But how many experiments have a direct theory of change (ToC) for how they will mitigate existential risk? Perhaps a general one is fine at low compute ("it only costs $10, and 'control research' is generally thought to be a good research agenda").

But what about $5,000? What about $10,000? These numbers start to compare to, or surpass, what organizations like Giving What We Can receive from someone who donates for a whole year. They also start to compete with saving a human life via programmes like those run by GiveWell's top charities.

What about $20,000? $30,000? $50,000? Over what time frame are we comfortable spending that much money on compute and still considering it money well (effectively) spent? A year? A month? A single experiment? What kind of discovery is worth $50,000 in AIS research? Should we expect a clear ToC?

I'm very pro AI Safety, but I'm worried about some of the compute budget numbers I'm hearing being thrown around (compared to the information gained). I'm wondering: is anyone else worried about a movement (famously) concerned with cost-effectiveness continuing on this path? Should we encourage more accountability?

 

Love this!

I'm a big proponent of using love of humanity as a motivator. It's true that guilt and/or rationalism can be motivating, but I've found that helping people because people are what make your life worth living seems like a much healthier way, and it's even more motivating (more effective).

You nail the sentiment in your post on Life in a Day. Of course it would be nice if our biological evolution had led us to be highly motivated by numbers on a spreadsheet, but operating on the hardware we have, the feeling Life in a Day gives is massively more motivating, at least to me, and I would love to see this fact leveraged more throughout the EA community.

I was happy to see Zach's speech at EAG London this year tapping a bit more into this humanity, and I believe it was a great step in this direction.

I think there may be an overcorrection in the EA movement away from using emotion as a motivator, since we often encounter people working on less impactful things via this guidance (the classic example of guide dogs in California vs. donating to the Against Malaria Foundation). It seems possible to decouple choosing based on emotion from using emotion to motivate action.

Thanks Scott! And thanks for your work on the AI safety map; it was a great surprise to find out you had helped with that!

I agree with your points, especially around theories of change and the post you shared. I feel like the highest value for the least work an org can produce is ensuring that the work it's doing is valuable/impactful in the first place. Without an explicit theory of change, or without seeing how their org/ToC fits into the "larger picture", well-intentioned people can be stuck spinning their wheels. In the absence of a centralized plan (the larger picture made explicit), I think your proposal of compiling organizations' ToCs could be a great place to start.

I've booked a call with Kabir and will definitely loop you in depending on how that goes!

They actually make some appearances in this post! Do you know who runs it? I don't see any team info on their About page.