All of PeterSlattery's Comments + Replies

Also someone messaged me about a recent controversy that Bryan was involved in. I thought he had been exonerated but this person thought that he had still done bad things.

See: https://www.dailymail.co.uk/news/article-11692609/Anti-aging-biotech-tycoon-accused-dumping-fianc-e-breast-cancer-diagnosis.html

And his response https://twitter.com/bryan_johnson/status/1734257098119356900?t=DHcSxlZ5PkxhREVJkAdXag&s=19

Worth knowing about when judging his character.

Yeah, I think that's part of it. I also thought it was very interesting how he justified what he was doing as being important for the long-term future given the expected emergence of superhuman AI. E.g., he is running his life by an algorithm in the expectation that society might be run in a similar way.

I will definitely say that he does come across as hyper-rational and low-empathy in general, but there are also some touching moments here where he clearly has a lot of care for his family and really doesn't want to lose them. Could all be an act, of course.

Thanks for sharing your opinion. What's your evidence for this claim?

1
AnonymousTurtle
16h
https://forum.effectivealtruism.org/posts/nb6tQ5MRRpXydJQFq/ea-survey-2020-series-donation-data#Donation_and_income_for_recent_years, and personal conversations which make me suspect the assumption of non-respondents donating as much as respondents is excessively generous. Not donating any of their money is definitely an exaggeration, but it's not more than the median rich person https://www.philanthropyroundtable.org/almanac/statistics-on-u-s-generosity/

Yeah, he could be planning to donate money once his attempt to reduce or overcome mortality is resolved.

He said several times that what he's doing now is only part one of the plan, so I guess there is an opportunity to withhold judgment and see what he does later.

Having said all that, I don't want to come across as trusting him. I just heard the interview and was really surprised by all the EA themes which emerged and the narrative he proposed for why what he's doing is important.

Thanks for the input!

I think of EA as a cluster of values and related actions that people can hold/practice to different extents. For instance, caring about social impact, seeking comparative advantage, thinking about long-term positive impacts, and being concerned about existential risks including AI. He touched on all of those.

It's true that he doesn't mention donations. I don't think that discounts his alignment in other ways.

Useful to know he might not be genuine though.

1
Michael Noetel
1mo
Thanks Peter. Fixed!

Thanks for this, I appreciate that someone read everything in depth and responded. 

I feel I should say something because I defended Nonlinear (NL) in previous comments, and it feels like I am ignoring the updated evidence/debate if I don't.

I also really don’t want to get sucked in, so I will try to keep it short:

How I feel
I previously said that I was very concerned after Ben's post, then persuaded by the response from NL that they are not net negative. 

Since then, I realized that there have been more negative views expressed towards NL than I rea... (read more)

Thanks for the detailed response, I appreciate it!

Thanks for writing this, Joseph. 

Minor, but I don't really understand this claim:

Someone made a forum post about taking several months off work to hike, claiming that it was a great career decision and that they gained lots of transferable skills. I see this as LinkedIn-style clout-seeking behavior.

I am curious why you think this i) gains them clout or ii) was written with that intention? 

It seems very different to the other examples, which seem about claiming unfair competencies or levels of impact etc. 

I personally think that taking t... (read more)

9
Joseph Lemien
4mo
Sure, I'll try to type out some thoughts on this. I've spent about 20-30 minutes pondering this, and this is what I've come up with. I'll start by saying I don't view this hiking post as a huge travesty; I have a general/vague feeling of a little yuckiness (and I'll acknowledge that such gut instincts/reactions are not always a good guide to clear thinking), and I'll also readily acknowledge that just because I interpret a particular meaning doesn't mean that other people interpreted the same meaning (nor that the author intended that meaning).

(I'll also note that if the author of that hiking post reads this: I have absolutely no ill-will toward you. I am not angry, I enjoyed reading about your hike, and it looked really fun. I know that tone is hard to portray in writing, and that the internet is often a fraught place with petty and angry people around every corner. If you are reading this it might come across as if I am angrily smashing my keyboard simply because I disagree with something. I assure you that I am not angry. I am sipping my tea with a soft smile while I type about your post. I view this less like "let's attack this person for some perceived slight" and more like "let's explore the semantics and implied causation of an experience.")

One factor is that it doesn't seem generalizable. If 10,000 people took time off work to do a hike, how many of them would have the same positive results? From the perspective of simply sharing a story of "this is what happened to me" I think it is fine. But the messaging of "this specific action I took helped me get a new job" seems like the career equivalent of "I picked this stock and it went up during a decade-long bear market, so I will share my story about how I got wealthy."

A second factor is the cause-and-effect. I don't know for sure, but I suspect that the author's network played a much larger role in getting a job than the skills picked up while hiking. The framing of the post was "It was a great career d

If you have time, can you provide some examples of what you saw as evidence of wrongdoing? 

I feel that much of what I saw from my limited engagement was a valid refutation of the claims made. For instance, see the examples given in the post above.

There were responses to new claims and I saw those as being about making it clear that other claims, which had been made separately from Ben's post, were also false.

I did see some cases where a refutation and claim didn't exactly match, but I didn't register that as wrongdoing (which might be due to bias or n... (read more)

If you have time, can you provide some examples of what you saw as evidence of wrongdoing? 

I didn't interpret any of these as evidence of the original wrongdoing, but these were the main things Kat did in her evidence thread that, in my opinion, muddled Nonlinear's defense:

  1. Lots of motte-and-bailey/strawmanning their critics, like claiming to refute an allegation but then providing evidence that they didn't do some other, more egregious thing, or saying that the only way something bad could have come out of their actions was that they were "secretly ev
... (read more)

The most obviously questionable choice was, of course, their questionable usage of quotation marks, which is still only mentioned in the appendix. This introduced substantial confusion as to whether their responses to ostensible quotations in fact addressed the claims made in the original post, and this was exacerbated by their extensive editorialisation. I am not interested in legislating claims, but do notice their determination to muddy the waters, and find it very indicative that they believed this was their best path forward.

Another couple things that... (read more)

Thank you for explaining all of this.

If it is ok, can you please clarify what you said here because I am not sure if I properly understand it: "in our direct messages about this post prior to publication, provided a snippet of a private conversation about the ACX meetup board decision where you took a maximally broad interpretation of while I had limited ways of verifying, pressured me to add it as context to this post in a way that would have led to a substantially false statement on my part, then admitted greater confusion to a board member while saying nothing to me about the same, after which I reconfirmed with the same board member that the wording I chose was accurate to his perception."

After checking with Oliver, my impression is that to properly detail that section would require sharing non-public information we would need another party's permission to share. Given that and the degree to which litigating it could derail things, I'll simply say that the core takeaway should be that I shared part of the post with Oliver prior to publication and we had a confusing and somewhat frustrating conversation about it, and that the bulk of the post contained arguments we had already been litigating in the public sphere.

(I do also want to clarify that Ben and I were, somewhat ironically, not shared on this post in advance, and I would have left comments with evidence falsifying at least one or two of the claims.)

I agree that you probably should have had the chance to review the post (noting that what TracingWoodgrain has said makes me a little less certain of this, though I still believe it).

I also think that most people, myself included, would be happy to read your comments now if you want to share them.

I'll just quickly say that my experience of this saga was more like this: 

Before BP post: NL are a sort of atypical, low structure EA group, doing entrepreneurial and coordination focused work that I think is probably positive impact.
After BP post: NL are actually pretty exploitative and probably net negative overall. I'll wait to hear their response, but I doubt it will change my mind very much.
After NL post: NL are probably not exploitative. They made some big mistakes (and had bad luck) with some risks they took in hiring and working unconventional... (read more)

Thank you for putting so much effort into helping with this community issue. 

What do you think community members should do in situations similar to what Ben and Oliver believed themselves to be in: where a community member believes that some group is causing a lot of harm to the community, and it is important to raise awareness?

Should they do a similar investigation, but better or more fairly? Should they hire a professional? Should we elect a group (e.g., the CEA community health team or similar) to do these sorts of investigations?

All of those are reasonable options. The money Ben paid to sources would go a long way towards hiring a professional—it's almost as much as I make in my (part-time) journalism-adjacent work in a year.

Like I say, I'm not averse to citizen journalism and it would be incredibly hypocritical of me if I was. There's a lot amateurs can do in this sort of thing, but I think it requires willingness to act as something other than prosecutor—or, if you see yourself only able to act as prosecutor, to provide your evidence to a neutral third party who can get the oth... (read more)

Here is what I eventually extracted and will share, just in case it's useful. 

**★★★ (RP DG) By what year will at least 15% of patents granted in the US be for designs generated primarily via AI? Reasons for inclusion: both an early sign that AI might be able to design dangerous technology and an indicator that AIs will be economically useful to deploy across diverse industries. Question resolves according to the best estimate by the [resolution council].

**★★★ (UF RP) How long will the gap be between the first creation of an AI which could automate 65%... (read more)

Thanks. A sympathetic disagree from me. I think that knowing what specific people think is signal, not noise. Who thinks what is often some of the most important information to communicate.

If you know I generally agree with you and support you, you are much more likely to talk to me or to collaborate on projects etc. The converse is true if you know I disagree with you about many things. You don't get that information from my votes.

Fair, but to be clear, this occurred, and usually occurs, before I had any upvotes.

6
Jason
4mo
That's weird. Someone could be downvoting in expectation of upvotes, but I agree that datum makes "someone just doesn't like these comments" much more likely. 

I am not sure who, if anyone, will read this, but please stop downvoting my supportive posts, mystery person(s), or at least explain your logic for doing so.

As I understand it, commenting on things to express enthusiasm/offer support is a good norm. This is a forum, and lots of people put effort into posting things and want engagement and support from their posts (amongst other things).

I certainly do. Feel free to comment on my posts saying anything supportive at any time, no matter how trite. I will always prefer it over you saying nothing at all, ... (read more)

I'd add that this kind of supportive behaviour was encouraged by the forum team at least over some period of time.

9
Pat Myron
4mo
Agree-upvoting and karma-downvoting similar comments seems like a sensible use of the split-voting system. They're not wildly insightful and don't add much to discussions beyond the commenter's own votes (https://forum.effectivealtruism.org/posts/oZff425xLnikfxeGD/pat-myron-s-shortform?commentId=HsadBx85nAggb8Q5T). Generic engagement/support can be channeled through voting/reactions rather than comments.
Jason
4mo

I didn't downvote your comment, and don't downvote similar comments. But I can understand why someone might.

One theory of voting is that one should vote in the direction of how much karma a post "should" have. And, in light of the purposes of the karma system, 28 karma for a sixteen-word comment expressing that this was well done, a sense of excitement, and well wishes is probably more than the comment "should" have.

So unless the mystery downvoter chooses to explain their rationale, I would suggest reading their downvote as an attempt to correct the perceived overkarma-ing of the comment. It's possible they simply dislike the comment, but there's not enough evidence to conclude that.

I want to share the following, while expecting that it will probably be unpopular. 

I feel many people are not being charitable enough to Nonlinear here. 

I have only heard good things about Nonlinear, outside these accusations. I know several people who have interacted with them - mainly with Kat - and had good experiences. I know several people who deeply admire her. I have interacted with Kat occasionally, and she was helpful. I have only read good things about Emerson. 

As far as I can tell from this and everything I know/have read, it seem... (read more)

Before Ben's post, I had heard some good things and many bad things about Nonlinear, to the point that I was trying to figure out who their board members were in case I needed to raise concerns about one or both of the co-founders (I failed to figure it out because they weren't a registered charity and didn't have their board members listed on their website either).

bruce
4mo

I think it is entirely possible that people are being unkind because they updated too quickly on claims from Ben's post that are now being disputed, and I'm grateful that you've written this (ditto chinscratch's comment) as a reminder to be empathetic. That being said, there are also some reasons people might be less charitable than you are for reasons that are unrelated to them being unkind, or the facts that are in contention:
 

I have only heard good things about Nonlinear, outside these accusations

Right now, on the basis of what could turn out to ha

... (read more)

Well done! I am excited to see this, and I wish you the best of luck.

Thanks for this. If easy, can you please curate your suggested questions in a spreadsheet so that I can filter them by priority and type? If you do this, I will share with at least two academics and labs who might do some of the research desired. I may do so anyway, but at the moment, it probably won't be something that they will find time to read unless I can refer them to the parts that are most immediately relevant.

6
PeterSlattery
4mo
Here is what I eventually extracted and will share, just in case it's useful.

**★★★ (RP DG) By what year will at least 15% of patents granted in the US be for designs generated primarily via AI? Reasons for inclusion: both an early sign that AI might be able to design dangerous technology and an indicator that AIs will be economically useful to deploy across diverse industries. Question resolves according to the best estimate by the [resolution council].

**★★★ (UF RP) How long will the gap be between the first creation of an AI which could automate 65% of current labour and the availability of an equivalently capable model as a free open-source program?

**★★★ (RP) Meta-capabilities question: by 2029 will there be a better way to assess the capabilities of models than testing their performance on question-and-answer benchmarks?

**★★★ (RP UF) How much money will the Chinese government cumulatively spend on training AI models between 2024 and 2040, as estimated by the [resolution council]?

**★★★ (UF, FE, RP) Consider the first AI model able to individually perform any cognitive labour that a human can. Then, how likely is the chance of a deliberately engineered pandemic which kills >20% of the world's population in the 50 years after the first such model is built?

**★★★ (UF, FE, RP) How does the probability of the previous question change if models are widely available to citizens and private businesses, compared to if only government and specified trusted private organizations are allowed to use them?

**★★★ (FE, RP) What is the total number of EAs in technical AI alignment? Across academia, industry, independent research organizations, ¿government?, etc. See The academic contribution to AI safety seems large for an estimate from 2020.

**★★★ (FE, RP) What is the total number of non-EAs in technical AI alignment? Across academia, industry, independent research organizations, ¿government?, etc.

**★★★ (RP) How likely is it that an AI could get nanomachines built
4
NunoSempere
4mo
I have extracted top questions to here: https://github.com/NunoSempere/clarivoyance/blob/master/list/top-questions.md with the Linux command at the top of the page. Hope this is helpful enough.

I'll also just quickly say that I am still somewhat conflicted about how to interpret the threat of legal action made by NL. On one hand, that seems extreme and a very bad signal for an EA organisation. 

On the other hand, as we see here, someone publishing a lot of (in your view) false information about your organisation is extremely harmful and time-consuming to those who are invested in that organisation. It does irreparable damage to reputations and trust.

So this does seem like an exceptional circumstance where you might consider exceptional action... (read more)

One lesson I see in this saga is that we, as a community, and hopefully as a society, should be more aware of the fact that accusations are sometimes false, and be a little slower to pass judgement or react to them.

I think that EAs are particularly vulnerable to a sort of 'moral hazard' of being especially receptive to perceived victims; many of us are empathetic people who feel strong moral obligations to help others. In this case, I can imagine Ben feeling a strong need or even obligation to do something and acting accordingly. If so, what he did was actually very admirable, even if it turns out to have been misguided in hindsight.

[Third edit to add my current position on 22/12/23]

I said below that I would read the arguments from both sides and then make a final decision. I haven't done that because I didn't have time, and it didn't feel like a high-value use of it, especially in light of later posts and comments by people who are better qualified. I feel that it is still better (or at least closer to keeping my prior commitment) to state my current position for future readers than to say nothing further. With that in mind, this (copied from elsewhere) is where I ended up:

Before BP post: N... (read more)

Thank you for this, Ren! I really appreciate it.

One tip I would suggest is to consider trying to find template(s) to follow for any new research project. That is, try to find one or more papers which address the same topic and/or use the same method as your paper, before you start on the paper. The best case is if the templates are published in the journal/outlet you are targeting. 

When you have templates, use them to guide and simplify the production of your paper. For instance, this might be by closely following the method, structure, style of diagr... (read more)

Is this true, GWWC? I didn't realise that sacrificed income counted.

4
Luke Freeman
5mo
Not exactly, depending on what someone means by "sacrificed income". See my comment clarifying this. Essentially "salary sacrifice" (a form of payroll giving where you take home less pay for some kind of benefit including a donation to a charity; or equivalent arrangements) is different to "choosing a lower paying job for impact reasons". The key here is it's voluntary, revocable, has a specific monetary value, and the donation is very specifically one that would count towards a pledge.

Thank you for sharing. This felt very personally relevant. I also haven't taken the pledge. I very nearly did in 2016. I go back and forth on whether and when I should take it. There are so many considerations at play.

How much am I willing/able to sacrifice my wealth and financial security alongside sacrifices made for my direct work? Would/will I have as much direct impact without my savings/passive income potential? I'd be more risk-averse and less likely to address 'funding market failures'. Could I give more and give better later on than now? How ... (read more)

The vast majority of academic contributions get read by approximately no one

Academia has benefits, but reaching a larger audience is not one of them, as far as I can tell (there are of course exceptions and some publications are much better suited for being published in a journal than a blogpost, but by and large academia does not have a good way of actually driving readership).

I agree that most academic research is a bad ROI but I find that a lot of this sort of 'nobody reads research' commentary is equating reads with citations which seems complet... (read more)

3
PeterSlattery
5mo
Just saw this from @soroushjp - it seems very relevant.

I agree that most academic research is a bad ROI, but I find that a lot of this sort of 'nobody reads research' commentary is equating reads with citations, which seems completely wrong. By that metric most forum posts would also not be read by anyone.

I agree. For one, the studies I've seen saying that the median publication is not cited are including conference papers, so if one is talking about the peer-reviewed literature, citations are significantly greater. I've estimated the average number of citations per paper is around 30 for the peer-reviewed litera... (read more)

5
EatLiver
6mo
The academic publishing system is deeply flawed and we need to push for change from an EA standpoint. As of now, we are losing thousands upon thousands of top-grade, peer-reviewed research articles that are stuck in obscure journals or self-published and read by nobody. This is due to the high cost of translation, publication, and open access fees, which are particularly prohibitive for researchers in the third world. We simply can't afford to publish in high-impact journals unless we do self-funding or beg for foreign funds. I believe the EA community has compassion for the injustice in the world, and I believe that we can be the push for that societal change. We need to prioritize knowledge and information above the corporate greed of publishers. 

Thank you for this. I found it very helpful; for instance, it gave me some insight into which audiences leaders in the AI safety and governance communities currently perceive as most valuable to engage.

As I mentioned in my series of posts about AI safety movement building, I would like to see a larger and more detailed version of this survey.

Without going into too much detail, I basically want more uncertainty reducing and behavior prompting information. Information that I think will help to coordinate the broader AI safety community ... (read more)

Yeah, I think that might be one reason it isn't done. I personally think that it is probably somewhat important for the community to understand itself better (e.g., the relative progress and growth in different interests/programs/geographies), especially for people in the community who are community builders, recruiters, founders, etc. I also recognise that it might not be seen as a priority for various reasons, or as risky for other reasons, and I haven't thought a lot about it.

Regardless, if people who have data about the community that they don't want to... (read more)

My uncertainties were mainly related to questions like how FTX had affected the trajectory of the community, size of pledge programs, and growth of AI relative to other areas of EA. But also around broader community understanding, like which programs are bigger, growing faster, better to recommend people to etc.

Thanks, I didn't know about the dashboard or had forgotten about it. Very helpful.

Short of doing something like this again, a simple annual post that reminds people this dashboard exists and summarises what it has/shows could get a lot of the value of the bigger analysis with a lot less effort. I imagine that a lot of people don't know about the dashboard, and a lot of new people won't know next year.

Thank you for this Angelina! It is extremely informative and has given me many useful updates about the size and trajectory of EA and its programs. It has also resolved some uncertainties which helps with my motivation. I expect that many readers will have a similar experience.

I would like to see more of this sort of monitoring in the future. Do you plan to do a similar analysis next year?

3
Angelina Li
6mo
Thanks for the feedback!  Curious if you are willing to share, what were the uncertainties? Thanks! I was initially thinking of this project as a one-off piece for a specific timely event. It's possible we'll conduct another analysis next year, but I think that will depend a lot on my capacity and priorities at the time. But FWIW, some of this data is public and ~live if you ever need to see it, e.g. this dashboard on CEA program metrics is updated every month.

Thank you for all of this work. I really appreciate it. 

The process of writing this post has only strengthened my conviction about an issue I’ve previously noted: I believe the community should assign responsibility to, and funding for, one or more people or organizations to conduct and disseminate this sort of high-level analysis of community growth metrics. I honestly find it baffling that measuring the growth of EA and reporting findings back to the community isn’t someone’s explicit job.

I completely agree with this and have made several ... (read more)

I think that one reason this isn’t done is that the people who have the best access to such metrics might not think it’s actually that important to disseminate them to the broader EA community, rather than just sharing them as necessary with the people for whom these facts are most obviously action-relevant.

9
Peter Wildeford
6mo
Seems like a good fit for Rethink Priorities, but we’re very funding constrained
3
James Herbert
6mo
Yes, this would be great, and it's something we're working on improving in the Netherlands, but it's hard to find the capacity. 

Thanks for writing this!

Lessons from the social and behavioral sciences can and should be adapted to promote proactive biorisk management. For example, literature on social norms, persuasion, attitude change, and habit formation could be used to design and test behavior-change interventions. The bar is low; researchers have not rigorously tested interventions to change life scientists' proactive BRM practices. Funders should support social scientists and biorisk experts to partner with life scientists on programs of applied research that start with intervi

... (read more)

(Commenting here in addition to your post)

Thanks for this! I appreciate the write-up. Just wanted to quickly share that tried the EEM but eventually moved to a 'Must, Should, Could' system like here. I use this on Google Tasks and other task management systems. Depending on the system I use a number or title to indicate the class of task. So far it has worked well for me. Of course different things will work for different people!

Thanks for this! I appreciate the write-up. Just wanted to quickly share that I tried the EEM but eventually moved to a 'Must, Should, Could' system like here. I use this on Google Tasks and other task management systems. Depending on the system, I use a number or title to indicate the class of task. So far it has worked well for me. Of course, different things will work for different people!

1
Harry Luk
7mo
Thank you so much for the comment. The referenced article is a great write-up. I will have to dive deeper when I get the chance to see what I can implement and how. Always looking to improve, so thank you for the feedback :)

Thanks for this! I would like someone to be funded to regularly report on the funding landscape. Ideally, I'd like periodic reports providing simple graphics like those here. Data could be easily visualised and updated for free on Google Data Studio.

Meta point:
I think EA has a big blind spot about how information quality mediates the behavioural outcomes we want. As an example, we presumably want more people to set up EA organisations and apply to EA jobs etc. These people will care about the funding landscape when making decisions between career option... (read more)

4
redbermejo
7mo
@Vilhelm Skoglund Would you be able to share how much time it took to put together this report at this level of quality? Curious as to its "costs" should it be a regularly updated public good.

Thank you Peter!

I agree, some kind of regular report would be useful. And definitely think they should include more graphics (erred on the side of getting this out there).

On your meta point, I would be curious to hear if you know of communities or similar that have better information quality, more effectively mediating sought-after behavioral outcomes? My feeling is that this indeed is very important, but is rarely invested in and/or done very well. It would be very interesting to run some kind of survey mapping what (easy) system-level improvements/public goods the community would be most excited about (e.g. regular funding updates).

Thanks for writing this Catherine! I am glad that things are going so well now. You are finally getting the recognition and responsibilities that you have long deserved. 
I had worked really hard, but I felt “EA” wasn't “supporting me”. I think this was a really unfair characterisation, but it is what it felt like at the time. Then I worried I just wasn’t very useful or worthy... In retrospect it is crazy that I updated so much on only four rejections! Many people go for tons of roles before being successful or changing their plans. I'm a bit ann

... (read more)

A quick and impulsive comment.

I am very sorry that this happened to you. I had a somewhat similar experience of disillusionment and depression a few years ago. I eventually realised that it was because my life was deeply imbalanced at that time: I focused on and valued work too much, and I didn't prioritise my wellbeing and happiness sufficiently.

Five years later, I feel happier and more productive than I have ever been. I now feel that I needed my burnout to see the error of my ways and develop a better mentality and lifestyle (although I wish it were no... (read more)

Really glad to see this is happening! In terms of questions, it might be easiest if you share a draft of the planned questions so that people can see what is already in there and what seems in scope to include. My desire to share my ideas for specific questions is hindered by the concern that it will be a wasted effort.

One option is to share a spreadsheet of potential questions with a relevant audience/here and ask people to indicate the X (e.g., 5) questions that are of most and least interest, and/or suggest changes or new questions.

Some general thoughts... (read more)

David_Moss (8mo):
Thanks! Makes sense. We're trying to elicit another round of suggestions here first (since people may well have new requests since the original announcement).

Revisiting this to share with someone and wanted to mention that I probably wouldn't use Mailer Lite to create a newsletter now. Instead, I would probably use Substack.

I really appreciate that you took the time to provide such a detailed response to these arguments. I want to say this pretty often when on the forum, and maybe I should do it more often! 

Great work, I am really excited to see this! Wanted to add that my personal experience (particularly at Ready Research when we briefly focused on trying to provide research and publication experience) has given me the impression that there is a massive demand for research training but insufficient resources to meet it. In a sense, doing and reading research is at the heart of nearly all EA activities and almost universally useful.

KarolinaSarek (10mo):
I agree (of course ;) ), and that’s what we've noticed as well. In particular, there are some crucial research skills that are not being taught elsewhere but are commonly used in EA/when one aims to have a significant impact. For example, prioritization research, calculations of cost-effectiveness at different levels of depth, issues of moral weights, etc. We aim to address this gap as well as provide training in generalizable research skills, for example literature reviews. If you know people who are interested in such a training program, feel free to send them information about it. We would love to see applications from them.

Thanks for writing this - it was useful to read the pushbacks! 

As I said below, I want more synthesis of these sorts of arguments. I know that some academic groups are preparing literature reviews of the key arguments for and against AGI risk.

I really think that we should be doing that for ourselves as a community and to make sure that we are able to present busy smart people with more compelling content than a range of arguments spread across many different forum posts. 

I don't think that that is going to cut it for many people in the policy space.

Greg_Colbourn (1y):
Agree. But at the same time, we need to do this fast! The typical academic paper review cycle is far too slow for this. We probably need groups like SAGE (and Independent SAGE?) to step in. In fact, I'll try and get hold of them (they are for "emergencies" in general, not just Covid[1]).

[1] Although it looks like they are highly specialised on viral threats. They would need totally new teams to be formed for AI. Maybe Hinton should chair?

Thanks for writing this. I appreciate the effort and sentiment. My quick and unpolished thoughts are below. I wrote this very quickly, so feel free to critique.

The TLDR is that I think that this is good with some caveats but also that we need more work on our ecosystem to be able to do outreach (and everything else) better.

I think we need a better AI Safety movement to properly do and benefit from outreach work. Otherwise, this and similar posts for outreach/action are somewhat like a call to arms without the strategy, weapons and logistics structure neede... (read more)

Geoffrey Miller (1y):
Peter -- good post; these all seem reasonable as comments.

However, let me offer a counter-point, based on my pretty active engagement on Twitter about AI X-risk over the last few weeks: it's often very hard to predict which public outreach strategies, messages, memes, and points will resonate with the public, until we try them out. I've often been very surprised about which ideas really get traction, and which don't. I've been surprised that meme accounts such as @AISafetyMemes have been pretty influential. I've also been amazed at how (unwittingly) effective Yann LeCun's recklessly anti-safety tweets have been at making people wary of the AI industry and its hubris.

This unpredictability of public responses might seriously limit the benefits of carefully planned, centrally organized activism about AI risk. It might be best just to encourage everybody who's interested to try out some public arguments, get feedback, pay attention to what works, identify common misunderstandings and pain points, share tactics with like-minded others, and iterate.

Also, lack of formal central organization limits many of the reputational risks of social media activism. If I say something embarrassing or stupid as my Twitter persona @primalpoly, that's just a reflection on that persona (and to some extent, me), not on any formal organization. Whereas if I was the grand high vice-invigilator (or whatever) in some AI safety group, my bad tweets could tarnish the whole safety group.

My hunch is that a fast, agile, grassroots, decentralized campaign of raising AI X-risk awareness could be much more effective than the kind of carefully-constructed, clearly-missioned, reputationally-paranoid organizations that EAs have traditionally favored.

These all seem like good suggestions, if we still had years. But what if we really do only have months (to get a global AGI moratorium in place)? In some sense the "fog of war" may already be upon us (there are already too many further things for me to read and synthesise, and analysis paralysis seems like a great path toward death). How did action on Covid unfold? Did all these kinds of things happen first before we got to lockdowns? 

vegan activists

This is quite different. It's about personal survival of each and every person on Earth, and their fami... (read more)