TLDR: Bio data scientist here, concerned about AI risk, working to get his institution (DCRI, at Duke) working on AI and alignment.
Long Version: I wrote the blurb below and pasted it into https://bard.google.com/ to get a TLDR to us...
Can you create a TLDR for the following post: Hi, Sage Arbor here. I just joined effectivealtruism.org and have been listening to the 80K podcast for about a year. I work in data science (PhD biochem) and currently work with clinical trials at Duke. My main concern is AI in the next 10 years. I'd like my institution, the Duke Clinical Research Institute (DCRI), to create a center of excellence for AI and dedicate a percentage of work towards alignment. I'm starting to give talks to that end now. I thought this might be a good place to post alignment articles or work I have to get suggestions on improvement (as always, that work seems 6 months off when I find time).