
Dylan Richardson

222 karma · Joined · Seeking work · Pursuing a graduate degree (e.g. Master's) · San Diego, CA, USA
substack.com/profile/46244575-dylan-richardson

Bio

Graduate student at Johns Hopkins. Looking for entry-level work; feel free to message me about any opportunities!

Participation: 2

Comments (74)

"Super dead" is a bit of an exaggeration, but there is less activity than there should be. The main issue is pretty apparent to me though: there isn't sufficient cross-posting to the forum. There's tons of good stuff on Substack, but also on various org websites and blogs.

I don't think other causes are nearly as important as this.

I may be missing important context, but I think you are mistaken here about the norms at hand in this case. I do applaud you for helping your friend out; that makes you a good friend. But opportunities for people to be altruistic are completely unbounded; I could find hundreds of similar asks for help in a five-minute Google search, most of which aren't distinctively "good opportunities". If this weren't a personal request, but instead a call for donations to a related cause you were making a case for, that would be fine. I think highlighting personal requests for help is permissible, even virtuous, interpersonal behavior between friends and family. People reach out on Facebook pages like this all the time. But it just looks like spam or emotional manipulation when posted on online forums dedicated to other purposes, among colleagues or strangers.
Hopefully this helps! This can definitely be a confusing discourse norm contextually.

For context: Clara is right; there is good experimental evidence that this occurs in online comment forums. This is on top of the simple mechanism that more highly upvoted content is more likely to be seen, for various reasons.

I'd assume this holds true for EA Forum content. I do the same thing @Toby Tremlett🔹 is describing to some extent, but I'd be surprised if my system 2 thinking outweighs my system 1 on net in this regard. I suspect I personally do this most with very low-karma posts, which I neglect to upvote because of a vague embarrassment over the possibility of promoting content with some flaw I missed.

60% disagree

Due to Value Lock-in, TAI poses a time constraint for farmed animal social progress.

I do not expect most issues to be resolved before this time, due to technological limitations, heightened barriers to social change relative to historic movements, and increasing developing world meat consumption. 

If we open this up to wild animals rather than just farmed, net-negative outcomes are much more assured.

I do tend to favor longer AGI/TAI timelines than many for roughly these reasons. But I don't think you are exactly right about the AI data-access trend. For one, whether or not I or Americans at large are "happy to give an ASI full autonomous power to gather such biomedical data", China will be.

I tentatively expect capabilities with real-world economic importance to emerge to some extent in the US as well, even if the most radical and transformative stuff requires further integration into the physical world for modeling. And at that point there may simply be an iterative process of greater and greater integration, as public perception improves and dependence increases. The complication here is moral backlash of some sort, which I note you've written about before. I agree that this is plausible; I simply wouldn't call it probable. Things look more bi-modal to me: most likely we get the outcome I've described above (mild harms could still be disregarded by China), or we get a longer slowdown before curing aging.


Semantic quibble: I think most people, myself included, simply define ASI as either encompassing those capabilities or being sufficiently capable of recursive self-improvement that it will possess those capabilities in short order.

If your point is primarily that the existing AI paradigm is inadequate, I would tend to agree. There's also a distinct question of what an intelligence explosion looks like; it may well be that tedious real-world experimentation is necessary for these sorts of biomedical advances, which takes time. That too is a compelling possibility, but I would expect it in a decade at most, and certainly quicker than human R&D can advance.

It might genuinely be the time to boycott ChatGPT and start campaigns targeting corporate partners. But this isn't yet obvious. Even if so, what would be the appropriate concrete and reasonable asks? I think there is a bit of an epistemic crisis emerging at the moment. If there's a case to be made, it needs to be made sooner rather than later. And then we need coordination.

I found this Peter Wildeford piece helpful. My rough understanding now is that the (implicit?) rejection of "lawful use", especially within classified contexts, was the contentious bit all along.

But I'm still uncertain about the extent to which these contracts can be renegotiated in the future, when capabilities evolve. I'm also unsure to what extent black-swan-type future capabilities could be "lawfully" used secretly, under classification. And presumably the nature of classified uses will be kept secret from OpenAI as well?

I am still confused about what exactly OpenAI is requiring here and how (or if) it diverges substantively from Anthropic's contract. Is this merely a symbolic victory for the DOW? Or is the language about "lawful use" allowing a back door somehow?
