Davidmanheim

Head of Research and Policy @ ALTER - Association for Long Term Existence and Resilience
6883 karma · Joined · Working (6-15 years)

Participation (4)

  • Received career coaching from 80,000 Hours
  • Attended more than three meetings with a local EA group
  • Completed the AGI Safety Fundamentals Virtual Program
  • Completed the In-Depth EA Virtual Program

Sequences (2)

Deconfusion and Disentangling EA
Policy and International Relations Primer

Comments (799)

Confirmed; he does work in this area, there's independent reporting about his work on these topics, and he has a Substack about his very relevant legal work: https://www.nlrbedge.com/

Do you have any comment on the idea that nondisparagement clauses like this could be found invalid for being contrary to public policy? (How would that be established?)

I think there are useful analogies between specific aspects of bio, cyber, and AI risks, and it's certainly the case that when the biorisk is based on information security, it's very similar to cybersecurity, not least in that it requires cybersecurity! And the same is true for AI risk; to the extent that there is a risk of model weights leaking, this is in part a cybersecurity issue.

So yes, I certainly agree that many of the dissimilarities with AI are not present if analogizing to cyber. However, more generally, I'm not sure cybersecurity is a good analogy for biorisk, and have heard that computer security people often dislike the comparison of computer viruses and biological viruses for that reason, though they certainly share some features.

Gutting the FTT token means customers losing money because of their investments, not customer losses via FTX's loss of custodial funds or tokens, though, doesn't it?

An Alameda exile told Time that SBF "didn't have a distinction between firm capital and trading capital. It was all one pool." That's at least a badge of fraud (commingling).

Alameda was a prop trading firm, so there isn't normally any distinction between those. The only reason this didn't apply was that there was a third bucket of funds, pass-through custodial funds that belonged to FTX customers, which they evidently didn't pass through due to poor record keeping. That's not so much indicative of fraud as of incompetence.

Yes, I see a strong argument for the claim that the companies are in the best position to shoulder the harms that will inevitably come along, and pass that risk onto their customers through higher prices - but the other critical part is that this also changes incentives because liability insurers will demand the firms mitigate the risks. (And this is approaching the GCR argument, from a different side.)

I think that the use of insurance for moderate harms is often a commercial boondoggle for insurers, a la health insurance, which breaks incentives in many ways and leads to cost disease. And typical insurance regimes shift the burden of proof about injury in damaging ways, because insurers have deep pockets to deny claims in court and fight cases that establish precedents. I also don't think that it matters for tail risks: unless explicitly mandated to carry unlimited coverage, firms will have caps in the millions of dollars, and will ignore tail risks that would bankrupt them.

One way to address the tail, in place of strict liability, would be legislation allowing anticipated harms to be stopped via legal action, since, to my understanding, pursuing this type of prior restraint for uncertain harms isn't possible in most domains.

I'd be interested in your thoughts on these points, as well as Cecil and Marie's.

I would be interested in understanding whether you think that joint-and-several liability among model trainers, model developers, application developers, and users would address many of the criticisms you point out against civil liability. As I said last year, "joint-and-several liability for developers, application providers, and users for misuse, copyright violation, and illegal discrimination would be a useful initial band-aid; among other things, this provides motive for companies to help craft regulation to provide clear rules about what is needed to ensure on each party's behalf that they will not be financially liable for a given use, or misuse."

I also think that this helps mitigate the issue with fault-based liability in proving culpability, but I'm agnostic about which liability regime is justified.

Lastly, I think that your arguments give good reason to develop a clear proposal for some new liability standard, perhaps including requirements for uncapped liability insurance for some specific portion of eventual damages, rather than assuming that the dichotomy of strict versus fault-based liability is immutable.

If you find anyone who quotes that as an excuse where a modern Halachic authority would rule that they don't have too much money for it to apply to them, I'll agree they are just fine giving only 20%. (On the other hand, my personal conclusion is less generous.) But DINKs or single people making $100k+ each, who comprise most of the earning-to-give crowd, certainly don't have the same excuse!

It was actually quoting the first bit: "The amount of charity one should give is that if you can afford to, give as much as is needed. Under ordinary circumstances, a fifth of one's property is most laudable. To give one-tenth is normal. To give less than one-tenth is stingy."
