Throwaway81

75 karma · Joined

Posts
1

Comments
22

I think it would be helpful to be able to see the number of applications to EA global over time compared to attendance. 

Most legislation is written broadly enough that it won't have to be repealed, because it's rare that legislation is repealed. For example, take their current definition of "frontier AI model," which is extremely prescriptive and uses 10^26 in some cases. To be future-proof, these definitions are usually written broadly enough that the executive can update the technical specifics as the technology advances. Regulations are the sorts of things that would include such details, not the legislation itself.

I can imagine a future where all models are over 10^26 and meet the other requirements of the model act's definition of frontier AI model. The reason to govern the frontier in the first place is that you don't know what's coming -- it's not as if we know dangerous capabilities emerge at 10^26, so there's no reason this threshold should put models under regulatory scrutiny forever. Also, we might eventually achieve algorithmic efficiency breakthroughs such that the most capable (and therefore dangerous) models no longer need as much compute, and so might not even qualify as frontier AI models under the Act anymore. So I see the risk of this bill first capturing a bunch of models it doesn't mean to cover, and then possibly not covering any models -- all because it's not written in a future-proof way. The bill reads more like a regulation written at the executive level than like legislation.

Sure. I'm not going to be able to respond any more in this thread, but the prescribed methods of governance themselves are not future-proof, since how AI needs to be governed may change as the tech or landscape changes, and the definitions are not future-proof either.

This contains several inaccuracies and misleading statements that I won't fully enumerate, but here are at least two:

  1. The Nucleic Acid Synthesis Act does not at all "require biolabs that receive federal funding to confirm the real identity of customers who are buying their synthetic DNA." It empowers NIST to create standards and best practices for screening.
  2. It's not the case that "The particular bills that we edited did not pass Congress, but this is because almost nothing passed out of the 118th Congress." Lots of bills passed in the CR and other packages. But it was a historically dysfunctional and slow year.

Personal gripe: the model legislation is overly prescriptive in a way that does not future-proof the statute enough to protect against the fast-moving nature of AI and how governance may need to shift and adapt.

I disagree voted because I think withholding private info should be a strong norm, and it's not the poster's job to please the community with privileged info that could hurt them when they are already doing a service by posting. I also think comments like this could serve as an indicator of some sort (e.g., if people searched the forum for them, it might point toward a trend of posters worrying about how much blowback they'd get from funders/other EA orgs if actual criticism of them/backdoor convos were revealed -- whether that worry is warranted or not). Leaking private convos also hurts a person, because every time someone interacts with them afterward, they'll wonder whether the convo might get leaked online, and so won't engage with them. It seems mean to ask someone to do that for you just so you can have more data to judge them on -- they are trying to communicate something real to you but obviously can't. I don't have any reason to doubt the poster unless they've lied before, and I hold a strong trust norm unless there's a reason not to trust. But I double liked because most of the rest of your comment was good :-)

Ahhh got it, thanks! Funny how most of the comments there are trying to rationalize his affiliation with EA as "not EA" lol.

Hadn't seen it mentioned anywhere yet that Luigi Mangione (US person who killed a health insurance executive) was interested in EA. "He suggested I schedule group video calls as he really wanted to meet my other founding members and start a community based on ideas like rationalism, Stoicism, and effective altruism."

https://www.nbcnews.com/news/rcna183996

Shrug -- I think it would be helpful to me, and like I said, the reader can take it or leave it. Them's the breaks. I think commenting from a throwaway account, providing the data, and letting the reader decide is better than not commenting and not providing data.