Most legislation is written broadly enough that it won't need to be repealed, because repeal is rare. Take the current definition of "frontier AI model," which is extremely prescriptive and uses a 10^26 threshold in some cases. To be future-proof, these definitions are usually written broadly enough that the executive can update the technical specifics as the technology advances. Executive regulations are the sorts of things that should include such details, not the legislation itself.

I can imagine a future where models are all over 10^26 and meet the other requirements of the model act's definition of frontier AI model. The reason to govern the frontier in the first place is that you don't know what's coming -- it's not as though we know dangerous capabilities emerge at 10^26, so there's no reason for this threshold to put models under regulatory scrutiny forever. We might also (eventually) achieve algorithmic efficiency breakthroughs such that the most capable (and therefore most dangerous) models no longer need as much compute, and so might not even qualify as frontier AI models under the Act anymore. So I see the risk of this bill first capturing a bunch of models it doesn't mean to cover, and then possibly not covering any models at all -- all because it's not written in a future-proof way. The bill reads more like a regulation written at the executive level than at the legislative level.
This contains several inaccuracies and misleading statements that I won't fully enumerate, but here are at least two:
Personal gripe: the model legislation is overly prescriptive in a way that does not future-proof the statute against the fast-moving nature of AI and the ways governance may need to shift and adapt.
I disagree-voted because I think withholding private info should be a strong norm, and it's not the poster's job to please the community with privileged info that could hurt them when they're already doing a service by posting. I also think comments like this could serve as an indicator of sorts (e.g., if people searched the forum for them, it might point to a trend of posters worrying about how much blowback they'd get from funders/other EA orgs if actual criticism of them or backdoor convos were revealed -- whether or not that worry is warranted). Leaking private convos also hurts the person involved, because from then on everyone who interacts with them will worry their convo might end up online and will not engage with them. It seems mean to ask someone to do that just so you can have more data to judge them on -- they're trying to communicate something real to you but obviously can't share everything. I don't have any reason to doubt the poster unless they've lied before, and I hold a strong norm of trusting unless there's a reason not to. But I double-liked because most of the rest of your comment was good :-)
Hadn't seen it mentioned anywhere yet that Luigi Mangione (US person who killed a health insurance executive) was interested in EA. "He suggested I schedule group video calls as he really wanted to meet my other founding members and start a community based on ideas like rationalism, Stoicism, and effective altruism."
I think it would be helpful to be able to see the number of applications to EA Global over time, compared to attendance.