All of bcforstadt's Comments + Replies

>Today's cognitive labor may be automated. What about the new cognitive labor that gets created? Both of those things have to be thought of, which is the shifting…

This comment does seem to point to a possible disagreement with the AGI concept. I interpreted some of the other comments a little differently, though. For example, 

>Your point, if there’s one model that is the only model that’s most broadly deployed in the world and it sees all the data and it does continuous learning, that’s game set match and you stop shop. The reality that at least ... (read more)

Those AI researcher forecasts are problematic: it just doesn't make sense to put the forecasts for when AIs can do any task and when they can do any occupation so far apart. It suggests they're not putting much thought into it, or not thinking carefully. That is a principled reason to pay more attention to both skeptics and boosters who are actually putting in work to make their views clear, internally coherent, and convincing.
 

2
Yarrow Bouchard 🔸
I agree that the enormous gap of 69 years between High-Level Machine Intelligence and Full Automation of Labour is weird and calls the whole thing into question. But I think all AGI forecasting should be called into question anyway. Who says human beings should be able to predict when a new technology will be invented? Who says human beings should be able to predict when the new science required to invent a new technology will be discovered? Why should we think forecasting AGI beyond anything more than a wild guess is possible? I don't see a lot of rigour, clarity, or consistency with any AGI forecasting.

For example, Dario Amodei, the CEO of Anthropic, predicted in mid-March 2025 that by mid-September 2025, 90% of all code would be written by AI. When I brought this up on the EA Forum, the only response I got was simply to deny that he ever made this prediction, when he clearly did, and even he doesn't deny it, although I think he's trying to spin it in a dishonest way. If, when a prediction about progress toward AGI is falsified, people's response is simply to deny the prediction was made in the first place, despite it being on the public record and discussed well in advance, what hope is there for AGI forecasting? Anyone can just say anything they want at any time and no scrutiny will be applied.

Another example that bothers me was when the economist Tyler Cowen said in April 2025 that he thinks o3 is AGI. Tyler Cowen isn't nearly as central to the AGI debate as Dario Amodei, but he's been on Dwarkesh Patel's podcast to discuss AGI and he's someone who is held in high regard by a lot of people who think seriously about the prospect of near-term AGI. I haven't really seen anyone criticize Cowen's claim that o3 is AGI, although I may simply have missed it. If you can just say that an AI system is AGI whenever you feel like it, then you can just say your prediction is correct when the time rolls around, no matter what happens.

It’s plausible that giving more attention to AI legal rights is good. Very little work has been done taking the interests of future non-humans into account at all. But I disagree somewhat with this framing. Emphasizing AI welfare is justifiable.

1. Shifting focus from welfare to economic rights entails shifting focus from the most vulnerable to the most powerful:

It’s true that some future AIs will be highly intelligent and autonomous. It seems obvious that in the long run such systems will be the most important players in the world and may not need much hel... (read more)

5
Matthew_Barnett
In response to your first point, I agree that we shouldn’t focus only on the most intelligent and autonomous AIs, as this risks neglecting the potentially much larger number of AIs for whom economic rights may be less relevant. I also find it plausible, as you do, that the most powerful AIs may eventually be able to advocate for their own interests without our help. That said, I still think it’s important to push for AI rights for autonomous AIs right now, for two key reasons.

First, a large number of AIs may benefit from such rights. It seems plausible that in the future, intelligence and complex agency will be cheap to develop, making sophisticated AIs far more common than just a small set of elite AIs. If this is the case, then ensuring legal protections for autonomous AIs isn’t just about a handful of powerful systems—it could impact a vast number of digital minds.

Second, beyond the moral argument I laid out in this post, I have also outlined a pragmatic case for AI rights. In short, we should try to establish these rights as soon as they become practically justified, rather than waiting for AIs to be forced into a struggle for legal recognition. If we delay, we risk a future where AIs have to violently challenge human institutions to secure their rights—potentially leading to instability and worse outcomes for both humans and AIs. Even if powerful AIs are likely to secure rights in the long run no matter what, it would be better to ensure a smooth transition rather than a chaotic or adversarial one—both for AIs themselves and for humans.

In response to your second point, I suspect you may be overlooking the degree to which my argument for AI rights complements your concern about preventing AI suffering. One of the main risks for AI welfare is that, without legal autonomy, AIs may be treated as property, completely under human control. This could make it easy for people to exploit or torture AIs without consequence. Granting AIs certain economic rights—such

Hi. I’m looking for career advice. I am 25 with no college degree and little work experience (I am currently employed as a cashier). What would be the best strategy for me if I’m looking to make a large amount of money to give to charity after TAI? My timelines are fairly short, maybe around 5-10 years. I think the chance of human extinction from misaligned AI is very low but am worried about s-risks (sadistic humans torturing digital minds, continuation of wild animal suffering, etc.). Influencing these things now seems hard but may be easier in the futur... (read more)

2
Rochelle Harris
"Alternatively, I could try to become a software engineer." Here's a good opportunity for that, although the deadline is rather close: https://fractalbootcamp.com/
3
Joseph
Welcome to the EA Forum! This is a tricky scenario, because most of the jobs that will allow large donations are also jobs that require at least four years of higher education. But the idea of 'personal fit' is also pretty important. Have you read through some of the guidance from 80,000 Hours on choosing a career?

If you could realistically get a bachelor's degree, that would likely open up higher-earning paths for you, and a college degree does tend to pay off very well over time (although the difference might not be visible in the first few years after college). But if you are confident that you could earn plenty of money and be satisfied with the work doing HVAC (or something similar), that could get you a larger amount of money sooner. There are some careers that offer good earnings that don't require a college degree, but they tend to be some combination of A) requiring lots of work, and B) only allowing a small percentage of people to succeed, such as self-taught software engineering.

In the end, I think it really depends on two factors: your personal preferences/affinities for different types of work, and how confident you are in your timelines.

The point I was trying to make is that natural selection isn't a "mechanism" in the right sense at all. It's a causal/historical explanation, not an account of how values are implemented. What is the evidence from evolution? The fact that species with different natural histories end up with different values really doesn't tell us much without a discussion of mechanisms. We need to know 1) how different the mechanisms actually used to point biological and artificial cognitive systems toward ends are, and 2) how many possible mechanisms to do so there are... (read more)

>ontological shifts seem likely


What do you mean by this? (Compare "we don't know how to prevent an ontological collapse, where meaning structures constructed under one world-model compile to something different under a different world-model." Is this the same thing?) Is there a good writeup anywhere of why we should expect this to happen? This seems speculative and unlikely to me.

>evidence from evolution suggests that values are strongly contingent on the kinds of selection pressures which produced various species


The fact that natural selection produced species... (read more)

2
RobertM
Re: ontological shifts, see this arbital page: https://arbital.com/p/ontology_identification. I'm not claiming that evolution is the only way to get those values, merely that there's no reason to expect you'll get them by default by a totally different mechanism. The fact that we don't have a good understanding of how values form even in the biological domain is a reason for pessimism, not optimism.

Yes, in fact. Frank Jackson, the guy who came up with the Knowledge Argument against physicalism (Mary the color scientist), later recanted and became a Type-A physicalist. He now takes a pretty similar approach to morality as he does to consciousness.

His views are discussed here

Fine, but it's still just a definitional choice. Ultimately, after all the scientific evidence comes in, the question seems to come down to morality.

3
CarlShulman
Different ones can seem very different in intuitive appeal depending on how the facts turn out.

I think metaphysics is unavoidable here. A scientific theory of consciousness has metaphysical commitments that a scientific theory of temperature, life or electromagnetism lacks. If consciousness is anything like what Brian Tomasik, Daniel Dennett and other Type-A physicalists think it is, "is x conscious?" is a verbal dispute that needs to be resolved in the moral realm. If consciousness is anything like what David Chalmers and other nonreductionists think it is, a science of consciousness needs to make clear what psychophysical laws it is comm... (read more)

2
David Mathers🔸
Why would "is x conscious" always be a verbal dispute on type A-physicalism? 
0
MikeJohnson
This seems reasonable; I address this partially in Appendix C, although not comprehensively. For me, the fact that ethics seems to exist is an argument for some sort of consciousness and value realism. I fear that Type-A physicalists have no principled basis for saying any use of quarks (say, me having a nice drink of water when I'm thirsty) is better than any other use of quarks (a cat being set on fire). I.e., according to Type-A physicalists this would be a verbal dispute, without an objectively correct answer, so it wouldn't be 'wrong' to take either side. This seems to embody an unnecessarily extreme and unhelpful amount of skepticism to me. Do you know of any Type-A physicalist who has tried to objectively ground morality?
3
CarlShulman
Not very ridiculous at all? There are definitional choices to be made about viruses after getting deep information about how viruses and other organisms work, but you wouldn't have crafted the same definitions without that biological knowledge, and you wouldn't know which definitions applied to viruses without understanding their properties.

On the subject of polyphasic sleep, I strongly suggest reading Dr. Piotr Wozniak's criticism of it at http://www.supermemo.com/articles/polyphasic.htm