All posts


Today, 5 May 2024

Quick takes

In my latest post I talked about whether unaligned AIs would produce more or less utilitarian value than aligned AIs. To be honest, I'm still quite confused about why many people seem to disagree with the view I expressed, and I'm interested in engaging more to get a better understanding of their perspective. At the least, I thought I'd write a bit more about my thoughts here, and clarify my own views on the matter, in case anyone is interested in trying to understand my perspective.

The core thesis I was trying to defend is the following view:

My view: It is likely that by default, unaligned AIs—AIs that humans are likely to actually build if we do not completely solve key technical alignment problems—will produce utilitarian value comparable to that produced by humans, both directly (by being conscious themselves) and indirectly (via their impacts on the world). This is because unaligned AIs will likely both be conscious in a morally relevant sense and share human moral concepts, since they will be trained on human data.

Some people seem to merely disagree with my view that unaligned AIs are likely to be conscious in a morally relevant sense. And a few others have a semantic disagreement with me, in that they define AI alignment in moral terms rather than as the ability to make an AI share the preferences of the AI's operator. But beyond these two objections, which I feel I understand fairly well, there's also significant disagreement about other questions. Based on my discussions, I've attempted to distill the following counterargument to my thesis, which I fully acknowledge does not capture everyone's views on this subject:

Perceived counter-argument: The vast majority of utilitarian value in the future will come from agents with explicitly utilitarian preferences, rather than those who incidentally achieve utilitarian objectives. At present, only a small proportion of humanity holds slightly utilitarian views. However, as unaligned AIs will differ from humans across numerous dimensions, it is plausible that they will possess negligible utilitarian impulses, in stark contrast to humanity's modest (but non-negligible) utilitarian tendencies. As a result, it is plausible that almost all value would be lost, from a utilitarian perspective, if AIs were unaligned with human preferences.

Again, I'm not sure if this summary accurately represents what people believe; however, it's what some seem to be saying. I personally think this argument is weak, but I feel I've had trouble making my views clear on this subject, so I thought I'd try one more time to explain where I'm coming from. Let me respond to the two main parts of the argument in some detail:

(i) "The vast majority of utilitarian value in the future will come from agents with explicitly utilitarian preferences, rather than those who incidentally achieve utilitarian objectives."

My response: I am skeptical of the notion that the bulk of future utilitarian value will originate from agents with explicitly utilitarian preferences. This clearly does not reflect our current world, where the primary sources of happiness and suffering are not the result of deliberate utilitarian planning. Moreover, I do not see compelling theoretical grounds to anticipate a major shift in this regard.

I think the intuition behind the argument is something like this: in the future, it will become possible to create "hedonium"—matter that is optimized to generate the maximum amount of utility or well-being.
If hedonium can be created, it would likely be vastly more important than anything else in the universe in terms of its capacity to generate positive utilitarian value. The key assumption is that hedonium would primarily be created by agents who have at least some explicit utilitarian goals, even if those goals are fairly weak. Given the astronomical value that hedonium could potentially generate, even a tiny fraction of the universe's resources being dedicated to hedonium production could outweigh all other sources of happiness and suffering. Therefore, if unaligned AIs would be less likely to produce hedonium than aligned AIs (due to not having explicitly utilitarian goals), this would be a major reason to prefer aligned AI, even if unaligned AIs would otherwise generate comparable levels of value in all other respects.

If this is indeed the intuition driving the argument, I think it falls short for a straightforward reason. The creation of matter optimized for happiness is more likely to be driven by the far more common motives of self-interest and concern for one's inner circle (friends, family, tribe, etc.) than by explicit utilitarian goals. If unaligned AIs are conscious, they would presumably have ample motives to optimize for positive states of consciousness, even if not for explicitly utilitarian reasons.

In other words, agents optimizing for their own happiness, or the happiness of those they care about, seem likely to be the primary force behind the creation of hedonium-like structures. They may not frame it in utilitarian terms, but they will still be striving to maximize happiness and well-being for themselves and others they care about. And it seems natural to assume that, with advanced technology, they would optimize quite hard for their own happiness and well-being, just as a utilitarian might optimize hard for happiness when creating hedonium.

In contrast to the number of agents optimizing for their own happiness, the number of agents explicitly motivated by utilitarian concerns is likely to be much smaller. Yet both forms of happiness will presumably be heavily optimized. So even if explicit utilitarians are more likely to pursue hedonium per se, their impact would likely be dwarfed by the efforts of the much larger group of agents driven by more personal motives for happiness-optimization. Since both groups would be optimizing for happiness, the fact that hedonium is similarly optimized for happiness doesn't seem to provide much reason to think it would outweigh the utilitarian value of more mundane, and far more common, forms of utility-optimization.

To be clear, I think it's totally possible that there's something about this argument that I'm missing, and there are a lot of potential objections I'm skipping over here. But on a basic level, I mostly just lack the intuition that the thing we should care about, from a utilitarian perspective, is the existence of explicit utilitarians in the future, for the aforementioned reasons. The fact that our current world isn't well described by the idea that what matters most is the number of explicit utilitarians strengthens my point here.

(ii) "At present, only a small proportion of humanity holds slightly utilitarian views. However, as unaligned AIs will differ from humans across numerous dimensions, it is plausible that they will possess negligible utilitarian impulses, in stark contrast to humanity's modest (but non-negligible) utilitarian tendencies."
My response: Since only a small portion of humanity is explicitly utilitarian, the argument's own logic suggests that there is significant potential for AIs to be even more utilitarian than humans, given the relatively low bar set by humanity's limited utilitarian impulses. While I agree we shouldn't assume AIs will be more utilitarian than humans without specific reasons to believe so, it seems entirely plausible that factors like selection pressures for altruism could lead to this outcome. Indeed, commercial AIs seem to be selected to be nice and helpful to users, which (at least superficially) seems "more utilitarian" than the default, primarily selfish, impulses of most humans. The fact that humans are only slightly utilitarian should mean that even small forces could cause AIs to exceed human levels of utilitarianism.

Moreover, as I've said previously, it's probable that unaligned AIs will possess morally relevant consciousness, at least in part due to the sophistication of their cognitive processes. They are also likely to absorb and reflect human moral concepts as a result of being trained on human-generated data. Crucially, I expect these traits to emerge even if the AIs do not share human preferences. To see where I'm coming from, consider how humans are routinely "misaligned" with each other, in the sense of not sharing each other's preferences, and yet still share moral concepts and a common culture. For example, an employee can share moral concepts with their employer while having very different consumption preferences from them. This picture is pretty much how I think we should primarily think about unaligned AIs that are trained on human data and shaped heavily by techniques like RLHF or DPO.

Given these considerations, I find it unlikely that unaligned AIs would completely lack any utilitarian impulses whatsoever. However, I do agree that even a small risk of this outcome is worth taking seriously. I'm simply skeptical that such low-probability scenarios should be the primary factor in assessing the value of AI alignment research.

Intuitively, I would expect the arguments for prioritizing alignment to be more clear-cut and compelling than "if we fail to align AIs, then there's a small chance that these unaligned AIs might have zero utilitarian value, so we should make sure AIs are aligned instead". If low-probability scenarios are the strongest considerations in favor of alignment, that seems to undermine the robustness of the case for prioritizing this work. While it's appropriate to consider even low-probability risks when the stakes are high, I'm doubtful that small probabilities should be the dominant consideration in this context. I think the core reasons for focusing on alignment should probably be more straightforward and less reliant on complicated chains of logic than this type of argument suggests.

In particular, as I've said before, I think it's quite reasonable to think that we should align AIs to humans for the sake of humans. In other words, I think it's perfectly reasonable to admit that solving AI alignment might be a great thing for ensuring human flourishing in particular. But if you're a utilitarian, and not particularly attached to human preferences per se (i.e., you're non-speciesist), I don't think you should be highly confident that an unaligned AI-driven future would be much worse than an aligned one, from that perspective.
JWS · 9h
Going to quickly share that I'm going to take a step back from commenting on the Forum for the foreseeable future. There are a lot of ideas in my head that I want to work into top-level posts to hopefully spur insightful and useful conversation amongst the community, and while I'll still be reading and engaging, I have a limited amount of time I want to spend on the Forum, so I think it'd be better for me to move that focus to posts rather than comments for a bit.[1] If you do want to get in touch about anything, please reach out and I'll try my very best to respond. Also, if you're going to be in London for EA Global, I'll be around and very happy to catch up :)

1. ^ Though if it's a highly engaged/important discussion and there's an important viewpoint that I think is missing, I may weigh in.
Much of the writing here relies on confusing metaconcepts. Preferences, intuition, etc. are not things like apples and trees. I hope the confusion will be overcome. Is there a useful metaphysics for dealing with this? Perhaps anyone interested in complex controversies in the philosophy of mathematics will know.

Friday, 3 May 2024

Frontpage Posts

kta · 2d ago · 14m read
niplav · 2d ago · 6m read

Quick takes

I worked at OpenAI for three years, from 2021 to 2024, on the Alignment team, which eventually became the Superalignment team. I worked on scalable oversight, as part of the team developing critiques as a technique for using language models to spot mistakes in other language models. I then worked to refine an idea from Nick Cammarata into a method for using language models to generate explanations for features in language models. I was then promoted to managing a team of 4 people, which worked on trying to understand language model features in context, leading to the release of an open-source "transformer debugger" tool. I resigned from OpenAI on February 15, 2024.
Not sure how to post these two thoughts so I might as well combine them.

In an ideal world, SBF should have been sentenced to thousands of years in prison. This is partially due to the enormous harm done to both FTX depositors and EA, but mainly for basic deterrence reasons; a risk-neutral person will not mind 25 years in prison if the ex ante upside was becoming a trillionaire.

However, I also think many lessons from SBF's personal statements, e.g. his interview on 80k, are still as valid as ever. Just off the top of my head:
* Startup-to-give as a high-EV career path. Entrepreneurship is why we have OP and SFF! Perhaps also the importance of keeping as much equity as possible, although in the process one should not lie to investors or employees more than is standard.
* Ambition and working really hard as success multipliers in entrepreneurship.
* A career decision algorithm that includes doing a BOTEC and rejecting options that are 10x worse than others.
* It is probably okay to work in an industry that is slightly bad for the world if you do lots of good by donating.[1] (But fraud is still bad, of course.)

Just because SBF stole billions of dollars does not mean he has fewer virtuous personality traits than the average person. He hits at least as many multipliers as the average reader of this forum. But importantly, maximization is perilous; some particular qualities like integrity and good decision-making are absolutely essential, and if you lack them your impact could be multiplied by minus 20.

[1] The unregulated nature of crypto may have allowed the FTX fraud, but things like the zero-sum, zero-NPV nature of many cryptoassets, or its negative climate impacts, seem unrelated. Many industries are about this bad for the world, like HFT or some kinds of social media. I do not think people who criticized FTX on these grounds score many points. However, perhaps it was (weak) evidence towards FTX being willing to do harm in general for a perceived greater good, which is maybe plausible, especially if Ben Delo also did market manipulation or otherwise acted immorally. Also note that in the interview, SBF didn't claim his donations offset a negative direct impact; he said the impact was likely positive, which seems dubious.
(EA) Hotel dedicated to events, retreats, and bootcamps in Blackpool, UK? I want to try and gauge what the demand for this might be. Would you be interested in holding or participating in events in such a place? Or in working to run them? Examples of hosted events could be: workshops, conferences, unconferences, retreats, summer schools, coding/data science bootcamps, EtG accelerators, EA charity accelerators, intro to EA bootcamps, AI Safety bootcamps, etc.

This would be next door to CEEALAR (the building is potentially coming on the market), but most likely run by a separate, but close, limited company (which would charge, and funnel profits to CEEALAR, but also subsidise use where needed). Note that being in Blackpool in a low-cost building would mean that the rates charged by such a company would be significantly less than elsewhere in the UK (e.g. £300/day for use of the building: 15 bedrooms and communal space downstairs to match that capacity). Maybe think of it as Wytham Abbey, but at the other end of the Monopoly board: only 1% of the cost! (A throwback to the humble beginnings of EA?)

From the early days of the EA Hotel (when we first started hosting unconferences and workshops), I have thought that it would be good to have a building dedicated to events, bootcamps and retreats, where everyone is in and out as a block, so as to minimise overcrowding during events, and inefficiencies in usage of the building either side of them (from needing it mostly empty for the events); CEEALAR is still suffering from this with its event hosting. The yearly calendar could be filled up with e.g. four 10-12 week bootcamps/study programmes, punctuated by four 1-3 week conferences or retreats in between.

This needn't happen straight away, but if I don't get the building now, the option will be lost for years. Having it next door in the terrace means that the building can be effectively joined to CEEALAR, making logistics much easier (and another option for the building could be a further expansion of CEEALAR proper[1]). Note that this is properly viewed as an investment to take advantage of a time-limited opportunity, and shouldn't be seen as fungible with donations (to CEEALAR or anything else); if nothing happens I can just sell the building again and recoup most/all of the costs (selling shouldn't be that difficult, given property prices are rising again in the area due to a massive new development in the town centre).

1. ^ CEEALAR has already expanded once. When I bought the second building it also wasn't ideal timing, but it never is; I didn't want to lose option value.
I might start doing some policy BOTEC (back-of-the-envelope calculation) posts, i.e. where I suggest an idea and try to figure out how valuable it is. I think I could do this faster with a group to bounce ideas off. If you'd like to be added to a message chat (on WhatsApp, probably) to share policy BOTECs, then reply here or DM me. A minimal sketch of what I have in mind is below.
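To illustrate the kind of thing I mean, here is a toy policy BOTEC written as Python. All of the numbers are hypothetical placeholders rather than estimates of any real policy; the point is just the shape of the calculation (reach × benefit × probabilities, compared against cost).

```python
# A minimal policy BOTEC sketch. Every number below is a made-up placeholder,
# not an estimate from the quick take or from any real policy analysis.

population_affected = 5_000_000      # people the policy would reach (guess)
benefit_per_person_usd = 20          # annual benefit per person (guess)
probability_policy_adopted = 0.02    # chance advocacy actually changes policy (guess)
probability_it_works = 0.5           # chance the policy has the intended effect (guess)
campaign_cost_usd = 1_000_000        # cost of advocating for the policy (guess)

# Expected benefit = reach * per-person benefit * chance of adoption * chance it works.
expected_benefit = (
    population_affected
    * benefit_per_person_usd
    * probability_policy_adopted
    * probability_it_works
)

print(f"Expected benefit: ${expected_benefit:,.0f}")
print(f"Benefit-cost ratio: {expected_benefit / campaign_cost_usd:.1f}x")
```

With these placeholder inputs the expected benefit is $1,000,000, i.e. a 1.0x benefit-cost ratio; the interesting part of a real BOTEC post would be arguing for better inputs and seeing how sensitive the ratio is to them.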
New: “card view” for frontpage posts

We’re testing out a new “card view” for the main post list on the home page. You can toggle the layout by clicking the dropdown circled in red below. You can see more details in GitHub here. Let us know what you think! :)

Thursday, 2 May 2024


Quick takes

There have been multiple occasions where I've copy and pasted email threads into an LLM and asked it things like:
1. What is X person saying?
2. What are the cruxes in this conversation?
3. Summarise this conversation.
4. What are the key takeaways?
5. What views are being missed from this conversation?

I really want an email plugin that basically brute forces rationality INTO email conversations. Tangentially, I wonder if LLMs can reliably convert people's claims into a percentage through sentiment analysis? This would be useful for forecasters, I believe (and for rationality in general).
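For what it's worth, the core of this workflow is easy to prototype before anyone builds a proper plugin. Below is a minimal Python sketch, assuming the `openai` package and an API key in the environment; the model name, prompts, and example thread text are illustrative assumptions, not anything from the quick take.

```python
# A minimal sketch of pasting an email thread into an LLM and asking the
# questions listed above. Model choice and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def analyse_thread(thread_text: str, question: str) -> str:
    """Ask one question about the pasted email thread and return the answer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model would do
        messages=[
            {"role": "system", "content": "You analyse email threads for a busy reader."},
            {"role": "user", "content": f"{question}\n\nEmail thread:\n{thread_text}"},
        ],
    )
    return response.choices[0].message.content

questions = [
    "What is each person saying?",
    "What are the cruxes in this conversation?",
    "Summarise this conversation.",
    "What are the key takeaways?",
    "What views are missing from this conversation?",
]

# Placeholder thread; in practice, paste or export the real email thread here.
thread = """From: Alice -- I think we should fund project X because...
From: Bob -- I disagree, the evidence for X is weak because..."""

for q in questions:
    print(q)
    print(analyse_thread(thread, q))
    print()
```

An actual plugin would mostly be glue around this loop: pulling the thread out of the mail client, running a fixed set of questions, and rendering the answers alongside the conversation.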

Wednesday, 1 May 2024


Quick takes

Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he is particularly interested in. I know politics is discouraged on the EA Forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He's up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas, but this may change under a Trump administration.
I intend to strong-downvote any article about EA that someone posts on here that they themselves have no positive takes on. If I post an article, I have some reason I liked it, even a single line. Being critical isn't enough on its own. If someone posts an article without a single quote they like, with the implication that it's a bad article, I am minded to strong-downvote it so that no one else has to waste their time on it.
Quick poll [✅ / ❌]: Do you feel like you don't have a good grasp of Shapley values, despite wanting to? (Context for after voting: I'm trying to figure out if more explainers of this would be helpful. I still feel confused about some of their implications, despite having spent significant time trying to understand them.)
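For anyone who wants a concrete handle on the definition while voting: the sketch below computes Shapley values for a toy three-player game by averaging each player's marginal contribution over all coalitions of the other players. The characteristic-function payoffs are made-up illustration numbers, not from any real example.

```python
# Toy Shapley value computation for a hypothetical 3-player game.
# phi_i = sum over S ⊆ N\{i} of |S|!(n-|S|-1)!/n! * (v(S ∪ {i}) - v(S))
from itertools import combinations
from math import factorial

players = ["A", "B", "C"]

# Characteristic function: value created by each coalition (hypothetical numbers).
v = {
    frozenset(): 0,
    frozenset("A"): 10,
    frozenset("B"): 10,
    frozenset("C"): 0,
    frozenset("AB"): 30,
    frozenset("AC"): 20,
    frozenset("BC"): 20,
    frozenset("ABC"): 40,
}

def shapley_value(player: str) -> float:
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    # Weighted average of the player's marginal contribution to every coalition of the others.
    for k in range(len(others) + 1):
        for coalition in combinations(others, k):
            s = frozenset(coalition)
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            total += weight * (v[s | {player}] - v[s])
    return total

for p in players:
    print(p, shapley_value(p))
# Prints roughly A=16.67, B=16.67, C=6.67, which sum to v(ABC)=40 (efficiency property).
```

The part I find most useful to internalise is the weighting: each coalition size is weighted so that, in expectation, you are averaging over all orderings in which players could join.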
Is EA as a bait and switch a compelling argument for it being bad? I don't really think so.
1. There are a wide variety of baits and switches, from what I'd call misleading to some pretty normal activities. Is it a bait and switch when churches don't discuss their most controversial beliefs at a "bring your friends" service? What about wearing nice clothes to a first date?[1]
2. EA is a big movement composed of different groups.[2] Many describe it differently.
3. EA has done so much global health stuff that I am not sure it can be described as a bait and switch, e.g. https://docs.google.com/spreadsheets/d/1ip7nXs7l-8sahT6ehvk2pBrlQ6Umy5IMPYStO3taaoc/edit#gid=9418963
4. EA is way more transparent than any comparable movement. If it is a bait and switch, then it does so much more than others to make clear where the money goes, e.g. https://openbook.fyi/

On the other hand:
1. I do sometimes see people describing EA too favourably or pushing an inaccurate line.

I think that transparency comes with the feature of allowing anyone to come and say "what's going on there?", and that can be very beneficial for avoiding error, but bad criticism can also be too cheap. Overall I don't find this line that compelling, and the parts that are seem largely in the past, when EA was smaller (when perhaps it mattered less). Now that EA is big, it's pretty clear that it cares about many different things. Seems fine.

1. ^ @Richard Y Chappell created the analogy.
2. ^ @Sean_o_h argues that here.
Do you believe that altruism actually makes people happy? Peter Singer's book argues that people become happier by behaving altruistically, and psychoanalysis also classifies altruism as a mature defense mechanism. However, there are also concerns about pathological altruism and people pleasers. In-depth research data on this is desperately needed.

Tuesday, 30 April 2024


Quick takes

tlevin · 5d
I think some of the AI safety policy community has over-indexed on the visual model of the "Overton Window" and under-indexed on alternatives like the "ratchet effect," "poisoning the well," "clown attacks," and other models where proposing radical changes can make you, your allies, and your ideas look unreasonable. I'm not familiar with a lot of systematic empirical evidence on either side, but it seems to me like the more effective actors in the DC establishment overall are much more in the habit of looking for small wins that are both good in themselves and shrink the size of the ask for their ideal policy than of pushing for their ideal vision and then making concessions.

Possibly an ideal ecosystem has both strategies, but it seems possible that at least some versions of "Overton Window-moving" strategies executed in practice have larger negative effects via associating their "side" with unreasonable-sounding ideas in the minds of very bandwidth-constrained policymakers, who strongly lean on signals of credibility and consensus when quickly evaluating policy options, than the positive effects of increasing the odds of ideal policy and improving the framing for non-ideal but pretty good policies.

In theory, the Overton Window model is just a description of what ideas are taken seriously, so it can indeed accommodate backfire effects where you argue for an idea "outside the window" and this actually makes the window narrower. But I think the visual imagery of "windows" actually struggles to accommodate this -- when was the last time you tried to open a window and accidentally closed it instead? -- and as a result, people who rely on this model are more likely to underrate these kinds of consequences. Would be interested in empirical evidence on this question (ideally actual studies from psych, political science, sociology, econ, etc. literatures, rather than specific case studies, due to reference class tennis type issues).
Time for the Shrimp Welfare Project to do a Taylor Swift crossover? https://www.instagram.com/p/C59D5p1PgNm/?igsh=MXZ5d3pjeHAxeHR2dw==

Monday, 29 April 2024

Quick takes

Excerpt from the most recent update from the ALERT team:

Highly pathogenic avian influenza (HPAI) H5N1: What a week! The news, data, and analyses are coming in fast and furious. Overall, ALERT team members feel that the risk of an H5N1 pandemic emerging over the coming decade is increasing. Team members estimate that the chance that the WHO will declare a Public Health Emergency of International Concern (PHEIC) within 1 year from now because of an H5N1 virus, in whole or in part, is 0.9% (range 0.5%-1.3%). The team sees the chance going up substantially over the next decade, with the 5-year chance at 13% (range 10%-15%) and the 10-year chance increasing to 25% (range 20%-30%).

Their estimated 10-year risk is a lot higher than I would have anticipated.
How do you deal with the frustration of trying to find an Entry-level Machine Learning job as a Software Engineer not based near Bay Area or London?

Sunday, 28 April 2024

Quick takes

A lot of policy research seems to be written with an agenda in mind, to shape the narrative. This kind of destroys the point of policy research, which is supposed to inform stakeholders rather than actively convince or nudge them. It might cause polarization in some topics and probably snatches legitimacy away from the space. I have seen similar concerning parallels in the non-profit space, where some third-sector actors endorse or do things which they see as being good but which destroy trust in the whole space. This gives me scary unilateralist's curse vibes.
In case you're interested in supporting my EA-aligned YouTube channel A Happier World: I've lowered the minimum funding goal from $10,000 to $2,500 to give donors confidence that their money will directly support the project. Because if the minimum funding goal isn't reached, you won't get your money back; instead, it will go back into your Manifund balance for you to spend on a different project. I understand this may have been a barrier for some, which is why I lowered the minimum funding goal. Manifund fundraising page | EA Forum post announcement
How to communicate EA to the commonsense Christian: has it been done before? I'm considering writing a series of posts exploring the connection between EA and the common-sense Christianity one might encounter on the street if you were to ask someone about their 'faith.' I've looked into EA for Christians a bit, and haven't done a deep dive into their articles yet. I'm wondering what the consensus is on this group, and if anyone involved can give me a synopsis on how that's been going. Has it been effective? I'm posting this quick take as a means of feeling out this idea. This mini-series would probably consist of exploring EA from a commonsense place, considering how the use of Church-language can allow one to communicate more effectively and bypass being seen as a member of the out-group, and hopefully enable more Christians to see this movement as something they may want to be a part of even if they don't share the same first premises. I don't want to put more time into work that has been deeply covered in the community but feel that this is an area I can provide some insight into, as I have my motivations for reconciliation beyond academic interest. What are your thoughts?

Saturday, 27 April 2024

Quick takes

I can't find a better place to ask this, but I was wondering whether/where there is a good explanation of the scepticism of leading rationalists about animal consciousness/moral patienthood. I am thinking in particular of Zvi and Yudkowsky. In the recent podcast with Zvi Mowshowitz on 80K, the question came up a bit, and I know he is also very sceptical of interventions for non-human animals on his blog, but I had a hard time finding a clear explanation of where this belief comes from. I really like Zvi's work, and he has been right about a lot of things I was initially on the other side of, so I would be curious to read more of his or similar people's thoughts on this. Seems like potentially a place where there is a motivation gap: non-animal welfare people have little incentive to convince me that they think the things I work on are not that useful.

Friday, 26 April 2024


Quick takes

American Philosophical Association (APA) announces two $10,000 AI2050 Prizes for philosophical work related to AI, with June 23, 2024 deadline:  https://dailynous.com/2024/04/25/apa-creates-new-prizes-for-philosophical-research-on-ai/ https://www.apaonline.org/page/ai2050 https://ai2050.schmidtsciences.org/hard-problems/
Paul Graham about getting good at technology (bold is mine): > How do you get good at technology? And how do you choose which technology to get good at? Both of those questions turn out to have the same answer: work on your own projects. Don't try to guess whether gene editing or LLMs or rockets will turn out to be the most valuable technology to know about. No one can predict that. Just work on whatever interests you the most. You'll work much harder on something you're interested in than something you're doing because you think you're supposed to. > > If you're not sure what technology to get good at, get good at programming. That has been the source of the median startup for the last 30 years, and this is probably not going to change in the next 10. From "HOW TO START GOOGLE", March 2024. It's a talk for ~15 year olds, and it has more about "how to get good at technology" in it.
This WHO press release was a good reminder of the power of immunization. A new study forthcoming in The Lancet reports that (liberally quoting/paraphrasing the release):
* global immunization efforts have saved an estimated 154 million lives over the past 50 years, 146 million of them children under 5 and 101 million of them infants
* for each life saved through immunization, an average of 66 years of full health were gained – with a total of 10.2 billion full health years gained over the five decades
* measles vaccination accounted for 60% of the lives saved due to immunization, and will likely remain the top contributor in the future
* vaccination against 14 diseases has directly contributed to reducing infant deaths by 40% globally, and by more than 50% in the African Region
* the 14 diseases: diphtheria, Haemophilus influenzae type B, hepatitis B, Japanese encephalitis, measles, meningitis A, pertussis, invasive pneumococcal disease, polio, rotavirus, rubella, tetanus, tuberculosis, and yellow fever
* fewer than 5% of infants globally had access to routine immunization when the Expanded Programme on Immunization (EPI) was launched 50 years ago in 1974 by the World Health Assembly; today 84% of infants are protected with 3 doses of the vaccine against diphtheria, tetanus and pertussis (DTP) – the global marker for immunization coverage
* there's still a lot to be done – for instance, 67 million children missed out on one or more vaccines during the pandemic years
A corporation exhibits emergent behavior, over which no individual employee has full control. Because the unregulated market selects for profit and nothing else, any successful corporation becomes a kind of "financial paperclip optimizer". To prevent this, the economic system must change.
Everyone who seems to be writing policy papers or doing technical work seems to be keeping generative AI at the back of their mind when framing their work or impact.

This narrow focus on gen AI might well be net-negative for us, unknowingly or unintentionally ignoring ripple effects of the gen AI boom in other fields (like robotics companies getting more funding, leading to more capabilities, which leads to new types of risks).

And guess who benefits if we do end up getting good evals/standards in place for gen AI? It seems to me companies/investors are clear winners, because we have to go back to the drawing board and advocate for the same kind of stuff for robotics or a different kind of AI use-case/type, all while the development/capability cycles keep maturing. We seem to be in whack-a-mole territory now because of the Overton window shifting for investors.
