titotal

3231 · Joined Jul 2022

Comments (248)

First of all, great post, thanks for exploring this topic! 

So I'm a little confused about the definition here:

AI winter is operationalised as a drawdown in annual global AI investment of ≥50%

I would guess that the burst of the dot-com bubble meets this definition? But I wouldn't exactly call 2002-2010 an "internet winter": usage kept growing and growing, just with a better understanding of what you can and can't profit from. I think there's a good chance (>30%) of this particular definition of "AI winter" occurring, but I also reckon that if it happens, people will feel it's unfair to characterize it as such.

I think a more likely outcome is a kind of "AI autumn": investment keeps coming at a steady rate, and lots and lots of people use AI for the things it's good at, but the pace of advancement slows significantly, certain problems prove intractable, and the hype dies down. I think we've already seen this process happen with autonomous vehicles. I consider this scenario very likely.

titotal · 3d · 102

Might as well make an alternate prediction here: 

There will be no AGI in the next 10 years. There will be an AI bubble over the next couple of years as new applications for deep learning proliferate, creating a massive hype cycle similar to the dot-com boom. 

This bubble will die down or burst when people realize the limitations of deep learning in domains that lack gargantuan datasets. It will fail to take hold in domains where errors cause serious damage (see the unexpected difficulty of self-driving cars). As with the burst of the dot-com bubble, people will continue to use AI heavily for the applications it is actually good at.

If AGI does occur, it will be decades away at least, and require further conceptual breakthroughs and/or several orders of magnitude higher computing power. 

Yes, it seems like more uncertain and speculative questions with less available evidence would have larger swings in beliefs. So it's possible that updating does help, but not enough to overcome the difficulty of the problems. If this is what happened, the takeaway is that we should be more skeptical of predictions that are more speculative and more uncertain, which makes sense.

I could see a way for updating to make predictions worse, if there was systematic bias in whether pro- or anti-proposition evidence is seen, or a bias in how pro or anti evidence is updated on. To pick an extreme example, if someone was trying to evaluate whether the earth was flat, but only considered evidence from flat earth websites, then higher amounts of updating would simply drag them further and further away from the truth. This could also explain why Metaculus is doing worse on AI predictions than on other predictions, if there was a bias specifically in this field.
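To make the flat-earth example concrete, here's a quick toy simulation (my own sketch, with all the numbers made up purely for illustration): an agent does Bayes updates on a proposition that is in fact false, but the evidence it sees comes from a biased source, and it treats that evidence as reliable. The more it updates, the further its credence drifts from the truth.

```python
import random

# Toy model (all numbers invented for illustration): the proposition is false,
# but the evidence source reports "pro" evidence 80% of the time regardless of
# the truth, and the agent wrongly treats each report as 70% reliable.
def biased_updating(n_updates, p_pro=0.8, assumed_reliability=0.7, prior=0.5, seed=0):
    rng = random.Random(seed)
    credence = prior
    for _ in range(n_updates):
        pro = rng.random() < p_pro  # biased evidence stream
        # Bayes' rule under the agent's (mistaken) reliability assumption
        if pro:
            like_true, like_false = assumed_reliability, 1 - assumed_reliability
        else:
            like_true, like_false = 1 - assumed_reliability, assumed_reliability
        credence = (like_true * credence) / (
            like_true * credence + like_false * (1 - credence)
        )
    return credence

for n in (1, 10, 50, 200):
    print(n, round(biased_updating(n), 3))
```

With numbers like these, the credence climbs towards 1 as the updates pile up, even though the proposition is false: more updating means more wrong.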

  • Total belief movement (0.262; 0), i.e. predictions are less accurate for questions with a greater amount of updating. This is surprising, as one would expect predictions to converge to the truth as they are updated.

This is fascinating: at face value, it would imply that the whole process of "updating", as practiced by the average Metaculus user, is useless and actually makes predictions worse. Another theory would be that people update more on topics they are interested in, but are more likely to have biases on those topics and therefore be more wrong overall. Any other explanations people can think of?

titotal · 10d · 1511

Indeed, 4 and 5 are the weakest parts of the AI risk argument, and often seem to be based on an overly magical view of what computation/intelligence can achieve, neglecting the fact that all intelligences are fallible. There is an overly large reliance on making up science fiction scenarios without putting any effort into proving that said scenarios are likely or even possible (see Yudkowsky's absurd "mixing proteins to make nanobots that kill everyone" scenario).

I'm working on a post elaborating on this in more depth, based on my experience as a computational physicist.

titotal · 13d · 110

Indeed, the only way to 100% ensure no misconduct ever would be to shut down the society entirely. But I'll note that none of the actions we took in our club cost any money; really, it's mostly a culture and norms thing. EA does pay the community health team, but I would guess it gets back far more than it spends, in terms of recruitment, reducing PR disasters, etc.

I'll note that high standards are important in general as EA becomes more powerful. EA may have a strong voice in writing the value systems of AI, for example, so it's important that the people doing so are not ethically compromised. 

titotal · 13d · 7475

Look, I think Will has worked very hard to do good and I don't want to minimize that, but at some point (after the full investigation has come out) a pragmatic decision needs to be made about whether he and others are more valuable in leadership or helping from the sidelines. If the information in the article is true, I think the former comes at far too great a cost.

This was not a small mistake. It is extremely rare for charitable foundations to be caught up in scandals of this magnitude, and this article indicates that a significant amount of the fallout could have been prevented with a little more investigation at key moments, and that clear signs of unethical behaviour were deliberately ignored. I think this is far from competent.

We are in the charity business. Donors expect high standards when it comes to their giving, and a bad reputation translates directly into lost dollars. And remember, we want new donors, not just to keep the old ones. I simply don't see how "we have high standards, except when it comes to facilitating billion-dollar frauds" can hold up to scrutiny. I'm not sure we can "credibly convince people" if we keep the current leadership in place. The monetary cost could be substantial.

We also want to recruit people to the movement. Being associated with bad behaviour will hurt our ability to recruit people with strong moral codes. Worse, though, would be if we encouraged "vultures". A combination of low ethical standards and large amounts of money would make our movement an obvious target for unethical exploiters, as appears to have already happened with SBF.

Being a brilliant philosopher or intellectual does not necessarily make you a great leader. I think we can keep the benefits of the former while recognizing that someone is no longer suited to the latter. Remaining in a leadership position is a privilege, not a right.

titotal · 14d · 7150

EA leaders should be held to high standards, and it's becoming increasingly difficult to believe that the current leadership has met those standards. I'm open to having my mind changed when the investigation is concluded and the leaders respond (and we get a better grasp on who knew what when). As it stands now, I would guess it would be in the best interest of the movement (in terms of avoiding future mistakes, recruitment, and fundraising) for those who have displayed significantly bad judgement to step down from leadership roles. I recognize that they have worked very hard to do good, and I hope they can continue helping in non-leadership roles. 

titotal · 15d · 1112

In my experience, the most important parts of a sensitive discussion are displaying kindness and empathy, and finding common ground.

It's disheartening to write something on a sensitive topic based on upsetting personal experiences, only to be met with seemingly stonehearted critique or dismissal. Small displays of empathy and gratitude can go a long way towards making people feel that their honesty and vulnerability have been rewarded rather than punished.

I think your points are good, but if deployed wrongly they could make things worse. For example, if a non-rationalist friend of yours tells you about their experiences with harassment, immediately jumping into a Bayesian analysis of the situation is ill-advised and may lose you a friend.

titotal · 15d · 30

This was in Australia. Most of the incoming members were university-aged, but it was open to people of all ages, so there were people in their 30s alongside 18-year-olds fresh out of high school (a dynamic that probably applies to EA spaces as well). I think this kind of dynamic warrants significant extra caution, as you don't want older men coming along to try to "pick up" college girls.
