All of Ada-Maaria Hyvärinen's Comments + Replies

How I failed to form views on AI safety

Hi Otto!

I agree that the example was not that great and that a lack of data sources can definitely be countered with general intelligence, like you describe. So it could definitely be possible that a generally intelligent agent could plan how to gather the needed data. My gut feeling is still that it is impossible to develop such intelligence based on a single data source (for example text, however large the amount), but of course there are already technologies that combine different data sources (such as self-driving cars), so this is clearly not the limit either. I... (read more)

Otto, 22d
Hey, I wasn't saying it wasn't that great :) I agree that the difficult part is to get to general intelligence, also regarding data. Compute, algorithms, and data availability are all needed to get to this point. It seems really hard to know beforehand what kind of algorithms and how much data one would need. I agree that basically only one source of data, text, could well be insufficient. There was a post I read on a forum somewhere (could have been here) from someone who let GPT3 solve questions including things like 'let all odd rows of your answer be empty'. GPT3 failed at all these kinds of assignments, showing a lack of comprehension. Still, the 'we haven't found the asymptote' argument from OpenAI (intelligence does increase with model size and that increase doesn't seem to stop, implying that we'll hit AGI eventually) is not completely unconvincing either. It bothers me that no one can completely rule out that large language models might hit AGI just by scaling them up. It doesn't seem likely to me, but from a risk management perspective, that's not the point. An interesting perspective I'd never heard before from intelligent people is that AGI might actually need embodiment [https://mauhn.com/blog/Blog%20Post%20Title%20One-2wppj] to gather the relevant data. (They also think it would need social skills first - also an interesting thought.) While it's hard to know how much (and what kind of) algorithmic improvement and data is needed, it seems doable to estimate the amount of compute needed, namely what's in a brain plus or minus a few orders of magnitude. It's hard for me to imagine that evolution can be beaten by more than a few orders of magnitude in algorithmic efficiency (the other way round is somewhat easier to imagine, but still unlikely in a hundred-year timeframe). I think people have focused on compute because it's most forecastable, not because it would be the only part that's important. Still, there is a large gap between what I think
How I failed to form views on AI safety

Thanks! And thank you for the research pointers.

How I failed to form views on AI safety

This intuition turned out to be harder to explain than I thought and got me thinking a lot about how to define "generality" and "intelligence" (like all talk about AGI does). But say, for example, that you want to build an automatic doctor that is able to examine a patient and diagnose what illness they most likely have. This is not very general, in the sense that you can imagine this system as a single function of "read all kinds of input about the person, output a diagnosis", but I still think it provides an example of the difficulty of collecting data.

There are some... (read more)

Otto, 1mo
Hi AM, thanks for your reply. Regarding your example, I think it's quite specific, as you notice too. That doesn't mean I think it's invalid, but it does get me thinking: how would a human learn this task? A human intelligence wasn't trained on many specific tasks in order to be able to do them all. Rather, it first acquired general intelligence (apparently, somewhere), and was later able to apply this to an almost infinite number of specific tasks with typically only a few examples needed. I would guess that an AGI would solve problems in a similar way. So, first learn general intelligence (somehow), then learn specific tasks quickly with little data needed. For your example, if the AGI really needed to do this task, I'd say it could find ways to gather the data itself, just like a human who wanted to learn this skill would, after first acquiring some form of general intelligence. A human doctor might watch the healthily moving joint, gathering visual data, and might hear the joint moving, gathering audio data, or might put her hand on the joint, gathering sensory data. The AGI could similarly film and record the healthy joint moving, with already available cameras and microphones, or use data already available online, or, worst case, send in a drone with a camera and a sound recorder. It could even send in a robot that could gather sensory data if needed. Of course, current AI lacks certain skills that are necessary to solve such a general problem in such a general way, such as really understanding the meaning behind a question that is asked, being able to plan a solution (including acquiring drones and robots in the process), and probably others. These issues would need to be solved first, so there is still a long way to go. But with the manpower, investment, and time (e.g. 100 years) available, I think we should assign a probability of at least tens of percent that this type of general intelligence including planning and acting effectively in the rea
How I failed to form views on AI safety

Thanks! It will be difficult to write an authentic response to TAP since these other responses were originally not meant to be public but I will try to keep the same spirit if I end up writing more about my AI safety journey.

I actually do find AI safety interesting; it just seems that I think about a lot of stuff differently than many people in the field, and it is hard for me to pinpoint why. But the main motivations for spending a lot of time on forming personal views about AI safety are:
 

  • I want to understand x-risks better, AI risk is considered import
... (read more)
rachelAF, 1mo
Thank you for explaining more. In that case, I can understand why you'd want to spend more time thinking about AI safety. I suspect that much of the reason that "understanding the argument is so hard" is because there isn't a definitive argument -- just a collection of fuzzy arguments and intuitions. The intuitions seem very, well, intuitive to many people, and so they become convinced. But if you don't share these intuitions, then hearing about them doesn't convince you. I also have an (academic) ML background, and I personally find some topics (like mesa-optimization) to be incredibly difficult to reason about. I think that generating more concrete arguments and objections would be very useful for the field, and I encourage you to write up any thoughts that you have in that direction! (Also, a minor disclaimer that I suppose I should have included earlier: I provided technical feedback on a draft of TAP, and much of the "AGI safety" section focuses on my team's work. I still think that it's a good concrete introduction to the field, because of how specific and well-cited it is, but I also am probably somewhat biased.)
How I failed to form views on AI safety

Yeah, I understand why you'd say that. However, it seems to me that there are other limitations to AGI besides finding the right algorithms. As a data scientist, I am biased toward thinking about the available training data. Of course, there is probably going to be progress on this as well in the future.

Otto Barten, 1mo
Could you explain a bit more about the kind of data you think will be needed to train an AGI, and why you think this will not be available in the next hundred years? I'm genuinely interested; actually, I'd love to be convinced of the opposite... We can also DM if you prefer.
I burnt out at EAG. Let's talk about it.

Hi, just wanted to drop in to say:

  • You had an experience that you describe as burnout less than a week ago – it's totally ok not to be fine yet! It's good that you feel better, but take the time you need to recover properly. 
  • I don't know how old you are, but it is also ok to feel overwhelmed by EA later, when you no longer feel like describing yourself as "just a kid". Doing your best to make the world a better place is hard for a person of any age.
  • The experience you had does not necessarily mean you would not be cut out for community building. You've now lea
... (read more)
Khorton, 1mo
Yes, please do take time to rest and recover!
How I failed to form views on AI safety

Hmm, given a non-zero probability in the next 100 years, the likelihood over a longer time frame should be bigger, since there is nothing that makes developing AGI more difficult as more time passes, and I would imagine it is more likely to get easier than harder (unless something catastrophic happens). In other words, I don't think it is certainly impossible to build AGI, but I am very pessimistic about anything like current ML methods leading to AGI. A lot of people in the AI safety community seem to disagree with me on that, and I have not completely understood why.

Otto Barten, 1mo
So although we seem to be relatively close in terms of compute, we don't have the right algorithms yet for AGI, and no one knows if and when they will be found. If no one knows, I'd say a certainty of 99% that they won't be found in a hundred years, with thousands of people trying, is overconfident.
How I failed to form views on AI safety

Hi Caleb! Very nice to read your reflection on what might make you think what you think. I related to many things you mentioned, such as wondering how much I think intelligence matters because of having wanted to be smart as a kid.

You understood correctly that, intuitively, I think AI is less of a big deal than some people feel. This probably has a lot to do with my job, because it includes estimating whether problems can be solved with current technology given certain constraints, and it is better to err on the side of caution. Previously, one of my ta... (read more)

How I failed to form views on AI safety

Hi Otto! Thanks, it was nice talking to you at EAG. (I did not include any interactions/information from this weekend's EAG in the post because I had written it before the conference and felt it should not be any longer than it already was, but I wanted to wait until my friends who are described as "my friends" in the post had read it before publishing it.)

I am not that convinced AGI is necessarily the most important component of x-risk from AI – I feel like there could be significant risks from powerful non-generally intelligent systems, but of cour... (read more)

Otto, 1mo
Thanks for the reply, and for trying to attach numbers to your thoughts! So our main disagreement lies in (1). I think this is a common source of disagreement, so it's important to look into it further. Would you say that the chance to ever build AGI is similarly tiny? Or is it just the next hundred years? In other words, is this a possibility or a timeline discussion?
How I failed to form views on AI safety

I feel like everyone I have ever talked about AI safety with would agree on the importance of thinking critically and staying skeptical, and this includes my facilitator and cohort members from the AGISF programme. 

I think a 1.5h discussion session between 5 people who have read 5 texts does not really allow going deep into any topic, since it is just ~3 minutes per participant per text on average. I think these kinds of programs are great for meeting new people, clearing up misconceptions and providing structure/accountability on actually readin... (read more)

Oliver Sourbut, 18d
OK, this is the terrible, terrible failure mode which I think we are both agreeing on (emphasis mine). By 'a sceptical approach' I basically mean 'the thing where we don't do that', because there is not enough epistemic credit in the field, yet, to expect all (tentative, not-consensus-yet) conclusions to be definitely right. In traditional/undergraduate mathematics it's different - almost always, when you don't understand or agree with the professor, she is simply right and you are simply wrong or confused! This is a justifiable perspective based on the enormous epistemic weight of all the existing work on mathematics. I'm very glad you call out the distinction between performing skepticism and actually doing it.
How I failed to form views on AI safety

Like I said, it is based on my gut feeling, but I am fairly sure.

Is it your experience that adding more complexity and concatenating different ML models results in better quality and generality, and if so, in what domains? I would have the opposite intuition, especially in NLP.

Also, do you happen to know why "prosaic" practices are called "prosaic"? I have never understood the connection to the dictionary definition of "prosaic".

How I failed to form views on AI safety

I'm still quite uncertain about my beliefs, but I don't think you got them quite right. Maybe a better summary is that I am generally pessimistic both about humans ever being able to create AGI and especially about humans being able to create safe AGI (it is a special case, so it should probably be harder than any AGI). I also think that relying a lot on strong unsafe systems (AI powered or not) can be an x-risk. This is why it is easier for me to understand why AI governance is a way to try to reduce x-risk (at least if actors in the world want to rely on unsaf... (read more)

colin, 1mo
Ah, yeah I misread your opinion of the likelihood that humans will ever create AGI. I believe it will happen eventually unless AI research stops due to some exogenous reason (civilizational collapse, a ban on development, etc.). Important assumptions I am making:

  • General Intelligence is all computation, so it isn't substrate-dependent.
  • The more powerful an AI is, the more economically valuable it is to its creators.
  • Moore's Law will continue, so more compute will be available.
  • If other approaches fail, we will be able to simulate brains with sufficient compute.
  • Fully simulated brains will be AGI.

I'm not saying that I think this would be the best, easiest, or only way to create AGI, just that if every other attempt fails, I don't see what would prevent this from happening, particularly since we are already able to simulate portions of a mouse brain. I am also not claiming here that this implies short timelines for AGI. I don't have a good estimate of how long this approach would take.
How I failed to form views on AI safety

That's right, thanks again for answering my question back then! 

Maybe I formulated my question wrong, but I understood from your answer that you first got interested in AI safety, and only then in DS/ML (you mentioned you had had a CS background before but not your academic AI experience). This is why I did not include you in this sample of 3 people - I wanted to narrow the search to people who had more AI-specific background before getting into AI safety (not just CS). It is true that you did not mention Superintelligence either, but interesting to h... (read more)

How I failed to form views on AI safety

Thanks for giving me permission, I guess I can use this if I ever need the opinion of "the EA community" ;)

However, I don't think I'm ready to give up on trying to figure out my stance on AI risk just yet, since I still estimate it is my best shot at forming a more detailed understanding of any x-risk, and understanding x-risks better would be useful for establishing better opinions on other cause prioritization issues.

Geoffrey Irving, 1mo
That is also very reasonable! I think the important part is not to feel too bad about the possibility of never having a view (there is a vast sea of things I don't have a view on), not least because I think it actually increases the chance of getting to the right view if more effort is spent. (I would offer to chat directly, as I'm very much part of the subset of safety close to more normal ML, but am sadly over capacity at the moment.)
How I failed to form views on AI safety

Generally, I find links a lot less frustrating if they are written by the person who sends me the link :) But now I have read the link you gave and don't know what I am supposed to do next, which is another reason I sometimes find link-sharing a difficult means of communication. Like, do I comment on specific parts of your post, or describe how reading it influenced me, or how does the conversation continue? (If you find my reaction interesting: I was mostly unmoved by the post; I think I had seen most of the numbers and examples before; there were some sentences and extrapolations that were quite off-putting for me, but I think the "minimalistic" style was nice.)

It would be nice to call and discuss if you are interested.

Gavin, 22d
Well, definitely tell me what's wrong with the post - and optionally tell me what's good about it (: There's a Forum version here [https://forum.effectivealtruism.org/posts/jeybxkZrJmWpJaatN/agi-risk-analogies-and-arguments] where your comments will have an actual audience, which sounds valuable.
How I failed to form views on AI safety

Glad it may have sparked some ideas for any discussions you might be having in Israel :) For us in Finland, I feel like I at least personally need to get some more clarity on how to balance EA movement building efforts and possible cause-prioritization-related differences between movement builders. I think this is non-trivial because forming a consensus seems hard enough.

Curious to read any object-level response if you feel like writing one! If I end up writing any "Intro to AI Safety" thing, it will be in Finnish, so I'm not sure if you will understand it (it would be nice to have at least one coherent Finnish text about it that is not written by an astronomer or a paleontologist but by some technical person).

How I failed to form views on AI safety

To clarify, my friends (even if they are very smart) did not come up with all the AI safety arguments by themselves, but started to engage with AI safety material because they had already been looking at the world and thinking "hmm, looks like AI is a big thing and could influence a lot of stuff in the future, hope it changes things for the good". So they quickly got on board after hearing that there are people seriously working on the topic, and it made them want to read more.

How I failed to form views on AI safety

I think you understood me in the same way as my friend did in the second part of the prologue, so I apparently give this impression. But to clarify, I am not certain that AI safety is impossible (I think it is hard, though), and the implications of that depend a lot on how much power AI systems will be given in the end, and what part of the damage they might cause is due to them being unsafe and what is due to, for example, misuse, like you said.

How I failed to form views on AI safety

Interesting to hear your personal opinion on the persuasiveness of Superintelligence and Human Compatible! And thanks for designing the AGISF course – it was useful.

richard_ngo, 1mo
Superintelligence doesn't talk about ML enough to be strongly persuasive given the magnitude of the claims it's making (although it does a reasonable job of conveying core ideas like the instrumental convergence thesis and orthogonality thesis, which are where many skeptics get stuck). Human Compatible only spends, I think, a couple of pages actually explaining the core of the alignment problem (although it does a good job at debunking some of the particularly bad responses to it). It doesn't do a great job at linking the conventional ML paradigm to the superintelligence paradigm, and I don't think the "assistance games" approach is anywhere near as promising as Russell makes it out to be.
How I failed to form views on AI safety

Thanks for the nice comment! Yes, I am quite uncomfortable with uncertainty and am trying to work on that. Also, I feel like by now I am pretty involved in EA and ultimately feel welcome enough to be able to post a story like this here (or I feel like EA appreciates different views enough, despite also feeling this pressure to conform at the same time).

How I failed to form views on AI safety

Thanks Aayush! Edited the sentence to hopefully be clearer now :)

Unsurprising things about the EA movement that surprised me

With EA career stories, I think it is important to keep in mind that new members might not read them the same way as more engaged EAs who already know which organizations are considered cool and effective within EA. When I started attending local EA meetups I met a person who worked at OpenPhil (maybe as a contractor? I can't remember the details), but I did not find it particularly impressive because I did not know what Open Philanthropy was and assumed the "phil" stood for "philosophy".

[Creative Writing Contest] [Fiction] The Fey Deal

This one is nice as well! 

Personally I like the method of embedding the link in the story, but since many in my test audience considered it off-putting and too advertisement-like, I thought it better to trust their feedback, since I obviously personally already agree with the thought I'm trying to convey with my text. But like I said, I'm not certain what the best solution is; probably there is no perfect one.

[Creative Writing Contest] [Fiction] The Fey Deal

I tried out a couple of different ones and iterated based on feedback. 

One ending I considered would have been just leaving out the last paragraph and linking to GiveWell like this:

“Besides,” his best friend said. “If you actually want to save a life for 5000 dollars, you can do it in a way where you can verify how they are doing it and what they need your money for.”

“What do you mean?” he asked, now more confused than ever.


I also considered embedding the link explicitly in the story like this:
 

“Besides,” his best friend said. “If you actually wa

... (read more)
WSCFriedman, 7mo
I really like your long version, myself, but I'm already familiar with EA. :)
Rand, 7mo
I like the second one! Though I'd make a minor change, just for punch:
[Creative Writing Contest] [Fiction] The Fey Deal

Don't be sorry! Feedback on language and grammar is very useful to me, since I usually write in Finnish. (This is probably the first time since middle school that I've written a piece of fiction in English.) 

Apparently the punctuation slightly depends on whether you are using British or American English and whether the work is fiction or non-fiction (https://en.wikipedia.org/wiki/Quotation_marks_in_English#Order_of_punctuation ). Since this is fiction, you are in any case totally right about the commas going inside the quotes, and I will edit accordingly. Thanks for pointing this out!

[Creative Writing Contest] [Fiction] The Fey Deal

Thanks for the feedback! Deciding how to end the story was definitely the hardest part of writing this. Pulling the reader out of the fantasy was a deliberate choice, but that does not mean it was necessarily the best one – I did some A/B testing on my proofreading audience but I have to admit my sample size was not that big. Glad you liked it in general anyway :)

Rand, 8mo
Care to share the alternate ending?