Ada-Maaria Hyvärinen

I'm responsible for career advising and content at EA Finland. If you have any career advising advice, I'm all ears.

At my day job, I work as a data scientist. Always happy to chat about NLP and language.

Comments

Announcing giveffektivt.dk

Your Norwegian example is really inspiring in this space!

I just want to point out that in some places a bank account number to donate to is not going to be enough - for example, in Finland the regulations on collecting donations and handling donated money are quite strict, so it's better to check your local requirements before starting to collect money.

How I failed to form views on AI safety

Hi Otto, I have been wanting to reply to you for a while, but I feel like my opinions keep changing, so writing coherent replies is hard (though having fluid opinions in my case seems like a good thing). For example, while I still think a pre-collected set of texts alone is an insufficient data source for any general intelligence, maybe training a model on text and then having it interact with humans could lead it to connect words to their references (real-world objects), and maybe it would not necessarily need many reference points if the language model is rich enough? Then again, this seems to sound a bit like the concept of imagination, and I am worried I am anthropomorphising in a weird way.

Anyway, I still hold the intuition that generality is not necessarily the most important property to think about in future AI scenarios – this of course is an argument towards taking AI risk more seriously, because it should be more likely that someone will build either advanced narrow AI or advanced AGI than advanced AGI alone.

I liked "AGI safety from first principles" but I would still be reluctant to discuss it with say, my colleagues from my day job, so I think I would need something even more grounded to current tech, but I do understand why people do not keep writing that kind of papers because it does probably not directly help solving alignment. 

How I failed to form views on AI safety

Yeah, I think we agree on this. I think I want to write more later on what communication strategies might help people actually voice scepticism/concerns even if they are afraid of not meeting some standard of elaborateness.

My mathematics example actually tried to be about this: in my university, the teachers tried to make us forget that the teachers are more likely to be right, so that we would have to think about things on our own and voice scepticism even if we were objectively likely to be wrong. I remember another lecturer telling us: "if you finish an exercise and notice you did not use all the assumptions in your proof, you either did something wrong or you came up with a very important discovery". I liked how she stated that it was indeed possible for a person from our freshman group to make a novel discovery, however unlikely that was.

The point is that my lecturers tried to teach us that there is no certain level you have to reach before your opinions start to matter: you might be right even if you are a total beginner and the person you disagree with has a lot of experience.

This is something I would like to emphasize when doing EA community building myself, but it is not very easy. I've seen this when I've taught programming to kids. If a kid asks me if their program is "done" or "good", I'd say "you are the programmer – do you think your program does what it is supposed to do?", but usually the kids think it is a trick question and that I'm just withholding the correct answer for fun. Adults, too, do not always trust that I actually value their opinion.

How I failed to form views on AI safety

Hi Otto!

I agree that the example was not that great and that a lack of data sources can definitely be countered with general intelligence, like you describe. So it could be possible that a generally intelligent agent could plan how to gather the data it needs. My gut feeling is still that it is impossible to develop such intelligence based on one data source (for example text, in however large amounts), but of course there are already technologies that combine different data sources (such as self-driving cars), so this clearly is not the limit either. I'll have to think more about where this intuition of lack of data being a limit comes from, since it still feels relevant to me. Of course, 100 years is a lot of time to gather data.

I'm not sure if imagination is the difference either. Maybe it is the belief in somebody actually implementing things that can be imagined. 

How I failed to form views on AI safety

Thanks! And thank you for the research pointers.

How I failed to form views on AI safety

This intuition turned out to be harder to explain than I thought, and it got me thinking a lot about how to define "generality" and "intelligence" (like all talk about AGI does). But say, for example, that you want to build an automatic doctor that is able to examine a patient and diagnose what illness they most likely have. This is not very general in the sense that you can imagine this system as a function that reads all kinds of input about the person and outputs a diagnosis, but I still think it provides an example of the difficulty of collecting data.

There are some data that can be collected quite easily by the user, because the user can, for example, take pictures of themselves, measure their temperature, etc. And then there are some things the user might not be able to collect data about, such as "is this joint moving normally". I think it is not very likely we will be able to gather meaningful data about things like "how does a person's joint move if they are healthy" unless doctors start wearing gloves that track the position of their hands during the examination, and all this data is stored somewhere together with the doctor's interpretation.
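To make the split concrete, here is a toy sketch of the "function" framing from above (all names are hypothetical, just for illustration), with comments marking which inputs a user could plausibly collect themselves:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientData:
    # Easy for the user to collect at home:
    photos: list[bytes]             # e.g. pictures of a rash
    temperature_c: float            # home thermometer reading
    # Hard to collect without an in-person examination,
    # and rarely recorded in any dataset today:
    joint_mobility: Optional[dict]  # e.g. range-of-motion measurements,
                                    # would need tracked gloves or similar

def automatic_doctor(data: PatientData) -> str:
    """Toy stand-in for 'read all kinds of input, output diagnosis'."""
    if data.joint_mobility is None:
        # A large share of diagnostically relevant signals would
        # land in this missing-data branch.
        return "insufficient data: examination needed"
    return "diagnosis goes here"

# Example: a user-submitted record with no examination data.
print(automatic_doctor(PatientData(photos=[], temperature_c=37.2, joint_mobility=None)))
```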

To me it currently seems that we are collecting a lot of data about various things, but there are still many things for which there are no methods of collecting the relevant data, and the data does not seem like it would start getting collected as a by-product of something (like in the case where you track what people buy from online stores). Also, a lot of data is unorganized and missing labels, and it can be hard to label after it has been collected.

I'm not sure if all of this was relevant or if I got side-tracked too much when thinking about a concrete example I can imagine.

How I failed to form views on AI safety

Thanks! It will be difficult to write an authentic response to TAP since these other responses were originally not meant to be public, but I will try to keep the same spirit if I end up writing more about my AI safety journey.

I actually do find AI safety interesting, it just seems that I think about a lot of stuff differently than many people in the field, and it is hard for me to pinpoint why. But my main motivations for spending a lot of time on forming personal views about AI safety are:

  • I want to understand x-risks better; AI risk is considered important among people who worry about x-risk a lot, and because of my background I should be able to understand the argument for it (better than, say, biorisk)
  • I find it confusing that understanding the argument is so hard for me, and that makes me worried (like I explained in the sections "The fear of the answer" and "Friends and appreciation")
  • I find it very annoying when I don't understand why some people are convinced by something, especially if these people are with me in a movement that is important for us all

How I failed to form views on AI safety

Yeah, I understand why you'd say that. However, it seems to me that there are other limitations to building AGI than finding the right algorithms. As a data scientist, I am biased towards thinking about available training data. Of course, there is probably going to be progress on this as well in the future.

I burnt out at EAG. Let's talk about it.

Hi, just wanted to drop in to say:

  • You had an experience that you describe as burn-out less than a week ago – it's totally ok not to be fine yet! It's good you feel better, but take the time you need to recover properly.
  • I don't know how old you are, but it is also ok to feel overwhelmed by EA later, when you no longer feel like describing yourself as "just a kid". Doing your best to make the world a better place is hard for a person of any age.
  • The experience you had does not necessarily mean you are not cut out for community building. You've now learned more about your boundaries, and you might be able to recognize red flags earlier in the future.

Good luck and I hope you learn something valuable about yourself from the ADHD assessment!

How I failed to form views on AI safety

Hmm, if the probability is non-zero in the next 100 years, the likelihood over a longer time frame should be bigger, given that there is nothing that makes developing AGI more difficult as more time passes – and I would imagine it is more likely to get easier than harder (unless something catastrophic happens). In other words, I don't think it is certainly impossible to build AGI, but I am very pessimistic about anything like current ML methods leading to AGI. A lot of people in the AI safety community seem to disagree with me on that, and I have not completely understood why.
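For what it's worth, the first sentence is just the monotonicity of cumulative probabilities. A minimal sketch (the variable T is my notation, not from the original discussion):

```latex
% Let T be the number of years until AGI is first built
% (possibly infinite if it never happens). The event
% "AGI within 100 years" is contained in "AGI within t years"
% for any t >= 100, so the probability can only grow with the horizon:
\[
  0 < P(T \le 100) \le P(T \le t) \qquad \text{for all } t \ge 100.
\]
```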
