Ada-Maaria Hyvärinen

622 karma · Joined Oct 2021

Bio

I'm responsible for career advising and content at EA Finland. If you have any career advising advice, I'm all ears.

At my day job, I work as a data scientist. Always happy to chat about NLP and language.

Comments (35)

I generally agree with your comment, but I want to point out that for a person who does not feel like their achievements are "objectively" exceptionally impressive, Luisa's article can also come across as intimidating: "if a person who achieved all of this still thinks they are not good enough, then what about me?"

I think Olivia's post is especially valuable because she dared to post even though she does not have a list of achievements that would immediately convince readers that her insecurity/worry is all in her head. It is very relatable to a lot of folks (for example, me), and I think she has been really brave to speak up about this!

Thanks for the info! I didn't really get the part on ambitiousness: how is it connected to the amount of time participants want to spend on the event? (I can interpret it as either "they wouldn't do anything else anyway, so they might as well be here the whole weekend" or "they don't want to commit to anything for longer than one day, since they are not used to committing to things".)

Thanks for this write-up! To me, a 10-hour hackathon sounds rather short, since with the lectures and evaluations it only leaves a few hours for the actual hacking. But I have only participated in hackathons where people actually programmed something, so maybe that makes the difference? Did the time feel short to you? Did you get any feedback on the event length from the participants, or did anybody say they wouldn't participate because the time commitment seemed too big (since you mention it was a big time commitment for them)?

Can you recommend a place where I could find this information, or would that spoil your test? I have looked into this in various places, but I still have no idea what the current best set of AI forecasts or P(AI doom) would be.

I'm really glad this post was useful to you :)

Thinking about this quote now, I think I should have written down more explicitly that it is possible to care a lot about having a positive impact without making it the definition of your self-worth; and that it is good to have positive impact as your goal, and normal to be sad about not reaching your goals as you'd like to, but this sadness does not have to come with a feeling of worthlessness. I am still learning how to actually separate these on an emotional level.

Your Norwegian example is really inspiring in this space!

I just want to point out that in some places a bank account number to donate to is not going to be enough: for example, in Finland the regulations on collecting donations and handling donated money are quite strict, so it's better to check your local requirements before starting to collect money.

Hi Otto, I have been wanting to reply to you for a while, but I feel like my opinions keep changing, so writing coherent replies is hard (though having fluid opinions seems like a good thing in my case). For example, while I still think a precollected set of text alone is an insufficient data source for any general intelligence, maybe training a model on text and then having it interact with humans could lead it to connect words to references (real-world objects), and maybe it would not need many reference points if the language model is rich enough? Then again, this seems to sound a bit like the concept of imagination, and I am worried I am anthropomorphising in a weird way.

Anyway, I still hold the intuition that generality is not necessarily the most important factor in thinking about future AI scenarios. This is of course an argument towards taking AI risk more seriously, because it should be more likely that someone builds either advanced narrow AI or advanced AGI than that they build advanced AGI alone.

I liked "AGI safety from first principles" but I would still be reluctant to discuss it with say, my colleagues from my day job, so I think I would need something even more grounded to current tech, but I do understand why people do not keep writing that kind of papers because it does probably not directly help solving alignment. 

Yeah, I think we agree on this. I want to write more later about what communication strategies might help people actually voice scepticism/concerns even if they are afraid of not meeting some standard of elaborateness.

My mathematics example actually tried to be about this: in my university, the teachers tried to make us forget that teachers are more likely to be right, so that we would have to think about things on our own and voice scepticism even if we were objectively likely to be wrong. I remember another lecturer telling us: "if you finish an exercise and notice you did not use all the assumptions in your proof, you either did something wrong or you came up with a very important discovery". I liked how she stated that it was indeed possible for a person in our freshman group to make a novel discovery, however unlikely that was.

The point is that my lecturers tried to teach us that there is no certain level you have to reach before your opinions start to matter: you might be right even if you are a total beginner and the person you disagree with has a lot of experience.

This is something I would like to emphasize when doing EA community building myself, but it is not very easy. I've seen this when I've taught programming to kids: if a kid asks me whether their program is "done" or "good", I say "you are the programmer, do you think your program does what it is supposed to do?", but usually the kids think it is a trick question and that I'm just withholding the correct answer for fun. Adults, too, do not always trust that I actually value their opinion.

Hi Otto!

I agree that the example was not that great, and that a lack of data sources can definitely be countered with general intelligence, as you describe. So it could well be possible that a generally intelligent agent could plan how to gather the data it needs. My gut feeling is still that it is impossible to develop such intelligence based on a single data source (for example text, however large the amount), but of course there are already technologies that combine different data sources (such as self-driving cars), so this is clearly not the limit either. I'll have to think more about where this intuition of lack of data being a limit comes from, since it still feels relevant to me. Of course, 100 years is a lot of time to gather data.

I'm not sure if imagination is the difference either. Maybe it is the belief that somebody will actually implement the things that can be imagined.

Thanks! And thank you for the research pointers.
