I obviously don't have access to Mikkola's full interview transcripts, but when I think back to EA Helsinki in 2021, it is possible that none of us who were interviewed told her we'd do anything like that, and that we only listed serious-sounding things such as donating and career planning as our EA actions :) This, again, shows the limitations of inspecting a whole movement with a limited interview study like this.
Interesting write-up, especially the part about adapting EA materials to French culture! Thanks for writing this :)
Thanks for asking! I meant to phrase it in the negative – my point with this question was to encourage thinking about collaborators outside the EA organization space. This is because I think local groups can have a tendency to think only of (international) EA orgs when considering potential collaborators, and sometimes it could be valuable for them to collaborate with (local) organizations as well. Or maybe the question would show that, in the imaginary case, no organizations exist that would be interesting collaborators for the local group – this would make the priorities of the local group clearer, and maybe encourage members to start new local projects.
Yeah, in a Finnish context a (nude) sauna is a normal option for the afterparty of a professional conference or similar event :) But in these cases, there are separate sauna turns (or different saunas) for men and women, just as at Finnish public swimming pools. At EA Finland events, we have so far followed a fairly usual Finnish student and hobby group policy of having separate sauna turns for non-men and for non-women, plus a mixed turn where everyone is welcome, with the option but not the obligation to wear a swimsuit.
We personally also recommend engaging with the writings of Eliezer, Paul, Nate, and John. We do not endorse all of their research, but they all have tackled the problem, and made a fair share of their reasoning public. If we want to get better together, they seem like a good start.
I realize this is a cross-post and your original audience might know where to find all these recommendations even without further info, but if you want new people to look into their writings, it would be better to at least use the full names of the authors you recommend.
Onni works for Rethink Priorities and is on the board of EA Finland, but he no longer actively participates in community-building efforts in a hands-on way, such as organizing events. My impression is that he is relieved that other people are doing it now :)
Good that you asked, since one thing I wanted to highlight with this story was that it is possible to succeed at community building even if it is not your favorite thing or the best personal fit for you considering all abstract possibilities – if you are the only per...
Good observation, I didn't notice that! It sure makes it harder for non-Swedish speakers to, for example, realize they should check whether there is a connection to Nationaldemokraterna, if no English page points in that direction.
As for Swedish speakers: if the letter of intent had been signed because of nepotism, the vaccination-skepticism part probably would not have come as a surprise, since it seems to be a recurring theme in Per Shapiro's NyD contributions (again, if my Swedish does not fail me). This seems to me like evidence that nepotism did not influence the decision.
Like Elliot, while I think the FLI team has handled the whole thing just fine, I also find it confusing that people think the far-right connections of Nya Dagbladet would have been difficult to identify. I didn't know anything about Nya Dagbladet in advance, so I checked:
The complete English Wikipedia article on Nya Dagbladet:
"Nya Dagbladet is a Swedish online daily newspaper founded in 2012,[1] which has a historical connection to the National Democrats, a far-right political party in Sweden. It publishes articles promoting conspiracy theories about the Holo...
Note that the English page was created in January of this year. The material on the Swedish page about Nordiska motståndsrörelsen, vaccination scepticism, and pseudoscience was added on September 14, after FLI signed the letter of intent.
Just to give you a data point from a non-native speaker who likes literature and languages: this quote wasn't a joy to read for me, since it would have taken me a very long time to understand what it is about if I had not known the context. So I am not sure what you mean by the best linguistic traditions – I think simple language can be elegant too.
I think for some of the questions, the information would take an effort to collect. For example, I don't think anyone in CEA or EAIF knows the answer to "How old is EA Finland?" (and many members of EA Finland would not know this either). Estimating the size of EA Finland is also a little tricky. When we applied for funding, we gave many different numbers, such as the number of active volunteers and the number of people on our Telegram channel, so EAIF would know these numbers, in case they would want to start collecting an info list like the sug...
I share your feeling about free books! It is not very common for Finnish (student) organizations to give out free stuff unless it is for advertisement, so I would also be suspicious if somebody handed me a free book, and would probably not read it unless I was very interested in the contents beforehand. As an alternative, we've been selling books for a token sum, hoping it makes students value them more. We've also lent out books so that they can be read by many people.
I've heard Dutch people pride themselves about their straightforwardness, so I can ...
right! I think many of the same benefits can be gained from starting to attend university courses while in high school and/or studying at a faster pace than officially recommended. But I realize now, typing this, that this is also not commonly possible outside the Nordics. (And it could be hard for an upper secondary school student who does not live in a university city. OTOH, moving to a different city to live on your own can be harder for some people at 17 than at 19, even if they are very bright.)
The most common age for Finns to finish upper secondary school is the year they turn 19, so the lower bound comes from having done the matriculation exam, not because the university would not allow younger students. (So if you've started your school a year earlier than most kids or skipped a grade or done your upper secondary school in 2 instead of 3 years, you could start university the year you turn 18; if you've done several of these things then even younger.)
But gap years are quite common because trying an entrance exam several times can make it ...
right, thanks for the clarification! Do you think this is mostly common among EA/rationalist folks, or do people from many social circles like group houses? In Finland, flatsharing for non-necessity reasons is somewhat considered an alternative lifestyle, so most working adults who do it are "purposefully" questioning the nuclear family unit. I'm trying to understand whether living in an EA group house would come off as a "statement" or just something people sometimes do.
I am not the best person to recommend you readings in philosophy, but I can try to elaborate on how I understand this sentence to refer to some common consequentialist perspectives. I hope I'm not repeating something that is already obvious to you.
Interesting perspective! I have definitely noticed how mood-lifting it can be to help others, especially if it is something I can do easily (such as translating a phrase to a coworker from a language they don't speak).
I also notice I am somewhat wary of using helping close ones as a form of emotional regulation because I've seen a fair share of co-dependency issues of different levels. Mostly in the form where someone gets closer to a person who is not doing that well, tries to take care of them and ends up in a space where their own mood is largely dictated by how the loved one feels. I'm wondering if that also has some roots in evolutionary psychology or if it is just "overdoing" the method you suggested.
I generally agree with your comment but I want to point out that for a person who does not feel like their achievements are "objectively" exceptionally impressive Luisa's article can also come across as intimidating: "if a person who achieved all of this still thinks they are not good enough, then what about me?"
I think Olivia's post is especially valuable because she dared to post even when she does not have a list of achievements that would immediately convince readers that her insecurity/worry is all in her head. It is very relatable to a lot of folks (for example me) and I think she has been really brave to speak up about this!
I agree. I would actually go further and say that bringing imposter syndrome into it is potentially unhelpful, as it's in some ways the opposite issue - imposter syndrome is about when you are as smart/competent/well-suited to a role as your peers, but have a mistaken belief that you aren't. What Olivia's talking about is actual differences between people that aren't just imagined due to worry. I could see it come off as patronising/out-of-touch to some, although I know it was meant well.
thanks for the info! I didn't really get the part about ambitiousness – how is that connected to the amount of time participants want to spend on the event? (I can interpret this as either "they wouldn't do anything else anyway, so they might as well be here the whole weekend" or "they don't want to commit to anything for longer than 1 day since they are not used to committing to things".)
Thanks for this write-up! To me, a 10-hour hackathon sounds rather short, since with the lectures and evaluations it only leaves a few hours for the actual hacking, but I have only participated in hackathons where people actually programmed something, so maybe that makes the difference? Did the time feel short to you? Did you get any feedback on the event length from the participants, or did somebody say they won't participate because the time commitment seems too big (since you mention it was a big time commitment for them)?
Can you recommend a place where I could find this information, or would that spoil your test? I have looked into this in various places, but I still have no idea what the current best set of AI forecasts or P(AI doom) would be.
I'm really glad this post was useful to you :)
Thinking about this quote now, I think I should have written down more explicitly that it is possible to care a lot about having a positive impact without making it the definition of your self-worth; and that it is good to have positive impact as your goal, and normal to be sad about not reaching your goals as you'd like to, but this sadness does not have to come with a feeling of worthlessness. I am still learning how to actually separate these on an emotional level.
Your Norwegian example is really inspiring in this space!
I just want to point out that in some places a bank account number to donate to is not going to be enough - for example in Finland the regulations on collecting donations and handling donated money are quite strict, so better check your local requirements before starting to collect money.
Hi Otto, I have been wanting to reply to you for a while, but I feel like my opinions keep changing, so writing coherent replies is hard (though having fluid opinions in my case seems like a good thing). For example, while I still think a precollected set of text as the only data source is insufficient for any general intelligence, maybe training a model on text and then having it interact with humans could lead it to connect words to referents (real-world objects), and maybe it would not necessarily need many reference points if the language model is rich en...
Yeah, I think we agree on this. I think I want to write more later about what communication strategies might help people actually voice scepticism/concerns even if they are afraid of not meeting some standard of elaborateness.
My mathematics example actually tried to be about this: in my university, the teachers tried to make us forget the teachers are more likely to be right, so that we would have to think about things on our own and voice scepticism even if we were objectively likely to be wrong. I remember another lecturer telling us: "if you finish a...
Hi Otto!
I agree that the example was not that great, and that a lack of data sources can definitely be countered with general intelligence, like you describe. So it could be possible that a generally intelligent agent could plan how to gather the needed data. My gut feeling is still that it is impossible to develop such intelligence based on one data source (for example text, however large the amount), but of course there are already technologies that combine different data sources (such as self-driving cars), so this clearly is not the limit either. I...
This intuition turned out to be harder to explain than I thought, and got me thinking a lot about how to define "generality" and "intelligence" (like all talk about AGI does). But say, for example, that you want to build an automatic doctor that is able to examine a patient and diagnose which illness they most likely have. This is not very general in the sense that you can imagine this system as a function that "reads all kinds of input about the person, outputs a diagnosis", but I still think it provides an example of the difficulty of collecting data.
There are some...
Thanks! It will be difficult to write an authentic response to TAP since these other responses were originally not meant to be public but I will try to keep the same spirit if I end up writing more about my AI safety journey.
I actually do find AI safety interesting; it just seems that I think about a lot of stuff differently than many people in the field, and it is hard for me to pinpoint why. But the main motivations for spending a lot of time on forming personal views about AI safety are:
Yeah, I understand why you'd say that. However, it seems to me that there are other limitations to AGI than finding the right algorithms. As a data scientist, I am biased towards thinking about available training data. Of course, there will probably be progress on this as well in the future.
Hi, just wanted to drop in to say:
Hmm, with a non-zero probability in the next 100 years, the likelihood over a longer time frame should be bigger, given that nothing makes developing AGI more difficult as more time passes – and I would imagine it is more likely to get easier than harder (unless something catastrophic happens). In other words, I don't think it is certainly impossible to build AGI, but I am very pessimistic about anything like current ML methods leading to AGI. A lot of people in the AI safety community seem to disagree with me on that, and I have not completely understood why.
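To illustrate the monotonicity point numerically: under the simplifying (and hypothetical) assumption of a constant per-year probability of the event, the cumulative probability can only grow with the time horizon – the numbers below are placeholders, not actual forecasts.

```python
def cumulative_prob(p_per_year: float, years: int) -> float:
    """P(event happens at least once within `years`), assuming an
    independent, constant per-year probability p_per_year.
    P = 1 - (1 - p)^T, which is non-decreasing in T."""
    return 1 - (1 - p_per_year) ** years

# Placeholder per-year probability, chosen only for illustration.
p = 0.001
p_100 = cumulative_prob(p, 100)
p_500 = cumulative_prob(p, 500)
assert p_500 >= p_100  # a longer horizon is never less likely
```

The same ordering holds even if the yearly probability varies, as long as it stays non-negative: the cumulative probability over 500 years can never be smaller than over the first 100.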
Hi Caleb! Very nice to read your reflection on what might make you think what you think. I related to many things you mentioned, such as wondering how much I think intelligence matters because of having wanted to be smart as a kid.
You understood correctly that, intuitively, I think AI is less of a big deal than some people feel. This probably has a lot to do with my job, because it includes making estimates of whether problems can be solved with current technology given certain constraints, and it is better to err on the side of caution. Previously, one of my ta...
Hi Otto! Thanks, it was nice talking to you at EAG. (I did not include any interactions or information from this weekend's EAG in the post because I had written it before the conference and felt it should not be any longer than it already was, but I wanted to wait until the friends who are described as "my friends" in the post had read it before publishing.)
I am not that convinced AGI is necessarily the most important component to x-risk from AI – I feel like there could be significant risks from powerful non-generally intelligent systems, but of cour...
I feel like everyone I have ever talked about AI safety with would agree on the importance of thinking critically and staying skeptical, and this includes my facilitator and cohort members from the AGISF programme.
I think a 1.5h discussion session between 5 people who have read 5 texts does not really allow going deep into any topic, since it is just ~3 minutes per participant per text on average. I think these kinds of programs are great for meeting new people, clearing up misconceptions, and providing structure/accountability for actually readin...
Like I said it is based on my gut feeling, but fairly sure.
Is it your experience that adding more complexity and concatenating different ML models results in better quality and generality, and if so, in what domains? I would have the opposite intuition, especially in NLP.
Also, do you happen to know why "prosaic" practices are called "prosaic"? I have never understood the connection to the dictionary definition of "prosaic".
I'm still quite uncertain about my beliefs, but I don't think you got them quite right. Maybe a better summary is that I am generally pessimistic about humans ever being able to create AGI, and especially about humans being able to create safe AGI (it is a special case, so it should probably be harder than creating any AGI). I also think that relying heavily on strong unsafe systems (AI-powered or not) can be an x-risk. This is why it is easier for me to understand why AI governance is a way to try to reduce x-risk (at least if actors in the world want to rely on unsaf...
That's right, thanks again for answering my question back then!
Maybe I formulated my question wrong, but I understood from your answer that you got interested in AI safety first, and only then in DS/ML (you mentioned you had a CS background before, but not your academic AI experience). This is why I did not include you in this sample of 3 persons – I wanted to narrow the search to people who had a more AI-specific background before getting into AI safety (not just CS). It is true that you did not mention Superintelligence either, but interesting to h...
Thanks for giving me permission, I guess I can use this if I ever need the opinion of "the EA community" ;)
However, I don't think I'm ready to give up on trying to figure out my stance on AI risk just yet, since I still estimate it is my best shot at forming a more detailed understanding of any x-risk, and understanding x-risks better would be useful for establishing better opinions on other cause prioritization issues.
Generally, I find links a lot less frustrating if they are written by the person who sends me the link :) But now I have read the link you gave and don't know what I am supposed to do next, which is another reason I sometimes find link-sharing a difficult means of communication. Like, do I comment on specific parts of your post, or describe how reading it influenced me, or how does the conversation continue? (If you find my reaction interesting: I was mostly unmoved by the post; I think I had seen most of the numbers and examples before, and there were some sentences and extrapolations that I found quite off-putting, but I think the "minimalistic" style was nice.)
It would be nice to call and discuss if you are interested.
Glad it may have sparked some ideas for any discussions you might be having in Israel :) For us in Finland, I feel like I at least personally need to get some more clarity on how to balance EA movement-building efforts with possible cause-prioritization-related differences between movement builders. I think this is non-trivial because forming a consensus seems hard enough.
Curious to read any object-level response if you feel like writing one! If I end up writing any "Intro to AI Safety" thing it will be in Finnish so I'm not sure if you will understand it (it would be nice to have at least one coherent Finnish text about it that is not written by an astronomer or a paleontologist but by some technical person).
To clarify, my friends (even if they are very smart) did not come up with all AI safety arguments by themselves, but started to engage with AI safety material because they had already been looking at the world and thinking "hmm, looks like AI is a big thing and could influence a lot of stuff in the future, hope it changes things for the good". So they got quickly on board after hearing that there are people seriously working on the topic, and it made them want to read more.
I think you understood me the same way my friend did in the second part of the prologue, so I apparently give this impression. But to clarify, I am not certain that AI safety is impossible (I do think it is hard), and the implications of that depend a lot on how much power AI systems are given in the end, and on how much of the damage they might cause would be due to their being unsafe versus, for example, misuse, like you said.
Interesting to hear your personal opinion on the persuasiveness of Superintelligence and Human Compatible! And thanks for designing the AGISF course, it was useful.
Thanks for the nice comment! Yes, I am quite uncomfortable with uncertainty and am trying to work on that. Also, I feel like by now I am pretty involved in EA and ultimately feel welcome enough to post a story like this here (or I feel like EA appreciates different views enough, despite my also feeling this pressure to conform at the same time).
A pretty good summary, but to clarify point 3: we had "external" volunteers (who are not deeply involved with EA) take the role of practice advisees, a method I find more realistic than having the volunteer advisors in training roleplay as advisees themselves.