All of Ada-Maaria Hyvärinen's Comments + Replies

A pretty good summary, but to clarify point 3: we had "external" volunteers (who are not deeply involved with EA) taking the role of practice advisees, a method I find more realistic than the volunteer advisors in training roleplaying as advisees themselves.

I obviously don't have access to Mikkola's full interview transcripts, but when I think back to EA Helsinki in 2021, it is possible that none of us who were interviewed told her we'd do anything like that, and that we only listed serious-sounding stuff such as donating and career planning as our EA actions :) This, again, shows the limitations of inspecting a whole movement with a limited interview study like this.

Interesting write-up, especially the part about adapting EA materials to French culture! Thanks for writing this :) 

Thanks for asking! I meant to write it in the negative – my point with this question was to encourage thinking about collaborators outside the EA organization space. This is because I think local groups can have a tendency to think only about (international) EA orgs when considering potential collaborators, and sometimes it could be valuable for them to collaborate with (local) organizations (in addition to EA orgs). Or maybe the question would show that, in the imaginary case, no organizations that would be interesting collaborators for the local group even exist – this would make the priorities of the local group clearer, and maybe encourage members to start new local projects.

3
Melanie Brennan
11mo
Thanks for your reply, Ada. I completely understand what you mean, and it's true that sometimes the focus is too much on getting to work for the most famous international EA orgs and not enough on local orgs, companies, charities, etc. Thanks for sharing (and clarifying) this insight!

Yeah, in Finnish contexts a (nude) sauna is a normal option for the afterparty of a professional conference or similar :) But in these cases, there are separate sauna turns (or different saunas) for men and women, just like at Finnish public swimming pools. At EA Finland events we have so far followed a quite usual Finnish student and hobby group policy of having separate sauna turns for non-men and non-women, plus a mixed turn where everyone is welcome, with the option but not the obligation to wear a swimsuit.
 

4
Jeff Kaufman
1y
Is the idea that non-binary people are welcome at either?

We personally also recommend engaging with the writings of Eliezer, Paul, Nate, and John. We do not endorse all of their research, but they all have tackled the problem, and made a fair share of their reasoning public. If we want to get better together, they seem like a good start.

 

I realize this is a cross post and your original audience might know where to find all these recommendations even without further info, but if you want new people to look into their writings, it would be better to at least use full names of the authors you recommend.

7
Andrea_Miotti
1y
Thanks a lot and good point, edited to include full names and links!
7
rvnnt
1y
Eliezer Yudkowsky, Paul Christiano, Nate Soares (so8res), John Wentworth (johnswentworth).

Onni works for Rethink Priorities and is on the board of EA Finland, but no longer actively participates in community building in a hands-on, practical way, such as organizing events. My impression is that he is relieved that other people are doing it now :)

Good that you asked, since one thing I wanted to highlight with this story was that it is possible to succeed at community building even if it is not your favorite thing or the best personal fit for you considering all abstract possibilities – if you are the only per... (read more)

Good observation, I hadn't noticed that! It certainly makes it harder for non-Swedish speakers to, for example, realize "I should check whether there is a connection to Nationaldemokraterna" if there is no English page pointing in that direction.

As for Swedish speakers: if the letter of intent had been signed because of nepotism, the vaccination skepticism part probably would not have come as a surprise, since it seems to be a recurring theme in Per Shapiro's NyD contributions (again, if my Swedish does not fail me). To me this seems like evidence that nepotism did not influence the decision.

Like Elliot, while I think the FLI team has handled the whole thing just fine, I also find it confusing that people think the far-right connections of Nya Dagbladet would have been difficult to identify. I didn't know anything about Nya Dagbladet in advance, so I checked it:

The complete English Wikipedia article on Nya Dagbladet:

"Nya Dagbladet is a Swedish online daily newspaper founded in 2012,[1] which has a historical connection to the National Democrats, a far-right political party in Sweden. It publishes articles promoting conspiracy theories about the Holo... (read more)

Note that the English page was created in January of this year. The material on the Swedish page about Nordiska motståndsrörelsen, vaccination scepticism and pseudoscience was added on September 14, after FLI signed the letter of intent.

Just to give you a data point from a non-native speaker who likes literature and languages: this quote wasn't a joy to read for me, since it would have taken me a very long time to understand what it is about if I had not known the context. So I am not sure what you mean by the best linguistic traditions – I think simple language can be elegant too.

3
Dzoldzaya
1y
It is a more joyful sentence in the context, admittedly. Simple language can be elegant, of course, and there are excellent writers with a range of different styles and levels of simplicity. I wouldn't dream of saying that everyone should be striving for 200-word sentences, nor that we should be imitating Victorian-era philosophy, but I do think that the trends of relentless simplifying and trimming that editors and style guides foist upon budding writers have diminished the English language.

I think for some of the questions the information would take an effort to collect. For example, I don't think anyone at CEA or EAIF knows the answer to "How old is EA Finland?" (and many members of EA Finland would not know this either). Estimating the size of EA Finland is also a little tricky. When we applied for funding, we gave many different numbers, such as the number of active volunteers and the number of people on our Telegram channel, so these numbers EAIF would know, in case they would want to start collecting an info list like the sug... (read more)

I share your feeling about free books! It is not super common for Finnish (student) organizations to give out free stuff unless it is for advertisement, so I would also be suspicious if somebody handed me a free book, and I would probably not read it if I was not very interested in the contents beforehand. As an alternative we've been selling books for a token sum, hoping it makes students value them more. We've also lent out books so that they can be read by many people.

I've heard Dutch people pride themselves about their straightforwardness, so I can ... (read more)

2
Irene H
2y
The idea of selling books for a token sum is really interesting! I usually offer to loan books first indeed. And I also definitely agree that EA people from different backgrounds can all add something. That's also why I'm so excited about all the local groups that have popped up everywhere recently :).

Right! I think many of the same benefits can be gained by starting to attend university courses while in high school and/or studying at a faster pace than the official recommendation. But I realize now, typing this, that this is also not that commonly possible outside the Nordics. (And it could be hard for an upper secondary school student who does not live in a university city. OTOH, moving to a different city to live on your own can be harder for some people at 17 than at 19, even if they are very bright.)

3
Linch
2y
In the US, it's (relatively) common to attend university courses while in high school, but not that common to attend courses from top universities while in high school (and in some cases this is almost literally impossible, e.g. because the best universities are physically too far away).

The most common age for Finns to finish upper secondary school is the year they turn 19, so the lower bound comes from having done the matriculation exam, not because the university would not allow younger students. (So if you've started school a year earlier than most kids, or skipped a grade, or done your upper secondary school in 2 instead of 3 years, you could start university the year you turn 18; if you've done several of these things, then even younger.)

But gap years are quite common because trying an entrance exam several times can make it ... (read more)

2
Owain_Evans
2y
Benefits:
  • Meeting more like-minded people sooner
  • (For people serious about academics) In some fields (like AI), learning is faster and more efficient if the teachers are active researchers. You can do research as an undergrad.
  • For people focused on work, you can start working earlier, so it's easier to try more jobs/internships. (Many jobs require a BA/MA, so you can't do as much of this before starting uni.)

Right, thanks for the clarification! Do you think this is mostly common for EA/rationalist folks, or do people from many social circles like group houses? In Finland, flatsharing for reasons other than necessity is somewhat considered an alternative lifestyle, so most working adults who do it are "purposefully" questioning the nuclear family unit. I'm trying to understand if living in an EA group house would come off as a "statement" or just something people sometimes do.

3
Owain_Evans
2y
In the Bay Area, it's very common for younger people to share accommodation (and not an alternative lifestyle). But this is often a set of somewhat random people living together and not an intentional group house of like-minded people. As people get older and have higher incomes, people are less likely to share (AFAIK).  So EA group houses do indicate an alternative lifestyle ... but in places like SF and Berkeley such alternative lifestyles are also pretty common outside EA.
3
Stefan_Schubert
2y
I think that among non-EAs, flatsharing is much more common in the UK than in the Nordic countries. Relatedly, average household size is afaik bigger. https://landgeist.com/2021/07/27/household-size/ I think this difference largely ultimately has economic causes.
2
alex lawsen (previously alexrjl)
2y
I live in London and have quite a lot of EA and non-EA friends/colleagues/acquaintances, and my impression is that group houses "by choice" are much more common among the EAs. It's noteworthy that group houses are common among students and lower paid/early stage working professionals for financial reasons though.

I am not the best person to recommend you readings in philosophy, but I can try to elaborate on how I understand this sentence to refer to some common consequentialist perspectives. I hope I'm not repeating something that is already obvious to you.

  • as I understand it, from the point of view of utilitarianism this sentence is not true (since it describes a "right" to live, and utilitarianism is not a rights-based ethical system)
  • but in (total hedonistic) utilitarianism, the net-positive experience of being alive and happy has value. In this sense, the person being ali
... (read more)
2
pete
2y
Really thoughtful responses, thank you. I tend to think the idea of intrinsic worth popular in the West stems from Christian influence, but I haven't found a defense for it outside Christian frameworks.

Interesting perspective! I have definitely noticed how mood-lifting it can be to help others, especially if it is something I can do easily (such as translating a phrase for a coworker from a language they don't speak).

I also notice I am somewhat wary of using helping close ones as a form of emotional regulation, because I've seen a fair share of co-dependency issues at different levels. Mostly in the form where someone gets closer to a person who is not doing that well, tries to take care of them, and ends up in a place where their own mood is largely dictated by how the loved one feels. I'm wondering if that also has some roots in evolutionary psychology or if it is just "overdoing" the method you suggested.

I generally agree with your comment, but I want to point out that for a person who does not feel like their achievements are "objectively" exceptionally impressive, Luisa's article can also come across as intimidating: "if a person who achieved all of this still thinks they are not good enough, then what about me?"

I think Olivia's post is especially valuable because she dared to post even though she does not have a list of achievements that would immediately convince readers that her insecurity/worry is all in her head. It is very relatable to a lot of folks (for example me) and I think she has been really brave to speak up about this!

I agree. I would actually go further and say that bringing imposter syndrome into it is potentially unhelpful, as it's in some ways the opposite issue - imposter syndrome is about when you are as smart/competent/well-suited to a role as your peers, but have a mistaken belief that you aren't. What Olivia's talking about is actual differences between people that aren't just imagined due to worry. I could see it come off as patronising/out-of-touch to some, although I know it was meant well. 

Thanks for the info! I didn't really get the part about ambitiousness – how is that connected to the amount of time participants want to spend at the event? (I can interpret this as either "they wouldn't do anything else anyway, so they might as well be here the whole weekend" or "they don't want to commit to anything longer than 1 day, since they are not used to committing to things".)

2
RobPra
2y
I encountered both types of participants (the ones that showed up because they had 'not much better to do' and the ones that are not used to committing). My impression was that most participants were ambitious and that they liked a challenge. The effect of the event's length on potential participants with varying levels of ambition can be a bit ambiguous here. With a longer event it is also more likely that potential participants have other things planned during part of it. My gut feeling says that making the commitment bigger than a day for an introduction hackathon (without coding) makes it less likely for people to show up.

Thanks for this write-up! To me, a 10-hour hackathon sounds rather short, since with the lectures and evaluations it only leaves a few hours for the actual hacking, but I have only participated in hackathons where people actually programmed something, so maybe that makes the difference? Did the time feel short to you? Did you get any feedback on the event length from the participants, or did somebody say they wouldn't participate because the time commitment seemed too big (since you mention it was a big time commitment for them)?

2
RobPra
2y
Great questions! Of the 10 hours, only 5-6 were spent hacking, which felt short. Some participants mentioned in the feedback form that they would have loved to see a 2-day event, whereas others thought the length was great. When we marketed the event, some students mentioned they preferred to spend the weekend not doing much, or that they thought of themselves as not being ambitious enough. I think this amount of time balanced well the quality of the event, the entry barrier for participants (time commitment), the costs of venue and food, and the ask from volunteers. Next time we could do a networking & briefing session the evening before the hackathon (perhaps online if necessary?); that way we could add some hacking time during the day itself.

Can you recommend a place where I could find this information, or would that spoil your test? I have looked into this in various places but I still have no idea what the current best set of AI forecasts or P(AI doom) would be.

I'm really glad this post was useful to you :)

Thinking about this quote now, I think I should have written down more explicitly that it is possible to care a lot about having a positive impact without making it the definition of your self-worth; and that it is good to have positive impact as your goal, and normal to be sad about not reaching your goals as you'd like to, but this sadness does not have to come with a feeling of worthlessness. I am still learning how to actually separate these on an emotional level.

Your Norwegian example is really inspiring in this space!

I just want to point out that in some places a bank account number to donate to is not going to be enough – for example, in Finland the regulations on collecting donations and handling donated money are quite strict – so better to check your local requirements before starting to collect money.

Hi Otto, I have been wanting to reply to you for a while, but I feel like my opinions keep changing, so writing coherent replies is hard (though having fluid opinions in my case seems like a good thing). For example, while I still think a precollected set of text as the only data source is insufficient for any general intelligence, maybe training a model on text and then having it interact with humans could lead it to connect words to referents (real-world objects), and maybe it would not necessarily need many reference points if the language model is rich en... (read more)

Yeah, I think we agree on this. I want to write out more later about what communication strategies might help people actually voice scepticism/concerns even if they are afraid of not meeting some standard of elaborateness.

My mathematics example actually tried to be about this: at my university, the teachers tried to make us forget that the teachers are more likely to be right, so that we would have to think about things on our own and voice scepticism even if we were objectively likely to be wrong. I remember another lecturer telling us: "if you finish a... (read more)

Hi Otto!

I agree that the example was not that great, and that a lack of data sources can definitely be countered with general intelligence, like you describe. So it could definitely be possible that a generally intelligent agent could plan how to gather the needed data. My gut feeling is still that it is impossible to develop such intelligence based on one data source (for example text, in however large amounts), but of course there are already technologies that combine different data sources (such as self-driving cars), so this clearly is also not the limit. I... (read more)

3
Otto
2y
Hey, I wasn't saying it wasn't that great :) I agree that the difficult part is to get to general intelligence, also regarding data. Compute, algorithms, and data availability are all needed to get to this point. It seems really hard to know beforehand what kinds and amounts of algorithms and data one would need.

I agree that basically only one source of data, text, could well be insufficient. There was a post I read on a forum somewhere (could have been here) from someone who let GPT3 solve questions including things like 'let all odd rows of your answer be empty'. GPT3 failed at all these kinds of assignments, showing a lack of comprehension. Still, the 'we haven't found the asymptote' argument from OpenAI (intelligence does increase with model size and that increase doesn't seem to stop, implying that we'll hit AGI eventually) is not completely unconvincing either. It bothers me that no one can completely rule out that large language models might hit AGI just by scaling them up. It doesn't seem likely to me, but from a risk management perspective, that's not the point.

An interesting perspective I'd never heard before from intelligent people is that AGI might actually need embodiment to gather the relevant data. (They also think it would need social skills first – also an interesting thought.)

While it's hard to know how much (and what kind of) algorithmic improvement and data is needed, it seems doable to estimate the amount of compute needed, namely what's in a brain plus or minus a few orders of magnitude. It seems hard for me to imagine that evolution can be beaten by more than a few orders of magnitude in algorithmic efficiency (the other way round is somewhat easier to imagine, but still unlikely in a hundred-year timeframe). I think people have focused on compute because it's most forecastable, not because it would be the only part that's important. Still, there is a large gap between what I think are essentially thought experiments (relevant ones though

Thanks! And thank you for the research pointers.

This intuition turned out to be harder to explain than I thought, and got me thinking a lot about how to define "generality" and "intelligence" (like all talk about AGI does). But say, for example, that you want to build an automatic doctor that is able to examine a patient and diagnose what illness they most likely have. This is not very general in the sense that you can imagine this system as a function from "read all kinds of input about the person" to "output diagnosis", but I still think it provides an example of the difficulty of collecting data.

There are some... (read more)

2
Otto
2y
Hi AM, thanks for your reply. Regarding your example, I think it's quite specific, as you notice too. That doesn't mean I think it's invalid, but it does get me thinking: how would a human learn this task? A human intelligence wasn't trained on many specific tasks in order to be able to do them all. Rather, it first acquired general intelligence (apparently, somewhere), and was later able to apply this to an almost infinite number of specific tasks with typically only a few examples needed.

I would guess that an AGI would solve problems in a similar way: first learn general intelligence (somehow), then learn specific tasks quickly with little data needed. For your example, if the AGI really needed to do this task, I'd say it could find ways itself to gather the data, just like a human who wanted to learn this skill would, after first acquiring some form of general intelligence. A human doctor might watch the healthily moving joint, gathering visual data; might hear the joint moving, gathering audio data; or might put her hand on the joint, gathering sensory data. The AGI could similarly film and record the healthy joint moving with already available cameras and microphones, or use data already available online, or, worst case, send in a drone with a camera and a sound recorder. It could even send in a robot to gather sensory data if needed.

Of course, current AI lacks certain skills that are necessary to solve such a general problem in such a general way, such as really understanding the meaning behind a question that is asked, being able to plan a solution (including acquiring drones and robots in the process), and probably others. These issues would need to be solved first, so there is still a long way to go. But with the manpower, investment, and time (e.g. 100 years) available, I think we should assign a probability of at least tens of percents that this type of general intelligence including planning and acting effectively in the rea

Thanks! It will be difficult to write an authentic response to TAP since these other responses were originally not meant to be public but I will try to keep the same spirit if I end up writing more about my AI safety journey.

I actually do find AI safety interesting; it just seems that I think about a lot of stuff differently than many people in the field, and it is hard for me to pinpoint why. But the main motivations for spending a lot of time on forming personal views about AI safety are:
 

  • I want to understand x-risks better, AI risk is considered import
... (read more)
2
rachelAF
2y
Thank you for explaining more. In that case, I can understand why you'd want to spend more time thinking about AI safety. I suspect that much of the reason "understanding the argument is so hard" is that there isn't a definitive argument – just a collection of fuzzy arguments and intuitions. The intuitions seem very, well, intuitive to many people, and so they become convinced. But if you don't share these intuitions, then hearing about them doesn't convince you.

I also have an (academic) ML background, and I personally find some topics (like mesa-optimization) to be incredibly difficult to reason about. I think that generating more concrete arguments and objections would be very useful for the field, and I encourage you to write up any thoughts that you have in that direction!

(Also, a minor disclaimer that I suppose I should have included earlier: I provided technical feedback on a draft of TAP, and much of the "AGI safety" section focuses on my team's work. I still think that it's a good concrete introduction to the field, because of how specific and well-cited it is, but I am also probably somewhat biased.)

Yeah, I understand why you'd say that. However, it seems to me that there are other limitations to AGI than finding the right algorithms. As a data scientist, I am biased toward thinking about available training data. Of course there is probably going to be progress on this as well in the future.

3
Otto Barten
2y
Could you explain a bit more about the kind of data you think will be needed to train an AGI, and why you think it will not be available in the next hundred years? I'm genuinely interested – actually, I'd love to be convinced of the opposite... We can also DM if you prefer.

Hi, just wanted to drop in to say:

  • You had an experience that you describe as burn-out less than a week ago – it's totally ok not to be fine yet! It's good you feel better but take the time you need to recover properly. 
  • I don't know how old you are but it is also ok to feel overwhelmed by EA later when you no longer feel like describing yourself as "just a kid". Doing your best to make the world a better place is hard for a person of any age.
  • The experience you had does not necessarily mean you would not be cut out for community building. You've now lea
... (read more)
3
Kirsten
2y
Yes, please do take time to rest and recover!

Hmm, with a non-zero probability in the next 100 years, the likelihood over a longer time frame should be bigger, given that there is nothing that makes developing AGI more difficult as more time passes – and I would imagine it is more likely to get easier than harder (unless something catastrophic happens). In other words, I don't think it is certainly impossible to build AGI, but I am very pessimistic about anything like current ML methods leading to AGI. A lot of people in the AI safety community seem to disagree with me on that, and I have not completely understood why.
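To spell that first point out: as long as "AGI by year T" can only become more likely as T grows, any non-zero probability on the 100-year window is a lower bound on the all-time probability, roughly:

    P(AGI ever) = lim over T of P(AGI by year T) ≥ P(AGI within 100 years) > 0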

2
Otto Barten
2y
So although we seem to be relatively close in terms of compute, we don't have the right algorithms yet for AGI, and no one knows if and when they will be found. If no one knows, I'd say a certainty of 99% that they won't be found in a hundred years, with thousands of people trying, is overconfident.
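(For a very rough sketch of what "relatively close in terms of compute" could mean, take common ballpark figures for the brain – these numbers are assumptions, and only the orders of magnitude matter:

    ~10^14–10^15 synapses × ~1–100 signals per synapse per second ≈ 10^14–10^17 ops/s,

which is within a few orders of magnitude of today's largest computing clusters.)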

Hi Caleb! Very nice to read your reflection on what might make you think what you think. I related to many things you mentioned, such as wondering how much I think intelligence matters because of having wanted to be smart as a kid.

You understood correctly that, intuitively, I think AI is less of a big deal than some people feel. This probably has a lot to do with my job, because it includes estimating whether problems can be solved with current technology given certain constraints, where it is better to err on the side of caution. Previously, one of my ta... (read more)

Hi Otto! Thanks, it was nice talking to you at EAG. (I did not include in the post any interactions/information from this weekend's EAG: I had written the post before the conference and felt it should not be any longer than it already was, but I wanted to wait until the friends who are described as "my friends" in the post had read it before publishing.)

I am not that convinced AGI is necessarily the most important component of x-risk from AI – I feel like there could be significant risks from powerful non-generally-intelligent systems, but of cour... (read more)

3
Otto
2y
Thanks for the reply, and for trying to attach numbers to your thoughts! So our main disagreement lies in (1). I think this is a common source of disagreement, so it's important to look into it further. Would you say that the chance to ever build AGI is similarly tiny? Or is it just the next hundred years? In other words, is this a possibility or a timeline discussion?

I feel like everyone I have ever talked about AI safety with would agree on the importance of thinking critically and staying skeptical, and this includes my facilitator and cohort members from the AGISF programme. 

I think a 1.5h discussion session between 5 people who have read 5 texts does not really allow going deep into any topic, since it is just ~3 minutes per participant per text on average. I think these kinds of programs are great for meeting new people, clearing up misconceptions and providing structure/accountability for actually readin... (read more)

1
Oliver Sourbut
2y
OK, this is the terrible, terrible failure mode which I think we are both agreeing on (emphasis mine). By 'a sceptical approach' I basically mean 'the thing where we don't do that', because there is not enough epistemic credit in the field, yet, to expect all (tentative, not-consensus-yet) conclusions to be definitely right. In traditional/undergraduate mathematics, it's different – almost always, when you don't understand or agree with the professor, she is simply right and you are simply wrong or confused! This is a justifiable perspective based on the enormous epistemic weight of all the existing work on mathematics. I'm very glad you call out the distinction between performing skepticism and actually doing it.

Like I said, it is based on my gut feeling, but I am fairly sure.

Is it your experience that adding more complexity and concatenating different ML models results in better quality and generality, and if so, in what domains? I would have the opposite intuition, especially in NLP.

Also, do you happen to know why "prosaic" practices are called "prosaic"? I have never understood the connection to the dictionary definition of "prosaic".

I'm still quite uncertain about my beliefs, but I don't think you got them quite right. Maybe a better summary is that I am generally pessimistic both about humans ever being able to create AGI, and especially about humans being able to create safe AGI (it is a special case, so it should probably be harder than AGI in general). I also think that relying a lot on strong unsafe systems (AI-powered or not) can be an x-risk. This is why it is easier for me to understand why AI governance is a way to try to reduce x-risk (at least if actors in the world want to rely on unsaf... (read more)

2
colin
2y
Ah, yeah, I misread your opinion of the likelihood that humans will ever create AGI. I believe it will happen eventually unless AI research stops for some exogenous reason (civilizational collapse, a ban on development, etc.). Important assumptions I am making:
  • General intelligence is all computation, so it isn't substrate-dependent
  • The more powerful an AI is, the more economically valuable it is to its creators
  • Moore's Law will continue, so more compute will be available
  • If other approaches fail, we will be able to simulate brains with sufficient compute
  • Fully simulated brains will be AGI
I'm not saying that I think this would be the best, easiest, or only way to create AGI, just that if every other attempt fails, I don't see what would prevent this from happening – particularly since we are already able to simulate portions of a mouse brain. I am also not claiming here that this implies short timelines for AGI; I don't have a good estimate of how long this approach would take.

That's right, thanks again for answering my question back then! 

Maybe I formulated my question wrong, but I understood from your answer that you got interested in AI safety first, and only then in DS/ML (you mentioned you had a CS background before, but not your academic AI experience). This is why I did not include you in this sample of 3 persons – I wanted to narrow the search to people who had a more AI-specific background before getting into AI safety (not just CS). It is true that you did not mention Superintelligence either, but interesting to h... (read more)

Thanks for giving me permission – I guess I can use this if I ever need the opinion of "the EA community" ;)

However, I don't think I'm ready to give up on trying to figure out my stance on AI risk just yet, since I still estimate it is my best shot at forming a more detailed understanding of any x-risk, and understanding x-risks better would be useful for forming better opinions on other cause prioritization issues.

2
Geoffrey Irving
2y
That is also very reasonable! I think the important part is to not feel too bad about the possibility of never having a view (there is a vast sea of things I don't have a view on), not least because I think it actually increases the chance of getting to the right view if more effort is spent. (I would offer to chat directly, as I'm very much part of the subset of safety close to more normal ML, but am sadly over capacity at the moment.)

Generally, I find links a lot less frustrating if they are written by the person who sends me the link :) But now I have read the link you gave and don't know what I am supposed to do next, which is another reason I sometimes find link-sharing a difficult means of communication. Like, do I comment on specific parts of your post, or describe how reading it influenced me, or how does the conversation continue? (If you find my reaction interesting: I was mostly unmoved by the post – I think I had seen most of the numbers and examples before, and some sentences and extrapolations were quite off-putting for me, but I think the "minimalistic" style was nice.)

It would be nice to call and discuss if you are interested.

2
Gavin
2y
Well, definitely tell me what's wrong with the post – and optionally tell me what's good about it (: There's a Forum version here where your comments will have an actual audience, which sounds valuable.

Glad it may have sparked some ideas for any discussions you might be having in Israel :) For us in Finland, I feel like I at least personally need to get some more clarity on how to balance EA movement building efforts against possible cause prioritization differences between movement builders. I think this is non-trivial because forming a consensus seems hard enough.

Curious to read any object-level response if you feel like writing one! If I end up writing an "Intro to AI Safety" piece, it will be in Finnish, so I'm not sure if you will understand it (it would be nice to have at least one coherent Finnish text about the topic that is not written by an astronomer or a paleontologist but by some technical person).

To clarify, my friends (even if they are very smart) did not come up with all the AI safety arguments by themselves, but started to engage with AI safety material because they had already been looking at the world and thinking "hmm, looks like AI is a big thing and could influence a lot of stuff in the future, hope it changes things for the good". So they quickly got on board after hearing that there are people seriously working on the topic, and it made them want to read more.

I think you understood me the same way my friend did in the second part of the prologue, so I apparently give this impression. But to clarify, I am not certain that AI safety is impossible (I think it is hard, though), and the implications of that depend a lot on how much power AI systems will be given in the end, and what part of the damage they might cause is due to them being unsafe versus, for example, misuse, like you said.

Interesting to hear your personal opinion on the persuasiveness of Superintelligence and Human Compatible! And thanks for designing the AGISF course, it was useful.

8
richard_ngo
2y
Superintelligence doesn't talk about ML enough to be strongly persuasive given the magnitude of the claims it's making (although it does a reasonable job of conveying core ideas like the instrumental convergence thesis and orthogonality thesis, which are where many skeptics get stuck). Human Compatible only spends, I think, a couple of pages actually explaining the core of the alignment problem (although it does a good job at debunking some of the particularly bad responses to it). It doesn't do a great job at linking the conventional ML paradigm to the superintelligence paradigm, and I don't think the "assistance games" approach is anywhere near as promising as Russell makes it out to be.

Thanks for the nice comment! Yes, I am quite uncomfortable with uncertainty and am trying to work on that. Also, I feel like by now I am pretty involved in EA and ultimately feel welcome enough to be able to post a story like this here (or I feel like EA appreciates different views enough, despite my also feeling this pressure to conform at the same time).

Thanks Aayush! Edited the sentence to be hopefully clearer now :)
