Thanks! And thank you for the research pointers.
This intuition turned out to be harder to explain than I thought, and it got me thinking a lot about how to define "generality" and "intelligence" (like all talk about AGI does). But say, for example, that you want to build an automatic doctor that is able to examine a patient and diagnose what illness they most likely have. This is not very general, in the sense that you can imagine the system as a single function: "read all kinds of input about the person, output a diagnosis". But I still think it provides an example of the difficulty of collecting data.
There are some... (read more)
Thanks! It will be difficult to write an authentic response to TAP, since these other responses were originally not meant to be public, but I will try to keep the same spirit if I end up writing more about my AI safety journey.
I actually do find AI safety interesting, it just seems that I think about a lot of things differently than many people in the field, and it is hard for me to pinpoint why. But the main motivations for spending a lot of time on forming personal views about AI safety are:
Yeah, I understand why you'd say that. However, it seems to me that there are other limitations to AGI than finding the right algorithms. As a data scientist, I am biased towards thinking about the available training data. Of course, there will probably be progress on this as well in the future.
Hi, just wanted to drop in to say:
Hmm, with a non-zero probability in the next 100 years, the likelihood over a longer time frame should be bigger, given that there is nothing that makes developing AGI more difficult as time passes – and I would imagine it is more likely to get easier than harder (unless something catastrophic happens). In other words, I don't think it is certainly impossible to build AGI, but I am very pessimistic about anything like current ML methods leading to AGI. A lot of people in the AI safety community seem to disagree with me on that, and I have not completely understood why.
Hi Caleb! Very nice to read your reflection on what might make you think what you think. I related to many of the things you mentioned, such as wondering how much of the importance I place on intelligence comes from having wanted to be smart as a kid.
You understood correctly that intuitively, I think AI is less of a big deal than some people feel. This probably has a lot to do with my job, because it includes estimating whether problems can be solved with current technology given certain constraints, and there it is better to err on the side of caution. Previously, one of my ta... (read more)
Hi Otto! Thanks, it was nice talking to you at EAG. (I did not include any interactions or information from this weekend's EAG in the post: I had written it before the conference, felt it should not be any longer than it already was, and wanted to wait until the friends who are described as "my friends" in the post had read it before publishing.)
I am not that convinced that AGI is necessarily the most important component of x-risk from AI – I feel like there could be significant risks from powerful non-generally-intelligent systems, but of cour... (read more)
I feel like everyone I have ever talked about AI safety with would agree on the importance of thinking critically and staying skeptical, and this includes my facilitator and cohort members from the AGISF programme.
I think a 1.5h discussion session between 5 people who have read 5 texts does not really allow going deep into any topic, since 90 minutes split across 5 × 5 = 25 participant-text combinations is only ~3–4 minutes per participant per text on average. I think these kinds of programs are great for meeting new people, clearing up misconceptions and providing structure/accountability for actually readin... (read more)
Like I said, it is based on my gut feeling, but I am fairly sure.
Is it your experience that adding more complexity and concatenating different ML models results in better quality and generality, and if so, in what domains? I would have the opposite intuition, especially in NLP.
Also, do you happen to know why "prosaic" practices are called "prosaic"? I have never understood the connection to the dictionary definition of "prosaic".
I'm still quite uncertain about my beliefs, but I don't think you got them quite right. Maybe a better summary is that I am generally pessimistic about humans ever being able to create AGI, and especially about humans being able to create safe AGI (it is a special case, so it should probably be harder than creating just any AGI). I also think that relying a lot on strong unsafe systems (AI-powered or not) can be an x-risk. This is why it is easier for me to understand why AI governance is a way to try to reduce x-risk (at least if actors in the world want to rely on unsaf... (read more)
That's right, thanks again for answering my question back then!
Maybe I formulated my question wrong, but I understood from your answer that you first got interested in AI safety, and only then in DS/ML (you mentioned you had a CS background before, but not your academic AI experience). This is why I did not include you in this sample of 3 persons – I wanted to narrow the search to people who had a more AI-specific background before getting into AI safety (not just CS). It is true that you did not mention Superintelligence either, but interesting to h... (read more)
Thanks for giving me permission, I guess I can use this if I ever need the opinion of "the EA community" ;)
However, I don't think I'm ready to give up on trying to figure out my stance on AI risk just yet, since I still estimate it is my best shot at forming a more detailed understanding of any x-risk, and understanding x-risks better would be useful for forming better opinions on other cause prioritization issues.
Generally, I find links a lot less frustrating if they are written by the person who sends me the link :) But now I have read the link you gave and don't know what I am supposed to do next, which is another reason I sometimes find link-sharing a difficult means of communication. Like, do I comment on specific parts of your post, or describe how reading it influenced me, or how does the conversation continue? (If you find my reaction interesting: I was mostly unmoved by the post. I think I had seen most of the numbers and examples before, and there were some sentences and extrapolations that I found quite off-putting, but I thought the "minimalistic" style was nice.)
It would be nice to call and discuss if you are interested.
Glad it may have sparked some ideas for any discussions you might be having in Israel :) For us in Finland, I feel like I at least personally need to get some more clarity on how to balance EA movement-building efforts against possible cause-prioritization differences between movement builders. I think this is non-trivial because forming a consensus seems hard enough.
Curious to read any object-level response if you feel like writing one! If I end up writing an "Intro to AI Safety" piece, it will be in Finnish, so I'm not sure you will understand it (it would be nice to have at least one coherent Finnish text about the topic that is written not by an astronomer or a paleontologist but by some technical person).
To clarify, my friends (even if they are very smart) did not come up with all the AI safety arguments by themselves, but started to engage with AI safety material because they had already been looking at the world and thinking "hmm, looks like AI is a big thing and could influence a lot of stuff in the future, hope it changes things for the better". So they got on board quickly after hearing that there are people seriously working on the topic, and it made them want to read more.
I think you understood me in the same way as my friend did in the second part of the prologue, so I apparently give this impression. But to clarify, I am not certain that AI safety is impossible (I think it is hard, though), and the implications of that depend a lot on how much power AI systems are given in the end, and on how much of the damage they might cause is due to them being unsafe versus, for example, misuse, like you said.
Interesting to hear your personal opinion on the persuasiveness of Superintelligence and Human Compatible! And thanks for designing the AGISF course, it was useful.
Thanks for the nice comment! Yes, I am quite uncomfortable with uncertainty and am trying to work on that. Also, I feel like by now I am pretty involved in EA and ultimately feel welcome enough to post a story like this here (or I feel like EA appreciates different views enough, despite my also feeling this pressure to conform at the same time).
Thanks Aayush! Edited the sentence; hopefully it is clearer now :)
With EA career stories, I think it is important to keep in mind that new members might not read them the same way as more engaged EAs, who already know which organizations are considered cool and effective within EA. When I started attending local EA meetups, I met a person who worked at OpenPhil (maybe as a contractor? I can't remember the details), but I did not find it particularly impressive because I did not know what OpenPhilanthropy was and assumed the "phil" stood for "philosophy".
Glad to hear you like it! :)
This one is nice as well!
Personally, I like the method of embedding the link in the story, but since many in my test audience considered it off-putting and too advertisement-like, I thought it better to trust their feedback, since I obviously already agree with the thought I'm trying to convey with my text. But like I said, I'm not certain what the best solution is; probably there is no perfect one.
I tried out a couple of different ones and iterated based on feedback.
One ending I considered would have been just leaving out the last paragraph and linking to GiveWell like this:
“Besides,” his best friend said. “If you actually want to save a life for 5000 dollars, you can do it in a way where you can verify how they are doing it and what they need your money for.”
“What do you mean?” he asked, now more confused than ever.
I also considered embedding the link explicitly in the story like this:
“Besides,” his best friend said. “If you actually wa... (read more)
Don't be sorry! Feedback on language and grammar is very useful to me, since I usually write in Finnish. (This is probably the first time since middle school that I've written a piece of fiction in English.)
Apparently the punctuation slightly depends on whether you are using British or American English and whether the work is fiction or non-fiction (https://en.wikipedia.org/wiki/Quotation_marks_in_English#Order_of_punctuation). Since this is fiction, you are in any case totally right about the commas going inside the quotes, and I will edit accordingly. Thanks for pointing this out!
Thanks for the feedback! Deciding how to end the story was definitely the hardest part of writing this. Pulling the reader out of the fantasy was a deliberate choice, but that does not mean it was necessarily the best one – I did some A/B testing on my proofreading audience, but I have to admit my sample size was not that big. Glad you liked it in general anyway :)
Hi Otto!
I agree that the example was not that great, and that a lack of data sources can definitely be countered with general intelligence, like you describe. So it could well be possible that a generally intelligent agent plans how to gather the data it needs. My gut feeling is still that it is impossible to develop such intelligence from a single data source (for example text, in however large amounts), but of course there are already technologies that combine different data sources (such as self-driving cars), so this is clearly not the limit either. I... (read more)