I'd be curious to hear about potential plans to address any of these, especially talent development and building the AI safety and governance pipeline.
Very interesting.
1. Did you notice an effect of how large/ambitious the ballot initiative was? I remember previous research suggesting consecutive piecemeal initiatives were more successful at creating larger change than singular large ballot initiatives.
2. Do you know how much the results vary by state?
3. How different do ballot initiatives need to be for the huge first advocacy effect to take place? Does this work as long as the policies are not identical, is it more of a cause-specific function, or is it something in between? Is there a smooth gradient, or is it discontinuous after some tipping point?
That's a good point. Although: 1) if people leave a company to go to one that prioritizes AI safety, that means there are fewer workers at all the other companies who feel as strongly, so a union is less likely to improve safety there; 2) it's common for workers to take action to improve safety conditions for themselves, and much less common for them to take action on issues that don't directly affect their work, such as air pollution or carbon pollution; and 3) if safety-inclined people become tagged as wanting to generally slow down the company, hiring teams will likely start filtering out many of the most safety-minded people.
I've thought about this before and talked to a couple of people in labs about it. I'm pretty uncertain whether it would actually be positive. It seems possible that most ML researchers and engineers want AI development to go as fast as, or faster than, leadership does, whether because they're excited about working on cutting-edge technology, about changing the world, or for equity reasons. I remember some articles about people leaving Google for companies like OpenAI because they thought Google was too slow and cautious and had lost its "move fast and break things" ethos.
Really appreciate this post. Recently I've felt less certain about whether slowing down AI is feasible or helpful in the near future.
I think how productive alignment and related research is at the moment is a key crux for me. If it's actually quite valuable right now, maybe having more time would seem better.
It does seem easier to centralize now, while there are fewer labs and less entrenched ways of doing things, though it's possible that exponentially rising costs could lead to centralization through market dynamics anyway. Then again, maybe that would be short-lived, and some later breakthrough would change the cost of training dramatically.
I really want to see more discussion about this - there's clearly serious effort put into it. I've often felt that nuclear is perhaps overlooked/underemphasized even within EA.
Actually, they are the same type of error. EA prides itself on using evidence and reason rather than taking others' assessments at face value. So the idea that others failed to rely on experts who could have obtained better evidence and reasoning to vet FTX is not very compelling to me as an after-the-fact justification for EA as a whole not doing so either. I think probably no one really thought much about the possibility, and looking for this kind of social proof just helps us feel less bad.
Yeah, I do sometimes wonder if perhaps there's a reason we find it difficult to resolve this kind of inquiry.
Yes, I think they're generally pretty wary of saying anything exact, since it's sort of beyond conceptual comprehension - probably something beyond our ideas of existence and nonexistence.
Glad to hear that! You're welcome :)
On the Flynn campaign: I don't know if it's "a catastrophe," but I think it may be an example of overconfidence and naivete. As someone who has worked on campaigns and follows politics, I thought the campaign had a pretty low chance of success because of the fundamentals (and asked about it at the time), and that other races would have been better to donate to (either state house races to build the bench, or congressional candidates with better odds like Maxwell Frost, a local activist who ran for the open seat previously held by Val Demings, listed pandemic pr...
I think the main obstacle is tractability: there doesn't seem to be any known methodology that could resolve this question definitively, and it's not clear how we could even attempt to find such a method. Whereas projects in areas such as preventing pandemics and making sure AI isn't misused or poorly designed seem 1) incredibly important and 2) tractable - it looks like we're making some progress and have, and can find, directions for further progress (better PPE, pathogen screening, new vaccines, interpretability, agent founda...
I've definitely seen well-meaning people mess up interactions without realizing it in my area (non-EA related). This seems like a really important point and your experience seems very relevant given all the recent talk about boards and governance. Would love to hear more of your thoughts either here or privately.
Seems interesting, I'll def check it out sometime
Jokes aside, this is a cool idea. I wonder if reading it yourself and varying the footage, or even adapting the concepts into something new, would make it more attractive to watch. Of course, all of these would increase the time investment. I can't say it's my jam, but I'd be curious to see how these do on TikTok, since they seem to be a fairly prevalent genre/content style there.
Yeah I think college students will often think "Fellowship" is religious because that's likely the only context they have seen the word used in, even though it's often used for all kinds of non-religious opportunities.
I'm not sure how important this is - I soon realized lots of fellowships at my school were not religious and that it had a broader meaning.
I guess people could try different things out and see how they work. Maybe something simple like "EA reading group". Or put the topic in the name: people would probably be less likely to mistake something like a "public health/pandemic prevention/AI ethics fellowship" for something religious.
I have thought a few times that maybe a safer route to AGI would be to learn as much as we can about the most moral and trustworthy humans we can find and try to build on that foundation/architecture. I'm not sure how that would work with existing convenient methods of machine learning.
Yeah, there are a lot of "fairweather friends" in politics who won't feel inclined to return any favors when it matters most. The opposite of that is having a committed constituency that votes reliably enough in elections to not be worth upsetting - aka a base of people power. These take serious effort to create, and not all groups are distributed geographically the same way, so some have more/easier influence than others. One reason the NRA is so powerful and not abandoned despite negative media coverage is that they have tight relationships with Republican politicia...
I think politics can seem very opaque, incomprehensible, and lacking clear positive payoffs, but after volunteering, studying, and working on campaigns for a few years, I think it's more simple than it looks - just difficult.
If you click on your name in the top right corner, then click edit profile, you can scroll down and delete tags under "my activity" by clicking the x on the right side of each block.
What things would make people less worried about AI safety if they happened? What developments in the next 0-5 years should make people more worried if they happen?
What are good ways to test your fit for technical AI Alignment research? And which ways are best if you have no technical background?
Well, "squad-esque" seems like an odd litmus test, since there are many other progressive members of Congress besides them, but POF did support Maxwell Frost, who won.
Well, to be fair, I didn't say it was impossible, just that the outcome probably had more to do with the fundamentals of the race. It may have had a negative effect, yes, but plenty of candidates win races despite being supported by all kinds of PACs and getting negative press about it.
Having more connections within the state for support and donations and highlighting those would have helped blunt negative attacks about PAC funding, for example.
I like the idea of Protect Our Future being more transparent about how and why they make endorsements. Giving a specific list of ways they evaluate candidates would be helpful for people to understand their actions. I also worry a little bit that this would make it easy to game their endorsement process or encourage political stunts that are more about drawing attention than doing something useful. But I'm not sure how big of a worry this should be.
Not sure, but I think the Flynn campaign result was more likely an outcome of the fundamentals of the race: a popular, progressive woman of color with local party support, who already represented part of the district as a state rep and helped draw the new congressional district, was far more likely to win than someone who hadn't lived there in years and had never run a political campaign before.
In terms of goal-directedness, I think a lot of the danger hinges on whether, and what kinds of, internal models of the world will emerge in different systems, and on not knowing what those will look like. Many capabilities people didn't necessarily foresee suddenly emerged after more training - for example, the jumps in ability from GPT to GPT-2 to GPT-3. A similar jump to the emergence of internal models of the world may happen at another threshold.
I think I would feel better if we had some way of concretely and robustly specifying "goal dir...
Yeah, I can see what you mean - they could have taken a less flashy and more straightforward approach. It would be interesting to think about what else they could have made, or done with what they did make, that might have been better.
Yes - I hadn't thought about looking at how humane laws for people correlate with those for animals, though. That's really interesting.
Do you write fiction at all?
Thanks for sharing. Yeah, I can see what you mean - Senku can be a bit annoying. I think that also makes him a more realistic character, though maybe they overdo it at times. I found Gen's syllable-switching (like saying the second half of certain words first) really grating, and it seemed to come out of nowhere - I don't remember him doing it at the beginning. I think it would be super cool to try to make more stories like Dr. Stone, where it's entertainment but learning real things is also a tool the characters use to advance the plot.
Oh...
Oh yeah that makes sense, I agree. Yeah, FMAB is often a good "first anime" to recommend since it does lots of things pretty well.
I'm really curious, how would you improve Dr. Stone? I think it could be improved but I'm not overflowing with ideas on how to do it at the moment.
Oh, I forgot about him being vegetarian. I think the AI angle is more popular because of how much more similar he seems to humans than to animals. There are so many qualities people think of as human capabilities/behaviors that he does, even if not all of th...
"They genuinely weren't surprised by anything that happened. They didn't necessarily predict everything perfectly, but everything that happened matched their model well enough. Their deep insight into ML progress enables them to clearly explain why AGI isn't coming soon, and they can provide rough predictions about the shape of progress over the coming years."
Would definitely like to hear from people like this and see them make lots of predictions about AI progress in the short term (months and years).
Seems like a very promising profile for identifying people with a valuable counter-perspective who can give feedback for improving ideas.
FMAB is pretty widely liked. It used to be my favorite, but nowadays I think I put more emphasis on things that change the way I think. It has been a long time since I watched it though so I might change my mind if I rewatched.
Frankenstein makes me think about AI as well since it's all about creating something with greater capabilities than a human.
I've been meaning to read The Dispossessed. Will have to check out those other ones.
I liked Dr. Stone and Madoka Magica. The former is pretty good at being entertainment that also occasionally happens to teach you practical science concepts and the impacts of new technology, and the latter at prompting thought about what we value and why.
Overlord makes me think about AI a lot. Having a bunch of undead soldiers that can do stuff like farm 24/7 without getting tired or needing to eat is kind of like having robots that can automate work. It also completely unbalances the world economy. And having a bunch of powerful servants who were crea...
This is really useful and makes sense - thanks for sharing your findings!
In my experience, pairing an existing example of a problem (like recommendation systems prioritizing unintended data points) with an example of a meaningful AI capability usually gets people interested: those two combined would probably be bad if we're not careful. Jumping straight to the strongest/worst scenarios usually makes people recoil, because it's bad and unexpected, and it's not clear why you're jumping to such an extreme outcome.
Do you have any examples of resources you were unaware of before? That could be useful to include as a section, both for the resources themselves and for thinking about how to find such sources in the future.
I doubt it's the most important, and maybe it's not a good idea, but an option for users to hide post and comment karma from their own point of view could be interesting.
I think outreach to Buddhists is an interesting idea! I think many Buddhists would probably agree that the point of practice is to become a better person and thus have more impact on the world. And there are probably some who don't necessarily think about it the same way.
Buddhists and EAs could probably learn some things from each other, yes. This is generally a good attitude to have towards others - curiosity and openness to experimentation.
I do think it's probably important for some people to dedicate themselves full-time to practice and teaching as a profession, to keep the teachings and practices alive in a high-fidelity way.
I think building skills to become better at research and building skills to become better at things like dealing with stress or interacting with others are both important to having a greater positive impact on the world, so my point wasn't exactly about deontology vs consequentialism.
And I'd guess EA probably has more concrete consensus on the former than the latter.
This sounds interesting. One worry I have is how to prevent any kind of exploitation of recipients in exchange for support.
Oh I can see what you mean. This post probably already took so many hours to make. Thanks for your response.
I think another part is that there definitely seems to be some variety of opinion on how targeted vs. broad EA should be to maximize impact. I think I lean more towards broader, since 100 people doing their best is probably better than 10, even if the 10 are in 5x more impactful roles/situations. I have some other half-baked thoughts, but I should take some time to think them through.
Yeah, that's a good point about giving. Giving itself can be a transformative action. I guess if I were to put it another way, EA shares many religions' emphasis on works/charity but not necessarily experiences and practices of 'faith'/transcendence.
In terms of the Mormon stuff, I think importing cultural habits is maybe different but adjacent to what I mean. Although maybe some aren't that separate. It seems like abstaining from alcohol exists in a lot of different religions so maybe that particular behavior was found to be helpful in some way to pe...
- Interesting. Are there any examples of what we might consider relatively small policy changes that received huge amounts of coverage - something people normally wouldn't care about? These might be informative to compare with hot-button issues like abortion that tend to get a lot of coverage. I'm also curious whether any big issues somehow got less attention than expected, and how their pass/fail margins compare to other states where they got more attention. There are probably some ways to estimate this that are better than o