Meal replacement companies were there for us, through thick and slightly less thick.
Some commentary from Zvi that I found interesting, including pointers to some other people’s reactions:
Just in case someone interested in this has not read it yet: I think Zvi's post about it was worth reading.
Thanks for your work on this, super interesting!
Based on just a quick skim, this part seems most interesting to me, and I feel like discounting the sceptics' bottom line because their points seem relatively unconvincing to me (either unconvincing on the object level, or because I suspect that the sceptics haven't thought deeply enough about the argument to evaluate how strong it is):
...We asked participants when AI will displace humans as the primary force that determines what happens in the future. The concerned group’s median date is 2045 and t
either unconvincing on the object level, or because I suspect that the sceptics haven't thought deeply enough about the argument to evaluate how strong it is
The post states that the skeptics spent 80 hours researching the topics and actively engaged with concerned people. For the record, I have probably spent hundreds of hours thinking about the topic, and I think the points they raise are pretty good. These are high-quality arguments: you just disagree with them.
I think this post pretty much refutes the idea that if skeptics just "thought deeply" they would change their minds. It very much comes down to principled disagreement on the object level issues.
I agree that things like confirmation bias and myside bias are huge drivers impeding "societal sanity". And I also agree that it won't help a lot here to develop tools to refine probabilities slightly more.
That said, I think there is a huge crowd of reasonably sane people who have never interacted with the idea of quantified forecasting as a useful epistemic practice and a potential ideal to strive towards when talking about important future developments. Like other commenters say, it's currently mostly attracting a niche of people who strive for higher ...
Thanks, I think that's a good question. Some (overlapping) reasons that come to mind that I give some credence to:
a) relevant markets are simply making an error in neglecting quantified forecasts
I don't think there's actually a risk of CAISID damaging their EA networks here, fwiw, and I don’t think CAISID wanted to include their friendships in this statement.
My sense is that most humans are generally worried about disagreeing with what they perceive to be a social group’s opinion, so I spontaneously don’t think there’s much specific to EA to explain here.
I'm really excited about more thinking and grant-making going into forecasting!
Regarding the comments critical of forecasting as a good investment of resources from a world-improving perspective, here are some of my quick thoughts:
Systematic meritocratic forecasting has a track record of outperforming domain experts on important questions. Examples: geopolitics (see Superforecasting), public health (see COVID), and IIRC also outcomes of research studies
In all important domains where humans try to affect things, they are implicitly forecasting all the time a
Some other relevant responses:
My current impression of OpenAI’s multiple contradictory perspectives here is that they are genuinely interested in safety - but only insofar as that’s compatible with scaling up AI as fast as possible. This is far from the worst way that an AI company could be. But it’s not reassuring either.
Zvi Mowshowitz writes
...Even scaling back the misunderstandings, this is what ambition looks like.
It is not what safety looks like. It is not what OpenAI’s non-profit mission looks like. It is not what it looks like to
Thanks a lot for sharing, and for your work supporting his family and for generally helping the people who knew him in processing this loss. I only recently got to know him during the last two EA conferences I attended but he left a strong impression of being a very kind and caring and thoughtful person.
Huh, I actually kinda thought that Open Phil also had a mixed portfolio, just less prominently/extensively than GiveWell. Mostly based on hearing once or twice that they were in talks with interested UHNW people, and a vague memory of somebody at Open Phil mentioning them being interested in expanding their donors beyond DM&CT...
Cool!
the article is very fair, perhaps even positive!
Just read the whole thing, wondering whether it gets less positive after the excerpt here. And no, it's all very positive. Thank you guys for your work, so good to see forecasting gaining momentum.
For example, the fact that it took us more than ten years to seriously consider the option of "slowing down AI" seems perhaps a bit puzzling. One possible explanation is that some of us have had a bias towards doing intellectually interesting AI alignment research rather than low-status, boring work on regulation and advocacy.
I'd guess it's also that advocacy and regulation seemed just less marginally useful in most worlds with the suspected AI timelines of even 3 years ago?
Hmmm, your reply makes me more worried than before that you'll engage in actions that increase the overall adversarial tone in a way that seems counterproductive to me. :')
I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones that want this onerous favor of making sure their work doesn’t kill everyone on earth.
I'm not completely sure what you refer to with "legitimate jobs", but I generally have t...
It would be convenient for me to say that hostility is counterproductive but I just don’t believe that’s always true. This issue is too important to fall back on platitudes or wishful thinking.
Also, the way you frame your pushback makes me worry that you'll lose patience with considerate advocacy way too quickly
I don’t know what to say if my statements led you to that conclusion. I felt like I was saying the opposite. Are you just concerned that I think hostility can be an effective tactic at all?
Thanks for working on this, Holly, I really appreciate more people thinking through these issues and found this interesting and a good overview of considerations I previously learned about.
I'm possibly much more concerned than you about politicization and a general vague feeling of downside risks. You write:
...[Politicization] is a real risk that any cause runs when it seeks public attention, and unfortunately I don’t think there’s much we can do to avoid it. Unfortunately, though, AI is going to become politicized whether we get involved in it or not. (I wou
On the discussion that AI will have deficits in expressing care and eliciting trust, I feel like he’s neglecting that AI systems can easily get a digital face and a warm voice for this purpose?
Interesting discussion, thanks! The discussion of AI potentially driving explosive innovations seemed much more relevant than the replacement of the jobs you spent most time discussing, and at the same time unfortunately much more rushed.
But it’s a kind of thing where, you know, I can keep coming up with new bottlenecks [for explosive innovations leading to economic growth], and [Tom Davidson] can keep dismissing them, and we can keep going on forever.
Relatedly, I'd have been interested in how Michael relates to the Age of Em scenario, in which IIRC explosive i...
Hey Kieren :) Thanks, yeah, it was intentional but badly worded on my part. :D I adopted your suggestion.
(Very off-hand and uncharitably phrased and likely misleading reaction to the "Holden vs. hardcore utilitarianism" bit, though it's just useful enough to quickly share anyways)
Fwiw, despite the tournament feeling like a drag at points, I think I kept at it due to a mix of:
a) I committed to it and wanted to fulfill the commitment (which I suppose is conscientiousness),
b) me generally strongly sharing the motivations for having more forecasting, and
c) having the money as a reward for good performance and for just keeping at it.
I was also a participant. I engaged less than I wanted mostly due to the amount of effort this demanded and losing more and more intrinsic motivation.
Some vague recollections:
OpenAI lobbied the European Union to argue that GPT-4 is not a ‘high-risk’ system. Regulators assented, meaning that under the current draft of the EU AI Act, key governance requirements would not apply to GPT-4.
Somebody shared this comment from Politico, which claims that the above article is not an accurate representation:
...European lawmakers beg to differ: Both Socialists and Democrats’ Brando Benifei and Renew’s Dragoș Tudorache, who led Parliament’s work on the AI Act, told my colleague Gian Volpicelli that OpenAI never sent them the paper, nor re
A simple analogy to humans applies here: Some of our goals would be easier to attain if we were immortal or omnipotent, but few choose to spend their lives in pursuit of these goals.
I feel like the "fairer" analogy would be optimizing for financial wealth, which is arguably also as close to omnipotence as one can get as a human, and then actually a lot of humans are pursuing this. Further, I might argue that currently money is much more of a bottleneck for people than longevity for ~everyone to pursue their ultimate goals. And for the rare exceptions (maybe something like the wealthiest 10k people?) those people actually do invest a bunch in their personal longevity? I'd guess at least 5% of them?
I spontaneously thought that the EA forum is actually a decentralizing force for EA, where everyone can participate in central discussions.
So I feel like the opposite, making the forum more central to the broader EA space relative to e.g. CEA's internal discussions, would be great for decentralization. And calling it "Zephyr forum" would just reduce its prominence and relevance.
I think this is a place where the centralisation vs decentralisation axis is not the right thing to talk about. It sounds like you want more transparency and participation, which you might get by having more centrally controlled communication systems.
IME decentralised groups are not usually more transparent, if anything the opposite as they often have fragmented communication, lots of which is person-to-person.
Yeah, seems helpful to distinguish central functions (something lots of people use) from centralised control (few people have power). The EA forum is a central function, but no one, in effect, controls it (even though CEA owns and could control it). There are mods, but they aren't censors.
Moral stigmatization of AI research would render AI researchers undateable as mates, repulsive as friends, and shameful to family members. Parents would disown adult kids involved in AI. Siblings wouldn’t return their calls. Spouses would divorce them. Landlords wouldn’t rent to them.
I think such a broad and intense backlash against AI research is extremely unlikely to happen, even if we put all our resources into it.
I'd be very surprised if AI will predominantly be considered risk-free in long-timelines worlds. The more AI will be integrated into the world, the more it will interact with and cause harmful events/processes/behaviors/etc., like take the chatbot that apparently facilitated a suicide.
And I take Snoop Dogg's reaction to recent AI progress as somewhat representative of a more general attitude that will get stronger even with relatively slow and mostly benign progress
...Well I got a motherf*cking AI right now that they did made for me. This n***** could ta
Thanks for sharing, I like how concrete all of this is and think it's generally a really important practice.
One "hack" that came to mind that I think helped me feel more relaxed about the prospect of even pretty harsh criticism: think of some worst cases already in advance. Like when you do a project/plan your life, consider the hypotheses that e.g.
Hmm, fwiw, I spontaneously think something like this is overwhelmingly likely.
Even in the (imo unlikely) case of AI research basically stagnating from now on, I expect AI applications to have effects that will significantly affect the broader public and not make them think anything close to "what a nothingburger" (e.g. like I've heard it happen for nanotechnology). E.g. I'm thinking of things like the broad availability of personal assistants & AI companions, automating of increasingly many tasks, impacts on education, on the productivity of soft...
Most news outlets seem to jump on everything he does.
That's where my thoughts went, maybe he and/or CAIS thought that the statement would have a higher impact if reporting focuses on other signatories. That Musk thinks AI is an x-risk seems fairly public knowledge anyways, so there's no big gain here.
This is so awesome, thank you so much, I'm really glad this exists. The recent shift of experts publicly worrying about AI x-risks has been a significant update for me in terms of hoping humanity avoids losing control to AI.
(but notably not Meta)
Wondering how much I should update from Meta and other big tech firms not being represented on the list. Did you reach out to the signing individuals via your networks, and maybe the network didn't reach some orgs as much? Maybe there are company policies in place that prevent employees from some firms from signing the statement? And is there something specific about Meta that I can read up on (besides Yann LeCun's intransigence on Twitter :P)?
I also have the impression that there's a gap and would be interested in whether funders are not prioritizing it too much, or whether there's a lack of (sufficiently strong) proposals.
Another AI governance program which just started its second round is Training For Good's EU Tech Policy fellowship, where I think the reading and discussion group part has significant overlap with the AGISF program. (Besides that it has policy trainings in Brussels plus for some fellows also a 4-6 months placement at an EU think tank.)
Thanks for sharing Luise, I also have some issues with tiredness and probably something burn-out-related and found this helpful to read. E.g. this feels very familiar when I want to engage with more complicated research questions:
That depth of thinking and amount of working memory sounds way too hard right now. I try, but 3 minutes later I give up. I decide to read something instead. I feel the strong desire to sit in a comfy bean bag and get a blanket.
Had to laugh at this one, sounds like torture to me xD
...Even staring at the wall for 10 minutes sounds gre
How do you evaluate community notes? Multiple times they have given me fairly informative context on some viral tweets, and it seems like they were introduced under Musk.
Thanks for sharing, that's a refreshingly nice article. :D Big fan of HIA!
...“In New Zealand, we’ve got ‘tall poppy syndrome’,” says Inglis, “where those who like to stand out will get cut down. And the New Zealand public love to do that sometimes, which is great because it keeps us humble, but at the same time, it can reduce people’s confidence to put themselves out there and talk about issues which they care about. So we’re trying to work with athletes so they can put themselves out there to deliver messages that they can be really confident in and that the
I often feel guilty for eating out at restaurants. Especially when meat is involved.
I kinda feel like I personally wouldn't want to use the app like this, it spontaneously feels like I wouldn't fully own the tradeoffs I'm under or something? Like I'd be trying to distract myself from the outcomes of the choices I'm making? If I'd think I made the best tradeoffs by eating meat now and then, I'd probably just want to one time cry about how sad it is and make peace with living in a world that also features this particular cruel tradeoff.
(And now back to reducing x-risks from AI! <3 )
Thanks for the updates, I'm really grateful for your work and wish you all the best for the rest of the year!
Since this update, we’ve hired 38 more staff members!
Pretty cool to see you growing the team, would be interested in the challenges and lessons learned.
Thanks for your work, and for sharing your thoughts, that all makes sense to me and I'm glad that you seem to have success in making people feel psychologically safe and encouraged to make their ideas happen! (And thanks for reminding me of the Google study)
I'm not yet sure why socials and rationality skill trainings appear to be everything the Berlin crowd wants.
Well, we also have a very popular TEAMWORK speaker series, and I'm part of one highly regarded cause-specific dinner networking thing! :P So maybe I'd indeed guess that this is partially a founder...
But "the reasonable restrictionist mood is anguish that a tremendous opportunity to enrich mankind and end poverty must go to waste." You might think that restricting immigration is sometimes the lesser evil, but if you don't have this mood, you're probably just ~xenophobic.
I don’t see why they would feel anguish if they don’t believe in the first place that open borders would enrich mankind and end poverty? I guess it works if they value something else, like cultural homogeneity. But even then it seems reasonable not to feel anguish about tradeoffs one...
For an NVIDIA A100, the on-board memory bandwidth is around 2GB/s
I think this should be 2TB/s?
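A quick back-of-envelope check makes the suggested correction plausible (the spec figures below are approximate public numbers for the 40 GB A100, not taken from the post):

```python
# Sanity check on the A100 bandwidth correction: how long would it take
# to stream the entire on-board memory once at each claimed bandwidth?
HBM_BYTES = 40e9       # A100 40 GB variant (approximate, for illustration)
claimed_bw = 2e9       # 2 GB/s, as stated in the original post
corrected_bw = 2e12    # ~2 TB/s, the suggested correction

t_claimed = HBM_BYTES / claimed_bw      # seconds per full-memory pass
t_corrected = HBM_BYTES / corrected_bw  # seconds per full-memory pass

print(f"at 2 GB/s: {t_claimed:.0f} s per full-memory pass")     # 20 s
print(f"at 2 TB/s: {t_corrected*1e3:.0f} ms per full-memory pass")  # 20 ms
```

Twenty seconds just to read GPU memory once would make the hardware useless for training, so TB/s is clearly the right order of magnitude.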
And ping!
We are working on a piece with more insights on the utilizations and some advice on how to estimate training compute and the connected utilization of the system (link to be added by the end of 2021; ping me if not).
Thanks for this! Your summaries usually cause me to add ~1-2 relevant posts to my reading list, and remove ~2-3 others from my reading list for which I feel satisfied to just have read your summary. :)
Thanks for your work here, it's a useful overview for the compute metrics project I'm working on with Peter. Minor errors:
Also commonly used is the petaflop/s-day. It's also a quantity of operations: a petaflop/s is 10^15 floating point operations per second, sustained for one day. A day has 86,400 seconds. That makes 8.64 × 10^19 FLOP.
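The unit conversion above can be checked in two lines:

```python
# One petaflop/s-day expressed as a total operation count.
PFLOP_PER_S = 1e15              # operations per second at one petaflop/s
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

pfs_day = PFLOP_PER_S * SECONDS_PER_DAY
print(f"{pfs_day:.3e} FLOP")  # 8.640e+19
```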
Cool, thanks for doing that analysis! I'm wondering whether the scores you derived would make great additional performance metrics to provide to forecasters, specifically
a) the average contribution over all questions, and
b) the individual contribution for each question.
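The two metrics above could be computed with something like the following sketch; the data layout (a mapping from question to one forecaster's contribution score) is an assumption for illustration, not the analysis's actual format:

```python
# Hypothetical sketch: turning per-question contribution scores into the
# two feedback metrics suggested above. contribution_feedback() and the
# question labels are made-up names for illustration.
from statistics import mean

def contribution_feedback(scores_by_question):
    """Return (a) the average contribution over all questions and
    (b) the individual contribution for each question."""
    return mean(scores_by_question.values()), dict(scores_by_question)

avg, per_q = contribution_feedback({"Q1": 0.12, "Q2": -0.05, "Q3": 0.20})
print(round(avg, 3))  # 0.09
```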
Thanks for writing this up. I saw people asking for upvotes a couple of times, e.g. in Slack channels, from people I'm fairly sure are well-meaning and cooperative and who just haven't considered the problems this behavior causes. I never said anything because I suspected they are pretty self-conscious about getting up/downvotes and about posting on the forum in general (to which I can relate :D), and starting a conversation that touches on those issues seemed a bit too much every time.
FWIW, different communities treat it differently. It's a no-go to ask for upvotes at https://hckrnews.com/ but is highly encouraged at https://producthunt.com/.
Thanks for the comment. I agree that well-meaning and cooperative people sometimes end up vote-brigading (or borderline), and I imagine that there are people who might read this and feel quite bad. I really don't want that.
I'm just hoping that we can make lots of people aware of this to prevent accidental/uninformed/absent-minded cases of this happening.
(Then we'd still be left with clearer-cut cases of uncooperative behavior, but if nothing else, the people being asked to vote-brigade might be able to warn the mods more easily with increased awareness of the problem.)
Really appreciate the level of detail you provide on your thinking here! And I’m very glad to hear that it’s been going so well, hope the next year will be even better. :)
Fwiw, I think your examples are all based on less controversial conditionals, though, which makes them less informative here. And I also think the topics that are conditioned on in your examples already received sufficient analyses that make me less worried about people making things worse* as they will be aware of more relevant considerations, in contrast to the treatment in the background discussions that Larks discussed.
*(except the timelines example, which still feels slightly different though as everything seems fairly uncertain about AI strategy)
Hmm good point that my examples are maybe too uncontroversial, so it's somewhat biased and not a fair comparison. Still, maybe I don't really understand what counts as controversial, but at the very least, it's easy to come up with examples of conditionals that many people (and many EAs) likely place <50% credence on, but are still useful to have on the forum:
I also relate a lot due to my PhD experience. Thanks so much for writing this, I’m glad you got out of it as well.
Maybe the lesson here is that we should be more proactive about watching and checking in on other members of the EA community
I think that’s a really good idea. While people saw my struggles during my PhD, I think there was never a real intervention of someone talking it out systematically with me. I haven’t followed up on their work, but maybe this project is covering something like this and is still ongoing/worth expanding? https://forum.e...
Thanks for sharing your thoughts, I particularly appreciated you pointing out the plausible connection between experiencing scarcity and acting less prosocially / with less integrity. And I agree that experiencing scarcity in terms of social connections and money is unfortunately still sufficiently common in EA that I'm also pretty worried when people e.g. want to systematically tone down aspects that would make EA less of a community.
...Game-theoretically, it makes total sense for people to be a bit untrustworthy while they are in a bad place in their life.
What do you think about the idea of large donors holding back some of their funding and directly transferring it to the people in the board? Or the donors could maybe earmark some part of their funding for that purpose. Then the people in the board don't have to feel like their income is dependent on their relationships to people in the org.
Would it be possible to set up a fund that compensates people for the damages they incur from a lawsuit in which they end up being found innocent? That way the EA community could make it less risky for those who haven’t spoken up, and also signal how valuable their information is to it.
Yes, although it is likely cheaper (in expected costs) and otherwise superior to make a ~unconditional offer to cover at least the legal fees for would-be speakers. The reason is that an externally legible, credible guarantee of legal-expense coverage ordinarily acts as a strong deterrent to bringing a weak lawsuit in the first place. As implied by my prior comment, one of the main tools in the plaintiff's arsenal is to bully a defendant in a weak case into settling by threatening them with liability for massive legal bills. If you take that tactic away by maki...