Quick takes

I can highly recommend following Sentinel's weekly minutes, a weekly update from superforecasters on the likelihood of events that could plausibly cause a worldwide catastrophe. It is perhaps the newsletter I most look forward to at this point. Read previous issues here: https://sentinel-team.org/blog/
Anthropic issues questionable letter on SB 1047 (Axios). I can't find a copy of the original letter online. 
Hey everyone, in collaboration with Apart Research, I'm helping organize a hackathon this weekend to build tools for accelerating alignment research. This hackathon is very much related to my effort to build an "Alignment Research Assistant." Here's the announcement post:

2 days until we revolutionize AI alignment research at the Research Augmentation Hackathon! As AI safety researchers, we pour countless hours into crucial work. It's time we built tools to accelerate our efforts! Join us in creating AI assistants that could supercharge the very research we're passionate about.

Date: July 26th to 28th, online and in-person
Prizes: $2,000 in prizes

Why join?

* Build tools that matter for the future of AI
* Learn from top minds in AI alignment
* Boost your skills and portfolio

We've got a Hackbook with an exciting project waiting for you to work on! No advanced AI knowledge required - just bring your creativity!

Register now: Sign up on the website here, and don't miss this chance to shape the future of AI research!
Meta has just released Llama 3.1 405B. It's open-source, and on many benchmarks it beats GPT-4o and Claude 3.5 Sonnet. See Zuck's letter, "Open Source AI Is the Path Forward".
‘Five Years After AGI’ Focus Week happening over at Metaculus.

Inspired in part by the EA Forum’s recent debate week, Metaculus is running a “focus week” this week, aimed at trying to make intellectual progress on the question “What will the world look like five years after AGI (assuming that humans are not extinct)[1]?”

Leaders of AGI companies, while vocal about some things they anticipate in a post-AGI world (for example, bullishness about AGI making scientific advances), seem deliberately vague about other aspects. For example: power (will AGI companies have a lot of it? all of it?), whether some of the scientific advances might backfire (e.g., a vulnerable world scenario or a race-to-the-bottom digital minds takeoff), and how exactly AGI will be used for “the benefit of all.”

Forecasting questions for the week range from “Percentage living in poverty?” to “Nuclear deterrence undermined?” to “‘Long reflection’ underway?”

Those interested: head over here. You can participate by:

* Forecasting
* Commenting
  * Comments are especially valuable on long-term questions, because the forecasting community has less of a track record at these time scales.[2][3]
* Writing questions
  * There may well be some gaps in the admin-created question set.[4] We welcome question contributions from users.

The focus week will likely be followed by an essay contest, since a large part of the value in this initiative, we believe, lies in generating concrete stories for how the future might play out (and for what the inflection points might be). More details to come.

1. ^ This is not to say that we firmly believe extinction won’t happen. I personally put p(doom) at around 60%. At the same time, however, as I have previously written, I believe that more important trajectory changes lie ahead if humanity does manage to avoid extinction, and that it is worth planning for these things now.

2. ^ Moreover, I personally take Nuño Sempere’s “Hurdles of using forecasting as a tool for making sense of AI progress” piece seriously, especially the “Excellent forecasters and Superforecasters™ have an imperfect fit for long-term questions” part. With short-term questions on things like geopolitics, I think one should basically just defer to the Community Prediction. Conversely, with certain long-term questions I believe it’s important to interrogate how forecasters are reasoning about the issue at hand before assigning their predictions too much weight. Forecasters can help themselves by writing comments that explain their reasoning.

3. ^ In addition, stakeholders we work with, who look at our questions with a view to informing their grantmaking, policymaking, etc., frequently say that they would find more comments valuable in helping bring context to the Community Prediction.

4. ^ All blame on me, if so.