Quick takes
If you believe that:
- ASI might come fairly soon
- ASI will either fix most of the easy problems quickly, or wipe us out
- You have no plausible way of robustly shaping the outcome of the arrival of ASI for the better

does it follow that you should spend a lot more on near-term cause areas now? Are people doing this? I see some people argue for increasing consumption now, but surely this would apply even more so to donations to near-term cause areas?
When AI Safety people are also vegetarian, vegan, or reducetarian, I am pleasantly surprised, as this is one (of many possible) signals to me that they're "in it" to prevent harm, rather than because it is interesting.
Sharing a piece of advice I've given to a few people about applying for (EA) funding.

I've heard various people working on early-stage projects express hesitancy about applying for EA funding because their plan isn't "complete" enough. They don't feel confident enough in their proposal, or think what they're asking for is too small. They seem to assume that EA funders only want proposals with long time horizons, from applicants who will work full-time and who are confident their plan will work.

In my experience (I've done various bits of grantmaking and regularly talk to EA funders), grantmakers in EA spaces are generally happy to receive applications that don't have these qualities. It's okay to apply if you just want to test a project out for a few months, won't be full-time, or aren't confident in some part of the theory of change. You should apply and simply explain your thinking, including all of your uncertainties. Funders are uncertain too, and often prefer funding a few months of testing over committing to a multi-year project with full-time staff, because tests give them useful information about you and the theory of change. Ideally, funders eventually support long-term projects too.

I'm not super confident in this take, but I ran it past a few EA funders and they agreed. Note that this probably doesn't apply outside of EA; I understand many grant applications require detailed plans.
New: floating audio player for posts

You can already listen to Forum posts via the audio player on the post page and via podcast feeds, thanks to Type III Audio. Recently, we enabled their new floating audio player: the player now becomes fixed to the bottom of the screen when you scroll down, making it easier to read along with the audio. In addition, you can click the play buttons next to section headings to skip directly to them.

As always, we’d love to hear your feedback! Do you prefer to read or listen to posts? How can we improve the listening experience? Feel free to respond here or contact us in other ways.
OpenAI appoints Retired U.S. Army General Paul M. Nakasone to Board of Directors

I don't know anything about Nakasone in particular, but it should be of interest (and concern), especially after Situational Awareness, that OpenAI is moving itself closer to the U.S. military-industrial complex. The article specifically mentions Nakasone's cybersecurity experience as a benefit of having him on the board, and notes that he will sit on the board's Safety and Security Committee. None of this seems good for avoiding an arms race.

Recent discussion

OG YouTubers Hank and John Green (the Vlogbrothers) are reducing their beef consumption to only 4 designated days each year (plus 2 flex days). They have millions of followers, and hundreds of thousands of people are part of their community, so there's a good chance this could make a real impact and, more importantly, raise awareness of the issue.


If you believe that:
- ASI might come fairly soon
- ASI will either fix most of the easy problems quickly, or wipe us out
- You have no plausible way of robustly shaping the outcome of the arrival of ASI for the better

does it follow that you should spend a lot more on near-term cause areas now?


I think this conclusion does logically flow from those premises, but I would question the premises themselves -- I think the first and second premises are pretty uncertain, and the third premise is likely false for most people.

does it follow that you should spend a lot more on near-term cause areas now?

I think so.

I was quite focused on building career capital, and now I'm focused on reducing near-term animal suffering, partly because of this reasoning.
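
(As a toy illustration of the argument being discussed here -- a minimal expected-value sketch with made-up numbers, assuming money saved for later giving is worthless in both ASI branches and retains normal value otherwise:)

```python
# Toy expected-value sketch of the quick take's argument (illustrative numbers only).

p_asi_soon = 0.5  # hypothetical probability that ASI arrives within the horizon

# Value of $1 donated to near-term causes today: suffering reduced before ASI
# arrives counts in every branch, so we normalize it to 1.
donate_now = 1.0

# Value of $1 saved for later giving:
#  - ASI arrives and fixes the easy problems -> the donation is no longer needed
#  - ASI arrives and wipes us out            -> the donation never happens
#  - ASI doesn't arrive                      -> roughly normal value
save_for_later = p_asi_soon * 0.0 + (1 - p_asi_soon) * 1.0

print(f"donate now:     {donate_now:.2f}")      # 1.00
print(f"save for later: {save_for_later:.2f}")  # 0.50
```

Under these premises, any nonzero probability of soon-arriving ASI makes giving now beat saving, which is why the logic would seem to apply to donations at least as strongly as to consumption.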

The latest video from YouTube's biggest creator has him teaming up with GiveDirectly to give $300k in cash to everyone in a remote Ugandan village.

This is going to get tens of millions of views. Please watch and like to boost the signal!

Giving $300,000 to Rural Villagers...


Thanks for sharing. Your post title is very misleading, though. I wouldn't be surprised if Mr Beast has never even heard of EA. I'm not against clickbaity titles which are more or less accurate but exaggerated, but "Mr Beast is now officially an EA!" seems simply incorrect. Not a huge deal, but I was quite excited when I clicked on this post, only to be left a bit disappointed. It may be worth clarifying in the text that Mr Beast hasn't actually signalled agreement with EA principles.

calebp
I’d guess that the vast majority of people who donate to GiveDirectly (including substantial public sums) would not describe themselves as EAs, and by their lights aren’t focussed on “doing the most good with impartiality and scope sensitivity in mind”, so I wouldn’t describe someone as an EA on the basis of this donation alone. If they talk about EA explicitly in the video, it would be great to add that context above. Obviously not “being an EA” does not diminish this achievement: I’m excited when anyone donates to help the poor in a cost-effective manner, independent of their community affiliation, and even more excited when it’s such a large amount of money! Thanks for posting this here; it’s very exciting, and I’m looking forward to watching the video!
Joseph Lemien
I have mixed feelings about this. The consequentialist part of me thinks that this is great; the virtue ethicist part of me flinches away from it. I am happy to see people who have unmet needs (medical care, improved housing, education, food, etc.) getting access to money which allows them to meet those needs.

The video also feels very manipulative. There is something of poverty porn about it: the heartfelt music while we are shown clips of a child getting a single meal per day, the slow-motion video of people smiling and laughing with inspirational music and narration about the good things GiveDirectly will do, a clip of a group of children performing a song... I know that a lot has been written about exploitation and stereotypes when it comes to development and aid. And I can't exactly claim there is something wrong about using standard video editing techniques, or selecting a soundtrack that sparks the emotional reaction you want in your audience. I also know that plenty of kids start performing what they think you expect as soon as they see they are being recorded, and that this happens in tourism situations as well (giving the tourists what they want, in a sense).

The knife sharpening seemed like pretty standard performative YouTube behavior: a couple of outsiders goofing around and being the center of attention, a large crowd of people standing around watching, a friendly and low-stakes rivalry, affectations, etc. I'm aware that "competitions" like this are common for these kinds of videos, and maybe a video lacking that kind of competition would be shared less, and spread the message less.

If this generates lots of donations that otherwise wouldn't have happened, and reduces the suffering in the world, then who am I to criticize or whine or complain about it?
JWS commented on David Mathers's quick take

Some very harsh criticism of Leopold Aschenbrenner's recent AGI forecasts in the recent comments on this Metaculus question. People who are following this stuff more closely than I am will be able to say whether or not it's reasonable:


Which particular resolution criteria do you think it's unreasonable to believe will be met by 2027/2032 (depending on whether it's the weak AGI question or the strong one)?

Two of the four in particular stand out. First, the Turing Test one, exactly for the reason you mention: asking the model to violate the terms of service is surely an easy way to win. That's the resolution criterion, so unless Metaculus users think that will be solved within 3 years,[1] the estimates should be higher. Second, the SAT-passing criterion requires "having less than ten SAT exams as part of the training data", which is very unlikely to hold for current frontier models, and labs probably aren't keen to share exactly what they have trained on.

it is just unclear whether people are forecasting on the actual resolution criteria or on their own idea of what "AGI" is. 

There's no reason to assume an individual Metaculus commentator agrees with the Metaculus timeline, so I don't think that's very fair.

I don't know if it is unfair. This is Metaculus, the premier forecasting website! These people should be reading the resolution criteria and judging their predictions according to them. Just going off personal vibes about how much they 'feel the AGI' seems like a sign of epistemic rot to me. I know not every Metaculus user does this, but the aggregate is shaped by those who do -- 2027/2032 are very short timelines, and those are the median community predictions. This is my main issue with the Metaculus timelines at the moment.

I actually think the two Metaculus questions are just bad questions. 

I mean, I do agree with you in the sense that they don't fully match AGI, but that's partly because 'AGI' covers a bunch of different ideas and concepts. It might well be possible for a system to satisfy these conditions but not replace knowledge workers. Perhaps a new market focusing on automation and employment might be better, but that also has its issues with operationalisation.
 

  1. ^

    On top of everything else needed to successfully pass the imitation game

I didn't read all the comments, but Order's are obvious nonsense, of the "(a+b^n)/n = x, therefore God exists" tier. E.g., take this comment:

But something like 5 OOMs seems very much in the realm of possibilities; again, that would just require another decade of trend algorithmic efficiencies (not even counting algorithmic gains from unhobbling).

Here he claims that 100,000x improvement is possible in LLM algorithmic efficiency, given that 10x was possible in a year. This seems unmoored from reality - algorithms cannot infinitely improve, you can derive a mathematical upper bound. You provably cannot get better than Ω(n log n) comparisons for sorting a randomly distributed list. Perhaps he thinks new mathematics or physics will also be discovered before 2027?

This is obviously invalid. The existence of a theoretical complexity bound (for which, incidentally, Order doesn't have numbers) doesn't mean we are anywhere near it numerically. Those aren't even the same level of abstraction! Furthermore, we have clear theoretical proofs of how fast sorting can get, while AFAIK no such theoretical limits are known for learning. "Algorithms cannot infinitely improve" is irrelevant here; it's the slightly more mathy way to say a deepity like "you can't have infinite growth on a finite planet", without any relevant semantic meaning.[1]
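
(To make the "a bound exists" vs. "we are near the bound" distinction concrete, here is a minimal sketch -- mine, not from the thread -- comparing the information-theoretic floor of log2(n!) comparisons for comparison sorting against what a deliberately naive sort actually performs.)

```python
import math
import random

def bubble_sort_comparisons(xs):
    """Count the comparisons made by a deliberately naive bubble sort."""
    xs = list(xs)
    count = 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            count += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return count

n = 1000
data = random.sample(range(n), n)  # a random permutation of 0..n-1

actual = bubble_sort_comparisons(data)
# Information-theoretic lower bound for comparison sorts: log2(n!) comparisons.
floor = math.ceil(sum(math.log2(k) for k in range(2, n + 1)))

print(f"naive sort:  {actual:,} comparisons")  # ~499,500
print(f"lower bound: {floor:,} comparisons")   # ~8,530
print(f"headroom:    {actual / floor:.0f}x")   # ~59x above the provable floor
```

A ~59x gap above a provable floor, for one of the best-understood problems in computer science, is exactly the sense in which "a bound exists" says nothing about remaining headroom -- and for learning algorithms no comparable floor is even known.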

Numerical improvements happen all the time, sometimes by OOMs. No "new mathematics or physics" required.

Frankly, as a former active user of Metaculus, I feel pretty insulted by his comment. Does he really think no one on Metaculus took CS 101? 

  1. ^

    It's probably true that every apparently "exponential" curve becomes a sigmoid eventually, but knowing this fact doesn't let you time the transition. You need actual object-level arguments and understanding, and even then it's very, very hard (as people arguing against Moore's Law, or for "you can't have infinite growth on a finite planet", found out).

To be clear, I also have high error bars on whether traversing 5 OOMs of algorithmic efficiency in the next five years is possible, but that's because of a) high error bars on diminishing returns to algorithmic gains, and b) a tentative model that most algorithmic gains in the past were driven by compute gains, rather than being exogenous to them. Algorithmic improvements in ML seem much more driven by the "f-ck around and find out" paradigm than by deep theoretical or conceptual breakthroughs; if we model experimentation gains as a function of quality-adjusted researchers * compute * time, it's obvious that the compute term is the one that's growing the fastest (and thus the thing that drives the most algorithmic progress).
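
(A rough numerical sketch of the arithmetic in this thread -- my illustration with made-up growth rates, not the commenters' numbers -- showing both the compounding behind the "5 OOMs in a decade" claim and why the compute term dominates a researchers * compute * time product.)

```python
# Illustrative arithmetic only; the growth rates below are assumptions, not data.

# "5 OOMs in a decade" is just a trend rate of ~0.5 OOM/year compounding:
oom_per_year = 0.5
years = 10
total_oom = oom_per_year * years
print(f"{total_oom:.0f} OOMs = {10 ** total_oom:,.0f}x efficiency multiplier")  # 100,000x

# Toy version of the experimentation model: gains ~ researchers * compute * time.
# Hypothetical annual growth factors for each input:
annual_growth = {"researchers": 1.2, "compute": 4.0, "time": 1.0}
horizon = 5  # years

for factor, rate in annual_growth.items():
    print(f"{factor:<12} {rate ** horizon:>8,.1f}x over {horizon} years")
# With these made-up rates, compute grows ~1,024x while researchers grow ~2.5x,
# so compute is the term driving most of the modeled experimentation gains.
```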

 

The Arthropoda Foundation

Tens of trillions of insects are used or killed by humans across dozens of industries. Despite being the most numerous animals reared by animal industries, we know next to nothing about what’s good or bad for these animals. And right...


Is this separate from Insect Institute? The title of the post made me think that Insect Institute was rebranding to Arthropoda Foundation.

I wanted to share this update from Good Ventures (Cari and Dustin’s philanthropy), which seems relevant to the EA community.

Tl;dr: “while we generally plan to continue increasing our grantmaking in our existing focus areas via our partner Open Philanthropy, we have...


Hi Ozzie,

My personal model is that most of that can be figured out post-AGI

One could also have argued for figuring out farmed animal welfare after cheap animal food (produced in factory farms) became widely available? Now that lots of people are eating factory-farmed animals, it is harder to roll back factory farming.

Vasco Grilo
Nice points, Ozzie! For reference, Alex wrote: [...] I asked Alex about the above 3 months ago: [...] There was no answer.
Dustin Moskovitz
I didn't think of it as sufficient, but I did think of it as momentum. "Tomorrow, there will be more of us" doesn't feel true anymore.

I run a research organization. Research organizations, like all organizations, want to grow. I'm not talking about growth for the sake of growth, but growth to be able to further your mission. However, there is always going to be a limiting factor constraining the pace of this...


A donor-pays philanthropy-advice-first model solves several of these problems.

  • If your model focuses primarily on providing advice to donors, your scope is "anything which is relevant to donating", which is broad enough that you're bound to have lots of high-impact research to do. This helps with constraint 1.
  • Strategising and prioritisation are much easier when you're knee-deep in supporting donors with their donations -- this highlights the pain points in making good giving decisions, which helps with constraint 2.
  • If donors perceive that the research is w...

Securitization is indeed coming 

"OpenAI appoints Retired U.S. Army General Paul M. Nakasone to Board of Directors"

https://openai.com/index/openai-appoints-retired-us-army-general/ 


What do you mean by "securitization"?

What do people think about posting urban planning/YIMBY/local gov policy thoughts on the Forum?

I find that stuff really interesting and, admittedly, don't believe it is the most important stuff to work on, but is it "effective altruism" if you have an extremely local moral...


For my part, I think the fact that Open Phil looks at YIMBY stuff is weird, and I wish it were explained better (but I don't think it's important enough to actively pursue).