New & upvoted

Quick takes

EAG Bay Area Application Deadline extended to Feb 9th – apply now!

We've decided to push the application deadline back by one week from the original deadline of Feb 2nd. We are receiving more applications than in the past two years, and we think the extension will help with our goal of increasing attendance at EAGs.

If you've already applied, tell your friends! If you haven't — apply now! Don't leave it till the deadline! You can find more information on our website.
A minor personal gripe I have with EA is that the vast majority of its resources seem geared towards what could be called young elites, particularly highly successful people from top universities like Harvard and Oxford. For instance, the opportunities listed on places like 80,000 Hours are generally the kind of jobs that such people are qualified for, e.g. AI policy at RAND, or AI safety researcher at Anthropic, or something similar that I suspect fewer than the top 0.001% of human beings would be remotely relevant for.

Someone like me, who graduated from less prestigious schools, or who struggles in small ways to be as high-functioning and successful, can feel like they're not competent enough to be useful to the cause areas they care about. I personally have been rejected in the past from both 80,000 Hours career advising and the Long-Term Future Fund. I know these things are very competitive, of course, and I don't blame them for it. On paper, my potential and proposed project probably weren't remarkable, and the time and money should go to those who are most likely to make a good impact. I understand this.

I guess I just feel like I don't know where I fit into the EA community. Even many people on the forum seem incredibly intelligent, thoughtful, kind, and talented. The people at the EA Global I attended in 2022 were clearly brilliant. In comparison, I just feel inadequate. I wonder if others who don't consider themselves exceptional also find themselves intellectually intimidated by the people here.

We probably do need the best of the best to be involved first and foremost, but I think we also need the average, seemingly unremarkable EA-sympathetic person to be engaged in some way if we really want to be more than a small community, and to be as impactful as possible. Though maybe I'm just biased to believe that mass movements are historically what led to progress. Maybe a small group of elites leading the charge is actually what is needed.
One of my main frustrations/criticisms with a lot of current technical AI safety work is that I'm not convinced it will generalize to the critical issues we'll face at our first AI catastrophes ($1T+ damage).

From what I can tell, most technical AI safety work is focused on studying previous and current LLMs, and much of it is very particular to the specific problems and limitations those LLMs have. I'm worried that the future decisive systems won't look like "single LLMs, similar to 2024 LLMs." In particular, I think it's very likely that these systems will be composites of many LLMs and other software. If you have a clever multi-level system, you get a lot of opportunities to fix the problems of the specific parts. For example, you can have control systems monitoring LLMs that you don't trust, and you can use redundancy and cross-checking to investigate outputs you're just not sure about. (This isn't to say that these composite systems won't have problems, just that their problems will look different from those of the individual LLMs. A toy sketch of this layering follows below.)

Here's an analogy: imagine that researchers had 1960s transistors but not computers, and tried to work on cybersecurity in preparation for the cyber-disasters of the coming decades. Wanting to be "empirical" about it, they go along investigating all the failure modes of 1960s transistors. They successfully demonstrate that transistors fail in extreme environments, and that some physical attacks are possible at the transistor level. But as we know now, almost all of this has either been solved at the transistor level, or in the layers just above the transistors that do simple error management. Intentional attacks at the transistor level are possible, but incredibly niche compared to all of the other cybersecurity concerns. So just as understanding 1960s transistors really would not get you far towards helping with future cybersecurity challenges, it's possible that understanding 2024 LLM details won't get us far towards helping with future AI safety challenges.
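As a toy illustration of what I mean by layering control and redundancy on top of untrusted LLMs, here's a rough Python sketch. Everything in it is a hypothetical stand-in (the model callables and the monitor predicate are not any particular API), and it's only one of many possible composite designs:

```python
# A minimal sketch of the "composite system" idea: untrusted LLM outputs
# are wrapped in redundancy and an independent monitor, analogous to
# error management layered above unreliable transistors.
# The models and the monitor below are hypothetical stand-ins.

from collections import Counter
from typing import Callable, Optional

def query_with_redundancy(
    prompt: str,
    models: list[Callable[[str], str]],
    monitor: Callable[[str, str], bool],
) -> Optional[str]:
    """Ask several untrusted models, keep only answers the monitor
    accepts, and return the majority answer (None if no consensus)."""
    accepted = []
    for model in models:
        answer = model(prompt)
        # Control layer: an independent check on each untrusted output.
        if monitor(prompt, answer):
            accepted.append(answer)

    if not accepted:
        return None  # escalate to a human or a stronger checker instead

    # Redundancy layer: require agreement from a majority of the models.
    answer, votes = Counter(accepted).most_common(1)[0]
    return answer if votes > len(models) // 2 else None
```

The point isn't this particular scheme; it's that the failure modes that matter would live at this composite level, not inside any single model.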
Best books I've read in 2024

(I want to share, but this doesn't seem relevant enough to EA to justify making a standard forum post, so I'll do it as a quick take instead.)

People who know me know that I read a lot, and this is the time of year for retrospectives.[1] Of all the books I read in 2024, I'm sharing the ones that I think an EA-type person would be most interested in, would benefit the most from, etc.

Animal-Focused

I read several animal-focused books in 2024. This is the direct result of being part of an online Animal Advocacy Book Club. I created the book club about a year ago, and it has been helpful in nudging me to read books that I otherwise probably wouldn't have gotten around to.[2]

* Reading Compassion, by the Pound: The Economics of Farm Animal Welfare was a bit of a slog, but I loved that there were actual data, frameworks, and measurements, rather than handwavy references to suffering. The authors provided formulas, estimates, and back-of-the-envelope calculations, and did an excellent job looking at farm animal welfare like economists and considering tradeoffs, with far less bias than anything else I've ever read on animals. They created and referenced measurements for pig welfare, cow welfare, and chicken welfare that I hadn't encountered anywhere else. I haven't even seen other people attempt to put together measurements to evaluate the overall costs and benefits of enacting a particular change in how farm animals are treated.

* Every couple of pages in An Immense World: How Animal Senses Reveal the Hidden Realms Around Us, I found myself thinking "whoa, that is so cool." Part of the awe and pleasure of reading this book was the wealth of factoids about how different species of animals perceive the world in incredibly different ways, ranging from the familiar (sight, hearing, touch) to the exotic (vibration detection, taste buds all over the body, electrolocation, and more). The author does a great job of conveying that sense of wonder.
My donation strategy

It seems that we have some great donation opportunities, at least in some cases such as AI safety. This has made me wonder what donation strategies I prefer. Here are some thoughts, also influenced by Zvi Mowshowitz's:

1. Attracting non-EA funding to EA causes: I prefer donating to opportunities that may bring external or non-EA funding to causes that EA deems relevant.

2. Expanding EA funding and widening career paths: Similarly, where possible, fund opportunities that could increase the funds or skills available to the community in the future. For this reason, I feel highly supportive of Ambitious Impact's project to create onramps for impactful earning-to-give careers, for instance. This is in contrast to incubating new charities (Charity Entrepreneurship), which is slightly harder to justify unless you have strong reasons to believe your impact is more cost-effective than that of typical charities. I am a bit wary that the uncertainty may be too large to clearly distinguish between charities at the frontier.

3. Filling the gap left by others: Aim to fund medium-sized charities in their 2nd to 5th years of life: they are no longer small and young enough to rely on Charity Entrepreneurship seed funding, but they are also not yet large enough to get funding from large funders. One could similarly argue that you should fund causes that non-EAs are less likely to fund (e.g. animal welfare), though I would find this argument stronger if non-EA funding came close to fully funding those other causes (e.g. global health), or if the full support of the former (animal welfare) depended entirely on the EA community.

4. Valuing stability for people running charities: By default, and unless there are clearly better opportunities, keep donating to the same charities as before, and do so with unrestricted funds. This gives charities some stability, which is very much welcome. Also, do not push too hard on marginal cost-effectiveness.