This is a special post for quick takes by Joseph_Chu. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
A minor personal gripe I have with EA is that it seems like the vast majority of the resources are geared towards what could be called young elites, particularly highly successful people from top universities like Harvard and Oxford.
For instance, opportunities listed on places like 80,000 Hours are generally the kind of jobs that such people are qualified for, e.g. AI policy at RAND, or AI safety researcher at Anthropic, or something similar that I suspect fewer than the top 0.001% of human beings would even be plausible candidates for.
People like myself, who graduated from less prestigious schools, or who struggle in small ways to be as high-functioning and successful, can feel like we're not competent enough to be useful to the cause areas we care about.
I personally have been rejected in the past from both 80,000 Hours career advising and the Long-Term Future Fund. I know these things are very competitive, of course, and I don't blame them for it. On paper, my potential and proposed project probably weren't remarkable. The time and money should go to those who are most likely to make a good impact. I understand this.
I guess I just feel like I don't know where I fit into the EA community. Many people even just on the forum seem incredibly intelligent, thoughtful, kind, and talented. The people at the EA Global I attended in 2022 were clearly brilliant. In comparison, I just feel inadequate. I wonder if others who don't consider themselves exceptional also find themselves intellectually intimidated by the people here.
We probably do need the best of the best involved first and foremost, but I think we also need the average, seemingly unremarkable, EA-sympathetic person to be engaged in some way if we really want to be more than a small community and to be as impactful as possible. Though maybe I'm just biased to believe that mass movements are historically what led to progress. Maybe a small group of elites leading the charge is actually what it takes?
I don't know where I'm going with this. It's just some thoughts that have been in the back of my head for a while. This is definitely not worth a full post, so I'm just gonna leave it as a quick take.
Unfortunately, a single organisation can't do everything. There are a lot of advantages to picking a particular niche and targeting it, so I think it makes sense for 80,000 Hours to leave serving other groups of people to other organisations.
Have you heard of Probably Good? Some of the career paths they suggest might be more accessible to you.
You might also want to consider running iterations of the intro course locally. Facilitating can be challenging at times, and not everyone will necessarily be good at it, but I suspect that most people would become pretty good given enough practice and dedication.
Earning to Give is another option that is more accessible as it just requires a career that pays decently (and there are a lot of different options here).
Firstly, I'm sorry that you feel inadequate compared to people on the EA Forum or at EAGs. I think EA is a pretty weird community and it's totally reasonable for people to not feel like it's for them and instead try and do an ambitious amount of good outside the community.
I think this is somewhat orthogonal to feelings of rejection, or to the broader point you're making about the higher impact potential of larger communities, but I've personally felt that, whilst EA seems to "care more" about people who are particularly smart, hardworking, and altruistic, it does a good job of giving people from various backgrounds an opportunity to participate - even if it's differentially easier if you went to a top university.
For example, I think if someone with little or no reputation were to post a few articles on important topics in fish welfare on the EA Forum, at the quality of Rethink Priorities' top 10%, they'd gain a lot of career capital and would almost overnight be on various organisations' radars as someone to consider hiring (or at least be competitive in various application processes). I think that story is probably even more true for AI safety. Contrast this with hiring at various hedge funds and consultancies, which can be really hard to break into if you didn't go to a small set of universities.
I don't fit the mould of a typical EA, but I've managed to get things done by being "entrepreneurial" and focussing on neglected areas.
Find things that are important, neglected and where you have a comparative advantage.
EAs overrate smartness in a world that prioritises getting shit done quickly. Being the 5th commenter on a Google Doc is not a path to impact. Create something. Bring a thing into the world that people need but no one is doing.
I think there are a lot of roles on the 80k job board and other places that don't need anyone special, just good, hardworking generalists. A lot of operations roles don't require anything exceptional, just "work that needs doing".
As some examples, this is a good job if someone wants something in AI safety: https://jobs.lever.co/aisafety/c5269975-e074-44ee-ad32-9ff521f4d709
This is more of an EA infrastructure role: https://www.givingwhatwecan.org/get-involved/careers/operations-associate
Another AI role: https://careers.rethinkpriorities.org/en/postings/b6cbef86-5239-4218-9aaa-8b7fe660db72
Another AI role: https://www.arena.education/operations-lead
For animal welfare, I would say the bulk of the jobs don't need anyone who is some kind of 0.001% person:
https://animal-equality.rippling-ats.com/job/807514/operations-manager
https://www.fishwelfareinitiative.org/pa
For global health, there are a ton of jobs at the Clinton Health Access Initiative that you don't need to move to Africa to do. Other than that, here is one more:
https://malariaconsortium.current-vacancies.com/Jobs/Advert/3701297?cid=2061&t=Compliance-Manager
It can be tough to see things you'd like to do and to feel that they aren't accessible to you (although I think you are correct that 80k and most of the larger EA orgs focus primarily on elites).
It seems that you've already hit on the major points, so maybe you just need to take time to process it and accept it. But I do want to provide two alternatives:
First, people make tradeoffs and sacrifices. For some of the people you see writing impressive essays about animal suffering or creating a project that gets respect and funding, there are a lot of things that they are choosing not to do. This ranges from the simple (they don't get the fun of following that new show) to the profound (they don't spend time cultivating a good relationship with a sibling, or they leave a girlfriend to move to a new country).
Second, some of the people on the EA forum aren't actually so talented or intelligent or thoughtful or kind. We may seem that way sometimes, and maybe sometimes we are, but we also have times when we are foolish or selfish or blundering. A few different framings:
An academic way to think of it is that we are all engaging in impression management.
A casual way to think of it is that we are all cosplaying as intelligent and thoughtful people.
An analogy is to think of a photographer who takes 1,000 pictures: 900 of them are bad, 90 are okay, and 10 are excellent; but you only see the 10, because that is what gets published.
To use real, anonymized examples:
"John Doe" is a real person, who I have met, and is one the top posters here. I consider him kind and intelligent and thoughtful, and he was thoughtless and inconsiderate once, lacking sympathy for another person's situation. Does that make him a bad person? No. It makes him human.
"Jane" works hard to reduce the suffering in the world and is well known in her cause area. I also observed her (in-person, not online) being inconsiderate of others and deprioritizing them in a way that benefited her organization and her cause. Does that make her a villain? No. She was stressed and busy and focused on other things and trying to accomplish stuff. It just makes her human.
"EA org with funding and respect" had simple typos on their website
A fairly well-known EA org insisted that they will only hire people for a generalist role with both a specific professional background and several years of leadership/management experience at an EA organization. (which is somewhat ridiculous, as there are only a handful of people in the world who meet this standard)
Job applications (an area I am somewhat familiar with) are often rejected for reasons unrelated to the person's ability to do the job. Maybe you've heard of the idea that candidates get rejected if the decision-maker wouldn't want to spend time stuck in an airport with them. I haven't heard anyone specifically reference this heuristic, but I've seen similar things: people are rejected because of vibes.
So what I'm saying here is that people (and the organizations that people work for) make mistakes, some trivial, and some major. It is possible that you are looking at other people's highlight reels, and assuming that it represents their normal performance.
First of all, do not give up! And keep fighting/trying to reach your goals :)
I am not sure how the 80,000 Hours career advising and the Long-Term Future Fund work, but it might be good if they check for internal biases that select people from certain universities. So we shouldn't take it personally, as if we're not worthy enough.
I attended one of these schools, and I guarantee you most of the people there are quite normal.[1] It could be that many of the successful applicants at the funds have some kind of support that you do not currently have.
Many jobs are prioritised for people coming from elite schools and/or with the right connections. Or at least this is what I have seen. We might get rejected even if we are better than someone else.
What to do then? I would say you need to raise your chances of achieving your goals. One way is to play in some unexplored space. You gotta find a niche and connect with people. Talk to people, and reach out widely to get the ball rolling :)
At the end of the day, you gotta believe in yourself, no matter how smart/athletic/accomplished you are. And this, I would say, is the most important trait.[2]
Of the people I met who entered just for a master's, most were nothing special, except for a few really amazing people. At the graduate/PhD level I would say it's 50-50, and the undergraduate level has the highest variance, with truly exceptional kids as well as low performers. I have also met people from some non-elite/low-ranking schools, and some of them are very intelligent and would easily be in the top cohort at elite schools.
I believe that everyone in EA should try to use the SHOW framework to really see how they can advance their impact. To reiterate:
1. Get Skilled: Use non-EA opportunities to level up on those abilities EA needs most.
2. Get Humble: Amplify others’ impact from a more junior role.
3. Get Outside: Find things to do in EA’s blind spots, or outside EA organizations.
4. Get Weird: Find things no one is doing.
I do think getting skilled is the most practical advice. And if that fails, you can always get humble: if you make an EA 10% more effective you already contributed 10% of their impact!
As a utilitarian, I think that surveys of happiness in different countries can serve as an indicator of how well the various societies and government systems of these countries serve the greatest good. I know this is a very rough proxy and potentially filled with confounding variables, but I noticed that the two main surveys, Gallup's World Happiness Report, and Ipsos' Global Happiness Survey seem to have very different results.
Notably, Gallup's report puts Northern European countries like the Netherlands (7.403) and Sweden (7.395) near the top, with Canada (6.961) and the United States (6.894) scoring pretty well, countries like China (5.818) scoring modestly, and India (4.036) scoring poorly.
Conversely, the Ipsos Survey puts China (91%) at the top, with the Netherlands (85%) and India (84%) scoring quite well, while the United States (76%), Sweden (74%), and Canada (74%) are more modest.
I'm curious why these surveys seem to differ so much. Obviously, the questions are different, and the scoring method is also different, but you'd expect a stronger correlation. I'm especially surprised by the differences for China and India, which seem quite drastic.
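To make the "you'd expect a stronger correlation" intuition concrete, here is a quick sanity check using only the six countries quoted above (a tiny, illustrative sample, not a rigorous analysis):

```python
# Pearson correlation between the Gallup (0-10 ladder) and Ipsos (% happy)
# scores for the six countries mentioned above. Pure stdlib, illustrative only.
gallup = {"Netherlands": 7.403, "Sweden": 7.395, "Canada": 6.961,
          "United States": 6.894, "China": 5.818, "India": 4.036}
ipsos = {"Netherlands": 85, "Sweden": 74, "Canada": 74,
         "United States": 76, "China": 91, "India": 84}

countries = list(gallup)
xs = [gallup[c] for c in countries]
ys = [ipsos[c] for c in countries]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(xs, ys)
print(f"r = {r:.2f}")
```

On this handful of countries the two surveys come out *negatively* correlated, which is an even starker disagreement than mere weak correlation (though with n = 6 this is anecdote, not statistics).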
As you've pointed out, the questions are very different. The Gallup Poll asks people to rank their current position in life from "the best possible" to the "worst possible" on a ten point scale which implies that unequal opportunities and outcomes matter a lot.
The Ipsos poll avoids any sort of implicit comparison with how much better things could otherwise have been, or actually are for others, and simply asks whether respondents would describe themselves as (very) happy or not (at all) on a simpler 4-point scale, which is collapsed to a yes/no answer for the ranking.
So Chinese and Indian people aren't being asked whether they're conscious of the many things they lack that could make their lives better, as in the Gallup poll; they're being asked whether they feel so bad about their life that they'd describe themselves as unhappy (or, for various other questions, "unsatisfied"). People tend to be biased towards saying they're happy, and there's likely a cultural component to how willing people are to say they're not, too.
And to add to the complications, the samples are non-random and not necessarily equivalent. IPSOS acknowledge their developing country samples are significantly more affluent, urban and educated than the population, which might explain why even when it comes to their personal finances they're often more "satisfied" than inhabitants of countries with much higher median incomes. Gallup doesn't admit that sampling bias, but even if it's present to exactly the same extent (it's bound to be present to some extent; poor, rural illiterate people are hard to randomly survey) it probably doesn't have the same effect. Indian professionals can simultaneously be "happy" with their secure-by-local standards position in life and aware that their life outcomes could have been a whole lot better.
I think the stark differences are a good illustration of the limits of subjective wellbeing data, but arguably neither survey captures SWB particularly well anyway: the former because it asks people to make a comparison of [mainly objective] outcomes, and the latter because the scale is too simple to capture hedonic utility.
I have some ideas and drafts for posts to the EA Forums and Less Wrong that I've been sitting on because I feel somewhat intimidated by the level of intellectual rigor I would need to put into the final drafts to ensure I'm not downvoted into oblivion (particularly on Less Wrong, where a younger me experienced such in the early days).
Should I try to overcome this fear, or is it justified?
For the EA Forums, I was thinking about explaining my personal practical take on moral philosophy (Eudaimonic Utilitarianism with Kantian Priors), but I don't know if that's actually worth explaining given that EA tries to be inclusive and not take particular stands on morality, and it might not be relevant enough to the forum.
For Less Wrong, I have a draft of a response to Eliezer's List of Lethalities post that I've been sitting on since 2022/04/11, because I doubted it would be well received given that it tries to be hopeful and, as a former machine learning scientist, I try to challenge a lot of LW orthodoxy about AGI in it. I have tremendous respect for Eliezer, though, so I'm also uncertain whether my ideas and arguments are just harebrained foolishness that will be shot down rapidly once exposed to the real world and the incisive criticism of Less Wrongers.
The posts in both places are also now of such high quality that I feel the bar is too high for me to meet with my writing, which tends to be more "interesting train-of-thought in unformatted paragraphs" than the "point-by-point articulate with section titles and footnotes" style that people in both places tend to employ.
As someone who spends most of my time here critiquing EA/rationalist orthodoxy, I don't think you have much to worry about besides annoying comments. A good-faith critique presented politely is rarely downvoted.
Also, I feel like there's selection bias going on around the quality of posts. The best, super highly upvoted posts may be extremely high quality, but there are still plenty of posts that aren't (and that's fine, this is an open forum, not an academic journal).
I'd be interested in reading your List of Lethalities response. I'm not sure it would be that badly received; for example, this response by Quintin Pope got 360 upvotes. List of Lethalities seems to be a fringe view even among AI x-risk researchers, let alone the wider machine learning community.
This is one reason why it's very common for people to write a Google doc first, share it around, update it based on feedback and then post. But this only works if you know enough people who are willing to give you feedback.
An additional option: if you don't know people who are willing to review a document and give you feedback, you could ask people in the Effective Altruism Editing and Review Facebook group to review it.
On this Forum, it is rather rare for good-faith posts to end up with net negative karma. The "worst" reasonably likely outcome is to get very little engagement with your post, which is still more engagement than it will get in your drafts folder. I can't speak to LW, though.
I also think the appropriate reference point is not the median quality of posts here, but the range of first posts from people who later developed into recognized, successful posters.
From your description, my only concern would be whether your post sufficiently relates to EA. If it's ~80-90 percent a philosophy piece, maybe there's a better outlet for it. If it's ~50-70 percent, maybe it would work here with a brief summary of the philosophical position upfront and an internal link for the reader who wants to jump directly to the more directly EA-relevant content?
I've often felt a similar "my thoughts aren't valuable enough to share" feeling. I tend to write these thoughts as a quick take rather than as a normal forum post, and I also try to phrase my words in a manner that indicates I am writing rough thoughts, or observations, or something similarly non-rigorous (as a sort of signal to the reader that it shouldn't be evaluated by the same standard).
Either it'll be received well, or you'll get free criticism on your ideas, or a blend of the two. You win in all cases. If it gets downvoted into oblivion, you can always delete it; how many deleted posts can you tie to an author? I can't name one.
Ultimately, nobody cares about you (or me, or any other random forum user). They're too busy worrying about how they'll be perceived. This is a blessing. You can take risks and nobody will really care if you fail.
Either it'll be received well, or you'll get free criticism on your ideas, or a blend of the two.
A tough pill for super-sensitives like me to swallow, but I can see it as an exceptionally powerful one. I surely sympathize with OP on the fear of being downvoted—it's what kept me away from this site for months and from Reddit entirely—but valid criticism on many occasions has influenced me for the better, even if I'm scornful of the moments. Maybe my hurt with being wrong will lessen someday or maybe not, but knowing why can serve me well in the end, I can admit that.
My view is you should write/post something if you believe it's an idea that people haven't sufficiently engaged with in the past. Both of your post ideas sound like that to me.
If you have expertise on AI, don't be shy about showing it. If you aren't confident, you can frame your critiques as pointed questions, but personally I think it's better to just make your argument.
As for style, I think people will respond much better to your argument if it's clear. Clear is different from extensive; I think your example of many-sections-with-titles-and-footnotes conflates those two. That format is valuable for giving structure to your argument, not for being a really extensive argument that covers every possible ground. I agree that "interesting train of thought in unformatted paragraphs" won't likely be received well in either venue. I think it's good communication courtesy to make your ideas clear to people who you are trying to convey them to. Clear structure is your friend, not a bouncer keeping you out of the club.
So, I read a while back that SBF apparently posted on Felicifia back in the day. Felicifia was an old utilitarianism-focused forum that I used to frequent before it was taken down. I checked an archive of it recently and was able to figure out that SBF actually posted there under the name Hutch. He also linked a blog that included a lot of posts about utilitarianism, and it looks like, at least around 2012, he was a devoted Classical Benthamite Utilitarian. Although we never interacted on the forum, it feels weird that we could have crossed paths back then.
I tried asking ChatGPT, Gemini, and Claude to come up with a formula that converts from correlation space to probability space while preserving the relationship that a correlation of 0 maps to a probability of 1/n. I came up with such a formula a while back, so I figured it shouldn't be hard. They all offered formulas, all of which were shown to be very much wrong when I actually graphed them to check.
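For readers wondering what such a mapping could even look like: here is one hypothetical construction satisfying the stated constraint. To be clear, this is my own illustrative sketch, not the author's formula — a piecewise-linear map sending correlation -1 to probability 0, correlation 0 to 1/n, and correlation 1 to 1:

```python
# Illustrative (hypothetical) correlation -> probability mapping that
# preserves r = 0 -> p = 1/n. Not the author's formula; just one example
# of a function meeting the anchor points r=-1 -> 0, r=0 -> 1/n, r=1 -> 1.
def corr_to_prob(r: float, n: int) -> float:
    base = 1.0 / n
    if r >= 0:
        return base + r * (1.0 - base)  # interpolate up towards 1
    return base + r * base              # interpolate down towards 0

# Check the anchor points for n = 4:
print(corr_to_prob(0.0, 4))   # 0.25, i.e. 1/n
print(corr_to_prob(1.0, 4))   # 1.0
print(corr_to_prob(-1.0, 4))  # 0.0
```

Graphing a candidate against these three anchor points (as the author did) is exactly how the LLM-generated formulas were caught out.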
I've been looking at the numbers with regards to how many GPUs it would take to train a model with as many parameters as the human brain has synapses. The human brain has 100 trillion synapses, and they are sparse and very efficiently connected. A regular AI model fully connects every neuron in a given layer to every neuron in the previous layer, so that would be less efficient.
The H100 has 80 GB of VRAM, so assuming each parameter is 32 bits (4 bytes), you can fit about 20 billion parameters per GPU. So you'd need about 5,000 GPUs just to hold a single instance of a human-brain-sized model's weights in memory. If you assume inefficiencies, and that gradients, optimizer states, and data also need to be in memory, you could ballpark another order of magnitude, so 50,000 might be needed.
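The back-of-envelope arithmetic can be checked directly; note that 100 trillion parameters divided by 20 billion parameters per GPU works out to 5,000 GPUs for the weights alone, before any training overhead:

```python
# Back-of-envelope: GPUs needed to hold 100 trillion fp32 parameters.
SYNAPSES = 100e12      # ~human brain synapse count, used as parameter count
BYTES_PER_PARAM = 4    # fp32
VRAM_PER_GPU = 80e9    # H100: 80 GB

params_per_gpu = VRAM_PER_GPU / BYTES_PER_PARAM   # 20 billion params/GPU
gpus_for_weights = SYNAPSES / params_per_gpu      # GPUs to hold weights alone
gpus_with_overhead = gpus_for_weights * 10        # crude 10x for training state

print(f"{gpus_for_weights:,.0f} GPUs for weights, "
      f"~{gpus_with_overhead:,.0f} with a 10x training overhead")
```

The 10x overhead factor is a deliberately crude stand-in for gradients, optimizer states, and activations; real training-memory multipliers depend heavily on the optimizer and parallelism scheme.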
For comparison, it's widely believed that OpenAI trained GPT-4 on about 10,000 A100s that Microsoft let them use from their Azure supercomputer, most likely the one listed as third most powerful in the world by the Top500 list.
Recently though, Microsoft and Meta have both moved to acquire more GPUs, putting them in the 100,000 range, and Elon Musk's xAI recently got a 100,000-H100 supercomputer online in Memphis.
So, in theory at least, we are nearly at the point where they could train a human-brain-sized model, in terms of memory. However, keep in mind that training such a model would take a ton of compute time. I haven't done the calculations for FLOPS yet, so I don't know if it's feasible.
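A very rough version of that FLOPS calculation can be sketched with the common 6·N·D approximation for training compute. The assumptions here are mine, not the author's: Chinchilla-style ~20 tokens per parameter, and ~1e15 FLOP/s of effective throughput per H100:

```python
# Rough training-compute estimate via the 6*N*D approximation.
# Assumptions (illustrative, not the author's figures):
#   - Chinchilla-style data budget of ~20 tokens per parameter
#   - ~1e15 FLOP/s effective per H100, 100,000-GPU cluster
N = 100e12               # parameters (~brain synapse count)
D = 20 * N               # training tokens
train_flops = 6 * N * D  # ~1.2e30 FLOPs total

cluster_flops = 100_000 * 1e15           # FLOP/s for the whole cluster
seconds = train_flops / cluster_flops
years = seconds / (365.25 * 24 * 3600)
print(f"~{train_flops:.1e} FLOPs, ~{years:,.0f} years on 100k H100s")
```

Under these assumptions the training run would take on the order of centuries, which suggests that compute, not memory, is the binding constraint at this scale.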
Also, even if we can train and run a model the size of the human brain, it would still be many orders of magnitude less energy efficient than an actual brain. Human brains use barely 20 watts. This hypothetical GPU brain would require an enormous data centre's worth of power: each H100 GPU alone draws 700 watts.
I'm wondering what people's opinions are on how urgent alignment work is. I'm a former ML scientist who previously worked at Maluuba and Huawei Canada, but switched industries into game development, at least in part to avoid contributing to AI capabilities research. I tried earlier to interview with FAR and Generally Intelligent, but didn't get in. I've also done some cursory independent AI safety research on interpretability and game-theoretic ideas in my spare time, though nothing interesting enough to publish yet.
My wife also recently had a baby, and caring for him is a substantial time sink, especially for the next year until daycare starts. Is it worth considering things like hiring a nanny, if it'll free me up to actually do more AI safety research? I'm uncertain if I can realistically contribute to the field, but I also feel like AGI could potentially be coming very soon, and maybe I should make the effort just in case it makes some meaningful difference.
It's really hard to know without knowledge of how much a nanny costs, your financial situation and how much you'd value being able to look after your child yourself.
If you'd be fine with a nanny looking after your child, then it is likely worthwhile spending a significant amount of money in order to discover whether you would have a strong fit for alignment research sooner.
I would also suggest that switching out of AI completely was likely a mistake. I'm not suggesting that you should have continued advancing fundamental AI capabilities, but the vast majority of jobs in AI relate to building AI applications rather than advancing fundamental capabilities. Those jobs won't have a significant effect on shortening timelines, but would allow you to further develop your skills in AI.
Another thing to consider: if at some point you decide that you are unlikely to break into technical AI safety research, it may be worthwhile to look at contributing in an auxiliary manner, e.g. through mentorship or teaching or movement-building.
I'm starting to think it was a mistake for me to engage in this debate week thing. I just spent a good chunk of my baby's first birthday arguing with strangers on the Internet about what amounts to animals vs. humans. This does not seem like a good use of my time, but I'm too pedantic to resist replying to comments I feel the need to reply to. -_-
In general, I feel like this debate week thing seems somewhat divisive as well. At least, it doesn't feel nice to have so many disagrees on my posts, even if they still somehow got a positive amount of karma.
I really don't have time to make high-effort posts, and it seems like low-effort posts do a disservice to people who are making high-effort posts, so I might just stop.
So, a while back I came up with an obscure idea I called the Alpha Omega Theorem and posted it on the Less Wrong forums. Given how there's only one post about it, it shouldn't be something that LLMs would know about. So in the past, I'd ask them "What is the Alpha Omega Theorem?", and they'd always make up some nonsense about a mathematical theory that doesn't actually exist. More recently, Google Gemini and Microsoft Bing Chat would use search to find my post and use that as the basis for their explanation. However, I only have the free version of ChatGPT and Claude, so they don't have access to the Internet and would make stuff up.
A couple days ago I tried the question on ChatGPT again, and GPT-4o managed to correctly say that there isn't a widely known concept of that name in math or science, and basically said it didn't know. Claude still makes up a nonsensical math theory. I also today tried telling Google Gemini not to use search, and it also said it did not know rather than making stuff up.
I'm actually pretty surprised by this. Looks like OpenAI and Google figured out how to reduce hallucinations somehow.
I ran out of the usage limit for GPT-4o (seems to just be 10 prompts every 5 hours) and it switched to GPT-4o-mini. I tried asking it the Alpha Omega question and it made some math nonsense up, so it seems like the model matters for this for some reason.
I recently interviewed with Epoch, and as part of a paid work trial they wanted me to write up a blog post about something interesting related to machine learning trends. This is what I came up with:
A minor personal gripe I have with EA is that it seems like the vast majority of the resources are geared towards what could be called young elites, particularly highly successful people from top universities like Harvard and Oxford.
For instance, opportunities listed on places like 80,000 Hours are generally the kind of jobs that such people are qualified for, i.e. AI policy at RAND, or AI safety researcher at Anthropic, or something similar that I suspect less than the top 0.001% of human beings would be remotely relevant for.
Someone like myself, who graduated from less prestigious schools, or who struggles in small ways to be as high functioning and successful, can feel like we're not competent enough to be useful to the cause areas we care about.
I personally have been rejected in the past from both 80,000 Hours career advising, and the Long-Term Future Fund. I know these things are very competitive of course. I don't blame them for it. On paper, my potential and proposed project probably weren't remarkable. The time and money should go to the those who are most likely to make a good impact. I understand this.
It just, I guess I just feel like I don't know where I should fit into the EA community. Even just many people on the forum seem incredibly intelligent, thoughtful, kind, and talented. The people at the EA Global I atttended in 2022 were clearly brilliant. In comparison, I just feel inadequate. I wonder if others who don't consider themselves exceptional also find themselves intellectually intimidated by the people here.
We do probably need the best of the best to be involved first and foremost, but I think we also need the average, seemingly unremarkable EA sympathetic person to be engaged in some way if we really want to be more than a small community, to be as impactful as possible. Though, maybe I'm just biased to believe that mass movements are historically what led to progress. Maybe a small group of elites leading the charge is actually what it takes?
I don't know where I'm going with this. It's just some thoughts that have been in the back of my head for a while. This is definitely not worth a full post, so I'm just gonna leave it as a quick take.
Unfortunately, a single organisation can't do everything. There's a lot of advantages of picking a particular niche and targeting it, so I think it makes sense for 80,000 Hours to leave serving other groups of people to other organisations.
Have you heard of Probably Good? Some of the career paths they suggest might be more accessible to you.
You might also want to consider running iterations of the intro course locally. Facilitating can be challenging at times, and not everyone will necessarily be good at it, but I suspect that most people would become pretty good given enough practice and dedication.
Earning to Give is another option that is more accessible as it just requires a career that pays decently (and there are a lot of different options here).
Firstly, I'm sorry that you feel inadequate compared to people on the EA Forum or at EAGs. I think EA is a pretty weird community and it's totally reasonable for people to not feel like it's for them and instead try and do an ambitious amount of good outside the community.
I think this is somewhat orthogonal to feelings of rejection or the broader point that you are making about the higher impact potential of larger communities but I've personally felt that whilst EA seems to "care more" about people who are particularly smart, hardworking, and altruistic, it does a good job of giving people from various backgrounds an opportunity to participate - even if it's differentially easier if you went to a top university.
For example, I think if someone with little or no reputation were to post a few top 10% of rethink priorities quality articles on important topics in fish welfare on the EA Forum they'd gain a lot of career capital and would almost overnight be on various organisation's radars as someone to consider hiring (or at least be competitive in various application processes). I think that story is probably more true for AI safety. Contrast this with hiring for various hedge funds and consultancies which can be really hard to break into if you didn't go to a small set of universities.
I don't fit a typical EA but I've managed to get things done by being "Entrepreneurial" and focussing on neglected areas.
Find things that are important, neglected and where you have a comparative advantage.
EAs overrate smartness in a world that prioritises getting shit done quickly. Being the 5th commenter on a google doc is not a path to impact. Create something. Bring a thing into the world people need but noone is doing.
I think there are a lot of roles on the 80k job board and other places that don't need anyone special but just need good hard generalist workers. A lot of operations roles don't need anything special but just "work that needs doing".
As some examples https://jobs.lever.co/aisafety/c5269975-e074-44ee-ad32-9ff521f4d709 this is a good job if someone wants something in AI safety.
This is more of an EA infrastructure role https://www.givingwhatwecan.org/get-involved/careers/operations-associate
Another AI role https://careers.rethinkpriorities.org/en/postings/b6cbef86-5239-4218-9aaa-8b7fe660db72
Another AI role https://www.arena.education/operations-lead
For animal welfare, I would say the bulk of the jobs don't need anyone who is some kind of 0.001% person.
https://animal-equality.rippling-ats.com/job/807514/operations-manager
https://www.fishwelfareinitiative.org/pa
For global health, there are a ton of jobs at the Clinton Health Access Initiative that you don't need to move to Africa to do. Other than that, here are some others:
https://malariaconsortium.current-vacancies.com/Jobs/Advert/3701297?cid=2061&t=Compliance-Manager
It can be tough to see things you'd like to do and to feel that they aren't accessible to you (although I think you are correct that 80k and most of the larger EA orgs focus primarily on elites).
It seems that you've already hit on the major points, so maybe you just need to take time to process it and accept it. But I do want to provide two alternatives:
First, people make tradeoffs and sacrifices. For the people you see writing impressive essays about animal suffering or creating projects that earn respect and funding, there are a lot of things they are choosing not to do. This ranges from the simple (they don't get the fun of following that new show) to the profound (they don't spend time cultivating a good relationship with a sibling, or they leave a girlfriend to move to a new country).
Second, some of the people on the EA Forum aren't actually so talented or intelligent or thoughtful or kind. We may seem that way sometimes, and maybe sometimes we are, but we also have times when we are foolish or selfish or blundering.
So what I'm saying here is that people (and the organizations that people work for) make mistakes, some trivial, and some major. It is possible that you are looking at other people's highlight reels, and assuming that it represents their normal performance.
First of all, do not give up! And keep fighting/trying to reach your goals :)
I am not sure how 80,000 Hours career advising and the Long-Term Future Fund work, but it might be good if they checked for internal biases that select people from certain universities. So we shouldn't take it personally, as if we weren't worthy enough.
I attended one of these schools, and I guarantee you most of the people there are quite normal.[1] It could be that many of the successful applicants at the funds have some kind of support that you do not currently have.
Many jobs are prioritised for people coming from elite schools and/or with the right connections, or at least that is what I have seen. We might get rejected even if we are better than someone else.
What to do, then? I would say you need to raise your chances of achieving your goals. One way is to play in some unexplored space. You gotta find a niche and connect with people. Talk to people, and reach out widely to get the ball rolling :)
At the end of the day, you gotta believe in yourself, no matter how smart/athletic/contributing you are. And this, I would say, is the most important trait.[2]
Keep up!
People I met who entered just for a master's degree are nothing special, except for a few really amazing people. At the graduate/PhD level I would say it's 50-50, and the undergraduate level has the highest variance, with truly out-of-sample kids alongside low performers. I have also met people from non-elite/low-ranking schools, and some of them are very intelligent and would easily be in the top cohort at elite schools.
If I ever found a company, I will choose people based on character and competence. The piece of paper saying where they studied won't matter.
I believe that everyone in EA should try to use the SHOW framework to really see how they can advance their impact. To reiterate: get Skilled, get Humble, get Outside, or get Weird.
I do think getting skilled is the most practical advice. And if that fails, you can always get humble: if you make an EA 10% more effective you already contributed 10% of their impact!
As a utilitarian, I think that surveys of happiness in different countries can serve as an indicator of how well the various societies and government systems of these countries serve the greatest good. I know this is a very rough proxy and potentially filled with confounding variables, but I noticed that the two main surveys, Gallup's World Happiness Report, and Ipsos' Global Happiness Survey seem to have very different results.
Notably, Gallup's report puts Northern European countries like the Netherlands (7.403) and Sweden (7.395) near the top, with Canada (6.961) and the United States (6.894) scoring pretty well, China (5.818) scoring modestly, and India (4.036) scoring poorly.
Conversely, the Ipsos Survey puts China (91%) at the top, with the Netherlands (85%) and India (84%) scoring quite well, while the United States (76%), Sweden (74%), and Canada (74%) are more modest.
I'm curious why these surveys seem to differ so much. Obviously, the questions are different, and the scoring method is also different, but you'd expect a stronger correlation. I'm especially surprised by the differences for China and India, which seem quite drastic.
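To put a rough number on how much they differ, the two sets of figures quoted above can be correlated directly. This is only a crude check on the six countries I happened to list, not the full samples:

```python
# Pearson correlation between the two surveys' figures for the six
# countries quoted above (a crude six-point check, not a full comparison).
from statistics import mean, stdev

countries = ["Netherlands", "Sweden", "Canada", "United States", "China", "India"]
gallup = [7.403, 7.395, 6.961, 6.894, 5.818, 4.036]  # 0-10 ladder score
ipsos = [85, 74, 74, 76, 91, 84]                      # % describing themselves as happy

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

r = pearson(gallup, ipsos)
print(f"r = {r:.2f}")  # on these six countries the surveys are actually negatively correlated
```

On this (admittedly tiny) sample the two rankings don't just correlate weakly, they point in opposite directions, which is what makes the China and India gaps so striking.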
As you've pointed out, the questions are very different. The Gallup Poll asks people to rank their current position in life from "the best possible" to the "worst possible" on a ten point scale which implies that unequal opportunities and outcomes matter a lot.
The IPSOS poll avoids any sort of implicit comparison with how much better things could otherwise have been, or actually are for others, and simply asks whether respondents would describe themselves as (very) happy or not (at all) on a simpler 4-point scale, which is collapsed to a yes/no answer for the ranking.
So Chinese and Indian people aren't being asked whether they're conscious of the many things they lack which could make their life better, like in the Gallup poll; they're being asked whether they feel so bad about their life that they wish to describe themselves as unhappy (or, for various other questions, "unsatisfied"). People tend to be biased towards saying they're happy, and there's likely to be a cultural component to how willing people are to say they're not, too.
And to add to the complications, the samples are non-random and not necessarily equivalent. IPSOS acknowledge their developing-country samples are significantly more affluent, urban, and educated than the population, which might explain why, even when it comes to their personal finances, they're often more "satisfied" than inhabitants of countries with much higher median incomes. Gallup doesn't acknowledge the same sampling bias, but even if it's present to exactly the same extent (it's bound to be present to some extent; poor, rural, illiterate people are hard to randomly survey), it probably doesn't have the same effect. Indian professionals can simultaneously be "happy" with their secure-by-local-standards position in life and aware that their life outcomes could have been a whole lot better.
I think the stark differences are a good illustration of the limits of subjective wellbeing data, but arguably neither survey captures SWB particularly well anyway: the former because it asks people to make a comparison of [mainly objective] outcomes, and the latter because the scale is too simple to capture hedonic utility.
I have some ideas and drafts for posts to the EA Forums and Less Wrong that I've been sitting on because I feel somewhat intimidated by the level of intellectual rigor I would need to put into the final drafts to ensure I'm not downvoted into oblivion (particularly on Less Wrong, where a younger me experienced such in the early days).
Should I try to overcome this fear, or is it justified?
For the EA Forums, I was thinking about explaining my personal practical take on moral philosophy (Eudaimonic Utilitarianism with Kantian Priors), but I don't know if that's actually worth explaining given that EA tries to be inclusive and not take particular stands on morality, and it might not be relevant enough to the forum.
For Less Wrong, I have a draft of a response to Eliezer's List of Lethalities post that I've been sitting on since 2022/04/11 because I doubted it would be well received, given that it tries to be hopeful and, as a former machine learning scientist, I try to challenge a lot of LW orthodoxy about AGI in it. I have tremendous respect for Eliezer though, so I'm also uncertain whether my ideas and arguments aren't just harebrained foolishness that will be shot down rapidly once exposed to the real world and the incisive criticism of Less Wrongers.
The posts in both places are also now of such high quality that I feel the bar is too high for me to meet with my writing, which tends to be more "interesting train-of-thought in unformatted paragraphs" than the "point-by-point articulate with section titles and footnotes" style that people in both places tend to employ.
Anyone have any thoughts?
Short form/quick takes can be a good compromise, and sources of feedback for later versions.
As someone who spends most of my time here critiquing EA/rationalist orthodoxy, I don't think you have much to worry about, besides annoying comments. A good-faith critique presented politely is rarely downvoted.
Also, I feel like there's selection bias going on around the quality of posts. The best, super highly upvoted posts may be extremely high quality, but there are still plenty of posts that aren't (and that's fine, this is an open forum, not an academic journal).
I'd be interested in reading your List of Lethalities response. I'm not sure it would be that badly received; for example, this response by Quintin Pope got 360 upvotes. List of Lethalities seems to be a fringe view even among AI x-risk researchers, let alone the wider machine learning community.
This is one reason why it's very common for people to write a Google doc first, share it around, update it based on feedback and then post. But this only works if you know enough people who are willing to give you feedback.
An additional option: if you don't know people who are willing to review a document and give you feedback, you could ask people in the Effective Altruism Editing and Review Facebook group to review it.
On this Forum, it is rather rare for good-faith posts to end up with net negative karma. The "worst" reasonably likely outcome is to get very little engagement with your post, which is still more engagement than it will get in your drafts folder. I can't speak to LW, though.
I also think the appropriate reference point is not the median post here, but the range of first posts from people who later developed into recognized, successful posters.
From your description, my only concern would be whether your post sufficiently relates to EA. If it's ~80-90 percent a philosophy piece, maybe there's a better outlet for it. If it's ~50-70 percent, maybe it would work here with a brief summary of the philosophical position upfront and an internal link for the reader who wants to jump directly to the more directly EA-relevant content?
I encourage you to share your ideas.
I've often felt a similar "my thoughts aren't valuable enough to share" feeling. I tend to write these thoughts as a quick take rather than as a normal forum post, and I also try to phrase my words in a manner that indicates I am writing rough thoughts, or observations, or something similarly non-rigorous (as sort of a signal to the reader that it shouldn't be evaluated by the same standard).
Either it'll be received well, or you'll get free criticism on your ideas, or a blend of the two. You win in all cases. If it gets downvoted into oblivion you can always delete it; how many deleted posts can you tie to an author? I can't name one.
Ultimately, nobody cares about you (or me, or any other random forum user). They're too busy worrying about how they'll be perceived. This is a blessing. You can take risks and nobody will really care if you fail.
A tough pill for super-sensitives like me to swallow, but I can see it as an exceptionally powerful one. I surely sympathize with OP on the fear of being downvoted. It's what kept me away from this site for months and from Reddit entirely, but valid criticism has on many occasions influenced me for the better, even if I'm scornful in the moment. Maybe my hurt at being wrong will lessen someday or maybe not, but knowing why I was wrong can serve me well in the end; I can admit that.
My view is you should write/post something if you believe it's an idea that people haven't sufficiently engaged with in the past. Both of your post ideas sound like that to me.
If you have expertise on AI, don't be shy about showing it. If you aren't confident, you can frame your critiques as pointed questions, but personally I think it's better to just make your argument.
As for style, I think people will respond much better to your argument if it's clear. Clear is different from extensive; I think your example of many-sections-with-titles-and-footnotes conflates those two. That format is valuable for giving structure to your argument, not for being a really extensive argument that covers every possible ground. I agree that "interesting train of thought in unformatted paragraphs" won't likely be received well in either venue. I think it's good communication courtesy to make your ideas clear to people who you are trying to convey them to. Clear structure is your friend, not a bouncer keeping you out of the club.
Post links to google docs as quick takes if posting posts proper feels like a high bar?
So, I read a while back that SBF apparently posted on Felicifia back in the day. Felicifia was an old Utilitarianism focused forum that I used to frequent before it got taken down. I checked an archive of it recently, and was able to figure out that SBF actually posted there under the name Hutch. He also linked a blog that included a lot of posts about Utilitarianism, and it looks like, at least around 2012, he was a devoted Classical Benthamite Utilitarian. Although we never interacted on the forum, it feels weird that we could have crossed paths back then.
His Felicifia: https://felicifia.github.io/user/1049.html
His blog: https://measuringshadowsblog.blogspot.com/
I tried asking ChatGPT, Gemini, and Claude to come up with a formula that converts from correlation space to probability space while preserving the relationship that a correlation of 0 maps to a probability of 1/n. I came up with such a formula a while back, so I figured it shouldn't be hard. They all offered formulas, all of which turned out to be very much wrong when I actually graphed them to check.
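For what it's worth, here is one simple function with the stated property. This is my own illustrative sketch (a piecewise-linear interpolation through the anchor points (-1, 0), (0, 1/n), and (1, 1)), not necessarily the formula I originally derived, and other mappings satisfy the same constraint:

```python
def corr_to_prob(r, n):
    """Map a correlation-like score r in [-1, 1] to a probability,
    anchored so that r = 0 maps to the uniform prior 1/n.
    Illustrative piecewise-linear choice; other curves through the
    same three anchor points would also work."""
    if not -1 <= r <= 1:
        raise ValueError("r must be in [-1, 1]")
    base = 1.0 / n
    if r >= 0:
        return base + r * (1 - base)  # r = 1  -> 1
    return base + r * base            # r = -1 -> 0

# Sanity checks against the anchor constraint (n = 4):
print(corr_to_prob(0, 4))   # 0.25, i.e. 1/n
print(corr_to_prob(1, 4))   # 1.0
print(corr_to_prob(-1, 4))  # 0.0
```

Graphing a candidate formula against these three anchor points is exactly the check that the LLM-suggested formulas failed.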
I've been looking at the numbers with regards to how many GPUs it would take to train a model with as many parameters as the human brain has synapses. The human brain has 100 trillion synapses, and they are sparse and very efficiently connected. A regular AI model fully connects every neuron in a given layer to every neuron in the previous layer, so that would be less efficient.
The average H100 has 80 GB of VRAM, so assuming each parameter is 32 bits (4 bytes), you can fit about 20 billion parameters per GPU. So you'd need around 5,000 GPUs just to fit a single instance of a human-brain-sized set of weights in memory. If you assume inefficiencies and the need to keep other data in memory as well, you could ballpark another order of magnitude, so around 50,000 might be needed.
For comparison, it's widely believed that OpenAI trained GPT4 on about 10,000 A100s that Microsoft let them use from their Azure supercomputer, most likely the one listed as third most powerful in the world by the Top500 list.
Recently though, Microsoft and Meta have both moved to acquire more GPUs that put them in the 100,000 range, and Elon Musk's X.ai recently managed to get a 100,000 H100 GPU supercomputer online in Memphis.
So, in theory at least, we are nearly at the point where they can train a human-brain-sized model in terms of memory. However, keep in mind that training such a model would take a ton of compute time. I haven't done the calculations for FLOPS yet, so I don't know if it's feasible.
Just some quick back of the envelope analysis.
Also, even if we can train and run a model the size of the human brain, it would still be many orders of magnitude less energy efficient than an actual brain. Human brains use barely 20 watts. This hypothetical GPU brain would require an enormous data centre's worth of power; each H100 GPU alone uses 700 watts.
I'm wondering what people's opinions are on how urgent alignment work is. I'm a former ML scientist who previously worked at Maluuba and Huawei Canada, but I switched industries into game development, at least in part to avoid contributing to AI capabilities research. I tried earlier to interview with FAR and Generally Intelligent, but didn't get in. I've also done some cursory independent AI safety research in interpretability and game-theoretic ideas in my spare time, though nothing interesting enough to publish yet.
My wife also recently had a baby, and caring for him is a substantial time sink, especially for the next year until daycare starts. Is it worth considering things like hiring a nanny, if it'll free me up to actually do more AI safety research? I'm uncertain if I can realistically contribute to the field, but I also feel like AGI could potentially be coming very soon, and maybe I should make the effort just in case it makes some meaningful difference.
It's really hard to know without knowledge of how much a nanny costs, your financial situation and how much you'd value being able to look after your child yourself.
If you'd be fine with a nanny looking after your child, then it is likely worthwhile spending a significant amount of money in order to discover whether you would have a strong fit for alignment research sooner.
I would also suggest that switching out of AI completely was likely a mistake. I'm not suggesting that you should have continued advancing fundamental AI capabilities, but the vast majority of jobs in AI relate to building AI applications rather than advancing fundamental capabilities. Those jobs won't have a significant effect on shortening timelines, but will allow you to further develop your skills in AI.
Another thing to consider: if at some point you decide that you are unlikely to break into technical AI safety research, it may be worthwhile to look at contributing in an auxiliary manner, ie. through mentorship or teaching or movement-building.
I'm starting to think it was a mistake for me to engage in this debate week thing. I just spent a good chunk of my baby's first birthday arguing with strangers on the Internet about what amounts to animals vs. humans. This does not seem like a good use of my time, but I'm too pedantic to resist replying to comments I feel the need to reply to. -_-
In general, I feel like this debate week thing seems somewhat divisive as well. At least, it doesn't feel nice to have so many disagrees on my posts, even if they still somehow got a positive amount of karma.
I really don't have time to make high-effort posts, and it seems like low-effort posts do a disservice to people who are making high-effort posts, so I might just stop.
So, a while back I came up with an obscure idea I called the Alpha Omega Theorem and posted it on the Less Wrong forums. Given how there's only one post about it, it shouldn't be something that LLMs would know about. So in the past, I'd ask them "What is the Alpha Omega Theorem?", and they'd always make up some nonsense about a mathematical theory that doesn't actually exist. More recently, Google Gemini and Microsoft Bing Chat would use search to find my post and use that as the basis for their explanation. However, I only have the free version of ChatGPT and Claude, so they don't have access to the Internet and would make stuff up.
A couple days ago I tried the question on ChatGPT again, and GPT-4o managed to correctly say that there isn't a widely known concept of that name in math or science, and basically said it didn't know. Claude still makes up a nonsensical math theory. I also today tried telling Google Gemini not to use search, and it also said it did not know rather than making stuff up.
I'm actually pretty surprised by this. Looks like OpenAI and Google figured out how to reduce hallucinations somehow.
I ran out of the usage limit for GPT-4o (seems to just be 10 prompts every 5 hours) and it switched to GPT-4o-mini. I tried asking it the Alpha Omega question and it made some math nonsense up, so it seems like the model matters for this for some reason.
I recently interviewed with Epoch, and as part of a paid work trial they wanted me to write up a blog post about something interesting related to machine learning trends. This is what I came up with:
http://www.josephius.com/2022/09/05/energy-efficiency-trends-in-computation-and-long-term-implications/