All posts

New & upvoted

Today, 14 November 2024

Frontpage Posts

Quick takes

116
lukeprog
7h
2
Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things: * Open Philanthropy (OP) and our largest funding partner Good Ventures (GV) can't be or do everything related to GCRs from AI and biohazards: we have limited funding, staff, and knowledge, and many important risk-reducing activities are impossible for us to do, or don't play to our comparative advantages. * Like most funders, we decline to fund the vast majority of opportunities we come across, for a wide variety of reasons. The fact that we declined to fund someone says nothing about why we declined to fund them, and most guesses I've seen or heard about why we didn't fund something are wrong. (Similarly, us choosing to fund someone doesn't mean we endorse everything about them or their work/plans.) * Very often, when we decline to do or fund something, it's not because we don't think it's good or important, but because we aren't the right team or organization to do or fund it, or we're prioritizing other things that quarter. * As such, we spend a lot of time working to help create or assist other philanthropies and organizations who work on these issues and are better fits for some opportunities than we are. I hope in the future there will be multiple GV-scale funders for AI GCR work, with different strengths, strategies, and comparative advantages — whether through existing large-scale philanthropies turning their attention to these risks or through new philanthropists entering the space. * While Good Ventures is Open Philanthropy's largest philanthropic partner, we also regularly advise >20 other philanthropists who are interested to hear about GCR-related funding opportunities. (Our GHW team also does similar work partnering with many other philanthropist
Paying candidates to complete a test task likely increases inequality and credentialism, and decreases candidate quality. If you pay candidates for their time, you're likely to accept fewer candidates, and lower-variance candidates, into the test-task stage. Orgs can continue to pay top candidates to complete the test task, if they believe it measurably decreases the attrition rate, but should give all candidates who pass an anonymised screening bar the chance to complete a test task.
Some musings about experience and coaching. I saw another announcement relating to mentorship/coaching/career advising recently. It looked like the mentors/coaches/advisors were all relatively junior/young/inexperienced. This isn't the first time I've seen this. Most of this type of thing I've seen in and around EA involves the mentors/advisors/coaches being only a few years into their career. This isn't necessarily bad. A person can be very well-read without having gone to school, or can be very strong without going to a gym, or can speak excellent Japanese without having ever been to Japan. A person being two or three or four years into their career doesn't mean that it is impossible for them to have good ideas and good advice.[1] But it does seem a little... odd. The skepticism I feel is similar to having a physically frail person as a fitness trainer: I am assessing the individual on a proxy (fitness) rather than on the true criterion (ability to advise me regarding fitness). Maybe that thinking is a bit too sloppy on my part. This doesn't mean that if you are 24 and you volunteer as a mentor you should stop; you aren't doing anything wrong. And I wouldn't want some kind of silly and arbitrary rule, such as "only people age 40+ are allowed to be career coaches." And there are some people doing this kind of work who have a decade or more of professional experience; I don't want to make it sound like all of the people doing coaching and advising are fresh grads. I wonder if there are any specific advantages or disadvantages to this 'junior skew.' Is there a meaningful correlation between length of career and ability to help other people with their careers? EA already skews somewhat young, but from the last EA community survey it looks like the average age was around 29. So I wonder why the vast majority of people doing mentorship/coaching/career advising are younger than that. Maybe the older people involved in EA are disproportionately not employ

Wednesday, 13 November 2024

Frontpage Posts

Quick takes

There’s an asymmetry between people/orgs that are more willing to publicly write impressions and things they’ve heard, and people/orgs that don’t do much of that. You could call the continuum “transparent and communicative, vs locked down and secretive” or “recklessly repeating rumors and speculation, vs professional” depending on your views! When I see public comments about the inner workings of an organization by people who don’t work there, I often also hear other people who know more about the org privately say “That’s not true.” But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org’s communications staff, and then follow whatever discussion comes from it. A downside is that if an organization isn’t prioritizing back-and-forth with the community, of course there will be more mystery and more speculations that are inaccurate but go uncorrected. That’s frustrating, but it’s a standard way that many organizations operate, both in EA and in other spaces. There are some good reasons to be slower and more coordinated about communications. For example, I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they’d all waited a day or two to get on the same page and write a response with the correct facts. This process is worth doing for some important discussions, but few organizations will prioritize doing this every time someone is wrong on the internet. So what’s a reader to do? When you see a claim that an org is doing some shady-sounding thing, made by someone who doesn’t work at that org, remember the asymmetry. These situations will look identical to most readers: * The org really is doing a shady thing, and doesn’t want to discuss it * The org
29
saulius
1d
10
What’s a realistic, positive vision of the future worth fighting for? I feel lost lately when it comes to how to do altruism. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, the increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don't know what's on the other side. There are so many things that I don't want to happen that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
Applying my global health knowledge to the animal welfare realm, I'm requesting 1,000,000 dollars to launch this deep net positive (Shr)Impactful charity. I'll admit the funding opportunity is pretty marginal...   Thanks @Toby Tremlett🔹 for bringing this to life. Even though she doesn't look so happy I can assure you this intervention nets a 30x welfare range improvement for this shrimp, so she's now basically a human.
Does anyone have thoughts on whether it’s still worthwhile to attend EAGxVirtual in this case? I have been considering applying for EAGxVirtual, and I wanted to quickly share two reasons why I haven't: * I would only be able to attend on Sunday afternoon CET, and it seems like it might be a waste to apply if I'm only available for that time slot, as this is something I would never do for an in-person conference. * I can't find the schedule anywhere. You probably only have access to it if you are on Swapcard, but this makes it difficult to decide ahead of time whether it is worth attending, especially if I can only attend a small portion of the conference.
Update: Pushing for messenger interoperability (part of the EU Digital Markets Act) might be more tractable and more helpful. Forwarding a private comment from a friend: Interoperability was part of the Digital Markets Act, so EVP Ribera will be the main enforcer, and was asked about her stance in her EU parliament confirmation hearing yesterday. You could watch that / write her team abt the underrated cybersecurity benefits of interoperability esp. given it would upgrade WhatsApp's encryption TLDR: Improving Signal (messenger) seems important, [edit: maybe] neglected and tractable. Thoughts? Can we help? Signal (similar to WhatsApp) is the only truly privacy-friendly popular messenger I know. WhatsApp and Telegram also offer end-to-end encryption (Telegram only in "secret chats"), but they still collect metadata like your contacts, and many people I meet strongly prefer Signal for various reasons: some people work in cybersecurity and have strong privacy preferences, others dislike Telegram (bad rep, popular among conspiracists, spam) and Meta (WhatsApp's owner). For some vulnerable people, such as activists in authoritarian regimes or whistleblowers in powerful organizations, secure messaging seems essential, and Signal seems to be the best tool we have. While Signal is improving, I still often find it annoying to use compared to Telegram. Here are just some examples: 1) it's easily overwhelming: no sorting chats into folders, archiving group chats doesn't really work (they keep popping back to 'unarchived' whenever someone writes a new message), lots of notifications I don't care about like "user xyz changed their security number" and no way to turn them off; 2) no option to make chat history visible to new group members, which is really annoying for some use cases; 3) no poll feature, no live location sharing; 4) no "community"/supergroup feature, people need to find and manually join all the different groups in a community; 5) no threads (in Telegram that's possible in announcement

Topic Page Edits and Discussion

Tuesday, 12 November 2024

Frontpage Posts

Personal Blogposts

Quick takes

Has anybody changed their behaviour after the animal welfare vs global health debate week? A month or so on, I'm curious if anybody is planning to donate differently, considering a career pivot, etc. If anybody doesn't want to share publicly but would share privately, please feel free to message me. Linking @Angelina Li's post asking how people would change their behaviour, and tagging @Toby Tremlett🔹 who might have thought about tracking this.
Meant to post this in funding diversification week. A potential source of new and consistent funds: EA researchers/orgs could run research training programs, drawing some of the rents away from universities and keeping them in the system. These could be non-accredited but focus on publicly demonstrable skills and offer tailored letters of recommendation for a limited number of participants. They could train skills and mentor research particularly relevant to EA orgs and funders. Students (EA and non-EA) would pay for this. University and government training funds could also be unlocked. (More on this later, I think; I have a whole set of plans/notes.)
Someone really needs to make Asterisk meetup groups a thing.
People in EA end up optimizing for EA credentials so they can virtue-signal to grantmakers, but grantmakers would probably like people to scope out non-EA opportunities, because that allows us to introduce unknown people to the concerns we have.

Monday, 11 November 2024

Frontpage Posts

Saturday, 9 November 2024

Quick takes

During the animal welfare vs global health debate week, I was very reluctant to make a post or argument in favor of global health, the cause I work in and that animates me. Here are some reflections on why, that may or may not apply to other people: 1. Moral weights are tiresome to debate. If you (like me) do not have a good grasp of philosophy, it's an uphill struggle to grasp what RP's moral weights project means exactly, and where I would or would not buy into its assumptions. 2. I don't choose my donations/actions based on impartial cause prioritization. I think impartially within GHD (e.g. I don't prioritize interventions in India just because I'm from there, I treat health vs income moral weights much more analytically than species moral weights) but not for cross-cause comparison. I am okay with this. But it doesn't make for a persuasive case to other people. 3. It doesn't feel good to post something that you know will provoke a large volume of (friendly!) disagreement. I think of myself as a pretty disagreeable person, but I am still very averse to posting things that go against what almost everyone around me is saying, at least when I don't feel 100% confident in my thesis. I have found previous arguments about global health vs animal welfare to be especially exhausting and they did not lead to any convergence, so I don't see the upside that justifies the downside. 4. I don't fundamentally disagree with the narrow thesis that marginal money can do more good in animal welfare. I just feel disillusioned with the larger implications that global health is overfunded and not really worth the money we spend on it. I'm deliberately focusing on emotional/psychological inhibitions as opposed to analytical doubts I have about animal welfare. I do have some analytical doubts, but I think of them as secondary to the personal relationship I have with GHD.
1
Phib
6d
0
A quick OpenAI o1-preview BOTEC for additional emissions from a sort of Leopold scenario ~2030, assuming energy is mostly provided by natural gas, since I was kinda curious. Not much time spent on this, and I took the results at face value. I (of course?) buy that emissions don't matter in the short term, in a world where R&D is increasingly automated and scaled. Phib: Say an additional 20% of US electricity was added to our power usage (e.g. for AI) over the next 6 years, and it was mostly natural gas. Also, say that AI inference is used at an increasing rate, say 100 million H100s equivalent, increasing from today's level to 100M over the next 6 years, and it's more or less the same as today's energy breakdown, although with more natural gas over time. What do the emissions from such a build-out of natural gas look like? [Reasoning available here: https://chatgpt.com/share/672ed183-7c3c-800d-ab0e-d08020d072c7] Answer: The emissions from this build-out of natural gas over the next six years would be significant. Specifically: * Total Additional Emissions Over Six Years: Approximately 1.136 billion metric tons of CO₂. * Average Annual Emissions Increase: About 189 million metric tons of CO₂ per year. * Percentage of Current U.S. Emissions: This represents an approximate 2.9% increase over the current annual U.S. greenhouse gas emissions. These emissions result from the increased natural gas usage required to meet the additional electricity demand—both from the general 20% increase in electricity consumption and the substantial growth in AI inference workloads powered by the equivalent of 100 million NVIDIA H100 GPUs. Phib: Imagine there is a similar buildout in China, if not larger; how do the emissions from all of the above (within some confidence interval) place us environmentally within 2 years after 2030 of such a buildout and increase in emissions? Within 10 years? Considering a more or less constant rate of emissions thereafter for each. Conclusion The combi
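As a sanity check on the figures above, here is a minimal back-of-the-envelope sketch in Python. All inputs are my own illustrative assumptions (roughly 4,200 TWh/yr of current US generation, a gas emission factor of about 0.44 kg CO₂/kWh, current US emissions of about 6.3 Gt CO₂e/yr, and a linear six-year ramp); they are not taken from the quick take or from o1's reasoning, but they land close to the quoted totals.

```python
# Rough reproduction of the o1-preview BOTEC, using illustrative assumptions.
US_GENERATION_TWH = 4200      # assumed current US electricity generation, TWh/year
ADDED_SHARE = 0.20            # extra demand equal to 20% of current generation by year 6
GAS_KG_CO2_PER_KWH = 0.44     # assumed emission factor for gas-fired generation
US_ANNUAL_GHG_MT = 6300       # assumed current US GHG emissions, Mt CO2e/year
YEARS = 6

# Extra generation at full build-out, and cumulative generation under a linear ramp
# from zero to that level (area of a triangle: full level * years / 2).
added_twh_full = US_GENERATION_TWH * ADDED_SHARE          # ~840 TWh/year by year 6
cumulative_twh = added_twh_full * YEARS / 2               # ~2,520 TWh over six years

# 1 TWh = 1e9 kWh and 1 Mt = 1e9 kg, so TWh * (kg CO2 / kWh) gives Mt CO2 directly.
total_mt = cumulative_twh * GAS_KG_CO2_PER_KWH            # ~1,100 Mt CO2
annual_mt = total_mt / YEARS                              # ~185 Mt CO2 / year

print(f"Total over {YEARS} years:  {total_mt:,.0f} Mt CO2")
print(f"Average per year:        {annual_mt:,.0f} Mt CO2")
print(f"Share of current US GHG: {annual_mt / US_ANNUAL_GHG_MT:.1%}")
```

That these rough inputs come out near the quoted ~1.14 Gt total and ~189 Mt/yr average suggests the chatbot's BOTEC is at least internally consistent; the residual gap just reflects different emission factors and ramp shapes.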

Friday, 8 November 2024

Frontpage Posts

Quick takes

54
Joey 🔸
6d
11
A thing that seems valuable but is not talked about much is organizations that bring talent into the EA/impact-focused charity world, vs. re-using people already in the movement, vs. turning people off the movement. The difference in these effects seems both significant and pretty consistent within an organization. I think Founders Pledge is a good example of an organization that net brings talent into the effective charities world. I often see their hires, after leaving FP, go on to pretty impactful other roles that it’s not clear they would have taken absent their experience working for FP. I wish more organizations did this vs. re-using/turning people off.
I’d love to dig a bit more into some real data and implications for this (hence, just a quick take for now), but I suspect that (EA) donors may not take the current funding allocation within and across cause areas into account when making donation decisions - and that taking it sufficiently into account may mean that small donors shouldn’t diversify? For example, the recent Animal Welfare vs. Global Health Debate Week posed the statement “It would be better to spend an extra $100m on animal welfare than on global health.” Now, one way to think through this question is to ask “What would the ideal funding split between Animal Welfare and Global Health look like?” and test whether an additional $100m on Animal Welfare would bring us closer to that ideal funding split (in this case, it appears that spending the $100m on Animal Welfare increases the share of AW from 0.41% to 0.55% - meaning that if your ideal funding split would allocate more than 0.55% to AW, you should be in favor of directing the $100m there; see the sketch below). I am not sure if this perspective is the right or even the best one to take, but I think it may often be missing. I think it’s important to think through it, because it takes into account “how much money should be spent on X vs. Y” as opposed to “how much money I should spend on X vs. Y” (or maybe even “how much money should EA spend on X vs. Y”?) - which I think is closer to what we should care about. I think this is interesting, because: * If you primarily, but not strictly and solely, favor a comparably well-funded area (say, GHD or Climate Change), you may want to donate all your money towards a cause area that you don’t even value particularly highly. * Ironically, this type of thinking only applies if you value diversification in your donations in the first place. So, if you are wondering what % of your money should go to X vs. Y, I suspect that looking at the current global funding allocation will likely (for most people, necessarily?) lead to pouring all your money into
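To make the share arithmetic concrete, here is a small sketch in Python. The pool sizes are back-solved from the quoted 0.41% and 0.55% figures purely for illustration; they are not numbers stated in the quick take.

```python
# Sketch of the share arithmetic behind the quoted 0.41% -> 0.55% figures.
# The pool sizes below are back-solved from those two percentages and are
# purely illustrative; they are not stated in the quick take.
SHARE_BEFORE = 0.0041   # AW share of the relevant funding pool before the shift
SHARE_AFTER = 0.0055    # AW share after an extra $100m goes to AW
EXTRA_M = 100.0         # the hypothetical extra spending, in $ millions

# From (aw + extra) / (total + extra) = SHARE_AFTER and aw = SHARE_BEFORE * total:
implied_total_m = EXTRA_M * (1 - SHARE_AFTER) / (SHARE_AFTER - SHARE_BEFORE)  # ~$71,000m
implied_aw_m = SHARE_BEFORE * implied_total_m                                 # ~$290m

new_share = (implied_aw_m + EXTRA_M) / (implied_total_m + EXTRA_M)

print(f"Implied total pool:         ${implied_total_m:,.0f}m")
print(f"Implied current AW funding: ${implied_aw_m:,.0f}m")
print(f"AW share after +$100m:      {new_share:.2%}")
# Decision rule from the quick take: if your ideal split gives AW more than
# the post-shift share (~0.55%), you should favor directing the $100m to AW.
```

The decision rule then reads directly off the output: anyone whose ideal allocation to animal welfare exceeds roughly 0.55% of the pool should, on this framing, prefer the extra $100m to go there.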
Why should you donate to the Forum’s Donation Election Fund? * It could change the way you donate, for the better: We all have limited information to decide how we should donate. Giving via the Donation Election Fund lets you benefit from the collectively held information of the Forum’s users, as well as up-to-date facts from organisations’ marginal funding posts. If enough users take part in the voting, you won’t have to read all of the marginal funding posts to benefit from the information they contain. * It could boost engagement in the election, which leads to: * More funding for charities: Last year, the donation election and surrounding events moved a lot of money. In our squiggle model, we get this distribution: * The headline “$30k raised through the election” does not represent all of the money raised because of the Forum’s events. But giving money to the election fund will likely increase the attention on the Forum, and the amount of effort that organisations and individuals put into posts etc., in a way which will increase the amount raised overall. * Influencing others’ donations for the better: In the EA survey (post coming soon) we saw that the donation election had influenced people’s donation choices. We also saw in the comments on last year’s votes that specific posts had influenced donations, especially by shifting people towards animal welfare organisations and increasing donations to Rethink Priorities. Also, maybe you just want to get these sweet sweet rewards.
"What is malevolence? On the nature, measurement, and distribution of dark traits" was posted two weeks ago (and I recommend it). There was a questionnaire discussed in that post which tries to measure the levels of 'dark traits' in the respondent. I'm curious about the results[1] of EAs[2] on that questionnaire, if anyone wants to volunteer theirs. There are short and long versions (16 and 70 questions). 1. ^ (or responses to the questions themselves) 2. ^ I also posted the same quick take to LessWrong, asking about rationalists
I’d be grateful if some people could fill in this survey: https://forms.gle/RdQfJLs4a5jd7KsQA The survey will ask you to compare different intensities of pain. In case you're interested in why you might want to do it: you’ll be helping me to estimate plausible weights for different categories of pain used by the Welfare Footprint Project. This will help me to summarise their conclusions into easily digestible statements like “a switch from battery cage to cage-free reduces the suffering of hens by at least 60%” and with some cost-effectiveness estimates. Thanks ❤️

Thursday, 7 November 2024

Frontpage Posts


Quick takes

As an earn-to-giver, I found contributing to funding diversification challenging. Jeff Kaufmann posted a different version of the same argument earlier than me. Some have argued that earning to give can contribute to funding diversification. Having a few dozen mid-sized donors, rather than one or two very large donors, would make the financial position of an organization more secure. It allows them to plan for the future and not worry about fundraising all the time. As an earn-to-giver, I can be one of those mid-sized donors. I have tried. However, it is challenging. First of all, I don't have expertise, and I don't have much time to build the expertise. I spend most of my time on my day job, which has nothing to do with any cause I care about. Any research must be done in my free time. This is fine, but it has some cost. This is time I could have spent on career development, talking to others about effective giving, or living more frugally. Motivation is not the issue, at least for me. I've found the research extremely rewarding and intellectually stimulating to do. Yet fun doesn't necessarily translate to effectiveness. I've seen peer earn-to-givers just defer to GiveWell or other charity evaluators without putting much thought into it. This is great, but isn't there more? Others said that they talked to an individual organization, thought "sounds reasonable", and transferred the money. I fell for that trap too! There is a lot at stake. It's about hard-earned money that has the potential to help large numbers of people and animals in dire need. Unfortunately, I don't trust my own non-expert judgment to do this. So I find myself donating to funds, and then the funding is centralized again. If others do the same, charities will have to rely on one grantmaker again, rather than a diverse pool of donors. Ideas What would help to address this issue? Here are a few ideas, some of them already happening. * Funding circles. Note that most funding circles I know r
17
saulius
7d
6
I was thinking on ways to reduce political polarization and thought about AI chatbots like Talkie. Imagine an app where you could engage with a chatbot representing someone with opposing beliefs. For example: * A Trump voter or a liberal voter * A woman who chose to have an abortion or an anti-abortion activist * A transgender person or someone opposed to transgender rights * A person from another race, religion, or a country your country might be at odds with Each chatbot would explain how they arrived at their beliefs, share relatable backstories, and answer questions. This kind of interaction could offer a low-risk, controlled environment for understanding diverse political perspectives, potentially breaking the echo chambers reinforced by social media. AI-based interactions might appeal to people who find real-life debates intimidating or confrontational, helping to demystify the beliefs of others.  The app could perhaps include a points system for engaging with different viewpoints, quizzes to test understanding, and start conversations in engaging, fictional scenarios. Chatbots should ideally be created in collaboration with people who hold these actual views, ensuring authenticity. Or maybe chatbots could even be based on concrete actual people who could hold AMAs. Ultimately, users might even be matched with real people of differing beliefs for video calls or correspondence. If done well, such an app could perhaps even be used in schools, fostering empathy and reducing division from an early age.  Personally, I sometimes ask ChatGPT to write a story of how someone came to have views I find difficult to relate to (e.g., how someone might become a terrorist), and I find that very helpful. I was told that creating chatbots is very easy. It’s definitely easy to add them to Talkie, there are so many of them there. Still, to make this impactful and good, this needs a lot more than that. I don’t intend to build this app. I just thought the idea is worth sh
Flaming hot take: I wonder if some EAs suffer from Scope Oversensitivity - essentially the inverse of the identifiable victim effect. Take the animal welfare vs global health debate: are we sometimes biased by the sheer magnitude of animal suffering numbers, rather than other relevant factors? Just as the identifiable victim effect leads people to overweight individual stories, maybe we're overweighting astronomical numbers. EAs pride themselves on scope sensitivity to combat emotional biases, but taken to an extreme, could this create its own bias? Are we sometimes too seduced by bigger numbers = bigger problem? The meta-principle might be that any framework, even one designed to correct cognitive biases, needs wisdom and balance to avoid becoming its own kind of distortion.
I think eventually, working on changing the EA introductory program is important. I think it is an extremely good thing to do well, and I think it could be improved. I'm running a 6 week version right now, and I'll see if I feel the same way at the end.

Wednesday, 6 November 2024

Frontpage Posts

Quick takes

The value of re-directing non-EA funding to EA orgs might still be under-appreciated. While we (rightly) obsess over where EA funding should be going, shifting money from one EA cause to another "better" one might often only make an incremental difference, while moving money from a non-EA pool to fund cost-effective interventions might make an order-of-magnitude difference. There's nothing new to see here. High-impact foundations are being cultivated to shift donor funding to effective causes, the “Center for Effective Aid Policy” was set up (then shut down) to shift government money to more effective causes, and many great EAs work in public service jobs partly to redirect money. The Lead Exposure Action Fund spearheaded by OpenPhil is hopefully re-directing millions to a fantastic cause as we speak. I would love to see an analysis (I might have missed it) which estimates the “cost-effectiveness” of redirecting a dollar into a 10x or 100x more cost-effective intervention. How much money/time would it be worth spending to redirect money this way? Also, I'd like to get my head around how much the working "cost-effectiveness" of an org might improve if its budget shifted from 10% non-EA funding to 90% non-EA funding. There are obviously costs to roping in non-EA funding. From my own experience it often takes huge time and energy. One thing I’ve appreciated about my 2 attempts applying for EA-adjacent funding is just how straightforward it has been – probably an order of magnitude less work than other applications. Here are a few practical ideas for how we could further redirect funds: 1. EA orgs could put more effort into helping each other access non-EA money. This is already happening through the AIM cluster, but I feel the scope could be widened to other orgs, and co-ordination could be improved a lot without too much effort. I’m sure pools of money are getting missed all the time. For example, I sure hope we're doing whatever we can through our networks to hel
Current takeaways from the 2024 US election <> forecasting community. First section in the Forecasting newsletter: US elections; posting here because it has some overlap with EA. 1. Polymarket beat legacy institutions at processing information, in real time and in general. It was just much faster at calling states, and more confident earlier on the correct outcome. 2. The OG prediction markets community, the community which has been betting on politics and increasing their bankroll since PredictIt, was on the wrong side of 50%—1, 2, 3, 4, 5. It was the democratic, open-to-all nature of it, the Frenchman who was convinced that mainstream polls were pretty tortured and bet ~$45M, that moved Polymarket to the right side of 50/50. 3. Polls seem like a garbage-in, garbage-out kind of situation these days. How do you get a representative sample? The answer is maybe that you don't. 4. Polymarket will live. They were useful to the Trump campaign, which has a much warmer perspective on crypto. The federal government isn't going to prosecute them, nor bettors. Regulatory agencies, like the CFTC and the SEC, which have taken such a prominent role in recent editions of this newsletter, don't really matter now, as they will be aligned with financial innovation rather than opposed to it. 5. NYT/Siena really fucked up with their last poll and the coverage of it. So did Ann Selzer. Some prediction market bettors might have thought that you could do the bounded-distrust thing, but in hindsight it turns out that you can't. Looking back, to the extent you trust these institutions, they can ratchet up their deceptiveness (from misleading headlines, incomplete stories, incomplete quotes out of context, not reporting on important stories, etc.) for clicks and hopium, to shape the information landscape for a managerial class that... will no longer be in power in America. 6. Elon Musk and Peter Thiel look like geniuses. In contrast, Dustin Moskovitz couldn't get SB 1047 passed despite being the s
Celebrating your users - this just popped into my inbox, celebrating my double-digit meetings using the Calendly tool. It highlights a great practice of understanding your users' journey and celebrating the key moments that matter. Onboarding and offboarding are key moments, but so are points that can transition someone into a power user. From forum stalker to contributor. This lets me reflect on how good an experience I've had, given that I keep using this tool (make sure it is good), and as a next step it suggests tips on how I can use the tool more pervasively to get more embedded in the ecosystem. So think about how you can celebrate your users when community building.
I've been thinking that there is a "fallacious, yet reasonable as a default/fallback" way to choose moral circles based on the Anthropic principle, which is closely related to my article "The Putin Fallacy―Let’s Try It Out". It's based on the idea that consciousness is "real" (part of the territory, not the map), in the same sense that quarks are real but cars are not. In this view, we say: P-zombies may be possible, but if consciousness is real (part of the territory), then by the Anthropic principle we are not P-Zombies, since P-zombies by definition do not have real experiences. (To look at it another way, P-Zombies are intelligences that do not concentrate qualia or valence, so in a solar system with P-zombies, something that experiences qualia is as likely to be found alongside one proton as any other, and there are about 10^20 times more protons in the sun as there are in the minds of everyone on Zombie Earth combined.) I also think that real qualia/valence is the fundamental object of moral value (also reasonable IMO, for why should an object with no qualia and no valence have intrinsic worth?) By the Anthropic principle, it is reasonable to assume that whatever we happen to be is somewhat typical among beings that have qualia/valence, and thus, among beings that have moral worth. By this reasoning, it is unlikely that the sum total |W| of all qualia/valence in the world is dramatically larger than the sum total |H| of all qualia/valence among humans, because if |W| >> |H|, you and I are unlikely to find ourselves in set H. I caution people that while reasonable, this view is necessarily uncertain and thus fallacious and morally hazardous if it is treated as a certainty. Yet if we are to allocate our resources in the absence of any scientific clarity about which animals have qualia/valence, I think we should take this idea into consideration. P.S. given the election results, I hope more people are doing now the soul-searching we should've done in 2016. I pr

Tuesday, 5 November 2024

Frontpage Posts

Quick takes

I think that EA outreach can be net positive in a lot of circumstances, but there is one version of it that always makes me cringe. That version is the targeting of really young people (for this quicktake, I will say anyone under 20). This would basically include any high school targeting and most early-stage college targeting. I think I do not like it for two reasons: 1) it feels a bit like targeting the young/naive in a way I wish we would not have to do, given the quality of our ideas, and 2) these folks are typically far from making a real impact, and there is lots of time for them to lose interest or get lost along the way. Interestingly, this stands in contrast to my personal experience—I found EA when I was in my early 20s and would have benefited significantly from hearing about it in my teenage years.
I'm pretty confident that Marketing is in the top 1-3 skill bases for aspiring Community / Movement Builders. When I say Marketing, I mean it in the broad sense it used to have. In recent years "Marketing" has come to mean "Advertising", but I use the classic Four P's of Marketing to describe it. The best places to get such a skill base are FMCG / mass-marketing organisations such as those below; second best would be consulting firms (e.g. McKinsey & Company): * Procter & Gamble (P&G) * Unilever * Coca-Cola * Amazon 1. Product - What you're selling (goods or services) - Features and benefits - Quality, design, packaging - Brand name and reputation - Customer service and support 2. Price - Retail/wholesale pricing - Discounts and promotions - Payment terms - Pricing strategy (premium, economy, etc.) - Price comparison with competitors 3. Place (Distribution) - Sales channels - Physical/online locations - Market coverage - Inventory management - Transportation and logistics - Accessibility to customers 4. Promotion - Advertising - Public relations - Sales promotions - Direct marketing - Digital marketing - Personal selling
In the spirit of Funding Strategy Week, I'm resharing this post from @Austin last week:
