Quick takes

What's the lower bound on vaccine development time? Toby Ord writes in a recent post:

The expert consensus was that it would take at least a couple of years for Covid, but instead we had several completely different vaccines ready within just a single year

My intuition is that there's a lot more we can shave off from this. I think this because vaccine development seems to be bottlenecked mostly by the human-trial phase, which can take many months, whereas developing the vaccine itself can be done in far less time (perhaps a month, but som...

David Rubenstein recently interviewed Philippe Laffont, the founder of Coatue (probably worth $5-10b). When asked about his philanthropic activities, Laffont basically said he’s been too busy to think about it, but wanted to do something someday. I admit I was shocked. Laffont is a savant technology investor and entrepreneur (including in AI companies) and it sounded like he literally hadn’t put much thought into what to do with his fortune.

Are there concerted efforts in the EA community to get these people on board? Like, is there a Google Doc with a six...

something I persistently struggle with is that it's near-impossible to know everything that has been said about a topic, and that makes it really hard to know when an additional contribution is adding something or just repeating what's already been said, or worse, repeating things that have already been refuted

to an extent this seems inevitable and I just have to do my best and sometimes live with having contributed more noise than signal in a particular case, but I feel like I have an internal tuning knob for "say more" vs. "listen more" and I find it really hard to know which direction is overall best

As weird as it sounds, I think the downvote button should make you a bit less concerned with contribution quality. If it's obviously bad, people will downvote and read it less. If it's wrong without being obviously bad, then others likely share the same misconception, and hopefully someone steps in to correct it.

In practice, the failure mode for the forum seems to be devoting too much attention to topics that don't deserve it. If your topic deserves more attention, I wouldn't worry a ton about accidentally repeating known info? For one thing, it could...

John Salter
Using Kialo for debates rather than the Forum would go a long way. It's hard to get off the ground because its attractiveness is roughly proportional to the number of EAs using it, and at present, the number of EAs using it is zero. https://www.kialo-edu.com/

Ideas of posts I could write in comments. Agreevote with things I should write. Don't upvote them unless you think I should have karma just for having the idea; instead upvote the post when I write it :P

Feel encouraged also to comment with prior art in cases where someone's already written about something. Feel free also to write (your version of) one of these posts, but give me a heads-up to avoid duplication :)

(some comments are upvoted because I wrote this thread before we had agreevotes on every comment; also I'm removing my own upvotes on these)


This one might be for LW or the AF instead / as well, but I'd like to write a post about:

  • should we try to avoid some / all alignment research casually making it into the training sets for frontier AI models?
  • if so, what are the means that we can use to do this? how do they fare on the ratio between reduction in AI access vs. reduction in human access?
Ben Millwood
This became When "human-level" is the wrong threshold for AI
Ben Millwood
my other quick take, AI Safety Needs To Get Serious About Chinese Political Culture, is basically a post idea, but it was substantial enough that I put it at the top level rather than have it languish in the comments here. Nevertheless, here it is so I can keep all the things in one place.

Culture against EA?

I was born and raised in an Asian metropolitan city that is historically capitalistic. I have decided to write about and contest the economy-driven model in my city, where people are the means to an end: a robust economy. My question is what such a goal is for if people put pleasure over any consideration for the people in other places who are suffering. People seem to fail to acknowledge that the world is one village, so would a government choose to assist a country in crisis even if this requires knowledge transfer or financial assistance?...

Muloongo Stella Mwanahamuntu
This is indeed a thought-provoking question. While I'm not an economics expert, my perspective leans towards the idea that capitalism, at its core, is driven by profits. However, what becomes crucial is how organizations, essentially run by people, choose to utilize those profits. Take Patagonia, for instance, which allocates 98% of its profits to addressing climate change and protecting underdeveloped land. This highlights that the impact on society is more about the decisions made by the people within the system than about inherent flaws in the system itself. Unfortunately, the system has been flawed from the beginning, and money, rather than being a tool for positive change, is often wielded as a weapon.

On cultural traditions that may clash with Effective Altruism (EA) values: I'm relatively new to EA, but I find it aligns well with the practice of tithing in the SDA church. I'm intrigued by your mention of cultural traditions that go against EA values. Could you provide more details on the specific traditions you have in mind?

I've encountered challenges in aligning with other EA causes, particularly those related to animals, given my everyday experiences, like literally seeing chickens crossing the road all the time. It's a reminder that life's complexities vary for everyone. Nevertheless, what I appreciate about EA is its accountability, allowing me to ensure that my financial contributions are making a positive impact. Importantly, EA doesn't restrict how one chooses to allocate money and resources. In the end, 'Each one must give as he has decided in his heart, not reluctantly or under compulsion, for God loves a cheerful giver' (2 Corinthians 9:7).

Thanks for your clarifying question. In my city, people tend to value meat consumption, often as a tradition related to god worship, but no one ever questions why. In my city, Hong Kong, waste goes to landfills, and we are way behind other developed economies in measures such as recycling, mainly because of our dense population. My perspective on politics goes above the city to a global level. Comparatively we are a wealthy city, but the gap between the rich and the poor is big. In this Chinese culture, wealth accumulation is an honour, and we keep wealth to o...

titotal

I want to make my prediction about the short-term future of AI, partially sparked by this entertaining video about the nonsensical AI claims made by the Zoom CEO. I am not an expert on any of the following, of course; I'm mostly writing for fun and for future vindication.

The AI space seems to be drowning in unjustified hype, with very few LLM projects having a path to consistent profitability, and applications that are severely limited by the problem of hallucinations and the general fact that LLMs are poor at general reasoning (compared to humans). It see...

David Mathers
I want to say just "trust the market", but unfortunately, if OpenAI has a high but not astronomical valuation, then even if the market is right, that could mean "almost certainly will be quite useful and profitable, chance of near-term AGI almost zero", or it could mean "probably won't be very useful or profitable at all, but a 1 in 1000 chance of near-term AGI supports a high valuation nonetheless", or many things in between those two poles. So I guess we are sort of stuck with our own judgment?

For publicly traded US companies there are ways to figure out the variance of their future value, not just the mean, mostly by looking at option prices. Unfortunately, OpenAI isn't publicly traded and (afaik) has no liquid options market, but maybe other players (Nvidia? Microsoft?) can be more helpful there.
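As a rough illustration of the option-price idea (all numbers below are made up, and this assumes the textbook lognormal setup, nothing specific to any real company): an at-the-money implied volatility translates into a market-implied spread of future prices, not just a point estimate.

```python
import math

# Hypothetical inputs -- none of these are real market data.
spot = 135.0          # current share price
implied_vol = 0.55    # annualized at-the-money implied volatility
horizon_years = 2.0   # time to option expiry

# Under the usual lognormal assumption, the market-implied standard
# deviation of log-returns over the horizon is sigma * sqrt(T).
sd = implied_vol * math.sqrt(horizon_years)

# One-standard-deviation band for the future price implied by options:
low, high = spot * math.exp(-sd), spot * math.exp(sd)
print(f"Implied 1-sigma price range at {horizon_years:.0f}y: {low:.0f} to {high:.0f}")
```

The "1 in 1000 chance of near-term AGI" story would show up not in this single number but as unusually expensive far-out-of-the-money calls, which is exactly why the headline valuation alone can't distinguish the two scenarios.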

Justin Olive
Why I disagree that this video is insightful/entertaining: the YouTuber quite clearly has very little knowledge of the subject they are discussing. It's actually quite reasonable for the Zoom CEO to simply say that fixing hallucinations will "occur down the stack", given that they are not the ones developing AI models, and would instead be building the infrastructure and environments that the AI systems operate within.

From what I watched of the video, she also completely misses the real reason that the CEO's claims are ridiculous: if you have an AI system with a level of capability that allows it to replicate a person's actions in the workplace, then why would we go to the extra effort of having Zoom calls between these AI clones? I.e. it would be much more efficient to build information systems that align with the strengths and comparative advantages of the AI systems. Presumably this would not involve "realistic clones of real human workers" talking to each other, but rather a network of AI systems that communicate using protocols and data formats that are designed to be as robust and efficient as possible.

FWIW if I were the CEO of Zoom, I'd be pushing hard on the "human-in-the-loop" idea, e.g. building in features that allow you to send out AI agents to fetch information and complete tasks in real time as you're having meetings with your colleagues. That would actually be a useful product that helps keep Zoom interesting and relevant.

With regards to AI progress stalling, I think it depends on what you mean by "stalling", but I think this is basically impossible if you mean "literally will not meaningfully improve in a way that is economically useful". When I first learned how modern AI systems worked, I was astonished at how absurdly simple and inefficient they are. In the last ~2 years there has been a move towards things like MoE architectures and RNN hybrids, but this is really only scratching the surface of what is possible with more complex architectur...
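Since the comment above leans on MoE architectures as its example, here is a toy sketch of the routing idea (all dimensions and weights are random placeholders, not any real model): each token runs through only its top-k "expert" sub-networks, so compute per token stays fixed while total parameter count grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: route each token to its top-2 experts.
n_tokens, d_model, n_experts, top_k = 4, 8, 4, 2

x = rng.normal(size=(n_tokens, d_model))                   # token activations
w_gate = rng.normal(size=(d_model, n_experts))             # router weights
experts = rng.normal(size=(n_experts, d_model, d_model))   # experts (linear here)

logits = x @ w_gate
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # softmax gate

out = np.zeros_like(x)
for t in range(n_tokens):
    top = np.argsort(probs[t])[-top_k:]            # indices of top-k experts
    gate = probs[t, top] / probs[t, top].sum()     # renormalized gate weights
    for g, e in zip(gate, top):
        out[t] += g * (x[t] @ experts[e])          # weighted expert outputs

# Each token only runs through top_k of n_experts, so per-token compute
# scales with top_k while total parameters scale with n_experts.
```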

This is a sloppy rough draft that I have had sitting in a Google Doc for months, and I figured that if I don't share it now, it will sit there forever. So please read this as a rough grouping of some brainstormy ideas, rather than as some sort of highly confident and well-polished thesis.

- - - - - - 

What feedback do rejected applicants want?

From speaking with rejected job applicants within the EA ecosystem during the past year, I roughly conclude that they want feedback in two different ways:

  • The first way is just emotional care, which is really j...
Ben Millwood
I'm on board with a lot of your emotional care advice, but... I feel like your mileage may vary on this one. I don't like being in suspense, and moreover it's helpful from a planning perspective to know what's up sooner rather than later. I'd say instead that if you want to signal that you spent time with someone's application, do it by making sure your rejection is conspicuously specific (i.e. mentions features of the applicant or their submissions, even if only superficially).

I also think you missed an entire third category of reason to want feedback, which is that if I stand no hope of getting job X, no matter how much I improve, I really want to know that, so I can make choices about how much time to spend trying to get that job or jobs like it. It feels like a kindness to tell me I can do anything I put my mind to, but if it's not true then you're just setting me up for more pain in the future. (Similarly, saying "everyone should apply, even if you're not sure you're qualified" sounds like a kindness but does have a downside in terms of increasing the number of unsuccessful applicants; sometimes it's worth it anyway, but the downside should be acknowledged.)

There is a sort of trade-off between notifying people immediately and notifying them after a couple of days. My best guess is that for someone's planning it generally won't make a difference whether they are rejected from a job application in less than 24 hours or within a few days. But there is probably a lot of variation in preferences from one person to another; maybe I am impacted by this more than average.

I've had a few job applications that I submitted and then got rejected from an hour or two later, and emotionally that felt so much worse. But at the end of the day I think you are right that "your mileage may vary."

Joseph Lemien
Good point! I hadn't thought of that, but that would be very helpful feedback to have.

About a month ago, @Akash stated on LessWrong that he'd be interested to see more analysis of possible international institutions to build and govern AGI (which I will refer to in this quick take as "i-AGI").

I suspect many EAs would prefer an international/i-AGI scenario. However, it's not clear that countries with current AI leadership would be willing to give that leadership away.

International AI competition is often framed as US vs China or similar, but it occurred to me that an "AI-leaders vs AI-laggards" frame could be even more important. AI-laggar...

FYI I don't really understand Forum tags, but I think none of your @name mentions actually tagged anyone (I would expect them to turn into a blue profile link if they had worked).

I've been mulling over the idea of proportional reciprocity for a while. I've had some musings sitting in a Google Doc for several months, and I think that either I share a rough/sloppy version of this, or it will never get shared. So here is my idea. Note that this is in relation to job applications within EA, and I felt nudged to share this after seeing Thank You For Your Time: Understanding the Experiences of Job Seekers in Effective Altruism.

- - - - 

Proportional reciprocity 

I made this concept up.[1] The general idea is that relationships...

the level of care and effort that I express toward you should be roughly proportional to the level of effort and care that you express toward me

maybe a version of this that is more durable to the considerations in your footnote is: the level of care and effort that I ask from you should be roughly proportional to the level that I express towards you

if I ask for not much care and effort and get a lot, that perhaps should be a prompt to figure out if I should have done more to protect my counterpart from overinvesting, or if I accidentally overpromised or miscommunicated; but ultimately there's only so much responsibility you can take for other people's decisions

I'm pretty confident that a majority of the population will soon have very negative attitudes towards big AI labs. I'm extremely unsure about what impact this will have on the AI Safety and EA communities (because we work with those labs in all sorts of ways). I think this could increase the likelihood of "Ethics" advocates becoming much more popular, but I don't know if this necessarily increases catastrophic or existential risks.

Anecdotally, I feel like people generally have a very low opinion and understanding of the finance sector, and the finance sector mostly doesn't seem to mind. (There are times when there's a noisy collision, e.g. Melvin Capital vs. GME, but they're probably overall pretty rare.)

It's possible / likely that AI and tech companies targeting mass audiences with their product are more dependent on positive public perception, but I suspect that the effect of being broadly hated is less strong than you'd think.

titotal
The public already had a negative attitude towards the tech sector before the AI buzz: in 2021, 45% of Americans had a somewhat or very negative view of tech companies. I doubt the prevalence of AI is making people more positive towards the sector, given all the negative publicity over plagiarism, job loss, and so on. So I would guess the public already dislikes AI companies (even if they use their products), and this will probably increase.
anormative
Can you elaborate on what makes you so certain about this? Do you think that the reputation will be more like that of Facebook or that of Big Tobacco? Or will it be totally different?

Microsoft have backed out of their OpenAI board observer seat, and Apple will refuse a rumoured seat, both in response to antitrust threats from US regulators, per Reuters.

I don't know how to parse this. I think it's likely that the US regulators don't care much about safety in this decision, nor do I think it meaningfully changes Microsoft's power over the firm. Apple's rumoured seat was interesting, but unlikely to have any bearing either.

Lina Khan (head of the FTC) said she had P(doom) = 15%, though I haven't seen much evidence it has guided her actions; and she suggested this made her an optimist, which makes me suspect she hadn't really thought about it.

Not sure if this type of concern has reached the meta yet, but if someone approached me asking for career advice, tossing up whether to apply for a job at a big AI lab, I would let them know that it could negatively affect their career prospects down the track, because so many people now perceive such a move as either morally wrong or just plain wrong-headed. And those perceptions might only increase over time. I am not making a claim here beyond that this should be a career-advice consideration.

It's frankly quite concerning that technical specifications are usually only worked on by Working Groups after policymakers have set high-level qualitative goals; this seems to open a can of worms for different interpretations and safety washing.

Updated away from this generally; there is a balance. A good example of why I updated away is 28:27 in the video at:

EA Global: Bay Area 2025 will take place 21-23 February 2025 at the Oakland Marriott (the same venue as the past two years). Information on how to apply and other details to follow, just an FYI for now since we have the date.


We aren't planning on having a GCR (or other cause area) focus for this event, but we'll confirm that in due course.

Jona
Thanks, Ollie! I thought this was helpful.
Karthik Tadepalli
Question seconded!

A core part of the longtermist project is making it very clear to people today that 21st-century humanity is far from the peak of complex civilization. Imagine an inhabitant of a 16th-century city looking at their civilization and thinking "This is it; this is civilization close to its zenith. Sure, we may build a few more castles over there, expand our army and conquer those nearby kingdoms, and develop a new way to breed ultra-fast horses, but I think the future will be like this, just bigger". As citizens of the 21st century we're in the...

Is talk about vegan diets being more healthy mostly just confirmation bias and tribal thinking? A vegan diet can be very healthy or very unhealthy, and a non-vegan diet can also be very healthy or very unhealthy. The simplistic comparisons that I tend to see contrast vegans who put a lot of care and attention toward their food choices and the health consequences, versus people who aren't really paying attention to what they eat (something like the standard American diet or some similar diet without much intentionality). I suppose in a statistics...
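A toy simulation of the confounding worry (every number here is invented): if "intentionality about food" drives both the choice to go vegan and health outcomes, a naive group comparison shows a "diet effect" even when the true effect is zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# "Intentionality" about food drives both diet choice and health,
# independent of the diet itself.
intentionality = rng.normal(size=n)
is_vegan = rng.random(n) < 1 / (1 + np.exp(-2 * intentionality))

true_diet_effect = 0.0  # assume diet itself does nothing
health = (true_diet_effect * is_vegan
          + 1.0 * intentionality      # confounder's effect on health
          + rng.normal(size=n))       # noise

naive_gap = health[is_vegan].mean() - health[~is_vegan].mean()
print(f"Naive vegan-vs-not health gap: {naive_gap:.2f} (true effect: 0)")
```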


I think your confounders are on the money.

You might be interested in Elizabeth's Change my mind: Veganism entails trade-offs, and health is one of the axes. I especially appreciated her long list of cruxes, the pointer to Faunalytics' study of nutritional issues in ex-vegans & ex-vegetarians, and her analysis of that study attempting to adjust for its limitations which basically strengthens its findings (to my reading). 

I'd also guess, without much evidence, that there's a halo-effect-like thing going on where if someone really cares about averting...

Joseph Lemien
You chose great examples! 😂
NickLaing
Strong upvote for the attempted mirth; I think I'm one of the few that appreciates it around here :D

The recently released 2024 Republican platform said they'll repeal the recent White House Executive Order on AI, which many in this community thought was a necessary first step to making future AI progress more safe/secure. This seems bad.

Artificial Intelligence (AI) We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.

From https://s3.documentcloud.org/documents/24795...

huw
Sorry, could you explain why ‘many people in the community think this is a necessary first step’, or provide a link? I must’ve missed that one, and it sounds surprising to me that outright repealing it (or replacing it with nothing, in the case of the GOP’s platform) would be desirable.

I edited my comment for clarity.

This is a nudge to leave questions for Darren Margolias, Executive Director of @Beast Philanthropy, in the AMA post here! I'll be recording a video AMA with Darren based on the questions left in the post, and we'll try to get through as many of them as possible.

For extra context, Beast Philanthropy is a charity founded by YouTuber MrBeast. They recently collaborated with GiveDirectly on this video; you can read more about it on GiveDirectly's blog and the Beast Philanthropy LinkedIn.

"a Utilitarian may reasonably desire, on Utilitarian principles, that some of his conclusions should be rejected by mankind generally; or even that the vulgar should keep aloof from his system as a whole, in so far as the inevitable indefiniteness and complexity of its calculations render it likely to lead to bad results in their hands." (Sidgwick 1874)
