Are you building these things on ATProtocol (Bluesky), or where are you building right now? I feel like there's quite a nice movement happening there, with some specific tools for this sort of thing. (I'm curious because I'm also trying to build things at a deeper programming level; I'm currently focusing on open-source bridging and recommendation algorithms like pol.is but for science, and it would be interesting to know where other people are building.)
If you don't know about the ATProtocol gang, some things I enjoy here are:
- https:...
Firstly, that's only if you think it isn't inevitable and that it's possible to stop or slow it down; if nuclear was going to be developed anyway, that changes the calculus. Even if that is the case, there's also this weird thing in human psychology where, if you can point out a positive vision of something, it's often easier for people to kind of get it?
"Don't do this thing" is often a lot less effective than saying something like "could you do this specific thing instead?" when it comes to convincing people of things. This is also true for specific therapeu...
I would very much be curious about mechanisms for the first point you mentioned!
For 11, I would give a little bit of pushback related to your "building as a sports team" metaphor, as I find the two a bit incongruent with each other?
Or rather, the degree of growth mindset implied in point 11 seems quite bad based on best practices within things like sport psychology and general psychology? The existing frame is like you're either elite or you're not gonna make it. I would want the frame to be like "it's really hard to become a great football player...
Firstly, great post thanks for writing it!
Secondly, with regards to the quantification section:
...Putting numbers on the qualities people have feels pretty gross, which is probably why using quantification in hiring is rather polarising. On the one hand, there’s some line of thinking that the different ways in which people are well and ill suited to particular roles isn’t quantifiable and if you try to quantify it you’ll just be introducing bias. On the other hand, people in favour of quantification tend to strongly recommend that you stick exactly to the ran
Very very well put.
I became quite emotional when reading this because I resonated with it quite strongly. I've been in some longer retreats practicing the teachings in Seeing That Frees and I've noticed the connections between EA and Rob Burbea's way of seeing things but I haven't been able to express it well.
I think that there's a very beautiful deepening of a seeing of non-self when acting impartially. One of the things that I really like about applying this to EA is that you often don't see the outcomes of your actions. This is often seen as ...
The question that is on every single EA's mind is, of course: what about Huel or meal replacements? I've been doing Huel + supplements for a while now instead of meat and I want to know if you believe this to be suboptimal, and if so, to what extent? Nutrition is annoyingly complex, so all I know for sure is roughly protein = good, calories in = calories out, and minimize sugar (as well as some other things), and Huel seems to tick all the boxes? I'm probably missing something but I don't know what, so if you have an answer, please enlighten me!
This one hit close to home (pun not intended).
I've been thinking about this choice for a while now. There are the obvious network and work benefits of living in an EA hub, yet in my experience there's also the benefit of a slower pace: more time to think, reflect, and develop my own writing and opinions, which is easier to get when not in a hub.
Yet in AI safety (where I work) all of the stuff is happening in the Bay and London and mostly the Bay. For the last 3 years people have constantly been telling me "Come to the Bay, bro. It w...
Uncertain risk. AI infrastructure seems really expensive. I need to actually do the math here (and I haven’t! hence this is uncertain) but do we really expect growth on trend given the cost of this buildout in both chips and energy? Can someone really careful please look at this?
https://www.lesswrong.com/users/vladimir_nesov <- Has a bunch of material on the energy and compute calculations required for AI companies, especially the 2028 post; some very good analysis of these things imo.
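To make the kind of math I mean concrete, here's a minimal back-of-envelope sketch; every number in it is a placeholder assumption for illustration, not an actual estimate:

```python
# Back-of-envelope: how long until AI revenue covers the annual buildout cost?
# All numbers are placeholder assumptions for illustration, not real estimates.

capex_per_year = 300e9        # assumed annual spend on chips and datacenters (USD)
energy_cost_per_year = 30e9   # assumed annual energy cost (USD)
annual_buildout_cost = capex_per_year + energy_cost_per_year

revenue = 20e9                # assumed current annual AI revenue (USD)
growth_multiple = 1.6         # assumed year-over-year revenue growth

years = 0
while revenue < annual_buildout_cost:
    revenue *= growth_multiple
    years += 1

print(f"Revenue needs ~{years} years of {growth_multiple}x growth "
      f"to cover ~${annual_buildout_cost / 1e9:.0f}B/year of buildout.")
```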
I think it is a bit like the studies on what makes people able to handle adversity well: it's partly about preparation and ensuring that the priors people bring into the system are equipped to handle the new attack vectors that this transition opens up against our collective epistemics.
So I think we need to create some shared sources of trust that everyone can agree on and establish those before the TAI transition if we want things to go well.
Besides the point that "shoddy toy models" might be emotionally charged, I just want to point out that accelerating progress majorly increases variance and unknown unknowns? The higher-energy a system is and the more variables you have, the more chaotic it becomes. So maybe an answer is that an agile short-range model is best? Outside-view it in moderation and plan for the next few years being quite difficult to predict?
You don't really need another model to disprove an existing one, you might as well point out that we don't know and that is okay too.
Yeah, I think you're right, and I also believe that it can be a both/and?
You can have a general non-profit board and at the same time have a form of representative democracy going on, which seems like the best we can currently do for this?
I think it is fundamentally about a more timeless trade-off between hierarchical organisations that generally are able to act with more "commander's intent" versus democratic models that are more of a flat voting model. The democratic models suffer when there is a lot of single person linear thinking involved but do well a...
Yeah for sure, I think the devil might be in the details here around how things are run and what the purpose of the national organisation is. Since Sweden and Norway each have roughly an eighth of Germany's population, I think the effect of a "nation-wide group" might be different?
In my experience, I've found that EA Sweden focuses on and provides a lot of the things that you listed so I would be very curious to hear what the difference between a local and national organisation would be? Is there a difference in the dynamics of them being motivated to sustain themselves because of the scale?
You probably have a lot more experience than me in this so it would be very interesting to hear!
I like that decomposition.
There's something about a prior on having democratic decision making as part of this because it allows for better community engagement usually? Representation often leads to feelings of inclusion and whilst I've only dabbled in the sociology here it seems like the option of saying no is quite important for members to feel heard?
My guess would be that the main pros of having democratic deliberation don't come from when the going is normal but rather as a resilience mechanism? Democracies tend to react late to major c...
First and foremost, I think the thoughts expressed here make sense and this comment is more just expressing a different perspective, not necessarily disagreeing.
I wanted to bring up an existing framework for thinking about this from Raghuram Rajan's "The Third Pillar," which provides economic arguments for why local communities matter even when they're less "efficient" than centralized alternatives.
The core economic benefits of local community structures include:
This is very nice!
I've been thinking that there's a nice generalisable analogy between Bayesian updating and forecasting. (It's fairly obvious when you think about it, but it feels like people aren't exploiting it?)
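As a minimal sketch of what I mean by the analogy (the prior and the "evidence" below are made up purely for illustration), a forecast can just be treated as a posterior that moves as outcomes resolve:

```python
# Forecasting as Bayesian updating: the forecast is a Beta posterior mean
# that shifts as hypothetical evidence (resolved outcomes / signals) arrives.

def update_beta(alpha: float, beta: float, outcome: bool) -> tuple[float, float]:
    """Conjugate Beta-Bernoulli update from one observed outcome."""
    return (alpha + 1, beta) if outcome else (alpha, beta + 1)

alpha, beta = 1.0, 1.0                # weak prior: initial forecast = 0.5
evidence = [True, True, False, True]  # made-up sequence of resolved events

for outcome in evidence:
    alpha, beta = update_beta(alpha, beta, outcome)
    print(f"forecast after update: {alpha / (alpha + beta):.2f}")
```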
I'm doing a project on simulating a version of this idea but in a way that utilizes democratic decision making called Predictive Liquid Democracy (PLD) and I would love to hear if you have any thoughts on the general setup. It is model parameterization but within a specific democratic framing.
PLD is basically saying the following:
What if we...
Some people might find that this post is written from a place of agitation, which is fully okay. I think that even if you do, there are two things that I would want to point out as really good points:
I felt that this post might be relevant for longtermism and person-affecting views, so I had Claude write up a quick report on that:
In short: Rejecting the SWWM 💸11% pledge's EV calculation logically commits you to person-affecting views, effectively transforming you from a longtermist into a neartermist.
Example: Bob rejects investing in a $500 ergonomic chair despite the calculation showing 10^50 * 1.2*10^-49 = 12 lives saved due to "uncertainty in the probabilities." Yet Bob still identifies as a longtermist who believes we should value future generation...
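Spelling out the arithmetic in Bob's example (these are the illustrative numbers from above, under one natural reading of the two factors, not real estimates):

$$\mathbb{E}[\text{lives saved}] = \underbrace{10^{50}}_{\text{lives at stake}} \times \underbrace{1.2 \times 10^{-49}}_{\text{probability the chair matters}} = 12$$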
First and foremost, I'm low confidence here.
I will focus on x-risk from AI and I will challenge the premise of this being the right way to ask the question.
What is the difference between x-risk and s-risk/increasing the value of futures? When we mention x-risk with regards to AI we think of humans going extinct, but I believe that to be a shorthand for wise, compassionate decision making. (at least in the EA sphere)
Personally, I think that x-risk and good decision making in terms of moral value might be coupled to each other. We can think of our ...
First and foremost, I agree with the point. I think looking at this especially from a lens of transformative AI might be interesting. (Coincidentally, this is something I'm currently doing using agent-based models (ABMs) with LLMs.)
You probably know this one but here's a link to a cool project: https://effectiveinstitutionsproject.org/
Dropping some links below. I've been working on this with a couple of people in Sweden for the last 2 years; we're building an open-source platform for better democratic decision making using prediction markets:
I guess a random thought I have here is that you would probably want video, and you would probably want it to be pretty spammable so you have many shots at it. Looking at Twitter, we already see a large number of bots commenting on things, which is like a text deepfake.
I can see that in a year or so, when Sora is good enough that creating short-form stable video is easy, we will see a lot more manipulation of voters across various social media through deepfakes.
(I don't think the tech is easy enough to use yet for it to be painless to do i...
FWIW, I find that if you analyze places where we've successfully aligned things in the past (social systems, biology, etc.) you find that the 1st and 2nd types of alignment really don't break down in that way.
After doing Agent Foundations for a while I'm just really against the alignment frame, and I'm personally hoping that more research in this direction will happen so that we get more evidence that other types of solutions are needed. (e.g. alignment of complex systems, as has happened in biology and social systems in the past)
FWIW, I completely agree with what you're saying here, and I think that if you seriously go into consciousness research, especially into what we Westerners would more readily label a sense of self, it quickly becomes infeasible to hold the position that the direction we're taking AI development, e.g. towards AI agents, will not lead to AIs having self-models.
For all intents and purposes this encompasses most physicalist or non-dual theories of consciousness, which are the only feasible ones unless you want to bite some really sour app...
I'm not a career counsellor, so take everything with a grain of salt, but you did publicly post this asking for unsolicited advice, so here you go!
So, more directly: if you're thinking of EA as a community that needs specific skills and you're wondering what to do, your people management, strategy, and general leadership skills are likely to be in high demand from other organisations: https://forum.effectivealtruism.org/posts/LoGBdHoovs4GxeBbF/meta-coordination-forum-2024-talent-need-survey
Someone else mentioned that enjoyment can be highly or...
So I'll just give some reporting on a vibe I've been feeling on the forum.
I feel a lot more comfortable posting on LessWrong compared to the EA Forum because it feels like there's a lot more moral outrage here? Like, if I go back 3 or 4 years, I felt that the forum was a lot more open to discussing and exploring new ideas. There have been some controversies recently around meat-eater problem stuff and similar, and I can't help but feel uncomfortable posting stuff with how people have started to react?
I like the different debate weeks as I think it set...
I want to preface that I don't have a strong opinion here, just some curiosity and a question.
If we are focusing on second order effects wouldn't it make sense to bring up something like moral circle expansion and its relation to ethical and sustainable living over time as well?
From a long-term perspective, I see one of the major effects of global health being better decision making through moral circle expansion.
My question to you is then: what time period are you optimising for? Does this matter for the argument?
Thank you for that substantive response, I really appreciate it! It was also very nice that you mentioned the Turner et al. definitions, I wasn't expecting that.
(Maybe write a post on that? There's a comment that mentions uptake from major players in the EA ecosystem and maybe if you acknowledge you understand the arguments they would be more sympathetic? Just a quick thought but it might be worth engaging there a bit more?)
I just wanted to clarify some of the points I was trying to make yesterday as I do realise that they didn't all get across as I w...
Thank you for this post David!
I've from time to time engaged with my friends in discussion about your criticisms of longtermism and some existential risk calculations. I found that this summary post of your work and interactions clarifies my perspective on the general "inclination" you have in engaging with the ideas, one that seems like a productive one!
Sometimes, I felt that it didn't engage with some of the core underlying claims of longtermism and existential risk, which did annoy me.
I want to respect the underlying time spend assym...
I just did different combinations of the sleep supplements; you still get the confounder effects, but it removes some of the cross-correlation. So glycine for 3 days with no magnesium, followed by magnesium for 3 days with no glycine, etc. It's not necessarily going to give you high accuracy, but you can see if it works or not and get a rough effect size.
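As a minimal sketch of the kind of rough effect-size estimate I mean (the supplement names, scores, and scale below are made up; adapt to whatever your tracker exports):

```python
# Rough effect-size estimate from a daily log of supplement blocks and sleep quality.
# The data and scoring scale are hypothetical, for illustration only.
import statistics

log = [
    # (supplement taken that day, sleep quality 0-100)
    ("glycine", 74), ("glycine", 78), ("glycine", 71),
    ("magnesium", 69), ("magnesium", 72), ("magnesium", 70),
    ("none", 65), ("none", 68), ("none", 66),
]

def mean_quality(condition: str) -> float:
    return statistics.mean(q for supp, q in log if supp == condition)

baseline = mean_quality("none")
for condition in ("glycine", "magnesium"):
    effect = mean_quality(condition) - baseline
    print(f"{condition}: ~{effect:+.1f} sleep-quality points vs. no supplement")
```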
I use Bearable for 3 months at a time to get a picture of what is currently working. You can track effect sizes of supplements on sleep quality, for example, if you also have a way of tracking your sleep.
Funnily enough, I noticed there was a bunch of 80/20 stuff in my day through using Bearable. I found that a cold shower, loving-kindness meditation in the morning, and getting morning sunlight made something like a 30% difference in energy and enjoyment, so I now do these religiously and it has worked wonders. (I really like Bearable for these sorts of experiments.)
Sorry for not noticing the comment earlier!
Here's the Claude distillation based on my reasoning on why to use it:
Reclaim is useful because it lets you assign different priorities to tasks and meetings, automatically scheduling recurring meetings to fit your existing commitments while protecting time for important activities.
For example, you can set exercising three times per week as a priority 3 task, which will override priority 2 meetings, ensuring those exercise timeblocks can't be scheduled over. It also automatically books recurrent meetin...
Thanks Jacques! I was looking for an upgrade to some of my LLM tools, including some IDEs, so I'll check that out.
The only tip I've got is using reclaim.ai instead of calendly for automatic meeting scheduling, it slaps.
Thanks! That post addresses what I was pointing at a lot better than I did in mine.
I can see from your response that I didn't get across my point as well as I wanted to, but I appreciate the answer nonetheless!
It was more a question of what leads to the better long-term consequences rather than combining them.
It seems plausible animals have moral patienthood and so the scale of the problem is larger for animals whilst also having higher tractability. At the same time, you have cascading effects of economic development into better decision making. As a longtermist, this makes me very uncertain on where to focus resources. I will therefore put myself centrally to signal my high uncertainty.
I think that still makes sense under my model of a younger and less tractable field?
Experience comes partly from the field being viable for a longer period of time since there can be a lot more people who have worked in that area in the past.
The well-described steps and concrete near-term goals can be described as a lack of easy tractability?
I'm not saying that it isn't the case that the proposals in longtermism are worse today but rather that it will probably look different in 10 years? A question that pops up for me is about how great t...
I enjoyed the post and I thought the platform for collective action looked quite cool.
I also want to mention that I think tractability is just generally a really hard thing for longtermism. It's also a newer field and so on expectation I think you should just believe that the projects will look worse than in animal welfare. I don't think there's any need for psychoanalysis of the people in the space even though it has its fair share of wackos.
Great point, I did not think of the specific claim of 5% when thinking of the scale but rather whether more effort should be spent in general.
My brain basically did a motte-and-bailey on me emotionally when it comes to this question, so I appreciate you pointing that out!
It also seems like you're mostly critiquing the tractability of the claim and not the underlying scale nor neglectedness?
It kind of gives me some GPR vibes as to why it's useful to do right now, and that, depending on initial results, either fewer or more resources should be spent?
Super exciting!
I just wanted to share a random perspective here: Would it be useful to model sentience alongside consciousness itself?
If you read Daniel Dennett's book Kinds of Minds or take some of the Integrated Information Theory stuff seriously, you will arrive at this view of a field of consciousness. This view is similar to Philip Goff's or to more Eastern traditions such as Buddhism.
Also, even in theories like Global Workspace Theory, the amount of localised information at a point in time matters alongside the type of information p...
There's this idea of the truth as an asymmetric weapon; I guess my point isn't necessarily that the approach vector will be something like:
Expert discussion -> Policy change
but rather something like
Expert discussion -> Public opinion change -> Policy change
You could say something about memetics and that it is the most understandable memes that get passed down rather than the truth, which is, to some extent, fair. I guess I'm a believer that the world can be updated based on expert opinion.
For example, I've noticed a trend in the AI Safety d...
Yeah, I guess the crux here is to what extent we actually need public support or at least what type of public support that we need for it to become legislation?
If we can convince 80-90% of the experts, then I believe that this has cascading effects on the population, and it isn't like AI being conscious is something that is impossible to believe either.
I'm sure millions of students have had discussions about AI sentience for fun, and so it isn't like fully out of the Overton window either.
I'm curious to know if you disagree with the above or if there is another reason why you think research won't cascade to public opinion? Any examples you could point towards?
A crux that I have here is that research that takes a while to explain is not going to inspire a popular movement.
Okay, what comes to mind for me here is quantum mechanics and how we've come up with some pretty good analogies to explain parts of it.
Do we really need to communicate the full intricacies of AI sentience to say that an AI is conscious? I guess that this isn't the case.
...The world where EA research and advocacy for AI welfare is most crucial is one where the reasons to think that AI systems are conscious are non-obvious, such that we
Damn, I really resonated with this post.
I share most of your concerns, but I also feel that I have some even more weird thoughts on specific things, and I often feel like, "What the fuck did I get myself into?"
Now, as I've basically been into AI Safety for the last 4 years, I've really tried to dive deep into the nature of agency. You get into some very weird parts of trying to computationally define the boundary between an agent and the things surrounding it and the division between individual and collective intelligence just starts to break down a ...
Startup: https://thecollectiveintelligence.company/
Democracy non-profit: https://digitaldemocracy.world/
So I've been working in a very adjacent space to these ideas for the last 6 months, and I think the biggest problem I have with this is just its feasibility.
That being said, we have thought about some ways of approaching a GTM for a very similar system. The system I'm talking about here is an algorithm to improve the interpretability and epistemics of organizations using AI.
One is to sell it as a way to "align" management teams lower down in the organization for the C-suite level since this actually incentivises people to buy it.
A second one is ...
I guess I felt that a lot of the post was arguing under a frame of utilitarianism which is generally fair I think. When it comes to "not leaving a footprint on the future" what I'm referring to is epistemic humility about the correct moral theories. I'm quite uncertain myself about what is correct when it comes to morality with extra weight on utilitarianism. From this, we should be worried about being wrong and therefore try our best to not lock in whatever we're currently thinking. (The classic example being if we did this 200 years ago we might still ha...
I enjoyed reading this and yet I find that in the practice of higher ambition there are some specific pitfalls that I still haven't figured out my way around.
If you've ever worked a 60-70 hour work week and done it for a longer period of time, you can notice a narrowing characteristic of experience; it is as if you have blinders to what is not within your stated goals or the project you're working on. (I like to call this compression.) With some of my more ambitious friends who do this more often, I find that they sometimes get lost in what they're w...