Ah, I wasn't aware that that wasn't the conventional definition. Thanks for the correction.
Still, I think it's important to somehow manage both sets of people and we can probably do better, though my idea is quite random.
Well, yes, but I was thinking about what to do with sociopaths that are already in the community. If your policy is "we kick out every sociopath we identify", no sociopath is going to identify themselves to you. I'm not advocating for attracting new sociopaths.
Mind you, I'm assuming here that there are plenty of sociopaths that aren't that bad, and want to do good, but suffer from the disability of not being able to care emotionally for others. I think it would be good if we could at least keep them out of powerful positions.
This was a pretty uninformed thought about how to deal with sociopaths, but it does feel like a problem worth someone thinking more deeply about.
Here's another question I have:
(I think yes. Something like 1% of the population is sociopathic, and I think EA's utilitarianism attracts sociopaths at a higher rate than the population baseline. Many sociopaths don't inherently want to do evil, especially not those attracted to EA. If sociopaths could somehow receive integrity guidance and be excluded from powerful positions, this would limit the risk from sociopaths.)
Random idea:
Maybe we should - after the question of whether to investigate has been discussed in more detail - organize a community-wide vote on whether there should be an investigation?
I have not been very closely connected to the EA community the last couple of years, but based on communications, I was expecting:
For example, Will posted in his Quick Takes 9 months ago:
...I had originally planned to get out a backwards-looking post early in the year, and I had been holding off on talking about other things until that was published. That post has been repeatedly delayed...
It now turns out that this has changed into podcasts, which is better than nothing, but doesn't leave room for conversation or accountability.
Formatting error; this is something Siebe is saying, not part of the Will quotation.
I would like to know what the disagree votes* mean here.
*At the time of this comment, it's 7 Agree - 7 Disagree
I hope you are correct. As an outsider, I find it very hard to judge without standardized, non-gameable benchmarks for agents.
I really like this post, but I think the concept of buckets is a mistake. It implies that a cause has a discrete impact and "scores zero" on the other two dimensions, while in reality some causes might do well on two dimensions (or at least score non-zero on them).
I also think over time, the community has moved more towards doing vs. donating, which has brought in a lot of practical constraints. For individuals this could be:
And also for the community:
If anyone has good suggestions of what I could email to relevant MEPs (just Zvi's post?) that would be net-positive (e.g. low risk of bad regulation), I'd be happy to hear them.
Ah yes, that's a great summary I hadn't read yet. Link: https://www.lesswrong.com/posts/KXHMCH7wCxrvKsJyn/openai-facts-from-a-weekend?commentId=eFuasCwaKJr2YiScY
And it looks likely that phrase actually meant "not required"!
Thanks for re-sharing! Unfortunately, these make it quite unclear how much they've given to EA. (I assume it's a large chunk of 'GCR Capacity Building'.)
No, they didn't, and it looks like we aren't going to see the investigation unless somebody leaks it. But it looks to me like it had something to do with his pattern of manipulative behavior: allegedly, he lied to other board members that McCauley wanted Toner fired (this was stated in the NY Times article on Murati, I think), which sounds like the proximate cause to me.
But if such behavior came up during the investigation, I'm confused how the investigators could NOT conclude there was good reason for his firing (maybe they're not so independent?) or w...
Thanks for making the list Remmelt!
Not sure how important this one is, but Air Canada recently had to comply with a refund policy made up by its own chatbot.
Also worth reading:
...WilmerHale found the prior Board implemented its decision on an abridged timeframe, without advance notice to key stakeholders, and without a full inquiry or an opportunity for Mr. Altman to address the prior Board’s concerns. WilmerHale found that the prior Board acted within its...
Impressions:
Yeah, agreed, though the disagreement is also specific to views on AI x-risk, which I view as very different from reputation.
The National Institute of Standards and Technology (NIST) is facing an internal crisis as staff members and scientists have threatened to resign over the anticipated appointment of Paul Christiano to a crucial, though non-political, position at the agency’s newly-formed US AI Safety Institute (AISI), according to at least two sources with direct knowledge of the situation, who asked to remain anonymous.
I don't know, threatening to resign is a pretty concrete thing and I don't find "revolt" such an exaggeration. You can doubt the sources and wish for mor...
Okay, got it!
The grant was also to buy a board seat, which makes it very different from a normal grant.
The 80K job promotion is indeed odd.
I think there was plenty of skepticism towards OpenAI, but maybe less so at the top.
I agree with your points in general, but I'm confused at the context. Are you implying here that EA empowered and trusted Sam Altman?:
I get the sense that EAs are, as a whole, too ready to assume that other EAs have low susceptibility to corruption from these sorts of influences.
I want to share a concern that hasn't been raised yet: this seems like a huge conflict of interest.
From the Power for Democracies website:
...Power for Democracies was founded in 2023 by Markus N. Beeko (the former Secretary General of Amnesty International in Germany) together with Stefan Shaw and Stephan Schwahlen, the founders of the philanthropy advisory legacies.now and co-founders of effektiv-spenden.org. Power for Democracies is funded by small family foundations and individuals from Germany and Switzerland who wish to make an effective contribution i...
Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism.
This suggests people's x-risk estimates are really small (such that only strong longtermism could justify 'extreme levels of caution'), which isn't what people actually believe.
I think "if you believe a technology will make humanity go extinct with a probability of 1% or more, be very, very cautious" would be endorsed by a large majority of the general population and the intellectual 'elite'. It's not at all a fringe moral position.
Although I agree with pretty much everything he writes, I feel like a crucial piece of the FTX case is missing: it's not only the failure of some individuals to exercise reasonable humility and abide by common-sense virtues. It's also a failure of the community, its infrastructure, and its processes to identify and correct this problem.
(The section on SBF starts with "When EAs Have Too Much Confidence".)
I don't feel I have much to say about that tbh, though I did talk about auditing financials here: https://forum.effectivealtruism.org/posts/eRyC6FtN7QEkDEwMD/should-we-audit-dustin-moskovitz?commentId=qEzHRDMqfR5fJngoo
If we have another major donor with a more mysterious financial background than mine, we should totally pressure them to undergo an audit!
That said, I'm not convinced the next scandal will look anything like that, and the real problem to me was the lack of smoking guns. It's very hard to remove someone from power without that, as we've recentl...
What would be the proper response of the EA/AI safety community, given that Altman is increasingly diverging from good governance/showing his true colors? Should there be any strategic changes?
So, what do we think Altman's mental state/true belief is? (Wish this could be a poll)
I'm also very curious what the internal debate on this is - if I were working on safety inside OpenAI, I'd be very upset.
I'm not convinced that he has "true beliefs" in the sense you or I mean it, fwiw. A fairly likely hypothesis is that he just "believes" things that are instrumentally convenient for him.
The way I envision him (obviously I don't know and might be wrong):
Altman, like most people with power, doesn’t have a totally coherent vision for why him gaining power is beneficial for humanity, but can come up with some vague values or poorly-thought-out logic when pressed. He values safety, to some extent, but is skeptical of attempts to cut back on progress in the name of safety.
Good luck!
Quick thoughts:
That is, unless you specifically want to keep it for FAST members.
Not that we can do much about it, but I find the idea of Trump being president at a time when we're getting closer and closer to AGI pretty terrifying.
A second Trump term is going to have a lot more craziness and far fewer checks on his power, and I expect it would have significant effects on the global trajectory of AI.
I was surprised to see that the Finance position is volunteer. That doesn't seem in line with the responsibilities?
why aren't more EA-aligned organizations and initiatives (other than MIRI) presenting global, strictly enforced bans on advanced AI training as a solution to AI-generated x-risk? Why isn't there more discussion of acceptance (of the traditional human condition) as an antidote to the risks of AGI, rather than relying solely on alignment research and safety practices to provide a safe path forward for AI (I'm not convinced such a path exists)?
I think these are far more relevant questions than the theoretical long-termist question you ask.
People can be in ...
Heavy use of kava is associated with liver damage, but it seems much less toxic than alcohol. (I use it in my insomnia stack)
I just want to share that I think you did an excellent job explaining the arguments on the recent Politico Tech podcast, in a way that I think comes across as very grounded and reasonable, which makes me more optimistic that MIRI can make this shift. I also hope that you can nudge Eliezer more towards this style of communication, which I think would make his audience more receptive. (I thought the tone of the TIME piece didn't seem professional enough). This seems especially important if Eliezer will also focus on communications and policy instead of research.
Really interesting initiative to develop ethanol analogs. If successful, replacing ethanol with a less harmful substance could have a really big effect on global health. The CSO of the company (GABA Labs) is Prof. David Nutt, a prominent figure in drug science.
I like that the regulatory pathway might be different from most recreational drugs, which would be very hard to get de-scheduled.
I'm pretty skeptical that GABAergic substances are really going to cut it, because I expect them to have pretty different effects from alcohol. We already have those (L-thean...
Top Anglophone universities are already quite small, and I find it hard to believe that the migration numbers are significant.
I wonder whether the focus on top universities in itself carries an Anglophone bias. Most other countries don't have a large disparity in talent attraction between universities; instead, talent is concentrated within universities, e.g. in Honours programmes and by grades.
In terms of policy recommendations, these differences don't seem to matter.
Maybe I'm nitpicking, but I see this point often and I think it's a little too self-serving. There are definitely policy ideas in both spheres that trade off against each other. E.g. many AI x-risk policy analysts (used to) want few players in order to reduce race dynamics, while such concentration of power would be bad for present-day harms. Or keeping significant chip production out of developing countries.
More generally, if governments really took x-risk seriously, they would be willing to sacrifice significant civil liberties, which wouldn't be acceptable at low x-risk estimates.
It's called an existential catastrophe: https://www.fhi.ox.ac.uk/Existential-risk-and-existential-hope.pdf or if you mean 1 step down, it could be a "global catastrophe".
or colloquially "doom" (though I don't think this term has the right serious connotations)
Something like the recent Nonlinear post, but focused on Sam, would likely have far, far higher EV.
I felt really uncomfortable reading this.
I agree with everything but the last point. "Director" or "CEO" simply refers to the name of the position, doesn't it?
Ray Dalio is giving out free $50 donation vouchers: tisbest.org/rg/ray-dalio/
Still worked just a few minutes ago
I wanted to check whether this project could become redundant with the expected arrival of TB vaccine(s) later this decade, but those had only 50% efficacy in Phase 2 trials, so it seems treatment will indeed be needed for quite a while.
A pretty poor piece of journalism in my opinion. It gets a number of facts wrong. For example:
This looks ever more unlikely. I guess I didn't properly account for:
Nevertheless, I think speculating on internal politics can be a valuable exercise: being able to model the actions & power of strong bargainers (including bad-faith ones) seems like a valuable skill for EA.
Thanks
Maybe quite a few people don't like random ideas being shared on the Forum?