All of Raemon's Comments + Replies

Raemon

I was encouraged by the positive response to my posts: it turned out that many people found them helpful! But that also raised the question: why isn't anyone else doing this? In a community of people who care a ton about the most effective ways to donate money, why wasn't anyone else set up to make similarly detailed cost-effectiveness analyses?

A sort of central paradox of EA as a movement/community is "you'd think writing up cost-benefit analyses of donation targets would be a core community activity," but also, there are big professional orgs evalua... (read more)

It feels fairly alarming to me that this post didn't get more pushback here and is so highly upvoted.

I think it makes a couple of interesting points, but then makes extremely crazy-sounding claims, taking the Rethink Priorities 7-15% numbers at face value, when the arguments for those AFAICT don't even have particular models behind them. This is a pretty crazy-sounding number that needs way better argumentation than "a poll of people said so," and here it's just asserted without much commentary at all.

(In addition to things other people have mentioned here,... (read more)

taking the Rethink Priorities 7-15% numbers at face value, when the arguments for those AFAICT don't even have particular models behind them. This is a pretty crazy-sounding number that needs way better argumentation than "a poll of people said so," and here it's just asserted without much commentary at all.

 

I'm confused by this statement. The welfare range estimates aren't based on a "poll"; they are based on numerous "particular models."

8
Bentham's Bulldog
If you want to read the longer defense of the RP numbers, you can read the RP report or my followup article on the subject: https://benthams.substack.com/p/you-cant-tell-how-conscious-animals. Suffice it to say, it strikes me as deeply unwise to base your assessments of bee consciousness on how they look, rather than on behavior. I think the strong confidence that small and simple animals aren't intensely conscious rests on little more than unquestioned dogma, with nothing very persuasive having ever been said in its favor: https://benthams.substack.com/p/betting-on-ubiquitous-pain. Also, the RP report wasn't a poll!

I agree about the 97% number and have corrected it! I think the point made by the number (many more bees than e.g. fish) is correct, but I failed to add the relevant caveats.

Regarding 10% as bad as chicken, that still strikes me as pretty conservative. I think bees spend much of their time suffering from extreme temperatures, disease, etc., and thinking that's 10% as bad as the life of an average chicken (note: this is before adjusting for sentience differential) strikes me as pretty conservative. The argument for insects mostly living bad lives is given in the linked post and in this post: if you live a super short life (days or weeks), you don't get enough welfare to outweigh the badness of a painful death.

The reason it has political potshots is that it was originally a blogpost and I just added it here. If I were writing it specifically for the forum, I wouldn't have added that, but I also am somewhat irritated by the EA forum culture where it feels like you have to write as if you're making an academic paper rather than having any whimsy or fun!

taking the Rethink Priorities 7 - 15% numbers at face value, when the arguments for those AFAICT don't even have particular models behind them

I'm interested to hear what you think the relevant difference is between the epistemic grounding of (1) these figures vs. (2) people's P(doom)s, which are super common in LW discourse. I can imagine some differences, but the P(dooms) of alignment experts still seem very largely ass-pulled and yet also largely deferred-to.

I basically disagree with this take on the discussion.

Most clearly: this post did generate a lot of pushback. It has more disagree votes than agree votes, the top comment by karma argues against some of its claims and is heavily upvoted and agree-voted, and it led to multiple response posts including one that reaches the opposite conclusion and got more karma & agree votes than this one.

Focusing on the post itself: I think that the post does a decent job of laying out the reasoning for its claims, and contains insights that are relevant and not widely ... (read more)

On the 7-15% figure: I don't actually see where the idea that it's common sense that smaller, less intelligent animals suffer less when they are in physical pain comes from. People almost never cite a source for it being common sense, and I don't recall having had any opinion about it before I encountered academic philosophy. I think it is almost certainly true that people don't care very much about small, dumb animals, but there are a variety of reasons why that is only moderate evidence for the claim that ordinary people think they experience less intense... (read more)

I recall previously hearing there might be a final round of potential amendments in response to things Gavin Newsom requests. Was/is that accurate?

8
ThomasW
Hello! The legislative session is over, so no more changes can be made to the bill. Sometimes that kind of thing does happen, but it happens during the legislative session.

(several years late, whoops!)

Yeah, my intent here was more "be careful deciding to scale your company to the point you need a lot of middle managers, if you have a nuanced goal", rather than "try to scale your company without middle managers."

In the context of an EA jobs list it seems like both are pretty bad. (there's the "job list" part, and the "EA" part)

5
Neel Nanda
I'm pro including both, but was just commenting on which I would choose if only including one for space reasons

Yeah, this does seem like an improvement. I appreciate you thinking about it and making some updates.

Can you say a bit more about:

and (2) worse in private than in public.

?

A few DC and EU people tell me that in private, Anthropic (and others) are more unequivocally anti-regulation than their public statements would suggest.

I've tried to get this on the record—person X says that Anthropic said Y at meeting Z, or just Y and Z—but my sources have declined.

Mmm, nod. I will look into the actual history here more, but, sounds plausible. (edited the previous comment a bit for now)

Following up my other comment:

To try to be a bit more helpful rather than just complaining and arguing: when I model your current worldview, and try to imagine a disclaimer that helps a bit more with my concerns but seems like it might work for you given your current views, here's a stab. Changes bolded.

OpenAI is a frontier AI research and product company, with teams working on alignment, policy, and security. We recommend specific opportunities at OpenAI that we think may be high impact. We recommend applicants pay attention to the details of individual r

... (read more)

Thanks.

Fwiw while writing the above, I did also think "hmm, I should also have some cruxes for what would update me towards 'these jobs are more real than I currently think.'" I'm mulling that over and will write up some thoughts soon.

It sounds like you basically trust their statements about their roles. I appreciate you stating your position clearly, but, I do think this position doesn't make sense:

  • we already have evidence of them failing to uphold commitments they've made in clear-cut ways. (i.e. I'd count their superalignment compute promises as basica
... (read more)
9
Habryka [Deactivated]
As I've discussed in the comments on a related post, I don't think OpenAI meaningfully changed any of its stated policies with regards to military usage. I don't think OpenAI really ever promised anyone they wouldn't work with militaries, and framing this as violating a past promise weakens the ability to hold them accountable for promises they actually made.

What OpenAI did was to allow more users to use their product. It's similar to LessWrong allowing crawlers or jurisdictions that we previously blocked to now access the site. I certainly wouldn't consider myself to have violated some promise by allowing crawlers or companies to access LessWrong that I had previously blocked (or, for a closer analogy, let's say we were currently blocking AI companies from crawling LW for training purposes, and I then change my mind and do allow them to do that; I would not consider myself to have broken any kind of promise or policy).

Thanks. This still seems pretty insufficient to me, but, it's at least an improvement and I appreciate you making some changes here.

Yeah, same. (Although this focuses entirely on their harm as an AI organization, and not their manipulative practices.)

I think it leaves the question "what actually is the above-the-fold-summary" (which'd be some kind of short tag).

I think EAs vary wildly. I think most EAs do not have those skills – I think it is a very difficult skill. Merely caring about the world is not enough. 

I think most EAs do not, by default, prioritize epistemics that highly, unless they came in through the rationalist scene, and even then, I think holding onto your epistemics while navigating social pressure is a very difficult skill that even rationalists who specialize in it tend to fail at. (Getting into details here is tricky because it involves judgment calls about individuals, in social situation... (read more)

0
JackM
The person who gets the role is obviously going to be highly intelligent, probably socially adept, and highly-qualified with experience working in AI etc. etc. OpenAI wouldn't hire someone who wasn't. The question is do you want this person also to care about safety. If so I would think advertising on the EA job board would increase the chance of this. If you think EAs or people who look at the 80K Hours job board are for some reason less good epistemically than others then you will have to explain why because I believe the opposite.

Surely this isn't the typical EA though?

I think job ads in particular are a filter for "being more typical."

I expect the people who have a chance of doing a good job to be well connected to previous people who worked at OpenAI, with some experience under their belt navigating organizational social scenes while holding onto their own epistemics. I expect such a person to basically not need to see the job ad.

2
JackM
You're referring to job boards generally but we're talking about the 80K job board which is no typical job board. I would expect someone who will do a good job to be someone going in wanting to stop OpenAI destroying the world. That seems to be someone who would read the 80K Hours job board. 80K is all about preserving the future. They of course also have to be good at navigating organizational social scenes while holding onto their own epistemics which in my opinion are skills commonly found in the EA community!

I do want to acknowledge: 

I refer to Jan Leike's and Daniel Kokotajlo's comments about why they left, and reference other people leaving the company.

I do think this is important evidence.

I want to acknowledge I wouldn't actually bet that Jan and Daniel would endorse everyone else leaving OpenAI, and would only weakly bet that they'd endorse taking down the current 80k ads as written.

I am grateful to them for having spoken up publicly, but I know that a reason people hesitate to speak publicly about this sort of thing is that it's easier for soundbyt... (read more)

I have slightly complex thoughts about the "is 80k endorsing OpenAI?" question.

I'm generally on the side of "let people make individual statements without treating it as a blanket endorsement." 

In practice, I think the job postings will be read as an endorsement by many (most?) people. But I think the overall policy of socially pressuring people to stop making statements that could be read as endorsements is net harmful.

I think you should at least be acknowledging the implication-of-endorsement as a cost you are paying.

I'm a bit confused about how... (read more)

I attempted to address this in the "Isn't it better to have alignment researchers working there, than not? Are you sure you're not running afoul of misguided purity instincts?" FAQ section.

I think the evidence we have from OpenAI is that it isn't very helpful to "be a safety conscious person there." (i.e. combo of people leaving who did not find it tractable to be helpful there, and NDAs making it hard to reason about, and IMO better to default assume bad things rather than good things given the NDAs)

I think it's especially not helpful if you're a low-contex... (read more)

1
JackM
It's insanely hard to have an outsized impact in this world. Of course it's hard to change things from inside OpenAI, but that doesn't mean we shouldn't try. If we succeed it could mean everything. You're probably going to have lower expected value pretty much anywhere else IMO, even if it does seem intractable to change things at OpenAI. Surely this isn't the typical EA though?

I do basically agree we don't have bargaining power, and that they most likely don't care about having a good relationship with us. 

The reason for the diplomatic "line of retreat" in the OP is more because:

  • it's hard to be sure how adversarial a situation you're in, and it just seems like generally good practice to be clear on what would change your mind (in case you have overestimated the adversarialness)
  • it's helpful for showing others, who might not share exactly my worldview, that I'm "playing fairly."

I'd probably imagine no-one much at OpenAI reall

... (read more)

fwiw I don't think replacing the OpenAI logo or name makes much sense.

I do think it's pretty important to actively communicate that even the safety roles shouldn't be taken at face value. 

Raemon

Nod, thanks for the reply.

I won't argue more for removing infosec roles at the moment. As noted in the post, I think this is at least a reasonable position to hold. I (weakly) disagree, but for reasons that don't seem worth getting into here.

The things I'd argue here:

  • Safetywashing is actually pretty bad, for the world's epistemics and for EA and AI safety's collective epistemics. I think it also warps the epistemics of the people taking the job, so while they might be getting some career experience... they're also likely getting a distorted view of what wh
... (read more)
8
Conor Barnes 🔶
Re: whether OpenAI could make a role that feels insufficiently truly safety-focused: there have been and continue to be OpenAI safety-ish roles that we don't list because we lack confidence they're safety-focused. For the alignment role in question, I think the team description given at the top of the post gives important context for the role's responsibilities: "OpenAI's Alignment Science research teams are working on technical approaches to ensure that AI systems reliably follow human intent even as their capabilities scale beyond human ability to directly supervise them." With the above in mind, the role responsibilities seem fine to me.

I think this is all pretty tricky, but in general, I've been moving toward looking at this in terms of the teams:

  • Alignment Science: Per the above team description, I'm excited for people to work there – though, concerning the question of what evidence would shift me, this would change if the research they release doesn't match the team description.
  • Preparedness: I continue to think it's good for people to work on this team, as per the description: "This team … is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models."
  • Safety Systems: I think roles here depend on what they address. I think the problems listed in their team description include problems I definitely want people working on (detecting unknown classes of harm, red-teaming to discover novel failure cases, sharing learning across industry, etc.), but it's possible that we should be more restrictive in which roles we list from this team.

I don't feel confident giving a probability here, but I do think there's a crux here around me not expecting the above team descriptions to be straightforward lies. It's possible that the teams will have limited resources to achieve their goals, and with the Safety Systems team in particular, I think there's an extra risk of safety work blending into product work. However, my impres

I'm with Ozzie here. I think EA Forum would do better with more technical content even if it's hard for most people to engage with. 

FYI "fag" is a pretty central example of a slur in America imo.

It gets used and normalized in some edgy cultures but I think that’s sort of like how the n-word gets used in some subcultures. (When I was growing up at least it was probably in the top 5 ‘worst’ words to say, at least weighted by ‘anyone ever actually said them’)

There’s also a thing where ‘retarded’ went from ‘not that bad’ to ‘particularly bad in some circles’, although I’m not sure how that played out since it was ‘after my time’.

All of this is sort of anti-inductive and evolving and makes sense to not be very obvious to a foreigner.

Eh, I've been living in the U.S. for a full decade, so I think the "foreigner excuse" doesn't really work here; I think I was mostly just wrong in a kind of boring way.

My guess is I just happened to have not heard this specific term used very much where I could see people's social reaction to it, which I guess is a weird attribute of slurs. Reading more about it in other contexts definitely made me convinced it qualifies as a slur (but also, relatedly, would honestly be quite surprised if people used it in any kind of real way during Manifest).

nggrzcgf

is... that rot13'd for a reason? (it seemed innocuous to me)

I work for Habryka, so my opinion here should be discounted. (For what it's worth, I think I have disagreed with some of his other comments this week, and I think your post did update me on some other things, which I'm planning to write up.) But re:

incorrectly predicted what journalists would think of your investigative process, after which we collaborated on a hypothetical to ask journalists, all of whom disagreed with your decision.

this seems egregiously inaccurate to me. Two of the three journalists said some flavor of "it's complicated" on the topic of ... (read more)

I think it's worth pointing to the specifics of each, because I really don't think it's unreasonable to gloss as "all of whom disagreed."

I would delay publication.

This goes without saying.

I think it depends a lot on the group's ability to provide evidence the investigators' claims are wrong. In a situation like that I would really press them on the specifics. They should be able to provide evidence fairly quickly. You don't want a libel suit but you also don't want to let them indefinitely delay the publication of an article that will be damaging to

... (read more)

What’s wrong with “make a specific targeted suggestion for a specific person to do the thing, with an argument for why this is better than whatever else the person is doing?”, like Linch suggests?

This can still be hard, but I think the difficulty lives in the territory, and it is an achievable goal for someone who follows the EA Forum and pays attention to which organizations do what.

3
Brad West🔸
Nothing is wrong with that. In fact it is a good thing to do. But this post seemed to discourage people from providing their thoughts regarding things that they think should be done unless they want to take personal responsibility for either personally doing it (which could entail a full-time job or multiple full-time jobs) or personally taking responsibility for finding another person who they are confident will take up the task.

It would be great if the proponent of an idea or opinion had the resources and willingness to act on every idea and opinion they have, but it is helpful for people to share their thoughts even if that is not something they are able or willing to do. I would agree with a framing of the Quick Take that encouraged people to act on their "should" or personally find another person who they think will reliably act on it, without denigrating someone who makes an observation about a gap or need.

Speaking as someone who had an idea and acted upon it to start an organization while maintaining a full-time job to pay my own bills and for the needs of the organization, it is neither easy for most people to do a lot of things that "should be done" nor is it easy to persuade others to give up what they are doing to "own" that responsibility. In my view there is nothing wrong with making an observation of a gap or need that you think it would be cost-effective to fill, if that is all that you are able or willing to do.

It seemed useful to dig into "what actually are the useful takeaways here?", to try to prompt some more action-oriented discussion.

The particular problems Elizabeth is arguing for avoiding:

  • Active suppression of inconvenient questions
  • Ignore the arguments people are actually making
  • Frame control / strong implications not defended / fuzziness
  • Sound and fury, signifying no substantial disagreement
  • Bad sources, badly handled
  • Ignoring known falsehoods until they're a PR problem

I left off "Taxing Facebook" because it feels like the wrong name (since it's not really p... (read more)

Is your concrete suggestion/ask "get rid of the karma requirement?"

6
Gemma 🔸
Hmmm I'm not being as prescriptive as that. Maybe there is a better solution to this specific problem - maybe requiring someone with higher karma to confirm the suggestion? (original person gets the credit)

Quick note: I don't think there's anything wrong with asking "are you an English speaker" for this reason; I'm just kinda surprised that that seemed like a crux in this particular case. Their argument seemed cogent, even if you disagreed with it.

The comments/arguments about the community health team mostly make me think something more like "it should change its name" than be disbanded. I think it's good to have a default whisper network to report things to and surreptitiously check in with, even if they don't really enforce/police things. If the problem is that people have a false sense of security, I think there are better ways to avoid that problem.

Just maintaining the network is probably a fair chunk of work.

That said – I think one problem is that the comm-health team has multiple roles. I'm ho... (read more)

But a glum aphorism comes to mind: the frame control you can expose is not the true frame control.

I think it's true that frame control (or, manipulation in general) tends to be designed to make it hard to expose, but, I think the actual issue here is more like "manipulation is generally harder to expose than it is to execute, so, people trying to expose manipulation have to do a lot of disproportionate work."

Part of the reason I think it was worth Ben/Lightcone prioritizing this investigation is as a retroactive version of "evaluations."

Like, it is pretty expensive to "vet" things. 

But if orgs know that practices that lead to people getting hurt (whether intentionally or not) are reasonably likely to eventually come to light, they're more likely to proactively put more effort into avoiding this sort of outcome.

2
Ozzie Gooen
That sounds a lot like what I picture as an "evaluation"? I agree that spending time on evaluations/investigations like this is valuable.  Generally, I agree that - the more (competent) evaluations/investigations are done, the less orgs will feel incentivized to do things that would look bad if revealed.  (I think we mainly agree, it's just terminology here)
Raemon

(crossposted from LessWrong)

This is a pretty complex epistemic/social situation. I care a lot about our community having some kind of good process for aggregating information, allowing individuals to integrate it, update, and decide what to do with it.

I think a lot of disagreements in the comments here and on LW stem from people having an implicit assumption that the conversation here is about "should [any particular person in this article] be socially punished?". In my preferred world, before you get to that phase there should be at least some period f... (read more)

0
Morpheus_Trinity
I don't think the initial goal of this discussion was to punish anyone socially. In my view, the author shared their findings because they were worried about our community's safety. Then, people in our community formed their own opinions based on what they read. In the comments, you can see a mix of things happening. Some people asked questions and wanted more information from both the author and the person being accused. Others defended the person being accused, and some just wanted to understand what was going on. I didn't see this conversation starting with most people wanting to punish someone. Instead, it seemed like most of us were trying to find out the truth. People may have strong feelings, as shown by their upvotes and downvotes, but I think it's important to be optimistic about our community's intentions. Some people are worried that if we stay impartial for too long, wrongdoers might not face any consequences, which is like letting them "get away with murder," so to speak. On the other hand, some are concerned about the idea of "cancel culture." But overall, it seems like most people just want to keep our community safe, prevent future scandals, and uncover the truth.

I don't know about Jonas, but I like this more from the self-directed perspective of "I am less likely to confuse myself about my own goals if I call it talent development." 

3
Jonas_
Yes, this.
4
James Herbert
Thanks! So, to check I understand you, do you think when we engage in what we've traditionally called 'community building' we should basically just be doing talent development? In other words, your theory of change for EA is talent development + direct work = arrival at our ultimate vision of a radically better world?[1]

Personally, I think we need a far more comprehensive social change portfolio.

[1] E.g., a waypoint described by MacAskill as something like the below: "(i) ending all obvious grievous contemporary harms, like war, violence and unnecessary suffering; (ii) reducing existential risk down to a very low level; (iii) securing a deliberative process for humanity as a whole, so that we make sufficient moral progress before embarking on potentially-irreversible actions like space settlement."

I do wanna note, I thought the experience of using the Google campus was much worse than many other EAGs I've been at – having to walk 5-10 minutes over to another part of the campus and hope that anyone else had shown up to the event I wanted to go to (which they often hadn't) eventually left me with a learned helplessness about trying to do anything.

2
Rebecca
I experienced this at EAG London 2022 as well, as that event was spread out over multiple buildings and streets.

TL;DR;BNOB 

("but not obviously bad")

Hmm, have there been applications that are like "what's your 50th percentile expected outcome?" and "what's your 95th percentile outcome?"

3
NickLaing
Such a great idea, love it – never seen that. I think for EA-style applications that could work well; for other applications it might be hard for many people to grasp.

I listed those on an SFF application last year, although I can't remember if they asked for it explicitly. I think it's a good idea.

Note: the automatic audio for this starts with what sounds like some weird artifacts around the image title.

I think there's a reasonable case that, from a health perspective, many people should eat less meat. But "less meat" !== "no meat". 

Elizabeth was pretty clear on her take being:

Most people’s optimal diet includes small amounts of animal products, but people eat sub-optimally for lots of reasons and that’s their right.

i.e. yes, the optimal diet is small amounts of meat (which is less than most people eat, but more than vegans eat).

The article notes:

It’s true that I am paying more attention to veganism than I am to, say, the trad carnivore idiots, even

... (read more)

The argument isn’t about that at all, and I think most people would agree that nutrition is important.

It sounds like you're misreading the point of the article.

The entire point of this article is that there are vegan EA leaders who downplay or dismiss the idea that veganism requires extra attention and effort. It doesn't at all say "there are some tradeoffs, therefore don't be vegan." (It goes out of its way to say almost the opposite.)

Whether costs are worth discussing doesn't depend on how large one cost is vs the other – it depends on whether the h... (read more)

Is there a word in the rest-of-the-world that means "everything that supports the core work and allows other people to focus on the core work?"

6
Joseph
I have an answer for this now: line functions and staff functions. Line functions do the core work of the organization, while the staff function "supports the organization with specialized advisory and support functions." My vague impression is that this labelling/terminology is fairly common among high-level management types, but that people in general likely wouldn't be familiar with it.
5
Linda Linsefors
I took a minute to think about what sort of org has a natural distinction between "core work" and "non-core work". A non-EA example would be a uni research lab. There is usually a clear distinction between:
  • research (core work)
  • teaching (possibly core work, depending on who you ask)
  • admin (everything else)
where the role of admin seems similar to EA ops.
3
Grayden 🔸
Most organizations do not divide tasks between core and non-core. The ones that do (and are probably most similar to a lot of EA orgs) are professional services ones.
2
Joseph
I think there isn't a single term (although I'm certainly not an expert, so maybe someone with a PhD in business or a few decades of experience can come and correct me). Finance, Marketing, Legal, Payroll, Compliance, and so on could all be departments, divisions, or teams within an organization, but I don't know of any term used to cover all of them with the meaning of "supporting the core work." I'm not aware of any label used outside of EA analogous to how "operations" is used within EA.
2
Vaidehi Agarwalla 🔸
"administration" ? but that sounds quite unappealing, which is why I think the EA movement has used operations. 

I hadn't looked into the details of the Windfall Clause's proposed execution and assumed it was prescribing something closer to GiveDirectly than "CEO gets to direct it personally." "CEO gets to direct it personally" does seem obviously bad.

The "disadvantaged background" thing does turn out to show up in the top several google results, so, does seem like a real thing, although I also had no idea until this moment and would have naively used the term "talent search" in the way you describe.

Another angle on this (I think this is implied by the OP but not quite stated outright?)

All the community-norm posts are an input into effective altruism. The gritty technical posts are an output. If you sit around having really good community norms, but you never push forward the frontier of human knowledge relevant to optimizing the world, I think you're not really succeeding at effective altruism. 

It is possible that frontier-of-human-knowledge posts should be paid for with money rather than karma, since karma just isn't well suited for rewarding them. But, yeah, it seems like it distorts the onboarding experience of what people learn to do on the forum.

A related, important consideration when Lightcone arranged to buy the Rose Garden Inn (for similar reasons as Wytham Abbey) is that the Inn can also be resold if it turns out not to be as valuable. So thinking of this as "15 million spent" isn't really right here.

The Rose Garden Inn is even something at a comparable price point to pressure-test against. As in, it is in the same general ballpark of distance to most of the potential users, roughly the same price, within a factor of 2 in room count, etc., but way more run down, and, as recent break-ins have shown, perhaps way more vulnerable to people just walking on premises and stealing construction materials as they work to fix it up.

I do think the Lightcone example is a large part of why I'm not up in arms about this. They've demonstrated in their existing somewhat s... (read more)

(it'd be handy to have a link in the opening paragraph so if I wanna avoid spoilers I can go do that easily)

I'm not sure what your imagining, in terms of overall infrastructural update here. But, here's a post that is in some sense a followup post to this:

https://www.lesswrong.com/posts/FT9Lkoyd5DcCoPMYQ/partial-summary-of-debate-with-benquo-and-jessicata-pt-1 
