All of Chris Leong's Comments + Replies

For anyone wondering about the definition of macrostrategy, the EA forum defines it as follows:

Macrostrategy is the study of how present-day actions may influence the long-term future of humanity.[1]

Macrostrategy as a field of research was pioneered by Nick Bostrom, and it is a core focus area of the Future of Humanity Institute.[2] Some authors distinguish between "foundational" and "applied" global priorities research.[3] On this distinction, macrostrategy may be regarded as closely related to the former. It is concerned with the assessment of

... (read more)

If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake. 

Disagree because it is at -36.

Happy to consider your points on the merits if you have an example of an objectionable post with positive upvotes.

That said: part of me feels that Effective Altruism shouldn't be afraid of controversial discussion, whilst another part of me wants to shift it to Less Wrong. I suppose I'd have to have a concrete example in front of me to figure out how to balance these views.

8
Jason
3d
Which (if any) of titotal's six numbered points only apply and/or have force if the post's net karma is positive, as Mr. Parr's have been at certain points in time?

I didn't vote, but maybe people are worried about the EA forum being filled up with a bunch of logistics questions?

This post makes some interesting points about EA's approach to philanthropy, but I certainly have mixed feelings on "please support at least one charity run by someone in the global south that just so happens to be my own".

Thanks so much, Chris. The heading, though, clearly said "Help me make some small stride on extreme poverty where I live".

Let me just say this: if you visited the project office of the UCF (in Kamuli) and saw for yourself that even the people working at the UCF are living in the exact same conditions of abject poverty as everyone else in our region (whom we are aiming to move out of poverty), you'd see why it isn't wrong at all to seek support for the work we are doing on extreme poverty.

We are simply trying to build a self-sustainable ... (read more)

Might be more useful if you explain why the arguments weren't persuasive to you.


So my position is that most of your arguments are worth some "debate points" but that mitigating potential x-risks outweighs this.

Our interest is in a system of liability that can meet AI safety goals and at the same time have a good chance of success in the real world.

I've personally made the mistake in the past of thinking that the Overton Window was narrower than it actually was. So even though such laws may not seem viable now, my strong expectation is that it will quickly chan... (read more)

I find the idea of a reverse burden of proof interesting, but tbh I wasn’t really persuaded by the rest of your arguments. I guess the easiest way to respond to most of them would be “Sure, but human extinction kind of outweighs it” and then you’d reraise how these risks are abstract/speculative and then I’d respond that putting risks in two boxes, speculative and non-speculative, hinders clear thinking more than it helps. Anyway, that’s just how I see the argument play out.

~~In any case my main worry about strong lia~~... (read more)

1
Cecil Abungu
11d
Thanks. Might be more useful if you explain why the arguments weren't persuasive to you. Our interest is in a system of liability that can meet AI safety goals and at the same time have a good chance of success in the real world. Anyway, even if we start from your premise, it doesn't mean strict liability would work better than a fault-based liability system (as we demonstrated in Argument 1). 

I have very mixed views on Richard Hanania.

On one hand, some of his past views were pretty terrible (even though I believe that you've exaggerated the extent of these views).

On the other hand, he is also one of the best critics of conservatives. Take, for example, this article where he tells conservatives to stop being idiots who believe random conspiracy theories and another where he tells them to stop scamming everyone. These are amazing, brilliant articles with great chutzpah. As someone quite far to the right, he's able to make these points far more cr... (read more)

Yeah, it's possible I'm taking a narrow view of what a professional organisation is. I don't have a good sense of the landscape here.

I guess I'm a bit skeptical of this proposal.

I don't think we'd have much credibility as a professional organisation. We could require people to do the intro and perhaps even the advanced fellowship, but that's hardly rigorous training.

I'm worried that trying to market ourselves as a professional organisation might backfire if people end up seeing us as just a faux one.

I suspect that this kind of association might be more viable for specific cause areas than for EA as a whole, but there might not be enough people except in a couple of countries.

4
David T
10d
OK, I guess the tone of my original reply wasn't popular (which is fair enough, I guess). The OP raised the subject of a non-trivial proportion of people perceiving EA as a 'phyg' as a problem, and suggested with moderately high confidence that the transition to a "professional association" would radically reduce this. I'm not seeing this.

Plenty of groups recruiting students brand themselves "movements" for "doing good" in some general way whilst being relatively unlikely to be accused of being a cult (climate change and civil/animal rights activists, fair-traders, volunteering groups, etc.). And I suspect far more people would say that the International Association of Scientologists and the Association of Professional Independent Scientologists, which both adopt the structure and optics of professional membership bodies, are definitely cults. (Obviously there are many more reasons to consider Scientology a cult, but if anything I'd think the belief-system-under-a-professional-veneer approach looks more suspicious rather than less. At any rate, forming professional membership bodies definitely isn't something actual cults don't do.)

So if people are perceiving EA as a cult, it's probably their reaction - justified or otherwise - to other things, some of which might be far too important to dispense with, like Giving Pledges and concern about x-risk, and some of which might be easily avoided, like reading from scripts (and yes, substituting ordinary words for insider jargon like 'phyg'). Other ways to dispel accusations that EA is a cult (if it is indeed a problem) feel like the subject for an entirely different debate, but I'd genuinely be interested in counter-arguments from anyone who thinks I'm wrong and changing the organization structure is the key.
-8
David T
16d
1
Michael Noetel
16d
That's a good point. All AMA members have to meet certain criteria. I can see how "8-week reading group" pales in comparison to a medical degree.

Thank you for posting this publicly. It's useful information for everyone to know.

Wasn't there some law firm that did an investigation? Plus some other projects listed here.

It would be useful for you to clarify exactly what you'd like to see happen and how this differs from the things that did happen, even though this might be obvious to someone like you who has high context on the situation. I, on the other hand, would have to do a bit of research to figure out what you're suggesting.

The post has a footnote, which reads:

Although EV conducted a narrow investigation, the scope was far more limited than what I’m describing here, primarily pertaining to EV’s legal exposure, and most results were not shared publicly.

As far as I know, what has been shared publicly from the investigation is that no one at EVF had actual knowledge of SBF's fraud.

I didn't know that CHAI or 80,000 Hours had recommended material.

The 80,000 Hours syllabus = "Go read a bunch of textbooks". This is probably not ideal for a "getting started" guide.

2
Linda Linsefors
22d
I do think AISF is a real improvement to the field. My apologies for not making this clear enough. You mean MIRI's syllabus? I don't remember what 80k's one looked like back in the day, but the one that is up now is not just "Go read a bunch of textbooks". I personally used CHAI's one and found it very useful. Also, sometimes you should go read a bunch of textbooks. Textbooks are great.

I was there for an AI Safety workshop, but I can't remember the content. Do you know what you included?

I found that just open discussion sometimes leads to less valuable discussion, so in both cases I'd focus on a few specific discussion prompts / trying to help people come to a conclusion on some question


That's useful feedback. Maybe it'd be best to take some time at the end of the first session of the week to figure out what questions to discuss in the second session? This would also allow people to look things up before the discussion and take some time for reflection.

I'd be keen to hear specifically what the pre-requisite knowledge is - just in order to

... (read more)
4
Linda Linsefors
25d
I would recommend having a week 0 with some ML and RL basics. I did a day 0 ML and RL speed run at the start of two of my AI Safety workshops at EA Hotel in 2019. Were you there for that? It might have been recorded, but I have no idea where it might have ended up. Although obviously some things have happened since then. Seems very worth creating. Depending on people's backgrounds, some people will have an understanding of these without knowing the terminology. A document explaining each term, with a "read more" link to some useful post, would be great. Both for people to know if they have the prerequisites, and to help anyone who almost has the prerequisites find that one blog post they (them specifically) should read to be able to follow the course.

I'm quite tempted to create a course for conceptual AI alignment, especially since agent foundations has been removed from the latest version of the BlueDot Impact course[1].

If I did this, I would probably run it as follows:

a) Each week would have two sessions. One to discuss the readings and another for people to bounce their takes off others in the cohort. I expect that people trying to learn conceptual alignment would benefit from having extra time to discuss their ideas with informed participants.
b) The course would be less introductory, though without... (read more)

1
Jamie B
26d
Thanks for engaging! Sounds like a fun experiment! I found that just open discussion sometimes leads to less valuable discussion, so in both cases I'd focus on a few specific discussion prompts / trying to help people come to a conclusion on some question. I linked to something about learning activities in the main post, which I think helps with session design. As with anything, though, I think trying it out is the only way to know for sure, so feel free to ignore me.

I'd be keen to hear specifically what the pre-requisite knowledge is - just in order to inform people if they 'know enough' to take your course. Maybe it's weeks 1-3 of the alignment course? Agree with your assessment that further courses can be more specific, though.

Sounds right! I would encourage you to front-load some of the work before creating a curriculum, though. Without knowing how expert you are in agent foundations yourself, I'd suggest taking steps so that your first stab is close enough that giving feedback seems valuable to the people you ask, so it's not a huge lift to get from first draft to final product, and so there are no nasty surprises from people who would have done it completely differently. I.e. what if you ask 3-5 experts what they think the most important part of agent foundations is, and maybe try to conduct 30 min interviews with them to solicit the story they would tell in a curriculum? You can also ask them for their top recommended resources, and why they recommend them. That would be a strong start, I think.

I think the biggest criticism that this cause will face from an EA perspective is that it's going to be pretty hard to argue that moving more talent to first-world countries to do random things is better than either convincing more medical, educational, or business talent to move to developing countries to help them develop, or focusing on bringing more talent to top cause areas. I'm not saying that such a case couldn't be made, just that I think it'd be tricky.

1
David T
1mo
The question of whether net outflows of talented workers might be bad for some of the worst-off countries is a thorny one which probably differs on a sector-by-sector and even person-by-person basis (LMICs have shortages of certain skillsets themselves, but there are other fields in which talented workers simply won't get the opportunity without moving, and sending remittances home adds value to overseas economies too).

It's interesting the white paper picks UK social care institutions generally being unable to recruit in Africa as an example, since the reason isn't that agencies specialising in social care recruitment from overseas don't exist, but that they're restricted from doing so by the UK respecting a WHO red list identifying countries with domestic healthcare worker shortages.

But I'd have thought the most straightforward criticism from an EA perspective is that the issue of skilled migration isn't exactly neglected, and (in the medium run at least) migration to richer countries is self-funding, implying that most institutions aiding the process need not depend on philanthropy. Firms and research institutions have strong incentives to acquire overseas talent, middlemen have financial incentives to help them, and the pay differences cover those costs. (The flip side of this, I guess, is that grants to incubate new projects can have returns that compound in the long term.)
4
Jack Malde
1mo
Yeah, I have a feeling that the best way to argue for this on EA grounds might surprisingly be on the basis of richer-world economic growth, which is kind of antithetical to EA's origins, but has been argued to be of overwhelming importance, e.g.:
* Economist Tyler Cowen says our overwhelming priorities should be maximising economic growth and making civilisation more stable. Is he right?
* The Moral Consequences of Economic Growth

The upshot is: I recommend only choosing this career entry route if you are someone for whom working exclusively at EA organisations is incredibly high on your priority list.


I think taking a role like this early on could also be high-value if you're trying to determine whether working in a particular cause area is for you. Often it's useful to figure that out pretty early. Of course, the fact that it isn't the exact same job you might be doing later on could make it less useful for this purpose.

This is a very interesting idea. I'd love to see if someone could make it work.

I'm perfectly fine with holding an opinion that goes against the consensus. Maybe I could have worded it a bit better though? Happy to listen to any feedback on this.

I suppose at this stage it's probably best to just agree to disagree.

2
Jeff Kaufman
1mo
I guess, though judging by the votes on your "I gave this a downvote for the clickbait title" it seems to me that a lot of us think you're being unfair to the author.

Sorry, I misread the definition of ex ante.

I agree that the post poses a challenge to the standard EA view.

I don't see "There are no massive differences in impact between individuals" as an accurate characterization of the claim that the argument actually supports.

 "There are no massive ex ante differences in impact between individuals" would be a reasonable title. Or perhaps "no massive identifiable differences"?

I can see why this might seem like an annoying technicality. I still think it's important to be precise, and rounding arguments off like this increases the chances that people talk past each other.

9
Sarah Weiler
1mo
Wasn't quite sure where best to respond in this thread, hope here makes decent sense.

I did actually seek to convey the claim that individuals do not differ massively in impact ex post (as well as ex ante, which I agree is the weaker and more easily defensible version of my claim). I was hoping to make that clear in this bullet point in the summary: "I claim that there are no massive differences in impact between individual interventions, individual organisations, and individual people, because impact is dispersed across [many actions]". So, I do want to claim that: if we tried to apportion the impact of these consequences across contributing actions ex post, then no one individual action is massively higher in impact than the average action (with the caveat that net-negative actions and neutral actions are excluded; we only look at actions that have some substantial positive impact).

That said, I can see how my chosen title may be flawed because a) it leaves out large parts of what the post is about (adverse effects, conceptual debate); and b) it is stronger than my actual claim (the more truthful title would then need to be something like "There are probably no massive differences in impact between individuals (excluding individuals who have a net-negative or no significant impact on the world)"). I am not sure if I agree that the current title is actively misleading and click-baity, but I take seriously the concern that it could be. I'll mull this over some more and might change the title if I conclude that it is indeed inappropriate. [EDIT: Concluded that changing the title seems sensible and appropriate. I hope that the new title is better able to communicate fully what my post is about.]

I'm obviously not super happy about the downvote, but I appreciate that you left the comment to explain and push me to reconsider, so thank you for that.
2
Owen Cotton-Barratt
1mo
Yeah, I'd often be happier with people being clearer about whether they mean ex ante or ex post. But I do think that when people are talking about "distribution of impact" it's more important to clarify if they mean ex post (since that's less often the useful meaning) than if they mean ex ante.
2
Jeff Kaufman
1mo
I agree it would be better if the post explicitly compared the ex-ante and ex-post ways of looking at impact, but I don't think it's reasonable to expect the post to make this distinction in its title.

"Is that this is not true because for there to be massive differences ex ante we would (a) need to understand the impact of choices much better" - Sorry, that's a non-sequitur. The state of the world is different from our knowledge of it. The map is not the territory.

"X is false" and "We don't know whether X is true or false" are different statements.

[This comment is no longer endorsed by its author]
2
Owen Cotton-Barratt
1mo
(While I don't think that the argument in the post does enough to support the conclusion in the title,) I think this is a case where the map is the important thing: when making decisions, we have to use ex ante impact (which depends on a map; although you can talk about doing it with respect to a better map than you have now) rather than ex post (which would be the territory). This is central enough that I think it's natural to read claims about the distribution of impact as being about the ex ante distribution rather than the ex post one.

It's fine to mention other factors too, but the claim (at least from the outline) seems to be that "it's hard to tell" rather than "there are no large differences in impact". Happy to be corrected if I'm wrong.

5
Jeff Kaufman
1mo
The standard EA claim is that your decisions matter a lot because there are massive differences in impact between different altruistic options, ex ante. The core claim in this post, as I read it, is that this is not true because for there to be massive differences ex ante we would (a) need to understand the impact of choices much better and (b) we would need to be in a world where far fewer people contribute to any given advance.

"I understand the post is claiming that in as much as it is possible to evaluate the impact of individuals or decisions, as long as you restrict to ones with positive impact the differences are small, because good actions tend to have credit that is massively shared." - There's a distinction between challenges with evaluating differences in impact and whether those impacts exist.

The other two arguments listed in the outline are: "Does this encourage elitism?" and a pragmatic argument that individualized impact calculations are not the best path of action.

None of these are the argument made in the title.

I gave this a downvote for the clickbait title, which, judging from the outline, doesn't seem to match the actual argument. Apologies if this seems unfair; titles like this are standard in journalism, but I hope they don't become standard in EA, as they might affect our epistemics. This is not a comment on the quality of the post itself.

5
Sarah Weiler
1mo
I appreciate the sentiment and agree that preventing clickbaity titles from becoming more common on the EA forum is a valid goal! I'd sincerely regret if my title does indeed fall into the "does not convey what the post is about" category. But as Jeff Kaufman already wrote, I'm not sure I understand in which sense the top-level claim is untrue to the main argument in the post. Is it because only part of the post is primarily about the empirical claim that impact does not differ massively between individuals?
5
Jeff Kaufman
1mo
I think the title does match the argument? I understand the post is claiming that in as much as it is possible to evaluate the impact of individuals or decisions, as long as you restrict to ones with positive impact the differences are small, because good actions tend to have credit that is massively shared.

Amazing work!

1) What did you make it in?
2) How difficult was it?
3) Is it open source?

Sorry to hear this. Unfortunately, AI Safety opportunities are very competitive.

You may want to develop your skills outside of the AI safety community and apply to AI Safety opportunities again further down the track when you're more competitive.

3
Rebecca
1mo
Is Paul "supremely qualified" regarding CBRNs? Also, what's the difference between a political and a non-political position?

Happy to talk that through if you'd like, though I'm kind of biased, so probably better to speak to someone who doesn't have a horse in the race.

I don't know if this can be answered in full generality.

I suppose it comes down to things like:
• Financial runway/back-up plans in case your prediction is wrong
• Importance of what you're doing now
• Potential for impact in AI safety

1
yanni kyriacos
2mo
I agree. I think it could be a useful exercise though to make the whole thing (ASI) less abstract. I find it hard to reconcile that (1) I think we're going to have AGI soon and (2) I haven't made more significant life changes. I don't buy the argument that much shouldn't change (at least, in my life). 

I would love to see attempts at either a community-building fellowship or a community-building podcast.

With the community-building podcast, I suspect that people would prefer something that covers topics relatively quickly as community builders are already pretty busy.

a) I suspect AI able to replace human labour will create such abundance that it will eliminate poverty (assuming that we don't then allow the human population to increase to the maximum carrying capacity).
b) The connection the other way around is less interesting. Obviously, AI requires capital, but once AI is able to self-reproduce, the amount of capital required to kickstart economic development becomes minimal.
c) "I also wonder if you have the time to expand on why you think AI would solve or improve global poverty, considering it currently has the adverse effect?" - How is it having an adverse effect?

Debating still takes time and energy which reduces the time and energy available elsewhere.

Yep, that's the main one, but to a lesser extent Sora being ahead of schedule + realising what this means for AI agents.

It's less about my median timeline moving down, but more about the tail end not extending out as far.

I’d imagine the natural functions of city and national groups to vary substantially.

3
Rockwell
2mo
I think that's a common intuition! I'm curious if there were particular areas covered (or omitted) from this post that you see as more clearly the natural function of one versus the other. I'll note that a couple of factors seem to blur the lines between city and national MEARO functions:
- Size of region (e.g. NYC's population is about 8 million, Norway's is about 5.5 million)
- Composition of MEAROs in the area (e.g. many national MEAROs end up with a home-base city or grew out of a city MEARO; some city MEAROs are in countries without a national MEARO)
I could see this looking very different if more resources went toward assessing and intentionally developing the global MEARO landscape in years to come.

I was previously very uncertain about this, but given the updates in the last week, I'm now feeling confident enough in my prediction of the future that I regret any money I put into my super (our equivalent of a pension).

Please do not interpret this comment as financial advice, rather just a statement of where I am at.

1
OscarD
2mo
What updates are you thinking of? Gemini 1.5?

A few questions that you might find helpful for thinking this through:

• What are your AI timelines?
• Even if you think AI will arrive by X, perhaps you'll target a timeline of Y-Z years because you think you're unlikely to be able to make a contribution by X
• What agendas are you most optimistic about? Do you think none of these are promising and what we need are outside ideas? What skills would you require to work on these agendas?
• Are you likely to be the kind of person who creates their own agenda or contributes to someone else's?
• How enthusiastic are... (read more)

Do the intro fellowship completions only include the EA Intro Fellowship, not people doing the AI Safety Fundamentals course?

3
James Herbert
2mo
Correct. I’m a co-author on a post on AIS coordination in NL which has some relevant AIS numbers (sorry for not linking, I’m on my phone right now).

My gut feeling is that, putting to one side the question of which is the most effective strategy for reducing x-risk etc., the 'narrow EA' strategy is a mistake because there's a good chance it is unethical to try to guide society without broader societal participation. 


I suppose it depends on how much of an emergency you consider the current situation to be.

If you think it's truly a dire situation, I expect almost no-one would reason as follows: "Well, we're insufficiently diverse, it'd be immoral for us to do anything, we should just sit over here a... (read more)

1
James Herbert
2mo
Yeah good point! I'm super cautious about this line of reasoning because, given high enough certainty about the seriousness of the situation, it can be used to justify almost anything. 

If EA decided to pursue the politics and civil society route, I would suggest that it would likely make sense to follow a strategy similar to what the Good Ancestors Project has been following in Australia. This project has done a combination of a) outreach to policy-makers, b) co-ordinating an open letter to the government, c) making a formal submission to a government inquiry, and d) walking EAs through the process of making their own submissions (you'd have to check with Greg to see if he still thinks all of these activities are worthwhile).

Even though AI Pol... (read more)

If this ends up succeeding, then it may be worthwhile asking whether there are any other sub-areas of EA that might deserve their own forum, but I suppose that's more a question to ask in a few months.

To be honest, I don't really see these kinds of comments criticising young organisations, which likely have access to limited amounts of funding, as helpful. I think there are some valid issues to be discussed, but I'd much rather see them discussed at an ecosystem level. Sure, it's less than ideal that low-paid internships provide an advantage to those from a particular class, but it's also easier for wealthier people to gain a college degree, and I think it'd be a mistake for us to criticise universities for offering college degrees. At least with th... (read more)

1
Heramb Podar
3mo
So, some high-level suggestions based on my interactions with other people are:
1. Being more explicit about this in 80K Hours calls, or talking about the funding bar (potentially somehow with grantmakers / intro'ing to successful candidates who do independent stuff). Maybe organisations could explicitly state this in their fellowship/intern/job applications: "Only 10 out of 300 last year got selected", so that people don't over-rely on some applications.
2. There is a very obvious point that community builders can only do so much, because their general job is to point resources out and set initial things rolling. I think that, as community builders, being vocal about this from an early point is important. This could look like, "Hey, I only know as much as you do now that you have read AGI SF and Superintelligence." Community builders could also try connecting with slightly more senior people and doing intros on a selective basis (e.g., I know a few good community builders who go out of their way at an EAGx to score convos with such people).
3. I think metrics for 80K and CBs need to be more heavily weighted (if not already) towards "X went on to do an internship and publish a paper" and away from "this guy read Superintelligence and did a fun hackathon". The latter also creates weird sub-incentives for community members to score brownie points with CBs and make a lot of movement with little impactful progress.
4. Talking about creating your own opportunities seems really untalked about in EA circles - there is a lot of talk about finding opportunities and overwhelming newcomers with EA org webpages, which, coupled with neglectedness, causes them to overestimate the opportunities. Maybe there could be a guide for this, some sort of a group/support for this?
5. For early-career folks, maybe there could be some sort of a peer buddy system where people who are a little bit further down the road can get matched and collaborate/talk.
A lot of these convers

I'm not going to fully answer this question, b/c I have other work I should be doing, but I'll toss in one argument. If different domains (cyber, bio, manipulation, etc.) have different offense-defense balances, a sufficiently smart attacker will pick the domain with the worst balance. This recurses down further for at least some of these domains, where they aren't just a single thing but a broad collection of vaguely related things.

Oh, I can see why it is ambiguous. I meant whether it is easier to attack or defend, which is separate from the "power" attackers have and defenders have.

"What incentive is there to destroy the world, as opposed to take it over? If you destroy the world, aren't you sacrificing yourself at the same time?"

Some would be willing to do that if they can't take it over.

2
Matthew_Barnett
3mo
What reason is there to think that AI will shift the offense-defense balance absurdly towards offense? I admit such a thing is possible, but it doesn't seem like AI is really the issue here. Can you elaborate?

Your argument in objection 1 doesn't address the position of people who are worried about an absurd offense-defense imbalance.

Additionally: It may be that no agent can take over the world, but that an agent can destroy the world. Would someone build something like that? Sadly, I think the answer is yes.

2
Matthew_Barnett
3mo
I'm having trouble parsing this sentence. Can you clarify what you meant? What incentive is there to destroy the world, as opposed to take it over? If you destroy the world, aren't you sacrificing yourself at the same time?

Pretty terribly. We fell into in-fighting and everyone with an axe to grind came out to grind it.

We need to be able to better navigate such crises in the future.

Looks like outer alignment is actually more difficult than I thought. Sherjil Ozair, a former DeepMind employee, writes:

"From my experience doing early RLHF work for Gemini, larger models exploit the reward model more. You need to constantly keep collecting more preferences and retraining reward models to make it not exploitable. Otherwise you get nonsensical responses which have exploited the idiosyncracy of your preferences data. There is a reason few labs have done RLHF successfully"

In other words, even though we look at things like ChatGPT and go, "Wow,... (read more)
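To make the failure mode in the quote more concrete, here is a small, self-contained toy sketch (my own illustration, not RLHF itself and not any lab's code): a "policy" optimises a reward proxy that was fit on limited preference data, drifts into regions where the proxy and the true human preference diverge, and the remedy is exactly what Ozair describes: collect fresh preferences where the policy now operates and refit the reward model.

```python
import numpy as np

# Toy illustration only (not RLHF itself): a "policy" that maximises a reward
# proxy fit on limited preference data will exploit the proxy's blind spots,
# so the proxy has to be refit on fresh data from where the policy now operates.

rng = np.random.default_rng(0)

def true_reward(x):
    # What humans actually want: moderate values of the feature x (best at x = 2).
    return -(x - 2.0) ** 2

def fit_proxy(xs):
    # "Reward model": a linear fit to noisy human judgements on the data we have.
    ys = true_reward(xs) + rng.normal(0, 0.1, size=xs.shape)
    slope, intercept = np.polyfit(xs, ys, 1)
    return lambda x: slope * x + intercept

# Round 1: preferences collected near the current policy's outputs (x in [0, 1]).
xs = rng.uniform(0.0, 1.0, 50)
proxy = fit_proxy(xs)

# The policy then maximises the proxy over a wider range than the data covered.
candidates = np.linspace(0.0, 10.0, 1001)
best = candidates[np.argmax(proxy(candidates))]
print(f"policy picks x={best:.2f}; proxy score {proxy(best):.2f}; true reward {true_reward(best):.2f}")
# The proxy is increasing on [0, 1], so the policy runs to x = 10, where the
# proxy score is high but the true (human) reward is terrible: reward hacking.

# The remedy from the quote: collect fresh preferences where the policy now
# operates and retrain the reward model, shrinking the exploitable gap.
xs = np.concatenate([xs, rng.uniform(0.0, 10.0, 200)])
proxy = fit_proxy(xs)  # a richer model class would help further here
best = candidates[np.argmax(proxy(candidates))]
print(f"after refit: policy picks x={best:.2f}; true reward {true_reward(best):.2f}")
```

Real RLHF replaces the linear fit with a learned reward model and the brute-force search with reinforcement-learning optimisation of the language model, but the exploit-then-refit dynamic it illustrates is the same in spirit.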
