Guy Raveh

Doing my master's in applied mathematics, where my research is on online learning. I'm very interested in AI from theoretical and interdisciplinary points of view: psychology, ethics, safety.

Comments

What Small Weird Thing Do You Fund?

[Sorry for only coming here 2 months later]

I stopped seeing content from him for various reasons a few years ago, so I may not be up to date, and I'm somewhat biased on this*. But I don't remember discussions that he participated in very fondly, and I never felt he promoted anything good.

*Which may lead one to ask why I'm writing this. I think my opinion is at least partly grounded in reality, and I'm trying to err on the side of saying things rather than not, even if they're weak and somewhat political, because I worry such disagreements aren't voiced enough.

Open Thread: Winter 2021

There's an extra dot at the end. Remove it and the link is fine.

[Feature Announcement] Rich Text Editor Footnotes

One (possibly stupid) question: in markdown mode, where do I write the actual content of the footnote?

13 Very Different Stances on AGI

orthogonality thesis

moral nonrealism

What are those?

Have you considered switching countries to save money?

I don't think seeking lower-tax countries is positive. Some Bermuda-based EAs would probably disagree, but in my opinion they strengthen the point, since they created an association in public opinion between EA and tax evasion.

Convergence thesis between longtermism and neartermism

I really like this, though I haven't had the time yet to read all the details. I do tend to feel a convergence should occur between near term and long term thinking, though perhaps for different reasons (the most important of which is that to actually get the long term right, we need to empower most or all of the population to voice their views on what "right" means - and empowering the current population is also what non-longtermists generally try to do).

I also specifically like the format and style. It's almost the same as other posts, but somehow this is much more legible to me than many (most?) EA forum posts.

Democratising Risk - or how EA deals with critics

I think a third hypothesis is that they really do think funding whatever we are funding at the moment is more important than continuing to check whether we are right, and don't see the problems with this attitude (perhaps because the problem is more visible from a movement-wide, long-term perspective than from an immediate local one?).

Democratising Risk - or how EA deals with critics

When you say "surely", what do you mean? It would certainly be legal and moral. Would a body of research generated only by people who agree with a specific assumption be better in terms of truth-seeking than that of researchers receiving unconditional funding? Of that I'm not sure.

And now suppose it's hard to measure whether a researcher conforms with the initial assumption, and in practice this is assessed by continual qualitative evaluation by the funder. Is the condition for funding now really only that initial assumption (e.g. that animals deserve moral consideration), or is it a measure of how much the research conforms with the funder's specific conclusions from that assumption (e.g. that welfarism is good)? In this case I have serious doubts about whether the research produces valuable results (cf. publication bias).

Democratising Risk - or how EA deals with critics

If you completely disagree that people consistently producing bad work should not be allocated scarce funds, I'm not sure we can have a productive conversation.

I theoretically agree, but I think it's hard to separate judgements about research quality from disagreement with its conclusions, or even unrelated opinions on the authors.

For example, I don't think the average quality of research by non-tenured professors (who are supposedly judged on the merits of their work) is better than that of tenured professors.
