If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)

If you're new to the EA Forum, you can use this thread to introduce yourself! You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)

If you're new to effective altruism, consider checking out the Motivation Series (a collection of classic articles on EA). You can also learn more about how the Forum works on this page.


Hello everyone,

I am Srishti Goyal from New Delhi, India. I have been working as a researcher in the social development space since completing my post-graduation in Economics. In the coming years, I intend to undertake a Ph.D. in Behavioral Economics, and I would like to connect with people who are interested or working in this space. Beyond social development (education, health, child protection, among others) and behavioral development, my interests include international affairs, political affairs, and climate change.

I would like to thank Silvana Hultsch for introducing me to effective altruism and this forum. It turns out that my ideology is in sync with EA; I just wasn't aware of the term.

I look forward to learning from you! :)

Regards,

Srishti

Anyone else find it weird that we can strongly upvote our own comments and posts? It doesn’t seem to do anything except promote the content of certain people who are happy to upvote themselves, at the expense of those who aren’t.

EDIT: I strongly upvoted this comment

Yeah, this has been discussed before. I think that it should not be possible to strongly upvote one's own comments.

Relatedly, should we have a strong dispreference for upvoting (especially strong-upvoting) people who work in the same org as us, or whom we otherwise may have a nonacademic interest in promoting*? Deliberately soliciting upvotes on the Forum is clearly verboten, yet in practice I know I'm much more likely to read somebody's work if I have a prior relationship with them**, and since I only upvote posts I've read, I'm disproportionately likely to upvote posts by people I work with, which seems bad.

On the flip side, I guess you can argue that any realistic pattern of non-random upvoting is a mild conflict of interest. For example, I'm more likely to read forecasting posts on the Forum, and I'm much more likely to upvote (and I rarely downvote) posts about forecasting. This in turn has a very small effect of raising awareness/attention/prestige of forecasting within EA, which has a very small but nonzero probability of having material consequences for me later.

So broadly, there are actions along the spectrum of "upvoting things you find interesting may lead to the movement being more interested in things you find interesting, which in turn may have a positive effect on your future material consequences" all the way up to "full astroturfing."

A possible solution to this is for people to reflect on how they came across the article and chose to read it. If the honest answer is "I'm unlikely to have read this article if not for a prior connection with the author," then opt against upvoting it***.

It's also possible I'm overthinking this, and other people don't think this is a problem in practice.

*(e.g. funders/fundees, mentors/mentees, members of the same cohort, current project partners, romantic relationships, etc.)

**I haven't surveyed others, so I don't know if this reading pattern is unusual. I'll be slightly surprised if it is, though.

***or flip a coin, biased towards your counterfactual probability of reading the article without that prior connection.
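To make the coin-flip rule in *** concrete, here is a minimal sketch; the function name and the example probability are illustrative, and the estimate itself is necessarily subjective:

```typescript
// Decide whether to upvote a post you read partly because of a prior
// connection with the author. `counterfactualReadProb` is your subjective
// estimate of the chance you would have read it with no such connection.
function shouldUpvote(counterfactualReadProb: number): boolean {
  return Math.random() < counterfactualReadProb;
}

// Example: you estimate a 30% chance you'd have read the post anyway,
// so the upvote goes through with probability 0.3.
if (shouldUpvote(0.3)) {
  // cast the upvote
}
```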

I strong-upvote when I feel like my comment is underappreciated, and don't think of it as too different from strong-upvoting someone else's comment. The existence of the strong-upvote already allows someone to strong-upvote whatever they want, which doesn't seem to be a problem.

I think of this as different from voting for another person's content. When I read a comment with e.g. 3 upvotes and 10 karma, I assume "the author supports this, and I guess at least one other person really strongly agrees." If the "other person" who strongly agrees is actually the author, I get a skewed sense of how much support their view has. 

Given the tiny sample sizes that voting represents, this isn't a major problem, but it still seems to make the karma system work a bit less well. As a moderator/admin, I'd discourage strong-upvoting yourself, though the Forum doesn't have an official ban on it.
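To make the skew concrete, here is a toy decomposition. (Strong-vote weight on the Forum actually scales with a user's karma; the values below are illustrative.)

```typescript
// Illustrative vote weights; real weights scale with a user's karma.
const NORMAL = 1;
const STRONG = 8;

// Two ways a comment can show "3 votes, 10 karma":
const outsideSupport = NORMAL + NORMAL + STRONG; // author's default vote + a reader + a strong reader
const selfPromoted = STRONG + NORMAL + NORMAL;   // author strong-upvotes themselves + two readers

// The totals are identical, so a reader can't tell whether the strong
// vote came from a third party or from the author.
console.log(outsideSupport, selfPromoted); // 10 10
```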

Is it difficult to remove the possibility of strongly upvoting yourself?

Not particularly hard. My guess is half an hour of work or so, maybe another half hour to really make sure that there are no UI bugs.
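For illustration, a minimal sketch of what such a check might look like, assuming a vote handler that knows the voter's ID and the target document's author. The type and function names here are hypothetical, not the Forum's actual code:

```typescript
// Hypothetical types; not the Forum's actual schema.
interface VotableDocument {
  authorId: string;
}

type VoteType = "smallUpvote" | "bigUpvote" | "smallDownvote" | "bigDownvote";

// Reject strong votes cast by a document's own author. Regular-strength
// self-votes (the automatic default on new posts/comments) stay allowed.
function validateVote(voterId: string, doc: VotableDocument, voteType: VoteType): void {
  const isStrongVote = voteType === "bigUpvote" || voteType === "bigDownvote";
  if (isStrongVote && voterId === doc.authorId) {
    throw new Error("Strong votes on your own posts and comments are not allowed.");
  }
}
```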

Ah, OK, it may be worth doing then.

This hasn't been implemented yet. Was it forgotten about, or just not worth it?

Oh, I think the functionality is currently net-positive. I was just commenting on the technical difficulty of implementing it if the EA Forum thought it was worth the change.

On a related question: I just posted a question to the forum, and once the page refreshed on the question I had just asked, it already had one vote. Is this an auto-setting where my questions get automatically upvoted by me, or did someone really upvote it in the few (milli)seconds between my submitting it and the page reloading?

All of your posts start with a strong upvote from "you" automatically. Your comments start with a normal-strength upvote from "you" (as they do on Reddit). You can undo these votes the same way you'd undo any of your other votes.

I have recently been toying with a metaphor for vetting EA-relevant projects: that of a mountain-climbing expedition. I'm curious whether people would find it interesting to hear more about it, because then I might turn it into a post.

The goal is to find the highest mountains and climb them, and a project proposal consists of a plan + an expedition team. To evaluate a plan, we evaluate

  • the map (Do we think the team perceives the territory accurately? Do we agree that the territory looks promising for finding large mountains?), and
  • the route (Does the strategy look feasible?)

To evaluate a team, we evaluate

  • their navigational ability (Can they find & recognise mountains? Can they find & recognise crevasses, i.e. disvalue?)
  • their executive ability (Can they execute their plan well & adapt to surprising events? Can they go the distance?)

Curious to hear what people think. It's got a bit of overlap with Cotton-Barratt's Prospecting for Gold, but I think it might be sufficiently original.

IIRC, Charity Navigator had been planning to look into cost-effectiveness/impact for a while, so maybe this was an easy way for them to expand their work in that direction? Interesting to see that this was supported by the Gates Foundation.

More discussion in this EA Forum post.
