brb243

Bara on EA Hub

Sequences

GHD charity additionalities & alternatives
Wellbeing Growth

Comments

Moral Weights of Six Animals, Considering Viewpoint Uncertainty - Seeds of Science call for reviewers

Thank you. I encourage you to:

1) Encourage authors of EA-related articles to make their work publicly accessible

2) Post summaries of relevant articles on the EA Forum, to facilitate discussion without the need to register and to further ease the work of gardeners

Introducing spirit hazards

Ok, that can be a better interpretation: including the audience's capacity to commit harm in info hazard considerations.

It makes sense that information about the existence of potentially harmful info can also be shared with people who can hold decisionmakers accountable for using their knowledge positively.

Whether this will succeed can depend on the public's attitude toward the topic, which can in turn depend on the 'spirit' of those who share the info. Using your examples, if the info comes from a resource such as the EA Forum, where the norm is to focus on impact and prevent harm, then even members of the public who would normatively influence decisionmakers can have similarly safe preferences regarding the topic.

However, one can also imagine that the public will seek to present the info as usable for selfish gain or harm (since people may want to 'side' with a harmful entity out of fear, seek standing or attention on social media by posting about a threat, or aim to gain privilege for their group by harming others). Since the general public is not trained to think twice about the possible impacts of its actions, and since risk memes can spread faster than safety ones, publicly sharing the existence of risky topics, even in good faith, can normalize and expedite harmful advancement of these subjects.

Crowd wisdom can apply when solutions are not already developed and merely awaiting implementation by decisionmakers, and when the public has the skills to come up with these solutions. For example, if only a treaty needs to be signed and a budget spent on lab safety, then a few individuals can complete it. Likewise, people untrained in universal values research can have only a limited ability to contribute to it.

Cybersecurity is an example of a field that requires the cooperation of many experts who are not more likely to engage in risky use of the info. Bomb-recipe info, on the other hand, does not much help safety experts (who may specialize in legislation and regulations to prevent harm from explosives) and could motivate otherwise uninterested actors to research the topic further. In this respect, cybersecurity can be analogous to AI safety, and explosives info to biosecurity.

A spirit hazard can also make (empower or inspire) bad actors. The lower the cost of involvement (e.g. in terms of consequences and of financial and other resources), the riskier it can be to share the info (and it is not necessarily more likely that (potentially) bad actors already have it). So, risky info with a low cost of negative involvement should not be shared.

Risky info should be shared if i) the cost of involvement is high, ii) it is highly unlikely that the group would use it to increase the riskiness of norms, iii) it is likely that not sharing this security info with the group would lead decisionmakers to advance risk, and iv) the topic is not subject to the unilateralist's curse (e.g. if one person tries to make an explosive, many others would prevent them from doing so).

Moral Weights of Six Animals, Considering Viewpoint Uncertainty - Seeds of Science call for reviewers

What do you think of taking the log of the neuron count, dividing that by neural complexity, and adding the individual's total wellbeing impact to get the relative moral value (sketched as a formula after the list below)? Intuitively, this can make sense:

1) The more neurons, the more the individual can feel (but the intensity of perception can increase more slowly than the number of neurons).

2) The higher the neural complexity - which can correlate with one's ability to feel better about exteroceptive stimuli, because one has more experience (rational or emotional/intuitive, gained either from ancestors or from the individual's own life) to 'deal with' them - the less intensely the individual perceives.[1]

3) The impact of the individual on net wellbeing[2] should be added. I am suggesting this weighting.
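
To be explicit, a minimal formalization of this weighting (the symbols are my own notation, not from the article): with neuron count $N$, neural complexity $C$, and the individual's net wellbeing impact $W$, the relative moral value would be

$$M = \frac{\log N}{C} + W$$

where 1) and 2) correspond to the $\frac{\log N}{C}$ term and 3) to $W$.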

For humans, especially privileged ones, 3) could make the contribution from the individual's own wellbeing negligible in the total, because they can have much more influence on others. On the contrary, for individuals with fewer choices, including confined non-human animals,[3] the contribution of 3) can be neglected, because these animals do not influence others.

How does this compare with what you found (and in what ways is either finding more accurate)?

  1. ^

    This can correlate with 3), but since the terms are summed, there should be no double-counting.

  2. ^

    The devil can be in determining the counterfactual to use. For humans, this can be i) impact due to action, ii) impact due to inaction (to whatever extent that should be understood in a utilitarian way), iii) impact due to unfulfilled potential - e.g. someone did not study to be able to influence decisionmakers even though they could have, or iv) impact due to unfulfilled capacity - e.g. someone who studied and can influence decisionmakers chooses another job. For animals, this can be similar, except that an animal's free will is intuitively understood as lower. For example, if a chicken in a cramped barn chose to try killing others instead of upskilling them in teaching young chickens to prevent diseases, it can be attributed to the norms and environment set by the human caretakers rather than to the choice of the chicken.

  3. ^

    Assuming that they cannot influence the wellbeing of others, e.g. by presenting a positive attitude.

EA Common App Development Further Encouragement

That is one way to look at this: organizations look at different aspects to hire the best-fit candidate. Another way is that the constraint is that there is not really anyone sincerely interested in working for that specific organization in a particular capacity. This is what I am trying to address: by filling out an application, people should better define their interests, and these, alongside their skills/background, should be readily available to organizations (which may thus start looking to hire a specific skillset), as well as to funds that may be seeking people to advance projects and to people looking to contract others for some tasks or to find collaborators. So, it can be argued that this can help organizations find what they are looking for.

Yes, that makes sense. That would mean many organization-specific parts, but that can be done relatively easily, maybe by adding a few questions per organization, and people can choose which ones to fill out. Role-specific parts can be relatively more challenging, as the application would have to keep changing, but that is also possible.

Then, this person would be only marginally better off than if he filled out 3 applications and just copy-pasted the organization-specific part for CEA (and filling in name, e-mail, etc. takes almost no time). The improvement here is if he fills out the role-specific info for a recruiter role only once. Of course, a recruiter at CEA is different from a recruiter at OpenPhil, but if there are just one or a few common questions about a recruiter role, then he can get to a better-fit role because he cannot tailor answers based on role descriptions etc. - I actually wonder whether people would then be more sincere or just biased in a different way (e.g. trying to optimize for attention).

EA Common App Development Further Encouragement

1. Ok, getting sincere feedback on rejected offers may actually be an additional project.

2. Ok.

EA Common App Development Further Encouragement
  1. OK, there should be a minimal option (e.g. just uploading a CV)
  2. I could be interested in speaking with people; maybe I can trial this via a Calendly link for a test period (speaking can still be the most efficient)

eyes on the ball: after speaking with people, they have to fill out the form?

There should be the option to just link a CV. Afterwards, people could answer more questions or schedule a call.

Ok, so getting people to upload a CV may be key.

Oh, well, they have to upload something. They can always update or delete it and will not be penalized for any earlier uploads, as these are overwritten. Maybe asking about the priorities they think progress should be made on can provide similar information to asking what they want to make progress on, but make people less nervous.

EA Common App Development Further Encouragement

1. I think that feedback regarding rejected offers can be valuable and low marginal effort (e.g. adding a column). Some CV-writing support could be taken care of by career centers (which are sometimes available to alumni as well). EA community members could further assist with CV specifics if they are familiar with what different (competitive) positions look for, so that the candidate can highlight it. As an MVP, comments on linked docs can be used.

2. I mean, of the people who you spoke with and who had an idea for a personal project:

a) How many applied for EA-related funding to work on this project and how many did not?

b) What percentage tried to find someone with a similar idea in mind to work with them on the project?

I am asking to assess to what extent the constraints of people with personal project ideas could be addressed by encouragement to apply for funding and by being connected with someone else. If they applied and were rejected, then integrating funds can be of less value. If they looked for collaborators but could not find any, then increasing the number of skilled people should be prioritized over recommending connections.

3. Tested. I am realizing that writing can motivate engagement/action.

EA Common App Development Further Encouragement

1. I think that the 80k board can be best improved by a greater variety of opportunities - not only those related to EA-labeled organizations and governance in large economies, but also opportunities that:

  • develop win-win solutions useful to decisionmakers,
  • understand the fundamentals of wellbeing,
  • share already developed solutions with networks where top-down decisionmaking possibilities are limited,
  • motivate positive norms within institutions that can have large positive or negative impact (such as developing nations' governments),
  • possibly develop comparative advantage in positive-externality sectors (such as crop processing vs. industrial animal farming),
  • increase private-sector efficiencies in a way that benefits large numbers of individuals (e.g. agricultural machinery leasing to smallholders, traffic coordination in cities, medical supplies distribution that considers bottlenecks, etc.),
  • implement solutions or conduct research for local prices,
  • introduce impactful additions to existing programs (e.g., hypothetically, central micronutrient fortification of food aid),
  • offer shorter personal-project contracts,
  • understand intended beneficiaries' actual preferences, etc.

This increased variety of opportunities can be conditional on a Common App bringing value by increasing the efficiency of hiring for a specific set of opportunities. Some of these additional opportunities are on the EA Forum or in the minds of community members. Since 80k could appear informal if it included these opportunities, it may be best to list them in a spreadsheet and/or refer individuals to others with ideas/let others find collaborators or contractors.

Integration of EA funding opportunities, including less formal, more counterfactual funding (one would not donate to bednets, but one would give a stipend to a fellow group member to learn over the summer and produce a practice project), can be key. Risk should be considered with this approach; for example, funding should not be given to projects that relate to info hazards or that could make decisionmakers enthusiastic about risky topics. This should be specified and checked in a risk-averse way by some responsible people (who also have time), such as group organizers.

One way to build on existing 80k resources is the career planning resource, where people write answers and then, based on these answers, some career considerations are recommended. Just enabling people to (edit and) post their answers online can be valuable. The added value is that others can hire them or make recommendations based on their interests. I would still add more questions, because they can paint a more comprehensive picture of the candidate without the need to interview or interact with them or ask for a reference.

I think that getting project ideas, even from a well-written post by an engaged community member, onto the 80k board can be a challenge due to the scope of opportunities that are considered.

2. I mean before they start needing a job, not after they get one. For example, if someone is looking in March for a 3-month internship starting June 1, they should not be getting offers that start before June 1 or extend past September 1. Of course, if someone is hired, they (or anyone) should update that; otherwise others will be wasting time reviewing their application.

3. Yes, maybe there should be a balance between distracting busy professionals and enabling them to save time by hiring others. Ideally the community would pre-filter the applications. Bias in this process can be limited by asking people to make recommendations in a non-preferential way and to include their reasoning for recommending a particular opportunity. While there should be an option to periodically receive a list of applicants filtered by criteria the professional specifies, greater value can come from reviewing others' reasoning about why candidates can be a great fit for a role that one posts (and from providing feedback on that reasoning).
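
As a concrete sketch: if the applications lived in a shared Google Sheet, such a filtered list could be produced with a FILTER formula (the sheet name, column layout, and criterion here are hypothetical):

=FILTER(Applications!A:F, Applications!C:C = "Operations")

This would return columns A-F of every row whose column C (e.g. a role-type field) matches the professional's criterion; the periodic digest could then be built on top of that range.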

EA Common App Development Further Encouragement

Thank you for the useful tip on importrange.
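
For anyone following the thread, a minimal sketch of how I understand it would be used (the spreadsheet ID is a placeholder, and 'Form Responses 1' is just the default tab that a Google Form writes responses to):

=IMPORTRANGE("https://docs.google.com/spreadsheets/d/SPREADSHEET_ID", "Form Responses 1!A:Z")

The source spreadsheet needs to grant access the first time the formula runs.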

Yes, I mean to use maybe a Google Form. Ah hah, it makes sense that all fields can be optional (name, sure), and even having no way of contacting the candidate can be possible (maybe they just write in the form - hm, here is where digital people enter, haha).

Ok, what about some interview-like questions, such as

  • Describe a time you were resolving an important problem.
  • What are you currently working on improving and what should you be?
  • How do you go about prioritization at work?
  • Describe a time you received or gave feedback. How did you feel?
  • How would you summarize your unique skillset?
  • How did you become interested in applying for the employment that you are specifying?
  • What is your role in a team? What should it be?

Or, questions relevant to the specific candidate's preferences

  • What would an ideal employment look like for you?
  • Describe a collaborative working arrangement that you especially like or dislike.
  • What offers would you likely turn down?

Or, something that shows the applicants' interests more broadly, such as

  • What is an article that you recently read? What do you think about it?
  • What article did you change your mind about? How?
  • What course did you take but then realize is irrelevant to what you want to do?

Axiological, moral value, and risk attitude questions can add information on the candidate's fit, such as

  • Under scarce resources, how would you negotiate between scientific progress and research on the wellbeing of entities that do not contribute to progress?
  • When is the Repugnant or Sadistic Conclusion (Population axiology, Greaves, 2017) permissible? Find a situation.
  • In his "All animals are equal," Peter Singer argues that "Equal consideration for different beings may lead to different treatment and different rights." How can this go well, and how badly?
  • When would your friends describe you as risk-averse or risk-seeking? How would you feel about their description?

10. orgs multiselect: for non-EA orgs (recommended by 80k), it can be interesting to just copy the general-interest app fields and then (if it would not constitute a reputational risk for the applicant) paste the responses and see what happens. Founders Pledge orgs make sense - I had not thought of these.

Maybe I can go through some applications of EA-related orgs, Funds, 80k orgs, Founders Pledge ventures, opportunities relevant to Probably Good profiles, etc. to synthesize questions.

EA Common App Development Further Encouragement

Yes, there should be enough genuinely interesting opportunities (for developers), ranging from AI safety research and increasing NGO, impact-sector, and public-infrastructure efficiencies to developing products that apply safety principles, communicating with hardware manufacturers, informing AI strategy and policy, or upskilling in an unexplored area and pivoting. It should not be scary to apply; management by fear reduces thriving.

From the link/your writing, feedback from a candidate who rejected an offer can also be valuable. General support with CV writing can be valuable, as long as it highlights candidates' unique backgrounds and identities rather than standardizing the documents.

As an estimate, what percentage of people interested in something applied for funding, and what percentage tried to find someone interested in a similar project?

What if this recommendation were made not as part of a discussion but in writing - would the people you spoke with still be enthusiastic about the recommendations?
