I am actively recruiting effective altruists to participate in an online survey mapping their psychological profiles. The survey should take no more than 90 minutes to complete, and anyone who identifies as being in alignment with EA can participate. If you have the time, my team and I would greatly appreciate your participation! The survey pays $15, and the link can be found below.

Survey Link: https://albany.az1.qualtrics.com/jfe/form/SV_8v31IDPQNq4sKBU

The Research Team:

  1. Kyle Fiore Law (Project Leader; PhD Candidate in Social Psychology; University at Albany, SUNY): https://www.kyleflaw.com/
  2. Brendan O’Connor (Associate Professor of Psychology; University at Albany, SUNY)
  3. Abigail Marsh (Professor of Psychology and Interdisciplinary Neuroscience; Georgetown University)
  4. Liane Young (Professor of Psychology and Neuroscience; Boston College)
  5. Stylianos Syropoulos (Postdoctoral Researcher; Boston College)
  6. Paige Amormino (Graduate Student; Georgetown University)
  7. Gordon Kraft-Todd (Postdoctoral Researcher; Boston College)

Warmly,

Kyle :)


Thanks for sharing, Kyle!

May I ask how you estimated the time to complete the survey? I gave up because I made very little progress in 40 min, but I might be unusually slow.

I think $15 is a minor incentive for 90 min, but I tried to complete the survey anyway because this type of work seems useful. However, my guess is that you would get far more responses with a smaller survey, e.g. 15 min, even if it had no monetary incentive. I also wonder whether the responses would be more accurate, because 90 min is so much time that people may start to give inaccurate responses. Alternatively, assuming I am not unusually slow, people may give rushed responses in order to finish the survey within 90 min.

In general, I also felt that many questions were overly similar. I appreciate that some of this may be needed for internal-validity purposes, but I would say the trade-off, as the survey stands, is not great.

Sorry if my comments come across as harsh. Thanks for working towards contributing to a better world!

I just reposted your X/Twitter recruitment message, FWIW:

https://twitter.com/law_fiore/status/1706806416931987758 

Good luck! I might suggest doing a shorter follow-up survey in due course -- 90 minutes is a big time commitment for a $15 payment!

I participated in the survey and the $15 was sent to my email. I tried to transfer the funds to my PayPal account, but the transaction was cancelled. How can I claim my $15?

Hi Gyang, how long did it take for you to receive the payment?

Dear All,

I’m truly grateful for the valuable feedback and support received from the community. Below, I’ve provided some clarifications that I hope you’ll find useful.

To ensure an accurate estimate of survey length, we based the stated completion time on the median completion time from a pilot study with a general-population sample (N = 320). We're aware of the concerns about respondent fatigue, which is why the pilot was crucial: it allowed us to examine response quality and confirm the consistency and validity of our measures, in line with well-established findings in the relevant literature.

This survey marks our initial foray into this research area. As such, we opted for a comprehensive approach in selecting our measures. This approach enables us to explore numerous key variable relationships thoroughly, setting the stage for a more streamlined follow-up study.

Your participation, time and effort in helping us with this project are immensely appreciated. We are hopeful that the insights gained will significantly contribute to the development of more effective EA outreach strategies within the general population.

Warmly,

Kyle :)

Kyle - I just completed the survey yesterday. I did find it very long and grueling. I worry that you might get lower-quality data in the last half of the survey, due to participant fatigue and frustration.

My suggestion -- speaking as a psych professor who's run many surveys over the last three decades -- is to develop a shorter survey (no more than 25 minutes) that focuses on your key empirical questions, and try to get a good large sample for that. 

Thank you, Geoffrey! I really appreciate your time and candid feedback. I will take this into careful consideration going forward. 

I've just spent about an hour on the survey, at which point I noticed the progress bar was at about 1/6th. This was at the start of four timed questions that required 2 minutes each, with each one having a few follow-up questions.

At this point there had been 7 or 8 of these timed questions, as well as at least two dozen pages of multiple-choice questions and a few 'brain-teasers'. I do not see how it is possible to complete this first 1/6th in under 45 minutes, seeing that the timed questions alone already take up 16 minutes.

I sincerely apologize for the length of the survey.

Others have mentioned the length of the survey, but I think it would also be useful for long surveys to use a survey provider that has a progress bar.

ETA: I now realise there is a progress bar, but I didn't register it because it was advancing so slowly. I rescind my initial implication that others' comments had adequately conveyed the length of this survey...

Also, a 101-point Likert scale seems to be asking us for overmuch precision :P

Also I'm on a 7-point Likert scale page where I can only click on the first five options (about how much thought I like my tasks to involve).

I resolved it by shrinking the browser window so it switched to dropdown menus, but some people might not think to do that.

This is very good to know. Thank you for sharing these insights!

Hello, I completed the survey 3 days ago and was asked to drop my email to facilitate payment. I haven't heard any feedback from you since.

Hey Caleb, did you later receive the payment, and how long did it take?

Hello Caleb,

Yes I did; I reached out to Kyle and it was sorted within 24 hours.

I completed the survey hours ago, but have yet to see any compensation.

Hello, I participated in the survey and my study participation payment sent to PayPal was refunded back to Brendan O'Connor.

Kindly make my payment. Thanks.
