All of AlexanderSaeri's Comments + Replies

A Case for Better Feeds

I use Feedly to follow several RSS feeds, including everything from the EA Forum, LessWrong, etc. This lets me read more EA-adjacent/aligned content than I would if I had to visit each website individually (which I'd only do infrequently), because Feedly has an easy-to-use app on my phone.

Here is a screenshot of my Feedly sidebar in the browser. (I almost never use the browser version.)
Here is an example of the Feedly 'firehose' from my mobile phone, previewing several posts from the EA Forum and elsewhere.

 

I liken it to a 'fire hose' in that I get everything, including all the personal blogs and low-effort c... (read more)

3 · Nathan Young · 2mo: This is exactly my point. Imagine you could customise your RSS feed like you do your front page.
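To make the 'customisable firehose' idea concrete, here is a minimal sketch (not from the original comments) of merging a few RSS feeds into one keyword-filtered stream with Python's feedparser library. The feed URLs and keywords below are illustrative assumptions, not endorsements from the thread.

```python
# A minimal sketch of a "customisable firehose": merge several RSS feeds and
# keep only posts whose titles match a keyword filter.
# Feed URLs and keywords are illustrative assumptions, not from the comments above.
import feedparser  # pip install feedparser

FEEDS = [
    "https://forum.effectivealtruism.org/feed.xml",  # assumed EA Forum feed URL
    "https://www.lesswrong.com/feed.xml",            # assumed LessWrong feed URL
]
KEYWORDS = ["ai", "policy", "metascience"]  # hypothetical interests


def firehose(feed_urls, keywords):
    """Yield (feed title, post title, link) for posts matching any keyword."""
    for url in feed_urls:
        feed = feedparser.parse(url)
        source = feed.feed.get("title", url)
        for entry in feed.entries:
            title = entry.get("title", "")
            if any(k.lower() in title.lower() for k in keywords):
                yield source, title, entry.get("link", "")


if __name__ == "__main__":
    for source, title, link in firehose(FEEDS, KEYWORDS):
        print(f"[{source}] {title}\n    {link}")
```

Swapping the simple keyword filter for per-feed rules is roughly what "customising your RSS feed like your front page" would amount to in practice.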
10 Habits I recommend (2020)

Thanks for this list. Your EA group link for Focusmate just goes to the generic dashboard. Do you have an updated link you can share?

2 · Michelle_Hutchinson · 10mo: Sorry about that. Here's the link: https://www.focusmate.com/signup/EffectiveAltruism (I'll fix it in the article)
[Help please/Updated] Best EA use of $250,000AUD/$190,000 USD for metascience?

If you're comfortable sharing these resources on prioritisation and coordination, please also let me know about them.

3 · EdoArad · 10mo: We plan to post it publicly in a couple of months, but I'll send you privately what we have now :)
[Help please/Updated] Best EA use of $250,000AUD/$190,000 USD for metascience?

I'm a researcher based in Australia and have some experience working with open/meta science. Happy to talk this through with you if helpful, precommitting to not take any of your money.

Quick answers, most of which are not one-off donation targets but would instead require a fair amount of setup and maintenance.

  • $250,000 would be enough to support a program for disseminating open/meta science practices among Australian graduate students (within a broad discipline), if you had a trusted person to administer it.

  • you could have a prize for best ope

... (read more)
3 · gavintaylor · 10mo: I joined a few sessions at the AIMOS (Association for Interdisciplinary Meta-Research and Open Science) conference a few weeks ago. It was great, and I wrote up some notes about the talks I caught here: https://onscienceandacademia.org/t/association-for-interdisciplinary-meta-research-open-science-conference-december-3-4/960/5. That said, beyond hosting their annual conference, I'm not really sure what other plans AIMOS has. If it's of interest I can put the OP in touch with the incoming 2021 president (Jason Chin from USyd Law School) to talk further. Otherwise, many of the speakers were from Australia and you might find other ideas for local donation recipients on the AIMOS program: https://aimos.community/2020-program-schedule

Paul Glasziou from Bond Uni mentioned something in his plenary that stood out to me: inefficient ethical reviews can be a huge source of wasted research time and money (to the tune of $160 million per annum in Australia). If that's of interest, he may be able to suggest a way to spend the money to push for ethical review reforms in Australia.
CHAI Internship Application

Friendly suggestions: expand CHAI at its first mention in the post, for readers who are not as familiar with the acronym; and clarify the month and day (e.g., Nov 11) for readers outside the United States.

AMA: Markus Anderljung (PM at GovAI, FHI)

Thanks Markus.

I read the US public opinion on AI report with interest, and am thinking about replicating it in Australia. Do you think having local primary data is relevant for influence?

Do you think the marginal value lies in primary social science research or in aggregation and synthesis (eg rapid and limited systematic review) of existing research on public attitudes and support for general purpose / transformative technologies?

4 · MarkusAnderljung · 1y: Thanks Alexander. Would be interested to hear how that project proceeds.

I think having more data on public opinion on AI will be useful primarily for understanding the "strategic landscape". In scenarios where AI doesn't look radically different from other tech, it seems likely that the public will be a powerful actor in AI governance. The public was a powerful actor in the history of e.g. nuclear power, nuclear weapons, GMOs, and perhaps the industrial revolution not happening sooner (The Technology Trap makes this argument). Understanding the public's views is therefore important to understanding how AI governance will go. It also seems important to understand how one can shape or use public opinion for the better, though I'm pessimistic about that being a high-leverage opportunity.

Following on from the above, I think the answer is yes. I'd be particularly keen for this work to try to answer some counterfactual history questions: What would need to have been different for GMO/nuclear to have been more accepted? Was it possible to see the public's resistance in advance?
Parenting: Things I wish I could tell my past self

I really appreciate this, Michelle. I'm glad to see this kind of piece on the EA forum.

Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics

If you haven't already, please upload a version to the Open Science Framework as a preprint: https://osf.io/preprints

3 · Davidmanheim · 2y: I have uploaded it to preprints.org, linked above, pending the final layout and publication. (With an open source license in both cases.)
A Semester-Long Course In EA

Thanks for posting this, Nick. I'm interested in how you plan to run this course. Are you the course coordinator? Is there an academic advisor? Who are the intended guest lecturers and how would they work? Who are the intended students?

1 · nickwhitaker · 2y: Hi Alexander,

We have a program called GISP that allows students to run their own course, essentially a group independent study. It should be able to count for an elective philosophy credit too. There is an academic advisor, a professor in our medical school who has been involved in EA. We've been having a lecture series this semester of EA people in the Boston area. We have a lot of students on our mailing list, whom we've met while tabling, who are either vaguely familiar with EA or have expressed interest in learning more about it.
Formalizing the cause prioritization framework

Michael, thanks for this post. I have been following the discussion about INT and prioritisation frameworks with interest.

Exactly how should I apply the revised framework you suggest? There are a number of equations, discussions of definitions and circularities in this post, but a (hypothetical?) worked example would be very useful.

1 · Michael_Wiebe · 2y: Yes, the difficult part is applying the ITC framework in practice; I don't have any special insight there. But the goal is to estimate importance and the tractability function for different causes. You can see how 80k tries to rank causes here: https://80000hours.org/articles/cause-selection/
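Since the comment asks for a worked example, here is a minimal sketch with made-up numbers. It follows the standard 80,000 Hours importance × tractability × neglectedness decomposition rather than the specific revised framework in Michael's post, and every figure below is hypothetical.

```python
# Illustrative (made-up) numbers for the importance / tractability / neglectedness
# decomposition that 80,000 Hours uses to compare causes. The point is only to show
# how the three factors multiply into "good done per extra dollar", not to rank
# any real cause.

importance = 1_000_000          # units of good per 1% of the problem solved
tractability = 0.1              # % of the problem solved per 1% increase in resources
current_resources = 50_000_000  # dollars currently spent on the problem per year

# Neglectedness: % increase in total resources that one extra dollar buys.
neglectedness = 100 / current_resources  # = 2e-6 % per dollar

marginal_cost_effectiveness = importance * tractability * neglectedness
print(f"Good per extra dollar: {marginal_cost_effectiveness:.3f}")
# 1,000,000 * 0.1 * 2e-6 = 0.2 units of good per extra dollar.
# Crowded causes (large current_resources) or intractable ones (small tractability)
# drive this number down, which is the intuition the ITC framework formalises.
```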
[Link] Experience Doesn’t Predict a New Hire’s Success (HBR)

Very valuable piece, and likely worth a separate write-up.

Candy for Nets

Jeff, this is really lovely and I appreciate you thinking out loud through your reasoning. I'd be interested to hear what you think will be hard for them as they grow up with "parents with strong unusual views", and whether you think this would be qualitatively different from other unusual views (e.g. strongly religious, military family, etc.).

One way we try to make it easier is by making it clear that the children can make personal choices about things like donation, diet, and eventually career. E.g. we have the full range from vegan to meat-eaters in our house, and when Lily decided she wanted to be vegetarian for a while we said "It's your choice."

I can imagine having conflict later about her wanting to use the money we donate differently (for spending on "extras" or for donating to something we don't think is effective). But I don't expect it to be worse than the conflict parents and children typically have about money.

Are you working on a research agenda? A guide to increasing the impact of your research by involving decision-makers

Thanks for this excellent piece, Karolina. In my work (a research enterprise working with government and large organisations), we are constantly trying to get clarity on the implicit theory of change that underpins an organisation or its individual projects. In my experience, the association of ToC with large international development projects has meant that some organisations see them as too mainstream/stodgy/not relevant to their exciting new initiative. But for-profit businesses live or die on their ToC (aka their business model), regardless of whether they are large or small.