All of QubitSwarm99's Comments + Replies

The dormant periods occurred between applying and getting referred for the position, and between getting referred and receiving an email for an interview. These periods were unexpectedly long, and I wish there had been more communication, or at least some statement of how long I should expect to wait. However, once I had the interview, I only had to wait a week (if I am remembering correctly) to learn whether I would be given a test task. After completing the test task, it was around another week before I learned I had performed competently enough to be hired.

I should have chosen a clearer phrase than "not through formal channels". What I meant was that much of my forecasting work and experience came about through my participation on Metaculus, which is "outside" of academia; this participation did not manifest as forecasting publications or assistantships (as it would through a Masters or PhD program), but rather as my track record (linked in my CV to my Metaculus profile) and my GitHub repositories. There was also a forecasting tournament I won, which I also linked in the CV.

I agree with this.

"Number of publications" and "Impact per publication" are separate axes, and leaving the latter out produces a poorer landscape of X-risk research. 

6
Will Aldred
8mo
Yes, especially given that impact of x-risk research is (very) heavy-tailed.
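(To make "heavy-tailed" concrete, here is a toy simulation. The lognormal distribution and its parameters are illustrative assumptions, not estimates from real x-risk research data; the point is only to show how much of the total impact the top 1% of publications can account for under a heavy-tailed model.)

```python
import numpy as np

# Toy model of heavy-tailed "impact per publication".
# The lognormal parameters below are illustrative assumptions only.
rng = np.random.default_rng(seed=0)
impacts = rng.lognormal(mean=0.0, sigma=2.5, size=100_000)

impacts_sorted = np.sort(impacts)[::-1]  # largest first
top_1pct_share = impacts_sorted[:1_000].sum() / impacts.sum()

print(f"Top 1% of publications account for {top_1pct_share:.0%} of total impact")
# With sigma = 2.5 this is typically over half, which is why counting
# publications without weighting by impact gives a misleading picture.
```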

Glad to hear that the links were useful!

Keeping to Holden's timeline sounds good, and I agree that AGI > HLMI in terms of recognizability. I hope the quiz goes well once it is officially released!

I am not the best person to ask this question (@so8res, @katja_grace, or @holdenkarnofsky would be better suited), but I will try to offer some points.

... (read more)
2
AndreFerretti
2y
Thanks for the links, Rodeo. I appreciate your effort to answer my questions. :) I can add the number of concerned AI researchers in an answer explanation - thanks for that!

I have a limited number of questions I can fit into the quiz, so I would have to sacrifice other questions to include the one on HLMI vs. transformative AI. Also, it seems that Holden's transformative AI timeline is the same as the 2022 expert survey on HLMI (2060), so I think one timeline question should do the trick.

I'm considering just writing "Artificial General Intelligence," which is similar to HLMI, because it's the most easily recognizable term for a large audience.

I completed the three quizzes and enjoyed them thoroughly.

Without any further improvements, I think these quizzes would still be quite effective. It would be nice to have a completion counter (e.g., "X/Total questions completed") at the bottom of the quizzes, but I don't know if this is possible on quizmanity.

2
AndreFerretti
2y
Hey Rodeo, glad you enjoyed the three quizzes! Thank you for your feedback. I'll pass it to Guided Track, where I host the program. For now, there's a completion bar at the top, but it's a bit thin and doesn't have numbers.

I saw that you work in AI Safety, so maybe you can help me clear up two doubts:

  • Do AI expert surveys still predict a 50% chance of transformative AI by 2060? (A "transformative AI" would automate all activities needed to speed up scientific and technological progress.)
  • Is it right to phrase the question above as "transformative AI"? Or should I call it AGI and give it a different definition? I took the "transformative AI" and the 2060 timeline from Holden Karnofsky.

Got through about 25% of the essay and I can confirm it's pretty good so far. 

Strong upvote for introducing me to the authors and the site. Thank you for posting. 

2
joshcmorrison
2y
Thanks for the kind comment!

Every time I think about how I can do the most good, I am burdened by questions roughly like

  • How should value be measured? 
  • How should well-being be measured? 
  • How might my actions engender unintended, harmful outcomes? 
  • How can my impact be measured? 

I do not have good answers to these questions, but I would bet on some actions being positively impactful on net.

For example

  • Promoting vegetarianism or veganism
  • Providing medicine and resources to those in poverty
  • Building robust political institutions in developing countries
  • Promoting policy
... (read more)

Thoughts and Notes: October 5th 0012022 (1) 

As per my last shortform, over the next couple of weeks I will be moving my brief profiles for different catastrophes from my draft existential risk frameworks post into shortform posts, to make the existential risk frameworks post lighter and simpler.

In my last shortform, I included the profile for the use of nuclear weapons and today I will include the profile for climate change. 

Climate change 

  • Risk (sections from the well-written Wikipedia page on Climate Change): "Contemporary climate
... (read more)

Does anyone have a good list of books related to existential and global catastrophic risk? This doesn't have to just include books on X-risk / GCRs in general, but can also include books on individual catastrophic events, such as nuclear war. 

Here is my current resource landscape (these are books that I have personally looked at and can vouch for; the entries came to my mind as I wrote them - I do not have a list of GCR / X-risk books at the moment; I have not read some of them in full): 

General:

... (read more)

Thoughts and Notes: October 3rd 0012022 (1)

I have been working on a post which introduces a framework for existential risks that I have not seen covered on either LW or the EAF, but I think I've impeded my progress by setting out to do more than I originally intended.

Rather than simply introduce the framework and compare it to Bostrom's 2013 framework and the Wikipedia page on GCRs, I've tried to aggregate all global and existential catastrophes I could find under the "new" framework.

Creating an anthology of global and existential cat... (read more)

While I have much less experience in this domain (EA outreach) than you, I too fall on the side of the debate that holds the amount spent is justified, or at least not negative in value. Even if those who've learned about EA or contributed to it in some way don't identify with EA completely, it seems that in the majority of instances some collective benefit was had: either from these people's skepticism, feedback, and input on the EA movement and on doing good, or from the learning and resources they tapped into and benefited from by being exposed to EA.

In my experience as someone belonging to the WEIRD demographic, males in heterosexual relationships provide less domestic or child support, on average, than their spouses, where by "less" I mean both lower frequency and lower quality in terms of attention and emotional support provided. Males seem entirely capable of learning such skills, but there does seem to be some discrepancy in the amount of support actually provided. I would be convinced otherwise were someone to show me a meta-analysis or two of parental care behaviors in heterosexual relationships tha... (read more)

9
Jeff Kaufman
2y
For men reading this and thinking "I want to be an equal partner in raising kids, but I know a lot of men who intellectually want this don't end up doing their share; what should I do", you might be interested in my Equal Parenting Advice for Dads

Agreed. I also think putting these in a shortform, requesting individual feedback in DMs, or posting these in a slightly less formal place such as the EA Gather town or a Discord server might be better alternatives to having this on the forum in such a bare-bones state. Also, simply posting each article in full on the forum and receiving feedback in the comments could work as well. I just think [2 sentence context + links] doesn't make for a great post.

Thank you very much for this offer - I have many questions, but I don't want to eat away too much of your time, so feel free to answer as few or as many of my questions as you choose. One criterion might be only answering those questions that you believe I would have the longest or most difficult time answering myself. I believe that many of my questions could have fairly complex answers; as such, it might be more efficient to simply point me to the appropriate resources via links. While this may increase your workload, I think that if you crosspost... (read more)

My two cents on a couple of these from the perspective of a father of two girls (4 years old and 2 years old, I'm 30). Just my perspective, feel free to disregard if not helpful!

On emotions and discipline, I'm also a very calm person and rarely show anger or frustration. But kids are really good at finding ways to frustrate you. I almost never yell at them, but I do get frustrated or exasperated and raise my voice, and it's genuinely unclear to me how anyone could parent a child without doing that.

In general I think the military wisdom "no plan survives fi... (read more)

4
Jeff Kaufman
2y
Would you be up for saying more about why you don't want to celebrate the conventional holidays? Your kids are likely going to want to celebrate the things their friends and extended family are celebrating, and unless you have a strong reason not to, might as well make them happy? For example, despite being atheists we celebrate Christmas, Easter, Hanukkah, and Passover. Not in an especially religious way; just things like dyeing eggs and looking for them are fun.

Without thinking too deeply, I believe that this framing of AI risk, i.e. one along the lines of "AI developers are gambling with the fate of humanity for the sake of profit, and we need to stop them / ensure that their efforts don't have catastrophic effects", could serve as a conversational cushion for those who are unfamiliar with the general state of AI progress and with the existential risk posed by poorly aligned AI.

Those unfamiliar with AI might disregard the extent of risk from AI if approached in conversation with remarks about how not only is it non-tr... (read more)

I've been keeping tabs on this since mid-August when the following Metaculus question was created:

The community and I (97%, given NASA's track record of success) seem in agreement that it is unlikely DART fails to make an impact. Here are some useful Wikipedia links that aided me with the prediction: (Asteroid impact avoidance, Asteroid impact prediction, Near-Earth object (NEO), Potentially hazardous object).

There are roughly 3 hours remaining until impact (https://dart.jhuapl.edu/); it seems unlikely that something goes awry, an... (read more)

While it is my belief that there is some wider context missing from certain aspects of this post (e.g., what sorts of AI progress are we talking about - perhaps strong AI or transformative AI? This makes a difference), the analogy does a fair job of illustrating that the intention to use advanced AI to engender progress (beneficial outcomes for humanity) might have unintended and antithetical effects instead. This seems to actually encapsulate the core idea of AI safety / alignment, roughly that a system which is capable of engendering vast amounts of... (read more)

1
423175
2y
I don't think it's just about "methods". There was nothing wrong with the cotton gin's "methods", or with its end goal. The problem was that other actors adapted their behavior to its presence, and, in fact, it's not hard to see that this was very likely in retrospect (not that they could have predicted it, but if we "rerolled" the universe, it would probably happen again).

I agree with the overall premise of this post that, generally speaking, the quality of engagement on the forum, through posts or comments, has decreased, though I am not convinced (yet) that some of the points made by the author as evidence for this are completely accurate. What follows are some comments on what a reduced average post quality of the forum could mean, along with a closing comment or two.

If it is true that certain aspects of EAF posts have gotten worse over time,  it's worth examining exactly which aspects of comments and posts have deg... (read more)

I took the Giving What We Can Pledge about 2 years ago, so I don't have much data to work off of. In sum, I think the first year around I should have hedged my donations a bit more - I donated exclusively to Global Health and Development and feel that this was somewhat rash. I plan to diversify more this year and come up with better rationales for why X% of my donations goes to a particular cause or organization. Also, thank you for asking this question, as it might lead to somewhat more care being taken in donating.

Below I consider changes for this Wiki page. 

The sentence

"Existential risks include natural risks such as those posed by asteroids or supervolcanoes as well as anthropogenic risks like mishaps resulting from synthetic biology or artificial intelligence." 

is insufficient, in my view, for capturing the existential risks humanity faces. I believe that having the list of existential risks covered in Bruce E. Tonn's Anticipation, Sustainability, Futures and Human Extinction on the EAF Existential Risk Wiki would be substantially more helpful to EAF read... (read more)

Given that you have just published this on the forum, I have not yet finished watching the video, but it is playing in the background at 1.5x speed.

Your project is valuable to me since I am not up-to-date with my knowledge of the state of interpretability research and suspect that your project and manner of explanations will help slightly in this regard. Beyond the value, interpretability is simply interesting. I would very likely watch more video explanations of this nature on topics in AI Safety, interpretability, alignment, etc... which leads me t... (read more)

2
Sean Osier
2y
Thanks for the comment and for watching! I don't currently have any future videos planned, but I'd definitely consider it if there's interest. I'm also a fan of learning via videos, and you're right that there aren't that many in the AI Safety space. (Robert Miles is the only AI Safety YouTuber I'm aware of. Absolutely worth checking out if you're interested in this kind of stuff.)

Hello, I recently received my undergraduate degrees in Mathematics and Neuroscience, and have lately been doing work at the intersection of forecasting and deep learning. I am very interested in safeguarding humanity from X-risks and GCRs, especially GCBRs, and have been working with Rumtin Sepasspour of CSER to expand and automate some aspects of gcrpolicy.com. Biosecurity and bioethics are topics I would like to learn more about, both in relation to extreme risks but also to better understand how some emerging technologies (e.g., genome editing) in biology might affect humanity in the near and long term. 

3
Elika
2y
Welcome!! If I can be of help in any way, let me know! There's a lot on bioethics and biosecurity - mainly in the form of dual-use research of concern (DURC) and GoF policy (at least in the US) and what research should be allowable! Happy to share reading suggestions if wanted.

Thank you for posting this; with respect to the first footnote, I think that even if the post is missing some parts or is slightly miscalibrated, having it on the forum might nonetheless help raise the forum's epistemic standards. 

Some considerations: 

  • I find Epistemic Status notes more useful when the author includes the extent to which they researched or thought about something, which you mentioned under The effort you put into this post and into making its claims very precise.  It might also be useful to include statements concerning rough
... (read more)

This is great. I think coordinated sharing of Anki / other flashcards should be a norm / done more frequently. 

Any chance you can share the sources for these notes, if the sources are easily accessible to you? I'd be interested in examining them, but it's okay if you don't have them, given that most people usually do not have all the sources to their cards immediately accessible.

Just saw the sources in Anki! Thank you. 

2
AndreFerretti
2y
I agree with sharing more flashcards! Let me know your feedback on the Anki cards :)

Stories of this nature are sobering to hear; thank you for posting this - each post like this gets people in the community mentally closer to seeing the base rate of success in the EA community for what it is.

Your writing is enjoyable to read as well - I would read more of it. 

Controlling for overconfidence, I'm sorry that your expectations weren't met with the last EA job you applied for. My brain doesn't usually like to accept such things.

The expected value of letting go and building a mental foundation that is simple, peacef... (read more)

2
Mart_Korz
2y
I agree. And now I wonder whether someone already did write more about this? And if not, maybe this could be a great project? I found the 'personal EA stories' in Doing Good Better (Greg Lewis) and Strangers Drowning (well, many of these are not quite about EA, but there are many similarities) very helpful for clarifying what my expectations should or could be. A book where, say, each chapter follows the EA path of one person with their personal successes, struggles, uncertainties and failures could span the different experiences that people can have with EA. Similarly to how many people found semicycle's story valuable, I could imagine that such a book could be very helpful for actually internalizing that EA is very much a community project where doing the right thing often means that individuals will fail at many of their efforts. If this book already exists, I would be very happy to know about it :)
6
semicycle
2y
I am humbled by your encouragement, my cowboy/cowgirl friend. That means a lot coming from a microorganism, considering the challenges you overcame in learning to operate a computer and to survive this land of giants. It's a battle to convince myself of that sometimes, but it's a battle worth fighting, and as you say, "1 step at a time."

Any updates on this? I'm interested to see your thoughts on all these good responses.

Thank you for sharing this. For some reason, a lot of WHO's reports usually escape my radar.

Thank you for posting this.

I want to direct more attention to Decreasing Anxiety. If these observations and pieces of advice were weighted, I would expect reducing one's anxiety to be near or at the top.  

Many environments and activities within the EA-sphere (e.g., research or grant-making) are quite stressful, and operating continually in these environments can lead to burnout or the consequences of anxiety. 

Here is a simple reminder of the activities that are fundamental for flourishing (and for reducing anxiety) as a human:

  • Exercising each day
... (read more)

I admit, some of these apply to me as well. I would be interested in reading further on the phenomenon, which I can't seem to find a term for, of "ugly intentions (such as philanthropy purely for status) that produce a variety of good outcomes for self and others, where the actor knows that this variety of good outcomes for others is being produced but is in it for other reasons".

Your post reminds me of some passages from the chapter on charity in the book The Elephant in the Brain (rereading it now to illustrate some points), and could probably be grouped... (read more)

Thank you, Parmest, for writing this post. Shared reflections and experiences such as this one appear somewhat infrequently on the EAF, and I appreciate your perspective.

Some things came to mind when reading this.

A post that you may find enjoyable and insightful is Keeping Absolutes in Mind. Here, Michelle Hutchinson writes about altruistic baselines: 

In cases like those above, it might help to think more about the absolute benefit our actions produce. That might mean simply trying to make the value more salient by thinking about it. Th

... (read more)
1
Parmest Roy
2y
Thank you for sharing your perspective. It was certainly helpful.

Entering "longtermism" into Google Translate produces Langfristigkeit, which has already been stated below. 

To add weight to this definition, my native German-speaking grandmother believes that "Langfristigkeit" is probably the best or near-best translation for longtermism, after thinking about it for around 10 minutes and reading the other responses, although she is not terribly familiar with the idea of longtermism.

For additional context, the following means "long-term future" in German:

  • langzeitige Zukunft

One problem is properly get... (read more)

Thank you for contributing this. I enjoyed reading it and thought that it made some people’s tendency in EA (which I might be imagining) to "look at other cause-areas with Global Health goggles" more explicit.

Here are some notes I’ve taken to try to put everything you’ve said together. Please update me if what I’ve written here omits certain things, or presents things inadequately. I’ve also included additional remarks to some of these things.

  • Central Idea: [EA’s claim that some pathways to good are much better than others] is not obvious, but widely believ
... (read more)
6
Owen Cotton-Barratt
2y
I think the main thing this seems to be missing is that I'm not saying global health has an efficient altruistic market -- I'm saying that if anything does you should expect to see it there. But actually we don't even see it there ... reasonable-looking health interventions vary by ~four orders of magnitude in cost-effectiveness, and the most cost-effective are not fully funded.

Thank you for doing this. 

Even though aggregating the media that forum members learn from and interact with seems obviously useful, I am surprised this hasn't been done more frequently (I have not seen a form of this nature before, but I only have a fractional sample of what's out there).

I am very interested to see what you find (partially to find some new content to absorb) and hope that many people fill out this form. 

Thank you for sharing this experience. It upweights the idea of me moving to another state, partially on the basis of grant relocation programs.

I remember seeing, in the past, that Vermont would pay remote workers 10k USD to relocate (here). I can't find much on this now, but did find that Vermont has a New Relocating Worker Grant (here).

QUALIFYING RELOCATION EXPENSES

Upon successful relocation to Vermont and review of your application, the following qualifying relocation expenses may be reimbursed:  

  • Closing costs for a primary residence or lease d
... (read more)
3
NicoleJaneway
2y
Thanks for your thoughts, rodeo.   Anecdotally, VT discontinued their original program because of unpopularity with state residents.  It was funded with taxpayer money, therefore locals were essentially defraying the moving costs for out-of-state folks. I've updated the article to reflect the fact that Tulsa Remote is largely privately funded, with some funding from OK for certain individuals (i.e., tech workers). The other programs I know of are WV and northwest AR.   I'm tempted to maybe try one of these programs in the future (if they're not turned off by the fact I've already done Tulsa Remote).  Both states are beautiful, but offer less in the way of career development and urban amenities compared to Tulsa.   

This post has a fair number of downvotes but is also generating, in my mind, a valuable discussion on karma, which heavily guides how content on EAF is disseminated. 

I think it would be good if more people who've downvoted shared their contentions (it may well be the case that those who've already commented have covered them).

1
Guy Raveh
2y
Can you know how many downvotes it has, beyond the boolean "does it have more votes than karma"?

Location: New Jersey, USA

Remote: Yes

Willing to relocate: Likely Yes

Skills:

  - Machine Learning (Python, TensorFlow, Sklearn): Familiar with creating custom NNs in Keras, properly using packaged ML algorithms, and (mostly) knowing what to use and when. I haven’t reproduced an ML paper in full, but probably could after a decent amount of time. I am in the process of submitting a paper on ensemble learning for splice site prediction to IEEE Access (late submission).

  - Python, R, HTML, CSS: I am competent in Python (5 years experience), and am familiar
... (read more)

Thank you for commenting this! I have not previously heard of Unjournal, but believe it's very likely that I will try to use this for feedback (decided after taking a ~5 min look at the link). 

Thank you for writing this!

Here are some of my notes / ideas I wrote while reading.

This “celebrating failures” notion is a celebration of both the audacity to try and the humility to learn and change course. It’s a great ideal. I wholeheartedly support it

Something I remembered when reading this was the idea, which most people here might have been exposed to at one point or another but might have forgotten, that “Adding is favored over subtracting in problem solving” (https://www.nature.com/articles/d41586-021-00592-0).

I believe making it easier for people,... (read more)

6
Luke Freeman
2y
Thanks for writing this! I strongly upvoted this comment because I think it contributes a lot to and extends the OP on many different points.

What do you think would occur if you added the 1st or 2nd most upvoted recent comments to the GPT-3 prompt, following the question?

I think it might make a difference on some questions with high forecaster volume, but might detract from accuracy on questions with lower forecaster volume.

3
MathiasKB
2y
If the comments include a prediction my guess is that GPT would often make the same prediction and thus become much more accurate. Not because it learned to predict things but because there's probably a strong correlation between the community prediction and the most upvoted comments prediction. If the goal is to give GPT more context than just the title of the question, then you could include the descriptions for each question as well, but when I tried this I got worse results (fewer legible predictions).
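As a minimal sketch of what "giving GPT more context" might look like in code: the function name, template wording, and fields below are my own assumptions for illustration, not the format used in this experiment, and the actual Metaculus/OpenAI API calls are omitted.

```python
from typing import Optional

def build_forecast_prompt(question_title: str,
                          description: Optional[str] = None,
                          top_comment: Optional[str] = None) -> str:
    """Assemble a plain-text prompt for a language-model forecaster.

    All field names and template wording are illustrative assumptions.
    """
    parts = [f"Question: {question_title}"]
    if description:
        parts.append(f"Background: {description}")
    if top_comment:
        # Caveat from the discussion above: upvoted comments often restate
        # the community prediction, so including them can make the model
        # look more accurate than it really is.
        parts.append(f"Top comment: {top_comment}")
    parts.append("Give the probability that this question resolves Yes, "
                 "as a number between 0 and 1.")
    return "\n\n".join(parts)

print(build_forecast_prompt(
    "Will the DART mission impact its target asteroid?",
    top_comment="Given NASA's track record, ~97% seems right.",
))
```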

I believe my father qualifies as "conservative" (I don't have a clear definition for "conservative", and age is a confounding factor in this case, but the facts that he was a Trump voter in 2020, generally opposes immigration, and loves meat mark him as conservative), and I have discussed EA ideals and concepts with him at length over the span of several years.

He supports altruism, and in general believes that altruistic practices (he mainly discusses global health and development) could be more effective. On this note, he believes EA is "good". ... (read more)

3
Harrison Durland
2y
I have not directly asked my parents for their views on EA, but I've mentioned it before, and I've gotten the sense that they would probably also be supportive of trad-EA work like in health and development, but I suspect that they are not particularly sympathetic to the focus on x-risks—especially actual extinction from things like AI—given their religious views, which is one of the main reasons I don't tend to bring it up that much in the first place.

Thank you for writing this post and for including those examples. 

To address the first part of your "Meta" comment at the bottom of the post: I think that, were I to do this exercise with my peers, it would not cost much time or energy, but could potentially generate ideas for desirable states of humanity's future that might result in some of my or my peers' attention temporarily being reallocated to a different cause. This reallocation might take the form of some additional querying on the Internet that might not have otherwise occu... (read more)

Not sure if you are in fact seeing this, but presently I see 3 posts with a similar title. The two previous ones had "105" in the title. Just making sure you know this. Also, thank you for posting this. Quick survey results are usually nice to see. 

1
Max Clarke
2y
I didn't know that, thanks - I've removed those. I clicked the submit button and saw nothing happen.

Red-team - "Examine the magnitude of impact that can be made from contributing to EA-relevant Wikipedia pages. Might it be a good idea for many more EA members to be making edits on Wikipedia?" 

Rough answer - "I suspect it's the case that only a small proportion of total EA members make edits to the EAF-wiki. Perhaps the intersection of EA members and people who edit Wikipedia is about the same size, but my intuition is that this group is even smaller than the EAF wiki editors. Given that Wikipedia seems to receive a decent amount of Inter... (read more)

3
mic
2y
I helped start WikiProject Effective Altruism a few months ago, but I think that the items on our WikiProject's to-do list are not as valuable as, say, organizing a local EA group at a top university, or writing a useful post on the EA Forum. One tricky thing about Wikipedia is that you have to be objective, so while someone might read an article on effective altruism and be like "wow, this is a cool idea", you can't engage them further. I also think that the articles are already pretty decent.

I was thinking more about price in terms of carbon cost, but this should follow from the USD calculation, assuming that the cost is roughly proportional to some quantity of CO2 released. My prior knowledge of wattage was lacking, so I had guessed that 100k lumens for ~8-12 hours per day would consume more electricity than it actually does.

I haven't read the paper, but this sounds interesting. There was a time when I purchased this 10,000 lumen lamp to alleviate my poor mood during my sophomore year of college. The power in my room blew out once I purchased a second lamp, and I was left without electricity in my dorm for 2 weeks. Imagine treating everyone with SAD - 100,000 lumens each. I wonder how much electricity this would be collectively. Each person would need around this amount of light for some duration of some interval of the year. What would be the net positive impact of this inter... (read more)

How did the electricity blow out once you had two 100 W lamps? That's 200 W, whereas a toaster or hair dryer commonly uses 1200-1500 W.

Also, energy has a price, and we can just measure it. At 20c/kWh, which is above the average US retail electricity cost and probably enough to pay for carbon offsets too, 200 W for 1 h/day is 0.2 kWh/day, which costs 4 cents/day - cheaper than most antidepressants by an order of magnitude.
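For reference, this arithmetic generalizes easily. Here is a back-of-the-envelope sketch using the same assumed figures (200 W of lamps, $0.20/kWh), including the 8-12 h/day usage guessed at earlier in the thread:

```python
# Back-of-the-envelope light-therapy electricity cost, using the
# figures assumed in this thread (200 W of lamps, $0.20/kWh).
WATTS = 200            # two 100 W lamps
PRICE_PER_KWH = 0.20   # USD; above the US retail average

def daily_cost_usd(hours_per_day: float) -> float:
    """Cost in USD of running the lamps for a given number of hours per day."""
    return WATTS / 1000 * hours_per_day * PRICE_PER_KWH

print(f"1 h/day:  ${daily_cost_usd(1):.2f}")   # $0.04 -- the 4 cents quoted above
print(f"12 h/day: ${daily_cost_usd(12):.2f}")  # $0.48 -- even heavy use stays cheap
```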

As of September 30th, 2021, 80000 Hours lists ageing under Other longtermist issues, which means that, at the moment, it is not one of their Highest priority areas. 

Despite this, I am interested in learning more about research on longevity and ageing. The sequence Gears of Ageing, Laura Deming's Longevity FAQ, and the paper Hallmarks of Aging, are all on my reading list. 

Relatedly, my friends have sometimes inquired how long I would like to live, if I could hypothetically live invincibly for however long I wanted, and I have routinely defaulted t... (read more)

I think that the poor outcomes you listed - causing reputational damage, spreading a wrong version of EA, over-focusing or under-focusing on certain cause areas, giving bad career advice, etc... - are on the mark, but might not entirely stem from EA community builders not taking the time to understand EA principles. 

For example, I can imagine a scenario where an EA community builder is not apt at presenting cause-areas, but understands the cause-area landscape very well. Perhaps as a result of their poor communication skills (and maybe also from a lac... (read more)

I posted this on LW as well, but I have been reading content from the EA Forum + LW sporadically for the last several years; only recently did I find myself visiting here several times per day, and I have made an account given my heightened presence.

I am graduating with a B.A. in Neuroscience and Mathematics this January. My current desire is to find remote work (this is important to me) that involves one or more of: [machine learning, mathematics, statistics, global priorities research]. 

In the spirit of the post The topic is not the content, I... (read more)

4
Aaron Gertler
3y
Nice to see that you've made the transition from lurker to poster :-) I hope the paper gets published, and that EA Global goes as planned.