All of JanB's Comments + Replies

FWIW, I am excited about Future Matters. I have experienced them as having great perspectives on how to effect change via policy and how to make movements successful and effective. I think they have a sufficiently different lens and expertise from many EA orgs that I'm really happy to have them working on these causes. I've also repeatedly donated to them over the years (they are one of my main donation targets).

Scott said in his email that OpenPhil is only taking donations >$250,000. Is this still true?

That makes sense, thanks. Although this will not apply to organisations/individuals that were promised funds from the Future Fund but didn't receive any, right? This case is pretty common, AFAICT.

2
Jason
1y
If an organization received nothing, there is nothing to claw back. I see no risk to donating to those organizations (unless you feel an organization somehow so overextended themselves in expectation of FTX cash that they cannot recover from it disappearing -- that is not a clawback risk though).
Answer by JanB
Dec 05, 2022

Scott has sent me the following email (reproduced here with his approval). Scott wants to highlight that he doesn't know anything beyond what's in the public posts on this issue.

I'd encourage people to email Scott; it's probably good for someone to have a list of interested donors.
 

 

------------------------------------
Scott's email:

SHORT VERSION

 

If you want to donate blindly and you can afford more than $250K, read here for details, then consider emailing Open Philanthropy at inquiries@openphilanthropy.org. If less than $250K, read h... (read more)

Thanks for investigating this and producing such an extremely thorough write-up, very useful!

JanB
2y

I haven't read the comments and this has probably been said many times already, but it doesn't hurt to say it again:
From what I understand, you've taken significant action to make the world a better place. You work in a job that does considerable good directly, and you donate your large income to help animals. That makes you a total hero in my book :-)

6
Constance Li
2y
Thank you for those kind words! I plan to continue taking significant action to help the world become a better place. While I was hoping that attending EAG could help me in that journey, I've come to learn that there are many other avenues available.

At the same time though, it seems like your objection is a fully general argument against fundamental breakthroughs ever being necessary at any point, which seems quite unlikely. 

Sorry, what I wanted to say is that it seems unclear whether fundamental breakthroughs are needed. They might be needed, or not. I personally am pretty uncertain about this and think that both options are possible. I think it's also possible that any breakthroughs that do happen won't change the general picture described in the OP much.

I agree with the rest of your comment!

I gave the comment a strong upvote because it's super clear and informative. I also really appreciate it when people spell out their reasons for "scale is not all you need", which doesn't happen that often.

That said, I don't agree with the argument or conclusion. Your argument, at least as stated, seems to be "tasks with the following criteria are hard for current RL with human feedback, so we'll need significant fundamental breakthroughs". The transformer was published 5 years ago. Back then, you could have used a very analogous argument to claim that language models would never do this or that task; yet language models can now perform many of those tasks (emergent properties).

2
[anonymous]
2y
Thank you for the comment  - it's a fair point about the difficulty of prediction. In my post I attempted to point to some heuristics which suggest strongly to me that significant fundamental breakthroughs are needed. Other people have different heuristics. At the same time though, it seems like your objection is a fully general argument against fundamental breakthroughs ever being necessary at any point, which seems quite unlikely.  I also think that even the original Attention Is All You Need paper gave some indication of the future direction by testing a large and small transformer and showing greatly improved performance with the large one, while RLHF's early work does not appear to have a similar immediately obvious way to scale up and tackle the big RL challenges like sparse rewards, problems with long episode length, etc.

Yes, you can absolutely apply for conference and compute funding, separately from an application for salary, or in combination. E.g. if you're applying for salary funding anyway, it would be very common and normal to also apply for funding for a couple of conferences, equipment that you need, and compute. I think you would probably go for cloud compute, but I haven't thought about it much. 

Sometimes this can create mild tax issues (if you get the grant in one year but only spend the money on the conference in the next year; or, in some countries, if you... (read more)

Answer by JanB
Aug 12, 2022

I think you could apply for funding from a number of sources. If the budget is small, I'd start with the Longterm Future Fund: https://funds.effectivealtruism.org/funds/far-future

2
Guy Raveh
2y
I know that's the source for e.g. a salary. But say you need less foreseeable expenses like a conference, or use of resources that are usually shared like GPUs. Should you use these funds for that too?
JanB
2y

I'm excited about people thinking about this topic. It's a pretty crucial assumption in the "EA longtermist space", and relatively underexplored.

This post is a response to the thesis of Jan Brauner’s post The Expected Value of Extinction Risk Reduction Is Positive.

The post is by Jan Brauner AND Friederike Grosse-Holz. I think correcting this is particularly important because the EA community struggles with gender diversity, so dropping the female co-author is extra bad.

1
Aaron Bergman
2y
Oh man, that’s really bad on our part. Thanks for the correction. Apologies to Friederike for this.
2
Hasan
2y
Apologies on this, we've fixed it now.
Answer by JanB
Jun 27, 2022

Given that Greg trained as an MD because he wanted to do good, this probably counts: https://80000hours.org/2012/08/how-many-lives-does-a-doctor-save/

(and the many medical doctors and students who read posts like this and then also changed their minds, including me :-) )

Good point, thanks! I'm really impressed; it seems like a very hard switch to make.

Answer by JanB
Jun 27, 2022

https://www.jefftk.com/p/revisiting-why-global-poverty

Answer by JanB
Jun 26, 2022

This is a bit of a summary of what other people have said, and a bit of my own conceptualisation:

A) If the work is not competitive (not a winner-takes-all market), then:

  • For some jobs, marginal returns on quality-adjusted time invested will decrease, and you lose less than 20% of impact. This is true for jobs where some activities are clearly more valuable than others, so you can cut the less valuable ones.
  • For some jobs, marginal returns on quality-adjusted time invested will increase, and you lose more than 20% of impact. This could be e.g. because you ha
... (read more)

I'd guess that quite often you'd either win anyway or lose anyway, and that the 20% doesn't make the difference. There are so many factors that matter for startup founder success (talent, hard work, network, credentials, luck) that it would be surprising if the competition were often so close that a 20% reduction in working time changed things.

Another way to put this: it seems likely that Facebook would still be worth hundreds of billions of dollars, and Myspace ~$0, had the Facebook founders worked 20% less.

JanB
2y

Thanks so much for writing this!  I expect this will be quite useful for many people.

I actually spent some time this week worrying a bit about a nuclear attack on the UK, bought some preparation stuff, figured out where I would seek shelter or when I'd move to the countryside, and so on. One key thing is that it's just so hard to know which probability to assign. Is it 1%? Then I should GTFO! Is it 0.001%? Then I shouldn't worry at all.

JanB
2y

Enlightenment at scale (provocative title :-) )

Values and Reflective Processes (?), X-risk (?)

A strong meditation practice promises enticing benefits to the meditator---less suffering, more control over one's attention and awareness, more insight, more equanimity. Brahmavihara practice promises the cultivation of loving-kindness, compassion, and empathetic joy. The world would be a much better place if everybody suffered less, had more equanimity, and felt strong compassion and empathy with other beings. But meditation is hard! Becoming a skilled meditator,... (read more)

2
Dawn Drescher
2y
Are there high-quality safety trials for different meditation practices? I’ve heard of a variety of really bad side effects, usually from very intense, very goal-oriented meditative practice. The Dark Night that Daniel Ingram describes, the profligacy that Scott Alexander warned of, more inefficient perception that Holly Elmore experienced, etc. I have no idea how common those are and whether one is generally safe against them if one only meditates very casually… It would be good to have more certainty about that, especially since a lot of my friends are casual meditators.

I think Berlin has something like this.

4
victor.yunenko
2y
Indeed, the space was organized by Effektiv Spenden: teamwork-berlin.org
JanB
2y

AI alignment prize suggestion: Improve our ability to evaluate (and provide training signal for) fuzzy tasks

Artificial Intelligence

There are many tasks that we want AI systems to do, for which performance cannot be evaluated automatically (and thus training signal provision is hard). If we don't make progress on our ability to train systems for such tasks, we might end up in a world full of systems that optimise for that which is easy to measure, rather than what we actually want. One example of such a task is the evaluation of free-form text; there is cur... (read more)

JanB
2y

AI alignment prize suggestion: Demonstrate a true sandwiching project

Artificial Intelligence

Sandwiching projects are a concrete way to make progress on aligning narrowly superhuman models. They "sandwich" the model in between one set of humans which is less capable than it and another set of humans which is more capable than it at the fuzzy task in question, and then figure out how to help the less-capable set of humans reproduce the judgments of the more-capable set of humans. For example, first fine-tune a coding model to write short functions solv... (read more)

JanB
2y

AI alignment prize suggestion: Introduce AI Safety concepts into the ML community

Artificial Intelligence

Recently, there have been several papers published at top ML conferences that introduced concepts from the AI safety community into the broader ML community. Such papers often define a problem, explain why it matters, sometimes formalise it, often include extensive experiments to showcase the problem, and sometimes include initial suggestions for remedies. Such papers are useful in several ways: they popularise AI alignment concepts, pave the way for fu... (read more)

2
Yonatan Cale
2y
Risk: The course presents possible solutions to these risks, and the students feel like they "understood" AI risk, and in the future it will be harder to talk to these students about AI risk since they feel like they already have an understanding, even though it is wrong. I am specifically worried about this because I try imagining who would write the course and who would teach it. Will these people be able to point out the problems in the current approaches to alignment? Will these people be able to "hold an argument" in class well enough to point out holes in the solutions that the students will suggest after thinking about the problem for five minutes? I'm not saying this isn't solvable, just a risk.
JanB
2y

Refinement of project idea #22, Prediction Markets

 

Add: "In particular, we'd like to see prediction platforms that do all three of the following: use real money, are very easy to use, and allow very easy creation of markets."

JanB
2y

Highly effective enhancement of productivity, health, and wellbeing for people in high-impact roles

Effective Altruism

When it comes to enhancement of productivity, health, and wellbeing, the EA community does not sufficiently utilise division of labour. Currently, community members need to obtain the relevant knowledge and do related research, e.g. on health issues, themselves. We would like to see dedicated experts who offer optimal productivity, health, and wellbeing as a service. As a vision, a person working in a high-impact... (read more)

6
Brendon_Wong
2y
I was going to write a similar comment for researching and promoting well-being and well-doing improvements for EAs as well as the general public! Since this already exists in similar form as a comment, strong upvoting instead. Relevant articles include Ben Williamson’s project (https://forum.effectivealtruism.org/posts/i2Q3DTsQq9THhFEgR/introducing-effective-self-help) and Dynomight’s article on “Effective Selfishness” (https://dynomight.net/effective-selfishness/). I also have a forthcoming article on this. Multiple project ideas that have been submitted also echo this general sentiment. For example “Improving ventilation,” “Reducing amount of time productive people spend doing paperwork,” and “Studying stimulants' and anti-depressants' long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin).” Edit: I am launching this as a project called Better! Please get in touch if you're interested in funding, collaborating on, or using this!
JanB
2y

Reducing gain-of-function research on potentially pandemic pathogens

Biorisk

Lab outbreaks and other lab accidents with infectious pathogens happen regularly. When such accidents happen in labs that work on gain-of-function research (on potentially pandemic pathogens), the outcome could be catastrophic. At the same time, the usefulness of gain-of-function research seems limited; for example, none of the major technological innovations that helped us fight COVID-19 (vaccines, testing, better treatment, infectious disease modelling) was enabled by gain-of-func... (read more)

JanB
2y

Cognitive enhancement research and development (nootropics, devices, ...)

Values and Reflective Processes, Economic Growth

Improving people's ability to think has many positive effects on innovation, reflection, and potentially individual happiness. We'd like to see more rigorous research on nootropics, devices that improve cognitive performance, and similar fields. This could target any aspect of thinking ability---such as long/short term memory, abstract reasoning, creativity---and any stage of the research and development pipeline---from wet lab research ... (read more)

5
Jackson Wagner
2y
I think this is an underrated idea, and should be considered a good refinement/addition to the FTX theme #2 of "AI-based cognitive aids". If it's worth kickstarting AI-based research assistant tools in order to make AI safety work go better, then doesn't the same logic apply towards:
  • Supporting the development of brain-computer interfaces like Neuralink.
  • Research into potential nootropics (glad to hear you are working on replicating the creatine study!) or the negative cognitive impact of air pollution and other toxins.
  • Research into tools/techniques to increase focus at work, management best practices for research organizations, and other factors that increase productivity/motivation.
  • Ordinary productivity-enhancing research software like better note-taking apps, virtual reality remote collaboration tools, etc.
The idea of AI-based cognitive aids only deserves special consideration insofar as:
  1. Work on AI-based tools will also contribute to AI safety research directly, but won't accelerate AI progress more generally. (This assumption seems sketchy to me.)
  2. The benefit of AI-based tools will get stronger and stronger as AI becomes more powerful, so it will be most helpful in scenarios where we need help the most. (IMO this assumption checks out. But this probably also applies to brain-computer interfaces, which might allow humans to interact with AI systems in a more direct and high-bandwidth way.)

This page, from Rob Wiblin, has been shared on Twitter recently. It contains some advice on a minimal version of preparation (e.g. buying potassium iodide tablets): https://nuclearadvice.org/

4
simeon_c
2y
Does anyone know where to buy potassium iodide tablets? I can't find any seller which is not out-of-stock and which works on the internet

I have a similar knee-jerk reaction whenever I read a post "on research", so I wrote up my experience with different types of research: https://forum.effectivealtruism.org/posts/pHnMXaKEstJGcKP2m/different-types-of-research-are-different

 (I'm not at all trying to imply that Rose should have caveated more in her post.)

Answer by JanB
Feb 13, 2022

First, check out this post: https://www.lesswrong.com/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers

 

Are you talking about adversarial ML/adversarial examples? If so, that is certainly an area that's relevant to long-term AI safety; e.g. many proposals for aligning AGI include some adversarial training. In general, I'd say many areas of ML have some relevance to safety, and it mostly depends on how you pick your research project within an area.

Answer by JanB
Jan 26, 2022

Maybe this? Probably not, though https://www.lesswrong.com/posts/puYfAEJJomeodeSsi/an-observation-of-vavilov-day

Yes, when we did the calculation, it was something like €2 per day (for ~6-8 hours per day). Still very cheap for a depression treatment :-)

Great to see this initiative; it seems like there is probably valuable work to be done in this area. I would make extra sure not to conflate "EA jobs" with "jobs at EA orgs" (not implying that you do conflate them). The latter just don't have that much capacity in the medium term.

In a way, it's easier to offer specific training for skills that are needed by EA orgs, and maybe this is more tractable. But I'd also be very excited about programmes that equip many people with the resources they need to pursue high-impact careers outside of the few main EA orgs (whatever these resources are: skills? personality traits? money? cultural shift in the EA community?).

JanB
3y

I personally have benefitted massively from coaching. E.g. I recently wrote this about one of my coaches:

"Paul is a truly excellent coach. I had 40 sessions with him over the course of 2 years. In these, I made transformational progress on topics as broad as motivation/procrastination, communication/teamwork/leadership, time and project management, and decision-making. Paul's science-based and no-bullshit approach aims at long-term growth, not only fixing this week's issues."

Coaching increased my productivity a lot, but also helped me improve a lot in othe... (read more)

Jan's enthusiasm triggered me to start with coaching. I second that the deeper changes are the truly important ones in my personal coaching journey. I started coaching without having particular "issues" but quickly realized how much space there is between the current me and the best-possible me. Among other things, coaching helped me get into the habit of deliberately practicing to minimize this space. Nowadays I regularly do self-coaching sessions that have many elements of the previous "two brain" coaching sessions.

4
SebastianSchmidt
3y
Thanks for sharing, Jan. I knew that coaching had an exceptional impact on you but this description puts it in a completely new light: a 5x increase in your (expected) lifetime impact and 2x in productivity indicates an exceptionally cost-effective intervention, considering that you (as far as I know) perhaps invested around 500-1000 hours in total on this (when including personal development more broadly). Super inspiring - thanks for sharing!
JanB
3y

I just quickly wanted to say that this seems related to impact certificates: https://forum.effectivealtruism.org/tag/certificate-of-impact

 

There have been a few forum posts on this topic, you can just search the forum (or google) for "impact certificate" and will probably find some interesting arguments.

7
ESRogs
3y
Note that Vitalik Buterin has also recently started promoting related ideas: Retroactive Public Goods Funding
5
Kerry_Vaughan
3y
I think the consensus around impact certificates was that they seemed like a good idea and yet the idea never really took off.

Hi Michael, I wrote this 2 years ago and have not worked in this area since. To give a really good answer, I'd probably have to spend several hours reading the text again. But from memory, I think that most arguments don't rest on the assumption of future agents being total utilitarians. In particular, none of the arguments requires the assumption that future agents will create lots of high-welfare beings. So I guess the same conclusions follow if you assume deontologist future agents, or ones with asymmetric population ethics. This is particularly true if you think that your idealised, reflected preferences would be close to those of the future agents.

Answer by JanB
Mar 28, 2020

I wrote down some musings about this (including a few relevant links) in appendix 2 here.

I think I overheard Toby saying that the footnotes and appendices were dropped in the audiobook, and that, yes, the footnotes and appendices (which make up 50% of the book) should be the most interesting part for people already familiar with the x-risk literature.

Answer by JanB
Feb 12, 2020

So this is my very personal impression. I might be super wrong about this; that's why I asked this question. Also, I remember liking the main EA Facebook group quite a bit in the past, so maybe I just can't properly relate to how useful the group is for people who are newer to EA thinking.

Currently, I avoid reading the EA Facebook group the same way I avoid reading comments under YouTube videos. Reading the group makes me angry and sad because of the ignorance and aggression displayed in the posts and especially in the comments. I think many co... (read more)

4
Aaron Gertler
4y
You may be missing a lot of good comments on YouTube videos (at least, if you watch entertaining content that gets a lot of upvotes). Now that comments are filtered by a sort of "magic algorithm" (which I assume is similar to the Forum's -- recency and upvotes), top comments on positive/entertaining videos are regularly very funny and occasionally provide interesting background context. That said, I can't speak to intellectual content, and I'm sure that "controversial content" comments are still terrible, because they lead to more upvoting of negative content that one side or the other wants to support.

I agree that the main EA Facebook group has many low-quality comments which "do not meet the bar for intellectual quality or epistemic standards that we should have EA associated with." That said, it seems that one of the main reasons for this is that the Facebook group contains many more people with very low or tangential involvement with EA. I think we should be pretty cautious about more heavily moderating or trying to exclude the contributions of newer or less involved members.

As an illustration: the 2018 EA Survey found >50% of respondents were memb

... (read more)
6
riceissa
4y
This has been the case for quite a while now. There was a small discussion back in December 2016 where some people expressed similar opinions. My guess is that 2015 is the last year the group regularly had interesting posts, but I might be remembering incorrectly.
4
Habryka
4y
Do you have the same feeling about comments on the EA Forum?

I first thought that "counterproposal passed" meant that a proposal very different from the one you suggested had passed the ballot. But skimming the links, it seems that the counterproposals were actually similar to your original proposals?

5
Jonas V
4y
That's correct. The original proposals for sustainable nutrition explicitly mentioned "plant-based" and "animal-friendly" food, but then the counterproposals only said "sustainable" or "environmentally friendly." So I'd say overall, from an animal welfare perspective, they were moderately successful. We didn't have the time to evaluate their actual impact, though I think this would be a worthwhile project for EAs, especially if it results in an EA Forum article similar to this one.
JanB
4y

Thanks for bringing this to my attention; I modified the title and the corresponding part of the post.


I didn't have the time to check in with CEA before writing the post, so I had to choose between writing the post as is or not writing it at all. That's why the first line says (in italics): "I’m not entirely sure that there is really no other official source for local group funding. Please correct me in the comments."

I could have predicted that this would not be enough to keep people from walking away with a false impression, so I think I should have chosen a different headline.

7
Manuel Allgaier
4y
Thanks for taking the time to reply and update the title! :) Yes, I saw your disclaimer and think it's helpful, but people might indeed not read it or might disregard it, as the former title sounded rather certain.

That mostly seems to be semantics to me. There could be other things that we are currently "deficient" in and we could figure that out by doing cognitive enhancement research.

As far as I know, the term "cognitive enhancement" is often used in the sense that I used it here, e.g. relating to exercise (we are currently deficient in exercise compared to our ancestors), taking melatonin (we are deficient in melatonin compared to our ancestors), and so on...

Great to hear that several people are involved with making the grant decisions. I also want to stress that my post is not at all intended as a critique of the CBG programme.

I agree that there is more to movement building than local groups and that the comparison to AI safety was not on the right level.

I still stand by my main point and think that it deserves consideration:

My main point is that there is a certain set of movement building efforts for which the CEA community building grant programme seems to be the only option. This set includes local groups and national EA networks but also other things. Some common characteristics might be that these efforts are oriented towards the earlier stages of the movement building fu... (read more)

8
Jan_Kulveit
5y
In practice, it's almost never the only option - e.g. CZEA was able to find some private funding even before CBG existed; several other groups were at least partially professional before CBG. In general it's more like it's better if national-level groups are funded from EA