All of Jaime Sevilla's Comments + Replies

Focus of the IPCC Assessment Reports Has Shifted to Lower Temperatures

Sounds reasonable enough to me.

The bet will resolve in your favor if the median temperature increase in the stated policies scenario of the 2032 IEA report is above 2°C.

If the IEA report does not exist or does not report an equivalent of their stated policies scenario the bet resolves ambiguously.

Very curious to see what will actually happen!

5FJehn8d
Alright, that's settled then. Also looking forward to resolution!
Focus of the IPCC Assessment Reports Has Shifted to Lower Temperatures

Good points, I agree that the articles I linked don't directly imply a less than 50% chance of 2°C warming.

And FWIW, Metaculus disagrees with me here: the community prediction is an 85% probability of >2°C warming.

I still hold my position, where my model is that:

  1. Predictions today are much more optimistic than predictions 10 years ago
  2. I expect that trend to continue, because we keep underestimating social and tech progress
  3. I think that the academic process is biased towards being more pessimistic on climate change than the evidence warrants, because of policy c
... (read more)
5FJehn9d
I get your reasons and I hope I lose the $100. I also think the probable temperature for 2100 will continue to go down. However, we still have quite a long way to go to get to 2°C. The IPCC does not really attach probabilities to temperatures. Therefore, it is not really possible to use the IPCC reports directly for resolution. One possibility would be the International Energy Agency [https://www.iea.org/reports/world-energy-outlook-2021/scenario-trajectories-and-temperature-outcomes]. They regularly publish estimates of likely temperature trajectories. Their current estimate is that with currently (in 2021) stated policies we'll get 2.6°C in 2100. We could use the median estimate for stated policies in their report for 2032. As they have been around since 1974, it seems likely they will continue to exist until 2032. However, they might change the way they do their reporting, so I am not sure if this is a great way to resolve this.
Focus of the IPCC Assessment Reports Has Shifted to Lower Temperatures

I appreciate this report and the effort that went into it. That being said, I think it's overly pessimistic considering the evidence we currently have. [1] [2]

I'd bet $1000 at 1:1 odds that we won't see warming over 2°C by 2100.

I'd be happy to take a betting approach that allows for an earlier resolution - I think Tamay Besiroglu described one such procedure somewhere. I can dig it up if anyone wants to take my bet.

Alternatively I would be happy to bet $100 that IPCC projections by 2032 will imply less than 50% probability of higher than 2°C warming by 210... (read more)

5FJehn9d
Thanks for your comment. Unsurprisingly, I am less optimistic. While I also think that climate news has gotten better over the last few years, I still think there is a big chance we end up at over 2°C. The Twitter thread you linked to says "It finds that, if all the countries of the world fulfilled their climate commitments, the world would most likely limit climate change to just under 2 degrees C." That's quite a big if. The post by John and Johannes mainly argues that extreme warming is not likely, which I also agree with. However, I see the research gap more in the range 2°-3.5°C. Finally, even if our median trajectory aims below 2°C, we still should do more research above 2°C. Climate damage does increase considerably for higher temperatures, and due to uncertainties in the climate sensitivity we still could end up there. I'm happy to take on your second bet. Let me know how you want to implement that. I'd also consider the first one depending on the implementation. However, betting is easier if you have lots of money, which I don't.
Do we have any *lists* of 'academics/research groups relevant/adjacent to EA' ... and open science?

Do you see a particular vector or case where harassment might be a risk?

There is precedent for episodes of harassment in the community [1]. One motivated and misguided individual could use this list to conduct more harassment in the future.

There is also precedent for scams directed at academics - I distinctly remember one such scam where a colleague's account was spoofed and they tried to scam money out of me.

Overall I agree that this is less risky than a list of people who share a particular belief, and as risky as other public ... (read more)

1david_reinstein2mo
Thanks. I'll be careful about this. In principle, there is no reason I would need to make this list public. Perhaps a 'by-invitation Airtable' would be a good compromise?
Do we have any *lists* of 'academics/research groups relevant/adjacent to EA' ... and open science?

I'll share with you one such list privately.

One note of caution: there are some important data privacy concerns here. A public list like this could be used to spam or harass researchers. Asking researchers for permission to include their name and information, and having a mechanism for people to opt out later, seem important.

1david_reinstein2mo
Thanks. Do you see a particular vector or case where harassment might be a risk? I’m thinking that for a list of “researchers who do work on global priorities” or “researchers who have spoken at an EA Global conference” this is akin to existing public lists of researchers by field, and thus not a big threat. If instead this was a list reflecting, e.g., deeply personal views or political affiliations it could be more problematic. And you are right that we should notify people who are on the list and allow them to ask to have their names removed.
Other-centered ethics and Harsanyi's Aggregation Theorem

Since I first read your piece on future-proof ethics my views have evolved from "Not knowing about HAT" to "HAT is probably wrong/confused, even if I can't find any dubious assumptions" and finally "HAT is probably correct, even though I do not quite understand all the consequences". 

I would probably not have engaged with HAT if not for this post, and now I consider it close in importance to the VNM and Cox theorems in terms of informing my worldview.

I particularly found the veil of ignorance framing very useful to help me understand and accept HAT.

I'l... (read more)

What pieces of ~historical research have been action-guiding for you, or for EA?

Here is a long answer I wrote a while ago. Not sure how action-guiding they were, but I am glad the work I mentioned was done.

https://forum.effectivealtruism.org/posts/uH4kGL4LgQdCgMpDP/can-we-influence-the-values-of-our-descendants?commentId=Wey6Q2KBELrK3n5BW

The relevant part:

Do you have any ideas about how to make progress on [studying the cultural legacy of intentional movements]?

There is a large corpus of historical analysis studying social movements like the suffragettes or the slavery abolitionists. My bet is that there would be large value in summari... (read more)

3Ramiro2mo
I included some missing links... Did I get it right?
The role of academia in AI Safety.

Fair points. In particular, I think my response should have focused more on the role of academia + industry.

a disproportionate amount of progress on [mechanistic interpretability] has been made outside of academia, by Chris Olah & collaborators at OpenAI & Anthropic

Not entirely fair: if you open the field just a bit to "interpretability" in general, you will see that most of the important advances in the field (e.g. SHAP and LIME) were made inside academia.

I would also not be too surprised to find people within academia who are doing great mechanistic interp... (read more)

6Simon Skade2mo
I must say I strongly agree with Steven. 1. If you are saying academia has a good track record, then I must say (1) that's wrong for stuff like ML, where in recent years much (arguably most) relevant progress is made outside of academia, and (2) it may have a good track record over the long history of science, and when you say it's good at solving problems, sure, I think it might solve alignment in 100 years, but we need it in 10, and academia is slow. (E.g. read Yudkowsky's sequence on science, if you don't think that academia is slow.) 2. Do you have some reason why you think that a person can make more progress in academia than elsewhere? I agree that academia has people, and it's good to get those people, but academia has badly shaped incentives, like (from my other comment): "Academia doesn't have good incentives to make that kind of important progress: You are supposed to publish papers, so you (1) focus on what you can do with current ML systems, instead of focusing on more uncertain longer-term work, and (2) Goodhart on some subproblems that don't take that long to solve, instead of actually focusing on understanding the core difficulties and how one might address them." So I expect a person can make more progress outside of academia. Much more, in fact. 3. Some important parts of the AI safety problem seem to me like they don't fit well into academic work. There are of course exceptions, people in academia who can make useful progress here, but they are rare. I am not that
The role of academia in AI Safety.

Here is my personal interpretation of the post:

> the EA/LW community has a comparative advantage at stating the right problem to solve and grantmaking, the academic community has a comparative advantage at solving sufficiently defined problems

I think this is fairly uncontroversial, and roughly right - I will probably be thinking in these terms more often in the future.

Implications are that the most important output the community can hope to produce is research agendas, benchmarks, idealized solutions and problem statements, and leave ML research, pra... (read more)

9Steven Byrnes2mo
It's not obvious to me that "the academic community has a comparative advantage at solving sufficiently defined problems". For example, mechanistic interpretability has been a well-defined problem for the past two years at least, but it seems that a disproportionate amount of progress on it has been made outside of academia, by Chris Olah & collaborators at OpenAI & Anthropic. There are various concrete problems here [https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh] but it seems that more progress is being made by independent researchers (e.g. Vanessa Kosoy, John Wentworth) and researchers at nonprofits (MIRI) than by anyone in academia. In other domains, I tend to think of big challenging technical projects as being done more often by the private or public sector—for example, academic groups are not building rocket ships, or ultra-precise telescope mirrors, etc., instead companies and governments are. Yet another example: In the domain of AI capabilities research, DeepMind and OpenAI and FAIR and Microsoft Research etc. give academic labs a run for their money in solving concrete problems. Also, quasi-independent-researcher Jeremy Howard beat a bunch of ML benchmarks while arguably kicking off the pre-trained-language-model revolution here [https://arxiv.org/abs/1801.06146]. My perspective is: academia has a bunch of (1) talent and (2) resources. I think it's worth trying to coax that talent and resources towards solving important problems like AI alignment, instead of the various less-important and less-time-sensitive things that they do. However, I think it's MUCH less clear that any particular Person X would be more productive as a grad student than as a nonprofit employee, or more productive as a professor than as a nonprofit technical co-founder. In fact, I strongly expect the reverse. And in that case, we should really be framing it as "There are tons of talented people in academia, and we should be trying to convince them that AGI x-risk is a problem they s
I'm interviewing Nova Das Sarma about AI safety and information security. What should I ask her?

Is there any work on historical studies of leaks in the ML field? 

Would you like such a project to exist? What sources of information are there?

I'm interviewing Nova Das Sarma about AI safety and information security. What should I ask her?

Any hot takes on the recent NVIDIA hack? Was it preventable? Was it expected? Any AI Safety implications?

I'm interviewing Nova Das Sarma about AI safety and information security. What should I ask her?

Why is Anthropic working on computer security? What are the key computer security problems she thinks are most important to solve?

I'm interviewing Nova Das Sarma about AI safety and information security. What should I ask her?

Which key companies would Nova like to help strengthen their computer security?

Concurso de ensayos sobre Riesgos Catastróficos Globales en Español

Hi Cristina!

Good idea about the Facebook group, and thanks for sharing it there.
We appreciate all the help we can get spreading the word about the contest, for example on Twitter and Facebook.

I'll message you privately and we can discuss a promotion plan :)

What will be some of the most impactful applications of advanced AI in the near term?

Personally, I have been using ML to make art, help me write short stories and suggest alternative ways of framing the abstracts in my papers.

I expect these applications will become much better soon, and become a staple as they are integrated into text editors like Word.

I think it's likely (60%) that a major text editor will, by 2026, have a language model making online suggestions, and plausibly an option to generate an appropriate image to be inserted in the middle of the document.

What will be some of the most impactful applications of advanced AI in the near term?

The AI Tracker team has been tracking some potential (mis)uses of cutting-edge AI.

These include phishing, social media manipulation, disinformation campaigns, harassment and blackmail, surveillance, cyberattacks, evidence fabrication, bioweapon development and denial-of-service attacks.

2IanDavidMoss2mo
Hmm, I get a "Service Unavailable" error message when I visit the website. Edit: works now.
Patricia Hall & The Warlock Curse

"I read this in one sitting" is the highest praise a new author can receive <3

Thank you for giving it a go!

Agrippa's Shortform

You seem frustrated that some EAs are working on leading AI labs, because you see that as accelerating AI timelines when we are not ready for advanced AI.

Here are some cruxes that might explain why working at leading AI labs might be a good thing:

We are uncertain of the outcomes of advanced AI

AI can be used to solve many problems, including e.g. poverty and health. It is plausible that by delaying this technology we would be harming the people who would benefit from it.

Also, accelerating progress of space colonization can ultimately give you access to a va... (read more)

5Agrippa3mo
I agree "having people on the inside" seems useful. At the same time, it's hard for me to imagine what an "aligned" researcher could have done at the Manhattan Project to lower nuclear risk. That's not meant as a total dismissal, it's just not very clear to me. > Safety-conscious researchers and engineers have done an incredible work setting up safety teams in OpenAI and DeepMind. I don't know much about what successes here have looked like, I agree this is a relevant and important case study. > I think ostracizing them would be a huge error. My other comments better reflect my current feelings here.
EA Forum feature suggestion thread

If you click on the link icon next to the votes you will be redirected to the comment's URL.
For example, here is a link to your comment above.

1Matt Goldwater3mo
Awesome. Thanks for letting me know!
Contest - A New Term For "Eucatastrophe"

I could not figure out the interface in five minutes and gave up.

My suggestions would be existential windfall and existential bonanza.

Both of them are real words that I expect many people will know.

There is precedent for using the term windfall in AI governance to denote a large increase in revenue, which might make it slightly more confusing. But in any case they seem like good words.

FWIW I also like eucatastrophe.  

EA Forum feature suggestion thread

HTML injections?

I wanted to write a post with color highlighting. This would have been easy to do if I could inject some HTML code into my posts. I imagine there are other use cases where people want to do something special that the code base does not support yet.

Being able to embed OWiD interactive graphs and other visualizations would be a great plus too!

2Yonatan Cale17d
(This would introduce security concerns, but could be done safely, especially if the LW/CEA teams don't actually write the security code but use something ready)
Splitting the timeline as an extinction risk intervention

I think it's bunk that we get to control the number of splits, unless your value function is really weird and considers branches which are too similar to not count as different worlds.

Come on people, the whole point of MWI is that we want to get rid of the privileged role of observers!

2NunoSempere3mo
It's unclear to me why this would be so weird
The best $5,800 I’ve ever donated (to pandemic prevention).

though my real expectation is that we probably could just be honest and straightforward, and this wouldn't actually hurt candidates

Endorsed.

Lately I've had two minor, unrelated experiences where I was advised not to say what I believe straight up, out of fear of being misunderstood by people outside the community.

I think on the margin the community is too concerned with reacting to "what people might think" instead of their actual reactions.

I think on the margin the community is too concerned with reacting to "what people might think" instead of their actual reactions.

I see where you're coming from with this general heuristic, but I'm less sure how applicable the heuristic is to this context. In most cases, it seems right to ask, "How will a random person react if they hear X, if they randomly stumble across it?" But given the adversarial nature of politics, the more relevant question here might be, "How will a random person react if they hear X, if it's presented however an adversary want... (read more)

EA Forum feature suggestion thread

Footnotes are great!

One feature that would make them even greater is if I could copy-paste text from a Google Doc that includes footnotes and have them formatted correctly.

4Vaidehi Agarwalla2mo
Strong +1 now as I actually try to insert a post with 10+ footnotes :D
EA Forum feature suggestion thread

The ability to add links in bios would be great!

If we could make it so I can edit my bio like I would edit a post it would be even better.

EDIT: ohh the bio uses markdown, noted.

1Sarah Cheng4mo
Thanks for the suggestion! Markdown formatting should work, though I agree it's very unclear how to add a link to your bio. And it looks like we already have an item in our backlog to use the rich text editor for the bio. :)
What are some artworks relevant to EA?

I run a digital art studio, and some of my work is inspired by Effective Altruism themes and ideas.

In particular, Shared Identity, Shared Values and Science and Identity borrow heavily from the community.

7CarolineJ4mo
Congrats on starting this work! Those are great. A particular ask: may we have one or several on "Future Generations"? I've been wanting to see, and maybe have, inspirational art around this theme for a while, and I haven't encountered it in this format.
1slg4mo
This is cool, I had no idea you were also working on this.
2Lizka4mo
Thanks for sharing this!
How to make Slack workspaces welcoming and valuable

Why both #announcements and #general? What is the use case for each?

2Alex Berezhnoi4mo
By creating an #announcements channel where only admins can post, you can separate group updates from other posts and discussions. The goal is to increase the visibility of important announcements everyone should know about. #general is usually the default channel for discussions and resources everyone in the workspace might be interested in, so the posting rate is higher there.
EA Forum feature suggestion thread

Yes, that is right.

I don't have any recent examples on the EA Forum, but here is an article I wrote on LessWrong where the equations were very annoying to edit.

I expect I would occasionally use larger, better-formatted equations (with underbraces and such) if they were easier to edit in the WYSIWYG editor.

3Jonathan Mustin4mo
Actually it looks like a version of this is currently possible! There's a handle in the lower-right corner of the equation editor that lets you resize it. Once you've done that, it remains at the set width and wraps the contents to fit. The way the equation editor follows the cursor can be a bit janky, but it does seem to work.
EA Forum feature suggestion thread

Thanks to you!

In hindsight, footnotes were the thing I really wanted, so I am a very happy user indeed!

It would be good to be able to switch between editors to do things like editing complicated LaTeX (right now it's complicated to edit it in the WYSIWYG editor). But maybe the more reasonable ask is to make the WYSIWYG equation editor span multiple lines for large equations.

1Jonathan Mustin4mo
Really glad to hear footnotes have met your needs! Added to the list! Are you writing long enough equations that the text goes offscreen?
EA Forum feature suggestion thread

TL;DR: I'd like to have a single dashboard where I can see a summary of the analytics for all my posts.

I've been really enjoying the analytics feature!
I used it for example to notice that my post on persistence had become very popular, which led me to write a more accessible summary.

One thing I've noticed is that it is very time-consuming to track the analytics of each post. That requires me to go to each post, click on analytics, and wait for them to load.

I think Medium has a much nicer interface. They have a main user board for stats, from which I can see overall engag... (read more)

3Jonathan Mustin4mo
Good suggestion! I expect this would be a well-liked feature. Added to our project list. Thanks!
Consider trying the ELK contest (I am)

I am sure that if you join the AI Alignment slack [1], Rob Miles discord server [2] or ask questions on LW you will find people willing to answer.

Finding a dedicated tutor might be harder, but it should be possible if you can compensate them for their time. The Bountied Rationality Facebook group [3] might be a good place to ask.

[1] https://eahub.org/profile/jj-hepburn/ [2] https://www.patreon.com/posts/patreon-discord-41901653 [3] https://m.facebook.com/groups/bountiedrationality/about/

1Jeremy4mo
Thanks for the suggestions!
[Feature Announcement] Rich Text Editor Footnotes

I am so excited for this feature! Finally, I will be able to update my posts with real footnotes instead of awkwardly adding them at the end ^^

Principled extremizing of aggregated forecasts

Thanks for chipping in Alex!

It's the other way around for me. Historical baseline may be somewhat arbitrary and unreliable, but so is 1:1 odds.

Agreed! To give some nuance to my recommendation, the reason I am hesitant is mainly because of lack of academic precedent (as far as I know).

If the motivation for extremizing is that different forecasters have access to independent sources of information to move them away from a common prior, but that common prior is far from 1:1 odds, then extremizing away from 1:1 odds shouldn't work very well.

Note that the data ... (read more)
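To make the motivation concrete, here is a minimal sketch (my own illustration, not the exact method from the post) of extremizing the pooled log-odds away from a baseline prior rather than away from 1:1 odds; the factor and numbers are arbitrary:

```python
import numpy as np

def extremize(probs, d=1.5, baseline=0.5):
    """Pool probabilities by averaging log-odds (the geometric mean of odds),
    then push the result away from a baseline prior by a factor d."""
    probs = np.asarray(probs, dtype=float)
    log_odds = np.log(probs / (1 - probs))
    baseline_lo = np.log(baseline / (1 - baseline))
    pooled = log_odds.mean()
    extremized = baseline_lo + d * (pooled - baseline_lo)
    return 1 / (1 + np.exp(-extremized))

forecasts = [0.2, 0.3, 0.25]
print(extremize(forecasts, baseline=0.5))   # ≈ 0.16: extremizing away from 1:1 odds pushes down
print(extremize(forecasts, baseline=0.10))  # ≈ 0.36: away from a 10% base rate it pushes up
```

If the forecasters' shared prior really is the historical base rate rather than 1:1 odds, the second call is the one that matches the motivation described above.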

When pooling forecasts, use the geometric mean of odds

UPDATE: Eric Neyman recently wrote about an extra assumption that I believe cleanly cuts into why this example fails.

The assumption is called the weak substitutes condition. Essentially, it means that there are diminishing marginal returns to each forecast.

The Jack, Queen and King example does not satisfy the weak substitutes condition, and forecast aggregation methods do not work well in it. 

But I think that when the condition is met we can often get good results with forecast aggregation. Furthermore, I think it is a very reasonable condition to ... (read more)
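For reference, pooling by geometric mean of odds (the method in the post's title) can be sketched in a few lines; this is only an illustrative implementation for binary forecasts:

```python
import numpy as np

def pool_geometric_mean_odds(probs):
    """Aggregate binary forecasts by taking the geometric mean of their odds,
    i.e. averaging log-odds and mapping back to a probability."""
    probs = np.asarray(probs, dtype=float)
    odds = probs / (1 - probs)
    pooled_odds = np.exp(np.mean(np.log(odds)))
    return pooled_odds / (1 + pooled_odds)

print(pool_geometric_mean_odds([0.1, 0.5, 0.9]))  # 0.5
print(pool_geometric_mean_odds([0.2, 0.4]))       # ≈ 0.29 (arithmetic mean of the probabilities would give 0.30)
```

Under the weak substitutes condition mentioned above this kind of aggregation tends to behave well; the Jack, Queen and King example is precisely a case where it does not.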

What is the best press out there on Effective Altruism?

Future Perfect from Vox is an EA-aligned outlet.
Author Kelsey Piper in particular ran the Stanford EA group and frequently covers issues from an EA perspective. 

Democratising Risk - or how EA deals with critics

Thank you for writing and sharing. I think I agree with most of the core claims in the paper, even if I disagree with the framing and some minor details.

One thing must be said: I am sorry you seem to have had a bad experience while writing criticism of the field. I agree that this is worrisome, and makes me more skeptical of the apparent matters of consensus in the community. I do think in this community we can and must do better to vet our own views.

Some highlights:

I am big on the proposal to have more scientific inquiry. Most of the work today on existe... (read more)

1jchen13mo
"I have regularly seen proposals in the community to stop and regulate AI development" - Are there any public ones you can signpost to or are these all private proposals?
EA Forum feature suggestion thread

Some basic functionality I would benefit a lot from:

  • Add functionality for footnotes in the WYSIWYG editor
  • Make both editors interoperable
  • Have a way to toggle between the markdown and the WYSIWYG editor on the fly

Footnotes are a thing that I would use more often if it was easy to do so.

I love editing using the WYSIWYG editor, which does not support them. So when I want to add footnotes I would need to: 1) copy-paste my article into a Google Doc, 2) run a plugin to turn the text into Markdown, 3) change my editor settings to Markdown, 4) create a new artic... (read more)

3Jonathan Mustin4mo
Thanks for the feedback Jsevillamol! And good timing 🙂 Hope WYSIWYG footnotes are meeting your needs. Full interoperability is a pretty tall order, and I expect it won't be a near-term add, but I've added it to our list in any case. Cheers!
Can we influence the values of our descendants?

Thank you Rose! You make interesting points, let me try to reason through them:

These papers look at measurable and relatively narrow features of the past, and how far they explain features of the present which are again measurable and relatively narrow.

This is a point worth grappling with. And let's be fair - there are many obvious ways in which cultural transmission clearly has had an effect on modern society. 

Case in point: Xmas is approaching! And the fact that we have this arbitrary ritual of meeting once a year to celebrate is a very clear and me... (read more)

Can we influence the values of our descendants?

I think this is a good point and worth emphasizing.

The studies are focused on studying variation across populations - if everyone in the studied population is equally affected by the cultural forces in question, then this will not show up in the results.

This still means that in practice deliberate cultural interventions are less appealing. In this interpretation, you cannot work towards improving the values of a subpopulation and hope that they will persist through time - the forces of dispersal and diffusion, as you say, will slowly wilt away t... (read more)

Nines of safety: Terence Tao’s proposed unit of measurement of risk

Yes please. This is a great idea and I would want us to move towards a culture where this is more common. Even better if we can use logarithmic odds instead, but I understand that is a harder sell.

Talking about probabilities makes sense for repeated events where we care about the proportion of outcomes. This is not the case for existential risk. 

Also, I am going to be pedantic and point out that Tao's example about the election is misleading. The percentage is not the chance of winning the election! Instead, it is the polling result. The implicit probab... (read more)
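For concreteness, here is a minimal sketch of the unit (following Tao's definition of k nines of safety as a failure probability of at most 10^-k; the log-odds helper is just my illustration of the comparison above):

```python
import math

def nines_of_safety(p_failure):
    """Tao's 'nines of safety': the negative base-10 log of the failure probability.
    A 0.1% chance of catastrophe corresponds to 3 nines of safety."""
    return -math.log10(p_failure)

def log_odds(p, base=10):
    """Base-10 logarithmic odds, the alternative unit suggested above."""
    return math.log(p / (1 - p), base)

print(nines_of_safety(0.001))  # 3.0
print(nines_of_safety(0.02))   # ≈ 1.7
print(log_odds(0.999))         # ≈ 3.0 -- close to the nines when the failure probability is small
```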

1Hashem5mo
I emailed them a few days ago but didn't hear back.
How to generate research proposals

In How to generate research proposals I sought to help early career researchers in the daunting task of writing their first research proposal.

Two years after the fact, I think the core of the advice stands very well. The most important points in the post are:

  1. Develop a pipeline to collect ideas as they come to you.
  2. Think about scope (is your question concrete?) and methodology (is your question tractable?).
  3. Devote some time to figuring out what good research looks like.

None of this is particularly original. The value I added is collecting all the advice in a ... (read more)

1acylhalide5mo
Thanks, this is interesting. (One point that is not mentioned is the counterfactual world where you didn't develop the tech, would someone else have developed the same tech with the same funding instead?)
Takeaways from our interviews of Spanish Civil Protection servants

(disclaimer: this is my opinion)

In short: Spanish civil protection would not, as of today, consider making plans to address specific GCRs.

There is this weird tension where they believe that resilience is very important, and that planning in advance is nearly useless for non-recurring risks.

The civil protection system is very geared towards response. Foresight, mitigation and prevention seldom happen. This means they are quite keen on improving their general response capacity, but they have no patience for hypotheticals. So they would not consider specifi... (read more)
