evelynciara

I'm a machine learning engineer on a team at PayPal that develops algorithms for personalized donation recommendations (among other things). Before this, I studied computer science at Cornell University. I also manage the Effective Public Interest Computing Slack (join here).

Obligatory disclaimer: My content on the Forum represents my opinions alone and not those of PayPal.

I also offer copyediting and formatting services to members of the EA community for $15-35 per page, depending on the client's ability to pay. DM me for details.

I'm interested in effective altruism and longtermism broadly. The specific topics change over time; currently they include existential risks, climate change, wild animal welfare, alternative proteins, and longtermist global development.

A comment I've written about my EA origin story

Pronouns: she/her, ella, 她, 彼女

"It is important to draw wisdom from many different places. If we take it from only one place, it becomes rigid and stale. Understanding others, the other elements, and the other nations will help you become whole." —Uncle Iroh

Sequences

Democracy & EA
How we promoted EA at a large tech company
EA Survey 2018 Series
EA Survey 2019 Series

Comments

EA is more than longtermism

Giving money away as ineffectively as possible to own the nerds, got it 😒

evelynciara's Shortform

I think some of us really need to create op-eds, videos, etc. for a mainstream audience defending longtermism. The Phil Torres pieces have spread a lot (people outside the EA community have shared them in a Discord server I moderate, and Timnit Gebru has picked them up) and thus far I haven't seen an adequate response.

Death to 1 on 1s

Yeah, I like your idea of going out for walks too. I did that with one attendee last time.

The Many Faces of Effective Altruism

Good post! I'll take another look later.

Nitpick: Utilitarianism has tenets, not tenants.

Effective [Re]location

I've had an idea like this before! In my concept, the user would select the criteria that they value, and the app would only show them the places that are Pareto optimal with respect to those criteria.
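
For what it's worth, the filtering step could look something like this (a minimal sketch; the place names, criteria, and scores are all made up, and higher is assumed to be better):

```python
def pareto_optimal(places, criteria):
    """Return the places not dominated on the user's chosen criteria."""
    def dominates(a, b):
        # a dominates b if a is at least as good on every criterion
        # and strictly better on at least one.
        return (all(a[c] >= b[c] for c in criteria)
                and any(a[c] > b[c] for c in criteria))
    return [p for p in places if not any(dominates(q, p) for q in places)]

# Hypothetical data: two criteria the user selected, scored 0-10.
places = [
    {"name": "City A", "affordability": 8, "air_quality": 5},
    {"name": "City B", "affordability": 6, "air_quality": 9},
    {"name": "City C", "affordability": 5, "air_quality": 4},  # dominated by City A
]
print(pareto_optimal(places, ["affordability", "air_quality"]))  # A and B survive
```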

Open Thread: Spring 2022

Are shortforms supposed to show up on the front page? I published a shortform on Sunday and noticed that it did not appear in the recent activity feed, but older material did.

Also, does anyone else think that the shortform section should be more prominent? It's a nice way to encourage people to publish ideas even if they're not confident in them, but my most recent one has gotten little to no engagement.

evelynciara's Shortform

Big O as a cause prioritization heuristic

When estimating the amount of good that can be done by working on a given cause, a good first approximation might be the asymptotic behavior of the amount of good done at each point in time (the trajectory change).

Other important factors are the magnitude of the trajectory change (how much good is done at each point in time) and its duration (how long the trajectory change lasts).

For example, changing the rate of economic growth (population growth × GDP per capita growth) has an $O(t^2)$ trajectory change in the short run, as long as humanity doesn't expand into space. We can break it down into a population growth component, which grows linearly, and a component for the change in average welfare due to GDP per capita growth[1]. GDP per capita typically grows exponentially, and welfare is a logarithmic function of GDP per capita. These two trends cancel out, resulting in a linear trend, or $O(t)$.
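
Spelling out the cancellation (my notation, not the original post's): write per capita income as $y(t) = y_0 e^{gt}$ and take welfare per person to be logarithmic in income. Then

$$u(t) = \log y(t) = \log y_0 + g t = O(t), \qquad W(t) = N(t)\,u(t) = O(t) \cdot O(t) = O(t^2),$$

where $N(t) = O(t)$ is population and $W(t)$ is total welfare.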

If humanity expands into space, then economic growth becomes a quartic trend. Population growth becomes cubic, since humanity can fill an $O(t^3)$ volume of space in $O(t)$ time.[2] The average welfare due to GDP per capita growth is still linear. Multiplying the cubic and linear trends yields a quartic trend, or $O(t^4)$. So increasing the probability that space colonization goes well looks more important than changing the rate of economic growth on Earth.
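
Presumably the geometric reasoning behind the cubic claim (my gloss): expanding outward at some maximum speed $v$, the settled region at time $t$ is contained in a sphere of radius $vt$, so

$$V(t) \le \tfrac{4}{3}\pi (v t)^3 = O(t^3).$$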

Surprisingly, the trajectory change caused by reducing existential risk is not an exponential trend. Existential risk reduces expected welfare at each point in time by a factor of $e^{-rt}$, where $r$ is the probability of catastrophe per unit of time. This exponential discount causes all trajectory changes to wash out as $t$ becomes very large.

The amount of good done by reducing x-risk from $r_1$ to $r_2 < r_1$ is

$$\int_0^\infty \left( e^{-r_2 t} - e^{-r_1 t} \right) W(t)\, dt,$$

where $W(t)$ is the welfare trajectory conditional on no existential catastrophe. The factor $e^{-r_2 t} - e^{-r_1 t}$ increases to a maximum value and then decays to 0 as $t$ approaches infinity, so its asymptotic behavior is $O(1)$. So the trajectory change caused by reducing x-risk is $O(W(t))$, or whatever the asymptotic behavior of $W(t)$ is.
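
As a quick check on the $O(1)$ claim (my derivation): the factor starts at 0, is bounded above by 1, and setting its derivative to zero locates its single peak at

$$t^* = \frac{\ln(r_1 / r_2)}{r_1 - r_2},$$

after which it decays to 0.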

On the other hand, the amount of good done by changing the trajectory from $W_1(t)$ to $W_2(t)$, holding the risk rate $r$ fixed, is

$$\int_0^\infty e^{-r t} \left( W_2(t) - W_1(t) \right) dt.$$

So if you can change the trajectory to a $W_2$ that will grow asymptotically faster than $W_1$, e.g. by colonizing space, then this may be more important than reducing x-risk while holding the trajectory constant.
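
To make this concrete, here's a toy numerical comparison of the two integrals (a sketch with entirely made-up numbers: the risk rates, trajectories, rescaling factor, and integration horizon are all invented for illustration):

```python
# Toy comparison of the two integrals above. All parameter values are
# hypothetical; the 100,000-year horizon just truncates a negligible tail.
import math
from scipy.integrate import quad

r1, r2 = 0.002, 0.001          # assumed x-risk per year, before/after reduction
W1 = lambda t: t ** 2          # Earth-bound welfare trajectory, O(t^2)
W2 = lambda t: t ** 4 / 1e4    # space-faring trajectory, O(t^4), arbitrarily rescaled

T = 100_000  # years; both integrands are effectively zero beyond this

# Good done by halving x-risk while keeping the Earth-bound trajectory:
xrisk_gain, _ = quad(lambda t: (math.exp(-r2 * t) - math.exp(-r1 * t)) * W1(t),
                     0, T, limit=200)

# Good done by switching trajectories while keeping the higher risk rate r1:
traj_gain, _ = quad(lambda t: math.exp(-r1 * t) * (W2(t) - W1(t)),
                    0, T, limit=200)

print(f"x-risk reduction: {xrisk_gain:.3g}   trajectory change: {traj_gain:.3g}")
```

With these made-up numbers the trajectory change comes out ahead, but scaling $W_2$ down enough flips the ranking; the asymptotic argument only settles the comparison in the long-run limit.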

  1. ^

    Change in GDP per capita may underestimate the amount of welfare created by technological progress. I'm not sure if this makes the change in average welfare a super-linear growth trend, though.

  2. ^

It could be useful if someone ran a copyediting service

I'm willing to do this work for $15-35 per page (depending on the author's ability to pay); I'm very detail-oriented and like this kind of stuff. I can only really copyedit/proofread posts written in American English, but I can do formatting for any text. I could probably do one or two copyediting or formatting jobs per weekend.

It could be useful if someone ran a copyediting service

Another type of service that would be useful is accessibility services, such as writing transcripts/timed text for audio and video (e.g. podcasts) and alt text for images.
