
Quick sidenote

Due to complications, despite generous efforts from the audio team, the audio recording is almost completely inaccurate for several parts of this post. Listener discretion is advised.

Also, it has come to my attention that this does not include the discrete case, and the probability distribution functions are a bit wonky, as the method currently provided is optimized purely for comparing abstract functions. At the moment, the calculator treats a probability distribution as though there is a uniformly distributed input x and a (generally non-uniform) output f(x), with the probability distribution of f(x) determined through the inverse function of f (for a continuous, invertible f). I am working to fix this. (I am currently undertaking more ambitious and time-effective projects, and this article will likely not be outdated until many months from now, if ever.)
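
For reference, this is the standard change-of-variables relationship the note above is describing; a sketch assuming f is continuous and strictly increasing and the input x is uniform on [a, b] (the specific f, a, and b are whatever you enter in the calculator):

```latex
% Density of y = f(x) when x is uniform on [a, b] and f is strictly increasing:
\[
  p_Y(y) \;=\; \frac{1}{b-a}\,\frac{d}{dy}\, f^{-1}(y),
  \qquad f(a) \le y \le f(b).
\]
```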

Explanation

In summary, the function takes input functions and puts out the expected highest output, given some randomly generated input for each input function.
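
For intuition, here is a minimal Monte Carlo sketch of that idea. It is not the calculator's actual method, and the function name and example values are illustrative assumptions:

```python
import random

def expected_best(options, trials=100_000):
    """Monte Carlo estimate of the expected best outcome.

    `options` is a list of (f, lo, hi, n) tuples: f is evaluated at n
    independent inputs drawn uniformly from [lo, hi], and only the single
    best output across every option and every draw is kept per trial.
    """
    total = 0.0
    for _ in range(trials):
        total += max(
            f(random.uniform(lo, hi))
            for f, lo, hi, n in options
            for _ in range(n)
        )
    return total / trials

# Example: 4 options whose quality is uniform on [-1, 1] (f(x) = x).
print(expected_best([(lambda x: x, -1, 1, 4)]))  # ~0.6
```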

This can be used to see how many jobs you should consider, how many charities you should look into, and many other things. For the sake of the example, we'll use restaurants.

Let's say you want to know how many restaurants you should try before deciding where to go to dinner. Each restaurant is assigned a "tastiness value" between -1 and 1, with a uniform probability distribution.[1][2] This can be expressed with a function whose output is uniform between -1 and 1 (for instance, f(x) = x with x drawn uniformly from -1 to 1).

For that, you would enter this function into the calculator on Desmos, with the first function's count being however many restaurants you visit.

The output value then predicts the expected value of the best restaurant you find. Input different counts until the output meets your needs.
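
As a quick sanity check on this uniform case (this closed form is a standard result for the maximum of n independent uniform draws, not something taken from the calculator itself):

```latex
% Expected tastiness of the best of n restaurants, each uniform on [-1, 1]:
\[
  \mathbb{E}\!\left[\max_{1 \le i \le n} T_i\right] \;=\; \frac{n-1}{n+1},
  \qquad \text{e.g. } n = 4 \;\Rightarrow\; \tfrac{3}{5} = 0.6 .
\]
```

So going from 1 restaurant (expected best 0) to 4 (expected best 0.6) helps a lot, while each further restaurant adds less and less.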

 

Now, let's say that you also have the option of delivery.

The distribution of good delivery places to bad places follows a different function.[3][4]

Now, you would add this as a second input function, with its count being the number of delivery places you order from.

Change the two counts until you are satisfied with the result.
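
A hedged illustration of that two-source comparison, using the same Monte Carlo sketch as above; the delivery distribution here is purely a made-up stand-in, since I am not reproducing the exact function from the calculator:

```python
import random

def expected_best(options, trials=100_000):
    """Same sketch as above: each option is (f, lo, hi, n)."""
    return sum(
        max(f(random.uniform(lo, hi)) for f, lo, hi, n in options for _ in range(n))
        for _ in range(trials)
    ) / trials

# Dine-in restaurants: tastiness uniform on [-1, 1]; visit 3 of them.
dine_in = (lambda x: x, -1, 1, 3)

# Hypothetical delivery distribution (an assumption, not the post's actual
# function): ratings screen out terrible places (outputs start at 0), but
# less-fresh food caps the upside below 1.
delivery = (lambda x: 0.8 * x, 0, 1, 2)

print(expected_best([dine_in, delivery]))
```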

Link

https://www.desmos.com/calculator/q5wwazy8uc

How to use

| Value | What the value represents | How to edit |
| --- | --- | --- |
|  | The first function |  |
|  | The number of times the first function is evaluated |  |
|  | The minimum value of x for the first function |  |
|  | The maximum value of x for the first function |  |
| [5] | The output | Just look at it |
| All things with a 2 | The second function's values | Same as with 1, just edit the ones with a 2 instead. |
| All things with a 3 | The third function's values | Same as with 1, just edit the ones with a 3 instead. |

How to add a function[6]

Optional: Why it works

(It's a link to a post on why it works.)

If you have any suggestions as to how I can make this clearer, a better way of finding the expected value of the best option, or any wording that could be done differently, tell me (ONLY if you want). No pressure.

If there's anything incorrect, please tell me.

  1. ^

    A uniform probability distribution means that the probabilities of each outcome are the same.

  2. ^

    The distribution could've been different. For example, if no restaurant is bad and good restaurants have diminishing returns, the function could be one that never goes negative and flattens out toward the top; or, if restaurants are more likely to be good than bad, the function could be one skewed toward higher values.

  3. ^

    This would be because you can look at the rating of each place on most delivery apps, which eliminates terrible places, but the food is less fresh, making it slightly worse. (This doesn't perfectly reflect reality, though.)

  4. ^

    The formula doesn't work if there's a correlation between values. (For example, maybe you order delivery more from the good restaurants, creating a correlation between the restaurant and delivery values.)

  5. ^

    If it says that the output is undefined, that's probably because of either:

    1. Limited processing power (Desmos approximates an intermediate value in a way that makes the expression undefined), or

    2. An undefined input (for example, a function that is undefined somewhere in the chosen range of x).

  6. ^

     On Desmos, to write a with a subscript b, simply write a_b. This works for all a and b, and is also used for logs [log base a of b is written log_a(b)].

Comments (2)



I didn't even know you could make a table and then embed youtube videos within the table on EA Forum posts! Very cool. 


Thanks :)
