I expect these issues to become less important very soon as new AI-powered technology gets better. To an extent, the Babel fish is already here and nearly usable.
Meant to post this in funding diversification week. A potential source of new and consistent funds: EA researchers/orgs could run research training programs.
Drawing some of the rents away from universities and keeping them in the system. These could be non-accredited but focus on publicly demonstrable skills and offer tailored letters of recommendation for a limited number of participants. These programs could train skills and mentor research particularly relevant to EA orgs and funders.
Students (EA and non-EA) would pay for this. Universities and government training funds could also be unlocked.
(More on this later, I think, I have a whole set of plans/notes).
Thinking of trying to re-host innovationsinfundraising.org, which I stopped hosting maybe a year ago. Not sure I have the bandwidth to keep it updated as a ~living literature review, but the content might be helpful to people.
You can see some of the key content on the Wayback Machine, e.g., the table of evidence/consideration of potential tools.
Any thoughts/interest in using this or collaborating on a revival (focused on the effective giving part)?
Some thoughts on what The Unjournal (unjournal.org) can offer, cf. existing EA-aligned research orgs (naturally, there are pros and cons)
... both in terms of defining and assessing the 'pivotal questions/claims', and in evaluating specific research findings that most inform these.
Non-EA-aligned expertise and engagement: We can offer mainstream (not-EA aligned) feedback and evaluation, consulting experts who might not normally come into this orbit. We can help engage non-EA academics in the priorities and considerations relevant to EAs and EA-adjacent orgs. This can leverage the tremendous academic/government infrastructure to increase the relevant research base. Our processes can provide 'outside the EA bubble' feedback and perhaps measure/build the credibility of EA-aligned work.
Depth and focus on specific research and research findings: Many EA ~research orgs focus on shallow research and comms. Some build models of value and cost-effectiveness targeted to EA priorities and 'axiology'. In contrast, Unjournal expert evaluations can dig deeply into the credibility of specific findings/claims that may be pivotal to these models.
Publicity, fostering public feedback and communication: The Unjournal is building systems for publishing and promoting our evaluations. We work to link these to the scholarly/bibliometric tools and measures people are familiar with. We hope this generates further feedback, public discussion, research, and application of this research.
This year's Nobel prizes for Physics and for Chemistry went to computer scientists (among others).
Previous prizes have stretched the discipline boundaries, e.g., the Economics Prize for Ostrom (poli sci) and Kahneman (psych).
Probably because the prize categories are not set optimally to maximize their stated goal:
Those who, during the preceding year, shall have conferred the greatest benefit on mankind
... especially as the world has progressed.
The current categories are: Physics, Chemistry, Physiology or Medicine, Literature, Economics (*slightly different prize), and Peace
What would be the ideal categories for this, considering what the real world (not just EA) will latch onto?
My quick take, in approximate order of importance to this goal: (revised)
Humanitarian prize (policies, programs, innovation, and action); includes work on global catastrophic risks (this one might be hard to sell) and responses to disasters and pandemics. If feasible, it would also consider animal welfare
Peace (actual work towards international peace, not humanitarian stuff), governance and public policy
Basic and pure science (Life sciences, physical science, math, basic CS research)
Applied science, technology, and engineering
[1x per 3 years] Social science (including economics and history)
[1x per 2 years] Philosophy, journalism, and communication
Project Idea: 'Cost to save a life' interactive calculator promotion
What about making and promoting a ‘how much does it cost to save a life’ quiz and calculator?
This could be adjustable/customizable (in my country, around the world, of an infant/child/adult, counting ‘value added life years’ etc.) … and trying to make it go viral (or at least bacterial) as in the ‘how rich am I’ calculator?
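A minimal sketch of the calculator's core logic; the cost-per-life figure below is a placeholder parameter, not a vetted estimate, and a real version would pull current figures from charity evaluators and adjust for country, age group, etc.:

```python
# Hypothetical core of a 'cost to save a life' calculator.
# COST_PER_LIFE is an illustrative placeholder, not a real estimate.
COST_PER_LIFE = 5000.0  # USD per life saved (placeholder)

def lives_saved(donation_usd: float, cost_per_life: float = COST_PER_LIFE) -> float:
    """Expected lives saved by a donation, assuming a constant cost per life."""
    return donation_usd / cost_per_life

def donation_needed(lives: float, cost_per_life: float = COST_PER_LIFE) -> float:
    """Donation required to save a given number of lives, same assumption."""
    return lives * cost_per_life
```

A customizable version would swap `cost_per_life` per scenario (country, infant vs. adult, 'value added life years', and so on).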
The case
People might really be interested in this… it’s super-compelling (a bit click-baity, maybe, but the payoff is not click bait)!
May make some news headlines too (it’s an “easy story” for media people, asks a question people can engage with, etc. … ‘How much does it cost to save a life? Find out after the break!’)
If people do think it’s much cheaper than it is, as some studies suggest, it would probably be good to correct this misconception… to help us build a reality-based, evidence-based community and society of donors
Similarly, it could get people thinking about ‘how to really measure impact’ --> consider EA-aligned evaluations more seriously
GWWC probably doesn't have the design/engineering time for this (not to mention refining this for accuracy and communication). But if someone else (UX design, research support, IT) could do the legwork I think they might be very happy to host it.
It could also mesh well with academic-linked research so I may have some ‘Meta academic support ads’ funds that could work with this.
[This was originally posted as a response in the wrong thread - I've deleted the incorrectly placed response.]
Hi, David,
Thanks for tagging us in this suggestion! We're happy to see people talking about the creation of more compelling resources to correct misperceptions and get people thinking about the true cost of saving a life.
This doesn't seem exactly like what you have in mind, as it was more narrowly focused on GiveWell's recommended charities, but in the past we provided an impact calculator on our site. It allowed users to insert a donation amount and choose a GiveWell top charity to give to, and would return the number of outputs (e.g., nets or vitamin A supplements distributed) and outcomes (e.g., lives saved).
We stopped sharing the impact calculator in November 2021, because we didn't feel confident enough in our ability to produce a useful forward-looking estimate of an individual donation's impact. We now report on the impact of past grants directed by GiveWell (see this spreadsheet, for example, and our 2021 cost per life saved estimates for top charities). We feel that giving the estimated cost per life saved of a past grant to a program serves as a helpful proxy for the impact of a future donation to that same program, even if we can't count on the impact remaining the same.
We've written a bit more about why we focus on backwards-looking impact estimates here and here.
I've thought about something similar. I'm surprised no one has done it yet.
I was thinking you could click on a bunch of different organizations, and it would show the resulting QALYs or whatever metric.
For example, there’d be a bunch of orgs with nice ‘cost to save a life’ information; then if you clicked on donating to your university, it would display something like the annual interest your donation earns as part of the endowment? Just as a way to illustrate differences in impact.
Modest proposal on a donation mechanism for people doing direct work?
Preamble
Employees at EA orgs and people doing direct work are often also donors/pledgers to other causes. But charitable donations are not always exactly 1-1 deductible from income taxes. E.g., in the USA it’s only deductible if you forgo the standard deduction and ‘itemize your deductions’, and in many countries in the EU there is very limited tax deductibility.
So, if you are paid $1 more by your employer/funder and donate it to the Humane League, Malaria Consortium, etc., the charity only ends up with maybe $0.65 on the margin in many cases. There are ways to do better at this (set up a DAF, bunch your donations…) but they are costly (a DAF takes fees) and imperfect (whenever you itemize you lose the standard deduction, if I understand correctly).
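The marginal arithmetic here can be sketched as follows (the 35% marginal rate is an illustrative assumption, not a claim about any particular tax code):

```python
# Sketch of how much of an extra $1 of salary reaches a charity,
# with and without deductibility. The tax rate is illustrative.

def charity_receives(extra_pay: float, marginal_tax: float, deductible: bool) -> float:
    """Amount the charity gets if you donate all of an extra dollar of pay."""
    if deductible:
        # The donation is deducted from taxable income,
        # so the full extra pay can be passed on.
        return extra_pay
    # Otherwise you can only give what's left after income tax.
    return extra_pay * (1 - marginal_tax)
```

At a 35% marginal rate without deductibility, the charity nets $0.65 of each extra dollar, matching the figure above.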
Proposal
Funders/orgs (e.g., Open Phil, RP) could agree that employees are allowed to relinquish some share of their paycheck into some sort of general fund. The employees who do so are allowed to determine the use of these funds (or ‘advise on’, with the advice generally followed).
Key anticipated concerns, responses
Concern: This will lead to a ‘pressure to donate/relinquish’ if the employers, managers, or funders are aware of it
Response: This process could be managed by ops and by someone at arms-length who will not share the data with the employers/managers/funders. (Details need working out, obviously, unless something like this already exists)
Concern - Legal issues: Is this feasible? Would these relinquishments be seen by governments as actually income?
Response: ??
Concern - crowding out: If the funder knows that the people/orgs it funds give back to charities, it may shift its funding away from these charities, nullifying the employees’ counterfactual impact
Response: This is hardly a new issue, hardly unique to this context; it’s a major question for donors in general, through all modes; so maybe not so important to consider here. … To the extent it is important, it could be reduced if we can keep the exact target and amount of the donations unknown to the funders
Concern - “Org reputation … why not give back to the org?”
Maybe a stretch, but I could imagine someone arguing “If your (e.g., RP) employees ask you to redirect paychecks to a fund, which largely goes to the Humane League, Malaria Consortium, … does this indicate your employees don’t think RP is the best use of funds”?
Responses: Unlikely to be a concern. Employees may want to ‘hedge their bets’ because of moral uncertainty, and because of the good feeling they get from direct impact of donations.
Responses: Keeping the recipient of these funds hidden from outsiders
... It reminded me of my thoughts on the ‘imposter syndrome’.
I think there are many people who are under-confident in their abilities, both overall, and in relation to other people. Perhaps this disproportionally tends to affect people in the EA and rationalist community, because we are more introspective and skeptical.
But there are also people in this world who are in some way ‘imposters’, in the sense that they don’t have the training for their position, or they (or their organization) are claiming much more than they are actually doing. In some cases it is useful for these people and orgs to consider “how can we level up our abilities and accomplishments, and moderate our claims?”[1]
This is also real, and we don’t want to convey that “everyone who thinks they are over their heads/over-claiming is merely suffering from imposter syndrome”. Maybe some have IS, but some are actually having a meaningful and useful insight that they can benefit from… if they are not paralyzed by shame into inaction.
This ‘not everything is IS’ also applies when considering individuals and companies making big claims, or modest ones. I don’t think we should always judge these in the light of “these people/orgs are all probably better than they say, because everyone has IS these days.”

I also think that so-called IS may often reflect ‘a whole sector is under-trained and overclaiming’. E.g., if ‘everyone doing [machine learning, economic analysis, whatever] doesn’t understand the principles, is doing a lot of guesswork, and writes things up as if they are clear and certain’… this is a problem. If you are particularly concerned that you are doing the above, you may not be an imposter ‘relative to others in the sector’, but it still seems like a good insight to have. And perhaps more people ‘revealing that they are not wearing imperial clothes’ could help change the dynamic.
In my own case, for example, I think I was underprepared for certain aspects of my PhD program. As an undergraduate I jumped right into Calculus 1 without taking pre-calculus. Here I struggled desperately and barely passed … and I lost out on learning some fundamentals and deep mathematical insights. ↩︎
In some cases it is useful for these people and orgs to consider “how can we level up our abilities and accomplishments, and moderate our claims?"
I agree this is an important message for some people and circumstances. For instance, it would probably have been a good message for me when I started doing research on longtermist strategy (from an s-risk perspective) in 2014-2017. I mostly pushed through impostor syndrome because there weren't many other people doing similar things, so it felt like "I know it's bad but, looking around, it may just be good enough to be useful." In hindsight, I think the feeling was telling me that I should have focused less on searching for conclusions (by "winging it") and more on improving my understanding and skill building. (That said, "searching for conclusions" is a crucial habit and people should be trying it with some amount of their attention from the very start, otherwise it's difficult to acquire it later.)
The Dunning-Kruger effect is real. But with a few basic sanity checks, I believe any thoughtful EA can determine whether it's imposter syndrome vs actual under-qualification.
If you have evidence to support your non-trivial investment in the area—classes, degrees, self-directed learning, projects, jobs—you are probably at least qualified for an entry-level position in a given area.
Probably the easiest way to check is by asking an impartial 3rd party, like an 80kH Advisor, or even just someone who already has experience working in that field.
Note that this is heavily contested. A lot of the observed phenomenon in the studies (qualitatively: incompetent people thinking they're average, great people thinking they're only good) can be explained by "better than average" effect + metrics not being perfect + natural mean regression.
And of course pop science accounts of Dunning-Kruger are even more unhinged than what D-K claimed.
My own best guess is that the claimed effect is real but small.
Impact & Results scores of livelihood support programs are based on income generated relative to cost. Programs receive an Impact & Results score of 100 if they increase income for a beneficiary by more than $1.50 for every $1 spent and a score of 75 if income increases by more than $0.85 for every $1 spent. If a nonprofit reports impact but doesn't meet the threshold for cost-effectiveness, it earns a score of 50.
My charitable interpretation is that the "$0.85" number is meant to represent one year's income, and to imply a higher number over time (e.g. you have new skills or a new business that boosts your income for years to come).
But I also think it's plausible that "$0.85" is meant to refer to the total increase, such that you could score "75" by running a program that, in your own estimation, helps people less than just giving them money.
(The "lowest score is 50" element puzzled me at first, but this page clarifies that you score "0" if CN can't find enough information to estimate your impact in the first place.)
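As I read it, the scoring rule described above amounts to a simple threshold function (this is a sketch of my reading, not CN's actual code; the function name and the None-for-unknown convention are mine):

```python
from typing import Optional

# Sketch of Charity Navigator's Impact & Results rule for livelihood
# programs, per the quoted description. None = CN couldn't find enough
# information to estimate impact at all.
def impact_results_score(income_per_dollar: Optional[float]) -> int:
    """Score based on beneficiary income generated per $1 spent."""
    if income_per_dollar is None:
        return 0    # no estimable impact information
    if income_per_dollar > 1.50:
        return 100
    if income_per_dollar > 0.85:
        return 75
    return 50       # reports impact but misses the cost-effectiveness bar
```

Note that under the second reading in the text, a program returning exactly $1.00 of income per $1 spent (no better than a cash transfer) would still score 75.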
*****
Still, this is much better than the original CN setup, and I hope this is an early beta version with many improvements on the way.
Also, the source of consciousness ("which configurations of matter are conscious?") seems a bit different from moral status ("which configurations of matter do we care about?").
A paperclip maximiser could have consciousness; that doesn't have to mean we care much about it or are willing to sacrifice our lives to ensure its survival.
But why not? How do we justify that?
Basically I think humans just care about anything that looks similar to human beings. (Which makes sense evolutionarily.)
That may be what we do care about, but how can we justify that in terms of what we should care about?
I'm driving up to upstate New York today to visit my dad and take advantage of the promotions, and staying through the weekend. I live about 1-2 hours from the NYS border, so I can return for a follow-up if necessary.
… in order to donate to highly effective charities.
Steps taken (some in error)
Caesars
Soon after Robi’s earlier post … (25 Jan)
Started a Caesar’s account,
deposited and withdrew $50 while out of state (I had thought it meant ‘deposit at least $50 and you will get the full credit’... didn’t realize first deposit was the determining factor),
requested that they let me redo this to qualify for the maximum bonus (no response)
Fanduel
First bet up to $1000 will be refunded in betting credit if lost.
Terms and conditions HERE
Jan 25 – drove to Connecticut, started an account
Tried to fund account
With bank transfer (my internet bank doesn’t seem eligible)
With Paypal (unsuccessful, I forget why)
With UK card (foreign cards not accepted)
With bank's debit card – declined, but since then I’ve asked for the account to be unblocked
5 Feb 2022 : Trying this from New York State
Needed to download the ‘verify location app’ and install it. Even after this, bank transfer didn’t work, but Paypal funded by my bank did work, deposited 1k.
Next I tried to look at bets but it again said I was out of state and ineligible. I’m in upstate NY so that must be wrong. Next I tried clearing the cache, didn’t work. So I downloaded it on my phone and this did work.
Ok now to bet – I want a long odds bet but maybe not too long … because I want to demonstrate that this works OK (and I also have a psychological barrier to betting on long shots, I think). And I want a decent odds bet, no big house ‘vig’, so I signed up for the free OddsJam trial. But this was scaryish because the default was the 1-year subscription ($999!), which seems a big risk if I forget to cancel. But I finally figured out how to do ‘monthly’, which only has an $89 forgetfulness risk.
Within OddsJam it doesn’t tell you which ones are the highest EV unless you sign up for the non-free-trial premium arbitrage version or something. But you can go to ‘odds’ and find some things that look decent, resolving soon (in my case, important, because I have a limited time). I selected a few sports. Nothing resolving while I’m sleeping, because then I can’t sleep!
Basketball seems maybe the best … low vig, frequent stuff happening now:
Georgetown-Providence seems OK: low vig, For Providence FanDuel offers the best odds of all the casinos. Wait – I got it backwards: Providence is favored here … the ‘negative sign’ means ‘less return’ I guess. Opposite of what you want for the first bet. OK, trying again.
OK, maybe Milwaukee is the one. Let me see what “+440” means.
$100 bet pays $440 (not including the stake) … maybe that’s what the “+440” meant.
But I need to bet $1000 to take advantage here, and the Max wager is $455 for this bet!
I’ll check again tomorrow if that’s upped. Maybe they need more people on both sides. If not maybe I go with a slightly shorter odds bet on the NBA “Brooklyn Nets” (TIL Brooklyn has a basketball team)
Feb 6, morning
“Best odds” on Oddsjam change from day-to-day. Of course, this doesn’t mean that Fanduel is offering poor odds, but still seems like a decent heuristic to choose one where ‘Fanduel offers the best’. I ended up going with a 5-1 (long odds) bet on Maryland over Ohio State (consulting OddsJam on this of course)...
When I made the bet it did not give me any indication that the ‘credit refund’ would happen! I just have to have faith that I’ve complied with the offer!? Wait until 1pm to see if I'm very lucky, and if not, whether they refund me with $1000 in credit as promised. (I followed all the rules as far as I can tell).
Updates: credit was refunded. I then ended up making a bunch of diverse bets and more or less got the money back
I also gained $250 from the Betrivers promotion, which I was able to “play through”; I ended up netting $250 for a lot of work and stress.
Update: earned $280 in the form of seven free $40 bets for betting $5 on the Rams in the Super Bowl. But I now have to go back to New York State to place the bets before they expire.
When you say "feeling", are you referring to conscious experience of the AI, or mechanistic positive and negative signals?
The former. The latter has no moral patienthood I guess
If consciousness, super-high uncertainty on what consciousness even is, what the correct ontology for it is. But can be discussed.
I've been reading more about this and I realize there is great disagreement
If positive and negative reward signals, then AI today already runs based on positive and negative reward signals as you mention.
Of course, but their 'conscious experience' of these signals need not agree with how they are coded in the algorithm. They could 'feel pain from maximizing' ... we just don't know.
David Reinstein: I have argued against this idea of 'room for more funding' as a binary thing.
I generally imagine that in these areas there is always room for more funding, at least over a horizon of a year or more.
It's just a combination of
diminishing returns, perhaps past a threshold of 'these interventions are better than alternatives'
limited capacity because of short-run constraints that take some time to adjust (hire more staff, negotiate more vaccine access, assess new areas to administer vaccines)
Almost no cost function should have an 'infinite slope' past a certain output, particularly not in the non-very-short run. Similarly here.
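A toy illustration of this point: a smooth, concave impact curve has marginal returns that fall as funding grows but never hit zero, and no sudden jump to infinite cost past a threshold (the log functional form is purely illustrative):

```python
import math

# Toy diminishing-returns impact curve: concave, so marginal
# returns fall with funding, but they never reach zero and the
# implied cost of additional impact never becomes infinite.

def impact(funding: float) -> float:
    """Illustrative concave impact-of-funding function."""
    return math.log1p(funding)

def marginal_impact(funding: float, eps: float = 1e-6) -> float:
    """Numerical marginal impact of one extra unit of funding."""
    return (impact(funding + eps) - impact(funding)) / eps
```

So 'room for more funding' shows up here as a smoothly declining (but always positive) marginal return, not a binary cutoff.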
Skills/training/EA jobs, 'Should you do an Economics PhD (or masters… see later)'; what do you need to learn to work at an EA org? How to level up on this stuff and prove value
These were the most frequent questions I got and discussions I had at the conference, mainly with UG students, but also with people at career pivot points. (Maybe the second-biggest was from PhD students and academics looking to learn more about EA and RP, and to have more impact)
I'm working on an essay/post/resource answering these questions and giving my opinions and experience in a Google Doc here. I'd love your feedback.
EA aligned version of "Oxfam stores" ... in the USA+
I used to buy a lot and give away a lot of stuff at Oxfam stores in the UK. I don’t agree with all of the approaches and campaigns but I think that they do a great deal of good. I think that before their prostitution scandal broke the stores were earning about £20 million per year.
Do we have anything like that in the US? We have Goodwill and the Salvation Army but those are doing domestic charity only and thus an order of magnitude less effective, I suspect.
This made me think: would there be any value potential in having a store like this, especially in the USA that was supporting a variety of EA causes (Global health, animal welfare, reducing existential risk…)? If it’s done right it might raise a few tens of millions of dollars per year at least. (Maybe much more. I'm seeing very inconsistent numbers for Goodwill’s and Salvation Army’s revenues for example). [1]
I suspect that much of this would be counterfactual because people in the USA tend to give domestically only, and the people using this store would otherwise be going to Goodwill or Salvation Army.
My impression is that donations themselves might be the minority of the benefit. The presence of the Oxfam stores in the UK also had big community building and public awareness benefits (for Oxfam). Nearly every moderate sized city/town had an Oxfam store which was pretty stylish and had lots of volunteers and maybe some activities around it. It was also not just students but a lot of other people were involved with it.
Any thoughts on whether this idea might have legs?
The annual report shows ~$47 million in goods sales, but Forbes reports $5.8 billion in revenue, mostly 'other income' ↩︎
This is a really interesting idea! I'm very fond of charity shops so I love the idea of making ones for EA charities. I have no idea how easy or hard it is to do and how it compares to other fundraising tactics, but it seems like it could have a big impact both from profits and from raising awareness. It could be a good thing to do for people with experience starting or running shops.
Are you engaging in motivated reasoning ... or committing other reasoning fallacies?
I propose the following epistemic check using Elicit.org's "reason from one claim to another" tool:
Whenever you have a theory that A→B
Feed this tool your theory, negating one side or the other[1] A→¬B
and/or ¬A→B
And see if any of the arguments it presents seem equally plausible to your arguments for A→B
If so, believe your arguments and conclusion less.
Caveat: the tool is not working great yet, and often requires a few rounds of iteration, selecting the better arguments and telling it "show me more like this", or feeding it some arguments
General notes on "Sports Betting for EA" … and ‘ways to (not) screw it up’
Some lessons from my experiment and understanding (writing up my experience HERE, when I get a chance)
As written elsewhere, there are basically 2-3 types of rewards.
The "deposit match" rewards give you some house money (“bonus”) when you sign up and make your first deposit. The ones I've seen will give you this house money in an amount equal to that first deposit.
Risk-free bets: When you start an account and make a deposit, some online casinos give you your first bet "risk-free". What this means is that if you place an eligible bet and lose you will be refunded the amount you bet – not in cash but in what I'm calling house money.
Rewards for taking particular actions, making particular bets, winning certain bets, etc. For example, DraftKings is offering a bonus prize of a few hundred dollars (house money, I presume) if you win your first bet of $5 or more within a certain category.
What is this “house money”? The rewards and bonuses cannot be withdrawn immediately; there are certain “playthrough requirements”. From what I'm seeing, there are “1X playthrough requirements” and 20x (or something) playthrough requirements. (Don't bother with bonuses involving the latter: in the process of playing through them, you will give the casino back a lot of money, as it takes a cut of every bet.)
But even with 1X requirement there are some caveats, and some bets do not count as playing through. [See below ‘Check that your bets…’]
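The playthrough arithmetic can be sketched roughly: if the house keeps an average cut ("vig") on each bet, cycling a bonus through n required bets leaves about bonus × (1 − vig)^n in expectation. The 4.5% vig below is an illustrative assumption:

```python
# Back-of-envelope: expected money left after meeting an n-times
# playthrough requirement, if the house keeps an average cut `vig`
# of each bet. Figures are illustrative, not from any casino's terms.

def expected_after_playthrough(bonus: float, vig: float, n_times: int) -> float:
    """Expected value remaining after betting the bonus n_times over."""
    return bonus * (1 - vig) ** n_times
```

With a 4.5% vig, a 1x playthrough costs you about $45 per $1,000 of bonus, while a 20x playthrough leaves only around $400, which is why the high-multiple bonuses aren't worth chasing.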
Ways to (not) screw it up (or inconvenience yourself)
Don’t miss an opportunity when signing up for an account or depositing money.
It's not clear to me to what extent the "promotion codes” are necessary to get the promotion, but they might be in some cases.
If there is a deposit match, when you make your first deposit, make sure that very first deposit is the amount that can achieve the maximum deposit match
Make sure you are in the right state and can verify this. Some sites have specific tracking software you need to download; others seem to just use something within your browser. However, in my experience it sometimes gets it wrong and says you are not in the state when you actually are. It seems to work better when you are closer to multiple wifi spots. Sometimes clearing your cache might help, or using an incognito browser, but I didn't have consistent results with that. Note also that your browser should be set to allow location access. If you download the casino’s geo-tracking software, you also need to give that software location permission.
But in my experience, where my computer failed to demonstrate I was in the state I was in, my phone (iPhone) almost always worked, particularly once I downloaded the casinos’ apps.
You will need to upload/share some photo ID such as a driver's license, and at least in one case (Caesars) a utility bill as well. I don't think you need to show residence in the state, just that you are really who you say you are. If you are not comfortable doing this (I do think it's pretty secure), don't bother.
Make sure you have access to your phone for two-factor authentication. The sites and apps continually ask you for this, and you often get logged out and have to log in again with this two-factor step.
When you make a ‘risk-free bet’, make sure the terms apply. I didn't actually have any issues with this, but there are so many terms and conditions I would make sure before you bet. Also, the general advice is "for risk-free bets, you should bet on something at least somewhat risky, otherwise you are wasting the reward.”
Check that your bets are actually with the ‘house money’ (bonus/reward) and are eligible for playthrough requirements.
Not all bets ‘count equally’. You might accidentally bet your cash and not the house money. You might bet with the house money in a way that doesn’t qualify as ‘play through.’
Not all bets allow you to use the house money or qualify as ‘playing through’ that money. In some cases, if you bet it on a "nearly sure thing” (e.g., what they call -200 or shorter odds), this does not count towards your playthrough! They may not let you use the house money for this, or if they do, even if you win you will not be able to withdraw it without playing more. Be sure you know the rules, and….
When in doubt, go to their chat helpline and ask directly. They were often helpful. But even there, I'm not sure all of their help team necessarily gets it right; in at least one case they didn’t seem aware of the caveat above. … But at least you will have a record of the chat you can show if you need to complain.
Make sure you have time (in the state) to use the house money, and don’t wait too long: the rewards seem to expire after a period, sometimes a day or a week.
How to make reasonably good and safe bets/betting portfolios (ideally with house money)
You need to bet on something. As noted above, for the risk-free bets you want something fairly risky, something with “+200 odds or higher” perhaps. (I think +200 means that if you bet $1000 and win, you get $2000 plus your initial $1000 stake back, and can withdraw $3000).
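To make the odds notation concrete, here's a quick sketch in Python (helper names are my own, not anything official):

```python
# American odds: +X means a $100 stake wins $X profit;
# -X means you must stake $X to win $100 profit.

def payout(stake: float, american_odds: int) -> float:
    """Total returned (profit plus stake) if the bet wins."""
    if american_odds > 0:
        profit = stake * american_odds / 100
    else:
        profit = stake * 100 / abs(american_odds)
    return stake + profit

def implied_prob(american_odds: int) -> float:
    """Break-even win probability implied by the quoted odds."""
    if american_odds > 0:
        return 100 / (american_odds + 100)
    return abs(american_odds) / (abs(american_odds) + 100)

print(payout(1000, 200))             # 3000.0: $2000 profit plus the $1000 stake
print(round(implied_prob(200), 3))   # 0.333
print(round(implied_prob(-200), 3))  # 0.667
```

So a +200 bet needs to win more than about a third of the time to break even, which is why books treat anything "shorter" than -200 as a near-sure thing.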
Finding OK bets, OddsJam
As stated elsewhere, OddsJam is a site that provides a range of information. You can sign up for a trial account but….
Don’t forget to cancel an OddsJam trial, and probably best to trial a ‘monthly membership’ rather than yearly. If you trial a yearly and you forget to cancel in a week you could be out $1,000.
There's a premium version of OddsJam with no free trial that claims to find the highest expected-value bets. But the regular one does have a list of sports and sporting events, tells you which casinos offer the best odds, and shows the house ‘vig’ (the share of all bets the house keeps on average, because of the spread between the odds on each side; e.g., one may be -100 and the other only +50).
I wouldn't bet on anything with a “vig” of 5% or more. My impression was that US basketball games had pretty low vigs but (Euro?) soccer games had ridiculously high ones.
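Under the usual 'overround' definition, you can compute the vig yourself from the two sides' American odds; a sketch (the -110/-110 line is a typical basketball quote, and OddsJam may normalize the number somewhat differently):

```python
def implied_prob(american_odds: int) -> float:
    """Break-even win probability implied by American odds."""
    if american_odds > 0:
        return 100 / (american_odds + 100)
    return abs(american_odds) / (abs(american_odds) + 100)

def vig(odds_a: int, odds_b: int) -> float:
    """House edge as the 'overround': the implied probabilities
    of the two sides sum to more than 1, and the excess is the vig."""
    return implied_prob(odds_a) + implied_prob(odds_b) - 1

print(round(vig(-110, -110), 4))  # 0.0476, i.e., roughly 4.8%
```

A fair two-sided market would have implied probabilities summing to exactly 1; the farther above 1, the worse the bet.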
Next, you might look for a game where one side has odds somewhat close to the level you're looking for, i.e., the amount of risk you want to take (unless you are hedging, in which case that might not matter; see below). On OddsJam you can browse for this while also checking whether your casino is offering close to the best odds among the casinos. (Larger “+” numbers, and “-” numbers closer to zero, are better.)
Low risk bets and portfolios
So now you have a ‘house money’ reward (perhaps just the return of the money you lost on the risk-free bet) and you want to get it out. If you have accounts with multiple sites and a similar amount you want to get out of each, there's a pretty easy way to do this: you can bet on opposite teams on opposite sites, i.e., fully hedge your bet. Check how much each side will pay if it wins on each site, and bet an amount on each that makes those payouts roughly equal. You can do this in combination with OddsJam to try to guarantee as close to 100% of your money back as possible (over 100% is also conceivable but would seem to be a rare ‘arbitrage opportunity’).
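The fully-hedged case can be sketched numerically (the odds and stakes below are made up for illustration):

```python
def decimal(odds: int) -> float:
    """American odds -> decimal odds (total return per unit staked)."""
    return 1 + (odds / 100 if odds > 0 else 100 / abs(odds))

def hedge_stake(stake_a: float, odds_a: int, odds_b: int) -> float:
    """Stake to place on the opposite side at the other book so the
    gross payout is the same whichever team wins."""
    return stake_a * decimal(odds_a) / decimal(odds_b)

stake_a = 1000                # e.g., house money on side A at one book
odds_a, odds_b = 150, -170    # illustrative quotes at the two books
stake_b = hedge_stake(stake_a, odds_a, odds_b)
payout_either_way = stake_a * decimal(odds_a)
print(round(stake_b, 2))            # 1574.07
print(round(payout_either_way, 2))  # 2500.0
# Guaranteed payout as a share of everything staked (close to 1 if vigs are low):
print(round(payout_either_way / (stake_a + stake_b), 3))  # 0.971
```

In this made-up example you lock in about 97% of the total staked regardless of the result; with house money on one side, the cash you walk away with can still be a large net gain.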
If you only have an account with one site (or more money you are trying to get out of one site than another), I think you should:
look for eligible bets with odds as low as possible (very likely to win, low payoff, but remember these still need to be risky enough to qualify for the playthrough requirement), and
make several small bets rather than one large bet, to lower the overall variance of your outcome (see the ‘law of large numbers’, ‘portfolio diversification’, etc.).
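A quick simulation illustrates the diversification point (win probability and odds are arbitrary placeholders):

```python
import random
import statistics

random.seed(0)

DEC = 1.6    # decimal odds on each bet (illustrative)
P_WIN = 0.6  # assumed true win probability (illustrative)

def net_return(stake: float) -> float:
    """Total returned from one bet: stake * odds if it wins, else 0."""
    return stake * DEC if random.random() < P_WIN else 0.0

def run(n_bets: int, bankroll: float = 1000) -> float:
    """Split the bankroll evenly across n_bets independent bets."""
    return sum(net_return(bankroll / n_bets) for _ in range(n_bets))

one_big = [run(1) for _ in range(5000)]
many_small = [run(20) for _ in range(5000)]
print(round(statistics.mean(one_big)), round(statistics.stdev(one_big)))
print(round(statistics.mean(many_small)), round(statistics.stdev(many_small)))
```

The means come out about the same, but splitting into 20 bets cuts the standard deviation by roughly a factor of sqrt(20), so you are far less likely to walk away with nothing.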
Warning: this can be stressful and distracting to work
The return seems potentially pretty good, but (particularly for the casinos with “risk-free bets” rather than deposit matching) there are ways you can lose money if you are not careful. It can be pretty stressful and you have to concentrate on what you're doing. Even though the process, done correctly, only takes a few hours, if you are anticipating bets, ruminating over your bad choices, or losing sleep over upcoming games, this can take up a lot more of your life. For some people this process might be fun; others might find it traumatic, and some might really like the excitement and adrenaline. But it may take you away from other things you are trying to focus on. This was the case for me: so far it’s been interesting, and sometimes fun and rewarding to see when ‘I got it right’, but overall it was not relaxing; it was rather stressful and pulled me away from other important things.
Thanks so much for sharing these tips! With regards to the info on OddsJam:
I use the app "Todoreminder" to remind myself to cancel subscriptions like this.
You can always buy a $10 visa gift card online and register with that instead of your credit card if you're worried about forgetting to cancel your subscription.
If you're using OddsJam premium, I wouldn't worry about betting on specific sports. OddsJam will show you where the best bets are for the sports book you're betting with accounting for the vig. Generally speaking though, you're right that two-way markets (meaning only two outcomes can happen) take less vig than futures markets (e.g., the winner of this year's NBA championship) or three-way markets (like soccer where a game can end in a tie).
AI consciousness and valenced sensations: unknowability?
Variant of Chinese room argument? This seems ironclad to me, what am I missing:
My claims:
Claim: AI feelings are unknowable: Maybe an advanced AI can have positive and negative sensations. But how would we ever know which ones are which (or how extreme they are)?
Corollary: If we cannot know which are which, we can do nothing that we know will improve/worsen the “AI feelings”; so it’s not decision-relevant
Justification I: As we ourselves are bio-based living things, we can infer from the apparent sensations and expressions of bio-based living things that they are happy/suffering. But for non-bio things, this analogy seems highly flawed. If a dust cloud converges on a ‘smiling face’, we should not think it is happy.
Justification II. (Related) AI, as I understand it, is coded to learn to solve problems and maximize things: to optimize certain outcomes or do things it “thinks” will yield positive feedback.
We might think then, that the AI ‘wants’ to solve these problems, and things that bring it closer to the solution make it ‘happier’. But why should we think this? For all we know, it may feel pain when it gets closer to the objective, and pleasure when it avoids this.
Does it tell us it makes it happy to come closer to the solution? That may be merely because we programmed it to learn how to come to a solution, and one thing it ‘thinks’ will help is telling us it gets pleasure from doing so, even though it actually feels pain.
A colleague responded:
If we get the AI through a search process (like training a neural network) then there's a reason to believe that AI would feel positive sensations (if any sensations at all) from achieving its objective since an AI that feels positive sensations would perform better at its objective than an AI that feels negative sensations. So, the AI that better optimizes for the objective would be more likely to result from the search process. This feels analogous to how we judge bio-based living things in that we assume that humans/animals/others seek to do those things that make them feel good, and we find that the positive sensations of humans are tied closely to those things that evolution would have been optimizing for. A version of a human that felt pain instead of pleasure from eating sugary food would not have performed as well on evolution's optimization criteria.
OK but this seems only if we:
Knew how to induce or identify "good feelings"
Decided to induce these and tie them in as a reward for getting close to the optimum.
But how on earth would we know how to do 1 (without biology at least) and why would we bother doing so? Couldn't the machine be just as good an optimizer without getting a 'feeling' reward from optimizing?
Facebook ads: can you really do A/B testing on a comparable audience?
On Facebook ‘Lift testing’
… can you really compare ‘ad A vs ad B’ to see which works better on a comparable audience?
Braun and Schwartz (Hat tip, Josh Lewis)… seem to think this is NOT possible in the current FB setup (and maybe not on most platforms either). ... Because of the way each ad design is separately targeted/optimized to its ‘best audience’.
Smartly seems to imply that multi-cell lift tests do not suffer from this problem.
... But it's unclear if this really implements 'target then randomize'.
Most of Economic research can be deemed EA-relevant in a general sense in that it usually focuses on welfare properties (of equilibria)…
But sometimes it’s the ‘potential Pareto improvement/second welfare theorem’ stuff … this could make the ‘pie higher’ and achieve any improved outcome you like, if it were to be redistributed.
E.g., one could claim (loosely)
A. … “efficient antitrust regulation is an EA cause because it aims to achieve the greatest level of Consumer + Producer surplus”
B. “…which could then yield the greatest social gains if we redistributed it to help the extreme global poor/animal welfare/existential risk reduction”
But you might ask:
For A: “Is this the most important/easiest/biggest way to ‘make the pie higher’“?
For B: “How likely is it that any gains could/would actually be redistributed to then ‘do the most good’”
(Note: I am not making a claim that this is an EA cause candidate.)
Music is an "information good". It is "nonrival" and infinitely and freely sharable. Any positive price leads to "allocative inefficiency".
But a zero price obviously gives no incentive to produce and share music. The best solution is to separate what consumers pay from what musicians receive. The music (and all media and info) would be free to access, but the creators would get a payment equal to the value the listeners and users took from it.
But it's hard to
Know what that VALUE is and
Coordinate a way to get the FUNDS and compensate the creators.
For point 1 (measuring VALUE), the number of plays or amount of time spent listening seems like a possibly OK, but imperfect, measure. E.g., ‘listening in the background’ has less value than ‘listening and really getting into it’. Still, ‘compensation per stream’ seems like the best feasible measure.
However, perhaps because of the market structure and lack of competition, it seems creators are given very little per stream, not enough to motivate the right amount of content to be produced.
For point 2 (getting the FUNDS), an international government mandated tax and funding would be efficient but there are all sorts of difficulties there. (Compulsion, coordinating across governments, is it fair for non-listeners, etc).
Private streaming services seem like a good second-best, but the inefficiency comes when the streaming service charges customers too much, so not everyone joins. (Why is this inefficient? Society could give these people access to ad-free music at no extra cost, but they don't get it.)
Perhaps the best solution would be some sort of streaming service that pays creators more (should that be subsidized?) and offers more differentiated prices to capture the true value consumers are getting. Hard to do, though.
The main point, I think, is that the 'classical economic model' really doesn't work well for information goods, which are becoming more and more of the economy.
As one data point, I'd gladly pay an extra $5-15/month for a "tier" of Spotify that passed along, say, 90% of that extra money to artists. Spotify being mostly private makes it hard to get good digital bling from a higher-tier option, but maybe artists could offer extra rewards to people in that tier?
Much more simply, I'd love to have a "tip the artist" option next to any song, so that when I was especially appreciating something, I could tip the artist a dollar. I'd probably use that option 100-200 times/year.
This seems like it should be a win for Spotify — I see few people angry about Spotify making/keeping too much money, lots of people angry about artists being underpaid. And I think it should be possible to design a tier/tip option that sends the message "you're funding artists" without "we're not".
From some brief research, Spotify paid out over $5 billion to "rights holders"* in 2020 and grossed about $9 billion (they claim to pay out 70% of all revenue). And they have 6500 employees. All of these seem like reasonable numbers, and even boosting artist revenue by 20% would probably feel tiny to critics — now it's half a cent per stream instead of 0.4 cents, hooray — while being a pretty sharp cut for their staff/technical infrastructure.
*Note that this includes record labels; for many artists, Spotify's rate isn't nearly as problematic as the % their record labels take.
Do people in the EA (and maybe rationalist) community have any particular levers we could pull or superpowers that could persuade key influencers and voters?
E.g., Joe Rogan might be able to tip this election. He shows some signs of thoughtfulness and reasoning (at times). Does anyone “here” have a useful connection to him?
There are quite a few posts/some discussion on
The value of language learning for career capital
The dominance of English in EA and the advantages it confers
See, e.g., https://forum.effectivealtruism.org/posts/qf6pGhm9a7vTMFLtc/english-as-a-dominant-language-in-the-movement-challenges
https://forum.effectivealtruism.org/posts/k7igqbN52XtmJGBZ8/effective-language-learning-for-effective-altruists
I expect these issues to become less important very soon as new AI-powered technology gets better. To an extent, the Babblefish is already here and nearly useable.
E.g., the latest timekettle translator earbuds (https://www.amazon.com/dp/B0BTP57ZRM?ref=ppx_yo2ov_dt_b_fed_asin_title&th=1) are getting rave reviews from some people (https://bsky.app/profile/joshuafmask.bsky.social/post/3lcm22p6nsc2o)
Meant to post this in funding diversification week. A potential source of new and consistent funds: EA researchers/orgs could run research training programs.
Drawing some of the rents away from universities and keeping it in the system. These could be non-accredited but focus on publicly demonstrable skills and offer tailored letters of recommendation for a limited number of participants. Could train skills and mentor research particularly relevant to EA orgs and funders.
Students (EA and non-EA) would pay for this. Universities and government training funds could also be unlocked.
(More on this later, I think, I have a whole set of plans/notes).
AIM have already tried to do this for research, and they aren’t sure whether to continue their research fellowship in 2025. I imagine they’d have some very good learnings on this topic if you got in touch!
Interesting. But I wonder if they (or anyone) has considered this
... as a source of funding (student fees, government aid if accredited, training subsidies from government and companies)
rather than merely as an outflow?
Thinking of trying to re-host innovationsinfundraising.org, which I stopped hosting maybe a year ago. Not sure I have the bandwidth to keep it updated as a ~living literature review, but the content might be helpful to people.
You can see some of the key content on the wayback machine, e.g., the table of evidence/consideration of potential tools.
Any thoughts/interest in using this or collaborating on a revival (focused on the effective giving part)?
This, along with the barriers to effective giving might (or might not) also be a candidate for Open Phil's living literature project. (The latter is still hosted, some overlaps with @Lucius Caviola and @Stefan_Schubert's book).
Re "pivotal questions"...
Some thoughts on what The Unjournal (unjournal.org) can offer, cf existing EA-aligned research orgs (naturally, there are pros and cons)
... both in terms of defining and assessing the 'pivotal questions/claims', and in evaluating specific research findings that most inform these.
Non-EA-aligned expertise and engagement: We can offer mainstream (not-EA aligned) feedback and evaluation, consulting experts who might not normally come into this orbit. We can help engage non-EA academics in the priorities and considerations relevant to EAs and EA-adjacent orgs. This can leverage the tremendous academic/government infrastructure to increase the relevant research base. Our processes can provide 'outside the EA bubble' feedback and perhaps measure/build the credibility of EA-aligned work.
Depth and focus on specific research and research findings: Many EA ~research orgs focus on shallow research and comms. Some build models of value and cost-effectiveness targeted to EA priorities and 'axiology'. In contrast, Unjournal expert evaluations can dig deeply into the credibility of specific findings/claims that may be pivotal to these models.
Publicity, fostering public feedback and communication: The Unjournal is building systems for publishing and promoting our evaluations. We work to link these to the scholarly/bibliometric tools and measures people are familiar with. We hope this generates further feedback, public discussion, research, and application of this research.
Most visited sites by users of EA Forum
From a quick go on https://pro.similarweb.com/
The rest are about 1.5-5% daily cross-visitation:
This is such a predictable and unsurprising set of results that it's adorable.
Somewhat helpful/useful to me as a sort of recommendation engine ... gonna try some of those sites.
although tbh there were only 3 or 4 of these sites that I didn't already know or use
This year's Nobel prizes for Physics and for Chemistry went to computer scientists (among others).
Previous prizes have stretched the discipline boundaries, e.g., the Economics Prize for Ostrom (poli sci) and Kahneman (psych).
Probably because the prize categories are not set optimally to maximize their goal
... especially as the world has progressed.
The current categories are: Physics, Chemistry, Physiology or Medicine, Literature, Economics (*slightly different prize), and Peace
What would be the ideal categories for this, considering what the real world (not just EA) will latch onto?
My quick take, in approximate order of importance to this goal: (revised)
Humanitarian prize (policies, programs, innovation, and action); includes reduction of global catastrophic risks (this one might be hard to sell) and response to disasters and pandemics. If feasible, also considers animal welfare
Peace (actual work towards international peace, not humanitarian stuff), governance and public policy
Basic and pure science (Life sciences, physical science, math, basic CS research)
Applied science, technology, and engineering
[1x per 3 years] Social science (including economics, including history)
[1x per 2 years] Philosophy, journalism, and communication
[1x per 2 years] Arts and culture
This keeps the budget about the same.
Project Idea: 'Cost to save a life' interactive calculator promotion
What about making and promoting a ‘how much does it cost to save a life’ quiz and calculator?
This could be adjustable/customizable (in my country, around the world, of an infant/child/adult, counting ‘value-added life years’, etc.) … and we could try to make it go viral (or at least bacterial), as with the ‘how rich am I’ calculator.
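The core logic of such a calculator is trivial; the hard work is the UX and the underlying estimates. A toy sketch, with made-up cost figures (not real charity numbers):

```python
# Toy core of the proposed calculator; cost figures are placeholders,
# not actual charity cost-effectiveness estimates.
COST_PER_LIFE_USD = {"charity_a": 4500, "charity_b": 7000}

def lives_saved(donation: float, charity: str) -> float:
    """Expected lives saved by a one-off donation."""
    return donation / COST_PER_LIFE_USD[charity]

def years_to_save_one_life(annual_donation: float, charity: str) -> float:
    """How long a recurring annual donation takes to save one life."""
    return COST_PER_LIFE_USD[charity] / annual_donation

print(round(lives_saved(1000, "charity_a"), 2))           # 0.22
print(round(years_to_save_one_life(500, "charity_a"), 1)) # 9.0
```

The customizations suggested above (country, age, life-years vs. lives) would just swap in different denominators; the communication and accuracy of those denominators is where the real research effort goes.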
The case
GiveWell has a page with a lot of technical details, but it’s not compelling or interactive in the way I suggest above, and I doubt they market it heavily.
GWWC probably doesn't have the design/engineering time for this (not to mention refining this for accuracy and communication). But if someone else (UX design, research support, IT) could do the legwork I think they might be very happy to host it.
It could also mesh well with academic-linked research so I may have some ‘Meta academic support ads’ funds that could work with this.
Tags/backlinks (~testing out this new feature)
@GiveWell @Giving What We Can
Projects I'd like to see
EA Projects I'd Like to See
Idea: Curated database of quick-win tangible, attributable projects
[This was originally posted as a response in the wrong thread - I've deleted the incorrectly placed response.]
Hi, David,
Thanks for tagging us in this suggestion! We're happy to see people talking about the creation of more compelling resources to correct misperceptions and get people thinking about the true cost of saving a life.
This doesn't seem exactly like what you have in mind, as it was more narrowly focused on GiveWell's recommended charities, but in the past we provided an impact calculator on our site. It allowed users to insert a donation amount and choose a GiveWell top charity to give to, and would return the number of outputs (e.g., nets or vitamin A supplements distributed) and outcomes (e.g., lives saved).
We stopped sharing the impact calculator in November 2021, because we didn't feel confident enough in our ability to produce a useful forward-looking estimate of an individual donation's impact. We now report on the impact of past grants directed by GiveWell (see this spreadsheet, for example, and our 2021 cost per life saved estimates for top charities). We feel that giving the estimated cost per life saved of a past grant to a program serves as a helpful proxy for the impact of a future donation to that same program, even if we can't count on the impact remaining the same.
We've written a bit more about why we focus on backwards-looking impact estimates here and here.
Best,
Miranda Kaplan
GiveWell Communications Associate
For what it is worth, even people one might think would know better, like professors of international development, really get these sorts of questions wrong.
That could also be an interesting promo tag ... 'are you smarter than a professor of international development' :)
I've thought about something similar. I'm surprised no one has done it yet.
I was thinking you could click on a bunch of different organizations, and it will show the resulting QALY or whatever metric.
Like, for example, there'd be a bunch of orgs with nice ‘cost to save a life’ information; then if you clicked on donating to your university, it would display something like the annual interest your donation would earn as part of the endowment? Just as a way to illustrate differences in impact.
Modest proposal on a donation mechanism for people doing direct work?
Preamble
Employees at EA orgs and people doing direct work are often also donors/pledgers to other causes. But charitable donations are not always exactly 1-1 deductible from income taxes. E.g., in the USA it’s only deductible if you forgo the standard deduction and ‘itemize your deductions’, and in many countries in the EU there is very limited tax deductibility.
So, if you are paid $1 more by your employer/funder and donate it to the Humane League, Malaria Consortium, etc., the charity only ends up with maybe $0.65 on the margin in many cases. There are ways to do better (set up a DAF, bunch your donations…) but these are costly (a DAF takes fees) and imperfect (whenever you itemize you lose the standard deduction, if I understand correctly).
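The ~$0.65-on-the-margin figure can be reproduced with simple marginal-rate arithmetic (the 35% rate is an illustrative placeholder, not a claim about any particular tax code):

```python
def reaches_charity(extra_gross: float = 1.0,
                    marginal_rate: float = 0.35,
                    pre_tax: bool = False) -> float:
    """Amount a charity receives from $1 of marginal gross pay.
    pre_tax=True models the proposed relinquishment into a general fund;
    pre_tax=False models donating from take-home pay with no deduction."""
    if pre_tax:
        return extra_gross                      # the full $1 arrives
    return extra_gross * (1 - marginal_rate)    # income tax taken first

print(round(reaches_charity(), 2))              # 0.65
print(round(reaches_charity(pre_tax=True), 2))  # 1.0
```

This is the whole case for the proposal: routing around the non-deductible path raises the charity's take by roughly the donor's marginal tax rate.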
Proposal
Funders/orgs (e.g., Open Phil, RP) could agree that employees are allowed to relinquish some share of their paycheck into some sort of general fund. The employees who do so are allowed to determine the use of these funds (or ‘advise on’ it, with the advice generally followed).
Key anticipated concerns, responses
Concern: This will lead to a ‘pressure to donate/relinquish’ if the employers, managers, funders are aware of it
Response: This process could be managed by ops and by someone at arms-length who will not share the data with the employers/managers/funders. (Details need working out, obviously, unless something like this already exists)
Concern - Legal issues: Is this feasible? Would these relinquishments be seen by governments as actually income?
Response: ??
Concern - crowding out: If the funder knows that the people/orgs it funds give back to charities, it may shift its funding away from these charities, nullifying the employees’ counterfactual impact
Response: This is hardly a new issue, hardly unique to this context; it’s a major question for donors in general, through all modes; so maybe not so important to consider here. … To the extent it is important, it could be reduced if we can keep the exact target and amount of the donations unknown to the funders
Concern - “Org reputation … why not give back to the org?”
Maybe a stretch, but I could imagine someone arguing “If your (e.g., RP) employees ask you to redirect paychecks to a fund, which largely goes to the Humane League, Malaria Consortium, … does this indicate your employees don’t think RP is the best use of funds”?
Response: Unlikely to be a concern. Employees may want to ‘hedge their bets’ because of moral uncertainty, and because of the good feeling they get from the direct impact of donations.
Response: Keep the recipients of these funds hidden from outsiders.
Imposter (syndrome) ?
Building on my response to the Don't think just apply thread.
... It reminded me of my thoughts on the ‘imposter syndrome’.
I think there are many people who are under-confident in their abilities, both overall, and in relation to other people. Perhaps this disproportionally tends to affect people in the EA and rationalist community, because we are more introspective and skeptical.
But there are also people in this world who are in some way ‘imposters’, in the sense that they don’t have the training for their position, or they (or their organizations) are claiming much more than they are actually doing. In some cases it is useful for these people and orgs to consider “how can we level up our abilities and accomplishments, and moderate our claims?”[1]
This is also real, and we don’t want to convey that “everyone who thinks they are over their heads/over-claiming is merely suffering from imposter syndrome”. Maybe some have IS, but some are actually having a meaningful and useful insight that they can benefit from… if not paralyzed by inaction and shame.
This ‘not everything is IS’ also applies when considering individuals and companies making big claims, or modest ones. I don’t think we should always judge these in the light of “these people/orgs are all probably better than they say, because everyone has IS these days.”

I also think that so-called IS may often reflect ‘a whole sector being under-trained and overclaiming’. E.g., if ‘everyone doing [machine learning, economic analysis, whatever] doesn’t understand the principles, is doing a lot of guesswork, and writes things up as if they are clear and certain’… this is a problem. If you are particularly concerned that you are doing the above, you may not be an imposter ‘relative to others in the sector’, but it still seems like a good insight to have. And perhaps more people ‘revealing that they are not wearing imperial clothes’ could help change the dynamic.
In my own case, for example, I think I was underprepared for certain aspects of my PhD program. As an undergraduate I jumped right into Calculus 1 without taking pre-calculus. Here I struggled desperately and barely passed … and I lost out on learning some fundamentals and deep mathematical insights. ↩︎
I really like those points!
I agree this is an important message for some people and circumstances. For instance, it would probably have been a good message for me when I started doing research on longtermist strategy (from an s-risk perspective) in 2014-2017. I mostly pushed through impostor syndrome because there weren't many other people doing similar things, so it felt like "I know it's bad but, looking around, it may just be good enough to be useful." In hindsight, I think the feeling was telling me that I should have focused less on searching for conclusions (by "winging it") and more on improving my understanding and skill building. (That said, "searching for conclusions" is a crucial habit and people should be trying it with some amount of their attention from the very start, otherwise it's difficult to acquire it later.)
The Dunning-Kruger effect is real. But with a few basic sanity checks, I believe any thoughtful EA can determine whether it's imposter syndrome vs actual under-qualification.
If you have evidence to support your non-trivial investment in the area—classes, degrees, self-directed learning, projects, jobs—you are probably at least qualified for an entry-level position in a given area.
Probably the easiest way to check is by asking an impartial 3rd party, like an 80kH Advisor, or even just someone who already has experience working in that field.
Note that this is heavily contested. A lot of the observed phenomenon in the studies (qualitatively: incompetent people thinking they're average, great people thinking they're only good) can be explained by "better than average" effect + metrics not being perfect + natural mean regression.
And of course pop-science accounts of Dunning-Kruger are even more unhinged than what D-K claimed.
My own best guess is that the claimed effect is real but small.
Me: “The Dunning-Kruger effect is real.”
Linch: “…the claimed effect is real…”
Great to know that we are in agreement, Linch! The logical follow-up question is what other factor(s) has (have) a higher impact on the effect?
Interesting. Of course my point is independent of the D-K effect, although that would enhance it.
I’m not saying worse people are more overconfident. I’m just saying ‘some people are overconfident or overstating’.
I’m also suggesting that there may be a secular overstatement of abilities and accomplishments in some fields. Less so among EAs, I suspect.
ImpactMatters acquired by CharityNavigator; but is it being incorporated/presented/used in a good way?
Note: moved to 'regular' post here ...
I spent a few minutes looking at the impact feature, and I... will also go with "not satisfied".
From their review of Village Enterprise:
My charitable interpretation is that the "$0.85" number is meant to represent one year's income, and to imply a higher number over time (e.g. you have new skills or a new business that boosts your income for years to come).
But I also think it's plausible that "$0.85" is meant to refer to the total increase, such that you could score "75" by running a program that, in your own estimation, helps people less than just giving them money.
(The "lowest score is 50" element puzzled me at first, but this page clarifies that you score "0" if CN can't find enough information to estimate your impact in the first place.)
*****
Still, this is much better than the original CN setup, and I hope this is an early beta version with many improvements on the way.
There was some discussion of the original acquisition here.
Historically, Charity Navigator has been extremely hostile to effective altruism, as you probably know, so perhaps this isn't surprising.
Thank you, I had not seen Luke Freeman @givingwhatwecan's earlier post
That 2013 opinion piece/hit job is shocking. But that was 9 years ago or so.
I doubt CN would have acquired IM just to bury it; there might be some room for positive suasion here.
But why not? How do we justify that?
That may be what we do care about, but how can we justify that in terms of what we should care about?
Sports betting promotion capture for charity
I pledge to donate 70% of the net gain to effective charities immediately or within the year 2022. - David Reinstein
I’ll try to follow the guidelines given in the post here: EA Fundraising Through Advantage Sports Betting: A Guide ($500/Hour in Select States) - EA Forum
5 Feb 2022
I'm driving up to upstate New York today to visit my dad and take advantage of the promotions, staying through the weekend. I live about 1-2 hours from the NYS border so I can return for a followup if necessary.
… in order to donate to highly effective charities.
Steps taken (some in error)
Caesars (soon after Robi’s earlier post, 25 Jan)
Started a Caesar’s account,
deposited and withdrew $50 while out of state (I had thought it meant ‘deposit at least $50 and you will get the full credit’... didn’t realize first deposit was the determining factor),
requested that they let me redo this to qualify for the maximum bonus (no response)
Fanduel
First bet up to 1000 will be refunded in betting credit if lost.
Terms and conditions HERE
Jan 25 – drove to Connecticut, started an account
5 Feb 2022 : Trying this from New York State
Needed to download the ‘verify location app’ and install it. Even after this, bank transfer didn’t work, but Paypal funded by my bank did work, deposited 1k.
Next I tried to look at bets but it again said I was out of state and ineligible. I’m in upstate NY so that must be wrong. Next I tried clearing the cache, didn’t work. So I downloaded it on my phone and this did work.
Ok, now to bet – I want a long-odds bet but maybe not too long … because I want to demonstrate that this works OK (and I also have a psychological barrier to betting on long shots, I think). And I want a decent-odds bet with no big house ‘vig’, so I signed up for the free OddsJam trial. But this was scaryish because the default was the 1-year subscription ($999!), which seems a big risk if I forget to cancel. But I finally figured out how to do ‘monthly’, which only has an $89 forgetfulness risk.
Within OddsJam it doesn’t tell you which ones are the highest EV unless you sign up for the non-free-trial premium arbitrage version or something. But you can go to ‘odds’ and find some things that look decent and resolve soon (important in my case, because I have limited time). I selected a few sports. Nothing resolving while I’m sleeping, because then I can’t sleep!
Basketball seems maybe the best … low vig, frequent stuff happening now:
Georgetown-Providence seems OK: low vig, For Providence FanDuel offers the best odds of all the casinos. Wait – I got it backwards: Providence is favored here … the ‘negative sign’ means ‘less return’ I guess. Opposite of what you want for the first bet. OK, trying again.
OK, maybe Milwaukee is the one. Let me see what “+440” means.
Confirming it on Fanduel
It’s fairly long odds, ok:
$100 bet pays $440 (not including the stake) … maybe that’s what the “+440” meant.
But I need to bet $1000 to take advantage here, and the Max wager is $455 for this bet!
I’ll check again tomorrow if that’s upped. Maybe they need more people on both sides. If not maybe I go with a slightly shorter odds bet on the NBA “Brooklyn Nets” (TIL Brooklyn has a basketball team)
Feb 6, morning
“Best odds” on Oddsjam change from day-to-day. Of course, this doesn’t mean that Fanduel is offering poor odds, but still seems like a decent heuristic to choose one where ‘Fanduel offers the best’. I ended up going with a 5-1 (long odds) bet on Maryland over Ohio State (consulting OddsJam on this of course)...
When I made the bet it did not give me any indication that the ‘credit refund’ would happen! I just have to have faith that I’ve complied with the offer!? Wait until 1pm to see if I'm very lucky, and if not, whether they refund me with $1000 in credit as promised. (I followed all the rules as far as I can tell).
Updates: the credit was refunded. I then ended up making a bunch of diverse bets and more or less got the money back.
I also gained $250 from the BetRivers promotion, which I was able to “play through”; I ended up netting $250 for a lot of work and stress.
Update: earned $280 in the form of seven free $40 bets for betting $5 on the Rams in the Super Bowl. But I now have to go back to New York State to place the bets before the offer expires.
The former. The latter has no moral patienthood I guess
I've been reading more about this and I realize there is great disagreement
Of course, but their 'conscious experience' of these signals need not agree with how they are coded in the algorithm. They could 'feel pain from maximizing' ... we just don't know.
"Room for more funding": A critique/explanation
Status: WIP, rough, needs consolidating
David Reinstein: I have argued against this idea of 'room for more funding' as a binary thing. I generally imagine that in these areas there is always room for more funding, at least over a horizon of a year or more.
It's just a combination of
Almost no cost function should have an 'infinite slope' past a certain output, particularly not in the non-very-short run. Similarly here.
Links:
Skills/training/EA jobs, 'Should you do an Economics PhD (or masters… see later)'; what do you need to learn to work at an EA org? How to level up on this stuff and prove value
These were the most frequent questions I got and discussions I had at the conference, mainly with UG students, but also with people at career pivot points. (Maybe the second-biggest was from PhD students and academics looking to learn more about EA and RP, and to have more impact)
I'm working on an essay/post/resource answering these questions and giving my opinions and experience in a Google Doc here. I'd love your feedback.
EA aligned version of "Oxfam stores" ... in the USA+
I used to buy a lot and give away a lot of stuff at Oxfam stores in the UK. I don’t agree with all of the approaches and campaigns but I think that they do a great deal of good. I think that before their prostitution scandal broke the stores were earning about £20 million per year.
Do we have anything like that in the US? We have Goodwill and the Salvation Army but those are doing domestic charity only and thus an order of magnitude less effective, I suspect.
This made me think: would there be any value potential in having a store like this, especially in the USA that was supporting a variety of EA causes (Global health, animal welfare, reducing existential risk…)? If it’s done right it might raise a few tens of millions of dollars per year at least. (Maybe much more. I'm seeing very inconsistent numbers for Goodwill’s and Salvation Army’s revenues for example). [1]
I suspect that much of this would be counterfactual because people in the USA tend to give domestically only, and the people using this store would otherwise be going to Goodwill or Salvation Army.
My impression is that donations themselves might be the minority of the benefit. The presence of the Oxfam stores in the UK also had big community building and public awareness benefits (for Oxfam). Nearly every moderate sized city/town had an Oxfam store which was pretty stylish and had lots of volunteers and maybe some activities around it. It was also not just students but a lot of other people were involved with it.
Any thoughts on whether this idea might have legs?
Annual report reports ~$47 million in goods sales, but Forbes reports $5.8 billion in revenue, mostly 'other income' ↩︎
This is a really interesting idea! I'm very fond of charity shops so I love the idea of making ones for EA charities. I have no idea how easy or hard it is to do and how it compares to other fundraising tactics, but it seems like it could have a big impact both from profits and from raising awareness. It could be a good thing to do for people with experience starting or running shops.
Modest proposal on a donation mechanism for people doing direct work?
(Tag: Donations, efficiency, taxes, administrative innovation)
Preamble
Employees at EA orgs and people doing direct work are often also donors/pledgers to other causes. But charitable donations are not always exactly 1-1 deductible from income taxes. E.g., in the USA a donation is only deductible if you forgo the standard deduction and ‘itemize your deductions’, and in many EU countries tax deductibility is very limited.
So, if you are paid $1 more by your employer/funder and donate it to the Humane League, Malaria Consortium, etc., the charity often ends up with only about $0.65 on the margin. There are ways to do better (set up a DAF, bunch your donations…) but they are costly (a DAF takes fees) and imperfect (whenever you itemize, you lose the standard deduction, if I understand correctly).
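A minimal sketch of the arithmetic above. The function name, the flat 35% marginal rate, and the all-or-nothing itemizing switch are my simplifying assumptions for illustration, not tax advice:

```python
def net_to_charity(donation, marginal_rate, itemizes):
    """How much of an extra $1 of pre-tax pay reaches the charity.

    If the donor itemizes, the gift is deductible, so the dollar passes
    through in full; if they take the standard deduction instead, income
    tax comes out first. Flat marginal rate and all-or-nothing itemizing
    are simplifying assumptions.
    """
    if itemizes:
        return donation
    return donation * (1 - marginal_rate)

print(net_to_charity(1.00, 0.35, itemizes=False))  # 0.65, as in the text
print(net_to_charity(1.00, 0.35, itemizes=True))   # 1.0
```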
Proposal
Funders/orgs (e.g., Open Phil, RP, FHI, CEA) could agree that employees are allowed to relinquish some share of their paycheck into some sort of general fund. The employees who do so would determine the use of these funds (or ‘advise on’ it, with the advice generally followed).
Key anticipated concerns --> responses
Concern: pressure
This will lead to a ‘pressure to donate/relinquish’ if the employers, managers, funders are aware of it
Response: This process could be managed by ops and by someone at arms-length who will not share the data with the employers/managers/funders.
Details need working out, obviously, unless something like this already exists
Concern - Legal issues
Is this feasible? Would governments treat these relinquished amounts as taxable income anyway?
Response: ??
Concern - crowding out
If the funder knows that the people/orgs it funds give back to charities, it may shift its own funding away from these charities, nullifying the employees’ counterfactual impact
Response: This is hardly a new issue, hardly unique to this context; it’s a major question for donors in general, through all modes; so maybe not so important to consider here.
… To the extent it is important, it could be reduced if we can keep the exact target and amount of the donations unknown to the funders
Concern - “Org reputation … why not give back to the org?”
Maybe a stretch, but I could imagine someone arguing: “If your employees ask you to redirect paychecks to a fund which largely goes to the Humane League, Malaria Consortium, etc., does this indicate your employees don’t think RP is the best use of funds?”
Responses: Unlikely to be a concern. Employees may want to ‘hedge their bets’ because of moral uncertainty, and because of the good feeling they get from direct impact of donations.
Response: keep the recipients of these funds hidden from outsiders
Are you engaging in motivated reasoning ... or committing other reasoning fallacies?
I propose the following good epistemic check using Elicit.org's "reason from one claim to another" tool
Whenever you have a theory that A→B
Feed this tool your theory, negating one side or the other[1]
A→¬B
and/or
¬A→B
And see if any of the arguments it presents seem equally plausible to your arguments for A→B
If so, believe your arguments and conclusion less.
Caveat: the tool is not working great yet, and often requires a few rounds of iteration, selecting the better arguments and telling it "show me more like this", or feeding it some arguments
Or the contrapositives of either
General notes on "Sports Betting for EA" … and ‘ways to (not) screw it up’
Some lessons from my experiment and understanding (writing up my experience HERE, when I get a chance)
As written elsewhere, there are basically 2-3 types of rewards.
What is this “house money”? The rewards and bonuses cannot be withdrawn immediately; there are certain “playthrough requirements”. From what I'm seeing, there are “1x playthrough requirements” and 20x (or something) playthrough requirements.^[Don't bother with bonuses involving the latter: in the process of playing through them you will give the casino back a lot of money, as they take a cut of every bet.]
But even with 1X requirement there are some caveats, and some bets do not count as playing through. [See below ‘Check that your bets…’]
Ways to (not) screw it up (or inconvenience yourself)
Don’t miss an opportunity when signing up for an account or depositing money.
Make sure you are in the right state and can verify this. Some sites have specific tracking software you need to download; others seem to just use something within your browser. However, in my experience it sometimes gets it wrong and says you are not in the state when you actually are. It seems to work better when you are closer to multiple wifi spots. Sometimes clearing your cache or using an incognito browser might help, but I didn't get consistent results with that. Note also that your browser should be set to allow location access. If you download the casino’s geo-tracking software, you also need to give that software location permission.
But in my experience, where my computer failed to demonstrate I was in the state I was in, my phone (iPhone) almost always worked, particularly once I downloaded the casinos’ apps.
You will need to upload/share some photo ID such as a driver's license, and at least in one case (Caesars) a utility bill as well. I don't think you need to show residence in the state, just that you are really who you say you are. If you are not comfortable doing this (I do think it's pretty secure), don't bother.
Make sure you have access to your phone for two-factor authentication. The sites and apps continually ask you for this, and you often get logged out and have to log back in with this 2-factor step
When you make a ‘risk-free bet’, make sure the terms apply. I didn't actually have any issues with this, but there are so many terms and conditions I would make sure before you bet. Also, the general advice is "for risk-free bets, you should bet on something at least somewhat risky, otherwise you are wasting the reward.”
Check that your bets are actually with the ‘house money’ (bonus/reward) and are eligible for playthrough requirements.
Not all bets ‘count equally’. You might accidentally bet your cash and not the house money. You might bet with the house money in a way that doesn’t qualify as ‘play through.’
Not all bets allow you to use the house money, or qualify as ‘playing through’ that money. In some cases, if you bet it on a "nearly sure thing” (e.g., what they call -200 or shorter odds), this does not count towards your playthrough! They may not let you use the house money for this, or if they do, even if you win you will not be able to withdraw it without playing more. Be sure you know the rules, and….
When in doubt, go to their chat helpline and ask directly. They were often helpful. But even there, I'm not sure all of their help team necessarily gets it right; in at least one case they didn’t seem aware of the caveat above. … But at least you will have a record of the chat you can show if you need to complain.
Make sure you have time (in the state) to use the house money, and don’t wait too long: The rewards seem to expire after a period that is sometimes a day or week or something.
How to make reasonably good and safe bets/betting portfolios (ideally with house money)
You need to bet on something. As noted above, for the risk-free bets you want something fairly risky, something with “+200 odds or higher” perhaps. (I think +200 means that if you bet $1000 and win you get $2000 plus your initial $1000 stake, and can withdraw $3000).
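The American-odds convention described in the parenthetical above can be sketched roughly as follows. The function names are mine, and this is just my reading of the convention, not a quote from any sportsbook's rules:

```python
def american_payout(odds, stake):
    """Profit (excluding the returned stake) on a winning bet at American odds.

    +200: a $100 stake wins $200 profit; -150: stake $150 to win $100 profit.
    """
    if odds > 0:
        return stake * odds / 100
    return stake * 100 / -odds

def implied_probability(odds):
    """Break-even win probability implied by the quoted odds (ignoring the vig)."""
    if odds > 0:
        return 100 / (100 + odds)
    return -odds / (-odds + 100)

print(american_payout(200, 1000))          # 2000.0 profit, so $3000 back in total
print(round(implied_probability(440), 3))  # 0.185: roughly a 1-in-5 shot
```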
Finding OK bets, OddsJam
As stated elsewhere, OddsJam is a site that provides a range of information. You can sign up for a trial account but….
Don’t forget to cancel an OddsJam trial, and probably best to trial a ‘monthly membership’ rather than yearly. If you trial a yearly and you forget to cancel in a week you could be out $1,000.
There's a premium version of OddsJam with no free trial that claims to find the highest expected value bets. But the regular one does have a list of sports and sporting events, and tells you which casinos offer the best odds and what the house ‘vig’ is (the amount of all bets the house keeps on average, I guess, because of the spread between the odds on each side; e.g., one may be -100 and the other only +50).
I wouldn't bet on anything with a “vig” of 5% or more. My impression was that US basketball games had pretty low vigs but (Euro?) soccer games had ridiculously high ones.
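The ‘vig’ as glossed above (the two sides' implied probabilities summing to more than 1) can be computed directly. The -120/+100 line below is a hypothetical example of mine, not a quote from any book:

```python
def implied_probability(odds):
    """Break-even probability implied by American odds."""
    return 100 / (100 + odds) if odds > 0 else -odds / (-odds + 100)

def vig(odds_a, odds_b):
    """House margin: how far the two sides' implied probabilities
    sum above 1 (a fair, zero-vig book would sum to exactly 1)."""
    return implied_probability(odds_a) + implied_probability(odds_b) - 1

# Hypothetical two-sided line: favorite at -120, underdog at +100.
print(round(vig(-120, 100), 4))  # 0.0455, i.e. ~4.5%, under the 5% threshold
```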
Next, you might look for a game where one side has odds somewhat close to the level you're looking for, the amount of risk you want to take (unless you are hedging, in which case that might not matter, see below). On OddsJam you can browse for that while also comparing whether your casino is offering something close to the best odds among the casinos. (Larger “+” numbers and “-” numbers closer to zero are better).
Low risk bets and portfolios
So now you have a ‘house money’ reward (perhaps just the return of the money you lost on the risk-free bet) and you want to get it out. If you have accounts with multiple sites and a similar amount you want to get out of each, there's a pretty easy way to do this: you can bet on opposite teams on opposite sites, i.e., fully hedge your bet. Check how much each side will pay if it wins on each site, and bet an amount on each that makes those payouts roughly equal. You can do this in combination with OddsJam to try to guarantee as close to 100% of your money back as possible (over 100% is also conceivable but would seem to be a rare ‘arbitrage opportunity’).
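The fully-hedged sizing described above can be sketched as follows. This assumes both legs are cash bets whose stakes are returned on a win; hedging a free bet (where the stake is not returned) needs a different formula. The odds used are hypothetical:

```python
def hedge_stake(stake_a, odds_a, odds_b):
    """Cash to stake on the opposite side (at American odds odds_b, on another
    site) so the total amount returned is the same whichever side wins."""
    def decimal(odds):
        # Decimal odds: total returned (stake + profit) per $1 staked.
        return 1 + (odds / 100 if odds > 0 else 100 / -odds)
    # Equal-return condition: stake_a * decimal(odds_a) == stake_b * decimal(odds_b)
    return stake_a * decimal(odds_a) / decimal(odds_b)

# Hypothetical: $1000 on side A at +150 on one site; hedge side B at -170 elsewhere.
print(round(hedge_stake(1000, 150, -170), 2))  # 1574.07: ~$2500 returned either way
```

The gap between the ~$2574 total staked and the ~$2500 returned is the vig you give up for the guarantee.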
If you only have an account with one site (or more money you are trying to get out of one site than another), I think you should:
Warning: this can be stressful and distracting to work
The return seems potentially pretty good, but (particularly for the casinos with “risk-free bets” rather than deposit matching) there are ways you can lose money if you are not careful. It can be pretty stressful, and you have to concentrate on what you're doing. Even though doing it correctly may only take a few hours, if you are anticipating bets and ruminating over your bad choices, or losing sleep anticipating upcoming games, this can take up a lot more of your life. For some people this process might be fun; for others traumatic. Some people might really like the excitement and adrenaline, but it may take you away from other things you are trying to focus on. This was the case for me: so far, it’s been interesting and sometimes fun and rewarding to see when ‘I got it right.’ But overall it was not relaxing; rather, it was stressful and pulled me away from other important things.
Thanks so much for sharing these tips! With regards to the info on OddsJam:
I'm posting these HERE (podcast 'found in the struce', available on all platforms).
I think this will help people who have limited screen time get more from the EA Forum.
I’d like to encourage others to also narrate/record forum posts. I would love to listen to this too on those long drives/walks.
AI consciousness and valenced sensations: unknowability?
Variant of Chinese room argument? This seems ironclad to me, what am I missing:
My claims:
Claim: AI feelings are unknowable: Maybe an advanced AI can have positive and negative sensations. But how would we ever know which ones are which (or how extreme they are)?
Corollary: If we cannot know which are which, we can do nothing that we know will improve/worsen the “AI feelings”; so it’s not decision-relevant
Justification I: As we ourselves are bio-based living things, we can infer from the apparent sensations and expressions of bio-based living things that they are happy/suffering. But for non-bio things, this analogy seems highly flawed. If a dust cloud converges on a ‘smiling face’, we should not think it is happy.
Justification II. (Related) AI, as I understand it, is coded to learn to solve problems and maximise things, optimize certain outcomes or do things it “thinks” will yield positive feedback.
We might think then, that the AI ‘wants’ to solve these problems, and things that bring it closer to the solution make it ‘happier’. But why should we think this? For all we know, it may feel pain when it gets closer to the objective, and pleasure when it avoids this.
Does it tell us it makes it happy to come closer to the solution? That may be merely because we programmed it to learn how to come to a solution, and one thing it ‘thinks’ will help is telling us it gets pleasure from doing so, even though it actually experiences pain.
A colleague responded:
OK, but this seems to hold only if we:
But how on earth would we know how to do 1 (without biology at least) and why would we bother doing so? Couldn't the machine be just as good an optimizer without getting a 'feeling' reward from optimizing?
Please tell me why I'm wrong.
Posts I bookmarked (often as references)
What are examples of EA work being reviewed by non-EA researchers? (Aaron Gertler, cole_haus)
Altruistic Agency – free tech expertise for effective altruists (Markus Amalthea Magnuson)
Pedant, a type checker for Cost Effectiveness Analysis (Hazelfire)
How to find EA documents on a particular topic (Jc_Mourrat)
2-week summer course in "economic theory and global prioritization": LMK if interested! (trammell)
Learnings about literature review strategy from research practice sessions (alexlintz)
EA Survey 2018 Series: How Long Do EAs Stay in EA? (Peter Wildeford)
Estimates of highly engaged EA retention (Gabby_O)
Review: What works to promote charitable donations? (PeterSlattery)
Rethink Priorities 2020 Impact and 2021 Strategy (Marcus_A_Davis)
A central directory for open research questions (MichaelA)
Uncertainty and sensitivity analyses of GiveWell's cost-effectiveness analyses (cole_haus)
EA Survey 2019 Series: Community Demographics & Characteristics
Facebook ads: can you really do A/B testing on a comparable audience?
On Facebook ‘Lift testing’
… can you really compare ‘ad A vs ad B’ to see which works better on a comparable audience?
Braun and Schwartz (Hat tip, Josh Lewis)… seem to think this is NOT possible in the current FB setup (and maybe not on most platforms either). ... Because of the way each ad design is separately targeted/optimized to its ‘best audience’.
Smartly seems to imply that multi-cell lift tests do not suffer from this problem.
... But it's unclear if this really implements 'target then randomize'.
What Economics research is EA-relevant?
Most economic research can be deemed EA-relevant in a general sense, in that it usually focuses on welfare properties (of equilibria)…
But sometimes it’s the ‘potential Pareto improvement/2nd welfare theorem’ stuff … something could make the ‘pie higher’ and achieve any improved outcome you like if the gains were redistributed.
E.g., one could claim (loosely)
A. … “efficient antitrust regulation is an EA cause because it aims to achieve the greatest level of Consumer + Producer surplus”
B. “…which could then yield the greatest social gains if we redistributed it to help the extreme global poor/animal welfare/existential risk reduction”
But you might ask, for A: “Is this the most important/easiest/biggest way to ‘make the pie higher’?” And for B: “How likely is it that any gains could/would actually be redistributed to then ‘do the most good’?”
On music streaming services
(Note: I am not making a claim that this is an EA cause candidate.)
Music is an "information good". It is "nonrival" and infinitely and freely sharable. Any positive price leads to "allocative inefficiency".
But a zero price obviously gives no incentive to produce and share music. The best solution is to separate what consumers pay from what musicians receive. The music (and all media and info) would be free to access, but the creators would get a payment equal to the value the listeners and users took from it.
But it's hard to
Know what that VALUE is and
Coordinate a way to get the FUNDS and compensate the creators.
For point 1 (measuring VALUE), the number of plays or amount of time spent listening seems like a possibly OK, but imperfect, measure. E.g., ‘listening in the background’ has less value than ‘listening and really getting into it’. Still, the ‘compensation per stream’ seems like the best feasible measure.
However, perhaps because of the market structure and lack of competition, it seems creators are given very little per stream, not enough to motivate the right amount of content to be produced. For point 2 (getting the FUNDS), an international government mandated tax and funding would be efficient but there are all sorts of difficulties there. (Compulsion, coordinating across governments, is it fair for non-listeners, etc).
Private streaming services seem like a good second-best, but the inefficiency comes when the streaming service charges customers too much, so not everyone joins. (Why: society could give these people access to ad-free music at no extra cost, but they don't get it.) Perhaps the best solution would be some sort of streaming service that pays creators more (should that be subsidized?) and offers more differentiated prices to capture the true value consumers are getting. Hard to do, though.
The main point, I think, is that the 'classical economic model' really doesn't work well for information goods, which are becoming more and more of the economy.
As one data point, I'd gladly pay an extra $5-15/month for a "tier" of Spotify that passed along, say, 90% of that extra money to artists. Spotify being mostly private makes it hard to get good digital bling from a higher-tier option, but maybe artists could offer extra rewards to people in that tier?
Much more simply, I'd love to have a "tip the artist" option next to any song, so that when I was especially appreciating something, I could tip the artist a dollar. I'd probably use that option 100-200 times/year.
This seems like it should be a win for Spotify — I see few people angry about Spotify making/keeping too much money, lots of people angry about artists being underpaid. And I think it should be possible to design a tier/tip option that sends the message "you're funding artists" without "we're not".
From some brief research, Spotify paid out over $5 billion to "rights holders"* in 2020 and grossed about $9 billion (they claim to pay out 70% of all revenue). And they have 6500 employees. All of these seem like reasonable numbers, and even boosting artist revenue by 20% would probably feel tiny to critics — now it's half a cent per stream instead of 0.4 cents, hooray — while being a pretty sharp cut for their staff/technical infrastructure.
*Note that this includes record labels; for many artists, Spotify's rate isn't nearly as problematic as the % their record labels take.
US election question/take.
Do people in the EA (and maybe rationalist) community have any particular levers we could pull or superpowers that could persuade key influencers and voters?
E.g., Joe Rogan might be able to tip this election. He shows some signs of thoughtfulness and reasoning (at times). Does anyone “here” have a useful connection to him?