Re: Googling, this post wasn't really motivated by finding out how to donate crypto/stocks to AMF. I mainly just wanted to signal to EA orgs that they should be able to receive such donations (or understand why some might not accept them).
The rough sequence of events was:
Yeah, I can see how that’s a bit of a dilemma. To avoid the flags, maybe you could make it a separate download or something people have to specifically email you to request.
I can also imagine that for some people, especially those in certain jobs, the screenshots or logging features won’t be worth the risk. On that note, I decided to uninstall the tool myself, given the security considerations and the requirements from my employer.
Edit: I looked into this and no longer have these concerns.

Thank you for this. I have some security concerns, and I am uninstalling the software after receiving the message below from my university. Can you please respond here to explain?
"IS&T Information Security received a Crowdstrike alert about some software you downloaded from donethat.ai. Presumably it was intentional, but this software comes installed with what looks like keylogger functionality. ...Having a keylogger on your system is pretty risky. Even if they purport to not store data, there is al...
Do you think that it would be better to just add a helpful or heart emoji to the post instead? I used to leave the same sorts of comments as Ben. These got downvoted occasionally. I interpreted this pattern as being due to people not appreciating these sorts of 'thank you comments'. When emoji reacts were added, I therefore switched to emoji reacting, as I felt that this would achieve the same outcomes without creating the 'noise' of a 'thank you comment'. However, I could go back to leaving comments if that seems like a better approach.
Thanks for this. Is there a place where I can see the sources you are using here?
I am particularly interested in the source for this:
"The other graph here is an interesting one. It's the financial returns to IQ over two cohorts. The blue line is the older cohort, it's from 50 years ago or something. It's got some slope. And then the red line is a newer cohort. And that's a much steeper slope. What that means is basically for every extra IQ point you have, in the new cohort you get about twice as much money on average for that IQ point. "
I think it would be useful, and believe that it doesn't exist. I'd use the resource, and I know several others who also would. Especially if it were slides and transcripts from good talks. Have you talked to the people behind https://aisafety.info/? This could be something they could support/host.
This is extremely relevant for me as I have been thinking a lot about when to start making more serious donations. I discussed some previous blockers here which haven't been resolved. I am therefore considering commissioning some research (ideally with others).
Broadly I'm interested in better understanding the 'donator's dilemma': if you give money now, you forego the later opportunity to 'give better' due to having improved information, and to 'give more' due to passive income. Also to benefit from increased financial security that might enable you to have...
I like this idea, but wonder if CEA or another organization should take the lead on running something like this? Making donations to other people or informal groups is interpersonally and logistically complicated. For instance, people will often refuse a donation if offered it (or when their bank account is requested), and taking money from a person may feel like an obligation, or be misinterpreted. It could work better if they instead donate money to an org that allocates it for a person and contacts them to receive it (and donates it elsewhere if they don't accept)...
Thanks! Do you want this shared more widely?
Note that I estimate that putting these findings on a reasonably nice website (I generally use Webflow) with some nice interactives embedded (Flourish is free to use and very functional at the free tier) would take between 12 and 48 hours of work. You could probably reuse a lot of the work in the future if you do future waves.
I am also wondering if someone should do a review/meta-analysis to aggregate public perception of AI and other risks? There are now many surveys and different results, so people would probably value a synthesis.
I really like the vote and discussion thread. In part because I think that aggregating votes will improve our coordination and impact.
Without trying to unpack my whole model of movement building; I think that the community needs to understand itself (as a collective and as individuals) to have the most impact, and this approach may really help.
EA basically operates on a "wisdom of wise crowds" principle, where individuals base decisions on researchers' and thinkers' qualitative data (e.g., forum posts and other outputs)
However, at our current scale, ...
Thank you for this. I really appreciate this in-depth analysis, but I think it is unnecessarily harsh and critical at points.
E.g., see: "Hendrycks has it backwards: In order to have a real, scientific impact, you have to actually prove your thing holds up to the barest of scrutiny. Ideally before making grandiose claims about it, and before pitching it to fucking X. Look, I'm glad that various websites were able to point out the flaws in this paper. But we shouldn't have had to. Dan Hendrycks and CAIS should have put in a little bit of extra effort, to spare all the rest of us the job of fact checking his shitty research."
Hey! Yes, this is related to MIT/US immigration challenges and not something we can easily fix, unfortunately. We do sometimes hire people remotely. If you would like to express interest in working with/for us, then you can submit a general expression of interest here.
Thanks for the feedback, Riley. Sorry for the confusion. See the not very detailed job description on the MIT careers page. Probably the best and quickest way to apply is to make a submission here - just select the Junior Research Scientist/Technical Associate position (if that is the only one of interest).
"From there, we asked it to compute the probabilities of 177 events from Metaculus that had happened (or not happened) since.
Concretely, we asked the bot whether Israel would carry out an attack on Iran before May 1, 2024. We compared the probabilities it arrived at to those arrived at independently by crowds of forecasters on the prediction platform Metaculus. We found that FiveThirtyNine performed just as well as crowd forecasts."
Just to check my understanding of the excerpt above, were all the 177 events used in evaluation related to Israel attacking Iran?
Quick response - the way that I reconcile this is that these differences were probably just due to context and competence interactions. Maybe you could call it comparative advantage fluctuations over time?
There's probably no reasonable claim that advising is generally higher impact than ops or vice versa. It will depend on the individual and the context. At some times, some people are going to be able to have much higher impact doing ops than advising, and vice versa.
From a personal perspective, my advising opportunities vary greatly. There are times where mo...
Just wanted to quickly say that I hold a similar opinion to the top paragraph and have had similar experiences in terms of where I felt I had most impact.
I think that the choice of whether to be a researcher or do operations is very context-dependent.
If there are no other researchers doing something important, your comparative advantage may be to do some research, because that will probably outperform the counterfactual (no research) and may also catalyze interest and action within that research domain.
However if there are a lot of established organizations ...
I haven't encountered any donors complaining that they were misled by donation matching offers, and I'm not aware of any evidence that offering donation matching has worse impacts than not having it, either in terms of total dollars donated or in attempts to increase donations to effective charities.
However, I haven't been actively looking for that evidence - is there something that I've missed?
I haven't encountered any donors complaining that they were misled by donation matching offers
When I was at Google, I participated in an annual donation matching event. Each year, around Giving Tuesday, groups of employees would get together to offer matching funds. I was conflicted about this but decided to participate while telling anyone who would listen that my match was a donor illusion. Several people were quite unhappy about this, telling me that it was fraud for me to be claiming to match donations when my money was going to the same charit...
Thanks for writing this. I just wanted to quickly respond with some thoughts from my phone.
I currently like the norm of not publicly affiliating with EA, but it's something I've changed my mind about a few times.
Some reasons:
I think EA succeeds when it is no longer a movement and simply a generally held value system (i.e., that it's good to do good, to try to be effective, and to have a wide moral circle). I think that many of our insights and practices are valuable and underused. I think we disseminate these most effectively when they are unbranded.
This is why:...
Thank you for this. I really appreciate this research because I think the EA community should do more to evaluate interventions (e.g., conferences, pledges, programs, etc.), given the focus on cost-effectiveness. Especially repeat interventions. I also like the idea of having independent evaluations.
This seems misleading. Some of the authors are from Epoch, but there are authors from two other universities on the paper.
Also, where does it say that he is a guest author? Neil is a research advisor for Epoch and my understanding is that he provides valuable input on a lot of their work.
Disclosure: I have past, present, and potential future affiliations with MIT FutureTech. These views are my own.
Thank you for this post. I think it would be helpful for readers if you explained the context a little more clearly; I think the post is a little misleading at the moment.
These were not “AI Safety” grants; they were for “modeling the trends and impacts of AI and computing”, which is what Neil/the lab does. Obviously that is important for AI Safety/x-risk reduction, but it is not just about AI Safety/x-risk reduction, and it is somewhat upstream.
Imp...
Thanks for writing this and for your good intentions. Sorry you haven't received more feedback!
My quick thought is that you should probably try to work for one of the organisations doing something like your project before you attempt to start a new organisation. There are usually a lot of useful things you can learn from established alternative projects, including how and why they operate as they do. Additionally, it is probable that helping something big be a little better is more impactful in expectation than doing something novel and risky which probabl...
Thanks! His post definitely suggests awareness and interest in EA.
I wonder what happened with the panel. He said he would be on it, but from what I can see in that video, he wasn't. I imagine that someone could find out what happened there by contacting people involved in organising that event. I don't care enough to prioritise that effort, but I'd appreciate learning more if someone else wants to investigate.
Thanks for following up! The evidence you offer doesn't persuade me that most EAs are extremely rich guys, but then it isn't arguing that. Did you mean to claim that most EAs who are rich guys are not donating any of their money, or no more than the median rich person?
I also don't feel particularly persuaded by that claim based on the evidence shared. What are the specific points in the links that you find persuasive? I couldn't see anything particularly relevant from scanning them. As in, nothing that I could use to make an easy comparison between EA donors a...
Thanks for the input!
I think of EA as a cluster of values and related actions that people can hold/practice to different extents. For instance, caring about social impact, seeking comparative advantage, thinking about long term positive impacts, and being concerned about existential risks including AI. He touched on all of those.
It's true that he doesn't mention donations. I don't think that discounts his alignment in other ways.
Useful to know he might not be genuine though.
Also someone messaged me about a recent controversy that Bryan was involved in. I thought he had been exonerated but this person thought that he had still done bad things.
And his response https://twitter.com/bryan_johnson/status/1734257098119356900?t=DHcSxlZ5PkxhREVJkAdXag&s=19
Worth knowing about when judging his character.
I thought this summary by TracingWoodgrains was good (in terms of being a summary; I don't know enough about the object level to know if it was true). If roughly accurate, it paints an extremely unflattering picture of Johnson.
Yeah I think that's part of it. I also thought it was very interesting how he justified what he was doing as being important for the long term future given the expected emergence of superhuman AI. E.g., he is running his life by an algorithm in expectation that society might be run in a similar way.
I will definitely say that he does come across as hyper-rational and low-empathy in general, but there are also some touching moments here where he clearly has a lot of care for his family and really doesn't want to lose them. Could all be an act, of course.
Yeah, he could be planning to donate money once his attempt to reduce or overcome mortality is resolved.
He said several times that what he's doing now is only part one of the plan, so I guess there is an opportunity to withhold judgment and see what he does later.
Having said all that, I don't want to come across as trusting him. I just heard the interview and was really surprised by all the EA themes which emerged and the narrative he proposed for why what he's doing is important.
Thanks for this, I appreciate that someone read everything in depth and responded.
I feel I should say something because I defended Nonlinear (NL) in previous comments, and it feels like I am ignoring the updated evidence/debate if I don't.
I also really don’t want to get sucked in, so I will try to keep it short:
How I feel
I previously said that I was very concerned after Ben's post, then persuaded by the response from NL that they are not net negative.
Since then, I realized that there have been more negative views expressed towards NL than I rea...
My understanding is that I simply need to print and send the letter/email from Every.org to Interactive Brokers, and they will facilitate the transfer.
One thing to note here is that end-of-year transfers can only be facilitated if the request has been made by the start of December. Another is that because I submitted my request too late in December, I will have to submit it again in January.
However, all things considered, if they do facilitate the transaction based on the submission of the email/letter shared from Every.org, it will feel like a relatively straightforward process for me. That said, maybe I'm going to find out later that there are some other complications in the process that I've missed?