If you press a footnote link in a post and the footnote is hidden in the 'View more footnotes' collapsible list, the page scrolls to a footnote you can't see. I found it confusing until I realised you have to press 'View more footnotes' to expand them. It would be good if the list opened automatically when you follow a footnote link.
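To illustrate the behaviour I'm imagining, here's a very rough sketch (assuming the collapsed footnotes live in a container with a class like `footnotes-collapsed`; that class name and the selectors are made up, I don't know the forum's actual markup):

```typescript
// Very rough sketch, not the forum's real code: when a footnote link is
// clicked, expand the collapsed "View more footnotes" section first, then
// scroll to the footnote. The class name and href pattern are illustrative.
document.addEventListener("click", (event) => {
  const target = event.target as HTMLElement | null;
  const link = target?.closest<HTMLAnchorElement>('a[href^="#fn"]');
  if (!link) return;

  const footnote = document.getElementById(decodeURIComponent(link.hash.slice(1)));
  if (!footnote) return;

  // If the footnote is inside a collapsed container, expand it before scrolling.
  footnote.closest(".footnotes-collapsed")?.classList.remove("footnotes-collapsed");

  event.preventDefault();
  footnote.scrollIntoView({ behavior: "smooth" });
});
```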
My sincere apologies, I had missed that it had been updated! Very embarrassing. Thank you for doing that.
Maybe I'm misunderstanding what you want EVF to do, but when I go to the page that you linked to I see a list of trustees with bios at the bottom of it. (This doesn't solve the "contact" problem, but it does solve "who they are and what they do".)
I thought the bottom half was an OK response: 'We have long-term plans and value healthy funding' (paraphrasing).
I think this hints at a divide between EA and progressive thinking. EAs: we only have a set amount of money for good causes, so we need to use it effectively. Progressives: just allocate more money to good causes (like treating AIDS) and less to bad causes (like defence spending).
Objections to 'value of my time' arguments
I often hear EAs/rationalists saying something like 'it's not worth spending an hour to save £20 if your hourly rate of pay is over £20/hour'. I think this is wrong, but I might not understand the argument.
It could be understood as a hypothetical argument: you COULD earn this much in an hour, as a reference point to help you understand the value of your time. This hypothetical reference point isn't really useful when I have the very real figure of my total balance and upcoming outgoings to consider, and the...
I agree with that rough claim. And I liked the rest of the blog.
I guess I do see people who are struggling behaving badly sometimes. I just don't think it's any more frequent than in the general population. Or I sometimes see them using the fact that they're struggling to justify their bad behaviour, and I don't buy that.
https://forum.effectivealtruism.org/posts/oGdCtvuQv4BTuNFoC/good-things-that-happened-in-ea-this-year
Is there any consensus on who's making things safer, and who isn't? I find it hard to understand the players in this game; it seems like AI safety orgs and the big language-model players are very similar in terms of their language, marketing and the actual work they do. E.g. OpenAI talks about AI safety on its website and has jobs on the 80k job board, but is also advancing AI rapidly. Lately it seems to me like there isn't even agreement in the AI safety sphere over what work is harmful and what isn't (I'm getting that mainly from the post on closing the Lightcone office).
Hi Howie, I'm getting back to this 3 months later. I don't think this feature has been added, and I'd like to raise again that it would be good for transparency. The link to the CEA team page doesn't have bios for Tasha McCauley and Becca Kagan (who has since resigned from EVF; I guess it could be worth listing former board members).
When EVF announced the new interim CEOs 3 months ago, I noted that there wasn't a bio for EVF's board members on their website, and that it was hard to find much information on Google. At this moment in time, it's the most upvoted comment on that post, with 35 upvotes and 29 agreements. Howie agreed to update the website, but as of now it doesn't look like anything has been added.
I'd like to raise this again: it would be good to update EVF's website with board member bios for transparency, and maybe a contact email address. I like that this press rel...
This resonated with me. I get some internal strife and anxiety about posting on here. I think it's a combo of caring a lot about the kind of things being discussed here (suffering, global poverty, animal welfare, xrisk), and having thoughts about these things, and wanting to share them, but then finding the negative incentive of [being criticised in the comments] outweighs the positive incentives of [temporary status amongst strangers on the internet], plus some sort of [happiness at being able to express myself].
It seems for me the emotional drive t...
I was surprised to see Twitter noted as a good place to share thoughts, mainly because it's rare I hear anyone say a good word about Twitter. I don't use it, as my impressions of Twitter are:
I'm planning to write a piece on animal welfare, and as part of that post it would help to include a picture of a dead animal. I'd like to have it blurred until users choose to see it. Is there a way to do that?
Side note: I can't see anything about this circumstance in the user manual or guide to norms.
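In case it helps clarify what I'm asking for, this is roughly the effect I have in mind (just a sketch; the `graphic-image` class name is something I've made up, not anything the forum actually uses):

```typescript
// Rough sketch of the effect I'm after: blur a sensitive image until the
// reader chooses to reveal it. The "graphic-image" class is illustrative.
document.querySelectorAll<HTMLImageElement>("img.graphic-image").forEach((img) => {
  img.style.filter = "blur(20px)";
  img.style.cursor = "pointer";
  img.title = "Click to reveal (graphic content)";

  img.addEventListener(
    "click",
    () => {
      img.style.filter = "none";
      img.title = "";
    },
    { once: true } // reveal permanently after the first click
  );
});
```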
Your comment made me realise I'm actually talking about two different things:
I agree with you that having some kind of peer pressure or social credit for 'doing well' can help a person withstand pain. I'd imagine this has an effect on the hand-in-cold-water experiment, if you're doing it on your own vs as part of a trial with onlookers.
Sorry, I got your name wrong in my reply (changed now)! I'm going to look into my question further and read some of https://reducing-suffering.org/, which you linked to. That's as a result of this post :)
I went through these experiences voluntarily and with the knowledge that I have the freedom to stop whenever I want. People suffering from painful disease, children dying of hunger, chickens being electrocuted to death, fish being asphyxiated to death - for these individuals, such experiences are a horrific reality, not an experiment
I think this is a very important distinction that should be given more emphasis. When I've experienced severe pain, the no.1 thought in my mind was "oh god make it stop". This makes complete sense if you think of pain as your b...
Hey Ren, this is a great post!
I share your intuition that reducing extreme suffering is the no.1 moral imperative for humankind.
What charities do you recommend, if that's what you value most? GiveWell recommends charities based on their own moral weights, which I don't think weight reducing extreme suffering as highly as I do.
Then there are many animal welfare charities. And there's OPIS, which is the only charity I know of that explicitly targets extreme human suffering. Are there any others that I'm missing?
My guess is that it wouldn't change much
Maybe not for most people reading the EA Forum. I think if you take a serious look at the issues of animal suffering and farmed animal conditions, you'll probably arrive at a number similar to existing statistics on the number of factory-farmed animals.
But I think there's plenty of people who have motivated reasoning to doubt those statistics, or minimise the badness/factory-ness of a farm, or farming practice. For example, my extended family run a dairy farm. I remember when first reading...
Why aren't we protesting AI acceleration in the street?
I'm not super up to date with the latest EA thinking on current AI capabilities. The takes I read on social media from Yudkowsky and the like are something along the lines of 'We're at a really dangerous time, various companies are engaged in an arms race to make more and more powerful AIs with little regard to safety, and this will directly lead to humanity being wiped out by AGI in the near future'. For people who really believe this to be true (especially if you live in San Francisco): why aren't you...
I feel uncomfortable with this kind of public character judgement of an alleged victim, especially when it's presented without a source or evidence backing up the claim that she's 'hella scary'.
I think using the term 'woke left' will be counter-productive to your aim of reaching out to politically left people. While 'woke' started as a term used by the left, I now see it being used almost exclusively by the right as a pejorative term for the left, and most politically left people I know would be annoyed at being called 'woke'.
What would that add? I think that would add speculation on top of what is already speculation, and I'd think only the passing of time would be able to give feedback on whether the predictions turn out to be true.
I guess it could give more information if you sought out different people for the meta-predictions than those who made the original predictions. But then I'm not sure why you wouldn't just have these new people answer the original prediction questions directly.
I think this might be partly due to the complex structure (and subsequent re-structure) of CEA. 'CEA' used to be a dual name for both a legal entity and the community building organisation.
I think this led me, in the past, to have a vague idea of what 'CEA' was, and to think that the public-facing Community Health Team represented all of it and was responsible for more than it was.
This is kind of a separate issue though, here I'd just like to say I'm grateful for the work the Community Health Team does, and don't want to distract from the discussion of the accusations made here.
At launch -- per the Gawker article -- their priorities seemed to include lining up a bunch of famous people (who may or may not have known they were involved...) and hiring a PR firm for a glitzy launch with lots of meaningless buzzwords, but apparently not having any clue about what the organization would actually accomplish. That strikes me as a prime example of performative charity.
At present, the website is very well-done, but is awfully light on what the organization has accomplished. For example, the second project on their website is "Shield," acco...
Assuming that this is both useful and time- or funding-constrained, you could be selective in how you roll it out. Images of world leaders and high-profile public figures seem most likely to be manipulated and would have the highest negative impact if many people were fooled. You could start there.
I'd like to be able to hide the amount of karma and agreement points a comment or post has. I think seeing how many people have upvoted a statement affects how likely I am to agree with or upvote that statement. I think it makes me more likely to vote in accordance with social agreement, rather than whether or not I think a statement is true or well written. I'd like to be able to turn this off from time to time. Strongly downvoted comments should probably still be hidden.
I think the UI for voting could be improved in the following ways:
The formatting toolbar doesn't appear until after you highlight text. This means you can only format text after you've written it - you can't, for example, select bold and have your text appear in bold as you write it. This is something I find unintuitive. It took me a few minutes of looking for the toolbar and googling how to do it before I realised the toolbar only appears when you highlight text. I'd like the formatting toolbar to always be on the page when I'm writing.
I'd like to be able to highlight a word or phrase in text I'm writing and Ctrl-V a URL link directly into that phrase. This is something that other platforms, like Slack, do.
Yes, you can highlight a phrase and bring up the toolbar to add a link, but being able to do it immediately through a well known keyboard shortcut is easier.
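Something like this sketch of the behaviour I mean (assuming a contenteditable editor; the selectors and details are made up for illustration, not how the forum editor actually works):

```typescript
// Rough sketch of the Slack-style behaviour: pasting a URL while text is
// selected turns the selection into a link rather than replacing it.
// The contenteditable selector is illustrative, not the forum's real editor.
const editor = document.querySelector<HTMLElement>('[contenteditable="true"]');

editor?.addEventListener("paste", (event: ClipboardEvent) => {
  const pasted = event.clipboardData?.getData("text/plain") ?? "";
  const selection = window.getSelection();

  // Only intercept when the pasted text looks like a URL and something is selected.
  if (!/^https?:\/\/\S+$/.test(pasted.trim()) || !selection || selection.isCollapsed) return;

  event.preventDefault();
  const range = selection.getRangeAt(0);
  const link = document.createElement("a");
  link.href = pasted.trim();
  link.appendChild(range.extractContents()); // keep the highlighted text as the link label
  range.insertNode(link);
});
```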
Thanks for writing this up. It'd be nice to have a paragraph of bio for each of the board members on EV's website. A Google search didn't give me much for some of the board members.
Are you planning to back-date each piece of content that you timestamp to the time it was created? If so, how hard is it to find the time of creation of pieces at the moment? This seems to be the very problem this initiative is planning to tackle (so I'm guessing it's at least somewhat hard), although I think the argument here is 'it will get harder in the future'.
The only alternative I can see is to add the timestamp of the time that the content is processed as part of this initiative. This might be easier than finding the creation date but it would probab...
Update: FLI FAQ on the rejected grant proposal controversy.
Although I still think the original statement was not good, reading the FAQ and the comments in the linked post has helped me have more empathy for the difficulties of releasing a PR statement when under public pressure to say something urgently.
I think my tone here was too confrontational and demanding, and I'm sorry if that caused additional stress for FLI.
Thank you to FLI for both updating the initial statement and putting out the FAQ, which clears things up.
I don't think both words are accurate here. Crimea was illegally annexed, and 'invasion' to me means entering another country's territory.
My fundamental belief here is that a country's borders should be decided by referendum, and then respected (i.e. not invaded).
The 2014 referendum was one month after Russia invaded Crimea. I wouldn't trust the results of it (a 96% result to join Russia is implausible), or really any referendum since, while Russia is still in control. So, I would think the latest and most authoritative piece of evidence...
I downvoted for the use of the word 'invading'. 'Invading' describes what Russia did to Crimea in 2014, 'retaking' would be a better word for this context.
As for self-determination, 54% of Crimeans voted for Ukrainian independence in the 1991 referendum. Since the 2014 invasion, Russia has probably moved in so many of its citizens that the demographics have changed massively, and this would skew any future referendum.
Thank you for linking that. I'm glad FLI has issued that statement, and it reassures me somewhat. I'd still like to hear more detail of FLI's logic around this grant: why it was considered in the first place, what FLI's pipeline for considering grants is, at what stage Nya Dagbladet was rejected, and why. (Hopefully the 'why' part is obvious, but it would be good to understand what information they received that changed their minds, that they didn't have in the first place.)
....which makes no mention of the neo-Nazi views of Nya Dagbladet, and does not condemn them. That section reads to me as almost an afterthought to their response, which is a rant about how Expo.se is unfairly criticising FLI and how Nya Dagbladet is not neo-Nazi.
Here's that quote in context:
...
We will continue to engage the broadest sample of humankind, whether or not we are criticized by anyone who questions our motives, or who may have their own agendas. And in this effort, the Future of Life Institute stands and will always stand emphatically
It appears that a paragraph was added to the statement today:
...Added Jan 16: Just to be absolutely unambiguous: FLI finds Nazi, neo-Nazi or pro-Nazi groups or ideologies despicable and would never knowingly support them. In case FLI’s past work, its website and the lifetime work, writing, and talks by FLI leadership left any doubt about that, we included this final sentence in our statement above just to be 100% clear: “the Future of Life Institute stands and will always stand emphatically against racism, bigotry, bias, injustice and discrimination at all
Taking both parts of that paragraph seriously, I think the statement is best read as saying (1) we condemn neo-Nazism but (2) we're okay with partnering with neo-Nazis if it helps achieve our goals. I agree it would have been much better to specifically condemn neo-Nazism by name, but I find the existence of (2) to be the most alarming part of the statement.
There's also a failure to reckon with how vile the material Nya Dagbladet has published is; instead, the statement legitimates it as an organization (e.g., look, they got $30K in public funding!).
Unless we actually are saying that talking with 'bad people' is automatically bad and something you should apologize to all your right thinking friends for having contaminated them with proximity to badness afterwards.
This is putting it very, very euphemistically, if you want to call 'offering $100,000 in funding to a neo-Nazi publication' 'talking with bad people'.
Is there a principled argument that thinking about funding a group like that, and then changing your mind is bad?
Yes. Even if they thankfully never granted the money, the question remains: why...
I really can't express clearly how badly I think of FLI's non-apology.
Why on earth would they think a neo-Nazi publication would ever be a good thing to fund?
The Future of Life Institute makes no apologies for engaging with many people across the immensely diverse political spectrum, because our mission is so important that it needs broad support from all sectors of society
Why on earth would they put this in their response, rather than condemning neo-Nazism?
@Tegmark
...rather than condemning neo-Nazism?
There was this section:
And in this effort, the Future of Life Institute stands and will always stand emphatically against racism, bigotry, bias, injustice and discrimination at all times and in all forms. They are antithetical to our mission to safeguard the future of life and to advance human flourishing.
Minor quibble, but this should be titled 'New good things that happened in EA this year'.
There's already loads of existing good things happening that shouldn't get forgotten about. I don't have the numbers, but I'd like to know: how many nets did AMF distribute? How many times did animal charities expose abuse and take companies to court to uphold existing laws?
I know this stuff is happening, it's great, and we should hear more about existing, ongoing good work.
I'd like to be able to bookmark comments, in the same way you can bookmark posts. There's a lot of really, really well thought out and written comments, in some cases containing just as much value as articles, and I'd like to be able to bookmark a comment to come back to.
I'd argue this is even more important than bookmarking articles, because articles have tags and titles to search for, whereas comments don't, and it's easy to lose track of which article and which thread the comment you're looking for is contained in.
I agree. To take the distinctions of trust one step further - there's a difference between trust in the intentions and judgements of people, and trust in the systems they operate in.
Like, I think you could be trusting of the intentions and judgement of EA leadership, but still recognise that people are human, and humans make mistakes, and that transparency and more open governance leads to more voices being heard in decision making processes, which leads to better decisions. It's the 'Wisdom of Crowds' kind of argument.
transparency and more open governance leads to more voices being heard in decision making processes, which leads to better decisions
Perhaps I'm just a die-hard technocrat, but I'm very unconvinced that this is actually true. Do we have any good examples either way?
Freegan