aogara

Comments

LetsDoThis's Shortform

Faunalytics is an animal advocacy charity and one of ACE’s top charities. They summarize and interpret a lot of academic research and conduct their own, including full data analyses. I volunteered with them a few years ago and would definitely recommend it to anyone looking to build research, writing, or data analysis skills. They’re also really good about doing transparent and reproducible research, releasing the code behind their analyses.

Here’s a recent article analyzing 15 years of data on the wildlife trade: https://faunalytics.org/wildlife-imports/

Here’s the main project I worked on: an analysis of Faunalytics’ proprietary polling data on animal welfare attitudes. We released the data, a writeup of the methodology, and all the code for the analysis on OSF: https://osf.io/2b86k/
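If you want to explore the materials programmatically, the project is public. Here’s a minimal sketch, assuming the standard OSF v2 REST API and the default osfstorage provider (no auth token is needed for public projects):

```python
import requests

# Minimal sketch: list the files in the public OSF project (id 2b86k)
# via the OSF v2 API. Assumes files live under the default
# "osfstorage" provider.
url = "https://api.osf.io/v2/nodes/2b86k/files/osfstorage/"
while url:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    for item in payload["data"]:
        attrs = item["attributes"]
        print(attrs["kind"], attrs["name"])
    # The API paginates; follow the "next" link until it is null.
    url = payload["links"].get("next")
```

As I understand the API, each file entity also carries a download link in its links object if you want to pull the data itself.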

[Linkpost] Eric Schwitzgebel: Against Longtermism

The third argument seems to represent what a lot of people actually feel about utilitarian and longtermist ethics. They refuse to take impartiality to its logical extreme, and instead remain partial to helping the people who feel nearby.

From a theoretical standpoint, few academic philosophers will argue against “impartiality,” or some understanding that all people have the same moral value. But in the real world, just about everyone prioritizes people who are close to them: family, friends, people of the same country or background. Often this is not conceived of as selfishness. My favorite Bruce Springsteen song, “Highway Patrolman,” sings the praises of a police officer who puts family above country and allows his brother to escape the law.

Values are a very human question, and there’s as much to learn from culture and media as there is from academic philosophy and logical argument. Perhaps that’s merely the realm of descriptive ethics, and it’s more important to learn the true normative ethics. Or, maybe the academics have a hard time understanding the general population, and would benefit from a more accurate picture of what drives popular moral beliefs.

Two tentative concerns about OpenPhil's Macroeconomic Stabilization Policy work

Very complicated question that I’m not at all qualified to speak on, but if you’re interested, Google “Scott Sumner NGDP targeting.” Basically, rather than the current “dual mandate” of maintaining both low unemployment and low, stable inflation, the Fed would target a fixed rate of NGDP growth, balancing the two concerns in a single number. The idea became very popular in the blogosphere and in the academic economics literature in the aftermath of the 2008 crisis, when many believe the Fed was too slow to drop interest rates and should have been more concerned about unemployment than inflation.
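To make the intuition concrete (my sketch, not Sumner’s exact framing): to a first-order approximation, nominal GDP growth decomposes into real growth plus inflation, so fixing a single nominal target $g^{*}$ lets the two trade off automatically:

```latex
\[
  g_{\text{NGDP}} \;\approx\; g_{\text{real}} + \pi
  \qquad\Longrightarrow\qquad
  \pi \;\approx\; g^{*} - g_{\text{real}}
\]
```

Under, say, a 5% target, a recession that drags real growth to zero implies the Fed should ease until inflation runs near 5%, while a boom with 4% real growth implies tightening toward 1% inflation; no separate judgment call between the two halves of the mandate is needed.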

doing more good vs. doing the most good possible

Agreed, it’s not helpful to discourage people who are doing good by only criticizing where they might fall short. It’s one of the challenges of the EA mindset, but in my experience it’s a challenge that most EAs have struggled with and tried to find solutions for. Generally, the solutions recognize that beating yourself up about this stuff isn’t really effective or altruistic at all.

My favorite writing on this topic comes from Julia Wise, a community liaison at CEA and author of the Giving Gladly blog. Here are a few posts I found helpful:

http://www.givinggladly.com/2013/06/cheerfully.html?m=1

http://www.givinggladly.com/2020/01/its-ok-to-feed-stray-cats.html?m=1

http://www.givinggladly.com/2019/02/you-have-more-than-one-goal-and-thats.html?m=1

Why don't governments seem to mind that companies are explicitly trying to make AGIs?

Counterpoint on market sentiment: Anthropic raised a $124M Series A with few staff and no public-facing product. The money comes from a handful of individuals including Jaan Tallinn and Eric Schmidt, which makes unusual beliefs more likely to govern the bid (think unilateralist’s curse). But this still seems like it has to be a financial bet on the possibility of incredible AI progress.

Separate question: Anthropic seems to be composed largely of people from OpenAI, another well-funded and socially-minded AGI company. Why did they leave OpenAI?

I was a COVID-19 recipient. Now, I’m an employee

Very moving. Strong reminder of the struggles people face and why it’s so important to help out in this world. Thank you for sharing your story.

Should the EA community have a DL engineering fellowship?

Very cool idea. I’d be interested in working on implementing DL papers if anyone wants to get a group together. I’ve had some experience with more standard ML algorithms at work, but deep learning is specialized enough that it doesn’t come up in most business applications, so I think I’d need a more academic setting to study it. Will send you a message, Pablo; anyone else who’s interested, feel free to reach out.

Why don't governments seem to mind that companies are explicitly trying to make AGIs?

Agreed, and I don't have any specific explanation of why government is unconcerned with dramatic progress in AI. As usual, government seems just a bit slow to catch up to the cutting edge of technological development and academic thought. Charles_Guthmann's point on the ages of people in government seems relevant. Appreciate your response, though; I wasn't sure if others had the same perceptions.

Why don't governments seem to mind that companies are explicitly trying to make AGIs?

One of EA’s most important and unusual beliefs is that superintelligent AGI is imminently possible. While ideally effective altruism is just an ethical framework that can be paired with any set of empirical beliefs, it is a very important fact that people in this community hold extremely unusual beliefs about the empirical question of AI progress.

Nobody I have ever met outside of the EA sphere seriously believes that superintelligent computer systems could take over the world within decades. I study computer science in school, I work in the field of data science, and everybody I know anticipates progress-as-usual for the foreseeable future. GPT-3 is a cool NLP model, but it doesn’t spell world takeover anytime soon. The stock market would arguably agree: DeepMind was acquired for a reported $400M in 2014, and more recent progress within Google and Facebook has not received a public financial valuation. The AI Impacts investigation into the history of technological progress showed just how rare it is for a single innovation to deliver decades’ worth of progress on an important metric. Much more likely, in my opinion, is a gradual, progressive acceleration of AI and ML systems: the 21st century sees a booming Silicon Valley but no clear “takeoff point” of discontinuous progress, and the supposed impacts of AGI (such as automating most of the labor force or more than doubling the global GDP growth rate) may not emerge for a century or more.

To be clear, I agree that unprecedented AI progress is possible and important. There are some strong object-level arguments, particularly Ajeya’s OpenPhil analysis comparing the computational capacity of the human brain to that of our biggest computers. These arguments have helped convince influential experts to write books, conduct research, and bring attention to the problem of AGI safety. Perhaps the more persuasive argument is that no matter how slim the chances are, they cannot be disproven, and the impact of such a transformation would be so great that some group of people should be thinking seriously about it. But it shouldn’t be a surprise when other groups don’t take the superintelligence revolution seriously, nor should it be a surprise if the revolution does not come this century.

Epistemic Status: Possibly overstated.
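As a back-of-the-envelope flavor of that comparison (round numbers commonly cited in this literature, not Ajeya’s actual estimates, which span many orders of magnitude): take roughly $10^{15}$ FLOP/s for the brain and about 30 years ($\approx 10^9$ seconds) as one human “training run”:

```latex
\[
  \underbrace{10^{15}\ \text{FLOP/s}}_{\text{brain compute estimate}}
  \times
  \underbrace{10^{9}\ \text{s}}_{\approx 30\ \text{years}}
  \;\approx\;
  10^{24}\ \text{FLOP}
\]
```

The report’s contribution is comparing anchors like this against trends in the compute used for frontier training runs, with far more care about the uncertainty in each number.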

Do sour grapes apply to morality?

Really cool! +1 for an interesting hypothesis and for recreating real data to test it.
