All of more better's Comments + Replies

Fair point.

OTOH, if Trump wins or “wins” in 2024, I’m honestly not sure a legitimate election would be possible in 2028, in which case 2024 would have been the most important.

Hoping 2024 is legitimate. There are justified concerns that Trump will not accept defeat (assuming legitimate defeat) and will stop at nothing to regain power. He is a very dangerous man.

Thanks for sharing a summary of the content in addition to the link! Super helpful as I do not have streaming right now and am trying to avoid hyperlinks.

It makes sense that politicians would say this kind of thing leading up to an election. However, the futures that these 2 candidates are proposing are wildly different. I know that I am not alone (neither here on the forum nor IRL) in believing that this election will be a tipping point and will have implications that ripple far beyond just the US.

That was my interpretation, though the specific forums in question were not named directly so I cannot be certain.

Wow, this went from a karma level 8 to a 1 in under an hour.

I typically view this forum as a place where civil discourse ought to be, and often is, encouraged. This downvoting feels a bit like a unique form of censorship.

I would love to understand how people are thinking about this.

Thanks for sharing:

Curious as to your take on this: If this forum had existed prior to WW2 and there had been a post suggesting that it was imperative to prevent Hitler from gaining power, would you have felt that post should not have been made?

I do agree that exceptions can be a slippery slope, and certainly don’t think all US elections warrant exception. This one has potential to accelerate harm globally and is occurring in one of the most powerful countries in the world. US citizens can influence its outcome. This election will have global ramifications spanning... (read more)

-4
more better
1mo
Wow, this went from a karma level 8 to a 1 in under an hour. I typically view this forum as a place where civil discourse ought to be, and often is, encouraged. This downvoting feels a bit like a unique form of censorship. I would love to understand how people are thinking about this.
5
Larks
1mo
Someone told you not to go on lesswrong for cybersecurity reasons?

Interesting, thanks for sharing!

I can see how that may be the case and I appreciate your feedback. It made me think.

I believe there can be value in keeping a space politically neutral, but that there are circumstances that warrant exceptions and that this is one such case. If Trump wins, I believe that moral progress will unravel and several cause areas will be rendered hopeless.

If there had been a forum in existence before WW2, I wonder if posts expressing concerns about Hitler or inquiring about efforts to counter the actions of the Nazis would have been downvoted. I certainly hope not.

This post captures some of my feelings about why I don't think we should make exceptions for US elections: 

https://www.benlandautaylor.com/p/the-four-year-locusts 

See also: 

https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer 

Can you all help me understand why this is getting downvoted? At the moment the comment’s karma is -2, though 6 people have agreed and 3 have disagreed.

Is the downvoting likely occurring because:

A) I shouldn’t have written this as a response to the above post.

B) I did not provide sufficient rationale.

C) You prefer Trump over Biden.

D) You don’t believe electing Trump would threaten national and international security / increase cumulative suffering.

E) You believe that voting for a write in candidate or third party has a real chance at being successful.

F) Something else (I’d be grateful if you specify).

Thanks for the feedback.

5
Rebecca
1mo
I didn’t vote, but I’d guess that people are trying to discourage politicisation on the forum?

I am so grateful to Moskovitz and Tuna for making these donations.

This coming November is, technically, a US election. That being said, if Trump were to win, it would lead to worsened security and immense suffering nationally and internationally. The US cannot let Trump become President and must do everything possible to prevent this from happening (barring illegal or unethical actions).

From my POV, supporting Biden by ensuring he gets the votes he needs seems like the only viable option. If you have other ideas please share them!

This election is absolute... (read more)

1
more better
1mo
Can you all help me understand why this is getting downvoted? At the moment the comment’s karma is -2, though 6 people have agreed and 3 have disagreed. Is the downvoting likely occurring because: A) I shouldn’t have written this as a response to the above post. B) I did not provide sufficient rationale. C) You prefer Trump over Biden. D) You don’t believe electing Trump would threaten national and international security / increase cumulative suffering. E) You believe that voting for a write in candidate or third party has a real chance at being successful. F) Something else (I’d be grateful if you specify). Thanks for the feedback.
4
Arturo Macias
1mo
I simply cannot understand how some people consider 2024 a "normal" election.  https://forum.effectivealtruism.org/posts/ekM9jQqXq8D8qa2fP/united-states-2024-presidential-election-so-help-you-god

Interesting, thanks for sharing your thoughts. I guess I'm less certain that wealth has led to faster moral progress. 

@niplav Interesting take; thanks for the detailed response. 

Technically, I think that AI safety as a technical discipline has no "say" in who the systems should be aligned with. That's for society at large to decide.

So, if AI safety as a technical discipline should not have a say in who the systems should be aligned with, but its practitioners are the ones aiming to align the systems, whose values are they aiming to align the systems with? 

Is it naturally an extension of the values of whoever has the most compute power, best engineers, and most data?

I love the id... (read more)

3
niplav
1y
I am somewhat more hopeful about society at large deciding how to use AI systems: I have the impression that wealth has made moral progress faster (since people have more slack for caring about others). This becomes especially stark when I read about very poor people in the past and their behavior towards others. That said, I'd be happier if we found out how to encode ethical progress in an algorithm and just run that, but I'm not optimistic about our chances of finding such an algorithm (if it exists).
3
niplav
1y
In my conception, AI alignment is the theory of aligning any stronger cognitive system with any weaker cognitive system, allowing for incoherencies and inconsistencies in the weaker system's actions and preferences. I very much hope that the solution to AI alignment is not one where we have a theory of how to align AI systems to a specific human—that kind of solution seems fraudulent just on technical grounds (far too specific).

I would make a distinction between alignment theorists and alignment engineers/implementors: the former find a theory of how to align any AI system (or set of systems) with any human (or set of humans); the alignment implementors take that theoretical solution and apply it to specific AI systems and specific humans. Alignment theorists and alignment implementors might be the same people, but the roles are different.

This is similar to many technical problems: You might ask someone trying to find a slope that goes through a cloud of x/y points, with the smallest distance to each of those points, “But which dataset are you trying to apply the linear regression to?”—the answer is “any”.
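To make the regression analogy concrete, here is a minimal sketch of ordinary least-squares fitting, added as an editorial illustration (it is not part of niplav's original comment; numpy and the `fit_line` helper are assumed purely for this example). The point is that the fitting procedure is defined for any cloud of x/y points, not for one particular dataset:

```python
import numpy as np

def fit_line(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
    """Return the slope and intercept of the least-squares line through (x, y)."""
    slope, intercept = np.polyfit(x, y, deg=1)
    return slope, intercept

# The same procedure applies to whichever dataset you hand it.
x1, y1 = np.array([0.0, 1.0, 2.0, 3.0]), np.array([0.1, 1.9, 4.2, 5.8])
x2, y2 = np.random.rand(50), np.random.rand(50)

print(fit_line(x1, y1))  # fits the first point cloud
print(fit_line(x2, y2))  # fits a completely different one
```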

Or is there a (spoken or unspoken) consensus that working on aligned AI means working on aligned superintelligent AI? 

1
niplav
1y
There are several plans for this scenario.

* Low alignment tax + coordination around alignment: Having an aligned model is probably more costly than having a non-aligned model. This "cost of alignment" is also called the "alignment tax". The goal in some agendas is to lower the alignment tax so far that it is reasonable to institute regulations that mandate these alignment guarantees to be implemented, very similar to safety regulations in the real world, similar to what happened to cars, factory work and medicine. This approach works best in worlds where AI systems are relatively easy to align and don't become much more capable quickly. Even if some systems are not aligned, we might have enough aligned systems that we are reasonably protected by those (especially since the aligned systems might be able to copy strategies that unaligned systems are using to attack humanity).
* Exiting the acute risk period: If there is one (or very few) aligned superintelligent AI systems, we might simply ask it what the best strategy for achieving existential security is, and if the people in charge are at least slightly benevolent they will probably also ask about how to help other people, especially at low cost. (I very much hope the policy people have something in mind to prevent malevolent actors from coming into possession of powerful AI systems, though I don't remember seeing any such strategies.)
* Pivotal act + aligned singleton: If abrupt takeoff scenarios are likely, then one possible plan is to perform a so-called pivotal act. Concretely, such an act would (1) prevent anyone else from building powerful AI systems and (2) allow the creators to think deeply enough about how to build AI that implements our mechanism for moral progress. Such a pivotal act might be to build an AI system that is powerful enough to e.g. "turn all GPUs into Rubik's cubes" but not general enough to be very dangerous (for example limiting its capacity for self-improvement), and then augment

@PeterSlattery I want to push back on the idea about "regular" movement building versus "meta". It sounds like you have a fair amount of experience in movement building. I'm not sure I agree that you went meta here, but if you had, I'm not convinced that would be a bad thing, particularly given the subject matter.

I have only read one of your posts so far, but appreciated it. I think you are wise to try and facilitate the creation of a more cohesive theory of change, especially if inadvertently doing harm is a significant risk. 

As someone on the p... (read more)

1
PeterSlattery
1y
Thanks for the thoughts, I really appreciate that you took the time to share them.

Like, what is the incentive for everyone using existing models to adopt and incorporate the new aligned AI?

1
more better
1y
Or is there a (spoken or unspoken) consensus that working on aligned AI means working on aligned superintelligent AI? 
1
more better
1y
Like, what is the incentive for everyone using existing models to adopt and incorporate the new aligned AI?
6
niplav
1y
There are three levels of answers to this question: What the ideal case would be, what the goal to aim for should be, and what will probably happen.

* What the ideal case would be: We find a way to encode "true morality" or "the core of what has been driving moral progress" and align AI systems to that.
* The slightly less ideal case: AI systems are aligned with humanity's Coherent Extrapolated Volition of humans that are currently alive. Hopefully that process figures out what relevant moral patients are, and takes their interests into consideration.
* What the goal to aim for should be: Something that is (1) good and (2) humanity can coordinate around. In the best case this approximates Coherent Extrapolated Volition, but looks mundane: Humans build AI systems, and there is some democratic control over them, and China has some relevant AI systems, the US has some, the rest of the world rents access to those. Humanity uses them to become smarter, and figures out relevant mechanisms for democratic control over the systems (as we become richer and don't care as much about zero-sum competition).
* What is probably going to happen: A few actors create powerful AI systems and figure out how to align them to their personal interests. They use those systems to colonize the universe, but burn most of the cosmic commons on status signaling games.

Technically, I think that AI safety as a technical discipline has no "say" in who the systems should be aligned with. That's for society at large to decide.

This is great! 

Is the prediction that we will run out of text by 2040 specific to human-generated text or does it account for generative text outputs (which, as I understand it, are also being used as inputs)?

It is specific to human-generated text.

The current soft consensus at Epoch is that data limitations will probably not be a big obstacle to scaling compared to compute, because we expect generative outputs and data efficiency innovation to make up for it.

This is more based on intuition than rigorous research though.

I read that AI-generated text is being used as input data due to a data shortage. What do you think are some foreseeable implications of this? 

2
Erich_Grunewald
1y
You may be referring to Stanford's Alpaca? That project took an LLM by Meta that was pre-trained on structured data (think Wikipedia, books), and fine-tuned it using ChatGPT-generated conversations in order to make it more helpful as a chatbot. So the AI-generated data there was only used for a small part of the training, as a final step. (Pre-training is the initial, and I think by far the longest, training phase, where LLMs learn next-token prediction using structured data like Wikipedia.)

SOTA models like GPT-4 are all pre-trained on structured data. (They're then typically turned into chatbots using fine-tuning on conversational data and/or reinforcement learning from human feedback.) The internet is mostly unstructured data (think Reddit), so there's plenty more of that to use, but of course unstructured data is worse quality than structured data. Epoch estimates – with large error bars – that we'll run out of structured ("high-quality") text data ~2024 and all internet text data ~2040.

I think ML engineers haven't really hit any data bottleneck yet, so there hasn't been that much activity around using synthetic data (i.e. data that's been machine-generated, either with an AI or in some other way). Lots of people, myself included, expect labs to start experimenting more with this as they start running out of high-quality structured data. I also think compute and willingness to spend are and will remain more important bottlenecks to AI progress than data, but I'm not sure about that.

What attempts have been made to map common frameworks and to delineate and sequence the steps plausibly required for AI theories to transpire, pre-superintelligence or regardless of any beliefs for or against the potential for superintelligence? 

Hmm. I see this got downvoted and am curious why. Please let me know! Feel free to reach out anonymously if you prefer.

1
Robi Rahman
1y
My guess is it's because this post is not very EA-related.

Interesting, thanks for sharing. I'm curious about how the distribution of people that would see and vote on this Robin Hanson twitter poll compares with other populations.

I have only dabbled in ML, but this sounds like he may just be testing to see how generalizable models are / evaluating whether they are overfitting or underfitting the training data, based on their performance on test data (data that hasn’t been seen by the model and was withheld from the training data). This is often done to tweak the model to improve its performance.
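For readers less familiar with this, here is a minimal sketch of the train/test evaluation described above, added as an editorial illustration (scikit-learn and the toy dataset are assumed for this example; they are not part of the original comment):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy dataset standing in for whatever data the model is trained on.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Withhold 20% of the data so the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A large gap between train and test accuracy suggests overfitting;
# low accuracy on both suggests underfitting.
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy:", model.score(X_test, y_test))
```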

2
Charlie_Guthmann
1y
I definitely have very little idea what I’m talking about, but I guess part of my confusion is that inner alignment seems like a capability of AI? Apologies if I’m just confused.

I agree. This seems like an important problem.

Several existing technologies can wreak havoc on epistemics / perception of the world and worsen polarization. Navigating truth vs. fiction will get more difficult. This will continue to be a problem for elections and may sow seeds for even bigger global problems.

Anyone know what efforts exist to combat this?

Is this a subcategory of AI safety?

1
Phib
1y
Yeah, I have no idea, unfortunately. And yes it seems quite attached to AI capabilities.

Thanks for bringing this to light. I think awareness around deepfakes really does need to be considered more. 

So maybe spamming content for significant figures doing whacky things is effective for updating people's models for the probability of a deepfake. 

I would be slightly concerned about spamming people with deepfakes. I don't know if the average adult knows what a deepfake is. If spammed with deepfakes, people might think the world is even crazier than it actually is and I think that could get ugly. I think a more overtly educational or inte... (read more)

4
Phib
1y
Yeah discernment of truth makes sense to me - and fair, spam is probably not productive, but it got across my intention of ‘desensitizing’ people to this strategy of playing on our ‘discernment of truth’. I think Geoffrey’s comment on the next political cycle is really interesting for thinking about how that ‘spam’ may end up looking.

I appreciate this initiative, @Buhl. I went to the Google form and noticed it requires permissions to update. There are a lot of entries on it, and it looks like the last update was January 2022, pre-FTX crash. Not sure if others felt this way, but personally this made me question whether reading through the existing ideas / coming up with new ones would be a good use of time versus parasocial.

Is your goal to find out which ideas have the greatest support on the forum, to generate more ideas, to find people interested in working on particular ideas, ... (read more)

@Nathan Young  This is interesting, but I'm struggling to understand how it is helpful or would change things. Can you help me understand?

Thanks for sharing @Geoffrey Miller and @DavidNash .

The results of this study are interesting for sure. Examining them more carefully makes me wonder if there is a significant priming effect in play in both the 2015 and 2023 polls. This would not explain the 11 percent increase in participants worried about AI eventually posing a threat to the existence of the human race, though it potentially could have contributed, since there were some questions added to the 2023 poll that weren’t in the 2015 one.

I was surprised that in 2023, only 60% of par... (read more)

Thanks, I'm seeing that here, too: 

"It should be noted that although creatine is found mostly in animal products, the creatine in most supplements is synthesized from sarcosine and cyanamide [39,40], does not contain any animal by-products, and is therefore “vegan-friendly. The only precaution is that vegans should avoid creatine supplements delivered in capsule form because the capsules are often derived from gelatin and therefore could contain animal by-products."

Thanks for posting this. I think it's valuable to pay attention to what drives shifts in perception.

I think Ezra Klein does a good job appealing to certain worldviews and making what may initially seem abstract feel more relatable. To me personally this piece was even more relatable than the one cited. 

In the piece you cited I think it's helpful that he:

  • calls out the "weirdness"
  • acknowledges the fact that people that work on z are likely to think z is very important but identifies as someone who does not work on z
  • doesn't go into the woods with theories

I t... (read more)

It might have increased recently, but even in 2015, one survey found 44% of the American public would consider AI an existential threat. It's now 55%.

Glad to see this thread. It precipitated several questions, which I am happy to post separately if you’d like.

  1. Has anyone found a good source of vegan creatine?

  2. Has anyone calculated the monthly cost for their supplements?

  3. Does anyone try and avoid or at least minimize highly processed/ ultra processed foods? I’ve noticed more studies over the past several years around ill effects of such foods. It’s one reason I don’t consume most meat substitutes very often.

I wonder if concerns like the following act as barriers to people going vegan, or even r... (read more)

4
Stephen Clare
1y
On (1), I commented above, but most supplemental creatine is vegan as far as I can tell.

Thank you! Deleting this entry because I posted it as a question without understanding that this post was also visible.

This is absolutely beautiful and made me tear up. I am so sorry that Alexa is no longer here, but so glad that she spent her time on this planet exuding such infinite compassion. She sounds like a truly remarkable human.

Grief is so hard. Thanks for this.

Maybe my comment is off, since your article is specifically about AI alignment vs. capabilities research and I was taking the single sentence I quoted out of context. Will remove.

[This comment is no longer endorsed by its author]
2
NickLaing
1y
Maybe I'm missing something, what do you think are the assumptions that that statement makes?
6
ben.smith
1y
Can you describe exactly how much you think the average person, or average AI researcher, is willing to sacrifice on a personal level for a small chance at saving humanity? Are they willing to halve their income for the next ten years? Reduce by 90%? I think in a world where there was a top down societal effort to try to reduce alignment risk, you might see different behavior. In the current world, I think the "personal choice" framework really is how it works because (for better or worse) there is not (yet) strong moral or social values attached to capability vs safety work.

Thanks, I appreciate these insights and these are good ideas. 

Thanks for the response. I'm mainly concerned with #2 and #3.

4
Geoffrey Miller
1y
more better -- thanks for the clarification. I have no idea about how to handle number 3 (reducing search engine/LLM awareness of infohazards).

For number 2 (being cautious about raising awareness of infohazards on public forums), I guess one strategy would be to ask very vague questions at first to test the waters, and see if anybody replies with a caution that you might be edging into infohazard territory. And then if nobody with more expertise raises an alarm, gradually escalate the specificity of one's questions, narrowing the focus one step at a time, until eventually you either get a satisfactory answer, or credible experts call for caution about raising the topic.

Really, EA and related communities need some specific, consensual 'safeword' that cautions other people that they're edging into infohazard territory. I'm open to any suggestions about that. Trouble is, a lot of topics are treated as toxic infohazards that really aren't (e.g. behavior genetics, intelligence research, evolutionary psychology, sex research, etc). Most of these take the form of 'here's a behavioral sciences theory or finding that is probably true, but that the general public shouldn't learn about, because they don't have the political or emotional maturity to handle it'.

So we'd need a couple of different safewords -- one that refers to specific technical knowledge that could actually increase true existential risks (e.g. software for autonomous assassination drones, for genetically engineering more lethal pandemics, for enriching uranium, etc), versus one that refers to more general knowledge that (allegedly) could lead people to updating their social/political views in directions that some might consider unacceptable.
[This comment is no longer endorsed by its author]
4
Dawn Drescher
1y
There are the info hazards that just shouldn’t get into the wrong hands. In such cases, I’ve usually just asked around in some private setting. You may want to use something that is end-to-end encrypted in case all of Slack/Messenger/Telegram leaks one day. But there are also those info hazards that may be really bad for anyone who knows about them just by dint of knowing about them. I usually wait a long time and think about it a lot before I ask someone about it, and then I optimize for someone who I think might’ve thought about it already, might be immune, and might know more. And get their consent first. But for the most part I just keep these things to myself.

Really love this idea, and would encourage you to evaluate additional barriers to getting vaccines in arms. Maybe this could be via basic surveys or interviews, or via browsing some medical publications and talking to vaccine clinic organizers.

Also curious if there is an existing (or soon-to-exist) nonprofit that might be able to assist financially. A global vaccine shortage and insufficient public health tools amidst a pandemic seem like an important, tractable cause. I’d be interested in helping out if it makes sense.

Personally I’d be super excited about... (read more)

Filip, that sounds reasonable too. Thanks for the add.

Have you considered making the survey questions previewable for potential participants? People might feel more inclined to fill out the form if they can peruse it before deciding to partake.

I appreciate the work you have done to help streamline this process!

:) I wouldn't be surprised if this is a thing, and stand by my bad puns. Didn't realize you were joking, but also didn't judge the corgi hotel other than feeling slightly concerned about the possibility of animal exploitation.

I'm sorry to hear that others experience anxiety around finding lodging too, and think surveying for potential value of group bookings might be good.

Awesome, thanks for sharing this experience!

Vaidehi, thanks for this post. I like these ideas and have wondered about this too. I like how you lay out possible operational models.

Miranda, what you are doing looks promising to me; thanks for sharing!

Tim: Not sure. Didn't apply to this year's Prague event because I couldn't make the dates work. Glad you found a place to stay!

Irena: I was in Prague for a short time when I was little and remember it as a thriving cultural hub that I wanted to spend more time in. I think you're right on about people wanting to experience real life and travel to awesome places again, after prolonged lack of travel, arts/culture, in-person human interaction, etc. That's great that you made a public transport doc! I agree that there would likely be significant work involved ... (read more)

3
Charles He
2y
Right now, as we speak, I think many attendees are running into the problem that there are few low cost rooms to be booked. So I think many EAs are going through the anxiety of having booked expensive rooms as a result, or are spending hours circling the booking process. This anxiety is probably borne by more conscientious EAs. This is bad and I think people should book those rooms, despite the higher price.

So I made a joke about corgis and a fancy room, trying to show by example that this is OK. I thought doing this with a joke was way more effective (and less dry and briefer) than writing out the above. I think this is true, the joke and corgi picture was glorious.

I deleted this comment. No one messaged me about it, I just did it because I think there are downsides to my joke, including creating anxiety among a smaller group of organizers.

Thanks Kevin! Curious as to whether that worked out well for EAGx Boston.

4
KevinWei
2y
I did get one of those rooms through EA @ Georgia Tech. From my personal perspective, the block booking was convenient from a logistics perspective. It also probably had secondary effects for community building, as all of us staying in the same hotel helped us organize meetups / coworking time in the hotel.

From a cost perspective for the conference, my guess would be that it would be more cost effective to book a block rather than reimbursing individuals who book separately. Source: I had to do a bunch of block bookings for a past job, and it was significantly more cost effective. An interesting note is that in many hotels, single rooms and double rooms have the same nightly cost, so if many attendees had roommates then there is a potential for significant (40-50%) cost savings on hotel bookings.

Thank you! Good question. I think Charles' consideration makes sense here, though I'm not super familiar with what the location-based EA ecosystem looks like.

P.S. Charles, I've found many of your posts super insightful. Maybe not many people talk to you (me neither, unless I initiate the convo!), but perhaps more people are listening than you think.  

3
Charles He
2y
Thanks for your kind note. My comment was just a consideration, in the weakest sense. Another issue is that this might be driven by the tail value of a few EAs having valuable meetings, which seems hard to measure. My guess is that CEA has considered this and has some anecdotes or intuitions. Maybe others will comment? Haha, thanks, now there's at least two of you!

Thank you! That makes me feel good.
