Maynk02

47 karma · Joined Oct 2022 · Seeking work

Bio

I am a reader looking for knowledge.

How others can help me

I am actively looking for opportunities to bring about change in society and to make that my profession.

How I can help others

Reach out to me for anything. If I can help, I definitely will.

Sequences
1

From Animal Weapons to New-world Arms Race

Comments
20

Great news! I really like some of the channel's videos.

I do want to ask though, will there be a specific type or niche of animated videos that you guys are planning to work on for external projects? Like animation work for studios, industrial projects, or outside altruistic causes?

I agree with the proposal of university groups as impact-driven truth-seeking teams, and I have added a few of my observations in response to your comment. Of course, it can work out. I tried to think through some of the reasons behind the ambiguity you mentioned; this is just my two cents. I, too, consider participation to be the most important thing.

As someone who has first-hand experience with many of the points mentioned in the post, I can say that the current state of college-level EA groups is largely limited to theory rather than action. There are likely multiple reasons, but I can mention some that I have personally observed:

  • In practice, college coursework often aligns poorly with EA values, especially for students at technical and business institutions.
  • Most students can only study "doing good" in limited stretches; the rest of their time is reserved for their original fields of study.
  • The common and foremost goal[1] of college EA groups is organizing events to read and discuss resources (posts and blogs). In practice, EA college groups are typical WhatsApp groups serving as cross-posting channels for events, and that's all. Opportunities may exist locally, but college students typically can't afford to get involved at the student level.
  • College students are often surrounded by substantially larger groups of non-EA people.


  1. ^

    First-hand experience. 

(Without going through part 2 and the mathematics)

After going through part 2: Good job! I hope to see this model in use sooner or later!

Thanks for writing such a useful post!

Upon observing the pattern, I can see why you stated in the conclusion that the graph would be mostly quite obvious: high-intensity pain with maximum time reduction would always move to the Pareto frontier. This bears directly on pain of the Disabling variety, the byproduct of most factory-farming practices aimed at reduced production costs and efficient storage.
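To make the Pareto-frontier point above concrete, here is a minimal sketch. The option names and numbers are entirely hypothetical, purely for illustration; an option is on the frontier if no other option is at least as good on both objectives and strictly better on one.

```python
def pareto_frontier(options):
    """Return names of non-dominated options (maximizing both objectives)."""
    frontier = []
    for name, intensity, time in options:
        # An option is dominated if some other option is >= on both
        # objectives and strictly > on at least one.
        dominated = any(
            (i2 >= intensity and t2 >= time) and (i2 > intensity or t2 > time)
            for _, i2, t2 in options
        )
        if not dominated:
            frontier.append(name)
    return frontier

# Hypothetical interventions: (name, intensity reduction, hours of pain averted)
options = [
    ("A", 3.0, 10.0),
    ("B", 9.0, 40.0),  # best on both objectives
    ("C", 5.0, 20.0),
    ("D", 9.0, 15.0),
]
print(pareto_frontier(options))  # -> ['B']
```

Here the option combining high intensity reduction with high time reduction dominates everything else, which matches the intuition that such options always end up on the frontier.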

I think the Multi-Objective model you mention could be quite handy. We could incorporate it into cost-effectiveness analyses of interventions, but I think we could also use it to meet the prerequisites for estimating better Pareto-frontier data.

Of course, behaviour is probably a good indicator of pain, as the evolutionary point of pain is to change behaviour. One caveat, though, is that behavioural patterns change after prolonged treatment; for example, the case of cattle would differ from that of hens and chickens. That data can only be obtained through reliable monitoring and testing (a fundamental bottleneck).

 

P.S. I think this is an important consideration:

"so far this has had limited success due to the scarcity of relevant studies on humans, not to mention species of farmed animals."

Somebody (or a bunch of somebodies) can only try to come forward to take action. But I am afraid that's what they tried to do.

Here, "they" refers to the folks at OpenAI who tried to come forward and do something about Sam's manipulative behavior, lies, or whatever was happening: anyone who might potentially provide leaks or shed some light.

It was like the first necessary crisis (the sooner, the better) for later events to unfold. I am unsure about their nature.

Here, I am unsure about the nature of the events. 

I hope it is clear now.

I am not sure if leaks are a reliable source in these cases. For one, these instances don't have material evidence. Somebody (or a bunch of somebodies) can only try to come forward to take action. But I am afraid that's what they tried to do. It was like the first necessary crisis (the sooner, the better) for later events to unfold. I am unsure about their nature. This is partially based on the new board's recent update on choosing new members.

No they didn't, and it looks like we aren't going to see the investigation, unless somebody leaks it. 

I might be out of the loop on the latest update here, but did they release the reason behind Altman's firing? I don't think he ever answered it in subsequent interviews. Gradually, the questions died down, or were perhaps dropped from questioners' lists due to a clause. Now that he is back at the table,[1] I think it has become more urgent to get the original motivations out.

  1. ^

Workers' rights usually fall under the umbrella of systematic rights violations, a term typically associated with human rights. We can use similar pointers to forecast questions and solutions. Some would overlap with data mining and fair use, which are hardly followed. It is not very hard for an average company to see the pivots made by OpenAI's crisis-management team. OpenAI research leads say their recent model is trained on a combination of publicly available data and data that OpenAI has licensed, but they can't go into much detail about it.

The last part is no easy feat for anyone to dive into. This conversation came out less than two days ago and seemed quite intentional. We can safely assume that this is going to be the new norm for addressing lawsuits; it is admissible in all formal proceedings, after all. It is worth noting that statements like "in some ways, we really see modeling reality as the first step to be able to transcend it" are delivered deliberately at the end. I don't think anyone would want to deal with them and get stuck in an expensive legal limbo beyond their control, one that OpenAI can afford.

Great piece. In terms of potential scenarios, I think this video also covers some good points.

I think the point about AI governance is quite valid here, and so is the point about the progress of AI applications built on OpenAI tools like GPTs, Sora, and the upcoming multimodal AI application. It is important to note that most of the exposure affects large groups of non-savvy individuals, both in the general workforce and among decision-makers.

As someone with a significant background in video (what goes into making it and how it circulates once released), I can say there was sufficient time to implement basic structural provisions in the generative-media sector. It is, and was, obvious that unregulated use of massive datasets is in play, and only giants can try to sue giants; as of now, that's just it. Strict rules should have been in place before platforms like Sora were marketed.

It is not hard to see that tech companies are not focused on AI alignment right now; new research is oriented toward making the most of a competitive setting to deliver products. That's just a fact, and it is sad that this is what we are excited about for the future. Of course, we cannot blame the entire field of research for forecasting and making readings, but there are regular massive layoffs in the name of mere "future-proofing". We must quickly figure out how to move in the direction of action-guiding research.
