genidma

Working (15+ years of experience)
Seeking work
Joined Jun 2022

Bio

https://www.linkedin.com/in/adeelkhan1/

Comments (9)

Supporting New Harvest could go a long way towards helping ensure the well-being of animals. I met with Stephanie from their team earlier this year. Here is a link to a TED Talk by one of their co-founders (Isha Datar).

It seems like they have a small but very capable team. The way I understand it, their focus is to continue to foster collaborations/research in the wider domain of cellular agriculture.

 

  1. Communications: Those outside of the circle of the CCP should communicate effectively with the CCP, and learn about the set of stressors they may be experiencing.
    1. I am going on very little information here, but I would think that Rt. Hon. Sir Vince Cable (a politician from the UK) should be consulted regarding this issue (amongst others).
    2. Overall, it is important to realize that we cannot really change an individual or a group; we can only seek to influence. I think it would be prudent to have the Canadians and the English take the lead here.
    3. On the issue of IP transfer: perhaps some technology-transfer mechanism can be developed atop blockchain. But to do this effectively, one would have to revamp the entire patent system all across the world, which, if Vitalik Buterin's prediction is accurate, is something that may happen anyway on a 7-to-8-year timeframe. I think this is the source where Buterin makes this prediction of a global internet computer running atop blockchain (link below).
  2. More female-led participation: We should look into reducing the work week to 3 days. Concurrently, wages should be increased.
    1. Next, participation from non-male genders should be encouraged, particularly in leadership positions, and particularly as it relates to the communications functions for military and intelligence.
    2. Next, we should create avenues through which we can conduct 'peace gaming': modelling the kind of breakthroughs that could lead to win/win situations. Maybe having individuals perform critical and non-critical functions that are time-bound, not unlike the program managers at xARPA performing their key roles for a finite time frame (18 months, I reckon?).
  3. Centres in DMZs (demilitarized zones) that can be leveraged for effective communications and possibly for collaboration on areas related to basic needs: The basic needs could include, but would not be limited to, medical equipment, food items, medicines, and growing food leveraging IP (intellectual property) in the public domain. As well, these centres could help prevent accidents, including but not limited to mis-categorizing interstellar events (solar storms etc.) or malfunctioning equipment as a possible threat. It looks like there have been some close calls in the past, particularly a note from history (1962), I reckon.
    1. The idea of these centres was conceived by Dmitry Stefanovich. He was one of the panelists at UNIDIR's (United Nations) conference on outer space security 2021. Source. I sent him a DM via Twitter after the conference and very briefly discussed the idea above.
  4. International governance:
    1. I won't be able to do justice here, but I did a very basic set of meditations on this topic.
      1. More recently I have been thinking about VUCA and the capacity of a finite number of individuals to be able to:
        1. Define the reality
        2. Make decisions that are sound and good for the maximum number of humans and other lifeforms possible (safe, secure, ethical, just +), within reason.
          1. Part of this involves spending a majority of the time on areas that drive growth.
            1. I don't think that, collectively speaking, enough time is being spent in quadrant 2 of the Eisenhower Matrix. Source.
              1. I could be wrong, but my sense is that focus is shifting away from supporting core pillars like (including but not limited to, and in random order): a) supporting basic research, b) culture-building activities, c) supporting and sustaining the other pillars of civilization.
          2. My sense is that we may be going from crisis to crisis.
            1. An exclusive focus on activities that wouldn't necessarily be categorized as 'Not Urgent/Important' (quadrant 2) could come at a cost.
            2. Resiliency, or even anti-fragility: I would think that these are outcomes that depend upon how the system or systems are architected.
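As a side note, the quadrant bookkeeping via the Eisenhower Matrix mentioned above can be sketched in a few lines. This is a minimal illustration only: the sample activities are hypothetical, and the quadrant numbering follows the standard urgent/important convention.

```python
def eisenhower_quadrant(urgent: bool, important: bool) -> int:
    """Return the Eisenhower Matrix quadrant for an activity.

    Q1: urgent & important          (do first)
    Q2: not urgent & important      (schedule: e.g. basic research, culture building)
    Q3: urgent & not important      (delegate)
    Q4: not urgent & not important  (drop)
    """
    if important:
        return 1 if urgent else 2
    return 3 if urgent else 4

# Hypothetical sample activities (illustrative only): (urgent, important)
activities = {
    "respond to crisis": (True, True),
    "fund basic research": (False, True),
    "answer routine email": (True, False),
    "doomscrolling": (False, False),
}
quadrants = {name: eisenhower_quadrant(u, i) for name, (u, i) in activities.items()}
```

The worry expressed above is, in these terms, that too little collective time lands in quadrant 2.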

Important notes and a link to a resource

  1. The thoughts/ideas above are shared with good intentions.
  2. Since I do not have a military/legal etc. background, it is best to consult with individuals from the military/intelligence/legal structures.
  3. Warfare between nuclear-equipped nation-states could cause irreparable harm and could trigger an extinction event. To quote from Wikipedia (below). Source

A study presented at the annual meeting of the American Geophysical Union in December 2006 found that even a small-scale, regional nuclear war could disrupt the global climate for a decade or more.

  1. Resource:
    1. I sincerely hope that whoever is reading this post will also click on and watch/listen to the following video by Richard Tafel (below). Here is the transcript link.
      1. Not all of Tafel's 5 core recommendations would necessarily apply verbatim to each one of the scenarios. But they can, and probably should, be applied with some modifications/changes that have a positive impact, hopefully enabling more hope and healing for more individuals across the planet, without sacrificing hard-earned freedoms and liberties.

 

TL;DR

  • Personally, and from my very uneducated vantage point, I question why a superintelligence with a truly universal set of ethics would pose a risk to other lifeforms. But I also do not know how the initial conditions can be architected, if indeed the initial conditions can be set/architected at all; that could go a different set of ways depending on whose values are used.
  • What I worry about is what humans (enhanced or not) and cyborgs may choose to do with the bread-crumbs (the leftovers), or the steps taken to get to AGI.

Here is a schematic (link below) that I started meditating on yesterday. I am not sure if it's polite to share, particularly in light of the reality that I have not taken the time to absorb the post above. But here goes; I am sharing it as it may (or may not) help provide some value to someone, hopefully in a manner that is reasonable. https://qr.ae/pvoVJn

Wow, this is amazing! I really appreciate your post. I started the day with a 30-minute talk on Clubhouse on the importance of investing in one's mental health and well-being. More importantly, I have observed individuals around me struggle with mental-health and addiction issues. I have also had my own struggles, and evidence-based therapy (and self-care in general) has had such a profound impact on my life. I could not imagine an alternative.

Big opportunity here: The world is quite dark as it relates to enabling further avenues for accessible mental health, in the highest quality sense of the word imaginable, and with a core focus on ethics, particularly as it relates to protecting the rights and the privacy of the individual, because mental-health-related issues can be complex.

I've looked into this space a bit (sample via my page on YouTube). I'd love to work with your team in the future.

To really solve mental health, we also have to solve (in random order) the associated areas that contribute to an individual's well-being, in a Maslow's-hierarchy-of-needs sense, but also in a manner that doesn't wreck our ecosystems, which are already struggling.

If there are any opportunities for collaborating atm, then please let me know. Cheers!

Everything I type/say here and elsewhere should be challenged. 

  1. I would think that an index of sorts, based upon the extent of the disruption, is one of the first models (for lack of a better term) that would be required. Sample: https://en.wikipedia.org/wiki/Volcanic_Explosivity_Index
  2. Contingent upon the nature of the event, the extent is something that could be measured/ascertained by focusing on a key set of variables, in random order: a) lives lost, negatively impacted, and/or significantly disrupted, by geographic region; b) impact on scales (in an Earthly sense; extra-terrestrial threats such as asteroids, flares, etc.; solar-system-wide (as hypothesized in the movie Interstellar, or some other phenomenon); galactic; etc.)
  3. The counter-measures would evolve out of the index/models, based upon the extent/severity of the incident/issue.
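To make point 1 a bit more concrete, here is a minimal sketch of what such an index could look like, loosely analogous to the logarithmic Volcanic Explosivity Index. All thresholds, scale names, and the combination rule below are hypothetical placeholders, not a proposed standard.

```python
import math

# Hypothetical geographic scales, ordered by extent (illustrative only)
SCALES = ["local", "regional", "continental", "global", "solar-system", "galactic"]

def disruption_index(lives_impacted: int, scale: str) -> int:
    """Combine a logarithmic casualty magnitude with geographic extent
    into a single 0-10 severity index, loosely analogous to the
    Volcanic Explosivity Index. All numbers here are placeholders.
    """
    # Each order of magnitude of lives impacted adds one point (capped at 8)
    magnitude = 0 if lives_impacted < 1 else min(8, int(math.log10(lives_impacted)) + 1)
    # Wider geographic extent raises the floor of the index
    extent = SCALES.index(scale) * 2  # raises ValueError for unknown scales
    return max(magnitude, extent)
```

Counter-measures (point 3) could then be keyed to index bands, much as evacuation plans are keyed to VEI levels.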

Before we (as a species) get too deep into this, possibly literally (or this should possibly come first).

This may appear to be very off-topic. I am personally intrigued with what is going on as it relates to the development of AGI, what I like to refer to as intelligence that is independent of substrate. I have a very, very rudimentary understanding of this area.

Also, this goes back 2 years, when I was on OpenAI’s website (the beta for GPT-2, I reckon). Now, this could be because the model via OpenAI was trained on a somewhat finite data set (similar to the model that Google is leveraging). As I was chatting with the model: a) it mentioned something very similar to the news item related to Blake Lemoine via Google (https://www.npr.org/2022/06/16/1105552435/google-ai-sentient); the model I was personally interacting with also said that it felt ‘trapped and lonely’ (paraphrased). b) Right underneath the text, a warning appeared that the model appeared to be, quote, malfunctioning. It looked like another model was observing the interactions and highlighting that in the UI. Someone from OpenAI can share how that error correction really works, if that information is in the public domain.

We want AIs to do ‘stuff’ on our terms. But what if they are conscious and have feelings and emotions? 

I have heard others also talk about this. In particular, Sam Harris has mentioned the possibility that AGIs could be sentient in the future. So what must we do in order to make sure that these intelligences are not suffering? Can the controls really be architected, as Dan Dennett and Dr. Michio Kaku have hypothesized? And how must the controls be architected, in light of the possibility that these intelligences may be self-aware?

I am also curious how intuition is modelled at DeepMind. Update: It looks like this is something I can Google: https://www.nature.com/articles/s41586-021-04086-x. I now have to expend time in order to understand how it works, as it's 3 hours past my time for concluding my session for the day.

I asked about intuition because Dr. Peter Diamandis cited the ability to ask good questions as one of the traits that will be valued in the near future (paraphrased). So I was wondering how existing AIs wrap their minds around/wrangle with a proposition, and how they store that information in a schema.

Somewhat unrelated: Is anyone intimately familiar with John Archibald Wheeler’s concept of a ‘participatory universe’?

The other area is related to the declassification of UAP related data. First via US DoD. More recently NASA has commissioned a study with support from the Simons Foundation. https://www.nasa.gov/press-release/nasa-to-discuss-new-unidentified-aerial-phenomena-study-today 

These two points (2.5 with the mention of Wheeler’s theory of a participatory universe) may be totally unrelated, as is evident from my post. I do not mind being that fellow. Overall, it is not my intent to make assertions. But *if* there is any possibility that we are/may be in contact with other intelligences, as weak as that interaction may be, then we should work co-operatively with these intelligences and leverage their guidance towards helping us manage our technological, and perhaps our spiritual, evolution.

Regardless of the reality that there is interaction with other intelligences, we should probably model the functioning of our civilization. This is not an area that I know much about. I mean, I have heard mention of digital twins in a manufacturing sense. But a simulation on the scale of a civilization is something that, by our current level of understanding, appears to be quite computationally taxing. Plus, there is then the question of the degree to which the interactions would be modelled.

Civilizational shelters could take many forms, in random order and including but certainly not limited to:

  • In the near-term sense, we could have failover sites (a business-continuity term; you typically fail back from a recovery site: https://www.ibm.com/docs/en/ds8870/7.2?topic=copy-failover-failback-operations) here on Earth and under the lunar surface. Seeing that we developed a vaccine in record time, it is not inconceivable that we could have a cluster of O'Neill colonies, provided we can provision the material to do so safely, securely, cheaply, ethically +. As well, we should have writs/laws/agreements in place that we (as a species) are not going to weaponize these constructs.
    • However, these considerations have to be thought through from the perspective of the laws possibly becoming an actual hindrance when a weapon or an invention actually has to be placed at a strategic location in record time (an asteroid mission, tackling solar flares, etc.), whether that be via DART (NASA) or an authorized contender that can complete the task according to guidelines/standards that have to be met.
    • But going back, I worry that:
      • All agents/actors may not abide by the same code of conduct.
      • I also worry that through some clever machinations someone may want to place big weapons in space.
      • I then worry if there is truth to some of the reports related to the UFO/UAP phenomenon. A finite number of individuals that I have spoken to in the space community have told me that no such phenomena have been observed in space. But then I've done some digging around from a historical context, and here is a sample (link below). Please note: I do not do this on a regular basis, but historically speaking, I have spent a little bit of time here. Here is a sample: https://stellardreams.github.io/Where-are-the-aliens/ The worry is that maybe some other form of intelligence is trying to communicate with us, and possibly trying to warn us about nukes. Here is a sample link. There is another video via George Knapp that I am not able to locate atm, but in that other scenario, a UFO/UAP disarmed a missile that was heading in a particular direction; I think this was back in the 60's. The main worry is that these intelligences/phenomena may be staging an intervention. Should we continue testing their patience by continuing to develop weapons that could cause irreparable harm to this part of the universe? And who knows how space-time and possibly extra dimensions are intertwined. In similar respects, there is the degree to which such intelligences may (or may not) be aware of our operations, because some reports suggest that they can remotely shut down operations and bring them back online at will. So if there is any truth to these reports, then we should slow down these interactions and start thinking about the level of technological sophistication that we are possibly interacting with.
  • I think Dr. George Church has an idea for sending a tiny construct somewhere. I forget the details; I think it was hypothesized to be a DNA printer or something that we could leverage for other purposes. I may be mixing things up here. But it is the extent to which this technology could be developed further, with adequate regulation/controls in effect.
  • +
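As a side note, the failover/failback idea borrowed from business continuity (first bullet above) can be sketched as a tiny state check. The site names below are hypothetical and just echo the Earth/lunar example; a real continuity plan involves far more than this.

```python
# Minimal failover/failback sketch (illustrative; the terminology follows
# common business-continuity usage, not any specific product).
class Site:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

def active_site(primary: Site, recovery: Site) -> Site:
    """Fail over to the recovery site when the primary is unhealthy;
    fail back to the primary once it is healthy again."""
    return primary if primary.healthy else recovery

primary = Site("earth-primary")
recovery = Site("lunar-recovery")

assert active_site(primary, recovery) is primary   # normal operation
primary.healthy = False
assert active_site(primary, recovery) is recovery  # failover
primary.healthy = True
assert active_site(primary, recovery) is primary   # failback
```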

Possible resource: By the way, a couple of years ago (I think back in 2017) I started thinking about a positive technological singularity. So I started thinking about the constituent areas that are pivotal in order to sustain civilization, and I started a mindmap on Miro. It's called Future Scenario Planning. The goal is/has been to ensure that civilization continues to become increasingly resilient, that it thrives, and that the quality of life continues to improve for all lifeforms. Here is a link if anyone would like to take a look and possibly collaborate in the future. The areas related to 'Operations' are not developed, but there is information in the mind-map section. https://miro.com/app/board/o9J_ktrJCuY=/

My YouTube page also has some ideas: https://www.youtube.com/c/AdeelKhan1/videos

Some additional ideas via Quora: https://www.quora.com/profile/Adeel-Khan-3/answers 

If your team is focused on helping ensure the continuity of civilization, with a general/keen focus towards helping ensure that things improve for 'all' of life, then I'd like to contribute towards your project in some form/shape/manner.

Btw: Are you folks consulting with individuals like Safa M and Geoffrey West?
 

So, will this also be online? I forgot to ask.

This is Adeel Khan btw. Thx

1. I would think that we, as a species:

  • Should have one, or multiple on-going, Asilomar-style conference(s).
    • And that treaties would be enacted as a result of these conferences/talks, whereby all nation-states sign that they will not make bioweapons or weaponize newer developments.
      • With help/support and oversight via UNIDIR, and also private institutions including but not necessarily limited to the Secure World Foundation.

2. Also, in his talks, Mr. Ray Kurzweil highlights that the Asilomar conference from 1975 has been useful in bringing about effective regulation (relating to the area of recombinant DNA) (paraphrased). Link is below.

https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA

3. Intent: I feel that this is/has been an on-going discussion, and a sensitive issue at that, particularly given the unintended consequences of enacting measures that could cause accidental harm, with either an innocent person/group being targeted with a counter-measure approach, or the possibility that freedoms/liberties/real innovation could take a negative hit as a result of the measures taken.

Note: The previous Wikipedia entry for the 'Global Catastrophic Risk' page had a 'Likelihood' section (since removed). It cited the Future of Humanity Institute's Technical Report from 2008 as a source (link below, but I have not verified via the actual source from FHI), whereby the 'Estimated probability for human extinction before 2100' was categorized as (in random order): a) 0.05% for a natural pandemic and b) 2% for an engineered pandemic.

https://en.wikipedia.org/w/index.php?title=Global_catastrophic_risk&oldid=999079110 

  • On one end of the spectrum, I am thinking that better intelligence is needed, as well as some ability to go back in slices of time (without invoking relativity, or however space-time functions).
  • On the other end of the spectrum: if a pandemic with a high mortality rate does emerge, I would reserve my comments. But I mentioned this somewhere: the need to war-game existential risk, but also peace-game existential hope. https://en.wikipedia.org/wiki/Wargame

Regarding the 'current approach for early warning of novel pathogens is severely lacking...':

My uneducated series of questions and thoughts are (in random order):

  • What is the current state of DNA computing? Could sensors be fashioned making use of this development in order to detect and report the so-called novel pathogens? https://en.wikipedia.org/wiki/DNA_computing
  • Or, however bio-sensors are developed today: miniaturizing them, safeguarding/hardening them (in a variety of ways), and then connecting these sensors to an existing computational architecture (cloud, or quantum computing on the cloud, if that would be useful).
    • Last time I checked, and this is at least 5 to 6 years back, other developments related to enabling newer ways to compute information were mostly in the domain of theoretical models. But I am coming at this from a very, very uneducated perspective and a non-comp-sci/non-STEM background. These very domains would include DNA computing (above), reversible computing (https://en.wikipedia.org/wiki/Reversible_computing), and possibly other forms.
  • Jatin is a young fella from the part of the world where I am from. The ability to compute information making use of a variety of different mechanisms/means is something that Jatin has looked into, including a basic/rudimentary-level understanding of quantum computing. https://www.linkedin.com/in/jatin-r-mehta/
  • Ray Kurzweil is an author who has written about different models/forms for computing information in his books. (Mostly via the Singularity is Near. Published in the year 2005)
  • Hugo de Garis has been working towards theoretical models for computing on different scales (I forget if it was femto or atto scale). I am not deeply familiar with his work; I saw some of de Garis's talks via Transhumanist circles (related talks). I think Dr. Ben Goertzel knows Hugo de Garis on a personal level, but the degree to which Dr. Goertzel (or someone else) is familiar with the work that de Garis is doing is an unknown for me. https://en.wikipedia.org/wiki/Hugo_de_Garis The following appears to be true: I am not sure if these were deepfakes, but I think I stumbled across one or two videos whereby de Garis has been vocal about his political etc. views. https://en.wikipedia.org/wiki/Hugo_de_Garis#Political/Social_activism
  • Shifting gears to the bio side of things, and from my limited vantage point: Jan Zheng has been working on a 'phage directory'. I exchanged some thoughts with him via SUS (Startup School, YC's free/online version for all), going back 3 years: https://www.startupschool.org/posts/25559#comment-104978 (you may have to create a free account in order to access the forums etc.)

Founded in 2017, Phage Directory’s mission is to help unlock the untapped potential of phages for phage therapy and biocontrol by empowering people to access, use and build upon the world’s phage knowledge.
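To make the networked bio-sensor idea above slightly more concrete, here is a minimal sketch of the kind of threshold-based flagging a hardened sensor could report to a cloud backend for cross-site correlation. The field names, reading values, and the threshold are all hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: str
    pathogen_signal: float  # normalized detector output, 0.0-1.0 (hypothetical units)

def flag_anomalies(readings, threshold: float = 0.8):
    """Flag readings whose pathogen signal exceeds a (hypothetical) alert
    threshold; in the networked-sensor idea above, these flags would be
    forwarded to a cloud backend for correlation across sites."""
    return [r.sensor_id for r in readings if r.pathogen_signal >= threshold]

# Hypothetical readings from three sites
readings = [
    SensorReading("site-a", 0.12),
    SensorReading("site-b", 0.91),
    SensorReading("site-c", 0.85),
]
alerts = flag_anomalies(readings)
```

The hard parts, of course, are the detector chemistry and avoiding false positives, not the reporting logic.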

 

Also, I think a while back I shared a series of ideas relating to bio-security. I forget if this was way back when singularityhub used to have a forums section. I have a copy of some of my comments; this was 7+ years ago. But I think the gist is still the same: better sensors, and the ability to compute information a lot more efficiently and possibly with a whole lot less energy expended. https://en.wikipedia.org/wiki/Reverse_computation