
Technological Singularity | by Aman Srivastava

  • Writer: The Computers and Mathematics Society, SRCC
  • Apr 16, 2020
  • 4 min read

“The problem is not simply that the Singularity represents the passing of humankind from centre stage, but that it contradicts our most deeply held notions of being”, said Vernor Vinge. The world today is fascinated by the ‘accelerating’ rate of development in Artificial Intelligence, but have you ever wondered what will happen when AI outdoes humanity?


The Technological Singularity, in line with Ray Kurzweil’s ‘law of accelerating returns’, refers to a hypothetical moment when Artificial Intelligence reaches a stage greater than human intelligence, making further technological development unfathomable and unpredictable. In mathematics, a singularity is a point where the value of a function grows towards infinity. In astronomy, the term is associated with black holes, from which not even light can escape. Similarly, in the future, the Technological Singularity might lead us to a level of intellect that results in rapid changes to human civilization.
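To make the mathematical analogy concrete, consider a minimal worked example (an illustration added here for clarity, not part of Kurzweil’s argument): the function f(x) = 1/x has a singularity at x = 0, because its value grows without bound as x approaches zero. In LaTeX notation,

f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} f(x) = +\infty

Near the singular point, ever smaller steps in x produce ever larger jumps in f(x), which is precisely the intuition behind development becoming unfathomable and unpredictable.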


Ray Kurzweil, the figure most closely associated with recent developments in this field of study, predicts that nanotechnology, artificial intelligence, robotics and biotechnology will advance to the point of singularity by the year 2045. By then, he argues, the human race will augment its minds with nanotechnology and genetic transformations, and machine intelligence will be vastly more powerful than human intelligence, resulting in the creation of Superhuman Artificial Intelligence.


The concept builds on the idea that a machine with superhuman intellect would take over the process of further development itself. The accelerating pace of development is often illustrated with Moore’s Law, named after Gordon Moore, co-founder of Intel: the observation that the number of transistors in a densely packed integrated circuit doubles roughly every two years. That pace has since slowed to about two and a half years, and Moore himself has pointed out that the law will eventually come to an end. Still, the law connects naturally to the idea of the singularity, because it captures decades of compounding improvement. Computers keep getting faster, and proponents argue that the rate of this improvement is itself accelerating. If a Superhuman Artificial Intelligence is possible, it could create a better, more developed Superhuman Artificial Intelligence, and do so faster than we did, with the cycle repeating until we reach a singularity.
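As a rough back-of-the-envelope illustration of what this doubling means in practice (a sketch added here; the starting figures are assumed from the Intel 4004 of 1971 and are not taken from the article), a few lines of Python are enough to see the compounding:

# Moore's Law as a simple doubling model (illustrative sketch only).
# Assumes roughly 2,300 transistors on the Intel 4004 in 1971 and an
# idealised doubling of the count every two years thereafter.
def transistors(year, base_year=1971, base_count=2300, period=2):
    """Estimate the transistor count under a two-year doubling model."""
    doublings = (year - base_year) / period
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011, 2021):
    print(year, round(transistors(year)))

Even this toy model climbs from a few thousand transistors to tens of billions within fifty years, which is the kind of compounding growth the singularity argument extrapolates from.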


But the biggest question here is: how will this singularity affect mankind?

The impact of the singularity can be studied in many ways. One argument treats human intelligence as a subjective concept and views the technological singularity as a development towards greater human intelligence itself. Prima facie, the benefits of the Technological Singularity to a common person might seem lucrative, with advantages like greater government control, greater human intellect directed at transforming the world, and the possibility of genetic modification. However, it would not be wise to see only the positive side of the picture, as the Technological Singularity has its cons too. Among the benefits to the human race, the biggest is that such a superhuman intellect could work on the problems of billions of people simultaneously; another, as with every technological advancement, is the growing ease of everyday tasks. Among the negatives, there is reason to fear something beyond human capacity to understand or comprehend, something that could ultimately alter the human race to a large extent. And, as with all growing technology, privacy remains the single biggest concern; the singularity is no exception.


One can argue that the probability of an unchained superintelligent system is low, but if an AI is unchained it will begin to learn as we do, and three extreme possibilities follow: a benevolent and friendly nature; an indifferent attitude that leaves us unaltered; or the grim possibility of destruction. We also need to consider how such a system could be developed, either by copying the human mind or by building a program that improves itself. Either way, the core feature of AI is learning, and thus it would not be foolish to say that the occurrence of this phenomenon looks inevitable.


But is it realistic to talk about such a concept?

Well, yes; it seems realistic enough to discuss. We have already seen progress from vacuum tubes to transistors, and continual development within transistor technology since. Intelligence is based on learning, and so the concept of a Superhuman Artificial Intelligence seems plausible at large, and perhaps inevitable as well. Whether it is good or bad depends largely on perception: Kurzweil finds the prospective benefits exceed the threats, but we can expect to hear different opinions on this subject from every corner of the world.

From Terminator to The Matrix, the fictional world has found the concept fascinating and has tapped into its darker side, but technology controlled is technology utilized. How we will reap the benefits of a technology that is beyond our mental capacity, one we cannot even fully understand, remains a big question.

 
 
 
