AI Devspace

What is the concept of the technological singularity?

asked 4 weeks ago · 4 answers · 2.0K views

I've heard of the idea of the technological singularity. What is it, and how does it relate to Artificial Intelligence? Is it the theoretical point where Artificial Intelligence machines have progressed to the point where they grow and learn on their own, beyond what humans can do, and their growth takes off? How would we know when we reach this point?

4 Answers

The "singularity," viewed narrowly, refers to a point at which economic growth is so fast that we can't make useful predictions about what the future past that point will look like.

It's often used interchangeably with "intelligence explosion," the point at which we get so-called Strong AI: AI intelligent enough to understand and improve itself. It seems reasonable to expect that the intelligence explosion would immediately lead to an economic singularity, but the reverse is not necessarily true.

The concept of "the singularity" is when machines outsmart the humans. Although Stephen Hawking opinion is that this situation is inevitable, but I think it'll be very difficult to reach that point, because every A.I. algorithm needs to be programmed by humans, therefore it would be always more limited than its creator.

We would probably recognize that point when humanity loses control over Artificial Intelligence: super-smart AI competing with humans and perhaps creating even more sophisticated intelligent beings. For now, though, that is more science fiction than reality (think of Terminator's Skynet).

The risks could involve killing people (e.g., autonomous war drones making their own decisions) or destroying countries or even the whole planet (e.g., an A.I. connected to nuclear weapons, as in the movie WarGames), but none of that proves that the machines would be smarter than humans.

The singularity, in the context of AI, is a theoretical event whereby an intelligent system meeting the following criteria is deployed.

  1. Capable of improving the range of its own intelligence or deploying another system with such improved range
  2. Willing or compelled to do so
  3. Able to do so in the absence of human supervision
  4. The improved version sustains criteria (1) through (3) recursively

By induction, the theory then predicts that a sequence of events will be generated with a potential rate of intelligence increase that may vastly exceed the potential rate of brain evolution.
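To make the induction concrete, here is a minimal, purely illustrative sketch in Python. The per-generation gain and the starting value are hypothetical assumptions chosen only to show the shape of the argument, not a model of any real system.

```python
# Toy illustration of criteria (1)-(4): each generation produces a slightly
# more capable successor, and the successor repeats the step unsupervised.
# The 10% gain per generation is an arbitrary, hypothetical assumption.

def improvement_step(capability: float, gain: float = 0.10) -> float:
    """Criterion (1): produce a successor with a wider range."""
    return capability * (1.0 + gain)

def recursive_self_improvement(initial: float, generations: int) -> list[float]:
    """Criteria (2)-(4): repeat the step recursively, with no human in the loop."""
    history = [initial]
    for _ in range(generations):
        history.append(improvement_step(history[-1]))
    return history

if __name__ == "__main__":
    trajectory = recursive_self_improvement(initial=1.0, generations=20)
    # A fixed fractional gain compounds geometrically, which is the sense in
    # which the predicted rate of increase could dwarf biological evolution.
    print(f"after 20 generations: {trajectory[-1]:.2f}x the starting capability")
```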

How obligated this self-improving entity or population of procreated entities would be to preserve human life and liberty is indeterminate. The idea that such an obligation can be part of an irrevocable software contract is naive in light of the nature of the capabilities tied to criteria (1) through (4) above. As with other powerful technology, the risks are as numerous and far-reaching as the potential benefits.

Risks to humanity do not require intelligence. There are other contexts in which the term singularity is used; they are outside the scope of this AI forum, but may be worth a brief mention for clarity. Genetic engineering, nuclear engineering, globalization, and basing an international economy on a finite energy source that is being consumed thousands of times faster than it formed in the earth: these are other examples of high-risk technologies and mass trends that pose risks as well as benefits to humanity.

Returning to AI, the major caveat in the singularity theory is its failure to incorporate probability. Although it may be possible to develop an entity that conforms to criteria (1) through (4) above, it may be improbable enough that the first such event occurs long after all the languages currently spoken on Earth are dead.

On the other extreme of the probability distribution, one could easily argue that there is a nonzero probability that the first event already occurred.

Along those lines, if a smarter presence were already present on the Internet, how likely is it that it would find it in its best interest to reveal itself to lesser human beings? Do we introduce ourselves to a passing maggot?

The technological singularity is a theoretical point in time at which a self-improving artificial general intelligence becomes able to understand and manipulate concepts outside of the human brain's range, that is, the moment when it can understand things humans, by biological design, can't.

The fuzziness about the singularity comes from the fact that, from the singularity onwards, history is effectively unpredictable. Humankind would be unable to predict any future events, or explain any present events, as science itself becomes incapable of describing machine-triggered events. Essentially, machines would think of us the same way we think of ants. Thus, we can make no predictions past the singularity. Furthermore, as a logical consequence, we'd be unable to define the point at which the singularity may occur at all, or even recognize it when it happens.

However, in order for the singularity to take place, AGI needs to be developed, and whether that is possible is quite a hot debate right now. Moreover, an algorithm that creates superhuman intelligence (or superintelligence) out of bits and bytes would have to be designed. By definition, a human programmer wouldn't be able to do such a thing, as his or her brain would need to comprehend concepts beyond its range. There is also the argument that an intelligence explosion (the mechanism by which a technological singularity would theoretically come about) may be impossible: the difficulty of the design challenge of making a system more intelligent could grow in proportion to its intelligence, and that difficulty may eventually outpace the intelligence available to solve it.
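The tension in that last argument can be shown with a hypothetical toy model: whether recursive improvement keeps accelerating or fizzles out depends on how fast the design difficulty grows relative to the intelligence doing the designing. The exponents below are illustrative assumptions, not empirical claims.

```python
# Hypothetical toy model of the "design difficulty" objection. Each step, the
# gain is the current intelligence divided by a difficulty term that itself
# grows with intelligence. The exponents are arbitrary illustrative choices.

def simulate(difficulty_exponent: float, steps: int = 50) -> float:
    intelligence = 1.0
    for _ in range(steps):
        difficulty = intelligence ** difficulty_exponent
        intelligence += intelligence / difficulty
    return intelligence

if __name__ == "__main__":
    # Difficulty grows slower than intelligence: gains keep accelerating.
    print(f"exponent 0.5: {simulate(0.5):7.1f}")
    # Difficulty grows faster than intelligence: gains shrink and progress stalls.
    print(f"exponent 2.0: {simulate(2.0):7.1f}")
```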

Also, there are related theories involving machines taking over humankind and all of that sci-fi narrative. However, that is unlikely to happen if Asimov's laws are followed appropriately. Even if Asimov's laws were not enough, a set of constraints would still be necessary to prevent the misuse of AGI by ill-intentioned individuals, and Asimov's laws are the nearest thing we have to that.

