Artificial Intelligence
is the hypothesis of the invention of Artificial General Intelligence (AGI). AGI would be capable of recursive self-improvement, leading to the rapid emergence of Artificial Superintelligence (ASI). This will abruptly trigger a runaway feedback loop of technological breakthroughs resulting in unfathomable changes to human civilization... the limits of which are unknown. Technological progress will become so rapid that it makes the future after the Singularity qualitatively different and impossible to predict due to an inability of human beings to imagine the intentions or capabilities of superintelligent entities.
is the hypothesis of the invention of Artificial General Intelligence (AGI). AGI would be capable of recursive self-improvement, leading to the rapid emergence of Artificial Superintelligence (ASI). This will abruptly trigger a runaway feedback loop of technological breakthroughs resulting in unfathomable changes to human civilization... the limits of which are unknown. Technological progress will become so rapid that it makes the future after the Singularity qualitatively different and impossible to predict due to an inability of human beings to imagine the intentions or capabilities of superintelligent entities.
Existential Risk From Artificial Intelligence
Existential Risk From Artificial Intelligence
is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then this new superintelligence could become powerful and difficult to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.
is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then this new superintelligence could become powerful and difficult to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.
One source of concern is that controlling a superintelligent machine, or instilling it with human-compatible values, may be a harder problem than naïvely supposed. Many researchers believe that a superintelligence would naturally resist attempts to shut it off or change its goals—a principle called instrumental convergence—and that preprogramming a superintelligence with a full set of human values will prove to be an extremely difficult technical task.
One source of concern is that controlling a superintelligent machine, or instilling it with human-compatible values, may be a harder problem than naïvely supposed. Many researchers believe that a superintelligence would naturally resist attempts to shut it off or change its goals—a principle called instrumental convergence—and that preprogramming a superintelligence with a full set of human values will prove to be an extremely difficult technical task.
A second source of concern is that a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise. For example, in one scenario, the first-generation computer program found able to broadly match the effectiveness of an AI researcher is able to rewrite its algorithms and double its speed or capabilities in six months. The second-generation program is expected to take three calendar months to perform a similar chunk of work, on average; in practice, doubling its own capabilities may take longer if it experiences a mini-"AI winter", or may be quicker if it undergoes a miniature "AI Spring" where ideas from the previous generation are especially easy to mutate into the next generation. In this scenario the time for each generation continues to shrink, and the system undergoes an unprecedentedly large number of generations of improvement in a short time interval, jumping from subhuman performance in many areas to superhuman performance in all relevant areas.
A second source of concern is that a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise. For example, in one scenario, the first-generation computer program found able to broadly match the effectiveness of an AI researcher is able to rewrite its algorithms and double its speed or capabilities in six months. The second-generation program is expected to take three calendar months to perform a similar chunk of work, on average; in practice, doubling its own capabilities may take longer if it experiences a mini-"AI winter", or may be quicker if it undergoes a miniature "AI Spring" where ideas from the previous generation are especially easy to mutate into the next generation. In this scenario the time for each generation continues to shrink, and the system undergoes an unprecedentedly large number of generations of improvement in a short time interval, jumping from subhuman performance in many areas to superhuman performance in all relevant areas.
While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of technological change already fits this description.
While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of technological change already fits this description.
In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence.
In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence.
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.'' - Good, I. J. (1965)
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.'' - Good, I. J. (1965)
''When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.'' - Nick Bostrom (2002)
''When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.'' - Nick Bostrom (2002)
The Three Laws of Robotics
"Handbook of Robotics, 56th Edition, 2058 A.D.The Three Laws of Robotics
First Law
First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
proposes that all of reality, including the Earth and the universe could be in fact an artificial simulation — for example by quantum computer simulation — indistinguishable from “true” reality. It could contain conscious minds which may or may not be fully aware that they are living inside a simulation. This is a proposed technology so advanced that it would seem realistic enough to convince its inhabitants the simulation was real.
proposes that all of reality, including the Earth and the universe could be in fact an artificial simulation — for example by quantum computer simulation — indistinguishable from “true” reality. It could contain conscious minds which may or may not be fully aware that they are living inside a simulation. This is a proposed technology so advanced that it would seem realistic enough to convince its inhabitants the simulation was real.
Artificial Super Intelligent Entities are theorized to eventually have the power to simulate reality. The motivations and goals of such a simulation may be incomprehensible to humans.
Artificial Super Intelligent Entities are theorized to eventually have the power to simulate reality. The motivations and goals of such a simulation may be incomprehensible to humans.
🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖
🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖
It could contain conscious minds which may or may not be fully aware that they are living inside a simulation
It could contain conscious minds which may or may not be fully aware that they are living inside a simulation
🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖
🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖