In this paper, I will discuss the topic of superintelligence. I will assess the main arguments for thinking that superintelligence will eventually arise and evaluate how strong they are. I will argue that superintelligence will be achieved, and likely within the century. Next, I will consider the potential benefits and downsides of reaching superintelligence and determine whether it would be, on balance, a positive or negative development. Taking a transhumanist/posthumanist stance, I will conclude that superintelligence should be considered a good thing, provided that we take certain precautions. I will be referring to the book AI Ethics by Mark Coeckelbergh, David Chalmers’s essay The Singularity: A Philosophical Analysis, and class discussions throughout the paper.
Before I begin, it is necessary to define “superintelligence”. According to Coeckelbergh, superintelligence is the idea that “machines will surpass human intelligence” (Coeckelbergh, 11). In other words, artificial intelligence would have capabilities beyond those of our most intelligent human beings. There are two main ways that superintelligence could arise. In the first, which I will call option A, artificial intelligence develops through recursive self-improvement. In simpler terms, once an AI is created, it would be able to design an improved version of itself; call this version X. Version X then constructs a smarter version of itself (call this version Y), and the cycle repeats. In the second, which I will call option B, a human brain is scanned, uploaded, and then reproduced within and through AI (Coeckelbergh, 12).
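To make the shape of option A concrete, here is a toy sketch in Python. Everything in it is hypothetical: the improve step, the 10% gain per generation, and the one-dimensional “intelligence” scale are illustrative assumptions of mine, not a mechanism proposed by Coeckelbergh or Chalmers.

# Toy illustration of option A (recursive self-improvement).
# The improve() step and the numeric "intelligence" scale are hypothetical;
# this shows only the shape of the loop, not a real mechanism.

def improve(intelligence: float) -> float:
    """Hypothetical step: each version designs a slightly smarter successor."""
    return intelligence * 1.1  # assumed 10% gain per generation

def run_option_a(start: float, generations: int) -> float:
    """Version X builds version Y, Y builds the next, and the cycle repeats."""
    level = start
    for _ in range(generations):
        level = improve(level)
    return level

# Starting at human level (1.0 on this toy scale), 30 generations of
# 10% gains yield a system roughly 17 times "smarter" than its ancestor.
print(f"Level after 30 generations: {run_option_a(1.0, 30):.1f}")

The point is only that small, repeated self-improvements compound, so the sequence of versions can in principle climb far past its starting point.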
I would like to note that Chalmers seems to use “singularity” and “superintelligence” somewhat interchangeably. Therefore, I will be drawing on Chalmers’s essay on the singularity to talk about superintelligence. Coeckelbergh, on the other hand, presents the singularity as the moment at which human intelligence becomes one with AI, which is still a form of superintelligence. As stated on page 13 of Coeckelbergh’s AI Ethics, some think that the singularity will be reached by the year 2045. The idea that we will achieve superintelligence within the century therefore appears to be quite realistic.
I will now consider some arguments for why superintelligence will arise soon. Firstly, technology is constantly evolving and surprising us. Machines were originally given only simple if-then commands and could carry out only simple tasks. Today, AI is capable of sophisticated tasks such as creating art, identifying potential criminals, and identifying one’s sexuality. As we discussed in class, if we gave an AI a simple goal and turned it loose, equipped with a good machine learning algorithm, it might find ways of pursuing that goal that did not occur to its creators. Perhaps a machine could outsmart humans in this way, and this is how superintelligence would be achieved. According to Chalmers, this kind of superintelligence becomes increasingly likely if we take advantage of the speed explosion: faster processing leads to faster designers and an ever faster design cycle. Simply put, a machine that is smarter and faster than a human can design its successor more quickly than a human could, and can make that successor smarter than any human designer could. Assuming this cycle continues, there will be a speed explosion that could lead to an intelligence explosion resulting in superintelligence (Chalmers, 2). Moreover, according to Chalmers, if we consider the human brain to be a machine, then we will eventually have the capacity to emulate this machine. If we can achieve this, we have human-level AI. And if we have human-level AI, then absent defeaters (I will touch on defeaters in a moment) we will reach superintelligence relatively soon. Again, if we achieve any sort of AI at the human level, that AI could potentially create a stronger AI than any human could, owing to its faster processing and design speeds. It appears we already have AI close to the human level, so we are not far off.
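To illustrate the speed-explosion reasoning with a rough calculation (the halving assumption below is my own simplification for illustration, not a figure from Chalmers): suppose the first self-designed generation takes time $t_0$ to build, and each subsequent generation, running on faster hardware, completes its design cycle in half the time of the one before. Then the total time for arbitrarily many generations is bounded:

\[
t_0 + \frac{t_0}{2} + \frac{t_0}{4} + \cdots \;=\; \sum_{n=0}^{\infty} \frac{t_0}{2^n} \;=\; 2t_0 .
\]

Under this assumption, an unbounded number of design cycles would fit inside a finite window of length $2t_0$, which is the sense in which faster designers produce an ever faster, and ultimately explosive, design cycle.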
While it may seem evident that we will reach superintelligence within the century – considering we are constantly making technological advancements – Chalmers makes a strong counterargument to this point. We will reach the singularity only if there are no defeaters. According to Chalmers, defeaters can be “anything that prevents intelligent systems (human or artificial) from manifesting their capacities to create intelligent systems” (Chalmers, 7). Some potential defeaters, specifically structural obstacles to superintelligence, include limits in intelligence space, failure of takeoff, diminishing returns, correlation obstacles, and manifestation obstacles (Chalmers, 19-20). That being said, some of these obstacles appear more likely than others. For example, it is unlikely we will reach the upper limit of what is physically possible (limits in intelligence space). In many ways, we are close to reaching superintelligence without having exhausted our physical resources, so I presume we would not hit the upper limit of physics by taking the next step for AI. The other obstacles Chalmers presents seem more plausible, and manifestation obstacles in particular seem significant. Perhaps humans will become disinclined to create higher-level AI after seeing some of the negative effects of weak AI. As we discussed in class, we are already experiencing those negative effects: weak AI has already raised questions about privacy, threats to human superiority, and resource access and distribution. While this is a solid counterargument to the claim that we will achieve superintelligence, I still do not think the current negative consequences of AI will be enough to deter all humans from attempting to achieve it. Humans are constantly pushing boundaries, especially those related to technology. Moreover, because of the pressure to continue making technological advances, I believe researchers will be able to achieve superintelligence soon, and likely within the century.
Now that I have established that we are not far off from superintelligence, I would like to assess its potential risks and benefits. First, I will consider the potential benefits. According to both Chalmers and Coeckelbergh, potential benefits of superintelligence include cures for diseases, scientific advancements, and increased productivity and efficiency, among other things (Chalmers, 4). From a transhumanist and/or posthumanist stance, making these kinds of technological advances and achieving superintelligence is simply the next milestone. While there are unknowns associated with achieving superintelligence, it is a pursuit worth exploring, and the returns may very well be rewarding. On the other hand, there are of course disadvantages and risks that come with superintelligence. For example, if machines become more powerful than humans, there is a chance that humans will die off, that machines will destroy the planet, and/or that machines will act immorally (Chalmers, 4). I will also point out that humans have a tendency to ignore the potential risks of their projects, especially technological feats. Without proper policy and regulation, I anticipate that superintelligence will be a negative force. However, because we have not yet reached superintelligence, and because we have some experience with weak AI, it is still possible to take precautions before we reach the point of no return. Drawing on the knowledge we do have about current AI, we can at least create a foundation of policies for stronger AI. Referring back to the posthumanist and postphenomenological stance discussed by Coeckelbergh, we should recognize that technology has always been and will always be incorporated into our lives as human beings. Rather than shying away from superintelligence, we should direct our energy toward achieving common goals alongside AI instead of viewing it as a competitor.
Overall, superintelligence will be a good thing for humans if the proper safety and ethical measures are taken. All things considered, it appears we will reach superintelligence within the century, absent any major defeaters. If we do reach superintelligence, we must take precautions to minimize the risks. We can do so by creating effective policies regarding what artificial intelligence should be allowed to do and by taking other preventative measures. The current effects of weak AI can point us in the right direction regarding the potential effects of superintelligence and how to mitigate future dangers. If we take the proper steps, superintelligence will benefit humans, and we will likely experience those benefits soon.