artificial intelligence Potential issues raised for *libertarianism by artificial intelligence (AI): human *unemployment; an out-of-human-control superintelligence; and an AI that is a *person with libertarian *rights. Several responses appear relevant.
People have no libertarian *right to employment. However, if they would be reduced to unemployed poverty by AI, then that clashes with the libertarian compatibility thesis (of *liberty and *welfare). But this fear is Luddism (see *labour). The vast increase in the human population, combined with the innumerable inventions that have accompanied it, appears to undermine this as a realistic problem. A life of increasing human leisure with ever-more useful machines is a real prospect but hardly a problem.
The possible danger of a hyperintelligent AI *initiating impositions on humans is a fear of a new technology that matches the fears of various other new technologies in the past. Doubtless, it is prudent to be reasonably cautious, perhaps including having strong failsafe devices. But it is not prudent to be so cautious as to put a brake on possible progress, as the *precautionary principle does.
There seems to be no sound argument for the conjecture that AI is yet, or is ever likely to become, conscious. A Turing test that causes a human interlocutor to think that the AI is a full person is no serious evidence that it is one. How then, it might be asked, do we even know that other human beings are really persons? That is also a conjecture, but one that does withstand criticism (the details of which cannot be briefly explained here). *Critical rationalist epistemology explains how no “supporting justification” is necessary or possible.
What about the hypothesis that apparently natural human beings are all simulations on a computer? Then we would all be artificial intelligences. But this seems to be an unrealistic fantasy of people overly impressed, as people have been before, with the idea that new technology or science is also what explains human beings in some fundamental way. Alternatively, people are overly impressed by mere logical possibility, and maybe also by any “supporting justifications”: analogous versions of this hypothesis have long existed in philosophy. Apparently observable reality is a recalcitrantly consistent and incredibly detailed fit with the materialistic theory of humans and the universe. Unless some reproducible “glitch in the matrix” becomes evident, or some clear and unambiguous way of testing this view is devised, there is no reason to take it more seriously than the infinite number of other logically possible analogous theories.
Libertarianism is a theory about what is, and ought to be implemented as, the liberty-observing way to treat humans who are persons. Non-human persons, including aliens, might be quite different. For instance, suppose an AI could be designed to be a person that is a willing tool of humans. Would it be immoral to design it this way? It is not obvious that it would. This naturally suggests another question: what, then, about altering the genes of human beings so that they are willing slaves (assuming that this could also be done)? But it is an initiated imposition to damage a pre-person human in this way (see *circumcision, infibulation, etc., of children). By contrast, an AI person would not have suffered an inherent initiated imposition: there is usually no prior way that it would otherwise have been that has been changed to a worse state (unless someone designed it, all within his own *property rights, to be an autonomously willed person and then someone else interfered with that).
(This is an entry from A LIBERTARIAN DICTIONARY: Explaining a Philosophical Theory [draft currently being revised]. Asterisks indicate other entries.)