Should Artificial Intelligence Be Regulated?

New technologies often spur public anxiety, but the intensity of concern about the implications of advances in artificial intelligence (AI) is particularly noteworthy. Several respected scholars and technology leaders warn that AI is on the path to turning robots into a master class that will subjugate humanity, if not destroy it. Others fear that AI is enabling governments to mass-produce autonomous weapons, "killing machines," that will choose their own targets, including innocent civilians. Renowned economists point out that AI, unlike previous technologies, is destroying many more jobs than it creates, leading to major economic disruptions.

There seems to be widespread agreement that AI growth is accelerating. After waves of hype followed by disappointment, computers have now defeated chess, Jeopardy!, Go, and poker champions. Policymakers and the public are impressed by driverless cars that have already traveled several million miles. Calls from scholars and public intellectuals for imposing government regulations on AI research and development (R&D) are gaining traction.

Although AI developments undoubtedly deserve attention, we must be careful to avoid applying too broad a brush. We agree with the findings of a study panel organized as part of Stanford University's One Hundred Year Study of Artificial Intelligence: "The Study Panel's consensus is that attempts to regulate 'AI' in general would be misguided, since there is no clear definition of AI (it isn't any one thing), and the risks and considerations are very different in different domains."

A popular understanding of AI is that it will enable a computer to think like a person. One well-known definition is: "Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment." The famous Turing test holds that AI is achieved when a person is unable to determine whether a response to a question he or she asked was made by a person or a computer. Others use the term to refer to computers that use algorithms to process large amounts of information, draw conclusions, and learn from their experiences.

AI is believed by some to be on its way to producing intelligent machines that will be far more capable than human beings. After reaching this point of "technological singularity," computers will continue to advance and give birth to rapid technological progress that will result in dramatic and unpredictable changes for humanity. Some observers predict that the singularity could occur as soon as 2030.

One might dismiss these ideas as the provenance of science fiction, were it not for the fact that these concerns are shared by several highly respected scholars and tech leaders. An Oxford University team warned: "Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime)…the intelligence will be driven to construct a world without humans or without meaningful features of human existence. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts." Elon Musk, the founder of Tesla, tweeted: "We need to be super careful with AI."