Risks & Benefits Of Artificial Intelligence


Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (acquiring information and rules for using that information), reasoning (applying rules to reach approximate or definite conclusions) and self-correction.

WHAT IS AI?

From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. Although science fiction often portrays AI as robots with human-like features, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), because it is designed to perform a narrow task (e.g., only facial recognition, only internet search, or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at its specific task, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

WHY RESEARCH AI SAFETY?

In the near term, the goal of keeping AI's impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

In the long term, the important question is what happens if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As I.J. Good pointed out in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion that leaves human intellect far behind.

By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease and poverty, so the creation of strong AI could be the greatest event in human history. Some experts have expressed concern, however, that it might also be the last, unless we learn to align the AI's goals with our own before it becomes superintelligent.

There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to cause great harm, whether intentionally or unintentionally. We believe that research today will help us prepare for and prevent such potentially negative consequences in the future, so that we can enjoy the benefits of AI while avoiding its pitfalls.

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates and many other tech heavyweights recently expressed concern in the media about the risks posed by artificial intelligence, joining many leading AI researchers in an open letter. Why is this topic suddenly in the headlines?

The idea that the quest for strong AI would eventually succeed was long regarded as science fiction, centuries or more away. But thanks to recent breakthroughs, many AI milestones that experts considered decades away only five years ago have now been reached, leading many experts to take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it will happen before 2060. Since the required safety research may take decades to complete, it is prudent to start it now.

Because AI has the potential to become more intelligent than any human, we have no surefire way to predict how it will behave. We cannot use past technological developments as a basis, because we have never created anything with the ability, wittingly or unwittingly, to outsmart us. The best example of what we could face may be our own evolution. People control the planet now, not because we are the strongest, the fastest or the biggest, but because we are the smartest. If we are no longer the smartest, can we be confident we will remain in control?

FLI's position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology, FLI's stance is that the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.
