This article is produced by NetEase Smart Studio (WeChat public account: smartman163), covering AI and the next big technological era.
It is little known that the famous science fiction writer Isaac Asimov did not first introduce his famous "Three Laws of Robotics" in the collection "I, Robot".
In fact, the Three Laws first appeared in the 1942 short story "Runaround". They read:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm;
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law;
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Seventy-five years have passed, and we are still exploring the future through Asimov's fiction. His Three Laws seem aimed at "strong artificial intelligence" — the kind embodied by HAL 9000 in 2001: A Space Odyssey — but they do not apply to Amazon's new artificial intelligence, the Echo.
Strong artificial intelligence mimics the human brain, like an ever-evolving child, until it becomes conscious and can handle any problem you throw at it, just as humans do. There is still a long way to go before that goal is reached; what we are dealing with today is narrow artificial intelligence, in which an algorithm handles a limited set of tasks. It recognizes faces, understands the question "what will the weather be tomorrow", or tries to predict whether you should give someone a loan.
However, making rules even for this kind of artificial intelligence is quite difficult, says Jonathan M. Smith, a member of the Association for Computing Machinery and a professor of computer science at the University of Pennsylvania. Even at the current level of weak artificial intelligence, he says, there is a large number of ethical issues that still need to be resolved. "The short-term issues are very important because artificial intelligence sits at the boundary of technology and policy," he said. "You certainly don't want someone to use artificial intelligence to exploit legal loopholes, or to evade or divert the social and political decisions we are making."
Whether real or imagined, artificial intelligence raises some thorny ethical problems. One of them is the famous trolley problem, a dilemma reminiscent of the film "Sophie's Choice": a trolley is barreling toward two groups of people. If you do nothing, it will pass through and kill five people. If you actively pull a lever and change the trolley's trajectory, it will crush one person on another track. You have to choose.
Critics of artificial intelligence often apply this dilemma to self-driving cars. A child runs into the road and the car has no time to stop, but the autonomous system can choose to swerve and hit an elderly person instead. What should the AI do, and who gets to make that decision?
There are many variations on this theme, and MIT has even collected some of them into an online game. A classic rebuttal is that a self-driving car would not be speeding near a school, so such a scenario is unlikely to arise.
Utilitarians may argue that by eliminating drivers who are distracted, drunk, or tired, the number of road deaths worldwide will fall, meaning that society as a whole wins even if one person dies. You might point out that since a human in the same scenario might also choose to kill one of them, why do we need to have this conversation at all?
Yasemin Erden, a senior lecturer in philosophy at Queen Mary University, has an answer to that. She spends a great deal of time considering ethics and computing, including artificial intelligence and behavioral simulation, and she points out that a decision made in advance, before the event occurs, reveals a person's moral orientation and invites the judgment of others, whereas a decision made on the spot does not.
She said: "Programming ethical judgments into automated driving means the risks are known in advance, which means the public may be less willing to regard what happens as an accident." In other words, as long as you were driving responsibly, you can be forgiven for hitting someone — "the person suddenly jumped out" — and your actions can be excused. But an artificial intelligence algorithm is not granted that moral leeway. If computers are faster and more predictable than humans in some situations, then how they are programmed matters.