Musk and Thousands of AI Experts Jointly Endorse 23 Principles for Artificial Intelligence

Artificial intelligence is an amazing technology that is changing the world in incredible ways. If you have seen the movie "Terminator", you also know that advanced artificial intelligence carries certain dangers.

For this reason, Elon Musk and Stephen Hawking joined hundreds of other researchers, technology leaders, and scientists in endorsing 23 basic principles that artificial intelligence development should follow in the areas of research, ethics, and safety.

The Asilomar AI Principles were officially released after dozens of experts convened by the Future of Life Institute attended the Beneficial AI 2017 conference. These experts, including roboticists, physicists, economists, and philosophers, held heated debates about the impact of artificial intelligence on safety, the economy, and ethics. For a principle to be included in the final list, it had to be approved by at least 90% of the participating experts.

“We have drafted 23 principles, ranging from research strategies to data rights to possible future issues such as super-intelligence, and the experts who endorse these principles have signed their names to them,” the Future of Life Institute wrote on its website. “These principles are by no means comprehensive, and they are clearly open to differing interpretations, but they also highlight a problem: the current 'default' behavior around many relevant issues could violate principles that most participants agree are important to uphold.”

A total of 892 artificial intelligence or robotics researchers and 1,445 other experts have signed the principles, including Tesla CEO Elon Musk and the renowned physicist Stephen Hawking.

Some of these principles (such as transparency and the sharing of research results) may be unlikely to be fully realized. Even so, the 23 principles can still improve the process of artificial intelligence development, helping to ensure that the technology remains ethical and is not turned to harmful ends.

The following is the full text of the 23 Asilomar AI Principles:

1. Research objectives:

The goal of artificial intelligence research should be to create not undirected intelligence, but beneficial intelligence.

2. Research funds:

Investment in artificial intelligence should be accompanied by funding for research on ensuring its beneficial use, addressing difficult questions in computer science, economics, law, ethics, and social studies, such as:

——How can we make future artificial intelligence systems robust, so that they do what we want without malfunctioning or being hacked?

——How can we grow prosperity through automation while preserving people's resources and sense of purpose?

——How can we update our legal systems to be more fair and efficient, to keep pace with the development of artificial intelligence, and to manage the risks associated with it?

——What set of values should artificial intelligence be aligned with, and what legal and moral status should it have?

3. Science and policy links:

There should be constructive and healthy exchange between artificial intelligence researchers and policy-makers.

4. Research culture:

A culture of cooperation, trust, and transparency should be fostered among artificial intelligence researchers and developers.

5. Race avoidance:

Teams developing artificial intelligence systems should actively cooperate to avoid cutting corners on safety standards.

6. Safety:

Artificial intelligence systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7. Fault transparency:

If an artificial intelligence system causes harm, it should be possible to ascertain why.

8. Judicial transparency:

Any involvement of an autonomous system in judicial decision-making should provide a satisfactory explanation that can be audited by a competent human authority.

9. Responsibility:

Designers and builders of advanced artificial intelligence systems are stakeholders in the moral implications of their use, misuse, and actions, and they have a responsibility and opportunity to shape those implications.

10. Value alignment:

Highly autonomous artificial intelligence systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11. Human values:

Artificial intelligence systems should be designed and operated so as to be compatible with the ideals of human dignity, rights, freedoms, and cultural diversity.

12. Personal privacy:

People should have the right to access, manage, and control the data they generate, given the power of artificial intelligence systems to analyze and utilize that data.

13. Freedom and privacy:

The application of artificial intelligence to personal data must not unreasonably curtail people's real or perceived liberty.

14. Sharing benefits:

Artificial intelligence technologies should benefit and empower as many people as possible.

15. Sharing prosperity:

The economic prosperity created by artificial intelligence should be widely shared and benefit all mankind.

16. Human control:

Humans should choose how and whether to delegate decisions to artificial intelligence systems, in order to accomplish human-chosen objectives.

17. Non-subversion:

The power gained through the control of highly advanced artificial intelligence systems should respect and promote the social and civic processes that a healthy society depends on, rather than destroying these processes.

18. Artificial Intelligence Arms Race:

An arms race in lethal autonomous weapons should be avoided.

19. Capability caution:

Since there is no consensus, we should avoid making strong assumptions about the upper limits of future artificial intelligence capabilities.

20. Importance:

Advanced artificial intelligence could represent a profound change in the history of life on Earth, and it should be planned for and managed with commensurate care and resources.

21. Risk:

Risks posed by artificial intelligence systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22. Recursive self-improvement:

Artificial intelligence systems designed to recursively self-improve or self-replicate in ways that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23. Common good:

Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than of any one country or organization.
