As an important driving force of a new round of industrial transformation, artificial intelligence is exerting a profound influence on economic and social development and on human life, while also bringing challenges in many areas. At the ninth collective study session of the Political Bureau of the CPC Central Committee on October 31, General Secretary Xi Jinping stressed the need to integrate multi-disciplinary forces, strengthen research on the legal, ethical and social issues related to artificial intelligence, and establish and improve the laws, regulations, institutions and ethical norms that safeguard the healthy development of artificial intelligence.
Artificial intelligence is not only a technological problem but also a governance problem. Driverless technology is a case in point: a number of accidents have occurred around the world, some causing casualties, yet the law contains no clear provisions on such cases. When people cede part of their decision-making power to algorithms, accountability itself becomes a problem. In this sense, technological progress is a double-edged sword: it can benefit mankind, but in the absence of norms and constraints it may damage the public interest. To deal with the new legal and security issues raised by artificial intelligence, we need to improve governance and keep technological innovation on an institutional track.
In fact, there is still no global consensus on how to regulate artificial intelligence. In many countries, legislation on autonomous driving is under discussion and opinions are divided. In China, too, there has been a flurry of debate over whether driverless cars may be put on the road and whether they comply with road traffic safety laws. Even the EU's newly released General Data Protection Regulation has failed to respond to the privacy and data-protection risks of artificial intelligence as fully as the public expected. Whenever a new technology emerges, there is discussion about whether the old governance rules apply, whether they need to be updated, and whether new rules must be written. In the face of rapidly evolving artificial intelligence technology and the problems it brings, finding the best legal solutions and building consensus for the future is a formidable challenge.
To improve the laws and regulations related to artificial intelligence, the key is to clarify the principle of liability: when a problem occurs, which part of the responsibility lies with a person, and which with the algorithm? Attributing liability ultimately requires evaluating the algorithm, but the algorithm itself is not transparent, so how can it be evaluated? There are three possible approaches: avoid algorithms altogether, make algorithms transparent, or audit the algorithm's output. Avoiding algorithms is impractical, because few tools other than algorithms can handle such large amounts of data. Algorithmic transparency is difficult to operationalize, because it would require ordinary people to understand the algorithm. Auditing the output is therefore the best scheme at present. Its key point is that, regardless of the algorithm's internal working mechanism, the algorithm is evaluated only by the impartiality of its results. Under this scheme, supervision costs are lower, operability is stronger, and the principle of liability is clearer, which gives it practical significance for legislation.
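The output-audit scheme described above can be illustrated with a minimal sketch: the regulator treats the algorithm as a black box and checks only whether its decisions are statistically impartial across groups. This is a hypothetical illustration, not the article's actual proposal; the function name, group labels and the 0.2 disparity threshold are all assumptions made for the example.

```python
def audit_outputs(decisions, groups, max_disparity=0.2):
    """Black-box audit: compare approval rates across groups and flag the
    algorithm if the gap exceeds a chosen threshold.

    decisions: list of 0/1 outcomes produced by the (opaque) algorithm.
    groups:    list of group labels, one per decision.
    """
    counts = {}  # group -> (total decisions, approvals)
    for d, g in zip(decisions, groups):
        total, approved = counts.get(g, (0, 0))
        counts[g] = (total + 1, approved + d)
    approval = {g: a / t for g, (t, a) in counts.items()}
    gap = max(approval.values()) - min(approval.values())
    return {"approval_rates": approval, "gap": gap, "fair": gap <= max_disparity}

# Example: an opaque model approves 8 of 10 applicants from group A
# but only 3 of 10 from group B; the audit flags the disparity.
report = audit_outputs(
    decisions=[1] * 8 + [0] * 2 + [1] * 3 + [0] * 7,
    groups=["A"] * 10 + ["B"] * 10,
)
```

The point of the design is exactly what the paragraph argues: the auditor never inspects the model's internals, only its results, so supervision is cheap and the standard of judgment is explicit.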
In the face of rapid scientific and technological development, we are constantly exploring legal solutions and governance models for artificial intelligence, and are committed to forming a set of practical and effective methods. For example, relevant departments should give new applications and new experiments in the field of artificial intelligence sufficient room to innovate, but necessary supervision is also indispensable. Once a safety problem or risk is identified, the regulatory authorities should intervene in a timely manner, or even launch a multi-department joint investigation, to address the hidden danger and solve the problem. This model both protects innovation and helps prevent systemic risks, providing a guarantee for the healthy development of artificial intelligence.
The governance of artificial intelligence is a major challenge today, and countries and international organizations around the world are engaged in it, making it part of global governance. A country absent from this process may find itself in a passive position in the new round of rule-making. Today, China stands at the forefront of the world's technological exploration in artificial intelligence, and in some areas has even entered "no man's land". Innovation comes first, and governance must keep pace, so that we can fully enjoy the dividends of innovation and seize the initiative for future development.
The author, Li Hui, is an associate researcher at the Shanghai Institute of Science. This article was published in People's Daily (November 1, 2018).