With all the challenges and risks associated with artificial intelligence, intergovernmental cooperation, a people-centered approach, and ethical considerations will be key to ensuring responsible AI development, experts said during a panel discussion on AI governance on Thursday.
The discussion was part of the Boao Forum for Asia, held in Boao, Hainan province, which will conclude on Friday.
Speakers at the session discussed the importance of AI governance and regulations, with a focus on balancing the benefits and challenges of the technology, and emphasized the need for evidence-based decision-making and empowering marginalized communities.
Mathias Cormann, secretary-general of the Organisation for Economic Co-operation and Development, said the organization is approaching the issue of AI with a focus on increasing economic and social well-being through public policy.
"We have focused on AI as an important area of policy for some time," he said. Inevitably, there will be conversations within countries between policymakers and experts, but dialogue on an international level is also required, he said.
"There will be different levels at which conversations will take place," he said. "But ultimately, we all want to achieve the same thing. We want to ensure that we can seize and receive all of the benefits of this exciting technology in a way that is safe, responsible, ethical."
An intergovernmental standard for developing AI in an innovative and trustworthy way, agreed upon in 2019, is currently being updated, he added.
Lee Kyoung-mu of Seoul National University said AI governance refers to regulations, guidelines or protocols for AI products, but it should also include guidelines for the users who harness the power of AI.
"We need to make some specific regulations in developing and deploying the product, but we need to think about the uses of those machines," said Lee, who is also the editor-in-chief of IEEE Transactions on Pattern Analysis and Machine Intelligence. "Users may use the functions of AI in malicious ways."
Lu Jinghui, chief security officer at Vivo, said that, in terms of using AI innovations, they are "doing the right thing" and "doing things in the right ways".
For a smartphone manufacturer, using AI to improve user experience is the right thing, but at the same time, the company wants to use AI governance to make sure innovations serve the interests of users, he noted.
Christopher Thomas, a nonresident senior fellow at the Brookings Institution, said there is an inherent concentration of power in the AI industry, and the reason computing is controlled by a small group of entities is that it is extraordinarily expensive.
"We need to find a way for every country to be able to build their own computing clusters and their own capabilities."
Cormann said people need to focus on appropriate governance arrangements that enable the safe, responsible, ethical and trustworthy development of AI. They also need to be aware of its very significant disruption to the labor market, he said, such as ensuring that people are not left behind and that everyone has fair opportunities in a labor market reshaped by AI.