The Chinese government has blocked its citizens’ access to ChatGPT, one of the most popular artificial intelligence applications, over concerns that it could spread what Beijing regards as “disinformation” reflecting the US view of China.
ChatGPT is not officially available in China, but some users have found workarounds, such as VPN servers and third-party programs from independent developers that allow it to run in the country. According to Nikkei Asia, Chinese authorities have told major Chinese tech companies, including Tencent, which owns WeChat, to cut off access to the software.
The Chinese government also requires tech companies that want to develop artificial intelligence applications to consult government agencies before launching any new product. The move comes amid growing concern in Beijing about responses the government cannot censor, particularly to user questions about human rights abuses in China, such as those involving Uighur Muslims.
The targeting of ChatGPT is part of a recent wave of Chinese legislation to monitor the use of AI applications. In 2019, the Chinese government introduced legislation requiring companies to submit AI technologies to the government for approval before they can be sold or exported. The move is seen as part of the government’s plan to assert more control over China’s rapidly growing AI industry.
AI and chatbot applications rely on natural language processing to produce responses to users’ queries, and they derive their information from the data they were trained on. In the case of ChatGPT, the application was trained on a large set of text data from the Internet, including news articles, social media posts, and other online content. This means the responses the app generates may reflect biases present in that source material.
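The point that a model can only echo patterns in its training data can be illustrated with a deliberately tiny sketch. The bigram model below is a toy, not OpenAI’s actual system: it learns which word follows which in a sample corpus and generates text by sampling those learned transitions, so whatever slant the corpus carries, the output carries too.

```python
import random
from collections import defaultdict

# Toy bigram language model (illustrative only, nothing like GPT's scale):
# it can only ever emit word sequences whose patterns appear in its
# training text -- including any bias that text contains.
def train_bigrams(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=5, seed=0):
    """Generate text by repeatedly sampling a learned next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the model learns from the data the model sees"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every word the generator can produce comes from the training corpus; large language models generalize far more, but the dependence on source data is the same in kind.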
In recent years, there have been concerns about the use of AI applications and bots to spread misinformation and political propaganda. In 2019, a group of researchers from the University of Washington and the Allen Institute for Artificial Intelligence created a text-generating application based on the GPT-2 model, and the application was able to produce highly convincing news articles. The researchers decided not to release the full version of the app due to concerns that it could be used to spread false information.
Neither OpenAI nor Tencent, which has a partnership with OpenAI to develop AI technologies, has commented on the Chinese government’s move.
The measure is expected to worry companies using artificial intelligence applications in China, which may face further restrictions in the future. Interest in artificial intelligence and its applications continues to grow worldwide, and a number of algorithms similar to ChatGPT have been produced in recent years.
As the use of this technology spreads across industries, Chinese authorities are likely to impose further restrictions on AI applications and chatbots in the near future, which would significantly affect the industry in the Chinese market.