The Chinese Government Claims That Its AI Can Read Minds

In June 2022, Beijing Jiaotong University published research detailing a device allegedly capable of detecting spikes in neural activity to determine whether a person is watching explicit content (via SCMP). Fast forward a few weeks, and there are now reports of yet another dystopian AI-fueled invention coming out of China. According to The Telegraph, the Institute of Artificial Intelligence at the Hefei Comprehensive National Science Center has created an AI program that allegedly analyzes brain waves and facial expressions to gauge whether a person is loyal to the Chinese Communist Party.

Interestingly, according to The Times, the original release and the accompanying research were pulled soon after publication. China is no stranger to strictly enforcing the CCP's ideology and is well known for its stringent surveillance programs, but the swift deletion of the latest research raises the suspicion that it might well be a scare tactic. In the now-removed video, a person could reportedly be seen viewing content about the ruling party on a screen while the AI program collected brain signal data, facial movements, and skin electrical responses.

At the end of the observation, the program returned a score based on metrics such as "emotional identification" and "learning attentiveness" that dictated whether the test subject needed further education or passed the loyalty threshold. The research throws around terms like measuring "the level of acceptance of ideological and political education of the individual party member," quantifying "thought education," and creating "new avenues for party building." This is the latest in a string of controversial technologies deployed in China, including its so-called social credit system.
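The pulled research reportedly never explained how such a score would be computed. Purely as an illustration, a threshold-based system of this kind could be as crude as a weighted average of per-signal metrics compared against a cutoff. In the Python sketch below, the weights, the third signal, and the threshold are all invented for the example; none of them come from the research itself.

```python
# Hypothetical sketch of a threshold-based "loyalty score" of the kind
# the reports describe. All weights, the third metric, and the cutoff
# are invented for illustration; none come from the pulled research.

METRIC_WEIGHTS = {
    "emotional_identification": 0.4,  # metric name from the reports
    "learning_attentiveness": 0.4,    # metric name from the reports
    "skin_electrical_response": 0.2,  # assumed third signal
}

LOYALTY_THRESHOLD = 0.7  # assumed cutoff


def loyalty_score(metrics: dict) -> float:
    """Weighted average of per-signal scores, each assumed to lie in [0, 1]."""
    return sum(weight * metrics.get(name, 0.0)
               for name, weight in METRIC_WEIGHTS.items())


def needs_further_education(metrics: dict) -> bool:
    """Mirrors the reported pass/fail framing: below the cutoff means more 'education'."""
    return loyalty_score(metrics) < LOYALTY_THRESHOLD


if __name__ == "__main__":
    subject = {
        "emotional_identification": 0.8,
        "learning_attentiveness": 0.6,
        "skin_electrical_response": 0.5,
    }
    print(loyalty_score(subject))            # 0.66
    print(needs_further_education(subject))  # True
```

Even this toy version makes the core problem visible: noisy physiological signals get collapsed into a single number, and everything hinges on weights and a threshold that someone chose.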

Another dystopian invention or merely a scare tactic?

As for the objective behind the AI program's creation, the research reportedly notes that it aims to "encourage our members to further feel gratitude for the party, listen to the party, and solidify their resolve to follow the party." However, there are a few critical questions here. For one, how can brain waves reliably measure the level of a person's loyalty?

In a world where methods to cheat a polygraph test are an open secret, the latest AI innovation coming out of China remains an enigma whose source material has since vanished. Another question is whether facial expressions are enough to read an emotion as complex as loyalty simply from watching a person's face as they view party propaganda. As doubtful (and scary) as it may sound, this type of technology has already popped up in the United States.

In April 2022, Protocol reported that Zoom was working on an "emotional AI" to gauge a person's emotional response to a product or topic using techniques such as computer vision, facial recognition, speech pattern analysis, and natural-language processing. In response to the revelation, Fight for the Future and 27 human rights organizations urged Zoom to pause work on the project. However, the efficacy of such systems remains hotly debated. For example, the Emojify Project, an effort led by a University of Cambridge professor, showcased the limitations of AI-based software in accurately reading human emotions from facial expressions.
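Zoom never published the system's design, so as a purely hypothetical sketch of how a multimodal "emotional AI" might work, consider fusion across channels: each technique Protocol lists produces its own emotion estimate, and the system averages them. The modality names, emotion labels, and averaging scheme below are assumptions for illustration, and the example is chosen to show why critics doubt the approach.

```python
# Hypothetical sketch of multimodal emotion fusion. The modalities,
# emotion labels, and simple averaging are assumptions for illustration;
# Protocol's report names the techniques involved, not the architecture.

EMOTIONS = ("positive", "neutral", "negative")


def fuse(modality_scores: dict) -> dict:
    """Average per-modality probability distributions over emotion labels."""
    fused = {label: 0.0 for label in EMOTIONS}
    for scores in modality_scores.values():
        for label in EMOTIONS:
            fused[label] += scores.get(label, 0.0) / len(modality_scores)
    return fused


if __name__ == "__main__":
    # Channels that disagree illustrate the reliability problem:
    # the face reads "positive" while the voice reads "negative".
    readings = {
        "facial_expression": {"positive": 0.7, "neutral": 0.2, "negative": 0.1},
        "speech_pattern":    {"positive": 0.1, "neutral": 0.2, "negative": 0.7},
        "language":          {"positive": 0.3, "neutral": 0.5, "negative": 0.2},
    }
    fused = fuse(readings)
    print(max(fused, key=fused.get), fused)  # near-uniform: no clear verdict
```

When the channels conflict, the fused distribution comes out close to uniform, which is exactly the sort of ambiguity the Emojify Project used to demonstrate how poorly facial cues map onto inner states.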