Source: The Guardian (4/11/19)
China’s hi-tech war on its Muslim minority
Smartphones and the internet gave the Uighurs a sense of their own identity – but now the Chinese state is using technology to strip them of it.
By Darren Byler
In mid-2017, Alim, a Uighur man in his 20s, returned to China from studying abroad. As soon as he landed back in the country, he was pulled off the plane by police officers. He was told his trip abroad meant that he was now under suspicion of being “unsafe”. The police administered what they call a “health check”, which involved collecting several types of biometric data, including DNA, blood type, fingerprints, voice recordings and face scans – a process that all adults in the Uighur autonomous region of Xinjiang, in north-west China, are expected to undergo.
After his “health check”, Alim was transported to one of the hundreds of detention centres that dot north-west China. These centres have become an important part of what Xi Jinping’s government calls the “people’s war on terror”, a campaign launched in 2014, which focuses on Xinjiang, a region with a population of roughly 25 million people, just under half of whom are Uighur Muslims. As part of this campaign, the Chinese government has come to treat almost all expressions of Uighur Islamic faith as signs of potential religious extremism and ethnic separatism. Since 2017 alone, more than 1 million Turkic Muslims, including Uighurs, Kazakhs, Kyrgyz and others, have moved through detention centres.
At the detention centre, Alim was deprived of sleep and food, and subjected to hours of interrogation and verbal abuse. “I was so weakened through this process that at one point during my interrogation I began to laugh hysterically,” he said when we spoke. Other detainees report being placed in stress positions, tortured with electric shocks, and kept in isolation for long periods. When he wasn’t being interrogated, Alim was kept in a tiny cell with 20 other Uighur men.
Many of the detainees had been arrested for having supposedly committed religious and political transgressions through social media apps on their smartphones, which Uighurs are required to produce at checkpoints around Xinjiang. Although there was often no real evidence of a crime according to any legal standard, the digital footprint of unauthorised Islamic practice, or even a connection to someone who had committed one of these vague violations, was enough to land Uighurs in a detention centre. The mere fact of having a family member abroad, or of travelling outside China, as Alim had, often resulted in detention.
Most Uighurs in the detention centres are on their way to serving long prison sentences, or to indefinite captivity in a growing network of internment camps, which the Chinese state has described as facilities for “transformation through education”. These camps, which function as medium-security prisons and, in some cases, forced-labour factories, attempt to train Uighurs to disavow their Islamic identity and embrace the secular principles of the Chinese state. They forbid the use of the Uighur language and instead offer drills in Mandarin, the language of China’s Han majority. Only a handful of detainees who are not Chinese citizens have been fully released from this “re-education” system.
Alim was relatively lucky: he was let out after only two weeks. (He later learned that a relative had intervened in his case.) But a few weeks later, when he went to meet a friend for lunch at a mall in his home city, he had another shock. At a security checkpoint at the entrance to the mall, Alim scanned the photo on his government-issued identification card, and presented himself before a security camera equipped with facial recognition software. An alarm sounded. The security guards let him pass, but within a few minutes he was approached by police officers, who then took him into custody.
Alim learned that he had been placed on a blacklist maintained by the Integrated Joint Operations Platform (Ijop), a regional data system that uses AI to monitor the countless checkpoints in and around Xinjiang’s cities. Any attempt to enter public institutions such as hospitals, banks, parks or shopping centres, or to cross beyond the boundaries of his local police precinct, would trigger the Ijop to alert police. The system had profiled him and predicted that he was a potential terrorist.
There was little Alim could do. Officers told him he should “just stay at home” if he wanted to avoid detention again. Although he was officially free, Alim’s biometrics and his digital history were being used to lock him in place. “I’m so angry and afraid at the same time,” he told me. He was haunted by his data.
China’s version of the “war on terror” depends less on drones and strikes by elite military units than on facial recognition software and machine learning algorithms. Its targets are not foreigners but domestic minority populations who appear to threaten the Chinese Communist party’s authoritarian rule. In Xinjiang, the web of surveillance reaches from cameras on buildings, to the chips inside mobile devices, to Uighurs’ very physiognomy. Face scanners and biometric checkpoints track their movements almost everywhere.
Other programmes scan Uighurs’ digital communications, looking for suspect patterns, and flagging religious speech or even a lack of fervour in using Mandarin. Deep-learning systems search in real time through video feeds capturing millions of faces, building an archive that can supposedly help identify suspicious behaviour in order to predict who will become an “unsafe” actor. Actions that can trigger these “computer vision” technologies include dressing in an Islamic fashion and failing to attend nationalistic flag-raising ceremonies. All of these technological systems are brought together in the Ijop, which is constantly learning from the behaviours of the Uighurs it watches.
In her recent study on the rise of “surveillance capitalism”, the Harvard scholar Shoshana Zuboff notes that consumers are constantly generating valuable data that can be turned into profitable predictions about their preferences and future behaviours. In the Uighur region, this logic has been taken to an extreme. The power – and potential profitability – of the predictive technologies that purport to keep Xinjiang safe derive from their unfettered access to Uighurs’ digital lives and physical movements. From the perspective of China’s security-industrial establishment, the principal purpose of Uighur life is to generate data, which can then be used to further refine these systems of surveillance and control.
Controlling the Uighurs has also become a test case for marketing Chinese technological prowess around the world. A hundred government agencies and companies from two dozen countries, including the US, France, Israel and the Philippines, now participate in the highly influential annual China-Eurasia Security Expo in Urumqi, the capital of the Uighur region. The ethos at the expo, and in the Chinese techno-security industry as a whole, is that Muslim populations need to be managed and made productive. Over the past five years, the people’s war on terror has allowed a number of Chinese tech startups to achieve unprecedented levels of growth. In just the last two years, the state has invested an estimated $7.2bn in techno-security in Xinjiang. As a spokesperson for one of these tech startups put it, 60% of the world’s Muslim-majority nations are part of China’s premier international development project, the Belt and Road Initiative, so there is “unlimited market potential” for the type of population-control technology they are developing in Xinjiang.
Some of the technologies pioneered in Xinjiang have already found customers in authoritarian states as far away as sub-Saharan Africa. In 2018, CloudWalk, a Guangzhou-based tech startup that has received more than $301m in state funding, finalised an agreement with Zimbabwe’s government to build a national “mass facial recognition programme” in order to address “social security issues”. (CloudWalk has not revealed how much the agreement is worth.) Freedom of movement through airports, railways and bus stations throughout Zimbabwe will now be managed through a facial database integrated with other kinds of biometric data. In effect, the Uighur homeland has become an incubator for China’s “terror capitalism”.
There was a time when the internet seemed to promise a brighter future for China’s Uighurs. When I arrived in Urumqi in 2011 to conduct my first year of ethnographic fieldwork, the region had just been wired with 3G mobile data networks. When I returned in 2014, it seemed as though nearly all adults in the city had a smartphone. Suddenly, Uighur cultural figures who the government subsequently labelled “unsafe”, such as the pop star Ablajan, developed followings that numbered in the millions.
Most unsettling, from the perspective of the state, was the deep influence developed by unsanctioned Uighur religious teachers based in China and Turkey. Since Mao’s Religious Reform Movement of 1958, the state had limited Uighurs’ access to mosques, Islamic funerary practices, religious knowledge and other Muslim communities. There were virtually no Islamic schools outside of government control, no imams who were not approved by the state. Children under the age of 18 were forbidden to enter mosques. But as social media spread through the Uighur homeland over the course of the last decade, it opened up a virtual space to explore what it meant to be Muslim. It reinforced a sense that the first sources of Uighur identity were their faith and language, their claim to a native way of life, and their membership in a Turkic Muslim community stretching from Urumqi to Istanbul. Rather than being seen as perpetually lacking Han appearance and culture, they could find in their renewed Turkic and Islamic values a cosmopolitan and contemporary identity. Food, movies, music and clothing, imported from Turkey and Dubai, became markers of distinction. Women began to veil themselves. Men began to pray five times a day. They stopped drinking and smoking. Some began to view music, dancing and state television as influences to be avoided.
The Han officials I met during my fieldwork referred to this rise in technologically disseminated religious piety as the “Talibanisation” of the Uighur population. Along with Han settlers, they felt increasingly unsafe travelling to the region’s Uighur-majority areas, and uneasy in the presence of pious Turkic Muslims. The officials cited incidents that carried the hallmarks of religiously motivated violence – a knife attack carried out by a group of Uighurs at a train station in Kunming; trucks driven by Uighurs through crowds in Beijing and Urumqi – as a sign that the entire Uighur population was falling under the sway of terrorist ideologies.
But, as dangerous as the rise of Uighur social media seemed to Han officials, it also presented them with a new means of control. On 5 July 2009, Uighur high school and college students had used Facebook and Uighur-language blogs to organise a protest demanding justice for Uighur workers who were killed by their Han colleagues at a toy factory in eastern China. Thousands of Uighurs took to the streets of Urumqi, waving Chinese flags and demanding that the government respond to the deaths of their comrades. When they were violently confronted by armed police, many of the Uighurs responded by turning over buses and beating Han bystanders. In the end, more than 190 people were reported killed, most of them Han. Over the weeks that followed, hundreds, perhaps thousands, of young Uighurs were disappeared by the police. The internet was shut off in the region for nearly 10 months, and Facebook and Twitter were blocked across the country.
Soon after the internet came back online in 2010 – with the notable absence of Facebook, Twitter and other non-Chinese social media applications – state security, higher education and private industry began to collaborate on breaking Uighur internet autonomy. Much of the Uighur-language internet was transformed from a virtual free society into a zone where government technology could learn to predict criminal behaviour. Broadly defined new anti-terrorism laws, first drafted in 2014, turned nearly all crimes committed by Uighurs, from stealing a Han neighbour’s sheep to protesting against land seizures, into forms of terrorism. Religious piety, which the new laws referred to as “extremism”, was conflated with religious violence.
The Xinjiang security industry mushroomed from a handful of private firms to approximately 1,400 companies employing tens of thousands of workers, ranging from low-level Uighur security guards to Han camera and telecommunications technicians to coders and designers. The Xi administration declared a state of emergency in the region, the people’s war on terror began, and Islamophobia was institutionalised.
In 2017, after three years of operating a “hard strike” policy that turned Xinjiang into what many considered an open-air prison – which involved instituting a passbook system that restricted Uighurs’ internal travel, and deploying hundreds of thousands of security forces to monitor the families of those who had been disappeared or killed by the state – the government turned to a fresh strategy. A new regional party secretary named Chen Quanguo introduced a policy of “transforming” Uighurs.
Local authorities began to describe the “three evil forces” of “religious extremism, ethnic separatism and violent terrorism” as three interrelated “ideological cancers”. Because the digital sphere had allowed unauthorised forms of Islam to flourish, officials called for AI-enabled technology to crack down on these evils. Party leadership began to incentivise Chinese tech firms to develop technologies that could help the government control Uighur society. Billions of dollars in government contracts were awarded to build “smart” security systems across the Uighur region.
The turn toward “transformation” coincided with breakthroughs in the AI-assisted computer systems that the public security bureau rolled out in 2017 and brought together in the Ijop. The Chinese startup Meiya Pico began to market software to local and regional governments that was developed using state-supported research and could detect Uighur language text and Islamic symbols embedded in images. The company also developed programmes for automating the transcription and translation of Uighur voice messaging. The company Hikvision advertised tools that could automate the identification of Uighur faces based on physiological phenotypes. Other companies devised programmes that would perform automated searches of Uighurs’ internet activity and then compare the data they gleaned to school, job, banking, medical and biometric records, looking for predictors of aberrant behaviour.
The rollout of this new technology required a great deal of manpower and technical training. More than 100,000 new police officers were hired. One of their jobs was to conduct the sort of “health check” Alim underwent, creating biometric records for almost every human being in the region. Face signatures were created by scanning individuals from a variety of angles as they made different facial expressions; the result was a high-definition portfolio of personal emotions. All Uighurs were required to install nanny apps, which monitored everything they said, read and wrote, and everyone they connected with, on their smartphones.
Higher-level police officers, most of whom were Han, were given the job of conducting qualitative assessments of the Muslim population as a whole – providing more complex, interview-based survey data for Ijop’s deep-learning system. In face-to-face interviews, these neighbourhood police officers assessed the more than 14 million Muslim-minority people in Xinjiang and determined if they should be given the rating of “safe”, “average”, or “unsafe”. They determined this by categorising the person using 10 or more categories, including whether or not the person was Uighur, whether they prayed regularly, had an immediate relative living abroad, or had taught their children about Islam in their home. Those who were determined to be “unsafe” were then sent to the detention centres, where they were interrogated and asked to confess their crimes and name others who were also “unsafe”. In this manner, the officers determined which individuals should be slotted for the “transformation through education” internment camps.
Many Muslims who passed their first assessment were subsequently detained because someone else named them as “unsafe”. In thousands of cases, years of WeChat history was used as evidence of the need for Uighur suspects to be “transformed”. The state also assigned an additional 1.1 million Han and Uighur “big brothers and sisters” to conduct week-long assessments of Uighur families as uninvited guests in their homes. Over the course of these stays, the “relatives” tested the “safe” qualities of those Uighurs who remained outside of the camp system by forcing them to participate in activities forbidden by certain forms of Islamic piety, such as drinking, smoking and dancing. They looked for any sign of resentment or any lack of enthusiasm in Chinese patriotic activities. They gave the children candy so that they would tell them the truth about what their parents thought.
All of this information was entered into databases and then fed back into the Ijop. The government’s hope is that the Ijop will, over time, run with less and less human guidance. Even now, it is always running in the background of Uighur life, always learning.
In the tech community in the US, there is some scepticism regarding the viability of AI-assisted computer vision technology in China. Many experts I’ve spoken to from the AI policy world point to an article by the scholar Jathan Sadowski called “Potemkin AI”, which highlights the failures of Chinese security technology to deliver what it promises. They frequently bring up the way a system in Shenzhen meant to identify the faces of jaywalkers and flash them on giant screens next to busy intersections cannot keep up with the faces of all the jaywalkers; as a result, human workers sometimes have to manually gather the data used for public shaming. They point out that Chinese tech firms and government agencies have hired hundreds of thousands of low-paid police officers to monitor internet traffic and watch banks of video monitors. As with the theatre of airport security rituals in the US, many of these experts argue that it is the threat of surveillance, rather than the surveillance itself, that causes people to modify their behaviour.
Yet while there is a good deal of evidence to support this scepticism, a notable rise in the automated detection of internet-based Islamic activity, which has resulted in the detention of hundreds of thousands of Uighurs, also points to the real effects of the implementation of AI-assisted surveillance and policing in Xinjiang. Even western experts at Google and elsewhere admit that Chinese tech companies now lead the world in these computer-vision technologies, due to the way the state funds Chinese companies to collect, use and report on the personal data of hundreds of millions of users across China.
The Han officials I spoke with during my fieldwork in Xinjiang often refused to acknowledge the way disappearances, frequent police shootings of young Uighur men, and state seizures of Uighur land might have motivated earlier periods of Uighur resistance. They did not see correlations between limits on Uighur religious education, restrictions on Uighur travel and widespread job discrimination on the one hand, and the rise in Uighur desires for freedom, justice and religiosity on the other. Because of the crackdown, Han officials have seen a profound diminishment of Islamic belief and political resistance in Uighur social life. They’re proud of the fervour with which Uighurs are learning the “common language” of the country, abandoning Islamic holy days and embracing Han cultural values. From their perspective, the implementation of the new security systems has been a monumental success.
A middle-aged Uighur businessman from Hotan, whom I will call Dawut, told me that, behind the checkpoints, the new security system has hollowed out Uighur communities. The government officials, civil servants and tech workers who have come to build, implement and monitor the system don’t seem to perceive Uighurs’ humanity. The only kind of Uighur life that can be recognised by the state is the one that the computer sees. This makes Uighurs like Dawut feel as though their lives only matter as data – code on a screen, numbers in camps. They have adapted their behaviour, and slowly even their thoughts, to the system.
“Uighurs are alive, but their entire lives are behind walls,” Dawut said softly. “It is like they are ghosts living in another world.”
Some names have been changed. A longer version of this article first appeared in Logic, a new magazine about technology.