What the Facebook Hearings Mean For Surveillance, Data Privacy, & Islamophobia

On April 10th and 11th of this year, Facebook CEO Mark Zuckerberg was summoned before Congress to answer questions about how Cambridge Analytica appropriated millions of users’ data from his company to develop “psychographic profiles” that were used to influence voter behavior in the 2016 presidential election. In his highly anticipated testimony, Zuckerberg acknowledged that mistakes were made in allowing a third party to misuse the data, and he testified that the company was working on various tools to prevent Facebook from being used as a platform for fake news, foreign interference, and hate speech. He admitted that his company did not do enough to protect users’ privacy and the ecosystem that is Facebook. Furthermore, Zuckerberg agreed that government regulation was inevitable, but he stressed the need for a balanced approach.

Issues of data privacy and surveillance, and the ways the tech industry’s business is built on the monetization of data, are concerns for everyone using social media. But privacy violations are also a distinctive discursive terrain and political experience particularly relevant to Arab and Muslim Americans in a post-9/11 security state, one intensified by this country’s War on Terror. Arab and Muslim Americans tend to be the most vulnerable to changing political winds, as evidenced by our current administration’s xenophobic whims and Islamophobic tantrums. As Peter Waldman, Lizette Chapman, and Jordan Robertson write in “Palantir Knows Everything About You,” their article on Palantir Technologies and the Los Angeles Police Department, “When whole communities are algorithmically scraped for pre-crime suspects [then] data is destiny.” This is not merely Islamophobia as a fear of others that can be rationally engaged, deliberated, deconstructed, or decolonized. This is Islamophobia at the level of data: data that has been collected, categorized, aggregated, sorted into massive databases, exchanged and/or sold to other agencies, and used for any number of purposes in our information economy.

The two-day hearings featuring Zuckerberg were a closely watched political affair, given what the scandal revealed about Cambridge Analytica’s methods and how they were used in a consequential election. Many leaders in the tech industry worried that Zuckerberg’s testimony might lead to restrictions on the “Wild West” culture of innovation and the multi-billion-dollar industry that is the signature feature of Silicon Valley. But as the first hearing concluded, many Congressional leaders appeared too gentle in their interrogations, allaying the tech industry’s fears. Some demonstrated a profound lack of understanding of technology and social media; some simply did not understand how the Internet worked.

However, as one especially perceptive Senator pointed out, the privacy violations raised by Cambridge Analytica’s misuse of Facebook users’ data were reminiscent of the U.S. government’s surveillance efforts of the early 2000s under a program called Total Information Awareness. The Senator also pulled in a recent controversy involving Palantir Technologies, a company that specializes in complex data mining, and its purported work on a “Muslim Registry.” Although Palantir has a “strict policy” that explicitly forbids employees from working on political campaigns, a former Cambridge Analytica engineer testified before the British Parliament that a Palantir employee did work with the group to develop its “psychographic profiles” from Facebook accounts. Zuckerberg testified that Facebook employees worked with all campaigns during the presidential election, but he was unsure about the extent of any collaboration with Cambridge Analytica or Palantir. Nevertheless, the connections among Palantir, Cambridge Analytica, and Facebook are difficult to ignore, and they form an important political backdrop to the Senator’s questions.

The second day’s questioning was more aggressive, as some members of the House raised genuine concerns about censorship and political bias in Facebook’s “artificial intelligence” programs, which were employed to routinely censor explicit material. But Zuckerberg was prepared: he agreed on the importance of working with political leaders on improved regulation, admitted to making mistakes, and stayed focused on his scripted talking points.

(For the full Senate hearing, see https://youtu.be/6ValJMOpt7s. For a short clip of Senator Maria Cantwell’s Q&A, see “Outfoxing the foxes?”)

Of the two days of hearings, what caught my attention were the questions raised by Senator Maria Cantwell (D-Washington) on the first day. Cantwell began by asking whether Zuckerberg knew of Palantir Technologies, a tech company based in Palo Alto, CA, that specializes in complex data mining. Zuckerberg briefly looked away, then back at the Senator, as he responded, “I do.” Cantwell paused and described Palantir as “Stanford Analytica.”

“Do you agree?” the Senator asked.

Zuckerberg paused briefly, as if contemplating a response, and then said, “Senator, I have not heard that.” His eyes immediately turned downward after he answered.

Senator Cantwell quickly followed up with a retort: “Do you think Palantir taught Cambridge Analytica, as press reports are saying, how to do these tactics?”

“Senator, I don’t know,” Zuckerberg promptly said.

“Do you think Palantir ever scraped data from Facebook?” she then asked.

“Senator, I am not aware of that,” he said.

Then the Q&A turned to an unexpected historical topic. Senator Cantwell asked Zuckerberg whether he was aware of the Total Information Awareness program, to which Zuckerberg responded that he was not.

At the 1:55 mark of the clip, the Senator pressed on, “Total Information Awareness was, 2003, John Ashcroft and others were trying to do similar things to what I think is behind all of this – geopolitical forces trying to get data and information to influence a process. So when I look at Palantir and what they’re doing, and I look at WhatsApp, which is another acquisition, and I look at where you are from the 2011 consent decree and where you are today, I am thinking, ‘Is this guy outfoxing the foxes or is he going along with what is a major trend in the information age to try to harvest information for political forces?’ And so my question to you is, do you see that those applications, that those companies – Palantir and even WhatsApp – are going to fall into the same situation that you’ve just fallen into, over the last several years?”

Senator Cantwell’s question not only challenged the business model of Facebook and the very architecture of the information economy, but it also invoked the historical specter of surveillance programs in post-9/11 America that disparately impacted Arab and Muslim Americans, stoking Islamophobic discourses. Her query suggests the need for critical discussion and research about data privacy, information technology, and Islamophobia in our contemporary moment.

Total Information Awareness (TIA) was a program of the United States government that operated from 2002 to 2003 under the Defense Advanced Research Projects Agency (DARPA). According to the American Civil Liberties Union, Total Information Awareness was nearly a realization of George Orwell’s “Big Brother,” utilizing an “ultra-large-scale” database for the sole purpose of identifying terrorists. Although previous iterations of mass surveillance programs had existed, the September 11th attacks provided an overarching and drastic rationale for the immediate collection of personal and commercial data to “create profiles” so that “terrorism can be prevented by collecting hoards of information about everyone and then subjecting them to a virtual dragnet.” The program could collect not only government records such as Social Security information and income tax records, but also medical records, travel histories, drug prescriptions, consumer habits, personal communications (e.g., phone calls, emails, browser histories), school records, personal and family associations, and so on. As the ACLU rightfully notes, most distressing were the absence of proper oversight and the potential for government abuse and civil liberties violations from these programs.

 

[Image: the Total Information Awareness program logo]

Even the logo for the Total Information Awareness program sparked outrage. It depicted an “all-seeing eye” atop a pyramid gazing upon nearly half the globe, everywhere except North America. According to Matt Kessler’s “The Logo That Took Down a DARPA Surveillance Project,” the design evoked total global surveillance, a message also expressed in the program’s slogan: Scientia Est Potentia (Knowledge Is Power).

 

[Image: a palantír from The Lord of the Rings]
(When I heard the name Palantir Technologies, I thought of the all-seeing orb known as a palantír from J.R.R. Tolkien’s The Lord of the Rings, made popular by the film adaptations. This is one of those moments where reality and fiction collide.)

 

Kessler writes that although TIA was shut down in 2003 amidst public outcry as “post-9/11 excess,” the National Security Agency continued building its own surveillance programs, which we now know about through the work of whistleblowers such as Chelsea Manning and Edward Snowden.

As for Palantir Technologies, it is a tech company that builds software to connect data, technologies, humans, and environments (http://www.palantir.com/about/). It was co-founded by Alexander Karp and the infamous Peter Thiel, the billionaire Silicon Valley mogul who was a senior tech advisor to President Trump early in the administration. According to Spencer Woodman’s insightful article, “Documents Suggest Palantir Could Help Power Trump’s ‘Extreme Vetting’ of Immigrants,” the “big data” mining company was suspected of building the basis of a platform for the Muslim Registry for then President-elect Trump in December 2016. Woodman writes that a similar program, the Analytical Framework for Intelligence (AFI), developed for U.S. Customs and Border Protection, already “tracks and assesses immigrants and other travelers.” The program draws upon a “variety of federal, state, and local law enforcement databases … including biographical information, personal associations, travel itineraries, immigration records, and home and work addresses, as well as fingerprints, scars, tattoos, and other physical traits.”

Palantir Technologies specializes in working with large-scale, complex data sets and building visually compelling interfaces that provide meaningful analysis for users. In other words, Palantir has overcome one of the major obstacles to comprehensive data analysis: data overload. In fact, Palantir has already signed a $34,650,000 contract with Immigration and Customs Enforcement (ICE) to “build and maintain an intelligence system called FALCON,” which operates in similar fashion to AFI, storing and analyzing information from government databases.

In 2012, a Department of Homeland Security impact report highlighted AFI’s promising capabilities, including data processing, geospatial analysis to “help agents learn about the location or type of location that is favorable for a particular activity,” “link analysis to produce a social network representation of the data,” and temporal analysis “that can be used to predict future activities.” Four years later, in 2016, another favorable impact report by the Department of Homeland Security revealed that a controversial data source from the Bush era, the National Security Entry-Exit Registration System (NSEERS), had been added to the AFI system-linked programs. NSEERS required all foreign nationals from specific countries to register with the federal government; most of those who registered were Muslims. Although the program was suspended in 2011 by President Barack Obama, the data still exists, and federal agents can draw upon it for other law enforcement purposes. In other words, whether President Trump calls for “extreme vetting” or a Muslim Registry, a means to systematize this information and a registry already exist. Both can be tasked to track Muslims not only in the United States but all over the globe. The question is not whether a Muslim Registry could be implemented, but rather when, under what parameters, and who profits if the program goes live.

For what it’s worth, awareness of the Muslim Registry and of Peter Thiel’s links to President Trump led to a series of protests at the headquarters of Palantir Technologies on January 18, 2017, just days before President Barack Obama stepped down from office. The protests prompted Palantir’s CEO, Alexander Karp, to declare that his company would never build a Muslim registry.

In her questioning of Mark Zuckerberg, Senator Cantwell raised fundamental questions: What is the difference between government agencies such as the NSA and companies such as Palantir and Facebook if they are all engaged in massive data collection and data mining? Perhaps more crucially, how can they be held accountable when violations of privacy, the proliferation of fake news, or damage from hate speech or Islamophobia occur?

Tech CEOs, and Silicon Valley in general, like to present themselves as forward thinkers and champions of civil liberties and civil rights. Tim Cook, CEO of Apple, even declared in an MSNBC interview that privacy is a human right, a statement that gained much praise and cultural currency in the fallout from Facebook’s privacy breach.

(Apple CEO Tim Cook: “Privacy Is A Human Right” on MSNBC: https://youtu.be/WneQ34ZuXww)

But like most politicians, tech CEOs cannot be relied upon to do the right thing when it counts. When it comes to privacy and surveillance, vulnerable communities like Arab and Muslim Americans are often at the greatest risk. Understanding how Islamophobic discourses are produced, circulated, and preserved at the level of data is urgently needed as information technology continues to innovate at a rapid pace and our information economy continues to create new opportunities as well as difficult challenges.

 

For further reading, see:

Abdul-Gader, Abdulla H., and Muhammad A. Al-Buraey. “An Islamic Perspective on Managing Information Technology: Toward a Global Understanding.” Islamic Studies, Vol. 37, No. 1 (Spring 1998), pp. 57-76. Stable URL: http://www.jstor.org/stable/20836978

Azzi, Abderrahmane. “Ethical Competence in the Information Age.” Islamic Studies, Vol. 37, No. 1 (Spring 1998), pp. 77-102. Stable URL: http://www.jstor.org/stable/20836979

Bowie, Neil G. “Terrorism Events Data: An Inventory of Databases and Data Sets, 1968-2017.” Perspectives on Terrorism, Vol. 11, No. 4 (August 2017), pp. 50-72. Stable URL: http://www.jstor.org/stable/26297895

Gold, Steven J. “Israeli Infotech Migrants in Silicon Valley.” The Russell Sage Foundation Journal of the Social Sciences, Vol. 4, No. 1 (January 2018), pp. 130-148. Stable URL: http://www.jstor.org/stable/10.7758/rsf.2018.4.1.08

Huff, Toby E. “Globalization and the Internet: Comparing the Middle Eastern and Malaysian Experiences.” Middle East Journal, Vol. 55, No. 3 (Summer 2001), pp. 439-458. Stable URL: http://www.jstor.org/stable/4329651

Mazrui, Ali, and Alamin Mazrui. “The Digital Revolution and the New Reformation: Doctrine and Gender in Islam.” Harvard International Review, Vol. 23, No. 1 (Spring 2001), pp. 52-55. Stable URL: http://www.jstor.org/stable/42762663

Mian, Atif, and Howard Rosenthal. “Big Data in Political Economy.” The Russell Sage Foundation Journal of the Social Sciences, Vol. 2, No. 7 (November 2016), pp. 1-10. Stable URL: http://www.jstor.org/stable/10.7758/rsf.2016.2.7.01

Tufekci, Zeynep. Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven: Yale University Press, 2017.

Williams, Betsy Anne, Catherine F. Brooks, and Yotam Shmargad. “How Algorithms Discriminate Based on Data They Lack: Challenges, Solutions, and Policy Implications.” Journal of Information Policy, Vol. 8 (2018), pp. 78-115. Stable URL: http://www.jstor.org/stable/10.5325/jinfopoli.8.2018.0078

Maxwell Leung
Maxwell Leung is an assistant professor in the Critical Studies Program at California College of the Arts. His research explores the relationship between the representation and expression of state power and its regulation of subjectivity, agency, and culture. His primary research interests are hate violence studies, law and society, studies of governance, critical race theory, and post-structuralist theory. His secondary research interests include Asian American Studies, comparative Ethnic Studies, visual sociology, and popular culture. In 2010, Leung was appointed Associate Researcher to the Islamophobia Research and Documentation Project at UC Berkeley. Leung’s project with the Center is called “Visualizing Islamophobia: A Visual Ethnography of Arab and Muslim American Identity.” This interdisciplinary visual, sociological, and ethnographic project is designed to discover the ways in which Arab and Muslim American identity is impacted by and negotiated in an acute period of Islamophobia in the United States. The central concerns of this project are the nature of Arab and Muslim American personal narratives in a time of intense racial and xenophobic anxieties, and how Arab and Muslim Americans make sense of their identity and place in American culture.