Is Facebook Just a “Tool”?

by Nolen Gertz

This past week, Facebook CEO, chairman, and founder Mark Zuckerberg testified before members of the United States Senate and House of Representatives to answer questions about the growing scandal surrounding Facebook’s role in helping Cambridge Analytica (and, by association, the Russian government) gain access to the data of millions of Facebook users in order to influence the 2016 US presidential election, the Brexit referendum, and possibly many other recent elections.

Zuckerberg began his prepared testimony by apologizing that Facebook “didn’t take a broad enough view of our responsibility; that was a big mistake.” While this was nice to hear, it is clear from the rest of his testimony that Zuckerberg and Facebook have still not taken a “broad enough view.”

Zuckerberg elaborated by stating, “Across the board, we have a responsibility to not just build tools, but to make sure those tools are used for good.” While this and other such statements from Zuckerberg were alarming due to the implication that Facebook decides for its users what is “good,” for me what was most alarming was the description of Facebook as a “tool.”

In order to see why this view of Facebook is dangerous, it is important to realize that it was echoed by Cambridge Analytica itself. On March 17, Cambridge Analytica turned to Twitter to try to downplay the scandal surrounding it. In its first tweet, Cambridge Analytica explained that the data it used was not “Facebook data” but rather “client and commercially and publicly available data.” In other words, the data was given to it, not taken.

In response to the claim that people were tricked into giving Cambridge Analytica access to their data, just as the data was allegedly used to trick people into voting for Trump or for Brexit, Cambridge Analytica’s fifth tweet of the thread stated: “Advertising is not coercive; people are smarter than that.” The implication here is that to think that something posted online can manipulate people into acting a certain way is to view people as dumb, as stupid enough to be behaviourally influenced by technology, which is a view that Cambridge Analytica says it does not support.

Except that Cambridge Analytica does support such a view of humanity. On Cambridge Analytica’s homepage, we can still find one simple statement: “Data drives all we do. Cambridge Analytica uses data to change audience behaviour. Visit our Commercial or Political divisions to see how we can help you.” Clearly Cambridge Analytica at least used to think that people were dumb, that a tech company — a company that, like Facebook, wanted to “just build tools” — could “change audience behaviour.” Who would hire them otherwise?

The danger here is to accept the terms of this debate, as provided by Zuckerberg and Cambridge Analytica, that technologies are tools, that tech companies build such tools, and that to be manipulated by a tool is to be dumber than a hammer.

In the field of the philosophy of technology, the idea that technologies are mere tools is known as the instrumental view of technology. This view, criticized in the 1950s by philosopher Martin Heidegger as the most dangerous yet most pervasive view of technology, sees technologies as neutral things with no power of their own. Humans have power, things do not, so things can only do good or bad depending on the person using them. To think otherwise is to attribute agency to things, which common sense tells us is absurd.

The presupposition of this view is subject/object dualism, the view that only humans (subjects) have agency, not things (objects). This dualism operates behind the claim that “Guns don’t kill people; people kill people.” Guns are things, and things do not kill. This seems too obvious to bother arguing about, which is why we of course do not need gun control (“Ban assault weapons!”), but only responsible gun ownership. Similarly, we of course do not need social media control (“Delete Facebook!”), but only responsible social media users. In other words, subject/object dualism is a metaphysical position, but one with huge political implications.

Yet just as people tend to act differently after picking up a gun, so too do people tend to act differently after logging in to Facebook. To pick up a gun is to see the world as someone holding a gun, so someone who picks up a gun is much more likely to take aim at something (or someone) than to throw the gun in the air or lick it. The gun does not determine our behaviour, but the gun does shape our behaviour. So too does Facebook shape our behaviour, and to ignore this shaping is to fundamentally misunderstand both guns and Facebook.

As Don Ihde argued in Technology and the Lifeworld, technologies “mediate” how we see the world and how we act in the world. Ihde analyzed these mediations by classifying them into types of “human–technology relations.” A gun enters into an “embodiment relation,” extending our bodily abilities: a bullet is like a fist you can shoot at someone. Facebook enters into a “hermeneutic relation,” extending our interpretive abilities: a Facebook account is like a pair of eyes you can shoot at someone.

What is crucial about human–technology relations is the dynamic of revealing and concealing at play, as technologies give but they also take away. Guns make us feel powerful, so much so that gun users feel helpless without them. Facebook makes us feel connected, so much so that Facebook users feel disconnected without it.

Because technologies influence how we see and how we act, Peter-Paul Verbeek argues in Moralizing Technology that we must integrate technologies into our ethical theories since they are already integrated into our ethical practices. And because technologies influence how we destroy ourselves and how we destroy others, I argue in my forthcoming book Nihilism and Technology that we must integrate technologies into our political theories since they are already integrated into our political practices. In other words, we should not only worry about what Facebook is doing to us, but also about what drives us to want to use Facebook and to stay on it long after discovering its dangers. For we gain nothing from deleting Facebook if we simply erect a new Facebook in its place.

Nolen Gertz is Assistant Professor of Applied Philosophy at the University of Twente in the Netherlands. His research focuses primarily on the intersection of existential phenomenology and the philosophy of technology. He spoke on Nihilism and Technology at CIPS on 27 March 2018. He can be reached at [email protected].
