Timothy R. Amidon, Graduate Assistant, University of Rhode Island
Increasingly, those with access to electronic tools use them to perform mundane tasks in both public and private spheres: searching for information, clicking within databases on websites, inputting numbers and words into networked computing devices. They also take breaks to use social networking sites to connect and catch up with friends and acquaintances. Never before in human history have we been so networked. Yet how often do people stop and reflect about these practices? How successful have attempts within the field of rhetoric/composition been at raising technology users’ awareness of the types of tacit arrangements upon which these types of technologies are founded?
One such arrangement is the way in which people who use technologies—those who add value to these tools—often fail to be recognized for what they contribute. More specifically, as Bruce Sterling and Scott Klinker have argued, ‘end users’ add value to technologies because they co-compose patterns (think massive data-sets) that tell designers how people use technology as well as what motivates those uses (e.g., business, political, and social goals). One benefit is that technology users often enjoy smarter, more efficient technologies. Yet there may be downsides to the tacit assumption that technology designers should be given all the credit for the creation of technologies. It is a matter of perspective—a difference between tool and tool in use.
Users of technologies must become more cognizant of the economies of the digital interface. Generally speaking, it is commonplace now to hear conversations about privacy in relation to these economies, but privacy is just a piece of the greater puzzle. For example, why haven’t we had more conversations about what level of ownership technology users should have over their digital fingerprints? Why are users unconcerned that the tacit arrangement currently assigns authorship, ownership, and access rights to information about how users do things with tools not to the users themselves but to the designers of the tools? Again, it is a matter of perspective—a difference between tool and tool in use. When people use electronic tools they create information that is a byproduct of that use, as the following two examples illustrate.
The first example: A moment ago, I decided to update my Facebook status to inform my friends that I was writing a piece on copyright, spimes, and metatext for the NCTE Inbox (see Johndan Johnson-Eilola’s chapter in Stuart Selber’s Rhetorics and Technologies for a more thorough look at how spimes and metatext relate to composing in a digital age). I hit a couple of keys. I pushed the share button with my mouse, and subsequently the marketing algorithms suggested that I like “copyright” and “law.” I created value: Facebook now knows a little bit more about me as an individual, and that information might be used for a wide variety of purposes.
The second example: While searching for Johndan Johnson-Eilola’s Datacloud in Amazon, the database suggested—based on the information I had input—that I may also wish to buy Henry Jenkins’ Convergence Culture. Again, I created value, along with others like me who use the site, by supplying information (information I cannot opt out of providing, even if I wanted to) about my purchasing habits to the database that is Amazon. Taken together with the data collected from others, this information essentially allows Amazon to make intuitive suggestions; it is also (at least partially) why Amazon is a technology we tend to rely on to satiate our needs and desires in a consumer culture.
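Amazon’s actual recommendation system is, of course, proprietary. But the underlying principle—that many users’ purchase histories, pooled together, let a database suggest what a given shopper might want next—can be sketched with a toy item-to-item co-occurrence model. The function name, the book “baskets,” and the counts below are all hypothetical illustrations, not Amazon’s method:

```python
from collections import Counter

def suggest(baskets, item, k=1):
    """Toy co-occurrence recommender: count how often other items
    appear alongside `item` in users' purchase histories, then
    return the k most frequent companions."""
    co = Counter()
    for basket in baskets:
        if item in basket:
            for other in basket:
                if other != item:
                    co[other] += 1
    return [title for title, _ in co.most_common(k)]

# Hypothetical purchase histories pooled from many users:
baskets = [
    {"Datacloud", "Convergence Culture"},
    {"Datacloud", "Convergence Culture", "Rhetorics and Technologies"},
    {"Datacloud", "Practical Reason"},
]

print(suggest(baskets, "Datacloud"))  # → ['Convergence Culture']
```

The point the sketch makes concrete: no single shopper’s data produces the suggestion; it is the aggregate—the value co-composed by many users—that does the work.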
These are two of literally hundreds of ways that information about how we do things with electronic tools is collected on a daily basis. Moreover, these are two uses that are relatively harmless—I think. The fact that I didn’t ‘like’ copyright or the law—after Facebook suggested I do so—may say a great deal about me as an individual. The bigger point, though, is that when we start making these clicks as large collections of people, as a society of technology users, we essentially allow the designers of technologies to construct and assemble vast data-sets. What we actually do is construct—with tool providers—a digital habitus (Bourdieu). In other words, we allow these technologies to conduct research on us, and we don’t even ask to see what that data says. What does this phenomenon suggest, then, about how informed consent functions with regard to technology use?
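The “clicks” described above can be pictured as records quietly accumulating in a log the user never sees. The sketch below is purely illustrative—the field names, user ID, and actions are invented, not any platform’s actual schema—but it shows how individually trivial interactions become, in aggregate, a behavioral profile:

```python
from collections import Counter
from datetime import datetime, timezone

def log_click(dataset, user_id, action, target):
    """Append one interaction record. Each record is mundane on
    its own; pooled across millions of users, such records form
    the vast data-sets described above."""
    dataset.append({
        "user": user_id,
        "action": action,
        "target": target,
        "time": datetime.now(timezone.utc).isoformat(),
    })

events = []
log_click(events, "u42", "status_update", "copyright")
log_click(events, "u42", "search", "Datacloud")
log_click(events, "u42", "search", "Convergence Culture")

# What the tool provider can now tally about this user:
profile = Counter(e["target"] for e in events)
```

Notably, nothing in this sketch asks the user for consent or shows the user the resulting profile—which is precisely the asymmetry the paragraph above describes.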
Beyond the issue of informed consent, what about the question of who has access to, and thus de facto ownership and stewardship of, the vast sea of information, the giant data-sets that are created within these electronically networked, socially constructed environments? In short, why haven’t we conceived of metatext and spimes as intellectual property?
Selected List of Works Cited
Bourdieu, Pierre. Practical Reason: On the Theory of Action. Stanford: Stanford UP, 1998. Print.
Johnson-Eilola, Johndan. “Among Texts.” Rhetorics and Technologies: New Directions in Writing and Communication. Ed. Stuart A. Selber. Columbia: U of South Carolina P, 2011. 33-
This article will be continued in next month’s IP Report.
This column is sponsored by the Intellectual Property Committee of the CCCC and the CCCC-Intellectual Property Caucus. The IP Caucus maintains a mailing list. If you would like to receive notices of programs sponsored by the Caucus or of opportunities to submit articles to either this column or to an annual report on intellectual property issues, please contact email@example.com.