Wednesday, January 11, 2017

How to Augment a Prison

If you've ever played Tony Hawk's Pro Skater 4, then visiting Alcatraz Island in San Francisco is a very surreal experience. The entire time I was touring the island, I had to fight the urge to jump up on the handrails and pretend I was doing a kickflip onto the lower deck. (I can't skateboard at all; I definitely would have broken my neck.) As someone interested in augmented reality, this strange convergence between digital and physical spaces was truly fascinating.



Of course, I didn't spend the entire time at Alcatraz trying to find "S-K-A-T-E." Mostly because my wife and I were utterly engrossed in the free audio tour that came with our admission. As we listened, the nondescript cells came to life as the narrator pointed out the beds occupied by Frank Morris and the Anglin brothers, the prisoners involved in the infamous Alcatraz escape of 1962. The audio tour even included actual recordings from when the prison was still operational. When I walked into the dining hall, I almost jumped when a prison guard from the 1950s shouted at me to "shut up and eat!"

Being at a location with an audio tour is not simply a matter of looking; it's a matter of inhabiting, and the audio supplement works so well at a place like Alcatraz because it allows the prison to remain visually unadorned by distracting signs and images. Unlike stationary placards or historical markers, an audio tour lets visitors keep moving through the space, reducing congestion in the most popular areas. Moreover, it creates a more immersive historical experience by coating the visual artifacts with layers of aural history.

According to Emma Rodero, audio-based narratives are so compelling because the listener is "constantly building [their] own images of the story in [their] mind." Similar to reading a book, audio narratives are a fundamentally participatory medium in the sense that they leave space for the listener's imagination to participate in the creation process. For location-based tours like Alcatraz, then, the location itself provides a kind of visual and material catalyst for the user's reception of the audio.

But what about locations (historic or otherwise) that don't have prominent visual elements? The site of the Battle of Gettysburg, for instance, looks like a normal field unless viewed through the augmented reality tour offered by InSite. Creating location-based tours, then, must take into consideration how the space itself has already "written" the visual elements of the scene. If rhetoric is about discerning our "available means" as writers, then we must consider what is (and is not) already available as a rhetorical factor within the location itself.

As John Tinnell points out, physical spaces operate as "techno-geographic interfaces" within AR applications: "AR enables the creation of media that incorporate geographic flows as a vital element in the composition or design process" (78). For Tinnell, writers of location-based projects must maintain a sense of openness or contingency to the "accidents of the sensible" present within a location that will always serve as constitutive rhetorical forces within the user's reception and experience of the project. As the Alcatraz example demonstrates, writers of location-based tours must allow the development of AR content to remain "permeable and transparent" to the location itself (80).

Writing a location is always a form of co-authorship. As augmented reality becomes more advanced and ubiquitous, we must remain careful as writers not to simply incorporate AR media unreflectively into location-based writing projects. Rather, we must carefully consider how the augmentations respond to or (as I have written elsewhere) "articulate" the rhetorics already embedded within the user's environment.

Thursday, September 15, 2016

Celestial Augmentation

Augmented reality is on track to become a $120 billion industry by the year 2020, AR startup Magic Leap just received an unprecedented $800 million Series C round of venture capital investment, and Vuforia, a leading vision-based augmented reality platform, was recently purchased for $65 million by PTC. Without a doubt, the last few years have seen a staggering amount of interest in AR technologies. As a result, some of the smartest people in technology, business, marketing, and design are starting to flock to this emerging industry.

But for some reason, they can't seem to convey the potential of this amazing technology beyond kids looking at models of 3D planets.


Screenshot from EON Reality promo video



Screenshot from Project Tango promo video
Screenshot from Magic Leap promo video

This might seem like a strange trope, but it's actually more fitting than you think. After all, isn't outer space the perfect metaphor for augmented reality? Kind of like the "final frontier" of computational advancement? Popular media often uses outer space as a symbol for curiosity, exploration, and territorial conquest, and the child-users in the videos are the future explorers, or inheritors, of this impending technological unknown.

Or, to phrase it less optimistically, this trope exposes the underlying colonialist impulse of "augmented reality" as a mass medium. When I say that AR "colonizes space," I mean it in two different ways: First, AR colonizes space in the more obvious sense that digital advertisements, social media feeds, email reminders, and solar systems will visually populate the physical spaces of everyday life. However, there is another, less obvious form of colonization taking place within AR development.

When Google released its beta version of Glass, one of the biggest complaints was that the device was essentially a smartphone that you wear on your face and thus failed to live up to its potential as an AR-optimized optical display. Well, Google has responded to its critics by taking on an even more ambitious AR program: Project Tango. At first glance, Tango seems indistinguishable from other AR devices and platforms. I mean, the promo video even features a kid staring in wonder at an augmented solar system! However, if you read up on the technology behind Tango, Google is actually taking a very different approach to how its technology orients digital content in physical space.

Tango is basically a smartphone/tablet with an AR-enhanced camera. The camera combines three computer vision technologies that allow it to track and map its physical surroundings with incredibly high degrees of accuracy: motion tracking, area learning, and depth perception. Unlike image- or GPS-based AR technologies, Tango uses this computer vision technology to generate a three-dimensional map of the user's surroundings. This "map" is called an ADF (Area Description File), and subsequent Tango applications can draw upon the spatial data within this file to create additional overlays.
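To make the ADF idea a little more concrete, here is a purely illustrative C# sketch. This is not the actual Tango API; every type and method name below is hypothetical. The point is simply that content gets anchored relative to a previously learned description of a space rather than to a GPS coordinate or a printed trigger image.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical stand-in for a Tango-style Area Description File (ADF):
// a saved set of visual feature points that lets a device re-localize
// itself in a space it has already "learned."
public class AreaDescription
{
    public string Id;                                   // unique ID of the learned area
    public List<Vector3> FeaturePoints = new List<Vector3>();
}

// Hypothetical anchor: content is positioned relative to the learned area,
// not to latitude/longitude and not to a trigger image.
public class AreaAnchor
{
    public string AreaId;       // which area description this anchor belongs to
    public Vector3 LocalPose;   // position within that area's coordinate frame

    // Once the device re-localizes against the saved area, overlays can be
    // placed at the same physical spot on every subsequent visit.
    public Vector3 ResolveWorldPosition(Matrix4x4 areaToWorld)
    {
        return areaToWorld.MultiplyPoint3x4(LocalPose);
    }
}
```

The design consequence is the one discussed below: whoever owns the area descriptions effectively owns the spatial substrate that every subsequent overlay depends on.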

In many ways, Tango extends the logic of other Google mapping projects, such as Google Street View, just on a much more precise scale. One of Google's core missions is to "organize the world's information," but it seems like Project Tango is more of a pursuit to "organize the world" itself. What we're seeing with Tango (unsurprisingly) is a continuation of the digital hegemony that Google currently wields within screen-based media: they're not seeking control of AR content so much as the spaces and processes through which people will access it.

But who cares? Wouldn't something like Project Tango just move us that much closer to the era of "ubiquitous computing"?

Well, it matters because there are social, cultural, and political implications to technology companies tracking what counts (and what doesn't count) as potentially augment-able space. Take, for instance, Microsoft's promo videos for the Kinect. Their "retail clothing scenario" video depicts a customer walking into a clothing store where an AR mirror automatically overlays a digital dress. The customer does nothing to approve either the augmentation or the gendered clothing items the mirror (super)imposes onto the customer's body.


Kinect Mirror "Retail Clothing Scenario" promo video

 
This kind of "proxy augmentation," in which digital overlays are subordinated to the physical objects they represent, is one of the most prevalent depictions of AR for commercial purposes. One of the leading partners of Project Tango, Lowe's, has invested in an AR app that overlays furniture within customers' homes so that they can see an accurate spatial representation of a new kitchen stool before clicking "Buy."

The trope of celestial augmentation merely serves to obscure this underlying capitalist impulse currently driving the commercial AR industry. As with any new technology (like a spaceship, for instance), we want to believe that AR will be leveraged for altruistic purposes and allow us to test the limits of human knowledge, wonder, curiosity, and exploration. In reality, it's far more likely to be leveraged for more mundane consumer activities, the same ones that have always colonized the digital and physical spaces of everyday life.



To Infiniti and beyond, I suppose.










Tuesday, August 9, 2016

Pokemon Go to Sacred Spaces

I was in Europe for two weeks this summer. As is probably a common experience for many first-time visitors, I went to more museums, memorials, and churches than I can remember. After a while, I started to notice some interesting variations in how these spaces encode and enforce their historical, political, and/or religious significance by regulating the actions of visitors: at a 10th-century church in northern Germany, I was told not to film the paintings; at a beautiful, towering cathedral in Poland, I was told to remove my hat; at the former home of Frederick the Great in Potsdam, Germany, I was told not to take any pictures of the palace interior (or else pay ten euros for a "photo pass"); at the Memorial to the Murdered Jews of Europe in Berlin, I wasn't "told" to do anything. With each interaction, I gained a little more knowledge about not just why but how spaces are configured as sacred through the actions of those who pass through them.

Public memorials are often designed to be physically obtrusive for visitors. They are material markers whose physical presence is intended not only to represent but to materially enact collective remembrance. In many cases, memorials are designed to (literally) get in the way of the quotidian activities of public life. The German artist Gunter Demnig, for instance, has been installing "stolpersteins" (or "stumbling stones") throughout Europe since the early 1990s. In what is described on Wikipedia as "the world's largest decentralized memorial," a stolperstein is placed at the last freely chosen place of residence of those sent away to Nazi concentration camps. As memorials, stolpersteins are designed to be intrusive, to carve time out of people's habituated routes as they walk by and "stumble" upon them.

I was on a family trip to New York City recently, and one morning we took a train downtown to visit the 9/11 memorial. Walking down Greenwich Street, it's impossible not to see (and hear) the Twin Tower reflection pools. Erected at the original site of the North and South Towers, the pools operate as cultural pedagogy, or as material instruction for how we are to respond to 9/11 as an historical event: the dialectic between national suffering and national healing is materially "reflected" by the pools' disappearing center (unspeakable trauma) and flowing water (outpouring of hope and support). Similar to Demnig's "stumbling stones," the public is meant to encounter the reflection pools as a physically obstructive reminder of American tragedy, incorporating their message as an embodied, affective event.



Surprisingly, I also felt all of these things when my Pokemon Go avatar came across the North Tower reflection pool.


I think one of the reasons I was so shocked by the digital reflection pools is because of the disruption that the memorial evoked within the procedurality of the game's digital environment. As a Pokemon Go player, I typically proceed unreflectively through the digital landscape of Pokemon Go, only stopping to ponder its nondescript trees, intersections, parks, and buildings insofar as they participate within the processes embedded within the game (i.e., as locations for catching/fighting pokemon). In the case of the North and South Tower reflection pools, however, their presence within the digital world of Pokemon Go was (for me at least) significant beyond their role within the game's processes. Although they were just a collection of pixelated blue squares, their presence struck me as unique and significant, sticking out like public scars in an otherwise bustling, energetic landscape. The digital reflection pools were so noticeable because they reversed the procedural logic that many find troublesome in the game: instead of the game (super)imposing itself upon a sacred space, the sacred space (super)imposed itself upon the game.

For someone like Leonard Pitts Jr., I should be "throttled" for even having the application open in such a place. Pitts' article "Capture This! It's wrong to play Pokemon at Auschwitz!" comes in response to reports that Pokemon Go players have been spotted capturing digital monsters at sacred spaces such as Holocaust museums and national cemeteries. For Pitts and many others critical of the game's careless dispersal of pokemon spawning locations, augmented reality games like Pokemon Go not only distract other visitors but, more importantly, dishonor those for whom the space is intended as a site of remembrance.




Pitts is not alone in his disdain for Pokemon Go. From hospitals to private homes, Niantic has received a plethora of requests from various locations asking to be removed from the game. Although it seems simple enough to implement in the next app update, Niantic currently has no system in place (at least to my knowledge) for barring pokemon from spawning in locations like memorial sites, museums, and cemeteries. (Although, if you happen to be friends with the Niantic Board of Directors, you can personally request that pokemon be prevented from spawning on private property.) But even if Niantic were to implement a system for restricting pokemon spawning locations, it would be nearly impossible to remove every unwanted pokemon, since the designation of a space as "sacred" is determined by a multitude of cultural, ideological, and political factors that are unlikely to be taken into account for every square inch of the game's digital landscape. Pikachu will always be trespassing somewhere, it seems.
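To show why the restriction itself "seems simple enough," here is a purely hypothetical C# sketch (not Niantic's code, and every name in it is invented) of a spawn filter that checks candidate spawn points against a list of restricted zones. The code is trivial; the genuinely hard part, as argued above, is deciding what belongs on the list in the first place.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical spawn-exclusion sketch: reject spawn points that fall inside
// a registered restricted zone (memorial, museum, cemetery, etc.).
public class SpawnFilter
{
    public class RestrictedZone
    {
        public string Name;
        public double Latitude;
        public double Longitude;
        public double RadiusMeters;
    }

    private readonly List<RestrictedZone> zones = new List<RestrictedZone>();

    public void AddZone(RestrictedZone zone)
    {
        zones.Add(zone);
    }

    public bool IsSpawnAllowed(double lat, double lon)
    {
        foreach (var zone in zones)
        {
            if (HaversineMeters(lat, lon, zone.Latitude, zone.Longitude) < zone.RadiusMeters)
                return false;   // inside a restricted zone: no spawn here
        }
        return true;
    }

    // Great-circle distance between two lat/lon points, in meters.
    private static double HaversineMeters(double lat1, double lon1, double lat2, double lon2)
    {
        const double R = 6371000; // Earth radius in meters
        double dLat = (lat2 - lat1) * Math.PI / 180;
        double dLon = (lon2 - lon1) * Math.PI / 180;
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(lat1 * Math.PI / 180) * Math.Cos(lat2 * Math.PI / 180) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * R * Math.Asin(Math.Sqrt(a));
    }
}
```

The technical fix, in other words, presupposes a complete and agreed-upon inventory of sacred spaces, which is exactly what no company possesses.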

For the most part, it seems that people are not as upset with Niantic as they are with the people who choose to engage with the game in a place like Auschwitz. After all, other mobile games and social media apps still function in such places, and we don't hold those developers responsible for their users' inappropriate actions. The real culprit, it seems to me, is the game's augmented reality functionality, or the overlaying of digital pokemon within physical space. Even though this feature is a relatively minor aspect of the game, it is often featured prominently in articles discussing this issue. Indeed, it's hard to deny that a smiling, bouncy Koffing (a pokemon who excretes poisonous gas) at the Holocaust Museum creates a jarring and inappropriate visual juxtaposition. There is a kind of digital colonization taking place that undeniably configures the Pokemon Go player as flippant (at best) and cruel (at worst).

In "Of Other Spaces: Utopias and Heterotopias," Michel Foucault describes how spaces become configured as "heterotopic" when they juxtapose "in a single real place several spaces, several sites that are in themselves incompatible" (6). For Foucault, such spaces are full of "superimposed meanings," which then generate further meanings through their concatenation. As Robert Topinka explains, "by cutting and clashing with order, heterotopias force new forms of knowledge to emerge" (64).

What "new forms of knowledge" emerge in the clash between sacred-physical and playful-digital spaces? When I first saw the integration of the North and South Tower reflection pools within the game's digital rendering of my surroundings at Ground Zero, I (literally) confronted the key difference between virtual and augmented reality environments: whereas virtual reality traditionally places the user in an entirely digital world with no necessary correspondence to physical reality, AR must necessarily take into account the physical spaces of the user's surroundings in its design, including those elements that merely serve to "disrupt" the AR experience. I know it seems simple, but the lesson to be learned from all of this, at least in terms of AR as a technology for writing within public spaces, is that "augmentation" is not simply the (super)imposition of playful digital elements upon physical spaces (e.g. pokemon at Auschwitz), but simultaneously the (super)imposition of sacred physical elements upon playful digital spaces (e.g. the Twin Tower reflection pools). Indeed, as AR technology moves forward and more GPS-enabled mobile AR games are developed, we are likely to encounter even more "incompatible" juxtapositions between physical and digital space.

Monday, May 16, 2016

From Augmentation to Articulation

The first—and last—time I ever went fly fishing was on a family vacation in Big Sky, Montana. From what I can remember, my dad and I aimlessly tossed our lines into a beautiful, waist-deep stream as our guide implored us to "flick our wrists more." We were fishing with "dry flies," which are designed to imitate the subtle movement of insects landing on the water's surface (hence the wrist flicking). After a long, fish-less morning, the exasperated guide took us to a section of the river where we could pretty much reach in and grab a trout with our bare hands. We got our pictures and headed back to Florida, where you don't need an entomology degree to catch a damn fish.

According to Gunner Brammer’s October 12th, 2015 post on the Flymen Fishing Company blog, if you really want to “move fish” then you need to use articulated streamers. Unlike “dry flies,” which rest on top of the water, streamers are designed to glide just under the water’s surface where their flexible joints can “articulate” the serpentine motion of small baitfish, such as minnows. However, as Brammer notes, fishing with an articulated streamer is not as simple as purchasing “a streamer with an articulation joint.” Rather, it is a process of constructing temporary material relationships—streamer length, joint tension, knot types, etc.—such that the streamer “will articulate.” Although the streamer’s materials create the potential for articulation, the streamer does not actually “articulate” until it is reeled in, thereby (per)forming a rhetorical action that emerges as a result of the flexible connections between its various components.


Recently, I've been thinking about how I can draw upon "articulation" to theorize augmented reality as a writing technology. Following Ernesto Laclau and Chantal Mouffe's influential definition of articulation as "establishing a relation among elements such that their identity is modified as a result," I claim that those who use AR to write within a physical location should focus less on how that space can be "augmented" (i.e. adding rhetorical elements) and more on how it can be "articulated" (i.e. establishing new relationships among existing rhetorical elements). For example, John Craig Freeman's augmented reality intervention "Border Memorial: Frontera de los Muertos" generates a digital calaca--a traditional Mexican wood carving commemorating the death of a loved one--at the precise GPS coordinates of each recorded migrant death. "Border Memorial" seeks to invoke the affective weight of the migrant death toll by allowing users "to visualize the scope of the loss of life" that occurs invisibly at the U.S./Mexico border each year. Freeman's project does not "add" new information to this physical space; instead, it retrieves information already present within it. The digital calacas work alongside the harsh material rhetoric of this space (hot, dusty, dry, etc.), creating a juxtapositional framework that allows the user to articulate their experience of the border wall in the context of the more than 6,000 people who have died attempting to cross it.







As the articulated streamer demonstrates, "articulation" is an emergent rhetorical phenomenon that results from material and immaterial exchanges between flexible, contingent links. Writing studies theorist Raul Sanchez defines emergence as "modes of action that develop over time and as the result of recurrent and multiple exchanges between and among actors" (29). Thus, a rhetorical activity like public discourse is fundamentally emergent in the sense that it "develops over time and as the result" of rhetorical exchange between various members of a public. Public discourse, then, is not simply "adding" to a conversation about a particular public issue; it is about crafting flexible, contingent links between elements that are already a part of the discourse. However, similar to the articulated streamer, these links only (per)form an emergent rhetorical action as "recurrent and multiple exchanges" pass through them. Public discourse requires activity that "develops over time" in order to develop emergent qualities. AR's potential as a platform for public discourse, then, hinges upon "establishing" identity-modifying relationships not in order to impose another static perspective on a space but rather to catalyze further discourse about it. When mobile writing technologies are leveraged as a rhetorical practice of articulation, they should seek to foster open-ended, affective public responses to the unarticulated elements within that space.






Monday, April 11, 2016

Tools of the Klutz: Reading Technological Gimmicks with Thomas Rickert

I attended one of the best panels of my life at the 2016 Conference on College Composition and Communication. Each speaker played a "role" inspired by one of four common character tropes: the Poet (Geoffrey Sirc), the Gambler (Brooke Rollins), the Klutz (Thomas Rickert), and the Villain (Jeff Rice). The presenters drew upon their characters' qualities as a lens through which to interpret student subjectivity. Although all the speakers gave compelling presentations that illuminated a forgotten or unnoticed aspect of the student subject, I want to focus specifically on Thomas Rickert's notion of "the klutz" as an important rhetorical figure for moving the field of writing studies beyond narrow conceptions of "error" as a temporary divergence on the path to writing excellence. On the contrary, Rickert sees klutzery as "something to be cultivated for itself," arguing that it is "the very ground of style, of composition, and development." When we (and our students) write an essay about a particular theory, event, or issue, we do not do so according to the linear, abstracted models of process extolled by our textbooks. Rather, we often stumble along, bumping into moments of invention that emerge alongside (and in spite of) our conscious writing efforts.

I think that Rickert's notion of the klutz is compelling for not only how it changes our conceptions about writing and writing pedagogy, but also for how it offers an alternative to the "digital literacy" model of technical education. As I listened to Rickert's presentation, I thought about the process of teaching myself how to use completely unfamiliar technologies, languages, and programs as I began designing and creating mobile augmented reality applications. This process was never neat or linear, and my original design ideas were constantly being revised as I stumbled upon new rhetorical affordances of the technologies I was working with. 



While working on our site-specific augmented reality criticism application SeeWorld: Visualizing Animal Captivity Practices, I came across a new feature of the Vuforia SDK called "Virtual Buttons." This feature allowed the user to interact directly with the digital content in physical space, thus (partially) collapsing the mediating presence of the mobile screen. After seeing the above video, I was immediately interested in this new feature, and I actively sought ways to incorporate it into our application.

Because we wanted to appeal to a younger audience with the SeeWorld project, I thought it might be fun to make an augmented, interactive version of the SeaWorld park map. When scanned, small pins would appear superimposed onto the map; touching a pin would trigger the appearance of the augmented signs available within that area.
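For readers curious what the virtual button hookup looks like on the Unity side, here is a minimal C# sketch along the lines of the Vuforia sample code from that era. The object names (the "signs overlay" for a map pin) are placeholders of my own, and the exact Vuforia API surface may differ between SDK versions.

```csharp
using UnityEngine;
using Vuforia;  // namespace used by the Vuforia Unity extension in recent versions

// Sketch of a virtual-button handler for the augmented park map:
// covering a "pin" printed on the map with a finger toggles the signs
// for that area. Object names are placeholders, and the API may vary
// slightly across Vuforia SDK versions.
public class MapPinButtonHandler : MonoBehaviour, IVirtualButtonEventHandler
{
    public GameObject signsOverlay;  // the augmented signs for this pin's area

    void Start()
    {
        // Register this script with every virtual button defined on the map target.
        foreach (var vb in GetComponentsInChildren<VirtualButtonBehaviour>())
        {
            vb.RegisterEventHandler(this);
        }
        signsOverlay.SetActive(false);
    }

    public void OnButtonPressed(VirtualButtonAbstractBehaviour vb)
    {
        // Occluding the pin "presses" the button and reveals the signs.
        signsOverlay.SetActive(true);
    }

    public void OnButtonReleased(VirtualButtonAbstractBehaviour vb)
    {
        signsOverlay.SetActive(false);
    }
}
```

What makes the feature appealing (and gimmick-prone, as I discuss next) is that the "button" is nothing but a region of the printed map itself: the interaction happens on paper, not on the screen.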


After I designed the virtual button component of our application, I couldn't help but think that I was complicit in the negative characterization of AR as a gimmick. New technologies are often disparaged as gimmicks when they incorporate new features merely to garner attention, with little thought put into their practical function in accomplishing a prior (i.e. conscious) goal. In my case, I just thought this feature was cool, so I used it. I was trafficking in gimmick, the cardinal sin of technological innovation. Via gimmick, I was playing the part of the klutz.

Rickert claims that klutzery is "the constitutive inability to meet the needs of the available scene of performance." If my "available scene of performance" was the creation of a highly compelling, interactive AR experience that would fundamentally change how virtual interactivity is implemented within AR applications, then yes, I failed miserably. And if I try again, I will most likely continue to fail with little hope of ever successfully performing the scripts written for this scene.

The klutz is a figure that appears every time we write, every time we open up a new program, and every time we try to learn a new computer language. The point is not that we should perform some cognitive magic trick that inverts our klutzery to mastery so that failure can be disclosed as progress, but rather that we should see this component of ourselves (as writers, inventors, etc.) as something not to be resisted. The benefit of the klutz is not that it can justify past failures for a predetermined writing goal, but that it can help us (re)write the fundamental material and rhetorical conditions that determine what counts as "a writing goal" in the first place.

Thursday, March 31, 2016

Teaching Writing with Augmented Reality Technologies

I'm currently teaching a course called "Writing through Augmented Reality" in the University of Florida's Department of English. I have taught this class once before, although I changed it a good bit this go-around. However, both courses had the same primary aims of 1) getting students to think about the social, technological, and rhetorical implications of the emergence of AR as a mass medium, and 2) teaching students how to work with AR as an emerging writing technology. As I discussed in my talk at the 2015 CCCC, this second aim also allowed my students to re-conceive their role as "writers": I encouraged them to see themselves instead as inventors of this emerging medium as they "wrote" the genres and texts that will come to shape its rhetorical trajectory.

The first time I taught this class in fall 2014, we used Aurasma for all of the major projects. This had certain advantages: it was easy to learn, and all the projects could be created, saved, and accessed through the Aurasma website and mobile app. Using the same software the entire semester also had the added advantage of allowing my students to gain more depth in their technical knowledge of a specific tool rather than shifting to other AR platforms throughout the semester. By semester's end, however, many of the students were getting tired of Aurasma, and some even expressed interest in expanding the rhetorical possibilities of their final projects by utilizing more advanced AR software.

Aurasma is a fantastic teaching tool for anyone looking for a painless entry point into the world of AR. Indeed, I still used it for the class I am currently teaching, and my students utilized it to create some amazing AR criticism projects (see image below). For this assignment, students were required to isolate a particular company, industry, or issue and design a multimodal critique of it through Aurasma. Students were required to use the ubiquitous imagery representing that company, industry, or issue as the trigger images for their projects. For example, one student's project critiqued the dairy industry's unethical treatment of dairy cows by augmenting the famous "Got Milk?" logo to say "Got Misery?" when it is scanned with the Aurasma app. After a few seconds, a short video starts explaining the horrendous conditions of most dairy cows. To see this project, download the Aurasma app, follow the channel "counter publics," and scan the ad below.



For the final project, we switched to Unity (along with the Vuforia SDK). Unlike Aurasma, Vuforia allows developers to create standalone AR applications that can be submitted to the Google Play and Apple app stores. Thus, once the app is finished, rather than directing our users to a third-party AR app like Aurasma, Layar, or Blippar, we can simply tell people the name of our application and have them search for it in the appropriate app store. In introducing the project, I discussed the fact that although Vuforia has a steeper learning curve in requiring some basic C# coding and familiarity with the Unity game engine, creating a standalone app will provide our project with more ethos, or authority, when it comes time to advertise it to potential users. In future posts, I will discuss the actual application itself as well as my experience in walking a group of undergraduate students through the process of creating a mobile application. For now, however, I just want to touch on a couple of pedagogical aspects of the project.
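To give a sense of what "basic C# coding" means here, the scripts students write or adapt look roughly like the sketch below: a handler that shows overlay content when Vuforia finds a trigger image and hides it when tracking is lost. It follows the pattern of the DefaultTrackableEventHandler shipped with the Vuforia Unity samples, though the details vary by SDK version, and the overlay object is a placeholder from our project.

```csharp
using UnityEngine;
using Vuforia;

// Shows overlay content (e.g., historic photos for one building) when a
// trigger image is detected and hides it when tracking is lost. Modeled on
// the DefaultTrackableEventHandler from the Vuforia Unity samples; details
// may vary across SDK versions.
public class OverlayToggleHandler : MonoBehaviour, ITrackableEventHandler
{
    public GameObject overlayContent;
    private TrackableBehaviour trackable;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
        {
            trackable.RegisterTrackableEventHandler(this);
        }
        overlayContent.SetActive(false);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;
        overlayContent.SetActive(found);
    }
}
```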

My idea was that this final project would be a collaborative effort with the entire class. Each student would be responsible for creating a portion of the application, and at the end of the semester, we would put all of the components together into a single Unity/Vuforia AR application. So far, this is still the plan, although we also have all of our trigger images and overlays uploaded to Aurasma in case we run out of time and need a backup. 

I've posted a full description of the assignment below, but in a nutshell, the assignment requires students to create a site-specific AR application at a location of their choosing. First, every student wrote a 1000-word proposal and feasibility report for a potential location for our app. Students came up with a variety of interesting locations, including the university art museum and a local grocery chain. Each student gave a short 5-10 minute presentation on their proposed location, and then they all voted on what they thought to be the best and most feasible. In the end, the students unanimously supported a proposal to augment the historical buildings around the University of Florida campus. The students (and I) liked this proposal for a number of reasons:
  1. the site was nearby
  2. overlay materials (historic images, documents, etc.) were easily accessible
  3. we could imagine a variety of potential users (visitors, tour groups, prospective students, etc.)
Pedagogically, the most difficult part about this assignment was breaking up the workload evenly for the application design, planning, and building process. In addition, although the assignment is not yet complete, I imagine that assessing the project will also prove challenging, but I have attempted to mitigate this to a certain extent by designing clear assignment requirements and instructions (see below). 

Because the site was chosen through the proposal portion of the assignment, I did not know in advance which location we would be augmenting, so I had to design the workload distribution on the fly. As we moved forward with the project, we found that the easiest way to distribute the workload was to assign a certain number of buildings or historical monuments to each student, who was then solely responsible for locating trigger images and designing overlays for that area. Thus, each student had a "building cluster" that they would research and superimpose with multimedia content.

We are now about halfway through this group project, and it has proved to be an exciting pedagogical challenge. I've learned a ton along the way about teaching individual, group, and class-based augmented reality projects, and I'll continue to share that information here for anyone interested. Stay tuned!

Site-Specific AR Application Proposal 

Write a 1,000 word report detailing the feasibility of creating a site-specific, image-based AR application for a specific location in or around Gainesville. Possible locations include, but are not limited to:
  • Parks, conservation areas
  • Museums, cultural preservation areas
  • Sections of campus (athletic district, historic buildings, etc.)
  • Businesses
Your proposal should have four sections:
  1. A summary section describing why this location is ideally suited for a site-specific AR application
  2. A feasibility section detailing the amount and types of trigger images available at this location
  3. A content section describing the different media (videos, images, links, etc.) that could be used as overlays for this location
  4. A research section describing the amount of background and technical research required
Also include any relevant contact information for anyone responsible for managing the location (e.g. a representative of the Gainesville parks department).
Students will present their proposals formally to the class during one of our workshop times along with a short Prezi or PowerPoint. The class will vote to determine which proposal(s) will be accepted for the final project. Submit your 1000 word proposal, with sections clearly labeled, along with any slides you will be using for your presentation.

Site-Specific AR Application Assignment

Students will work on a collaborative, class-wide project to be determined through the AR application proposal assignment. Each student will be responsible for producing augmentations and application content for five images at their assigned location.

  • Write one 750 word "about" page for your building cluster
  • For two of your trigger images: create 90-second videos with audio narration 
  • For two of your trigger images: create audio clips with image overlays
  • Come up with five multiple choice trivia questions/answers for your building cluster

Thus, in total, you should be submitting the following:
  • 2 video files
  • 2 audio files
  • any image files used as overlays that are not in your videos
  • A word document with your about page text and trivia questions/answers







Monday, March 14, 2016

Scrambling to the Finish Line!

I'm excited to announce that my first ever standalone augmented reality application, "Super PAC Scramble," is now available for download on the Google Play store! I learned a lot in the process of designing, prototyping, and testing this application. As the description on the Play store indicates, the application itself is a single-player augmented reality game that educates players about monetary donations for the 2016 U.S. presidential election. The app uses each candidate's logo as a "trigger image" for displaying interactive overlays corresponding to their various funding sources (e.g. Super PACs, individual donations, etc.).


The idea behind Super PAC Scramble was born on a road trip back from SeaWorld Orlando, where my co-creator (Melissa Bianchi) and I had just visited to gather information for a site-specific augmented reality critique we were creating for the marine park (more about this in later posts). As we were driving back with our other collaborator (Sid Dobrin), we realized that the project was taking much longer than anticipated. Not only did we need to visit the park to gather information about SeaWorld's rhetoric of conservation and care, we also had to find suitable trigger images throughout the park, build a beta application, return the next month to test the trigger images, and then return again a few months later to create video documentation for the project. All of this had to be accomplished before we could submit our application to the Google Play and/or Apple app stores.

We decided collectively that it would be best to create another AR application that not only demonstrated the concept behind augmented reality criticisms (ARCs), but could also be accomplished within a shorter time frame. First, we knew that doing another site-specific ARC was out of the question; they require extensive trips to the location, which end up eating into the project's timeline and (more importantly) budget. Considering that the presidential primaries were beginning to ramp up, we decided to partner with the kairos of our cultural moment and utilize the ubiquitous imagery of the various presidential campaigns as the sites of our critique.

When we got back to Gainesville, the original idea was simply to create digital overlays for each candidate that revealed their top funding sources (JP Morgan, Goldman Sachs, etc.). However, as Melissa began to do more research into this issue, she quickly discovered that the most pressing concern with campaign finance was Super PACs. Super PACs essentially allow campaigns to raise unlimited amounts of money by claiming that their spending is not "coordinated with that of the candidates they benefit" (opensecrets.org). The presence of Super PACs exacerbates the outsized role of money in American politics.

The idea behind this application was partially inspired by Mark Skwarek's 2011 "Bailout Citibank" app, which overlays the dollar amount Citibank received from the federal government in the wake of the 2008 financial crisis. However, as we began work on the application, simply having dollar amounts appear over each logo was not very compelling, and we wanted to provide the user with a greater incentive to engage with the application.

As I began working more with the Unity game engine and the Vuforia AR SDK, I created some basic interactive overlays that allowed the user to drag the funding sources into each candidate's logo. When I brought this idea to Melissa, she expanded on it by outlining a game design that used a procedural rhetoric to privilege the selection of candidates who were funded by a Super PAC. In his book Persuasive Games: The Expressive Power of Videogames, Ian Bogost defines procedural rhetoric as "using processes persuasively." Super PAC Scramble invokes a procedural rhetoric within its game design by requiring users to work harder when they confront a candidate who is supported by small, individual donations rather than a Super PAC. When scanning these candidates' logos, players have the same amount of time (twenty seconds) as they do with other candidates, but there are over five times as many donations that must be dragged into the logo in order to receive full funding.
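As a rough illustration of how that procedural argument gets encoded in game logic, here is a C# sketch of a funding round. This is a reconstruction for illustration, not the shipped code: the field names and the base donation count are placeholders; only the twenty-second timer and the roughly five-to-one donation ratio come from the description above.

```csharp
using UnityEngine;

// Sketch of the procedural rhetoric in Super PAC Scramble: every candidate
// gets the same twenty-second round, but candidates backed by small individual
// donations require roughly five times as many drag-and-drop actions to reach
// full funding. Illustrative reconstruction only; names and numbers (other
// than the timer and the ~5x ratio) are placeholders.
public class FundingRound : MonoBehaviour
{
    public bool backedBySuperPac;          // set per candidate when their logo is scanned
    public int baseDonationsToWin = 4;     // placeholder count for Super PAC candidates
    public float roundLength = 20f;        // seconds, same for every candidate

    private int donationsNeeded;
    private int donationsDragged;
    private float timeRemaining;

    void Start()
    {
        // The argument lives in this line: small-donor candidates demand ~5x the work.
        donationsNeeded = backedBySuperPac ? baseDonationsToWin : baseDonationsToWin * 5;
        timeRemaining = roundLength;
    }

    void Update()
    {
        timeRemaining -= Time.deltaTime;
        if (timeRemaining <= 0f)
        {
            Debug.Log(donationsDragged >= donationsNeeded
                ? "Fully funded!"
                : "Campaign underfunded.");
            enabled = false;   // end the round
        }
    }

    // Called by the drag-and-drop overlay whenever a donation lands on the logo.
    public void OnDonationDropped()
    {
        donationsDragged++;
    }
}
```

The persuasion is carried entirely by the process: the player feels, in their scrambling fingers, how much harder a campaign has to work without Super PAC money.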

Creating this application was "super" fun (sorry, I had to do it). Although the presidential campaign will be over within the next few months, and indeed several candidates in the application have already dropped out, this application could be completely updated to coincide with later elections by exchanging the trigger images and overlays but maintaining the same procedural argument critiquing the influence of money on politics.