
How Shape Semifinalists turned an artwork project for an elementary school fundraising auction into an interactive crystal ball

by Andrea Morton  07.21.2016 03:21 PM


Post by Glen German, Product Marketing Manager, AT&T Developer Program

In February, we invited students, developers, and entrepreneurs to enter a product or app in the Shape Challenge. Unlike hackathons, the Shape Challenge focuses on products and prototypes that work – meaning a lot of the kinks have been ironed out. We recently announced the semifinalists for the Connected Things category, where we asked for submissions of solutions that find new ways to connect the things around us in a meaningful way. Now, we invite you to get to know a little bit more about one of the semifinalists in that category, Future Tense.

We asked Future Tense team members JD Beltran and Scott Minneman, who are based in San Francisco, CA, to tell us a little bit more about themselves and their project.

What is the purpose of Future Tense?

Future Tense is a series of artworks we hope will evoke the same responses as other powerful artworks: curiosity, surprise, wonder, passion, and much more. We see this project as a provocative touchstone, a totem, or even an oracle. It will contain at least three hundred and sixty-five specially created and/or curated short silent films and adventures, one for each day of the year. Hopefully, viewers will always encounter something unexpected and perhaps walk away with a fresh perspective on the world.

What was your inspiration behind the development of Future Tense?

Several years ago, we came up with a clever new artwork to contribute to our fifth grade son’s elementary school fundraising auction. We had already been experimenting with using spherical lenses for our Cinema Snowglobe series of artworks, and we decided to scale up the lens to magnify a video display. Scott had seen an 8” diameter crystal sphere in a store window, and we discovered that they were amazing, optically. We set the sphere on a base with an underlying display, yielding magical results. The image was not only magnified, but the spherical lens gave it an unexpected spatial dimension akin to an entirely new 3D miniature world. We played with prototypes of this artwork structure for the next three years, typically playing short film loops with different content on the display. This year, we saw the developing technology of sensor-based miniaturized computers that would fit within our base’s space limitations. This innovation inspired the idea of making the Crystal Ball more interactive. Viewers can now “summon” a vision of the future, or a story, by waving a hand over the ball. Embedded sensor technology also allows the viewer to control the image being viewed and “explore” that environment by using hand gestures to shift around it in the crystal ball.
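The team doesn't describe their sensor code, but the "summon by waving a hand" interaction they describe can be sketched as a small piece of trigger logic: a proximity sensor streams distance readings, and a wave fires once when the hand hovers close for a few consecutive frames, then re-arms after the hand withdraws. The threshold and debounce values below are invented for the example.

```python
# Illustrative sketch (not the artists' actual code): turning a stream of
# proximity-sensor readings into a single "summon" event per hand-wave.

SUMMON_THRESHOLD_MM = 120   # hand closer than this counts as a wave (assumed)
DEBOUNCE_FRAMES = 3         # consecutive close readings required (assumed)

class GestureTrigger:
    """Fires once when a hand hovers over the sphere, then re-arms
    only after the hand withdraws, so one wave = one new film."""

    def __init__(self, threshold=SUMMON_THRESHOLD_MM, debounce=DEBOUNCE_FRAMES):
        self.threshold = threshold
        self.debounce = debounce
        self.close_count = 0
        self.armed = True

    def update(self, distance_mm):
        """Feed one sensor reading; return True on the frame a summon fires."""
        if distance_mm < self.threshold:
            self.close_count += 1
            if self.armed and self.close_count >= self.debounce:
                self.armed = False
                return True        # summon a new vision/story
        else:
            self.close_count = 0
            self.armed = True      # hand withdrawn: ready for the next wave
        return False

trigger = GestureTrigger()
readings = [400, 390, 100, 90, 80, 85, 400, 410]   # simulated sensor stream
events = [trigger.update(d) for d in readings]      # fires exactly once
```

The debounce keeps sensor noise from triggering playback, and the re-arm step prevents a lingering hand from summoning film after film.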

How long has Future Tense been in development?

The initial Crystal Ball with a simple display and looping video was first developed four years ago. The interactive sensor-based version was developed in 2016. However, the original idea had been brewing for a long time. In 1995, Scott created a crystal ball for a museum exhibition. It contained a small image (projected upward from beneath, onto silk), which was reactive to a viewer’s approach and retreat.

What are some challenges you have encountered while developing Future Tense? How did your team overcome those challenges?

There were multiple challenges—developmental, functional, content-based, and structural/optical—that we have faced in creating this work.

Developmental/Functional: A key challenge in interactive art is making a plug-and-play, turn-key version, so that the purchaser can simply plug it in and have it start up automatically. Scale is also an issue. The base of the Crystal Ball is relatively small and its interior space quite limited, yet we need a robust mini-computer that can run the high-resolution video display, interface with and analyze sensors, and accommodate a plug-and-play boot-up system. Only recently have we been able to acquire and aggregate this equipment at the desired miniaturized scale.
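On a Linux-based mini-computer (a Raspberry Pi, for example), the turn-key boot-up behavior described above is commonly achieved with a systemd service unit. This is a generic sketch, not the team's actual configuration; the paths, service name, and user are placeholders.

```ini
# Hypothetical /etc/systemd/system/crystalball.service:
# launch the player/sensor app at boot and restart it if it crashes.
[Unit]
Description=Crystal Ball player
After=multi-user.target

[Service]
ExecStart=/opt/crystalball/run.sh
Restart=always
User=pi

[Install]
WantedBy=multi-user.target
```

Enabling it once (`systemctl enable crystalball.service`) makes the artwork start whenever it is plugged in, with no keyboard or login required.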

Content-based: For the Crystal Ball to be a success, it needs to offer a set of films or moving images that are extremely compelling, whether it be through tapping into the unexpected, using the allure of beauty, conveying intriguing stories, highlighting fascinating nature or phenomena, or offering mystery and contemplation. We have needed to tap into the history of film, specifically the tools of the short, silent film, and use all of the filmic devices at our disposal to create the short film content. This includes using gesture, color, light, ambiguity, abstraction, movement, unforeseen transitions, and rhythmic cross-dissolves, to name a few.

Display: A giant spherical lens such as ours magnifies not only the image but the individual pixels as well. We often struggle to find a screen of sufficient resolution so that pixelation won’t distract from the image. When the first retina displays became available several years ago, we were thrilled to see resolutions of adequate quality. The image was no longer compromised once it was magnified through the spherical lens.
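Why retina-class density matters here can be checked with back-of-envelope geometry: a magnified pixel stays invisible only while its angular size stays under the eye's roughly one-arcminute resolution. The display density, magnification, and viewing distance below are illustrative assumptions, not the artwork's actual specs.

```python
# Back-of-envelope check (illustrative numbers, not the artists' specs):
# does a magnified pixel stay below the eye's ~1 arcminute resolution?
import math

PPI = 326                 # assumed "retina-class" display density
MAGNIFICATION = 2.0       # assumed effective magnification of the sphere
VIEW_DISTANCE_IN = 24.0   # assumed viewing distance, in inches

pixel_in = 1.0 / PPI                       # physical pixel pitch
apparent_in = pixel_in * MAGNIFICATION     # pitch after magnification
# angular size of one magnified pixel, in arcminutes, at that distance
arcmin = math.degrees(math.atan2(apparent_in, VIEW_DISTANCE_IN)) * 60

print(f"apparent pixel: {arcmin:.2f} arcmin")   # prints "apparent pixel: 0.88 arcmin"
```

Under these assumptions the magnified pixel sits just under the ~1 arcminute threshold; a lower-density screen, or a stronger lens, pushes it over and the pixelation becomes visible.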

Optical: 21st-century audiences are inundated with the moving image—through television, film, YouTube, the Internet, GIFs, Virtual Reality (VR), Augmented Reality (AR), and more. Our challenge is to harness the power of the moving image and story, and create experiences that transcend the mundane. Since the Crystal Ball mimics a 3D experience (but in a sphere), 3D films could be considered a primary competitor for our audience. So, we further augment the 2D films on our displays by using compositing, shadows, transparent layers, and blue/green screen imagery to enhance the spatial quality of the image.

Supply sources: We frequently use cutting-edge technology, such as retina screens, screens in unusual form factors, or novel sensors, but we typically need only a handful of units for a project. Our technology suppliers, however, are used to working with (and sometimes will only supply) companies that order thousands.

What’s next for Future Tense?

Content tricks: Constant zoom, layers, and center vs. peripheral focus on a subject.

Capture devices: 360/180-degree video using the whole sphere.

Imagery: Innovative applications for VR goggles.

Responding to the user: Viewing position of the hands, increasing the number of people who can interact.

What is your development background?  What type of development background do your team members have?

Scott: Architect and Mechanical Engineer with years of experience making things and writing whatever code is necessary to make projects work.

JD: MFA in New Genres/Film, and extensive writing and storytelling experience in multiple mediums.

Preferred coding languages?

Scott: Prototypes in Processing, Python, and Wiring/Arduino, but often migrates to Java, C, C++, Objective-C, and whatever it takes. Scott’s life is plagued by coercing APIs and myriad scripting languages into doing things and performing tasks their authors never anticipated.

JD: Processing, and also works in HTML/CSS/JavaScript.

How did you form the Future Tense team?

Our first collaboration was in 2007, when we worked on an interactive art commission initially developed by the MIT Media Lab. The commission needed a further iteration for an exhibition. We have been working together to make new interactive artworks ever since.

When we approach developing a new work of interactive art, there is a period of prototyping, during which we develop a proof of concept. After we have a workable prototype, we assemble a list of people with the right skill sets to build the team. We sometimes use ODesk/Elance/Upwork, but prefer using our Rolodex.

Favorite development tool? Why do you like it?

As artists, our favorite development tool is storytelling. We are passionate about authoring the stories. We are equally inspired by the immersive, engaging environment in which (and platforms on which) these stories are told.

Scott couldn’t live without a Dremel rotary tool or a laser cutter, and he’s getting attached to 3D printing. Again, Arduino, Python, and Processing are efficient ways to see if we’re on a promising track. For JD, it’s Final Cut Pro, Adobe Premiere, Boris FX, Photoshop, MorphX, her Sony Alpha 7R II camera, assorted camera filters, light, color, paintbrushes, and the natural landscapes of the world—urban, rural, and celestial.

What technologies are you most passionate about?  Did those passions help shape the app you created?

How our audience and users experience the storytelling imagery we make is key, so we get really excited about stunning, crisp, life-like visual displays, particularly in unusual form factors.

What advice do you have for up-and-coming developers who may have an idea that they want to turn into a reality?

As a developer, concentrate on doing the lightest-weight set of things needed to figure out the next questions to ask, and to see whether what you’re pursuing is actually working.

What sets us apart from many other artists/technologists is that we’re able and willing to do this kind of work. We place ourselves, not the technology, in the driver’s seat. We are not inspired by the technology that’s available, rather, we are inspired by our ideas. We then coerce technology into realizing those visions.

As artists, we frequently find ourselves needing something you can’t download or buy off a shelf. For instance, we might need a specific material that has certain properties, e.g., strength or flexibility, and needs to have a certain form (tubular, cylindrical, etc.). In many cases, we’ll just go to a hardware store and find something that resembles what would work. We’ve spent countless hours researching a lead about a related material/application, or waiting for something at CES to become available. We might go to a marine supplies store, where, after months of testing and research, we identified an effective marine glue to seal our water-filled snowglobes. Or, we might buy some consumer electronics device on Amazon or at Best Buy and rip it apart. More than once, we’ve deconstructed television displays and put them into artworks disguised as moving photographs.

Again, our re-purposing of hardware and software is frequently unanticipated by the people who made or authored that hardware and software. But we find that this open receptivity to thinking creatively and imaginatively is crucial to offering completely novel and unanticipated experiences. In fact, it is what has made our work successful.

How do you think IoT can help new viewing and entertainment experiences in the future?

For years, content has appeared on a limited number of platforms and viewing spaces, thereby relegating users to a limited variety of viewing experiences. Audiences are saturated with imagery from screens, televisions, laptops, YouTube windows, and the like. What IoT promises, and Future Tense demonstrates, is an explosion of content-specific viewing devices and interaction modes.

How important is it for companies like AT&T to hold innovation challenges like the AT&T Shape Challenge for pure technological innovation?

Extremely. As a 15-year veteran of the iconic creative think tank Xerox PARC, Scott has been sad to witness the demise of long-view corporate research. Challenges like AT&T Shape help fill the void and create new opportunities for brilliant innovation.

For more articles on AR, VR, and all things video, see our new AT&T Video and VR site.
