AT&T Futurist Report: Blended Reality – The Future of Entertainment, 5G, and Mobile Edge Computing
INTRODUCTION
Reality is up for grabs. The Internet is no longer a place we go to through our smartphones, laptops, or other connected devices. The network is becoming an overlay on top of our physical world. Over the next decade, almost everything around us will become connected and interconnected, intelligent and communicative. We will see and feel the world through new senses and interact with each other differently than before. Things that were previously invisible will become visible, and we will experience the familiar in new ways. As the physical and virtual are woven together, the most immediate impacts will be felt in the realm of entertainment. Indeed, this coming age of blended reality will reshape the entertainment landscape forever.
Entertainment will become ambient. We will dip in and out of the entertainment flows, engaging in offerings that are contextually relevant and always available.
An array of technologies—from augmented reality (AR) to artificial intelligence (AI), immersive media to “magical” interfaces—is driving this transformation. While many of these technologies can be demonstrated today in structured, well-bounded environments, it will take 5G networks and mobile edge computing to bring the breakthroughs into the real wireless world. As we know, 5G networks will enable high data speeds for streaming multimedia content. But equally important, 5G will dramatically reduce network latency: the period between when you request data from the cloud and when the network sends that data back. That means the processing power needed to, say, render a stunning virtual reality scene can be handled by computers in the cellular nodes and elsewhere at the “edge” of the network. Yet your experience will be local and personal.
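Why does distance matter so much? Consider a rough latency budget. Comfortable VR is often said to require a motion-to-photon delay of roughly 20 milliseconds, and a back-of-the-envelope sketch (all figures below are illustrative assumptions, not measurements of any real network) shows how an edge node close to the user fits inside that budget while a distant data center does not:

```python
# Rough, illustrative latency budget: all figures are assumptions,
# not measurements of any real network.

FIBER_KM_PER_MS = 200  # light travels roughly 200 km per millisecond in fiber

def round_trip_ms(distance_km, radio_ms, processing_ms):
    """Radio hop + two-way propagation + server-side processing."""
    propagation_ms = 2 * distance_km / FIBER_KM_PER_MS
    return radio_ms + propagation_ms + processing_ms

BUDGET_MS = 20  # comfortable VR is often said to need ~20 ms motion-to-photon

distant_cloud = round_trip_ms(distance_km=2000, radio_ms=10, processing_ms=10)
edge_node = round_trip_ms(distance_km=20, radio_ms=2, processing_ms=10)

print(f"distant cloud: {distant_cloud:.1f} ms vs. {BUDGET_MS} ms budget")  # 40.0 ms
print(f"edge node:     {edge_node:.1f} ms vs. {BUDGET_MS} ms budget")      # 12.2 ms
```

The distant data center blows the budget on propagation delay alone; a node a few miles away leaves ample headroom for rendering.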
“Edge computing is the next step in the evolution of the network,” says Melissa Arnoldi, president, AT&T Technology and Operations. “As connectivity becomes ubiquitous and fast, it also needs to become smart. Edge computing puts a supercomputer in your pocket, on your wrist, in your car, and right in front of your eyes.”
In the next decade, these blended reality entertainment experiences will no longer be destinations but rather delightful opportunities that travel among us, waiting for us to engage with them at every turn.
About This Report
To create this latest installment in AT&T’s Futurist Report series, “Blended Reality: The Future of Entertainment, 5G, and Mobile Edge Computing,” AT&T collaborated with Institute for the Future (IFTF), which drew on its ongoing deep research in mobile computing, wireless networking, and the future of entertainment, along with adjacent and intersecting areas, from immersive media and the “internet of actions” to AI and autonomous vehicles. IFTF then conducted interviews with experts in academia and industry for their divergent viewpoints. Informed by that research, IFTF synthesized five big stories likely to play out over the next decade at the intersection of entertainment, 5G networks, and mobile edge computing. Each of these big stories is supported by six “Signals,” present-day examples that indicate directions of change. Think of a Signal as a signpost pointing toward the future. As AT&T continues to demonstrate 5G’s promise at events like AT&T SHAPE, we’re starting to see that our 5G and edge computing dreams are inching that much closer to reality.
AT&T and IFTF hope that this Futurist Report provokes insight that drives strategic action in the present. After all, there are no facts about the future, only fictions. The rest is up to all of us.
Entertainment Everywhere
Episodic entertainment will make way for continuous entertainment streams that course through our daily lives.
“The line will blur between reality and performance, movies, and gaming.”
—Christine W. Park, co-author of Designing Across Senses
Today, most entertainment content is designed for a particular location or interface—the living room, the movie theater, the automobile, a VR headset, an amusement park, your mobile device. With 5G and mobile edge computing, though, entertainment will become continuous, managed by AI agents working proactively on our behalf.
For example, you may watch a documentary about wild animals with your children. When your family leaves the house, that shared experience follows you into the physical world through an augmented reality interface that reveals your neighborhood as it was before development, when a diverse population of animals ran wild. This “historical” reality, and myriad other realities, will be available anytime, anywhere. Your personal AI agent in the cloud, familiar with your preferences, gently suggests entertainment experiences where and when you are most likely to engage with them.
As the Internet of Things comes alive, everything will become media. Everyday objects will talk to each other and share their own stories as they move through physical space. At the intersection of new intelligent displays, natural interfaces, and billions of connected devices, our media environment will extend well beyond televisions, smartphones, and computers to include our bodies, the natural world, and our built environment.
“Your actual physical context becomes a form of entertainment with mediated experiences across multiple surfaces,” says Christine W. Park, co-author of Designing Across Senses.
For example, once fully autonomous vehicles with in-cabin entertainment systems are deployed, “gameful driving” will become a popular on-road pastime. As you move through the city, your windshield could display a space adventure game that takes into account the real bumps and turns of the road to deliver a viscerally convincing experience.
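One hypothetical way to wire the road into the game: sample the car’s accelerometer and turn every real bump into an in-game impulse. The threshold and event name in this sketch are invented for illustration:

```python
# Hypothetical mapping from a car's vertical acceleration samples to
# in-game events; the threshold and event name are invented for illustration.

BUMP_THRESHOLD_G = 0.3

def road_events(vertical_accel_g):
    """Yield a game event for every sample that feels like a real bump."""
    for i, g in enumerate(vertical_accel_g):
        if abs(g) > BUMP_THRESHOLD_G:
            yield {"sample": i, "event": "asteroid_impact", "strength": abs(g)}

# A stretch of road: two potholes amid smooth pavement.
samples = [0.02, 0.05, 0.41, 0.07, -0.35, 0.03]
for event in road_events(samples):
    print(event)
```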
In tomorrow’s world of ambient media, all entertainment becomes location-based entertainment, no matter where you are.
Signal: A Community for Physical Computing
What: Dynamicland, a new community workspace in Oakland, California, is outfitted with sensors, cameras, and projectors to create an ambient physical computing environment for digitally augmented collaborative work and play.
Signal: Magic Leap One
What: AR start-up Magic Leap integrates its digital lightfield technology and other platform features in its first hardware product: a headset that activates interaction through voice, gesture, head pose, and eye movement.
So what: The Magic Leap One experience is based on their “Digital Lightfield” technology that seamlessly blends virtual light sources with the real world. According to the company’s website, their “advanced technology allows our brain to naturally process digital objects the same way we do real-world objects, making it comfortable to use for long periods of time.”
Signal: Mobile Living Room
What: When creating their fully autonomous car concept, Swiss automotive think tank Rinspeed and infotainment firm Harman explored this question: “How will the interior of a vehicle have to be designed to let the now largely unburdened driver make optimal use of the gain in time?”
So what: Autonomous vehicles will become immersive entertainment experiences filling myriad screens in the passenger compartment. “Traveling in a driverless car will no longer require me to stare at the road, but will let me spend my time in a more meaningful way,” says Rinspeed founder Frank M. Rinderknecht.
Signal: The Coughing Billboard
What: Swedish pharmacy Apoteket Hjärtat and agency Åkestam Holst deployed a sensor-laden “smart” anti-smoking billboard. When someone smokes nearby, the actor on the screen has a loud coughing fit.
So what: As everyday objects become “smart,” new opportunities arise for compelling, interactive experiences in our travels through the physical world. The challenge will be to ensure that the experiences are entertaining and unobtrusive as opposed to annoying and coercive.
Signal: The Future of History
What: ARtGlass creates augmented reality tours of historical sites that guide visitors on “interactive journeys through time and space” by placing virtual characters and artifacts within the physical spaces, from George Washington’s Mount Vernon home to ruins throughout Europe.
So what: “We believe—and have found through experience—that appreciation for the history and respect for a site can be enhanced through better storytelling [enabled by augmented reality],” says ARtGlass CEO Greg Werkheiser.
Signal: Buildings as Digital Canvas
What: Obscura Digital is a creative studio that develops next-generation projection and object-mapping technologies for architectural displays and startling interactive experiences at scale.
So what: Obscura Digital’s work lies at the intersection of science and art to “paint” the built environment with astounding digital media for immersive storytelling and magical experiences.
Eyewitness Everywhere
Tomorrow’s newscasts will put you in the middle of the action, safely.
“You will inject your presence into other spaces and then share your own view, your own idea of the world, with the network.”
—Christine W. Park, co-author of Designing Across Senses
Myriad devices connected via 5G and powered by machine intelligence at the edge will make our experience of news more personal, immediate, and sensory-rich. Respected news organizations will invite you to visit a war zone with embedded journalists, join scientists as they explore an erupting volcano, or analyze a contested play from over the quarterback’s shoulder, all without leaving your sofa.
This will become possible through multisensory networks of ubiquitous volumetric cameras, binaural microphones, and tiny autonomous drones capturing real-time data in physical public spaces. These eyes and ears on the world will leverage the bandwidth of 5G to stream high-resolution data to the cloud. As a “remote viewer,” you’ll choose your vantage point and experience the distant location up close, in real time, in 3D.
“We’ll be able to create 3D views on the world, from any angle,” says Jan Rabaey, director of the UC Berkeley Wireless Research Center (BWRC). “But the media stream won’t come just from one device like a mobile phone. It will be a vision of the whole space fused from all of the connected devices.”
While 5G will provide the lag-free bandwidth necessary for ultra-high-definition video and interactivity, machine intelligence and edge computing are essential to seamlessly fuse the sensor data and orchestrate the personalized experience for each user. Immersive and haptic telepresence interfaces will then deliver the visual and visceral sensation of being in the middle of the action and moving through it of your own volition.
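Fusing many viewpoints into one coherent scene is a deep research problem, but the core idea can be sketched simply: weight each sensor’s estimate by how much you trust it. The toy example below uses a confidence-weighted average as a stand-in for the Kalman-style filters and multi-view reconstruction a production system would employ:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One sensor's estimate of a tracked point, with a confidence weight."""
    x: float
    y: float
    z: float
    confidence: float  # e.g., derived from camera calibration quality

def fuse(observations):
    """Confidence-weighted average of the per-sensor position estimates."""
    total = sum(o.confidence for o in observations)
    return tuple(
        sum(getattr(o, axis) * o.confidence for o in observations) / total
        for axis in ("x", "y", "z")
    )

# Three cameras watching the same subject from different angles.
readings = [
    Observation(1.02, 2.00, 0.98, confidence=0.9),
    Observation(0.95, 2.10, 1.05, confidence=0.6),
    Observation(1.10, 1.90, 1.00, confidence=0.3),
]
print(fuse(readings))  # a single fused 3D position
```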
Of course, once you’ve seen something interesting, you’ll want to share your eyewitness account with your social network and beyond. Everyone will have the opportunity to be a field correspondent, reporting “live on the scene” and increasing the diversity of perspectives on the news.
Right now, though, creating immersive, live-action AR/VR experiences requires expensive volumetric capture systems combined with tremendous computing power and bandwidth. According to Rishi Ranjan, founder of AR/VR start-up GridRaster, 5G networks and mobile edge computing will democratize photorealistic telepresence.
“Ten years from now, anyone will be able to create immersive 3D livestreams of the real world, in real time,” he says.
Signal: Drone Augmented Human Vision
What: Graz University of Technology researchers coupled a drone’s autopilot with Microsoft’s gaze-tracking HoloLens AR glasses so that if your view is obstructed, the drone autonomously flies to the best spot to reveal what lies beyond your vantage point.
So what: High-resolution drone cameras with volumetric capture will provide dynamic, immersive views of remote locations, including those that may be inaccessible to the naked eye.
Signal: Autonomous Drone Swarms
What: In December 2017, Chinese company Ehang orchestrated a light show in the sky with nearly 1,200 synchronized autonomous drones in what it called a new kind of “flyable media.”
So what: While Ehang’s drones are meant to be seen, they could be modified to “see,” streaming video to the 5G network from myriad angles and heights.
Signal: You, on the Scene
What: Nonny de la Peña, a pioneer in immersive journalism, launched Emblematic to recreate news events in virtual reality and produce immersive social impact pieces meant to drive action.
So what: “Immersive journalism is a novel way to utilize virtual reality gaming platforms and digital technologies to put people in the middle of stories, to make them feel like they are present on scene,” says de la Peña. “Much like the journalists, you become a witness.”
Signal: Best Seat at the Olympics
What: For the 2020 Olympic Games in Tokyo, Intel plans to deliver 360-degree, 8K video streams from the venues.
So what: According to Intel, “instead of watching surfing from the beach, for example, viewers will feel like they’re riding the waves with the athletes. Fans may be able to take in the action using virtual reality.”
Signal: The Empathy Machine
What: Gabo Arora, Chris Milk, and Barry Pousman, in partnership with the United Nations and Samsung, created “Clouds Over Sidra,” an award-winning virtual reality documentary about a 12-year-old Syrian refugee.
So what: “VR’s sheer form-factor and the isolating experience it engenders, inspires focus like no other medium before it,” says Pousman, now a director at Institute for the Future. “When we marry that with the user experience of seeing and hearing the world from another human’s perspective, you get what Chris Milk calls ‘the empathy machine.’”
Signal: VR for the News
What: Axel Springer, owner of German newspaper Bild, is an investor in VR production company Jaunt to develop immersive news media experiences.
So what: “Be them or be there,” said Bild’s Marc Jungnickel in a Reuters Institute for the Study of Journalism report on virtual reality. “‘Be them’ provides visceral experiences such as flying aeroplanes and jumping off cliffs. ‘Be there’ gives you unique access to special locations such as a red carpet, an aircraft carrier, or the closed military base of the German military.”
Leveling Up the Playing Field
Immersive gaming will blend elements of virtual reality and real life to create sensory-rich augmented reality adventures.
“Instead of having a capture space in your office or your basement where you play VR, any space out in the real world could potentially be your capture volume, where you can act, and use your body to interact with your avatar inside games. The sensors to track your movements could be built into your clothes. I don’t know how far that would go, but that’s all stuff that I imagined when I was writing Ready Player One (2010) that seems plausible now.”
—Ernest Cline, author of Ready Player One
Mobile edge computing will usher in new kinds of games that are ambient, social, context-aware, and free of cables (the bane of today’s fully immersive mixed reality entertainment). Even everyday aspects of life—such as shopping, work, exercise, learning, and errands—will be gamified, transforming mundane activities into immersive adventures. (Think Michael Douglas in “The Game”—short of being buried alive in Mexico.) Depending on your preferences, your “blended reality”—as made visible through your mobile device or drone-based flying projectors, made audible through 3D audio technology, and made feelable through ultrasonic touchless haptic feedback—will be enhanced with bots, avatars, 3D holograms, and interactive stories.
“Pokémon GO just scratched the surface,” says Brent Bushnell, CEO of Two Bit Circus, a Los Angeles-based experiential entertainment company. He envisions multitudes of gamified overlays on the existing world. “You get to be an active participant in the narrative rather than just a passive observer. Do you like The Walking Dead? Great, you’re now engaged in a zombie apocalypse whenever you’re not at work. Or, how about Harry Potter? Wonderful. You’re now a wizard with spells on a year-long quest.”
Ernest Cline, author of the best-selling virtual reality novel Ready Player One (and co-author of the screenplay for Steven Spielberg’s movie adaptation), is looking forward to mobile edge computing enhancing his experience of socializing with friends in shared virtual spaces. Cline regularly plays Star Trek: Bridge Crew, a multiplayer VR game where friends can log on and play the role of starship crew members.
“After playing this game for an hour and a half,” he says, “I feel like I hung out with my friends.”
But there’s room for improvement. While the lips of the avatars in the game are synced to the players’ speech, the avatars’ facial expressions don’t match those of the players because the processing power and bandwidth are limited. Edge computing can solve these issues. “Once you have enough bandwidth so that when you laugh it makes your avatar laugh with no lag … I anticipate it being wildly popular,” Cline says.
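Multiplayer systems already paper over network gaps with techniques such as snapshot interpolation, rendering expression data a beat in the past so that missing updates can be smoothed over. A minimal sketch, with invented timestamps and blendshape values:

```python
# Snapshot interpolation: render slightly in the past so gaps between
# network updates can be smoothed over. Timestamps and values are invented.

def interpolate(snapshots, render_time):
    """snapshots: list of (timestamp, value) pairs, sorted by timestamp."""
    for (t0, v0), (t1, v1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            alpha = (render_time - t0) / (t1 - t0)
            return v0 + alpha * (v1 - v0)
    return snapshots[-1][1]  # past the last update: hold the last value

# A "smile" blendshape weight arriving from the network every 100 ms.
smile = [(0.0, 0.0), (0.1, 0.4), (0.2, 1.0)]
print(interpolate(smile, render_time=0.15))  # 0.7, halfway between updates
```

Lower 5G latency and edge processing shrink the window such tricks must cover, which is exactly the gap Cline describes.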
Signal: Future Toys
What: Merge makes gear and apps for augmented reality games—ray guns, goggles, and a “Merge Cube” that lets you hold 3D holograms in your hand. The equipment is inexpensive—less than $50—because the items use players’ mobile phones to power them. The games are designed for active play—no sitting in front of a TV or computer.
So what: Augmented reality offers new, healthier ways for people to play video games.
Signal: Immersive Language Learning in VR
What: Studies at Stanford University’s Virtual Human Interaction Lab reveal that virtual environments greatly enhance the learning experience. Learning a new language is especially suited for VR, because it gives learners the opportunity to practice in lifelike situations without the fear of looking foolish. MondlyVR lets users converse with virtual characters in a VR environment to do things like “make friends on the train to Berlin” or “order dinner in a restaurant in Tokyo.”
So what: Mobile edge computing will take language learning to the next level, allowing learners to use AR to experience everything they see and hear in a foreign language whenever they want.
Signal: “Real World Warrior” Game
What: Software developer Abhishek Singh used Apple’s ARKit software to make an augmented reality version of the popular 1980s arcade game Street Fighter. Singh’s app, which he calls Street Fighter II: Real World Warrior, places the cartoonishly musclebound combatants in actual streets or on any other flat surface in the real world.
So what: The app points at the possibilities for going far beyond the simple interactions of Pokémon GO.
Signal: Touchless Touch
What: Ultrahaptics is using ultrasonics to sense hand gestures and deliver tactile feedback without the need for gloves or physical contact with the users’ hands.
So what: Interacting with virtual worlds is much more appealing if you don’t have to put on special gloves or clothing and can control the environment with gestures.
Signal: Teslasuit
What: The Teslasuit is a full body suit equipped with sensors and vibrators to give players the unencumbered ability to sense and act in virtual worlds by using natural gestures and body movements. The material of the suit uses a piezoelectric EAP (electroactive polymer) to record electrical signals generated by players’ muscles to animate avatars in the VR world.
So what: As new sensing and effecting technologies are refined, and as mobile edge computing is able to quickly process the multiple data streams generated by the suits, increasingly sophisticated VR and AR tangible interfaces will hit the market.
Signal: Omni Treadmills
What: Standard treadmills allow you to move in one direction only, making them extremely limited for VR games. But the Virtuix Omni is an omnidirectional treadmill simulator that allows players to move in any direction. The motion platform is made from a low-friction material, as are the soles of the special purpose shoes used with the device. Players are supported by a belt, and when they walk or run, their avatar moves in kind.
So what: Edge computing will be a critical component for VR sensing devices like the Virtuix Omni, haptics suits, headsets, and other devices that require low latency for optimal performance.
Floating Islands of the Imagination
New tools for creating immersive experiences will give artists, and everyone else, the ability to create portable, shareable “dreamspaces.”
“In comedy, timing is essential, but it’s also the case in music. And the latencies have to, in some cases, be sub-five milliseconds. So ultimately what we want to do is push up against the physical borders, the hard physical lines of speed of light and transmission.”
—Ali Hossaini, Visual Poet
The advent of easy-to-use digital artist tools and advanced network technologies will enable anyone to create and access virtual experiences from myriad devices and platforms. You could meet someone on the street and instantly invite them into your imagination, where reality is in the eye and mind of the creator. Like the Holodeck in Star Trek, people will use simple gestural commands to create procedurally generated worlds populated with interactive flora, fauna, machines, structures, and non-player AI characters. Collective VR environments like these will be tomorrow’s art galleries, performance spaces, theme parks, salons, and destination resorts, allowing artists to collaborate in heretofore impossible ways, and giving audiences new ways to experience artistic creations.
In cases of group collaboration, the affordances made possible with mobile edge computing will allow actors, artists, musicians, and performers to coordinate with and respond to each other in near-real time.
“A lot of the essence of performance and entertainment is timing and tempo, and right now the reason we don’t see more cross-border collaboration is because you just can’t keep up the tempo,” says Ali Hossaini, an artist and research fellow at King’s College London Department of Informatics, where he leads development of cultural applications for 5G mobile platforms.
VR creation tools have seen a big jump in sophistication and ease of use. Founded in November 2013, Ultrahaptics has developed a groundbreaking technology that projects focused ultrasound waves so users can “actually touch something that cannot be seen, literally in midair, with their bare hands,” says Alex Driskill-Smith, vice president of North America for Ultrahaptics. “It’s now progressed far beyond the simple buttons and switches where we started. We can essentially fill in complete shapes, so you can feel and pick up 3D virtual objects intuitively using simply your hands.”
Signal: DIY VR
What: STYLY is a web-based platform for developing VR spaces by simply dragging and dropping elements into a scene. Users in the arts and in the fashion industry can import existing images and videos from Instagram and YouTube, or 3D models from Maya or Blender and combine them into a custom environment that can be shared with others.
So what: People will be able to share their virtual creations as easily as sharing photos and videos on Instagram.
Signal: Shared Augmented Reality
What: Ubiquity6 is a “reality sharing” platform that allows users to turn real physical spaces into virtual spaces for shared AR and VR experiences. With a smartphone camera, users can make a 3D map of any room or space. The app uses deep learning to recognize floors, walls, and furniture, which behave like their real-world counterparts. In tests, up to 10,000 users have inhabited a single shared space.
So what: Persistent AR spaces will become the new form of social media, where friends, families, and communities of interest will gather and interact.
Signal: MetaWorld
What: MetaWorld is a “10,000 square mile” virtual environment where people and communities can buy or rent a plot of virtual land, build a dream environment, and share experiences with people situated around the world. As time goes on, MetaWorld’s ecology will change and evolve.
So what: Multiuser VR environments will offer a “fourth place” for people to socialize, work, learn, and play.
Signal: Hi-Fi VR
What: The Joint Photographic Experts Group (JPEG) unveiled JPEG XS, a new compression standard that can be used in all applications dealing with uncompressed image data, and is ideally suited for low-latency networks like 5G. Use cases for JPEG XS include virtual reality, augmented reality, drone video delivery, autonomous vehicles, transport over professional video links (SDI) and IP networks, and movie editing.
So what: JPEG XS does not compress images as much as other JPEG standards, so it’s less energy intensive and the compressed images and video are visually indistinguishable from the original files. The end result is virtual reality environments that are highly responsive and visually convincing.
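Some illustrative arithmetic shows why a light-compression codec pairs naturally with 5G-class bandwidth. The 10:1 ratio below is an assumption chosen to reflect visually lossless compression in general, not a figure taken from the JPEG XS standard:

```python
# Illustrative arithmetic only; the 10:1 ratio is an assumption, not a
# figure from the JPEG XS standard.

width, height = 3840, 2160   # 4K UHD
bytes_per_pixel = 3          # 8-bit RGB
fps = 60
compression_ratio = 10       # light, visually lossless territory

raw_bps = width * height * bytes_per_pixel * 8 * fps
compressed_bps = raw_bps / compression_ratio

print(f"uncompressed 4K60:  {raw_bps / 1e9:.1f} Gbit/s")         # ~11.9 Gbit/s
print(f"lightly compressed: {compressed_bps / 1e9:.1f} Gbit/s")  # ~1.2 Gbit/s
```

Light compression keeps the stream within multi-gigabit wireless reach while spending almost nothing on encode/decode delay, which is what low-latency VR needs.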
Signal: Alien Zoo
What: An immersive VR experience at Westfield Century City in the spring of 2018 gave visitors the opportunity to walk with megagiraffes, play with frogcats, and frolic with other fantastical creatures housed in an otherworldly bestiary. Alien Zoo was produced by Dreamscape Immersive, and every ticket was sold before the six-week engagement commenced.
So what: Mobile edge computing and 5G will make these immersive VR experiences less expensive to produce, easier to deploy, and more interactive.
Signal: A Sociable Network
What: vTime enables a small group of friends and family to meet in “breathtaking virtual locations” to chat and interact with one another. Users create custom avatars to represent themselves and can interact with other people in the shared space through VR headsets or their smartphones. Immersive 3D audio enhances the experience.
So what: The shared virtual spaces can be accessed by a large audience, opening the possibilities for new kinds of live performances and talk shows.
Digital ESP
New wearables, sensor networks, and magical interfaces will imbue us all with “superpowers.”
“We’ll mix senses together. And crosslink them to create new senses. Very bizarre stuff becomes possible.”
—Jan Rabaey, Director of the UC Berkeley Wireless Research Center (BWRC)
What if you could have eyes in the back of your head? Or play a game with someone else just by thinking about them? While those kinds of experiences might sound like surreal science fiction, the ability to reprogram and augment our senses with mobile technology and advanced wireless networks is on the horizon.
“We may all be wearing AR glasses, but there are many other ways for us to perceive information that could give a much richer experience,” says Jan Rabaey, director of the UC Berkeley Wireless Research Center (BWRC).
Indeed, mobile edge computing and 5G will underpin a sensory-complete Internet that transmits not just sight and sound but also touch, taste, and even thought, in perceived real time. Our own senses will be enhanced and networked through wearable versions of the same technology that enables autonomous vehicles to map their surroundings. An array of ambient displays and gestural interfaces will ease our interactions with the augmented world. The confluence of these breakthroughs will allow entertainment designers to deliver magical experiences in the real world.
For example, if an opponent in a mixed reality game approaches you from behind, physically or virtually, your personal area sensor network could alert you through a subtly vibrating haptic collar, delivering that eerie feeling of your neck hairs standing up. Or perhaps we’ll transport ourselves into another body, seeing, hearing, and touching through their sensors—virtually walking in someone else’s shoes. As our sensory experiences are expanded, the network itself will help us filter the information coming through the new channels.
“Sometimes sensory information will be processed locally, but other channels will require greater cognition than we’re capable of and that can be handled at the network’s edge,” says John Alderman, co-author of Designing Across Senses.
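That haptic collar could come down to a few lines of logic running at the network’s edge. A toy sketch, with an assumed trigger distance, that fires only when an opponent is both close and behind you:

```python
import math

ALERT_RADIUS_M = 2.0  # assumed trigger distance

def collar_intensity(player_pos, player_heading_rad, opponent_pos):
    """Haptic intensity from 0 to 1 when an opponent is close AND behind you."""
    dx = opponent_pos[0] - player_pos[0]
    dy = opponent_pos[1] - player_pos[1]
    distance = math.hypot(dx, dy)
    if distance > ALERT_RADIUS_M:
        return 0.0
    # Angular offset between where you face and where the opponent stands.
    bearing = math.atan2(dy, dx)
    offset = abs((bearing - player_heading_rad + math.pi) % (2 * math.pi) - math.pi)
    behind = offset > math.pi / 2
    return 1.0 - distance / ALERT_RADIUS_M if behind else 0.0

# Player at the origin facing +x; opponent sneaking up from directly behind.
print(collar_intensity((0.0, 0.0), 0.0, (-1.0, 0.0)))  # 0.5
```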
Further out, we’ll be able to communicate wirelessly through thought alone. Head-worn non-invasive sensors will detect brain signals that are translated at the network’s edge into meaningful, albeit simple, communications and commands. The recipient of the wireless signal, wearing a device that delivers a safe electromagnetic pulse to specific regions of the brain, will experience the message as a thought or feeling from “somewhere else.”
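At its core, translating a brain signal into a simple command is a signal-processing and classification problem. The toy decoder below, with an assumed sample rate, an assumed threshold, and the classic alpha-band heuristic, hints at the kind of computation an edge node might perform:

```python
import numpy as np

def alpha_band_power(samples, sample_rate_hz=256):
    """Power in the 8-12 Hz (alpha) band of a short EEG window."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[band].sum()

def decode_yes_no(samples, threshold=1000.0):
    """Toy decoder: strong alpha activity reads as 'yes', weak as 'no'."""
    return "yes" if alpha_band_power(samples) > threshold else "no"

# One second of synthetic EEG: a 10 Hz rhythm buried in noise.
t = np.arange(256) / 256.0
window = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(256)
print(decode_yes_no(window))  # "yes"
```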
Signal: Cutaneous Sensory Displays
What: MIT’s Cutaneous Sensory Lab develops tactile, haptic, and thermal displays that translate digital data into patterns that you can feel on your skin.
So what: The group has built prototypes of wearable displays that push gently on the skin, warming and cooling it to communicate abstract information, showing that mobile communication can be multisensory.
Signal: Smell-O-Vision
What: VAQSO’s scent device attaches to VR goggles to enhance the experiences with odors.
So what: Adding the sense of smell to media experiences dates back to at least 1960, when odors were piped into movie theaters during screenings of Scent of Mystery. The challenge, to this day, is rapidly switching smells as the action changes. But as smell is so closely tied to memory and emotion, enhancing virtual reality experiences with accurate odors will make it much easier for users to suspend their disbelief.
Signal: Seeing with Your Tongue
What: Wicab’s BrainPort V100 is a prosthetic vision aid for the blind that converts images from a video camera into electro-tactile sensations. According to the company, some users report it as feeling like “pictures that are painted on the tongue with champagne bubbles.”
So what: Our senses can be “rerouted” thanks to neuroplasticity, the brain’s ability to change and adapt as a result of new situations.
Signal: Bridging the Brain-Computer Gap in VR
What: Neurable has developed a non-invasive brain-computer interface (BCI) that integrates with VR headsets to control immersive experiences through brain activity. Their first technology demo is in the form of a science fiction VR game called Awakening, in which the player is a child with telekinetic powers.
So what: Neurable’s technology harnesses machine learning to process the raw EEG data and translate it into control signals. The goal, they say, is to enable VR, AR, and connected device interfaces “that function as a natural extension of brain activity.”
Signal: The Cyborg Activist
What: Neil Harbisson is an avant-garde artist who had an antenna implanted into his skull that enables him to perceive visible and invisible colors as audible vibrations.
So what: Harbisson says his experiment as a self-proclaimed cyborg “explores identity, human perception, the connection between sight and sound, and the use of artistic expression via new sensory inputs.”
Signal: Mind Meld
What: University of Washington researchers demonstrated a “brain-to-brain” connection that enabled participants to play a question/answer game over the Internet with just their minds.
So what: One person wore an EEG cap that detects brain waves. The other had a magnetic coil positioned near her head. When the first person mentally answered “yes” to a question, the magnetic coil stimulated the second person’s visual cortex and caused her to “see” a flash of light.