Augmented Reality: A Manager's Guide - Part 2

May 31, 2019


Photo by Patrick Schneider on Unsplash

 

Part 2:

     

    How Does Augmented Reality Work?

    Augmented reality starts with a camera-equipped device—such as a smartphone, a tablet, or smart glasses—loaded with AR software. When a user points the device and looks at an object, the software recognizes it through computer vision technology, which analyzes the video stream.
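
    To make the recognition step concrete, here is a minimal, hedged sketch in Python using OpenCV: the software matches ORB features from the live camera feed against a stored reference image of a known product. The file name, camera index, and match thresholds are illustrative assumptions rather than details of any specific AR platform.

```python
# Minimal recognition sketch, assuming OpenCV and a reference photo of the
# product ("reference_pump.jpg") exist; thresholds are arbitrary examples.
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

reference = cv2.imread("reference_pump.jpg", cv2.IMREAD_GRAYSCALE)  # assumed to exist
ref_kp, ref_des = orb.detectAndCompute(reference, None)

cap = cv2.VideoCapture(0)  # the device's camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, des = orb.detectAndCompute(gray, None)
    if des is None:
        continue  # nothing recognizable in this frame
    matches = matcher.match(ref_des, des)
    good = [m for m in matches if m.distance < 40]  # keep only strong matches
    if len(good) > 25:  # enough agreement to call it a recognition
        print("Object recognized - AR experience would be triggered here")
        break
cap.release()
```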

    The device then downloads information about the object from the cloud, in much the same way that a web browser loads a page via a URL. A fundamental difference is that the AR information is presented in a 3-D “experience” superimposed on the object rather than in a 2-D page on a screen. What the user sees, then, is part real and part digital.

    AR can provide a view of the real-time data flowing from products and allow users to control them by touchscreen, voice, or gesture. For example, a user might touch a stop button on the digital graphic overlay within an AR experience—or simply say the word “stop”—to send a command via the cloud to a product. An operator using an AR headset to interact with an industrial robot might see superimposed data about the robot’s performance and gain access to its controls.
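
    The cloud round trip behind that interaction might look something like the sketch below: the device pulls the recognized object's live data down, much as a browser loads a page via a URL, and pushes a "stop" command back when the user taps or says "stop". The endpoint, device ID, and JSON fields are invented for illustration.

```python
# Hypothetical cloud API sketch; the endpoint, device ID, and payload shapes
# are invented for illustration, not a real product's interface.
import requests

API = "https://cloud.example.com/api/v1"
DEVICE_ID = "robot-042"

def fetch_live_state(device_id: str) -> dict:
    """Download the data the AR overlay will render, much like loading a URL."""
    resp = requests.get(f"{API}/devices/{device_id}/state", timeout=5)
    resp.raise_for_status()
    return resp.json()  # e.g. {"temperature_c": 61.2, "speed_rpm": 1180}

def send_command(device_id: str, command: str) -> None:
    """Relay a touch, voice, or gesture command to the product via the cloud."""
    resp = requests.post(f"{API}/devices/{device_id}/commands",
                         json={"command": command}, timeout=5)
    resp.raise_for_status()

state = fetch_live_state(DEVICE_ID)  # rendered as the overlay's live data
send_command(DEVICE_ID, "stop")      # user tapped or said "stop"
```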

    As the user moves, the size and orientation of the AR display automatically adjust to the shifting context. New graphical or text information comes into view while other information passes out of view. In industrial settings, users in different roles, such as a machine operator and a maintenance technician, can look at the same object but be presented with different AR experiences that are tailored to their needs.

    A 3-D digital model that resides in the cloud—the object’s “digital twin”—serves as the bridge between the smart object and the AR. This model is created either by using computer-aided design, usually during product development, or by using technology that digitizes physical objects. The twin then collects information from the product, business systems, and external sources to reflect the product’s current reality. It is the vehicle through which the AR software accurately places and scales up-to-date information on the object.
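
    One way to picture the digital twin is as a cloud-side record that ties the CAD geometry to the live data the overlay needs. The sketch below, with purely illustrative field names, shows how such a record might aggregate telemetry, business data, and external data while keeping a timestamp of its last update.

```python
# Illustrative digital-twin record; field names and sources are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DigitalTwin:
    device_id: str
    cad_model_uri: str                                 # geometry from product development
    telemetry: dict = field(default_factory=dict)      # live data from the product
    business_data: dict = field(default_factory=dict)  # e.g. service history
    external_data: dict = field(default_factory=dict)  # e.g. weather, site conditions
    last_updated: Optional[datetime] = None

    def ingest(self, source: str, reading: dict) -> None:
        """Fold a new reading into the twin so it reflects the product's current reality."""
        getattr(self, source).update(reading)
        self.last_updated = datetime.now(timezone.utc)

twin = DigitalTwin("robot-042", "s3://models/robot-042.step")
twin.ingest("telemetry", {"temperature_c": 61.2, "speed_rpm": 1180})
```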

     

    [Chart: Merging Real and Digital Worlds]

     


    Augmented Reality in the Real World

    [Chart: Who's Investing The Most]

    [Chart: AR Headsets Take Off]

    [Chart: Enterprise Roles]

    [Chart: Strategic Goals]


     

    One Company’s Experience with AR

    Guido Jouret 

    Guido Jouret joined the Swiss industrial giant ABB in 2016, after spending more than two decades in technology leadership roles at Cisco and Nokia. As the chief digital officer, he helps lead the $34 billion company’s technology strategy in green power, transportation, robotics, and automation in over 100 countries, and he champions its AR initiatives. Here, Jouret describes AR’s transformative potential—and why many businesses underestimate the change that’s coming.

    Why is ABB interested in augmented reality?

    AR can help address three macroeconomic challenges that we—and our customers—are facing. The first is the ageing of the skilled workforce. In the oil and gas industry, for example, there was a massive employment surge in the 1960s and 1970s and then a hiring lull. As a result, you now have a lot of older workers retiring, taking skills and institutional knowledge with them. A similar dynamic is happening in many other industries. Second, we have a lot more machines in remote locations, and we want to be able to monitor, operate, and fix those machines with fewer people on-site. And the third challenge is the growing complexity of new technologies, which require new technical skills.

    What pilots are you doing?

    In our pulp and paper business, we’re working on AR that will allow us to service the equipment of remote customers without sending in technicians. Today a customer needing guidance on repairs gets a binder with documentation. We’re developing AR on a HoloLens headset that will let the customer be guided by a remote technician who can see what the customer is looking at and walk them through a repair. We’re at the early stage. We’ve put together some prototypes, and we’re sharing those with customers to get their feedback.

    In our marine business, we’re working with a coalition of companies on pilot projects involving autonomous vessels—like Google’s self-driving cars, but for ships. You can imagine starting with small autonomous ferries on lakes but eventually scaling up to container ships. You wouldn’t need large crews on these ships. If somebody on shore needs situational awareness of what’s happening on a vessel, they could use AR technology. We think we could bring this capability to market within a few years.

    How would that work, remotely checking in on an autonomous ship?

    A captain onshore might use AR to see the view from the ship’s bridge and contextual information about the ship’s speed and course and other telemetry data. This is a case where you’d be integrating virtual reality and augmented reality. The VR would be the view from the bridge. The AR would be live telemetry overlaid on that view. If sensors showed that something was going on in the engine room, you could teleport there from the bridge and have a look around a virtual engine room that had AR information superimposed on top of it. You can imagine needing only a few people actually on board at any time.
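
    As a rough illustration of overlaying live telemetry on a remote view, the sketch below draws speed and heading onto each frame of a video feed with OpenCV. The feed path and the telemetry function are placeholders; in a real deployment both would stream from the vessel.

```python
# Telemetry-overlay sketch; "bridge_feed.mp4" and current_telemetry() are
# placeholders for the vessel's video and data streams.
import cv2

def current_telemetry() -> dict:
    # Placeholder: a real system would read this from the ship's systems.
    return {"speed_kn": 12.4, "heading_deg": 87}

cap = cv2.VideoCapture("bridge_feed.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    t = current_telemetry()
    label = f"Speed {t['speed_kn']} kn  Heading {t['heading_deg']} deg"
    cv2.putText(frame, label, (20, 40), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2)  # draw the overlay text in green
    cv2.imshow("Remote bridge view", frame)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```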

    What other sorts of jobs do you see AR doing?

    There are three overlapping areas where I see AR taking off. The first is in dangerous jobs. You want to make sure people have the best information possible at exactly the right moment, because the cost of not having that—people getting injured, equipment being destroyed—is so high. So I would imagine AR applications in refineries, chemical plants, construction, and mining, for example. The second area is jobs in remote locations, like on an oil rig or an offshore wind farm, where it’s really valuable to make sure that the people you do have on-site have the skills they need. Third, AR will be really useful in cases where people are working with products or machines that are extremely complex, so they can’t be easily automated. Servicing an industrial 3-D printer would be an example. Or work done in semiconductor labs.

    Those are all pretty cutting-edge. Are there less-cool applications that will be as important?

    This doesn’t sound super exciting, but it could have a big impact: If people use AR simply to adhere to a best-in-class process, it can prevent mistakes and injuries. You can have the best standard operating procedures in the world, but if your workers don’t follow them, it doesn’t matter. AR can ensure compliance with processes. For instance, imagine you’re working with an industrial motor and there’s a step in the manual that says, “Turn off the power.” It would be easy to overlook the step and damage the equipment or get hurt. With AR, the software could say, “Turn off the power and glance at the switch to confirm it’s off.” When you looked at the switch, the AR could take a picture of the state of the switch, time-stamp it, and record the location of the motor using GPS. So you would now be certain that the switch on a motor was off at a specific location and time during a specific step in a process.
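
    A hedged sketch of the kind of compliance record Jouret describes: when the worker confirms the power is off, the software captures an image of the switch, time-stamps it, and attaches a location. The camera index, file names, and GPS helper below are assumptions standing in for a headset's built-in services.

```python
# Hedged compliance-logging sketch; camera index, file names, and the GPS
# helper are placeholders for a headset's built-in camera and positioning.
import json
from datetime import datetime, timezone

import cv2

def read_gps():
    # Placeholder for the headset's positioning service.
    return (47.3769, 8.5417)

def log_power_off_check(motor_id: str) -> dict:
    cap = cv2.VideoCapture(0)           # headset camera
    ok, frame = cap.read()
    cap.release()
    image_path = f"{motor_id}_switch.jpg"
    if ok:
        cv2.imwrite(image_path, frame)  # picture of the switch in its off state
    record = {
        "motor_id": motor_id,
        "step": "power_off_confirmed",
        "image": image_path if ok else None,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "gps": read_gps(),
    }
    with open("compliance_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")  # append an auditable record
    return record

log_power_off_check("motor-17")
```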

    Do businesses have unrealistic expectations about AR because of hype on the consumer side?

    Actually, I think it’s sometimes the reverse—press about consumer uses of new technology negatively influences the perception of that technology in business. It’s a recurring theme. Think of the press around consumer drones, for instance, which suggests that they’re a nuisance or a toy. But of course, we’re finding important applications in industry now for inspecting refineries, pipelines, and high-voltage transmission lines. The same thing goes for blockchain, which at first was seen as the technology behind bitcoin, the digital currency used by drug dealers. But businesses are beginning to understand that blockchain will have a huge impact on contracts. AR was seen as a game platform, and there was bad press when Google Glass stalled, which may have coloured how businesses saw the technology—maybe as a science experiment that wasn’t going anywhere. But people who actually work with AR in the industrial space are quite excited.

    How should a company get started with AR?

    First, if you haven’t already done so, you should design and build your products digitally so that you’ll have digital models of them to use in developing AR and VR. Otherwise, you’ll need to create those digital models later, which is complicated. Second, figure out where AR could generate the most value in your operations or services. I’d gauge that using those three dimensions I’ve mentioned: danger, remoteness, and complexity of the task. It probably shouldn’t be a priority to add AR to a simple machine that’s easily accessible. On services, I’d ask where AR could enhance an existing service rather than what new service you could build from scratch with AR. It’s much easier to get a customer that’s already using some of your maintenance services to try an enhanced version. If you and a competitor provide the same service, and yours has an AR component that allows customers to do some of their own work, that creates value for them and differentiates you.

    How do you see augmented reality and artificial intelligence coming together?

    Today we can create really good artificial intelligence that can play Jeopardy! or a game like Go, but it’s harder for AI to figure out how to respond to situations where it has no training. It will come up with a response, but the outcome can be unpredictable. If you train an automated ship to handle clear skies and a calm sea, and a hurricane hits, you don’t know what the AI will do. People, at least for quite some time, will be better at reasoning in context in novel situations. So we can imagine that with the fusion of AI and AR, the AI will provide a set of recommendations about, say, what step to take next in a repair; a human with the contextual expertise will make the final call, and at that point, AR could provide useful guidance. If there’s a noise coming from a motor, it could be many things. AI could look at the data and suggest 10 possible causes and recommend a few to consider first. But the technician’s decision about which to follow up on will be based on his experience, his team’s design knowledge, what he finds when he opens up the machine, and so on. He will make the final call about what the problem most likely is and then select an AR program that guides the repair.
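
    As a toy illustration of that division of labour, the sketch below ranks likely causes of a motor noise from sensor readings and prints a shortlist; the technician, not the model, picks which cause to pursue and which AR guide to launch. The scoring rule, cause list, and guide URI are invented for illustration.

```python
# Toy cause-ranking sketch; the weights, causes, and AR-guide URI are invented.
def rank_causes(telemetry: dict) -> list:
    scores = {
        "worn bearing": 0.6 * telemetry.get("vibration_mm_s", 0.0),
        "misalignment": 0.4 * telemetry.get("vibration_mm_s", 0.0)
                        + 0.2 * telemetry.get("temperature_c", 0.0) / 100,
        "loose mounting bolts": 0.3 * telemetry.get("vibration_mm_s", 0.0),
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

suggestions = rank_causes({"vibration_mm_s": 7.1, "temperature_c": 68.0})
for cause, score in suggestions[:3]:
    print(f"{cause}: {score:.2f}")  # the AI's shortlist, not the final answer

# The technician weighs experience and what they find in the machine,
# then launches the matching AR repair guide (hypothetical URI scheme):
chosen = suggestions[0][0]
ar_guide = f"ar://repair-guides/{chosen.replace(' ', '-')}"
```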

     


     

    The Battle of the Smart Glasses

    To date, the lack of affordable, lightweight, high-performance smart glasses has been a barrier to augmented reality’s widespread adoption. The head-mounted displays (HMDs) most businesses use for AR tend to be expensive and cumbersome, and none of the options available to consumers has achieved broad acceptance.

    But the race to develop a popular version of this new digital interface is on—and is attracting both tech titans and upstart inventors. Investors are pouring money into wearables development, betting that HMDs running AR will ultimately disrupt the market for phones and tablets. The screens in consumers’ pockets will be replaced by AR interfaces that people put on—and keep on—without a second thought, just as they do sunglasses.

    In this Spotlight package, we have described how businesses are using AR to improve visualization, instruction, and interaction. These same capabilities will allow HMDs to become the consumer interface for many products and forms of data. Consumers will use hand gestures and voice commands to access information about and interact with the machines and devices around them, including appliances; audio systems; and home heating, cooling, lighting, and alarm systems. Smart glasses will guide people through the world, allowing them to summon instructions (How do I change a tire?), directions (Where’s the subway entrance?), and even tourist information (What does that sign say in my language?) on a virtual screen that hovers before them whenever and wherever needed.

    What will the next generation of wearables look like? Google was first to market with Google Glass, a visionary effort that stalled for a variety of reasons, including high cost and privacy concerns. Microsoft subsequently launched the HoloLens, which many view as promising, but it is expensive ($3,000), has a narrow field of view, and is somewhat bulky. (It’s more of a headset than a pair of glasses.) The HoloLens may prove adequate for some business applications but is not yet ready for consumer use. Famously secretive Apple is rumoured to be developing user-friendly smart glasses; the mid-2017 launch of its ARKit developer software for AR apps and the fall 2017 introduction of the AR-capable iPhone X hint at that possibility. Google recently released an improved Glass and launched ARCore, a direct response to ARKit. Numerous other companies are jumping into the market. Among them are Magic Leap, a start-up that has already raised $1.4 billion to develop a head-mounted virtual retinal display, and three companies converging on a sunglasses-like concept: Osterhout Design Group (ODG), Vuzix, and Meta.

    [Chart: Smart Glasses]

     

    The stakes are high. Whoever wins the glasses wars will control a technology that transforms how people interface with the digital and physical worlds—far more than the iPhone did a decade ago. In this next round of the mobile-device arms race, the title of world’s most valuable company could be up for grabs.

     


     

     

     

    This section of the article was originally written by Michael E. Porter, a University Professor at Harvard Business School in Boston; James E. Heppelmann, the president and CEO of PTC (a leading maker of industrial software); and Gardiner Morse, a senior editor at Harvard Business Review. Full credit goes to Harvard Business Review, which published this article over a year ago. It has been reprinted here for educational purposes.

    If you enjoyed this article, why not sign up to our mailing list to gain access to high-quality content and exclusive offers?

     




