Internet of Things (IoT)

5 challenges of Industrial IoT: Edge computing to the rescue




By JR Fuller (@JRFuller321)
WW Business Development Manager for IoT Edgeline Systems

The Internet of Things is dramatically changing the way businesses derive value from data. With continued advancements in IoT technology, additional and more detailed data can be collected from connected “things.” This, in turn, produces new insights that can enable innovative new service offerings, increase business efficiency, enhance decision making, and mitigate risk.

In order to reap benefits from new insights, you need to analyze more data more quickly and in a cost-effective way. This is where IoT has been most disruptive: Instead of moving massive amounts of data from its origin to a data center or the cloud, deep compute capabilities can be deployed in the field, removing the latency and costs of transmitting data.

This is what we call “shifting to the left”. Traditional IT architecture moves data collected from the “things” at the far left to the data center (or cloud) on the far right for deep analytical processing. Edge computing instead shifts the processing power and ability to produce insights towards the left, closer to where the “things” are.

Traditional IT architecture moves data from the things at far left, to the far right. Edge computing instead shifts insights toward the left, closer to where the “things” are.

This figurative “shift to the left” is what edge computing is all about, and it has tremendous potential to deliver business value to enterprises across industries.

Keeping data local

Edge computing is disrupting long-established limitations in data processing, and, as a result, it is solving some of the biggest IoT challenges. Here are the top five challenges for IoT implementers, and how compute at the edge resolves those obstacles:

Challenge No. 1: Time to insight

With today’s distributed IT infrastructure, there’s a trade-off between speed and depth of insights. A rapid insight is necessarily narrow and based on limited data. A powerful, Big Data-driven insight, on the other hand, takes time.

Couple the latency issue with specific data type sensitivities, and time to insight can turn into a significant problem. Imagine, for example, equipment in a lab at an oil and gas processing plant that collects samples and measures physical and chemical properties from a tank. The longer it takes to process the sampled data, the less valuable it will be—you might completely misinterpret the meaning because readings may have changed dramatically during the time it took to process that data.

Latency in data transfer prohibits real-time response and can have serious repercussions. It lengthens “time-to-insight” from the data, which slows “time-to-action” for the business. Edge computing removes this trade-off, making Big Data insights available more rapidly. When it comes to insights and actions at the edge, sooner is always better.

Challenge No. 2: Cost

Moving immense amounts of data collected from “things” to a data center incurs costs for both bandwidth and storage. The bandwidth to move Big Data can be a significant expense in more ways than one: Transmitting large data volumes over a wide area network has a high financial cost, but also an opportunity cost, as your bandwidth becomes unavailable for other uses that could add more value. And having to store the same data twice—locally and at the data center—is a waste of resources, funds, and IT labor.

Data processing at the edge of the network, where the “things” are, conserves network bandwidth and saves money in two ways:

  • You need less bandwidth, so you pay for less bandwidth
  • You free up bandwidth for other business-critical uses
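As a rough sketch of the first point, consider aggregating readings at the edge and transmitting only the resulting insight. The snippet below (with made-up readings; the field names and one-summary-per-hour cadence are illustrative assumptions, not a prescribed design) compares the payload sizes:

```python
import json
import random

random.seed(42)

# One hour of per-second vibration readings from a single pump sensor
# (hypothetical data; in practice these come from the device bus).
raw_readings = [round(random.uniform(0.0, 5.0), 3) for _ in range(3600)]

# Naive approach: ship every raw reading to the data center.
raw_payload = json.dumps(raw_readings)

# Edge approach: compute the summary locally, transmit only the insight.
summary = {
    "count": len(raw_readings),
    "min": min(raw_readings),
    "max": max(raw_readings),
    "mean": round(sum(raw_readings) / len(raw_readings), 3),
}
edge_payload = json.dumps(summary)

print(f"raw: {len(raw_payload)} bytes, edge: {len(edge_payload)} bytes")
```

Multiply that reduction across millions of sensors reporting around the clock, and the bandwidth line item shrinks accordingly.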

Challenge No. 3: Security

Data in transit is data at risk. When you move data, you expose it to new security threats. Moreover, as you add new sensors and new “things” to your IT network, you can introduce new vulnerabilities. For example, if you manage 3 million pumps and add six sensors to each pump, you’ve added 18 million points of potential vulnerability into your system.

Processing data at the edge limits the opportunity for data to be hacked or stolen because it stays within a local area network. There are simply fewer opportunities for breach. You can limit vulnerability further by using device connectivity management systems that automatically add or remove devices from the network, based on IT-defined policies.
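Such a policy check can be sketched in a few lines. The policy fields below (approved device types, minimum firmware version) are hypothetical simplifications; real connectivity-management systems enforce far richer rules:

```python
# IT-defined admission policy (illustrative values, not a real product's API).
APPROVED_TYPES = {"pump-sensor", "flow-meter"}
MIN_FIRMWARE = (2, 1)

def admit_device(device: dict) -> bool:
    """Return True only if the device satisfies the IT-defined policy."""
    if device.get("type") not in APPROVED_TYPES:
        return False
    major, minor = device.get("firmware", (0, 0))
    return (major, minor) >= MIN_FIRMWARE

print(admit_device({"type": "pump-sensor", "firmware": (2, 3)}))  # True
print(admit_device({"type": "camera", "firmware": (2, 3)}))       # False
```

Automating the decision means a rogue or outdated device is rejected the moment it tries to join, rather than after a manual audit.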

Challenge No. 4: Compliance

When data crosses geographic borders, it can be subject to new laws and regulations that require diligence—and expense—to comply with. This is especially true in Europe, where each country may have its own data-handling requirements. Companies may need to depersonalize or mask data before transmitting it elsewhere. Edge processing keeps data within its jurisdiction, a form of geo-fencing, so organizations don’t run afoul of conflicting cross-border regulations.
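Depersonalization at the edge can be as simple as replacing personal fields with a one-way digest before a record leaves the local network. A minimal sketch, assuming a hypothetical record layout where "operator_id" is the personal field that must not leave the jurisdiction in clear text:

```python
import hashlib

# Hypothetical lab record; "operator_id" is the personal field.
record = {"tank": "T-17", "ph": 6.8, "operator_id": "jane.doe@example.com"}

def depersonalize(rec: dict, fields=("operator_id",)) -> dict:
    """Replace personal fields with a truncated SHA-256 digest before export."""
    out = dict(rec)
    for f in fields:
        if f in out:
            out[f] = hashlib.sha256(out[f].encode()).hexdigest()[:16]
    return out

exportable = depersonalize(record)
print(exportable["operator_id"])  # pseudonymous digest, not the raw email
```

The measurement data still flows to central analytics; only the personally identifying portion is pseudonymized at the source.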

There may be other compliance challenges as well. For example, many companies may need to comply with industry requirements involving management, retention, or destruction of data. By keeping data more local, edge computing delivers natural segmentation that significantly eases the burden to comply with any kind of data handling requirement.

Challenge No. 5: Data duplication and corruption

The more you handle something, the more likely it is to break. In the case of IoT, moving data across networks won’t just cost more money, it will take a toll on data reliability as well. The more data hops across networks, the more likely it is to become corrupted. Computing at the edge allows you to reduce your compute and storage requirements at the data center, which improves simplicity and lowers cost. It also limits non-hostile data corruption that occurs naturally when transmitting large volumes of data throughout a network.
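One common way to detect this kind of in-transit corruption is an end-to-end checksum: the origin computes a digest, and the receiver recomputes it after the transfer. A minimal sketch with a made-up payload, simulating a single flipped bit along the way:

```python
import hashlib

def checksum(payload: bytes) -> str:
    """SHA-256 digest computed at the data's origin."""
    return hashlib.sha256(payload).hexdigest()

original = b"pressure=101.3;temp=57.2"
sent_digest = checksum(original)

# Simulate a single bit flipping somewhere along a multi-hop transfer.
corrupted = bytearray(original)
corrupted[0] ^= 0x01

print(checksum(original) == sent_digest)          # True: intact
print(checksum(bytes(corrupted)) == sent_digest)  # False: corruption detected
```

Checksums catch corruption but don’t prevent it; the fewer hops the data makes, the fewer chances there are for the digest check to fail in the first place.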

Big Data, big cost

The benefits of centralized deep compute make sense—for traditional data. But the volume and velocity of IoT data has challenged the status quo. IoT data is Big Data. And the more you move Big Data, the more risk, cost, and effort you’ll have to assume in order to provide end-to-end care for that data.

Edge computing is rebalancing this equation, making it possible for organizations to get the best of all worlds: deep compute, rapid insights, lower risk, greater economy, and more trust and security.

To learn how HPE is helping customers take advantage of the benefits of edge computing, read our recent announcement, “Taking the Internet of Things to a new level with 5 big advancements.”


JR Fuller currently leads the business development activities at HPE for the new market category of Converged IoT Systems as well as HPE Edgeline and Moonshot solutions, concentrating on the energy, oil and gas, manufacturing, healthcare, and smart city sectors. His responsibilities include GTM/RTM activities, client engagements, partnership development, product management input, Global IoT Innovation Lab development, driving repeatable sales, and being an IoT evangelist. Follow him on Twitter at @JRFuller321.

 

 
