Grounded in the Cloud

How manufacturers can protect brand reputation with big data

Stephen_Spector

Guest Post by Sameer Nori, Sr Product Marketing Manager, MapR (@sameernori)

 

When things go horribly wrong, there is no shortage of snazzy software solutions and PR experts to help manage the crisis. But, as anyone who has experienced the wrath of the Internet knows, it’s far simpler to maintain a good reputation than to restore confidence in a brand.

 

Customer satisfaction defines a brand’s reputation. Protecting the value of the brand requires adherence to tough quality assurance procedures throughout the entire manufacturing process.

 

QA is a complex and cranky beast with many moving parts. It begins with supply chain and assembly line management, and it doesn't end even after products go to market. Thankfully, big data can provide the insights needed to tame the beast.

 

Using sensors to reveal sneaky flaws

 

Finding flaws early in the manufacturing process saves rework costs, and helps to ensure that defects don’t suddenly appear in products that are ready to ship or are already in the marketplace.

 

Integrating quality statistics right into the production process helps to catch those sneaky little discrepancies that can lead to failures. This can be done via sensors placed on components and parts to capture data in real time. Apache Hadoop enables this data to be efficiently collected, stored, processed, and analyzed, in batch or in real time.
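To make that concrete, here is a minimal PySpark sketch of the batch side of such a pipeline. Everything specific in it is an assumption for illustration: the HDFS path, the column names (station_id, part_id, reading), and the tolerance band.

```python
# Minimal PySpark sketch: read sensor readings landed in HDFS and flag
# out-of-tolerance values. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("qa-sensor-batch").getOrCreate()

# Assumed layout: one CSV row per reading, with station_id, part_id,
# and reading columns.
readings = spark.read.csv(
    "hdfs:///factory/sensors/line-7/",  # hypothetical landing path
    header=True,
    inferSchema=True,
)

LOW, HIGH = 4.2, 4.8  # hypothetical tolerance band for this sensor type

out_of_tolerance = readings.filter(
    (F.col("reading") < LOW) | (F.col("reading") > HIGH)
)

# Summarize suspect readings per station for the QA team.
out_of_tolerance.groupBy("station_id").count().show()
```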

 

Hidden problems are also revealed in assembly line sensor data histories. The irregularities and atypical information captured in these histories can clearly expose—or at least point to the existence of—flaws that would otherwise easily fly under the radar.

 

To get an accurate picture of the patterns that indicate a problem, you need a data pool that is both broad and deep. Some flaws appear only when parts interact with each other, or only after a period of product use.
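As a rough illustration of what mining those histories can look like, the sketch below continues the hypothetical `readings` DataFrame from the earlier example and flags readings more than three standard deviations from each station's historical mean. The three-sigma threshold, like the column names, is an assumption, not anything prescribed here.

```python
# Continues the hypothetical `readings` DataFrame from the earlier sketch.
from pyspark.sql import functions as F

# Per-station historical mean and standard deviation.
stats = readings.groupBy("station_id").agg(
    F.mean("reading").alias("mu"),
    F.stddev("reading").alias("sigma"),
)

# Flag readings that deviate sharply from each station's own history.
flagged = (
    readings.join(stats, "station_id")
    .withColumn("zscore", (F.col("reading") - F.col("mu")) / F.col("sigma"))
    .filter(F.abs(F.col("zscore")) > 3)  # 3-sigma cutoff is an assumption
)

flagged.select("station_id", "part_id", "reading", "zscore").show()
```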

 

Apache Hadoop can efficiently store long histories collected from factory sensors across the entire manufacturing environment. The information gleaned from the historical data, teamed with early-warning analytics, is an invaluable resource for troubleshooting and comparison to quality models.

 

Predicting the future with telemetry

 

Telemetry data is drawn from products in use. Companies can use Apache Mahout to analyze Hadoop-collected telemetry data to predict when a machine, device, or critical component is likely to need service or replacement. They can then proactively reach out to product owners and offer a remedy before a problem occurs.
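Mahout itself runs on the JVM; purely to illustrate the idea in the same language as the earlier sketches, here is a hedged Python version using scikit-learn instead. The telemetry file, feature columns, and 30-day failure label are all hypothetical.

```python
# Illustrative only: the post names Apache Mahout (JVM-based); this
# sketch shows the same predictive-maintenance idea with scikit-learn.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed export: one row per device, with usage features plus a label
# marking whether the unit failed within 30 days. All names hypothetical.
telemetry = pd.read_csv("telemetry_history.csv")
features = telemetry[["run_hours", "avg_temp", "vibration_rms", "error_count"]]
labels = telemetry["failed_within_30d"]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Devices scoring a high failure probability become candidates for
# proactive service outreach.
print("holdout accuracy:", model.score(X_test, y_test))
```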

 

Telemetry data can also be utilized to understand the root causes of failure and address any faults that may exist in the design or assembly process. The information can also be used to provide heads-up training to customer service reps, reducing consumer frustration and helping to improve the support process.

 

The usefulness of telemetry data extends much further than problem-centered analysis. It provides an important view into how consumers or businesses are using a product. This information can be leveraged to design products or components that better meet users’ needs. It can also reveal new revenue opportunities, and provide a competitive edge in the marketplace.

 

Preflight check: evaluating Hadoop’s QA capabilities

 

When evaluating a particular Hadoop deployment for QA use, manufacturers should ensure that the solution can meet both scalability and real-time data availability needs.

 

The deployment should also be reviewed for its ability to stream data writes for the specific type and size of files produced by the manufacturing environment's QA sensors. Huge numbers of small files, for example, can create a bottleneck in deployments that aren't engineered to handle demanding quality assurance processes.
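One common mitigation for that small-files bottleneck is periodic compaction: rewriting many tiny files as a few large ones. The PySpark sketch below shows the general shape of such a job; the paths, input format, and partition count are assumptions for illustration.

```python
# Sketch of periodic small-file compaction (hypothetical paths/format).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("qa-sensor-compaction").getOrCreate()

# Hypothetical landing zone full of tiny per-reading JSON files.
small_files = spark.read.json("hdfs:///factory/sensors/incoming/")

# Rewrite as a handful of large Parquet files; fewer files means less
# NameNode metadata and less per-file read overhead.
small_files.coalesce(8).write.mode("append").parquet(
    "hdfs:///factory/sensors/compacted/"
)
```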

 

Any mission-critical solution must obviously provide robust disaster recovery capabilities. So you'll want to understand how the solution protects data against hardware failures, accidental overwrites and deletions, and the dreaded "smoking hole" scenario: the catastrophic loss of an entire data center.

 

Disaster recovery should be seamless. Performance and availability should continue to meet established service level agreements throughout the recovery process.

 

For more information on evaluating Hadoop deployments for your specific business needs, download this free Hadoop Buyer’s Guide by Robert D. Schneider, a Silicon Valley-based technology consultant and author of “Hadoop for Dummies.” You can also check out this resource to find out how manufacturers are using Hadoop and big data to optimize operations and sharpen their competitive edge.

 

Senior Manager, Cloud Online Marketing
About the Author

Stephen_Spector

I manage the HPE Helion social media and website teams promoting the enterprise cloud solutions at HPE for hybrid, public, and private clouds. I was previously at Dell promoting their Cloud solutions and was the open source community manager for OpenStack and Xen.org at Rackspace and Citrix Systems. While at Citrix Systems, I founded the Citrix Developer Network, developed global alliance and licensing programs, and even once added audio to the DOS ICA client with assembler. Follow me at @SpectorID
