
REST API integrations: Powering secure, automated server management from edge to cloud

For most businesses today, the edge is where the majority of data is generated and consumed, powering unique and differentiated customer experiences. And regardless of industry, one goal is common to all organizations: getting to market faster. With large, distributed compute infrastructure, they do this by ensuring a consistent, automated approach to deployment that requires limited or no local IT staff.

Enterprises are scaling up their digital engagement touch points with customers to deliver more and varied types of information and services. For organizations in industries such as retail, hospitality, transportation, and logistics, a single, unified, API-driven connection point that encompasses devices in both core data centers and distributed edge or remote sites is a key business driver.

But deploying 20, 200, or even 2000 servers across a distributed landscape is a time-consuming task that leads to deployment fatigue and is prone to errors.

APIs can benefit all aspects of the IT environment, including automating the complex, time-consuming, resource-intensive tasks we all know and dread. Because time and consistency matter, a scripted deployment from a centralized location can benefit the business greatly. This is where HPE sees an API-first approach to managing and monitoring compute infrastructure pay off.
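To make that idea concrete, here is a minimal sketch in Python of how a centralized script might fan a single deployment template out across many servers. The endpoint path, payload fields, and server IDs are hypothetical placeholders for illustration, not the actual Compute Ops Management API:

```python
# Sketch: fan one deployment template out to many servers from a central script.
# The URL path and payload fields below are hypothetical placeholders.

def build_deploy_jobs(base_url, server_ids, template):
    """Return one (url, payload) pair per server, all derived from one template."""
    jobs = []
    for server_id in server_ids:
        payload = dict(template)          # identical settings everywhere...
        payload["serverId"] = server_id   # ...only the target server differs
        jobs.append((f"{base_url}/servers/{server_id}/deploy", payload))
    return jobs

# Whether it is 20 or 2000 servers, the script is the same length.
jobs = build_deploy_jobs(
    "https://example.invalid/api/v1",
    [f"srv-{n:04d}" for n in range(2000)],
    {"osImage": "ubuntu-22.04", "biosProfile": "virtualization"},
)
```

Because every server is driven from the same template, the consistency that manual deployment struggles to maintain comes for free.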

API first

Until now, automating these tasks required organizations to build out APIs as an integration layer on top of existing application interfaces. In response, we built HPE GreenLake for Compute Ops Management, which automates and transforms complex, time-consuming compute management operations into a simplified experience from edge to cloud. It takes an API-first approach: every action a user can perform via the UI has an equivalent API endpoint that customers can use for scripting.
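As a sketch of what that UI/API parity looks like from a script, the snippet below builds an authenticated "list servers" REST request in Python. The host, path, and token value are illustrative placeholders, not the real service's endpoints:

```python
import urllib.request

# Sketch: the same "list servers" action a UI would offer, expressed as a REST
# call. The host and path are hypothetical placeholders.
def list_servers_request(base_url, token):
    """Build an authenticated GET request for the server inventory."""
    return urllib.request.Request(
        f"{base_url}/compute/v1/servers",
        headers={
            "Authorization": f"Bearer {token}",  # token from the set-up step
            "Accept": "application/json",
        },
        method="GET",
    )

req = list_servers_request("https://example.invalid", "my-api-token")
# urllib.request.urlopen(req) would execute the call; it is omitted here so
# the sketch stays side-effect free.
```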

Consistent security protocols

While APIs help reduce complexity, anything that touches a compute environment must also be secure. And when you build the world's most secure industry-standard server, its management interface must be just as safe. We built this solution around a consistent security strategy: it applies protections such as authentication and authorization, shields endpoints from malicious attacks, tracks usage of our services, and monitors endpoints to ensure availability and high performance to better serve customer demands.

Why is this important, you ask?

First, a consistent approach to authorization and authentication ensures that every API endpoint goes through the same checks, so there is no unintentional attack surface in the product, and customers are assured that their infrastructure data is continuously monitored and protected. Second, protecting endpoints from malicious attacks helps shield customers from denial-of-service attacks that can cause serious business disruption. Third, tracking usage lets us serve customers better because we are continuously monitoring the health of our application.

Customers who subscribe to the Compute Ops Management Enhanced Tier have access to this feature and can use the API to perform typical server deployment, management, and monitoring tasks at full parity with the UI. We are excited about this feature because it lets our customers experience the power of Compute Ops Management from an automation perspective.

Want to see how easy it is to get started? Click here for step-by-step instructions.

Please note: A token is generated as part of the set-up process. Access to specific APIs depends on the user's authorization level and the token used.
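For readers unfamiliar with token-based access, the sketch below shows the general shape of an OAuth2-style client-credentials exchange in Python. The token URL and credential values are illustrative placeholders; the actual values come from the set-up process described above:

```python
import urllib.parse

# Sketch of the token step: an OAuth2-style client-credentials exchange.
# The token URL and credentials are illustrative placeholders.
def build_token_request(token_url, client_id, client_secret):
    """Return (url, form-encoded body) for requesting an access token."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return token_url, body.encode("ascii")

url, body = build_token_request(
    "https://sso.example.invalid/token", "my-client-id", "my-client-secret"
)
# The response to this POST body would contain the bearer token; which
# endpoints that token can reach depends on the user's authorization level.
```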

To learn more, please visit our website.  


Matt Haines
Hewlett Packard Enterprise

twitter.com/hpe_compute
linkedin.com/company/hewlett-packard-enterprise
hpe.com/servers


About the Author

Matt_Haines

Matt is currently the Vice President and General Manager of the Compute Cloud Services business. His group is responsible for product management and engineering development of all HPE Compute aaS offerings, including HPE GreenLake for Compute Ops Management and HPE GreenLake for Compute Bare Metal. Matt is also responsible for software product management covering the entire Compute manageability portfolio and leads the HPE OneView engineering organization. Prior to his current role, Matt was an engineering leader for HPE, Cray, Time Warner Cable, and HP. Matt holds a Ph.D. in Computer Science and an MBA in Finance and Entrepreneurship.