Cloud-native app engineering: Decoupling traffic with service meshes and API gateways
Engineering your applications by decoupling them into microservices unlocks the full potential of cloud-native infrastructure. But decoupling adds complexity to how services connect to each other and to the outside world. Service meshes and API gateways can help you manage that complexity efficiently.
By Nashad Abdul Rahiman, Chief Solution Architect, Cloud Native Computing Practice Area, HPE Pointnext Services
Cloud-native infrastructures like Kubernetes have become the platform of choice for enterprises to run their workloads, and more and more enterprises are moving from traditional monolithic applications to microservices.
With that shift comes the next set of challenges: how the services communicate with each other, and how each of them is delivered to the end user.
Luckily, service meshes and API gateways can make our lives easier.
What is a service mesh?
A service mesh is a dedicated layer that controls service-to-service communication and adds security, observability, and reliability to a microservices architecture. The advantages of a service mesh include:
- Load balancing and traffic management, useful for scenarios like canary testing, A/B testing, and DevOps workflows
- Service discovery
- Failure prevention and resiliency features
- Security management
- Observability, telemetry, and health checking
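To illustrate the traffic-management side, a canary rollout can be expressed declaratively in a mesh such as Istio. This is a minimal sketch, assuming Istio is installed and a `checkout` Service exists with `v1` and `v2` subsets defined in a DestinationRule (all names here are hypothetical):

```yaml
# Send 90% of in-mesh traffic to the stable version and 10% to the canary.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
    - checkout            # hypothetical Kubernetes Service
  http:
    - route:
        - destination:
            host: checkout
            subset: v1    # stable release
          weight: 90
        - destination:
            host: checkout
            subset: v2    # canary release
          weight: 10
```

Shifting the weights over time promotes the canary without touching application code, which is the point of pushing traffic management down into the mesh.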
A service mesh is highly recommended for large applications and monolith-to-microservice transformations: where the number of services is growing, where traffic between services is high, or where routing needs are sophisticated.
A service mesh also suits teams focused on securing inter-service communication with TLS and on enforcing security policies through DevOps pipelines. It can also help with multi-cluster connectivity, even across clouds and between services with different deployment models.
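For the TLS requirement above, a mesh can enforce mutual TLS without application changes. A minimal sketch using Istio, assuming a standard installation with its root namespace `istio-system`:

```yaml
# Require mTLS for all workload-to-workload traffic mesh-wide.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace: policy applies mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plaintext connections between sidecars
```

Scoping the same resource to a single namespace lets teams roll strict mTLS out incrementally.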
Popular service meshes include Istio, Linkerd, Consul Connect, Traefik Mesh, NGINX Service Mesh (NSM), and Gloo Mesh. Cloud providers offer their own as well, such as AWS (Amazon Web Services) App Mesh and Anthos Service Mesh, and Red Hat provides OpenShift Service Mesh as part of its cloud-native stack.
What is an API gateway?
An API gateway is a governing layer that accepts incoming requests from clients, routes each request to the correct application service, and relays the response back to the requesting client, instead of clients calling services directly across potentially different deployment environments. For an application composed of many services with different functionalities, the API gateway serves as the central point that receives requests, forwards them to the right services, and relays the responses back to users.
Key features of an API gateway include:
- Reverse proxy or gateway routing
- Increased security by acting as the front layer to the backend services
- Authentication and authorization
- Load balancing
- Request aggregation
- API versioning
- Gateway offloading for functionalities like:
- IP whitelisting
- Service discovery integration
- Response caching
- Retry policies, circuit breaker, and QoS
- Rate limiting and throttling
- Logging, tracing, correlation
- Headers, query strings, and claims transformation
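Many of these offloaded features can be enabled with configuration rather than application code. As a sketch using the community NGINX Ingress controller on Kubernetes (the host and the `catalog` service name are hypothetical), rate limiting lives in an annotation while routing stays in the spec:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: catalog
  annotations:
    # Gateway offloading: throttle each client to ~10 requests per second.
    nginx.ingress.kubernetes.io/limit-rps: "10"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /catalog
            pathType: Prefix
            backend:
              service:
                name: catalog   # hypothetical backend service
                port:
                  number: 80
```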
One of the most useful use cases for an API gateway is in cloud-native application engineering, where new services gradually replace an old system and traffic to both the new services and the old application is managed through the API gateway. This is called the Strangler Pattern. The client does not need to know which services have been added or removed, since that is managed at the API gateway.
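The Strangler Pattern above amounts to a routing rule at the gateway: paths for already-migrated capabilities go to the new microservice, and everything else still falls through to the legacy monolith. A minimal sketch using an Istio VirtualService bound to a gateway (the gateway and service names are hypothetical):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: strangler-routing
spec:
  hosts:
    - shop.example.com
  gateways:
    - shop-gateway              # hypothetical Istio Gateway for external traffic
  http:
    - match:
        - uri:
            prefix: /orders     # capability already migrated
      route:
        - destination:
            host: orders-v2     # new microservice
    - route:                    # default: everything else to the monolith
        - destination:
            host: legacy-monolith
```

As more capabilities migrate, new `match` blocks are added until the default route to the monolith can be deleted.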
Popular API gateways include Gloo, Apigee, and Kong Gateway, and cloud providers like AWS and Azure have their own gateways.
Service mesh vs. API gateway
People often confuse service meshes and API gateways because of similarities in how they work. They differ in:
- Communication: a service mesh handles internal (east-west) communication, whilst an API gateway handles external (north-south) traffic.
- Management: API gateways are simpler to manage than service meshes.
- Observability: an API gateway focuses monitoring on client requests, while a service mesh focuses on internal communication.
Depending on the use case, some applications use both, improving overall service management and security for internal and external traffic alike.
Every enterprise has its own requirements on how the traffic is to be managed for decoupled applications. HPE Pointnext Advisory & Professional Services has strong expertise and experience backed by hundreds of project deliveries in the Cloud Native Computing space and can help you improve the overall customer experience with modern tools and infrastructure.
Learn more about our cloud consulting services.
Learn more about advisory and professional services from HPE Pointnext Services.
Nashad Abdul Rahiman is a Chief Solution Architect in HPE Pointnext Services’ Cloud Native Computing Practice Area. Nashad joined HPE in 2020. He has worked on application modernization and digital transformation in various industries. His key interests include AI/ML, Kubernetes, application development, public cloud and hybrid cloud. Nashad helps HPE teams deliver solutions based on cloud-native stacks to customers worldwide; he also designs enterprise-ready solutions that can be leveraged in future engagements.