Grounded in the Cloud

You cannot fix the Internet, so how about fixing your apps instead?


One of the main goals of an application developer is to ensure that the application can handle normal loads and scale during peak usage. But this alone is not enough; there are other things we need to take into consideration when thinking about application performance.


For example, when it comes to the question of how well your application performs on the Internet, we often hear, “well, we cannot fix the Internet”. There is no easy way to predict where end users will come from or how they will access the application. It is also difficult to tell whether they will take the same route, or whether there will be network issues along their path. In this blog, I explore why it’s becoming more important today to architect and tune the application for different types of network conditions.


In the early days of the Internet, when applications were becoming web-based, connection speeds varied widely, e.g. 56 Kbps dial-up modems, T1, DSL, and cable. As network speeds increased, the question of how applications performed under different network conditions became less important. The reason for this shift is that most applications were accessed from PCs, and slow connections such as dial-up were becoming obsolete.

Fast forward to 2010, and there is an explosion of mobile applications. Not only is there variation in the devices users come from, but also in the networks they come from. Today, mobile users connect over various network types such as WiFi, EDGE, 3G, 4G, and LTE. Depending on the user’s location, each technology has different speeds, bandwidth constraints, and other associated issues. So it’s more important than ever to test how well your applications will perform not only under a given load, but also under given network conditions. This allows you to proactively identify what changes in your application or infrastructure are needed to minimize performance issues for the end user.




My application has been tested to scale to 1,000 concurrent users, so why does the end user’s network matter?

Imagine being at a Subway restaurant, in line to order a custom sandwich. The general notion is that the capacity for delivering sandwiches depends directly on the number of servers (or sandwich artists). To accommodate the peak lunch hours, additional servers take orders from multiple customers and process them at the same time. Let’s assume there are five servers and each can make one sandwich per minute, for a total capacity of five sandwiches per minute.


Now let’s explore how a customer can reduce a server’s ability to make a sandwich within a minute:


  1. The customer is standing far away while explaining his order. Due to the distance, the server mishears some words and has to ask the customer to repeat himself. (Packet loss)

  2. The customer is ordering from the drive-through, and his voice takes longer to travel (let’s assume there is lag) than it would face-to-face. (Latency or jitter)


In both situations, the customer is engaged with the server for a longer time, reducing the server’s ability to make one sandwich per minute. As a result, in order to keep queue wait times down, additional resources (servers) will be required to serve customers.
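The sandwich math above can be sketched with some quick arithmetic (the numbers are hypothetical, matching the analogy): when each interaction takes longer, throughput drops, and more servers are needed to hold the same rate.

```python
import math

def throughput_per_min(servers, minutes_per_order):
    """Orders completed per minute across all servers."""
    return servers / minutes_per_order

def servers_needed(target_orders_per_min, minutes_per_order):
    """Smallest whole number of servers that meets the target rate."""
    return math.ceil(target_orders_per_min * minutes_per_order)

# Baseline: 5 servers, 1 minute per sandwich -> 5 sandwiches per minute.
baseline = throughput_per_min(5, 1.0)

# Packet loss / latency stretch each order to 1.5 minutes (hypothetical).
degraded = throughput_per_min(5, 1.5)

# Servers needed to keep serving 5 orders per minute at the slower pace.
extra = servers_needed(5, 1.5)

print(baseline)  # 5.0
print(degraded)  # ~3.33 sandwiches per minute with the same 5 servers
print(extra)     # 8 servers now needed to hold the original rate
```

The same shape applies to application servers: longer interactions shrink effective throughput, so the fleet must grow to serve the same arrival rate.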


Now, think about this scenario with your applications, where users come from various geographic locations and different network types. Network issues will keep user sessions alive on the application for longer, and as a result you may have to rethink the peak usage conditions the application was originally designed for. Peak concurrency may run 10, 20, or 30 percent higher than originally thought, which means adding infrastructure to keep up with the increased sessions.
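One way to estimate that inflation is Little's Law (average concurrency = arrival rate × average session duration); the numbers below are hypothetical, but the relationship is the point: the same arrival rate with 30% longer sessions means 30% more concurrent users than you sized for.

```python
def concurrent_users(arrivals_per_sec, session_seconds):
    """Little's Law: average concurrency N = lambda * W."""
    return arrivals_per_sec * session_seconds

# Sized for: 10 users/sec arriving, 100-second sessions on a good network.
designed = concurrent_users(10, 100)

# Same arrival rate, but poor networks stretch sessions by 30%.
observed = concurrent_users(10, 130)

print(designed)  # 1000 concurrent users, the tested peak
print(observed)  # 1300 concurrent users, 30% over the tested peak
```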


How is my application design impacting my capacity?

Let’s go back to the sandwich example.


“Chatty application”: Imagine there is a lot of communication between the customer and the server. For every item the customer wants on his sandwich, there is a Q&A like “Do you want mayo and mustard?” Under the poor “audibility” conditions described above, more interactions between customer and server means a longer time to complete the order.


“Application content”: Imagine you are ordering a value meal, and the server has to ask which type of chips and drink you want. Every extra item increases the server’s interaction time.

Taking this analogy back to applications: if the application is “chatty”, i.e. there is a lot of back and forth between the user and the server, poor network conditions will slow down the transaction because every round trip adds up. Similarly, if large images are sent and there is no content delivery network (CDN) or caching in place, the larger files will slow down transactions.
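A rough model of that cost (with hypothetical numbers): a transaction pays the round-trip time on every exchange, plus the transfer time for its payload. On a high-latency mobile link, the round-trip term dominates for chatty designs.

```python
def transaction_seconds(round_trips, rtt_ms, payload_kb, bandwidth_kbps):
    """Approximate transaction time: latency per round trip + transfer time."""
    latency_cost = round_trips * rtt_ms / 1000.0   # seconds spent on round trips
    transfer_cost = payload_kb * 8 / bandwidth_kbps  # seconds moving the payload
    return latency_cost + transfer_cost

# A "chatty" page: 40 small exchanges on a 200 ms, 1 Mbps mobile link.
chatty = transaction_seconds(40, 200, 500, 1000)

# The same 500 KB of content batched into 5 exchanges on the same link.
batched = transaction_seconds(5, 200, 500, 1000)

print(chatty)   # 12.0 seconds: 8 s of round trips + 4 s of transfer
print(batched)  # 5.0 seconds: 1 s of round trips + the same 4 s of transfer
```

Note that batching exchanges cuts only the latency term; shrinking or caching the payload (e.g. via a CDN) is what cuts the transfer term.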


In the screenshot below, users staying longer on the system due to poor network conditions end up increasing resource utilization on the server.




As a result, all users on the system are affected. In the screenshot below, if you made capacity predictions based on numbers without inducing network conditions, you will be surprised when real users come from different networks.



How can I incorporate network conditions in my performance testing?


By now, I hope it is clear how important it is to understand application behavior under different types of network conditions. So how can this be incorporated into your performance testing plan?

There are two possibilities:


Option #1: Move your load generators to the locations where most of your traffic will come from. This option is ideal only if you have the ability to place an actual load generator at each location. But as we all know, placing a physical load generator at a real location is expensive and time-consuming, and is often considered a “luxury” or “nice to have”. Now think about doing this for mobile devices: it adds the further complexity of creating dedicated 3G, EDGE, or 4G networks across different providers such as AT&T, Verizon, or Sprint.


Option #2: Use the HP Network Virtualization solution inside Performance Center to simulate network conditions. Shunra can simulate virtually any kind of network condition (mobile or WAN), and it offers a way to jump-start testing with virtual networks by leveraging its database of pre-configured network profiles between various global cities.
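To make the idea concrete (this is a crude stand-in, not the HP product’s API): network emulation essentially wraps every request with added latency and a chance of loss-driven retries. A minimal sketch of that wrapper, with hypothetical parameters, looks like this:

```python
import random
import time

def degraded_call(func, rtt_ms=200, loss_rate=0.02, max_retries=3):
    """Invoke func() under simulated network conditions: each attempt pays a
    round-trip delay and may be 'lost' with probability loss_rate (hypothetical)."""
    for _ in range(max_retries + 1):
        time.sleep(rtt_ms / 1000.0)        # simulated round-trip latency
        if random.random() >= loss_rate:   # the request got through
            return func()
    raise TimeoutError("simulated packet loss exceeded retry budget")

random.seed(1)  # deterministic for the example
print(degraded_call(lambda: "ok", rtt_ms=50, loss_rate=0.1))  # ok
```

Dedicated tooling goes much further (shaping bandwidth, jitter, and realistic city-to-city profiles at the packet level), but even a wrapper like this exposes how transaction times balloon as latency and loss rise.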




Why HP Network Virtualization?

By introducing the remote end-user experience, you can transform your HP Performance Center software and HP LoadRunner software test labs into end-to-end performance test beds. HP Network Virtualization integrates seamlessly with HP LoadRunner and HP Performance Center to accurately test and analyze application performance from remote sites, all within a local performance test lab, identifying network-sensitive transactions and bottlenecks and confirming SLO/SLA compliance.

HP provides the ability to recreate mobile network conditions such as 3G, 4G, and disconnects, all within an application performance engineering lab. This enables engineers to:

  • Performance test mobile applications from real handsets as well as virtual handsets via emulators and automation scripts
  • Report on and analyze mobile application performance
  • Receive optimization suggestions based on the analysis results
  • Validate the optimized solution
  • Deploy the mobile application with confidence



For more details, visit the HP Network Virtualization homepage. You can also try HP Network Virtualization integrated with HP Performance Center hosted on HP SaaS.

