Today, we’ll be discussing the implications of latency on your desire to move workloads to the cloud. Latency’s impact on the end user experience shouldn’t be a surprise to anyone, but we hope today to help you navigate some questions about latency, while offering some alternative strategies and things to consider.
Where Latency Comes From
There are two main sources of latency: bandwidth contention and distance.
Bandwidth contention increases latency when saturation forces packets to queue. The longer the queue, the longer the wait, and therefore the greater the latency.
Distance latency is partly a matter of the speed of light. In the immortal words of everyone’s favorite engineer Scotty, “I can’t change the laws of physics!” The other cause is the number of hops: the routing hardware in between, processing and passing packets along their journey.
Both are, to a degree, outside of your control. They are constraints you might just need to accept. Worry about what you can control, not what you can’t. You can increase bandwidth if the contention is on your network, but not if it’s on the public internet or in the homes of your work-from-home employees. You can’t shrink distances, but you can potentially pick a closer hosting location. You can’t change the number of devices on the public internet, but you can get a private circuit, such as AWS Direct Connect or Azure ExpressRoute, to reduce the number of hops.
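To put back-of-the-envelope numbers on the distance constraint, the sketch below estimates round-trip time from propagation delay plus per-hop processing. The figures are assumptions for illustration, not measurements: signal speed in fiber of roughly 200,000 km/s (about two-thirds of light speed in a vacuum) and a nominal half-millisecond per hop.

```python
def estimated_rtt_ms(distance_km, hops=10, per_hop_ms=0.5):
    """Rough round-trip time: propagation in fiber plus per-hop processing.

    Assumes ~200,000 km/s signal speed in fiber and an illustrative
    per-hop delay; both are ballpark figures, not measured values.
    """
    fiber_speed_km_per_ms = 200.0          # ~200,000 km/s
    propagation = 2 * distance_km / fiber_speed_km_per_ms  # out and back
    processing = 2 * hops * per_hop_ms     # each hop is crossed twice
    return propagation + processing

# e.g. a user 4,000 km from the cloud region:
print(f"{estimated_rtt_ms(4000):.0f} ms round trip")  # 50 ms
```

Note what each lever buys you: a private circuit that cuts hops from 10 to 4 saves a few milliseconds, while halving the distance saves twenty. That’s why picking a closer region is usually the bigger win.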
Protocols: The Friendlies and the Enemies
Many protocols were designed in the days before cloud, with assumptions baked in about the expected distance, and therefore latency, between the client and server in a conversation. When those distances are exceeded, the protocols often still function, but the end user experiences the result as a delay in response. Technically, if the distance, and therefore the latency, gets too large, they would stop working due to timeouts, but that’s not a likely scenario if we stay on the planet!
Let’s use SMB as an example of an enemy protocol. The bandwidth to deliver the required data is still necessary, but the chatty nature of the protocol amplifies the latency, leading to a poor experience. Yes, the SMB protocol has improved over time, but those improvements have mitigated the latency problem, not fixed it.
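The cost of chattiness can be sketched with simple arithmetic: every request/response exchange pays a full round trip, so total time is dominated by the number of round trips, not by bandwidth. The numbers below are illustrative, not SMB measurements.

```python
def transfer_time_s(data_mb, round_trips, rtt_ms, bandwidth_mbps=100):
    """Total time = serialization on the wire + one RTT per protocol round trip."""
    wire_s = data_mb * 8 / bandwidth_mbps      # time to push the bits
    chatter_s = round_trips * rtt_ms / 1000.0  # time lost to back-and-forth
    return wire_s + chatter_s

# Same 100 MB payload, same 100 Mbps link, 40 ms away:
print(f"chatty  (5,000 round trips): {transfer_time_s(100, 5000, 40):.0f} s")  # 208 s
print(f"streamy (   10 round trips): {transfer_time_s(100, 10, 40):.1f} s")    # 8.4 s
```

Both transfers spend the same 8 seconds actually moving data; the chatty one spends an extra 200 seconds waiting on round trips. More bandwidth wouldn’t help it much, because the bottleneck is the back-and-forth, not the pipe.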
FTP, on the other hand, is perfectly fine in higher-latency scenarios, up to a point. Consider this a friendly. It can stream data and push toward bandwidth limits. Protocols like FTP were designed for the internet, where latency is expected.
So, What Can You Do?
Each workload or scenario is different, but here are some tricks/ideas you can use to help drive success.
Consider the end user’s compute distance to the data/service they want to use. How can that be kept to a minimum for those enemy protocols? Move the end user compute to the cloud, alongside the data/service! Some form of application streaming (Citrix, VDI, RDP, whatever; they all serve the same purpose) can deliver that last mile (or many miles) of communication to the actual user over a latency-friendly protocol. This keeps those latency-sensitive enemy protocols within the low-latency environment, where they belong.
In the case of SMB, there are newer technologies to keep the data in the cloud but serve the data locally. SMB caching technology allows you to deploy a physical or virtual “appliance” to cache the hot data near the user, in the office or legacy data center. These appliances allow the data to be protected and managed in the cloud and can be quickly replaced if there is a failure with the caching device. Consider this option if traditional mapped drives, large files, or other SMB heavy workloads are a requirement in your organization.
You can potentially move services to a friendly protocol, a transformation of the service. That older, chatty Win32 app for your ERP system can be replaced with a newer SaaS offering from the vendor. If the vendor has taken the time to develop a SaaS offering, they have likely designed and built it to tolerate latency. This might mean compromising on price, customization options, etc., and its appropriateness for your organization is a decision point for you.
What is Common Here?
In the examples above, and generally with latency-sensitive workloads, it’s clear that something needs to change. Remember Scotty? Some things just can’t be changed! Unless you are lucky enough to be physically close to a cloud provider, a pure lift-and-shift strategy, without some form of transformation mixed in, will not lead you to success.
Just because something has always been done a certain way, doesn’t mean there isn’t a better way today. Explore options, explore compromises, and don’t be afraid to break the rules or challenge the norm. You might just make a new norm!
How Can You Get Help?
If cloud migration isn’t in your wheelhouse, or you have questions, seek out a quality partner to help. Be wary of any partner who claims everything will be fine or only brings forward a pure lift-and-shift strategy. A data center is a complex beast, built over time from a mishmash of technologies; evaluating all of it takes finesse and experience to deliver success.
When seeking a partner, ask tough questions. You deserve honesty from every partner. You want a partner, not just a vendor. Here at Converge, we pride ourselves on being agnostic, seeking the right solution for you, engaging as a true partner committed to your success. We have experience in helping our clients, large and small, navigate the complexities and nuances of cloud migrations. We’ll tell you what works and what doesn’t, bringing ideas and proven methodologies to the table.
I hope this has given you some things to think about. Boldly go forth, slay those enemies, and embrace some friendlies.