So many options! So many names! Whatever you call it, the idea of separating the user endpoint from the environment where the compute happens provides a way to overcome a significant number of problems we all have to deal with. With the newer capabilities, it’s never been easier to jump into the technology and add it as a tool to your toolbox.
How might the technology help?
Why do organizations adopt the technology? The reasons usually break down into the following scenarios. More than one may align with your specific requirements, or you may have entirely different use cases.
Latency issues for users:
The use case that has been most popular since the early days still holds true. There aren’t many alternatives for closing the latency gap for users of traditional client/server apps; the remote display protocols are built for exactly this scenario and need. See my complementary blog for this and other thoughts on latency here.
Control the security perimeter:
Do you still have users out of the office? Perhaps that is the new normal for your organization. Unfortunately, distributed users also mean a distributed perimeter, with each endpoint being its own security risk. Wouldn’t it be easier if you could move that perimeter back inside the corporate environment for control, monitoring, and remediation? Focus protection at the centralized compute environment, where you can add next-gen firewalls, leverage your SIEM, and so on.
Data sovereignty:
Do you need to ensure data is only processed or stored in a particular country? Keeping the data and compute in the same geographic region allows workers who are temporarily or permanently outside that jurisdiction to access, manipulate, or otherwise use the data in a controlled fashion.
Endpoint management:
Similar to the security scenario above, it’s just easier when you only need to manage the centralized compute, not the devices users are sitting in front of. Patching is simpler, posture checking matters less, and updating apps, upgrading the OS, and rolling out new software are all easier than trying to wrangle endpoints that aren’t always online or connected.
High powered “workstations”:
Perhaps some of your users occasionally need a higher-performance environment; maybe they need powerful 3D capabilities now and then. Having those users connect to a shared environment, rather than issuing expensive, powerful hardware to each of them, can be a cost-effective way to provide the capability. The amount of “power” can be aligned to each user’s needs, perhaps even on an app-by-app basis, giving them just enough to get the job done.
Supply chain issues:
We’ve all had to navigate supply chain issues recently, and delays in acquiring new endpoints for users have been a concern for a lot of the folks I talk with. The business is expanding, we have hundreds of new users; how do we get them connected? Most likely those new users already have some sort of endpoint they could use. Make use of that existing device without having to roll out a VPN client or another security-questionable solution.
On-prem vs cloud
Cloud-native options, vendor-enhanced cloud options, and on-premises options each have their own benefits. Cloud-based solutions can quickly bridge a gap in capabilities, enabling you to rapidly deploy, expand, and so on to meet the immediate needs of your organization. Cloud-native solutions have matured greatly in recent years to compete more directly with the traditional providers of the technology, and simplified per-user cost models now make it easier to understand the cost implications of a solution. On-premises is still appropriate, especially if all the latency-sensitive workloads are on-premises. The right choice will come down to how your environment is configured, where you are on your cloud journey, and so on.
Some additional benefits of the cloud model are pretty obvious, but they are always worth restating:
- You don’t buy the hardware upfront, find space to host it, and so on.
- Elements of the environment are managed for you, reducing the burden on your existing staff.
- You are no longer responsible for the hardware refresh, to enable access to new GPU or other features.
- You no longer need to worry about redundancy for user access. It’s just provided and wrapped in SLAs.
- You can scale up or down as your needs change without worrying about capacity.
- Pay by the hour or pay by the month; choose your cost model based on expected usage (a rough break-even sketch follows below).
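To make “expected usage” a little more concrete, here’s a minimal break-even sketch. The hourly and monthly rates below are hypothetical placeholders, not any vendor’s actual pricing; plug in the figures from your own quotes.

```python
# Rough break-even sketch: hourly vs. monthly pricing for a cloud-hosted desktop.
# The rates are hypothetical placeholders, not any vendor's real pricing.

HOURLY_RATE = 0.45    # assumed cost per connected hour (USD)
MONTHLY_RATE = 55.00  # assumed flat cost per user per month (USD)

def monthly_cost_hourly_plan(hours_per_month: float) -> float:
    """Cost for a user billed only for the hours they are connected."""
    return HOURLY_RATE * hours_per_month

def break_even_hours() -> float:
    """Connected hours per month at which the flat monthly plan becomes cheaper."""
    return MONTHLY_RATE / HOURLY_RATE

if __name__ == "__main__":
    for hours in (20, 80, 160):  # light, part-time, and full-time usage profiles
        hourly = monthly_cost_hourly_plan(hours)
        cheaper = "hourly" if hourly < MONTHLY_RATE else "monthly"
        print(f"{hours:>3} hrs/month: hourly ${hourly:6.2f} vs monthly ${MONTHLY_RATE:.2f} -> {cheaper} wins")
    print(f"Break-even at roughly {break_even_hours():.0f} connected hours per month")
```

The takeaway: below a certain number of connected hours per month the hourly plan wins, and above it the flat monthly plan does, so it’s worth running the comparison per user group rather than once for the whole organization.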
Which solution is best?
Well, that very much depends on which specific features you are looking for. Different solutions have their own strengths and weaknesses in manageability, ease of deployment, cost, and so on. A full evaluation of the marketplace is needed to find the best fit for you and your organizational needs. Solution vendors often provide nice additional capabilities in manageability and security that might be important to you, and these can often be used to supplement cloud options too.
Need help or guidance?
We are here to help! At Converge we are always helping our valued customers evaluate the options in the marketplace and determine the best solution for their unique requirements. If you want an independent view and an honest conversation on the topic, reach out; we’d love to get involved and help you.