With major upgrades now coming far more regularly than in the past, and with application compatibility creeping back in as an obstacle, the ability to automate testing of all your apps is quickly becoming a must-have. So, I'd like to talk a little about what a streamlined, end-to-end Evergreen IT strategy for EUC looks like.
Just for context, this article follows on from my research into Evergreen Application Platforms, where I discussed why this is becoming crucial for organizations and how security hardening is breaking third-party applications and, in some cases, even some of Microsoft's own. I recommend you have a look at that article in conjunction with this one.
Evergreen IT Strategy for Infrastructure
Public Cloud
The obvious Evergreen solution for your infrastructure is a public cloud platform like Azure, AWS or Google Cloud Platform, which provides the infrastructure for you to consume as a service. If you need to move your users from desktops with 8GB of memory, a 120GB hard drive and a 4-core processor to desktops with 16GB of memory, a 250GB SSD, a 4-core processor and a GPU for graphics-intensive apps, that's available for you to consume at any time. Of course, this convenience comes at a significant cost.
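To give a feel for how quickly that kind of change happens in the cloud, here's a minimal sketch using the Az PowerShell module to resize a virtual desktop host to a GPU-enabled size; the resource group, VM name and size are purely illustrative.

```powershell
# Log in and grab the VM (resource group and VM name are illustrative)
Connect-AzAccount
$vm = Get-AzVM -ResourceGroupName 'rg-euc-desktops' -Name 'vdi-host-01'

# Switch to a GPU-enabled NV-series size for graphics-intensive apps
$vm.HardwareProfile.VmSize = 'Standard_NV12s_v3'

# Resizing generally means stop/deallocate, update, then start again
Stop-AzVM   -ResourceGroupName 'rg-euc-desktops' -Name 'vdi-host-01' -Force
Update-AzVM -ResourceGroupName 'rg-euc-desktops' -VM $vm
Start-AzVM  -ResourceGroupName 'rg-euc-desktops' -Name 'vdi-host-01'
```

That's a hardware refresh in three cmdlets, but every one of those GPU hours lands on your bill.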
There's always sticker shock for those considering cloud resources. Very often that's because no account is given to the cost of real estate, power, security and the other less direct expenses of running your own Data Center, none of which you'd need to pay for in the cloud.
Third-Party Data Centers
Of course, there is the alternative of hosting your Data Center with a third party. Your IT teams can still control the software layer and even dictate the hardware layer, but for all intents and purposes, it’s managed and serviced by your hosting provider.
In this scenario, at least in my experience, most organizations still buy the hardware being hosted in the DC, so it's not as convenient as public cloud: you still need to keep on top of your hardware refreshes and co-ordinate them with the provider. Alternatively, you may enter into an agreement whereby you buy or lease the hardware from the host, who installs, maintains and presents it for your consumption.
Some organizations don't want to consume public cloud resources due to data privacy concerns, performance concerns or, more likely, cost. Some have hardware that's not terribly old and can't justify the added cost of moving to the cloud just yet. For them, I would have recommended an open converged solution like Datrium's, but they were just acquired by VMware and their DVX product has been retired. It was ideal for those who already had compute they wanted to keep using but wanted to move to a more straightforward flat architecture with attached storage.
Hyperconverged Infrastructure
If you intend to remain in your own Data Center, or even if you'll use a hosted DC, it could be worth considering hyperconverged solutions such as those from Nutanix, Dell EMC, Cisco etc. They're not as evergreen as cloud, but adding resources is much more streamlined than with traditional three-tier stacks.
There is one other Evergreen infrastructure option worth mentioning: HPE GreenLake, which is basically everything as a service. It's a really interesting offering that I know a lot of my peers scoffed at when it was announced a few years ago. HPE essentially becomes your mechanism for turning your on-premises Data Center into a cloud.
If you need to scale up in your DC to accommodate a sudden Work From Home surge like that experienced with COVID, you can have HPE scale it up for you. Likewise, if you no longer require all of those resources because most of your staff have moved back into the office and now use physical workstations again, then you can get HPE to scale down.
It helps that they make hardware so you can consume it when you need it and get rid of it again when you don’t. Your IT team can also be as hands-on or hands-off as you see fit. You can have HPE monitor and manage your on-premises and public cloud for you.
If you currently operate a more traditional stack in your Data Center, then moving to hyperconverged will introduce greater simplicity, and it isn't a very steep learning curve for IT teams. Moving your network stack, VMs etc. isn't a hugely complicated task either. Moving your resources into the cloud can be considerably more complex: the infrastructure part, such as getting a virtual network, a site-to-site VPN and VMs spun up, can be relatively easy, but migrating applications, certain servers and the user load can be complicated.
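To put that in perspective, the basic Azure plumbing really can be just a handful of Az PowerShell calls; the names and address spaces below are purely illustrative, and it's everything that comes after this that takes the time.

```powershell
# Create a resource group and a virtual network with a single subnet (names are illustrative)
New-AzResourceGroup -Name 'rg-landing-zone' -Location 'westeurope'

$subnet = New-AzVirtualNetworkSubnetConfig -Name 'snet-workloads' -AddressPrefix '10.10.1.0/24'
New-AzVirtualNetwork -Name 'vnet-corp' -ResourceGroupName 'rg-landing-zone' `
    -Location 'westeurope' -AddressPrefix '10.10.0.0/16' -Subnet $subnet

# The site-to-site VPN gateway, application migration and user load are the hard part;
# this only gives you the basic plumbing.
```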
Operating System, Servers and Desktops
When you've got your infrastructure in place, what about the VMs that sit on it? Or even the physical machines still on-premises? How can you get them to fit into your Evergreen IT strategy?
Keeping up with the Operating Systems
Operating system upgrades come thick and fast for desktops now. If you intend to keep up to date with your Windows 10 upgrades, that can be quite time-consuming. I've already extensively covered the application compatibility challenges, but even without the apps layered in, keeping your desktops on the latest OS can be testing. Even with Microsoft's recommended servicing approach, managing so much change can be a headache.
It's going to come as a surprise to pretty much no one who follows my blogs, but I believe the way to streamline your OS updates is to keep as much OFF your desktops as possible. If your users' data lives in OneDrive, redirected folders and mapped network drives (work of the Devil, by the way), then you don't have to worry about persisting that data. It also gives you a flexible desktop model that ensures that, no matter which desktop your users log on to, they get a consistent experience.
Now, I will say the built-in Windows 10 upgrade does do a pretty good job of persisting things like apps, user data etc. but there is still a lot riding on things working correctly – too much for my liking.
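If you just want a quick view of where your estate stands before planning the next round of upgrades, something as simple as the sketch below will report which feature update each desktop is on; the computer names are placeholders, and in practice you'd pull this from your management tooling.

```powershell
# Report the Windows 10 feature update level of a set of desktops (names are placeholders)
$computers = 'DESKTOP-01', 'DESKTOP-02'

Invoke-Command -ComputerName $computers -ScriptBlock {
    Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' |
        Select-Object @{ n = 'Computer'; e = { $env:COMPUTERNAME } },
                      DisplayVersion, ReleaseId, CurrentBuild
}
```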
Driver Automation
Devices and drivers have mostly been pretty solid through Windows 10 upgrades in my experience, though you do have to keep on top of your vendors' support and certification. For all of your device driver needs, I recommend automation through Maurice Daly's excellent Driver Automation Tool, available on GitHub.
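The tool does the heavy lifting of matching and downloading vendor driver packages, but if you just want a quick look at which third-party drivers are sitting on a machine before an upgrade, the built-in DISM cmdlets will give you that. This is only an inventory check, not a substitute for the tool itself.

```powershell
# List the third-party drivers currently installed on the running OS
Get-WindowsDriver -Online |
    Sort-Object ProviderName |
    Select-Object ProviderName, ClassName, Driver, Version, Date |
    Format-Table -AutoSize
```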
Desktops
For your desktops, a big consideration now is the people you send to work from home. What device will they use? If they use their own device, then you definitely DO NOT want them using a VPN and exposing your network to whatever they have on their machine. You'll want to go all virtual with Citrix Virtual Apps and Desktops\Workspace, VMware Horizon\Workspace ONE, Parallels RAS, Numecent Cloudpaging\S2 AppsAnywhere, Azure Virtual Desktop etc. Of course, I know that's an ideal scenario, and not everyone has those products or a budget to buy them.
VPNs
If your remote users are using their own devices and a VPN is all you’ve got to offer, then I suggest you at least bring in something like ThinScale’s Secure Remote Worker. This will ensure that when the VPN is in use, the person’s laptop is secured and locked down, preventing anything nasty from blowing back into your corporate network.
Thin Client devices like those from IGEL are popular, and you can have VPNs running on them with a reduced security risk. They can also allow the launch of published resources like those in Citrix.
IGEL, for example, also has an excellent cloud management service, so pushing image updates to remote thin clients is very simple. That was the big Achilles heel with thin clients previously; it isn't anymore.
You could also ship your remote workers corporate laptops or desktops and leverage Windows Autopilot to streamline the setup and get them quickly connected back to the corporate network. At the very least, get them enrolled in your corporate Intune tenant and Azure Active Directory and serviced and managed remotely that way.
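If you go the Autopilot route, capturing a device's hardware hash is simple enough with the community Get-WindowsAutopilotInfo script from the PowerShell Gallery; the output path below is just an example, and the resulting CSV is what you import into your Intune tenant.

```powershell
# Install the community script and export the hardware hash for Autopilot enrollment
Install-Script -Name Get-WindowsAutopilotInfo -Force
Get-WindowsAutopilotInfo -OutputFile 'C:\Temp\AutopilotHWID.csv'
```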
Servers
What about your servers? Many of us have plenty of experience with application virtualization on the desktop and data\profile management with UEM products, but maybe not so much on the server side. The go-to comfort spot for evergreen on the server side has been automated server builds.
You can have a full end-to-end automated build in which you simply change a section of your runbook when updating the OS or even the applications. While this works fine because it automates the build, it isn't necessarily streamlined. Compared with solutions like Citrix PVS, you may find making changes, testing, tweaking etc. much more time-consuming and potentially frustrating.
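To illustrate the "change one section of the runbook" idea, here's a deliberately stripped-down sketch using the Hyper-V cmdlets; the paths and names are hypothetical, and a real build would carry on into an unattended OS install, domain join and the application installs.

```powershell
# The section you change when a new OS release (or app version) comes along
$osIso  = '\\deploy\isos\WindowsServer2022.iso'
$vmName = 'APP-SRV-01'

# Create the VM, attach the install media and power it on
New-VM -Name $vmName -MemoryStartupBytes 8GB -Generation 2 `
       -NewVHDPath "D:\VMs\$vmName.vhdx" -NewVHDSizeBytes 120GB
Add-VMDvdDrive -VMName $vmName -Path $osIso
Start-VM -Name $vmName

# ...unattended OS install and application install steps would follow here
```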
Containers
Is there a way to virtualize the apps on the servers as we do on the desktops? You betcha! At one point there was a Server App-V, but it never really had legs. These days, containers are where it's at for servers. The beauty of containers is that they're basically like micro-VMs: not only is there a virtual file system and registry, there's also a virtual network stack. This is hugely important for your server apps, ensuring port configurations and everything else the backend app needs persist with it no matter what server or OS it runs on.
Containers have become hugely popular in the developer world and, more recently, with IT Pros too. Large tech companies like Facebook have lived with and sworn by the container approach for many years; when your web service is automated and configured in containers, scaling up to thousands of web servers becomes a realistic prospect.
Upgrading the underlying OS becomes much simpler too. Because the app is in a container, it's OS agnostic: it carries everything it needs with it. You simply automate the server build to create a virtual machine and install the operating system, and because the container doesn't care what it's running on, you just drop it onto the new server with a simple command. That's a lot easier than repackaging the app install.
Most hypervisors now also support hosting containers, so if your apps are suited to running in, say, Docker containers, you may not even need to worry about VMs at all. You can just script the setup of the containers on the hypervisor and automate their upgrade and refresh. Containers also give you the ability to quickly roll back to the previous revision in the event of an issue. They're awesome.
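As a rough sketch of what that scripting looks like in practice, the snippet below pulls, runs and rolls back a containerized backend app using the standard Docker CLI from PowerShell; the registry, image names and ports are all hypothetical.

```powershell
# Pull and run the new revision of a backend app, publishing its port (names are hypothetical)
docker pull registry.example.com/orders-api:2.4.1
docker rm -f orders-api 2>$null
docker run -d --name orders-api -p 8080:80 registry.example.com/orders-api:2.4.1

# If the new revision misbehaves, rolling back is just a case of running the previous tag again
docker rm -f orders-api
docker run -d --name orders-api -p 8080:80 registry.example.com/orders-api:2.4.0
```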
Evergreen IT Strategy for Applications
I've mentioned this previously in my Evergreen Platform Bake-off, but app compatibility has started to rear its ugly head again. The Evergreen IT strategy for your apps is a three-pronged approach:
- You’ve got to keep on top of the app compatibility relative to your OS versions
- You need to automate and streamline the packaging of new versions of the applications
- You need to automate and streamline the deployment of the package
Evergreen Platforms
For compatibility testing, I suggest a product like those previously reviewed. However, if you haven’t read the review, I’ll give you a quick overview of one of the products, Application Readiness, as an example:
- You can import your application packages into the product
- There are multiple configurations in the product you can choose from, including Windows 10 versions like 1809, 1903, 1909 and 2004
- Rather than having to test each application manually for every single Windows 10 upgrade, you can let the product handle the application testing for you against the OS, .NET Framework version, Browser version etc. of your choosing
- The product can also help with automating some of the packaging if you’re looking at using something like MSI or App-V.
Application Packaging
My preference for app packaging is Cloudpaging because it’s app virtualization but with an incredibly high success rate.
In my opinion, it streamlines and simplifies packaging. In fact, since they've automated packaging with PowerShell, you can simply create an automated runbook for each of your applications; when it's time to upgrade, you just plug in the new version and let 'er rip. You can even integrate with public package management resources like Chocolatey and Windows Package Manager to streamline it even further.
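As a simple illustration of that kind of check, the sketch below compares the version sitting in your package library with the latest on Chocolatey and kicks off a packaging runbook when they differ; the app, versions and the Start-PackagingRunbook function are all hypothetical placeholders.

```powershell
# Compare the packaged version with the latest on Chocolatey (app and versions are examples)
$app             = 'notepadplusplus'
$packagedVersion = [version]'8.6.0'          # version currently in the package library

$latest = [version](((choco search $app --exact --limit-output) -split '\|')[1])

if ($latest -gt $packagedVersion) {
    Write-Output "New version $latest of $app available - starting packaging runbook"
    # Start-PackagingRunbook -App $app -Version $latest   # hypothetical packaging step
}
```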
You can set up an automated packaging factory if you wish, using a product like Jenkins to run a check like that every night to see whether a new version of an app is available. If it is, you can package it and deploy it to your pilot group. I previously accomplished this using AppDNA's automation capabilities too.
There are many great automation frameworks that could come in useful, and I’m currently experimenting with Automai and having great results. I hope to blog about that in future.
Applications remain the most complicated component. There are just so many different apps and variables to consider. If you just deploy vendors' EXEs and MSIs with silent switches, getting to modernized application management could be difficult, but doing so will make every other layer of your IT services much more straightforward. Trust me on that!
Users
There's no magic potion, or even employment contract, that will keep your staff with you for their entire working lives, BUT a solid Active Directory can make your IT support streamlined and self-servicing enough to accommodate the changes. You should be able to get to the point where your hiring manager and/or human resources can assign an existing or new employee to a certain role, which gives them whatever applications and access they need to do their job.
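A minimal sketch of that role-based model with the ActiveDirectory module looks something like the following; the group and account names are entirely hypothetical.

```powershell
Import-Module ActiveDirectory

# Create a role group and nest it into the resource groups that grant the apps and shares
New-ADGroup -Name 'Role-FinanceAnalyst' -GroupScope Global -GroupCategory Security
Add-ADGroupMember -Identity 'App-SAP-GUI'       -Members 'Role-FinanceAnalyst'
Add-ADGroupMember -Identity 'Share-FinanceTeam' -Members 'Role-FinanceAnalyst'

# When HR assigns the role, the new hire simply gets dropped into the role group
Add-ADGroupMember -Identity 'Role-FinanceAnalyst' -Members 'jbloggs'
```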
Self-servicing for certain things is possible with a Teams chatbot set up to trigger an approval process or even an automated workflow. For example, if a user's AD account is locked, you could have the chatbot unlock the account and provide a link to the cloud-hosted self-service password reset. If you use something like ControlUp, you can do a whole lot of amazing automated actions: for example, an automated action that blows away old profiles and clears the Windows Update cache when it detects a user's free space is down to 2GB or less. The possibilities are endless.
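For context, a rough local equivalent of that low-disk-space action might look like the sketch below; it assumes admin rights on the endpoint, and the 2GB threshold and 90-day profile age are only illustrative (ControlUp's own script actions are the polished way to do this).

```powershell
# If free space on C: drops below 2GB, clear out stale profiles and the Windows Update cache
if ((Get-PSDrive -Name C).Free -lt 2GB) {

    # Remove local profiles not used for 90+ days (skips special and currently loaded profiles)
    Get-CimInstance Win32_UserProfile |
        Where-Object { -not $_.Special -and -not $_.Loaded -and
                       $_.LastUseTime -lt (Get-Date).AddDays(-90) } |
        Remove-CimInstance

    # Clear the Windows Update download cache
    Stop-Service -Name wuauserv -Force
    Remove-Item 'C:\Windows\SoftwareDistribution\Download\*' -Recurse -Force -ErrorAction SilentlyContinue
    Start-Service -Name wuauserv
}
```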
Getting from not having any of this in place today to getting it set up isn’t actually all that difficult IF you have a well-organized AD. That is of critical importance.
ControlUp is a cloud-first product, so it's very easy to get set up and going with it. They also have a massive repository of script-based actions you can use. Teams chatbots are simple to set up, and Azure AD and Self-Service Password Reset are among the easier Azure services to get going with.
If you found this article on Evergreen IT Strategy useful be sure to sign up for alerts below for new articles like this one.