
How Infrastructure Performance Management Can Overcome Data Center Complexity

High-performance IT infrastructure and anytime, anywhere access to business resources aren’t just standard user expectations. They’re business necessities. However, increasingly complex IT infrastructure – more users, more devices, more applications, more systems – can make obstacles to availability and performance more difficult to pinpoint and address. Virtualization can help you reduce costs and consolidate infrastructure, but even virtualization adds a layer of complexity and abstraction that muddies the waters.

Utilization metrics, once considered an indicator of performance, have limited usefulness today. While utilization accounts for certain factors that can affect performance, infrastructure can have capacity to spare but still might not be delivering adequate performance. Also, most organizations use device-specific monitoring and management tools that are incapable of providing a complete picture of infrastructure performance.

The fact is, downtime, performance degradation, over-provisioning of infrastructure, and the creation of IT war rooms are typically the result of infrastructure complexity run amok. Real-time visibility, holistic monitoring and application awareness can help you overcome infrastructure complexity and the resulting symptoms.

True infrastructure performance management (IPM) requires an end-to-end view of infrastructure. IPM is the process of monitoring the health of all infrastructure systems and components to ensure infrastructure as a whole is performing at an optimal level.

An IPM solution tells you infrastructure response times and other important metrics. Rather than replacing device-specific tools, an IPM solution complements them by collecting data from all infrastructure components, including virtual machines, servers, switches and storage solutions.
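
As a rough illustration of what holistic collection looks like (a minimal sketch, not Virtual Instruments code), the following aggregates response-time samples from several hypothetical components and flags any tier that breaches an assumed service-level target:

```python
from statistics import mean

# Hypothetical response-time samples (in milliseconds) gathered from
# device-specific tools across the infrastructure stack.
samples = {
    "vm-cluster-01":  [4.2, 5.1, 4.8, 6.0],
    "core-switch-02": [0.6, 0.7, 0.9, 0.8],
    "san-array-03":   [11.5, 14.2, 19.8, 22.1],
}

SLA_MS = 10.0  # assumed service-level target for component response time

def health_report(samples, sla_ms):
    """Summarize average latency per component and flag SLA violations."""
    report = {}
    for component, values in samples.items():
        avg = mean(values)
        report[component] = {"avg_ms": round(avg, 1), "breach": avg > sla_ms}
    return report

for component, stats in health_report(samples, SLA_MS).items():
    flag = "ALERT" if stats["breach"] else "ok"
    print(f"{component}: {stats['avg_ms']} ms [{flag}]")
```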

IPM benefits both the business side and the IT side of the organization. Without IPM, you would have to rely on vendor claims, internal testing, and educated guesses to forecast performance. This approach increases risk and often leads to over-provisioning of IT resources. An IPM solution can help you optimize infrastructure capacity, increase utilization and accelerate infrastructure response times. It can also help you identify and quickly address infrastructure issues that could be slowing performance and hampering the user experience. On the business side, IPM can help you increase availability and performance, reduce costs and improve the customer experience.

Virtual Instruments recently introduced VirtualWisdom, which it calls “the industry’s most comprehensive infrastructure performance monitoring and analytics platform.” VirtualWisdom monitors, analyzes and optimizes infrastructure health holistically — across physical, virtual and cloud environments — to maximize performance and utilization.

VirtualWisdom knows which applications are running on which infrastructure components. It understands the level of performance your infrastructure is delivering for each application. It can tell which applications are causing strain on infrastructure. This makes it possible to proactively ensure the performance and health of the infrastructure that is supporting your mission-critical applications. You can also proactively troubleshoot issues and address them at the source before users and business operations are affected.

Using predictive analytics, you can accurately forecast compute, network and storage consumption and avoid reaching capacity limits. Workload infrastructure balancing continuously adjusts resources to optimize application performance and utilization. This typically allows VirtualWisdom customers to save 30 percent to 50 percent on infrastructure costs.
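
To make the forecasting idea concrete, here is a simplified sketch of trend-based capacity planning; it is not VirtualWisdom's algorithm, and the utilization figures and 90 percent threshold are invented for illustration:

```python
import numpy as np

# Invented monthly storage utilization (percent of capacity) for the past year.
months = np.arange(12)
utilization = np.array([41, 43, 46, 48, 50, 53, 55, 58, 60, 63, 65, 68])

# Fit a linear trend: utilization ~ slope * month + intercept
slope, intercept = np.polyfit(months, utilization, 1)

# Estimate the month in which utilization crosses a 90 percent threshold.
threshold = 90.0
months_to_limit = (threshold - intercept) / slope

print(f"Trend: +{slope:.1f} points/month")
print(f"Projected to hit {threshold:.0f}% around month {months_to_limit:.0f}")
```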

Don’t let complex IT infrastructure slow down the performance of important applications and drive up IT costs. Let us show you how the VirtualWisdom IPM solution intelligently monitors infrastructure health as a whole to optimize performance and availability.


Cognitive Computing Boosts the Speed and Accuracy of Video Analysis

Surveillance cameras seem to be everywhere these days, deployed by consumers, businesses and other organizations seeking to deter crime and document security-related events. According to some estimates, more than 60 million surveillance cameras are now deployed across the U.S., recording billions of hours of footage each week.

It would be impossible for humans to watch all that video every week. Fortunately, they don’t have to, thanks to the continued development of artificial intelligence (AI) and video analytics technologies.

AI-driven video analytics software can monitor multiple video feeds simultaneously and trigger an alert when potentially significant situations are detected. Analytics systems support a wide range of security and operational use cases by providing facial and license plate recognition, detecting objects taken or left behind, accurately counting people in congested areas, and more.

Most important, software never gets bored or tired.

With conventional analog surveillance systems, humans must monitor the video to catch events in real time or review stored video to reconstruct events after the fact. In addition to being labor-intensive, human monitoring is highly fallible. People can easily miss developing events due to fatigue or because they’re looking at the wrong monitor at the wrong time.

The AI Difference

Recent advances in one form of AI have significantly boosted the speed and accuracy of video analytics software. Cognitive computing solutions combine multiple AI subsystems in a way that simulates human thought processes and can evaluate video content in a fraction of the actual viewing time required by a human.

AI is an umbrella term that covers multiple technologies, and cognitive computing relies heavily upon two of the predominant subsets — machine learning and deep learning. Although they are closely related, there are significant differences. Machine learning refers to the use of algorithms that “learn” to produce better analysis as they are exposed to more and more data.

Deep learning has been referred to as machine learning on steroids. It is designed to loosely mimic the way the human brain works with neurons and synapses. It utilizes a hierarchical system of artificial neural networks with many highly interconnected nodes working in unison to analyze large datasets. This gives a machine the ability to discover patterns or trends and learn from those discoveries.

The significant difference is that deep learning algorithms require little or no human involvement. Unlike machine learning models that require programmers to code specific instructions and then label and load datasets for analysis, artificial neural networks require only minimal instructions represented by just a few lines of code. They then “infer” things about the data they are exposed to.
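
To illustrate just how few lines of code a basic artificial neural network requires, here is a minimal PyTorch sketch; the layer sizes and ten object classes are arbitrary choices, not the architecture of any particular video analytics product:

```python
import torch
from torch import nn

# A small convolutional network for 64x64 RGB image patches.
# Layer sizes are arbitrary; real video-analytics models are far deeper.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn low-level features
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),                 # 10 hypothetical object classes
)

# A forward pass on a batch of four random "frames" just to show the shape flow.
frames = torch.randn(4, 3, 64, 64)
logits = model(frames)
print(logits.shape)  # torch.Size([4, 10])
```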

Beyond Security

By combining these technologies, cognitive computing becomes particularly useful for image and speech recognition. It uses a series of “classifiers” to identify and tag objects, settings and events based on features such as color, texture, shape and edges. The more data the system is exposed to over time, the more it learns and the more accurate it becomes.
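
As a hedged sketch of frame-level tagging, the snippet below samples frames from a recorded video and records the top label predicted by an off-the-shelf ImageNet classifier; the file name, sampling interval and choice of model are assumptions standing in for a purpose-built surveillance classifier:

```python
import cv2
import torch
from torchvision import models

# Off-the-shelf ImageNet classifier standing in for a purpose-built model.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

cap = cv2.VideoCapture("lobby_camera.mp4")  # hypothetical recorded feed
frame_idx, tags = 0, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:  # sample roughly once per second at 30 fps
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = preprocess(torch.from_numpy(rgb).permute(2, 0, 1))
        with torch.no_grad():
            top = model(tensor.unsqueeze(0)).softmax(1).argmax(1).item()
        tags.append((frame_idx, labels[top]))
    frame_idx += 1

cap.release()
print(tags[:10])  # a list of (frame number, predicted label) pairs
```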

This will allow organizations to use video surveillance for a range of use cases beyond just premises security. Many cities have implemented video surveillance for traffic control, while manufacturers are using it to monitor production for safety, quality assurance and regulatory compliance. Retailers are employing video surveillance for people counting, queue counting and dwell time analysis as well as for loss prevention.

Cognitive computing software makes widespread use of surveillance practical by enabling video analysis that is exponentially faster than conventional systems requiring human evaluation. Through our services agreement with the cognitive computing experts at Essex Technology Group, Converge can help customers implement autonomous video surveillance systems that no longer require human monitoring. Give us a call to learn more.


A Dynamic Network Perimeter Demands Robust Identity Management

In the days of the traditional “network perimeter,” security resources such as firewalls and intrusion prevention systems were dedicated to defending that perimeter. If you could keep outside threats from somehow penetrating or circumventing the exterior wall, the network would remain safe.

The problem is that the traditional network perimeter no longer exists. Users can access IT resources from any location on any device, whether they’re working from home, at a coffee shop, or in a hotel room in another part of the country. Of course, IT resources often reside in cloud environments that also exist outside the traditional perimeter.

Thanks to mobile and the cloud, user identities are the new perimeter, one that is constantly changing and difficult to secure. Today’s dynamic network perimeter is “defined” when users attempt to access any on-premises or cloud-based environment. The only way to keep cyber criminals out is to effectively authenticate users and ensure that they only access resources they’re authorized to access. This is the job of an identity management solution.

Identity management is the process of identifying, authenticating, and authorizing users so they can access networks, systems, applications or other resources. The decision to grant access is based on an established user identity, which is matched with credentials provided by a user when an access attempt is made. Identity management then enforces user permissions that dictate which resources the user is permitted to access.
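
A toy sketch of that identify-authenticate-authorize flow is shown below; the in-memory user store, password hashing scheme and permission model are deliberately simplified assumptions, not any vendor's API:

```python
import hashlib
import hmac
import os

# Simplified in-memory identity store: identity -> salt, password hash, permissions.
def _hash(password, salt):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
users = {
    "jsmith": {
        "salt": salt,
        "pw_hash": _hash("correct horse battery staple", salt),
        "permissions": {"crm": {"read"}, "hr-portal": set()},
    }
}

def authenticate(username, password):
    """Match supplied credentials against the established identity."""
    user = users.get(username)
    if not user:
        return False
    return hmac.compare_digest(user["pw_hash"], _hash(password, user["salt"]))

def authorize(username, resource, action):
    """Enforce the permissions attached to the identity."""
    return action in users.get(username, {}).get("permissions", {}).get(resource, set())

if authenticate("jsmith", "correct horse battery staple"):
    print("crm read allowed:", authorize("jsmith", "crm", "read"))            # True
    print("hr-portal read allowed:", authorize("jsmith", "hr-portal", "read"))  # False
```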

In addition to keeping unauthorized users off your network, an identity management solution should allow you to provide access to customers, vendors, business partners and other third parties without compromising network security. Best-in-class solutions make it easy to onboard new users, manage existing users, and offboard users who are no longer authorized to access the network. Identity management also improves the user experience by enabling the use of single sign-on to access various systems with a single identity.

The business and security benefits of identity management are significant, but nearly all organizations are struggling with it to some degree, according to research from Vanson Bourne. In fact, 92 percent of respondents said they’re experiencing at least one challenge with identity management. Organizations are finding it difficult to integrate identity management with other security tools, and poor password practices are still a problem. Single sign-on helps with password issues, but it can also create security gaps because not all applications can be integrated with single sign-on solutions. Most respondents agree that multifactor authentication is critical to strengthening access controls, but it can also be difficult to implement.

Converge offers a comprehensive suite of identity management services to help organizations overcome these challenges. We can assist with Active Directory Federation Services deployment and upgrades, and the deployment and configuration of solutions for single sign-on, self-service password resets, multifactor authentication, and user provisioning and deprovisioning. For organizations that have multiple identity stores and directories, we provide consolidation and migration services to prevent security gaps that increase risk.

Identity management is critical to securing today’s increasingly dynamic network perimeter. Let us show you how our identity management services can help you control access to your systems and sensitive data while delivering the best possible user experience.


How the Pure Evergreen Storage Service Makes Flash Upgrades Simpler and Cost-Efficient

According to a new report from ResearchAndMarkets, the all-flash storage market will triple in size from $5.9 billion in 2018 to $17.8 billion by 2023, growing at an annual rate of 29.75 percent. Organizations are adopting all-flash storage solutions to meet growing demand for higher performance, constant uptime and energy efficiency.

Expanding use of flash in the data center for artificial intelligence applications, big data analytics, the Internet of Things and virtualization is also expected to drive increased adoption over the next few years. These technologies require a modern infrastructure that is capable of storing and processing large volumes of data to keep up with heavy workload demands.

In fact, many organizations are not only implementing flash but looking to upgrade older flash arrays to new technologies such as non-volatile memory express (NVMe). NVMe is a protocol that accelerates the process of accessing data from high-speed flash storage media. NVMe delivers faster response times and lower latency than legacy flash solutions. However, IT departments may have difficulty justifying investments in NVMe to replace all-flash arrays that are only a few years old.

The Evergreen Storage Service was developed by Pure Storage to help organizations future-proof their storage environment and overcome the expense and disruption of the traditional forklift upgrade model. The idea behind Evergreen Storage is to enable in-place upgrades that allow you to modernize and expand your storage as needed – including controllers, external host and internal array connectivity, and flash arrays.

Pure’s approach protects your investment by allowing you to upgrade to next-generation technology without repurchasing hardware or software licenses. Maintenance and support costs remain the same. Essentially, Evergreen Storage allows you to take advantage of a cloud-like model in which software and hardware are continuously updated without large capital investments, data migrations or downtime.

Pay-per-use on-premises storage allows you to keep your storage infrastructure fresh, modern and worry-free through generations. It also extends the life of Pure’s existing Right-Size Guarantee as capacity requirements change. The Right-Size Guarantee ensures that you don’t purchase too much or too little storage capacity.

A Total Economic Impact study from Forrester offers insights into the benefits, costs and risks associated with an investment in a Pure Evergreen Storage Gold subscription. For midmarket organizations, the analysis points to risk-adjusted benefits of $1,705,119 over six years and operating costs of $721,476. This translates to a net present value of $983,643 and a risk-adjusted ROI of 136 percent. Forrester projects average savings of 33 percent in capital expenses and subscription fees compared to a forklift upgrade. When you factor in reductions in environmental, management and migration costs, the average cost savings jump to more than 50 percent.
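
Assuming those published figures are already expressed as present values, as is typical in TEI studies, the arithmetic checks out:

```python
benefits = 1_705_119  # risk-adjusted benefits over six years (USD)
costs = 721_476       # operating costs over six years (USD)

npv = benefits - costs
roi = npv / costs

print(f"Net present value: ${npv:,}")    # $983,643
print(f"Risk-adjusted ROI: {roi:.0%}")   # 136%
```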

Pure’s Evergreen Storage Service can be up and running in a matter of days, providing you with a scalable, on-premises storage solution that’s purpose-built for flash with tier-one, enterprise-grade capabilities. Let us show you how Evergreen Storage allows you to take full advantage of the latest flash storage technology as needed without the high cost, risk and disruption of a conventional upgrade.


Gain Visibility and Value from Data with Cohesity DataPlatform

Data is the coin of the realm for modern business, providing insights that help companies make better decisions, understand markets, boost sales, improve processes and optimize costs. In fact, there are very few business decisions that aren’t influenced by data these days.

Finding and using that data is becoming increasingly difficult, however.

Organizations of all shapes and sizes tend to adopt a “save everything” mentality when it comes to data, storing everything just in case they might need it later. In addition, these data stores are being continually copied for applications such as backup, disaster recovery, development, testing and analytics. IDC estimates that as much as 60 percent of all stored data consists of copies.

Over time, this practice creates mass data fragmentation with data spread across myriad infrastructure silos, preventing organizations from fully extracting its value. In a recent Vanson Bourne survey of more than 900 senior IT decision-makers, 87 percent said their organization’s data is fragmented, making management difficult and raising fears around competitive threats, compliance risks, job losses and missed revenue opportunities.

To get the most value from their data, organizations need to find a way to break down these data silos and eliminate the need to continually copy and move data around. The Cohesity DataPlatform solution creates an elegant way to accomplish those goals. Rather than moving copies of data into separate application infrastructures, DataPlatform brings compute to the data.

One Platform for Data and Apps

DataPlatform is a hyper-converged appliance that delivers both storage and compute capacity. Based on a unique distributed file system called SpanFS, the scale-out solution allows organizations to run applications on the same platform that houses massive volumes of backup and unstructured data. It works by creating an abstraction layer that sits between physical data sources and the apps that use that data. Apps can access data within each of the data nodes that form the building blocks of DataPlatform.

SpanFS effectively combines all secondary data, including backups, files, objects, test/development and analytics data. It supports industry-standard file and object protocols such as NFS, SMB and S3 and performs global deduplication, storage tiering, global indexing and search. Data is dynamically balanced across all nodes, and individual nodes can be added or removed to adjust capacity or performance with no downtime.  
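
Because the platform exposes standard protocols, existing S3 tooling can be pointed at it. Below is a sketch using boto3 against a hypothetical Cohesity S3 endpoint; the endpoint URL, bucket name, object key and credentials are placeholders, not documented values:

```python
import boto3

# Hypothetical S3-compatible endpoint exposed by the platform;
# endpoint, bucket name and credentials are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://cohesity-cluster.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List a few objects in a backup view, then fetch one of them.
resp = s3.list_objects_v2(Bucket="analytics-view", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

s3.download_file("analytics-view", "reports/2019-03.csv", "/tmp/2019-03.csv")
```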

There are numerous advantages to this design. By removing the need to use separate infrastructure for running applications, DataPlatform eliminates data silos and reduces data fragmentation. It reduces complexity by allowing organizations to manage everything from a single user interface. It advances business-critical goals around compliance, eDiscovery and security by enabling key apps to look holistically across all backup and unstructured data.

Easy App Access

The Cohesity Marketplace makes it easy to download and use Cohesity-developed applications as well as a variety of third-party apps for conducting granular analysis, ensuring data compliance, improving security and more. Third-party apps available through the marketplace include Splunk Enterprise for indexing and analyzing data, SentinelOne and Clam antivirus solutions, and Imanis Data, a machine-learning-based data management platform.

Cohesity-developed applications include Insight, which enables interactive text search across all data nodes, and Spotlight, a security tool for monitoring any modifications to file data. The Cohesity EasyScript application streamlines the complex and time-consuming process of uploading, executing and managing scripts by providing convenient access to all script creation elements.

How valuable is your data? That’s obviously difficult to say with certainty. PwC analysts have estimated that, in the financial sector alone, the revenue from commercializing data is about $300 billion per year. However, data is only valuable if you understand where it is, what it is and how to use it. By eliminating data silos and reducing fragmentation, Cohesity DataPlatform provides a framework for fully realizing the value of your data assets.


Getting Hybrid IT Right

It wasn’t that long ago that many analysts and experts envisioned a day when organizations would move their entire technology infrastructure into the cloud in order to gain nearly limitless capacity with almost no management overhead. That, of course, hasn’t happened. Most companies realized fairly quickly they’d need to keep sensitive data and mission-critical applications in-house.

Instead, most have shifted to a hybrid IT model with a mix of both cloud-based and on-premises platforms and services — and that isn’t going to change any time soon. IBM predicts that within three years, 98 percent of organizations will be using multiple public and private clouds that connect not only to on-premises systems but to other clouds as well.

At the Gartner Symposium/ITxpo in Dubai earlier this month, senior analyst Santhosh Rao told the audience that hybrid IT is becoming the standard because it enables organizations to “extend beyond their data centers and into cloud services across multiple platforms.” Gartner says this is being driven by the management, storage and security requirements of emerging technologies such as artificial intelligence, the Internet of Things and blockchain.

Benefits and Challenges

The hybrid approach is appealing because it delivers the cost optimization, flexibility, scalability and elasticity of the cloud along with the control, security and reliability of on-premises infrastructure. Nevertheless, there are undeniable adoption and management challenges.

Management becomes more complex in a hybrid multi-cloud environment due to the various standards and configurations of different providers. Integrating cross-platform services can also be difficult, as can connecting on-premises applications with cloud resources such as backup and file-syncing solutions.

Cost is another issue. Although cloud usage is known to reduce capital spending on equipment, many organizations experience sticker shock due to cloud sprawl. Uncontrolled growth of cloud-based resources can push subscription and management expenses beyond expectations.

Given these potential drawbacks, it’s a good idea to partner with an IT solutions provider with specific expertise in the development of multi-cloud and hybrid cloud solutions. Such a partner can offer guidance on optimizing resources for deployment across multiple operating environments.

An experienced provider such as Converge will start by conducting a thorough assessment to identify and prioritize which applications and workloads can be easily migrated to the cloud and which should stay in-house. An assessment will also help determine which cloud model is most suitable, based on cost, application requirements and business objectives.

Enabling Effective Management

Of course, some key applications such as business management software suites and transaction processing apps may need to operate across a variety of environments. That will require cloud orchestration tools to manage the integration and interaction of workloads. However, IBM notes that only 30 percent of organizations using multiple clouds have a multi-cloud orchestrator or management platform that can choreograph workloads.

That’s one reason why Converge’s solution architects employ IBM’s cloud management framework to help customers deploy applications and associated datasets across multiple clouds. In addition to automating orchestration and provisioning, it delivers multi-cloud brokering through a self-service dashboard that allows organizations to choose services across different clouds for a range of use cases.

A hybrid IT environment involving multiple public and private clouds as well as traditional on-premises services is becoming the new normal for IT operations. It offers significant cost and efficiency benefits, but it can be challenging to get right. Our team of engineers and solution architects can help you evaluate your current environment and develop a plan for implementing and managing hybrid IT services.


Why You Should Consider Hybrid Cloud Storage

According to RightScale’s most recent State of the Cloud survey, 96 percent of organizations now use the cloud in some form or fashion. However, most also remain fully committed to keeping some mission-critical workloads on-premises. That’s why hybrid cloud is widely viewed as the logical end game for most organizations.

As the name implies, a hybrid cloud orchestrates a mix of public cloud, private cloud and on-premises services. In a 2018 Microsoft survey of 1,700 IT pros and managers, 67 percent said they are now using or planning to deploy a hybrid cloud. More than half said they had made the move within the previous two years.

With different workloads running in different environments, organizations have to make sure that everything is properly aligned with storage. That’s why a hybrid cloud strategy also requires a hybrid cloud storage strategy. In the Microsoft survey, 71 percent said the top use case for hybrid cloud is to control where important data is stored.

Hybrid storage solutions support block, file and object storage. A policy engine decides whether data should be stored on-premises or in a public or private cloud. Hybrid cloud storage is often used to enable “cloud bursting” for specific workloads that run in-house most of the time but may periodically require the added capacity of a public cloud.

A hybrid storage platform allows organizations to take advantage of the flexibility and scalability of the cloud while maintaining the control of on-premises data storage. Resources are integrated so data can move between platforms as needed to optimize cost, performance and data protection. For example, data that is frequently accessed may be stored on-premises until it becomes inactive. At that point, it is automatically moved to a cloud storage tier for archival.
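
A simplified sketch of that kind of policy decision appears below; the 90-day inactivity rule, the local mount point and the stub-and-migrate behavior described in the comment are illustrative assumptions, not a specific product's policy engine:

```python
import os
import time

INACTIVE_DAYS = 90  # assumed policy: archive data untouched for 90 days
LOCAL_TIER = "/mnt/onprem-share"

def find_archive_candidates(root, inactive_days=INACTIVE_DAYS):
    """Yield files whose last access time exceeds the inactivity threshold."""
    cutoff = time.time() - inactive_days * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:
                yield path

for path in find_archive_candidates(LOCAL_TIER):
    # A real policy engine would transparently migrate the file to a
    # cloud tier (for example via an S3 upload) and leave a stub behind.
    print("would tier to cloud:", path)
```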

The availability of a widely accepted interface makes hybrid cloud storage feasible. The Amazon Simple Storage Service (S3) API has become the de facto standard for cloud storage and is supported by a growing number of object storage platforms. Storage managers can use a common set of tools and view on-premises and cloud resources as a single pool of storage.

The integration of on-premises storage with one or more cloud platforms creates unprecedented scalability and efficiency in the face of rapid data growth. The hybrid model reduces data silos and simplifies management with a single namespace and a single view — no matter where the data originated or where it resides. Further, the ability to mix and match capacity across platforms opens up a range of deployment options. Here are four valuable use cases:

Disaster recovery. Syncing, tiering and backing up data to the public cloud can enable immediate data availability in the event of a disaster. It can also limit your exposure by shrinking your on-premises storage footprint.

File sharing. By enabling logically consistent access to shared file resources, hybrid storage makes it easier to share large files among dispersed locations.

Primary storage. Hybrid combines the security and performance of an on-premises solution with the flexibility and scalability of the cloud while encrypting data flows from one site to the other.

Analytics. With hybrid storage, you can run transactional workloads onsite or in a private cloud, but then extract that data and load it into a public cloud for analysis.


Storage has always been one of the more logical use cases for the cloud. By some accounts, more than half of all data now being created is flowing into the cloud. According to Enterprise Storage Forum’s 2018 Data Storage Trends survey, cloud storage has now surpassed the hard drive as the top budget line item in IT storage spending.

Is hybrid cloud storage right for your organization? Our solution architects can work with you to assess your current environment and determine if a hybrid approach meets your business requirements.


Cloud Governance Is Key to Reducing Waste, Risk and Costs

According to RightScale’s 2018 State of the Cloud Survey, 85 percent of enterprise organizations now have a multi-cloud strategy, with companies reporting that they use nearly five different clouds on average. While the multi-cloud strategy has undeniable benefits, there is a risk of “cloud sprawl” that could lead to wasted resources, rising costs and security challenges.

To reduce risk, organizations should implement a cloud governance program. This is different from cloud management, which is meant to ensure the efficient performance of cloud resources. Governance involves maintaining control of the cloud environment through the creation of rules, policies and processes that guide the deployment and use of clouds.

The ease of provisioning often leads to cloud sprawl. With just a few mouse clicks, employees can quickly deploy cloud services to help them do their jobs. IT deployment processes seem hopelessly slow and bureaucratic by contrast.

However, unauthorized cloud provisioning creates a number of business risks. Data scattered across various platforms with no central oversight increases the risk of data loss or data leakage. Deployed without IT’s knowledge or consent, rogue clouds increase the risk of duplication of services, inadequate protection and unnecessary spending.

Gartner analysts project that overall cloud spending will reach $206.2 billion this year — but $14.1 billion will be wasted on clouds that are either no longer in use or have overprovisioned capacity. Another survey finds that cloud-managed infrastructure is woefully underused, with an average CPU utilization of less than 5 percent.

The sad part is that this waste is entirely avoidable. Cloud sprawl almost always occurs because organizations simply haven’t done a good job of establishing guidelines and procedures for individuals provisioning cloud resources on their own.

As risk and expenses mount, expect organizations to establish formal cloud governance guidelines. According to RightScale, more enterprises are creating “centers of excellence” to focus on cloud governance — 57 percent reported they have already established a cloud governance team, and another 24 percent said they plan to do so soon. These central teams are focusing on planning which applications to move to the cloud (69 percent), optimizing costs (64 percent) and setting cloud policies (60 percent).

Forrester recommends creating a cross-functional governance team of representatives from departments across the organization. This group can provide an overarching view of operations and identify common practices and requirements. In conjunction with IT staff, the governance team can use that broad information to help identify cloud requirements, establish provisioning standards and define usage best practices.

The team’s first job will be to assess the environment to determine how many cloud applications and services are actually being used, who is using them and how they were provisioned. This will be important for identifying cloud resources that may be either unused, underutilized or duplicated.
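
As one small example of that assessment step, a governance team might script a first-pass inventory of compute resources and their recent utilization. The sketch below assumes an AWS environment with boto3 credentials already configured, an "Owner" tag convention and a 5 percent CPU threshold, all of which are illustrative choices:

```python
from datetime import datetime, timedelta
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

# Flag running instances whose average CPU over the past week is under 5 percent.
for reservation in ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=datetime.utcnow() - timedelta(days=7),
            EndTime=datetime.utcnow(),
            Period=3600,
            Statistics=["Average"],
        )
        points = [p["Average"] for p in stats["Datapoints"]]
        avg_cpu = sum(points) / len(points) if points else 0.0
        if avg_cpu < 5.0:
            owner = next((t["Value"] for t in instance.get("Tags", [])
                          if t["Key"] == "Owner"), "untagged")
            print(f"{instance_id}: {avg_cpu:.1f}% avg CPU, owner={owner}")
```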

After identifying all cloud assets, the team should work on establishing guidelines for what applications and workloads are appropriate for the cloud, how and when they should be migrated, what security measures should be developed, and who should have administrative privileges. Ideally, administrative access should be limited to very few people in order to maintain central control of updates, configuration changes and new service requests.

For many organizations, the path to effective cloud governance will be impeded by a lack of resources and skill sets. Partnering with an experienced solution provider can help fill these gaps and bring much-needed expertise.

If you’re looking for ways to get a better handle on your cloud usage, give us a call. Our team of solution architects can help you evaluate your current environment and work with you to ensure your resources are properly aligned with business requirements.


CIOs Are Grappling with the Challenges of Accelerating Digital Transformation

For the past several years, digital transformation has been the top priority of businesses across a wide range of industries. The term refers to the strategic optimization of business processes and models in a way that takes full advantage of technology to better serve customers, spur innovation and create competitive advantages. While digital transformation necessarily cuts across organizational functions, the chief information officer (CIO) plays a critical role in driving these initiatives.

According to Gartner’s 2019 CIO Agenda, 49 percent of the 3,000 CIOs surveyed said that their organizations have already changed their business models or are in the process of changing them. Among the top performers, representing 40 percent of survey respondents, 99 percent said that IT is “very or extremely important to business model change.”

What’s more, digital transformation is evolving from concept to application with a rapid increase in scale. The major driver for scale is the intent to increase consumer engagement via digital channels. As a result, Gartner recommends that CIOs focus on capabilities that enable more consumers to perform more activities using digital channels, and to create an IT environment that can respond quickly to changes in consumer demand.

As the pace of transformation picks up, it can accelerate the failure of businesses that are either unable or unwilling to transform. Startups using disruptive technologies and business models pose a serious threat to organizations using legacy infrastructure that can’t support modern business applications or analyze large data volumes. Right out of the gate, these startups are in a better technological position than well-established competitors.

How do organizations plan to keep up? The CIOs surveyed by Gartner said that disruptive technologies will play a major role in reshaping business models. Asked which technologies they expect to be most disruptive, most CIOs mentioned artificial intelligence (AI), with data analytics moving to second place. More than a third (37 percent) said they have already deployed AI technology or that deployment was planned in the short term.

It’s important to remember, however, that digital transformation is not a particular technology but a new approach to IT. To be successful, CIOs must retool existing infrastructure to reduce operational overhead while incorporating disruptive technologies such as AI. The decisions CIOs make today will have a dramatic impact on their organizations’ competitive posture in the years to come.

Ronaldo Möntmann, CIO Advisor for Converge Technology Solutions, understands this all too well. He says CIOs are facing pressure to align technology investments with their organizations’ strategic goals while ensuring a positive return on investment. To do so, CIOs must develop a sound IT strategy and be able to understand and articulate the role of disruptive technologies in revolutionizing the business. At the same time, CIOs must ensure the reliability of existing infrastructure and streamline IT operations.

With years of experience as an IT executive in the healthcare industry, Möntmann has joined Converge to lead the firm’s new CIO Advisory Service and provide guidance and support to CIOs in the digital transformation journey. He will serve as coach, content provider and business advisor, offering clients a unique blend of strategic and operational acumen and technical competency.

As the pace of digital transformation accelerates, organizations face increasing pressure to become more agile, customer-centric and efficient. This cannot be accomplished by simply implementing new technology. Digital transformation happens by changing how you do business in order to fully leverage technology to achieve business goals. Through its new CIO Advisory Service, Converge is offering CIOs an experienced guide to help drive their digital transformation initiatives.


Mainframes Are Alive and Well – But What about Disaster Recovery?

The first mainframe was introduced in 1943, weighed about five tons and filled an entire room. Not many computer technologies can match the staying power of mainframes, which have evolved quite a bit but continue to power many of the world’s largest organizations today.

In fact, a recent survey from BMC found that 92 percent of executives and IT professionals believe the mainframe is very much a viable, long-term computing platform for their organization. That 92 percent figure is the highest since 2013 as organizations work to scale and modernize their mainframe applications and operations to support increased demand for speed and efficiency.

Many organizations rely on mainframes because they offer the reliability and security required to run mission-critical processes, as well as the computing power to process thousands of transactions per second and billions of transactions per year. Nearly half of respondents are using DevOps practices in their mainframe environment. The top benefits of a mainframe environment, according to the survey, are availability, security, a centralized data serving environment, and transaction throughput.

As important and relevant as mainframes continue to be, there are still challenges associated with these systems. One of the reasons why mainframes need to be modernized is the fact that IT professionals with mainframe expertise are aging or have already retired. For example, most younger programmers and developers don’t know how to work with mainframe applications in the COBOL language. In addition to a growing skills gap, organizations are struggling to keep up with hardware and software upgrades.

Also, it’s difficult to set up a disaster recovery solution to work with a mainframe environment. You’re not just going to buy another mainframe and install it in a remote, secondary data center so you can restore your data and applications if the main system goes down. Hosted solutions can help, particularly for disaster recovery. With a hosted solution, you always have access to a cloud provider’s mainframe environment instead of having to build, manage, maintain and upgrade your own. And you can scale services up or down as needed.

The Converge hosting solution offers resiliency and recovery options for all major mainframe platforms and midrange systems. With multiple data centers across the country, managed by a team with enterprise mainframe expertise, we can make sure your data is secure and available when you need it. Organizations in manufacturing, government, healthcare, finance and other sectors that use mainframes rely on Converge to maintain business continuity.

Our Disaster Recovery-as-a-Service offering provides cost-effective replication management and recovery options that accelerate recovery times while protecting your data. You also have the flexibility to implement a hybrid disaster recovery solution, which allows you to combine your tools with our vast equipment and engineering capabilities. With more than 75 engineers holding over 500 technical certifications combined, we can bridge the mainframe skills gap that keeps organizations from modernizing their mainframe environments. Let us show you how Converge hosted disaster recovery services can work with your mainframe system without the cost and complexity of purchasing and maintaining a separate mainframe for the same purpose.
