
How the Pure Evergreen Storage Service Makes Flash Upgrades Simpler and Cost-Efficient

According to a new report from ResearchAndMarkets, the all-flash storage market will triple in size from $5.9 billion in 2018 to $17.8 billion by 2023, growing at an annual rate of 29.75 percent. Organizations are adopting all-flash storage solutions to meet growing demand for higher performance, constant uptime and energy efficiency.

Expanding use of flash in the data center for artificial intelligence applications, big data analytics, the Internet of Things and virtualization is also expected to drive increased adoption over the next few years. These technologies require a modern infrastructure that is capable of storing and processing large volumes of data to keep up with heavy workload demands.

In fact, many organizations are not only implementing flash but also looking to upgrade older flash arrays to newer technologies such as non-volatile memory express (NVMe). NVMe is a protocol that accelerates access to data on high-speed flash media, delivering lower latency and greater parallelism than flash arrays built on legacy storage protocols. However, IT departments may have difficulty justifying investments in NVMe to replace all-flash arrays that are only a few years old.

The Evergreen Storage Service was developed by Pure Storage to help organizations future-proof their storage environment and overcome the expense and disruption of the traditional forklift upgrade model. The idea behind Evergreen Storage is to enable in-place upgrades that allow you to modernize and expand your storage as needed – including controllers, external host and internal array connectivity, and flash arrays.

Pure’s approach protects your investment by allowing you to upgrade to next-generation technology without repurchasing hardware or software licenses. Maintenance and support costs remain the same. Essentially, Evergreen Storage allows you to take advantage of a cloud-like model in which software and hardware are continuously updated without large capital investments, data migrations or downtime.

Pay-per-use on-premises storage allows you to keep your storage infrastructure fresh, modern and worry-free through generations. It also extends the life of Pure’s existing Right-Size Guarantee as capacity requirements change. The Right-Size Guarantee ensures that you don’t purchase too much or too little storage capacity.

A Total Economic Impact study from Forrester offers insights into the benefits, costs and risks associated with an investment in a Pure Evergreen Storage Gold subscription. For midmarket organizations, the analysis points to risk-adjusted benefits of $1,705,119 over six years and operating costs of $721,476. This translates to a net present value of $983,643 and a risk-adjusted ROI of 136 percent. Forrester also projects average savings of 33 percent in capital expenses and subscription fees compared with a forklift upgrade. When you factor in reductions in environmental, management and migration costs, the average cost savings jump to more than 50 percent.
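
For readers who want to see how those headline numbers fit together, here is a quick back-of-the-envelope check using only the figures quoted above (Forrester's full model applies its own discounting and risk adjustments before arriving at these totals):

    # Risk-adjusted totals quoted from the Forrester study above
    benefits = 1_705_119   # six-year benefits, USD
    costs = 721_476        # six-year operating costs, USD

    net_present_value = benefits - costs   # 983,643
    roi = net_present_value / costs        # ~1.36, i.e. 136 percent

    print(f"NPV: ${net_present_value:,}")
    print(f"Risk-adjusted ROI: {roi:.0%}")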

Pure’s Evergreen Storage Service can be up and running in a matter of days, providing you with a scalable, on-premises storage solution that’s purpose-built for flash with tier-one, enterprise-grade capabilities. Let us show you how Evergreen Storage allows you to take full advantage of the latest flash storage technology as needed without the high cost, risk and disruption of a conventional upgrade.


Gain Visibility and Value from Data with Cohesity DataPlatform

Data is the coin of the realm for modern business, providing insights that help companies make better decisions, understand markets, boost sales, improve processes and optimize costs. In fact, there are very few business decisions that aren’t influenced by data these days.

Finding and using that data is becoming increasingly difficult, however.

Organizations of all shapes and sizes tend to take a “save everything” mentality when it comes to data, storing everything just in case they might need it later. In addition, these data stores are being continually copied for applications such as backup, disaster recovery, development, testing and analytics. IDC estimates that as much as 60 percent of all stored data consists of copies.

Over time, this practice creates mass data fragmentation with data spread across myriad infrastructure silos, preventing organizations from fully extracting its value. In a recent Vanson Bourne survey of more than 900 senior IT decision-makers, 87 percent said their organization’s data is fragmented, making management difficult and raising fears around competitive threats, compliance risks, job losses and missed revenue opportunities.

To get the most value from their data, organizations need to find a way to break down these data silos and eliminate the need to continually copy and move data around. The Cohesity DataPlatform solution creates an elegant way to accomplish those goals. Rather than moving copies of data into separate application infrastructures, DataPlatform brings compute to the data.

One Platform for Data and Apps

DataPlatform is a hyper-converged appliance that delivers both storage and compute capacity. Based on a unique distributed file system called SpanFS, the scale-out solution allows organizations to run applications on the same platform that houses massive volumes of backup and unstructured data. It works by creating an abstraction layer that sits between physical data sources and the apps that use that data. Apps can access data within each of the data nodes that form the building blocks of DataPlatform.

SpanFS effectively combines all secondary data, including backups, files, objects, test/development and analytics data. It supports industry-standard file and object protocols such as NFS, SMB and S3 and performs global deduplication, storage tiering, global indexing and search. Data is dynamically balanced across all nodes, and individual nodes can be added or removed to adjust capacity or performance with no downtime.  
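
To make the deduplication idea concrete, here is a simplified sketch of how content hashing lets a platform store each unique chunk of data only once. It is purely illustrative and not Cohesity’s actual implementation; SpanFS uses more sophisticated chunking and distributes its metadata and chunk index across nodes.

    import hashlib

    # Illustrative global deduplication: split data into fixed-size chunks, hash
    # each chunk, and store a chunk only if that hash has not been seen before.
    CHUNK_SIZE = 4 * 1024          # 4 KB chunks for this example

    chunk_store = {}               # hash -> the single stored copy of the chunk
    object_index = {}              # object name -> ordered list of chunk hashes

    def write_object(name, data):
        hashes = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            chunk_store.setdefault(digest, chunk)   # deduplicated write
            hashes.append(digest)
        object_index[name] = hashes

    payload = b"example backup data " * 1000
    write_object("backup-monday", payload)
    write_object("backup-tuesday", payload)         # identical data adds no new chunks

    print(f"objects: {len(object_index)}, unique chunks stored: {len(chunk_store)}")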

There are numerous advantages to this design. By removing the need to use separate infrastructure for running applications, DataPlatform eliminates data silos and reduces data fragmentation. It reduces complexity by allowing organizations to manage everything from a single user interface. It advances business-critical goals around compliance, eDiscovery and security by enabling key apps to look holistically across all backup and unstructured data.

Easy App Access

The Cohesity Marketplace makes it easy to download and use Cohesity-developed applications as well as a variety of third-party apps for conducting granular analysis, ensuring data compliance, improving security and more. Third-party apps available through the marketplace include Splunk Enterprise for indexing and analyzing data, the SentinelOne and ClamAV antivirus solutions, and Imanis Data, a machine-learning-based data management platform.

Cohesity-developed applications include Insight, which enables interactive text search across all data nodes, and Spotlight, a security tool for monitoring any modifications to file data. The Cohesity EasyScript application streamlines the complex and time-consuming process of uploading, executing and managing scripts by providing convenient access to all script creation elements.

How valuable is your data? That’s obviously difficult to say with certainty. PwC analysts have estimated that, in the financial sector alone, the revenue from commercializing data is about $300 billion per year. However, data is only valuable if you understand where it is, what it is and how to use it. By eliminating data silos and reducing fragmentation, Cohesity DataPlatform provides a framework for fully realizing the value of your data assets.


Getting Hybrid IT Right

It wasn’t that long ago that many analysts and experts envisioned a day when organizations would move their entire technology infrastructure into the cloud in order to gain nearly limitless capacity with almost no management overhead. That, of course, hasn’t happened. Most companies realized fairly quickly they’d need to keep sensitive data and mission-critical applications in-house.

Instead, most have shifted to a hybrid IT model with a mix of both cloud-based and on-premises platforms and services — and that isn’t going to change any time soon. IBM predicts that within three years, 98 percent of organizations will be using multiple public and private clouds that connect not only to on-premises systems but to other clouds as well.

At the Gartner Symposium/ITxpo in Dubai earlier this month, senior analyst Santhosh Rao told the audience that hybrid IT is becoming the standard because it enables organizations to “extend beyond their data centers and into cloud services across multiple platforms.” Gartner says this is being driven by the management, storage and security requirements of emerging technologies such as artificial intelligence, the Internet of Things and blockchain.

Benefits and Challenges

The hybrid approach is appealing because it delivers the cost optimization, flexibility, scalability and elasticity of the cloud along with the control, security and reliability of on-premises infrastructure. Nevertheless, there are undeniable adoption and management challenges.

Management becomes more complex in a hybrid multi-cloud environment due to the various standards and configurations of different providers. Integrating cross-platform services can also be difficult, as can connecting on-premises applications with cloud resources such as backup and file-syncing solutions.

Cost is another issue. Although cloud usage is known to reduce capital spending on equipment, many organizations experience sticker shock due to cloud sprawl. Uncontrolled growth of cloud-based resources can push subscription and management expenses beyond expectations.

Given these potential drawbacks, it’s a good idea to partner with an IT solutions provider with specific expertise in the development of multi-cloud and hybrid cloud solutions. Such a partner can offer guidance on optimizing resources for deployment across multiple operating environments.

An experienced provider such as Converge will start by conducting a thorough assessment to identify and prioritize which applications and workloads can be easily migrated to the cloud and which should stay in-house. An assessment will also help determine which cloud model is most suitable, based on cost, application requirements and business objectives.

Enabling Effective Management

Of course, some key applications such as business management software suites and transaction processing apps may need to operate across a variety of environments. That will require cloud orchestration tools to manage the integration and interaction of workloads. However, IBM notes that only 30 percent of organizations using multiple clouds have a multi-cloud orchestrator or management platform that can choreograph workloads.

That’s one reason why Converge’s solution architects employ IBM’s cloud management framework to help customers deploy applications and associated datasets across multiple clouds. In addition to automating orchestration and provisioning, it delivers multi-cloud brokering through a self-service dashboard that allows organizations to choose services across different clouds for a range of use cases.

A hybrid IT environment involving multiple public and private clouds as well as traditional on-premises services is becoming the new normal for IT operations. It offers significant cost and efficiency benefits, but it can be challenging to get right. Our team of engineers and solution architects can help you evaluate your current environment and develop a plan for implementing and managing hybrid IT services.


Why You Should Consider Hybrid Cloud Storage

According to RightScale’s most recent State of the Cloud survey, 96 percent of organizations now use the cloud in some form or fashion. However, most also remain fully committed to keeping some mission-critical workloads on-premises. That’s why hybrid cloud is widely viewed as the logical end game for most organizations.

As the name implies, a hybrid cloud orchestrates a mix of public cloud, private cloud and on-premises services. In a 2018 Microsoft survey of 1,700 IT pros and managers, 67 percent said they are now using or planning to deploy a hybrid cloud. More than half said they had made the move within the previous two years.

With different workloads running in different environments, organizations have to make sure that everything is properly aligned with storage. That’s why a hybrid cloud strategy also requires a hybrid cloud storage strategy. In the Microsoft survey, 71 percent said the top use case for hybrid cloud is to control where important data is stored.

Hybrid storage solutions support block, file and object storage. A policy engine decides whether data should be stored on-premises or in a public or private cloud. Hybrid cloud storage is often used to enable “cloud bursting” for specific workloads that run in-house most of the time but may periodically require the added capacity of a public cloud.

A hybrid storage platform allows organizations to take advantage of the flexibility and scalability of the cloud while maintaining the control of on-premises data storage. Resources are integrated so data can move between platforms as needed to optimize cost, performance and data protection. For example, data that is frequently accessed may be stored on-premises until it becomes inactive. At that point, it is automatically moved to a cloud storage tier for archival.
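
The policy logic itself can be quite simple. The sketch below shows the kind of rule a hybrid platform’s policy engine applies, using a hypothetical migrate callback to stand in for the platform’s actual tiering mechanism:

    from datetime import datetime, timedelta, timezone

    # Sample placement rule: anything untouched for more than 90 days moves to a
    # cloud archive tier. "migrate" stands in for the platform's real tiering call.
    INACTIVITY_THRESHOLD = timedelta(days=90)

    def apply_tiering_policy(objects, migrate, now=None):
        """objects: dicts with 'key' and 'last_access' (timezone-aware datetimes)."""
        now = now or datetime.now(timezone.utc)
        moved = []
        for obj in objects:
            if now - obj["last_access"] > INACTIVITY_THRESHOLD:
                migrate(obj["key"])        # e.g. copy to cloud storage, release local copy
                moved.append(obj["key"])
        return moved

    catalog = [
        {"key": "q1-report.pdf", "last_access": datetime(2019, 1, 2, tzinfo=timezone.utc)},
        {"key": "orders-db.bak", "last_access": datetime.now(timezone.utc)},
    ]
    print(apply_tiering_policy(catalog, migrate=lambda key: print(f"archiving {key}")))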

The availability of a widely accepted interface makes hybrid cloud storage feasible. The Amazon Simple Storage Service (S3) API has become the de facto standard for cloud storage and is also supported by a number of on-premises object storage platforms. Storage managers can use a common set of tools and view on-premises and cloud resources as a single pool of storage.
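
Because both public cloud providers and S3-compatible on-premises platforms speak the same API, the same client code can target either; only the endpoint and credentials change. Here is a minimal sketch using the boto3 library (the endpoint URL and credentials below are placeholders):

    import boto3

    # On-premises S3-compatible object store (placeholder endpoint and credentials)
    onprem = boto3.client(
        "s3",
        endpoint_url="https://objects.example.internal",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Public cloud, using credentials from the standard AWS configuration
    aws = boto3.client("s3")

    def list_keys(client, bucket):
        """List object keys in a bucket through the standard S3 API."""
        response = client.list_objects_v2(Bucket=bucket)
        return [item["Key"] for item in response.get("Contents", [])]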

The integration of on-premises storage with one or more cloud platforms creates unprecedented scalability and efficiency in the face of rapid data growth. The hybrid model reduces data silos and simplifies management with a single namespace and a single view — no matter where the data originated or where it resides. Further, the ability to mix and match capacity across platforms opens up a range of deployment options. Here are four valuable use cases:

Disaster recovery. Syncing, tiering and backing up data to the public cloud can enable immediate data availability in the event of a disaster. It can also limit your exposure by shrinking your on-premises storage footprint.

File sharing. By enabling logically consistent access to shared file resources, hybrid storage makes it easier to share large files among dispersed locations.

Primary storage. Hybrid combines the security and performance of an on-premises solution with the flexibility and scalability of the cloud while encrypting data flows from one site to the other.

Analytics. With hybrid storage, you can run transactional workloads onsite or in a private cloud, but then extract that data and load it into a public cloud for analysis.


Storage has always been one of the more logical use cases for the cloud. By some accounts, more than half of all data now being created is flowing into the cloud. According to Enterprise Storage Forum’s 2018 Data Storage Trends survey, cloud storage has now surpassed the hard drive as the top budget line item in IT storage spending.

Is hybrid cloud storage right for your organization? Our solution architects can work with you to assess your current environment and determine if a hybrid approach meets your business requirements.


Cloud Governance Is Key to Reducing Waste, Risk and Costs

According to RightScale’s 2018 State of the Cloud Survey, 85 percent of enterprise organizations now have a multi-cloud strategy, with companies reporting that they use nearly five different clouds on average. While the multi-cloud strategy has undeniable benefits, there is a risk of “cloud sprawl” that could lead to wasted resources, rising costs and security challenges.

To reduce risk, organizations should implement a cloud governance program. This is different from cloud management, which is meant to ensure the efficient performance of cloud resources. Governance involves maintaining control of the cloud environment through the creation of rules, policies and processes that guide the deployment and use of cloud services.

The ease of provisioning often leads to cloud sprawl. With just a few mouse clicks, employees can quickly deploy cloud services to help them do their jobs. IT deployment processes seem hopelessly slow and bureaucratic by contrast.

However, unauthorized cloud provisioning creates a number of business risks. Data scattered across various platforms with no central oversight increases the risk of data loss or data leakage. Deployed without IT’s knowledge or consent, rogue clouds increase the risk of duplication of services, inadequate protection and unnecessary spending.

Gartner analysts project that overall cloud spending will reach $206.2 billion this year — but $14.1 billion will be wasted on clouds that are either no longer in use or have overprovisioned capacity. Another survey finds that cloud-managed infrastructure is woefully underused, with an average CPU utilization of less than 5 percent.

The sad part is that this waste is entirely avoidable. Cloud sprawl almost always occurs because organizations simply haven’t done a good job of establishing guidelines and procedures for individuals provisioning cloud resources on their own.

As risk and expenses mount, expect organizations to establish formal cloud governance guidelines. According to RightScale, more enterprises are creating “centers of excellence” to focus on cloud governance — 57 percent reported they have already established a cloud governance team, and another 24 percent said they plan to do so soon. These central teams are focusing on planning which applications to move to the cloud (69 percent), optimizing costs (64 percent) and setting cloud policies (60 percent).

Forrester recommends creating a cross-functional governance team of representatives from departments across the organization. This group can provide an overarching view of operations and identify common practices and requirements. In conjunction with IT staff, the governance team can use that broad information to help identify cloud requirements, establish provisioning standards and define usage best practices.

The team’s first job will be to assess the environment to determine how many cloud applications and services are actually being used, who is using them and how they were provisioned. This will be important for identifying cloud resources that may be unused, underutilized or duplicated.

After identifying all cloud assets, the team should work on establishing guidelines for what applications and workloads are appropriate for the cloud, how and when they should be migrated, what security measures should be developed, and who should have administrative privileges. Ideally, administrative access should be limited to very few people in order to maintain central control of updates, configuration changes and new service requests.

For many organizations, the path to effective cloud governance will be impeded by a lack of resources and skill sets. Partnering with an experienced solution provider can help fill these gaps and bring much-needed expertise.

If you’re looking for ways to get a better handle on your cloud usage, give us a call. Our team of solution architects can help you evaluate your current environment and work with you to ensure your resources are properly aligned with business requirements.


CIOs Are Grappling with the Challenges of Accelerating Digital Transformation

For the past several years, digital transformation has been the top priority of businesses across a wide range of industries. The term refers to the strategic optimization of business processes and models in a way that takes full advantage of technology to better serve customers, spur innovation and create competitive advantages. While digital transformation necessarily cuts across organizational functions, the chief information officer (CIO) plays a critical role in driving these initiatives.

According to Gartner’s 2019 CIO Agenda, 49 percent of the 3,000 CIOs surveyed said that their organizations have already changed their business models or are in the process of changing them. Among the top performers, representing 40 percent of survey respondents, 99 percent said that IT is “very or extremely important to business model change.”

What’s more, digital transformation is evolving from concept to application with a rapid increase in scale. The major driver for scale is the intent to increase consumer engagement via digital channels. As a result, Gartner recommends that CIOs focus on capabilities that enable more consumers to perform more activities using digital channels, and to create an IT environment that can respond quickly to changes in consumer demand.

As the pace of transformation picks up, it can accelerate the failure of businesses that are either unable or unwilling to transform. Startups using disruptive technologies and business models pose a serious threat to organizations using legacy infrastructure that can’t support modern business applications or analyze large data volumes. Right out of the gate, these startups are in a better technological position than well-established competitors.

How do organizations plan to keep up? The CIOs surveyed by Gartner said that disruptive technologies will play a major role in reshaping business models. Asked which technologies they expect to be most disruptive, most CIOs mentioned artificial intelligence (AI), with data analytics moving to second place. More than a third (37 percent) said they have already deployed AI technology or that deployment was planned in the short term.

It’s important to remember, however, that digital transformation is not a particular technology but a new approach to IT. To be successful, CIOs must retool existing infrastructure to reduce operational overhead while incorporating disruptive technologies such as AI. The decisions CIOs make today will have a dramatic impact on their organizations’ competitive posture in the years to come.

Ronaldo Möntmann, CIO Advisor for Converge Technology Solutions, understands this all too well. He says CIOs are facing pressure to align technology investments with their organizations’ strategic goals while ensuring a positive return on investment. To do so, CIOs must develop a sound IT strategy and be able to understand and articulate the role of disruptive technologies in revolutionizing the business. At the same time, CIOs must ensure the reliability of existing infrastructure and streamline IT operations.

With years of experience as an IT executive in the healthcare industry, Möntmann has joined Converge to lead the firm’s new CIO Advisory Service and provide guidance and support to CIOs on the digital transformation journey. He will serve as coach, content provider and business advisor, offering clients a unique blend of strategic and operational acumen and technical competency.

As the pace of digital transformation accelerates, organizations face increasing pressure to become more agile, customer-centric and efficient. This cannot be accomplished by simply implementing new technology. Digital transformation happens by changing how you do business in order to fully leverage technology to achieve business goals. Through its new CIO Advisory Service, Converge is offering CIOs an experienced guide to help drive their digital transformation initiatives.


Mainframes Are Alive and Well – But What about Disaster Recovery?

The first mainframe was introduced in 1943, weighed about five tons and filled an entire room. Not many computer technologies can match the staying power of mainframes, which have evolved quite a bit but continue to power many of the world’s largest organizations today.

In fact, a recent survey from BMC found that 92 percent of executives and IT professionals believe the mainframe is very much a viable, long-term computing platform for their organization. That 92 percent figure is the highest since 2013 as organizations work to scale and modernize their mainframe applications and operations to support increased demand for speed and efficiency.

Many organizations rely on mainframes because they offer the reliability and security required to run mission-critical processes, as well as the computing power to process thousands of transactions per second and billions of transactions per year. Nearly half of respondents are using DevOps practices in their mainframe environment. The top benefits of a mainframe environment, according to the survey, are availability, security, a centralized data serving environment, and transaction throughput.

As important and relevant as mainframes continue to be, there are still challenges associated with these systems. One of the reasons why mainframes need to be modernized is the fact that IT professionals with mainframe expertise are aging or have already retired. For example, most younger programmers and developers don’t know how to work with mainframe applications written in COBOL. In addition to a growing skills gap, organizations are struggling to keep up with hardware and software upgrades.

Also, it’s difficult to set up a disaster recovery solution to work with a mainframe environment. You’re not just going to buy another mainframe and install it in a remote, secondary data center so you can restore your data and applications if the main system goes down. Hosted solutions can help, particularly for disaster recovery. With a hosted solution, you always have access to a cloud provider’s mainframe environment instead of having to build, manage, maintain and upgrade your own. And you can scale services up or down as needed.

The Converge hosting solution offers resiliency and recovery options for all major mainframe platforms and midrange systems. With multiple data centers across the country, managed by a team with enterprise mainframe expertise, we can make sure your data is secure and available when you need it. Organizations in manufacturing, government, healthcare, finance and other sectors that use mainframes rely on Converge to maintain business continuity.

Our Disaster Recovery-as-a-Service offering provides cost-effective replication management and recovery options that accelerate recovery times while protecting your data. You also have the flexibility to implement a hybrid disaster recovery solution, which allows you to combine your tools with our vast equipment and engineering capabilities. With more than 75 engineers holding more than 500 technical certifications combined, we can bridge the mainframe skills gap that keeps organizations from modernizing their mainframe environments. Let us show you how Converge hosted disaster recovery services can work with your mainframe system without the cost and complexity of purchasing and maintaining a separate mainframe for the same purpose.


Machine Learning Has Become Critical to Effective Cybersecurity

Cybersecurity has always been something of a cat-and-mouse game, with security experts constantly implementing new measures and hackers finding new weaknesses to exploit. Machine learning is making it possible to breed a smarter cat — but the mouse is getting smarter, too.

In a recent survey conducted by Wakefield Research, 95 percent of IT security professionals said that machine learning has become a critical component of an effective cybersecurity strategy. Machine learning is a form of artificial intelligence (AI) that automates the building of analytical models. It enables computer systems to use data to continually improve on their ability to perform specific tasks by learning from experience rather than being explicitly programmed.

AI and machine learning have become practical in recent years thanks to processors capable of performing all the necessary calculations and cloud platforms that provide near-infinite data storage. Some common machine learning applications include image and speech recognition, medical diagnosis, and trading systems in the financial sector.

Humans are much smarter than machines but we aren’t very good at processing large volumes of data. That has always been a hindrance to effective cybersecurity, which involves the collection and analysis of massive amounts of data from system logs and user activity. The sheer number of alerts generated by many cybersecurity tools is enough to overwhelm human analysts.

But that’s where machines excel. Machine learning makes it possible to quickly perform pattern recognition, anomaly detection and predictive analytics to identify potential threats and weed out false positives. This cuts down on the “noise” so humans can focus on the most serious threats.
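
As a simple illustration of the anomaly detection piece, the sketch below trains scikit-learn’s IsolationForest on synthetic “normal” activity features and flags outliers for a human analyst. Real deployments engineer these features from SIEM or network flow data; the numbers here are made up.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Synthetic baseline of normal activity: [logins/hour, KB transferred, distinct destinations]
    rng = np.random.default_rng(0)
    normal_activity = rng.normal(loc=[20, 500, 5], scale=[5, 100, 2], size=(1000, 3))

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(normal_activity)

    new_events = np.array([
        [22, 480, 4],        # routine behavior
        [300, 90000, 120],   # burst of activity worth escalating to an analyst
    ])
    print(model.predict(new_events))   # 1 = looks normal, -1 = flagged as anomalous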

That’s the future envisioned by respondents to the Wakefield survey. Overall, 99 percent of U.S. cybersecurity professionals believe AI could improve their organization’s cybersecurity, particularly when it comes to time-critical threat detection tasks. Eighty-seven percent report that their organization is currently using AI as part of their cybersecurity strategy, and 97 percent say their organization plans to increase budget for AI and machine learning tools within the next three years. Three-quarters believe that, within the next three years, their company will not be able to safeguard digital assets without AI.

However, 91 percent are concerned about hackers using AI against companies. In fact, cybercriminals are beginning to use AI and machine learning to develop more advanced threats. As they continue to innovate, organizations will have to get creative to stay ahead of them.

There are a number of things to keep in mind if you plan to incorporate AI and machine learning into your cybersecurity strategy. First, you should recognize that these technologies cannot replace humans. Machine learning requires human training and oversight. The machines must be taught what is bad, what is good, and when to flag unknown threats to humans.

Second, let the machines solve the simpler problems, so human experts have more time to think of new ways to solve more complex problems. Deploy AI and machine learning technologies to automate and speed up security operations and repetitive tasks.

And, finally, accept that your systems will be compromised at some point even if you implement machine learning. Rather than viewing this as a net-negative, organizations can learn from breaches by analyzing normal and abnormal network behavior to gain a greater understanding of threats and how to respond.

Hackers are employing advanced AI tools to launch more sophisticated cyberattacks. To stay ahead in this cat-and-mouse game, organizations should begin incorporating machine learning into their cybersecurity strategies.


How to Prepare Your Data Center for Blockchain

Starting in September 2019, Walmart and its Sam’s Club division will require suppliers of fresh, leafy green vegetables to use blockchain to trace products back to the farm. Walmart is expected to expand the requirement to other produce suppliers in the near future with a goal of speeding up product recalls and the response to food scares.

Blockchain, the digital ledger technology that underlies cryptocurrencies such as Bitcoin, has been widely hyped. Although there has been significant investment in the cryptocurrency realm, many organizations still have serious reservations about moving forward with blockchain.
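
Setting aside the hype, the “digital ledger” itself is a simple idea: each block records a cryptographic hash of the block before it, so altering any past entry breaks the chain from that point forward. Here is a minimal sketch, without the consensus and networking layers that real blockchain platforms add:

    import hashlib, json, time

    def make_block(data, prev_hash):
        """Create a block whose hash covers its contents and its predecessor's hash."""
        block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    def verify(chain):
        """Re-hash every block and confirm each one points at its predecessor."""
        for prev, current in zip(chain, chain[1:]):
            payload = {k: current[k] for k in ("timestamp", "data", "prev_hash")}
            recomputed = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
            if current["prev_hash"] != prev["hash"] or current["hash"] != recomputed:
                return False
        return True

    chain = [make_block("genesis", prev_hash="0")]
    chain.append(make_block("pallet 1842 shipped from farm A", chain[-1]["hash"]))
    chain.append(make_block("pallet 1842 received at DC 7", chain[-1]["hash"]))
    print(verify(chain))             # True
    chain[1]["data"] = "tampered"    # rewrite history...
    print(verify(chain))             # False: the ledger no longer validates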

According to the 2018 Global Blockchain Survey from PwC, 48 percent of organizations are unsure about how blockchain will be regulated, and 45 percent don’t fully trust the technology. From a technical perspective, 44 percent are concerned about how to integrate multiple networks, and 29 percent point to potential scalability issues. The ability to quickly process a high number of transactions has been a strong selling point for blockchain, but many organizations are unsure if their data center environments can handle it.

Despite these reservations, PwC found that 84 percent of organizations are using blockchain in some capacity. However, a separate report from Greenwich Associates found that most organizations are struggling with blockchain adoption, with 57 percent of surveyed executives saying implementation has been harder than expected.

The scalability concerns cited in the PwC study have been borne out for 42 percent of respondents to the Greenwich Associates survey. Although many companies had not yet put their blockchain solutions into production, those that had were dealing with very slow transaction speeds.

Despite these issues, the blockchain market is expected to grow from $708 million in 2017 to $60.7 billion by 2024 and to disrupt virtually every industry. Because blockchain has significant storage and compute requirements, experts expect it to have a major impact on the data center.

There are steps you can take to prepare your data center for the increased demand created by blockchain. For example, high-density servers can deliver the compute power and capacity to support blockchain applications in a small footprint. However, organizations should take a more sophisticated approach to capacity management.

It’s not as simple as making sure each server has enough capacity to support its workload. You have to determine where each workload should run and automatically adapt when necessary to maximize performance and resource utilization. To do this, you need intelligent tools to carefully analyze and forecast demand and plan capacity accordingly.
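
As a toy example of that kind of forecasting, the sketch below fits a linear trend to a year of synthetic storage utilization figures and estimates when the trend line crosses total capacity. Production capacity-planning tools use far richer models, but the underlying question is the same:

    import numpy as np

    months = np.arange(12)
    used_tb = np.array([40, 42, 45, 47, 51, 54, 58, 61, 66, 70, 75, 81], dtype=float)
    capacity_tb = 120.0

    slope, intercept = np.polyfit(months, used_tb, 1)    # TB added per month, starting point
    months_until_full = (capacity_tb - intercept) / slope

    print(f"growth rate: {slope:.1f} TB/month")
    print(f"projected to hit capacity around month {months_until_full:.0f}")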

The implementation of digital twins – software versions of physical assets – can provide deeper insights into your infrastructure. Digital twins can help you more accurately predict and optimize performance and test potential changes against a virtual model before rolling them out to production. This makes it possible to make data-driven decisions to update your data center in a way that supports blockchain with less risk.

Blockchain has generated plenty of buzz. Most organizations have dipped their toe in the water, and some corporate giants are pushing their suppliers to adopt blockchain sooner rather than later. But plenty of question marks remain, and significant planning is required to avoid the pitfalls that have doomed some early implementations. Let us help you evaluate your data center and recommend the necessary changes to take full advantage of blockchain.


Cloud Definition Embraces Value-Added Functionality

Seven years ago, the National Institute of Standards and Technology (NIST) published the 16th and final version of its Definition of Cloud Computing. NIST Special Publication 800-145 is designed to help federal agencies and private-sector companies identify services that are most likely to deliver the cost savings and agility promised by cloud computing. The definition has proved remarkably resilient as cloud services continue to expand their capabilities with value-added features.

Peter Mell and Tim Grance, authors of the NIST definition, described cloud computing as having these five essential characteristics:

  • On-demand self-service. Customers can provision computing capabilities as needed without human interaction with the service provider.
  • Broad network access. Capabilities are available over the network and can be accessed by a broad range of devices using standard mechanisms.
  • Resource pooling. Cloud resources are pooled and dynamically assigned and reassigned according to customer demand.
  • Rapid elasticity. Capabilities can be rapidly provisioned and quickly scaled up or down to meet changing requirements.
  • Measured service. Cloud usage can be monitored, controlled and reported, providing transparency for both the provider and customer.

One of the most remarkable things about this definition is what’s missing from it. Nowhere does it describe the cloud as “somebody else’s computers” — a notion that persists even among some IT folks. Nothing in the definition says it’s limited to compute and storage capacity, software development platforms, or Software-as-a-Service applications. It can encompass any IT service that meets the five criteria.

Today, cloud providers are offering a wide range of products and services that transcend compute and storage capacity. Some of these are what I call “utilities” — foundational IT services that are an essential part of any data center infrastructure.

For example, backup/recovery and replication services are increasingly incorporated into cloud platforms, enabling organizations to create end-to-end data protection solutions that are compatible with the enterprise IT environment. These services support on-premises, hybrid and cloud-native applications and allow you to set backup frequency and other parameters for each individual workload. They often include integrated monitoring and management tools.
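
In practice, those per-workload settings boil down to a small policy map. The structure below is purely illustrative (the field names are hypothetical, not any particular provider’s API), but it shows the kind of parameters these services expose:

    # Hypothetical per-workload backup policies; field names are illustrative only.
    backup_policies = {
        "erp-database":  {"frequency_hours": 1,   "retention_days": 35, "replicate_offsite": True},
        "file-shares":   {"frequency_hours": 24,  "retention_days": 90, "replicate_offsite": True},
        "dev-sandboxes": {"frequency_hours": 168, "retention_days": 14, "replicate_offsite": False},
    }

    def backup_due(policy, hours_since_last_backup):
        """Return True when a workload's last backup is older than its policy allows."""
        return hours_since_last_backup >= policy["frequency_hours"]

    for workload, policy in backup_policies.items():
        status = "due" if backup_due(policy, hours_since_last_backup=26) else "ok"
        print(workload, status)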

Cloud providers are also offering an array of security and regulatory compliance features. These include identity and access management, single sign-on, and directory services that authenticate users and control access to cloud resources. Web application firewalls, distributed denial of service (DDoS) protection and threat detection services help to defend against external attacks. Encryption key management and automatic rotation of credentials and other “secrets” further boost security.

The key takeaway here is that the cloud is looking less like basic IT resources in someone else’s data center and more like a fully functioning IT environment. The reason is partly economic — with excess compute and storage capacity on the market, cloud providers need to offer value-added services so they’re not competing solely on price. Cloud providers also recognize the need to offer a full suite of services to help reduce complexity and streamline operational tasks.

In developing their cloud definition, Mell and Grance stipulated that cloud computing is an evolving technology and that attributes and characteristics will continue to change over time. However, the definition remains highly accurate even as cloud services have evolved. The “utility” functionality now increasingly offered as part of cloud platforms is helping to deliver on the promise of cloud computing.
