
Top Announcements from Microsoft Inspire: Azure, Teams, and Security

Things looked a little different for this year’s annual Microsoft Inspire event, a conference where partners around the world gather to discuss all things related to the well-known company. Although the event was held virtually this year, the experience remained the same – amazing content, presentations, keynotes, and new announcements made it just as good as last year’s. In fact, Microsoft rolled out so many new features and developments that it became tricky to know what to pay attention to. So, for your convenience, we took care of the hard part and summed up the top announcements from this year’s Inspire event!

 
Azure: Adjusting to Rapidly Changing Conditions

Microsoft continues to invest in hybrid offerings, announcing new Azure hybrid, migration, data, and developer capabilities to help improve the journey to the cloud.

Azure HCI – Hybrid Cloud Offering: Microsoft launched the latest member of the Microsoft hybrid portfolio – Azure Stack HCI – which combines the price-performance of hyperconverged infrastructure (HCI) with native Azure hybrid capabilities. Learn more here.

Azure Migrate: It is more important than ever for companies to be able to act, innovate, and pivot IT operations quickly to fit the needs of the business. Microsoft shared new Azure Migrate enhancements to help customers with their datacenter assessments. Learn more here.

 
Teams: The Communication Platform that Keeps Growing

When the pandemic hit earlier this year, many organizations quickly implemented Microsoft Teams as part of their work from home strategy. At Converge, we are helping many such organizations adhere to their data governance and compliance policies while leveraging the power of the Teams platform.

At Inspire, there were new announcements around tighter integration with the Power Platform, and our engineers are already working to build out applications and workflows in a secure and compliant manner.

Teams Rooms Premium: Microsoft released Teams Rooms Premium, delivering premium meeting room experiences with a cloud-based IT service that provides 24/7 proactive management. Learn more here!

Power BI: Microsoft recently announced a tighter integration with the Power Platform, which includes Power BI and Power Apps. With new abilities to design and deploy chatbots, automation, and access to different layers of data, business leaders have their work cut out for them. They need to understand these new capabilities and make sure they are right for their users and their businesses. It is a lot to expect average users to become low-code application experts and data integrators. Automation and timely access to the right information can benefit almost everyone when deployed properly. Learn more here.

 
Security & Compliance: More Important Than Ever

As more organizations shift to the cloud, having a strong security strategy is more important than ever. Tools like Azure Sentinel, Microsoft Threat Protection, Microsoft Information Protection, and Azure Active Directory are sweet spots for security on Azure. Furthermore, the M365 suite allows for high collaboration and productivity while having a strong foundation of security embedded within it.

Endpoint DLP: Microsoft has announced that Endpoint DLP is coming to tenants with “Microsoft 365 E5/A5, Microsoft 365 E5/A5 Compliance, and Microsoft 365 E5/A5 Information Protection and Governance” subscriptions. Endpoint DLP, another Microsoft Information Protection technology, offers protection for data on devices when end users take certain actions.

Microsoft explained that “Endpoint DLP is native to Windows 10 and the new Microsoft Edge browser.” The benefit of this built-in aspect is that there’s no additional software to install. Learn more here.

Microsoft is committed to cost savings – you’re not going to find a better, more cost-effective security product with a bigger or more agile company behind it.

 
Enable better collaboration, integration, and performance with Microsoft technology solutions.

For a more comprehensive overview of the changes that Microsoft announced, check out the list of Microsoft Inspire updates in their 2020 Book of News.

The amount of announcements can be overwhelming – let’s discuss how these changes can affect your Microsoft investment and your business vision. Get started.

Eric Grace

Five Questions Every CEO Should Ask About Cyber Risks

Cyber risks are evolving quickly these days. For every organization, creating a risk-aware culture is one of the most essential security practices. In the same way that CEOs focus on their organization’s financial and market position, it’s important that they understand their company’s security posture and gaps to effectively guide their organizations and create value.

That’s because a strong security posture can demonstrate a company’s commitment to due diligence, strengthen its reputation, and instill confidence in its customers. At the same time, it can mitigate more tangible risks related to security breaches, such as operational downtime, data loss, and negative financial impacts.

With this in mind, there are five key questions that CEOs need to answer about their cybersecurity risk posture. In each of these areas, we believe that a consulting partner who can act as a trusted security advisor is a key enabler in answering them effectively.

 

1) How is our executive leadership informed about the current levels and business impact of cyber risks?

There are many ways to stay informed about your company’s security posture and how it stacks up to your industry. For example, workgroups, security conferences, and threat intelligence feeds are all solid tactical sources of research and information. However, if you don’t want to go through all of that research yourself or put your reputation on the line to make the technology decisions, a trusted advisor can be your safety net to help you choose the right security path.

Typically, we find that our clients are placing more and more faith into a trusted advisor to make strategic procurement decisions. By doing this, companies can push some risk onto a third party to make a determination that a technology is sufficient to protect them in a certain control category, such as network or application security. It’s a way of hedging a bet on changing technology by placing the bet on a trusted advisor instead of doing internal competitive analysis and independent research to find the right tool that might fit their tactical needs.

 

2) What is the current level and business impact of cyber risks to our company?

In terms of security, an organization needs to have some kind of litmus test to know what their baseline is. This “baseline” is known as their expected normal or “known good” state. Being able to measure against that baseline regularly is an important tool in understanding a company’s risk posture.

This produces trend analysis and delta reporting to identify what has changed since the last time an assessment was done. Whether your company or a third party does the assessment, being able to understand the differential (or delta) between the previous report results and the current report results should give you an idea of the direction your organization is heading in regarding your maturity: whether you are improving or regressing.
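As a rough illustration of this kind of delta reporting (not a specific Converge tool – the control categories and maturity scores below are hypothetical), a simple comparison of two assessment snapshots might look like this:

# Hypothetical maturity scores (0-5) per control category from two assessments.
previous = {"network": 3.0, "application": 2.5, "identity": 2.0, "endpoint": 3.5}
current  = {"network": 3.5, "application": 2.5, "identity": 1.5, "endpoint": 4.0}

def delta_report(prev: dict, curr: dict) -> None:
    """Print the change (delta) per control category since the last assessment."""
    for category in sorted(prev):
        delta = curr[category] - prev[category]
        trend = "improving" if delta > 0 else "regressing" if delta < 0 else "unchanged"
        print(f"{category:12s} {prev[category]:.1f} -> {curr[category]:.1f}  ({delta:+.1f}, {trend})")

delta_report(previous, current)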

After the baseline is established and there is an understanding of current trends, an organization can prove out the impact of certain findings. In order to validate those findings, an organization can leverage a third party to actually emulate an attack pattern of an adversary. That would be in the form of proactive security assessments including ethical hacking and penetration testing to prove out the impact.

 

3) How does our cybersecurity program apply industry standards and best practices?

An organization’s security responsibility starts with looking at what requirements it has in terms of regulatory compliance. As you expand your security control scope, it may include additional considerations. This is an iterative approach. Once a framework is identified and security controls can be mapped to that framework, you can clearly and consistently see where control gaps exist.

From there, those gaps should be weighed against potential outcomes, and your company should apply controls that are technical in nature to enforce the standards that are in place. The idea is that the policy or standard matches the control requirement and that requirement needs to be technically enforced. Therefore, that final piece—which is the enforcement level—leverages technology to enforce those policies and produces an audit trail that is repeatable, produces evidence, and illustrates due diligence, eliminating doubt and uncertainty.
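To make the idea of mapping controls to a framework concrete, here is a minimal sketch (the requirements and control names are hypothetical, not drawn from any specific standard) of how control gaps can be surfaced once that mapping exists:

# Hypothetical framework requirements mapped to the technical controls that enforce them.
framework_requirements = {
    "encrypt data at rest": ["disk-encryption"],
    "enforce MFA for remote access": ["mfa-vpn", "mfa-webmail"],
    "log and retain admin activity": ["central-logging"],
    "restrict removable media": ["endpoint-dlp"],
}

# Controls currently deployed and technically enforced in the environment.
deployed_controls = {"disk-encryption", "mfa-vpn", "central-logging"}

# A gap exists wherever a requirement has no enforcing control in place.
gaps = {
    requirement: [c for c in controls if c not in deployed_controls]
    for requirement, controls in framework_requirements.items()
    if any(c not in deployed_controls for c in controls)
}

for requirement, missing in gaps.items():
    print(f"GAP: '{requirement}' is not fully enforced; missing controls: {missing}")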

 

4) With so many varied threats coming in weekly, how do we make sense of the noise and prioritize our response?

Instead of focusing on how many (or what type of) cyber incidents your company experiences in a week, a different approach is to understand where patterns are and what common elements can be considered as routine and repeated by the attackers.

This is done using threat modeling, which works backwards from a problem outcome. If the negative outcome is application downtime, the threat model is focused on eliminating downtime on the application. And that means the company can’t have downtime on the web server. Every potential contributing factor to application availability risk (downtime) is analyzed, and the approach goes down the list to mitigate every identified contributing factor.

This contrasts with the traditional enterprise cybersecurity approach, which is the opposite. In that traditional model, a company groups issues looking for the least common denominator, the most critical finding, or the easiest vulnerability to exploit, and then chips away at it until they get to the center of the issue. Threat modeling, however, is a very pointed, specific defensive technique to try and combat or mitigate a very specific attack, maximizing the return on the organization’s investment in cybersecurity.
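As a simple sketch of working backwards from a negative outcome (the contributing factors and mitigations below are illustrative only, not an exhaustive model):

# A threat model expressed as a negative outcome and its contributing factors,
# each paired with the mitigation that addresses it.
threat_model = {
    "outcome": "application downtime",
    "contributing_factors": [
        {"factor": "web server crash under load", "mitigation": "load balancing and autoscaling"},
        {"factor": "DDoS against public endpoint", "mitigation": "upstream DDoS protection"},
        {"factor": "expired TLS certificate", "mitigation": "automated certificate renewal"},
        {"factor": "unpatched server vulnerability", "mitigation": "scheduled patch management"},
    ],
}

# Work down the list and confirm every identified factor has a planned mitigation.
print(f"Outcome to prevent: {threat_model['outcome']}")
for item in threat_model["contributing_factors"]:
    print(f"- {item['factor']}: mitigate with {item['mitigation']}")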

 

5) How comprehensive is our cyber incident response plan and how often is it tested?

In order to gauge the comprehensiveness of your incident response plan, certain elements need to truly be tested with the team that will actually use it. That’s because an incident response policy is great to have, but if it’s not actionable, hasn’t been used recently, or is difficult to locate, it’s not going to do the Computer Security Incident Response Team (CSIRT) much good. Like a fire extinguisher buried at the bottom of a coat closet, it’s not going to be of much use if it’s not in the kitchen where it might be needed immediately.

 

Moving forward

While cyber risks are always evolving, right now is a particularly challenging time as more workforces work from home. This is something that historically a lot of industries—such as the financial sector—have not needed to address in this way. Many companies have made concessions, which has created an expanded threat surface and a lot of low hanging fruit for global attackers.

Though it seems like things are changing every day, a trusted advisor like Converge can help keep you abreast of not just changes in legislation and compliance, but also the technology landscape. This provides the organization’s decision-makers with the frontline insight needed to produce a sound cybersecurity risk management framework that supports and strengthens your organization’s cybersecurity strategy, meets operational goals, and aligns with your corporate mission.

 

Sean Colicchio

Netezza Is Extending Its Comeback with Netezza on Cloud

In my last blog, I discussed the latest news: Netezza is making a comeback on Netezza Performance Server for IBM Cloud Pak for Data System. With Netezza Performance Server, companies can finally run Netezza wherever they want it – on-cloud, on-premises, or on hybrid deployments – with all the benefits and simplicity Netezza users have come to expect and enjoy.

That’s right: Netezza is back in an on-premises format!

We are now able to announce to you that Netezza is available in a public, cloud-based version (first on AWS and soon on Azure). Now Netezza completely fulfills the true hybrid data platform story like no other data platform.

 
On-premises appliance = AWS cloud = Azure Cloud = IBM Cloud

This means that you can get the same great benefits from the Netezza that you know and love with cloud capabilities. For our clients, this may soon become a part of their cloud journeys to get all on-premises workloads to the cloud or part of a flexible solution that handles regular workload demands and then scales and flexes in the cloud when required. It may represent a production Netezza on-premises and a disaster recovery or development instance in the cloud that can be scaled up and down as workloads require. All of the systems are 100% compatible with one another, and workloads and storage can be moved as required using an ever-increasing palette of options to create a seamless experience.

 
Scale up, scale out, and (more importantly) scale down

The new cloud option gives administrators the ability to consume more compute or disk and provides an estimate of the cost of that workload. We envision clients using this feature to scale up and out when their demands increase (month-end, year-end, or critical events in your business year like Black Friday) or use this as a data science platform when large computation workloads are required for a Data Science project.

For those customers who have existing workloads on Netezza from prior generations of the platform, the effort to migrate is as simple as seven lines of code:

nz_migrate \
  -shost <src_host> -thost <tgt_host> \
  -sdb <src_db> -tdb <tgt_db> \
  -t <table1, table2, …> \
  -suser <src_user> -spassword <src_password> \
  -tuser <tgt_user> -tpassword <tgt_password> \
  -cksum fast -genStats Full \
  -TruncateTargetTable YES
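For example, a run against hypothetical source and target systems (all host, database, table, and user names below are placeholders, not real environments) might look like:

nz_migrate -shost nps-onprem.example.com -thost nps-cloud.example.com \
  -sdb SALES_DW -tdb SALES_DW \
  -t "ORDERS, ORDER_LINES, CUSTOMERS" \
  -suser admin -spassword '********' \
  -tuser admin -tpassword '********' \
  -cksum fast -genStats Full -TruncateTargetTable YES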

Whether you’re a former fan of Netezza or you’re in the market for a new storage and compute solution, this is great news. The platform is accessible, rich in features, and easy to implement. Regardless of your organization’s current server environment, the new and improved Netezza is worth looking at. IT advances every day, and keeping a close eye on what’s available is always worth the time.

If this is too much for you to believe, let us know. We have created simple data proofs of concept and proofs of technology for previous clients and can show you how simple this lift can be. Netezza is back – believe it and experience all the benefits and simplicity Netezza users have come to expect and enjoy.

Robb Sinclair

Self-Service Solutions Democratize Data Analytics

In the Big Data era, organizations across all industries and sectors are accumulating huge amounts of qualitative data in search of patterns and insights that can be used to guide corporate decisions and policies. Many such initiatives are being stymied by inadequate data analysis capabilities, however.

Although 94 percent of organizations believe data analytics is important to their business growth, most have not invested enough in the technology and talent needed to effectively utilize their data resources, according to the “2020 Global State of Enterprise Analytics” study conducted by the market research firm Hall & Partners.

The survey of 500 business intelligence and analytics professionals found that most line-of-business employees are “data deprived” because they don’t have access to self-service data analytics tools. Without those tools, 79 percent say they have to ask IT staff for help analyzing data sets. As a result, they say, it takes hours or days to get the information they need, damaging their ability to make informed business decisions in a timely fashion.

Such delays often translate to lost business and missed opportunities. Other studies suggest that U.S. businesses are losing nearly $10 million each year due to poor data analysis capabilities. Meanwhile, employees in some industries waste as much as 50 percent of their time each week trying to find data they need.

Inside the Numbers

Without analytics, big data is just a bunch of numbers. To gain the full advantage of data resources, organizations need to invest in self-service tools that democratize analytics.

Conventional business intelligence and data aggregation solutions usually require an analytics expert to run database queries. To gain data-driven insights at business speed, organizations need self-service analytics that cut out the middleman and allow end-users to access and use data analysis in their day-to-day work.

Harvard Business Review Analytic Services found that organizations enjoyed improved financial performance from their analytics if those tools were widely distributed across the organization. More widespread use of analytics also increased productivity, reduced risks and helped individuals make faster and better decisions. In data-driven companies, employees become more proactive and creative, generating a flow of new ideas. Managers use analytics to test those ideas, deliver feedback and encourage collaboration and innovation.

Converge’s Advanced Analytics

With our advanced analytics solutions, Converge can help customers enhance data-driven decision-making throughout the organization. Working closely with our vendor partners, we can design and implement solutions that leverage a variety of AI-powered techniques such as deep learning and natural language processing, as well as blockchain technology for ensuring data integrity.

Many of our customers have had great results with IBM Cognos Analytics, a state-of-the-art self-service analytics platform. It has many AI-infused features that quickly surface hidden insights, recommend visualizations, and let users interact with their data in natural language.

One of the best characteristics of IBM Cognos Analytics is that it accommodates users of all skill levels. The platform’s Knowledge Discovery Service can be set for either deep or shallow mode, depending on the level of detail required and the user’s expertise. In deep mode, it conducts several types of analysis to capture data characteristics and relationships. Shallow mode is much faster because it analyzes metadata only for a more generalized evaluation.

To create a truly data-driven environment, organizations must have the tools to examine vast stores of data for qualitative information that will enhance the decision-making process. Furthermore, those tools should be available to everyone from executives to front-line workers to help drive faster and more insightful decision making. Give us a call to learn how we can help you improve your data analytics capabilities.

John Flores

Converge TrustBuilder: Why Trust Ecosystems Matter

During our recent Converge TrustBuilder launch, we were frequently asked why trust ecosystems (groups of people and/or organizations that transact digitally in a trustworthy, privacy-preserving context) matter. Trust ecosystems will become a key economic enabler because they address a critical problem that economic participants face: knowing with certainty whether the person or organization you are digitally transacting with is who they purport to be, and whether the information they are sharing about themselves is true and has not been tampered with. That certainty is necessary for businesses to thrive in the world we live in. We need not look too far for examples of why that matters.

Trust Matters

Consider the recent college admissions scandal that saw high-profile celebrities such as Lori Loughlin, Mossimo Giannulli, Felicity Huffman, and investor Douglas Hodge convicted of or pleading guilty to charges related to the falsification of student athletic records. They created false credentials claiming their children were varsity athletes and false evidence of their participation in related activities to successfully usurp admission to respected higher education institutions from others who had legitimately earned that right. This scheme had been active for years and was only discovered because someone offered the information in exchange for leniency in an unrelated case.

Now, consider the same scenario within a trust ecosystem. First, the identity of the people making these claims of athletic achievement would be cryptographically verified to ensure they were whom they purported to be. That was not at issue in the cases above, but it is often a vector for fraud. In addition, the proof of their athletic history and accomplishment would be created and cryptographically signed by the institutions with the authority to present those facts (e.g., their high school, the club they claimed to compete for, a verified athletic association, etc.), whose identity would also be verified. In addition, the proof of their achievements would be tamper-evident. The colleges they applied to could be confident of both the source of the proof and its contents. Furthermore, if they so desired, the student’s identity could have remained anonymous in order to ensure selection was based solely upon the merit of their achievements and other relevant qualifications.
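As a rough sketch of the cryptographic building blocks involved (a generic Ed25519 sign-and-verify example using Python’s cryptography package – not the Converge TrustBuilder implementation, and not the full W3C Verifiable Credentials data model):

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The issuing institution (e.g., an athletic association) holds a signing key;
# within the ecosystem, its public key would itself be discoverable and verified.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

# A claim about the applicant, serialized deterministically before signing.
claim = {"subject": "applicant-123", "credential": "varsity rowing, 2018-2020"}
payload = json.dumps(claim, sort_keys=True).encode()

signature = issuer_key.sign(payload)  # the issuer signs the claim

# A college receiving the credential verifies both the source and the contents;
# any tampering with the payload invalidates the signature.
try:
    issuer_public_key.verify(signature, payload)
    print("Credential verified: issued by the expected authority and unmodified.")
except InvalidSignature:
    print("Credential rejected: tampered with or not from the expected issuer.")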

Similarly, a homeowner, builder, or contractor could use a trust ecosystem to ensure that those working on a home had the requisite qualifications to deliver a sound structure. As a farmer, I might use a trust ecosystem to ensure those working on my farm had the appropriate equipment and training when working with pesticides to avoid potentially deadly outcomes. There are countless circumstances where this framework could be implemented to improve security and prove authenticity.

We can help

Building a trust ecosystem is costly, time-consuming, and requires skilled technologists who specialize in areas such as Blockchain, PKI, and cryptography. The Converge TrustBuilder program utilizes our expertise in those areas and provides a toolkit to enable the rapid creation and deployment of trust ecosystems based upon best practices and industry standards such as those published by the W3C.

Whether you are a government agency looking to eliminate fraudulent claims; reduce processing and validation times; reduce or eliminate manual validation steps; and/or decrease processing costs or a large manufacturer looking to manage networks of third-party implementation, support, and maintenance organizations, Converge TrustBuilder can help.

Bruce Levis

Netezza Is Making a Comeback on Netezza Performance Server for IBM Cloud Pak for Data System

For years, IBM’s Netezza proved to be an excellent option for companies to run advanced analytics on a single data warehouse appliance without needing to set up and configure a traditional data warehouse. When IBM decided to replace Netezza with Db2 Warehouse, some companies didn’t want to make the switch because of its costs, refresh issues, and other reasons.

Eventually, IBM opened the opportunity for companies to swap their Netezza workloads for the cloud-based database offerings of other vendors. These solutions promised the scale and cost benefits of a subscription cloud service instead of the larger upfront investment of the Netezza appliance.

As a result, some companies moved some of their Netezza workloads to these vendors. However, over time, companies often realized that the reality of the costs and management needs for these offerings proved to be greater than anticipated.

For Netezza fans, IBM has now brought an answer to the market. With Netezza Performance Server, companies can finally run Netezza wherever they want it—on-cloud, on-premises, or hybrid deployments—providing all the benefits and simplicity Netezza users grew to enjoy. In the coming years, companies have an important choice to make.

The end-of-life date for Netezza appliances is coming up in 2023. Companies have a few years to make a thoughtful decision about their next move. To help them make an informed decision, this blog series will explore the history of Netezza, its current state in the marketplace, the options for the future, and the benefits and drawbacks of pursuing the different options available on the market.

For a more complete picture of the current state, this first blog will dive deeper into how we got where we are today with Netezza.

 

The then—and the now—of Netezza

For years, many of our clients invested in IBM’s Netezza appliance to take advantage of its simplicity, performance, and scale-out capabilities. A few of our clients also leveraged onboard and push-down data science, which Netezza also provided.

Along the way, all of these clients benefited from Netezza’s features for call-home and self-diagnostics, and IBM’s soup-to-nuts support for hardware replacements, OS managed updates, and database updates.

Some of our clients started with early versions of the appliance and gathered a series of new benefits, such as cache, replication, flash, and encrypted-at-rest drives, over two, three, and even four hardware refreshes.

After many years and many easy and painless refreshes, along came a new branding and approach to IBM’s data warehouse appliances called Sailfish: Db2 Warehouse and Db2 Warehouse in the cloud. This was a very promising opportunity for our Netezza clients to get the basics of Netezza again, plus the benefits of a columnar store, graph, and a growing list of capabilities that we will discuss in future blogs.

 


Unfortunately, the costs, efforts, and pains of refreshing a Netezza box with a Db2 Warehouse were too much for some clients. So, IBM created an opportunity for all of the “me-too” database warehouse products or the “born-in-the-cloud” database warehouse products to take hold.

We followed the lead of our clients, and we developed the skills and selling capacity for each of the newborn, in-the-cloud data warehouses. We also tried our best to convert the Netezza fan base into Db2 fans.

However, this didn’t work. The three technologies were simply not built for the same purpose. The Netezza appliance is made for analytics. Db2 Warehouse is made for data science. The cloud solutions are made for scale and temporal uses. Each are great in their own way, and each have their uses. But at this time, none of them can completely replace the other.

 

The promise and reality of the cloud-based “me-too” Netezza replacements

This time in the marketplace created an opportunity for all of the growth-on-demand, scale-out, JSON-based, and cloud-only solutions for Netezza fans. Our Netezza clients loved the POCs, the enablement, and the simplicity that these “me-too” vendors provided.

They looked for the cost take-outs that these vendors offered—considering that Netezza’s high-performance on-premises compute can be expensive. They looked at the common tasks, such as replicate, that could be simplified with cloud providers. And they increasingly sought the ability to scale up and down as their businesses needed.

Though, like a lot of things in life, if you are great at one thing, you are typically limited somewhere else. And if you are good at everything, you typically are not great at one thing (think SQL Server). Still, some of our clients developed a strategy for data platform take-out with a cloud-only solution. They looked for all of those things, and we helped them see these values.

 


They reviewed the “new,” “transformed,” “modernized” approach, and it made sense: simplify, use what you need when you need it, scale it out, scale it up, drop it, and stop it. They were interested and looked to the “me too” vendors for a solution.

In making the switch, they looked at what they knew about their workload management on Netezza, and what cloud-based vendors asked about their workload on Netezza. From there, the vendors provided a size and cost estimate for replacing these Netezza workloads with the vendor’s offering.

If all of the modernized and digitized approaches of the vendor were followed and the transformation was followed through, these customers had a good cloud-only solution for an aggregate database with some inventive features for replicate and point-in-time/time-series analysis.

If the client’s data teams had years of habits built on relational databases and Netezza, and the transformation could not be completed or was years in the making, the client had to either find secondary options for cold storage and data-ride-along (like S3), or try the other “me-too” data warehouse in the cloud.

 

And then there were the costs…

The promise of getting the performance of an expensive appliance from a use-what-you-need, cloud-based solution for pennies on the dollar is compelling. But like a lot of things, it’s important to read the fine print.

The baseline workloads were not analyzed in depth to truly understand “what would this workload take in the cloud?” Those clients were interviewed by “me-too” vendors about what they knew and could provide. To the vendors’ questions, our clients’ response would often sound something like this: “We have 100 managed reports.”

However, their best guesses missed a long list of unmanaged queries, ad-hoc queries, “because-I-need-it” report queries, data science workloads, and messy, poorly written queries. The reality was that Netezza did a great job of abstracting away the complexities today’s enterprises have in their data – it hid a lot of sins.

As one can imagine, a metered cloud-based solution sized for a minimum workload began to show the signs of this reality. It quickly started consuming the cloud credits that clients had paid for, along with a growing amount of compute, storage, and pipeline capacity needed to load data from on-premises sources through ETL and to feed reports from the data in the cloud.

 

An abbreviated history of “me too” Netezza replacements

As we’ve seen, the promise of cloud-based Netezza workload replacements didn’t always meet our clients’ expectations going in. Let’s boil things down into an abbreviated timeline of how things usually went down. If the engagement were a week, this is what it would look like:

  • Day 1: Everything was great. The 100 managed and curated reports were fast enough and, having been “transformed,” were optimized for new data pipelines.

  • Day 2: The old “must-haves” and “I-just-need-it” reports exposed the reality that there were reports and queries consuming a lot of resources and credits.

  • Day 3: The greater team began to build new reports and queries, and not all of them were great, which took up even more resources and credits.

  • Day 4: The client’s team realized that humans fall back into habits, and perhaps Netezza had spoiled them with its “set and forget” capabilities. As it turns out, the scale-up always happened, but the scale-down, stop, and pause rarely did.

  • Day 5: The client began looking around for tools, policies, operations, and best practices. A lot of these were not readily available—since innovation, speed, dynamics, and growth in sales had been the approach.

  • Day 6: IBM released a containerized version of Netezza for Cloud Pak for Data, called NPS or Netezza Performance Server.

  • Day 7: We stopped, looked around, and began to realize that we were solving yesterday’s problem (aggregate data) with today’s answer (cloud), and that what we should be doing is building, today in 2020, a platform we will be supporting through 2030.

 

This series will dive deeper into each of these topics in detail. Please join me on this journey to learn, discuss, and review where we were, where we are, where we are headed, and where we should aim our efforts going forward.

Robb Sinclair

The Evolution of the Silicon Forest: Technology in the Pacific Northwest

Tech is synonymous with Portland. Well, to be honest, so are beer, wine, hipsters, environmentalism, and activism, but tech goes back farther than any of those. How far back, you might ask? Portland tech reaches deep into the past, before the first transistor was even invented, as hard as that might be to imagine. Technology in Portland goes back as far as the 1940s with the founding of Tektronix and Electro Scientific Industries. Essentially, Portland was doing tech long before there was a tech scene.

Oregon, specifically the Portland Metro area, has been a major center of technology employment since the 1970s. In 1974, Intel acquired its first property in Oregon and then quickly expanded. Today, Intel has several facilities in Oregon, making it one of the state’s largest for-profit private employers.

The Portland area was nicknamed the Silicon Forest in the 1980s with Intel being a major contributor. Following in the footsteps of their predecessors, many new startups and spinoff companies jumped on the technology bandwagon by bringing new technology and innovations into the state, helping to solidify the Silicon Forest nickname.

Oregon has become a popular location for several large datacenters, including Google in The Dalles, Facebook near Prineville, and Amazon near Boardman OR with a fulfillment center in Troutdale. Microsoft and Hewlett-Packard, along with Tektronix and Electro Scientific Industries, are also some of the tech giants who have made a large impact within the Pacific Northwest. Today, the Portland tech scene is booming with new startups and large datacenters flocking to our state as we embrace innovation and welcome new technology trends. Portland’s Silicon Forest has become a major player in the technology industry, standing side-by-side with California’s Silicon Valley and Seattle’s IT bustle.

Technology continues to evolve and shape the future within the Pacific Northwest with a large focus on Artificial Intelligence (AI), Machine Learning (ML), Big Data, Cloud Computing, Cybersecurity, Blockchain, and the Internet of Things (IoT), to name a few specifics. Cutting-edge technology is also on the rise; IT leaders in the region are experimenting with Data-Driven Healthcare with Predictive Analytics; Hyperautomation with AI and ML; and Quantum Computing, along with Virtual Reality, Augmented Reality, Robotic Process Automation (RPA), and Edge Computing.

Nordisk Systems has been a part of the Portland tech scene since its founding in 1983. Originally, the organization focused on datacenter infrastructure and end-user computing, but we’ve since expanded our capabilities with an emphasis on Cloud Computing, Cyber Security, Digital Infrastructure, Advanced Analytics, Digital Transformation, and Talent Acquisition while still providing the exceptional support that our customers have come to rely on. The tech industry has developed at a lightning pace, and Nordisk Systems has risen to the challenge by developing expertise in and incorporating new technologies with our custom integrated solutions.

Last year, Nordisk became a member of the Converge Technology Solutions Corp. family of companies. Becoming a part of the Converge family helped fuel our continued growth by expanding our capabilities and resources. Notably, we gained a diversified talent base, which allows us to offer top quality solutions and support to our customers.

Nordisk Systems is committed to assisting our customers in overcoming their technology challenges. Our exceptional staff and long-standing partner relationships have been the key to designing and delivering creative, successful, and cost-effective solutions that enable our customers to focus on what they do best.

Nordisk looks forward to continuing to serve the customers in the Pacific Northwest, Southwest, Midwest, and even the East Coast with an ever-expanding, never complete array of solutions focused on maximizing customer satisfaction and operational success.

Amy Barnes & Alex Paschal

Leveraging Lexington: A Business Hotspot

Lexington, Kentucky was founded in 1775, seventeen years before Kentucky became a state. William McConnell and a group of frontier explorers were camped at a natural spring when word came from nearby Fort Boonesborough that the first battle of the American Revolution had been fought in Lexington, Massachusetts. In honor of that battle, the group named their site “Lexington”. By 1820, Lexington, Kentucky, was one of the largest and wealthiest towns west of the Allegheny Mountains. So cultured was its lifestyle, the city soon gained the nickname “Athens of the West.”

Many years later, after numerous moves across the United States while working with IBM, one of the principals of SIS selected Lexington, Kentucky as the company’s headquarters location. Although Lexington is best known for southern culture, tobacco, bourbon, and world-class thoroughbred breeding and racing, SIS chose this city for its strategic proximity to several major metropolitan areas in the surrounding region. This has enabled SIS to expand as a technology leader in these adjacent cities over the past 25 years.

Today, Lexington is recognized as one of the top-rated cities in the United States in which to raise a family and conduct business. The University of Kentucky, UK Medical Center, and Transylvania University provide a foundation of education, research, and innovation for businesses to leverage in growing regional and national companies. Plus, we all know UK’s rich tradition as a basketball icon and powerhouse in college sports! Additionally, Lexington sits at the hub of spokes reaching out to other great cities in the region. The richness of these nearby communities and states provides growth opportunities SIS has capitalized on in the past and continues to rely on for significant growth in the future.

The majority of SIS’ clients are in the Ohio Valley region and, in addition to Lexington, include cities like Louisville, Cincinnati, Indianapolis, and Detroit. Each of these places has had a major influence on how we have assembled our sales and technical teams in order to support the diverse geography and client bases we are so fortunate to partner with. Relentless effort to always place the customer first and be immediately responsive to their needs has allowed SIS to build a business in this region that is foundationally solid and includes vendor relationships with major technology providers. We continue to support our clients in Lexington and beyond with technical advisory, advanced & modern managed services, data center hardware, and best-of-breed software to deliver solutions that meet the demands and needs of the markets we serve.

What a journey and privilege it has been to see the technology sector grow and work with the great businesses & people in Lexington, the Ohio Valley Region, and across the great USA.

Steve Sigg

Boston’s Tech Industry: A City of Firsts

The city of Boston, one of the oldest cities in America, is filled with history and takes pride in being a city of firsts. Boston Common was the first public park in 1634, Revere Beach was the first public beach in 1896, Boston built the first subway in the US in 1897, and, yes, we had the first Dunkin’ Donuts in 1948. Known as Beantown, Boston is famous for our traffic, poorly portrayed accents in movies/TV, and the Boston Marathon, and, of course, it’s the city of champions due to the unprecedented success of our sports teams.

Boston is the home to some of the top education, healthcare, and financial services organizations in the world. We are a city focused on technological advances and education. Boston’s focus on higher education has helped drive the rise in technology, fueled by the 128 tech corridor, and we have a history of successful start-ups – remember Polaroid cameras? That was us. Boston is also home to some of the leading financial services and insurance companies in the world.

Another first – the first American Lighthouse at Little Brewster Island was built in Boston in 1716. Lighthouse Computer Services Inc. is proud to call some of the top colleges/universities, healthcare, financial services, and insurance organizations our clients. Although Lighthouse Computer Services is located in Lincoln, RI, between Boston and Providence, our office serves as the anchor for the New England region. Our people live throughout the six New England states, and we are a regionally-focused organization that has been providing exceptional services since 1995.

Why put our base here? It’s pretty simple, really. We like smart people. Boston is the home to the first college in North America (Harvard University was founded in 1636!). There are more than 100 colleges and universities in Boston and over 250 in New England, and they rank among the best in the world! In fact, over one-third of Boston’s population are college students. Needless to say, Boston has some ‘wicked smaht’ people here, and these colleges and universities have provided Lighthouse with access to the top talent in the industry.

Presently, Lighthouse is doing its best to help students during the pandemic. We are helping higher education institutions change the manner in which their students interact with their professors and classmates by providing collaboration solutions such as Microsoft Teams and Cisco Webex to ensure the learning continues! Our mixed team of veterans and young professionals positions us well to meet our clients’ evolving challenges.

Furthermore, Boston is home to some of the most prestigious hospitals in the world and a deep bio-tech community. These organizations have delivered many medical firsts such as the first successful human-organ transplant and groundbreaking heart stent technology. Today, Lighthouse is helping healthcare providers and payers ensure their patients continue to receive the best healthcare in the world by providing cloud-based digital infrastructure.

Opening day is right around the corner, see you at Fenway!!

Eric Grace

Telemedicine Is Playing a Critical Role in Healthcare during Pandemic

Telemedicine technologies are seeing a surge in usage due to the COVID-19 pandemic. Providers in some states have reported that call volumes have increased more than 500 percent in recent weeks as coronavirus cases have spread.

As the name implies, telemedicine uses video conferencing and other collaboration tools to enable real-time remote consultations between healthcare providers and patients. While the technology has limitations compared to in-person diagnostics, it allows clinicians to evaluate patients while relieving the strain on hospitals and reducing the spread of the coronavirus.

To be effective, telemedicine requires more than a video conferencing setup. The system must provide high-quality voice and video communications and protections for patient privacy. Converge Technology Solutions is now offering a Virtual Care Technology (VCT) solution that provides agile and adaptable communications and meets stringent security and regulatory requirements.

Many Benefits

Although telemedicine was first used in the 1990s, it did not see widespread adoption due to technological limitations. Many of the solutions available at that time did not provide real-time, high-definition video conferencing. Each party had to capture and store video and forward it to the other party, creating uneven transmissions as well as security and privacy concerns.

Telemedicine has been steadily moving into the mainstream thanks to advances in communication and collaboration technologies, and cloud-based solutions have made these capabilities more accessible. According to an American Medical Association study, telemedicine usage has been growing at more than 50 percent per year.

Cost savings, increased efficiency and improved access to healthcare services have been touted as the primary benefits of telemedicine. However, the ability to conduct patient evaluations remotely has been the focus for telemedicine usage during the COVID-19 pandemic. Telemedicine helps protect staff from infection while reducing the need for masks, gloves, gowns and other protective gear that is in short supply.

In addition to providing a means for screening potential coronavirus patients and assessing their symptoms, telemedicine helps providers keep tabs on regular patients who have chronic health problems. Video consultations enable providers to discuss any new symptoms and ensure that these patients are taking their medications and following their care plans.

Proven Technology

The Converge VCT solution is based upon proven Cisco technology that enables 24×7 communication and monitoring of non-hospitalized patients using both traditional desktop systems and mobile devices. Hospital systems, community health services, nursing homes and other healthcare providers can also use VCT’s two-way audio/video collaboration tools to:

  • Provide prompt evaluations of new patients
  • Perform regular check-ins with patients
  • Continuously monitor patients
  • Respond to urgent situations
  • Provide a home health option where isolation is medically necessary
  • Increase collaboration among facilities, clinicians and first responders

The solution can also be used by organizations in other key industries that need highly reliable voice, video and collaboration technologies. It is agile and adaptable, with the ability to scale as needed to support high demand for remote work capabilities.

Telemedicine is providing relief to overburdened healthcare systems by offering an alternative to traditional in-person medical appointments. Converge is enabling the delivery of an effective telemedicine solution that gives organizations the communication and collaboration tools they need to handle escalating patient loads while conserving resources and protecting staff.

John Flores