Preparing for the jump to hyperscale

ITWeb
01 Oct 2019


The ability to gain insight from vast and ever-growing repositories of data can potentially be a source of significant competitive advantage. However, this kind of analytics requires an entirely new approach to infrastructure. As a result, big data analytics is driving the accelerated adoption of hyperscale computing, which delivers scalability on demand to meet growing workloads.

The challenge with hyperscale is that big data can create big problems if it is not effectively managed, and these problems are multiplied as the volumes of data increase. Data management and effective governance are therefore more critical than ever as organisations prepare to make the move to hyperscale computing.

The most obvious benefit of hyperscale, and the one that is driving its adoption, is the ability to leverage on-demand scaling in the cloud. However, this ability is also a notable risk. On-demand scalability has the potential to spiral out of control, resulting in rapidly escalating costs and rampant bandwidth usage. Indiscriminately moving and storing all data in the cloud because hyperscale makes it possible is not necessarily a sound business decision.

Without some way of controlling and managing data, capacity requirements will become a financial drain on organisations.

In addition, indiscriminate storage of all data makes analytics harder since it becomes impossible to understand where business value might lie. Data quality is therefore also an important requirement in the hyperscale model.

When it comes to moving data at scale into the cloud, organisations need to ask themselves: 'How do we do this in a reliable way?', 'How can we secure the data sharing agreements governing the data in the cloud?' and 'How do we manage these processes?'

Handling big data analytics in the cloud also requires new skill sets, depending on the exact functions that are migrated. For example, if an organisation leverages hosted services, this reduces the need for infrastructure management and low-level technical skills, but increases the need for higher-level data engineers. These data engineers must be able to move data into the cloud, keep it synchronised, build multiple data pipelines for the organisation, and integrate data between the cloud and on-premises environments.
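
To make that concrete, the sketch below shows one possible shape of such a pipeline step in Python: an incremental sync that re-uploads only files whose contents have changed since the last run. The upload_to_cloud callable and the sync_state.json file are illustrative assumptions, standing in for whichever object-store SDK and state store the organisation actually uses.

# Minimal sketch of an incremental sync step for a data pipeline.
# upload_to_cloud() is hypothetical: replace it with the call provided by
# your object-store SDK (S3, Azure Blob, GCS, etc.). Only files whose
# content hash has changed are pushed, which limits bandwidth and storage.
import hashlib
import json
from pathlib import Path

STATE_FILE = Path("sync_state.json")  # hashes recorded on the previous run


def file_hash(path: Path) -> str:
    """Return a SHA-256 digest of the file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def sync_directory(local_dir: Path, upload_to_cloud) -> None:
    """Upload only files that are new or have changed since the last sync."""
    previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    current = {}
    for path in local_dir.rglob("*"):
        if not path.is_file():
            continue
        digest = file_hash(path)
        current[str(path)] = digest
        if previous.get(str(path)) != digest:
            upload_to_cloud(path)  # hypothetical SDK call
    STATE_FILE.write_text(json.dumps(current, indent=2))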

Moreover, skills will be needed to ensure data quality, since analytics conducted on poor-quality data will yield poor-quality insights, a problem that is magnified when it comes to big data.
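
As a rough illustration, the sketch below is one minimal way such a quality gate might look in Python with pandas; the column names, file name and thresholds are assumptions for the example, not prescriptions.

# Minimal data-quality gate run before an analytics job.
# Column names ("amount") and the thresholds are illustrative only; real
# rules would come from the organisation's data governance standards.
import pandas as pd


def quality_report(df: pd.DataFrame) -> dict:
    """Summarise basic quality issues before the data is used for analytics."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_rate_per_column": df.isna().mean().round(3).to_dict(),
        "negative_amounts": int((df["amount"] < 0).sum()) if "amount" in df else None,
    }


if __name__ == "__main__":
    df = pd.read_csv("transactions.csv")  # illustrative file name
    report = quality_report(df)
    print(report)
    # Halt the pipeline if quality falls below an agreed threshold
    if report["duplicate_rows"] > 0 or max(report["null_rate_per_column"].values()) > 0.05:
        raise SystemExit("Data quality below threshold; halting analytics run")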

Up until now, cloud has largely been a buzzword in South Africa, but businesses are starting to take the adoption of cloud technologies more seriously. The needs of organisations are changing as data changes, and hyperscale computing environments are becoming increasingly mainstream.

To prepare for the hyperscale revolution, organisations need to start looking towards smart technologies that help simplify big data development.

Techniques such as real-time data analytics and streaming analytics will become increasingly necessary as data volumes continue to expand. These types of smart technologies help organisations refine their data strategies and manage data more effectively, enhancing their analytics capabilities.
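
As a simple illustration of the streaming idea, the sketch below counts events over a sliding time window as they arrive, using only the Python standard library; the simulated event loop is an assumption standing in for a real message queue or streaming platform.

# Minimal sketch of a streaming aggregation: events are analysed as they
# arrive rather than being stored first and analysed later.
import time
from collections import deque


class SlidingWindowCounter:
    """Count events seen within the last `window_seconds`."""

    def __init__(self, window_seconds: float = 60.0):
        self.window_seconds = window_seconds
        self.timestamps = deque()

    def record(self, timestamp: float) -> int:
        """Add one event and return the count within the current window."""
        self.timestamps.append(timestamp)
        cutoff = timestamp - self.window_seconds
        while self.timestamps and self.timestamps[0] < cutoff:
            self.timestamps.popleft()
        return len(self.timestamps)


if __name__ == "__main__":
    counter = SlidingWindowCounter(window_seconds=5.0)
    for _ in range(10):  # simulated event stream
        count = counter.record(time.time())
        print(f"events in last 5s: {count}")
        time.sleep(0.5)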

* By Gary Allemann, Managing Director at Master Data Management.