
The Holy Trinity of Cloud Computing


To the casual observer, it may seem that cloud computing has finally gone mainstream, and to a certain extent it has. From consumers saving their photos to online storage accounts to businesses using powerful cloud-based algorithms to unlock the secrets of their data, the cloud is everywhere. But ubiquity does not necessarily herald progress or power; today’s cloud industry is still shaped by a lack of understanding of how to deliver truly useful on-demand computing.

The problem lies mainly in the assumption that scale alone is the solution – and it is not. Size is all too often pursued at the expense of power and functionality, two vital ingredients of any cloud platform of note. Data is being stored, but can it also be quickly and efficiently processed, and then mined for its hidden layers of value? This is the modern paradox of distributed computing, but one that is readily solved by the holy trinity of cloud computing: the rare yet extraordinarily powerful combination of big data, computer processing, and rich algorithms.

Data, data, everywhere

The facts are plain: we are producing more data than ever – as much in the last two years as in all of prior human history; a trillion photos were taken last year (80% of them with smartphones), with billions shared and stored online; and within three years, a third of all this data will pass through the cloud. As more of the world comes online, the amount of data users generate continues to grow. But how do we make sense of this vast ocean of information? Simple storage alone is not enough.

Although businesses are now sitting on vast amounts of information, not all data is created equal. Low-quality data is an ongoing and growing problem as businesses struggle to organize and collate the data they generate. The problem is exacerbated by a general lack of structure, stemming from a wide mix of data sources such as social, retail, transactional, and machine-to-machine feeds. When real-time data is added into the mix, organizations are often left with a cluttered, multi-scenario environment in which the entire ecosystem lacks a functioning backbone.
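
A common first step toward that backbone is collating records from disparate sources into a single schema. The sketch below is illustrative only – the source names, field names, and target schema are invented assumptions, not any particular product’s format.

```python
# Minimal sketch: collate heterogeneous records into a common schema.
# Source names, fields, and the schema itself are invented assumptions.
from datetime import datetime, timezone

def normalize(source: str, record: dict) -> dict:
    """Map a raw record from a named source onto {user, event, ts}."""
    if source == "retail":
        return {"user": record["customer_id"],
                "event": "purchase",
                "ts": record["purchased_at"]}
    if source == "social":
        return {"user": record["handle"],
                "event": record["action"],  # e.g. "share", "like"
                "ts": record["timestamp"]}
    if source == "m2m":
        # machine-to-machine feeds often report epoch seconds
        ts = datetime.fromtimestamp(record["epoch"], tz=timezone.utc)
        return {"user": record["device_id"],
                "event": "telemetry",
                "ts": ts.isoformat()}
    raise ValueError(f"unknown source: {source}")

print(normalize("m2m", {"device_id": "sensor-7", "epoch": 1718000000}))
```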

Power in the machine

To be more than just information, data needs to be processed, a massive computational undertaking. Tools such as Hadoop have made the processing of large-scale data in cloud environments many multiple of times easier. Open source software such as this have rapidly reshaped the distributed computing industry, and helped begin to make sense of the large data sets emerging from SMEs right up to multinational organizations. New generations of similar tools – such as NoSQL, for example – offer even more features, and can process and store any data, even unstructured. While they lack the scalability of Hadoop, this new generation of database is perfect for the device-heavy (and very imminent) Internet of Things.
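
To make the processing model concrete, here is a minimal word-count sketch in the style of Hadoop Streaming – a hedged illustration rather than a production job; the file names are assumptions.

```python
#!/usr/bin/env python3
# mapper.py - emit "word<TAB>1" for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py - sum counts per word; Hadoop Streaming delivers the
# mapper output sorted by key, so a key change means a word is done.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

Locally, `cat input.txt | python3 mapper.py | sort | python3 reducer.py` mimics the shuffle step; on a cluster, the same two scripts would typically be submitted through Hadoop’s streaming jar.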

When combined with tools such as offline and real-time computing platforms, a cloud environment suddenly becomes an incredibly powerful and flexible distributed machine. This opens the door for external parties to take advantage of the capabilities (and cost efficiencies) of the cloud, leading to the rise of Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Of course, some organizations choose to run some or all of their operations on-premise or in a private cloud, but these deployments are limited by the resources of the organization itself. Hence the increasing popularity of hybrid cloud environments, in which businesses create a bespoke computing environment that draws on both the public and the private cloud.
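
To illustrate the hybrid idea, here is a minimal dispatch sketch. The endpoints, the sensitivity flag, and submit_job() are hypothetical inventions for illustration, not any provider’s API.

```python
# Hypothetical hybrid-cloud dispatcher: keep regulated workloads on the
# private cloud and burst everything else to the public cloud.
from dataclasses import dataclass

PRIVATE_ENDPOINT = "https://compute.internal.example.com"     # assumed on-premise cluster
PUBLIC_ENDPOINT = "https://compute.public-cloud.example.com"  # assumed public cloud

@dataclass
class Job:
    name: str
    sensitive: bool  # e.g. contains regulated customer data

def route(job: Job) -> str:
    """Sensitive jobs stay private; the rest burst to public capacity."""
    return PRIVATE_ENDPOINT if job.sensitive else PUBLIC_ENDPOINT

def submit_job(job: Job) -> None:
    print(f"submitting {job.name!r} to {route(job)}")

submit_job(Job("customer-billing-etl", sensitive=True))     # -> private
submit_job(Job("clickstream-aggregation", sensitive=False)) # -> public
```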

Mathematical magic

To organize and process your data is one thing; to truly understand it is another. Some call this the holy grail of data: the ability to derive meaningful insights from large, seemingly unremarkable data sets. Many in the cloud industry live by the mantra that data without insight is largely useless, and that value will never be derived from managing data alone. Part of realizing this goal is to take data out of the hands of a traditional minority of data scientists and open it to the developers and analysts who can bring the power of the wider ecosystem to the table.

Supporting this evolution are increasingly powerful algorithms that run on these computing platforms and mine big data sets for insights and value. Traditionally developed in laboratories, these algorithms are now being integrated across the platforms of the better cloud providers as they seek to maximize efficiency. The result is an accelerated integration of data across distributed computing platforms, and a new generation of insights drawn from previously unrelated data. For example, meteorological data taken on its own has limited value, but combined with agricultural or retail data it can suddenly unlock significant commercial value.
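
As a toy version of that last example, the sketch below joins invented daily weather readings to invented retail sales and checks how strongly temperature tracks demand; all numbers and column names are made up for illustration.

```python
# Toy cross-domain data fusion: join weather and sales on date, then
# measure how temperature tracks demand. All values are invented.
import pandas as pd

weather = pd.DataFrame({
    "date": pd.date_range("2024-06-01", periods=5),
    "temp_c": [18.0, 22.5, 27.0, 31.5, 24.0],
})
sales = pd.DataFrame({
    "date": pd.date_range("2024-06-01", periods=5),
    "ice_cream_units": [120, 180, 260, 340, 210],
})

merged = weather.merge(sales, on="date")  # align the two sources
corr = merged["temp_c"].corr(merged["ice_cream_units"])
print(f"temperature vs. sales correlation: {corr:.3f}")  # ≈ 0.998 here
```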

Better together

Cloud computing has evolved into one of the most exciting and important trends in business and IT today, a far cry from the early (largely unsuccessful) days of data warehousing. From its ability to store vast amounts of information to the innovative ways it processes and then analyzes that information to derive value, the holy trinity of cloud computing is delivering insights to businesses around the world every second of every day. But each of these three core elements on its own is just one facet of a bigger picture. To deliver one without the others is to fly in the face of efficiency and relegate true value to nothing more than an afterthought.
