The Internet of Things (IoT) is usually characterized as billions of smart devices, appliances, or sensors connected to the cloud. The cloud processes and analyzes the data received from these sources to generate valuable insights. For example, the cloud may process data received from your smart car about the kind of music you listen to while driving, which enables it to suggest a personalized playlist that matches your preferences.
However, for an increasing number of IoT devices, the cloud is not the most effective platform to process their data. Instead, it is better for the data to be processed much closer to where the devices are physically situated, rather than being transferred to a cloud data center. This method is known as fog computing.
Why the cloud is not appropriate for some types of IoT devices
There are certain industries which require the data from their IoT devices to be processed and analyzed instantaneously and cannot afford the delay caused by sending the data to the cloud for analysis and waiting for the insights to come back. For example, in the oil industry, smart sensors on oil pipelines send readings that need to be promptly analyzed, since the readings may suggest that oil pressure has unexpectedly increased, which requires the pumps to slow down immediately to avert a catastrophe. In smart vehicle-to-vehicle communication, the prevention of accidents and collisions would be jeopardized by the latency caused by sending data to the cloud. And of course, in healthcare and medicine, any lag in processing data from smart medical apparatuses could prove fatal.
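The pipeline example can be sketched in a few lines of code. This is purely illustrative: the function name, the 80-bar threshold, and the command strings are assumptions for the sketch, not details from any real pipeline system. The point is that the decision happens on the spot, with no cloud round trip in the critical path.

```python
# Illustrative sketch only: a fog node reacting to a pipeline pressure
# reading locally instead of waiting on a cloud round trip.
# The threshold and command names are assumed for this example.

PRESSURE_LIMIT_BAR = 80.0  # assumed safe operating limit


def handle_reading(pressure_bar: float) -> str:
    """Decide locally how to respond to a single sensor reading."""
    if pressure_bar > PRESSURE_LIMIT_BAR:
        # Time-critical: act immediately, without cloud latency.
        return "SLOW_PUMPS"
    # Non-urgent data can still be uploaded for deeper analysis.
    return "FORWARD_TO_CLOUD"


print(handle_reading(85.2))  # above the limit -> immediate local action
print(handle_reading(61.0))  # normal reading -> routine cloud upload
```

The same shape applies to the vehicle and medical examples: the fog node makes the urgent call locally and leaves only the non-urgent work for the cloud.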
How fog computing resolves this problem
Fog computing eliminates the latency caused by IoT data making a round trip to a cloud data center by processing IoT data locally. The data is analyzed locally by what are known as 'fog nodes'. Any system with sufficient computing and networking capability can be a fog node, including routers, gateways, laptops, or even the smart device itself.
The physical architecture of fog computing
The physical architecture of fog computing is broadly similar to the traditional cloud architecture. Both architectures involve IoT devices in a particular location sending data for analysis to either fog nodes or cloud data centers. The difference between the two is that fog nodes are situated in the vicinity of the devices whereas cloud data centers are located remotely. The architecture of cloud computing is therefore spread out geographically (i.e. data centers and IoT devices are in different locations), whereas the architecture of fog computing is geographically concentrated (i.e. fog nodes and IoT devices are in close proximity).
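A back-of-the-envelope calculation shows why proximity matters. The sketch below uses an assumed signal speed in optical fiber of roughly 200,000 km/s (about two-thirds the speed of light); the distances are illustrative, and real round-trip times are higher once routing and processing are added. Even this physical lower bound separates a nearby fog node from a remote data center.

```python
# Back-of-the-envelope sketch (illustrative numbers): the minimum
# round-trip time imposed by distance alone, ignoring routing,
# queuing, and processing delays.

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fiber (~200,000 km/s)."""
    speed_km_per_ms = 200.0  # ~2/3 of the speed of light, typical for fiber
    return 2 * distance_km / speed_km_per_ms


print(min_round_trip_ms(1))     # fog node ~1 km away   -> 0.01 ms
print(min_round_trip_ms(2000))  # data center 2,000 km away -> 20.0 ms
```

A factor of two thousand in the best case, before any real-world overhead, is the gap fog computing's geographically concentrated architecture closes.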
The bandwidth, security and legal benefits of fog computing
In addition to eliminating latency for IoT devices that produce time-critical data, fog computing offers other advantages as well. For a start, it reduces the bandwidth burden on cloud data centers. According to one estimate, there will be over 50 billion IoT devices by 2020. Traffic from all of these devices to cloud data centers will push the bandwidth limits of data centers to breaking point. By having fog nodes process some of the data IoT devices generate, cloud data centers are less likely to be strained by excessive traffic.
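One common way a fog node cuts upstream traffic is by aggregating raw readings into compact summaries before anything leaves the local network. The sketch below is a minimal illustration of that idea; the field names, batch size, and sample values are assumptions for the example.

```python
# Illustrative sketch: a fog node collapses a batch of raw sensor
# readings into one compact summary record, so the cloud receives
# one upload instead of many.

from statistics import mean


def summarize(readings: list[float]) -> dict:
    """Reduce a batch of raw readings to a single compact record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }


raw = [20.1, 20.3, 19.8, 20.0, 35.7, 20.2]  # six raw messages
summary = summarize(raw)                     # one message to the cloud
print(summary)
```

Six messages become one, and the cloud still receives enough (count, range, average) for trend analysis; the unusual 35.7 reading surfaces in the `max` field, so anomalies are not lost in the aggregation.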
A further advantage of fog computing is that it enhances the security of any sensitive or confidential data produced by IoT devices, such as patient medical data, by not sending it over the Internet, where it may be susceptible to leaks or breaches. Fog computing also makes it harder for hackers to hijack IoT devices to perform DDoS attacks (such as the Mirai botnet attack in October 2016), because the fog nodes add an additional security layer which hackers have to penetrate before they can take over IoT devices.
Last but not least, fog computing helps with compliance with data regulation laws in certain jurisdictions which prohibit sensitive data from being transferred to cloud data centers in other countries.
It's all in the name
The term fog computing is an appropriate one: in meteorology, fog is simply a cloud close to the ground, and in a sense fog computing brings the data-analysis capabilities of cloud computing closer to the IoT devices which produce the data. The term was originally coined by Cisco in 2014.
Fog computing is also sometimes referred to as edge computing, because it involves processing IoT data at the edge of a network, between an IoT device and the cloud. Companies such as IBM and Akamai use the term edge computing.
The OpenFog Consortium
While the notion of fog computing was introduced by Cisco in 2014, the idea has since been embraced by many high-profile technology firms. In fact, Intel, Microsoft, Dell and ARM teamed up with Cisco and Princeton University to create the OpenFog Consortium in 2015, which has the aim of promoting the architecture of fog computing. Many of these companies have also started to manufacture gateways and routers which can act as fog nodes for IoT devices.
Given this enthusiastic embrace of fog computing, it comes as no surprise that the IDC predicts that by 2019, 45% of IoT-created data will be processed locally.
Fog computing complements the cloud
In conclusion, it's worth highlighting that fog computing does not replace the cloud for IoT devices, but rather complements the cloud and even makes it more effective. Fog nodes are used to evaluate time-critical or sensitive IoT data, which frees up cloud resources to more effectively perform other tasks, such as analyzing large and complex datasets from IoT sources when time is not of the essence.