Welcome to Thomas Insights — every day, we publish the latest news and analysis to keep our readers up to date on what’s happening in industry. Sign up here to get the day’s top stories delivered straight to your inbox.
Flash Forward Fridays
For the past few decades, technology has been evolving at a blink-and-you’ll-miss-it rate. In this biweekly column, Insights Staff Writer Kristin Manganello will be peeling back the curtain of the present and exploring the developing technologies that may soon become the standard in the not-so-distant future.
A Sea of Data
According to Domo, a cloud-based business intelligence platform, we generated a mind-boggling 2.5 quintillion bytes of data every day in 2017.
What does that even mean?
A quintillion is a massive number: a 1 followed by 18 zeros. It’s so big that the standard calculator on a Windows 10 computer won’t even compute it.
In computing, a quintillion bytes is known as an exabyte. To put that in perspective, some researchers estimate that all the words ever spoken by human beings would amount to about five exabytes. According to Computer Hope, a free computer help and information website, one exabyte is equal to:
- 960,767,920,505,705 pages of plain text (1,200 characters each).
- 4,803,839,602,528 books (200 pages or 240,000 characters each).
- 687,194,767,360 web pages (1.6 MB average file size).
- 366,503,875,925 digital pictures (3 MB average file size).
- 274,877,906,944 MP3 audio files (4 MB average file size).
- 1,691,556,350 CDs (650 MB each).
- 245,146,535 DVDs (4.38 GB each).
- 42,949,672 Blu-ray discs (25 GB each).
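Figures like these are simple division: one exabyte divided by the average file size. Here is a quick sketch that reproduces a few of them, assuming the binary convention Computer Hope’s numbers appear to follow (1 EB = 2^60 bytes, 1 MB = 2^20 bytes, 1 GB = 2^30 bytes):

```python
# Sanity-check the "one exabyte equals..." figures above.
# Assumes binary units: 1 EB = 2**60 bytes, 1 MB = 2**20, 1 GB = 2**30.
EXABYTE = 2**60

items = {
    "MP3 audio files (4 MB)": 4 * 2**20,
    "digital pictures (3 MB)": 3 * 2**20,
    "CDs (650 MB)": 650 * 2**20,
    "Blu-ray discs (25 GB)": 25 * 2**30,
}

for name, size_in_bytes in items.items():
    # Floor division: count how many whole files fit in one exabyte.
    print(f"{EXABYTE // size_in_bytes:,} {name}")
```

Running this prints 274,877,906,944 MP3 files, 366,503,875,925 pictures, 1,691,556,350 CDs, and 42,949,672 Blu-ray discs, matching the list above.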
Currently, data travels across the internet through a highly complex maze of networks. Generally speaking, the process starts with a user making a request, such as searching for a video or navigating to a specific website. The request goes out to the user’s internet service provider, which then connects the user to the requested destination.
Traditionally, all of this data and back-and-forth communication is handled by massive, centralized data centers: warehouse-scale hardware installations that act as major hubs for data. While this model has more or less served us well, tech innovators are always on the lookout for a better, more efficient option.
What is Edge Computing?
Conversely, edge computing doesn’t rely on a centralized data center. According to Hewlett Packard Enterprise, edge computing is a “distributed, open IT architecture that features decentralized processing power, enabling mobile computing and Internet of Things (IoT) technologies. In edge computing, data is processed by the device itself or by a local computer or server, rather than being transmitted to a data center.”
Instead of adhering to the go-big-or-go-home principles that underpin centralized data centers, edge computing depends on a more localized approach using smaller servers. The idea is that by positioning large numbers of smaller processors throughout various localities, edge computing will provide a faster, more reliable internet connection.
Edge computing is far from being a widespread phenomenon, but it is gaining traction in several sectors, including smart infrastructure projects, traffic management, and remote monitoring for oil and gas. All of these applications make use of edge computing’s hyper-localized power to process and transfer data in real time.
Image Credit: metamorworks / Shutterstock.com