Blog · 06 Jun 2022

How edge computing is revolutionising data processing

Processing today’s data explosion in central clouds and data centres isn’t delivering the cost-effective, almost instantaneous results organisations need.

Sector CTO, Digital Industries

Not only are organisations creating more data than ever before, but they’re sitting on a wealth of data they’ve yet to use.

How can organisations handle this raw data explosion and turn it into the intelligent data that drives real-time decision-making? 

Increasingly, organisations are using edge computing to move processing power closer to the ‘edge’, where decisions are actually made, rather than relying on expensive and often problematic processing in central clouds or data centres. As a result, edge computing is overtaking cloud computing in popularity: in my conversations with global customers, I’d say 70% are focusing on edge computing and only 30% are looking at the cloud.

Time to move away from data processing at the core

Data volumes are increasing, and that is putting pressure on the traditional set-up in which data is sent to central resources for processing. More data means more data to transfer, and many organisations going digital are finding this expensive: as traffic grows, the cost of the bandwidth to support it spirals upwards with no sign of stopping. Others are experiencing data overload, with their data centre links overwhelmed by traffic and no end in sight.

Continuing to send vast quantities of data to core data centres or clouds for analysis isn’t sustainable, particularly as organisations increasingly focus on the benefits of sharing and analysing large volumes of data in real time. Local processing at the edge is the most effective way to support Internet of Things (IoT) technologies, innovative apps and new operational developments.

Other factors, too, are encouraging the idea of keeping data processing local. Data sovereignty issues can mean data has to stay in-country. Local processing is often seen as a smart option from an information safety and availability point of view. And security and privacy concerns about public clouds are also driving interest in moving data processing to the edge.

At the same time, wider knowledge about the potential of digital transformation means every team across the organisation now expects equal access to digital capabilities, no matter how far their work base is from the enterprise’s core data centres and clouds. Surely, in this connected age, being at a remote site shouldn’t affect access to rapid data analysis?

Turning raw data into intelligent data is critical to supporting the real-time decision-making and outstanding experiences consumers and employees expect.

Time to make existing data work harder

There’s a new perspective on existing data, too. A greater awareness of the power of data is making organisations look again at how they can extract value from the data they’ve not used in the past. This is particularly noticeable in industrial environments, where data has traditionally been used reactively rather than proactively: industrial control systems track events, and the team then takes action in response.

Organisations are thinking bigger now. They want to explore ways to gather all the information they have, add some intelligence and start to predict problems. They’re interested in using Artificial Intelligence (AI) to predict when an issue is likely to cause failure, and then in working prescriptively to extend the machinery’s lifespan until it can be serviced in a convenient window, avoiding expensive, unplanned downtime. New AI designed to optimise operations and minimise energy use and carbon emissions is causing a lot of excitement amongst organisations seeking more sustainable ways of working – but it needs edge computing.
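To make the idea concrete, here is a minimal sketch of the kind of check an edge device might run to act before a failure rather than after it. It is purely illustrative: the vibration threshold, window size and schedule_service placeholder are assumptions for the example, and a real deployment would use a trained AI model rather than a simple moving-average rule.

```python
# Illustrative only: a hypothetical edge-side check that requests a service
# window before a likely failure. All thresholds and names are assumptions.
from collections import deque

WINDOW_SIZE = 50        # number of recent readings to average (assumed)
VIBRATION_LIMIT = 4.2   # mm/s level that signals trouble ahead (hypothetical)

recent = deque(maxlen=WINDOW_SIZE)

def schedule_service(reason: str) -> None:
    # Placeholder: a real system would raise a ticket in the maintenance system.
    print(f"Service window requested: {reason}")

def on_sensor_reading(vibration_mm_s: float) -> None:
    """Handle one new vibration reading from the machine, locally at the edge."""
    recent.append(vibration_mm_s)
    if len(recent) == WINDOW_SIZE:
        rolling_mean = sum(recent) / WINDOW_SIZE
        if rolling_mean > VIBRATION_LIMIT:
            # Predictive rather than reactive: act before the failure,
            # in a convenient maintenance window.
            schedule_service(f"vibration trend at {rolling_mean:.2f} mm/s")
```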

Edge computing is the future for intelligent data

Edge computing solves the challenges of real-time data analysis by moving data, computing and workloads closer to where they’re needed. Edge computing’s local processing is the key to turning data into better insights, actions and results, and to doing it faster.

It keeps data local, complying with data sovereignty legislation, and it keeps data away from public cloud security vulnerabilities. Workloads processed at the edge have lower latency, supporting the almost-instant analysis requirements of IoT technologies, Augmented Reality (AR), Virtual Reality (VR) and AI applications.
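As a rough illustration of why local processing cuts both latency and bandwidth, the sketch below reacts to each reading immediately at the edge and forwards only a periodic summary upstream. The batch size and the send_summary_to_cloud placeholder are assumptions for the example, not any particular product’s API.

```python
# Illustrative sketch: decide locally at the edge, send only small summaries upstream.
from statistics import mean

BATCH_SIZE = 1000        # readings summarised per upload (assumed)
ALERT_THRESHOLD = 100.0  # hypothetical limit that needs an immediate local response

def send_summary_to_cloud(summary: dict) -> None:
    # Placeholder for the upstream call; a real system might use MQTT or HTTPS.
    print("uploading summary:", summary)

def process_stream(readings):
    batch = []
    for value in readings:
        if value > ALERT_THRESHOLD:
            print("local alert:", value)  # low-latency decision, no round trip to the cloud
        batch.append(value)
        if len(batch) == BATCH_SIZE:
            # Only an aggregate leaves the site, so bandwidth stays small.
            send_summary_to_cloud({"count": len(batch), "mean": mean(batch), "max": max(batch)})
            batch = []
```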

In essence, edge computing enables organisations to change the way they approach data so they can extract more value from it, using intelligent data as a platform for more innovative and sustainable operations. 

How to make the move to edge computing

Deploying edge computing is more than plugging in some equipment and being ready to go. It has to happen as part of a holistic review of the organisation’s infrastructure that identifies how the network needs to be refreshed to support it.

Some organisations look at their ageing, flat networks and wonder where to start. They worry about the level of investment it will take.

My advice is to start small. Begin with a single compelling use case, such as an AI-powered app that identifies how you can save 10% on your energy bills. Then use that tangible return on investment to cross-subsidise further investment in fixing and updating your network. By thinking carefully about the applications you want to run at the edge, you can quantify the returns and gradually establish a network that is edge-ready.
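As an example of how that return can be quantified, a back-of-the-envelope payback calculation might look like the sketch below. Every figure in it is a hypothetical placeholder; the point is simply that a single well-chosen use case gives you numbers to build the business case around.

```python
# Hypothetical payback calculation for one edge use case (all figures are
# placeholders, not real costs or savings).
annual_energy_bill = 500_000   # current annual energy spend (assumed)
expected_saving_rate = 0.10    # the 10% saving the AI-powered app aims to find
edge_pilot_cost = 120_000      # one-off cost of the edge pilot (assumed)

annual_saving = annual_energy_bill * expected_saving_rate
payback_years = edge_pilot_cost / annual_saving

print(f"Annual saving: {annual_saving:,.0f}")
print(f"Payback period: {payback_years:.1f} years")
```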

To find out more about how edge computing can help your organisation operate more sustainably, get in touch with one of our specialists. 
