Embrace Edge Computing for Web Applications – 4 Key Enablers

Over the past few years, the adoption of internet-connected devices has grown exponentially and will not slow down in the years to come. According to Gartner, by 2023 the average CIO will be responsible for more than three times the endpoints they managed in 2018. However, supporting such an increase would require scaling cloud infrastructure and provisioning substantial network capacity, which might not be economically feasible.

In such cases, edge computing could emerge as a solution because the necessary resources, such as computing, storage, and network, can be delivered closer to the data source for processing.

Businesses are looking for near real-time, actionable information, which is fueling the adoption of edge computing across industries. The benefits of edge computing are well known, and I have illustrated them, along with some use cases, in a previous article.

Embrace Edge Computing in Web Application Development

It’s only a matter of time before the edge becomes mainstream, as demonstrated by a recent IDC survey that found 73% of respondents chose edge computing as a strategic investment. The open source community, cloud providers, and telecom service providers are all working to strengthen the edge computing ecosystem, accelerating its adoption and the pace of innovation.

With such tailwinds, web application developers should have an edge adoption plan in place so they can be more agile and leverage the edge to improve user engagement.

Benefits such as near real-time information with low latency and reduced cloud server bandwidth usage are driving the adoption of edge computing for web-based applications across industries. Adopting such state-of-the-art IT architecture for web applications can increase productivity, reduce costs, save bandwidth, and create new revenue streams.

I discovered that there are four essential enablers for edge computing that help developers and web architects get started.

1. Ensure application agility with the right application architecture

The edge ecosystem consists of several components such as appliances, gateways, edge servers or edge nodes, cloud servers, etc. For web applications, the edge computing workload must be agile enough to run on edge ecosystem components, depending on peak load or availability.

However, there could be specific use cases, such as detecting poaching activity via a drone in a dense forest with little or no network connectivity, that require the development of native apps for devices or gateways.

Adopting cloud-native architectural patterns, such as microservices or serverless, provides application agility. The definition of cloud native as explained by the Cloud Native Computing Foundation (CNCF) supports this argument: “Cloud-native technologies allow organizations to build and run scalable applications across public, private, and hybrid clouds.

Features such as containers, service meshes, microservices, immutable infrastructure, and declarative application programming interfaces (APIs) best exemplify this approach. These features enable loosely coupled systems that are resilient, manageable, and observable. They allow engineers to make high-impact changes frequently and with minimal effort.”

The most important step in adopting edge computing is to use a cloud-native architecture for the application, or at least for the service to be deployed at the edge.
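As a concrete illustration of what "cloud native enough for the edge" can look like in code, here is a minimal sketch of a stateless service that reads all configuration from the environment and exposes a health endpoint for an orchestrator, so the same container image can be scheduled on a cloud node or an edge node. The port and endpoint names are my own illustrative assumptions, not details from any specific platform.

```typescript
// Minimal sketch of a stateless, container-friendly service in TypeScript.
// The PORT variable and /healthz path are illustrative assumptions.
import { createServer } from "node:http";

// Configuration comes from the environment so the same image can run
// unchanged on a cloud node or an edge node ("build once, deploy everywhere").
const port = Number(process.env.PORT ?? 8080);

const server = createServer((req, res) => {
  if (req.url === "/healthz") {
    // A health endpoint lets an orchestrator (e.g. Kubernetes) manage the instance.
    res.writeHead(200, { "content-type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
    return;
  }
  res.writeHead(200, { "content-type": "text/plain" });
  res.end("hello from an edge-deployable service\n");
});

server.listen(port, () => {
  console.log(`listening on ${port}`);
});
```

Because the service keeps no local state and takes its configuration from the environment, the decision of where it runs (cloud region, CSP edge zone, or on-premises gateway) becomes a deployment concern rather than a code change.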

2. Realize the benefits of edge infrastructure and services by adopting CSPs

Cloud Service Providers (CSPs) offer services such as local compute and storage in a region or zone, which act as mini regional data centers managed by CSPs. Applications or services adhering to the “build once and deploy everywhere” principle can be easily deployed on this edge infrastructure.

CSPs like AWS (Outposts, Snowball), Azure (Edge Zones), GCP (Anthos), and IBM (Cloud Satellite) have already extended some of their fully managed services to on-premises configurations. Startups and growth-stage companies can take advantage of these hybrid cloud solutions to deploy edge solutions faster and more securely, provided they can afford the associated cost.

For applications running on wireless mobile devices that rely on cellular connectivity, the new 5G cellular technology can provide a significant latency advantage. Additionally, CSPs are deploying their compute and storage resources closer to the telco's network, which mobile applications like games or virtual reality can use to enhance the end-user experience.

3. Take advantage of custom code execution with CDNs

Content Delivery Networks (CDNs) use distributed Points of Presence (PoPs) to cache and deliver web application content faster. They are evolving rapidly, and many PoPs now include a language runtime such as JavaScript (V8), which enables program execution at the edge. Additionally, this increases security by moving client-side program logic to the edge.

Web applications such as online shopping portals can provide a better customer experience with reduced latency when equipped with such services. For example, applications can benefit by moving cookie manipulation logic to the CDN edge instead of touching the origin server. This move could prove effective if there is a surge in traffic during events such as Black Friday and Cyber Monday.
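As a hedged sketch of what such edge-side cookie handling can look like, the snippet below uses the service-worker-style fetch API offered by several CDN edge runtimes (Cloudflare Workers is one example). The cookie name, its value, and the caching behavior are illustrative assumptions, not details from the article.

```typescript
// Sketch of cookie manipulation at a CDN PoP, written against a
// service-worker-style edge runtime. The "session_region" cookie is an
// illustrative assumption.
addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handle(event.request));
});

async function handle(request: Request): Promise<Response> {
  const cookies = request.headers.get("Cookie") ?? "";

  // Returning visitors already carry the cookie; let the CDN's cache and
  // edge logic answer without any extra cookie work at the origin.
  if (cookies.includes("session_region=")) {
    return fetch(request);
  }

  // First visit: fetch from the origin once, then set the cookie at the
  // edge so later requests can be handled without origin-side cookie logic.
  const originResponse = await fetch(request);
  const response = new Response(originResponse.body, originResponse);
  response.headers.append(
    "Set-Cookie",
    "session_region=eu-west; Path=/; Max-Age=86400"
  );
  return response;
}
```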

Moreover, such an approach can also prove effective for running A/B tests: you can serve a fixed subset of users an experimental version of the application while the rest receive the standard version.
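Continuing with the same service-worker-style runtime assumed above, a minimal A/B assignment at the edge might look like the following. The bucket names, the cookie name, the 10% split, and the path-based routing are all illustrative assumptions.

```typescript
// Illustrative edge-side A/B assignment; wire this up via the same
// addEventListener("fetch", ...) pattern shown earlier.
async function assignVariant(request: Request): Promise<Response> {
  const cookies = request.headers.get("Cookie") ?? "";
  const match = cookies.match(/ab_variant=(control|experiment)/);

  // Keep returning visitors in the bucket they were first assigned to;
  // otherwise send roughly 10% of new visitors to the experiment.
  const variant =
    match?.[1] ?? (Math.random() < 0.1 ? "experiment" : "control");

  // Rewrite the request path at the edge so each bucket is served a
  // different build, without any branching logic at the origin.
  const url = new URL(request.url);
  url.pathname = `/${variant}${url.pathname}`;

  const response = await fetch(new Request(url.toString(), request));
  const tagged = new Response(response.body, response);
  tagged.headers.append("Set-Cookie", `ab_variant=${variant}; Path=/`);
  return tagged;
}
```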

4. Use open deep learning model formats that provide ML framework interoperability

The diversity of neural network models and model frameworks has multiplied in recent years. This has encouraged developers to use and share neural network models across a wide range of frameworks, tools, runtimes, and compilers. But before running a standard AI/ML model format on various edge devices, developers and entrepreneurs should look for some standardization to counter edge heterogeneity.

Open deep learning model formats like the Open Neural Network Exchange (ONNX) are emerging as a solution because they support interoperability between commonly used deep learning frameworks. ONNX provides a mechanism to export models from different frameworks to the ONNX format. ONNX Runtime is available in several languages, including JavaScript, and models and runtimes are compatible with various platforms, including low-end devices.
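A minimal sketch of browser-side inference with the onnxruntime-web package is shown below. The model path, the input name, and the tensor shape are assumptions that depend entirely on the specific exported model.

```typescript
// Sketch of loading and running an ONNX model in a browser-based application.
// "/models/model.onnx", the input name "input", and the dims are assumptions.
import * as ort from "onnxruntime-web";

async function runInference(features: Float32Array): Promise<ort.Tensor> {
  // Load an ONNX model that was exported from another framework
  // (PyTorch, TensorFlow, scikit-learn, etc.).
  const session = await ort.InferenceSession.create("/models/model.onnx");

  // Wrap the raw features in a tensor; dims must match the model's input.
  const input = new ort.Tensor("float32", features, [1, features.length]);

  // Run the model and return the first declared output.
  const results = await session.run({ input });
  return results[session.outputNames[0]];
}
```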

The conventional approach for machine learning applications is to generate AI/ML models in a compute-intensive cloud environment and then use those models for inference. With AI/ML JavaScript frameworks, it is possible to run inference in browser-based applications. Some of these frameworks also support training models in the browser or in a JavaScript backend.
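As one example of such a framework (the article does not single any out), TensorFlow.js supports training directly in the browser. The toy model below learns a simple linear relationship; the dataset and model shape are purely illustrative, but the same pattern keeps user data on the device rather than sending it to a cloud server.

```typescript
// Hedged example of in-browser training with TensorFlow.js.
// The synthetic dataset approximates y = 2x - 1.
import * as tf from "@tensorflow/tfjs";

async function trainTinyModel(): Promise<tf.Sequential> {
  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
  model.compile({ optimizer: "sgd", loss: "meanSquaredError" });

  // Tiny synthetic dataset; in an edge scenario this could be data captured
  // on the device itself, so it never leaves the browser.
  const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
  const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);

  await model.fit(xs, ys, { epochs: 100 });
  return model;
}
```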

The right technology decisions ensure better business value

Working with dozens of startups, I’ve found that the best business decisions sometimes depend on early adoption of emerging technologies like edge computing for better customer impact.

However, adopting emerging technologies requires foresight and planning to be successful. By following the enablers above, you will be well positioned for a seamless and sustainable integration of edge computing into your web application development.

Image Credit: Ketut Subiyanto; Pexels

Pankaj Mendki

Pankaj Mendki is Head of Emerging Technologies at Talentica Software. Pankaj is an IIT Bombay alumnus and researcher who explores and accelerates the adoption of evolving technologies for early-stage and growth-stage startups. He has published and presented several research papers on blockchain, edge computing and IoT in several IEEE and ACM conferences.
