A cloud-native network function (CNF), also known as a container(ized) network function, is a network function designed to run inside containers, which are standardized units of software that bundle code and its dependencies so applications can run seamlessly across different computing environments.
CNFs are the latest advancement in the “softwarization” of network functions.
What is a Network Function?
A network function is a well-defined node or building block within a network that is responsible for facilitating a specific behavior. Examples of network functions include routing, network address translation (NAT), the domain name system (DNS), load balancing, firewalls, and packet inspection, among others.
Initially, communications service providers (CSPs) used physical network functions (PNFs), or tangible hardware objects built to facilitate a certain function. Seeking a better way forward, CSPs eventually began digitizing network functions, and virtual network functions (VNFs) were born.
At a high level, VNFs enable CSPs and other types of companies to operate more nimbly by using software instead of hardware to serve customers and meet their needs. Thanks to VNFs, enterprises generally need less hardware and less power to accomplish their objectives and can save money on maintenance costs. Due to their digital nature, VNFs make it easier to complete upgrades and roll out new services.
While the emergence of VNFs certainly helped companies reduce capital expenditures and ramp up services faster than the six to eight weeks they previously had to wait with PNFs, VNFs didn't go far enough. Though companies could run network functions on commercial off-the-shelf servers, they still needed some lead time to make that happen, which doesn't exactly align with our real-time world and the increasing need for organizational agility.
These factors have led to the arrival of the next phase in the evolution of network functions: CNFs.
What Does Cloud-Native Mean?
In case you’re unfamiliar, cloud-native is a (somewhat) new approach to software development in which organizations build, deploy, and run applications entirely in the cloud using tools, resources, and other services that live in the cloud, too.
With CNFs, companies can move even further away from physical hardware and take advantage of some of the cloud's biggest strengths, including resiliency, speed, elasticity, scalability, and flexibility.
Key Features of CNFs
Several characteristics are common to CNFs and underpin the benefits they provide. These include:

1. Microservices

Until relatively recently, software development centered on giant, monolithic applications that aimed to solve every problem in one go. This was historical baggage: for many years an organization ran a single central computer, and networking was primitive. The modern approach is to break problems apart and make each sub-problem the responsibility of a microservice.
This has many advantages if done well:
- You may be able to implement prepackaged open source components as microservices, without having to make any changes to them.
- Having many small components makes it easier to scale.
- Microservices make reuse much simpler. By insisting on zero dependencies and a clearly defined API from the very start of the development process, a well-written microservice shouldn't care about what goes on around it, or in what context it's being used.
- Development is much simpler because, once you define the APIs, different teams can develop different microservices independently.
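As a minimal sketch of these ideas, consider a tiny network-function microservice with one responsibility and one clearly defined API. The service name, address mapping, and endpoint below are hypothetical, and the HTTP handler uses only Python's standard library:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical NAT-lookup microservice: one small responsibility,
# one clearly defined API, no knowledge of its callers.
MAPPINGS = {"10.0.0.5": "203.0.113.7"}  # private -> public address


def translate(private_ip):
    """The service's single job: look up a NAT mapping, or None."""
    return MAPPINGS.get(private_ip)


class NatHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # API contract: GET /translate/<private_ip> -> {"public_ip": ...}
        parts = self.path.strip("/").split("/")
        public_ip = translate(parts[1]) if len(parts) == 2 and parts[0] == "translate" else None
        if public_ip is not None:
            self.send_response(200)
            body = json.dumps({"public_ip": public_ip}).encode()
        else:
            self.send_response(404)
            body = json.dumps({"error": "unknown address"}).encode()
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
```

To run it as a standalone process you would start `HTTPServer(("127.0.0.1", 8080), NatHandler).serve_forever()`; because the API is the only contract, another team could reimplement or scale this service without touching its callers.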
2. Containers

Originally, all software was developed and run directly on a computer's operating system.
This led to problems:
- Resources were shared with other applications, sometimes not very well.
- Any kind of OS, library or other change to the server was at best hard to do, and sometimes impossible to do, as every single application running on the server would have to be able to tolerate the change.
- When low level changes were made, for example to configuration files, over time they resulted in environments which were unique to that physical server, making support and management a nightmare.
The first solution to this problem was virtualization. Instead of running applications directly, the operating system ran another copy of itself, which in turn ran the applications. This sounds crazy, but it works. Large servers could run multiple virtual machines (VMs), each of which could run an application, and we didn't have to worry about applications fighting with one another, as they were on different virtual machines. But VMs also had their issues:
- Because a VM is usually an entire image of a live OS with a thin layer of application on top of it, it is typically gigabytes in size.
- Aside from being awkwardly big, VMs take a long time to load and get running. This antagonizes developers and slows down development cycles.
- VMs can isolate CPU and RAM, but they still share network and I/O capacity, often without knowing it.
- VMs still need all the security patches, updates, and other maintenance a normal OS requires.
Containerization is the logical next step in this progression. A container gets enough operating system services to run its application and can bundle its own libraries, but it actually shares the host machine's kernel with many other containers. Containers are not automatically aware of other containers on the same server. Not only do containers start quickly, they are amenable to being created and torn down programmatically: containers are run by a container manager, which exposes APIs for exactly this.
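To make the "created and torn down programmatically" point concrete, here is a toy, in-memory stand-in for a container manager's API. It is purely illustrative: real managers such as Docker or containerd expose REST/gRPC APIs with the same basic shape (run, stop, list), but none of the names below come from any real system:

```python
import itertools


class ContainerManager:
    """Toy in-memory sketch of a container manager's API (illustrative
    only; real systems like Docker expose similar run/stop/list calls)."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._running = {}  # container id -> image name

    def run(self, image):
        # "Starting" a container is just a bookkeeping entry here.
        cid = f"c{next(self._ids)}"
        self._running[cid] = image
        return cid

    def stop(self, cid):
        # Tearing a container down is equally cheap and programmatic.
        self._running.pop(cid, None)

    def ps(self):
        return dict(self._running)


mgr = ContainerManager()
cid = mgr.run("dns-service:1.0")  # hypothetical image name
assert cid in mgr.ps()
mgr.stop(cid)
assert mgr.ps() == {}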
3. Service Registries
Microservices and containerization created a complexity issue. It used to be that everybody knew where all the applications were, because there were only a handful of servers. With microservices and containerization, an application can no longer hardcode the names of the limited number of other components it needs to speak to; it has to discover them and connect to them at runtime.
This is where a service registry comes in. A service registry is a database designed to store data structures for application-level communication and serves as a central location for app developers to find schemas and register their apps. In other words, it's a place for microservices to advertise their existence, availability, and capabilities. This solves the complexity issue, but it also introduces the risk of a single point of failure.
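A minimal sketch of the register/lookup idea follows. The service names and endpoints are hypothetical, and a real registry (e.g. Consul or etcd) would add health checks and replication, which is how the single-point-of-failure risk is usually mitigated:

```python
class ServiceRegistry:
    """Minimal in-memory service registry sketch. Real deployments use
    replicated systems such as Consul or etcd; names here are invented."""

    def __init__(self):
        self._services = {}  # service name -> list of (host, port) endpoints

    def register(self, name, host, port):
        # A microservice advertises its existence and location here.
        self._services.setdefault(name, []).append((host, port))

    def deregister(self, name, host, port):
        endpoints = self._services.get(name, [])
        if (host, port) in endpoints:
            endpoints.remove((host, port))

    def lookup(self, name):
        # Callers discover endpoints at runtime instead of hardcoding them.
        return list(self._services.get(name, []))


registry = ServiceRegistry()
registry.register("dns", "10.0.0.2", 53)
registry.register("dns", "10.0.0.3", 53)
assert registry.lookup("dns") == [("10.0.0.2", 53), ("10.0.0.3", 53)]
```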
4. Stateless Services
Traditionally in application development, persistence was a major issue. Persistence means that things you change stay changed, even when bad things, like failures, happen. Containers aren't very good at persisting things, so a great many services are "stateless", meaning they have no stored knowledge of, or reference to, past transactions. A stateless container keeps no reliably stored local information about what it is doing, has done, or is supposed to do. Instead, it uses a stateful service to keep track of all this. Some use cases, such as caching data in RAM, work really, really well this way. Others, such as those involving shared, finite resources like credit, are reliant on a stateful service to keep them straight.
Stateless services are very popular, and for good reason: if your service really is stateless, you can run as few or as many containers implementing it as you like, and scaling becomes trivial.
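The credit example above can be sketched as a stateless handler that keeps all state in an external store. The function and account names are hypothetical, and a plain dict stands in for a real stateful service such as Redis or a database; any container running this code behaves identically because nothing is remembered locally:

```python
# Stateless charge handler sketch: all state lives in an external,
# shared store (a dict here, standing in for e.g. Redis or a database).

def charge(store, account, amount):
    """Debit `amount` from `account` if the shared balance allows it."""
    balance = store.get(account, 0)
    if amount > balance:
        return False  # insufficient shared, finite resource (credit)
    store[account] = balance - amount
    return True


balances = {"alice": 100}          # the stateful service's data
assert charge(balances, "alice", 60) is True
assert charge(balances, "alice", 60) is False  # only 40 left
assert balances["alice"] == 40
```

Because the handler itself holds nothing between calls, ten containers running it are interchangeable; correctness for the shared credit balance rests entirely on the stateful store.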
Advantages of CNFs
As 5G continues to roll out and workloads are increasingly handled at the edge, more and more companies are moving away from virtual network functions and embracing CNFs, or using VNFs and CNFs in conjunction with one another.
That’s because CNFs deliver a ton of upside, including:
- Increased flexibility and agility, since rolling out new services or upgrades no longer involves swapping out hardware. Instead, companies can create a new microservice and roll it out onto existing infrastructure, which accelerates time to market and reduces the costs of implementing new offerings the traditional way. The single greatest benefit of CNFs is that they free you from the straitjacket of a single, horribly complicated application running on a single, overpriced set of hardware. You can mix and match, scale up and down, clone deployments into new markets, and ultimately stop wrestling with your own application and instead devote time to keeping customers happy.
- Reduced expenses, as organizations need even less hardware than they did to support VNFs. Thanks to pay-per-use pricing and on-demand scalability, companies can rest assured that they will always be able to access the infrastructure they need while paying only for exactly what they use, and nothing more.
- Improved scalability, as containerized microservices can be spun up and down as needed. Due to the nature of the cloud, the sky is really the limit when it comes to scaling to support a massive influx of traffic or users.
- Improved reliability, with the fault-tolerance and resilience the cloud delivers. In the event that a container gets knocked offline for any reason, devs can spin up another one right away. Since upgrades can take place on the microservice level, companies don’t have to risk massive outages or schedule any downtime. This, in turn, delivers a more reliable suite of products and a better customer experience.
Considerations When Adopting CNFs

While the benefits of cloud-native network functions are quite persuasive, there are a few considerations to keep in mind if you're planning to give CNFs a try:
- You may need to rearchitect existing network functions. For example, if you’re using any monolithic applications, you will need to break them down into microservices.
- You won’t be able to become entirely cloud-native overnight. As you begin the transition to CNFs, you will need to figure out how to ensure that they can interoperate with any VNFs that exist in your environment.
- You will need to ensure your data platform is highly performant at scale. For the best results, make sure your platform can process data at the edge instead of moving it back and forth between the data layer and the application layer.
For more information on what a battle-tested data platform designed for the 5G era and built to support cloud-native network functions looks like, check this out.