Patrik Rokyta, CTO, Titan.ium Platform
As telecommunications networks accelerate their journey toward cloud-native architectures, the way we deploy and operate Cloud-native Network Functions (CNFs) is undergoing a profound transformation. What began as a largely manual, command-driven process is evolving into an ecosystem of automated, declarative, and increasingly intelligent deployment methodologies tailored to the unique demands of telco-grade environments.
The diagram below highlights the key stages in the evolution of the CNF deployment process: from manual, imperative approaches, to automated GitOps-driven continuous deployment, and finally to intelligent CNF operators (represented by the Titan.ium logo) that add a telco-specific orchestration layer on top.

In the early wave of cloud adoption, telco teams relied heavily on imperative deployment techniques, running kubectl commands, applying ad-hoc YAML manifests, or installing Helm charts directly from an engineer’s workstation or a Continuous Integration (CI) pipeline. This approach offered flexibility and speed, but it also introduced risk. Without a single source of truth, configuration drift and human variability were almost inevitable, especially as teams scaled and CNFs grew more distributed and complex.
The introduction of GitOps marked a turning point. Tools such as Argo CD and Flux brought a standardized, declarative model for continuous deployment, shifting the industry from push-based, user-driven actions to pull-based, controller-driven reconciliation. By storing desired state in Git and letting controllers continuously align the cluster with that state, telcos gained consistency, auditability, and the operational discipline required for cloud-native automation. GitOps brought order, enabling repeatable and predictable deployments across large-scale Kubernetes estates.
Yet, as operators began deploying more sophisticated and mission-critical CNFs, another realization emerged: while Continuous Deployment (CD) tools are excellent at managing generic Kubernetes resources and Helm-based workloads, they are not inherently aware of the internal complexities of telecom network functions. Telco-grade applications may require multi-phase upgrades, strict sequencing logic, awareness of data-plane continuity, and in-service operational safeguards that exceed what standard reconciliation controllers can express.
This is where CNF-specific operators, purpose-built by CNF vendors, enter the picture. These operators embed deep domain knowledge about the network function: its microservice topology, configuration dependencies, stateful components, and service continuity requirements. They reconcile not just Kubernetes objects but the internal lifecycle of the CNF itself, enabling actions such as traffic-safe switchover, in-service software upgrades, and CNF-aware recovery logic.
In the sections that follow, we will explore each deployment model in detail and examine how the evolution of the deployment process is enabling predictable, automated, and telco-grade CNF deployments across large-scale cloud-native environments.
Imperative Deployment versus GitOps
The evolution of CNF deployment starts with one of the most fundamental shifts in cloud-native operations: moving from imperative deployments to a GitOps-driven model. Both approaches get applications onto Kubernetes, but they do so with very different philosophies, and those differences matter even more in a telecom environment where consistency, auditability, and repeatability are non-negotiable. The table below highlights these differences.

In the imperative model, deployments are highly hands-on. Engineers interact directly with the cluster using tools such as kubectl and the Helm client, applying manifests, triggering upgrades, and issuing commands that immediately change the system. This approach offers speed and flexibility, especially when experienced operators are driving it. But with that freedom comes a familiar set of challenges: manifests scattered across local machines, ad-hoc updates performed under time pressure, and a deployment history that depends more on human memory than on version control. Teams often find themselves hoping that no one applied an untracked patch or tweaked a running workload “just this once.”
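As a minimal sketch of what this looks like in practice, consider a manifest kept on an engineer's workstation and pushed to the cluster by hand; the file name, image, and namespace below are hypothetical, and nothing in version control records the change:

  # cnf-frontend.yaml - kept locally and applied ad hoc, for example:
  #   kubectl apply -f cnf-frontend.yaml
  # or, via the Helm client:
  #   helm upgrade --install cnf-frontend ./chart -n cnf-prod
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: cnf-frontend
    namespace: cnf-prod
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: cnf-frontend
    template:
      metadata:
        labels:
          app: cnf-frontend
      spec:
        containers:
        - name: cnf-frontend
          image: registry.example.com/cnf-frontend:1.4.2   # edited by hand for each upgrade
          ports:
          - containerPort: 8080

Every subsequent upgrade depends on someone editing the image tag and re-running the command, which is exactly the kind of untracked, push-based change that the next stage of the evolution removes.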
GitOps turns this model on its head by making Git, not the operator’s terminal, the authoritative source of truth. Instead of telling the cluster how to change, teams declare what the end state should be, storing all configuration as versioned, immutable objects in a central repository. A reconciliation controller such as Argo CD or Flux continuously monitors that repository and ensures the cluster matches it at all times.
The diagram below shows an example deployment of the Titan.ium Number Portability CNF, which consists of three main components: the network-facing ENUM component, the core component responsible for service chaining across multiple data sources, and the local component that manages interactions with the local number portability database. When a configuration change is merged, the reconciliation controller built into the CD tool applies it. When drift occurs, the controller corrects it. The deployment process becomes predictable, fully auditable, and inherently aligned with modern practices.
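As an illustrative sketch of the declarative counterpart, the same deployment could be captured as an Argo CD Application resource that points at a Git repository holding the CNF's manifests or Helm chart; the repository URL, path, and namespaces below are placeholders:

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: number-portability-cnf            # illustrative name
    namespace: argocd
  spec:
    project: default
    source:
      repoURL: https://git.example.com/telco/cnf-deployments.git   # placeholder repository
      targetRevision: main
      path: number-portability/overlays/production
    destination:
      server: https://kubernetes.default.svc
      namespace: cnf-prod
    syncPolicy:
      automated:
        prune: true          # remove resources that were deleted from Git
        selfHeal: true       # revert manual, out-of-band changes to the cluster
      syncOptions:
      - CreateNamespace=true

With this in place, the only supported way to change the deployment is to change the repository; the controller takes care of the rest.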
For telco operators, the shift to GitOps has a significant impact. GitOps reduces operational variance, enforces configuration consistency across large fleets, and eliminates ambiguity about what is (or should be) running in production. Rather than relying on institutional knowledge or hoping no manual fix slipped through, teams gain a reliable, declarative, and repeatable mechanism for managing their CNFs.
While GitOps provides consistency and automation at the Kubernetes level, some CNF lifecycle operations, particularly upgrades, can be highly CNF-specific and tied to vendor-defined methods of procedure (MoPs) that must be strictly followed to ensure service continuity. This sets the stage for the next step in the evolution: layering telco-specific intelligence on top of GitOps through vendor-provided CNF operators, which embed the necessary operational knowledge to execute these critical workflows correctly and reliably, without sacrificing cloud-native elegance.
CD-Tool Reconciliation versus CNF Operator Intelligence
As deployment processes become more advanced, the question is not just whether to automate them, but how and at which layer to do so safely and effectively. This brings us to an increasingly relevant comparison: the reconciliation performed by generic CD tools such as Flux or Argo CD, versus the reconciliation logic implemented by a CNF-specific operator delivered by the vendor. The table below provides this comparison. It is not meant to discourage the standard use of continuous deployment tooling; rather, it illustrates why certain carrier-grade scenarios may require additional operator logic.

CD tools provide the familiar, standardized workflow that cloud-native teams rely on. They monitor declarative deployment descriptors and configurations, expressed through objects stored in Git repositories, and ensure the cluster matches the desired state. In practice, this means that once the intended outcome is committed to the repository, the CD controller takes over by pulling changes, applying updates, and continuously reconciling any drift. The strength of this model lies in its predictability and repeatability. Whether performing software updates, configuration modifications, or rollbacks, CD tools execute these actions with a level of consistency that is foundational to GitOps.
However, these tools are fundamentally generic. Their reconciliation loop is designed for broad applicability across all cloud-native applications. Health checks and rollback strategies provide basic safety nets, but they remain limited to Kubernetes-level abstractions. For most stateless workloads, this is sufficient. For telco-grade CNFs, it often isn’t.
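To make this concrete, the fragment of a Deployment spec below shows the kind of generic safeguards a CD controller can act on; the values are illustrative. A rolling-update budget and an HTTP readiness probe protect capacity and gate traffic on a health endpoint, but they say nothing about whether signaling sessions have been drained or whether the CNF's data path remains intact during the change.

  # Fragment of a Deployment spec: the safety nets a generic CD tool relies on
  # (illustrative values; nothing here captures CNF-internal state)
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # keep full capacity while pods are replaced
      maxSurge: 1
  template:
    spec:
      containers:
      - name: cnf-frontend     # hypothetical container name
        image: registry.example.com/cnf-frontend:1.5.0
        readinessProbe:        # gates traffic on a generic HTTP check only
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 5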
This is where CNF-specific operators come into play. Unlike CD controllers, these operators embed domain knowledge about the network function itself. They understand not just the Kubernetes objects that compose the CNF, but the internal relationships between its microservices, data paths, and operational states. Their reconciliation loop spans the CNF’s entire lifecycle: orchestrating multi-phase upgrades, validating in-service performance, sequencing component restarts, managing traffic drain and switchover, and enforcing safeguards that ensure service continuity throughout.
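As a purely hypothetical sketch (the API group, resource kind, and fields below are invented for illustration and do not represent any vendor's actual interface), a CNF operator typically exposes this lifecycle intent through a custom resource whose spec captures concerns that plain Kubernetes objects cannot express, such as phase ordering, traffic drain, and CNF-aware health gates:

  apiVersion: cnf.example.com/v1alpha1     # hypothetical API group
  kind: NumberPortabilityCNF               # invented custom resource kind
  metadata:
    name: np-cnf
    namespace: cnf-prod
  spec:
    version: "2.3.0"
    upgradeStrategy:
      mode: InService                      # upgrade without interrupting traffic
      phases:                              # strict, vendor-defined component sequencing
      - component: local
      - component: core
      - component: enum
      trafficDrain:
        enabled: true
        timeoutSeconds: 300                # drain sessions before restarting a component
      healthGates:                         # CNF-aware checks evaluated between phases
      - "querySuccessRate >= 99.99"
    rollback:
      automatic: true                      # revert the current phase if a gate fails

In the layered model discussed here, such a resource would itself live in Git and be synced by the CD tool, while the vendor-provided operator executes the procedure it describes.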
In other words, CD tools reconcile Kubernetes resources; CNF operators reconcile the CNF as a living system. For telecom environments, this distinction is crucial. A GitOps controller can reliably ensure that a new CNF version is deployed, but it cannot guarantee that the call-processing engine has safely drained sessions before an upgrade, or that the CNF components remain synchronized during a phased rollout. CNF operators, purpose-built by the vendor, are designed precisely for these requirements. This layered approach, with generic CD tools driving desired-state propagation and CNF operators enforcing domain-specific orchestration, creates a deployment model that is both cloud-native and telco-grade. It combines the strengths of declarative GitOps automation with the operational intelligence required to run mission-critical network functions at scale.
Closing the Loop
As cloud-native principles continue to reshape telecom infrastructure, the deployment of CNFs is evolving from manual, command-driven workflows to fully automated, intelligence-infused lifecycle management. Imperative methods laid the foundation, GitOps brought discipline and consistency, and CNF-specific operators now add the domain expertise required for carrier-grade reliability. Together, these layers form a deployment model that is both robust and adaptable, leveraging the strengths of standardized continuous deployment tools while integrating the operational safeguards unique to telecom networks. This convergence represents not only an evolution in tooling but also a step toward a deployment process that operates with the resilience and predictability that telco environments demand.
About Titan.ium
Titan.ium Platform is a leader in signaling, routing, subscriber data management, and security software and services. Our solutions are deployed in more than 80 countries by over 180 companies, including eight of the world’s top ten communications service providers.
Titan.ium began its cloud-native journey in 2019 with the introduction of its Titan.ium cloud-native platform. By the end of 2025, Titan.ium’s cloud-native portfolio includes several 5G network functions and numerous legacy network functions that have transitioned to cloud-native to address immediate market demands. At the same time, we continue supporting the Titan virtualized platform, which can also be deployed on physical servers. This gradual shift enables communications service providers to harmonize their infrastructure while ensuring continuity.