Patrik Rokyta, CTO Titan.ium Platform
As the telecommunications industry embraces the shift to cloud-native architectures, the deployment of Cloud-Native Network Functions (CNFs) has become a focal point of transformation. Traditionally, CNF suppliers have relied on OCI-compliant container images and Helm charts, accompanied by deployment-specific overrides like Day-1 configurations. However, this delivery model has proven insufficient due to ongoing misalignment between suppliers and operators, particularly around the granularity of microservices decomposition, adherence to cloud-native best practices, and the level of automation expected in modern deployments. Many CNFs remain "lift-and-shift" versions of legacy software, with oversized images, limited alignment with the cloud-native architecture principles defined in the GSMA NG.139 document, and minimal integration of GitOps principles such as declarative configuration and continuous deployment. While operators demand greater openness and automation, suppliers often resist, constrained by legacy business models centered on professional services and tightly coupled solutions. In this blog, we explore how Titan.ium is evolving its CNF deployment approach to meet emerging expectations - emphasizing pre-validation, on-prem validation, and automation as critical components of a modern, portable, and scalable deployment process.
Effective pre-validation of CNFs in the cloud-native era requires broad compatibility, automation, and a shift toward validating against standardized infrastructure profiles rather than proprietary setups. After more than five years of cloud-native transformation, many vendors still pre-validate their CNFs against a narrow set of infrastructure configurations, often chosen based on internal preferences rather than industry consensus. This industry consensus is captured in the GSMA NG.139 document, titled “Cloud Infrastructure Reference Architecture Managed by Kubernetes”, which aims to reduce fragmentation by specifying common expectations for compute, storage, networking, observability, lifecycle management, and security. By aligning with these guidelines, the industry can promote interoperability, portability, and automation, making it easier for CNFs to run across diverse, operator-chosen cloud platforms shortly after a new upstream Kubernetes release and without requiring vendor-specific adaptations. Importantly, the level of alignment with cloud-native best practices outlined in the GSMA NG.139 document can be automatically verified using the CNF Test Suite. This test suite enables comprehensive validation of both cloud-native platforms and CNFs, and serves as the basis for CNF certification as one of the key pre-validation metrics.
CNF certification, as carried out by CNF Test Suite v1.4.5, comprises 19 essential tests grouped into the following categories:
The CNF must pass at least 15 tests for a successful certification.
The log below shows the execution of the node drain test from the CNF state tests group. This test is particularly important during the pre-validation phase, as it serves as a prerequisite for the In-Service Software Upgrade (ISSU), which is described later in this blog as part of the Sylva validation activity that Titan.ium participated in during spring 2025.
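The essence of the node drain test's pass/fail criterion can be sketched in a few lines of Python. This is an illustrative reduction, not the CNF Test Suite implementation: the function, field names, and pod records below are hypothetical stand-ins for data that would come from the Kubernetes API after the drain completes.

```python
# Hypothetical sketch of the pass/fail logic behind a node drain test,
# operating on simplified pod records rather than live Kubernetes API objects.
# Field names loosely mirror the pod spec/status (nodeName, ready), but the
# data structures are illustrative.

def drain_succeeded(pods_after, drained_node, expected_replicas):
    """True if, after draining `drained_node`, no CNF pod remains on that
    node and the expected replica count is ready elsewhere."""
    on_drained = [p for p in pods_after if p["nodeName"] == drained_node]
    ready_elsewhere = [
        p for p in pods_after
        if p["nodeName"] != drained_node and p["ready"]
    ]
    return not on_drained and len(ready_elsewhere) >= expected_replicas

# Example: two replicas rescheduled off worker-1 and ready again.
pods = [
    {"name": "idns-0", "nodeName": "worker-2", "ready": True},
    {"name": "idns-1", "nodeName": "worker-3", "ready": True},
]
print(drain_succeeded(pods, "worker-1", expected_replicas=2))  # True
```

A CNF passes this kind of check only if its workloads tolerate eviction, which is precisely why the node drain test is a prerequisite for ISSU: upgrading Kubernetes node by node is, in effect, a sequence of drains.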
The table below visualizes the results of the successful iDNS certification using CNF Test Suite v1.4.5 and the OPNFV (Anuket) xTesting framework, as applied during the Sylva validation activity discussed later in this blog (Note: the log_output test fails because Titan.ium’s CNFs write logs to a Kafka message queue rather than to stdout/stderr, in order to avoid using host ephemeral storage in high-throughput signaling environments).
Using the CNF Test Suite as a vendor-neutral validation tool enables suppliers to pre-validate their CNFs shortly after each new upstream Kubernetes release, alongside other validation tests carried out as part of the secure Software Development LifeCycle (SDLC) process. While this provides a strong foundation for cloud-native readiness, additional pre-validation against specific commercial distributions such as OpenShift, Tanzu, Rancher, or hyperscaler platforms like EKS, AKS, and GKE may still be requested by individual operators and can be offered as part of a tailored professional services engagement.
On-prem validation plays a critical role in determining whether a CNF is truly production-ready within a specific operator environment. Unlike pre-validation, which is typically performed on an infrastructure stack available to the CNF supplier, on-prem validation reflects the actual deployment conditions, including the operator's unique cloud-native platform, network setup, security policies, and integration requirements. This process begins with a deployment health check, ensuring that all CNF containers are in a running and ready state, have not experienced unexpected restarts, and are operating with valid certificates. The test may also include checks for data consistency or verification against performance metrics such as service latency and signaling throughput, along with any additional tests the CNF supplier or operator deems important for the specific deployment scenario.
An important aspect of on-prem validation is the portability of the testing process, along with the ability to exchange validation results and logs between the operator and the CNF supplier in a standardized, reproducible way. This is where the xTesting framework originating from the OPNFV project and now maintained under Anuket proves valuable. xTesting provides a lightweight, Python-based test orchestration library that facilitates the automation, packaging, and reuse of CNF test cases across different environments, enabling efficient and vendor-neutral validation workflows. The example below demonstrates a check for container restarts by querying the Kubernetes API server from the xTesting container.
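An illustrative sketch of such a restart check is shown below. In the xTesting container the pod data would be fetched from the Kubernetes API server (for instance with the official `kubernetes` Python client); here the check is reduced to a pure function over dictionaries shaped like the API's `containerStatuses`, and the pod names are hypothetical.

```python
# Illustrative sketch of the container-restart health check. The pod
# dictionaries mimic the shape of Kubernetes API responses; in the real
# xTesting container this data would be retrieved from the API server.

def restarted_containers(pods):
    """Return (pod, container) pairs whose restartCount is non-zero."""
    restarted = []
    for pod in pods:
        for cs in pod["status"].get("containerStatuses", []):
            if cs.get("restartCount", 0) > 0:
                restarted.append((pod["metadata"]["name"], cs["name"]))
    return restarted

pods = [
    {"metadata": {"name": "idns-0"},
     "status": {"containerStatuses": [{"name": "dns", "restartCount": 0}]}},
    {"metadata": {"name": "idns-1"},
     "status": {"containerStatuses": [{"name": "dns", "restartCount": 2}]}},
]
print(restarted_containers(pods))  # [('idns-1', 'dns')]
```

An empty result means no container has restarted unexpectedly; any hit fails the health check and is included in the report exchanged between operator and supplier.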
The result of the deployment health check in OPNFV (Anuket) xTesting format is depicted below. The exit value of the xTesting process is propagated to the parent process and can be used in both automation frameworks and simple shell scripts.
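The propagation of the verdict into an exit code can be sketched as follows. The function name and the percentage-based pass criterion are assumptions for this illustration, not the actual xTesting API; the point is simply that a wrapping shell script or pipeline can branch on `$?`.

```python
import sys

# Illustrative mapping of a test result to a process exit status so that
# automation frameworks and simple shell scripts can branch on it. The
# function name and percentage-based criteria are assumptions for this
# sketch, not the actual xTesting API.

def exit_status(result_pct, criteria_pct=100):
    """0 (success) if the result meets the pass criteria, 1 otherwise."""
    return 0 if result_pct >= criteria_pct else 1

# A test wrapper would typically end with:
#   sys.exit(exit_status(result, criteria))
print(exit_status(100))      # 0
print(exit_status(80, 100))  # 1
```

From a shell script, the same verdict is then consumed as, for example, `python3 healthcheck.py && echo "deployment healthy"`.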
Following this, functional testing validates the CNF’s behavior under real-world workloads. Functional testing of telecommunication products is essential to ensure that all features and services operate correctly and reliably, meeting customer requirements, user expectations, and the associated telecommunication and regulatory standards.
By default, functional tests are point-in-time snapshots of an application's behavior, collected either on demand or at configurable intervals. They are typically triggered by testing pipelines defined in Continuous Integration (CI) environments. While automated, the drawback of this approach is the risk of missing flaky system behavior that occurs sporadically between test events.
Titan.ium is embracing the concept of a portable test suite deployed directly onto the operator’s system. The test suite continuously probes the Application Under Test (AUT) by sending low-rate test vectors, making it possible to detect sporadic functional errors, such as those illustrated in the diagram below. These errors can then be reported to the fault supervision system.
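A minimal sketch of such a continuous prober is shown below. It assumes a pluggable `probe` callable that sends one low-rate test vector to the AUT and returns `True` on success; the function names and the reporting hook are illustrative, with the hook standing in for the interface to the fault supervision system.

```python
import time

# Minimal sketch of a continuous low-rate prober. The `probe` callable is
# assumed to send one test vector to the Application Under Test and return
# True on success; `report` stands in for the fault supervision interface.

def run_probes(probe, count, interval_s=1.0, report=print):
    """Send `count` probes at a low rate; report and collect failures."""
    failures = []
    for i in range(count):
        if not probe():
            failures.append(i)
            report(f"probe {i} failed")  # forwarded to fault supervision
        if interval_s:
            time.sleep(interval_s)
    return failures

# Demo against a fake AUT that fails sporadically (every third probe),
# mimicking the kind of intermittent error a snapshot test would miss.
state = {"n": 0}
def flaky_probe():
    state["n"] += 1
    return state["n"] % 3 != 0

print(run_probes(flaky_probe, count=6, interval_s=0,
                 report=lambda msg: None))  # [2, 5]
```

Because the prober runs continuously rather than at pipeline-triggered snapshots, the sporadic failures at probes 2 and 5 are caught and can be raised as alarms instead of going unnoticed between test events.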
When paired with the xTesting framework, test results are easily portable between the operator and the CNF supplier. The report below represents the on-prem functional test of Titan.ium’s Infrastructure DNS (iDNS) solution, as executed during the Sylva validation activity (Note: ENUM-specific tests were out of scope for the Sylva validation activity and were therefore skipped).
Both the deployment health check and the CNF functional test can be executed in test or pre-production environments or, where appropriate, in production systems by using canary rollouts. When integrated into automated pipelines, these tests become essential building blocks of a zero-touch operational model, ensuring continuous validation, rapid feedback, and minimal manual intervention throughout the CNF lifecycle.
Automation is a critical enabler that bridges pre-validation and on-prem validation in the CNF lifecycle. While pre-validation establishes a foundational level of confidence by testing CNFs against industry standards and cloud-native best practices, on-prem validation verifies actual deployment readiness within the operator’s unique environment. To fully realize this lifecycle, the deployment and configuration of CNFs must be automated end-to-end using declarative, cloud-native approaches, primarily driven by GitOps principles and tools like Flux or ArgoCD. This represents a necessary shift away from manual, imperative telco processes toward a continuous, intent-based deployment model, where everything is managed as code, and microservices remain loosely coupled to support scalable, independent operation. To support advanced deployment and lifecycle management, Kubernetes Operators can be used where necessary to handle version-aware upgrades and rollbacks, tasks that go beyond what Kubernetes' native rolling update mechanism can manage effectively.
Automation also plays a key role in validating CNF readiness, particularly through deployment health checks and functional testing. Continuous Integration (CI) frameworks, such as Jenkins or GitLab, can automatically trigger test workflows whenever changes occur in the CNF or underlying platform. This ensures tight integration between the software releases of the CNF supplier and operator-side testing pipelines, reducing manual effort and accelerating deployment validation across diverse cloud-native infrastructures.
CI pipelines can automate a wide range of tasks, from executing xTesting containers for specific tests (e.g., CNF deployment health checks, functional tests, CNF test suites, or platform compliance checks) to publishing results as structured reports or compressed files into S3-compatible object storage for analysis and traceability. These pipelines ensure repeatability, traceability, and speed throughout the testing process. The diagram below illustrates such an automated testing pipeline in action, with a Jenkins stage view showing the executed functional test and an AWS S3 bucket displaying the reports published in OPNFV (Anuket) xTesting format.
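The publication step can be sketched as follows. The report fields loosely follow the OPNFV (Anuket) xTesting result format, but the exact field set, the project name, and the bucket layout are assumptions for illustration; the actual upload (e.g., via boto3 or the xTesting S3 publisher) is deliberately left out.

```python
import json
import time

# Sketch of packaging a test result as a structured report and deriving an
# S3 object key for traceable publication. Field names loosely follow the
# OPNFV (Anuket) xTesting result format; the project name, key layout, and
# build identifier are assumptions for this illustration.

def build_report(case_name, result_pct, criteria="PASS"):
    now = time.strftime("%Y-%m-%d %H:%M:%S")
    return {
        "project_name": "idns",   # illustrative project name
        "case_name": case_name,
        "start_date": now,
        "stop_date": now,
        "result": result_pct,
        "criteria": criteria,
    }

def s3_key(report, build_id):
    """Derive a traceable object key, e.g. idns/42/healthcheck.json."""
    return f"{report['project_name']}/{build_id}/{report['case_name']}.json"

report = build_report("healthcheck", 100)
print(s3_key(report, build_id=42))  # idns/42/healthcheck.json
print(json.dumps(report, indent=2))
```

Keying objects by project, build, and test case is what makes the stored reports traceable: any pipeline run can later be correlated with the exact CNF release and test that produced it.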
In scenarios where a full CI framework is unavailable, such as during product demos, proof-of-concept (PoC) trials, or certification events, a similar level of automation can be achieved using Kubernetes Operators. In these cases, operators can be configured to launch xTesting containers in response to triggers such as the creation of a Custom Resource (CR) instance, providing a flexible, lightweight alternative for validating CNFs in controlled environments.
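The core of such an operator's reconcile step can be sketched as a pure function: given a CR instance, build the Kubernetes Job manifest that launches the xTesting container. The CR schema (`spec.image`, `spec.testCase`) and the image reference are assumptions for this sketch; a real operator (written, for example, with kopf or the Operator SDK) would watch for CR creation and submit the manifest through the API server.

```python
# Sketch of the reconcile logic of a CR-triggered test operator. The CR
# schema (spec.image, spec.testCase) and the image reference are assumed
# for illustration; a real operator would submit this Job via the API server.

def xtesting_job_from_cr(cr):
    """Build a Kubernetes Job manifest launching an xTesting container."""
    name = cr["metadata"]["name"]
    spec = cr["spec"]
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": f"xtesting-{name}"},
        "spec": {
            "backoffLimit": 0,
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "xtesting",
                        "image": spec["image"],
                        # xTesting's run_tests entry point selects the case.
                        "command": ["run_tests", "-t", spec["testCase"]],
                    }],
                },
            },
        },
    }

cr = {"metadata": {"name": "healthcheck"},
      "spec": {"image": "example.org/xtesting:latest",  # hypothetical image
               "testCase": "healthcheck"}}
job = xtesting_job_from_cr(cr)
print(job["metadata"]["name"])  # xtesting-healthcheck
```

Creating a CR instance thus becomes the single manual action: the operator derives and runs the Job, and the xTesting exit status and report provide the same artifacts a full CI pipeline would produce.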
Titan.ium has demonstrated successful CNF onboarding and deployment validation as part of the validation process mandated by Project Sylva, an LF Europe initiative. The objectives of Project Sylva as outlined on https://sylvaproject.org/ are: “to release a cloud software framework tailored for telco and edge requirements that address the technical challenges of the industry layer of this ecosystem”, and “to develop a reference implementation of the cloud software framework and create a validation program for such implementations”.
The Application Under Test (AUT) used in the Sylva validation activity was Titan.ium’s cloud-native Infrastructure DNS (iDNS) solution, which was enhanced with a specialized init system and subjected to the following validation tests:
A key highlight delivered by Titan.ium during the Sylva validation activity was the consistent use of Kubernetes Operators combined with the xTesting framework to automate the triggering of the CNF Test Suite, the CNF deployment health check, and the CNF functional test. This unified approach ensures all tests follow the same zero-touch operational model, delivering a consistent quality of experience. Moreover, all test results are standardized and exposed in the OPNFV (Anuket) xTesting format, facilitating transparent reporting across the entire validation lifecycle. Finally, Titan.ium's iDNS was the first AUT in the Sylva validation program to successfully complete the ISSU procedure, which involved upgrading the Kubernetes version across all control and worker nodes without causing significant disruption to the service provided by the AUT. The full validation report can be viewed here.
Titan.ium Platform is a leader in signaling, routing, subscriber data management, and security software and services. Our solutions are deployed in more than 80 countries by over 180 companies, including eight of the world’s top ten communications service providers.
Titan.ium began its cloud-native journey in 2019 with the introduction of its Titan.ium cloud-native platform. By mid-2025, Titan.ium’s cloud-native portfolio includes several 5G network functions and selected legacy network functions that have transitioned to cloud-native to address immediate market demands. At the same time, we continue supporting the Titan virtualized platform, which can also be deployed on physical servers. This gradual shift enables communication service providers to harmonize their infrastructure while ensuring continuity.