Data-Driven Transformation in Telco Operators | White Paper | Polystar
We all know that data is required for our operations – but it’s also essential for unlocking new ways of transforming how we run our telco networks. Data consolidation is key and, once achieved, opens up new possibilities, like enhanced customer experience assurance.
Operators that are either embarking on or mid-way through transformation programs know that data is critical to the success of these initiatives – and that new sources of data can enhance customer experience assurance programs.
Of course, data has long been collected from networks to inform operations and drive service assurance. But today we recognize that we need more – and that data can be used to support a multitude of initiatives.
There are many sources of data that we already know can be useful. That begins with classical sources – network user and control plane data from signaling and media traffic, performance management data from systems in the OSS layer, and so on. But we can also consider other sources – the RAN, edge devices, transport and the like. Plus data from users themselves: the issues they report and demands they make across social media, or their interactions with customer care and support teams.
The list goes on. Any source of data that we consider useful must be made available in a way that allows it to be analyzed, interpreted and processed, so that it can be accessed by the systems that require such inputs. Crucially, it must also support the move towards autonomous operations, enabled by the integration of AI / ML technologies and algorithms – the expected end-result of transformation efforts.
To get there, we need to employ the right strategy for consolidating the available sources of data – and make sure that this is extensible to new sources, as they become available or are identified. We do not want to create more data silos that would, ultimately, require further convergence.
That much is generally acknowledged. What is less widely known – but should be – is that we already have a technology that can solve the data consolidation challenge today, paving the way for the growing inclusion of AI and ML in our operational systems: DataOps. We’ve already discussed the merits of DataOps and how it can assist both telecoms performance management and customer experience assurance – both traditional disciplines that remain crucial to operators’ success – and it’s a topic to which we return now. As a reminder, here is the definition of DataOps given by Wikipedia:
"DataOps is a set of practices, processes and technologies that combines an integrated and process-oriented perspective on data with automation and methods from agile software engineering to improve quality, speed, and collaboration and promote a culture of continuous improvement in the area of data analytics."
This means creating an environment in which data can be consolidated from multiple sources and made accessible, as we noted earlier in this article. Enhancing experiences and assurance remain the key drivers, particularly as we move to more complex services, but there are other interesting areas to explore.
Let’s focus on a simple example to show how this approach can offer incremental value and transform a key discipline – customer experience assurance. Here, we can already combine two different data sources to deliver better outcomes. The combined data delivers new insights and takes account of additional factors related to the experiences delivered.
With network probes, we can capture detailed information about individual subscriber sessions. We know the MSISDN of the subscriber, so we can zoom into the session experienced by the user, using DPI and other techniques. However, we don’t know anything about the resource utilization of the systems that are delivering the session in question.
That information is contained in performance management data – which, historically, has existed in parallel with probe analytics data. So, while each is valuable, they have been separated into silos.
But, if we use DataOps to process performance management data – resource utilization counters, system health counters, telemetry data and so on – we can then inject this data into network analytics solutions, combining it with probe data in a single environment.
With this approach, we can correlate what the subscriber is doing (watching a streaming video) with system-level data (processor activity, alarms and so on) from the platforms that are delivering the content and session to the user’s device.
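To make this concrete, here is a minimal sketch of that correlation step in Python. All names, record layouts and values are hypothetical – real probe and performance management feeds are far richer – but the principle is the same: attach to each subscriber session the system-level samples taken on its serving node while the session was active.

```python
from datetime import datetime

# Hypothetical, simplified records: a probe-side view of one subscriber
# session, and performance management (PM) samples from network nodes.
probe_sessions = [
    {"msisdn": "46700000001", "node": "upf-03", "service": "video",
     "start": datetime(2024, 5, 1, 10, 0), "end": datetime(2024, 5, 1, 10, 5)},
]
pm_samples = [
    {"node": "upf-03", "time": datetime(2024, 5, 1, 10, 2), "cpu_pct": 97},
    {"node": "upf-04", "time": datetime(2024, 5, 1, 10, 2), "cpu_pct": 35},
]

def enrich_sessions(sessions, samples):
    """Attach to each session the PM samples taken on its serving node
    while the session was active -- the combined view described above."""
    enriched = []
    for s in sessions:
        matching = [p for p in samples
                    if p["node"] == s["node"]
                    and s["start"] <= p["time"] <= s["end"]]
        enriched.append({**s, "pm": matching})
    return enriched

combined = enrich_sessions(probe_sessions, pm_samples)
print(combined[0]["pm"])  # PM samples from upf-03 during the session
```

In a production environment this join would be done by the analytics platform itself, at scale and in near real time; the point is simply that once both feeds land in one place, the correlation is a straightforward keyed, time-windowed lookup.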
That gives us a bigger, more comprehensive picture that enriches what we already know and brings data from different sources together so that they can complement each other. So, rather than looking in two places to find helpful information, we can enjoy a combined view that delivers richer insights into customer experiences.
With that, we can enable multiple new use cases because we can correlate system behavior with network performance. For example, we can more easily perform root cause analysis, because with both network-level events and performance management metrics, we can quickly identify whether an issue is caused by network problems or by system performance. This could be because a CPU is overloaded, leading to service degradation for the user – the network is performing efficiently, but the systems can’t handle the demand.
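A rough sketch of that triage logic follows, assuming hypothetical field names and an illustrative CPU threshold – a real system would use many more signals, but the decision shape is the same: a degraded session on a healthy network but an overloaded node points at system capacity, not the network.

```python
CPU_OVERLOAD_PCT = 90  # illustrative threshold, not a real operational value

def classify_root_cause(session_kpis, node_kpis):
    """Toy root-cause triage combining session-level (probe) and
    node-level (PM) metrics. Field names are hypothetical."""
    if not session_kpis["degraded"]:
        return "no issue"
    if node_kpis["cpu_pct"] >= CPU_OVERLOAD_PCT:
        # Network performing fine, but the platform can't handle demand
        return "system overload"
    if session_kpis["packet_loss_pct"] > 1.0:
        return "network problem"
    return "unknown"

print(classify_root_cause({"degraded": True, "packet_loss_pct": 0.1},
                          {"cpu_pct": 97}))  # system overload
```

Without the combined view, neither data source alone could make this distinction: the probe sees only the degraded session, and the PM system sees only a busy node.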
The list goes on – but the correlation is enabled by data-driven operations that bring different data sources together, backed by DataOps processing.
So, back to our sources of data. We know that we need to bring more information to our operations – not just those we can capture today, but those from areas we may have neglected or for which we have lacked tools to secure the data we need.
DataOps gives us the platform to achieve this, with the ability to integrate any current or future source of information. It enables data from all these disparate sources to be consolidated into a single, usable resource, accessible to any application or user that wishes to consume it.
The right data strategy, which means leveraging DataOps, is key to unlocking these possibilities and realizing our ambitions. In the future, we can consider other data inputs as part of this mix – in fact, we must not limit our ambitions, because we need to bring everything to bear on the problems we face – experiences, assurance and more, as we evolve our networks and deliver the next generation of services. In that context, all data could be relevant!
This is the last of three articles in our blog post series about Automated Assurance in telecoms, in which Asparuh Rashid and Mohammad Shaheen share their expert opinions.
You can find the previous articles at the bottom of this page.