How mobile operators analyze our data
13-02-2024, 08:07
Mobile operators receive a huge amount of data and metadata, from which you can learn a lot about the life of an individual subscriber. Once you understand how this data is processed and stored, you can trace the entire chain of information, from the call itself to the debiting of money. And if we consider the internal-intruder model, the possibilities are even greater, because data protection is not among the tasks of a mobile operator's prebilling systems at all.
To begin with, keep in mind that subscriber traffic in a telecom operator's network is generated by and arrives from many different pieces of equipment. This equipment can produce record files (CDR files, RADIUS logs, plain ASCII text) and work over different protocols (NetFlow, SNMP, SOAP). All of this motley, unfriendly round dance has to be kept under control: the data must be collected, processed, and passed on to the billing system in a pre-agreed, standardized format.
At the same time, subscriber data flows everywhere, and outsiders should preferably not get access to it. How secure is the information in such a system, taking all the links of the chain into account? Let's figure it out.
Why do mobile operators need prebilling?
Subscribers are assumed to want ever newer and more modern services, but the equipment cannot be replaced every time to provide them. Implementing new services and new ways of delivering them is therefore prebilling's first task. The second is traffic analysis: checking its correctness and the completeness of its loading into subscriber billing, and preparing the data for billing.
Prebilling also implements various data reconciliations and reloads, for example, reconciling the status of services on the equipment and in billing. It happens that a subscriber keeps using services even though he is already blocked in billing, or uses services without the equipment producing any record of it. There are many such situations, and most of them are resolved with the help of prebilling.
I once wrote a term paper on optimizing a company's business processes and calculating ROI. The problem with calculating ROI was not a lack of source data; I simply did not understand which "ruler" to measure it with. Much the same happens with prebilling. You can endlessly tune and improve the processing, but at some point the circumstances and the data will always align so that an exception occurs. You can build a perfect system for operating and monitoring the auxiliary billing and prebilling systems, but you cannot guarantee uninterrupted operation of the equipment and the data transmission channels.
That is why there is a duplicate system that cross-checks the data in billing against the data that went from prebilling into billing. Its task is to catch what left the equipment but for some reason "did not land on the subscriber." This role of duplicating and controlling the prebilling system is usually played by the FMS, the Fraud Management System. Its main purpose, of course, is not to control prebilling at all but to identify fraud schemes; monitoring data losses and discrepancies between equipment data and billing data comes as a consequence.
In fact, there are many ways to use prebilling. For example, it can reconcile the subscriber's state on the equipment with the state in the CRM. Such a scheme might look like this:
1. Using SOAP, prebilling receives data from the equipment (HSS, VLR, HLR, AuC, EIR).
2. Convert the original RAW data to the required format.
3. Query the related CRM systems (databases, software interfaces).
4. Perform the data reconciliation.
5. Form exception records.
6. Request data synchronization from the CRM system.
As a result, a subscriber downloading a movie while roaming in South Africa gets blocked at zero balance and does not plunge deep into the negative.
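The reconciliation pass described above can be sketched roughly like this. The function name, the record layout, and the status values are illustrative assumptions, not the actual HP IUM or CRM interfaces:

```python
# Sketch of an equipment-vs-CRM reconciliation pass.
# All names and record shapes here are illustrative assumptions.

def reconcile(equipment_state, crm_state):
    """Compare subscriber status on the network equipment (HLR/VLR/HSS)
    with the status recorded in the CRM, and form an exception record
    for every subscriber whose states disagree."""
    exceptions = []
    for subscriber_id, eq_status in equipment_state.items():
        crm_status = crm_state.get(subscriber_id)
        if crm_status != eq_status:
            exceptions.append({
                "subscriber": subscriber_id,
                "equipment": eq_status,
                "crm": crm_status,
                "action": "sync",  # ask the CRM to re-synchronize
            })
    return exceptions

# A subscriber still active on the equipment but already blocked in the CRM
equipment = {"79001112233": "active", "79004445566": "blocked"}
crm       = {"79001112233": "blocked", "79004445566": "blocked"}
for exc in reconcile(equipment, crm):
    print(exc["subscriber"], exc["equipment"], "vs", exc["crm"])
```

The exception records produced at step 5 are exactly what drives the synchronization request at step 6.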
Another use case is accumulating data for further processing. This option applies when we have thousands of records from the equipment (GGSN/SGSN, telephony): dumping all of these records into the subscriber's itemized bill would be utter madness, not to mention that so much fine-grained data would put an infernal load on every system. The following scheme solves the problem:
1. Receive the data from the equipment.
2. Aggregate the data on prebilling (wait until all the necessary records have been collected according to some condition).
3. Send the data to the final billing.
As a result, instead of 10 thousand records we sent one, carrying the aggregated value of the consumed-traffic counter. We made just one database query and saved a lot of resources, including electricity!
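The aggregation step can be sketched in a few lines. The field names (`msisdn`, `bytes`) are invented for illustration; a real collector would key on whatever identifiers the equipment emits:

```python
# Collapsing thousands of per-session usage records into one
# billing record per subscriber. Field names are illustrative.
from collections import defaultdict

def aggregate_usage(records):
    """Sum the consumed-bytes counter per subscriber, so that billing
    receives one record instead of thousands."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec["msisdn"]] += rec["bytes"]
    return [{"msisdn": m, "bytes": b} for m, b in totals.items()]

# 10 000 small GGSN/SGSN records collapse into a single row
raw = [{"msisdn": "79001112233", "bytes": 1500}] * 10_000
out = aggregate_usage(raw)
print(len(out), out[0]["bytes"])  # 1 15000000
```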
These are just typical schemes. The format of this article does not allow for examples of more complex ones (Big Data, for example), but they occur too.
Hewlett-Packard Internet Usage Manager (HP IUM) Prebilling
To make it clearer how this works and where problems may arise, let's take the Hewlett-Packard Internet Usage Manager (HP IUM, eIUM in its updated version) prebilling system and use it as an example of how such software operates.
Imagine a large meat grinder into which meat, vegetables, and loaves of bread are thrown, anything at all. The inputs are extremely varied, but at the output they all take the same shape. We can change the plate and get a different shape at the output, but the principle and the method of processing remain the same: screw, knife, plate. This is the classic prebilling scheme: data collection, processing, and output. In IUM prebilling, the links of this chain are called the encapsulator, the aggregator, and the datastore.
It is important to understand that the input must be complete: there is a certain minimum amount of information without which further processing is useless. If some block or data element is missing, we get an error or a warning that processing is impossible, since the operations cannot be performed without this data.
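A minimal completeness check might look like this. The required-field set is an assumption made up for the sketch; each real collector has its own mandatory layout defined by the equipment vendor:

```python
# Minimal completeness check before a record enters processing.
# The mandatory field set below is an illustrative assumption.
REQUIRED_FIELDS = {"msisdn", "event_time", "duration", "cell_id"}

def missing_fields(record):
    """Return the set of mandatory fields absent from the record.
    An empty set means the record may continue down the chain."""
    return REQUIRED_FIELDS - record.keys()

rec = {"msisdn": "79001112233", "event_time": "2024-02-13T08:07:00"}
gaps = missing_fields(rec)
if gaps:
    print("warning: record rejected, missing", sorted(gaps))
```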
It is therefore very important that the equipment generates record files with a strictly defined set and type of data, fixed by the manufacturer. Each type of equipment has its own processor (collector) that works only with its own input format. For example, you cannot simply take a file with mobile subscribers' Internet traffic from Cisco PGW-SGW equipment and throw it at a collector that processes the stream from Iskratel Si3000 fixed-line equipment.
If we do, then at best we get an exception during processing, and at worst processing of that particular stream halts entirely, because the collector's handler crashes with an error and waits until we deal with the file it considers "broken." As a rule, all prebilling systems react badly to data that a given collector processor has not been configured to handle.
Initially, the stream of parsed data (RAW) is formed at the encapsulator level, and it can already be transformed and filtered there. This is done when changes need to be made to the flow before the aggregation stage, so that they apply to the entire data stream (as it passes through the various aggregation schemes).
Files with records of subscriber activity (.cdr, .log, and others) arrive from both local and remote sources (FTP, SFTP); working over other protocols is possible as well. The parser dissects these files using various Java classes.
Since in normal operation the prebilling system is not designed to store the history of processed files (and there may be hundreds of thousands of them per day), a file is deleted from the source after processing. For various reasons, the deletion does not always succeed, so records from a file sometimes get processed repeatedly, or with a long delay (once the file could finally be deleted). To prevent such duplicates, there are protection mechanisms: checking for duplicate files or records, checking timestamps in the records, and so on.
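One such duplicate check can be sketched with a content hash as the duplicate key. This is an illustration only; in a real deployment the set of seen hashes would live in one of the auxiliary databases mentioned below, not in process memory:

```python
# Guarding against reprocessing a file that failed to be deleted
# on the source. A SHA-256 of the file content serves as the
# duplicate key; persistence is omitted for brevity.
import hashlib

seen_hashes = set()

def is_duplicate(file_bytes):
    """True if this exact file content was already processed."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in seen_hashes:
        return True
    seen_hashes.add(digest)
    return False

print(is_duplicate(b"CDR,79001112233,120"))  # False: first time, process it
print(is_duplicate(b"CDR,79001112233,120"))  # True: duplicate, skip it
```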
One of the most vulnerable points here is sensitivity to data volume. The more data we store (in memory, in databases), the slower we process new data and the more resources we consume, and eventually we still hit the limit beyond which we are forced to delete old data. Auxiliary databases (MySQL, TimesTen, Oracle, and so on) are therefore usually used to store this metadata. As a result we get yet another system that affects the operation of prebilling, with all the security issues that entails.
How does prebilling work?
Once upon a time, at the dawn of such systems, languages that work well with regular expressions were used, Perl for example. In essence, almost all of prebilling, if you set aside the interaction with external systems, is a set of rules for parsing and converting strings, and there is nothing better for that than regular expressions. But the ever-growing data volumes and the ever more critical time-to-market for new services made such systems unusable: testing and making changes took too long, and scalability was poor.
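The regex-driven parsing at the heart of those early systems looks roughly like this. The record layout (caller, callee, start time, duration) is invented for illustration; every real CDR format is defined by the equipment vendor:

```python
# Parsing a CDR-like line with a regular expression, the way early
# Perl-based prebilling did. The field layout here is an invented
# example: caller, callee, start timestamp, duration in seconds.
import re

CDR_RE = re.compile(
    r"^(?P<caller>\d{11}),(?P<callee>\d{11}),"
    r"(?P<start>\d{14}),(?P<duration>\d+)$"
)

def parse_cdr(line):
    """Turn one raw CDR line into a record dict, or raise on garbage."""
    m = CDR_RE.match(line)
    if m is None:
        raise ValueError(f"unparseable CDR line: {line!r}")
    rec = m.groupdict()
    rec["duration"] = int(rec["duration"])
    return rec

print(parse_cdr("79001112233,79004445566,20240213080700,120"))
```

Easy to write, but as the paragraph above notes, hard to test and scale once there are hundreds of such rules.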
Modern prebilling is a set of modules, usually written in Java, that can be managed in a graphical interface using the standard copy, paste, move, and drag-and-drop operations. Working in this interface is simple and clear.
The operating system is mostly Linux or Unix based, less often Windows.
The main problems usually concern testing and error detection, since the data passes through many chains of rules and is enriched with data from other systems. Seeing what happens to it at each stage is not always convenient or obvious, so you end up hunting for the cause by tracing changes to the relevant variables in the logs.
The weakness of such a system is its complexity and the human factor: any exception can cause data loss or incorrectly formed data.
The data is processed sequentially. If there is an error at the input, an exception that prevents the data from being received and processed correctly, either the entire input stream halts or the portion of incorrect data is discarded. The parsed RAW stream then goes to the next stage, aggregation. There may be several aggregation schemes, and they are isolated from one another, like a single stream of water entering a shower head and splitting through the grille into separate jets, some thick, some quite thin.
After aggregation, the data is ready for delivery to the consumer. Delivery can go directly into databases, or via a file written and sent onward, or simply into the prebilling repository, where the data will sit until the repository is emptied.
After processing at the first level, data can be passed to the second level and beyond. Such a ladder is needed to increase processing speed and distribute the load. At the second stage, another stream can be added to ours; streams can be mixed, split, copied, merged, and so on. The final stage is always the delivery of data to the systems that consume it.
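A second-level stage that merges two first-level streams might be sketched like this. The stream names and the timestamp field are assumptions; the point is only that several ordered streams become one ordered stream for the consumer:

```python
# A second-level stage merging two already-sorted first-level
# streams (say, voice and data collectors) into one time-ordered
# stream for the consuming billing system. Names are illustrative.
import heapq

def second_level(*streams):
    """Merge sorted record streams by timestamp into a single stream."""
    yield from heapq.merge(*streams, key=lambda r: r["ts"])

voice = [{"ts": 1, "src": "voice"}, {"ts": 4, "src": "voice"}]
data  = [{"ts": 2, "src": "data"},  {"ts": 3, "src": "data"}]
merged = list(second_level(voice, data))
print([r["ts"] for r in merged])  # [1, 2, 3, 4]
```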
The tasks of prebilling do not include (and rightly so):
monitoring whether input and output data were received and delivered; separate systems should handle this;
encrypting data at any stage.
Not the entire incoming stream is processed, only the data needed for the job; there is no point wasting time on the rest until it is needed. So only what the aggregation schemes require is taken from the RAW stream: from RAW (text files, query results, binary files), only the necessary fields are parsed.
Privacy of pre-billing
Here we have a complete mess! For a start, data protection is not among prebilling's tasks at all. Access to prebilling can and should be restricted at different levels (management interface, operating system), but if we force it to encrypt data, the complexity and processing time grow so much that the result becomes completely unacceptable for billing.
Often the time from a service being used to that fact appearing in billing must not exceed a few minutes. As a rule, the metadata needed to process a specific portion of data is stored in a database (MySQL, Oracle, Solid). Input and output data almost always sit in the directory of a particular collector stream, so anyone with access to that directory (the root user, for example) has access to the data.
The prebilling configuration itself, with its set of rules and its credentials for databases, FTP, and so on, is stored encrypted in a file database. Without the login and password for the prebilling, unloading the configuration is not so easy.
Any change to the processing logic (the rules) is recorded in the prebilling configuration log file: who changed what and when.
Even when data is passed directly along the chains of collector handlers inside the prebilling (bypassing the upload to a file), it is still temporarily stored as a file in the handler's directory, and with some effort it can be accessed.
The data processed in prebilling is depersonalized: it contains no names, addresses, or passport data. So even with access to this information, you will not learn a subscriber's personal details from it. But you can fish out some information by a specific number, IP address, or other identifier.
With access to the prebilling configuration, you obtain the credentials for all the related systems it works with. As a rule, access to them is restricted to the server the prebilling runs on, but not always.
If you get into the directories where the handlers' file data is stored, you can modify the files waiting to be sent to the consumers. Often these are the most ordinary text documents. The picture then looks like this: the data was received and processed by prebilling, but never arrived at the final system; it vanished into a "black hole."
And finding the cause of such losses will be difficult, since only part of the data disappears. In any case, it will be impossible to reproduce the loss in order to track down the reason. You can examine the input and output data, but you will not understand where it went. All the attacker has to do is cover his tracks in the operating system.