The role of software in Telecommunications
Table of contents
1 Introduction
2 Initial employment of IT in TLC
3 Increase in the demand for new services: the Intelligent Network
4 Evolution of the computers towards TLC
4.1 Integration of IT communication in the telephone switches
4.2 The birth of the data networks
5 Convergence between voice and data networks
5.1 The "New Generation Network" architecture
5.2 The Application Layer
5.3 The IMS architecture
6 Management systems
7 Software development process in the TLC manufacturer companies
1 Introduction
What follows is a short overview of how software entered the Telecommunication world, gaining more and more importance as the technological improvement of the networks pushed towards a convergence between TLC and IT, up to assuming the largely predominant role in the development of any device, node or system of the Telecommunication networks.
It is an attempt to give a historical outline of how this phenomenon gradually developed through the years, by means of a few examples of TLC systems and network architectures that, on the basis of my personal experience, I consider particularly significant. Of course they are just a few examples, and this doesn't claim to be an exhaustive overview. Also, they are mentioned only with the aim of giving a sense of the importance of software in their realization, and not with the claim of providing a real description of them, which is well beyond the scope of this document.
2 Initial employment of IT in TLC
Up to a few decades ago the telecommunication networks, and in particular the switching ones, didn't take information technology into account, neither as a development technique, nor in terms of making computers communicate with each other:

•  the switching networks, both public and private, were required to provide voice services only, since computers worked only as stand-alone machines and didn't need any communication service;

•  information technology was not employed in the design of the network elements: the telephone switches didn't need great flexibility, since the services required by the users were limited. The development of the telephone switches was hardware based; in fact, at the very beginning, they were realized in electromechanical technique (relays) and their features were based on automatisms rigidly defined during the design phase.
As Information Technology developed, it began to enter the Telecommunication world as the basis upon which the main organs of the network elements (mainly the telephone switches, both public and private) were designed, when the enormous benefits that the employment of SW could bring to their development became evident; for example:

•  it enormously extended the capabilities of the switches, both in terms of performance and service complexity;

•  it allowed for simpler, cheaper and quicker service extension, both in terms of addition of new services and of upgrades of the existing ones, without any modifications to the hardware design, thus speeding up and simplifying the productive process;

•  it made the error and malfunction correction process quicker and simpler;

•  it extended the flexibility of the network elements, for example by allowing for easier customization, better adaptability to different network contexts, easier parameter configurability, etc.;

•  it allowed for the implementation of more effective redundancy techniques (e.g. microsynchronous duplication), thus increasing the system reliability;

•  ...
The process of acquisition of information technology within the telecommunication world experienced a further significant thrust with the diffusion of microprocessor technology: at the beginning the main organs of the network elements started to be designed as computers that each company used to develop by itself, based on discrete logic (both the CPU and the I/O devices were realized with the integrated circuits and the discrete logic that semiconductor manufacturers were making available in those years); later, as microprocessors began spreading out, hardware and software development became quicker and easier, and the computer boards became much more powerful.
Of course this caused dramatic changes both for the manufacturers and for the telecom operators, since the know-how required by the R&D labs, and in general by all the technical departments, moved towards the computer world, even if the characteristics of such types of computers were deeply different from those of the commercial ones.
The availability of more and more powerful and cheap microprocessors led to their employment for the realization of the majority of the functional blocks of the network elements, including the peripheral organs, and the network nodes (in particular the telephone switches) became complex ensembles of cooperating microprocessors. Languages, real time operating systems, finite state software structures (based on SDL, the Specification and Description Language: a symbolic language defined by the ITU to specify and describe the behavior of TLC systems), development and debugging systems, as well as I/O devices expressly realized for employment in the telecommunications world, started to spread out.
3 Increase in the demand for new services: the Intelligent Network
The adoption of Information Technology in the Telecommunication world grew even more extensive when the services requested by the users started to become more and more complex and also affected the general architecture of the network. Some new services were too complex to be effectively realized within each switch, or needed access to data that were common to the whole network. Therefore, to provide those special services, the network architectures started to include specialized computers and Data Bases, centralized at the network level. The most typical example is the "Intelligent Network" (IN).
Fig. 1 - Intelligent Network architecture
IN is a clear example of how the great flexibility provided by software has been decisive in making possible the provision of services otherwise much more complex, or even too complex, to be realized. Besides the common telephone services directly provided by the switches, additional "Intelligent" services were provided by the Service Control Point (SCP), a functional block centralized at the network level, which hosted the service logic and accessed user data and account information stored in the Service Data Point (SDP).
Every time a user invoked one of the "Intelligent" services, the "Service Switching Point" (SSP: an IN function hosted by the originating switch), after detecting that it was an IN service, triggered the SCP. Depending on the implemented service logic and the user and account information stored in the SDP, the SCP would reply with indications on how the service should be handled.
In this architecture, adding a new service in the network only requires acting at the SCP
and the SDP level through dedicated software applications (SCE: Service Creation
Environment) that allow for easy, quick, flexible and user friendly implementation of the
service logic and data, while the only information each individual switch needs is the presence of the new service, so that the SSP function can trigger the SCP every time a user invokes it.
Televoting, Number Portability, Green Number, Reverse Charging, VPN, Calling card,
Personal Number, Universal Number, Call distribution (location and/or time based,
proportional, etc.) are a few examples of IN services. In the figure below an example of call
flow involving access to SCP/SDP for the "green number" service is reported.
Fig. 2 - Example of IN call flow: Green Number service
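As a purely illustrative companion to the call flow above, the following Python sketch shows the kind of service logic the SCP might run for a "green number" call: translate the dialed toll-free number into a real destination looked up in the SDP, and instruct the SSP to charge the called party. The table, the numbers and the field names are assumptions made for the example, not taken from any real IN deployment.

# Minimal sketch of SCP service logic for a "green number" (toll-free) call.
# The table below plays the role of the SDP; all numbers and field names are
# purely illustrative, not taken from any real IN deployment.

GREEN_NUMBERS = {
    # green number -> (actual destination, account to be charged)
    "800123456": ("0212345678", "ACME-CUSTOMER-CARE"),
}

def scp_handle_green_number(dialed_number, calling_number):
    """Invoked by the SSP (via INAP-like signaling) when it detects an IN call."""
    entry = GREEN_NUMBERS.get(dialed_number)
    if entry is None:
        # Unknown green number: tell the SSP to release the call.
        return {"action": "release", "cause": "unallocated_number"}
    destination, account = entry
    # Reply to the SSP: route the call to the real destination and charge
    # the called party's account instead of the caller.
    return {
        "action": "connect",
        "destination": destination,
        "calling_party": calling_number,
        "charging": {"charged_party": "called", "account": account},
    }

print(scp_handle_green_number("800123456", "0698765432"))

The point of the sketch is the architectural one made above: adding or changing a service of this kind means changing data and logic at the SCP/SDP level only, not the software of every switch.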
4 Evolution of the computers towards TLC
Meanwhile also the computer world started evolving towards Telecommunications: the
computers, initially acting as stand-alone machines, started showing more and more the
need to open towards the external world and to communicate with other equipment
through the telecommunication networks (e.g. to allow remote terminals to access a host
or other resources, or a given terminal equipment to access different hosts, to allow for
resources concentration, such as printers, in centralized pools to be shared by all the
users of the same organization, to let users access centralized Data Bases, to perform file
transfer between computers, etc.). As Personal Computers spread, of course, all of that
underwent a strong, further thrust, given the enormous diffusion of services requiring
access to the communication functions (one example for all: the electronic mail service).
The consequence was a big effort that developed in two directions, initially antithetical:
1) The integration in the telephone switches of features providing communication
capabilities also to IT equipment, besides the voice ones;
2) The birth of the data networks: LAN, IBM SNA, geographical packet networks (e.g.
Itapac), etc.
4.1 Integration of IT communication in the telephone switches
This was realized through the adoption of devices, the "Terminal Adaptors" (TAs), that
interfaced the data equipment on one side and the telephone switch on the other. TAs
managed the signaling protocols with the switch in the same way as the traditional
telephone sets in order to establish the communication link between the data devices at
the two endpoints. After the link was set up, they provided for "rate adaption" between the
data flows at the IT device interfaces and the bearer channel in order to allow for reliable
data transfer between the two endpoints.
Protocol converters were also available as pools of resources to cope with protocol differences between the communicating devices. Access to the external networks (e.g. the public packet ones) was realized through the development of centralized interfaces, e.g.:

•  modem pools to communicate through the external telephone network;
•  PAD (Packet Assembler Disassembler) to access external packet networks;
•  SNA emulators to access the IBM SNA networks;
•  ...
The ISDN standards were defined with the goal of providing voice and data services
through one network interface enabling both voice and data equipment to access the
communication services in the same way. ISDN was an example of the push towards the
integration among different types of service within a unique switching device, even though
the accessed services were still quite distinct: on a per call basis either a voice or a data
communication service was invoked.
Above all it was the PABXs, more than the public exchanges, that gave the strongest thrust towards this convergence and the unification of the procedures to access the services; the reason is that in the private context users were much more demanding and the request for TLC services was much bigger: PABX services had to support the working activities of the companies, and advanced services such as, for example, videoconferencing were, at the beginning, typically requested only in the private market.
Regarding the employment of microprocessor technology in the design of the switching equipment, as mentioned earlier, it may be worth mentioning how the realization of the interfaces with the IT devices represented a particular case of software development. Real time management requirements were very strong and dedicated operating systems were used, performing a limited set of features but optimized for concurrent process management. Software was typically written in low level languages to optimize the execution time and the memory use: for example, in the equipment developed in Telettra a structured macroassembler was adopted, i.e. an assembler language enriched with the possibility to define macros and, especially, with primitives that allowed for writing software in a structured way and that were converted into assembler instructions by the compiler.
Software structures oriented to the management of finite state machines (e.g. SDL) were frequently adopted for protocol management: protocols can typically be described as finite state machines where in each state only a defined set of events is allowed (e.g. reception of a correct frame or of an errored one, reception of a flow control message, a time-out expiry, etc.) and the action triggered by an event depends on the state (e.g. acceptance or rejection of a received frame depending on whether the receiving end was in an active flow control status or in a free one).
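To make the idea concrete, here is a toy event/state table in Python, in the spirit of the SDL-style descriptions just mentioned. The states, events and actions of this simplified data link receiver are invented for illustration only; a real protocol machine would of course be far richer.

# Toy finite state machine for a simplified data link receiver. States,
# events and actions are invented for illustration only.

def accept_frame(ctx):       ctx["accepted"] += 1
def discard_frame(ctx):      ctx["discarded"] += 1
def request_retransmit(ctx): ctx["retransmits"] += 1
def resume_traffic(ctx):     pass
def suspend_traffic(ctx):    pass

# (state, event) -> (action, next_state)
TRANSITIONS = {
    ("FREE",            "FRAME_OK"):      (accept_frame,       "FREE"),
    ("FREE",            "FRAME_ERRORED"): (request_retransmit, "FREE"),
    ("FREE",            "FLOW_CTRL_ON"):  (suspend_traffic,    "FLOW_CONTROLLED"),
    ("FLOW_CONTROLLED", "FRAME_OK"):      (discard_frame,      "FLOW_CONTROLLED"),
    ("FLOW_CONTROLLED", "FLOW_CTRL_OFF"): (resume_traffic,     "FREE"),
    ("FLOW_CONTROLLED", "TIMEOUT"):       (resume_traffic,     "FREE"),
}

def handle_event(state, event, ctx):
    """Apply one event: the action depends on both the event and the current state."""
    action, next_state = TRANSITIONS.get((state, event), (discard_frame, state))
    action(ctx)
    return next_state

ctx = {"accepted": 0, "discarded": 0, "retransmits": 0}
state = "FREE"
for ev in ["FRAME_OK", "FLOW_CTRL_ON", "FRAME_OK", "FLOW_CTRL_OFF", "FRAME_ERRORED"]:
    state = handle_event(state, ev, ctx)
print(state, ctx)   # FREE {'accepted': 1, 'discarded': 1, 'retransmits': 1}

The table-driven structure is what made this style attractive in the telecommunication equipment of those years: adding a state or an event means adding table entries, not restructuring the control flow.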
The wide spread of the level 2 and 3 communication protocols (e.g. LAPB, SDLC for SNA, LAPD for ISDN, the X.25 Packet level, DMI modes 2 and 3) that were to be managed by microprocessor boards induced the integrated circuit manufacturers to produce chips dedicated to autonomously handling the low level functions of the protocols. In the resulting software architecture the main CPU could therefore be relieved of those low level functions (those presenting the most pressing requirements in terms of execution time, such as bit stuffing/de-stuffing, frame start/end detection, length detection, redundancy code check, etc.), attending only to such functions as processing the protocol formalisms (e.g. sequence numbering, flow control management, retransmission requests, timeout processing, etc.), managing the buffer queues, etc.
DMA was often used as well for buffer management in conjunction with the transmitting/receiving devices. In this case, for example in reception, the CPU only had to provide the initial address of the receiving buffer, and the DMA controller read every byte from the protocol handling device and wrote it into the reception buffer (and vice versa in transmission). But there were also very powerful devices, based on microcomputers much more powerful than the main processor of the equipment, that were able to autonomously handle the complete formalism of the level 2 (data link) protocol, letting the main CPU cope only with the higher layer protocols.
To conclude this short overview on the data communication features provided by the
telephone switching networks, in particular by the PABXs, it should be noted that such
systems were all circuit switching based and, consequently, also data communication was
accomplished through the circuit switching technique.
However, since packet switching was in the meantime gaining more and more success in the data communication field (as mentioned in the next paragraph), some implementations were realized to allow data equipment to communicate in this way even in the PABX world. In particular, within Telettra's TAU-SDN system a "Frame switching / Frame Relaying" technique was adopted by means of a "Frame Handler" (FH), a centralized resource to which the IT devices requesting to communicate in this mode gained access through the TAU-SDN circuit switched internal network; this allowed virtual circuits to be set up using the same out-of-band ISDN signaling protocols with the Telephone Processors as for all the other call types.
To access the Frame Handler the Terminal Adaptors had to use DMI mode 3, instead of mode 2, as internal protocol.¹ Actually the TAU-SDN TAs were able to switch from one mode to the other depending on the required type of connection.

¹ DMI mode 2 was a "rate adaption" protocol used in circuit switched connections: it was adopted by the Terminal Adaptors as a means to transfer, through the 64 Kbps channels of the internal switching matrix of the PABX, the data exchanged through the interface (e.g. RS-232) with their own IT device. Essentially it consisted in inserting the received data into HDLC-like frames.
DMI mode 3 was similar to mode 2, but its frame structure was the same as the LAPD one, and had been defined for frame switching and relaying: the address field of the frames (not significant in mode 2) was used to identify the "Logical Links"; many different Logical Links could be established simultaneously within the same equipment.

4.2 The birth of the data networks
As computers opened towards communication, a big acceleration occurred in the definition process of communication standards in the computer world, both in the local premises (the Local Area Networks, with the definition of standards like CSMA/CD, Token Bus, Token Ring, the LLC protocol, etc.) and in the geographical packet switched networks (in both the connection oriented, or virtual circuit, and the connectionless, or datagram, versions), with packet switching prevailing over circuit switching and with the success of X.25. Later, the definition of the seven layer ISO-OSI model represented the apex of this process.
From the implementation point of view, the ISO-OSI model represents a significant example of the highly predominant role gained by software in the development of the communication features, and of how software came to absorb a significant part of the development costs of entire communication projects.
Naturally, the characteristics of the communication software in the nodes of the network and in the endpoint computers were deeply different, and required different skills: in general in the network equipment, where only the three lowest layers of the model were used, the needs for processing speed and real time process management were predominant, while in the network endpoints, hosting the whole stack, the prevailing need was for the communication packages to be integrated in the software environment of the host computers, based on higher level operating systems and languages. The realization of the whole OSI stack required highly specialist skills that only a few IT companies possessed, so its development was normally left to specialized software companies which, after agreeing with the customers on all the details in terms of protocol configurations, communicating applications, operating system configuration, computer environment, etc., provided for the development and delivery of the protocol stack, including its integration with the host environment and the complete execution of the tests, and even periodical software updates (through maintenance contracts) as the international protocol specifications evolved.
In the meanwhile, the continuous technological development allowed for more and more powerful, fast and cheap network nodes, and the consequent evolution of the networks and protocols led to the definition of more efficient standards (Frame Switching, Frame Relaying, ATM, ...), the final replacement of X.25 with the more efficient connectionless Internet Protocol (IP) at the Network Layer and, in the endpoints, the abandonment of the OSI stack in favor of the TCP/UDP Transport protocols with the Application Layer directly above them.
The birth of the Internet, which originated from Arpanet and was conceived as a packet network initially devoted only to data traffic in a restricted environment, is an outcome of this process.
5 Convergence between voice and data networks
Also in the data network field, similarly to what happened for the voice ones, a strong need arose to achieve the convergence between data and voice services, including also, in a second step, the video ones. The birth of the VoIP protocols is consistent with that. Initially VoIP was conceived just to transport voice packets (packetized voice as one of the many types of data streams) through the data networks, but still without a real provision of the usual telephone services (e.g. call forwarding, call waiting, etc.); later, with the introduction of the "NGN" (New Generation Network) standards, real ToIP (Telephony over IP) services would be provided over the IP networks.
Two more factors triggered a further big thrust towards a more complete integration between voice and data:
1) on one side, the quick technological development of the data network nodes, with the consequent improvements in terms of speed, reliability and effectiveness, which brought a considerable cost reduction of communication and a significant service improvement, and of the transmission technology, with the consequent increase of the available bandwidth: factors that made possible services that could not be provided before. In this context, decisive importance was assumed by the definitive predominance, for all the information types (voice, data, images, video), of packet switching over circuit switching, and of the IP protocol, which proved to be the most effective technique for the actual convergence of the different types of multimedia services. Many Telecom operators around the world have adopted IP networks to provide multimedia services to their customers, including the traditional telephone services (ToIP). Just as an example, the whole telephone transit network and the international network of Telecom Italia are entirely based on IP.
2) on the other side, the ever bigger demand for new services from the users. It was a self-sustaining phenomenon: the users, faced with the great opportunities provided by the development of technology and computer science, pushed to obtain more and better services, integrated with the existing ones, thus creating the market opportunities that urged the providers to act to satisfy the new demands and conquer new market areas; the provision of new services, in turn, triggered the development of better technologies, and so on.
Summarizing, a convergence occurred between the information technology and the telecommunications worlds: on one side the opening of the computers to communication with other systems, on the other side the adoption of the techniques proper to the information technology world for the realization of the modern telecommunication systems. Such convergence was mainly sustained by the technological progress that led to the increase of the processing and storage capacity of the systems, together with their miniaturization and the fall of their costs, with the consequent spread of more and more powerful user devices in homes and offices, and the possibility to access more and more sophisticated services.
5.1 The "New Generation Network" architecture
Perhaps the most important innovation in the direction of the convergence among different media is the "New Generation Network" (NGN) architecture. The capability of the IP network to transport and integrate all types of information (voice, data, video) to provide multimedia services, together with the technological evolution and the availability of a much wider bandwidth than that strictly needed for the traditional telephone calls, led to the loss of the traditional distinction between voice and data networks and to the availability of one type of network (packet switched, IP based) able to transport all types of information.
The transport and switching networks became more and more service independent, and
therefore the provision of the actual services began moving to a different layer that used
the underlying network just as a bandwidth and QoS provider. In this way multimedia
services incorporating voice, data, images and video streams began to spread out,
integrated with other users' data such as, for example, Messaging or Presence
information.
The current trend is a unique IP based switching and transport network infrastructure that makes available to the users, on a per service basis, only the bandwidth and the QoS, while the services, the added value, are provided by software applications running on servers connected to the network: the Application Servers (ASs). In such a context, the voice call is just one among many other available services.
In an NGN architecture the softswitch acts as the network controller and regulates the users' access to the network, but doesn't provide any service on its own: it receives all the service requests from the invoking users (through SIP "INVITE" messages) and, after checking the request type, selects the appropriate Application Server to which the user's request must be delivered. It is the Application Server, then, that executes the service logic, including, if needed, the establishment of all the required sessions among the users' devices by sending the relative commands to the softswitch which, in turn, will establish the required sessions by exchanging the appropriate SIP messages with the involved users.
Fig. 3 - NGN architecture
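The dispatching role of the softswitch described above can be sketched in a few lines of Python. The sketch only illustrates the separation of concerns (the softswitch chooses the Application Server, it does not run the service); the classification rule, the AS names and the SIP URIs are assumptions made for the example, not part of any real product.

# Minimal sketch of the dispatching role of a softswitch in an NGN network:
# it does not implement the service itself, it only chooses which Application
# Server the SIP INVITE must be delivered to. All names and addresses are hypothetical.

AS_CATALOG = {
    "conference": "sip:conf-as.example.net",
    "ip-centrex": "sip:centrex-as.example.net",
    "voicemail":  "sip:um-as.example.net",
    "telephony":  "sip:basic-call-as.example.net",   # plain voice call handling
}

def classify_request(invite):
    """Very rough request classification based on the Request-URI (illustrative only)."""
    uri = invite["request_uri"]
    if uri.startswith("sip:conf"):
        return "conference"
    if invite.get("caller_profile") == "ip-centrex":
        return "ip-centrex"
    return "telephony"

def route_invite(invite):
    service = classify_request(invite)
    target_as = AS_CATALOG[service]
    # The softswitch forwards the INVITE; the AS will run the service logic
    # and ask the softswitch to set up the needed sessions.
    return {"forward_to": target_as, "service": service}

invite = {"request_uri": "sip:conf1234@example.net", "from": "sip:alice@example.net"}
print(route_invite(invite))   # {'forward_to': 'sip:conf-as.example.net', 'service': 'conference'}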
The following are just examples of services that, in the NGN architecture, are provided by
Application Servers:
•  Telephone calls
•  Multiple Ringing
•  MRBT (Multimedia Ring Back Tone)
•  Unified Messaging
•  Voice / video mailing
•  Multimedia conferencing
•  Presence oriented services (e.g. conferencing)
•  Telepresence
•  IP-Centrex
•  IVR applications
•  Fixed-Mobile Convergence (VCC: Voice Call Continuity)
•  Content sharing
•  Virtual switchboard
•  Multiparty interactive games
•  Domotics
•  IPTV
•  Pre / Postpaid calling cards
•  Mass Calling
•  Televoting
•  ...
A typical example of a multimedia service commonly used in the business world, where different media are integrated to provide the global service, is the scheduling and realization (based on Presence information) of multimedia conferences in which the attendants, each remaining in his/her own office or at home, besides seeing and listening to the other attendants, can simultaneously share texts, slides, video clips or drawings made in real time, submit them to the discussion, and give the other attendants the possibility to modify them, include them in other documents, etc.
The figure below shows a Conference server as an example of AS and of its structure.
Fig. 4 - Conference AS architecture
If needed, different Application Servers can be simultaneously involved in the same service. An example is an IP Centrex user who asks for a conference to be established with other users: in this case the first AS to be involved is the IP Centrex one (since the user is characterized in the network Data Base as an IP Centrex user, all his/her calls will always be managed by it); if the IP Centrex AS doesn't have conferencing capabilities of its own, a further AS must be involved, this time at the request of the IP Centrex AS itself which, after examining the user's service request, asks the softswitch to establish sessions among the involved users and the conference AS.
Summarizing, NGN foresees a clear separation between:

•  the Switching and Transport Layer, which is service independent and grants access to any type of user equipment (fixed or cellular phone, smartphone, PC, tablet, etc.), if needed through different types of gateway. It provides the bandwidth and QoS required for the realization of multimedia services according to the requests of the Service Layer;

•  the Service Layer, whose task is to realize all the services using the capability of the underlying layer to provide the required bandwidth and QoS; it consists of a number of software applications running on commercial servers, normally in high availability configuration, with standard Operating Systems like Unix, Linux, etc.
It is a highly flexible network architecture, characterized by the ease of introduction of new services (since any new service can be introduced just by adding one or more new servers, or new software applications on the existing ones, and by recording the related information in the network Data Base), by the possibility to access any service regardless of the type of user device, and by the capability to provide multimedia data flows, allocating the required bandwidth on a per service basis and granting the requested level of QoS.
Of course this evolution, which brought the rigid distinction between the Access/Transport and Control layers on one side and the Service layer on the other, didn't affect only the architectural aspects of the networks, but also had a big impact on the playing actors, in a scenario where the traditional TLC manufacturers remained mainly focused on the first two layers, while service development became more and more a prerogative of the software companies, with the consequent shift of importance and business from the former sector to the latter. Furthermore, the need for more and more sophisticated services has determined a tendency towards a high degree of specialization, so that each service development company tends to specialize in a particular type of service: for example, some companies specialized in developing text, voice and video messaging application servers, others in conference servers, others in centrex applications, etc.
Also, this architectural model entailed complete vendor independence at the service layer: a quite different situation from the traditional network architectures, where any new service had to be requested from the same vendors that had manufactured the network elements. The modularity of the service layer, where the different services are performed by different application servers independent from each other, enables the development of new services by any software company, regardless of the vendors already present in the network.
5.2 The Application Layer
Each Application Server is realized by software applications developed by specialized
companies, running on state-of-the-art computers with standard Operating Systems.
Compared to the traditional network architectures, the NGN, as well as the IMS one (as
summarized in the following paragraph), makes the development and the deployment of
new services much easier and quicker; of course, interoperability tests between any new
application server and the softswitch must be carried out before its deployment in the
network.
Among the characteristics normally representing the main requirements for the application servers, the following can be mentioned:

•  they should be realized by means of development techniques allowing for quick service development and easy modification;

•  they should provide quick service customization, in order to allow network operators to easily fine tune the general characteristics of the services to better meet the customers' requirements; XML based languages (VoiceXML, CCXML, etc.) are commonly adopted for this purpose;

•  they should provide the customers with control over their preference settings, by means of tools like WEB interfaces, IVR, CPL (the XML based Call Processing Language), etc.; a small sketch of this kind of user controlled call handling logic is given after this list.
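The following Python sketch illustrates, in a purely hypothetical way, the kind of preference a subscriber could configure through a WEB interface or express in CPL: "outside business hours, forward calls to voicemail, except for callers in a VIP list". The rule, the URIs and the time windows are invented for the example and do not reproduce the real CPL syntax or any real AS behavior.

# Illustrative sketch of a user controlled call handling preference, of the kind
# that a subscriber could configure through a WEB interface or express in CPL.
# The rule and all names/URIs are invented for the example.

from datetime import datetime

USER_PREFS = {
    "business_hours": (9, 18),                 # 09:00 - 18:00
    "vip_callers": {"sip:boss@example.net"},
    "voicemail_uri": "sip:vm@example.net",
}

def handle_incoming_call(caller_uri, now=None):
    now = now or datetime.now()
    start, end = USER_PREFS["business_hours"]
    in_hours = start <= now.hour < end
    if in_hours or caller_uri in USER_PREFS["vip_callers"]:
        return {"action": "ring_user"}
    return {"action": "forward", "target": USER_PREFS["voicemail_uri"]}

print(handle_incoming_call("sip:colleague@example.net",
                           datetime(2024, 1, 10, 21, 0)))   # forwarded to voicemail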
Hardware typically consists of a number of commercial servers running in different configurations, depending on the system architecture. Normally the HW platform is modular, so that system upgrades can easily be accomplished by adding new elements (e.g. additional servers, processors, RAM, etc.) or by replacing them with higher performance ones.
For infrastructural elements like Data Base managers, Object Oriented platforms, communication protocol stack platforms, etc., third party SW packages from specialized companies are often included.
The systems are dimensioned on the basis of the size of the actual networks where they are to be deployed, through a delicate engineering process that, taking into account a number of parameters such as the number of users potentially accessing the services, the estimated access ratio, etc., defines items like the server class, the number of servers, the processor type and number, the RAM size, the HD space, the number of communication links, as well as the number and type of third party software licenses.
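Just to give an idea of the kind of reasoning involved, here is a back-of-the-envelope dimensioning calculation in Python. Every figure in it (number of users, access ratio, per-server capacity, headroom) is an assumption invented for illustration; real dimensioning uses far more parameters and measured figures.

# Back-of-the-envelope sketch of a dimensioning calculation. All figures are
# invented purely for illustration.

import math

potential_users        = 10_000_000   # users that may access the service
busy_hour_access_ratio = 0.05         # fraction of users invoking it in the busy hour
requests_per_user      = 2.0          # average invocations per active user

busy_hour_requests = potential_users * busy_hour_access_ratio * requests_per_user
requests_per_second = busy_hour_requests / 3600

server_capacity_rps = 150             # assumed sustainable requests/s per server
headroom            = 0.7             # keep servers at 70% of nominal capacity

servers_needed = math.ceil(requests_per_second / (server_capacity_rps * headroom))
print(f"{requests_per_second:.1f} req/s in the busy hour -> {servers_needed} servers"
      f" (plus redundancy, e.g. N + 1 or 2 X N)")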
The HW platform is always made redundant, in a high availability configuration, in order to guarantee service continuity even in the presence of faults.
Two configurations are the most used (mainly depending on the number of servers the system is made of):

•  in a fully duplicated configuration (2 X N) each server is duplicated; the two servers of each couple work in load sharing, but each server is dimensioned to handle the entire network load by itself; should one server fail, the remaining one takes over all the functionalities.
2 X N redundancy is mostly used when the system is made of different servers, specialized to run different applications, which can therefore be dimensioned in different ways or even belong to different classes.
2 X N redundancy also allows for upgrades or release changes to be performed without any service interruption; normally the procedure consists in:
– disabling one server of each couple that must be upgraded;
– performing the required changes (e.g. release change) on the disabled server(s) while the other one(s) are still running;
– switching the roles between the servers of each couple, so that the upgraded one is active and the old release is inactive;
– should the new software present malfunctions, operation is moved back to the server(s) running the old release;
– after checking the good behavior of the new software, it is loaded also on the remaining server(s) and the complete configuration is then restored.
A sketch of this switchover procedure is given after this list.

•  the N + 1 configuration is mostly used when the system consists of N identical servers, all sharing the same applications, whose number only depends on load considerations; a further server is added to the configuration in order to grant the simultaneous presence of N active servers even in case one fails.

Communication links and network interface devices are always made redundant as well.
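The sketch below expresses the 2 X N switchover steps listed above as Python code. The Server class and its methods are placeholders invented for the example; in a real system these steps are carried out by the installation and management tools, not by application code.

# Sketch of the software upgrade procedure in a 2 X N configuration, following
# the steps listed above. The Server class and its methods are placeholders.

class Server:
    def __init__(self, name, release):
        self.name, self.release, self.active = name, release, True

    def disable(self):            self.active = False
    def enable(self):             self.active = True
    def install(self, release):   self.release = release

def upgrade_pair(active_srv, standby_srv, new_release, healthy):
    """Upgrade one 2 X N couple without service interruption."""
    standby_srv.disable()                      # 1) take one server out of service
    standby_srv.install(new_release)           # 2) upgrade it while the mate carries the load
    standby_srv.enable()
    active_srv.disable()                       # 3) switch roles: the new release now serves traffic
    if not healthy(standby_srv):               # 4) malfunction? roll back to the old release
        standby_srv.disable()
        active_srv.enable()
        return "rolled back"
    active_srv.install(new_release)            # 5) new release is fine: align the mate too
    active_srv.enable()                        #    ... and restore the full 2 X N configuration
    return "upgraded"

a, b = Server("AS-1a", "R1"), Server("AS-1b", "R1")
print(upgrade_pair(a, b, "R2", healthy=lambda s: True), a.release, b.release)  # upgraded R2 R2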
5.3 The IMS architecture
IMS (IP Multimedia Subsystem) is the 3GPP architectural model for the NGN networks and represents the current state of the art of their standardization process. It is a set of specifications for a unified service architecture and defines a complete framework whose goal is to enable:
•  convergence of voice, video and data communications to provide multimedia services through one service independent network based on an IP infrastructure, using SIP (Session Initiation Protocol) as the session control protocol;

•  interoperability with any network (new generation or legacy) in order to enable the delivery of multimedia services to any user regardless of the actual network he or she is connected to. In other words, IMS provides a unified service architecture for all the networks;

•  independence from the type of user device (fixed or cellular phone, smartphone, PC, tablet, etc.) and from the access technology (DSL, Ethernet, GPRS, WCDMA, etc.). Besides granting direct access to SIP terminals, it allows legacy terminals connected to legacy networks to still access IMS services through standardized gateway functions.
The IMS standard foresees the splitting of the network into a number of functional blocks, and includes the detailed definition of the functional behavior of each block and of its relationships, cooperation and information flows with the other blocks, by means of which the multimedia services are provided.
As usual, the IMS architectural model is to be considered just a conceptual one: nothing is said about its physical implementation, so that even the NGN architecture we have seen before, with the softswitch as access controller and AS trigger, can be adapted to be compliant with this model. In other words, some existing equipment can be seen as performing all, or part, of the jobs of more than one block, or, conversely, each conceptual block can be split and its features distributed among different physical equipment, and so on.
As far as the functional blocks are concerned, only the main ones are shown in fig. 5, and a non exhaustive list of their main features is given at the end of this paragraph; naturally, a detailed description of the IMS architecture and of the whole set of its functional blocks is well beyond the scope of this brief overview, which is focused on the importance gained by software in the TLC world. IMS is referred to, here, as further evidence of the definitively consolidated evolution of the TLC networks based on their splitting into different layers:

•  the Access and Transport Layer, based on the IP network, providing interfaces with the other networks, including the legacy ones, through the relative gateway functions; SIP is used as the "signaling" protocol to carry service requests and for call routing;

•  the Control Layer, providing for user access control, call routing, bandwidth allocation and QoS target matching; it contains the main users' Data Base (HSS: Home Subscriber Server) and the CSCF functions (Call Session Control Functions), which are responsible for analyzing the users' service requests and determining which Application Server will be in charge of managing the requested service;

•  the Application Layer, consisting of a number of Application Servers, providing all the services to the users.
Fig. 5 - Simplified IMS architecture
In the same way as performed by the softswitch in the NGN architecture, in the IMS one it is up to the CSCF (Call Session Control Function) to trigger the Application Layer. Besides user authentication and call routing, the CSCF functional block determines which application, within the Application Layer, must be invoked, depending on the user's service request and the user's data stored in the network Data Base (HSS: Home Subscriber Server).
Actually, from a theoretical point of view, the Application Layer doesn't properly belong to the IMS itself, but it interacts with the control layer by means of the SIP protocol and is triggered by the CSCF function when a service must be invoked, on the basis of the user's request and of the "Initial Filter Criteria" (a sort of service specific parameters associated with the user's profile).
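The idea of the Initial Filter Criteria can be illustrated with a much simplified Python sketch: the S-CSCF scans the criteria of the user's profile in priority order and forwards the SIP request to the Application Server of the first matching one. The criteria and addresses below are invented and far simpler than the real 3GPP trigger point definitions.

# Much simplified sketch of the use of Initial Filter Criteria (iFC).
# The criteria below are invented for illustration only.

USER_IFC = [  # ordered by priority (lowest value = highest priority)
    {"priority": 1, "method": "INVITE",  "uri_prefix": "sip:conf", "as": "sip:conf-as.example.net"},
    {"priority": 2, "method": "MESSAGE", "uri_prefix": "",         "as": "sip:im-as.example.net"},
    {"priority": 3, "method": "INVITE",  "uri_prefix": "",         "as": "sip:telephony-as.example.net"},
]

def select_application_server(sip_method, request_uri, ifc_list):
    for ifc in sorted(ifc_list, key=lambda c: c["priority"]):
        if sip_method == ifc["method"] and request_uri.startswith(ifc["uri_prefix"]):
            return ifc["as"]
    return None   # no criterion matches: handle the request without an AS

print(select_application_server("INVITE", "sip:conf42@ims.example.net", USER_IFC))
# -> sip:conf-as.example.net
print(select_application_server("INVITE", "sip:bob@ims.example.net", USER_IFC))
# -> sip:telephony-as.example.net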
In the figure above three types of servers are reported; as a matter of fact, besides the
new types of Application Servers (SIP AS), developed with the goal of being directly
integrated in the IMS network and therefore provided with the SIP interface, also legacy
systems can be integrated through the employment of interface adaptors:
•  The OSA-AS are Application Servers compatible with the OSA (Open Services Architecture) standard. They are accessed by the CSCF through the OSA-SCS function (OSA Service Capability Server), which provides a standardized and secured access to IMS via the SIP protocol and uses an OSA Application Program Interface (API) for its communication with the OSA servers.

•  Services provided according to the CAMEL (Customized Applications for Mobile networks Enhanced Logic) standards can be provided to IMS users by means of the "IM-SSF" (IP Multimedia Service Switching Function).
A short summary of the main functions performed by the functional blocks shown in the figure above is reported below:

•  P-CSCF (Proxy Call Session Control Function): it is the first contact point of the users within IMS and is crossed by every SIP message to or from the users. It is assigned to an IMS terminal during registration; its main features are:
– user authentication
– forwarding to the I-CSCF of the registration requests received from the user
– forwarding of the SIP messages to the S-CSCF associated with the user during registration
– forwarding of requests and answers to the user

•  I-CSCF (Interrogating Call Session Control Function): it is located at the administrative domain edge:
– its IP address is published in the DNS of the domain, so that remote servers can locate it and use it as a forwarding point (e.g. for registrations) for SIP packets directed to this domain;
– it retrieves the user location from the HSS and then routes the SIP request to the assigned S-CSCF.

•  S-CSCF (Serving Call Session Control Function):
– interfaces to the HSS to download and upload user profiles
– makes the correspondence between the user location (e.g. the IP address of the terminal) and the SIP address
– selects the application server(s) to which the SIP messages are to be forwarded, depending on the invoked services
– provides routing services

•  HSS (Home Subscriber Server): it is the network Data Base of the users' profiles and the service data:
– contains the user profiles
– contains the subscription information used by the service layer
– provides data used to perform user authentication and authorization
– provides information about the physical location of the user

•  MRF (Media Resource Function): it supports bearer related services:
– supports conferencing services by mixing audio streams
– provides for text-to-speech conversion (TTS) and speech recognition
– performs transcoding (codec conversion) over multimedia data
– plays announcements (audio/video)
– it consists of two functional blocks:
o MRFC (Media Resource Function Controller): controls the MRFP (by means of an H.248 interface) upon indications from the S-CSCF
o MRFP (Media Resource Function Processor): realizes all the media-related functions.
The following functional blocks are used to manage the interworking with calls to traditional circuit switched networks (PSTN, GSM, etc.):

•  BGCF (Breakout Gateway Control Function): selects the network in which the PSTN breakout has to occur and forwards the signaling messages to it through a selected MGCF block, which will manage the interworking process with the TDM network.

•  SGW (Signaling Gateway): provides for signaling conversion between IMS and the TDM network at the transport layer.

•  MGCF (Media Gateway Controller Function): performs signaling protocol conversion between SIP and SS7/ISUP:
– performs call control protocol conversion between SIP and ISUP
– controls the MGW resources through an H.248 interface.

•  MGW (Media Gateway): performs RTP to PCM (and vice versa) media conversion between IMS and the TDM network. It also performs codec conversion when required.
6 Management systems
Within an overview of the importance of software in Telecommunications, a subject of crucial importance is that of the Management Systems, or OSS (Operation Support Systems). No Telecom Operator would ever introduce a new network element, equipment, service or application into the network without the guarantee, properly tested, that it can be fully managed. The Management Systems represent the only interface through which the operators can "see" the network and can monitor the behavior of both the individual equipment and the network as a whole, as well as the services provided to the customers and the relative Quality of Service.
Through the OSS the operators perform such activities as:

•  monitoring the way the network is behaving: the provided QoS, its traffic handling capabilities, its faults and the way they affect the services, etc. QoS can have many different meanings, depending on the type of service: it can refer to the packet loss ratio, the round trip delay or the bit error rate; as far as voice calls are concerned, it can refer to the call success ratio; service QoS can deal with the service provision delay, the service success vs. access ratio, etc. (a small sketch of how a couple of such indicators can be computed from raw counters is given after this list);

•  collecting statistical performance data to identify critical points or as support to network upgrade policies; for example, statistics can be produced to identify which traffic relationship (source-destination) has the highest packet loss or the lowest call success ratio, in the presence of which events, what the most recurring cause is, or what the time distribution of a given network problem during the day is. Statistics are much used by network managers to identify the most impacting problems of the network and to plan upgrades (e.g. addition of new links between particularly congested nodes, increase of the capacity of a node, etc.);

•  reacting to critical situations like traffic congestion, serious faults, configuration errors, etc.;

•  setting the configuration of the network nodes and checking the current one;

•  deploying and configuring the services;

•  performing customer management (e.g. statistics about the QoS provided to particular customers, such as the frequency and the severity levels of the faults affecting them. QoS is often a contractual issue between the customer and the TLC operator: it often consists of parameters to be monitored over a given period of time, specified in the contract, which can be of a great number of types: even the packet loss ratio could be a QoS parameter to be monitored for a customer, and it must then be accurately measured and reported);

•  collecting accounting information and providing billing data;

•  ...
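As announced in the first bullet, here is a small Python sketch of how two of the indicators mentioned above (packet loss ratio and call success ratio) could be computed from raw counters collected over a monitoring period. The counter values are invented; real systems aggregate such counters per node, per traffic relation and per time period.

# Sketch of two QoS indicators computed from raw counters. The counter values
# are invented purely for illustration.

counters = {
    "packets_sent": 1_250_000, "packets_lost": 375,
    "call_attempts": 48_200,   "calls_completed": 46_900,
}

packet_loss_ratio = counters["packets_lost"] / counters["packets_sent"]
call_success_ratio = counters["calls_completed"] / counters["call_attempts"]

print(f"packet loss ratio : {packet_loss_ratio:.4%}")   # 0.0300%
print(f"call success ratio: {call_success_ratio:.2%}")  # 97.30%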
The management systems are normally placed at four levels: the Network Element, the Network, the Service and the Business levels.

•  At the Network Element level the management operations are performed directly on the Network Elements as individual nodes. Operations performed at this level can be very detailed, and the operators must have a deep knowledge of the detailed structure of the node.
The main management areas of the Network Element level are the "FCAPS" ones, as specified by the ISO TMN model:
– Fault
– Configuration
– Accounting
– Performance
– Security
Each individual node is managed by a proprietary "Element Manager" that, for complex Network Elements (like the traditional telephone switches, the softswitches, etc.), consists of a computer connected to all the components of the node, with which it can exchange data through either proprietary or standard protocols. It is normally based on state-of-the-art computers, duplicated in high availability configuration, with standard operating systems and proprietary applications running on top. For simpler Network Elements (like, for example, a router of an IP network) the element management features are normally realized by a software application residing directly in the equipment itself, which the operator can access from a PC connected directly or through the communication network.
Normally the Element Management systems are equipped with a graphical user interface (GUI) based on standard graphical tools, from which a local operator can perform the complete set of management operations in a user friendly way (old element managers provided only complicated MML interfaces that could be used only by a restricted number of specialists).
The Element Managers are interfaced with the Network Management Systems through standard protocols. They collect alarms, traffic and performance measurements, configuration data and accounting information from all the components of the nodes, and assemble them into standard interfaces towards the higher level Management Systems (normally, especially for older equipment, different protocols can be used for different types of management information, e.g. SNMP for alarms, FTP for traffic measurements and accounting information, SQL-Net for the configuration data); they also receive commands from the Management Systems above (e.g. configuration commands, traffic and routing related commands, etc.) and forward them to the appropriate block of the network element for their execution.

•  At the Network Level the network is managed as a whole, and the actions performed on the individual nodes follow from the general network management strategy. Operators at the network level don't need a detailed knowledge of the internal structure of the network nodes, as required of local operators, but have a general view of the network and its performance.
The Network Management systems collect configuration data, alarms, measurements and accounting data from the Network Elements, through the respective Element Managers, and integrate them into a global view of the entire network. They act upon the network by sending configuration data and commands to the Network Elements through the interfaces with the Element Managers.
Network Management systems typically cover the same areas as the Element Managers, but from a global network perspective, for which purpose they must perform such activities over the information gathered from the Element Managers as:
– normalization of the data collected from NEs by different manufacturers;
– integration of the alarm indications coming from different NEs to identify the "root cause" alarms (a small sketch of this kind of normalization and root cause grouping is given further below);
– integration of fault indications and traffic measurements to identify possible impacts on the QoS provided to the customers (Service Assurance);
– aggregation of the performance measurements of different NEs to identify the traffic loss causes or to anticipate future potentially critical situations, e.g. during the next busy hour;
– integration of the configuration data of different NEs to identify possible configuration inconsistencies;
– network behavior simulation with new configuration parameters before their actual activation on the network nodes;
– ...
The main management features performed at the network level are:
– Fault Management
o Fault detection
o Fault localization
o Trouble Ticketing (workflow over the whole lifecycle of an alarm, from its raising up to its resolution)
– Configuration Management
o Users' configuration
o Network configuration
For both users and network:
o Configuration setting (Provisioning)
o Configuration data acquisition and storage (Inventory)
– Traffic and Performance Management
o Real time Traffic Management
 Traffic surveillance (near real time detection of critical situations)
 Traffic control (actions performed to minimize traffic losses and QoS reductions)
o Off-line Performance Management (mainly statistical reports)
– Accounting: collection of user related usage data (number and type of placed calls, traffic volume through the IP network, etc.)
o Accounting data are sent to the Billing systems at the Business Level to bill the customers
o They can also be used by the Performance Management systems as detailed data to complete the statistical reports or to support detailed analysis of network problems

•  The Service Level deals with all the activities related to service provisioning and monitoring, and covers areas such as:
 Service planning and development
 Service configuration
 Service problem management
 Service Quality management

•  The Business Level deals with commercial and customer management issues, such as:
 Sales
 Order handling
 Customers' problem handling
 Customer QoS management
 Invoicing
The distinction above is mainly a theoretical one, since some management features belonging to different levels can actually be implemented within the same system. Just as an example, customer QoS monitoring is a business level issue, but QoS parameters like the packet loss ratio or the round trip delay in an IP network are often evaluated by the same system that monitors the IP traffic at the network level; in the same way, how a fault in a network node affects the service provided to a given customer is often evaluated by the Network Fault Management system, etc. In other words, some of the Service and Business Management features are often realized as applications within the Network Management ones.
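As an illustration of two of the network level activities listed earlier (normalization of vendor specific alarms into a common format, and grouping of correlated alarms around a probable root cause), here is a drastically simplified Python sketch. The vendor formats, field names and the time-window correlation rule are all invented for the example; real systems use much richer models and topology information.

# Drastically simplified sketch of alarm normalization and root cause grouping.
# Vendor formats, field names and the correlation rule are invented.

NORMALIZERS = {
    "vendorA": lambda a: {"node": a["ne"],   "severity": a["sev"].lower(), "time": a["ts"],   "text": a["msg"]},
    "vendorB": lambda a: {"node": a["node"], "severity": a["severity"],    "time": a["time"], "text": a["description"]},
}

def normalize(vendor, raw_alarm):
    return NORMALIZERS[vendor](raw_alarm)

def group_by_root_cause(alarms, window=5):
    """Group alarms close in time; the most severe one is taken as the probable root cause."""
    rank = {"critical": 0, "major": 1, "minor": 2}
    groups, current = [], []
    for alarm in sorted(alarms, key=lambda a: a["time"]):
        if current and alarm["time"] - current[-1]["time"] > window:
            groups.append(current)
            current = []
        current.append(alarm)
    if current:
        groups.append(current)
    return [{"root_cause": min(g, key=lambda a: rank[a["severity"]]), "correlated": len(g) - 1}
            for g in groups]

alarms = [
    normalize("vendorA", {"ne": "MGW-01", "sev": "CRITICAL", "ts": 100, "msg": "E1 trunk down"}),
    normalize("vendorB", {"node": "SSW-03", "severity": "major", "time": 102, "description": "route unavailable"}),
    normalize("vendorB", {"node": "AS-07",  "severity": "minor", "time": 150, "description": "KPI threshold crossed"}),
]
for g in group_by_root_cause(alarms):
    print(g["root_cause"]["node"], g["root_cause"]["text"], "+", g["correlated"], "correlated alarm(s)")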
In general, such systems are realized by software applications running on commercial state-of-the-art servers, duplicated in high availability configuration and adopting standard Operating Systems. Often, but not necessarily, Object Oriented languages and platforms are adopted.
One peculiarity of the management systems is the importance of the Graphical User Interface (GUI), since it is the only means for the operators to monitor the behavior of the network. The GUI must provide, at a glance, a view of the network as complete as possible, allowing for quick and easy problem detection and providing user friendly assistance to the operators in making quick problem solving decisions.
Another typical characteristic is that they must provide the operators with a uniform view of the network, regardless of the possible presence of nodes from different manufacturers. Despite the high degree of standardization, the behavior of the nodes can vary substantially in some circumstances, and a harmonization layer between the interfaces with the network nodes and the actual management applications may be required; this can sometimes be very complicated to realize and can significantly raise the development costs.
An important phase of the design of a management system at the network level is also its dimensioning: the server class, the number of processors, the RAM size, the HD size, the number of communication links, the number of third party software licenses, etc. must be carefully evaluated depending on the size of the network, essentially in terms of number of managed nodes and traffic volumes. Great flexibility is required in order to face future changes in the network size just by means of server upgrades, possibly realized with no (or minimal) out-of-service time.
7 Software development process in the TLC manufacturer companies
At the end of this short overview of the role of software in Telecommunications, a general description of the software development process adopted by the TLC manufacturers can perhaps be of some interest. Of course it changes from company to company and can differ significantly depending on the type of project, the conditions, how the company is structured, etc. What follows is just a list of general steps, derived from my personal experience in the companies with which I have cooperated.

•  after deciding to take into consideration the development of a given feature, or a set of features, or even an entire new system (such decisions can be made after an explicit request by a customer, through a tender, or following strategic and marketing policies), the first phase is the definition of the high level specifications; if the features are to be developed for a specific customer, this phase normally requires long meetings with the customer to fine tune the high level specifications, with the goal of reaching a satisfactory compromise between satisfying the customer's needs as much as possible on one side and keeping the development costs and the development time within reasonable limits on the other;

•  a feasibility analysis follows, during which the general aspects of the activities are evaluated, such as the estimated development costs and the resource allocation (including the opportunity to involve internal or external resources), taking into account the technological means of the company, the know-how level required of the designers, the availability of the necessary internal resources, the delivery time requested by the customer or by the market, the opportunity to split the project into different software releases, etc.;

•  after the project has been approved by the company management, it is assigned to one or more development teams and the management team is designated: its composition varies from one company to another and, even within the same company, can change significantly after any reorganization; normally it includes a Program Manager, responsible for the global coordination of all the activities, and the functional managers of the various teams attending respectively to the definition of the detailed specifications, the software development, the lab tests, the global integration tests, the deployment and the tests with the customer (acceptance tests);

•  the detailed specification phase of all the features follows;
•  starting from the detailed specifications (i.e. what the features "must do" exactly), the design specifications are defined, including the system architecture, the hardware requirements, the software environment, the software architecture, its splitting into functional blocks and software modules, the way the different modules exchange information, etc.;

•  at this point the actual software development phase begins;

•  after the development, the individual software modules are tested in the lab by the developers;

•  after all the features have been developed and individually tested, the integration tests (i.e. the global test of the system) are performed by the integration team on the basis of a "test book" (list of tests) previously defined in cooperation with the system team;

•  the final product is delivered to the customer, who performs the "acceptance tests", based on "test books" normally, but not always, proposed to the customer by the manufacturer itself, apart from integrations and modifications that the customer may want to make. The acceptance tests are normally performed in two steps:
– in the first step they are performed on a test plant, normally, but not necessarily, belonging to the customer and situated at the customer premises;
– in the second step the product is tested in the actual environment, i.e. after its deployment in the network. This is a very critical phase, since any still undetected malfunction could affect, even seriously, the network service; therefore this phase starts only after reaching reasonable confidence that the product is sufficiently stable and doesn't present serious malfunctions anymore.
Since only the tests in the actual network are performed in the presence of real traffic, they represent the only phase in which the system is actually tested for its correct dimensioning and its capability to handle the real traffic volumes. If the software is to be installed on a plurality of systems (e.g. a new release of the routers of an IP network or of the switches of a traditional telephone network), the network tests are first performed on a very limited number of nodes, and the release is then gradually spread throughout the network only after making sure that the already deployed systems work correctly.
From then on, the behavior of the system is monitored through the OSS (of course its manageability by the management systems is always part of the tests of the system). After this phase, any modification, patch or new release follows roughly the same process as summarized above.