Lecture 1
ICT in Core Sectors of Development. ICT Standardization.
1. Definition of ICT. The subject of ICT and its purposes.
2. Main directions of ICT development.
3. Standardization in ICT.
The objectives of the lecture.
Give the following concepts:
 Information and Communication Technologies;
 The content of these technologies;
 Types of standardization in information and communication technologies.
1. ICT stands for "Information and Communication Technologies." ICT refers to
technologies that provide access to information through telecommunications. It is
similar to Information Technology (IT), but focuses primarily on communication
technologies. These include the Internet, wireless networks, cell phones, and other
communication media.
In the past few decades, information and communication technologies have
provided society with a vast array of new communication capabilities. For example,
people can communicate in real-time with others in different countries using
technologies such as instant messaging, voice over IP (VoIP), and video-conferencing.
Social networking websites like Facebook allow users from all over the world to remain
in contact and communicate on a regular basis.
Modern information and communication technologies have created a "global
village," in which people can communicate with others across the world as if they were
living next door. For this reason, ICT is often studied in the context of how modern
communication technologies affect society.
2. ICT has become an integral and accepted part of everyday life for many people.
ICT is increasing in importance in people’s lives and it is expected that this trend will
continue, to the extent that ICT literacy will become a functional requirement for
people’s work, social, and personal lives.
ICT includes the range of hardware and software devices and programmes such as
personal computers, assistive technology, scanners, digital cameras, multimedia
programmes, image editing software, database and spreadsheet programmes. It also
includes the communications equipment through which people seek and access
information including the Internet, email and video conferencing.
The use of ICT in appropriate contexts in education can add value in teaching and
learning, by enhancing the effectiveness of learning, or by adding a dimension to
learning that was not previously available. ICT may also be a significant motivational
factor in students’ learning, and can support students’ engagement with collaborative
learning.
3.Standards are a powerful tool to facilitate access to markets and open the doors to
growth and jobs in the EU. This is especially true in the information and communication
technology (ICT) sector where the continuous emergence of new services, applications
and products fuels the need for more interoperability between systems. The EU
promotes ICT standardization to make sure ICT markets remain open and consumers
have choice.
ICT standardization is the voluntary cooperation for the development of technical
specifications that outlines the agreed properties for a particular product, service, or
procedure.
ICT specifications are primarily used to maximise the ability for systems to work
together. In modern ICT the value of a device relies on its ability to communicate with
other devices. This is known as the ‘network effect’ and is important in almost all areas
of ICT. Specifications ensure that products made by different manufacturers are
interoperable, and that users have the chance to pick and mix between different
suppliers, products or services. This is essential to ensure that markets remain open,
allowing consumers to have the widest choice of products possible and giving
manufacturers the benefit of economies of scale.
The EU supports an effective and coherent standardization framework, which
ensures that standards are developed in a way that supports EU policies and
competitiveness in the global market.
Regulation 1025/2012 on European standardization sets the legal framework in
which the different actors in the standardization system can operate. These actors are the
European Commission, the European standardization organizations, industry, small and
medium-sized industries (SMEs) and societal stakeholders.
Article 13 of Regulation 1025/2012 allows the Commission to identify ICT
technical specifications to be eligible for referencing in public procurement. This allows
public authorities to make use of the full range of specifications when buying IT
hardware, software and services, allowing for more competition in the field and
reducing the risk of lock-in to proprietary systems.
The Commission financially supports the work of the three European
standardization organizations:
 ETSI – the European Telecommunications Standards Institute
 CEN – the European Committee for Standardization
 CENELEC – the European Committee for Electrotechnical Standardization
EU-funded research and innovation projects also make their results available to the
standardisation work of several standards-setting organisations.
Control questions:
1. What does ICT mean?
2. What types of IT do you know?
3. What does ICT include?
4. What is ETSI?
5. What is CENELEC?
Lecture 2
Introduction to computer systems. Architecture of computer systems.
1. Review of computer systems. Evolution of computer systems.
2. Architecture and components of computer systems.
3. The use of computer systems.
The objectives of the lecture.
Give the following concepts:
 Computing systems;
 The main components of a computing system;
 Primary storage, secondary storage, input/output devices.
1. A computer system is defined as the combination of hardware, software, user and
data. A computer is a programmable device that can automatically perform a sequence
of calculations or other operations on data without human aid. It can store, retrieve, and
process data according to internal instructions.
A computer may be analog, digital, or hybrid, although most today are digital.
Digital computers express variables as numbers, usually in the binary system. They are
used for general purposes, whereas analog computers are built for specific tasks,
typically scientific or technical. The term "computer" is usually synonymous with
digital computer, and computers for business are exclusively digital.
The core of any computer is its central processing unit (CPU), commonly called a
processor or a chip. The typical CPU consists of an arithmetic-logic unit to carry out
calculations; main memory to store data temporarily for processing; and a control unit
to control the transfer between memory, input and output sources, and the arithmetic-logic unit.
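The division of labour described above can be sketched in code. The following is an illustrative toy machine, not any real instruction set: a control unit steps through instructions (the fetch-decode-execute cycle), an accumulator plays the role of the arithmetic-logic unit's working register, and a dictionary stands in for main memory.

```python
# Toy CPU sketch: control unit (the loop), arithmetic-logic unit (the ADD
# branch), and main memory (the 'memory' dict). Opcodes are invented here
# purely for illustration.

def run(program, memory):
    """Execute a list of (opcode, operand) instructions against memory."""
    acc = 0   # accumulator register
    pc = 0    # program counter, maintained by the control unit
    while pc < len(program):
        opcode, operand = program[pc]      # fetch
        pc += 1
        if opcode == "LOAD":               # decode + execute
            acc = memory[operand]
        elif opcode == "ADD":              # the arithmetic-logic unit at work
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc
        elif opcode == "HALT":
            break
    return memory

mem = {0: 2, 1: 3, 2: 0}
run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)], mem)
print(mem[2])  # 2 + 3 = 5
```

Real CPUs differ enormously in detail, but every generation listed below still follows this same fetch-decode-execute pattern.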
In the five decades since 1940, the computer industry has experienced four
generations of development. Each computer generation is marked by a rapid change in
the implementation of its building blocks: from relays and vacuum tubes (1940s-1950s)
to discrete diodes and transistors (1950s-1960s), through small-scale and medium-scale
integrated (SSI/MSI) circuits (1960s-1970s) to large-scale and very large scale
integrated (LSI/VLSI) devices (1970s-1980s). Increases in device speed and reliability
and reductions in hardware cost and physical size have greatly enhanced computer
performance. However, better devices are not the sole factor contributing to high
performance. The division of computer system generations is determined by the device
technology, system architecture, processing mode, and languages used. We are currently
(1989) in the fourth generation; the fifth generation has not materialized yet, but
researchers are working on it.
2.Computer architecture deals with the logical and physical design of a computer
system. The Instruction Set Architecture (ISA) defines the set of machine-code
instructions that the computer's central processing unit can execute. The
microarchitecture describes the design features and circuitry of the central processing
unit itself. The system architecture (with which we are chiefly concerned in this section)
determines the main hardware components that make up the physical computer system
(including, of course, the central processing unit) and the way in which they are
interconnected. The main components required for a computer system are listed below.
 Central processing unit (CPU)
 Random access memory (RAM)
 Read-only memory (ROM)
 Input / output (I/O) ports
 The system bus
 A power supply unit (PSU)
In addition to these core components, in order to extend the functionality of the
system and to provide a computing environment with which a human operator can more
easily interact, additional components are required. These could include:
 Secondary storage devices (e.g. disk drives)
 Input devices (e.g. keyboard, mouse, scanner)
 Output devices (e.g. display adapter, monitor, printer)

A distinction is usually made between the internal components of the system
(those normally located inside the main enclosure or case) and the external components
(those that connect to the internal components via an external interface). Examples of
such external components, usually referred to as peripherals, include the keyboard,
video display unit (monitor) and mouse. Other peripherals can include printers,
scanners, external speakers, external disk drives and webcams, to name but a few. The
internal components usually (though not always) include one or more disk drives for
fixed or removable storage media (magnetic disk or tape, optical media etc.), although
the core computing function does not absolutely require them. The relationship between
the elements that make up the core of the system is illustrated below.
The core components in a personal computer
The core system components are mounted on a backplane, more commonly
referred to as a mainboard (or motherboard). The mainboard is a relatively large printed
circuit board that provides the electronic channels (buses) that carry data and control
signals between the various components, as well as the necessary interfaces (in the form
of slots or sockets) to allow the CPU, Memory cards and other components to be
plugged into the system. In most cases, the ROM chip is built in to the mainboard, and
the CPU and RAM must be compatible with the mainboard in terms of their physical
format and electronic configuration. Internal I/O ports are provided on the mainboard
for devices such as internal disk drives and optical drives.
Exploded view of personal computer system
External I/O ports are also provided on the mainboard to enable the system to be
connected to external peripheral devices such as the keyboard, mouse, video display
unit, and audio speakers. Both the video adaptor and audio card may be provided "on-board" (i.e. built in to the mainboard), or as separate plug-in circuit boards that are
mounted in an appropriate slot on the mainboard. The mainboard also provides much of
the control circuitry required by the various system components, allowing the CPU to
concentrate on its main role, which is to execute programs. We will be looking at the
individual system components in detail in later sections.
Some of the external I/O ports found on a typical IBM PC
3. Computers have become an essential part of modern human life. Since the
invention of computer they have evolved in terms of increased computing power and
decreased size. Owing to the widespread use of computers in every sphere, Life in
today’s world would be unimaginable without computers. They have made human lives
better and happier. There are many computer uses in different fields of work. Engineers,
architects, jewelers, and filmmakers all use computers to design things. Teachers,
writers, and most office workers use computers for research, word processing and
emailing. Small businesses can use computers as a point of sale and for general record
keeping.
Computers have its dominant use in the education field which can significantly
enhance performance in learning. Even distance learning is made productive and
effective through internet and video-based classes. Researchers have massive usage of
these computers in their work from the starting to till the end of their scholarly work.
Most of the medical information can now be digitized from the prescription to
reports. Computation in the field of medicine allows us to offer varied miraculous
therapies to the patients. ECG’s, radiotherapy wasn’t possible without computers.
We know well that computers are being used by the financial institutions like
banks for different purposes. The foremost important thing is to store information about
different account holders in a database to be available at any time. Keeping the records
of the cash flow, giving the information regarding your account.
Computers are now the major entertainers and the primary pass time machines. We
can use computers for playing games, watching movies, listening to music, drawing
pictures.
With internet on computers we can know the details of the buses or trains or the
flight available to our desired destination. The timings and even the updates on the
delay can also be known through these computers. We can book our tickets through
online. Staff of the transport system will keep a track of the passengers, trains or flight
details, departure and arrival timings by using computers.
Every single information shared can be recorded by using computer. Official deals
and the issues were made even through online. We use email system to exchange the
information. It has wide uses in marketing, stock exchanges and bank. Even the
departmental stores can’t run effectively without computer.
Electronic mail is the revolutionary service offered by the computes. Video
Conferencing is also another major advantage. Electronic shopping through online
shopping added favor to purchaser and merchants. Electronic banking is now at your
hand where every bank has online support for transaction of monetary issues. You can
easily transfer your money anywhere even from your home.
As per the title, computers aid in designing buildings, magazines, prints,
newspapers, books and many others. The construction layouts are designed beautifully
on system using different tools and software’s.
Control questions:
1. What components of a computer system do you know? Name them.
2. What is RAM?
3. What is the mainboard? What other name for the mainboard do you know?
4. What does the mainboard do?
Lecture 3
Computer Software. Operating systems. Desktop applications
1. The evolution of operating systems. Classification of operating systems.
Operating systems DOS, Windows, Unix, Linux, Mac OS.
2. Operating systems for mobile devices. The classification of computer
applications.
The objectives of the lecture.
Give the following concepts:
 Operating systems;
 History of operating systems;
 Classification of OS.
The evolution of operating systems is directly dependent on the development of
computer systems and how users use them. The timeline below offers a quick tour of
computing systems over the past fifty years.
Early Evolution:
 1945: ENIAC, Moore School of Engineering, University of Pennsylvania
 1949: EDSAC and EDVAC
 1949: BINAC – a successor to the ENIAC
 1951: UNIVAC by Remington
 1952: IBM 701
 1956: The interrupt
 1954-1957: FORTRAN was developed
Operating Systems by the late 1950s:
By the late 1950s, operating systems were well improved and supported the
following usages:
 Single-stream batch processing
 Common, standardized input/output routines for device access
 Program transition capabilities to reduce the overhead of starting a new job
 Error recovery to clean up after a job terminated abnormally
 Job control languages that allowed users to specify the job definition and
resource requirements
Operating Systems in the 1960s:
 1961: The dawn of minicomputers
 1962: Compatible Time-Sharing System (CTSS) from MIT
 1963: Burroughs Master Control Program (MCP) for the B5000 system
 1964: IBM System/360
 1960s: Disks become mainstream
 1966: Minicomputers get cheaper, more powerful, and really useful
 1967-1968: The mouse
 1964 and onward: Multics
 1969: The UNIX Time-Sharing System from Bell Telephone Laboratories
Accomplishments after 1970:
 1971: Intel announces the microprocessor
 1972: IBM comes out with VM: the Virtual Machine Operating System
 1973: UNIX 4th Edition is published
 1973: Ethernet
 1974: The Personal Computer Age begins
 1974: Gates and Allen wrote BASIC for the Altair
 1976: Apple II
 August 12, 1981: IBM introduces the IBM PC
 1983: Microsoft begins work on MS-Windows
 1984: Apple Macintosh comes out
 1990: Microsoft Windows 3.0 comes out
 1991: GNU/Linux
 1992: The first Windows virus comes out
 1993: Windows NT
 2007: iOS
 2008: Android OS
And the research and development work still goes on, with new operating systems
being developed and existing ones being improved to enhance the overall user
experience while making operating systems fast and efficient like they have never been
before.
Classification of operating systems.
Multiuser OS:
In a multiuser OS, more than one user can use the same system at the same time
through multiple I/O terminals or through the network. A multiuser OS uses
timesharing to support multiple users. For example: Windows, Linux, Mac, etc.
Multiprocessing OS:
A multiprocessing OS can support the execution of multiple processes at the same
time. It uses multiple CPUs. It is more expensive, but the processing speed is faster,
and its execution is more complex. Operating systems like Unix, 64-bit editions of
Windows, and server editions of Windows are multiprocessing.
Multiprogramming OS:
In a multiprogramming OS, more than one program can be used at the same time.
It may or may not be multiprocessing. In a single-CPU system, multiple programs are
executed one after another by dividing CPU time into small time slices. For example:
Windows, Mac, Linux, etc.
Multitasking OS:
In a multitasking system, more than one task can be performed at the same time,
but they are executed one after another through a single CPU by time sharing. For
example: Windows, Linux, Mac, Unix, etc. Multitasking OS are of two types:
a) pre-emptive multitasking and b) co-operative multitasking.
In pre-emptive multitasking, the OS allots a CPU time slice to each program.
After each time slice, the CPU executes another task. Example: Windows XP.
In co-operative multitasking, a task can control the CPU as long as it requires it.
However, it will free the CPU to execute another program when it no longer needs the
CPU. Example: Windows 3.x, MultiFinder, etc.
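The pre-emptive scheme above can be sketched as a round-robin simulation: the "OS" gives each program a fixed time slice (here, a number of work units) and then switches to the next program whether or not it has finished. The task names and work units are illustrative.

```python
# Round-robin time slicing: each task runs for at most 'time_slice' units,
# then is pre-empted and sent to the back of the ready queue.

from collections import deque

def round_robin(tasks, time_slice):
    """tasks: dict of name -> remaining work units. Returns execution order."""
    ready = deque(tasks.items())
    order = []
    while ready:
        name, remaining = ready.popleft()
        order.append(name)
        remaining -= min(time_slice, remaining)
        if remaining > 0:                 # pre-empted: re-queue unfinished task
            ready.append((name, remaining))
    return order

print(round_robin({"A": 3, "B": 1, "C": 2}, time_slice=2))
# A runs 2 units (1 left), B finishes, C finishes, A finishes: ['A', 'B', 'C', 'A']
```

In co-operative multitasking the re-queueing decision would instead be left to the task itself, which is why one badly behaved program could monopolize the CPU on systems like Windows 3.x.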
Multithreading:
A program in execution is known as a process. A process can be further divided
into multiple sub-processes. These sub-processes are known as threads. A multithreading
OS can divide a process into threads and execute those threads. This increases
operating speed but also increases complexity. For example: Unix, server editions of
Linux and Windows.
Batch Processing:
Batch processing is a group processing system in which all the required input
for all the processing tasks is provided initially. The results of all the tasks are provided
after the completion of all the processing. Its main characteristics are:
1. Multiple tasks are processed
2. The user cannot provide input in between the processing
3. It is appropriate only when all the inputs are known in advance
4. It requires a large memory
5. CPU idle time is less
6. A printer is the appropriate output device
7. It is an old processing technique and rarely used at present
Online Processing:
Online processing is an individual processing system in which tasks are processed
on an individual basis as soon as they are provided by the user. Its features include:
1. An individual task is processed at a time
2. The user can provide input in between processing
3. It is appropriate when all inputs are not known in advance
4. It doesn't require a large memory
5. CPU idle time is more
6. A monitor is the appropriate output device
7. It is a modern processing technique and mostly used at present
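The contrast between the two processing modes above can be sketched in a few lines: a batch function that takes every input up front and returns all results together, versus an online generator that yields each result as soon as its task arrives. The "work" here (doubling a number) is a stand-in for any real processing.

```python
# Batch vs online processing, sketched with a placeholder workload.

def batch_process(inputs):
    """All inputs known in advance; all results delivered together at the end."""
    return [task * 2 for task in inputs]

def online_process(stream):
    """Each task is processed (and its result produced) as soon as it arrives."""
    for task in stream:
        yield task * 2

print(batch_process([1, 2, 3]))          # [2, 4, 6] - one combined result
for result in online_process(iter([1, 2, 3])):
    print(result)                         # 2, then 4, then 6, one at a time
```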
2. Today's mobile devices are multifunctional devices capable of hosting a broad
range of applications for both business and consumer use. Smartphones and tablets
enable people to use their mobile device to access the Internet for email, instant
messaging, text messaging and Web browsing, as well as work documents, contact lists
and more.
Mobile devices are often seen as an extension to your own PC or laptop, and in
some cases newer, more powerful mobile devices can even completely replace PCs.
And when the devices are used together, work done remotely on a mobile device can be
synchronized with PCs to reflect changes and new information while away from the
computer.
Control questions:
1. What do you know about operating systems?
2. What can you say about the evolution of OS?
3. How many tasks can a multitasking OS process?
4. What is online processing?
Lecture 5
Databases
1. Basics of database management systems: concept, characteristics, architecture.
2. Data models.
The objectives of the lecture.
Give the following concepts:
 Database as the storage of data;
 Organizational approach of databases;
 Data models.
1. A database is a collection of information that is organized so that it can easily be
accessed, managed, and updated. In one view, databases can be classified according to
types of content: bibliographic, full-text, numeric, and images.
In computing, databases are sometimes classified according to their organizational
approach. The most prevalent approach is the relational database, a tabular database in
which data is defined so that it can be reorganized and accessed in a number of different
ways. A distributed database is one that can be dispersed or replicated among different
points in a network. An object-oriented programming database is one that is congruent
with the data defined in object classes and subclasses.
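The relational idea described above can be shown with Python's built-in sqlite3 module: the same tabular data can be reorganized and accessed in different ways purely through queries. The table and column names are illustrative.

```python
# One relational table, two different ways of accessing the same data.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, class TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO students VALUES (?, ?, ?)",
    [("Aidar", "10A", 16), ("Bota", "10A", 15), ("Chingiz", "10B", 16)],
)

# Access 1: all students in class 10A
rows = conn.execute("SELECT name FROM students WHERE class = '10A'").fetchall()
print([r[0] for r in rows])                # ['Aidar', 'Bota']

# Access 2: the same table reorganized as a count per class
rows = conn.execute(
    "SELECT class, COUNT(*) FROM students GROUP BY class ORDER BY class"
).fetchall()
print(rows)                                # [('10A', 2), ('10B', 1)]
```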
Database architecture is logically divided into two types:
1. Logical two-tier client/server architecture
2. Logical three-tier client/server architecture
Two-tier Client / Server Architecture
Two-tier client/server architecture is used for user interface programs and
application programs that run on the client side. An interface called ODBC (Open
Database Connectivity) provides an API that allows client-side programs to call the
DBMS. Most DBMS vendors provide ODBC drivers. A client program may connect to
several DBMSs. In this architecture, some variation of the client is also possible; for
example, in some DBMSs more functionality is transferred to the client, including the
data dictionary, optimization, etc. Such clients are called data servers.
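The two-tier idea is that the client-side program talks to the DBMS only through a standard driver interface. ODBC plays that role in practice; the sketch below uses Python's DB-API-style objects (via the built-in sqlite3 module) as an illustrative stand-in, with invented table and column names.

```python
# Two-tier sketch: the client code knows only a generic driver interface
# ('connect' plus cursor-style calls), not the particular DBMS behind it.

import sqlite3

def client_report(connect, dsn):
    """Client-side application code, written against the driver API only."""
    conn = connect(dsn)
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (owner TEXT, balance REAL)")
    conn.execute("INSERT INTO accounts VALUES ('alice', 120.0)")
    return conn.execute("SELECT owner, balance FROM accounts").fetchall()

# Plugging in a different vendor's 'connect' would leave the client program
# unchanged - that is the point of a standard interface like ODBC.
print(client_report(sqlite3.connect, ":memory:"))  # [('alice', 120.0)]
```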
Three-tier Client / Server Architecture
Three-tier client/server database architecture is commonly used for
web applications. An intermediate layer called the application server or web server
stores the web connectivity software and the business logic (constraints) part of the
application used to access the right amount of data from the database server. This layer
acts as a medium for sending partially processed data between the database server and
the client.
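A minimal sketch of the three tiers, with all names and the business rule invented for illustration: the client never queries the database directly; the application-server layer applies the business logic (here, an overdraft check) and passes only the permitted, partially processed data along.

```python
# Three-tier sketch: database server, application server (business logic),
# and client (user interface), each as a separate layer.

import sqlite3

db = sqlite3.connect(":memory:")                      # tier 3: database server
db.execute("CREATE TABLE accounts (owner TEXT, balance REAL)")
db.executemany("INSERT INTO accounts VALUES (?, ?)",
               [("alice", 120.0), ("bob", -15.0)])

def app_server_overdrawn():                           # tier 2: business logic
    cur = db.execute("SELECT owner FROM accounts WHERE balance < 0")
    return [row[0] for row in cur]

def client():                                         # tier 1: user interface
    return "Overdrawn accounts: " + ", ".join(app_server_overdrawn())

print(client())  # Overdrawn accounts: bob
```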
2. Data models define how the logical structure of a database is modeled. Data
Models are fundamental entities to introduce abstraction in a DBMS. Data models
define how data is connected to each other and how they are processed and stored
inside the system.
The very first data model was the flat data model, where all the data are kept in the
same plane. Earlier data models were not very scientific and were therefore prone to
many duplication and update anomalies.
Entity-Relationship Model
Entity-Relationship (ER) Model is based on the notion of real-world entities and
relationships among them. While formulating real-world scenario into the database model,
the ER Model creates entity set, relationship set, general attributes and constraints.
ER Model is best used for the conceptual design of a database.
ER Model is based on −
 Entities and their attributes.
 Relationships among entities.
These concepts are explained below.
 Entity − An entity in an ER Model is a real-world entity having properties
called attributes. Every attribute is defined by its set of values, called its domain. For
example, in a school database, a student is considered an entity. A student has various
attributes like name, age, class, etc.
 Relationship − The logical association among entities is called a relationship.
Relationships are mapped with entities in various ways. Mapping cardinalities define
the number of associations between two entities.
Mapping cardinalities:
o one to one
o one to many
o many to one
o many to many
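The entity and relationship concepts above can be sketched as Python dataclasses, following the school-database example in the text. The `School` class and field names are invented for illustration; `class_name` is used because `class` is a reserved word in Python.

```python
# ER sketch: Student is an entity with attributes; School -> Student models a
# one-to-many relationship (one school associated with many students).

from dataclasses import dataclass, field

@dataclass
class Student:                    # entity with attributes name, age, class
    name: str
    age: int
    class_name: str

@dataclass
class School:
    name: str
    students: list = field(default_factory=list)   # one school -> many students

school = School("Lyceum 1")
school.students.append(Student("Aidar", 16, "10A"))
school.students.append(Student("Bota", 15, "10A"))

print(len(school.students))  # 2 - one School entity, many Student entities
```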
Control questions:
1. What is the difference between 'information' and 'data'?
2. What kinds of data models do you know?
3. What does a data model define?
4. What is a 'relationship'?
5. What does 'entity' mean?
Lecture 6
1. Introduction to the analysis of data and data management.
2. Methods of data collection, classification and forecasting. Handling of big
data arrays.
3. Methods and stages of data mining. Tasks of data mining.
The objectives of the lecture.
Give the following concepts:
 Data analysis;
 Operations with the data;
 Data mining.
1. Data analysis is important for every organization to survive in this competitive
world. In recent years everyone wants to make use of data to understand their
business and to take effective decisions.
Approach to Data Analysis
Generally we follow the approach below while analyzing data:
Understanding the Problem and Data: It is important to understand the business
questions or requirements. We also need to understand the data and find the important
variables to use in our analysis.
Data Collection: If you have historical data, you can go straight to the next step.
In other cases, you collect the data before proceeding. For example, if you want to
analyse how a particular supermarket has performed over the last two years, you can
study the historical sales data to draw conclusions. On the other hand, if you want to
study how satisfied your customers are with your service, you have to collect the data
(by asking questions face to face, or by launching surveys) before analysing it.
Cleansing and Formatting the Data:
Once our data is ready, the next step is cleaning and formatting the data. We
cannot use the raw data we have received as input directly. We have to study the data
to find whether there are any missing or wrong data points, and we also format the data
into the form required for analysis. We follow many approaches to clean the data. For
example, we can generate simple frequency tables to see what the data look like, or we
can plot charts (generally scatter charts) to see whether there are any outliers.
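The two cleaning checks just mentioned can be sketched with the standard library: a simple frequency table, and an outlier test that flags points far from the mean. The data, and the choice of a two-standard-deviation threshold, are illustrative assumptions, not a universal rule.

```python
# Frequency table and a simple mean/standard-deviation outlier check.

from collections import Counter
from statistics import mean, stdev

sales = [12, 14, 13, 12, 15, 14, 13, 95]   # one suspicious data point

# Frequency table: a quick way to see what the data look like
print(Counter(sales))

# Flag points more than 2 standard deviations from the mean
m, s = mean(sales), stdev(sales)
outliers = [x for x in sales if abs(x - m) > 2 * s]
print(outliers)  # [95]
```

In practice a scatter plot, as the text suggests, often reveals such points faster than any formula.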
Tabulation and Statistical Modelling: Once we have completed the data
cleansing, we tabulate the variables. We can study the data to draw basic observations,
and based on the requirements we can apply statistical techniques to understand and
interpret the data. We will see these techniques in detail later.
Interpreting and Recommendations: Based on the outputs generated in the above
step, we analyse the data and write our recommendations (generally as a presentation
or dashboard) along with the underlying assumptions. We then send them to the
executives, who make the decisions that solve the business problem.
Data collection is the process of gathering and measuring information on targeted
variables in an established systematic fashion, which then enables one to answer
relevant questions and evaluate outcomes. The data collection component of research is
common to all fields of study, including the physical and social sciences, humanities
and business. It helps us to bring together the main points of the gathered information.
While methods vary by discipline, the emphasis on ensuring accurate and honest
collection remains the same. The goal of all data collection is to capture quality
evidence that translates into rich data analysis and allows the building of a convincing
and credible answer to the questions that have been posed.
2. Regardless of the field of study or preference for defining data (quantitative or
qualitative), accurate data collection is essential to maintaining the integrity of research.
Both the selection of appropriate data collection instruments (existing, modified, or
newly developed) and clearly delineated instructions for their correct use reduce the
likelihood of errors occurring.
A formal data collection process is necessary as it ensures that data gathered are
both defined and accurate and that subsequent decisions based on arguments embodied
in the findings are valid.[2] The process provides both a baseline from which to measure
and in certain cases a target on what to improve.
3. Data mining is an interdisciplinary subfield of computer science. It is the
computational process of discovering patterns in large data sets involving methods at
the intersection of artificial intelligence, machine learning, statistics, and database
systems. The overall goal of the data mining process is to extract information from a
data set and transform it into an understandable structure for further use. Aside from the
raw analysis step, it involves database and data management aspects, data preprocessing, model and inference considerations, interestingness metrics, complexity
considerations, post-processing of discovered structures, visualization, and online
updating. Data mining is the analysis step of the "knowledge discovery in databases"
process, or KDD.
The term is a misnomer, because the goal is the extraction of patterns and
knowledge from large amounts of data, not the extraction (mining) of data itself. It also
is a buzzword and is frequently applied to any form of large-scale data or information
processing (collection, extraction, warehousing, analysis, and statistics) as well as any
application of computer decision support system, including artificial intelligence,
machine learning, and business intelligence. The book Data mining: Practical machine
learning tools and techniques with Java (which covers mostly machine learning
material) was originally to be named just Practical machine learning, and the term data
mining was only added for marketing reasons. Often the more general terms (large
scale) data analysis and analytics – or, when referring to actual methods, artificial
intelligence and machine learning – are more appropriate.
The actual data mining task is the automatic or semi-automatic analysis of large
quantities of data to extract previously unknown, interesting patterns such as groups of
data records (cluster analysis), unusual records (anomaly detection), and dependencies
(association rule mining). This usually involves using database techniques such as
spatial indices. These patterns can then be seen as a kind of summary of the input data,
and may be used in further analysis or, for example, in machine learning and predictive
analytics. For example, the data mining step might identify multiple groups in the data,
which can then be used to obtain more accurate prediction results by a decision support
system. Neither the data collection, data preparation, nor result interpretation and
reporting is part of the data mining step, but do belong to the overall KDD process as
additional steps.
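One of the data mining tasks named above, cluster analysis, can be sketched in miniature: a one-dimensional k-means with two clusters, written with the standard library only. Real data mining would use a proper library and multidimensional data; this only shows the pattern-discovery idea, and the sample points are invented.

```python
# Minimal 1-D k-means (k=2): repeatedly assign each point to its nearest
# centre, then move each centre to the mean of its assigned points.

from statistics import mean

def kmeans_1d(points, iters=10):
    centers = [min(points), max(points)]          # simple initialisation
    for _ in range(iters):
        groups = ([], [])
        for p in points:                          # assign to the nearest centre
            groups[abs(p - centers[1]) < abs(p - centers[0])].append(p)
        centers = [mean(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers), groups

data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]            # two obvious groups
centers, groups = kmeans_1d(data)
print(centers)
```

The discovered centres summarise the input data, exactly in the sense described above: the patterns can then feed further analysis or a predictive model.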
Control questions:
Lecture 7
Networking and telecommunications
1. Basic concepts and LAN components.
2. Types of networks.
3. Network protocols and standards.
The objectives of the lecture.
Give the following concepts:
 Internet Networking;
 Local Area Network, Metropolitan Area Network, Campus Area Network, Wireless Local Area Network;
 Types of protocols;
 Standards of the network.
1. Computer networking is an engineering discipline that aims to study
and analyze the communication process among various computing devices or
computer systems that are linked, or networked, together to exchange information
and share resources.
Computer networking depends on the theoretical application and practical
implementation of fields like computer engineering, computer science, information
technology and telecommunication.
The components used to establish a local area network (LAN) have a variety of
functions. The common unifying theme among them is that they facilitate
communication between two or more computers. LAN components are configurable in
a variety of ways, but a LAN always requires the same basic components.
2. Local Area Network (LAN)
This is one of the original categories of network, and one of the simplest. LAN
networks connect computers together over relatively small distances, such as within a
single building or within a small group of buildings.
Homes often have LAN networks too, especially if there is more than one device in
the home. Often they do not contain more than one subnet, if any, and are usually
controlled by a single administrator. They do not have to be connected to the internet to
work, although they can be.
Other Types of Network
Metropolitan Area Network – This is a network which is larger than a LAN but
smaller than a WAN, and incorporates elements of both. It typically spans a town or city
and is owned by a single person or company, such as a local council or a large
company.
Campus Area Network – This is a network which is larger than a LAN, but
smaller than an MAN. This is typical in areas such as a university, large school or small
business. It is typically spread over a collection of buildings which are reasonably local
to each other. It may have an internal Ethernet as well as capability of connecting to the
internet.
Wireless Local Area Network – This is a LAN which works using wireless
network technology such as Wi-Fi. This type of network is becoming more popular as
wireless technology is further developed and is used more in the home and by small
businesses. It means devices do not need to rely on physical cables and wires as much
and can organise their spaces more effectively.
3. A Protocol is a predefined set of rules that dictates how network devices (such
as router, computer, or switch) communicate and exchange data on the network.
Application Protocols:
The Application Protocol are built on the top of TCP/IP protocol suite. The list of
protocol include the following:
 Simple Network Management Protocol (SNMP)
The Simple Network Management Protocol (SNMP) is an application-layer
protocol designed to manage complex communication networks. SNMP works by
sending messages, called protocol data units (PDUs), to different parts of a network.
SNMP-compliant devices, called agents, store data about themselves in Management
Information Bases (MIBs) and return this data to the SNMP servers.
There are two versions of SNMP: Version 1 and Version 2.
 File Transfer Protocol (FTP)
FTP is a Client Server protocol, used for copying files between an FTP server and
a client computer over a TCP/IP network. FTP is commonly used to communicate with
web servers to upload or download files.
FTP, the File Transfer Protocol, documented in RFC 959, is one of the oldest Internet protocols still in widespread use. FTP uses the TCP protocol for communication, and is capable of transferring both binary files and text files. Some popular FTP clients include FileZilla and CuteFTP.
FTP uses TCP port number 21.
 Trivial File Transfer Protocol (TFTP)
TFTP stands for Trivial File Transfer Protocol. TFTP is very similar to FTP, but uses the UDP protocol for file transfer. UDP, as discussed elsewhere in the tutorial, is considered an unreliable protocol. Hence, TFTP is not frequently used for normal file transfer applications.
 Simple Mail Transfer Protocol (SMTP)
SMTP (Simple Mail Transfer Protocol) is a TCP/IP protocol used for sending email messages between servers. SMTP is also used to send email messages from a client
machine to a server. An email client such as MS Outlook Express uses SMTP for
sending emails and POP3/IMAP for receiving emails from the server to the client
machine. In other words, we typically use a program that employs SMTP for sending email, and either POP3 or IMAP for receiving messages from our local (or ISP) server.
SMTP is usually implemented to operate over Transmission Control Protocol port 25.
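As a sketch of how a client program hands a message to SMTP, the example below builds an e-mail with Python's standard email library; the addresses and server name are placeholders, so the actual smtplib send is left commented out rather than presented as a working connection.

```python
# Compose a message for delivery over SMTP. The host and addresses
# below are hypothetical, so the network step stays commented out.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello over SMTP"
msg.set_content("SMTP carries this message between servers on TCP port 25.")

# with smtplib.SMTP("mail.example.com", 25) as server:  # hypothetical host
#     server.send_message(msg)

print(msg["Subject"])  # → Hello over SMTP
```

Receiving the reply would then fall to POP3 or IMAP, as the paragraph above explains.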
 Post Office Protocol (POP3)
POP3 stands for Post Office Protocol version 3. It is used for fetching messages from an email server. Commonly used POP3 client programs include Outlook Express and Mozilla Thunderbird.
 Internet Message Access Protocol (IMAP)
The Internet Message Access Protocol (commonly known as IMAP or IMAP4)
allows a local client to access e-mail on a remote server. The current version, IMAP version 4, is defined by RFC 3501. IMAP4 and POP3 are the two most prevalent
Internet standard protocols for e-mail retrieval.
 Network File System (NFS)
Network File System is a distributed file system which allows a computer to
transparently access files over a network.
 Telnet
The Telnet service provides a remote login capability. This lets a user on one
machine log into another machine and act as if they are directly in front of the remote
machine. The connection can be anywhere on the local network, or on another network
anywhere in the world, as long as the user has permission to log into the remote system.
Telnet uses TCP to maintain a connection between two machines. Telnet uses port
number 23.
 Hypertext Transfer Protocol (HTTP)
A protocol used to transfer hypertext pages across the World Wide Web. HTTP
defines how messages are formatted and transmitted, and what actions Web servers and
browsers should take in response to various commands. For example, when you enter a
URL in your browser, this actually sends an HTTP command to the Web server
directing it to fetch and transmit the requested Web page. Note that HTML, by contrast, deals with how Web pages are formatted and displayed in a browser.
HTTP is called a stateless protocol because each command is executed
independently, without any knowledge of the commands that came before it.
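Because each HTTP command is self-contained text, the request a browser sends for a URL can be written out by hand. The sketch below builds a minimal HTTP/1.1 GET request; the host and path are purely illustrative.

```python
# HTTP is stateless, line-oriented text: every request carries all the
# information the server needs. Host and path here are illustrative.
def build_get_request(host, path="/"):
    """Return the raw HTTP/1.1 GET request for host/path."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"              # blank line ends the header section
    )

request = build_get_request("www.example.com", "/index.html")
print(request.splitlines()[0])  # → GET /index.html HTTP/1.1
```

Sent over a TCP connection to port 80, this text is exactly the "HTTP command" the paragraph above describes.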
Standards are necessary in almost every business and public service entity.
The primary reason for standards is to ensure that hardware and software
produced by different vendors can work together. Without networking standards, it
would be difficult—if not impossible—to develop networks that easily share
information. Standards also mean that customers are not locked into one vendor. They
can buy hardware and software from any vendor whose equipment meets the standard.
In this way, standards help to promote more competition and hold down prices.
The use of standards makes it much easier to develop software and hardware that
link different networks because software and hardware can be developed one layer at a
time.
Control questions:
1. How many types of networks do you know?
2. Name them.
3. What is HTTP?
4. What is SMTP?
5. What is NFS?
6. What does a networking standard mean?
Lecture 8. Cyber Security, Ethics and Trust
1. Information security threats.
2. Security classification for information.
3. Enciphering (cryptography).
4. Anti-virus programs.
The objectives of the lecture.
Give the following concepts:
 Information security, CIA;
 Important aspects of information security;
 Threats;
 Cryptography and cryptology, encryption;
 Viruses;
 Anti-virus software.
1. Sometimes referred to as computer security, information technology security is
information security applied to technology (most often some form of computer system).
It is worthwhile to note that a computer does not necessarily mean a home desktop. A
computer is any device with a processor and some memory. Such devices can range
from non-networked standalone devices as simple as calculators, to networked mobile
computing devices such as smartphones and tablet computers. IT security specialists are
almost always found in any major enterprise/establishment due to the nature and value
of the data within larger businesses. They are responsible for keeping all of the
technology within the company secure from malicious cyber attacks that often attempt
to breach into critical private information or gain control of the internal systems.
Information assurance
The act of providing trust in the information, that is, that the Confidentiality, Integrity and Availability (CIA) of the information are not violated. These issues include, but are not
limited to: natural disasters, computer/server malfunction or physical theft. Since most
information is stored on computers in our modern era, information assurance is typically
dealt with by IT security specialists. A common method of providing information
assurance is to have an off-site backup of the data in case one of the mentioned issues
arises.
Threats
Information security threats come in many different forms. Some of the most
common threats today are software attacks, theft of intellectual property, identity theft,
theft of equipment or information, sabotage, and information extortion. Most people
have experienced software attacks of some sort. Viruses, worms, phishing attacks, and
Trojan horses are a few common examples of software attacks. The theft of intellectual
property has also been an extensive issue for many businesses in the IT field. Identity
theft is the attempt to act as someone else usually to obtain that person's personal
information or to take advantage of their access to vital information. Theft of equipment
or information is becoming more prevalent today due to the fact that most devices today
are mobile. Cell phones are prone to theft, and have also become far more desirable as
the amount of data capacity increases. Sabotage usually consists of the destruction of an
organization′s website in an attempt to cause loss of confidence on the part of its
customers. Information extortion consists of theft of a company′s property or
information as an attempt to receive a payment in exchange for returning the
information or property back to its owner, as with ransomware. There are many ways to
help protect yourself from some of these attacks but one of the most functional
precautions is user carefulness.
2. An important aspect of information security and risk management is recognizing
the value of information and defining appropriate procedures and protection
requirements for the information. Not all information is equal and so not all
information requires the same degree of protection. This requires information to be
assigned a security classification.
The first step in information classification is to identify a member of senior
management as the owner of the particular information to be classified. Next, develop a
classification policy. The policy should describe the different classification labels,
define the criteria for information to be assigned a particular label, and list the required
security controls for each classification.
Some factors that influence which classification information should be assigned
include how much value that information has to the organization, how old the
information is and whether or not the information has become obsolete. Laws and other
regulatory requirements are also important considerations when classifying information.
3. Cryptography or cryptology (from Greek κρυπτός kryptós, "hidden, secret";
and γράφειν graphein, "writing", or -λογία -logia, "study", respectively) is the practice
and study of techniques for secure communication in the presence of third parties called
adversaries. More generally, cryptography is about constructing and analyzing protocols
that prevent third parties or the public from reading private messages; various aspects of information security such as data confidentiality, data integrity, authentication, and non-repudiation are central to modern cryptography. Modern cryptography exists at the
intersection of the disciplines of mathematics, computer science, and electrical
engineering. Applications of cryptography include ATM cards, computer passwords,
and electronic commerce.
Cryptography prior to the modern age was effectively synonymous with
encryption, the conversion of information from a readable state to apparent nonsense.
The originator of an encrypted message (Alice) shared the decoding technique needed to
recover the original information only with intended recipients (Bob), thereby precluding
unwanted persons (Eve) from doing the same. The cryptography literature often uses
Alice ("A") for the sender, Bob ("B") for the intended recipient, and Eve
("eavesdropper") for the adversary. Since the development of rotor cipher machines in
World War I and the advent of computers in World War II, the methods used to carry
out cryptology have become increasingly complex and its application more widespread.
Modern cryptography is heavily based on mathematical theory and computer
science practice; cryptographic algorithms are designed around computational hardness
assumptions, making such algorithms hard to break in practice by any adversary. It is
theoretically possible to break such a system, but it is infeasible to do so by any known
practical means. These schemes are therefore termed computationally secure; theoretical
advances, e.g., improvements in integer factorization algorithms, and faster computing
technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that provably cannot be broken even with unlimited
computing power—an example is the one-time pad—but these schemes are more
difficult to implement than the best theoretically breakable but computationally secure
mechanisms.
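The one-time pad mentioned above can be sketched in a few lines: XOR the message with a truly random key of the same length, and XOR with the same key again to decrypt. The message text is invented for the example.

```python
# The one-time pad, sketched with XOR: a truly random key as long as
# the message gives information-theoretic security, provided the key
# is never reused. The plaintext here is hypothetical.
import secrets

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))   # one random key byte per message byte

ciphertext = xor_bytes(message, key)      # Alice encrypts
recovered = xor_bytes(ciphertext, key)    # Bob decrypts with the same key

print(recovered == message)  # → True
```

Note the practical weakness the text points out: the key is as long as the message and must be shared securely, which is why computationally secure schemes dominate in practice.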
The growth of cryptographic technology has raised a number of legal issues in the
information age. Cryptography's potential for use as a tool for espionage and sedition
has led many governments to classify it as a weapon and to limit or even prohibit its use
and export. In some jurisdictions where the use of cryptography is legal, laws permit
investigators to compel the disclosure of encryption keys for documents relevant to an
investigation. Cryptography also plays a major role in digital rights management and
copyright infringement of digital media.
4. Antivirus or anti-virus software (often abbreviated as AV), sometimes known
as anti-malware software, is computer software used to prevent, detect and remove malicious software.
Antivirus software was originally developed to detect and remove computer
viruses, hence the name. However, with the proliferation of other kinds of malware,
antivirus software started to provide protection from other computer threats. In
particular, modern antivirus software can protect from: malicious browser helper objects
(BHOs), browser hijackers, ransomware, keyloggers, backdoors, rootkits, trojan horses,
worms, malicious LSPs, dialers, fraudtools, adware and spyware. Some products also
include protection from other computer threats, such as infected and malicious URLs,
spam, scam and phishing attacks, online identity (privacy), online banking attacks,
social engineering techniques, advanced persistent threat (APT) and botnet DDoS
attacks.
Control questions:
1. What do you know about information security?
2. What does information assurance provide?
3. What kinds of threats do you know?
4. What kinds of cryptographic algorithms do you know?
5. What do you know about viruses? Name types of viruses.
6. What is an anti-virus program? What kind of anti-virus program do you use? Why?
Lecture 9. Internet Technology
1. Introduction to Internet technologies
2. Architecture of the client-server model
3. Web servers and web applications
4. HTTP
5. XHTML
6. CSS
The objectives of the lecture.
Give the following concepts:
 Internet technology;
 Client-server, web-server;
 HTTP;
 XHTML;
 CSS.
1. The Internet is essentially a large database where all different types of information can be passed and transmitted. It can be passively passed along in the form of non-interactive websites and blogs; it can also be actively passed along in the form of file sharing and document loading. Internet technology has led to a wealth of information available to anyone who is able to access the Internet. It has allowed people who were accustomed to textbooks and libraries to learn anything they could want from the comfort of a computer.
Internet technology is constantly improving and is able to speed up the
information highway that it has created. With the technologies powering the
Internet, speeds are faster, more information is available and different processes
are done that were not possible in the past. Internet technology has changed, and
will continue to change, the way that the world does business and how people
interact in daily life.
2. First we'll define a web application: it's a client-server application - there is
a browser (the client) and a web server. The logic of a web application is
distributed among the server and the client, there's a channel for information
exchange, and the data is stored mainly on the server. Further details depend on the
architecture: different ones distribute the logic in different ways. It can be placed
on the server as well as on the client side.
3. A web server is a specialized type of file server. Its job is to retrieve files
from the server’s hard drive, format the files for the Web browser, and send them
out via the network. Web servers are designed to do a great job of sending static
content out to a large number of users. The pages delivered by the server are
expected to be the same for everyone who visits the server.
The function of a typical Web server is shown below. The user requests a
web page. The Web Server finds the web page file in a local directory and sends it
back out to the user. When graphic files are requested, the same thing happens. The
Web Server finds the requested graphic files and sends them back to the user.
The Web Server standards were originally designed to publish static documents on
the Internet. There was a limited capability for accessing dynamic content, but this
was never intended to support high volume, highly interactive Web applications.
4. HTTP (Hypertext Transfer Protocol) is the set of rules for transferring files
(text, graphic images, sound, video, and other multimedia files) on the World Wide
Web. As soon as a Web user opens their Web browser, the user is indirectly
making use of HTTP. HTTP is an application protocol that runs on top of the
TCP/IP suite of protocols (the foundation protocols for the Internet).
5. As the World Wide Web Consortium (W3C) describes it, XHTML
(Extensible Hypertext Markup Language) is a reformulation of HTML 4.0 as an
application of the Extensible Markup Language (XML). For readers unacquainted
with either term, HTML is the set of codes (that's the "markup language") that a
writer puts into a document to make it displayable on the World Wide Web.
HTML 4 is the current version of it. XML is a structured set of rules for how one
might define any kind of data to be shared on the Web. It's called an "extensible"
markup language because anyone can invent a particular set of markup for a
particular purpose and as long as everyone uses it (the writer and an application
program at the receiver's end), it can be adapted and used for many purposes including, as it happens, describing the appearance of a Web page.
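As a small illustration of XML's "invent your own markup" idea, the sketch below parses a made-up `<library>` vocabulary with Python's standard library; the tag names and book titles are invented for the example.

```python
# XML lets anyone invent markup for a purpose. The <library>/<book>
# vocabulary below is invented purely for illustration, and is parsed
# with Python's standard library.
import xml.etree.ElementTree as ET

document = """
<library>
  <book year="1999">
    <title>XHTML Basics</title>
  </book>
  <book year="2001">
    <title>XML in Practice</title>
  </book>
</library>
"""

root = ET.fromstring(document)
titles = [book.find("title").text for book in root.findall("book")]
print(titles)  # → ['XHTML Basics', 'XML in Practice']
```

Any application that agrees on this vocabulary can read the data, which is exactly the "extensible" property the paragraph describes; XHTML applies the same discipline to HTML's own tag set.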
6. Cascading Style Sheet (CSS) is a way to design a website, or a group of
websites, so that they have a consistent look and feel, and so that their look and
feel is easy to change. By using CSS to design a website, the web developer gains
a greater degree of control over how the site appears.
A web developer can use a CSS file to control the look of a website in three
main ways. The first way is called inline, referring to the fact that the code is
placed right into the line of the website code. For example, a web developer might
want to make a particular sentence appear in bold, red type so that it stands out.
She could use CSS to set the style of that sentence to bold and red using inline
code. The benefit of this method is that it allows a quick and easy change to a
particular part of a web page.
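The inline method can be illustrated by placing a style attribute directly on the element; in this sketch the HTML fragment is assembled as a Python string purely for illustration.

```python
# Inline CSS puts the style declaration directly on the element,
# as in the bold-red-sentence example above. The fragment is built
# as a Python string only to keep the example self-contained.
style = "font-weight: bold; color: red;"          # bold, red type
sentence = "This sentence stands out."
html = f'<p style="{style}">{sentence}</p>'

print(html)
# → <p style="font-weight: bold; color: red;">This sentence stands out.</p>
```

The trade-off is the one the text notes: inline styles are quick for one spot on one page, but restyling a whole site this way means editing every element.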
Control questions:
1. How does a web server work?
2. How does a client-server architecture work?
3. What is the HTTP?
4. What is the CSS?
5. Name the ways to design a website.
Lecture 10. Cloud and Mobile technology
1. Introduction in a corporate and cloud computing.
2. Cloud storage: Blobs, NoSQL (tables). Future of cloud computing.
Architecture of cloud applications.
3. Main terms and concepts of mobile technologies. Architecture of mobile
applications. Types of mobile computing devices.
The objectives of the lecture.
Give the following concepts:
 Cloud technology;
 Benefits of cloud technologies;
 Cloud storage;
 NoSQL and BLOBs;
 Mobile devices, mobile technologies, types of mobile devices;
 Mobile operating systems.
1. Essentially, cloud computing is an extreme form of outsourcing, one in
which hardware ownership and operation, software version updating, data storage
and backup, and occasionally other functions as well, are all outsourced to a single vendor. Moreover, hardware is generally located at the cloud vendor site, where
massive racks of servers and farms of storage devices process the applications and
maintain the data of huge numbers of users to achieve efficient operation. Since the
data and the programs are located “somewhere” in the apparently nebulous
structure of the web and accessed remotely through the Internet, this form of
shared remotely hosted service is called cloud computing.
Seen as a form of outsourcing, cloud computing offers a well-defined set of
benefits; these are the benefits that are traditionally associated with outsourcing,
and most are well known and well studied. The first of course are associated with
economies of scale: a large vendor will make much more efficient use of
personnel, and a large vendor sees much less variation day by day in demand than
each individual user will encounter and therefore can do more effective load
leveling and will require less excess capacity or “safety stock” in computing
resources. As a result, a large vendor can charge for actual usage, allowing those
users with high demand at a given time to consume unusually high levels of
resources and pay higher total fees, while allowing users with lower demand to
consume fewer resources and to pay lower fees. Economies of scale also allow
large vendors, whether cloud-based or not, to perform more R&D than smaller
users could perform.
There are several benefits of modern online computing that are wrongly lumped in with cloud computing, such as online access from any location, social
networking, community outreach, and ubiquitous connectivity. These are more
accurately attributed to remote web-based access, and indeed are not inextricably
linked to the cloud. The cloud is an outsourcing service delivery mechanism, and
the web is the medium for delivery.
2. Cloud Storage is a model of data storage in which the digital data is stored
in logical pools, the physical storage spans multiple servers (and often locations),
and the physical environment is typically owned and managed by a hosting
company. These cloud storage providers are responsible for keeping the data
available and accessible, and the physical environment protected and running.
People and organizations buy or lease storage capacity from the providers to store
user, organization, or application data.
Cloud storage services may be accessed through a co-located cloud computer
service, a web service application programming interface (API) or by applications
that utilize the API, such as cloud desktop storage, a cloud storage gateway or
Web-based content management systems.
A NoSQL database provides a mechanism for storage and retrieval of data
which is modeled in means other than the tabular relations used in relational
databases. Such databases have existed since the late 1960s, but did not obtain the
"NoSQL" moniker until a surge of popularity in the early twenty-first century,
triggered by the needs of Web 2.0 companies such as Facebook, Google, and
Amazon.com. NoSQL databases are increasingly used in big data and real-time
web applications. NoSQL systems are also sometimes called "Not only SQL" to
emphasize that they may support SQL-like query languages.
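As a rough sketch of the difference between tabular relations and a NoSQL key-value model, the example below uses plain Python dictionaries as a stand-in for a document store; the user records are invented for illustration.

```python
# A toy contrast between the tabular and NoSQL models, using plain
# dictionaries as a stand-in for a key-value store. The user records
# are invented for illustration.

# Relational style: every row has the same fixed columns.
relational_rows = [
    ("u1", "Alice", "alice@example.com"),
    ("u2", "Bob", "bob@example.com"),
]

# Key-value (NoSQL) style: each value is a document whose shape may vary.
kv_store = {
    "user:u1": {"name": "Alice", "email": "alice@example.com"},
    "user:u2": {"name": "Bob", "followers": 1200},  # different fields are fine
}

print(kv_store["user:u2"]["followers"])  # → 1200
```

The schema-free second form is what lets NoSQL systems absorb the varied, fast-changing records of big-data and real-time web applications.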
Windows Azure BLOB storage service can be used to store and retrieve
Binary Large Objects (BLOBs), or what are more commonly known as files. In
this introduction to the Windows Azure BLOB Storage service we will cover the
difference between the types of BLOBs you can store, how to get files into and out
of the service, how you can add metadata to your files and more.
There are many reasons why you should consider using BLOB storage.
Perhaps you want to share files with clients, or off-load some of the static content
from your web servers to reduce the load on them. However, if you are using
Azure’s Platform as a Service (PaaS), also known as Cloud Services, you’ll most
likely be very interested in BLOB storage because it provides persistent data
storage.
3. Mobile technology is the technology used for cellular communication.
Mobile code division multiple access (CDMA) technology has evolved rapidly
over the past few years. Since the start of this millennium, a standard mobile
device has gone from being no more than a simple two-way pager to being a
mobile phone, GPS navigation device, an embedded web browser and instant
messaging client, and a handheld game console. Many experts argue that the future
of computer technology rests in mobile computing with wireless networking.
Mobile computing by way of tablet computers is becoming more popular. Tablets are available on the 3G and 4G networks.
Today's mobile devices are multifunctional devices capable of hosting a broad
range of applications for both business and consumer use. Smartphones and tablets
enable people to use their mobile device to access the Internet for email, instant
messaging, text messaging and Web browsing, as well as work documents, contact lists
and more.
Types of Mobile Computing Devices
Smartphones
Smartphones combine a mobile phone and a handheld computer into a single
device. Smartphones allow users to access and store information (e.g. e-mail) and
install programs (applications) while also being able to use a mobile phone in one
device. For example, a smartphone could be a mobile phone with some PDA
functions integrated into the device or vice versa. Examples of smartphones over
the years have included the Apple iPhone, Samsung Galaxy, Microsoft and Nokia
Lumia, Sony Ericsson, Palm Treo, Blackberry, Nokia T-Mobile Sidekick, Torq,
Motorola Q, E-Ten, HP iPaq and I-mate.
Tablet PCs
Tablet PCs are an evolution of the notebook computer with touchscreen LCD
screens that can be utilized with your fingertips or with a stylus. The handwriting
with a stylus is digitized and can be converted to standard text through handwriting
recognition, or it can remain as handwritten text. The stylus can also be used to
type on a pen-based key layout where the lettered keys are arranged differently
than a QWERTY keyboard. Tablet PCs may also offer a removable keyboard as an
additional input option. Examples of tablet PCs have included Apple iPad,
Microsoft Surface and Surface Pro, Samsung Galaxy Tab, Samsung Nexus,
Amazon Kindle Fire HD and Lenovo Yoga.
Mobile Operating Systems (Mobile OS)
Like a computer operating system, a mobile operating system is the software
platform for mobile devices on top of which other programs run. When you
purchase a mobile device, the manufacturer will have chosen the mobile OS for
that specific device. The mobile operating system is responsible for determining
the functions and features available on your device, such as thumbwheel,
keyboards, WAP, synchronization with applications, e-mail, text messaging and
more.
The mobile operating system will also determine which third-party applications can be used on your device. Some of the more common and well-known mobile operating systems include the following:
Apple iOS
Apple’s iOS mobile operating system powers the company’s line of mobile
devices like the iPhone, iPad, iPod touch, and Apple TV. Apple iOS was
originally called the iPhone OS but was renamed in 2010 to reflect the operating
system’s evolving support for additional Apple devices. Apple updated iOS to iOS
9 in 2015 in conjunction with the company’s OS X El Capitan operating system
release.
Google Android
Google Android is a mobile operating system based on Linux that has quickly
become the biggest competitor to Apple iOS in the mobile device market. Google
originally released Android’s source code under open source licenses, and today
the company continues to develop the mobile OS privately prior to major update
releases that are made available to OEMs and the public. Manufacturers of
Android-powered smartphone and tablet devices include Samsung, Sony, Asus,
Amazon, HTC and LG, as well as Google itself.
Windows Phone
Originally called the Windows Mobile platform and then Windows Phone,
Microsoft’s mobile OS is available on a variety of devices from a variety of
wireless operators. You will find Windows Phone on Microsoft hardware devices
as well as Nokia, Dell, HP, Motorola, Palm and i-mate products. Microsoft
unveiled the latest release of its mobile operating system, Windows 10 Mobile, in
late 2015 as part of the Windows 10 family of operating systems.
Control questions:
1. What does cloud technology mean? Do you use any cloud service in your daily life?
2. What do you know about mobile technology? What kind of mobile device do you use?
3. Can you imagine your life without mobile devices?
4. What opportunities do mobile devices give?
Lecture 11. Multimedia technologies
1. Representation of text, audio, video and graphical information in a digital format.
2. Instruments of multimedia applications. Use of multimedia technologies
for planning, descriptions of business processes and their visualization.
The objectives of the lecture.
Give the following concepts:
 Information in a digital format;
 Instruments of multimedia applications;
 Use of multimedia.
1. Digitizing or digitization is the representation of an object, image, sound, document or signal (usually an analog signal) by generating a series of numbers that describe a
discrete set of its points or samples. The result is called digital representation or,
more specifically, a digital image, for the object, and digital form, for the signal. In
modern practice, the digitized data is in the form of binary numbers, which
facilitate computer processing and other operations, but strictly speaking, digitizing
simply means the conversion of analog source material into a numerical format;
the decimal or any other number system can be used instead.
Digitization is of crucial importance to data processing, storage and
transmission, because it "allows information of all kinds in all formats to be carried
with the same efficiency and also intermingled". Unlike analog data, which
typically suffers some loss of quality each time it is copied or transmitted, digital
data can, in theory, be propagated indefinitely with absolutely no degradation. This
is why it is a favored way of preserving information for many organizations around
the world.
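The conversion into numbers described above can be sketched in a few lines: take discrete samples of a continuous signal and quantize each one to an 8-bit integer. The tone and rates below are chosen only for illustration.

```python
# A sketch of digitization: sample a continuous signal (a 440 Hz sine
# wave standing in for an analog source) at discrete points, then
# quantize each sample to an 8-bit integer. Rates are illustrative.
import math

SAMPLE_RATE = 8000      # samples per second
FREQUENCY = 440         # "analog" source tone in Hz

def digitize(duration_s, bits=8):
    levels = 2 ** bits
    samples = []
    for n in range(int(SAMPLE_RATE * duration_s)):
        t = n / SAMPLE_RATE
        analog = math.sin(2 * math.pi * FREQUENCY * t)   # value in [-1, 1]
        # map [-1, 1] onto the integer levels 0 .. levels-1
        samples.append(round((analog + 1) / 2 * (levels - 1)))
    return samples

samples = digitize(0.01)          # 10 ms of audio
print(len(samples))               # → 80
```

The resulting list of binary numbers can be copied or transmitted any number of times without the generational loss that analog media suffer.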
2. Instruments of multimedia applications
CD-ROM Disk
A CD-ROM operates in many respects the same way as an audio compact disc. In fact, many CD-ROM drives are capable of playing audio compact discs, although the reverse compatibility does not exist. Current disk drives read compact discs using a laser that generates beams in the red region of the spectrum, but the use of blue lasers, which would allow the volume of information stored on a compact disc to be increased, is already being studied. Information can be read as soon as the laser is positioned over the required part of the disc. The speed at which information is transferred to the computer is called the data transfer rate. It is measured by the volume of information that can be read in one second.
Sound images
Most sound cards can create and play back two different types of sound files:
wave (WAV) files and non-wave, or MIDI, files.
MIDI files, by contrast, store not a wave spectrum but commands for
reconstructing sounds.
Another factor affecting sound quality is the number of bits available for
storage. A bit is the smallest unit of information stored by a computer. The
more bits used for each sound sample, the better its quality. Sound cards are
usually 8-bit or 16-bit. A 16-bit card can capture and record the subtlest
shades of a sound. If you use a sampling frequency of 44 kHz, you need a
16-bit card.
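Why the combination of sampling frequency and bit depth matters for storage can be shown with simple arithmetic. The helper below is a hypothetical sketch, not an audio API; it computes the size of uncompressed PCM audio data.

```python
def wav_data_bytes(duration_s, sample_rate_hz=44100, bit_depth=16, channels=2):
    """Uncompressed PCM audio size: one sample of bit_depth bits is stored
    per channel, sample_rate_hz times per second."""
    bytes_per_sample = bit_depth // 8
    return int(duration_s * sample_rate_hz * bytes_per_sample * channels)

# One minute of CD-quality stereo sound:
size = wav_data_bytes(60)   # 60 s * 44100 samples/s * 2 bytes * 2 channels
```

The result, about 10 MB per minute, is why WAV files grow quickly compared with MIDI files, which store only reconstruction commands.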
Visual display
To enjoy multimedia fully, your personal computer must be able to display
sharp, colourful images. The monitor reproduces any colour image by
combining three primary colours: red, blue and green. The image on the
screen is made up of thousands of tiny dots called pixels. Each pixel, in turn,
consists of a group of dots that glow red, blue or green when struck by an
electron beam. By varying the intensity of the beam, different colours can be
obtained. The more pixels on the screen, the sharper the image.
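The red/green/blue mixing described above can be modelled directly. The `mix_rgb` helper is invented for illustration and assumes the common convention of 8-bit intensity (0-255) per primary.

```python
def mix_rgb(red, green, blue):
    """Each primary's 'beam intensity' is a value 0-255; together the
    triple determines the pixel's colour."""
    for c in (red, green, blue):
        if not 0 <= c <= 255:
            raise ValueError("intensity must be between 0 and 255")
    return (red, green, blue)

white  = mix_rgb(255, 255, 255)   # all three beams at full intensity
yellow = mix_rgb(255, 255, 0)     # red + green, no blue
black  = mix_rgb(0, 0, 0)         # all beams off

# With 256 levels per primary, 256**3 = 16,777,216 colours are possible.
palette_size = 256 ** 3
```

Varying each intensity independently is exactly the "changing intensity of a beam" mechanism by which the monitor obtains different colours.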
Application of multimedia
The combination of still and moving images, animation, written and spoken
words, music and other sounds makes a strong impression. A good
multimedia program uses all these means in unison; paying too much attention
to one of them (for example, the video image) can spoil the cumulative effect.
Although video presents information in a very attractive and easily understood
way, it takes up a lot of disk space, and loading and playing it can slow the
program down. Easy access to the information on disk is also extremely
important. Hyperlinks are used to point the way to related information: a user
need only click one to move to the next screen of relevant material. Hyperlinks
are especially important in multimedia because they give the user freedom:
the user can control both the amount of material studied and the pace of
learning. Well-designed multimedia programs allow the user, at the press of
a button, to jump back to the screen containing the original hyperlink.
Another way of finding information in a multimedia application is through
built-in search capabilities. Search is based on text: even applications that
let you search for video or sound files locate them by their text descriptions.
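The point that multimedia search is text-based can be sketched as follows; the catalogue, file names and descriptions are invented for illustration. Even sound and video files are found through the text that describes them.

```python
def search_media(catalogue, query):
    """Return the names of media files whose text description contains the
    query, case-insensitively."""
    query = query.lower()
    return [name for name, description in catalogue.items()
            if query in description.lower()]

catalogue = {
    "intro.wav": "Spoken welcome message for the opening screen",
    "tour.avi":  "Video tour of the museum's main hall",
    "theme.mid": "MIDI theme music played on the menu screen",
}
hits = search_media(catalogue, "video")   # finds tour.avi via its description
```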
Controlling questions:
1. What does digital information mean?
2. How many types of digital information do you know? Name them.
3. What kind of multimedia applications do you use?
Lecture 12. Smart technology
1. Internet of things. Big data.
2. Artificial intelligence.
3. Use of Smart-services.
4. Green technologies in ICT.
5. Teleconferences. Telemedicine.
The objectives of the lecture.
Give the following concepts:
1. Internet of things. Big data.
The Internet of things (stylised Internet of Things or IoT) is the
internetworking of physical devices, vehicles, buildings and other items—
embedded with electronics, software, sensors, actuators, and network connectivity
that enable these objects to collect and exchange data.
Typically, IoT is expected to offer advanced connectivity of devices, systems,
and services that goes beyond machine-to-machine communications and covers a
variety of protocols, domains, and applications. The interconnection of these
embedded devices (including smart objects), is expected to usher in automation in
nearly all fields, while also enabling advanced applications like a smart grid, and
expanding to areas such as smart cities.
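A minimal sketch of an IoT "smart object" that collects a sensor reading and packages it for exchange over a network; the class name, device identifier and value range are all hypothetical, and a real device would read actual hardware rather than simulate a value.

```python
import json
import random

class SmartSensor:
    """An illustrative embedded device: it senses a value and exposes it
    as a JSON message ready to be exchanged over a network."""
    def __init__(self, device_id, kind):
        self.device_id = device_id
        self.kind = kind

    def read(self):
        # Stand-in for real hardware: a simulated temperature measurement.
        return round(random.uniform(18.0, 26.0), 2)

    def to_message(self):
        return json.dumps({"device": self.device_id,
                           "kind": self.kind,
                           "value": self.read()})

msg = SmartSensor("room-42", "temperature").to_message()
```

Many such messages, aggregated from devices across a building or a city, are what feed the smart-grid and smart-city applications mentioned above.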
Big data is a term for data sets that are so large or complex that traditional
data processing applications are inadequate to deal with them. Challenges include
analysis, capture, data curation, search, sharing, storage, transfer, visualization,
querying, updating and information privacy. The term "big data" often refers
simply to the use of predictive analytics, user behavior analytics, or certain other
advanced data analytics methods that extract value from data, and seldom to a
particular size of data set. "There is little doubt that the quantities of data now
available are indeed large, but that’s not the most relevant characteristic of this
new data ecosystem."
Analysis of data sets can find new correlations to "spot business trends,
prevent diseases, combat crime and so on". Scientists, business executives,
practitioners of medicine, advertising and governments alike regularly meet
difficulties with large data-sets in areas including Internet search, finance, urban
informatics, and business informatics. Scientists encounter limitations in e-Science
work, including meteorology, genomics, connectomics, complex physics
simulations, biology and environmental research.
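One standard response to data sets too large for memory is streaming, or chunked, processing. The sketch below (function and variable names are illustrative) computes an average while holding only one chunk and two running totals at a time, rather than loading the whole data set.

```python
def chunked_average(stream, chunk_size=1000):
    """Compute the mean of an arbitrarily large stream of numbers without
    holding it in memory: keep only the current chunk, a running total,
    and a count."""
    total, count = 0.0, 0
    chunk = []
    for value in stream:
        chunk.append(value)
        if len(chunk) == chunk_size:
            total += sum(chunk)
            count += len(chunk)
            chunk = []
    total += sum(chunk)     # fold in the final partial chunk
    count += len(chunk)
    return total / count if count else 0.0

# A generator stands in for a data set too large to load at once.
big_stream = (i % 10 for i in range(1_000_000))
avg = chunked_average(big_stream)
```

The same pattern (map a chunk, fold it into a running aggregate, discard it) underlies many of the distributed analytics frameworks used for big data.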
2. Artificial intelligence.
Artificial intelligence (AI) is intelligence exhibited by machines. In
computer science, an ideal "intelligent" machine is a flexible rational agent that
perceives its environment and takes actions that maximize its chance of success at
some goal. Colloquially, the term "artificial intelligence" is applied when a
machine mimics "cognitive" functions that humans associate with other human
minds, such as "learning" and "problem solving". As machines become
increasingly capable, facilities once thought to require intelligence are removed
from the definition. For example, optical character recognition is no longer
perceived as an exemplar of "artificial intelligence" having become a routine
technology. Capabilities currently classified as AI include successfully
understanding human speech, competing at a high level in strategic game systems
(such as Chess and Go), self-driving cars, and interpreting complex data. AI is also
considered a danger to humanity if it progresses unabatedly. AI research is divided
into subfields that focus on specific problems or on specific approaches or on the
use of a particular tool or towards satisfying particular applications.
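The "rational agent" definition above can be illustrated with a toy example; the actions and payoff estimates are invented, and a real agent would learn or compute such values from its percepts rather than receive them directly.

```python
def greedy_agent(percept_actions):
    """A minimal rational agent: given the perceived actions and their
    estimated chances of success, choose the action that maximizes the
    estimate."""
    best_action, best_payoff = None, float("-inf")
    for action, payoff in percept_actions.items():
        if payoff > best_payoff:
            best_action, best_payoff = action, payoff
    return best_action

# The environment offers three moves with hypothetical success estimates.
choice = greedy_agent({"advance": 0.7, "wait": 0.2, "retreat": 0.1})
```

Everything that makes AI hard lives in what this sketch leaves out: perceiving the environment and estimating those payoffs in the first place.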
The central problems (or goals) of AI research include reasoning, knowledge,
planning, learning, natural language processing (communication), perception and
the ability to move and manipulate objects. General intelligence is among the
field's long-term goals. Approaches include statistical methods, computational
intelligence, soft computing (e.g. machine learning), and traditional symbolic AI.
Many tools are used in AI, including versions of search and mathematical
optimization, logic, methods based on probability and economics. The AI field
draws upon computer science, mathematics, psychology, linguistics, philosophy,
neuroscience and artificial psychology.
3. Use of Smart-services
Smart systems incorporate functions of sensing, actuation, and control in
order to describe and analyze a situation, and make decisions based on the
available data in a predictive or adaptive manner, thereby performing smart
actions. In most cases the “smartness” of the system can be attributed to
autonomous operation based on closed loop control, energy efficiency, and
networking capabilities.
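The sense-decide-actuate cycle of closed-loop control can be sketched with a toy thermostat; the function names and the deadband value are assumptions for illustration only.

```python
def thermostat_step(current_temp, target_temp, deadband=0.5):
    """One iteration of a closed control loop: compare the sensed
    temperature with the target and return an actuation command."""
    if current_temp < target_temp - deadband:
        return "heat_on"
    if current_temp > target_temp + deadband:
        return "heat_off"
    return "hold"

def run_loop(readings, target_temp):
    """Apply the control decision to each sensor reading in turn."""
    return [thermostat_step(t, target_temp) for t in readings]

commands = run_loop([18.0, 19.8, 21.2, 20.1], target_temp=20.0)
```

The deadband prevents the actuator from switching on and off constantly near the target, one simple way a smart system achieves the energy efficiency mentioned above.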
A major challenge in smart systems technology is the integration of a
multitude of diverse components, developed and produced in very different
technologies and materials. Focus is on the design and manufacturing of
completely new marketable products and services for specialized applications (e.g.,
in medical technologies), and for mass market applications (e.g., in the automotive
industries).
Smart systems also considerably contribute to the development of the future
Internet of Things, in that they provide smart functionality to everyday objects,
e.g., to industrial goods in the supply chain, or to food products in the food supply
chain. With the help of active RFID technology, wireless sensors, real-time sense
and response capability, energy efficiency, as well as networking functionality,
objects will become smart objects. These smart objects could support the elderly
and the disabled. The close tracking and monitoring of food products could
improve food supply and quality. Smart industrial goods could store information
about their origin, destination, components, and use. And waste disposal could
become a truly efficient individual recycling process.
4. Green technologies in ICT
Green computing, Green ICT as per IFG International Federation of Green
ICT and IFG Standard, green IT, or ICT sustainability, is the study and practice
of environmentally sustainable computing or IT. San Murugesan notes that this can
include "designing, manufacturing, using, and disposing of computers, servers, and
associated subsystems—such as monitors, printers, storage devices, and
networking and communications systems — efficiently and effectively with
minimal or no effect on the environment."
The goals of green computing are similar to green chemistry: reduce the use
of hazardous materials, maximize energy efficiency during the product's lifetime,
and promote the recyclability or biodegradability of defunct products and factory
waste. Green computing is important for all classes of systems, ranging from
handheld systems to large-scale data centers.
Many corporate IT departments have green computing initiatives to reduce
the environmental effect of their IT operations.
5. Teleconferences. Telemedicine.
A teleconference is a meeting or conference held via a telephone or network
connection between participants in remote cities or work sites. Many types of
teleconferences exist, with the simplest form involving the use of a speaker phone
at each location to conduct an audio conference. More sophisticated teleconference
meetings involve the exchange of audio, video, and data. The term teleconference
can also refer to a live event that is transmitted via satellite to various locations
simultaneously.
Although teleconference is a broad term that includes a variety of options, the
basics for all teleconferences involve the use of telecommunication equipment,
users at multiple locations, and collaborative communication between the
participants. The basic audio conference is in essence a conference call. Audio
graphics, another form of teleconference, allows participants to share graphics,
documents, and video in addition to audio; and involves additional equipment such
as electronic tablets, scanners, and voice data terminals.
Email is a common example of a computer teleconference. Through the use of
additional equipment, primarily a TV camera, a video teleconference most
resembles a face-to-face meeting, incorporating the ability to view the participants
at all locations. Many businesses have a videoconference room permanently
outfitted with the necessary equipment.
Telemedicine is the use of telecommunication and information technology to
provide clinical health care from a distance. It helps eliminate distance barriers and
can improve access to medical services that would often not be consistently
available in distant rural communities. It is also used to save lives in critical care
and emergency situations.
Although there were distant precursors to telemedicine, it is essentially a
product of 20th century telecommunication and information technologies. These
technologies permit communications between patient and medical staff with both
convenience and fidelity, as well as the transmission of medical, imaging and
health informatics data from one site to another.
Early forms of telemedicine achieved with telephone and radio have been
supplemented with videotelephony, advanced diagnostic methods supported by
distributed client/server applications, and additionally with telemedical devices to
support in-home care.
Control questions:
Lecture 13. E-technology. E-business. E-Learning. Social networks. E-government.
1. Electronic business: Main models of electronic business. Information
infrastructure of electronic business. Legal regulation in electronic
business.
2. Electronic training: architecture, structure and platforms. Electronic
textbooks and the intellectual training systems.
3. Electronic government: concept, architecture, services.
4. Electronic and digital signature.
The objectives of the lecture.
Give the following concepts:
1.
Electronic business: Main models of electronic business. Information
infrastructure of electronic business. Legal regulation in electronic business.
E-business is the conduct of business processes on the Internet. These
electronic business processes include buying and selling products, supplies and
services; servicing customers; processing payments; managing production control;
collaborating with business partners; sharing information; running automated
employee services; recruiting; and more.
E-business can comprise a range of functions and services, ranging from the
development of intranets and extranets to e-service, the provision of services and
tasks over the Internet by application service providers. Today, as major
corporations continuously rethink their businesses in terms of the Internet,
specifically its availability, wide reach and ever-changing capabilities, they are
conducting e-business to buy parts and supplies from other companies, collaborate
on sales promotions, and conduct joint research. With the security built into today's
browsers, and with digital certificates now available for individuals and companies
from Verisign, a certificate issuer, much of the early concern about the security of
business transactions on the Web has abated, and e-business by whatever name is
accelerating.
When organizations go online, they have to decide which e-business models
best suit their goals. A business model is defined as the organization of product,
service and information flows, and the source of revenues and benefits for
suppliers and customers. The concept of e-business model is the same but used in
the online presence.
2.
Electronic training: architecture, structure and platforms. Electronic
textbooks and the intellectual training systems.
Control questions: