Ans 1(a) refer pages 7-8 from the book
Ans 1(b)
Ans 3(a)
Rate Monotonic Algorithm (RMA)
We had already pointed out that RMA is an important event-driven scheduling algorithm.
This is a static priority algorithm and is extensively used in practical applications. RMA
assigns priorities to tasks based on their rates of occurrence. The lower the occurrence
rate of a task, the lower is the priority assigned to it. A task having the highest
occurrence rate (lowest period) is accorded the highest priority. RMA has been proved to
be the optimal static priority real-time task scheduling algorithm. The interested reader
may see [12] for a proof.
In RMA, the priority of a task is directly proportional to its rate (or, inversely proportional to its period). That is, the priority of any task Ti is computed as priority(Ti) = k / pi, where pi is the period of the task Ti and k is a constant. Using this simple expression, plots of priority values of tasks under RMA for tasks of different periods can be easily obtained.
These plots have been shown in Fig. 2.10(a) and Fig. 2.10(b), where you can observe that the priority of a task increases linearly with its arrival rate and decreases as its period grows.
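To make the rule concrete, here is a minimal Python sketch of RMA priority assignment; the task names, periods, and the constant k = 100 are illustrative assumptions, not values from the text.

# Minimal sketch of RMA priority assignment: priority is proportional
# to a task's rate, i.e. k / period. Task names and periods are illustrative.
K = 100  # arbitrary proportionality constant

tasks = {"T1": 20, "T2": 50, "T3": 100}  # task -> period (ms)

# Shorter period (higher rate) => numerically higher priority.
priorities = {name: K / period for name, period in tasks.items()}

for name, prio in sorted(priorities.items(), key=lambda x: -x[1]):
    print(f"{name}: period={tasks[name]} ms, priority={prio:.2f}")
# T1 (shortest period) gets the highest priority.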
Advantages and Disadvantages of RMA
In this section we first discuss the important advantages of RMA over EDF and then look
at some disadvantages of using RMA. As we had pointed out earlier, RMA is very
commonly used for scheduling real-time tasks in practical applications. Basic support is
available in almost all commercial real-time operating systems for developing
applications using RMA. RMA is simple and efficient and is also the optimal static priority
task scheduling algorithm. Unlike EDF, it requires very few special data structures. Most
commercial real-time operating systems support real-time (static) priority levels for
tasks. Tasks having real-time priority levels are arranged in multilevel feedback queues
(see Fig. 2.15). Among the tasks in a single level, these commercial real-time operating systems generally provide an option of either time-sliced round-robin scheduling or FIFO scheduling.
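The per-level queueing described above can be sketched in a few lines of Python; the priority levels, task names, and the FIFO-within-a-level choice are illustrative assumptions.

from collections import deque

# Sketch of static-priority ready queues: one FIFO queue per priority
# level; the dispatcher always serves the highest non-empty level.
ready = {1: deque(), 2: deque(), 3: deque()}  # 1 = highest priority

ready[2].append("logger")
ready[1].append("sensor_read")
ready[1].append("actuator_cmd")

def pick_next():
    for level in sorted(ready):            # highest priority first
        if ready[level]:
            return ready[level].popleft()  # FIFO within a level
    return None

print(pick_next())  # -> "sensor_read"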
EDF
In Earliest Deadline First (EDF) scheduling, at every scheduling point the ready task with the earliest deadline is taken up for scheduling. The basic principle of this algorithm is very
intuitive and simple to understand. The schedulability test for EDF is also simple. A task
set is schedulable under EDF if and only if the total processor utilization due to the task set is at most 1 (that is, the sum of ei/pi over all tasks does not exceed 1).
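As a rough illustration, the following Python sketch applies this utilization test and the EDF dispatch rule to a hypothetical task set; all task parameters and deadlines are made up for the example.

# Sketch of the EDF schedulability test and the EDF dispatch rule.
tasks = [
    {"name": "T1", "e": 2, "p": 10},
    {"name": "T2", "e": 3, "p": 15},
    {"name": "T3", "e": 5, "p": 30},
]

# For periodic tasks with deadlines equal to periods, the task set is
# schedulable under EDF iff total utilization sum(e_i / p_i) <= 1.
utilization = sum(t["e"] / t["p"] for t in tasks)
print(f"U = {utilization:.2f}, schedulable: {utilization <= 1}")

# At each scheduling point EDF runs the ready task whose absolute
# deadline is the earliest.
ready = [("T1", 12), ("T2", 9), ("T3", 25)]   # (task, absolute deadline)
next_task = min(ready, key=lambda t: t[1])
print("run:", next_task[0])                    # -> T2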
Although EDF is an optimal real-time task scheduling algorithm on a uniprocessor, it suffers from a
few shortcomings. It cannot guarantee that the critical tasks meet their respective
deadlines under transient overload. Besides, implementation of resource sharing among
real-time tasks is extremely difficult. Therefore, EDF-based algorithms are rarely used in
practice and RMA-based scheduling algorithms have become popular.
How a real-time database differs from a traditional database
There are three main counts on which these two types of databases differ. First, unlike
traditional databases, timing constraints are associated with the different operations
carried out on real-time databases. Second, real-time databases have to deal with
temporal data compared to static data as in the case of traditional databases. Third, the
performance metrics that are meaningful to the transactions of these two types of
databases are very different. We now elaborate on these three issues.
Temporal Data: Data whose validity is lost after the elapse of some prespecified time
interval are called temporal data or perishable data.
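A minimal sketch of how such perishable data might be represented: each item carries a recording time and a validity interval, after which it is considered stale. The class and field names (e.g., avi for the validity interval) are illustrative assumptions.

import time

# Sketch: a temporal data item records when it was captured and how long
# (avi, in seconds) it remains valid; after that it is perishable/stale.
class TemporalItem:
    def __init__(self, value, avi_seconds):
        self.value = value
        self.avi = avi_seconds
        self.recorded_at = time.time()

    def is_valid(self, now=None):
        now = time.time() if now is None else now
        return (now - self.recorded_at) <= self.avi

temperature = TemporalItem(value=72.5, avi_seconds=5.0)
print(temperature.is_valid())                   # True just after recording
print(temperature.is_valid(time.time() + 10))   # False once the interval elapses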
Ans 4 definition page 187
CONCURRENCY CONTROL IN REAL-TIME DATABASES
Each database transaction usually involves access to several data items, using which it
carries out the necessary processing. Each access to data items takes considerable time,
especially if disk accesses are involved. This contributes to making transactions of longer
duration than a typical task execution in a non-database application. For improved
throughput, it is a good idea to start the execution of a transaction as soon as the
transaction becomes ready (that is, concurrently along with other transactions already
under execution), rather than executing them one after the other. Concurrent
transactions at any time are those which are active (i.e., started but not yet complete).
The concurrent transactions can operate either in an interleaved or in a "truly
concurrent" manner—it does not really matter. What is important for a set of
transactions to be concurrent is that they are active at the same time. It is very unlikely
to find a commercial database that does not execute its transactions concurrently.
However, unless the concurrent transactions are properly controlled, they may produce
incorrect results by violating some ACID properties, e.g., the result recorded by one transaction is immediately overwritten by another (a lost update). ACID properties were discussed
in Section 7.2. The main idea behind concurrency control is to ensure non-interference
(isolation and atomicity) among different transactions.
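The lost-update anomaly alluded to above can be spelled out in a few lines; the account balance and the interleaving shown are purely illustrative.

# Sketch of the interference mentioned above (a lost update), with the
# interleaving written out explicitly. Values are illustrative.
balance = 100

# T1 and T2 both read the balance before either has written.
t1_read = balance            # T1: reads 100
t2_read = balance            # T2: reads 100

balance = t1_read - 30       # T1: writes 70
balance = t2_read - 20       # T2: writes 80, T1's update is lost

print(balance)  # 80, although any serial execution would give 50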
Concurrency control schemes normally ensure non-interference among transactions by
restricting concurrent transactions to be serializable. A concurrent execution of a set of
transactions is said to be serializable, if the database operations carried out by them is
equivalent to some serial execution of these transactions. In other words, concurrency
control protocols allow several transactions to access a database concurrently, but leave
the database consistent by enforcing serializability.
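As a sketch of how a concurrency control protocol can force a serializable outcome, the following Python fragment guards the data item with a lock held across the read-modify-write, in the spirit of two-phase locking; the balance example and all names are illustrative assumptions.

import threading

# Each data item is protected by a lock, and a transaction holds the lock
# across its read-modify-write, so the two updates serialize.
balance = 100
balance_lock = threading.Lock()

def withdraw(amount):
    global balance
    with balance_lock:           # acquire before read, release after write
        current = balance
        balance = current - amount

threads = [threading.Thread(target=withdraw, args=(a,)) for a in (30, 20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # always 50: equivalent to some serial order of the two transactions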
Ans 5(a) page 361
Ans 5(b) Causes of failures – page 283 of the book
Fault types – page 285
FCZ and ECZ – pages 288-289