Hello Friends!
Let's discuss questions on DBMS from previous year papers. To begin with, some of the multiple choice type questions are listed below:
Q. Location transparency allows:
I. Users to treat the data as if it is at one location.
II. Programmers to treat the data as if it is at one location.
III. Managers to treat the data as if it is at one location.
Which one of the following is correct?
(A) I, II and III (B) I and II only (C) II and III only (D) II only
The right answer is (A).
Reason: Location transparency means the physical location of data must not matter to whoever accesses or manipulates it. It is a feature of distributed databases that applies to every kind of database user. According to a definition on Wikipedia, "The location of a resource doesn't matter to either the software developers or the end-users. This creates the illusion that the entire system is located in a single computer, which greatly simplifies software development."
Users (I) and programmers (II) are plainly database users, and managers (III) access the data the same way; the database manager components are responsible for providing seamless data access to all of them, regardless of location. Hence all three statements hold.
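To make this concrete, here is a toy Python sketch (all names are invented for illustration; this is not any real distributed DBMS API). A catalog maps each logical table name to the node that stores it, so no caller ever mentions a location:

# Toy illustration of location transparency: clients ask for a table by its
# logical name only; a (hypothetical) catalog resolves which node stores it.

CATALOG = {
    "customers": "node-eu-1",   # the data could live anywhere...
    "orders": "node-us-2",      # ...the caller never needs to know
}

FAKE_STORAGE = {
    ("node-eu-1", "customers"): [{"id": 1, "name": "Asha"}],
    ("node-us-2", "orders"): [{"id": 10, "customer_id": 1}],
}

def query(table_name):
    """Fetch all rows of a table by logical name; location is resolved internally."""
    node = CATALOG[table_name]               # location lookup, hidden from the user
    return FAKE_STORAGE[(node, table_name)]  # stand-in for a remote call

print(query("customers"))  # -> [{'id': 1, 'name': 'Asha'}]

Moving "customers" to another node would only require updating the catalog entry; every user's, programmer's, and manager's code is unaffected, which is exactly the illusion the definition describes.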
Q. Which of the following is correct?
I. Two-phase locking is an optimistic protocol.
II. Two-phase locking is a pessimistic protocol.
III. Time stamping is an optimistic protocol.
IV. Time stamping is a pessimistic protocol.
(A) I and III (B) II and IV (C) I and IV (D) II and III
The right answer is (D).
Reason: Optimistic vs. pessimistic approach: The optimistic concurrency control approach doesn't actually lock anything. It is based on the assumption that conflicts between database operations are rare, i.e., while one transaction is executing, other transactions will usually not access the same data items. It lets transactions run to completion and checks for conflicts only when they are about to commit; until then, a transaction executes without any restrictions. (A minimal sketch of this idea follows this answer.)
The pessimistic approach assumes that some other transaction may well try to access the same piece of data. So, to prevent any conflict, a transaction first acquires all the required locks and only then performs its operations. It has two phases:
1. Growing phase, where the transaction must first acquire all its locks.
2. Shrinking phase, where the transaction releases its locks one by one. (It cannot issue lock requests here.)
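To make the contrast concrete, here is a minimal Python sketch of the optimistic idea (names invented; not a real DBMS API): the transaction does all its work without taking locks and validates a version number only at commit time. In a real multi-threaded engine the validate-and-write step would itself be made atomic.

# Minimal sketch of optimistic concurrency control: no locks are taken;
# a version check at commit time detects conflicting writers.

class VersionedItem:
    def __init__(self, value):
        self.value = value
        self.version = 0

def optimistic_update(item, transform):
    """Read, compute without locks, then validate-and-write; retry on conflict."""
    while True:
        read_version = item.version          # remember the version we read
        new_value = transform(item.value)    # do all the work, unrestricted
        if item.version == read_version:     # validation phase at "commit"
            item.value = new_value
            item.version += 1                # publish a new version
            return new_value
        # otherwise a conflict was detected: retry the whole transaction

balance = VersionedItem(100)
print(optimistic_update(balance, lambda v: v + 50))  # -> 150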
Q. Data warehousing refers to
(A) storing data offline at a separate site (B) backing up data regularly
(C) is related to data mining (D) uses tape as opposed to disk
The right answer is (C).
Reason:
NOT (A), because not all data of a data warehouse is stored offline; it depends on the nature and usage of the data.
NOT (B), because a data warehouse typically stores a lot of historical data that is often not subject to change, and data that does not change only needs to be backed up once.
NOT (D), because the data may be stored using a proper mix of disks, tapes, or near-line storage.
A data warehouse is commonly characterised as subject-oriented, time-variant, non-volatile, and integrated.
Q. The 'PROJECT' operator of a relational algebra creates a new table that has always
(A) More columns than columns in original table (B) More rows than original table
(C) Same number of rows as the original table (D) Same number of columns as the original table
The right answer is (C).
Reason: PROJECT keeps a subset of the columns, so the result can never have more columns than the original (ruling out A) and can never gain rows (ruling out B). The number of tuples in the result of PROJECT<list>(R) is always less than or equal to the number of tuples in R; when duplicates are retained (as SQL does by default) or the projection list contains a key, the row count is unchanged, which is why (C) is the accepted answer.
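A tiny Python model of PROJECT (a toy, not a real query engine) makes the row-count behaviour visible: pure relational algebra treats relations as sets and may merge duplicate tuples, while the SQL-style variant keeps the row count unchanged.

# Toy PROJECT operator. Pure relational algebra treats a relation as a set,
# so projecting can merge duplicates; SQL keeps duplicates by default.

rows = [
    {"name": "Asha", "dept": "CS", "city": "Pune"},
    {"name": "Ravi", "dept": "CS", "city": "Delhi"},
]

def project(relation, attrs, eliminate_duplicates=True):
    result = [tuple(row[a] for a in attrs) for row in relation]
    if eliminate_duplicates:
        result = list(dict.fromkeys(result))  # drop duplicate tuples, keep order
    return result

print(project(rows, ["dept"]))                              # [('CS',)] -> 1 row
print(project(rows, ["dept"], eliminate_duplicates=False))  # [('CS',), ('CS',)] -> 2 rows
# The result never has more columns (1 vs. 3) and never gains rows.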
Now, a few questions with descriptive answers:
Q. Show that two-phase locking ensures serializability.
A. In databases and transaction processing, two-phase locking (2PL) is a concurrency control method that guarantees serializability. A transaction is said to follow the two-phase locking protocol if all locking operations (read_lock, write_lock) precede the first unlock operation in the transaction. Such a transaction can be divided into two phases:
Phase 1: Growing Phase
i) transaction may obtain locks
ii) transaction may not release locks
Phase 2: Shrinking Phase
i) transaction may release locks
ii) transaction may not obtain locks
If lock conversion is allowed, then upgrading of locks (from read-locked to write-locked) must be done during the expanding phase, and downgrading of locks (from write-locked to read-locked) must be done in the shrinking phase. Hence, a read_lock(X) operation that downgrades an already held write lock on X can appear only in the shrinking phase.
The protocol assures serializability: the transactions can be serialized in the order of their lock points (i.e., the point where a transaction acquires its final lock). Intuitively, if two transactions conflict on an item X, one must release its lock on X before the other can acquire it; releases happen only after a transaction's lock point and acquisitions only before it, so every conflict points from the earlier lock point to the later one, and the precedence graph cannot contain a cycle. Note, however, that two-phase locking does not ensure freedom from deadlocks (a sketch of the protocol follows below).
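The growing/shrinking discipline is easy to see in code. Below is a minimal sketch using Python's threading locks (illustrative only; a real lock manager also supports shared locks, lock conversion, and deadlock handling). Acquiring the locks in a fixed global order, as done here, is the standard way to avoid the deadlocks that plain 2PL permits.

# Minimal sketch of the two-phase rule: acquire every lock before
# releasing any, then only release.

import threading

lock_A = threading.Lock()
lock_B = threading.Lock()

def transfer(accounts, amount):
    # Growing phase: obtain locks, release nothing.
    lock_A.acquire()
    lock_B.acquire()          # lock point: the final lock is now held
    try:
        accounts["A"] -= amount
        accounts["B"] += amount
    finally:
        # Shrinking phase: only releases from here on, no new acquires.
        lock_B.release()
        lock_A.release()

accounts = {"A": 100, "B": 0}
transfer(accounts, 40)
print(accounts)  # -> {'A': 60, 'B': 40}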
Q. How to determine candidate key(s) for a relation?
A. Finding candidate keys comes down to applying a simple algorithm.
Consider a relation with five attributes, W H O S E, and the functional dependencies:
WH -> S
HOS -> E
Steps:
1. Find the attributes that appear on neither the left nor the right side
> (none)
2. Find attributes that are only on the right side
> E
3. Find attributes that are only on the left side
> WHO
4. Combine the attributes from steps 1 and 3
> since step 1 has no attributes, it's just WHO
5. Test whether the closure of the attribute set from step 4 contains all the attributes
> in this case it does: from WHO, WH -> S gives us S, and then HOS -> E gives us E, so the closure of WHO is WHOSE.
So we have only one candidate key: WHO.
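The closure test in step 5 is easy to mechanize. Here is a short Python implementation of the standard attribute-closure algorithm, using the attributes and FDs of the example above:

# Compute the closure of an attribute set under a set of functional
# dependencies, then check whether a candidate determines all attributes.

def closure(attrs, fds):
    """attrs: a set of attributes; fds: a list of (lhs, rhs) pairs of sets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs      # the FD fires: add its right-hand side
                changed = True
    return result

fds = [({"W", "H"}, {"S"}), ({"H", "O", "S"}, {"E"})]
all_attrs = {"W", "H", "O", "S", "E"}

print(closure({"W", "H", "O"}, fds) == all_attrs)  # True -> WHO is a candidate key
print(closure({"W", "H"}, fds) == all_attrs)       # False -> WH is not a key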
Q. What are the steps of database design?
A. The major steps in database design are:
- Requirements Analysis: Talk to the potential users! Understand what data is to be stored, and what operations and requirements are desired.
- Conceptual Database Design: Develop a high-level description of the data and constraints (we will use the ER data model).
- Logical Database Design: Convert the conceptual model to a schema in the chosen data model of the DBMS. For a relational database, this means converting the conceptual model to a relational schema (the logical schema); a toy sketch follows this list.
- Schema Refinement: Look for potential problems in the original choice of schema and try to redesign.
- Physical Database Design: Direct the DBMS's choice of underlying data layout (e.g., indexes and clustering) in hopes of optimizing performance.
- Applications and Security Design: Define how the underlying database will interact with the surrounding applications.
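As a toy illustration of the logical design step (the helper and the entity are invented for illustration; real design tools are far more elaborate), here is how a conceptual entity description might be mechanically converted into a relational CREATE TABLE statement:

# Toy conversion of a conceptual entity description into a relational
# schema, illustrating the conceptual-to-logical design step.

def entity_to_ddl(name, attributes, primary_key):
    cols = ",\n  ".join(f"{attr} {sql_type}" for attr, sql_type in attributes.items())
    return (f"CREATE TABLE {name} (\n  {cols},\n"
            f"  PRIMARY KEY ({', '.join(primary_key)})\n);")

student = {"roll_no": "INT", "name": "VARCHAR(50)", "dept": "VARCHAR(20)"}
print(entity_to_ddl("student", student, ["roll_no"]))
# -> CREATE TABLE student (roll_no INT, name VARCHAR(50), ... PRIMARY KEY (roll_no));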
Q. Explain specialisation and generalisation in the E-R model.
A. Specialisation:
A top-down design process: we designate subgroupings within an entity set that are distinct from other entities in the set.
These subgroupings become lower-level entity sets that have attributes or participate in relationships that do not apply to the higher-level entity set.
Depicted by a triangle component labeled ISA (e.g., customer "is a" person).
Attribute inheritance – a lower-level entity set inherits all the attributes and relationship participation of the higher-level entity set to which it is linked.
Generalisation:
A bottom-up design process – combine a number of entity sets that share the same features into a higher-level entity set.
Specialisation and generalisation are simple inversions of each other; they are represented in an E-R diagram in the same way.
The terms specialisation and generalisation are used interchangeably.
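Attribute inheritance along an ISA edge maps naturally onto class inheritance in a programming language. A small sketch (entity and attribute names invented for illustration):

# Attribute inheritance in an ISA hierarchy: the lower-level entity set
# (Customer) inherits every attribute of the higher-level one (Person).

class Person:                          # higher-level entity set
    def __init__(self, name, address):
        self.name = name
        self.address = address

class Customer(Person):                # Customer "is a" Person
    def __init__(self, name, address, credit_limit):
        super().__init__(name, address)
        self.credit_limit = credit_limit   # attribute specific to the subgroup

c = Customer("Asha", "Pune", 5000)
print(c.name, c.credit_limit)  # inherited attribute + specialised attribute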