Oracle8i Administration and Management

Overview

If you rely on instant access to critical information on an Oracle database, then you must become familiar with every aspect of this system. From the initial installation and day-to-day maintenance to critical backup, recovery, and disaster recovery procedures, this completely updated book and CD-ROM provide you with the necessary tools to run a fast, efficient, and reliable Oracle database. The book is loaded with practical case examples and real-world scenarios. With its in-depth coverage of Oracle8i, you'll discover new object-oriented extensions that allow real-world modeling using the power of the relational engine. The book covers all new Oracle8i commands, packages, and database features, including the management of resource plans, temporary tables, locally managed tablespaces, and row-level security, along with from-the-trenches coverage of earlier versions of Oracle.

Packed with tips and techniques, this book will help you:
* Administer the Oracle database using system features and third-party tools
* Administer database objects
* Set up and administer users
* Use Oracle-provided packages, procedures, and utilities
* Monitor the database and fix any problems
* Tune the database for optimum efficiency
* Manage distributed databases
* Use the latest Oracle8i object extensions

The CD-ROM includes:
* All of the scripts from the book
* Lite and demo versions of administration software
* Useful management aids

Michael Ault (Alpharetta, GA) is a senior consultant for DMR Consulting Group, where he is the Associate Director for the Oracle Practice. He is also the systems operator for RevealNet's DBAPipeline forum. His articles appear regularly in Oracle Internals and Select.

"...explains the best methods for optimizing Oracle 7 & 8 performance...tips & techniques for installation, setting up users, monitoring & tuning the database, back-up & recovery, using Oracle object extensions, and more."


Editorial Reviews

From Barnes & Noble
The Barnes & Noble Review
Running Oracle 8i is one thing. Running it well is another. Oracle 8i Administration And Management shows you how to run it exceptionally well, giving you all the real-world examples and scenarios—plus more than 200 proven scripts for UNIX, NT and VMS environments.

You'll start with practical techniques for planning your installation or migration on UNIX, NT, and VMS; then walk through the fundamentals of database administration. There are detailed chapters on tablespace administration (including a complete script for documenting and rebuilding tablespaces); working with Oracle 8i's many types of tables (both relational and object); and creating indexes that deliver maximum speed.

You'll find a detailed chapter on managing users and security, including roles, auditing, and a complete script for controlling password complexity and verification. There's also complete coverage of monitoring: tables, indexes, clusters, snapshots, objects, users, libraries, control files, and more.

The entire book benefits from Michael Ault's world-class Oracle experience, but nowhere more than his guidance on tuning Oracle applications and the database itself—and managing Oracle in a distributed environment. If you're an Oracle DBA, you'll join us in praising this book to the hilt.

Booknews
Written by a private consultant, this guide for administrators responsible for maintaining and administering Oracle databases provides instructions from the initial installation through administration, management, and backup and recovery procedures. Included are examples of reports and the SQL, SQL*Plus, and PL/SQL code used to generate them. Also discussed are the program's new object-oriented extensions and other new features which allow for the management of resource plans, temporary tables, locally managed tablespaces, and row-level security. The included CD-ROM contains 200-plus software scripts for Oracle administration and lite and demo versions of administration software. Annotation c. Book News, Inc., Portland, OR (booknews.com)

Product Details

  • ISBN-13: 9780471354536
  • Publisher: Wiley
  • Publication date: 10/20/1999
  • Edition description: BK&CD-ROM
  • Edition number: 1
  • Pages: 1024
  • Product dimensions: 7.46 (w) x 9.20 (h) x 2.28 (d)

Meet the Author

MICHAEL AULT is the Corporate Technical Officer of DBMentors International, which he founded after 4 years as the Associate Director for the Oracle Practice at DMR Consulting Group. He is also the systems operator for RevealNet's DBAPipeline forum. His articles appear regularly in Oracle Internals and Select.


Table of Contents

Installation of Oracle.

Administration of ORACLE (After the Bloom Is Off the Rose...).

Tablespace Administration.

Administration of Relational Database Tables.

Administration of Oracle8i Object Tables.

Administration of Indexes.

Administration of Other Database Objects.

Administration of Table Clusters.

Monitoring Database Tables, Clusters, Snapshots, Types, and Indexes.

Monitoring Users and Other Database Objects.

Tuning Oracle Applications.

Database Internals Tuning.

Managing in a Distributed Environment.

Backup and Recovery Procedures for Oracle.

User and Security Administration.

Appendix.


First Chapter

Chapter 14 Managing in a Distributed Environment

Summary:

The Oracle database in either Oracle7 or Oracle8 can be operated in one of several types of object-relational environments. These types are:

  • Exclusive, nonshared database
  • Parallel server database
  • Distributed database
  • Client/server database
  • Multithreaded server (MTS) database
  • Combinations of these five

For the most part, we have been discussing the database as if it were an exclusive-type database. The characteristics of this type of database are as follows:

  • Standalone operation; the database doesn't require other databases to be available in order to function.
  • Single location with either single or multiple CPUs; the database resides in one physical location and uses one or more CPUs.
  • Database administration is independent of other sites, and little, if any, intersite cooperation is required.

If you feel your applications will never operate outside of these parameters, then you can skip this chapter. If your applications will require a more complex environment involving multiple CPUs, sites, and databases, then read on.

14.1 Management in a "Shared" Environment

In a shared database, the CPUs involved are loosely coupled, such as in a Sun or VAX cluster, and they share disk resources. In UNIX you will usually have to use RAW disk resources, since most UNIX platforms at the time this book was written couldn't share anything but RAW disks in a manner Oracle can use. The database is called shared because the database files themselves are shared between several instance process sets on several CPUs or nodes.

The benefits of this type of installation are severalfold:

  • The data is maintained on one disk farm, allowing ease of backup and management.
  • The user processing load is spread across several CPUs, thus allowing more users and faster access to data.
  • Different types of users can be placed on different machines to allow distribution of types of processing. For example, users who require large sorts and intense CPU activity can be placed on a cluster node with faster or multiple CPUs, and larger internal RAM (for a larger sort area), while users who only query limited sets of data can be placed on a smaller CPU.
  • This type of configuration is good for databases without a lot of update activity or where the types of update activity can be distributed between the nodes in the cluster. For example, the group that uses node A only deals with tables A, B, and C on drive A, while the group that uses node B only deals with tables D, E, and F on drive B.

Figure 14.1 shows an example of a shared or parallel system. The major disadvantage to this configuration is that a single point of failure can result in the loss of all database file access. For example, if the disk farm is hung off of a single disk controller on one VAX in the cluster and either the disk controller or VAX itself fails, access to the database is lost. The possibility of this happening can be reduced by using mirrored disk configurations.

The disadvantages to this type of configuration are as follows:

  • Each instance must have its own SGA, which will have to be maintained in parallel with the main SGA.
  • Additional overhead due to the parallel lock manager.

Figure 14.1 Example of a shared database configuration.
Initialization Parameters for a Shared Database

Most of the INIT.ORA parameters that control a parallel instance are prefixed with "GC_". These parameters are listed in the following sections. These parameters are subject to change, and this list is only a general guideline; you should use the most current version of the Oracle7 Parallel Server Administrator's Guide or Oracle8 or Oracle8i Parallel Server Guide for your database for the most current list. Another good reference will be the readme files that come with the most current release on CD or tape.

Initialization Parameters That Must Match in All Shared Instances

Note: The Parallel Server manual from Oracle states that CPU_COUNT and ROLLBACK_SEGMENTS must be set the same for all instances in a parallel server environment. This is not correct. The following are the initialization parameters that have to match in all shared instances:

DB_DOMAIN. DB_DOMAIN specifies the extension components of a global database name, which consist of valid identifiers, separated by periods. Specifying DB_DOMAIN as a unique string for all databases in a domain is recommended. For example, this parameter allows one department to create a database without worrying that it might have the same name as a database created by another department. If one sales department's DB_DOMAIN = "GERMANY.HMR.COM", then their "DRUG" database (DRUG.GERMANY.HMR.COM) is uniquely distinguished from another database with DB_NAME = "DRUG" but with DB_DOMAIN = "US.HMR.COM".

SERVICE_NAMES. SERVICE_NAMES is one or more strings that represent the names of the database on the network. Net8 8.1 wants a service name, rather than an SID, to identify a database. It is possible to provide multiple services names so that different usages of a single database can be identified separately. Service names can also be used to identify a single service that is available from two different databases through the use of replication.
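For illustration only, reusing the hypothetical DRUG database and GERMANY.HMR.COM domain from the DB_DOMAIN example, a database that serves both transaction and reporting users might register two service names:

SERVICE_NAMES = drug.germany.hmr.com, drug_reports.germany.hmr.com

Net8 connect descriptors can then reference either service name, even though both resolve to the same database.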

CACHE_SIZE_THRESHOLD (New in 7.1; obsolete in 8i). This parameter defaults to (0.1*db_block_buffers). The parameter specifies the maximum size of a cached table partition that is spread among the caches of a parallel server instance set. If the partition is larger than this, it isn't cached.

CONTROL_FILES. This parameter gives the location for all control files for the database.

DB_FILES. This specifies the soft limit for the number of database files; the hard limit is set by the CREATE DATABASE - MAX_DATAFILES parameter.

DB_NAME. If set, it must be the same for all instances of a shared database. If the value is not explicitly set in the initialization file, it must be explicitly specified in all database-related commands issued from the instance.

DB_BLOCK_SIZE. This is set at database creation, and the database must be rebuilt to change it. Since all instances share the same database, it must be the same for all instances.

DML_LOCKS. If nonzero, individual instances can have different values. If set to zero on one instance, all instances must be set to zero.

GC_DB_LOCKS (Obsolete in Oracle8). Specifies the total number of PCM locks covering data blocks cached in the multiple SGAs of a parallel server. Must be set to at least one plus the sum of the locks specified in the parameter GC_FILES_TO_LOCKS. If this is zero, fine-grain locking is enabled. The value is always rounded up to the next prime number. If this value is specified in a nonshared database, it will consume SGA resources and produce no return benefit.

GC_FILES_TO_LOCKS (Changed in Oracle8). This parameter controls the mapping of PCM locks to datafiles. The format for this parameter is:

GC_FILES_TO_LOCKS = "{file_list=lock_count[!blocks][EACH]} [:] . . ."

File_list is a list of one or more datafiles listed by their file numbers or ranges of file numbers. The lock_count is the number of locks to assign to the files listed; a zero enables fine-grain locking. The "!blocks" tells how many blocks to assign to each lock if it covers multiple blocks; this defaults to one if not specified. The EACH keyword specifies that each file in the file list receives lock_count number of locks. So, if you wanted to give files one to three 1,000 locks, each covering 100 blocks per lock, the command would be

GC_FILES_TO_LOCKS="1-3=1000!100EACH"

Note that there are no spaces allowed inside the quotation marks. If the ratio of file_blocks/locks exceeds the !value specified, then each lock will cover the ratio file_blocks/locks number of blocks (rounded up).

GC_FREELIST_GROUPS (New in 7.3; obsolete in Oracle8). This parameter defaults to five times the value of the GC_SEGMENTS parameter. The parameter determines the number of locks to specify for free list group blocks for an instance.

GC_LCK_PROCS. This parameter sets the number of background lock processes (LCK0 through LCK9) for an instance in the parallel server. This defaults to one, which is usually sufficient. If the instance lock request rate saturates the lock process, increase this parameter.

GC_ROLLBACK_LOCKS (Changed in Oracle8). For each rollback segment, this sets the number of instance locks available for simultaneously modified rollback segment blocks. This defaults to twenty, which is usually sufficient. Don't specify this parameter in a nonshared instance or it will allocate SGA resources with no benefit.

GC_ROLLBACK_SEGMENTS (Obsolete for Oracle8). This sets the maximum number of rollback segments including the system rollback segment. This parameter is for all instances in the shared environment that share the same database.

GC_SAVE_ROLLBACK_LOCKS (Obsolete for Oracle8). This parameter reserves instance locks for deferred rollback segments. The deferred rollback segments contain rollback entries for tablespaces that were taken off-line. Set this to 10 per each instance in the shared environment that shares the same database.

GC_SEGMENTS (Obsolete for Oracle8). This parameter governs how many simultaneous segment extensions can occur over the entire set of shared instances. If you have sized properly, the default is fine; if you get frequent extensions it may have to be increased. Do not specify this parameter in exclusive mode, or SGA resources will be allocated with no benefits returned.

GC_TABLESPACES (Obsolete for Oracle8). This is the maximum number of tablespaces that can be brought from off-line to on-line or on-line to off-line concurrently. Do not specify this parameter in exclusive mode, or SGA resources will be allocated with no benefits returned.

LM_LOCKS (Identical values recommended). LM_LOCKS is a Parallel Server parameter. This parameter specifies the number of locks that will be configured for the lock manager. The number of locks can be represented by the following equation, where R is the number of resources, N is the total number of nodes, and L is the total number of locks:

L = R + (R*(N - 1))/N

Note that lock configurations are per lock manager instance. Thus, the value of LM_LOCKS must be the same for all lock manager instances. Also note that, in the worst case, up to 2 * GC_RELEASABLE_LOCKS could be required with DBA locking.
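To make the formula concrete, here is a worked example with values assumed purely for illustration: 12,000 resources (R) shared across a four-node parallel server (N = 4):

L = 12000 + (12000 * (4 - 1)) / 4
  = 12000 + 9000
  = 21000 locks

Each lock manager instance would then be configured with LM_LOCKS = 21000.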

LM_PROCS (Identical values recommended). LM_PROCS is a Parallel Server parameter. The value of this parameter represents the value of the PROCESSES parameter plus the maximum number of instances. Note that the processes' configurations are per lock manager instance. Thus, the value for LM_PROCS must be the same for all lock manager instances.

LM_RESS (Identical values recommended). LM_RESS is a Parallel Server parameter. This parameter controls the number of resources that can be locked by each lock manager instance. It is recommended that each instance be assigned the same parameter value.

The value specified for LM_RESS should be less than 2 * DML_LOCKS plus an overhead of about 20 locks. However, with DBA locking, up to 2 * GC_RELEASABLE_LOCKS resources are needed to cover all the BL locks, at least in the worst case.

LM_RESS covers the number of lock resources allocated for DML locks, DDL (data dictionary) locks, data dictionary and library cache locks, plus the file and log management locks.

LOG_FILES. This must be set to a minimum of MAXLOGFILES*MAXLOGMEMBERS. If it defaults to its maximum of 255, don't fool with it.

MAX_COMMIT_PROPAGATION_DELAY. This parameter controls how quickly the SCN (System Change Number) is refreshed by LGWR. Units are in hundredths of a second. The default of 90,000 (15 minutes) allows the existing high-performance mechanism to remain in place. Only change it to a lower value if the environment has numerous changes to the same data from multiple instances simultaneously. A setting of zero causes the SCN to be refreshed immediately after a commit.

PARALLEL_DEFAULT_MAX_INSTANCES (New in 7.1). This parameter is set to the maximum number of instances that can simultaneously cache a table's values for shared operations. Specifies the default number of instances to split a table across for parallel query processing. The value of this parameter is used if the INSTANCES DEFAULT is specified in the PARALLEL clause of a table's definition.

PARALLEL_DEFAULT_MAX_SCANS (New in 7.1). The maximum number of query servers that can be used by default for a query. This parameter is only used if there is no numeric value specified in a PARALLEL hint or in the PARALLEL clause of the table's definition. This limits the number of query servers that are selected by default when the value of the PARALLEL_DEFAULT_SCANSIZE parameter is used by a query coordinator.

REMOTE_LOGIN_PASSWORDFILE (New in 7.1). Specifies whether Oracle checks for a password file and how many databases can use the password file. Setting the parameter to NONE signifies that Oracle should ignore any password file (and therefore privileged users must be authenticated by the operating system). Setting the parameter to EXCLUSIVE signifies that the password file can only be used by one database and the password file can contain names other than SYS and INTERNAL. Setting the parameter to SHARED allows more than one database to use a password file; however, the only users recognized by the password file are SYS and INTERNAL.

ROW_LOCKING. Specifies if row locking should be used. It is set to ALWAYS by default and should not be changed.

SERIALIZABLE. If TRUE, then queries acquire table-level read locks, preventing any update of objects read until the transaction containing the query is committed. This mode of operation provides repeatable reads and ensures that two queries for the same data within the same transaction see the same values.

Setting SERIALIZABLE to TRUE provides ANSI degree three consistency at a considerable cost in concurrency.

SINGLE_PROCESS. Must be false for all instances.

Initialization Parameters That Must Be Different

The following initialization parameters must be different for each instance that participates in a parallel server environment.

THREAD. This parameter sets the redo log thread that will be used by the instance.

ROLLBACK_SEGMENTS. This parameter sets the rollback segments that will be used by the instance.

INSTANCE_NUMBER. This sets the instance-identifying number.

Initialization Parameters That Can Be the Same or Different

The following parameters can match or can differ for all instances in a parallel server environment.

CHECKPOINT_PROCESS. If set to TRUE, sets up a checkpoint process for the instance to perform checkpoints independent of the LGWR process.

CPU_COUNT. Normally won't be explicitly set; this is set by Oracle to the number of CPUs on the system.

INIT_SQL_FILES (Obsolete in 7.3). This becomes an undocumented parameter for releases after 7.3.

LOG_ARCHIVE_DEST. This parameter specifies the destination for archive logging. It cannot be a RAW device.

LOG_ARCHIVE_FORMAT. Normally, this should be the same for all instances in a shared set of instances, but it doesn't have to be. This parameter specifies the format for the log files.

LOG_ARCHIVE_START. This parameter is used to start archive logging on an instance. In a shared environment, only one instance has to archive log to ensure the database is recoverable.

LOG_CHECKPOINT_INTERVAL. This specifies the interval in blocks between log checkpoints. Setting it to larger than the size of your redo logs allows checkpoints only when logs fill.

LOG_CHECKPOINT_TIMEOUT. This parameter specifies the time interval between log checkpoints. Setting this to zero forces checkpoints only when logs fill.

LOG_ENTRY_PREBUILD_THRESHOLD. This parameter tells the instance how big a redo entry must be before it is prebuilt.

PARALLEL_MIN_PERCENT (New in 7.3). This parameter specifies the minimum percentage of threads required for parallel query. This prevents a parallel query from being done sequentially if resources aren't available. If the parameter is set to zero then the query will use as many slaves as are available. If the percentage of parallel query slaves available is less than the specified value then the query will terminate with an error.

PARALLEL_MAX_SERVERS (New in 7.1). This parameter sets the maximum number of query servers that Oracle can start on demand.

PARALLEL_MIN_SERVERS (New in 7.1). This parameter sets the number of query servers that Oracle starts on startup and the number that the instance will always have as a minimum number running.

PROCESSES. This parameter sets the maximum number of simultaneous processes that Oracle will allow to connect to the instance.

RECOVERY_PARALLELISM (New in 7.1). This parameter sets the number of processes that will be used to recover an instance. A setting of zero will force serial recovery.

ROW_LOCKING. This parameter sets the value for row locking and should always be set to ALWAYS. A setting of DEFAULT is identical to ALWAYS. A setting of INTENT means row locks are used on a SELECT FOR UPDATE but at update time table locks are acquired.

SEQUENCE_CACHE_ENTRIES. This parameter sets the number of sequence caches that will be used if the CACHE value isn't specified in a CREATE SEQUENCE command.

TRANSACTIONS_PER_ROLLBACK_SEGMENT. This parameter determines how Oracle acquires public rollback segments. If the ratio transactions/transactions_per_rollback_segment exceeds the number of available on-line rollback segments, any off-line public rollback segments are brought on-line.
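As a sketch of the ratio test described above, with parameter values assumed for illustration:

TRANSACTIONS = 100
TRANSACTIONS_PER_ROLLBACK_SEGMENT = 10

Here 100/10 = 10, so ten rollback segments are called for; if fewer than ten public rollback segments are on-line, Oracle brings off-line public segments on-line until the ratio is satisfied.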

New Oracle8 Parameters

The following parameters are new for Oracle8:

FREEZE_DB_FOR_FAST_INSTANCE_RECOVERY. This parameter is a parallel server parameter. If an instance in a parallel server environment requires recovery and if this parameter is set to TRUE then the entire database is frozen until the instance completes recovery. This allows faster instance recovery, but the database is unavailable to other instances.

LM_LOCKS. This parameter is the number of lock manager locks. Where R is the number of resources, N is the total number of instances (nodes), and L is the total number of locks, the calculation is

L = R + ( R * ( N - 1 )) / N

LM_PROCS. This parameter sets the number of total processes; this should be set to the number of processes times the number of instances.

LM_RESS. This parameter controls the number of resources that can be locked by the lock manager. It should be set to much less than 2 * DML_LOCKS + 20. This is roughly equivalent to the sum of DML_LOCKS, DDL locks, cache locks, and file and log management locks.

INSTANCE_GROUPS. This parameter controls the groups that the instance belongs to. These values can be different for all instances. This parameter is not dynamic and is set only at startup.

PARALLEL_INSTANCE_GROUP. This specifies which group to use for parallel operations. This parameter can be altered using the ALTER SESSION or ALTER SYSTEM command.

TIP

Use a central initialization file for shared parameters and refer to it using the "IFILE=" parameter from the individual instances' initialization files.
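A minimal sketch of this layout, with file names, paths, and parameter values assumed for illustration: the shared file holds the must-match parameters, and each instance's own file supplies its unique values.

# initshared.ora -- parameters that must match in all instances
DB_NAME = drug
CONTROL_FILES = (/oracle/drug/control01.ctl, /oracle/drug/control02.ctl)
DB_BLOCK_SIZE = 8192
GC_FILES_TO_LOCKS = "1-3=1000!100EACH"

# initdrug1.ora -- instance 1 only
IFILE = /oracle/drug/initshared.ora
THREAD = 1
INSTANCE_NUMBER = 1
ROLLBACK_SEGMENTS = (rbs1_01, rbs1_02)

A change to a shared parameter is then made in one place and picked up by every instance at its next startup.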

14.2 Use of SQLDBA or SVRMGR

There are two types of instance in a parallel environment: the DEFAULT instance and the CURRENT instance. The DEFAULT instance is the instance resident on the node where the SQLDBA or SVRMGR program is invoked. The CURRENT instance is the instance to which you have attached by using the CONNECT command with a connect string. The CURRENT and DEFAULT instances are the same if you don't specify a connect string in the CONNECT command. You can change the instance by using the SET INSTANCE command.

The following commands apply under SQLDBA or SVRMGR when you are connected to an instance:

ARCHIVE LOG-Current instance only.
CONNECT-Uses default instance if no connect string is specified.
CONNECT INTERNAL-Always applies to the current instance.
HOST-Applies to the node you are on regardless of the setting of the instance.
MONITOR-Applies to the current instance.
RECOVER-Applies to the database, not the instance.
SHOW INSTANCE-Always applies to the default instance, not the current instance.
SHOW PARAMETERS-Applies to the current instance.
SHOW SGA-Applies to current instance only.
SHUTDOWN-This is a privileged SVRMGR command; applies to the current instance.
STARTUP-Always applies to the current instance; it is a privileged SVRMGR command.

You change the DEFAULT instance by disconnecting from the CURRENT instance and issuing the "SET INSTANCE connect_string" command. Note that there is no user name or password.

14.3 Basic Procedure for Creating a Shared Oracle Database

Everything must start somewhere. A parallel database starts with a single instance. This instance is created exactly like an exclusive instance. Once the instance is up and operating, the additional instances that will share the database are defined. The final step is to bring the INIT.ORA files into sync and start all the instances in shared mode. Several parts of the database need to be considered when designing and starting up a parallel instance; these are:

  • File structures
  • Data dictionary
  • Sequence number generators
  • Rollback segments
  • Redo log files
  • Database creation

By properly designing these six items, resource contention can be avoided, thus optimizing performance.

All instances in a parallel database use the same control files and datafiles. Each instance has its own SGA and redo logs. Each instance can have differing numbers of redo logs and different degrees of mirroring. Each instance must have its own dedicated rollback segment(s).

TIP

For UNIX, use 100-megabyte cooked file systems arranged in an OFA configuration on each server to act as locations for the soft links to the RAW devices. This allows "human"-understandable file names to be used.

What does all this mean? First, before we can start up a parallel instance, there must be sufficient redo log threads and private rollback segments available to split among the instances. Next, there must be sufficient memory available on each node to handle the SGA requirements for each instance.

The rollback segments can be either public or private. It is suggested that they be made as private rollback segments so that they can be positioned on the disk farm for each instance and made exclusive to that instance. Additional rollback segments can be specified from any active instance. Each instance acquires its private rollback segments by specifying their name in the INIT.ORA parameter ROLLBACK_SEGMENTS. At least one rollback segment must be available for each instance. This can be guaranteed via use of the private rollback segments.

Let's look at a more detailed procedure.

1. Using either the OEM or commands, create the initial instance. Verify that the parameters MAXINSTANCES, MAXLOGFILES, ARCHIVELOG, and MAXLOGHISTORY are set in accordance with the values you require. You should alter the INIT.ORA parameter DB_BLOCK_SIZE to ensure that the database can provide the number of free lists and extents you require. You should also set the DB_NAME parameter.

2. Add a second rollback segment to this new instance and then bring it on-line; add the second name to the INIT.ORA file via the ROLLBACK_SEGMENTS parameter, so that when you start up the instance again it will acquire the rollback segment. Under Oracle7, there is no need to shut down. Once the rollback segment is created it can be brought on-line with the ALTER ROLLBACK SEGMENT command.
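Step 2 might look like the following sketch; the segment name and storage values are assumed for illustration:

CREATE ROLLBACK SEGMENT rbs_temp
  TABLESPACE system
  STORAGE (INITIAL 100K NEXT 100K);
ALTER ROLLBACK SEGMENT rbs_temp ONLINE;

The segment goes in SYSTEM because no other tablespace exists yet; adding ROLLBACK_SEGMENTS = (rbs_temp) to the INIT.ORA file ensures the instance reacquires it at the next startup.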

3. Add required tablespaces to the instance. As a minimum, add the following additional tablespaces:

TOOLS. For forms, reports, and menu files owned by SYSTEM.

USERS. To use for the DEFAULT tablespace for users.

TEMP. To use as the TEMPORARY tablespace for users.

ROLLBACKSn. To use for rollbacks for this instance, where n is an arbitrary integer assigned to each instance in the database. If there are different disk strings attached to other nodes, you may want to add additional rollback segment tablespaces for each set of disks that will be used by the parallel instance.

4. Create additional rollback segments. There should be one rollback segment for every 4 to 10 users you expect for each instance. If you expect 50 users on instance 1, 24 users on instance 2, and 50 users on instance 3, assign 32 rollback segments total, spread over the three ROLLBACKS tablespaces (one tablespace for each instance), assuming the lower value of four users per rollback is used.
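The arithmetic behind the total of 32, assuming four users per rollback segment and rounding up within each instance:

instance 1: 50 users / 4 = 13 (rounded up)
instance 2: 24 users / 4 =  6
instance 3: 50 users / 4 = 13 (rounded up)
total:      13 + 6 + 13  = 32 rollback segments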

5. Create enough redo log threads for all of the expected instances. Each thread must have at least two groups of one redo log each; three groups per instance thread are suggested.
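As a sketch, adding and enabling a second redo thread with three groups might look like this (group numbers, raw device names, and sizes are hypothetical):

```sql
-- Add a second thread of redo with three single-member log groups
ALTER DATABASE ADD LOGFILE THREAD 2
  GROUP 4 ('/dev/rdsk/raw_log2_1') SIZE 19M,
  GROUP 5 ('/dev/rdsk/raw_log2_2') SIZE 19M,
  GROUP 6 ('/dev/rdsk/raw_log2_3') SIZE 19M;

ALTER DATABASE ENABLE PUBLIC THREAD 2;
```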

6. Shut down the instance; deassign the rollback segment assigned in step 2 by removing it from the ROLLBACK_SEGMENTS parameter. Assign the rollback segments for this instance only by placing their names in the ROLLBACK_SEGMENTS parameter. Alter the GC_ROLLBACK_SEGMENTS parameter (or add it) to the total number of rollback segments expected to ever be created by all instances in the parallel database. For Oracle7 and Oracle8, you can take the additional system rollback segment off-line with the ALTER ROLLBACK SEGMENT command and bring on this instance's rollback segments with the same command without shutting down the instance.

7. Add the THREAD parameter to the INIT.ORA file and set the THREAD value for this instance.

8. Set the ARCHIVE_LOG_DEST and the ARCHIVE_LOG_FORMAT parameters.

9. Set all INIT.ORA GC_ parameters.

10. Set the INIT.ORA parameter INSTANCE_NUMBER for the initial instance to 1. For additional instances, increment this number so each instance has a unique value. This value determines the free list used by the instance. If one instance uses this parameter, they all should. If it is not used, each instance acquires the lowest available instance number, and the DBA has no control over it.

11. Be sure the SINGLE_PROCESS value is set to FALSE in the INIT.ORA file for this and all other instances in the parallel database.

12. If it is not already shut down, shut down and then restart the instance in parallel mode.

13. Create the additional instances, edit their INIT.ORA files in accordance with the parameters shown in previous sections, and specify, for each instance, the appropriate rollback segments and the redo thread.

14. Create a common INITS.ORA file (the 'S' indicates that this is a shared or common initialization file) in which the common initialization parameters are placed. For a parallel database, all instances must have the same values for the following parameters:

CONTROL_FILES
DB_BLOCK_SIZE
DB_FILES
DB_NAME
GC_DB_LOCKS
GC_FILES_TO_LOCKS
GC_LCK_PROCS
GC_ROLLBACK_LOCKS
GC_ROLLBACK_SEGMENTS
GC_SAVE_ROLLBACK_LOCKS
GC_SEGMENTS
GC_TABLESPACES
IFILE
LICENSE_MAX_SESSIONS
LICENSE_MAX_USERS
LICENSE_SESSIONS_WARNING
LM_LOCKS
LM_PROCS
LM_RESS
LOG_FILES
ROW_LOCKING
SERIALIZABLE
SINGLE_PROCESS

The IFILE parameter should be specified in each instance's INIT.ORA file to point to the location of the shared INITS.ORA file that contains the rest of the parameters.
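For illustration, an instance-specific INIT.ORA might contain something like the following (the instance number, segment names, and path are hypothetical):

```
# INIT.ORA for instance 2 -- instance-specific values only
instance_number   = 2
thread            = 2
rollback_segments = (rbs2_01, rbs2_02)
# pull in the shared parameters from the common file
ifile             = /oracle/admin/parallel/inits.ora
```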

15. Start the additional instances in parallel.

Archive logging. Archive logging is controlled independently for each instance; however, if one archives it is suggested that all archive.

Backup. Backups can be either on-line or off-line. Any instance can issue the ALTER TABLESPACE name BEGIN BACKUP | END BACKUP commands. For ease of management, it is suggested that the first instance created be used as the management instance and that all backup and export actions be initiated there if possible.
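For example, a hot backup of a single tablespace is bracketed like this (the tablespace name is illustrative):

```sql
-- On-line (hot) backup bracketing for one tablespace
ALTER TABLESPACE tools BEGIN BACKUP;
-- ...copy the tablespace's datafiles at the operating system level...
ALTER TABLESPACE tools END BACKUP;
```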

Recovery. The recovery of a single failed node is accomplished by the SMON process of one of the other nodes when the failed node comes back on-line. Recovery of data, redo, or rollback files can be accomplished from any instance in the database, but it is suggested that recovery be done with the other instances shut down and that the instance used for recovery be in EXCLUSIVE mode. See Chapter 17 for database recovery instructions.

14.4 Porting an Existing System to Parallel Server

Unfortunately, most parallel server implementation strategies assume you are starting from the design phase, while many of us who wish to implement this technology have existing applications. Porting to parallel server is a challenge. This section points out some of the pitfalls to make the transition to parallel server easier.

Other Options

Before we get into parallel server, what are other possible options to allow for high availability and limited downtime for a production system? Essentially, there are three other options: standby database, master-master replication, and full database copies.

Standby Database

With the standby database option you maintain two copies of the database: one active database and one full copy of the active database left in recovery mode. The archived redo log files are copied from the active database to the standby database platform and are applied to the standby database as they are generated. If the active database becomes corrupted, the worst case is that a single redo log's worth of transactions is lost. This suggests that a small redo log size should be used with the standby database option. In Oracle version 7, the standby database is not accessible until the final archive log is applied and it is opened for use. In Oracle8 and Oracle8i, the standby database can be used in read-only mode even while it is in recovery mode. Figure 14.2 demonstrates the standby database option.
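A rough sketch of the standby side (exact commands depend on the Oracle version and configuration):

```sql
-- On the standby node: mount as a standby and apply copied archive logs
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
RECOVER STANDBY DATABASE;
-- Oracle8/8i only: suspend recovery and open for read-only reporting
ALTER DATABASE OPEN READ ONLY;
```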

Figure 14.2 Example of configuration for hot standby database
Master-Master Replication

In a master-master replication situation both databases are independent entities that maintain a full n-way replication on all tables. This n-way replication allows both databases to be used as fully active databases, and all changes to data are copied between both systems. Master-master n-way replication is complex to manage, and conflict resolution strategies must be created for all update-capable tables. In addition, master-master replication may require additional bandwidth for the network that connects the platforms used. One good feature of n-way or master-master replication is that if it is properly implemented you can use different system platforms in remote locations.

Full Database Copy

In a full database copy situation a full hot backup of the active database is copied from its platform to the backup site, where it is restored as if from a disaster. A full database copy provides a backup copy of the database, but it is only current to the copy date. If any activity is allowed against the restored copy, the databases are thrown out of sync, and a new copy must be made if the databases are to truly back each other up. The time required for this full database copy operation can be prohibitive for large databases. Figure 14.3 shows an example of this option.

Figure 14.3 Example of configuration for full database copy and master-master replication
Parallel Server

In Oracle parallel server, two or more instances attach to the same database through the Oracle parallel cache management system. In shops that are used to treating the terms database and instance as synonymous, this distinction can be confusing. Remember that an instance is the set of processes and the SGA (System Global Area), while the database is the set of database files. Unless the platform used for the parallel system is able to partition CPUs (such as the E10000 from Sun), the instances in a parallel server must be on separate platforms running the same operating system at the same version level, and running the identical version of Oracle with the parallel server option installed. Figure 14.4 shows the configuration for this option.

Figure 14.4 Example of configuration for parallel server.

Benefits of Parallel Server

Parallel server allows for high availability. In a 24x7 shop the automatic fail over capability for some versions means that even with a full system failure, the database will not become unavailable. Also, in a parallel server environment the number of users served is increased by adding platforms and parallel instances with few changes to the database structure. Under Oracle parallel server, complex applications can be partitioned among several nodes, thus enhancing performance. One thing to note is that partitioning applications is complex and may not be possible if you are porting from standard to parallel server without significant redesign.

Problems with Parallel Server

The major problems that occur under parallel server involve two types of lock contention: PING and FALSE PING. Using parallel cache management (PCM) combined with distributed lock management, Oracle assigns database file blocks to locks based on how the DBA sets certain initialization parameters (see Table 14.1). Usually, when a database is designed to operate under parallel server, tables are distributed to multiple tablespaces and datafiles, and the granularity of the locking is set for each table. The granularity depends on the type of access; for example, where data is accessed one record at a time, you would want locking of as fine a grain as possible. In applications where data is read in chunks, coarser-grain locking is required. If an application mixes these two types of access (fine-grain OLTP transactions during the day, coarse-grain batch loads at night), then the setting of the initialization parameters for the PCM system is difficult, and at best a compromise is reached.

Table 14.1 PCM Initialization Parameters
Initialization Parameter Version Purpose
Lm_procs 8.x Set number of lock manager (LM) processes
Lm_ress 8.x Set number of resources for LM
Lm_locks 8.x Set number of locks used by LM
Parallel_server 8.x Startup in OPS if TRUE
Gc_lck_procs 7.x Set number of OPS LM procs
Gc_releasable_locks 7.x Releasable locks for OPS
Gc_rollback_locks 7.x Undo locks for OPS
Gc_files_to_locks 7.x Mapping between files and locks
Parallel_default_max_instances 7.x Maximum number of instances for parallel recovery
Parallel_instance_group 8.x Instance group to be used for all OPS operations
Ops_admin_group 8.x Instance group to use for all V$ queries

A PING occurs, in either a coarse- or fine-grain locking situation, when a data block required by one instance is already locked by another. The block must be released and reacquired; this process is known as a PING. A FALSE PING usually happens under coarse-grain locking, when an instance requires a block that is covered by a coarse PCM lock held by another instance but isn't actually being used. This still forces the release and relock of that block.

Minimizing PINGS and FALSE PINGS (sometimes known as FALSE COLLISIONS) will be the primary job of a shared-instance DBA. One thing to remember is that if the application cannot be partitioned, you will not be able to eliminate PINGS.

The Application to Be Ported

The application used in this example consists of two instances: a mixed-mode OLTP and batch-mode instance and a second snapshot instance used for reporting and data validation (essentially a batch-mode environment). The goal of the conversion is to eliminate the second snapshot instance and provide for a more robust near-24x7 environment. The ERD for the database is shown in Figure 14.5.

Figure 14.5 Application ERD (Note: Not all application tables shown).

The application as it currently exists resides in two main tablespaces, CNET_DATA and CNET_INDEX. One of the large tables, BBS_EARNINGS_INFO, has been placed in its own tablespace, CNET_ENINFO, as have its indexes into CNET_ENINFO_INDEX. The standard set of tablespaces, SYSTEM, USERS, TEMP, and RBS, are also present. The record counts for the major application tables are shown in Table 14.2.

Table 14.2 Record Counts for the Major Application Tables

Table Name Number of Rows Average Row Length
CUSTOMER_SITES 2,249,486 214
USERS 131,330 148
STI_ADDRESS 10,838,509 97
BBS_EARNINGS_INFO 3,225,310 251
BBS_DUNS_PROFILE 2,239,183 99
BBS_FRANCHISE_CODES 62,568 24
BBS_SIC_CODES 2,563,878 58
BBS_CIRCUIT_ID_INFO 10,680,189 46

The overall application data space uses around 17 gigabytes of storage including indices, rollback segments, and the space for other application support tables.

Application Activities

The main application has four operational modes:

OLTP-From 7 A.M. to 7 P.M., Monday-Saturday
Batch data load-7 A.M. to 9 P.M., Monday-Saturday
Batch data load-As required for monthly and quarterly data updates
Batch read operations-From 3 A.M. to 4 A.M. to support the snapshot database

The snapshot application is used for batch reporting requirements and has no OLTP activity.

Each type of processing involves overlapping sets of tables as is shown in Table 14.3.

Table 14.3 Processing and Tables Affected

Process Major Tables Used Force Snapshot Rebuild?
Daily BBS_EARNINGS_INFO, STI_ADDRESS No
Monthly BBS_EARNINGS_INFO Yes
Monthly Circuit Load BBS_CIRCUIT_ID_INFO N/A
Quarterly CUSTOMER_SITES, BBS_DUNS_PROFILE, BBS_SIC_CODES, BBS_FRANCHISE_CODES, STI_ADDRESS Yes
Reporting BBS_EARNINGS_INFO, CUSTOMER_SITES, STI_ADDRESS, USERS N/A
OLTP CUSTOMER_SITES, STI_ADDRESS, BBS_CIRCUIT_ID_INFO, BBS_DUNS_PROFILE, BBS_SIC_CODES, BBS_FRANCHISE_CODES, USERS, update of small, secondary tables N/A

As can be seen from a glance at Table 14.3, the batch, reporting, and OLTP processes are tightly interwoven. The reporting functionality involves full table scans against several of the major tables. Even though in the current application setup the reporting is accomplished through a second, snapshot database, the functionality may be absorbed into the shared database as a part of the goals of the project.

The current application environment is a single Sun E6000 with 4 gig of RAM, a 600+-gig disk farm running Solaris 2.5.1, and Oracle 7.2.3. The database SGA size is just under a gigabyte with a 145-meg shared pool. Both the main and reporting databases reside on the same server. All tablespace datafiles are configured as RAW with links established to OFA-compliant names on 100-meg cooked file systems. This configuration is shown in Figure 14.6. Figure 14.7 shows how the Oracle datafiles are mapped against the RAW partitions.

Figure 14.6 Existing system physical configuration.

Figure 14.7 Cooked to RAW partition mapping (Report database mapping not shown).

Requirements for Parallel Server: How Does the Application Measure Up?

Oracle parallel server requires:
  • On UNIX, that all datafiles and redo logs are on RAW-sharable devices
  • Two or more identical platforms (same OS and Oracle version, can differ in memory and CPUs) able to share the disks
  • Oracle Parallel Server Option (not parallel query!)
  • Configuration of the GC_* parameters and other required initialization parameters
  • More memory than non-parallel server Oracle for the same size SGA.

So how does this application setup stack up against the requirements? Since it was known early on that the application might be ported to parallel server, all datafiles were placed in RAW partitions (either 500 meg or 100 meg in size), so the only non-RAW objects are the redo logs. Two identical E6000 boxes configured with the DLM (Distributed Lock Manager) software and Oracle 7.3.2 with the parallel server option were procured and set up with a shared 700+-gig disk farm configured with the appropriate number and sizes of RAW partitions. An additional set of twenty 20-meg RAW partitions was set up to allow the transfer of redo logs from cooked to RAW partitions. The E6000s were purchased with 24 CPUs and 6 gig of memory each.

It appears the system is ready for the conversion from exclusive-mode to shared-mode Oracle server.

Steps to Convert to Parallel Server

The steps to convert to parallel server seem deceptively easy:
1. The system administrator sets up the disk farm and installs system software.
2. Create a table usage chart showing how all tables are accessed.
3. Decide on application partitioning.
4. Move large, frequently accessed tables to their own tablespaces; move their indexes to their own tablespace.
5. Move smaller tables that are accessed in a similar manner to their own tablespace and their indexes to a separate tablespace.
6. Use the GC_DB_LOCKS and GC_FILES_TO_LOCKS parameters to map the PCM locks to the file blocks.
7. Create private and shared initialization files.
8. Move any non-RAW datafiles or redo logs to RAW partitions.
9. Start one instance on shared database.
10. Create second thread of redo logs.
11. Create additional rollback segments (private).
12. Bring up the second instance, assigning it the second redo thread and the newly created rollback segments.
A Closer Look at These Easy Steps

The first step, the setup of the disk farm and installation of software by the system administrator and probably the DBA, involves the creation of the shadow or mirror volumes, the striping of the configuration, the creation of the RAW volumes, and the installation of the proper versions of Oracle and the operating system. There may also be required patches and distributed lock management (DLM) software. In addition, the DLM software must be properly configured and tested.

Once the platform and software are installed, the existing database is installed, the easiest method being from a backup. The install of the existing database will require the careful use of links and may require that some cooked partitions be created and used until tables can be moved and/or resized.

The next step requires someone who knows the database usage patterns for your application. A chart showing all database tables and their usage should be prepared to facilitate the mapping of tables and indexes to a new tablespace/datafile layout. This is one of the more critical steps in that if the database is not reconfigured your locking decisions will be severely limited.

Application partitioning involves deciding which nodes of your parallel server will service which types of customers. If you only want to spread users over multiple nodes, then partitioning isn't needed. However, you should be aware that your performance may be poorer than if you added more memory or CPUs to your existing server. Applications are partitioned so that distinct subsets of tables (such as accounts payable or general ledger tables) are set up in separate tablespaces and datafiles and a unique locking scheme calculated for each subapplication. This would allow OLTP users to use the proper granularity of locks as well as allowing DSS users their own locking scheme. Without partitioning everyone has to compromise.

Once table usage is defined and a partitioning scheme is devised, tables and indexes are moved to their respective tablespaces. Remember that all tablespaces must be moved to RAW partitions. The more you can isolate your tables from each other the happier Oracle parallel server will be. If possible, create the tablespaces in the order of usage patterns (OLTP first, DSS second, etc.). This will facilitate lock assignment. Once all of the tables and indexes are parsed out to their respective tablespaces, you will need to drop any non-RAW tablespaces. At this point, the old database should be shut down and backed up.

Using the usage chart, devise a locking scheme for each datafile. If you have created the datafiles in proper order then specifying locking will be made easier. Locks are assigned to datafiles by datafile number, not tablespace or file name. The locking parameters must be the same for all instances. Each PCM lock requires around 115 bytes of memory.

Once locking parameters are set you will need to set up three initialization files: one initialization file for each of the instances, holding the instance-specific parameters such as buffer data, redo log thread assignment, and rollback segment assignments, and one shared file (note that this shared initialization file must be on a RAW partition) that holds the common parameters. You could provide a copy of the shared file to each node in the cluster, but by simply placing a copy in a small RAW partition on the cluster drives you can be sure it will never get out of sync.

Once the required initialization files are in place it is possible to restart the database in shared or parallel mode and create the second thread (and third, or fourth, etc.) of redo logs and any additional rollback segments. Once the new redo threads and rollback segments are available you can proceed with the startup of the other instances. Once you have a successful startup of the parallel system, shut down and take a backup. Restart and you are on your way! Let's examine these actions in light of our sample database.

Porting the Example Application

Now that the ideas of parallel server have been discussed in general, let's look at some specifics using the sample database.

Remember from previous discussions that the example database is currently located on a single Sun E6000 system. The target system is a dual E6000 cluster sharing a suitably sized disk farm. The system administrators have configured the system and loaded the proper version of the operating system, including the Sun distributed lock manager (DLM). In Oracle version 7, the vendor of the platform had to provide a DLM; in Oracle8 the DLM for supported platforms is provided by Oracle Corporation. The Oracle software with the parallel server option has also been installed on the Sun cluster.

The major part of the work on the configuration of the new Sun cluster involved the setup of the disk farm. A five-disk stripe set was used, with the stripe size set according to the Sun DLM specifications and the suggestions from Oracle. The disk farm was configured such that it used a RAID0/1 configuration (striped and mirrored). Once the striping and mirroring were set up, the disk farm was divided into multiple 500-meg, 100-meg, and 20-meg standard RAW partitions. In a review of Oracle white papers and tuning documents it was found that Oracle7, in some cases, prefers to deal with a file size of 500 meg or less. This was also a convenient size for backup and file transfer, so the maximum RAW partition was set to 500 meg. To prevent waste of space in the case of smaller tablespaces (such as SYSTEM, USERS, and TOOLS), the 100-meg RAW partitions were also configured, and, for use with the redo logs (sized at 19 meg), several 20-meg RAW partitions were reserved. Once all of the software installation and system configuration was complete, the table usage chart was prepared. An abbreviated example is shown in Table 14.4.

Table 14.4 Sample Table Usage Chart

Table Name Type of Access P1 Type of Access R1 Batch Mode Access
CUSTOMER_SITES Multiple single row lookups Full table scan rollup Multiple lookup and update, insert
USERS Multiple single row lookups Multiple single row lookups Not used
STI_ADDRESS Multiple single row lookups Full table scan rollup Multiple lookup and update, insert
BBS_EARNINGS_INFO Multiple single row lookups Full table scan rollup Multiple lookup and update, insert
BBS_DUNS_PROFILE Multiple single row lookup Not used Multiple lookup and update, insert
BBS_FRANCHISE_CODES Multiple single row lookup Not used Multiple lookup and update, insert
BBS_SIC_CODES Multiple single row lookup Not used Multiple lookup and update, insert
BBS_CIRCUIT_ID_INFO Multiple single row lookup Not used Dropped, re-created

As can be seen from looking at Table 14.4, we definitely have a mixed-mode environment. It is also obvious that unless we were willing to redesign, we could not provide for both the reporting and normal production in a single set of database files unless we compromised on lock granularity. Since we had little time or budget, the decision to compromise was forced on the project.

The lock granularity for a table is set to reduce or eliminate ping and false ping. For OLTP-type operations where single records are constantly being read, updated, and written, a fine-grain locking scheme is generally used. Ideally, if you could get to one record per block (operating system, not Oracle) and then use one lock per block, you would eliminate false ping and reduce pinging in a pure OLTP environment, especially if only a single user at a time used a single record at a time. For DSS where large blocks of data are handled, a coarser-grain locking is desired. Since most users will be accessing DSS-type data in full table scan type operations, a fine-grain lock scheme would result in a high level of block ping; therefore, a coarser-grain lock is required. It is a purely empirical determination to size the granularity of locking in a DSS system.

It was decided that the example application would use fine-grain locking where possible and medium-grain locking where the OLTP, DSS, and batch-mode environments collided.

When application partitioning was first addressed for the example application, it was naively thought that we could place the reporting users on one node and the OLTP users on the other and have little conflict. As we delved deeper into the finer points of locking these dreams were dashed. While partitioning the users for reporting onto the second node would remove their CPU activity, it would introduce significant pinging and false pinging, impacting performance. In fact, other than a complete redesign of the application no means of user partitioning or locking was found that could satisfy performance for both types of users. In the final analysis, it was decided to retain the dual environments using a master and a reporting database. For the rest of this section, we will discuss the OLTP database since all datafiles in the reporting database were set to the same granularity of locking.

The next step of the process, moving tables into their own tablespaces, was fairly complex. Figure 14.8 shows how the one tablespace currently used for data had to be exploded into multiple smaller tablespaces to allow for maximal use of locking.

Figure 14.8 Tablespace remapping

The actual table moves were accomplished with table, index, and constraint rebuild scripts generated by the re-create scripts provided in RevealNet, Inc.'s Oracle Administrator's Knowledgeware product. The scripts were generated to reflect the as-is state of the database and were then edited to reflect the new tablespace assignments. The table creation scripts were modified to perform CREATE TABLE x STORAGE . . . TABLESPACE . . . AS SELECT * FROM y; type commands. Once the scripts were prepared, the additional tablespaces were configured and the scripts run. All of this was accomplished on the original platform. Once the tables were moved from the single source tablespace to the target tablespaces, the remaining tables were captured with multiple export operations (multiple because of the size limitation on table lists in the export parameter file). Then the source tablespace was dropped and re-created at a more realistic size, and the multiple export files were imported.
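The table-move commands generated were of roughly this form (the table, tablespace, and storage values here are illustrative):

```sql
-- Rebuild a table into its new tablespace via CREATE TABLE ... AS SELECT
CREATE TABLE sti_address_new
  TABLESPACE cnet_sti_data
  STORAGE (INITIAL 400M NEXT 50M PCTINCREASE 0)
  AS SELECT * FROM sti_address;

DROP TABLE sti_address;
RENAME sti_address_new TO sti_address;
-- indexes and constraints are then re-created in their own tablespaces
```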

Once all of the tables and indexes were moved into their own tablespaces and the appropriate resizing operations performed, the redo logs were dropped, moved, and resized into 20-meg RAW partitions. The source database was now configured as we would want it on the target platform disk farm. Using the source database as a template, we configured the target platforms with the proper cooked file systems and configured the link names identically to what was on the source system. Using the SQLBackTrack product for Oracle, "Obacktrack", we then cut a cold backup of the target database and then transferred it and the appropriate Obacktrack support files to the target platforms. After a minimum of reconfiguration of the support files for Obacktrack we restored the source database to the target database location. Before we could start up the target database we had to create the cooked file destinations for the archive logs and the backups and repoint the appropriate initialization file parameters to these cooked file areas.

Cooked file systems must be used for archive logs because a RAW partition can hold only one file at a time. This is also why we had to create the large cooked areas for the backup files.

At this point, we started the database in exclusive mode and tested to make sure the transfer was successful. With the database ready we now had to determine the granularity of locking for the database system.

Determining Lock Granularity

The initialization parameters GC_DB_LOCKS and GC_FILES_TO_LOCKS determine the granularity of the PCM locks assigned to a parallel instance. The GC_DB_LOCKS parameter determines the total number of locks available, and the GC_FILES_TO_LOCKS determines the way these locks are mapped to each datafile. PCM locks are specified on a per-datafile basis, not per table or per tablespace. This is the reason why the tables and indexes must be placed in individual tablespaces and hence datafiles.

Ideally, in an OLTP database, one lock per block would be the way to configure the PCM locks. But let's look at a one-block-per-lock scenario:

1. Oracle block in our database = 8 kilobytes
2. Total size of all datafiles (excluding rollbacks) = 22,843 megabytes:
22,843 * 1,024 * 1,024 = 23,952,621,568 bytes / 8,192 bytes = 2,923,904 PCM locks
3. Allowing for datafile addition, assume that at least two 499-megabyte files will be added over the next six months:
2 * 499 * 1,024 * 1,024 = 1,046,478,848 bytes / 8,192 bytes = 127,744 locks
4. Therefore, we would need to configure (the +1 allows for the case where more than two additional files are configured):
2,923,904 + 127,744 + 1 = 3,051,649 locks
5. The additional approximate memory cost per instance would be:
3,051,649 locks * 115 bytes/lock = 350,939,635 bytes or 335 megabytes

One problem with fine-grain locking is the cost of lock management. In a multi-CPU environment a fine-grain locking scenario is possible since the load of managing the many locks is spread among several processors. If each instance used a single processor the performance costs could be significant.

In our situation it was decided that a medium grain of locking would be required. There are many schools of thought as to how to set up a medium-grain locking environment. If you expect some full table scanning, then setting the granularity of locks to a multiple of the multi_block_read_count initialization parameter may be advisable. Matching the number of blocks read in a single pass by your operating system against multi_block_read_count is another scenario. Usually, a fixed number of locks per datafile is decided upon, and this is how the locking is set. In our situation, it was determined that setting the value to 3,992 locks per 499-meg datafile and 792 locks per 99-meg datafile would give us a granularity of 16, matching the setting of our multi_block_read_count parameter, or 16 blocks per PCM lock. We chose a value of 16 for multi_block_read_count based on block size and striping values as well as the number of blocks read in one read operation by the operating system. This gave us a setting of the GC_DB_LOCKS parameter of:

23,952,621,568 bytes / (8,192 bytes per block * 16 blocks) = 182,744 locks

Allowing for growth:

2 * 499 * 1024 * 1024 = 1,046,478,848 bytes / (8,192 bytes/block * 16 blocks/lock) = 7,984 locks

For a total of:

182,744 + 7,984 + 1 = 190,729 locks

at a memory cost of :

190,729 locks * 115 bytes/lock = 21,933,835 bytes or approximately 21 megabytes per instance
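The two scenarios above can be rechecked with a short calculation (a sketch in Python; the block size, datafile sizes, and the ~115-bytes-per-lock estimate are taken from the text, and the medium-grain sum works out to 190,729 locks):

```python
# Recompute the PCM lock counts for the example database.
# Sizes and the ~115-bytes-per-lock estimate come from the text.

BLOCK_SIZE = 8192                        # Oracle block size, bytes
DATA_BYTES = 22843 * 1024 * 1024         # all datafiles, excluding rollbacks
GROWTH_BYTES = 2 * 499 * 1024 * 1024     # two more 499-meg files expected

def locks(nbytes, blocks_per_lock):
    """PCM locks needed to cover nbytes at the given granularity."""
    return nbytes // (BLOCK_SIZE * blocks_per_lock)

# One lock per block (fine grain): prohibitive memory cost
fine = locks(DATA_BYTES, 1) + locks(GROWTH_BYTES, 1) + 1
fine_mem = fine * 115                    # bytes per instance

# Sixteen blocks per lock (medium grain), matching multi_block_read_count
medium = locks(DATA_BYTES, 16) + locks(GROWTH_BYTES, 16) + 1
medium_mem = medium * 115

print(fine, fine_mem)        # 3051649 350939635 (~335 MB per instance)
print(medium, medium_mem)    # 190729 21933835 (~21 MB per instance)
```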

PCM locks can be assigned either as HASH or as fine-grain locks. Setting the lock value to zero for a datafile selects fine-grain locking. Any other value sets up hashed locking, either DBA-configured or system default. The format for the GC_FILES_TO_LOCKS parameter is:

GC_FILES_TO_LOCKS = "file_list=#locks[!blocks][EACH][:file_list=#locks[!blocks][EACH]] . . ."

Where:
File_list specifies a single file, or list of files and ranges:
FileA[-fileC][,fileE[-fileG]] . . .
File identifiers, not file names, are used. The DBA_DATA_FILES view provides the file identifiers for all datafiles.
#locks sets the number of PCM locks to assign the file(s) identified. A setting of 0 specifies fine-grain locking.
!blocks specifies the number of contiguous data blocks to be covered by each lock.
EACH specifies #locks as the number of locks to be allocated to each file in the file list.
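Since locks are mapped by file identifier, a query such as the following is useful when planning the parameter:

```sql
-- List datafile identifiers and sizes in identifier order
SELECT file_id, file_name, bytes
  FROM dba_data_files
 ORDER BY file_id;
```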

In our situation, we created the tablespaces such that the file identifiers for the specific file sizes (99 megabytes and 499 megabytes) ran contiguously. This simplified the specifications inside the GC_FILES_TO_LOCKS parameter. The 99-megabyte files were assigned first and carried the identifiers 1-14. The next 43 datafiles assigned were all 499 megabytes (15-57). The final two parameter assignments looked like this:

GC_DB_LOCKS = 190729
GC_FILES_TO_LOCKS = "1-14=#792!16EACH:15-57=#3992!16EACH"
Setting Other GC_* Parameters

The final datafiles were assigned to rollback segments. You should never assign locks to rollback segment datafiles in the GC_FILES_TO_LOCKS parameter. Instead, use the GC_ROLLBACK_SEGMENTS and GC_ROLLBACK_LOCKS parameters (note that in Oracle8, these parameters are no longer needed).

The GC_ROLLBACK_SEGMENTS parameter is set to the total number of rollback segments across all instances. The GC_ROLLBACK_LOCKS parameter should be set to the expected number of blocks used by individual rollback segments. I suggest starting the set point at the number of blocks used when the rollback segment is at its optimal setting.

The GC_SEGMENTS parameter is set to the expected number of database segments, found by doing a COUNT(*) select against the DBA_SEGMENTS view and applying a fudge factor to allow for growth (I suggest 1.25 to 1.5), rounded up to the next prime number. The next prime number can be determined by using SELECT LPRIME(seg_count*fudge_factor) FROM DUAL. The only segments that the GC_SEGMENTS parameter concerns are index, table, and cluster segments.
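LPRIME is not a standard Oracle built-in, so it may be a site-specific function; as a stand-in, the "round up to the next prime" step can be sketched as follows (the seg_count and fudge_factor values here are hypothetical).

```python
def next_prime(n):
    """Smallest prime >= n (assumed to be what LPRIME computes)."""
    def is_prime(k):
        if k < 2:
            return False
        f = 2
        while f * f <= k:
            if k % f == 0:
                return False
            f += 1
        return True
    n = int(n)
    while not is_prime(n):
        n += 1
    return n

seg_count = 1200      # hypothetical COUNT(*) from DBA_SEGMENTS
fudge_factor = 1.25   # 25 percent growth allowance
gc_segments = next_prime(seg_count * fudge_factor)
print(gc_segments)
```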

The GC_TABLESPACES parameter should be set to the number of tablespaces in the database.

The GC_FREELIST_GROUPS parameter should be set to the maximum number of free list groups assigned.

A Note on Free Lists

A free list group should be assigned for each instance that will use a table. In the example database, we reconfigured the tables when we rebuilt them to have two free list groups and five free lists each. These set points are based on two instances (thus two free list groups) and five expected simultaneous accesses per table (thus five free lists).

Defining the Initialization Files

As was said before, with Oracle Parallel Server you are required to have one initialization file per instance for the instance-specific initialization parameters, and a central shared initialization file for the common parameters is highly recommended. By centralizing the common initialization parameters in a shared file, you need to make changes in only one location for all instances, which prevents confusion and makes for easier maintenance. In the example database, the baseline initialization file was copied from the source system to the target instances, and the instance-specific initialization parameters were added or modified as needed. A special 5-megabyte raw partition was created on the shared drives to hold a shared initialization file, referenced through the IFILE initialization parameter, containing the values that had to be the same for both instances.
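A minimal sketch of that layout follows; all file names, paths, and parameter values here are hypothetical, not taken from the example database.

```
# initOPS1.ora -- instance-specific parameters for instance OPS1
instance_name     = OPS1
thread            = 1
rollback_segments = (rbs1_1, rbs1_2)

# Pull in the parameters that must be identical on every instance
# (db_name, control_files, db_block_size, the GC_* settings, and so on):
ifile = /dev/vg_shared/rcommon_init   # the shared raw partition
```

The second instance's file would differ only in its instance name, thread number, and private rollback segment list, while pointing at the same IFILE.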

Adding Redo Threads and Rollback Segments

Once the initialization files are configured, a single instance (the one tied to the existing redo log thread) is started in parallel mode. Once the database is up and operating, more threads of redo logs are created, one thread for each parallel instance that will share the database. For the example database, we only had to create one additional thread. Note that while all instances must be able to reach the redo log threads, only one may acquire a specific redo log thread at a time. Also, the redo log threads don't have to be identical in size or number of members.

Archive logging takes on an entirely new dimension in parallel mode. Remember, we configured a large cooked (file system) volume to hold the archive logs, which makes it difficult to share that location across the parallel instances. The archive logs from all instances must be available to all instances in order to allow distributed recovery, and to allow one of the parallel instances to recover another that crashes. This means you must either establish some means of copying the archived redo logs from each instance to all of the others, or create a central, perhaps NFS-mounted, area where all archive logs are copied.
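The "copy everything to a central area" approach can be sketched as below. This is a hypothetical housekeeping script, not something OPS provides; the directory layout and the *.arc naming convention are assumptions.

```python
import shutil
from pathlib import Path

def collect_archive_logs(instance_dirs, shared_dir):
    """Copy each instance's archived redo logs into a shared recovery
    area so any instance can recover any other (hypothetical layout)."""
    shared = Path(shared_dir)
    shared.mkdir(parents=True, exist_ok=True)
    copied = []
    for inst_dir in instance_dirs:
        for log in sorted(Path(inst_dir).glob("*.arc")):
            # Prefix with the instance directory name so log sequence
            # numbers from different threads cannot collide.
            target = shared / (Path(inst_dir).name + "_" + log.name)
            if not target.exists():
                shutil.copy2(log, target)
                copied.append(target.name)
    return copied
```

A job like this would be run from cron on each node, or replaced entirely by archiving directly to an NFS-mounted destination.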

Additional rollback segments for each instance will also be required. I suggest using private rollback segments that are assigned explicitly to each instance. Public rollback segments are acquired as needed by any instance that requires them; if all are acquired, one instance of the group may not be able to start up. In the example database, we configured two identical strings of private rollback segments and assigned each string to its respective instance.

Starting Up the Other Instances

Once all redo log threads are created and rollback segment assignments are complete, the other parallel instances can be started. This completes the port to parallel server.

References

Oracle7 Parallel Server Concepts & Administration, Release 7.3, Part No. A42522-1, Oracle Corporation, January 8, 1997.
Oracle8 Administration and Management, Michael R. Ault, Wiley Computer Publishing, 1998.
"Partitioning of Applications to Take Advantage of Oracle's Parallel Server," Erik Peterson, Oracle White Paper, Revision 1.45, Oracle Corporation, December 7, 1995.