However, you should measure exactly how the system is currently performing before beginning any tuning effort. The baseline metrics described in the previous section are a good example of this type of measurement. Using this benchmark, you can then compare the performance of the system after any tuning changes and evaluate their impact. In addition to these baseline metrics, you can use additional sources of tuning information to monitor and tune database performance. The following section describes these sources.
The EM Database Control provides a wealth of information for improving database monitoring and management, but you also need to be aware of several other sources of information about database performance, including:
The Alert Log frequently indicates whether gross tuning problems exist in the database. Tables that are unable to acquire additional storage, sorts that are failing, and problems with undo segments are all examples of tuning problems that can show up as messages in the Alert log. Most of these messages are accompanied by an Oracle error message.
ADDM provides a mechanism for sending an alert whenever Oracle errors are detected in the Alert log. Click the Alert Log Content link on the EM Database Control main screen to see the most recent messages.
The 10046 trace event, which can be activated at the instance or session level, is particularly useful for finding performance bottlenecks.
See Note 171647.1 at http://metalink.oracle.com for a discussion of using the 10046 trace event as a tuning technique.
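As a sketch of how this works, the 10046 event can be enabled for an individual session with ALTER SESSION. Level 12 is assumed here; it captures both bind variable values and wait events, whereas level 1 produces a basic SQL trace:

SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SQL> ALTER SESSION SET EVENTS '10046 trace name context off';

The resulting trace file is written to the directory specified by the USER_DUMP_DEST initialization parameter and can be formatted with the tkprof utility.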
Dynamic Performance Views

Oracle 10g contains approximately 350 dynamic performance views.
A partial listing of the V$ views frequently used in performance tuning:
Name             Description
V$SGASTAT        Shows information about the size of the SGA's components.
V$EVENT_NAME     Shows database events that may require waits when requested by the system or by an individual session. There are approximately 200 possible wait events.
V$SYSTEM_EVENT   Shows events for which waits have occurred for all sessions accessing the system.
V$SESSION_EVENT  Shows events for which waits have occurred, individually identified by session.
V$SESSION_WAIT   Shows events for which waits are currently occurring, individually identified by session.
V$STATNAME       Matches the name to the statistics listed only by number in V$SESSTAT and V$SYSSTAT.
V$SYSSTAT        Shows overall system statistics for all sessions, both currently and previously connected.
V$SESSTAT        Shows statistics on a per-session basis for currently connected sessions.
V$SESSION        Shows current connection information on a per-session basis.
V$WAITSTAT       Shows statistics related to block contention.
In general, queries that incorporate V$SYSSTAT show statistics for the entire instance since the time it was started. By joining this view to the other relevant views, you get the overall picture of performance in the database. Alternatively, queries that incorporate V$SESSTAT show statistics for a particular session. These queries are better suited for examining the performance of an individual operation or process. The EM Database Control makes extensive use of these views when creating performance-related graphs.
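For example, a query along the following lines joins V$SESSTAT to V$STATNAME and V$SESSION to show which connected sessions are generating the most logical reads. The statistic name 'session logical reads' is assumed here; any name from V$STATNAME can be substituted:

SQL> SELECT s.sid, sn.name, ss.value
  2  FROM v$session s, v$sesstat ss, v$statname sn
  3  WHERE ss.sid = s.sid
  4  AND ss.statistic# = sn.statistic#
  5  AND sn.name = 'session logical reads'
  6  ORDER BY ss.value DESC;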
Depending on the features and options installed, there are approximately 1300 DBA data dictionary views in an Oracle 10g database.
A Sampling of Data Dictionary Views:
Name            Description
DBA_TABLES      Table storage, row, and block information
DBA_INDEXES     Index storage, row, and block information
INDEX_STATS     Index depth and dispersion information
DBA_DATA_FILES  Datafile location, naming, and size information
DBA_SEGMENTS    General information about any space-consuming segment in the database
DBA_HISTOGRAMS  Table and index histogram definition information
DBA_OBJECTS     General information about all objects in the database, including tables, indexes, triggers, sequences, and partitions
The DBA_OBJECTS data dictionary view contains a STATUS column that indicates, through the use of a VALID or an INVALID value, whether a database object is valid and ready to be used or invalid and in need of some attention before it can be used. Common examples of invalid objects are PL/SQL code that contains errors or references to other invalid objects and indexes that are unusable due to maintenance operations or failed direct-path load processes. Some invalid objects, such as some types of PL/SQL code, dynamically recompile the next time they are accessed, and they then take on a status of VALID again. But you must manually correct other invalid objects, such as unusable indexes. Therefore, proactive database management techniques dictate that you identify and remedy invalid objects before they cause problems for database users.
SQL> SELECT owner, object_name, object_type
2 FROM dba_objects
3 WHERE status = 'INVALID';
OWNER  OBJECT_NAME             OBJECT_TYPE
------ ----------------------- -----------
SH     P_UPDATE_SALES_HISTORY  PROCEDURE
SYS    DBA_HIST_LATCH          VIEW
SQL> ALTER VIEW sys.dba_hist_filestatxs COMPILE;
SQL> ALTER PROCEDURE sh.p_update_sales_history COMPILE;

Procedure altered.
SQL> SELECT owner, index_name, index_type
2 FROM dba_indexes
3 WHERE status = 'UNUSABLE';
OWNER  INDEX_NAME  INDEX_TYPE
-----  ----------  ----------
HR     JOB_ID_PK   NORMAL
SQL> ALTER INDEX hr.job_id_pk REBUILD;
The ALTER … COMPILE command also works on invalid PL/SQL triggers, packages, package bodies, and functions.
When rebuilding an index using the REBUILD command, the amount of space used by the index is temporarily larger than the actual space needed to store the index. Make sure that adequate space exists in the tablespace before starting the rebuild process; up to 1.5 times the size of the original index is a good rule of thumb.
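Before rebuilding, you can check the index's current size in DBA_SEGMENTS and compare it against the free space available in the tablespace. A sketch, using the JOB_ID_PK index from the earlier example:

SQL> SELECT segment_name, tablespace_name, bytes/1024/1024 AS size_mb
  2  FROM dba_segments
  3  WHERE segment_type = 'INDEX'
  4  AND segment_name = 'JOB_ID_PK';

Multiplying SIZE_MB by 1.5 gives a rough estimate of the space needed during the rebuild.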
The Procedures screen shows that the status of the P_UPDATE_SALES_HISTORY procedure is currently INVALID. By selecting the Compile option from the Actions drop-down list, you can begin the recompilation process.
Selecting the Compile option, and then clicking Go causes the procedure to recompile and displays the Edit Procedure screen.
Scrolling to the bottom of this screen shows the messages associated with the error.
Once the error condition is corrected (in other words, the missing SALES_HISTORY_VIEW view is re-created), the procedure can again be recompiled using the Compile button, after which the successful completion screen is displayed.
Using the Indexes screen, you can also use EM Database Control to recompile indexes that are in an unusable state. Click the Indexes link in the Administration screen to open the Indexes screen.
To begin the recompilation process, select the Reorganize option from the Actions drop-down list. Click Go to display the second screen of the Reorganize Objects Wizard. Click the Set Attributes or Set Attributes By Type button to modify the index's attributes, such as the tablespace that it will be stored in or its storage parameters, before rebuilding. Click Next to display the third screen of the Reorganize Objects Wizard.
Using this screen, you can control how the index is rebuilt. For example, you can select the rebuild method, either offline or online, that is best suited for your environment. Offline rebuilds are faster but impact application users who need to access the index. Online rebuilds have minimal impact on users but take longer to complete. You can also specify a "scratch" tablespace where Oracle stores the intermediate results during the rebuild process. Redirecting this activity to another tablespace helps minimize potential space issues in the index’s tablespace during the rebuild. You can also specify whether to gather new optimizer statistics when the index build is complete. Click Next on this screen to generate an impact report.
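The same choices can also be made directly in SQL. As a sketch, assuming the JOB_ID_PK index from the earlier example, an online rebuild that also gathers fresh optimizer statistics might look like this:

SQL> ALTER INDEX hr.job_id_pk REBUILD ONLINE COMPUTE STATISTICS;

Omitting the ONLINE keyword performs the faster offline rebuild described above.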
The output indicates that there is adequate space in the EXAMPLE tablespace for the unusable JOB_ID_PK index. Clicking Next displays the job scheduling screen.
Like the earlier job-scheduling example in this chapter, you can use this screen to assign a job description and to specify the start time for the job. Clicking Next submits the job and rebuilds the unusable index according to the parameters you defined.
The cost-based optimizer (CBO) uses these statistics to formulate efficient execution plans for each SQL statement issued by application users. For example, the CBO may have to decide whether to use an available index when processing a query. The CBO can make an effective guess at the proper execution plan only when it knows the number of rows in the table, the size and type of indexes on that table, and how many rows it expects a query to return. Because of this, the statistics gathered and stored in the data dictionary views are sometimes called optimizer statistics. In Oracle 10g, there are several ways to analyze tables and indexes to gather statistics for the CBO. These techniques are described in the following sections.
Click Next on the Introduction screen to open Step 2 in the wizard, and select the method to use when gathering the statistics.
As you can see, three primary statistics options are available: Compute, Estimate, and Delete. The Compute option examines the entire table or index when determining the statistics. This option is the most accurate, but also the most costly in terms of time and resources if used on large tables and indexes. The Estimate option takes a representative sample of the rows in the table and then stores those statistics in the data dictionary. The default sample size is 10 percent of the total table or index rows. You can also manually specify your own sample size if desired. You can also specify the sample method, telling EM Database Control to sample based on a percentage of the overall rows, or blocks, in the table or index. The Delete option removes statistics for a table or index from the data dictionary. If you specify a sample size of 50 percent or more, the table or index is analyzed using the Compute method.
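These options correspond to parameters of the DBMS_STATS package discussed later in this section. As a sketch, the Estimate option with a 10 percent sample, and the Delete option, might be expressed as follows for the SH.PRODUCT_HISTORY table:

SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS('SH', 'PRODUCT_HISTORY', estimate_percent => 10);

SQL> EXECUTE DBMS_STATS.DELETE_TABLE_STATS('SH', 'PRODUCT_HISTORY');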
After choosing a collection and sampling method, click Next to display the Object Selection screen.
This screen lets you focus your statistics collection by schema, table, index, partition, or the entire database. Click OK to display the statistics summary screen.
Click the Options button to specify the analysis method, sample method, and other options related to gathering the table statistics, and then click Next to move to the fourth EM Gather Statistics Wizard screen.
Accepting the default values generates a system job ID and runs immediately for one time only. If desired, you can change the frequency and time for the statistics-gathering process. Click Next to display the final screen of the EM Gather Statistics Wizard.
Once the job is complete, it is moved to the Run History tab on the Scheduler Jobs screen where its output can be inspected for job success or failure and any associated runtime messages.
Using the DBMS_STATS package, you can gather table statistics much faster by performing the analysis in parallel, automatically gather statistics on highly volatile tables, and bypass gathering statistics on static tables.
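As a sketch of these capabilities, the DEGREE parameter of DBMS_STATS runs the analysis in parallel, and the GATHER STALE option (which assumes table monitoring has been enabled) confines collection to tables whose contents have changed significantly since the last analysis:

SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS('SH', 'PRODUCT_HISTORY', degree => 4);

SQL> EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('SH', options => 'GATHER STALE');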
The following example shows how the DBMS_STATS package can be used to gather statistics on the PRODUCT_HISTORY table in SH's schema:

SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS('SH','PRODUCT_HISTORY');
You can use the DBMS_STATS package to analyze tables, indexes, an entire schema, or the whole database. Some of the procedures available within the DBMS_STATS package are shown here:
Procedure Name         Description
GATHER_INDEX_STATS     Gathers statistics on a specified index
GATHER_TABLE_STATS     Gathers statistics on a specified table
GATHER_SCHEMA_STATS    Gathers statistics on a specified schema
GATHER_DATABASE_STATS  Gathers statistics on an entire database
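Each procedure follows the same calling pattern as the table-level example above. For instance, the HR.JOB_ID_PK index from the earlier rebuild example could be analyzed individually, or all of SH's objects could be analyzed at once:

SQL> EXECUTE DBMS_STATS.GATHER_INDEX_STATS('HR', 'JOB_ID_PK');

SQL> EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('SH');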
The presence of accurate optimizer statistics has a big impact on two important measures of overall system performance: throughput and response time.
Performance considerations for transactional systems usually revolve around throughput maximization.
Another important metric related to performance is response time. Response time is the amount of time that it takes for a single user's request to return the desired result when using an application; for example, the time it takes for the system to return a listing of all the customers who purchased products that require service contracts.

Telling ADDM about Your Server I/O Capabilities

Both throughput and response time are impacted by disk I/O activity. In order for ADDM to make meaningful recommendations about the I/O activity on your server, you need to give ADDM a reference point against which to compare the I/O statistics it has gathered. This reference point is defined as the "expected I/O" of the server. By default, ADDM uses an expected I/O rate of 10,000 microseconds (10 milliseconds). This means that ADDM expects that, on average, your server will need 10 milliseconds to read a single database block from disk.
Using operating system utilities, we performed some I/O tests against our large storage area network disk array and found that the average time needed to read a single database block was about 7 milliseconds (7,000 microseconds). To give ADDM a more accurate picture of our expected I/O speeds, we used the DBMS_ADVISOR package to tell ADDM that our disk subsystem was faster than the default 10 millisecond value:

EXECUTE DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM', 'DBIO_EXPECTED', 7000);
Without this adjustment, ADDM might have concluded that our I/O rates were better than average (7 milliseconds instead of 10 milliseconds) when in fact they were only average for our system. This inaccurate assumption regarding I/O would have affected nearly every recommendation ADDM made and would almost certainly have resulted in sub-par system performance.
Performance tuning considerations for decision-support systems usually revolve around response time minimization.
EM Database Control can be used to both monitor and react to sudden changes in performance metrics like throughput and response time.
Click the metric you want to examine to expand the available information.
Click the Database Block Changes (Per Second) link to display details on the number of database blocks that were modified by application users, per second, for any period between the last 24 hours and the last 31 days.
You can also see that the Warning threshold associated with this metric is 85 block changes per second, that the Critical threshold is 95 block changes per second, and that one or both of those thresholds were exceeded on two occasions.
EM Database Control also provides a rich source of performance-tuning information on the Performance tab of the EM Database Control main screen. The Performance tab is divided into three sections of information.
The Host section of the Performance tab shows run queue length and paging information for the host server hardware. The Run Queue Length graph indicates how many processes were waiting to perform processing during the previous one-hour period. The Paging Rate graph shows how many times per second the operating system had to page out memory to disk during the previous one-hour period.
In addition to other metrics, the Sessions: Waiting And Working section of the Performance tab always shows CPU and I/O activity per session for the previous one-hour period.
This portion of the Performance tab graphically depicts the logons and transactions per second and the physical reads and redo activity per second. You can also view these metrics on a per-transaction basis instead of per session by clicking the Per Transaction button below the graph.
Suppose you notice a drop in database performance within the last 30 minutes. Using the EM Database Control Performance tab, you can drill down into the detail of any of the performance metrics summarized on the tab and identify the source of the problem using these techniques.