A datafile belongs to only one tablespace and only one database at a time. Tempfiles are a special variety of datafile used in temporary tablespaces; unlike ordinary datafiles, they are not initialized by filling with binary zeros.
With AUTOEXTEND, a datafile is resized automatically as needed (the setting applies to an individual datafile, not to the tablespace). This helps avoid out-of-space failures in applications.
ALTER DATABASE DATAFILE <'file name'> AUTOEXTEND ON NEXT 100M MAXSIZE 8000M;
To resize a datafile manually, use the ALTER DATABASE DATAFILE statement, like this:
ALTER DATABASE DATAFILE <'file name'> RESIZE 2000M;
To relocate a datafile, take it offline, move it with an operating system command, rename it, recover it (to sync the file header with the rest of the database), and then bring it back online. Here is an example:
ALTER DATABASE DATAFILE <'file name'> OFFLINE;
HOST COPY C:\ORACLE\DATA02.DBF C:\ORACLE\ORADATA\ORA10\DATA02.DBF
ALTER DATABASE RENAME FILE 'C:\ORACLE\DATA02.DBF' TO 'C:\ORACLE\ORADATA\ORA10\DATA02.DBF';
RECOVER DATAFILE 'C:\ORACLE\ORADATA\ORA10\DATA02.DBF';
ALTER DATABASE DATAFILE 'C:\ORACLE\ORADATA\ORA10\DATA02.DBF' ONLINE;
A schema is a collection of database objects owned by a specific database user. A schema has the same name as its Oracle user, so the two terms are often used synonymously.
Schema objects include the segments (tables, indexes, and so on) as well as nonsegment objects such as constraints, views, synonyms, procedures, and packages.
All these are owned by a database user.
Objects not owned by a user, such as roles, tablespaces, and directories, are not schema objects.
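The distinction can be seen in DDL. In this hypothetical example (table, view, and tablespace names are invented for illustration), the table and view belong to the schema of the connected user, while the tablespace belongs to the database as a whole:

```sql
-- Segment schema object: owned by the connected user
CREATE TABLE employees (
    emp_id  NUMBER,
    name    VARCHAR2(50)
);

-- Nonsegment schema object: also owned by the connected user
CREATE VIEW emp_names AS
    SELECT name FROM employees;

-- NOT a schema object: a tablespace is owned by the database, not a user
CREATE TABLESPACE app_data
    DATAFILE 'app_data01.dbf' SIZE 100M;
```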
Oracle10g has several built-in datatypes that can be used in tables.
Character datatypes store alphanumeric data in the database character set or the Unicode character set. The database character set is specified when the database is created and indicates which languages can be represented in the database. The US7ASCII character set supports the English language as well as any other language that uses a subset of the English alphabet. The WE8ISO8859P1 character set supports several European languages, including English, French, German, and Spanish. The Unicode character set AL16UTF16 is intended to concurrently support every known language, although there are a few not yet included, such as Egyptian hieroglyphs and cuneiform.
The database character datatypes are as follows:
CHAR(size [byte|char]), NCHAR(size) Fixed width types that always store the column-width amount of data, right padding with spaces as needed. The size specification is in bytes if you do not include the keyword char. The NCHAR variation uses the Unicode character set, and the size is always given in characters.
VARCHAR(size [byte|char]), VARCHAR2(size [byte|char]), NVARCHAR2(size) Variable width types. Unlike their CHAR counterparts, the VARCHAR types store only the amount of data that is actually used. The size specification is in bytes if you do not include the keyword char. The NVARCHAR2 variation uses the Unicode character set and is always given in characters. VARCHAR and VARCHAR2 are synonymous in Oracle10g, but Oracle reserves the right to change comparison semantics of VARCHAR in future releases; so the VARCHAR2 type is preferred.
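The byte and character length semantics described above can be combined in one table definition. This is a hypothetical sketch (the table and column names are invented):

```sql
CREATE TABLE name_demo (
    code    CHAR(2),             -- fixed width: always 2 bytes, space padded
    name_b  VARCHAR2(20 BYTE),   -- variable width: up to 20 bytes
    name_c  VARCHAR2(20 CHAR),   -- up to 20 characters, however many bytes
                                 -- each character needs in the character set
    name_n  NVARCHAR2(20)        -- Unicode; size is always in characters
);
```

Note that VARCHAR2 is used rather than VARCHAR, per the preference stated above.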
LONG A legacy datatype that exists for backward compatibility. It stores variable-length alphanumeric data up to 2GB in size. There are many restrictions on the usage of the columns of type LONG: there is a limit of one column of type LONG per table, tables containing a LONG cannot be partitioned, LONG datatypes cannot be used in subqueries, and few functions will work with LONG data. The CLOB datatype is the preferred datatype for character data larger than VARCHAR2.
Numeric datatypes can store positive and negative fixed and floating-point numbers, zero, infinity, and the special value Not A Number.
Oracle10g has several datetime datatypes that can store dates, time, and time periods:
DATE: Stores a date and time with a one-second granularity. The date portion can be from January 1, 4712 BCE to December 31, 9999. The time portion of a DATE datatype defaults to midnight, or 00:00:00 hours, minutes, and seconds.
TIMESTAMP[(precision)] Stores a date and time with subsecond granularity. The date portion can be from January 1, 4712 BCE to December 31, 9999. precision is the number of digits of subsecond granularity. precision defaults to 6 and can range from 0 to 9.
TIMESTAMP[(precision)] WITH TIMEZONE: Extends the TIMESTAMP datatype by also storing a time zone offset. This time zone offset defines the difference (in hours and minutes) between the local time zone and UTC (Coordinated Universal Time, also known as Greenwich mean time or GMT). Like TIMESTAMP, precision defaults to 6 and can range from 0 to 9.
TIMESTAMP WITH TIMEZONE values are considered equal if they represent the same chronolog- ical time. For example, 10:00AM EST is equal to 9:00AM CST or 15:00 UTC.
TIMESTAMP[(precision)] WITH LOCAL TIMEZONE: Extends the TIMESTAMP datatype with time zone awareness, but unlike TIMESTAMP WITH TIMEZONE it does not store the time zone offset with the column data.
Instead, the timestamp value is converted from the local time zone to the database time zone when stored. Likewise, when data is retrieved, it is converted from the database time zone to the local time zone. Like TIMESTAMP, precision defaults to 6 and can range from 0 to 9.
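A single table definition can contrast the four datetime types above. This is a hypothetical sketch (table and column names are invented):

```sql
CREATE TABLE event_log (
    created      DATE,                           -- one-second granularity
    logged_at    TIMESTAMP(3),                   -- millisecond granularity
    global_time  TIMESTAMP WITH TIME ZONE,       -- offset stored with value
    local_time   TIMESTAMP WITH LOCAL TIME ZONE  -- normalized to the
                                                 -- database time zone
);
```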
INTERVAL YEAR[(precision)] TO MONTH: Stores a period of time in years and months. precision is the maximum number of digits needed for the year portion of this period, with a default of 2 and a range of 0 to 9. Use the INTERVAL YEAR TO MONTH datatype to store the difference between two datetime values if yearly or monthly granularity is needed.
INTERVAL DAY[(d_precision)] TO SECOND[(s_precision)]: Stores a period of time in days, hours, minutes, and seconds. d_precision is the maximum number of digits needed for the day portion of this period, with a default of 2 and a range of 0 to 9. s_precision is the number of digits to the right of the decimal point needed for the fractional seconds portion of this period, with a default of 6 and a range of 0 to 9. Use the INTERVAL DAY TO SECOND datatype to store the difference between two datetime values if granularity down to a fractional second is needed.
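The two interval types, and the common case of deriving one by subtracting two datetime values, can be sketched as follows (table and column names are hypothetical):

```sql
CREATE TABLE project (
    duration  INTERVAL YEAR(2) TO MONTH,    -- e.g. '1-6': 1 year, 6 months
    downtime  INTERVAL DAY(3) TO SECOND(2)  -- e.g. '2 05:30:00.00'
);

-- Subtracting two TIMESTAMP values yields a DAY TO SECOND interval;
-- end_ts and start_ts are assumed TIMESTAMP columns of some_log
SELECT (end_ts - start_ts) DAY(3) TO SECOND(2) AS elapsed
FROM   some_log;
```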
LOB Datatypes As the name implies, LOB datatypes store large objects, up to 2^32 – 1, or 4,294,967,295, data blocks. With an 8KB data block size, this comes out to about 32TB per field. LOBs are designed for text, image, video, audio, and spatial data. When you create a table with LOB columns, you can specify a different tablespace and different attributes for the LOB data than for the rest of the table. The LOB locator, a kind of pointer, is stored inline with the row and is used to access the LOB data.
CLOB: Stores variable-length character data.
NCLOB Stores variable-length character data using the Unicode character set.
BLOB Stores binary variable-length data inside the database. BLOB data does not undergo character set conversion when passed between databases or between client and server processes.
BFILE Stores binary variable-length data outside the database. BFILEs are limited to a maximum of 4GB of data and even less in some operating systems.
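The option of placing LOB data in its own tablespace, mentioned above, is expressed with a LOB storage clause. This is a hypothetical sketch (table, column, and tablespace names are invented):

```sql
CREATE TABLE document (
    doc_id    NUMBER,
    body      CLOB,    -- character data
    scan      BLOB,    -- binary data; no character set conversion
    original  BFILE    -- pointer to a file outside the database
)
LOB (body, scan) STORE AS (TABLESPACE lob_data);
```

The row itself, holding the LOB locators, stays in the table's tablespace, while the CLOB and BLOB data is stored in lob_data.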
ROWIDs are either physical or logical addresses that uniquely identify each row in an Oracle10g table. The database ROWID datatypes are as follows:
ROWID Stores the base64-encoded physical address of any row in a heap-organized table in the database. ROWIDs incorporate the Object ID (OID), relative file number, block number, and row slot within the block. They are used internally in indexes and via the ROWID pseudocolumn in SQL. You can use ROWID datatype columns in your tables to store "row pointers" to rows in other tables.
UROWID (Universal ROWID) Stores the base64-encoded string representing the logical address of a row in an index-organized table.
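The ROWID pseudocolumn and a stored "row pointer" column, both mentioned above, can be sketched as follows (table and column names are hypothetical):

```sql
-- The ROWID pseudocolumn can be selected from any heap-organized table
SELECT ROWID, emp_id
FROM   employees
WHERE  emp_id = 100;

-- A column of type ROWID storing pointers to rows in another table
CREATE TABLE audit_ref (
    target_row  ROWID,
    noted_on    DATE
);
```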
Binary Datatypes: used to store unstructured data. In Oracle, binary data does not undergo character set conversion when passed from database to database. They are:
RAW(size) Stores unstructured data up to 2000 bytes in size.
LONG RAW: Stores unstructured data up to 2GB in size. Like the LONG datatype, it exists to support backward compatibility, and there are several restrictions on LONG RAW columns—Oracle discourages their use. Consider using the BLOB datatype instead.
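A common use of the RAW type is holding globally unique identifiers: the built-in SYS_GUID() function returns a 16-byte RAW value. This is a hypothetical sketch (table and column names are invented):

```sql
CREATE TABLE session_key (
    id     RAW(16) DEFAULT SYS_GUID(),  -- 16-byte globally unique ID
    token  RAW(32)                      -- up to 32 bytes of binary data
);
```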