Updated: June 2024
To use DMU with a database release older than 11.2.0.3, a database patch must be installed to add the server-side migration functionality required by DMU. The patch is available only for the supported configurations. If your database version is not in the list, please consider upgrading to one of the supported releases. Note that all database releases since 11.2.0.3 are supported on all platforms except IBM z/OS and Fujitsu BS2000.
DMU 23.1 requires JDK 11. DMU 2.1 requires JDK 6 or 7.
While DMU can be run remotely from a different machine, it is highly recommended to run it locally on the same host as the database server to be migrated, for better performance and reduced network overhead. It is also recommended to run the database in dedicated server mode during the migration.
Migration to Unicode is a resource-intensive operation, especially for large databases. It is recommended to perform the migration on sufficiently configured hardware to maximize the migration throughput. The exact hardware requirements vary depending on the size of the database and the targeted downtime window, but it is advisable to have a minimum of 8 CPU cores and 16 GB of memory for migrating databases of 100 GB or above.
Oracle recommends that you run the DMU client on the same host as the database server to be migrated to minimize network overhead. If you must run the DMU client on a different host, the minimum hardware requirements are: CPU speed - 2GHz, Memory - 4GB RAM. Avoid having other jobs running in the same environment if they consume significant system resources.
DMU stores the java executable path in ~/.dmu_jdk for Unix-based platforms. If you remove this file, the tool will ask for the full path again. If the java executable is found in PATH or JAVA_HOME is defined, then it will not prompt the user. On Microsoft Windows, you can edit the JDK location in the file dmu\dmu\bin\dmu32.conf (DMU with 32-bit JDK) or in the file dmu\dmu\bin\dmu64.conf (DMU with 64-bit JDK) under the DMU installation folder. Look for the keyword SetJavaHome.
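For example, assuming a 64-bit JDK 11 installed under C:\Java\jdk-11 (an illustrative path, not a requirement), the corresponding line in dmu64.conf might look like this:

SetJavaHome C:\Java\jdk-11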
Invalid data is often a result of storing data in an encoding different from the database character set using the pass-through configuration, which bypasses the client/server character set conversion. To migrate such data correctly, you must identify the actual encoding used for this data. The DMU character set tagging feature allows you to analyze columns containing invalid data by re-rendering them in different character sets. After the actual character set is confirmed and tagged to the column, it is used in all subsequent data scanning and conversion operations. If you believe all data in the database is stored in a different character set (for example, WE8MSWIN1252 data stored in a WE8ISO8859P1 database), you can set it in the "Assumed Database Character Set" field in the database property tab.
It is also possible that the invalid data is caused by application bugs or by storing binary data in character columns. Please see Chapter 6 of the User's Guide for more cleansing scenarios.
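If you need to inspect the actual bytes stored in a suspect column before deciding which character set to tag it with, a simple query like the following can help (my_table and description are placeholder names):

SELECT description, DUMP(description, 1016) FROM my_table WHERE ROWNUM <= 10;

The format code 1016 makes DUMP print the column's character set name and the stored bytes in hexadecimal, so you can compare them against the encoding you believe the application actually used.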
When migrating non-ASCII data to Unicode, the resulting data may expand in size due to its multi-byte representation in Unicode. Data expansion issues can manifest either as "exceed column limit" issues or "exceed data type limit" issues.
For "exceed data type limit" issues, the options are:
Cleansing actions involving changes to column definitions may not be suitable to be performed on production environments since they typically require corresponding updates in the application code logic. DMU provides scheduled cleansing actions for such cases so that the changes can be saved in the DMU repository for execution later during the conversion phase as part of the downtime window. To define a scheduled cleansing action, select “Schedule Column Modification…” from the cleansing editor context menu on the target column.
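To get a rough idea of how much a particular column would expand, you can compare its current maximum byte length with its byte length after conversion to AL32UTF8. This is only an approximation of what the DMU scan measures, and my_table/description are placeholder names:

SELECT MAX(LENGTHB(description)) AS current_max_bytes,
       MAX(LENGTHB(CONVERT(description, 'AL32UTF8'))) AS post_conversion_max_bytes
FROM my_table;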
In general, the DMU does not support converting data in data dictionary tables in this release, except for the following:
- PL/SQL source code (CREATE PROCEDURE, CREATE FUNCTION, CREATE PACKAGE, CREATE PACKAGE BODY, CREATE TYPE BODY, CREATE TRIGGER, and CREATE LIBRARY); type specifications (CREATE TYPE) are not converted
- View definitions (CREATE VIEW)
- SYS.SCHEDULER$_JOB.NLS_ENV – NLS environment for Database Scheduler jobs (DBMS_SCHEDULER)
- SYS.SCHEDULER$_PROGRAM.NLS_ENV – NLS environment for Database Scheduler job programs (DBMS_SCHEDULER)
- SYS.JOB$.NLS_ENV – NLS environment for legacy jobs (DBMS_JOB)
- CTXSYS.DR$INDEX_VALUE.IXV_VALUE – attribute values of Oracle Text policies
- Columns in tables of the SYS, SYSTEM, and CTXSYS schemas that contain user comments for various database objects
The PL/SQL source code and the view source text are kept in multiple tables. The DMU checks the following columns when processing the source code and view definitions:
- SYS.VIEW$.TEXT – view definition text
- SYS.SOURCE$.SOURCE – PL/SQL and Java source code
- SYS.ARGUMENT$.PROCEDURE$ – PL/SQL argument definitions: procedure name
- SYS.ARGUMENT$.ARGUMENT – PL/SQL argument definitions: argument name
- SYS.ARGUMENT$.DEFAULT$ – PL/SQL argument definitions: default value
- SYS.PROCEDUREINFO$.PROCEDURENAME – names of procedures and functions declared in packages
- SYS.IDL_CHAR$.PIECE – internal representation of PL/SQL
- SYS.PLSCOPE_IDENTIFIER$.SYMREP – internal representation of PL/SQL; this table is new in Oracle Database 11g
The DMU does not report convertible character data in the tables and columns listed above as a convertibility issue. Any convertible data in the remaining tables and columns of the data dictionary is flagged as a convertibility issue in scan reports and on the Migration Status tab. The database conversion step cannot be started before the flagged data is removed. Cleansing operations are not allowed on data dictionary tables.
The SYS schema contains a number of tables with names beginning with WRI$, WRH$%, and WRR$_, which comprise the Automatic Workload Repository (AWR). In addition to historical object statistics, this repository stores snapshots of vital system statistics, such as those visible in various fixed views, for example, V$SYSSTAT and V$SQLAREA.
If non-ASCII characters are used in object names or in SQL statements, for example in character literals or comments, they may get captured into the AWR tables. The DMU scan will report such characters as convertible data dictionary content, which prevents conversion of the database. To get rid of this data completely, recreate the Automatic Workload Repository by logging into SQL*Plus with SYSDBA privileges and running:
SQL> @?/rdbms/admin/catnoawr.sql
SQL> @?/rdbms/admin/catawr.sql
SQL> execute dbms_swrf_internal.register_local_dbid;
As the catawr.sql script is not present in Oracle Database versions 10.2.0.4 and earlier, Oracle recommends that you install the Oracle Database patch set 10.2.0.5 before purging AWR contents.
Modifying the structures of Oracle E-Business Suite schemas is not supported as it may cause the Oracle applications to malfunction. You should only modify such columns if the affected table is a custom table created by you or if you have been advised to do so by Oracle Support.
This is expected because rescanning larger data types on the client side can be very expensive. The cleansing editor filtering and highlighting for CLOB, LONG, and XMLType columns are based on the most recent scan results for the table, which will not change until you rescan the table or column.
The DMU classifies each data cell under only one of the scan result categories. Values that have invalid binary representation issues are classified only as such even if their lengths also exceed column or data type limit after the conversion. The user is generally expected to resolve the invalid data issues and rescan the data before attempting conversion. Since the DMU also allows you to ignore the invalid data issues and force the conversion of a column, you should be aware that forcefully converted values with invalid binary representation may be additionally truncated. You can compare the value of the Maximum Post-conversion Length property of the column with the column and data type length limits to see if the truncation will take place.
The DMU can identify rows with a given type of convertibility issue either by using the rowids collected in the migration repository or by analyzing the column values dynamically on the client as they are fetched from the database. With the latter approach, when only a small percentage of rows in the table meet the filter criteria, the DMU may have to fetch and analyze many rows before it finds enough rows to fill the cleansing editor. From a performance standpoint, it is recommended to scan the table to pre-collect the rowid information and enable the “Use Scan Log to Filter Data” button on the cleansing toolbar before applying the filters.
For tables with a relatively large percentage of convertible data, the DMU can assign the "Copy data using CREATE TABLE AS SELECT" conversion method, which provides a significant performance advantage. The CTAS conversion method is not enabled by default because it may not preserve the rowids of rows in the table. If your applications do not store rowids, you can set the "Consider CTAS with Row Movement Disabled" parameter to "Yes" in the database property tab so that the DMU will assign the CTAS conversion method for optimal conversion performance.
You can monitor the table-level conversion progress in the "View Table Conversion Progress" link on the Conversion Progress tab. It will display the completion percentage for each table based on the status from the V$SESSION_LONGOPS view along with the execution status of individual SQL statements.
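If you also want to watch the long-running conversion statements directly in the database, a generic query against V$SESSION_LONGOPS (not DMU-specific) along the following lines shows the operations still in progress and their completion percentage:

SELECT sid, opname, target, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done
FROM v$session_longops
WHERE totalwork > 0
AND sofar < totalwork;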
The SQL statement ALTER DATABASE CHARACTER SET, which the DMU uses in the conversion phase, succeeds only if the session executing the statement is the only user session logged into the database. Therefore, before starting the conversion, the DMU warns you about any user sessions logged into the database that are not opened by the DMU itself. You may use the following SQL statement in SQL*Plus or SQL Developer to find out the details of the offending sessions:
SELECT sid, serial#, username, status,
osuser, machine, process, program
FROM v$session
WHERE username IS NOT NULL
AND program <> 'Database Migration Assistant for Unicode';
At this point, you can still disconnect the offending sessions and resume the conversion.
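For example, assuming the query returned SID 131 and SERIAL# 48621 for a session you want to remove (example values only), a user with the ALTER SYSTEM privilege could disconnect it with:

ALTER SYSTEM DISCONNECT SESSION '131,48621' IMMEDIATE;

Verify that the session does not belong to a critical job before disconnecting it.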
The error indicates that the DMU was unable to read an external table. This can happen if the data files of the external table are not available in the expected location or are in the wrong format. As external tables are often used to load data from external sources only at certain points in time, it is not uncommon for data files to be unavailable. You can either make sure the data files exist and retry the scan, or ignore these errors, since they will not block any subsequent migration operations.
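If you want to verify where the database expects the data files of a particular external table to be, you can query the dictionary views for external table locations and directory paths (the owner and table name below are placeholders):

SELECT l.owner, l.table_name, l.location, l.directory_name, d.directory_path
FROM dba_external_locations l
JOIN dba_directories d ON d.directory_name = l.directory_name
WHERE l.owner = 'SCOTT' AND l.table_name = 'SALES_EXT';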
The value of the property "Report U+FFFD as an invalid character" on the Scanning sub-tab of the Database Properties tab determines how the DMU interprets the Unicode default replacement character U+FFFD (the byte sequence 0xEF 0xBF 0xBD in AL32UTF8 and UTF8). If the property value is "Yes", the character is treated as invalid data. This is the default behavior, because the presence of this character in data usually indicates that the data is the result of character set conversion of some input that was not properly tagged with its real character set. If you use the character U+FFFD for some internal processing purposes and you do not want the DMU to report it as invalid, change the property value to "No".
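If you want to locate the rows that contain the U+FFFD byte sequence before deciding how to handle them, a rough check is to search for the hex pattern EFBFBD in the raw column bytes. This assumes the suspect data was stored pass-through as UTF-8 bytes, and my_table/description are placeholder names (CLOB columns would need a DBMS_LOB-based search instead):

SELECT ROWID, description
FROM my_table
WHERE INSTR(RAWTOHEX(UTL_RAW.CAST_TO_RAW(description)), 'EFBFBD') > 0;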
The DMU determines the data readiness status based on the results of the most recent scan. Please be sure to rescan the affected database objects after applying cleansing actions to see the effect of the changes and verify that the data issues have been resolved successfully.
If you cleansed a table outside of the DMU, for example by lengthening a column in SQL Developer or SQL*Plus, and the changes are not reflected in the DMU, then refresh the DMU repository by clicking "Refresh DMU Repository..." in the Migration menu.
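For example, lengthening a VARCHAR2 column outside the DMU could be as simple as the following statement (the table name, column name, and new length are placeholders for illustration):

ALTER TABLE my_table MODIFY (description VARCHAR2(400));

After such a change, refresh the DMU repository so the new column definition is picked up, and then rescan the affected table.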
The DMU does not offer a conversion rollback feature per se, but it comes with built-in conversion error handling such that, if the conversion process is interrupted by an error condition, it is possible to resolve the issue and resume the conversion. If you really need to roll back your database to the state before the conversion, you can restore from a backup or use the Flashback Database feature.
Note: The Flashback Database feature has not been tested to work across the ALTER DATABASE CHARACTER SET (ADBCS) statement. While the design of the feature should not conflict with ADBCS, it is recommended that you choose restoring from backup if ADBCS has already been performed during the conversion process, that is, if the query SELECT value FROM nls_database_parameters WHERE parameter='NLS_CHARACTERSET' shows the target character set. If you have no backup available and you are forced to try Flashback Database after ADBCS, make sure you restart the instance immediately after the flashback command. See Oracle Database Backup and Recovery User's Guide for more information on FLASHBACK DATABASE and the requirements that must be met prior to starting the conversion process.
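If you plan to rely on Flashback Database as a fallback, one common precaution (a generic Oracle technique, not a DMU feature; the restore point name below is illustrative) is to create a guaranteed restore point before starting the conversion, so that a later flashback has a well-defined target:

-- Before starting the conversion (the database must meet the Flashback Database prerequisites):
CREATE RESTORE POINT before_unicode_migration GUARANTEE FLASHBACK DATABASE;

-- To revert, the database must be mounted but not open:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_unicode_migration;
ALTER DATABASE OPEN RESETLOGS;

Keep in mind the caveat above: this approach is not recommended once ALTER DATABASE CHARACTER SET has already completed.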
If you notice that a DMU operation, especially one that is normally fast, such as refreshing the repository or calculating the split threshold before scanning, takes unusually long or appears to hang, then verify that the database initialization parameter optimizer_features_enable is not set to an old database version. For example, the value 9.2.0 of this parameter is known to cause certain internal queries in the DMU to use suboptimal execution plans.
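To check the current value and, if necessary, set it back to your actual database release (the release value below is only an example; use the value matching your database), you can run in SQL*Plus:

SHOW PARAMETER optimizer_features_enable

ALTER SYSTEM SET optimizer_features_enable = '11.2.0.4' SCOPE=BOTH;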
When starting on a Unix-like operating system, the DMU looks for a JDK in the following locations in turn:
- the SetJavaHome directive in the file dmu/dmu/bin/dmu.conf
- $APP_JAVA_HOME
- $OIDE_JAVA_HOME
- dmu/jdk
- dmu/../jdk
- $JAVA_HOME
- /usr/java/jdk1.6.* (with highest version)
- $HOME/.dmu_jdk
When starting on a Microsoft Windows operating system, the DMU looks for a JDK in the following locations in turn:
- the SetJavaHome directive in the file dmu\dmu\bin\dmu32.conf if started through dmu32.exe or dmuW32.exe, or in the file dmu\dmu\bin\dmu64.conf if started through dmu64.exe or dmuW64.exe
- dmu\..\jdk
If no JDK is found in these locations, the DMU asks for the JDK location and stores it in SetJavaHome in dmu32.conf or dmu64.conf correspondingly. In each of the locations, the DMU verifies the presence of the files bin/java and jre/bin/java (java.exe on Windows).
If any of the locations above contains an incorrect version of the JDK, this version may be picked up by the DMU, resulting in an error. To solve the issue, modify the file dmu/dmu/bin/dmu[32|64].conf and add the path of the correct JDK installation in the SetJavaHome directive.
It is generally not possible to extrapolate the migration downtime window from the database size alone. Other important factors include the readiness of the data for conversion, the percentage of the data that needs to be converted (non-character data types and 7-bit ASCII data in non-CLOB columns need no conversion), and the size of CLOB columns (CLOBs are typically more expensive to convert than CHAR/VARCHAR2 columns).
To get a more precise estimate of the conversion time, the best approach is to take a clone of the database to be migrated and run the clone through the end-to-end process in a controlled test environment. This will give a good idea of what types of data convertibility issues need to be dealt with prior to the data conversion and the conversion time window one can expect.
This problem is reported in bug #19533216. In Oracle Database 12.1.0.2, a PDB may contain one or more rows with binary data in the data dictionary column SYS.BOOTSTRAP$.SQL_TEXT. The DMU 2.0 reports this data as having invalid representation, which prevents character set conversion of the PDB. The presence of this data is not considered a bug from the Oracle Database perspective.
To solve the problem, upgrade the DMU to the most current version. DMU 2.1 or later contains a fix for this issue.