Sunday, 28 January 2018

Data Pump Export/Import of Data in Oracle 11g on Linux

First of all, create a location on the file system for the dump files.

Log in as the oracle user and create a directory wherever you want. My directory name is "dumps".

[oracle@prod ~]$ mkdir /u01/dumps

[oracle@prod ~]$ ls /u01/

app database dumps

[oracle@prod ~]$
---------------------------------------------------------------------------------
Next, log in to the database as the schema user.

[oracle@prod ~]$ sqlplus has/fcserver

Then issue the command below to create the directory object:

SQL> create or replace directory export_dp as '/u01/dumps/';

Directory created.

export_dp is the logical name you give to the directory, and '/u01/dumps/' is the physical path.

You can check your directory using the command below:

SQL> select directory_name from all_directories;

DIRECTORY_NAME
------------------------------
EXPORT_DP

If the oracle user cannot write to the dump directory (or read an existing dump file), switch to root and adjust the permissions. 755 is enough when oracle owns the directory; 777 is a quick but insecure workaround.

[oracle@localhost ~]$ su - root

Password:

[root@localhost ~]# chmod 777 /u01/dumps

[root@localhost ~]# chmod 755 /u01/dumps

[root@localhost ~]# chmod 777 /u01/dumps/greens-26012018_230101.dmp

[root@localhost ~]# su - oracle
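As a side note, the two chmod modes above differ in who may write. A minimal sketch of what each mode means, using a hypothetical scratch directory rather than /u01/dumps:

```shell
# Hypothetical scratch directory, used only to illustrate the modes above.
d=$(mktemp -d)

chmod 777 "$d"        # rwxrwxrwx: any user may read, write and enter
stat -c '%a' "$d"     # prints 777

chmod 755 "$d"        # rwxr-xr-x: only the owner may write
stat -c '%a' "$d"     # prints 755

rmdir "$d"
```

755 is the safer choice when the oracle user owns the directory; 777 lets any local user write there.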
-----------------------------------------------------------------------------
Now exit from the SQL prompt, come back to the shell prompt, and issue the statement below to take the export.

[oracle@prod ~]$ expdp has/fcserver@fort directory=export_dp dumpfile=has-$(date +%d%m%Y_%H%M%S).dmp logfile=has-$(date +%d%m%Y_%H%M%S).log schemas=has

OR

[oracle@prod ~]$ expdp hr/hr@DB11G directory=export_dp dumpfile=dump_hr.dmp logfile=log_hr.log schemas=hr
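The first form relies on shell command substitution: the shell expands $(date +%d%m%Y_%H%M%S) into a fresh timestamp before expdp ever runs, so each export gets a unique dump and log file. A minimal sketch of just the expansion:

```shell
# The shell evaluates $(date ...) first, then passes the finished
# name to expdp, e.g. has-28012018_230101.dmp
fname="has-$(date +%d%m%Y_%H%M%S).dmp"
echo "$fname"
```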
------------------------------------------------------------------------------
As you can see, the tables have been exported. Cross-verify that the dump file is in the location:

[oracle@prod ~]$ ls /u01/dumps/

dump_has.dmp log_has.log
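Before moving on to the import, here is a small sketch of a sanity check you could script; the path and file name are the ones from this walkthrough, so adjust them to yours:

```shell
# check_dump: succeed only if the given dump file exists and is non-empty.
check_dump() {
    [ -s "$1" ]
}

# Warn before attempting impdp against a missing or empty file.
check_dump /u01/dumps/dump_has.dmp || echo "dump file missing or empty" >&2
```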
-------------------------------------------------
Now log in to the database as the SYS user and create the new schema. First you must create the tablespace for the user, and then create the user. My tablespace name is "has_tablespace" and the user is "has".

[oracle@prod ~]$ sqlplus / as sysdba

SQL> create tablespace has_tablespace datafile 'has.dat' size 20M autoextend on;

Tablespace created.
-----------------------------------------------------------------------------------------------------------------------
Dropping a Tablespace:

Example: The following statement drops the has_tablespace tablespace along with all referential integrity constraints that refer to primary and unique keys inside has_tablespace:

DROP TABLESPACE has_tablespace 
    INCLUDING CONTENTS 
        CASCADE CONSTRAINTS; 

Deleting Operating System Files:

Example: The following statement drops the has_tablespace tablespace and deletes all associated operating system datafiles:

DROP TABLESPACE has_tablespace
   INCLUDING CONTENTS AND DATAFILES;
----------------------------------------------------------------------------------------------------------------------
To create the user has and assign has_tablespace as the default tablespace for the user:

SQL> create user has identified by fcserver default tablespace has_tablespace temporary tablespace temp quota unlimited on has_tablespace;

User created.

Now grant whatever access the user needs. Here I grant DBA rights to the user.

SQL> grant dba to has;

Grant succeeded.

SQL>

--------------------------------------------------------------------------------------------------------------------
Now log in as the user and create the import directory:

SQL> connect has/fcserver

Connected.

SQL> create or replace directory import_dp as '/u01/dumps/';

Directory created.

Now exit from SQL*Plus, come back to the shell prompt, and issue the command below.

[oracle@prod ~]$ impdp has/fcserver@fort directory=import_dp dumpfile=greens-26012018_230101.dmp logfile=log_has.log REMAP_SCHEMA=sys:has


If the import fails with the errors below, Data Pump's metadata stylesheets need to be reloaded:

ORA-39006: internal error

ORA-39213: Metadata processing is not available

Log in as SYSDBA and run:

[oracle@prod ~]$ sqlplus / as sysdba

SQL> execute sys.dbms_metadata_util.load_stylesheets;

PL/SQL procedure successfully completed.
SQL>


--------------------------------------------------------------------------------------------------------------------------
