Series: Project Lockdown
A phased approach to securing your database infrastructure
Updated August 2010
Phase 4
Duration: One Quarter
You've reached the final and longest phase of your security and compliance project. Here you will analyze some of the more complex setups and do long-term planning to eliminate potential threats.
Covered in this Installment:
- 4.1 Enable Fine-Grained Auditing
- 4.2 Activate a Virtual Private Database
- 4.3 Mask Sensitive Columns
- 4.4 Encrypt Sensitive Data
- 4.5 Secure Backups
- 4.6 Mine History from Archived Logs
- 4.7 Conclusion
4.1 Enable Fine-Grained Auditing
Background
Thus far you may have noticed that triggers haven't been mentioned as an auditing mechanism. Why not? Because a trigger carries the overhead of a secondary SQL execution for every audited statement, which adds to overall execution time. This overhead is rarely acceptable for production systems.
Fine Grained Auditing (FGA) is an excellent solution here because of its minimal performance overhead. If you are already familiar with FGA, skip ahead to the “Strategy” subsection.
FGA was introduced in Oracle9i Database to record an audit trail when a user selects from a table, not just changes it. This was a landmark feature, because previously there was no useful way to record SELECT activity: regular auditing enabled on SELECT statements merely recorded that a user issued a statement against an object, not what the statement actually did.
In addition to details such as username, terminal, and time of the query, FGA records the SQL statement that was executed as well as the system change number (SCN) at that instant. This allows you to see not only the actual statement issued by the user but also the actual values the user saw, by reconstructing the data with flashback queries. In Oracle Database 10g Release 1, this facility was extended to cover the data manipulation language (DML) statements INSERT, UPDATE, and DELETE as well.
Here’s a small example. Suppose there is a table named ACCOUNTS in the schema called BANK:
Name Null? Type
---------------- -------- ------------
ACCT_NO NOT NULL NUMBER
CUST_ID NOT NULL NUMBER
BALANCE NUMBER(15,2)
To enable auditing on it, define an FGA policy on the table using the supplied package DBMS_FGA:
begin
dbms_fga.add_policy (
object_schema=>'BANK',
object_name =>'ACCOUNTS',
policy_name =>'ACCOUNTS_ACCESS'
);
end;
After the above code is executed, FGA is active on the table ACCOUNTS. That's it—there's no need to change any initialization parameter or bounce the database.
The code takes only a shared lock on the table while executing, so you can enable the policy online. Now, if a user named ARUP selects from the table by issuing the following
select * from bank.accounts;
the action is immediately recorded in the audit trail known as FGA Audit Trail. You can check it with the following:
select timestamp,
db_user,
os_user,
object_schema,
object_name,
sql_text
from dba_fga_audit_trail;
TIMESTAMP DB_USER OS_USER OBJECT_ OBJECT_N SQL_TEXT
--------- ------- ------- ------- -------- ---------------------------
08-FEB-06 ARUP arup BANK ACCOUNTS select * from bank.accounts
The view shows many other columns as well.
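Among them are SCN and timestamp columns. With the recorded SCN, you can reconstruct exactly what ARUP saw at that moment by re-running the captured statement as a flashback query (the SCN value here is illustrative):

select *
from bank.accounts
as of scn 1229901;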
You can customize FGA in a variety of ways. For instance, if you want to record accesses to the column BALANCE, and only when the balance is 20,000 or more, you could add parameters while defining the FGA policy:
begin
dbms_fga.add_policy (
object_schema =>'BANK',
object_name =>'ACCOUNTS',
policy_name =>'ACCOUNTS_ACCESS',
audit_column => 'BALANCE',
audit_condition => 'BALANCE >= 20000'
);
end;
Sidebar: Separating Roles
One of the most contentious issues in database security is that of role separation. To administer a database, the DBA must have certain privileges, which are often rolled into a role like SYSDBA or even DBA. However, this role also has powers such as granting privileges to others and selecting all data from any table. This super-user privilege goes against the grain of most regulations such as Sarbanes-Oxley and the Gramm-Leach-Bliley Act (GLBA) because it allows a single person to become too powerful.
The DBA role is a purely technical one that relates to the physical database and that is generally assigned to a pool of people. If the same people also have the ability to see and change any user data, then this creates a serious potential security risk of data vulnerability. Accountability is questionable, as the DBA can eliminate any data—even the audit trail. However, most compliance requirements demand a clear separation of duties and abilities. Thus, the privileges required to administer the physical database should not include those to query or modify user data.
Traditionally, Oracle Database has not supported that separation. Therefore, in 2006 Oracle announced two revolutionary tools (both in beta at the time of this writing). The first, Oracle Database Vault (an option of Oracle Database 10g Release 2 Enterprise Edition and later), allows separation of privileges by creating “realms” of authorization. For instance, the DBA role will no longer have carte blanche to query and modify any table. Instead, the role will be limited to a specific domain of privileges, which is implemented as a realm. Thus, Oracle Database Vault does away with the concept of an omnipotent super-user who can erase all traces. Similarly, Oracle Audit Vault, a standalone product, provides a secure auditing facility that can be in a separate realm outside the DBA’s control or that of the application manager; the audit trail can be controlled only by a security administrator. Using these two tools, the three relevant roles here—administering the physical database, managing object privileges, and managing the audit trail—can be completely separated, the end state demanded by almost all compliance provisions.
Another feature of FGA is that it can call a stored procedure, in addition to recording in the audit trail, when the conditions are satisfied. This offers an enormous advantage in cases that need additional processing, such as sending e-mail alerts. Such stored procedures are called handler modules of the FGA policy. Remember, FGA can be activated by SELECT statements, which can then invoke these handler modules. In a way, that makes handler modules "triggers on SELECT."
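For illustration, here is a minimal sketch of what such a handler might look like. An FGA handler module must accept the schema, object, and policy names as its three parameters; everything else here is an assumption for the sketch (UTL_MAIL must be installed and SMTP_OUT_SERVER configured, and the e-mail addresses are illustrative):

create or replace procedure enqueue_message
(
   p_object_schema in varchar2,
   p_object_name   in varchar2,
   p_policy_name   in varchar2
)
as
begin
   -- notify the security officer; keep handlers simple and error free
   -- (see the "Implications" subsection for why errors here are dangerous)
   utl_mail.send (
      sender     => 'fga_alerts@example.com',
      recipients => 'security_officer@example.com',
      subject    => 'FGA alert: '||p_object_schema||'.'||p_object_name,
      message    => 'Policy '||p_policy_name||' fired for user '||USER
   );
end;
/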
Oracle Database 10g introduced more FGA enhancements as well, such as these two noteworthy features:
Bind variables. FGA can capture the values of bind variables used in audited statements. You control this with a parameter of the DBMS_FGA.ADD_POLICY procedure:
audit_trail => DBMS_FGA.DB_EXTENDED
(The similarly named initialization parameter AUDIT_TRAIL=DB_EXTENDED applies to regular auditing; it goes in the database initialization parameter file so that SQL text and bind values are recorded there all the time.)
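A minimal sketch applying this to the earlier BANK.ACCOUNTS table (the policy name is illustrative):

begin
   dbms_fga.add_policy (
      object_schema => 'BANK',
      object_name   => 'ACCOUNTS',
      policy_name   => 'ACCOUNTS_ACCESS_EXT',
      audit_trail   => dbms_fga.db_extended  -- capture SQL text and bind values
   );
end;
/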
Relevant columns. Consider these two statements:
select balance from accounts where account_no = 9995;
select sum(balance) from accounts;
The first one clearly asks for a specific piece of sensitive information identifiable to a customer, something you may want to audit. The second one is more benign, whereby the user does not find any specific sensitive data about a customer. In your security policy, you may want to record the first statement but not the second. This will help limit the size of the trail.
You can do that by using another parameter, audit_column_opts => DBMS_FGA.ALL_COLUMNS, while defining the policy:
begin
dbms_fga.add_policy (
object_schema => 'ANANDA',
object_name => 'ACCOUNTS',
policy_name => 'ACCOUNTS_SEL',
audit_column => 'ACCOUNT_NO, BALANCE',
audit_condition => 'BALANCE >= 20000',
statement_types => 'SELECT',
audit_column_opts => DBMS_FGA.ALL_COLUMNS
);
end;
The default is DBMS_FGA.ANY_COLUMNS, which triggers an audit trail whenever any of the columns is selected.
Here we have merely scratched the surface of FGA. You will also find extensive discussions on FGA in my book Oracle PL/SQL for DBAs (O’Reilly Media, 2005).
Strategy
The biggest advantage of FGA over regular auditing is that it does not require any specific initialization parameters and therefore does not need a database bounce. You can enable or disable FGA policies at will on objects.
Specifically, you should be looking for sensitive columns, and optionally, sensitive values in those columns. First, you need to formulate a policy for auditing. Here is a typical example of such a policy:
In the table SALARIES:
- Audit when someone selects only the columns SALARY and BONUS. Do not audit when other columns are selected.
- Audit even when the user selects only one column such as SALARY alone, without any identifying information such as EMPNO.
- Audit whenever anyone selects any column from the table for EMPNOs below 1,000, which are reserved for executive management.
- Do not audit when the user SAP_PAYROLL_APP selects. This is the account userid used by the payroll processing application, and auditing this will generate volumes of data and fill up the trail.
- HR managers Jane and Sally regularly check and adjust the salary of employees in the low grades whose salary falls under 1,000. Their SELECT statements for salaries under 1,000 should not be audited.
- In addition to auditing, execute a stored procedure, ENQUEUE_MESSAGE, that sends an e-mail to the security officer, Scott.
With this in mind, you should build three different FGA policies.
First, the universal policy for EMPNO >= 1000 and for the columns SALARY and BONUS only:
begin
   dbms_fga.add_policy (
      object_schema   => 'ACCMAN',
      object_name     => 'SALARIES',
      policy_name     => 'SAL_SEL_UNIV',
      audit_column    => 'SALARY, BONUS',
      statement_types => 'SELECT',
      audit_condition => 'EMPNO >= 1000 and USER NOT IN (''SAP_PAYROLL_APP'', ''JANE'', ''SALLY'')',
      handler_module  => 'ENQUEUE_MESSAGE'
   );
end;
Second, build the all-columns policy for EMPNO < 1000:
begin
   dbms_fga.add_policy (
      object_schema   => 'ACCMAN',
      object_name     => 'SALARIES',
      policy_name     => 'SAL_SEL_EXEC',
      statement_types => 'SELECT',
      audit_condition => 'EMPNO < 1000',
      handler_module  => 'ENQUEUE_MESSAGE'
   );
end;
Third, add the special policy for Jane and Sally, auditing their access only when the salary is 1,000 or more:
begin
   dbms_fga.add_policy (
      object_schema   => 'ACCMAN',
      object_name     => 'SALARIES',
      policy_name     => 'SAL_SEL_SPEC',
      audit_column    => 'SALARY, BONUS',
      statement_types => 'SELECT',
      audit_condition => 'EMPNO >= 1000 AND SALARY >= 1000 AND USER IN (''JANE'', ''SALLY'')',
      handler_module  => 'ENQUEUE_MESSAGE'
   );
end;
As you can see, the audit conditions of the three policies are mutually exclusive, so any given SELECT satisfies at most one policy and only one audit record is written.
Using this strategy, you can build a set of FGA policies. You can then enable and disable policies at will without impacting operation.
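Disabling and re-enabling a policy is a single call each way; here applied to the BANK.ACCOUNTS policy created earlier:

begin
   dbms_fga.disable_policy (
      object_schema => 'BANK',
      object_name   => 'ACCOUNTS',
      policy_name   => 'ACCOUNTS_ACCESS'
   );
end;
/
begin
   dbms_fga.enable_policy (
      object_schema => 'BANK',
      object_name   => 'ACCOUNTS',
      policy_name   => 'ACCOUNTS_ACCESS'
   );
end;
/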
Implications
There are four serious implications:
If a handler module is defined and it throws errors while the rows are being retrieved, the behavior differs across Oracle Database versions:
• In Oracle9i Database, the query silently skips the rows that satisfy the audit condition, without reporting an error. So, if there are 100 rows and 4 of them satisfy the audit condition, the handler module fires four times and fails each time; the query returns only 96 rows without reporting any error, and you will never know it happened. This obviously leads to inaccurate results.
• In Oracle Database 10g Release 1, it will ignore the errors in the handler module and retrieve all 100 rows as expected.
• In Oracle Database 10g Release 2, it will report the error on the user’s session performing the query without returning any rows—not even the 96 rows that didn’t satisfy the audit condition and didn’t execute the handler function.
Therefore, test the FGA handler module thoroughly before implementing it.
The audit trail table—FGA_LOG$—is in the SYSTEM tablespace. As more FGA entries are generated, the tablespace fills up, which may cause the database to halt. Note that in Oracle Database 11g Release 2, it is possible to move the FGA_LOG$ table to a different tablespace (described in detail in Phase 1 of this article series).
The audit trail is written to a table, although asynchronously. This induces a transaction as well as I/O, which adds to the overall I/O in the database. If your database is I/O bound, you will see a performance impact on the entire database as a result of FGA.
The audit trails are written asynchronously using autonomous transactions. Therefore even if a user rolls back the transaction, the trail entry is not deleted, leading to false positives. If you are thinking about using FGA as a foolproof mechanism to identify users, you should be aware of these false positives.
Action Plan
- Identify sensitive tables and columns.
- Identify degree of sensitivity to access—for example, salary below 500 is OK.
- Put all possible combinations down on paper and then combine them into WHERE conditions (predicates) in such a way that any given access satisfies exactly one predicate, no more.
- Build the FGA policy from those predicates.
- Enable FGA policies.
- After some time, analyze the FGA audit trail files.
- Build a purge schedule and purge the FGA trail table.
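For the last two steps, here is a minimal archive-and-purge sketch, assuming Oracle Database 11g Release 2's DBMS_AUDIT_MGMT package and a 30-day retention (adjust to your own policy; on earlier releases, the common practice was to archive the rows and delete from FGA_LOG$ directly):

begin
   -- one-time initialization for the FGA trail
   dbms_audit_mgmt.init_cleanup (
      audit_trail_type         => dbms_audit_mgmt.audit_trail_fga_std,
      default_cleanup_interval => 24
   );
end;
/
begin
   -- mark everything older than 30 days as archived, then purge it
   dbms_audit_mgmt.set_last_archive_timestamp (
      audit_trail_type  => dbms_audit_mgmt.audit_trail_fga_std,
      last_archive_time => systimestamp - interval '30' day
   );
   dbms_audit_mgmt.clean_audit_trail (
      audit_trail_type        => dbms_audit_mgmt.audit_trail_fga_std,
      use_last_arch_timestamp => TRUE
   );
end;
/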
4.2 Activate a Virtual Private Database
Background
If you are already familiar with application contexts and Oracle Virtual Private Database (VPD; also known as Row Level Security or Fine Grained Access Control), you may skip this subsection and jump straight to the “Strategy” subsection.
(VPD is a large topic, so I will just cover the basics here. As with FGA, more information can be found in my book Oracle PL/SQL for DBAs).
Suppose you have a table called ACCOUNTS with the following data:
SQL> select * from accounts;
ACCNO ACC_NAME ACC_BAL
---------- -------------------- ----------
1 John 1000
2 Jill 1500
3 Joe 1200
4 Jack 1300
You want to make sure that people see only the account balances their authorization levels allow. That is, Level 1 should not see balances of more than 1,000, Level 2 should not see balances of more than 1,200, and Level 3 should see all. You have another table showing the users and their levels:
SQL> select * from userlevels;
USERNAME USERLEVEL
------------------------------ ----------
CLERK1 1
CLERK2 2
CLERK3 3
To hold the user’s level when they first log in, you will need to create an application context:
create context user_level_ctx using set_user_level_ctx;
and its associated trusted procedure:
create or replace procedure set_user_level_ctx
(
p_level in number
)
as
begin
dbms_session.set_context (
'USER_LEVEL_CTX',
'LEVEL',
p_level
);
end;
Then you will need to create a login trigger to set the proper application context:
create or replace trigger tr_set_user_level
after logon
on database
declare
l_level number;
begin
select userlevel
into l_level
from arup.userlevels
where username = user;
set_user_level_ctx (l_level);
exception
when NO_DATA_FOUND then
null;
when OTHERS then
raise;
end;
This sets the stage for setting the user levels in the application context attributes. Let’s test to make sure:
SQL> conn clerk1/clerk1
Connected.
SQL> select sys_context('USER_LEVEL_CTX','LEVEL') from dual;
SYS_CONTEXT('USER_LEVEL_CTX','LEVEL')
--------------------------------------
1
SQL> conn clerk2/clerk2
Connected.
SQL> select sys_context('USER_LEVEL_CTX','LEVEL') from dual;
SYS_CONTEXT('USER_LEVEL_CTX','LEVEL')
--------------------------------------
2
SQL> conn clerk3/clerk3
Connected.
SQL> select sys_context('USER_LEVEL_CTX','LEVEL') from dual;
SYS_CONTEXT('USER_LEVEL_CTX','LEVEL')
--------------------------------------
3
As you can see, each user ID properly sets the levels. Now, you can build the VPD on the table. The whole VPD infrastructure can be controlled using the supplied PL/SQL package DBMS_RLS; the rules governing which rows should be shown are controlled by a concept called a policy. A policy applies a “predicate” (a WHERE condition) to all the queries on the table, effectively restricting the access to rows. The WHERE condition is generated by a function called a policy function. So, first we have to create the policy function that returns a WHERE condition to be applied to the queries:
create or replace function get_acc_max_bal
(
p_schema in varchar2,
p_obj in varchar2
)
return varchar2
as
l_ret varchar2(2000);
begin
select
case userlevel
when 1 then 'acc_bal <= 1000'
when 2 then 'acc_bal <= 1200'
when 3 then null
else
'1=2'
end
into l_ret
from userlevels
where username = USER;
return l_ret;
end;
then add the policy:
begin
dbms_rls.add_policy (
object_name => 'ACCOUNTS',
policy_name => 'ACC_MAX_BAL',
policy_function => 'GET_ACC_MAX_BAL',
statement_types => 'INSERT, UPDATE, DELETE, SELECT',
update_check => TRUE
);
end;
At this time, the table is protected. When CLERK1 logs in and selects from the table:
SQL> select * from arup.accounts;
ACCNO ACC_NAME ACC_BAL
---------- -------------------- ----------
1 John 1000
CLERK1 sees only ACCNO 1, with balance 1,000. Because he is not authorized to see anything above that account balance, the other accounts are invisible to him. But when CLERK2 logs in:
SQL> conn clerk2/clerk2
Connected.
SQL> select * from arup.accounts;
ACCNO ACC_NAME ACC_BAL
---------- -------------------- ----------
1 John 1000
         3 Joe                        1200
she can see ACCNO 3 as well. The balance of ACCNO 3 is 1,200, within the authorized limit for CLERK2.
Using this technique, you can build a sort of restricted view into the table. This will be a very handy tool in Project Lockdown.
Strategy
The key is to find out what information is to be protected from all parties and on which columns. This sounds much easier than it actually is. Collecting this information requires business knowledge, or at least collaboration with someone more familiar with those processes. Once you identify the tables and columns, you should be able to implement the VPD policy as shown in the examples in the “Background” subsection above.
What if you want to let some users have unrestricted access to the tables even though the VPD policy is in effect? The system privilege EXEMPT ACCESS POLICY does exactly that:
grant exempt access policy to ananda;
From that point on, ANANDA will bypass all access policies defined on all tables.
It is probably unacceptable to allow a user to bypass all access restrictions, however; a better solution is to code it in the policy function itself. A good example is the schema owner of the table—you definitely want it to see all the rows of the table it owns without restriction. You can code it in the policy function as follows:
create or replace function get_acc_max_bal
(
p_schema in varchar2,
p_obj in varchar2
)
return varchar2
as
l_ret varchar2(2000);
begin
if (p_schema = USER) then
l_ret := NULL;
else
select
case userlevel
when 1 then 'acc_bal <= 1000'
when 2 then 'acc_bal <= 1200'
when 3 then null
else
'1=2'
end
into l_ret
from userlevels
where username = USER;
end if;
return l_ret;
end;
This version of the function returns NULL when the owner of the table logs in (p_schema = USER) and therefore provides unrestricted access to the table. You can, of course, make any changes in the function to allow more users to bypass the VPD policy.
The biggest challenge in VPD is putting the restriction on the child tables. Your limiting condition may be on a column called, say, ACC_BAL, but the child tables may not have that column. So how can you restrict those tables?
For example, suppose a table called ADDRESSES contains the customer addresses. This table does not have a column called ACC_BAL, so how can you put the same restriction on it as on ACCOUNTS? There are two ways:
The first way is to add a column ACC_BAL on the table ADDRESSES. This column may be updated through a trigger on the main table ACCOUNTS. Now you can define a policy on the table using the same policy function used on ACCOUNTS.
The second way is to use a different policy function for ADDRESSES. In this function, the return value should be
ACCNO in (SELECT ACCNO FROM ACCOUNTS)
This is the predicate used in the policy restriction. So, the address will be displayed only if the account exists on ACCOUNTS, and because the table ACCOUNTS is restricted anyway, the table ADDRESSES will be automatically restricted.
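A minimal sketch of such a policy function for ADDRESSES, following the same pattern as GET_ACC_MAX_BAL (the function name is illustrative):

create or replace function get_addr_policy
(
   p_schema in varchar2,
   p_obj    in varchar2
)
return varchar2
as
begin
   -- let the schema owner see all rows, as in the earlier example
   if (p_schema = USER) then
      return NULL;
   end if;
   -- show an address only if its account is visible through ACCOUNTS,
   -- which carries its own VPD policy
   return 'ACCNO in (select ACCNO from ACCOUNTS)';
end;
/

You would then attach this function to ADDRESSES with DBMS_RLS.ADD_POLICY, exactly as shown earlier for ACCOUNTS.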
You have to choose between these two approaches based on your situation.
Implications
There are several potentially damaging implications.
VPD works by rewriting queries to add the additional predicate. The user queries may have been well written and perhaps well tuned, but the introduction of the additional predicate does throw a wrench into the works, because the optimization plan may change. You should carefully consider the potential impact and mitigate the risk by building indexes.
Materialized views work by selecting the rows from the underlying tables. If the schema owner of the view does not have unrestricted access to the table, only those rows that satisfy the VPD policy will be refreshed, rendering the view inaccurate.
If you have set up replication, the users doing the propagation and reception should have unrestricted access to the table. Otherwise, they will replicate only part of the table.
If you load the table using Direct Path Insert (INSERT with the APPEND hint), then you cannot have a VPD policy on the table. Either you should disable the policy temporarily or do the insert using a user that has unrestricted access to the table.
Direct Path Export bypasses the SQL layer, so VPD policies could not be applied to it. Therefore, when you export a table protected by a VPD policy with the DIRECT=Y option, Oracle Database ignores the option and exports the table using the conventional path instead. This may add to the overall execution time.
Action Plan
This is a complex and fluid plan:
1. Identify the tables to be protected by VPD.
2. Identify the columns of those tables that need to be protected.
3. Identify the restricting condition—for example, Salaries > 1000.
4. Identify how to establish privileges—for example, do users have authority levels or roles? You may want to divide users into three groups with certain levels of authority associated with them. Or, perhaps you want to assign access restrictions on groups of users based on role—managers can access all rows, clerks can access SALARY > 2000, and so on.
5. Decide how you will pass the privileges to the policy function—through a package variable, through an application context, or through a static table.
6. Identify if further indexes need to be created.
7. Create additional indexes.
8. Identify child tables and decide on a scheme to enable the restriction on them—via a new column or the IN condition.
9. Re-identify the need for indexes based on your decision above.
10. Create indexes.
11. Build the policy function.
12. Create the policy, but as disabled (see the sketch after this list).
13. At a time of light activity, enable the policy and test access to the table from a regular user account to make sure that the policy works. If it does not work, check trace files to find the error.
14. If it works, leave the policy enabled.
15. Monitor performance.
16. Identify further needs to build indexes, use outlines, and so on.
17. Iterate for all tables.
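Here is a minimal sketch of steps 12 through 14, using the ACC_MAX_BAL policy from the "Background" subsection (the schema name is illustrative):

begin
   dbms_rls.add_policy (
      object_schema   => 'ARUP',
      object_name     => 'ACCOUNTS',
      policy_name     => 'ACC_MAX_BAL',
      policy_function => 'GET_ACC_MAX_BAL',
      statement_types => 'SELECT',
      enable          => FALSE   -- step 12: create the policy disabled
   );
end;
/
-- steps 13 and 14: at a time of light activity, enable and test
begin
   dbms_rls.enable_policy (
      object_schema => 'ARUP',
      object_name   => 'ACCOUNTS',
      policy_name   => 'ACC_MAX_BAL',
      enable        => TRUE
   );
end;
/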
4.3 Mask Sensitive Columns
Background
Imagine that you’re an amateur database hacker. The database in this case contains medical records, and the information you are looking for is diagnosis codes. What column would you look for? Probably one named DIAGNOSIS, DISEASE, or something similar.
As you can see, sensitive columns with obvious names are a crucial security issue.
Strategy
When adversaries have no prior knowledge of the contents of your database, they cannot decipher the meaning of your columns if the columns have nonintuitive names. This strategy is known as "security by obscurity." Even seasoned adversaries who force their way into the database will still need to track down the column names before they can do anything else. Because their time is almost always limited, they will usually move on to the next opportunity. Of course, obscure column names make development harder as well.
There is an alternative to making this trade-off, however: column masking, in which you hide the contents of the column and expose it only to legitimate users.
There are two column-masking approaches: by using a view and by using VPD.
Using a view. This method is applicable to any version of Oracle Database but is usually the only choice when your database release is Oracle9i or earlier.
Suppose your table looks like this:
SQL> desc patient_diagnosis
Name Null? Type
------------------- ------ -------------
PATIENT_ID NUMBER
DIAGNOSIS_ID NUMBER
DIAGNOSIS_CODE VARCHAR2(2)
DOCTOR_ID NUMBER(2)
BILLING_CODE NUMBER(10)
The rows look like this:
SQL> select * from patient_diagnosis;
PATIENT_ID DIAGNOSIS_ID DI DOCTOR_ID BILLING_CODE
---------- ------------ -- ---------- ------------
1 1 01 1 1003
1 2 02 1 1003
1 3 11 1 1003
2 1 11 1 1005
2 2 41 2 1005
In this case, you want to hide the values of the column DIAGNOSIS_CODE, so you create a view that omits it:
create view vw_patient_diagnosis
as
select
patient_id,
diagnosis_id,
doctor_id,
billing_code
from patient_diagnosis
/
Then you can create a synonym PATIENT_DIAGNOSIS for the view VW_PATIENT_DIAGNOSIS and grant select on the view instead of the table. The view hides the column DIAGNOSIS_CODE.
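A minimal sketch of that arrangement, assuming the table owner is ARUP and the querying user is CLERK1 (both names illustrative):

-- as ARUP, the owner: grant on the view, not the table
grant select on vw_patient_diagnosis to clerk1;
revoke select on patient_diagnosis from clerk1;
-- as CLERK1 (or as a DBA with CREATE ANY SYNONYM): make the view
-- answer to the table's name
create synonym patient_diagnosis for arup.vw_patient_diagnosis;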
This is a rather simplistic solution, so instead of masking the column for all users, you may want to create role-based obscurity. When the user is a manager, show the protected columns; otherwise, don’t. You can do so by passing an application context or a global variable to designate the role of the user. If the application context attribute were IS_MANAGER, you could use
create or replace view vw_patient_diagnosis
as
select
patient_id,
diagnosis_id,
decode(
sys_context('USER_ROLE_CTX','IS_MANAGER'),
'Y', DIAGNOSIS_CODE, null
) diagnosis_code,
doctor_id,
billing_code
from patient_diagnosis;
This is a more flexible view that can be granted to all users, and the contents of the view will be dynamic based on the user’s role.
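The view assumes an application context named USER_ROLE_CTX whose IS_MANAGER attribute is populated at login, in the same way USER_LEVEL_CTX was built in section 4.2. A minimal sketch, in which the MANAGERS lookup table is hypothetical:

create context user_role_ctx using set_user_role_ctx;

create or replace procedure set_user_role_ctx
as
begin
   -- flag the session as a manager if the user appears in the
   -- (hypothetical) MANAGERS table
   for r in (select 1 from managers where username = USER) loop
      dbms_session.set_context ('USER_ROLE_CTX', 'IS_MANAGER', 'Y');
   end loop;
end;
/

Call SET_USER_ROLE_CTX from a logon trigger, as shown for SET_USER_LEVEL_CTX in section 4.2.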
Using VPD. Oracle Database 10g introduced a feature that makes VPD even more useful for this purpose: there is no need to create a view. Rather, the VPD policy itself can suppress the display of sensitive columns. In this case, the VPD policy function will look like this:
1 create or replace function pd_pol
2 (
3 p_schema in varchar2,
4 p_obj in varchar2
5 )
6 return varchar2
7 is
8 l_ret varchar2(2000);
9 begin
10 if (p_schema = USER) then
11 l_ret := NULL;
12 else
13 l_ret := '1=2';
14 end if;
15 return l_ret;
16 end;
Now create the policy:
1 begin
2 dbms_rls.add_policy (
3 object_schema => 'ARUP',
4 object_name => 'PATIENT_DIAGNOSIS',
5 policy_name => 'PD_POL',
6 policy_function => 'PD_POL',
7 statement_types => 'SELECT',
8 update_check => TRUE,
9 sec_relevant_cols => 'DIAGNOSIS_CODE',
10 sec_relevant_cols_opt => dbms_rls.all_rows
11 );
12 end;
Note Lines 9 and 10. In Line 9, we mention the column DIAGNOSIS_CODE as a sensitive column. In Line 10, we specify that if the column is selected, all rows are displayed; but the column value is shown as NULL. This effectively masks the column. From the policy function, note that the predicate applied is NULL when the owner of the table selects from it—so, the VPD restrictions are not applied and the column is shown.
Remember, there is no way to “replace” a policy. If an old policy exists, you have to drop it first:
begin
dbms_rls.drop_policy (
object_schema => 'ARUP',
object_name => 'PATIENT_DIAGNOSIS',
policy_name => 'PD_POL'
);
end;
Now you can test the effect of this change:
SQL> conn arup/arup
Connected.
SQL> select * from patient_diagnosis;
PATIENT_ID DIAGNOSIS_ID DI DOCTOR_ID BILLING_CODE
---------- ------------ --- ---------- ------------
1 1 01 1 1003
1 2 02 1 1003
1 3 11 1 1003
2 1 11 1 1005
2 2 41 2 1005
Note that the DIAGNOSIS_CODE column values are shown, because ARUP is the owner of the table and should see the values. Now, connect as another user who has select privileges on the table, and issue the same query:
SQL> set null ?
SQL> conn ananda/ananda
SQL> select * from arup.patient_diagnosis;
PATIENT_ID DIAGNOSIS_ID D DOCTOR_ID BILLING_CODE
---------- ------------ -- ---------- ------------
1 1 ? 1 1003
1 2 ? 1 1003
1 3 ? 1 1003
2 1 ? 1 1005
2 2 ? 2 1005
Note how the column DIAGNOSIS_CODE shows all NULL values.
This method is much more elegant, even apart from the fact that there is no view to be created on the table, no synonym to be created to point to the view, and no additional grants to be maintained. If you need different policies to show this value to different people, you can easily modify the policy function to add further checks. For instance, your policy may say that less sensitive diagnosis codes, such as those for the common cold, can be exposed to all users. So your policy function will look like the following, assuming that the DIAGNOSIS_CODE for the common cold is "01":
1 create or replace function pd_pol
2 (
3 p_schema in varchar2,
4 p_obj in varchar2
5 )
6 return varchar2
7 is
8 l_ret varchar2(2000);
9 begin
10 if (p_schema = USER) then
11 l_ret := NULL;
12 else
13 l_ret := 'diagnosis_code=''01''';
14 end if;
15 return l_ret;
16* end;
Note Line 13, where we added the predicate to show only when the diagnosis code is “01” and nothing else. Now, test the effect:
SQL> conn arup/arup
Connected.
SQL> select * from arup.patient_diagnosis;
PATIENT_ID DIAGNOSIS_ID DI DOCTOR_ID BILLING_CODE
---------- ------------ -- ---------- ------------
1 1 01 1 1003
1 2 02 1 1003
1 3 11 1 1003
2 1 11 1 1005
2 2 41 2 1005
SQL> conn ananda/ananda
Connected.
SQL> select * from arup.patient_diagnosis;
PATIENT_ID DIAGNOSIS_ID DI DOCTOR_ID BILLING_CODE
---------- ------------ -- ---------- ------------
1 1 01 1 1003
1 2 ? 1 1003
1 3 ? 1 1003
2 1 ? 1 1005
2 2 ? 2 1005
Note that the diagnosis code is “01” for patient id 1 and diagnosis id 1, which is the only allowed diagnosis code; so it is shown clearly. All others have been shown as NULL and are effectively masked.
Implications
If you mask sensitive columns that the programs never reference, there will be no implications.
If you use the view approach, where the sensitive column is simply removed from the view, programs that use a construct like SELECT * FROM TABLE ... may break: because the columns are not explicitly named, the absence of one column affects program execution. This is not an issue with the modified view approach, in which the column is still present but NULLed.
There is one very important implication you should be aware of. Suppose you have a column called CONTRACT_AMOUNT, which is shown if less than a certain value—say, $500. If more than $500, then the column shows NULL. The table has three rows with values 300, 300, and 600 in the column. Prior to column masking, if a user issues the query
select avg (contract_amount) from contracts;
he will get 400 (the average of 300, 300, and 600). After column masking, the column value will be NULL for the record containing 600, so the user sees the values as 300, 300, and NULL. Because AVG ignores NULLs, the same query now returns 300 (the average of the two non-NULL values). Note the important difference: the value shown is 300, not 400.
Be aware of this important difference that column masking can introduce.
Action Plan
- List all sensitive columns on all sensitive tables.
- Decide on sensitive columns to be masked.
- Decide on the privilege scheme.
- If you are on Oracle Database 10g Release 1 or later, choose the VPD approach; otherwise, choose the view-based approach.
- If you choose the VPD approach:
- Create policy functions for each table.
- Create policies for each table. This will help you control masking on specific tables.
- If you choose the view-based approach:
- Create views on each table, typically named VW_<table_name>.
- Create a synonym (the same name as the table) pointing to the view.
- Revoke privileges made to the users on the table.
- Re-grant the privileges to the view.
- Recompile all dependent invalid objects.
4.4 Encrypt Sensitive Data
Background
As I mentioned previously in this series, security is like protecting yourself on a cold winter day with layers of clothing, versus wearing the bulkiest winter jacket available. But building layered defenses may not deter the most determined adversary, and it certainly won’t always prevent a legitimate user from stealing corporate assets. The last line of defense in such a case is encryption, by which the data is accessible to the user (or the adversary) but only with a key. Without the key, the data is useless. If you protect the key, you will protect the data.
Remember, encryption is not a substitute for other layers of security. You must have those defenses in place regardless.
Encryption is a vast topic, but I’ll try to give you an actionable overview here.
Oracle provides three types of encryption:
The first type is encryption APIs, such as the packages dbms_obfuscation_toolkit and dbms_crypto (in Oracle Database 10g Release 1 and later). Using these packages, you can build your own infrastructure to encrypt data. This is the most flexible approach but rather complex to build and manage.
Column-level Transparent Data Encryption, a feature of Oracle Database 10g Release 2 and later, obviates manual key management. The database manages the keys, and, as the name suggests, the encryption is transparent: the column data is stored encrypted on disk but is returned in clear text when selected.
Tablespace-level Transparent Data Encryption, introduced in Oracle Database 11g Release 1, makes the process much easier and much more effective. A whole tablespace is encrypted, and anything in that tablespace—tables, indexes, materialized views—is stored in encrypted format. When the data is read into the buffer cache, however, it is held in clear text. Because comparisons are performed in the buffer cache, they operate on clear text, and consequently they are faster.
I recommend that you review the Oracle documentation (Chapter 17 of the Oracle Database Security Guide and Chapter 3 of the Oracle Database Advanced Security Administrator’s Guide) to learn more about these features. At a minimum, you should review two articles on Oracle Technology Network: “Transparent Data Encryption” (oracle.com/technology/oramag/oracle/05-sep/o55security.html) and “Encrypting Tablespaces” (oracle.com/technology/oramag/oracle/09-jan/o19tte.html) before proceeding to the “Strategy” subsection.
Strategy
The choice between regular encryption and Transparent Data Encryption is a vital one. (In releases prior to Oracle Database 10g Release 2, however, only the former is available.)
In either case, you will have to identify the tables, and more specifically the columns, to be encrypted. It’s probably not a good idea to encrypt all columns, because encryption routines do burn CPU cycles.
Next, pick an encryption algorithm. A typical choice is Triple Data Encryption Standard (DES3) with 156-bit encryption. However, starting with Oracle Database 10g Release 1, you have access to the newer, faster, and more secure Advanced Encryption Standard (AES) algorithm, which operates with a 128-bit key.
Oracle9i Database provided dbms_obfuscation_toolkit; with Oracle Database 10g Release 1, you have access to dbms_crypto, a much better utility. The older package is still available, but avoid it if you are building encryption infrastructure for the first time.
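To give a flavor of DBMS_CRYPTO, here is a minimal sketch that encrypts and decrypts a string with AES-128. It requires EXECUTE privilege on DBMS_CRYPTO; in real use, the key would come from your key-management scheme rather than being generated and discarded in one block:

declare
   l_key raw(16)     := dbms_crypto.randombytes(16);  -- 128-bit key
   l_typ pls_integer := dbms_crypto.encrypt_aes128
                      + dbms_crypto.chain_cbc
                      + dbms_crypto.pad_pkcs5;
   l_in  raw(2000)   := utl_i18n.string_to_raw('123-45-6789', 'AL32UTF8');
   l_enc raw(2000);
begin
   l_enc := dbms_crypto.encrypt (src => l_in, typ => l_typ, key => l_key);
   -- decrypt with the same key and algorithm suite, and print
   dbms_output.put_line (
      utl_i18n.raw_to_char (
         dbms_crypto.decrypt (src => l_enc, typ => l_typ, key => l_key),
         'AL32UTF8'
      )
   );
end;
/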
At this point, you have to decide between TDE and your own routine (if applicable). The table below may help you decide.
| | Transparent Data Encryption | User-built Encryption |
| Flexibility | Minimal—For instance, if the column SALARY in the SALARIES table is encrypted, then any user with access to the table will be able to see the data clearly. You can't place selective control on that column based on user roles and levels. The data in the database is encrypted but is decrypted when accessed. Note that you can use VPD with column masking that displays NULL for an encrypted column, as shown in section 4.3. | Robust—For instance, you may define the column to be shown in clear text only if the user is a manager, and encrypted otherwise. This ensures that the same application sees the data differently based on who is using it. This flexibility can also be extended to other variables, such as time of day or the client machine accessing the database. |
| Setup | Minimal—This facility is truly transparent. There is nothing to do but issue this command (provided all other one-time jobs have been executed, such as building the wallet): ALTER TABLE SALARIES MODIFY (SALARY ENCRYPT) | Extensive—To provide a seamless interface to the users, you have to create a view that decrypts the column, and that view must then be granted. This introduces several layers of complexity in management. |
| Key Management | Automated—Key management is handled by the database, using a wallet. From Oracle Database 11g onward, key management can also be hardware based, making it even more secure and effective. | Manual—Because you have to manage the keys yourself, you have to decide how to balance two conflicting requirements: the keys must be accessible to applications, yet safe from theft. |
| Restrictions on columns | Some—In column-level TDE, certain columns cannot be encrypted, such as those used as partition keys or of datatype BLOB. There is no such restriction on tablespace-level TDE, introduced in Oracle Database 11g Release 1. | One—The only restriction is LONG. |
| Support for indexes | It depends on the TDE flavor used—In tablespace-level TDE, indexes help as much as with regular clear-text data. In column-level TDE, indexes may not help in queries, because the data is stored in an encrypted manner. | Yes—Because you control the encryption, you can create surrogate columns to build indexes on. |
| Scope | It depends on the TDE flavor used—In column-level TDE, you can decide to encrypt a specific column while the rest remain clear text, preventing unneeded encryption/decryption. In tablespace-level TDE, all the columns of the table, sensitive or not, are encrypted. | Controlled—Specific columns can be encrypted. |
If you decide to use TDE—regardless of the flavor—take the following steps:
1. Configure the wallet. It’s generally in the folder $ORACLE_BASE/admin/$ORACLE_SID/wallet, but you can use any location by putting the following lines in the file SQLNET.ORA:
ENCRYPTION_WALLET_LOCATION =
(SOURCE=
(METHOD=file)
(METHOD_DATA=
(DIRECTORY=/orawall) ) )
Here, the wallet location is set to /orawall.
2. Open the wallet and assign a password. Issue this command:
alter system set encryption key authenticated by "53cr3t";
Choose a password that is difficult to guess but easy to remember. (For security purposes, you may want the wallet password to remain unknown to the DBA, so that a second person is needed to open the wallet.) From now on, every time the database opens, you have to open the wallet with
alter system set encryption wallet open authenticated by "53cr3t";
After this, you can create tables with encrypted columns (as in column-level TDE):
create table accounts
(
acc_no number not null,
first_name varchar2(30) not null,
last_name varchar2(30) not null,
SSN varchar2(9) ENCRYPT USING 'AES128',
acc_type varchar2(1) not null,
folio_id number ENCRYPT USING 'AES128',
sub_acc_type varchar2(30),
acc_open_dt date not null,
acc_mod_dt date,
acc_mgr_id number
)
When users insert data into the table, it is automatically encrypted before being written to disk. Similarly, when the data is retrieved by a SELECT statement, it is automatically decrypted and shown to the user.
If you decide to use tablespace-level TDE, you can’t convert an existing tablespace into an encrypted one. You must create a new tablespace as encrypted and move the tables or indexes into that tablespace. Here is how you create an encrypted tablespace:
create tablespace enc_ts
datafile '+DATA'
size 100M
encryption using 'AES128'
default storage (encrypt)
/
After that, you should move a table—for example, CREDIT_CARDS—into that tablespace:
alter table credit_cards
move tablespace enc_ts
/
That’s it; the table is now completely encrypted on disk. Of course, any new table you create on this tablespace will be encrypted from the beginning.
Implications
As with any major change, there are some serious implications.
Encryption is a CPU-intensive operation. If the system is already CPU bound, this will make it worse. Consider Tablespace Encryption in Oracle Database 11g Release 1 to reduce this performance issue.
Indexes do not work well with encrypted columns. A predicate like WHERE <ColumnName> LIKE 'XYZ%', which would have used an index range scan on an unencrypted column, will instead use a full table scan. Again, Tablespace Encryption is not subject to this restriction.
Key management is a very important issue. If you lose the keys, the data becomes inaccessible.
Action Plan
The action plan is described in the “Strategy” subsection.
4.5 Secure Backups
Background
In many cases, DBAs forget the most vulnerable of spots: the backup of the database. Once the data leaves the secured perimeters of the server, it’s not under your control anymore. If an adversary can steal the tapes, he can mount them on a different server, restore the database, and browse the data at his leisure.
Fortunately, you can eliminate even that risk via backup encryption.
Strategy
In Oracle Database 10g Release 2 and later, you can encrypt backups in Oracle Recovery Manager (Oracle RMAN) in three different modes: transparent, password-based, and dual.
Transparent mode. In this (the most common) approach, Oracle RMAN gets the encryption key from the encryption wallet (discussed in the previous section) and uses it to encrypt the backup set. Needless to say, the wallet must be open during the backup and restore. If the wallet is not open, you will get the errors
ORA-19914: unable to encrypt backup
ORA-28365: wallet is not open
Open the wallet using this command:
alter system set encryption wallet open authenticated by "53cr3t";
Of course, the password may be something other than "53cr3t." Remember, the password is case-sensitive, so it must be enclosed in double quotes.
Make sure the encryption is enabled for the entire database. You can enable it by issuing
RMAN> configure encryption for database on;
Alternatively, if you want to encrypt only a few tablespaces, you can issue the following for each such tablespace:
RMAN> configure encryption for tablespace users on;
After that you can use the regular backup commands without any other user intervention:
RMAN> backup tablespace users;
When you restore the backup, the wallet must be open. If it is not, you get an error while restoring:
RMAN> restore tablespace users;
Starting restore at 09-FEB-06
using channel ORA_DISK_1
... messages ...
ORA-19870: error reading backup piece C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AR
EA\ANANDA\BACKUPSET\2006_02_09\O1_MF_NNNDF_TAG20060209T221325_1YR16QLT_.BKP
ORA-19913: unable to decrypt backup
ORA-28365: wallet is not open
The error message is pretty clear; the wallet must be opened. If you ever need to restore the tablespace in other servers, the wallet must be copied there and opened with the correct password. If an adversary steals the tapes, he will not be able to restore the backups.
Password-based mode. In this case, there is no need to use the wallet for key management. The backups are encrypted by a key that itself is encrypted by a password. Here is how a command looks:
RMAN> set encryption on identified by "53cr3t" only;
RMAN> backup tablespace users;
The backup set produced above will be encrypted by the password. To decrypt it during restore, you have to use the password as follows:
RMAN> set decryption identified by "53cr3t";
RMAN> restore tablespace users;
This eliminates the use of wallets. To restore the backup on any server, all you need is the password, which also eliminates the need to back up the wallet. However, the password must be visible in the scripts, and hence any adversary with access to the database server will be able to read it. Refer to Phase 2, where you learned how to hide passwords.
Dual mode. As the name suggests, this mode combines the other two approaches. You take the backup in the following manner:
RMAN> set encryption on identified by "53cr3t";
RMAN> backup tablespace users;
Note that there is no "only" clause in the set encryption command. This allows the backup to be restored in either of two ways: with the password "53cr3t" or with an open wallet. For instance, note the following command sequence. No password has been given, yet the restore is successful because the wallet is open.
RMAN> sql 'alter tablespace users offline';
RMAN> restore tablespace users;
Starting restore at 09-FEB-06
allocated channel: ORA_DISK_1
... restore command output comes here ...
This is useful when you generally use a script to handle backups and you often have to recover the database to a different server, such as a QA server. But in general, this method does not have much practical use.
If you do use the transparent mode of backup encryption, you can back up the wallet as well. Even if an adversary gets the wallet, he will not be able to open it without a password.
Implications
Encryption, as I mentioned before, is a fairly CPU-intensive process, and encrypting an entire Oracle RMAN backup set is therefore a highly CPU-intensive operation. You can reduce the CPU cycles by encrypting backups partially—select only the tablespaces that contain sensitive data, not all of them. A better option may be to use TDE to encrypt only the sensitive columns.
The wallet (in transparent mode) and the password (in password-based mode) are very important. If you lose them, you will never be able to decrypt the backups. So have a plan for backing them up.
If you use password-based encryption, the password must be in the scripts. It could be vulnerable to theft.
Action Plan
- Decide if you really want to encrypt entire backup sets as opposed to specific columns in specific tables.
- If Oracle RMAN backup sets are to be encrypted, decide between the entire database and specific tablespaces.
- Choose among the transparent, password-based, and dual-mode approaches.
- If you choose the transparent or dual-mode approach:
  - Configure the wallet.
  - Open the wallet.
  - In Oracle RMAN, configure the tablespace(s) or the entire database to be encrypted in backup.
- If you choose the password-based or dual-mode approach:
  - Choose a password and put it in the Oracle RMAN scripts, using any of the password-hiding techniques described in Phase 2.
  - Take backups using those scripts only.
4.6 Mine History from Archived Logs
Background
One very important part of the lockdown process is ensuring that no unauthorized data definition language (DDL) activity occurs on the database. An adversary, after getting a legitimate route to connect to the database, will find it extremely easy to drop a few segments to create a simple denial-of-service attack.
In Phase 3, you learned how to lock down key objects so that they cannot be altered without the DBA’s consent. But what about the illegitimate alterations done with consent and an attack that was discovered only later? This is where the fourth pillar of security—accountability—comes into play.
One option is to turn on DDL auditing to track DDL activities. All in all, it’s the easiest option—it’s easy to set up, even easier to browse, possible to archive, and so on. However, auditing puts pressure on performance, something we are trying to avoid. The question is how to retrace the DDL statements without setting up auditing.
Strategy
This is where a tool (or a feature, if you will) called Log Miner comes in very handy. Introduced in Oracle8i Database, Log Miner lets you search online redo logs or archived logs.
Here’s how you can set up log mining to unearth DDL details. First, get the online log file or archive log file to mine, and add that to the Log Miner session:
sqlplus / as sysdba
begin
dbms_logmnr.add_logfile (
'C:\ORACLE\DATABASE\ANANDA\REDO03.LOG');
dbms_logmnr.add_logfile (
'C:\ORACLE\DATABASE\ANANDA\REDO02.LOG');
dbms_logmnr.add_logfile (
'C:\ORACLE\DATABASE\ANANDA\REDO01.LOG');
end;
/
After adding the files, start the Log Miner session. You have to pass the source of the data dictionary; here we have specified the online catalog as the source:
begin
dbms_logmnr.start_logmnr (
options => dbms_logmnr.dict_from_online_catalog
);
end;
The mined contents are placed in a view named V$LOGMNR_CONTENTS, but this view is transient—it is visible only to the session that started the Log Miner. So, if you want to perform an analysis later, you need to preserve it in a permanent table:
create table lm_contents
nologging
as
select * from v$logmnr_contents;
Now for the analysis. To find the DDL commands, you would issue the following query:
select sql_redo
from lm_contents
where operation = 'DDL'
and seg_owner not in ('SYS','SYSTEM','PUBLIC');
Here is a sample output. Note that there is no DROP TABLE statement—in Oracle Database 10g, dropped tables are not actually dropped but rather renamed, and the SQL_REDO will reflect that. In cases where the user actually drops the table using the PURGE clause, the column SQL_REDO will reflect the correct command. (To save space, I have trimmed the output so that the DDLs for functions and procedures appear partially. I have also used RECSEPCHAR '.' to show dividing lines between multiline records.)
SQL_REDO
-------------------------------------------------------------------------------
ALTER PACKAGE "XDB"."DBMS_XSLPROCESSOR" COMPILE SPECIFICATION REUSE SETTINGS;
ALTER PACKAGE "XDB"."DBMS_XMLDOM" COMPILE SPECIFICATION REUSE SETTINGS;
ALTER PACKAGE "XDB"."DBMS_XMLPARSER" COMPILE SPECIFICATION REUSE SETTINGS;
ALTER PACKAGE "EXFSYS"."DBMS_RLMGR_DR" COMPILE BODY REUSE SETTINGS;
truncate
table developers;
...............................................................................
create table patient_diagnosis
(
patient_id number,
...............................................................................
create view vw_patient_diagnosis
as
...............................................................................
create or replace view vw_patient_diagnosis
as
...............................................................................
create or replace function pd_pol
(
p_schema in varchar2,
...............................................................................
create or replace function pd_pol
(
p_schema in varchar2,
...............................................................................
create or replace function pd_pol
(
p_schema in varchar2,
...............................................................................
grant connect, resource to ananda identified by VALUES '1DB10D95DE84E304' ;
create or replace function pd_pol
(
...............................................................................
create or replace function pd_pol
(
p_schema in varchar2,
...............................................................................
ALTER TABLE "ARUP"."TEST1" RENAME TO "BIN$JQHaX2mpSxOyrhkxAteHmg==$0" ;
drop table test1 AS "BIN$JQHaX2mpSxOyrhkxAteHmg==$0" ;
Of course, this output itself is probably meaningless; you have to see more information such as time stamp, system change number (SCN) and so on in the Log Miner entries to make a valid connection between user actions and actual events:
select timestamp, scn, seg_owner, seg_name
from lm_contents
where operation = 'DDL'
and seg_owner not in ('SYS','SYSTEM','PUBLIC')
/
Here is a sample output:
TIMESTAMP SCN SEG_OWNER SEG_NAME
--------- ---------- ---------- - -----------------------------
06-FEB-06 1024674 XDB DBMS_XSLPROCESSOR
06-FEB-06 1026884 XDB DBMS_XMLDOM
06-FEB-06 1026896 XDB DBMS_XMLPARSER
06-FEB-06 1026918 EXFSYS DBMS_RLMGR_DR
06-FEB-06 1029244 ARUP DEVELOPERS
08-FEB-06 1096847 ARUP PATIENT_DIAGNOSIS
08-FEB-06 1097057 ARUP VW_PATIENT_DIAGNOSIS
08-FEB-06 1097920 ARUP VW_PATIENT_DIAGNOSIS
08-FEB-06 1100059 ARUP PD_POL
08-FEB-06 1100157 ARUP PD_POL
08-FEB-06 1100386 ARUP PD_POL
08-FEB-06 1100413 ANANDA
08-FEB-06 1101544 ANANDA PD_POL
08-FEB-06 1101564 ARUP PD_POL
09-FEB-06 1123950 ARUP TEST1
09-FEB-06 1123953 ARUP TEST1
Note that I have used SEG_OWNER and not the USERNAME. Due to a bug that is unresolved as of Oracle Database 10.2.0.1, the USERNAME is not populated.
This is just one example of how to pull DDL statements from the archived logs. You can mine any type of statement from the logs—DML as well as DDL. Mining for DML statements is a great substitute for auditing, because the information is already in the redo stream and, unlike auditing, extracting it adds no overhead to the transactions themselves.
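For example, to review DML against the sensitive ACCOUNTS table from section 4.2 (schema and table names as in that example), you could query the preserved contents like this, using SEG_OWNER rather than USERNAME because of the bug noted above:

select timestamp, scn, seg_owner, seg_name, sql_redo
from lm_contents
where operation in ('INSERT', 'UPDATE', 'DELETE')
and seg_owner = 'ARUP'
and seg_name = 'ACCOUNTS';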
Implications
Log Miner is not intrusive, but it does take a lot of CPU cycles and PGA memory on the server. So, run it carefully, preferably under conditions of reduced load. (Another option is to move the archived logs to a different server and mine them there. This option should be exploited whenever possible.)
Action Plan
Identify what statements you want to mine from logs and how often. Some examples could be DDL statements related to dropping of objects; but you may need to be further selective. In data warehouse environments, applications temporarily create many tables and drop them later, so you may want to exclude those schemas.
Use the above technique to extract information from the archived logs on a regular basis, and analyze it for abuse or patterns of abuse.
4.7 Conclusion
This final phase of your security and compliance project had a small number of tasks, but each takes a considerable amount of time to execute. In addition, these tasks cannot be prescribed in full detail up front: the specifics vary so much from one environment to the next that a generalized, one-size-fits-all description is impossible.
The most important factor in your quest for security is to understand that you can't pursue it in a vacuum. To effectively build a secure database infrastructure, your actions must be tempered by an understanding of the unique nature of your organization, and your analysis must include your business processes, so that you can isolate sensitive data from the rest. For instance, credit-card numbers must be protected at any organization, but what about sales numbers? In some organizations, such as retailers, the sheer volume of data makes protecting it by encryption infeasible. The same can't be said for a hedge-fund trading firm, where sales numbers are zealously guarded. A tiered approach to viewing sensitive data may work in some cases, but in most cases access is probably best granted on an as-needed basis.
At the end of this journey, I hope you have gained some valuable tools and techniques that you can use right now to protect your database infrastructure. I would greatly appreciate your taking some time to give me feedback and suggestions for enhancing this four-part series with other relevant issues and material.
The content provided here is for instructional purposes only and is not validated by Oracle. Use it at your own risk. Under no circumstances should you consider it to be part of a consulting or services offering.