What is the key to Oracle SQL Performance?

Oracle SQL performance tuning is multidimensional: it involves many variables, such as the underlying operating system, the Oracle server, the network, and the SQL itself.

We are going to discuss SQL performance (assuming the other variables are working fine) and look at two key points that can help debug a bad SQL statement. To arrive at those two points, we need to look at how database technology has evolved lately.

Since 2009, there has been phenomenal growth in database technology catering to complex, high-volume data. Existing solutions such as Oracle Database and MySQL were not sufficient for these needs, hence the rise of new databases and add-ons to existing ones. We will discuss two key performance issues addressed by these new technologies, both of which also relate to SQL performance in regular relational databases.

  • High I/O

A SQL query may use the right indexes and correct joins and have a good plan, yet still under-perform; the cause is often high disk reads. SAP came up with HANA, and Oracle with its own in-memory options, precisely to address high disk reads. The only way to solve high I/O is to make better use of memory. So the question is: if your SQL query is performing badly because of high I/O, how do you address it when handling high-volume data?

Speed Up Oracle SQL with Global Temporary Tables (GTTs)

The smart use of temporary tables during a huge batch process can really boost performance. The idea is to store intermediate data in temporary tables while performing heavy processing. Data in a global temporary table lives in the session's temporary tablespace, generates far less redo than a permanent table, and is often served straight from memory, so access is fast when it is used as an intermediate step in large data-processing operations.

The other key advantage is that you can create indexes on a GTT and enforce the integrity (primary key) of its data. GTTs are the closest you can get to a materialized view with data in memory. Please refer to the article "How to achieve high performance using temporary tables" for a SQL demo.
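As a minimal sketch of the idea (the orders table, column names, and date filter are hypothetical, not from the article):

    -- Hypothetical staging table for a batch run; rows disappear when the session ends.
    CREATE GLOBAL TEMPORARY TABLE stage_orders (
        order_id    NUMBER        PRIMARY KEY,   -- integrity check on intermediate data
        customer_id NUMBER        NOT NULL,
        order_total NUMBER(12,2)
    ) ON COMMIT PRESERVE ROWS;

    -- Indexes are allowed on a GTT, just as on a permanent table.
    CREATE INDEX stage_orders_cust_ix ON stage_orders (customer_id);

    -- Load the intermediate result once ...
    INSERT INTO stage_orders (order_id, customer_id, order_total)
    SELECT order_id, customer_id, order_total
    FROM   orders
    WHERE  order_date >= ADD_MONTHS(SYSDATE, -1);

    -- ... then reuse it in the remaining batch steps instead of re-reading orders.
    SELECT customer_id, SUM(order_total)
    FROM   stage_orders
    GROUP  BY customer_id;

With ON COMMIT PRESERVE ROWS the staged data survives across the commits of a long batch run, while each session still sees only its own rows.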

  • Joins

The entire new crop of database technologies is based on how the data model is defined in the database. Most of these databases either don't use tables at all or, if they do, use them as key-value stores to avoid creating lots of tables. The key point to take from this is how a large number of tables in a join can be inefficient and degrade performance.

Solution: Reduce the number of tables in a join

Joining lots of tables in a single SQL statement not only leads to an unstable Oracle SQL plan but can also be a nightmare for the DBA debugging the SQL. We write code once but read it hundreds of times. The engineer maintaining the code could be a rookie or an expert; both can struggle to read complex joins over many tables, the former especially.

The Oracle optimizer is smart, but not smart enough to always produce a good, stable plan across environments (development, integration, and production) if you join lots of tables in a single query. The prudent way is to build intermediate result sets using collections, temporary tables, or the WITH clause, instead of joining all the required tables in one complex join. The idea is to divide and conquer, breaking the SQL into many simple queries, as sketched below.
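Here is a sketch of the divide-and-conquer style using the WITH clause; the tables and columns are hypothetical, and each named block stands in for what might otherwise be one more table in a single wide join:

    -- Build intermediate result sets, then join those, instead of joining
    -- orders, customers, products, regions, ... all in one statement.
    WITH recent_orders AS (
        SELECT order_id, customer_id, order_total
        FROM   orders
        WHERE  order_date >= ADD_MONTHS(SYSDATE, -1)
    ),
    customer_totals AS (
        SELECT customer_id, SUM(order_total) AS total_spend
        FROM   recent_orders
        GROUP  BY customer_id
    )
    SELECT c.customer_name, t.total_spend
    FROM   customer_totals t
    JOIN   customers c ON c.customer_id = t.customer_id;

Each step can be read, tested, and tuned on its own, which is exactly what a complex single join denies the maintainer.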

I have an article on Workday, which has done something extreme with an RDBMS to store data so that it can avoid big joins (article name: 1000's of tables).

Conclusion: Divide and Conquer

The above two keys are not tips but steps to follow when coding PL/SQL: they deliver good performance and keep the code simple and readable. The simpler you make the query, the less you have to depend on complex Oracle techniques to produce a stable SQL plan.
