There's more...
In this recipe, we have seen that multi-column (composite) indexes can speed up data access.
When creating such an index, keep in mind that placing the more selective column first
benefits operations such as Index Range Scans. Conversely, if our queries rely on Index Skip
Scan operations, performance benefits more when a less selective column leads the index.
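The following is a minimal sketch of the difference; the index name is hypothetical, and the actual plan depends on statistics and optimizer decisions:
-- Hypothetical composite index on SALES, with the more selective column first
CREATE INDEX sh.sales_cust_prod_ix ON sh.sales (cust_id, prod_id);
-- The predicate filters on the leading column, so an INDEX RANGE SCAN can be used
SELECT prod_id FROM sh.sales WHERE cust_id = 2;
-- The predicate filters only on the trailing column; the optimizer can still use
-- the index with an INDEX SKIP SCAN, but this pays off only when the leading
-- column has few distinct values
SELECT cust_id FROM sh.sales WHERE prod_id = 13;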
Create multi-column indexes on the columns of a table that are used together in query
predicates. If the index also contains the columns of the projection, the database can answer
the query from the index alone, without accessing the table data at all.
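For example, assuming an index such as the one sketched above on (CUST_ID, PROD_ID), the following query can be satisfied by the index alone, because both the predicate column and the projected column are stored in the index:
-- Index-only access: no table blocks need to be read
SELECT prod_id FROM sh.sales WHERE cust_id = 2;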
If the leading columns of the index have low cardinality (that is, the number of distinct values
in the column is very small compared to the number of rows in the table), using index key
compression will improve the performance of Index Range Scan operations.
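A sketch of such a compressed index follows; the index name is illustrative, and COMPRESS 1 compresses only the first key column:
-- PROD_ID has few distinct values, so it is a good candidate for key compression
CREATE INDEX sh.sales_prod_cust_cix ON sh.sales (prod_id, cust_id) COMPRESS 1;
-- Each distinct PROD_ID prefix is stored once per leaf block, so range scans
-- such as the following read fewer blocks
SELECT cust_id FROM sh.sales WHERE prod_id = 13;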
See also
See the Indexing the correct way recipe in Chapter 3, Optimizing Storage Structures,
for more information on using indexes
Introducing arrays and bulk operations
In this recipe, we will look at different ways to insert data into our tables and consider the
performance of the INSERT statement.
We will see how arrays can be used to speed up INSERT and SELECT statements, and why a
single SQL statement can be preferable to a procedural (row-by-row) approach for achieving
certain goals.
How to do it...
The following steps will demonstrate the use of arrays to insert data into the tables:
1. Connect to the SH schema:
CONNECT sh@TESTDB/sh
2. Create an empty table called MY_SALES containing the CUST_ID and PROD_ID columns of the SALES table:
CREATE TABLE sh.MY_SALES AS
SELECT cust_id, prod_id FROM sh.sales WHERE 1=0;
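The following is a minimal sketch of an array-based bulk insert into MY_SALES, using a PL/SQL collection and a FORALL statement; the loop bound and the generated values are illustrative assumptions, not necessarily the statements used later in this recipe:
DECLARE
  -- Associative array holding the rows to insert
  TYPE t_sales IS TABLE OF sh.MY_SALES%ROWTYPE INDEX BY PLS_INTEGER;
  l_rows t_sales;
BEGIN
  -- Fill the array in memory first (illustrative values)
  FOR i IN 1 .. 10000 LOOP
    l_rows(i).cust_id := i;
    l_rows(i).prod_id := MOD(i, 100) + 1;
  END LOOP;
  -- Then send the whole array to the database in one bulk operation
  FORALL i IN 1 .. l_rows.COUNT
    INSERT INTO sh.MY_SALES VALUES l_rows(i);
  COMMIT;
END;
/
The single-statement alternative mentioned in the introduction would simply be an INSERT ... SELECT from SH.SALES, which avoids the procedural loop altogether.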
 