• Currently, only DynamoDB columns with scalar data types are supported; that is, only attributes of the string and number data types can be copied. The SET and BINARY data types are not supported.
• DynamoDB attributes that are not present in the Redshift table are simply ignored; a sample target table definition is sketched after this list.
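To make the mapping concrete, here is a minimal, hypothetical definition of the Employee_RS target table. The column names and types are assumptions for illustration only; they are chosen to line up with string and number attributes in the DynamoDB Employee table, since COPY matches DynamoDB attribute names to Redshift column names case-insensitively:
create table Employee_RS (
    emp_id    integer,        -- maps to a DynamoDB number attribute named emp_id
    emp_name  varchar(100),   -- maps to a DynamoDB string attribute named emp_name
    salary    decimal(10,2)   -- maps to a DynamoDB number attribute named salary
);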
Suppose you want to export the Employee table from DynamoDB to the Employee_RS table in Redshift. You can use the following syntax:
COPY Employee_RS FROM 'dynamodb://Employee'
CREDENTIALS
'aws_access_key_id=<your-access-key>;aws_secret_access_key=<your-secret-key>'
readratio 50;
In this command, the first line mentions the names of the Redshift and DynamoDB tables. The second line provides the credentials, that is, the access key and secret key. In the third line, readratio, you specify how much of the DynamoDB table's provisioned throughput Redshift should use; here, 50 means that up to 50 percent of the table's provisioned read capacity units will be consumed. For example, if the Employee table is provisioned with 80 read capacity units, the COPY command will use roughly 40 of them.
Alternatively, you can create temporary credentials and use them for the copy operation. The benefit of temporary credentials is that they are short-lived and expire after a certain time, which limits the damage if they are exposed. However, you need to make sure that the temporary credentials remain valid for the entire duration of the copy task.
The following is the syntax to use temporary credentials:
copy Employee_RS from 'dynamodb://Employee'
credentials
'aws_access_key_id=<temporary-access-key>;aws_secret_access_key=<temporary-secret-key>;token=<temporary-token>'
readratio 50;
Automatic compression and sampling
You may sometimes wonder why the COPY command has used more than the expected provisioned throughput for a data load. The reason is that, by default, the COPY command applies automatic compression to the data being loaded into Redshift. To do so, it first samples a certain number of rows and analyzes them to choose suitable compression encodings for each column. Once this analysis is done, the sampled rows are discarded and the full data set is loaded with the selected encodings, which is why the load can consume extra read capacity from the DynamoDB table.
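If you want to avoid this extra sampling overhead, you can turn automatic compression off with the COMPUPDATE option (or tune the sample size with COMPROWS). The following is a minimal sketch; it assumes the column encodings of Employee_RS have already been defined, so no compression analysis is needed:
copy Employee_RS from 'dynamodb://Employee'
credentials
'aws_access_key_id=<your-access-key>;aws_secret_access_key=<your-secret-key>'
readratio 50
compupdate off;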