Error scrubbing SSTableReader(path='/var/lib/cassandra/data/twitterkeyspace/dumpuser/twitterkeyspace-dumpuser-jb-1-Data.db'): null
2. As a pre-step, it is always better to take a backup before running the repair command, so first create a snapshot directory for the backup (a sketch of the snapshot command appears after these steps). As you can see in the preceding output, an error occurred while scrubbing the .db file; hence, we need to manually remove /twitterkeyspace-dumpuser-jb-1-Data.db and restore the rest of the data from the snapshot directory. Please refer to Chapter 9, the “Data Backup and Restoration” section, for how to restore data from the snapshot directory.
3. We also need to run the nodetool repair command to bring back the table:

$CASSANDRA_HOME/bin/nodetool repair twitterkeyspace dumpuser
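
As a rough sketch of the backup mentioned in step 2, a snapshot of the keyspace can be taken with nodetool before scrubbing or repairing (the snapshot tag name pre_repair used here is only illustrative):

$CASSANDRA_HOME/bin/nodetool snapshot -t pre_repair twitterkeyspace

With the default data directory, the snapshot files typically end up under the table's data directory, for example /var/lib/cassandra/data/twitterkeyspace/dumpuser/snapshots/pre_repair/, and that is the directory to restore from as described in Chapter 9.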
So here we have seen how to import JSON data into sstables and export data to JSON files, as well as how to deal with corrupted sstables.
Currently, there are discussions in the community about whether to retire or replace these tools. You can refer to https://issues.apache.org/jira/browse/CASSANDRA-7464 for more details.
Cassandra Bulk Loading
In the “Decommissioning a Node” section, we saw that using the COPY command we can copy data from a .csv file into tables. Prior to CQL3's inception, in the early releases of Cassandra, the distribution came with a tool called sstableloader, which could be used to bulk load pre-generated .db (sstable) files directly into the cluster. We just had to write an implementation using the SSTableWriter API to generate the .db files. One such Thrift-based implementation can be downloaded from www.datastax.com/wp-content/uploads/2011/08/DataImportExample.java. Since CQL3, however, the CQL3 binary protocol is the active protocol for future Cassandra development, and please note that support for Thrift has been discontinued.
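
As a hedged illustration of the two bulk-loading paths just described (the CSV file name, contact host, and sstable directory below are made up for the example), the COPY command is run from the cqlsh prompt, while sstableloader streams a directory of generated sstable files into a live cluster:

cqlsh> COPY twitterkeyspace.dumpuser FROM 'users.csv';

$CASSANDRA_HOME/bin/sstableloader -d 127.0.0.1 /path/to/twitterkeyspace/dumpuser

The directory passed to sstableloader is expected to follow the keyspace/table layout (here twitterkeyspace/dumpuser) and contain the generated .db files.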