I recently needed to migrate all the data from a Cassandra cluster on EC2 into a Cassandra cluster behind our private firewall. Not only that, but the ring sizes of the source and destination clusters were different.
I kicked around some crazy stupid ideas for a while, until someone pointed out that Cassandra 0.8.1 shipped with a new tool called sstableloader (angels start singing here...)
sstableloader is a tool that basically reads a folder full of Cassandra Keyspace data and index files and bulk loads their data into a destination cluster. It can only do this one Keyspace at a time.
After playing with the tool, and working around some gotchas, I finally figured out the process for pulling off a cluster-to-cluster data migration. So I thought I'd write a little tutorial to share this process with others. The tutorial assumes your source Cassandra cluster is outside your firewall, and that sstableloader doesn't have direct read access to the data folder on the source cluster.
Also, if any of the following steps are stupid or just straight wrong, please leave a comment and I'll update accordingly.
Collect cassandra data files from existing cluster:
On each node in the source Cassandra ring, you'll need to collect all the data and index files (*-Data.db and *-Index.db) for the Keyspaces you want to migrate. The data files for a Keyspace are located in a folder named after the Keyspace (by default) under the "/var/lib/cassandra/data/" folder. On an EC2 server you probably changed this (or maybe should have changed this) to a folder under /mnt, since most of the disk space is on /mnt.
Before you package up the data and index files, you'll want to flush the Cassandra memtables to SSTables using nodetool. This makes sure the SSTables on disk are up to date with all the data written to the cluster. You'll also want to kick off a compaction with nodetool to minimize the volume of data you are going to be copying over to the destination network.
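Both operations are exposed through nodetool; a sketch of the two invocations, where the host and Keyspace name are placeholders for your own values:

```shell
# Flush memtables to SSTables on disk for the Keyspace being migrated,
# then compact to shrink the data before it gets copied anywhere.
nodetool -h <host> flush <KeyspaceName>
nodetool -h <host> compact <KeyspaceName>
```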
To package all data and index files for a Keyspace on a single node into a compressed tarball, run the following command, making sure to change the KeyspaceName to the Keyspace you want to collect, and the NodeNumber to the cassandra node you are working on:
find /mnt/cassandra/var/lib/cassandra/data/<KeyspaceName> -type f \( -name '*-Data.db' -o -name '*-Index.db' \) -print0 | tar -czvf <KeyspaceName><NodeNumber>.tar.gz --null -T -
This creates a tarball for the Keyspace with only the data and index files in it. (Piping the file list straight into tar via --null -T - avoids the xargs pitfall where a long file list gets split across multiple tar invocations, each clobbering the archive.) Run this for each Keyspace, on each node in the source Cassandra cluster. (Note: Cassandra data files compress nicely, to ~25% of their original size.)
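If you have several Keyspaces per node, the packaging step can be scripted. This is a sketch rather than the exact command above: the data directory, node number, and output folder are parameters you'd adjust, and it archives relative file names (by cd-ing into each Keyspace folder first) so the tarballs unpack flat later:

```shell
#!/bin/sh
# Sketch: build one flat tarball per Keyspace on this node.
# All three arguments are assumptions to adapt to your environment.
package_keyspaces() {
  data_dir=$1   # e.g. /mnt/cassandra/var/lib/cassandra/data
  node=$2       # number of the source node you are working on
  out_dir=$3    # absolute path where the tarballs should land
  for ks_dir in "$data_dir"/*/; do
    ks=$(basename "$ks_dir")
    # cd first so the archive holds relative names, and feed tar via
    # --null -T - so a long file list never splits into multiple tar
    # runs that would overwrite the archive.
    ( cd "$ks_dir" && find . -type f \( -name '*-Data.db' -o -name '*-Index.db' \) -print0 \
        | tar -czf "$out_dir/${ks}${node}.tar.gz" --null -T - )
  done
}
```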
Once all the Cassandra data is packaged into tarballs, SFTP each tarball (one per Keyspace, per Cassandra node) down to the destination network. This should take ~forever.
Since sstableloader uses gossip to communicate with the destination ring, it reads the listen_address and storage_port values from the cassandra.yaml file and uses that ip-address and port to communicate with the destination Cassandra ring. This means that if you run sstableloader on the same machine as a running Cassandra instance, you'll get the following error, because Cassandra is already using that ip-address and port to communicate with the other nodes in the ring:
org.apache.cassandra.config.ConfigurationException: /127.0.0.1:7000 is in use by another process. Change listen_address:storage_port in cassandra.yaml to values that do not conflict with other services
To get around this you'll have to create a new loopback ip-address for sstableloader to use. Running the following command will create a new loopback alias on ip-address 127.0.0.2 (this is the Mac OS X/BSD ifconfig syntax; on Linux the whole 127.0.0.0/8 range is typically already bound to lo, so the alias may not be necessary):
sudo ifconfig lo0 alias 127.0.0.2
Note: after you're finished with sstableloader and want to remove the loopback alias, run this command:
sudo ifconfig lo0 -alias 127.0.0.2
Also, since sstableloader reads the "../conf/cassandra.yaml" file to figure out which ip-address and port to use, you'll have to make a copy of the Cassandra install folder so you can change the yaml file without affecting the running Cassandra instance. So make a copy of the Cassandra install folder and rename it to something like apache-cassandra-0.8.1-sstableloader. Then open the conf/cassandra.yaml file in an editor and change the listen_address to 127.0.0.2.
sstableloader should now be fully configured to run from the apache-cassandra-0.8.1-sstableloader folder.
Since sstableloader will be dumping A LOT of data into the destination Cassandra cluster, you probably want to disable data file compaction in your destination cluster during this process. This will speed up the import and consume WAY less disk space on each destination Cassandra node.
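One way to do this, assuming the nodetool that ships with 0.8 (the thresholds shown for restoring afterwards are the stock defaults), is to zero out the minor-compaction thresholds for each column family you are loading into:

```shell
# Disable minor compactions for a column family on the destination
# cluster, then restore the defaults (min 4 / max 32) once the bulk
# load is done. <host>, <KeyspaceName>, and <ColumnFamily> are placeholders.
nodetool -h <host> setcompactionthreshold <KeyspaceName> <ColumnFamily> 0 0
# ... run the bulk load ...
nodetool -h <host> setcompactionthreshold <KeyspaceName> <ColumnFamily> 4 32
```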
When sstableloader runs, it uses the name of the folder holding the source data and index files as the Keyspace to write to in the destination ring. So for each Keyspace in your source Cassandra cluster, create a folder with the exact same name as that Keyspace. Unpack one of your Keyspace tarballs into this folder (making sure it's the correct Keyspace). Since different nodes from the source cluster probably use the same file names for their data and index files, you'll have to do this one tarball at a time to make sure you don't overwrite a file from a different node. Alternatively, you could write a utility that renames each node's files to be unique; that way you could unpack all the tarballs into one folder and run sstableloader only once per Keyspace.
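If you go the rename-utility route, one caveat: sstableloader parses the column family name out of each file name, so tacking a node-number prefix onto the front would break that. A safer sketch, assuming the 0.8 file name layout <ColumnFamily>-<version>-<generation>-Data.db, renumbers the generation instead:

```shell
#!/bin/sh
# Hypothetical helper for merging sstables from several nodes into one
# Keyspace folder without filename collisions. It offsets each file's
# generation number by the node number, e.g. MyCF-g-1-Data.db from
# node 3 becomes MyCF-g-3001-Data.db.
renumber_sstables() {
  dir=$1    # folder holding one node's unpacked *.db files
  node=$2   # source node number, used to offset the generation
  for f in "$dir"/*-Data.db "$dir"/*-Index.db; do
    [ -e "$f" ] || continue
    base=$(basename "$f")
    prefix=${base%-*-*}          # e.g. MyCF-g   (<ColumnFamily>-<version>)
    rest=${base#"$prefix"-}      # e.g. 1-Data.db
    gen=${rest%%-*}              # e.g. 1
    component=${rest#*-}         # e.g. Data.db
    mv "$f" "$dir/${prefix}-$((node * 1000 + gen))-${component}"
  done
}
```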
Once your data and index files are unpacked in a folder that has the same name as the destination Keyspace, run the following command to kick off sstableloader, pointing it at that folder (NOTE: make sure you run sstableloader from the copied Cassandra install folder you created earlier so it will use the correct ip-address for gossip):
./bin/sstableloader /path/to/<KeyspaceName>
Once this is finished, delete the data and index files in the Keyspace folder and unpack the next tarball into the folder and repeat the process until the Keyspace has all data loaded into it. Then repeat this process for the other Keyspaces.
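The unpack, load, clean-up cycle can be sketched as a small helper. The install path is an assumption based on the copied folder from earlier, and it assumes the flat tarball layout described in the packaging step:

```shell
#!/bin/sh
# Sketch: load every node's tarball for one Keyspace, one at a time,
# so same-named sstable files never overwrite each other.
# LOADER is assumed to point at bin/sstableloader in the copied install.
LOADER=${LOADER:-/opt/apache-cassandra-0.8.1-sstableloader/bin/sstableloader}

load_keyspace() {
  ks_dir=$1        # folder named exactly after the destination Keyspace
  tarball_dir=$2   # folder holding the <KeyspaceName><NodeNumber>.tar.gz files
  for tarball in "$tarball_dir"/*.tar.gz; do
    rm -f "$ks_dir"/*.db               # clear the previous node's files
    tar -xzf "$tarball" -C "$ks_dir"   # assumes flat tarballs from the packaging step
    "$LOADER" "$ks_dir"                # stream this node's sstables into the ring
  done
}
```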
Once this is complete, you'll want to re-enable compaction on the destination Cassandra cluster and manually kick off a compaction to get rid of the duplicate data (because you imported every replica of the data from the source cluster; with a replication factor of 3, that's three copies of each piece of data).