Revisiting Cassandra

(28 Feb & later Oct 2014, June 2015, Dec 2015, April 2016….)

Initially I was trying it as a backup for a Hazelcast cache - not really a normal use case. Since then I’ve changed the use case and I’m now using Cassandra with Scala, Akka and Spark.

The Spark-Cassandra-Connector means I can download Spark and Cassandra, and then use Cassandra as the RDD source. We can forget about HDFS and Hadoop.

As ever, nothing here is original, just notes for myself. To find the sources, go to:

Spark Cassandra Connector

Updated June 2015

Guide to cql

Guide to ddl

CQL notes:

In the early days they loved being a column DB; now the syntax is as close to any old relational DB as they can make it…

select columnfamily_name from system.schema_columnfamilies where keyspace_name = 'test';

cd C:\java\apache-cassandra-3.3\bin

# To start: cassandra.bat
# For the interactive shell: cqlsh.bat
# Note I had to edit cqlsh.bat to prefix python with my install dir
describe keyspaces;
use mykeyspace;
describe keyspace;
describe tables;
describe table my_user;

Data Modelling Guides

(crib sheets again; everything is other people’s work, see above)

Rule 1: Spread data evenly around the cluster

Rule 2: Minimize the number of partitions read

Step 1: Determine what specific queries to support

Step 2: Try to create a table where you can satisfy your query by reading (roughly) one partition

So, the partition key is the first part of the primary key. When doing a read you want to touch as few partitions as possible!


Primary Keys

When you create a primary key it can be a composite. The first part is the partition key, and the remaining columns are the clustering key.

A partition key can itself be composite, and it should be chosen to spread the data evenly, so don’t use a real-world thing which may vary in size. E.g. a UserGroup could contain 1 user or millions, so on its own it’s bad; instead do:

-- hash_prefix holds a prefix of a hash of the username;
-- for example, the first byte of the hash modulo four
CREATE TABLE group_users (   -- table name is illustrative
    groupname text,
    username text,
    hash_prefix int,
    PRIMARY KEY ((groupname, hash_prefix), username)
);

But for any group we now have to read four partitions (modulo 4), which breaks Rule 2 while meeting Rule 1. The rules conflict!
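To make the bucketing idea concrete, here is a minimal Python sketch of computing the hash prefix client-side. MD5 and the function name are just assumptions for illustration; any stable hash works:

```python
import hashlib

def hash_prefix(username: str, buckets: int = 4) -> int:
    """First byte of an MD5 hash of the username, modulo `buckets`."""
    return hashlib.md5(username.encode("utf-8")).digest()[0] % buckets

# The same username always lands in the same bucket, so a point read
# for one user touches exactly one partition...
p = hash_prefix("alice")
assert p == hash_prefix("alice")
assert 0 <= p < 4
# ...but listing a whole group means querying all four buckets.
```

The trade-off in the text falls straight out: reads for a single user hit one partition, reads for a whole group hit `buckets` partitions.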

It’s distributed!

No joins! i.e. denormalise and never join, as joining data that lives on different physical hosts is not what it’s about…

No Joins, but yes to Sets, Maps and Lists

If you have one-to-many relationships you can still denormalise, using a set, map or list column for the “many” side.


When you delete a record it is not physically deleted; instead an additional ‘tombstone’ marker is added, and after a grace period (10 days by default) the tombstoned data is removed during compaction.

So, if you have millions of short-lived records, research the issue in depth.

DO NOT set the grace period to 0. If a node loses a network card and you later repair it, the cluster may replicate data back out of the restored node that should have been tombstoned. i.e. while it was down it missed the delete, but the rest of the cluster didn’t; when it reconnects it ‘repairs’ the other nodes with the resurrected data.
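As a sanity check on the “10 days” figure: Cassandra’s default `gc_grace_seconds` is 864000, which is exactly ten days. A quick sketch of when a tombstone becomes purgeable (the date used is arbitrary):

```python
from datetime import datetime, timedelta

GC_GRACE_SECONDS = 864_000  # Cassandra's default gc_grace_seconds

# A tombstone written at deletion time becomes eligible for purging
# (on compaction) once the grace period has elapsed.
deleted_at = datetime(2016, 4, 1, 12, 0, 0)
purge_after = deleted_at + timedelta(seconds=GC_GRACE_SECONDS)

assert timedelta(seconds=GC_GRACE_SECONDS) == timedelta(days=10)
assert purge_after == datetime(2016, 4, 11, 12, 0, 0)
```

Any node that is down for longer than this window must be repaired (or removed) before rejoining, which is exactly why zeroing the grace period is dangerous.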


An advantage of indexes is the operational ease of populating and maintaining the index. Indexes are built in the background automatically, without blocking reads or writes. Client-maintained tables as indexes must be created manually; for example, if the artists column had been indexed by creating a table such as songs_by_artist, your client application would have to populate the table with data from the songs table.

To perform a hot rebuild of an index, use the nodetool rebuild_index command.

Ordering by time

The clustering part of the key can use a timeuuid: a version-1 UUID that embeds a timestamp while avoiding collisions between events created at the same instant.
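A quick sketch of what a timeuuid is made of, using Python’s standard `uuid` module (the helper function is mine, not Cassandra’s; the epoch offset is the standard gap between the UUID epoch of 1582-10-15 and the Unix epoch):

```python
import uuid

# A version-1 UUID embeds a 60-bit timestamp counted in 100-ns
# intervals since 1582-10-15, plus node/clock-sequence bits that
# avoid collisions between UUIDs minted at the same instant.
a = uuid.uuid1()
b = uuid.uuid1()

UUID_EPOCH_OFFSET = 0x01B21DD213814000  # 100-ns ticks from 1582 to 1970

def uuid1_unix_seconds(u: uuid.UUID) -> float:
    """Recover the wall-clock time embedded in a version-1 UUID."""
    return (u.time - UUID_EPOCH_OFFSET) / 1e7

assert a.version == 1
# Successive timeuuids from one process sort by creation time,
# which is what makes them useful as a clustering column.
assert uuid1_unix_seconds(a) <= uuid1_unix_seconds(b)
```

That embedded timestamp is why rows clustered on a timeuuid come back in time order.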

A bad design: if the table is partitioned by groupname alone, the single partition per group grows without bound, and reading it (especially in reverse time order with ORDER BY) gets slow:

SELECT * FROM group_join_dates
 WHERE groupname = ?

The fix is to change the table, and the query, to bucket partitions by date:

CREATE TABLE group_join_dates (
    groupname text,
    joined timeuuid,
    join_date text,
    username text,
    email text,
    age int,
    PRIMARY KEY ((groupname, join_date), joined)
);

Note we are reducing the number of partitions searched, as we group rows into partitions based on a time string.

Now we can use the more efficient query; note that because the partition key is now (groupname, join_date), the query must supply both:

SELECT * FROM group_join_dates
 WHERE groupname = ? AND join_date = ?
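A sketch of how the client might compute the join_date bucket before querying. Monthly buckets and the function name are assumptions; the point is just that the bucket string must be derivable from the timestamp on both the write and read paths:

```python
from datetime import datetime, timezone

def join_date_bucket(ts: datetime) -> str:
    """Time-string bucket for the partition key: one partition per
    group per month. Busier groups would want smaller buckets."""
    return ts.strftime("%Y-%m")

joined = datetime(2016, 4, 15, tzinfo=timezone.utc)
assert join_date_bucket(joined) == "2016-04"

# The read then targets exactly one partition, e.g.:
# SELECT * FROM group_join_dates WHERE groupname = ? AND join_date = '2016-04'
```

Fetching a date range means issuing one query per bucket, which is the Rule 1 vs Rule 2 trade-off again.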


My env: Maven and JDK 1.7 in Eclipse.

Download Cassandra 2.1

Untar to \java

Update project pom to use the datastax driver:
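A minimal sketch of what that dependency looks like, assuming the classic com.datastax.cassandra:cassandra-driver-core coordinates; the version shown is illustrative, so match it to your cluster:

```xml
<!-- DataStax Java driver for Cassandra; pick the version
     matching your Cassandra version -->
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>2.1.6</version>
</dependency>
```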


Configure Cassandra

“conf/cassandra.yaml: data_file_directories (/var/lib/cassandra/data), commitlog_directory (/var/lib/cassandra/commitlog), and saved_caches_directory (/var/lib/cassandra/saved_caches). Make sure these directories exist and can be written to.”

i.e. changing /var/lib to /dev means editing:

data_file_directories:
    - /dev/cassandra/data
commitlog_directory: /dev/cassandra/commitlog
saved_caches_directory: /dev/cassandra/saved_caches

Start up Cassandra

cd c:\java\apache-cassandra-2.0.5\bin
cassandra.bat

You need Python

For the scripty stuff in Cassandra you need Python 2.7; even though it says it works with 3.3 etc., Python 3 changed the syntax, so get 2.x.

To install the Cassandra Python libs on Windows do:

c:\java\apache-cassandra-2.1.0\pylib>c:\tools\Python27\python.exe setup.py install

Edit cqlsh.bat to have the path to your Python install, and then run it to show everything is sweet.


Original 2013 post was below:

Cassandra 2.0 and the DataStax driver. Not a crib sheet for reference, just a flow of notes for now.

So it seems I need a datastore, and Hadoop may be all well and good but my lil laptop struggles with the Oracle VirtualBox Hadoop setup. Also, the problem I have to solve is not exactly map-reduce, although it is probably big data.

Netflix’s use of Cassandra, the binary protocol etc. seem to show it’s stable and mature. I worked near a group using Cassandra as well, and they seemed pretty happy.

So, I downloaded Cassandra, updated the POM for the DataStax driver, and then started reading… yawn. Version hell it seems: Cassandra 1.2 doesn’t support the binary protocol out of the box, no idea about version 2.0… So I need to alter the node YAML… No I don’t, it’s already configured:

start_native_transport: true

# port for the CQL native transport to listen for clients on
native_transport_port: 9042

Wow, one hour from starting and I have a DataStax client connecting to Cassandra, creating ‘tables’ and storing data using prepared statements. This is pretty cool compared to many of the other techs I’ve played with so far this year.