
Testing with phantom testkit

Flavian Alexandru edited this page May 30, 2016 · 5 revisions


Naturally, no job is truly done without the full power of testing automation provided out of the box. That is exactly what we tried to achieve with the testing utilities: something very simple and easily extensible, yet with highly sensible defaults. We wanted something that works for most things most of the time with zero integration work on your behalf, while still letting you go as custom as you please when the scenario warrants it.

With that design philosophy in mind, we've created two kinds of tests. The first runs with a SimpleCassandraConnector, with the implementation found [here](https://github.com/websudos/phantom/blob/develop/phantom-testkit/src/main/scala/com/websudos/phantom/testkit/SimpleCassandraConnector.scala), where the testing utilities will auto-spawn an embedded Cassandra database with the right version and the right settings, run all the tests, and clean up once the tests are done.

The other, more complex implementation targets users who want to use phantom/Cassandra in a distributed environment. It is an easy way to automate multi-DC or multi-cluster tests via service discovery with Apache ZooKeeper. More details are available above.

For this, simply create a test using a ZkContactPointLookup and mix the resulting connector into the test.

There are four core implementations available:

| Name | Description | ZooKeeper support | Auto-embedding support |
| --- | --- | --- | --- |
| `CassandraFlatSpec` | Simple `FlatSpec` trait mixin, based on `org.scalatest.FlatSpec` | No | Yes |
| `CassandraFeatureSpec` | Simple `FeatureSpec` trait mixin, based on `org.scalatest.FeatureSpec` | No | Yes |
| `BaseTest` | ZooKeeper powered `FlatSpec` trait mixin, based on `org.scalatest.FlatSpec` | Yes | Yes |
| `FeatureBaseTest` | ZooKeeper powered `FeatureSpec` trait mixin, based on `org.scalatest.FeatureSpec` | Yes | Yes |

Using the built-in testing utilities is very simple. In most cases you will use one of the first two base implementations, either CassandraFlatSpec or CassandraFeatureSpec, depending on what kind of tests you like writing (flat or feature).

To get started with phantom tests, the usual steps are as follows:

  • Create a global method to initialise all your tables using phantom's auto-generation capability.
  • Create a global method to cleanup and truncate your tables after tests finish executing.
  • Create a root specification file that you plan to use for all your tests.
```scala
import scala.concurrent.{ Await, Future }
import scala.concurrent.duration._
import com.websudos.phantom.dsl._

object DatabaseService {
  def init(): Future[List[ResultSet]] = {
    val create = Future.sequence(List(
      Table1.create.future(),
      Table2.create.future()
    ))
    Await.ready(create, 5.seconds)
  }

  def cleanup(): Future[List[ResultSet]] = {
    val truncate = Future.sequence(List(
      Table1.truncate.future(),
      Table2.truncate.future()
    ))
    Await.ready(truncate, 5.seconds)
  }
}
```
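The init/cleanup pattern above boils down to sequencing a list of futures and blocking until every one of them has completed. A self-contained sketch of the same pattern, with plain `scala.concurrent` futures standing in for phantom's create/truncate queries:

```scala
import scala.concurrent.{ Await, Future }
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object InitPattern {
  // Plain futures stand in for Table1.create.future() and friends.
  def initAll(): List[String] = {
    val create = Future.sequence(List(
      Future("table1 created"),
      Future("table2 created")
    ))
    // Block until every schema operation has completed, as DatabaseService.init does.
    Await.result(create, 5.seconds)
  }
}
```

`Future.sequence` turns a `List[Future[A]]` into a `Future[List[A]]` that completes only once all the underlying futures have, which is exactly why a single `Await` suffices to guarantee all tables exist before any test runs.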

```scala
import com.websudos.phantom.testkit._

trait CustomSpec extends CassandraFlatSpec {

  override def beforeAll(): Unit = {
    super.beforeAll()
    DatabaseService.init()
  }

  override def afterAll(): Unit = {
    super.afterAll()
    DatabaseService.cleanup()
  }
}
```

Running your database tests with phantom is now trivial. A great idea is to use asynchronous testing patterns and future sequencers to get the best possible performance out of your tests. All your other test suites that need a running database can now look like this:

```scala
import com.websudos.phantom.dsl._
import com.websudos.util.testing._

class UserDatabaseServiceTest extends CustomSpec {
  it should "register a user from a model" in {
    val user = // .. create a user

    // A for-yield de-sugars to a flatMap chain, so in effect you get a sequence that says:
    // first write, then fetch by id. The beauty of it is that the first future only completes
    // once the user has been written, so you have an async guarantee that "getById" runs
    // only after the user is actually available.
    val chain = for {
      store <- UserDatabaseService.register(user)
      get <- UserDatabaseService.getById(user.id)
    } yield get

    // The "successful" method comes from com.websudos.util.testing._ in our util project.
    chain.successful { result =>
      // result is now an Option[User]
      result.isDefined shouldEqual true
      result.get shouldEqual user
    }
  }
}
```
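The sequencing guarantee described in the comments above can be shown with a self-contained snippet. The `register` and `getById` below are hypothetical stand-ins for the service calls, not phantom APIs:

```scala
import scala.concurrent.{ Await, Future }
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object ChainExample {
  // Hypothetical stand-ins for UserDatabaseService.register / getById.
  def register(user: String): Future[Unit] = Future(())
  def getById(id: Int): Future[Option[String]] = Future(Some("flavian"))

  def run(): Option[String] = {
    // De-sugars to register("flavian").flatMap(_ => getById(1).map(got => got)):
    // getById is only invoked after the future returned by register has completed.
    val chain = for {
      _   <- register("flavian")
      got <- getById(1)
    } yield got
    Await.result(chain, 5.seconds)
  }
}
```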

If you are using ZooKeeper and want to run tests through a full ZooKeeper powered cycle, where Cassandra settings are retrieved from a ZooKeeper instance that is either running locally or auto-spawned if none is found, pick one of the last two base suites.

Phantom spares you the trouble of spawning your own Cassandra server during tests. The implementation is based on the [cassandra-unit](https://github.com/jsevellec/cassandra-unit) project. Phantom will automatically pick the right version of Cassandra, but do be careful: we tend to use the latest version, as we do our best to keep up with the latest features.

You may use a brand new phantom feature, see the tests pass with flying colours locally, and then hit bad errors in production. The version of Cassandra covered by the latest phantom release and used for embedding is listed at the very top of this readme.

phantom uses the phantom-testkit module to run tests without a local Cassandra server. There are no prerequisites for running the tests: phantom will automatically load an embedded Cassandra of the right version, run all the tests, and do the cleanup afterwards. Read more on the testing utilities to see how you can achieve the same thing in your own database tests.

If a local Cassandra installation is found running on localhost:9042, phantom will attempt to use it instead. Some of the version-based logic lives directly inside phantom, although advanced compatibility and protocol version detection is a task we left to our dear partners at Datastax, as we felt re-implementing that concern in Scala would add no significant value.

Phantom uses multiple SBT configurations to distinguish between two kinds of tests: normal and performance tests. Performance tests are not run during Travis CI builds; we usually run them manually when serious changes are made to the underlying Twitter Spool and Play Iteratee based iterators, events that are very rare indeed.

  • Use sbt test to run the normal test suite, which should finish pretty quickly, within 2 minutes.
  • Use sbt perf:test if you have a lot of time on your hands and you are debugging performance issues with the framework. This will take 40 to 50 minutes.
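The split between the two suites relies on SBT's custom configuration support. A minimal build.sbt sketch of how such a perf configuration can be wired up in your own project (the exact setup in phantom's own build may differ):

```scala
// build.sbt sketch: defines a "perf" configuration extending Test,
// so `sbt perf:test` compiles and runs sources kept under src/perf/scala.
lazy val PerfTest = config("perf") extend Test

lazy val root = (project in file("."))
  .configs(PerfTest)
  .settings(inConfig(PerfTest)(Defaults.testSettings): _*)
```

Because `PerfTest` extends `Test`, performance suites see the test classpath, but `sbt test` never picks them up, which is what keeps the normal CI run fast.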