The Barclays Data Science Hackathon: Building Retail Recommender Systems based on Customer Shopping Behaviour

From Data Science Milan meetup event:

In the depths of the last cold, wet British winter, the Advanced Data Analytics team from Barclays escaped to a villa on Lanzarote, Canary Islands, for a one-week hackathon where they collaboratively developed a recommendation system on top of Apache Spark. The contest consisted of using Bristol customer shopping behaviour data to make personalised recommendations in a sort of Kaggle-like competition, where each team’s goal was to build an MVP and then repeatedly iterate on it using common interfaces defined by a purpose-built framework.
The talk will cover:

• How to rapidly prototype in Spark (via the native Scala API) on your laptop and magically scale to a production cluster without huge re-engineering effort.

• The benefits of doing type-safe ETLs representing data in hybrid, and possibly nested, structures like case classes.

• Enhanced collaboration and fair performance comparison by sharing ad-hoc APIs plugged into a common evaluation framework.

• The co-existence of machine learning models available in MLlib and domain-specific bespoke algorithms implemented from scratch.

• A showcase of different families of recommender models (business-to-business similarity, customer-to-customer similarity, matrix factorisation, random forest and ensembling techniques).

• How Scala (and functional programming) helped our cause.

Surfing and Coding in Lanzarote, the Barclays Data Science hackathon

This post was published on the Cloudera blog and summarises the results and takeaways of a week-long hackathon that took place in Lanzarote in December 2015. The goal was to prototype a recommender system for retail customers of shops in Bristol, UK. The article shows how the stack composed of Scala and Spark was great for quickly writing prototype code that runs locally on a single laptop and, at the same time, scales to larger datasets processed on a cluster.


Please continue reading at http://blog.cloudera.com/blog/2016/05/the-barclays-data-science-hackathon-using-apache-spark-and-scala-for-rapid-prototyping/.

Robust and declarative machine learning pipelines for predictive buying

Proof of concept of how to use Scala, Spark and the recent library Sparkz for building production quality machine learning pipelines for predicting buyers of financial products.

The pipelines are implemented through custom declarative APIs that give us greater control, transparency and testability of the whole process.

The example follows the validation and evaluation principles defined in the Data Science Manifesto, available in beta at http://www.datasciencemanifesto.org

Functional Data Validation using monads and applicative functors


ETL is probably the most time-consuming part of every Data Science project. The quality of the extracted and crunched data is one of the major factors affecting the final results. In fact, real-world data is always messy and inconsistent. Data validation is a must for enforcing the correctness of the proposed solution and for making sure the underlying data represents the true business scenario.

When performing a data validation, the following issues often arise:

  • We want to keep track of how much information we lose, for debugging and reporting purposes.
  • Sometimes we want to cleanse invalid data instead of filtering it out.
  • Part of the validation logic depends on the project requirements and/or model assumptions. These change often, and refactoring may introduce bugs.

In this tutorial we show how to use monads, applicative functors and other functional programming concepts to safely and elegantly define the validation logic using a modular pattern. Each rule is defined individually and the final logic is built by using two types of composition:

  • Monad-composition. One rule after the other; if one fails, the next rule is not applied.
  • Applicative-composition. All of the rules are applied independently and the validation results are collected and merged together.

Moreover, the data that does not pass validation is not discarded but is moved into a separate pipeline, with all of the relevant metadata attached to it explaining why that particular record was discarded. This allows us to:

  • Log all of the specific causes of data loss.
  • Easily recover previously invalidated data if the validation rules change.
  • Re-use part of the discarded data further down in the data pipeline. The whole ETL workflow is aware of what has been discarded before.

Spark, Scala, Scalaz and Sparkz

The tutorial is part of the open source project Sparkz, which aims to extend the Apache Spark framework by providing more functional APIs. The implementation is in Scala and leverages Scalaz, the framework that inspired Sparkz.

Scalaz provides a Validation data structure that is similar to Either (where an object can either be Left/Failure or Right/Success), but it is not a monad: it is an applicative functor, because instead of chaining the result from the first event to the next, Validation validates all events. Since in case of a failure there must be at least one error message, we enforce the failure type to be a non-empty list of error messages. For this purpose Scalaz already provides the data structure ValidationNel, which accumulates all of the error messages into a Nel (non-empty list).

See this page for documentation: http://eed3si9n.com/learning-scalaz/Validation.html
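As a minimal sketch of the accumulation behaviour (hypothetical checks, assuming Scalaz 7 on the classpath):

import scalaz.Scalaz._
import scalaz.ValidationNel

// Two independent, made-up checks on the same value
def positive(n: Int): ValidationNel[String, Int] =
  if (n > 0) n.successNel[String] else "not positive".failureNel[Int]

def even(n: Int): ValidationNel[String, Int] =
  if (n % 2 == 0) n.successNel[String] else "not even".failureNel[Int]

positive(-3) +++ even(-3)
// Failure(NonEmptyList("not positive", "not even")): all error messages are accumulated

positive(4) +++ even(4)
// Success(8): on success, +++ combines the payloads with their Semigroup (here Int addition)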

The concepts and methodology can be applied to any data computation framework and programming language; you will just have to re-implement yourself part of the boilerplate code shown in this tutorial that does all of the magic for you.

The user/events data validation use case

The use case we are using for this example is a simple data type consisting of a triple of userId, eventCode and timestamp:


case class UserEvent(userId: Long, eventCode: Int, timestamp: Long)

Each UserEvent can either be marked as correct or as invalid. In the latter case we wrap it into another case class, InvalidEvent, containing the invalid event as well as some meta information regarding the error cause:

sealed trait InvalidEventCause

case class InvalidEvent(event: UserEvent, cause: InvalidEventCause)

The goal is to build a function that takes a UserEvent and returns a ValidationNel of either the correct event or the non-empty list of all of the causes:


UserEvent => ValidationNel[InvalidEvent, UserEvent]

The reason why we want to return ValidationNel[InvalidEvent, UserEvent] instead of ValidationNel[InvalidEventCause, UserEvent] is that we want to keep the original datum for possible later recovery instead of only storing the error causes. This implies that the same object is duplicated multiple times, which is not efficient, but we are not addressing optimisation issues in this tutorial; we will leave them for future posts.

Validation Rules

The easiest way to define each rule is via partial functions that map a UserEvent into an InvalidEventCause. A partial function is a function that is only defined for a subdomain of the input arguments; in our case it is a function that tries to invalidate a datum and is not defined for correct records. The full validation logic will be expressed as a List of partial functions such as:


val validationRules: List[PartialFunction[UserEvent, InvalidEventCause]]

In order to reduce the boilerplate code, the PartialFunction returns an InvalidEventCause and our implicit logic then wraps it together with the original UserEvent object into an InvalidEvent container.
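As a tiny illustration of partial function semantics (a made-up rule, unrelated to the real ones below):

// Defined only for the subdomain of invalid inputs
val invalidateNegative: PartialFunction[Int, String] = {
  case n if n < 0 => "negative"
}

invalidateNegative.isDefinedAt(-1) // true: the datum would be invalidated
invalidateNegative.isDefinedAt(42) // false: the datum is considered correct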

Some rules are simply pre-defined, such as checking that a timestamp is in a min-max range or that the userId is in a white list, and so on. Others are more complicated and are derived from the underlying raw data (before validation).
The method generating the final validation function takes as arguments the RDD with the raw data plus a bunch of parameters and objects used to define the single rules:

def validationFunction(events: RDD[UserEvent],
                       eligibleUsers: Set[Long],
                       validEventCodes: Set[Int],
                       blackListEventCodes: Set[Int],
                       minDate: String, maxDate: String): UserEvent => ValidationNel[InvalidEvent, UserEvent]

In order to compile the code snippets you will have to add the corresponding dependencies and the following imports:

 
import org.apache.spark.broadcast.Broadcast
import org.apache.spark.rdd.RDD
import org.joda.time.{DateTime, Interval, LocalDate}
import sparkz.utils.Pimps._
import scalaz.Scalaz._
import scalaz.ValidationNel

First, we grab the SparkContext from the events RDD:


val sc = events.context

Valid event code

We want to filter out all of the events whose code does not belong to the validEventCodes set.

 

case object NonRecognizedEventType extends InvalidEventCause

val validEventCodesBV: Broadcast[Set[Int]] = sc.broadcast(validEventCodes)
val notRecognizedEventCode: PartialFunction[UserEvent, InvalidEventCause] = {
  case event if !validEventCodesBV.value.contains(event.eventCode) => NonRecognizedEventType
}

We could have enclosed the set directly in the partial function, but we preferred to broadcast it and retrieve it using the value API.

The reasons for using broadcast variables in Spark are explained here: http://spark.apache.org/docs/latest/programming-guide.html#broadcast-variables

Eligible users

Just like event codes we want to make sure that we only select data from a pool of predefined eligible users:


case object NonEligibleUser extends InvalidEventCause

val eligibleUsersBV: Broadcast[Set[Long]] = sc.broadcast(eligibleUsers)
val customerNotEligible: PartialFunction[UserEvent, InvalidEventCause] = {
  case event if !eligibleUsersBV.value.contains(event.userId) => NonEligibleUser
}

N.B. if the eligibleUsers set is large you cannot broadcast it as a shared variable; you should rather turn it into a paired RDD and use the userId as the key for a join. If you want to preserve the information of why you discarded a particular user, you will have to perform an outer join instead of an inner join, as sketched below.
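A possible sketch of that join-based alternative (hypothetical, not part of the original code; it reuses the NonEligibleUser cause and the Scalaz syntax imports shown above):

// Key the eligible users by userId so they can be joined instead of broadcast
val eligibleUsersRdd: RDD[(Long, Unit)] =
  sc.parallelize(eligibleUsers.toSeq).map(userId => userId -> (()))

// A left outer join preserves the non-eligible events together with the reason for discarding them
val eligibilityChecked: RDD[ValidationNel[InvalidEvent, UserEvent]] =
  events.keyBy(_.userId)
    .leftOuterJoin(eligibleUsersRdd)
    .values
    .map {
      case (event, Some(_)) => event.successNel[InvalidEvent]
      case (event, None)    => InvalidEvent(event, NonEligibleUser).failureNel[UserEvent]
    }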

Blacklist users

This logic is slightly more complicated. We want to filter out all of the events of those users for which we observed at least one event in the black list. We will first have to scan the raw dataset in order to build the set of blacklisted user ids.


case object BlackListUser extends InvalidEventCause

val blackListEventCodesBV: Broadcast[Set[Int]] = sc.broadcast(blackListEventCodes)
// Users for which we observed a black list event
val blackListUsersBV: Broadcast[Set[Long]] = sc.broadcast(
  events.filter(event => blackListEventCodesBV.value.contains(event.eventCode))
  .map(_.userId).distinct().collect().toSet
)
val customerIsInBlackList: PartialFunction[UserEvent, InvalidEventCause] = {
  case event if blackListUsersBV.value.contains(event.userId) => BlackListUser
}

We first run distinct() to reduce the size of the RDD before collecting it and turning it into a set.

Timestamp out of global interval

We specified minDate and maxDate as strings in ISO format and we want to filter out all of the timestamps outside this range. For this logic we don’t need any pre-computation; we can directly implement it as:


case object OutOfGlobalIntervalEvent extends InvalidEventCause

val eventIsOutOfGlobalInterval: PartialFunction[UserEvent, InvalidEventCause] = {
  case event if !new Interval(DateTime.parse(minDate), DateTime.parse(maxDate)).contains(event.timestamp) =>
    OutOfGlobalIntervalEvent
}

 

We use joda-time to parse strings and epoch timestamps into more manageable classes.
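For example (hypothetical values):

DateTime.parse("2015-12-01")             // ISO string to DateTime
new DateTime(1449100800000L).toLocalDate // epoch milliseconds to LocalDate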

First day to consider for the user

This time we want a stricter rule regarding the event timestamps. We want to avoid border effects by removing all of the events that fall on the same date on which we observed the first event of a particular user. That is because the first date may be incomplete, not contain all of the events, and thus invalidate our data assumptions. Since we also introduced the concept of a global min/max interval, the first date to consider for a particular user is the max between the first date we observed in the data and the start of the global interval.


case object FirstDayToConsiderEvent extends InvalidEventCause

// max between first date we have ever seen a customer event and the global min date
val customersFirstDayToConsiderBV: Broadcast[Map[Long, LocalDate]] =
  sc.broadcast(
    events.keyBy(_.userId)
    .mapValues(personalEvent => new DateTime(personalEvent.timestamp).toLocalDate)
    .reduceByKey((date1, date2) => List(date1, date2).minBy(_.toDateTimeAtStartOfDay.getMillis))
    .mapValues(firstDate => List(firstDate, LocalDate.parse(minDate)).maxBy(_.toDateTimeAtStartOfDay.getMillis))
    .collect().toMap
  )
val eventIsFirstDayToConsider: PartialFunction[UserEvent, InvalidEventCause] = {
  case event if customersFirstDayToConsiderBV.value(event.userId).isEqual(new DateTime(event.timestamp).toLocalDate) =>
    FirstDayToConsiderEvent
}

We reduced the events RDD to the minimum date for each userId using the reduceByKey monoidal aggregation. Then we applied the max function between the reduced minimum date and the global one.

Validation rules composition

All we have to do is take the individually defined partial functions (which in Scala are objects like everything else) and put them into a List.

We realised that eventIsFirstDayToConsider depends on eventIsOutOfGlobalInterval: if an event is filtered out because it is outside the global interval, there is no need to put it through the first-day-to-consider rule. Thus we can compose the two rules monadically by using the orElse method on the first partial function, which takes as argument the second partial function, applied if and only if the first one is not defined (in our case, if the event is not already outside the global min-max range). The orElse method returns a new partial function that will be treated as a single validation rule and internally combines the two of them.

val validationRules: List[PartialFunction[UserEvent, InvalidEventCause]] =
  List(customerNotEligible, notRecognizedEventCode, customerIsInBlackList,
    eventIsOutOfGlobalInterval.orElse(eventIsFirstDayToConsider)
  )

The order of the validation rules does not matter, since all of them will be applied independently, even if computationally they are applied sequentially. If you want to apply them in parallel you can use the par method of a list, which turns it into a parallel collection.

The final validation function will be generated by a couple of syntactic sugar implicits that we implemented in Sparkz.


(event: UserEvent) => validationRules.map(_.toFailureNel(event, InvalidEvent(event, _))).reduce(_ |+++| _)
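If we wanted to evaluate the rules over a parallel collection, as mentioned above, a hypothetical variant would be:

(event: UserEvent) => validationRules.par.map(_.toFailureNel(event, InvalidEvent(event, _))).reduce(_ |+++| _)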

The magic operators

In the list of imports we specified:

import sparkz.utils.Pimps._

Pimps are a nice pattern used in Scala to implicitly add methods to existing classes (the action of “pimping”). They are particularly useful when you want to call a method on a class that does not expose that method.

In order to implement our validation pattern we had to pimp two classes: PartialFunction and ValidationNel. The pimped methods hide the boilerplate logic computed behind the scenes.

implicit class PimpedPartialFunction[X, E](pf: PartialFunction[X, E]) {
  def toFailureNel[W](x: X, toW: E => W = identity _): ValidationNel[W, X] =
    pf.andThen(e => toW(e).failureNel[X]).applyOrElse(x, (_: X).successNel[W])

  def toFailureNel(x: X): ValidationNel[E, X] = toFailureNel(x, identity)
}

The toFailureNel method attempts to apply the partial function (X => E) to the element x. In case the function is defined, it returns a failureNel (a non-empty list containing a single failure of type E) where the failure object is the one returned by the original function. In case the function is not defined, the pimped method creates a successNel of type X.

The more general method also takes an extra argument toW that converts the error object e returned by the original function (if defined) and wraps it into another type W; it acts as a functor for the failure case. This method allows us to encapsulate the information of the original event that generated the error together with its cause.

In our use case the generic types X, E and W are:

X: UserEvent
E: InvalidEventCause
W: InvalidEvent
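As a hypothetical example of applying a single rule to one event (values made up for illustration):

val event = UserEvent(userId = 42L, eventCode = 7, timestamp = 1449100800000L)

notRecognizedEventCode.toFailureNel(event, InvalidEvent(event, _))
// Failure(NonEmptyList(InvalidEvent(event, NonRecognizedEventType))) if code 7 is not in validEventCodes,
// Success(event) otherwise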

Once we have converted the partial functions into ValidationNel instances, we need to reduce them into a single one. Scalaz provides a binary operator +++ that takes two ValidationNel instances and merges them together. In case of failures, it appends all of the failures into a single list of type E; in case of success results, it applies an external semigroup operator on type X. In other words, that operator knows how to reduce failures by concatenating the errors, but it requires you to specify how to combine the correct results when all of them are Success.

Since in our use case we are not transforming the correct data (we either discard it or keep it as it is), the semigroup operator is straightforward. We assume that the success results always contain the same original object, i.e. they never transform it. Thus the operator simply returns the object itself, where the two objects to reduce are just copies of each other.

Our PimpedValidationNel provides a simplified operator that does not require a semigroup defined for the type X:


import scalaz.{Failure, Success}

implicit class PimpedValidationNel[E, X](x1: ValidationNel[E, X]) {
  // Extension of scalaz.Validation.+++ operator, does not require the semigroup defined for X
  def |+++|(x2: ValidationNel[E, X]) = x1 match {
    case Failure(a1) => x2 match {
      case Failure(a2) => Failure(a1 append a2)
      case Success(b2) => x1
    }
    case Success(b1) => x2 match {
      case b2@Failure(_) => b2
      case Success(b2) if b1 == b2 => Success(b1)
      case Success(b2) => throw new IllegalArgumentException(s"$b1 not equals to $b2")
    }
  }
}

This operator also allows us to merge together results coming from different validation pipelines. If we had two RDDs of the same ValidationNel generic types we could theoretically join them and merge the results for the same objects. That operation is very expensive, though; it is much cheaper to compose the lazy functions that generate the validation objects and apply the final combined function to each record in a single pass over the dataset.
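For instance, a hypothetical helper composing two such functions so that they are applied in one pass:

def combine(f: UserEvent => ValidationNel[InvalidEvent, UserEvent],
            g: UserEvent => ValidationNel[InvalidEvent, UserEvent]): UserEvent => ValidationNel[InvalidEvent, UserEvent] =
  event => f(event) |+++| g(event)

// events.map(combine(validationFunc1, validationFunc2)) touches each record only once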

What you can do with the ValidationNel API

The simplest thing would be getting only the valid records:

def onlyValidEvents(events: RDD[UserEvent],
                    validationFunc: UserEvent => ValidationNel[InvalidEvent, UserEvent]): RDD[UserEvent] =
  events.map(validationFunc).flatMap(_.toOption)

Or the opposite: getting only the invalid events and flat-mapping them into their corresponding error-wrapping class:

def invalidEvents(events: RDD[UserEvent],
                  validationFunc: UserEvent => ValidationNel[InvalidEvent, UserEvent]): RDD[InvalidEvent] =
  events.map(validationFunc).flatMap(_.swap.toOption).flatMap(_.toList)

Suppose we want to extract all of the original events that failed because of a particular cause, for instance when their timestamp was out of range:

def outOfRangeEvents(events: RDD[UserEvent],
                     validationFunc: UserEvent => ValidationNel[InvalidEvent, UserEvent]): RDD[UserEvent] =
  events.map(validationFunc).flatMap(_.swap.toOption).flatMap(_.toSet).flatMap {
    case InvalidEvent(event, OutOfGlobalIntervalEvent) => event.some
    case _ => Nil
  }

N.B. we are exploiting the implicit conversion from an Option to an Iterable to apply a combination of filter and map in a single flatMap operation.
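The same trick in isolation (hypothetical values):

List(1, 2, 3).flatMap(n => if (n % 2 == 0) Some(n) else None) // List(2): filter + map in a single flatMap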

Now, suppose we would like to print a debug message with the count of invalid events for each set of error causes.

def causeSetToInvalidEventsCount(events: RDD[UserEvent],
                                 validationFunc: UserEvent => ValidationNel[InvalidEvent, UserEvent]): Map[Set[InvalidEventCause], Int] =
  events.map(validationFunc)
  .map(_.swap).flatMap(_.toOption).map(_.map(_.cause).toSet -> 1)
  .reduceByKey(_ + _)
  .collect().toMap

The above method will return a map that looks like:

Map(Set(NonEligibleCustomer, NonRecognizedEventType) -> 36018450,
Set(NonEligibleUser) -> 9037691,
Set(NonEligibleUser, BlackListUser, NonRecognizedEventType) -> 137816,
Set(NonEligibleUser) -> 464694973,
Set(BeforeFirstDayToConsiderEvent, NonRecognizedEventType) -> 5147475,
Set(OutOfGlobalIntervalEvent, NonRecognizedEventType) -> 983478).

Please note that we are not counting by individual cause but grouping by the combination of causes that co-occurred together. This gives us much more debugging power with no loss of information, as opposed to the traditional monadic sequential validation where only the first cause would be recorded.

Moreover, what we might really be interested in is the count of how many users we lost as an effect of the events validation; in other words, how many users were lost because they had no events left after validation.

def causeSetToUsersLostCount(events: RDD[UserEvent],
                             validationFunc: UserEvent => ValidationNel[InvalidEvent, UserEvent]): Map[Set[InvalidEventCause], Int] = {
  val survivedUsersBV: Broadcast[Set[Long]] =
    events.context.broadcast(events.map(validationFunc).flatMap(_.toOption).map(_.userId).distinct().collect().toSet)

  events.map(validationFunc).flatMap(_.swap.toOption)
  .keyBy(_.head.event.userId)
  .filter(_._1 |> (!survivedUsersBV.value(_)))
  .mapValues(_.map(_.cause).toSet)
  .mapValues(Set(_))
  .reduceByKey(_ ++ _)
  .flatMap(_._2)
  .map(_ -> 1)
  .reduceByKey(_ + _)
  .collect().toMap
}

What the above code does is the following:

  1. Compute the set of “survived” users from the correct events after validation.
  2. Filter only the invalid events of the users who did not survive.
  3. Turn each list of failures into a Set of failures (so that the order does not matter).
  4. Group all of the cause sets by userId and deduplicate them, so that each combination of causes appears only once per userId.
  5. Count for how many non-survived users each cause set appears.
  6. Return a map from cause set to an integer representing the count of lost users.

It should return something like:

Map(Set(NonEligibleCustomer, NonRecognizedEventType) -> 1545,
Set(NonEligibleUser) -> 122,
Set(NonEligibleUser, BlackListUser, NonRecognizedEventType) -> 3224,
Set(NonEligibleUser) -> 4,
Set(BeforeFirstDayToConsiderEvent, NonRecognizedEventType) -> 335,
Set(OutOfGlobalIntervalEvent, NonRecognizedEventType) -> 33)

Conclusions

In this tutorial we showed how we can extend Scalaz to provide a functional and elegant way of applying validation rules to clean a raw dataset. We showed how the whole logic is parallelizable and scalable using the Apache Spark framework. The main advantage of this approach is that we never lose information: we split the data into two pipelines (valid/invalid) and mark each invalid record with some metadata. We presented a simple tutorial for a common use case, showing how to define validation rules and how to use the API of the generalised ValidationNel objects to perform debugging and cleansing tasks.

The source code is available at: https://github.com/gm-spacagna/sparkz/blob/master/src/examples/scala/sparkz/DataValidation.scala.

The whole procedure is not fully optimised for efficiency: the creation of immutable objects may add unnecessary overhead for the garbage collector, and the way we wrap the original datum in case of failure creates a lot of duplicated clones of the same object. We leave the task of optimising it to future blog posts.

We hope that this tutorial will inspire Data Scientists and Engineers, regardless of their language and/or technology stack, to approach their coding in a more functional way. Functional programming offers the elegance and conciseness of implementing arbitrarily complicated logic as a simple combination of reusable higher-order functions, as opposed to the classic imperative programming paradigm. We found this way of writing code to be much more suitable for implementing maths and data transformation algorithms.

***

Similar articles about Data Validation using Scalaz:

https://github.com/FranklinChen/data-validation-demo
https://www.innoq.com/en/blog/validate-your-domain-in-scala/

6 points to compare Python and Scala for Data Science using Apache Spark

Apache Spark is a distributed computation framework that simplifies and speeds up the data crunching and analytics workflow for data scientists and engineers working on large datasets. It offers a unified interface for prototyping as well as for building production-quality applications, which makes it particularly suitable for an agile approach. I personally believe that Spark will inevitably become the de-facto Big Data framework for Machine Learning and Data Science.

Despite the different opinions about Spark, let’s assume that a data science team wants to start adopting it as its main technology. The choice of programming language is often a dilemma. Shall we build our models in Python or in Scala? Shall we run the exploratory analysis using the iPython notebook or iScala?
A common understanding is that Python is the scientific language and Scala is an engineering language, seen as a better replacement for Java. Whilst there is truth in that, it does not always have to be the case.

Since the two languages have already been compared in detail elsewhere, I would like to restrict the comparison to the particular use case of building data products leveraging Apache Spark in an agile workflow.

In particular, I can identify 6 important aspects that a Data Science programming language in this context should provide:

  1. Productivity
  2. Safe refactoring
  3. Spark integration
  4. Out-of-the-box Machine Learning/Statistics packages
  5. Documentation / Community
  6. Interactive Exploratory Analysis and built-in visualization tools

Why only Scala and Python?
Apache Spark comes with 4 APIs: Scala, Java, Python and, recently, R. The reason why I am only considering “PyScala” is that they mostly provide features similar to the other two languages (Scala over Java and Python over R) with, in my opinion, a better overall score. Moreover, R is not a general-purpose language and its API is still in an experimental phase.

1. Productivity

Even though coding close to the bare metal always produces the most optimised results, premature optimisation is known to be the root of all evil. Especially in the initial MVP phase, we want to achieve high productivity with the fewest possible lines of code, possibly guided by a smart IDE.

Python is a very simple to learn and highly productive language for getting things done quickly, from day 1. Scala requires a little bit more thinking and abstraction due to its higher-level functional features, but as soon as you get familiar with them, your productivity will boost dramatically. Code conciseness is quite comparable; both can be very concise depending on how good you are at coding. Reading Python is more explicit: it shows you step by step what your code is executing and the state of each variable. Scala, on the other hand, focuses more on describing what you are trying to achieve as a final result, hiding most of the implementation details and the execution order. But remember, with great power comes great responsibility: whilst pattern matching is a very cool way to extract variables, advanced features like implicits or custom DSLs can be confusing to the non-expert user.

In terms of IDEs, both IntelliJ and PyCharm are smart and productive environments. Nevertheless, Scala can take advantage of types and compile-time cross-references to provide some extra functionality more naturally and without ambiguity, unlike scripting languages. Just to name a few: finding classes/methods by name in the project and linked dependencies, finding usages, auto-completion based on type compatibility, development-time errors and warnings.
On the other hand, all of those compile-time features come with a cost: IntelliJ, sbt and all of the related tools are very slow and memory/CPU consuming. You shouldn’t be surprised if 2GB of your RAM is allocated just to open multiple parallel Scala projects. Python is more lightweight in this respect.

Conclusion: both score very well here. My recommendation is: if you are developing simple, intuitive logic then Python does the job greatly; if you want to do something more complex then it may be worth investing in learning and writing functional code in Scala.

2. Safe Refactoring

This requirement mainly comes from the agile methodology: we want to safely change the requirements of our code as we perform data exploration and adjust them at each iteration. Very commonly, you first write some code with associated tests and, shortly after, tests, implementations and APIs get broken by the next change. Every time we perform a refactoring we face the risk of introducing bugs and silently breaking the previous logic.

Both languages require tests (unit tests, integration tests, property-based tests, etc.) in order to be safely refactored. Scala, being a compiled language, has an advantage here, but I am not going to argue the pros and cons of compiled vs scripting languages, so I will skip that; at least for me, I can see some useful benefits in having typed code.

Conclusion: Scala very well, Python average.

3. Spark Integration

The majority of time and resources is generally spent on loading, cleaning and transforming data and extracting the most informative bits out of it. For that task, what is better than expressing your domain-specific logic as a combination of functions without bothering about how it is lazily executed? No wonder that Big Data is turning more and more functional.

You would now expect me to say that Scala does better since it is natively functional. Actually, in this scenario the big difference is made by Spark rather than by the programming language. Even though Python is not 100% functional (you could make it so via external libraries), it wraps the Spark API, which is indeed functional.

The implementation of the single map or reduce functions can then be either functional or not, but at least the main logic is expressed as a pipeline of transformations and operations over the raw data, and the execution plan is defined by the computation framework.

You still have to use the different Spark APIs smartly in order to make your code scalable and optimised, but this task is the same in both cases. If we consider code execution performance, then we all know that JVM-compiled code runs faster than Python code, but Spark is moving towards language-agnostic abstractions like DataFrame, which will optimise most of the work for you, producing comparable performance results.

Thus, the solution is “use Spark”. Because of that (and independently from its functional nature), Scala supports it natively, which comes in particularly handy when performing low-level tuning, optimisations and debugging. If you have used the Spark framework you are well familiar with its serialization exceptions. Since the Python code is wrapped and executed in the JVM, you have less control over what is enclosed in your functions. Moreover, some new features in recent Spark releases may only be available in Scala before being ported to Python.

Conclusion: Scala is better when it comes to engineering; the two are equivalent in terms of Spark integration and functionality.

4. Out-of-the-box machine learning/statistics packages

When you marry a language, you marry the whole family. And Python has much more to bring to the table when it comes to out-of-the-box packages implementing most of the standard procedures and models you generally find in the literature and/or broadly adopted in industry. Scala is still way behind in that respect, yet it can benefit from Java library compatibility and from the community developing some of the popular machine learning algorithms in distributed form directly on top of Spark (see MLlib, H2O Sparkling Water, DeepLearning4j, ...). A little note regarding MLlib: from my experience its implementation is a bit hacky and often hard to modify or extend due to a mediocre design and nonsensical limitations of private fields and classes.

Regarding Java compatibility, I honestly don’t see any Java framework anywhere close to what Python provides today with its amazing scikit-learn and related libraries. On the other hand, many of those Python implementations only work locally (unless you use some bootstrapping/bagging + model ensembling technique, see https://cornercases.wordpress.com/2013/10/23/example-python-machine-learning-algorithm-on-spark/) and their out-of-the-box implementations lack strong scalability when it comes to distributed algorithms. Scala, on the other hand, provides only a few implementations, but they are already scalable and production-ready.

Nevertheless, do not forget that many big data problems can be reduced to small data problems, especially after an accurate feature selection, filtering and aggregation. It might make sense in some scenarios to crunch your large dataset into a vector space that can perfectly fit in memory and take advantage of the richness and advanced algorithms available in Python.

Conclusion: it really depends on the size of your data. Prefer Python whenever the data can fit in memory, but also keep in mind the requirements of your project: is it just a prototype, or is it something you want to deploy/maintain in a production system? Python offers a complete selection of already-implemented packages that can satisfy any need; Scala will only provide the basics, but in case of “productionisation” it is a better engineering choice.

5. Documentation / Community

If we compare the two plain languages (without their external libraries) in terms of community size, then Python belongs to tier 1 while Scala sits right after in tier 2, see http://readwrite.com/2010/12/10/ranking-programming-languages. Practically speaking, it means both of them have enough tutorials and StackOverflow answers covering the majority of use cases and how-to’s.

If we consider the documentation of machine learning and statistics frameworks, the Python data science community is more mature, and in fact you can find many tutorials and examples of how to solve a lot of problems and perform cool analyses using most of the Python libraries.

Unfortunately, we cannot say the same for Scala. The ML and MLlib documentation is very poor; the only way to really understand how those libraries work is by reading the code. The same goes for some other open source libraries I found on GitHub.

Conclusion:
Both of them have a good and comparable community in terms of software development. When we consider the data science community and cool data science projects, Python is hard to beat.

6. Interactive Exploratory Analysis and built-in visualization tools

iPython is one of the greatest tools ever invented in the scientific world; one year ago it would without doubt have been the Oscar winner. Today we can find many notebook implementations inspired by the iPython notebook, available for any language. Jupyter, the evolution of iPython, supports different kernels, and iScala actually re-implements the notebook on top of an Akka/Play RESTful service. If you only consider opening a web-based notebook and starting to write and interact with some code, I think they are very similar.

If we consider using the notebook to interact with Spark, it may be a little more convenient to use the Spark Notebook (in Scala), since it is specifically designed for this purpose and provides a few utilities to generate custom Spark contexts or stop the current in-progress job without having to access the Spark UI or run commands from the command line. While that is a nice-to-have feature, I don’t think it makes a huge difference.

The pain comes with dependency management, and in that aspect Scala is a true nightmare! Being a compiled JVM language, all of the dependencies must be available in the classpath, and the kernel needs to be restarted every time a jar changes or a new one comes into the path. Moreover, using dependency management tools like sbt for some reason generates a whole lot of traffic, and all of your dependencies are then packed into a fat jar of hundreds of MBs, which must then be loaded by the JVM executing your back-end code. Python does much better here because everything is specified at runtime: you can simply import code or libraries and the interpreter will automatically resolve them for you without ever restarting your kernel. This aspect is extremely important, especially when separating development in the IDE from exploration in the notebook, calling the APIs of your implemented logic from the source folder. I raised this issue with the Typesafe and Spark Notebook folks, hoping that it can be addressed somehow in a more efficient way.

Built-in visualizations: the Spark Notebook includes a very rudimentary built-in viz library, a simple but acceptable WISP library and a few wrappers around JavaScript technologies such as D3 and Rickshaw. Generally speaking, it can render and wrap any JavaScript library, but in a way that is neither friendly nor intuitive. Python, without any doubt, is superior in its offer and selection of cool and advanced ways of plotting and building interactive dashboards.

Conclusion: Python wins; Scala is not mature enough yet, even though the Spark Notebook does a good job. We haven’t yet considered the recent Apache Zeppelin, which provides some fancy visualization features and supports the concept of a language-agnostic notebook, where each cell can contain any type of code (Scala, Python, SQL, ...) and which is specifically designed to integrate well with Spark.

Final Verdict

Shall I use Scala or Python? The answer is: yes!
Give both of them a try and test for yourself what works better for your specific use case. As a rule of thumb: Python is more analytics-oriented while Scala is more engineering-oriented, but both are great languages for building Data Science applications. The ideal scenario would be to have a data science team confident with both of them and able to swap when needed.

Nonetheless, technology choices are often driven by what people are already comfortable with. Pressure to deliver does not give you enough resources to spend on researching new libraries, reading papers or learning new tools and languages. What most data scientists care about at the end of the day is delivering using whatever means does the job.

If you do have to decide, my view is that if your scope is doing research, then a scripting language is complete enough for experimentation and prototyping. If your goal is to build a product, then you want to consider something more robust that gives you experimentation and, at the same time, delivers a product.

Since the best solution is never black or white, I encourage trying hybrid approaches that can adapt to each project’s specification. A typical scenario could be developing the whole ETL, data cleansing and feature extraction in Scala, then distributing the data over multiple partitions and learning using algorithms written in Python, and finally collecting the results and presenting them in a Jupyter notebook. Moreover, since at that last stage we don’t need Spark anymore, we could even deploy an interactive and stunning dashboard using Shiny by RStudio.

My motto is “the best tool for each task”. Whatever balance you choose, avoid splitting into two teams: Data Science Engineers (the Big Data/Scala guys) and Data Science Analysts (the Python and SQL folks). Aim to build a cross-functional team with the full skill set to operate on the end-to-end development of your product, from the raw data to the manual analysis and from the modelling to a scalable deployment.

I hope this article will be found useful both by experienced data scientists and by enthusiasts who want to start their career in this industry. Please consider that the above comparison is mainly specific to the Apache Spark use case, which I strongly recommend; but in case you are using a different stack and/or language choice, I think many concepts are still valid and can be extended to the broader families of compiled vs. scripting languages.

***

Related links:

https://www.quora.com/Which-one-should-I-learn-Python-or-Scala

https://www.linkedin.com/pulse/build-tool-pain-why-data-science-isnt-going-typed-sam-savage

https://www.quora.com/Is-Scala-a-better-choice-than-Python-for-Apache-Spark

http://stackoverflow.com/questions/32464122/spark-performance-for-scala-vs-python

Scala vs Python

http://datavirtualizer.com/popularity-vs-productivity-vs-performance/

Pro Python:

http://blog.mikiobraun.de/2013/11/how-python-became-the-language-of-choice-for-data-science.html

https://www.quora.com/Why-is-Python-a-language-of-choice-for-data-scientists

I am sorry, but the majority of comparisons of Python with other languages for data science are mainly Python vs. R. I could not find many other pro-Python links comparing it with Scala.

Pro Scala:

https://tech.coursera.org/blog/2014/02/18/why-we-love-scala-at-coursera/

http://blog.cloudera.com/blog/2014/03/why-apache-spark-is-a-crossover-hit-for-data-scientists/

https://www.linkedin.com/pulse/why-i-choose-scala-apache-spark-project-lan-jiang

https://www.linkedin.com/pulse/data-science-technology-choice-case-study-harry-powell

 

 

WordPress Blog Posts Recommender in Spark, Scala and the SparkNotebook

At the Advanced Data Analytics team at Barclays we solved a Kaggle competition as a proof of concept of how to use Spark, Scala and the Spark Notebook to solve a typical machine learning problem end-to-end.
The case study is recommending a sequence of WordPress blog posts that users may like based on their historical likes and blog/post/author characteristics.
Details of the competition are available at https://www.kaggle.com/c/predict-wordpress-likes.

What we want to share is a mix of methodology and tools for:

  • Investigating the data interactively; and
  • Writing quality code in a productive environment; and
  • Embedding the developed functions into executable entry points; and
  • Presenting the results in a clean and visual way; and
  • Meeting the required acceptance criteria.

AKA: delivering a Data Science MVP quickly in a completely Agile way!

The topics covered in this workshop are:

  • DataFrame/RDD conversions and I/O
  • Exploratory Data Analysis (EDA)
  • Scalable Feature Engineering
  • Modelling (MLlib and ML)
  • End-to-end Evaluation
  • Agile Methodology for Data Science

At the end of the workshop the lessons learnt are:

  • Spark, DataFrame, RDDs:
    • DataFrame is great for I/O, schema inference from the sources and when you have flat schemas. Operations start to get more complicated with nested and array fields.
    • RDD gives you the flexibility of doing your ETL using the richness of the Scala framework; on the other hand, you must be careful about optimising your execution plans.
    • Functional programming allowed us to express complex logic with simple and clear code, free of side effects.
    • Map joins with broadcast maps are very efficient, but we need to reduce the broadcast map to its minimum size before broadcasting, e.g. by filtering out the unmatched keys before the join or by capping the size of each value in case of size-variable structures (e.g. hash maps).
  • ML, MLlib
    • ETL and feature engineering is the most time-consuming part; once you have obtained the data you want in vector format, you can convert back to DataFrame and use the ML APIs.
    • ML unfortunately does not wrap everything available in MLlib; sometimes you have to convert back to RDD[LabeledPoint] or RDD[(Double, Vector)] in order to use the MLlib features (e.g. evaluation metrics).
    • The ML pipeline API (Transformer, Estimator, Evaluator) seems cool, but for an MVP it is a premature abstraction.
  • Modelling
    • Do not underestimate simple solutions. In the worst case they serve as a baseline for benchmarking.
    • Even though Logistic Regression was better at classifying true vs. false, the simple model outperformed it when running the end-to-end ranking evaluation.
    • Focus on solving problems rather than on models or algorithms. Many Data Science problems can be solved with counts and divisions, e.g. Naïve Bayes.
    • Logistic Regression “raw scores” are NOT probabilities, treat them carefully!
  • Spark Notebook
    • The Spark Notebook is good for EDA and as an entry point for calling APIs and presenting results.
    • Developing in the notebook is not very productive: the more code you write, the harder it becomes to track and refactor previously developed code.
    • It is better to write code in IntelliJ and then either pack it into a fat jar and import it from the notebook, or copy and paste it every time into a dedicated notebook cell.
    • In order to keep normal notebook cells clean, they should not contain more than 4-5 lines of code or complex logic; they should ideally just contain queries in the form of functional processing and entry points of a logic API.
  • Visualization
    • Plotting in the notebook with the built-in visualization is handy but very rudimentary: it can only visualize 25 points, so we created a pimp to take any Array[(Double, Double)] and interpolate its values down to just 25 points (a sketch of that idea follows after this list).
    • Tip: when you visualize a Scala Map with Double keys in the range 0.0 to 1.0, the take(25) method will already return uniform samples in that range, and since the x-axis is numerical, the built-in visualization will automatically sort it for you.
    • We should probably have investigated advanced libraries like Bokeh or D3 that are already supported in the notebook.
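A rough sketch of such a pimp (hypothetical, using nearest-neighbour sampling over a uniform grid rather than true interpolation):

implicit class PlottablePoints(points: Array[(Double, Double)]) {
  // Reduce the series to at most 25 points so the built-in visualization can render it
  def downsampleTo25: Array[(Double, Double)] = {
    val sorted = points.sortBy(_._1)
    if (sorted.length <= 25) sorted
    else {
      val (minX, maxX) = (sorted.head._1, sorted.last._1)
      (0 until 25).map { i =>
        val x = minX + i * (maxX - minX) / 24
        // pick the closest available point to each position of the uniform grid
        sorted.minBy { case (px, _) => math.abs(px - x) }
      }.toArray
    }
  }
}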

Check the source code on the GitHub page: https://github.com/gm-spacagna/wordpress-posts-recommender.