Mapping DataFrame to a typed RDD

I have recently published a blog post on DZone, “Making the Impossible Possible with Tachyon: Accelerate Spark Jobs from Hours to Seconds”, which describes the workflow and methodology that we use at Barclays to load data from the raw source (a relational database) into the Data Science cluster (Spark). One of the components described there is the mapping from a DataFrame to a typed RDD of a custom case class.

There are a bunch of reasons why you might want to make your DataFrame typed; the following is a summary:

[Image: DataFrame vs. RDD comparison table]

Examples of when it is more convenient to use DataFrame vs. RDD can be found in this workshop: WordPress Blog Posts Recommender.

In this tutorial I have pulled out from the Tachyon blog post the part related to the conversion from DataFrame to RDD. The inverse conversion, from RDD to DataFrame, is straightforward and can be found in the above-mentioned recommender workshop.

Typed Case Class Mapping

After we have constructed the DataFrame collection from the raw source, we can map it into an RDD of our ad-hoc case classes. Since a DataFrame is also an RDD of type org.apache.spark.sql.Row, it already provides the map/flatMap methods.

If there are no null values in any row, we could use pattern matching to extract each column from the Row object:

import org.apache.spark.sql.Row

case class MyClass(a: Long, b: String, c: Int, d: String, e: String)

dataframe.map {
  case Row(a: java.math.BigDecimal, b: String, c: Int, _: String, d: java.sql.Date,
           e: java.sql.Date, _: java.sql.Timestamp, _: java.sql.Timestamp,
           _: java.math.BigDecimal, _: String) =>
    MyClass(a = a.longValue(), b = b, c = c, d = d.toString, e = e.toString)
}

This approach will fail for null values, because the typed patterns inside the Row extractor require each field to be non-null and of the declared type. You can discard all the rows containing null values by doing:

dataframe.na.drop()

But that will drop records even if the null fields are not the ones we use in our case class.
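
If the null values only need to be filtered on the columns that actually feed the case class, you can restrict the drop to those columns. A minimal sketch (the column names here are placeholders for the actual source columns):

// Drop only the rows where the columns we actually map contain nulls
// ("col_a", "col_b" and "col_c" are hypothetical column names).
val cleaned = dataframe.na.drop(Seq("col_a", "col_b", "col_c"))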

If you want to handle nulls using Scala Options, you can turn the Row object into a List and then use the following pattern:

case class MyClass(a: Long, b: String, c: Option[Int], d: String, e: String)

dataframe.map(_.toSeq.toList match {
  case List(a: java.math.BigDecimal, b: String, c, _: String, d: java.sql.Date,
            e: java.sql.Date, _, _, _, _) =>
    // c may be null: Option(c) turns it into None, otherwise we unbox the Integer
    MyClass(a = a.longValue(), b = b, c = Option(c).map(_.asInstanceOf[Int]),
            d = d.toString, e = e.toString)
})

If the columns you are interested in are sparse, you can fetch them individually, either by index or by column name:

row.getAs[SQLPrimitiveType](columnIndex: Int)
row.getAs[SQLPrimitiveType](columnName: String)

For the mapping between SQL primitive types and their corresponding Java/Scala classes, see: https://spark.apache.org/docs/latest/api/java/org/apache/spark/sql/Row.html.
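
As a concrete sketch of this per-column approach (assuming, hypothetically, that the source columns are named “a”, “b” and “c” and have the types shown in the earlier patterns):

dataframe.map { row =>
  // Fetch only the columns we need, by name (access by position works the same way);
  // wrapping nullable columns in Option avoids accidental null unboxing.
  val a = row.getAs[java.math.BigDecimal]("a").longValue
  val b = row.getAs[String]("b")
  val c = Option(row.getAs[java.lang.Integer]("c")).map(_.intValue)
  (a, b, c)
}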

N.B. The described procedure does not take advantage of the recently released Dataset API (http://spark.apache.org/docs/1.6.0/sql-programming-guide.html#datasets), which should automate the whole process of converting between DataFrames and RDDs. At the time of writing we had not yet tested Datasets. There are also open-source projects like Frameless (https://github.com/adelbertc/frameless), and an ongoing discussion on its gitter channel about how to leverage the awesome Shapeless (https://github.com/milessabin/shapeless) library to make Spark more functional and compile-time type-safe.
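
For reference, this is roughly what the Dataset conversion looks like in Spark 1.6 (untested by us; it also assumes the DataFrame columns have already been renamed and cast to match the case class fields):

import sqlContext.implicits._

case class MyClass(a: Long, b: String, c: Option[Int], d: String, e: String)

// as[T] derives an Encoder for the case class and resolves the columns by name
val dataset: org.apache.spark.sql.Dataset[MyClass] = dataframe.as[MyClass]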

Similar articles:

Type safety on Spark Dataframes: http://www.51zero.com/blog/2016/2/24/type-safety-on-spark-dataframes-part-1


Logical Data Warehouse for Data Science: map raw data directly from source to Spark in-memory with Tachyon

Common problems for large organizations dealing with Big Data and Data Science applications are:

  1. Data stored in non-scalable infrastructure for analysis and processing
  2. Data governance and security policies

1. Data often resides in central data warehouses and RDBMSs on which many legacy applications and analysts depend.
Data Scientists, instead, cannot build their models or perform exploratory analysis using SQL queries alone. They need the data to be available in a scalable, programmatic and reactive stack such as Hadoop and Apache Spark, and to develop their logic using languages such as Python, R or Scala (for a comparison of Python and Scala for Spark, see this post: 6 points to compare Python and Scala for Data Science using Apache Spark).
2. Nevertheless, data cannot just be transferred (in technical terms, sqoop-ed) to a Hadoop cluster without incurring tedious bureaucracy, ingestion inconsistencies and strict policies. In big corporations that translates into at least a month to decide which tables are interesting and a few more months to write the ETL logic, move the data and test its consistency.

At Barclays we developed a stack that logically maps the raw data from the central data warehouse into Spark and uses Tachyon to keep the data in memory for long-term availability. With this stack we are able to iterate fast, with immediate data availability from a scalable Big Data cluster, by skipping the data ingestion process while still complying with all of the data policies.

Tachyon was the key enabling technology for us.

Our workflow iteration time decreased from hours to seconds. Tachyon enabled something that was impossible before.

You can find the original article published on DZone in collaboration with Gene Pang, Software Engineer at Tachyon Nexus and Haoyuan Li, CEO of Tachyon Nexus:
Making the Impossible Possible with Tachyon: Accelerate Spark Jobs from Hours to Seconds


6 points to compare Python and Scala for Data Science using Apache Spark

Apache Spark is a distributed computation framework that simplifies and speeds up the data crunching and analytics workflow for data scientists and engineers working on large datasets. It offers a unified interface for prototyping as well as for building production-quality applications, which makes it particularly suitable for an agile approach. I personally believe that Spark will inevitably become the de-facto Big Data framework for Machine Learning and Data Science.

Despite the different opinions about Spark, let’s assume that a data science team wants to start adopting it as its main technology. The choice of programming language is often a dilemma. Shall we build our models in Python or in Scala? Shall we run the exploratory analysis using the iPython notebook or iScala?
A common understanding is that Python is the scientific language and Scala is an engineering language, seen as a better replacement for Java. Whilst there is truth in that, it does not always have to be the case.

Since the two languages have already been compared in detail elsewhere, I would like to restrict the comparison to the particular use case of building data products leveraging Apache Spark in an agile workflow.

In particular, I can identify 6 important aspects that a Data Science programming language in this context should provide:

  1. Productivity
  2. Safe refactoring
  3. Spark integration
  4. Out-of-the-box Machine Learning/Statistics packages
  5. Documentation / Community
  6. Interactive Exploratory Analysis and built-in visualization tools

Why only Scala and Python?
Apache Spark comes with 4 APIs: Scala, Java, Python and, recently, R. The reason why I am only considering “PyScala” is that each of them mostly provides similar features to one of the other two languages (Scala relative to Java and Python relative to R) with, in my opinion, a better overall score. Moreover, R is not a general-purpose language and its API is still in an experimental phase.

1. Productivity

Even though coding close to the bare metal always produces the most optimized results, premature optimization is known to be the root of all evil. Especially in the initial MVP phase we want to achieve high productivity with the fewest possible lines of code, possibly guided by a smart IDE.

Python is a very simple language to learn and highly productive for getting things done quickly from day 1. Scala requires a little more thinking and abstraction due to its high-level functional features, but as soon as you get familiar with that, your productivity increases dramatically. Code conciseness is quite comparable; both can be very concise depending on how good you are at coding. Reading Python is more explicit: it shows you step by step what the code executes and the state of each variable. Scala, on the other hand, focuses more on describing what you are trying to achieve as the final result, hiding most of the implementation details and execution order. But remember: with great power comes great responsibility. Whilst pattern matching is a very cool way to extract variables, advanced features like implicits or custom DSLs can be confusing to the non-expert user.

In terms of IDEs, both IntelliJ and PyCharm are smart and productive environments. Nevertheless, Scala can take advantage of type information and compile-time cross-references to provide some extra functionality more naturally and without ambiguity, unlike scripting languages. Just to name a few: finding classes/methods by name in the project and linked dependencies, finding usages, auto-completion based on type compatibility, and development-time errors or warnings.
On the other hand, all of those compile-time features come at a cost: IntelliJ, sbt and all of the related tools are very slow and memory/CPU consuming. You shouldn’t be surprised if 2GB of your RAM is allocated just to keep multiple Scala projects open in parallel. Python is more lightweight in this respect.

Conclusion: Both score very well here. My recommendation is: if you are developing simple, intuitive logic then Python does the job greatly; if you want to do something more complex then it may be worth investing in learning and writing functional code in Scala.

2. Safe Refactoring

This requirement mainly comes with the agile methodology: we want to safely change the requirements of our code as we perform data explorations and adjust them at each iteration. Very commonly you first write some code with associated tests, and shortly afterwards the tests, implementations and APIs are broken by a change. Every time we perform a refactoring we face the risk of introducing bugs and silently breaking the previous logic.

Both languages require tests (unit tests, integration tests, property-based tests, etc.) in order to be refactored safely. Scala, being a compiled language, has an advantage there, but I am not going to argue the pros and cons of compiled vs. scripting languages. I will skip that debate, but at least for me there are some clear benefits to having typed code.

Conclusion: Scala very well, Python average.

3. Spark Integration

The majority of the time and resources are generally spent on loading, cleaning and transforming data and extracting the most informative bits out of it. For that task, what is better than expressing your domain-specific logic as a combination of functions, without bothering about how it is lazily executed? No wonder Big Data is turning more and more functional.

You would now expect me to say that Scala does better since it is natively functional. Actually, in this scenario the big difference is made by Spark rather than by the programming language. Even though Python is not 100% functional (you could get closer via external libraries), it wraps the Spark API, which is indeed functional.

The implementation of the single map or reduce functions can then be either functional or not, but at least the main logic is expressed as a pipeline of transformations and operations over the raw data, and the execution plan is defined by the computation framework.

You still have to use the different Spark APIs smartly in order to make your code scalable and optimized, but this task is the same in both cases. If we consider code execution performance, we all know that JVM-compiled code runs faster than Python code, but Spark is moving towards language-agnostic abstractions like DataFrame, which optimize most of the work for you and produce comparable performance results.

Thus, the solution is “use Spark”. That said (and independently of its functional nature), Scala supports Spark natively, which comes in particularly handy when performing low-level tuning, optimizations and debugging. If you have used the Spark framework you are well familiar with its serialization exceptions. Since the Python API is a wrapper around the JVM implementation, you have less control over what is enclosed in your functions. Moreover, some new features in recent Spark releases may only be available in Scala before being ported to Python.

Conclusion: Scala is better when it comes to engineering; the two are equivalent in terms of Spark integration and functionality.

4. Out-of-the-box machine learning/statistics packages

When you marry a language, you marry the whole family. And Python has much more to bring to the table when it comes to out-of-the-box packages implementing most of the standard procedures and models you generally find in the literature and/or broadly adopted in the industry. Scala is still way behind in that respect, yet it can benefit from Java library compatibility and from the community developing distributed versions of some of the popular machine learning algorithms directly on top of Spark (see MLlib, H2O Sparkling Water, DeepLearning4j…). A little note regarding MLlib: from my experience its implementation is a bit hacky and often hard to modify or extend due to a mediocre design and nonsensical restrictions such as private fields and classes.

Regarding Java compatibility, honestly I don’t see any Java framework anywhere close to what Python provides today with its amazing scikit-learn and related libraries. On the other hand, many of those Python implementations only work locally (unless you use some bootstrapping/bagging + model ensembling technique, see https://cornercases.wordpress.com/2013/10/23/example-python-machine-learning-algorithm-on-spark/) and their out-of-the-box implementations lack strong scalability when it comes to distributed algorithms. Scala, on the other hand, provides only a few implementations, but ones that are already scalable and production-ready.

Nevertheless, do not forget that many big data problems can be reduced to small data problems, especially after accurate feature selection, filtering and aggregation. It might make sense in some scenarios to crunch your large dataset into a vector space that can perfectly fit in memory and take advantage of the richness and advanced algorithms available in Python.

Conclusion: It really depends on the size of your data. Prefer Python whenever the data can fit in memory, but also keep in mind the requirements of your project: is it just a prototype, or is it something you want to deploy/maintain in a production system? Python offers a complete selection of already-implemented packages that can satisfy most needs. Scala only provides the basics, but in case of “productionisation” it is the better engineering choice.

5. Documentation / Community

If we compare the two plain languages (without their external libraries) in terms of community size, then Python belongs to tier 1 while Scala sits right after in tier 2; see http://readwrite.com/2010/12/10/ranking-programming-languages. Practically speaking, it means both of them have enough tutorials and StackOverflow answers to cover the majority of use cases and how-to’s.

If we consider the documentation of the machine learning and statistics frameworks, the Python data science community is more mature, and in fact you can find many tutorials and examples of how to solve a lot of problems and run cool analyses using most of the Python libraries.

Unfortunately, we cannot say the same for Scala. The ML and MLlib documentation is very poor, and the only way to really understand how those libraries work is by reading the code. The same goes for some other open-source libraries I found on GitHub.

Conclusion:
Both have good and comparable communities in terms of software development. When we consider the data science community and cool data science projects, Python is hard to beat.

6. Interactive Exploratory Analysis and built-in visualization tools

iPython is one of the greatest tools ever invented in the scientific world; a year ago it would have been, without doubt, the Oscar winner. Today we can find many implementations of notebooks inspired by the iPython notebook, available for any language. Jupyter, the evolution of iPython, supports different kernels, and iScala re-implements the notebook on top of an Akka/Play RESTful service. If you only consider opening a web-based notebook and starting to write and interact with some code, I think they are very similar.

If we consider using the notebook to interact with Spark, it may be a little more useful to use the Spark Notebook (in Scala), since it is specifically designed for this purpose and provides a few utilities to generate custom Spark contexts or to stop the current in-progress job without having to access the Spark UI or run commands from the command line. While that is a nice-to-have feature, I don’t think it makes a huge difference.

The pain comes with dependency management, and in that respect Scala is a true nightmare! Being a compiled JVM language, all of the dependencies must be available in the classpath, and the kernel needs to be restarted every time a jar changes or a new one is added to the path. Moreover, using dependency management tools like sbt for some reason generates a whole lot of traffic, and all of your dependencies end up packed into a fat jar of hundreds of MBs which must then be loaded by the JVM executing your back-end code. Python does much better here because everything is resolved at runtime: you can simply import code or libraries and the interpreter will automatically resolve them for you without ever restarting your kernel. This aspect is extremely important, especially when separating development in the IDE from exploration in the notebook, where you call the APIs of the logic implemented in your source folder. I raised this issue with the Typesafe and Spark Notebook folks, hoping that it can be addressed in a more efficient way.

Built-in visualizations: the Spark Notebook includes a very rudimentary built-in viz library, the simple but acceptable WISP library, and a few wrappers around JavaScript technologies such as D3 and Rickshaw. Generally speaking, it can render and wrap any JavaScript library, but in a way that is neither friendly nor intuitive. Python, without any doubt, is superior in its offering of cool and advanced ways of plotting and building interactive dashboards.

Conclusion: Python wins; Scala is not mature enough yet, even though the Spark Notebook does a good job. We haven’t yet considered the recent Apache Zeppelin, which provides some fancy visualization features, supports the concept of a language-agnostic notebook where each cell can contain any type of code (Scala, Python, SQL…), and is specifically designed to integrate well with Spark.

Final Verdict

Shall I use Scala or Python? The answer is: Yes!
Give both of them a try and test for yourself what works better for your specific use case. As a rule of thumb: Python is more analytics-oriented while Scala is more engineering-oriented, but both are great languages for building Data Science applications. The ideal scenario would be a data science team comfortable with both, able to swap when needed.

Nonetheless, technology choices are often driven by what people are already comfortable with. Pressure to deliver does not leave you enough resources to spend on researching new libraries, reading papers or learning new tools and languages. What most data scientists care about, at the end of the day, is delivering using whatever means does the job.

If you do have to decide, my view is that if your scope is research, then a scripting language is complete enough for experimentation and prototyping. If your goal is to build a product, then you want to consider something more robust that allows experimentation and at the same time delivers a product.

Since the best solution is never black or white, I encourage trying hybrid approaches that can adapt to each project’s specification. A typical scenario could be developing the whole ETL, data cleansing and feature extraction in Scala, then distributing the data over multiple partitions and learning with algorithms written in Python, then collecting the results and presenting them in a Jupyter notebook. Moreover, since at the last stage we don’t need Spark anymore, why not even deploy an interactive and stunning dashboard using Shiny by RStudio?

My motto is “the best tool for each task”. Whatever balance you choose, avoid splitting into two teams: Data Science Engineers (the Big Data/Scala guys) and Data Science Analysts (the Python and SQL folks). Aim to build a cross-functional team with the full skillset to operate on the full end-to-end development of your product, from the raw data to the manual analysis and from the modelling to a scalable deployment.

I hope this article proves useful both for experienced data scientists and for enthusiasts who want to start their career in this industry. Please consider that the above comparison is mainly specific to the Apache Spark use case, which I strongly recommend, but if you are using a different stack and/or language choice, I think many concepts are still valid and can be extended to the broader families of compiled vs. scripting languages.

***

Related links:

https://www.quora.com/Which-one-should-I-learn-Python-or-Scala

https://www.linkedin.com/pulse/build-tool-pain-why-data-science-isnt-going-typed-sam-savage

https://www.quora.com/Is-Scala-a-better-choice-than-Python-for-Apache-Spark

http://stackoverflow.com/questions/32464122/spark-performance-for-scala-vs-python

http://statrgy.com/2015/05/05/scala-vs-python/

http://datavirtualizer.com/popularity-vs-productivity-vs-performance/

Pro Python:

http://blog.mikiobraun.de/2013/11/how-python-became-the-language-of-choice-for-data-science.html

https://www.quora.com/Why-is-Python-a-language-of-choice-for-data-scientists

I am sorry, but the majority of comparisons of Python with other languages for data science are Python vs. R; I could not find many other pro-Python links comparing it with Scala.

Pro Scala:

https://tech.coursera.org/blog/2014/02/18/why-we-love-scala-at-coursera/

http://blog.cloudera.com/blog/2014/03/why-apache-spark-is-a-crossover-hit-for-data-scientists/

https://www.linkedin.com/pulse/why-i-choose-scala-apache-spark-project-lan-jiang

https://www.linkedin.com/pulse/data-science-technology-choice-case-study-harry-powell


WordPress Blog Posts Recommender in Spark, Scala and the SparkNotebook

At the Advanced Data Analytics team at Barclays we solved a Kaggle competition as a proof of concept of how to use Spark, Scala and the Spark Notebook to solve a typical machine learning problem end-to-end.
The case study is recommending a sequence of WordPress blog posts that users may like, based on their historical likes and blog/post/author characteristics.
Details of the competition are available at https://www.kaggle.com/c/predict-wordpress-likes.

What we want to share is a mix of methodology and tools for:

  • Investigating the data interactively; and
  • Writing quality code in a productive environment; and
  • Embedding the developed functions into executable entry points; and
  • Presenting the results in a clean and visual way; and
  • Meeting the required acceptance criteria.

AKA: delivering a Data Science MVP quickly, in a completely Agile way!

The topics covered in this workshop are:

  • DataFrame/RDD conversions and I/O
  • Exploratory Data Analysis (EDA)
  • Scalable Feature Engineering
  • Modelling (MLlib and ML)
  • End-to-end Evaluation
  • Agile Methodology for Data Science

At the end of the workshop the lessons learnt are:

  • Spark, DataFrames, RDDs:
    • DataFrame is great for I/O and schema inference from the sources, and when you have flat schemas. Operations start to get more complicated with nested and array fields.
    • RDD gives you the flexibility of doing your ETL using the richness of the Scala framework; on the other hand, you must be careful about optimizing your execution plans.
    • Functional programming allowed us to express complex logic with simple, clear code, free of side effects.
    • Map joins with broadcast maps are very efficient, but we need to make sure to reduce the size of the broadcast map to a minimum before broadcasting, e.g. by filtering out the unmatched keys before the join or by capping the size of each value in the case of variable-size structures (e.g. hash maps). A sketch of this pattern is shown after this list.
  • ML, MLlib
    • ETL and feature engineering are the most time-consuming parts; once you have obtained the data you want in vector format, you can convert back to DataFrame and use the ML APIs.
    • ML unfortunately does not wrap everything available in MLlib; sometimes you have to convert back to RDD[LabeledPoint] or RDD[(Double, Vector)] in order to use MLlib features (e.g. evaluation metrics).
    • The ML pipeline API (Transformer, Estimator, Evaluator) seems cool, but for an MVP it is a premature abstraction.
  • Modelling
    • Do not underestimate simple solutions. In the worst case they serve as a baseline for benchmarking.
    • Even though the Logistic Regression was better at classifying true or false, the simple model outperformed it when running the end-to-end ranking evaluation.
    • Focus on solving problems rather than on models or algorithms. Many Data Science problems can be solved with counts and divisions, e.g. Naïve Bayes.
    • Logistic Regression “raw scores” are NOT probabilities; treat them carefully!
  • Spark Notebook
    • The Spark Notebook is good for EDA and as an entry point for calling APIs and presenting results.
    • Developing in the notebook is not very productive: the more code you write, the harder it becomes to track and refactor previously developed code.
    • It is better to write code in IntelliJ and then either pack it into a fat jar and import it from the notebook, or copy and paste it every time into a dedicated notebook cell.
    • In order to keep normal notebook cells clean, they should not contain more than 4/5 lines of code or complex logic; ideally they should just contain queries in the form of functional processing and entry points of a logic API.
  • Visualization
    • Plotting in the notebook with the built-in visualization is handy but very rudimentary: it can only visualize 25 points, so we created a pimp-my-library extension that takes any Array[(Double, Double)] and interpolates its values down to 25 points.
    • Tip: when you visualize a Scala Map with Double keys in the range 0.0 to 1.0, the take(25) method already returns uniform samples in that range, and since the x-axis is numerical, the built-in visualization will automatically sort it for you.
    • We should probably have investigated advanced libraries like Bokeh or D3, which are already supported in the Notebook.
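
As a reference for the broadcast-map join lesson above, here is a minimal self-contained sketch of the pattern (all names and data are illustrative):

import org.apache.spark.{SparkConf, SparkContext}

object BroadcastMapJoinSketch {

  final case class Event(userId: Long, score: Double)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("broadcast-map-join").setMaster("local[*]"))

    val events = sc.parallelize(Seq(Event(1L, 0.3), Event(2L, 0.9), Event(42L, 0.1)))
    val users  = sc.parallelize(Seq((1L, "alice"), (2L, "bob"), (3L, "carol")))

    // Keep the broadcast map as small as possible: only keep keys that can actually match.
    val neededKeys = events.map(_.userId).distinct().collect().toSet
    val lookup     = sc.broadcast(users.filter { case (id, _) => neededKeys(id) }.collectAsMap())

    // Map-side join: no shuffle, unmatched events are simply dropped.
    val joined = events.flatMap(e => lookup.value.get(e.userId).map(name => (name, e.score)))
    joined.collect().foreach(println)

    sc.stop()
  }
}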

Check the source code on the GitHub page: https://github.com/gm-spacagna/wordpress-posts-recommender.


The complete 18 steps to start a new Agile Data Science project

Introduction

It is a very common pattern in software development to start a new project in a highly uncertain and chaotic scenario, surrounded by plenty of ideas of what features we might want to implement. In Data Science the problem is amplified even further by its nondeterministic nature. At the start of a Data Science project we not only don't know what we are trying to implement, we also don't know how to implement it, nor under which circumstances it would be possible and correct.

This initial lack of structure often manifests itself in an initial spike of unnecessary development and, later in the project, in the form of technical debt and unexplained inconsistencies. You might spend a lot of resources before finding out that the delivered solution simply does not fit the business nature of the problem.

In Agile Data Science the goal should not be producing charts and reports or hacky scripts calling some machine learning library. In Agile Data Science we want to iteratively build production-quality applications that solve the true business needs by extracting hidden knowledge from the data.

This is the final summarising post of the Agile Data Science Iteration 0 series:

The Complete Checklist

  1. Rigorous definition of the business problem we are attempting to solve and why it is important
  2. Define your objective acceptance criteria
  3. Develop the validation framework (ergo, the acceptance test)
  4. Stop thinking, start googling!
  5. Gather initial dataset into proper infrastructure
  6. Initial Exploratory Data Analysis (EDA) targeted to understanding the underlying dataset
  7. Define and quickly develop the simplest solution to the problem
  8. Release/demo first basic solution
  9. Research of background for ways to improve the basic solution
  10. Gather additional data into proper infrastructure (if required)
  11. Ad-hoc Exploratory Data Analysis (EDA)
  12. Propose better solution minimising potential risks and marginal gain
  13. Develop the Data Sanity check
  14. Define the Data Types of your application domain
  15. Develop the ETL and output the normalised data into a proper infrastructure
  16. Clearly state all of the assumptions/hypotheses and document whether they have been verified or not and how they can be verified
  17. Develop the automated Hypothesis-Driven Analysis (HDA) consisting of hypothesis validation + statistics summary, on top of the normalised data
  18. Analyse the output of the automated HDA to adjust/revise the proposed solution

At the end of Iteration 0 you have a very solid starting point for your project, and you can now follow the typical Agile development cycle, whether you prefer SCRUM, Kanban, a mix of the two, or your own ad-hoc custom methodology.

Regardless of whether you want to use a strict or flexible workflow, keep in mind that the main difference from Agile iterations for software development is that a ticket is typically broad and open-ended. You should not be surprised if the majority of your tickets get split into multiple sub-tickets after the initial investigation of the problem. You should allow subtasks to be created even after the sprint planning. In some cases you may prefer to mark them as blockers and re-scope them into the next sprint; in other cases you may want to allow them to affect the current sprint.
What is important is that you should start implementing production-quality code only when the requirements and the acceptance test are well defined. In Data Science this is not very likely to happen all the time. Every time you are presented with an open problem to investigate and solve, you should try to break it into research/analysis and development subtasks.

What not to do?

  • Do not start any development without having done prior detailed research/investigation
  • Do not just deliver analysis code in notebooks; after your investigation, move the code to production-quality standards
  • Do not blindly trust external libraries or APIs if you don't know exactly what they do and return; run some tests if needed
  • Do not generate manual reports of your findings until the experiments are reproducible and automated
  • Do not deploy any model if all of the assumptions have not been stated and verified
  • Do not be too lazy to learn better technologies and methodologies!

To conclude, in this series of posts I just wanted to share some of my experience of starting new Data Science projects and of common problems that I have seen addressed in a confused and chaotic way. I hope that by following these guidelines you can reduce the technical debt of the project and the risk of working for several months without ever delivering a correct and working solution.

More details of the Agile cycle for Data Science applications, and in particular how to time-box open-ended questions, will be covered in another post. Stay tuned and get ready to run!

***

The Hypothesis-Driven Analysis << prev 


Agile Data Science Iteration 0: The Hypothesis-Driven Analysis

This is the fifth post of the Agile Data Science Iteration 0 series:

Previously

What we have achieved so far (see previous posts above):

  1. Rigorous definition of the business problem we are attempting to solve and why it is important
  2. Define your objective acceptance criteria
  3. Develop the validation framework (ergo, the acceptance test)
  4. Stop thinking, start googling!
  5. Gather initial dataset into proper infrastructure
  6. Initial Exploratory Data Analysis (EDA) targeted to understanding the underlying dataset
  7. Define and quickly develop the simplest solution to the problem
  8. Release/demo first basic solution
  9. Research of background for ways to improve the basic solution
  10. Gather additional data into proper infrastructure (if required)
  11. Ad-hoc Exploratory Data Analysis (EDA)
  12. Propose better solution minimising potential risks and marginal gain
  13. Develop the Data Sanity check
  14. Define the Data Types of your application domain
  15. Develop the ETL and output the normalised data into a proper infrastructure

At this stage you have already modelled some entities of your application logic. You know the raw data well and you have already produced a normalised and cleaned version of your dataset. Your data is now sanitised and stored in a proper analytical infrastructure. Ask yourself: what assumptions have I made so far, and what assumptions am I going to make? Agile Data Science, even though it is production- and engineering-oriented, is not just software engineering. Agile Data Science is Science, thus it must comply with the scientific method.

The Oxford dictionary defines “scientific method” as:

“a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.”

And this is by no means different in the Data Science methodology.

The Hypothesis-Driven Analysis

16. Clearly state all of the assumptions/hypotheses and document whether they have been verified or not and how they can be verified

In Data Science we implement models and applications in a highly non-deterministic context where we often make assumptions to simplify the problem. Assumptions are generally made based on intuition, common sense, previous experience, domain knowledge or sometimes simply because the model requires them.

Even though they might seem appropriate, they are dangerous! Unverified assumptions can easily lead to inconsistencies or, even worse, silently produce wrong results.

We can’t get rid of all of our assumptions and build an assumption-free model, but we should try to document them, verify them as soon as possible and track them over time. It is fine to have not-yet-fully-verified assumptions at this early stage, but they should not be forgotten, and their verification should be planned in the immediately following iterations.

Every time we present any result we should clearly state all of the assumptions that have been made and whether they have been verified or not.

17. Develop the automated Hypothesis-Driven Analysis (HDA) consisting of hypothesis validation + statistics summary, on top of the normalised data

What if the underlying dataset or the observed environment has changed? Are our hypotheses still valid?
It is extremely important to develop an automated framework for running tests and experiments to validate all of the existing hypotheses.
We cannot be confident about our deliverables if we are not sure that our hypotheses are correct, and if anything has changed we must be able to find out immediately.

Yet, it is often hard to have tests with a boolean outcome: Success or Failure. It is good practice, though, to have at least an automated job that calculates some key descriptive statistics that help us understand the underlying dataset and guide the validation of our hypotheses. Think carefully about which measures would help you understand whether the proposed solution makes sense or not.
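
A minimal sketch of such a statistics-summary job (column names and output path are illustrative), using Spark's describe to compute count/mean/stddev/min/max over the columns our hypotheses rely on:

import org.apache.spark.sql.DataFrame

def summarise(normalised: DataFrame): Unit = {
  // Key descriptive statistics of the columns our hypotheses rely on.
  val summary = normalised.describe("amount", "balance", "age")
  summary.show()
  // Persist the summary so that drifts can be spotted between runs.
  summary.write.mode("overwrite").json("/warehouse/hda/statistics-summary")
}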

18. Analyse the output of the automated HDA to adjust/revise the proposed solution

The output of your HDA framework is your best friend in helping you go back and make the first changes to the proposed solution. You want to account for what the real phenomena are, regardless of what your original thoughts were.
If you manage to get all of your hypotheses right at the first shot, think twice!

Now you have a very detailed picture of what your solution proposal is and what all of the requirements are. You have gained a deep understanding of every detail you will need during the development and evaluation of your model. You have already built all of the tools to support you in that. You can feel safe trying out whatever you want, because you know that your tests will check its validity. You have now reduced to a minimum the risks of this project before even starting to implement the first line of code for your model.

Align with your stakeholders and product owners and define the initial roadmap and expectations you want to meet for the first MVP.

***

A summary of the complete “Agile Data Science Iteration 0” series will be published soon, stay tuned.
Meanwhile, why not share or comment below?

The ETL << prev | next >> The Final Checklist


What is Spark? Six reasons why CIOs should find out (and one why they shouldn’t) – 02 Nov 2015 – Computing Analysis

via What is Spark? Six reasons why CIOs should find out (and one why they shouldn’t) – 02 Nov 2015 – Computing Analysis.


‘Companies will stop hiring data scientists when they realise that the majority bring no value’ says data scientist – Computing

via ‘Companies will stop hiring data scientists when they realise that the majority bring no value’ says data scientist – Computing.


Agile Data Science Iteration 0: The ETL

This is the fourth post of the Agile Data Science Iteration 0 series:

Previously

What we have achieved so far (see previous posts above):

  1. Rigorous definition of the business problem we are attempting to solve and why it is important
  2. Define your objective acceptance criteria
  3. Develop the validation framework (ergo, the acceptance test)
  4. Stop thinking, start googling!
  5. Gather initial dataset into proper infrastructure
  6. Initial Exploratory Data Analysis (EDA) targeted to understanding the underlying dataset
  7. Define and quickly develop the simplest solution to the problem
  8. Release/demo first basic solution
  9. Research of background for ways to improve the basic solution
  10. Gather additional data into proper infrastructure (if required)
  11. Ad-hoc Exploratory Data Analysis (EDA)
  12. Propose better solution minimising potential risks and marginal gain

At this stage you already have a benchmark reference using the simple solution. You have done your research on how to improve it and meet the business requirements. You have a good overview of the initial dataset used for solving this problem. You can now start the engineering stage and produce the right dataset according to your application domain and the required data quality.

The ETL

13. Develop the Data Sanity check

There is no dataset on Earth that does not require a sanity check. Filter out all the malformed, invalid and irrelevant records. Sometimes a cleansing step is also worthwhile: instead of throwing everything away, you may try to sanitise the bad records.

Make sure to repeat this process every time you run your model with a different dataset. Make sure this process is:

  • automated
  • logging error messages
  • stopping the execution of your job, in case you hand your application over to someone else who might use it with the wrong dataset

As a data scientist you don’t want to be blamed for having implemented a non-working model simply because someone else used it in the wrong way.
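
A minimal sketch of what such an automated check could look like (the record type, validation rules and threshold are purely illustrative):

import org.apache.log4j.Logger
import org.apache.spark.rdd.RDD

final case class Transaction(id: Long, amount: Double, currency: String)

object DataSanityCheck {
  // Filters out malformed records, logs how many were dropped and stops the job
  // if the dataset looks wrong, rather than silently producing bad results.
  def run(records: RDD[Transaction], maxInvalidRatio: Double = 0.01): RDD[Transaction] = {
    val valid   = records.filter(t => t.id > 0 && !t.amount.isNaN && t.currency.nonEmpty).cache()
    val total   = records.count()
    val invalid = total - valid.count()
    Logger.getLogger("DataSanityCheck").warn(s"Sanity check: $invalid invalid records out of $total")
    require(total == 0 || invalid.toDouble / total <= maxInvalidRatio,
      s"Too many invalid records ($invalid/$total): aborting the job")
    valid
  }
}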

14. Define the Data Types of your application domain

Your data types are the first-class citizens of your application. Define them carefully, accounting for how you would like to model the data in your domain rather than how the data currently looks. It might be worth considering optional fields, structured fields (for example a postcode might be represented as a string or as a triple of district code, sector code and unit code), identifiers may require a Long instead of an Int, categorical values might be encoded using enumerations, timestamps could be stored as epoch time, and so on. Avoid duplicated information in your data types; use primary keys to join your data collections later on.

Premature optimisations are discouraged, but as a rule of thumb try to keep your types light. That is, do not use strings for representing numbers or any other expensive data structure. If you need to combine multiple fields into a single identifier, use tuples instead of concatenating them into a single expensive object. It might make no difference now, but refactoring the code to accommodate a different data type is one of the most expensive and painful tasks, and expensive types will cause scalability issues pretty soon. There will always be time for fixing it later, but if you have to make a choice now and it requires the same effort, why not do it well?
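
A small sketch of domain data types following these guidelines (the fields are purely illustrative):

// A structured field instead of a raw string.
final case class Postcode(district: String, sector: String, unit: String)

// A categorical value encoded as an enumeration-like ADT rather than free text.
sealed trait AccountType
case object Retail extends AccountType
case object Corporate extends AccountType

final case class Customer(
  id: Long,                   // identifiers as Long rather than Int (or, worse, String)
  postcode: Option[Postcode], // optional, structured field
  accountType: AccountType,
  createdAt: Long             // timestamp stored as epoch millis
)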

15. Develop the ETL and output the normalised data into a proper infrastructure

The goal of your ETL is now to produce the desired output according to the previously defined data types, so that you don't need any additional pre-processing in your application and all of the requirements on data format and quality are met.

If the raw data doesn’t match the desired output format, this is where you want to do all of your transformations.

Any ETL job should always be finalised with persistence to some data storage. It is discouraged to do the ETL as an on-the-fly pre-processing step of your application. The reason is that you want to be able to quickly repeat all of your analyses on top of the normalised data rather than re-run the ETL every single time.
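
A minimal sketch of this final persistence step (the type, helper name and output path are hypothetical): the normalised, typed collection is written once to a columnar store, and every subsequent analysis reads from there.

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SQLContext

// A flat, DataFrame-friendly type for the normalised output (hypothetical fields).
final case class NormalisedCustomer(id: Long, district: Option[String], balance: Double)

def persistNormalised(sc: SparkContext, normalised: RDD[NormalisedCustomer]): Unit = {
  val sqlContext = new SQLContext(sc)
  import sqlContext.implicits._
  // Write once; downstream analyses read from storage instead of re-running the ETL.
  normalised.toDF().write.parquet("/warehouse/normalised/customers")
}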

You can now forget about the original raw data and move your focus onto the high-quality dataset meeting your application requirements. Time to develop the model? Not yet. How many assumptions have you made so far, and how many are you going to make in your model? Some data assumptions can be verified during the data check, but what about your formulated hypotheses? The goal of delivering a data product is solving the business problem in its real context; unverified assumptions can easily invalidate your solution.

***

Details of how to perform the Hypothesis-Driven Analysis will follow in the next post of the “Agile Data Science Iteration 0” series, stay tuned.
Meanwhile, why not share or comment below?

The Simple Solution << prev | next >> The Hypothesis-Driven Analysis


Agile Data Science Iteration 0: The Simple Solution

This is the third post of the Agile Data Science Iteration 0 series:

Previously

What we have achieved so far (see previous posts above):

  1. Rigorous definition of the business problem we are attempting to solve and why it is important
  2. Define your objective acceptance criteria
  3. Develop the validation framework (ergo, the acceptance test)
  4. Stop thinking, start googling!
  5. Gather initial dataset into proper infrastructure
  6. Initial Exploratory Data Analysis (EDA) targeted to understanding the underlying dataset

At this stage you should have a clear statement of what problem you are trying to solve and how you can objectively measure the quality of any possible solution, regardless of what the final implementation will be. You have an initial background on the state of the art and an understanding of what the data looks like. You can now implement your first simple solution to the problem.

The Basic Solution

7. Define and quickly develop the simplest solution to the problem

The challenge here is: “Are you able to implement a basic solution that solves the end-to-end goal (not necessarily with the required quality) in a few days?”

Before trying to think of very scalable algorithms or advanced modelling techniques, have you thought about a simple rules classifier? What if you could easily predict whether a user is about to default on his bank loan by simply looking at the difference between how much he has earned and spent in the past 3 months, and come up with a rule like “if that amount is less than X then the user is very likely to default”?
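
As a sketch, such a rules baseline can literally be a one-line function (the fields and threshold are purely illustrative):

// "If earnings minus spending over the last 3 months is below X,
// the customer is likely to default."
final case class CustomerActivity(earnedLast3Months: Double, spentLast3Months: Double)

def likelyToDefault(c: CustomerActivity, threshold: Double): Boolean =
  (c.earnedLast3Months - c.spentLast3Months) < threshold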

Maybe you spent 3 days of analysis and 2 days of development and you solved your problem before even really starting it! Or maybe it is not good enough, but now you have a baseline against which, at each iteration, you can weigh the risk of continuing the project by comparing it with what could be achieved in just 5 days of work.

8. Release/demo first basic solution

What could be more agile than releasing and demoing the basic solution? Never underestimate the value of feedback, and how your mind can focus on the next big thing once everything done so far is checkpointed and reviewed.

Now you have a quick and simple solution that tries to solve the business goal, even if it might not be accurate yet. It could be, but is not necessarily, your MVP; that depends on whether the acceptance criteria are fulfilled or not.
What is important is that you have spent just a few days and you have something to deliver and demo. This will give you the following benefits:

  • trust from your stakeholders that you can deliver quickly
  • a first set of feedback
  • inspiration for further improvements
  • a baseline for comparison

You have all of the knowledge to start preparing your solution proposal.

The Proposal Preparation

9. Research of background for ways to improve the basic solution

Now you know clearly what goal you want to achieve, what minimum requirements to meet, what the data looks like and what basic solution to compare against. This is the right time for doing some deeper research into better ways of solving the problem, using more advanced techniques, domain-specific knowledge and/or additional datasets.

10. Gather additional data into proper infrastructure (if required)

Like step 5, but only if additional data is required by the current proposal.

11. Ad-hoc Exploratory Data Analysis (EDA)

At this stage the EDA explicitly targets extracting the knowledge needed for the improved solution to be proposed.

12. Propose better solution minimising potential risks and marginal gain

Because you now have a comparison baseline, you should prefer quantifying the incremental benefit of your model rather than its absolute evaluation, and try to trade off the additional complexity against the potential value gain.

At this stage you should have most of the requirements defined. You have probably changed your mind several times as you researched the problem and re-scoped it into smaller problems. You now know what has to be implemented for your first MVP.

***

Details of how to structure your ETL will follow in the next post of the “Agile Data Science Iteration 0” series, stay tuned.
Meanwhile, why not share or comment below?

The Initial Investigation << prev | next >> The ETL
