Emerging Data Science Architecture Patterns

Over the past month I’ve taken two edX courses to brush up on Enterprise Data Integration Architecture: one on Active Directory Identity Management in Azure and the other on deploying data application interface services with C#.

What does that have to do with Data Science? Everything. At least that is my strong suspicion. This article explores industry trends towards “Enterprise” data science and how we can build our architectures to support rapidly evolving data science solutions.

The Big Ass Script “Architecture”

Data Science has a problem: we typically put all our data connection, transformation, and visualization code in one big-ass script (“BAS”). The BAS development pattern is great for one superstar data scientist to mess around with a bunch of data and make an awesome model. But there are a few limitations:

  • It’s hard to reuse in future analysis
  • No one else understands how it works
  • It’s difficult to debug

We have workarounds for the limitations, but copy-pasting code, knowledge bases, and blind faith in results only get us so far.
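To make the anti-pattern concrete, here is a minimal sketch of what a BAS tends to look like. The file path, column names, and model are invented for illustration; the point is that connection, transformation, modeling, and visualization all live in one script.

```python
# big_ass_script.py -- everything in one file: connection, transformation,
# modeling, and visualization. (Illustrative sketch; the path, columns,
# and model are made up.)
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

# data connection: a hard-coded path only this laptop has
df = pd.read_csv("C:/Users/me/Desktop/export_final_v3.csv")

# transformation: ad hoc cleaning mixed with feature engineering
df = df.dropna(subset=["age", "income"])
df["is_high_income"] = (df["income"] > 75_000).astype(int)

# modeling: trained and scored in place, nothing saved or versioned
model = LogisticRegression().fit(df[["age"]], df["is_high_income"])
df["score"] = model.predict_proba(df[["age"]])[:, 1]

# visualization: a plot that exists only on this screen
df.plot.scatter(x="age", y="score")
plt.show()
```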

Notebooks like Jupyter and Zeppelin only make the BAS easier to create; they are practically IDEs for generating BAS solutions.

What about integrating a BAS into a production data pipeline? Simple answer: you can’t. It has to be refactored, which is why we have a standard data science development pattern (step 2 is sketched in code after the list):

  1. Data Science Data Mess-around
  2. Refactor data transformations into a more useful framework
  3. Build out a thick ETL dumping the scored data into a database
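As a sketch of what step 2 looks like in practice, here are the earlier transformations pulled into functions with explicit inputs and outputs so a downstream ETL can call them. The module, function, and column names are hypothetical.

```python
# transforms.py -- the mess-around logic refactored into reusable,
# testable functions (hypothetical names; the point is explicit
# inputs and outputs rather than one long script).
import pandas as pd


def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Drop records the model cannot score."""
    return df.dropna(subset=["age", "income"])


def add_features(df: pd.DataFrame) -> pd.DataFrame:
    """Feature engineering kept separate from cleaning."""
    out = df.copy()
    out["is_high_income"] = (out["income"] > 75_000).astype(int)
    return out


def score(df: pd.DataFrame, model) -> pd.DataFrame:
    """Apply a trained model; the ETL decides where the result lands."""
    out = df.copy()
    out["score"] = model.predict_proba(out[["age"]])[:, 1]
    return out
```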

The Decoupled Architecture

“But we solved this problem over 30 years ago,” says literally every enterprise data architect. Yes, you did, but we Data Scientists thought we knew better and decided we could do it our own way.

We can’t.

So now we have to tuck our tails between our legs and learn some basic architecture and development patterns. Luckily for us, Data Architects and Data Science Architects are popularizing frameworks and tools to make this transition much easier.

Let’s look at the basics.

Multi-Tiered Architecture

https://en.wikipedia.org/wiki/Multitier_architecture

Who would have thought separating the data, data analysis (filtering, scoring, etc.), and reporting (visuals, dashboards, etc.) was important? Again, everyone except analysts and data scientists…

The second someone downloads a CSV to their laptop or starts messing around in Excel, any semblance of data governance is thrown out the window. To make matters worse, updating the model means we have to repeat whatever crazy process was used to get that CSV in the first place.

A tightly coupled analysis and reporting stack is just as bad. Now data scientists are forced to rerun the analysis for every enhancement or bug fix requested. It can also lead to analysts becoming front-end developers.

So the multi-tiered architecture is not only good for managing hardware resources, it is good for managing human resources as well.

Is this the same as Model View Controller?

Great question, and the answer is debatable, but personally I look at the tiered architecture as a physical separation and MVC as a logical separation. MVC is a code development pattern which isolates the data (“Model”), the logic that makes persistent modifications (“Controller”), and the application’s interface (“View”) from one another.

https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller

Meaning, in a modern PaaS environment, effective MVC design is equivalent to multi-tier design, as the data storage, processing, and visualization functions are handled by different services.
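As a toy illustration of that separation, here is a minimal MVC sketch in Python. The class names are my own and it isn’t tied to any particular framework; the point is that the view never mutates state and the controller is the only path for persistent modifications.

```python
# A toy MVC separation (hypothetical classes, not tied to any framework).

class ScoreModel:
    """Model: owns the data and the persisted state."""
    def __init__(self):
        self._scores = {}

    def get(self, user_id: str) -> float:
        return self._scores.get(user_id, 0.0)

    def set(self, user_id: str, score: float) -> None:
        self._scores[user_id] = score


class ScoreView:
    """View: only renders, never mutates."""
    @staticmethod
    def render(user_id: str, score: float) -> str:
        return f"user {user_id}: score={score:.2f}"


class ScoreController:
    """Controller: the only place state changes are allowed to happen."""
    def __init__(self, model: ScoreModel, view: ScoreView):
        self.model, self.view = model, view

    def update_and_show(self, user_id: str, score: float) -> str:
        self.model.set(user_id, score)
        return self.view.render(user_id, self.model.get(user_id))


if __name__ == "__main__":
    controller = ScoreController(ScoreModel(), ScoreView())
    print(controller.update_and_show("u123", 0.87))
```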

However, MVC doesn’t solve everything. Even when we decouple operations, we may not decouple how the operations themselves interact. In other words, we may develop solutions with extremely efficient code execution, but the code is buried in a larger method; or maybe we want to modify a model, but it’s used across multiple solutions.

The Microservice Architecture

The microservice pattern is emerging as the data science architecture of choice. To oversimplify, it focuses on creating standard interfaces for all data extraction, manipulation, and visualization operations, without the need for complicated middle-layer service bus applications to handle communication.

A standard interface means we have a consistent format for data going in and data coming out of the “service.”

For a good example of a service, think of the Facebook or GitHub graph APIs. You typically make several calls to various service endpoints as you get new information on each hop. (E.g. userId -> postId -> text.) We know the format of data to provide to the endpoint and what data to expect in return, so we can reuse the endpoints in multiple applications. Furthermore, modifications to the underlying code probably happen all the time without anyone ever noticing.
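A rough sketch of that hop-by-hop pattern with the requests library is below. The base URL, paths, and field names are invented for illustration and are not the real Facebook or GitHub endpoints.

```python
# Sketch of hop-by-hop calls against a hypothetical graph-style API.
# The base URL, paths, and field names are invented for illustration.
import requests

BASE = "https://api.example.com"


def latest_post_text(user_id: str, token: str) -> str:
    headers = {"Authorization": f"Bearer {token}"}

    # hop 1: userId -> postId
    posts = requests.get(f"{BASE}/users/{user_id}/posts", headers=headers, timeout=10)
    posts.raise_for_status()
    post_id = posts.json()["data"][0]["id"]

    # hop 2: postId -> text
    post = requests.get(f"{BASE}/posts/{post_id}", headers=headers, timeout=10)
    post.raise_for_status()
    return post.json()["text"]
```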

How to Implement Data Science Microservice Architecture

Well, microservices sound well and good, but how on earth are analysts and most data scientists actually going to develop like this? Am I seriously proposing they learn how to develop and deploy reusable APIs like back-end developers or engineers??

Well, kind of, yeah…

But it’s not as bad as it sounds. While purely coded solutions exist, for example Flask for Python or plumber for R, they require a lot of administration and development to run in a fully integrated microservices architecture. There is no preconfigured security, no management layer, and no high availability, not to mention it’s an entirely new coding paradigm.
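For a sense of what “purely coded” means, here is roughly the smallest Flask scoring service you could stand up. The endpoint path, payload shape, and scoring function are my own choices; notice it has none of the things listed above — no security, no management layer, no high availability.

```python
# A minimal Flask scoring service -- roughly what "standing up an API
# yourself" means. Endpoint path and payload shape are my own choices.
from flask import Flask, jsonify, request

app = Flask(__name__)


def score(features: dict) -> float:
    """Stand-in for a real trained model."""
    return 0.5 + 0.01 * features.get("age", 0)


@app.route("/score", methods=["POST"])
def score_endpoint():
    payload = request.get_json(force=True)      # data in: JSON features
    return jsonify({"score": score(payload)})   # data out: JSON score


if __name__ == "__main__":
    # No auth, no HA, no management layer -- exactly the gaps noted above.
    app.run(port=5000)
```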

So without a lot of IT infrastructure and support around them, data scientists standing up APIs they coded themselves is probably not going to happen.

Alternatively, PaaS providers like Azure and AWS have other ways for data scientists to begin deploying services.

Azure Machine Learning offers a clean-looking GUI for data manipulation and preconfigured statistical operations, as well as a way for raw R and Python code to be placed into modules for reuse within a group.

More importantly, any workflow created in it can be deployed as a RESTful endpoint, either for triggering a model scoring workflow or for requesting data produced by the workflow in real time.
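For a sense of how that deployed endpoint gets consumed, here is a hedged sketch of calling an Azure ML request-response endpoint over plain HTTP. The URL, key, and exact payload schema vary by workspace and service version, so treat the shapes below as illustrative rather than exact.

```python
# Sketch of calling a deployed Azure ML request-response endpoint with
# plain HTTP. The URL, API key, and exact payload schema vary by
# workspace and service version -- the shapes below are illustrative.
import requests

url = "https://<region>.services.azureml.net/workspaces/<ws>/services/<id>/execute"  # placeholder
api_key = "<your-api-key>"  # placeholder

payload = {
    "Inputs": {"input1": {"ColumnNames": ["age"], "Values": [[42]]}},
    "GlobalParameters": {},
}

resp = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # scored rows come back as JSON
```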

Azure ML is probably the fastest and most cost-effective way to start producing reusable data science microservices, particularly if your company is already using Azure Cloud Services.

AWS SageMaker is another relatively easy way to build and deploy models as services. However, SageMaker is targeted more toward data science developers, so most of its functionality is only accessible through code. Furthermore, the recommended data manipulation component is a separate service, AWS Glue.

As of right now, SageMaker is without a doubt more powerful; for example, you can load custom containers into it. Which one is right for you depends on your existing cloud environment and the coding abilities of your data science team.
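As a small taste of the code-first style, here is a sketch of invoking an already deployed SageMaker endpoint with boto3. The endpoint name is a placeholder, and the content type depends on how the model container was built.

```python
# Sketch of invoking an already-deployed SageMaker endpoint via boto3.
# The endpoint name and payload format are placeholders; the content
# type depends on how the model container was built.
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-model-endpoint",   # placeholder name
    ContentType="text/csv",             # depends on your container
    Body="42,75000",                    # one row of features as CSV
)

print(response["Body"].read().decode("utf-8"))  # the model's prediction
```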

These are only two examples of how we are moving into a new world of Microservices. It is likely Data Scientists will soon be part of development teams, and data science architecture will be needed to support this new enterprise data development pattern.
