It’s been almost three and a half years since I published “Database Testing Patterns in Go.” That post described how we at clypd used dependency injection to test functions that access external data sources. It’s a testament to the pattern’s effectiveness that it lasted so long. As our code base has grown over the years, however, we have started to run into some growing pains with the way we were doing things.
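The pattern the post described can be sketched roughly as follows. This is a minimal illustration, not clypd’s actual code: `UserStore`, `fakeStore`, and `Greeting` are hypothetical names, and the real data access layer is not shown here. The idea is simply that business logic depends on an interface, so tests can inject an in-memory fake instead of a live database.

```go
package main

import "fmt"

// User is a hypothetical domain type used for illustration.
type User struct {
	ID   int
	Name string
}

// UserStore stands in for the data access layer. Production code
// would satisfy it with database-backed queries; tests inject a fake.
type UserStore interface {
	GetUser(id int) (User, error)
}

// fakeStore is the test double injected in place of a real database.
type fakeStore struct {
	users map[int]User
}

func (f fakeStore) GetUser(id int) (User, error) {
	u, ok := f.users[id]
	if !ok {
		return User{}, fmt.Errorf("user %d not found", id)
	}
	return u, nil
}

// Greeting depends only on the interface, so it can be exercised
// without touching an external data source.
func Greeting(s UserStore, id int) (string, error) {
	u, err := s.GetUser(id)
	if err != nil {
		return "", err
	}
	return "Hello, " + u.Name, nil
}

func main() {
	store := fakeStore{users: map[int]User{1: {ID: 1, Name: "Ada"}}}
	msg, _ := Greeting(store, 1)
	fmt.Println(msg) // prints "Hello, Ada"
}
```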
In this special Q+A, we talk with Jingsong Cui, clypd’s Head of Media Analytics. We discuss the importance of forecasting, clypd’s approach to it, and what the future holds for forecasting.
What’s your background?
I have been working with data and models throughout my career. In graduate school, I studied economics and used econometric models to analyze social and economic data. After earning my Ph.D., my first job was at a marketing research company called @Futures, where I led a team of statisticians building forecasting models for pharmaceutical clients. The company was acquired by Nielsen in 2010. Within Nielsen, I worked with several industry verticals, across Buy (which focuses on consumer spending) and Watch (which focuses on media consumption). I have always enjoyed doing applied research and using data to solve real business problems.
What do you do when your parents pull you out of school to go to Disney World but you have a project due the next day?
Mathematicians tackled four industrial problems, including a programmatic TV problem proposed by clypd, during the 32nd Annual Mathematical Problems in Industry (MPI) Workshop, held June 12–17, 2016 at Duke University. This year, the week-long workshop attracted 84 mathematicians from universities, industry, and national laboratories across Canada, the UK, and the US.
At clypd, almost every feature we build is data-driven. It is therefore important for the data access layer of our Go platform to be powerful, yet easy to build upon, maintain, and debug. Today we dive into the design of the clypd data access layer, the frameworks we chose, and how we augmented them.
A huge benefit provided by Go is simplicity throughout the clypd stack. Dependencies, particularly third-party libraries, are often opaque in other programming languages due to varying code style, source code that is difficult to find, or abstraction layers that are challenging to see through. The source code for any library is just a click away in Go. Design idioms found in dependency libraries are usually similar to the ones we use, which ensures that the entire system is easy for our team to comprehend.
At the beginning of 2014, I was thinking about how we might create a meaningful review process. I am not a big fan of only yearly reviews. I don’t know about you, but I have no idea what I was doing a year ago and often, what was important at this time last year is not relevant now. So, how do we make a better process that helps drive career growth?
I already had a one-on-one process established where we have the “how’s it going?” discussion. I always start the conversation with “Are you having fun? Are you learning stuff? Are you building awesomeness?” The response I always get is “yes, yes, yes.” Then I ask, “What did you learn? What part was fun and what was not? What kept you from building awesomeness?” And that’s where the interesting conversations start. For me, learning, fun and building are intertwined.
Almost a year ago, we blogged about our reasoning and methodology for choosing Go as our next generation platform here at clypd. A year is a long time, both in software technology and in the lifetime of a startup. Since paper is the traditional gift for a one-year anniversary, it seems appropriate to write down what we have learned so far.
When we posted “Getting to Go,” we included a list of the selection factors we used to evaluate potential platforms. The list includes
Thorough testing continues to be a key tenet of software development at clypd. Quality is paramount, but shipping code quickly requires us to be efficient in how we search for bugs. We’ve found innovative ways to unit test individual Go packages of our programmatic television ad platform as it has grown in its capabilities. The construction of a similarly efficient mechanism to test large portions of the whole system has taken our test automation to the next level.
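The post doesn’t show its test mechanism here, but the idiomatic shape of a Go unit test is table-driven: a slice of input/expected pairs checked in a loop. The sketch below is a generic illustration under that assumption; `normalizeDaypart` is a hypothetical helper, not a function from the clypd platform.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeDaypart is a hypothetical helper standing in for the kind
// of small, pure function a package might expose.
func normalizeDaypart(s string) string {
	return strings.ToLower(strings.TrimSpace(s))
}

func main() {
	// Table-driven cases: each row pairs an input with its expected output.
	cases := []struct {
		in, want string
	}{
		{"  Primetime ", "primetime"},
		{"LATE NIGHT", "late night"},
	}
	for _, c := range cases {
		if got := normalizeDaypart(c.in); got != c.want {
			fmt.Printf("normalizeDaypart(%q) = %q, want %q\n", c.in, got, c.want)
			return
		}
	}
	fmt.Println("all cases passed")
}
```

In a real test file the loop body would call `t.Errorf` from the standard `testing` package; the table structure is what keeps adding new cases cheap.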
In our last post, we introduced the importance of data management platforms (DMPs) in the television industry. This month, we’ll discuss the importance of set-top box (STB) data in DMPs and programmatic TV.
As we know, data is a core tenet of programmatic TV. The layering of data sources on top of the media activity is essential in understanding the audience composition for the best data-enhanced decisioning.
In the linear TV world, of the many data sources available, perhaps none is more important than the second-by-second viewership activity from the set-top box. STB data can be used to measure all viewing activity, including activity that is not measured by Nielsen. This long-tail inventory, consumed primarily on cable networks, accounts for more than 40% of TV viewership. The challenge lies in the differing rules, technologies, and protocols that must be reconciled to use STB data in a consistent, coherent manner.
In the ad tech world, there are a lot of acronyms. In this series, we’ll discuss the DMP – or Data Management Platform. In the digital landscape, DMPs are fairly established and standardized, but in the television world, they operate differently, simply because television and digital media are different.
A DMP is a centralized computing system for collecting, integrating, and managing large sets of structured and unstructured data from disparate sources. First, data goes in; then it is manipulated, normalized, and prepared in an easy-to-use form that makes it actionable. In television, the design, functionality, and utility of a DMP are different than in other environments.