Friday, January 28, 2011

Predictive Analytics World Early-bird ends Monday

The early-bird special for Predictive Analytics World / San Francisco ends January 31, 2011; it saves you $200 on the conference rate and $100 on any workshop, including my Hands-On Predictive Analytics using SAS Enterprise Miner workshop on March 17th.

More details on the 7 workshops can be found here.

Hope to see you there!

Thursday, January 27, 2011

Do analytics books sell?

Kevin Hillstrom has a fascinating post on brief, technical ebooks (Amazon Singles) sold on Amazon here: Kevin Hillstrom: MineThatData: Amazon Singles. His takeaways: interesting content is what sells; length doesn't matter, though these ebooks are typically fewer than 50 pages; and price doesn't matter.

Should I jump in? Should you?

Saturday, January 22, 2011

Doing Data Mining Out of Order

I like the CRISP-DM process model for data mining, teach from it, and use it on my projects. I commend it to practitioners and managers routinely as an aid during any data mining project. However, while its process sequence is generally the one I follow, I don't always follow it exactly. Data mining often requires more creativity and "art" in re-working the data than we would like; it would be very nice if we could create a checklist and just run through it on every project! Unfortunately, data doesn't always cooperate in this way, so we need to adapt to the specific data problems in order to prepare the data properly.

For example, on a current financial risk project I am working on, the customer is assembling data for predictive analytics for the first time. The customer is data savvy but new to predictive analytics, so we've had to iterate several times on how the data is pulled and rolled up out of the database. In particular, the target variable has had to be cleaned up because of historic coding anomalies.
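To give a flavor of that kind of target cleanup, here is a minimal sketch, assuming the raw target arrives as inconsistently coded text labels. The file name, column names, and codes are illustrative only, not the project's actual data:

```python
# Hypothetical sketch: audit a raw target column for historic coding
# anomalies, then collapse inconsistent labels into a clean binary flag.
import pandas as pd

df = pd.read_csv("risk_data.csv")

# Tabulate the raw codes to surface odd or legacy values
print(df["target_code"].value_counts(dropna=False))

# Collapse known variants into a single 0/1 target; anything unmapped
# becomes NaN and gets flagged for manual review
clean_map = {"RISK": 1, "Risky": 1, "R": 1, "OK": 0, "ok": 0, "Clean": 0}
df["target"] = df["target_code"].map(clean_map)
print(df["target"].isna().sum(), "rows with unmapped codes to review")
```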

One primary question to resolve for this project is the all-too-common debate over the right level of aggregation: do we use transactional data, even though some customers have many transactions and some have few, or do we roll the data up to the customer level to build customer risk models? (A transaction-based model will score each transaction for risk, whereas a customer-based model will score, daily, the risk associated with each customer given the new transactions that have been added.) There are advantages and disadvantages to both, but in this case we are building a customer-centric risk model for reasons that make sense in this particular business context.
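As a rough illustration of what the customer-level rollup looks like, here is a minimal pandas sketch, not the project's actual code; the column names (customer_id, amount, is_risky, txn_date) and file name are hypothetical:

```python
# Hypothetical sketch: roll transaction-level data up to one row per
# customer so a customer-centric risk model can be trained.
import pandas as pd

transactions = pd.read_csv("transactions.csv", parse_dates=["txn_date"])

customer_level = (
    transactions
    .groupby("customer_id")
    .agg(
        n_txns=("amount", "size"),        # how many transactions
        total_amount=("amount", "sum"),   # overall exposure
        avg_amount=("amount", "mean"),
        last_txn=("txn_date", "max"),
        # flag the customer as risky if any of their transactions was risky
        target=("is_risky", "max"),
    )
    .reset_index()
)
```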

Back to the CRISP-DM process and why it can be advantageous to deviate from it. In this project, we jumped from Business Understanding and the beginnings of Data Understanding straight to Modeling. In this case I would call it "modeling" (small 'm'), because we weren't building models to predict risk, but rather to understand the target variable better. We were not sure exactly how clean the data was to begin with, especially the definition of the target variable, because no one had ever looked at the data in aggregate before, only on a customer-by-customer basis. By building models and seeing some fields that predict the target variable "too well", we have been able to identify historic data inconsistencies and miscoding.
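Here is a minimal sketch of that kind of audit "modeling", assuming a binary target and scikit-learn; the file name, column names, and AUC threshold are illustrative, not the project's actual code:

```python
# Hypothetical sketch: fit a quick, shallow tree on each field by itself and
# flag any single field that separates the target almost perfectly -- that
# usually signals leakage or historic miscoding rather than real signal.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customer_level.csv")
target = df["target"]
features = df.drop(columns=["customer_id", "target"]).select_dtypes("number")

for col in features.columns:
    tree = DecisionTreeClassifier(max_depth=2, random_state=0)
    tree.fit(features[[col]], target)
    auc = roc_auc_score(target, tree.predict_proba(features[[col]])[:, 1])
    if auc > 0.95:  # suspiciously strong single-field predictor
        print(f"{col}: AUC={auc:.3f} -- check for leakage or miscoding")
```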

Now that the target variable is better defined, I'm going back to the Data Understanding and Data Preparation stages to complete them properly; this is changing how the data will be prepped, in addition to modifying the definition of the target variable. It's also much more enjoyable to build models than to do data prep, so for me this was a "win-win" anyway!