
Friday, May 8, 2015

Data Science with Python


At the last Tech Talk Tuesday we took an overview of Python's data science packages.







The key packages for numerical computing are NumPy, SciPy, and scikit-learn. The documentation for Python is great, and makes presentations like this easy. These packages are loaded with code samples, even for complex concepts like grid search and cross-validation. The machine learning package, scikit-learn, also has exercises below the code samples. Doing the exercises reinforces the concepts, and is great preparation for solving problems like the ones in Kaggle competitions.
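To give a flavor of what those code samples look like, here is a minimal sketch of grid search with cross-validation in scikit-learn. The dataset and parameter values are just illustrative, and the module paths assume a recent scikit-learn release:

```python
# A minimal sketch of grid search with cross-validation in scikit-learn,
# using the bundled iris dataset; the parameter grid is illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate parameter values; GridSearchCV tries every combination
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}

# cv=5 scores each combination with 5-fold cross-validation
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)
print(search.best_score_)
```

The point of the exercise: instead of hand-picking parameters, you let the search score every combination and report the best one.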







We also demoed IPython Notebooks, a fantastic way to create live data-analysis documents.




Sunday, May 3, 2015

Putting the Train in AppTrain



In late 2005, when first learning Ruby and Rails, I founded the AppTrain project, a web interface to the early Rails generators. The Train represented a vehicle rolling along on top of Rails. As Rails grew in popularity, we began helping build Rails teams, and the train took on a new meaning: we were training developers in Rails and related technologies.





And now, years later, still excited about the future of technology, we're doing plenty of data science programming and machine learning. With machine learning, specifically supervised learning, it's important to build a good training set of data. A training set represents a relationship between a result and a list of data points that correspond with that result. The result is also referred to as a signal.


In supervised learning, different algorithms can be trained on these training sets. When an algorithm is being trained, it is searching for a function that best explains the signal.


Imagine a small data set like this:

[x, y]
[4, 2]
[6, 3]
[2, 1]


The first number on each line is our result, or signal.  The second is the input data that leads to that signal.  Do you see a function that could predict the value of the signal x given a new value for y?


[x, 4]


Visually we immediately see that x is always greater than y (x > y). Our minds search for a function that will predict x. How much greater is x than y? You'll notice pretty quickly that it's exactly double.

x = 2y
x = 2(4)
x = 8


x represents the signal. y is the training data.  


Data scientists have developed many algorithms that can run through numbers similar to the way our minds do, but much faster. Imagine a training set with not three rows but 100 or 1,000. It would be pretty boring to read through them to make sure x was always double y, but it's a great job for a computer.
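The toy data set above can be handed to a computer in just a few lines. This sketch uses scikit-learn's LinearRegression to "discover" the x = 2y rule from those three rows:

```python
# Fit a linear model to the toy training set: the signal x is always
# exactly double the input y, so the learned coefficient should be 2.
import numpy as np
from sklearn.linear_model import LinearRegression

y_inputs = np.array([[2], [3], [1]])  # training data (the inputs)
x_signal = np.array([4, 6, 2])        # results (the signal to predict)

model = LinearRegression()
model.fit(y_inputs, x_signal)

print(model.coef_[0])        # close to 2.0
print(model.predict([[4]]))  # close to [8.0]
```

Given a new input of y = 4, the fitted model predicts x = 8, the same answer our eyes found.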


Complicated Data Sets


Now imagine the training set has not just 2 variables (columns), x and y, but 10, or 100.


Here's data from a training set in the Restaurant Revenue Prediction competition at Kaggle.





In this case the result (or signal) we're trying to predict is the last column in each row, the revenue.


The Python programming language is a favorite of data science programmers, and scikit-learn is its machine learning library. It contains learning algorithms designed to be trained on data sets like this restaurant revenue data. Each algorithm in scikit-learn looks for functions that predict the signals found in training data.

To solve Kaggle problems like the restaurant revenue problem, competitors typically first try one of the single models found in scikit-learn. On the discussion board for the competition, people mention using support vector machines (SVM), gradient boosting machines (GBM), and random forests. But competition winners ultimately blend techniques, or even devise their own algorithms.
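Trying those single models side by side is only a few lines of scikit-learn. This sketch scores each one with cross-validation on a synthetic regression problem (the real Kaggle data would need feature preprocessing first, so the data here stands in for it):

```python
# Compare several single scikit-learn models with cross-validation
# on a synthetic regression problem standing in for the Kaggle data.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=10, noise=10, random_state=0)

models = {
    "SVM": SVR(),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # R^2 scores by default
    print(name, scores.mean())
```

Whichever model scores best becomes the baseline to beat, whether by tuning its parameters or by blending it with the others.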

Meanwhile, the train cruises forward at AppTrain. Today we're building training sets and training algorithms. Want to learn more about machine learning? Attend our tech talks at Làm việc mạng in Saigon this summer.

