
Monday, July 20, 2015

A Brief History of Neural Networks



In computer science, and specifically machine learning, programmers have been trying to simulate the behavior of the brain since the late 1940s. They model the brain's fundamental pattern as loosely connected nodes capable of learning and modifying their behavior as information is processed.


In 1948, Alan Turing's paper Intelligent Machinery called these loosely connected nodes "unorganized machines" and compared them to an infant's brain.
[Image: Neural network used in a BioWall]



In the 1950s Frank Rosenblatt developed the Perceptron, a binary classification algorithm and one of the first implementations of a neural network. Programmers soon realized that neural networks were effective only with two or more layers, but the processing power of the time made multi-layer networks impractical.
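For a sense of how simple Rosenblatt's idea is, here is a minimal sketch of the perceptron learning rule in Python (the function name and the AND-gate data at the end are just for illustration): whenever the current weights misclassify an example, they are nudged toward it.

# A minimal sketch of the perceptron learning rule: weights shift
# toward any example the current model misclassifies.
def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """samples: lists of numeric features; labels: 1 or -1."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            if (1 if activation >= 0 else -1) != y:
                weights = [w + lr * y * xi for w, xi in zip(weights, x)]
                bias += lr * y
    return weights, bias

# A single layer can learn the logical AND function, which is
# linearly separable.
w, b = train_perceptron([[0, 0], [0, 1], [1, 0], [1, 1]], [-1, -1, -1, 1])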




Machines improved, and by the 1980s neural networks gathered interest again, but in the 1990s they were surpassed in utility by simpler classification algorithms such as Support Vector Machines and linear classifiers.
[Image: Samples from two classes; samples on the margin between them are called support vectors.]


This century, neural networks have made strides again with the invention of Deep Learning. Geoffrey E. Hinton of the University of Toronto improved classification results by training each layer of a neural network separately. As a result, many classification competitions are now won using deep neural networks, often running on GPUs.



Scilab, Python's Theano, Julia's Mocha, and Caffe are all focused on deep learning and neural networks. Watch these projects evolve as deep learning gathers momentum.




Sunday, May 3, 2015

Putting the Train in AppTrain



In late 2005, while first learning Ruby and Rails, I founded the AppTrain project, a web interface to the early Rails generators. The "Train" was a vehicle rolling along on top of Rails. As Rails grew in popularity, we began helping build Rails teams, and the train took on a new meaning: we were training developers in Rails and related technologies.





And now, years later, still excited about the future of technology, we're doing plenty of data science programming and machine learning. With machine learning, specifically supervised learning, it's important to build a good training set of data. A training set represents a relationship between a result and the list of data points that correspond with that result. The result is also referred to as a signal.


In supervised learning, different algorithms can be trained on these training sets. When an algorithm is being trained, it is looking for a function that best explains the signal.


Imagine a small data set like this:

[x, y]
[4, 2]
[6, 3]
[2, 1]


The first number on each line is our result, or signal.  The second is the input data that leads to that signal.  Do you see a function that could predict the value of the signal x given a new value for y?


[x, 4]


Visually, we immediately see that x is always greater than y (x > y). Our minds are searching for a function that will predict x. How much greater is x than y? You'll notice pretty quickly that it's exactly double.

x = 2y
x = 2(4)
x = 8


x represents the signal. y is the training data.  


Data scientists have developed many algorithms that can run through numbers much the way our minds do, only far faster. Imagine a training set not with three rows, but with 100 or 1,000. It would be pretty boring to read through them all to make sure x was always double y, but it's a great job for a computer.
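As a rough sketch (the 1,000-row data set here is made up to match the toy example above), a computer can both verify the pattern and recover the multiplier itself with a least-squares fit:

# Synthetic training set: 1,000 rows where x is always double y.
rows = [(2 * y, y) for y in range(1, 1001)]

# The rote check: confirm x == 2y on every row.
assert all(x == 2 * y for x, y in rows)

# The learning step: fit x = m * y by least squares, finding m from
# the data alone.
m = sum(x * y for x, y in rows) / sum(y * y for x, y in rows)
print(m)  # -> 2.0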


Complicated Data Sets


Now imagine the training set has not just 2 variables (columns), x and y, but 10, or 100.


Here's data from a training set in the Restaurant Revenue prediction competition at Kaggle.  





In this case the result (or signal) we're trying to predict is the last column in each row, the revenue.


The Python programming language is a favorite of data science programmers, and scikit-learn is its main machine learning library. It contains learning algorithms designed to be trained on data sets like this restaurant revenue data. Each algorithm in scikit-learn looks for functions that predict the signals found in training data.
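Here is a minimal sketch of that workflow. The feature rows and revenue figures below are synthetic stand-ins; the real Kaggle data needs preprocessing first, since columns like opening dates and city names have to become numbers.

from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins for a few numeric restaurant columns, with the
# revenue signal in y.
X = [[1.0, 4.0, 2.0], [2.0, 5.0, 3.0], [3.0, 4.5, 2.5],
     [1.5, 4.2, 2.2], [2.5, 5.5, 3.1], [3.5, 4.8, 2.9]]
y = [120.0, 180.0, 200.0, 140.0, 190.0, 220.0]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)  # "training" = searching for a function that explains y
print(model.predict([[2.0, 4.6, 2.4]]))  # predicted revenue for a new row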

To solve Kaggle problems like the restaurant revenue problem, competitors typically first try one of the single models found in scikit-learn. On the discussion board for the competition, people mention using support vector machines (SVM), gradient boosting machines (GBM), and random forests. But competition winners ultimately blend techniques, or even devise their own algorithms.
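The simplest blend is just an average. Here is a rough sketch combining the three model families mentioned above, again on synthetic stand-in data:

from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

# Synthetic stand-ins; real competition data replaces these.
X_train = [[1.0, 4.0], [2.0, 5.0], [3.0, 4.5], [1.5, 4.2], [2.5, 5.5]]
y_train = [120.0, 180.0, 200.0, 140.0, 190.0]
X_test = [[2.0, 4.6]]

models = [SVR(),
          GradientBoostingRegressor(random_state=0),
          RandomForestRegressor(n_estimators=100, random_state=0)]

# Equal-weight average of the three predictions; winners tune the
# weights or stack another model on top.
blended = sum(m.fit(X_train, y_train).predict(X_test) for m in models) / len(models)
print(blended)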

Meanwhile, the train cruises forward at AppTrain. Today we're building training sets and training algorithms. Want to learn more about machine learning? Attend our tech talks at Làm việc mạng in Saigon this summer.

