# 2 Dimensional Perceptron Computer

by Chad Rempp

This is a perceptron simulator. A perceptron takes a set of inputs, usually from {-1, 1}, and uses them to evaluate an activation function:

$$net = \sum_{i=1}^{n} w_i x_i = \mathbf{w} \cdot \mathbf{x}$$
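The original Mathematica cells are not reproduced in this export, so here is an illustrative Python sketch of the activation computation; the values of `w` and `x` are made up for the example:

```python
# Illustrative example of the net activation, net = w . x.
# The last entry of each vector pairs the bias weight with its bias input.
w = [0.5, -0.3, 0.1]   # weight vector (values are illustrative only)
x = [1, -1, 1]         # one input triple from {-1, 1}, plus the bias input

# net = sum_i w_i * x_i, written out as a dot product
net = sum(wi * xi for wi, xi in zip(w, x))
print(net)  # approximately 0.9
```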

This activation value is then used in the threshold function:

$$o = \operatorname{sgn}(net) = \begin{cases} \;\;1 & net \ge 0 \\ -1 & net < 0 \end{cases}$$
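The bipolar threshold can be sketched in Python as a one-line function (the tie-breaking choice of `1` at `net == 0` is an assumption, since the original cells are not shown):

```python
# Bipolar threshold: o = 1 if net >= 0, else -1.
def threshold(net):
    return 1 if net >= 0 else -1

print(threshold(0.9))   # 1
print(threshold(-0.2))  # -1
```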

This output is compared against the desired output for that input pair, and if it is not correct, an error function adjusts the weights according to this equation:

$$w_i \leftarrow w_i + \eta\,(d - o)\,x_i$$

This is then repeated with the new weight values until an acceptable error level is reached:

$$E = \frac{1}{2}\sum_{k}(d_k - o_k)^2$$
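As a small worked example of this error measure (the desired and actual outputs below are made up):

```python
# Error for one pass through the data set: E = 1/2 * sum_k (d_k - o_k)^2.
desired = [1, 1, -1, -1]
outputs = [1, -1, -1, 1]   # two misclassified points

E = 0.5 * sum((d - o) ** 2 for d, o in zip(desired, outputs))
print(E)  # each wrong bipolar output contributes (+-2)^2 / 2 = 2, so E = 4.0
```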

This example uses two inputs and a set of 10 data triples (the third value is the bias, β). The graphs section shows the error and the linear separation of the two classes. In this example I use the terms vector and list interchangeably: where the algorithm calls for vectors I use *Mathematica* list data types. I also use the list data type to store data for error computation and other uses.

## Compute Perceptron

### Initialize Variables

##### Reset Variables

##### Set the input vector values

##### Set the initial weight vector

##### Set the weight record vector to an empty list

##### Set the output vector to an empty list

##### Set the net activation vector to an empty list

##### Set the region 1 vector to an empty list

##### Set the region 2 vector to an empty list

##### Set the desired output vector

##### Set the error vector to an empty list

##### Set the learning constant

##### Set the number of times to loop through the network during training.

Note that the total training iterations = loops * sets of input values

### Compute

The perceptron algorithm.

Line 1 - Go through the input vector *loop* times.

Line 2 - Reset the output vector to an empty list for another pass through.

Line 3 - Reset the activation vector to an empty list for another pass through.

Line 4 - Loop through each of the input values. --TODO-- Change the hard coded value to a length function of the input vector.

Line 5 - The net activation level, *net*, is calculated using the activation function $net = \mathbf{w} \cdot \mathbf{x}$. For this algorithm I substituted a dot product for the summation for computational convenience.

Line 6 - The activation level is added to a storage list.

Line 7 - The activation value is used to produce an output value using the bipolar threshold function $o = \operatorname{sgn}(net)$.

Line 8 - The output value is added to a storage list.

Line 9 - Calculate the weight adjustment if necessary and store it to the weight vector; else store the same weight to the weight vector, since the output was correct.

Line 13 - Calculate the error for this loop through the data set and store it to the error list.

Line 14 - Store this loop's weight vector to a list of weight vectors for later use.

Line 15 - Reset the weight vector to the last set of weight values for use in the next loop.
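Since the notebook's code cells do not survive in this export, the line-by-line description above can be sketched in Python as follows. The training data, learning constant, and loop count here are illustrative, not the author's values:

```python
# Python sketch of the perceptron algorithm described line by line above.
def train_perceptron(inputs, desired, w, eta, loops):
    weight_record, errors = [], []
    for _ in range(loops):                       # Line 1: repeat `loops` times
        outputs, activations = [], []            # Lines 2-3: reset storage lists
        for x, d in zip(inputs, desired):        # Line 4: loop over the inputs
            net = sum(wi * xi for wi, xi in zip(w, x))  # Line 5: net = w . x
            activations.append(net)              # Line 6: record the activation
            o = 1 if net >= 0 else -1            # Line 7: bipolar threshold
            outputs.append(o)                    # Line 8: record the output
            if o != d:                           # Line 9: adjust only on error
                w = [wi + eta * (d - o) * xi for wi, xi in zip(w, x)]
        # Line 13: error for this pass through the data set
        errors.append(0.5 * sum((d - o) ** 2 for d, o in zip(desired, outputs)))
        weight_record.append(w)                  # Lines 14-15: record and carry
    return w, weight_record, errors              #   the weights into the next loop

# Illustrative data: (x1, x2, bias) triples with the bias input fixed at 1
inputs  = [(2, 1, 1), (1, 2, 1), (-1, -2, 1), (-2, -1, 1)]
desired = [1, 1, -1, -1]
w, record, errors = train_perceptron(inputs, desired, [0.0, 0.0, 0.0], 0.1, 10)
print(errors[0], errors[-1])  # the error falls to zero once the classes separate
```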

## Results

### Preprocessing

##### Create Tables

##### Create equation for linear separation

##### Create data regions for graphing

##### Create error graph

##### Create separation line graph

##### Create region 1 graph

##### Create region 2 graph

##### Create a list of graphs that represent the linear separation at the end of each loop
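The separation line plotted in these graphs can be recovered from the trained weights: points on the decision boundary satisfy $w_1 x_1 + w_2 x_2 + w_3 = 0$ (with the bias input fixed at 1). A hypothetical Python sketch, with illustrative weights:

```python
# Decision boundary from the weights: w1*x1 + w2*x2 + w3 = 0, so
# x2 = -(w1*x1 + w3) / w2 whenever w2 != 0.
def separation_x2(w, x1):
    w1, w2, w3 = w
    return -(w1 * x1 + w3) / w2

w = [0.2, 0.4, -0.2]          # illustrative trained weights, not the author's
print(separation_x2(w, 0.0))  # x2 value of the boundary at x1 = 0
```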

### Results

##### Show tables

Table (Activation values, Weights)

##### Show graphs

Error Graph

Region Graph

The Separation

The progression of the separation

Converted by *Mathematica*
December 8, 2003