Parameter Tuning for Perceptron Learning Algorithm
- by Albert Diego
Hi,
I'm having trouble figuring out how to tune the parameters of my perceptron algorithm so that it performs reasonably well on unseen data.
I've implemented a working perceptron, and I'd like a method for tuning its number of iterations and its learning rate. These are the two parameters I'm interested in.
I know that the learning rate doesn't affect whether or not the perceptron converges and completes. What I'm trying to grasp is how to choose the learning rate (the `n` in my code): my intuition is that too high and the weights will swing around a lot, and too low and training will take longer, but I'm not sure that intuition is right here.
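For reference, here is roughly the update step I'm using (a NumPy sketch; the function and variable names are mine, and labels are assumed to be in {-1, +1}):

```python
import numpy as np

def perceptron_train(X, y, eta=1.0, n_iter=10):
    """Train a perceptron; X is (n_samples, n_features), y holds labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        for xi, yi in zip(X, y):
            # On a mistake (or a point exactly on the boundary), nudge the
            # weights toward the misclassified example, scaled by eta.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += eta * yi * xi
                b += eta * yi
    return w, b
```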
As for the number of iterations, I'm not entirely sure how to determine an ideal number.
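One approach I'm considering is a plain grid search over both parameters, scored on a held-out validation split, roughly like this (just a sketch; `perceptron_train` is the function above, and the grids and split fraction are arbitrary choices on my part):

```python
def predict(w, b, X):
    """Predict labels in {-1, +1} for each row of X."""
    return np.where(X @ w + b >= 0, 1, -1)

def tune(X, y, etas=(0.01, 0.1, 1.0), iter_grid=(5, 10, 50, 100), val_frac=0.2, seed=0):
    """Pick (eta, n_iter) by accuracy on a random held-out validation split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_val = int(len(X) * val_frac)
    X_val, y_val = X[idx[:n_val]], y[idx[:n_val]]
    X_tr, y_tr = X[idx[n_val:]], y[idx[n_val:]]

    best_params, best_acc = None, -1.0
    for eta in etas:
        for n_iter in iter_grid:
            w, b = perceptron_train(X_tr, y_tr, eta=eta, n_iter=n_iter)
            acc = np.mean(predict(w, b, X_val) == y_val)
            if acc > best_acc:
                best_params, best_acc = (eta, n_iter), acc
    return best_params, best_acc
```

Is something along these lines a sensible way to go about it?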
In any case, any help would be appreciated. Thanks.