function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);

for iter = 1:num_iters
    % ====================== Main CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter
    %               vector theta.
    hold = theta; % note: 'hold' shadows a built-in graphics function;
                  % a name such as theta_prev is safer
    for j = 1:length(theta)
        theta(j) = hold(j) - (alpha * sum((X * hold - y) .* X(:, j))) / m;
    end
    % ============================================================

    % Save the cost J in every iteration
    J_history(iter) = computeCost(X, y, theta);
end

end
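For reference, the inner loop over j can be collapsed into a single matrix update. Below is a minimal NumPy sketch of the same math (the function name, toy data, and cost computation are illustrative, not from the course code):

```python
import numpy as np

def gradient_descent(X, y, theta, alpha, num_iters):
    """Batch gradient descent: theta := theta - (alpha/m) * X'(X*theta - y)."""
    m = len(y)
    J_history = np.zeros(num_iters)
    for it in range(num_iters):
        theta = theta - (alpha / m) * (X.T @ (X @ theta - y))
        # squared-error cost J = 1/(2m) * sum((X*theta - y)^2)
        J_history[it] = np.sum((X @ theta - y) ** 2) / (2 * m)
    return theta, J_history

# Toy data that exactly fits y = 1 + 2*x (first column of X is the intercept)
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
theta, J = gradient_descent(X, y, np.zeros(2), alpha=0.3, num_iters=500)
```

On this toy set the iterates converge to theta = [1, 2], and J_history decreases monotonically, which is a handy sanity check for the loop version too.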
How do you implement this function in Octave?
This function is implemented in Octave.
Shouldn't it be divided by m?
Ohh!! OK, sorry!! I misread the equation.
Thank you very much!
A(I) = B, the number of elements in B and I must be the same.
I get this error on line 20 when running. Any idea why this is happening?
You probably did not put the differentiated factor X(:, j) inside the sum function as well.
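To see why that causes the A(I) = B error: with the X(:, j) factor inside the sum the result is a scalar, but with it outside you get an m-element vector, which cannot be assigned to the single element theta(j). A small NumPy sketch of the shape difference (toy sizes and names are illustrative):

```python
import numpy as np

# Toy data: m = 5 examples, n = 2 features
m, n = 5, 2
X = np.arange(float(m * n)).reshape(m, n)
y = np.ones(m)
theta = np.zeros(n)

err = X @ theta - y              # residuals, one per example: shape (m,)
j = 0
inside = np.sum(err * X[:, j])   # factor INSIDE the sum  -> scalar, OK
outside = np.sum(err) * X[:, j]  # factor OUTSIDE the sum -> shape (m,);
                                 # assigning this to theta(j) is the
                                 # Octave "A(I) = B" size mismatch
```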
I'm getting this error:
Error using hold
Too many output arguments.
Error in gradientDescent (line 22)
theta(j) = hold(j) - (alpha * sum((X * hold - y) .* X(:, j))) / m;
Can anyone please explain why it's valid to use X * hold? I thought it should be hold' * x. Thanks!
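Both views are consistent: each row of X is one example's feature vector x_i', so the matrix product X * theta computes x_i' * theta (equivalently theta' * x_i) for every example at once, stacking the scalars into an m-by-1 vector of predictions. A quick shape check in NumPy (sizes and names are illustrative):

```python
import numpy as np

m, n = 4, 3            # m examples, n parameters
X = np.ones((m, n))    # each ROW of X is one example x_i'
theta = np.ones(n)

pred = X @ theta       # (m, n) @ (n,) -> (m,): x_i' * theta for every i
one = X[0] @ theta     # per-example form theta' * x_i: a single scalar
```

So hold' * x is the per-example formula; applied to the whole design matrix it becomes X * hold.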