Gradient Descent to Learn Theta in Matlab/Octave
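For reference, the update rule the code below implements is batch gradient descent for linear regression (standard notation; the hypothesis h and the superscript indexing are spelled out here for clarity, not taken from the original post):

```latex
\theta_j := \theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\,x_j^{(i)},
\qquad h_\theta(x) = \theta^{\mathsf{T}} x
```

All parameters are updated from the same previous parameter vector (a simultaneous update), which is why the code takes a snapshot of theta before the inner loop.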



function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by 
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);

for iter = 1:num_iters

    % ====================== Main CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    %               theta. 
    %
    % Snapshot theta so every theta(j) is updated from the SAME previous
    % values (a simultaneous update). Avoid naming this copy "hold": that
    % name shadows MATLAB/Octave's built-in hold function.
    theta_prev = theta;

    for j = 1:length(theta)
        theta(j) = theta_prev(j) - (alpha / m) * sum((X * theta_prev - y) .* X(:, j));
    end



    % ============================================================

    % Save the cost J in every iteration    
    J_history(iter) = computeCost(X, y, theta);

end

end
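The helper computeCost called above isn't shown in the post; a minimal sketch consistent with this update rule (mean squared error with the conventional 1/(2m) factor — the exact body is an assumption, only the name and signature come from the call above) could be:

```matlab
function J = computeCost(X, y, theta)
%COMPUTECOST Compute cost for linear regression
%   J = COMPUTECOST(X, y, theta) returns the squared-error cost (scaled by
%   1/(2m)) of using theta to fit the data in X and y.

m = length(y);            % number of training examples
errors = X * theta - y;   % residual for each example
J = (errors' * errors) / (2 * m);

end
```

With a suitably small alpha, the J_history returned by gradientDescent should be non-increasing, so plotting it is a quick convergence check. The inner loop over j can also be replaced by the single vectorized update theta = theta - (alpha / m) * X' * (X * theta - y);, which computes all partial derivatives in one matrix product.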

9 thoughts on “Gradient Descent to Learn Theta in Matlab/Octave”

  1. I’m getting an error:

    Error using hold
    Too many output arguments.

    Error in gradientDescent (line 22)
    theta(j) = hold(j) - (alpha * sum((X * hold - y) .* X(:, j))) / m;
