Implementing gradient descent in MATLAB: why does the error keep growing? Source code below.
Source: Student Homework Help Network; Editor: Liuliu Zuoye Wang; Time: 2024/11/06 01:35:04
I implemented gradient descent in MATLAB, but the error keeps getting larger with each iteration. Why? Here is the source code:
function [ theta,J_history ] = gradientDescent( X,y,theta,alpha,num_iters )
%GRADIENTDESCENT Run num_iters steps of batch gradient descent on (X, y),
% returning the fitted theta and the cost J_history after each iteration.
m=size(X,1);
J_history = zeros(num_iters,1);
for iter=1:num_iters
p=theta(1)-alpha*(sum((X*theta-y).*X(:,1)));
q=theta(2)-alpha*(sum((X*theta-y).*X(:,2)));
theta(1)=p;
theta(2)=q;
J_history(iter) = computeCost(X,y,theta);
end
end
Why doesn't m appear anywhere in your for loop? You compute m = size(X,1) but never use it, so each step sums the gradient over all m examples without averaging, making the effective step size m times too large. The updates should be:
p= theta(1) - (alpha / m) * sum((X * theta - y).* X(:,1));
q= theta(2) - (alpha / m) * sum((X * theta - y).* X(:,2));
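To see concretely why the missing 1/m factor makes the cost blow up, here is a NumPy sketch (Python used for illustration; the names X, y, theta, alpha mirror the MATLAB code, and compute_cost is an assumed standard half-mean-squared-error cost, since computeCost itself is not shown in the question). The scale_by_m flag switches between the corrected update and the asker's buggy one.

```python
import numpy as np

def compute_cost(X, y, theta):
    """J(theta) = (1/(2m)) * sum((X*theta - y).^2), the usual
    linear-regression cost that computeCost presumably implements."""
    m = len(y)
    r = X @ theta - y
    return (r @ r) / (2 * m)

def gradient_descent(X, y, theta, alpha, num_iters, scale_by_m=True):
    """Batch gradient descent for linear regression.
    scale_by_m=False reproduces the bug in the question: the summed
    gradient is not divided by m, so the effective learning rate is
    m times larger than intended and the iterates can diverge."""
    m = X.shape[0]
    J_history = np.zeros(num_iters)
    for it in range(num_iters):
        grad = X.T @ (X @ theta - y)   # gradient summed over all m examples
        if scale_by_m:
            grad = grad / m            # the missing 1/m factor
        theta = theta - alpha * grad   # simultaneous update of both components
        J_history[it] = compute_cost(X, y, theta)
    return theta, J_history

# Tiny demo: fit y = 1 + 2x from 50 noiseless points.
x = np.linspace(0.0, 1.0, 50)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x

theta_ok, J_ok = gradient_descent(X, y, np.zeros(2), alpha=0.5, num_iters=500)
_, J_bad = gradient_descent(X, y, np.zeros(2), alpha=0.5, num_iters=5,
                            scale_by_m=False)
```

With the 1/m scaling, J_ok decreases toward zero and theta_ok approaches (1, 2); without it, J_bad grows from the very first iterations, which matches the diverging error the question describes. Dividing by m is equivalent to shrinking alpha by a factor of m, which is why the original code can also be "fixed" by choosing a much smaller alpha, but averaging the gradient is the conventional form.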