Talk:Gauss–Newton algorithm
Confusion about 'normal equations'
The 'Notes' section refers to something called 'the normal equations'. However, nothing of that name is mentioned in the main section ('Description') above the notes. Could the main section be altered to explicitly introduce these normal equations and also explain why they need to be solved? The thing is, at the bottom of the main section there is an expression for the increment Δβ. This expression seems perfect for straight-out computation, as the desired component has been neatly isolated. Why then would we still need to do some 'solving of the normal equations'? — Preceding unsigned comment added by 87.73.120.49 (talk) 22:24, 19 January 2016 (UTC)
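A note for anyone with the same question: the 'normal equations' are simply the linear system that the closed-form expression writes out with an explicit inverse. With the residual Jacobian J_r they read (J_r^\top J_r) \Delta\beta = -J_r^\top r, and in practice one solves this system rather than forming the inverse, which is both cheaper and numerically more stable. Below is a minimal MATLAB sketch of a single step, using a few data points borrowed from the demo code further down this page (the names beta, J, r, delta are illustrative only, not from the article).

%Minimal sketch of one Gauss-Newton step for the model y = B1*x./(B2+x)
%(data borrowed from the demo code below; variable names are illustrative)
x = [0.038 0.1947 0.425 0.626]';
y = [0.05 0.127 0.094 0.2122]';
beta = [0.9; 0.2];                                %current guess [B1; B2]

r = y - beta(1)*x./(beta(2)+x);                   %residuals r(beta)
J = [-x./(beta(2)+x) beta(1)*x./(beta(2)+x).^2];  %Jacobian of r w.r.t. beta

%Normal equations (J'*J)*delta = -J'*r, solved as a linear system:
delta = -(J'*J) \ (J'*r);
beta_next = beta + delta;                         %updated parameter vector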
Name?
I always thought that Newton's method, when applied to systems of equations, is still called Newton's method (or Newton–Raphson), and that Gauss–Newton is a modified Newton's method for solving least squares problems. To wit, let f(x) = sum_i r_i(x)^2 be the least squares problem (x is a vector, and I hope you don't mind my being too lazy to properly typeset the maths). Then we need to solve 2 A(x) r(x) = 0, where A(x) is the Jacobian matrix, so A_ij = derivative of r_j w.r.t. x_i. In my understanding, Newton's method is as described in the article: f'(x_k) (x_{k+1} - x_k) = - f(x_k) with f(x) = 2 A(x) r(x). The derivative f'(x) can be calculated as f'(x) = 2 A(x) A(x)^\top + 2 sum_i r_i(x) nabla^2 r_i(x). On the other hand, the Gauss–Newton method neglects the second term, so we get the iteration A(x_k) A(x_k)^\top (x_{k+1} - x_k) = - A(x_k) r(x_k).
Could you please tell me whether I am mistaken, preferably giving a reference if I am? Thanks. -- Jitse Niesen 18:33, 18 Nov 2004 (UTC)
- It is quite possible that you are right. Please feel free to improve both this article and the corresponding section in Newton's method. I do not have an appropriate reference at hand, therefore I am unable to contribute to a clarification. - By the way, thanks for the personal message on my talk page. -- Frau Holle 22:23, 20 Nov 2004 (UTC)
- I moved the description of the general Newton's method in R^n back to Newton's method and wrote here a bit about the modified method for least squares problems. -- Jitse Niesen 00:45, 5 Dec 2004 (UTC)
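For anyone comparing the two iterations discussed in this thread, they can be written side by side in the notation of the question above (with f(x) = \sum_i r_i(x)^2, A_{ij} = \partial r_j / \partial x_i, and the common factor of 2 divided out of both sides):

Newton: \left( A(x_k) A(x_k)^\top + \sum_i r_i(x_k) \nabla^2 r_i(x_k) \right) (x_{k+1} - x_k) = -A(x_k) r(x_k)

Gauss–Newton: A(x_k) A(x_k)^\top (x_{k+1} - x_k) = -A(x_k) r(x_k)

Gauss–Newton is obtained from Newton by dropping the term \sum_i r_i(x_k) \nabla^2 r_i(x_k), which is small when the residuals are small or nearly linear near the solution.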
Demo Code
I have implemented the algorithm given in this article in Matlab code. However, I don't know how to add it to the page / format it, etc. It may be useful.
%GaussNewton_Demo

%Data
x = [0.038 0.1947 0.425 0.626 1.253 2.500 3.740]';
y = [0.05 0.127 0.094 0.2122 0.2729 0.2665 0.3317]';

%Plot X
X = [0.01:0.01:4];

%Initial guess
B1 = 0.9;
B2 = 0.2;

%Plot the data
figure(1)
cla
hold on
scatter(x,y, 'rd', 'fill')
axis auto

%Loop over Gauss-Newton iterations
for i = 1:5
    %Jacobian of the residuals w.r.t. [B1; B2]
    J = [-x./(B2+x) (B1*x)./(B2+x).^2];
    %Hessian (approx)
    H = J'*J;
    %Calculate residuals
    r = y - (B1*x)./(B2+x);
    %Sum of squares of residuals (printed each iteration)
    SSR = sum(r.*r)
    %Plot the current fit
    Y = (B1*X)./(B2+X);
    plot(X,Y, 'LineWidth', 1)
    %Calculate delta (Gauss-Newton step from the normal equations)
    Delta = pinv(H)*J'*r;
    %Apply delta to parameters
    B1 = B1 - Delta(1);
    B2 = B2 - Delta(2);
    pause(1)
end
— Preceding unsigned comment added by MartyBebop (talk • contribs) 11:39, 12 April 2012 (UTC)
Data fitting
I think that in the Description section, when the data fitting example is introduced, it should read
- — Preceding unsigned comment added by 129.240.215.211 (talk) 12:32, 27 May 2015 (UTC)
Factor of 1/2 missing
[edit]It's there a factor of 1/2 missing in the derivation of \Delta \beta ? Alzibub (talk) 11:44, 9 January 2021 (UTC)
Ignore that. I see my mistake now. Alzibub (talk) 15:21, 9 January 2021 (UTC)
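For anyone else checking the same point: the factor of 1/2 is purely a matter of convention and cancels out of the step. Writing it out (with J_r the Jacobian of the residuals r):

With S(\beta) = \sum_i r_i(\beta)^2: \nabla S = 2 J_r^\top r and \nabla^2 S \approx 2 J_r^\top J_r, so the step solves 2 J_r^\top J_r \, \Delta\beta = -2 J_r^\top r.

With S(\beta) = \tfrac{1}{2} \sum_i r_i(\beta)^2: \nabla S = J_r^\top r and \nabla^2 S \approx J_r^\top J_r, so the step solves J_r^\top J_r \, \Delta\beta = -J_r^\top r.

Either way, \Delta\beta = -\left(J_r^\top J_r\right)^{-1} J_r^\top r, so no factor of 1/2 survives in the final formula.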