Optimization Toolbox    

Nonlinear Equations with Jacobian Sparsity Pattern

In the preceding example, the function nlsf1 computes the Jacobian J, a sparse matrix, along with the evaluation of F. What if the code to compute the Jacobian is not available? By default, if you do not indicate that the Jacobian can be computed in nlsf1 (by setting the Jacobian parameter in options to 'on'), fsolve, lsqnonlin, and lsqcurvefit instead use finite differencing to approximate the Jacobian.

For this finite differencing to be as efficient as possible, supply the sparsity pattern of the Jacobian using the JacobPattern parameter in options. That is, supply a sparse matrix Jstr whose nonzero entries correspond to nonzeros of the Jacobian for all x. Indeed, the nonzeros of Jstr can correspond to a superset of the nonzero locations of J; however, in general the computational cost of the sparse finite-difference procedure increases with the number of nonzeros of Jstr.
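For instance, a Jacobian that is tridiagonal for all x, as in the 1000-variable problem below, can be described compactly with spdiags; this short sketch (the variable names are illustrative) builds such a pattern:

```matlab
% Sketch: build a tridiagonal sparsity pattern for an n-by-n Jacobian.
% Only the locations of the nonzeros matter, not their values.
n = 1000;
Jstr = spdiags(ones(n,3), -1:1, n, n);  % sub-, main, and superdiagonal
nnz(Jstr)                               % 2998 nonzeros for n = 1000
```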

Providing the sparsity pattern can drastically reduce the time needed to compute the finite differences on large problems. If the sparsity pattern is not provided (and the Jacobian is not computed in the objective function either), then in this problem the finite-differencing code attempts to compute all 1000-by-1000 entries of the Jacobian. But the Jacobian here has only 2998 nonzeros, substantially fewer than the 1,000,000 entries the dense finite-differencing code would attempt to compute. In other words, this problem is solvable if the sparsity pattern is provided; if not, most computers run out of memory when full dense finite differencing is attempted. On most small problems, it is not essential to provide the sparsity structure.

Suppose the sparse matrix Jstr, computed previously, has been saved in the file nlsdat1.mat. The following driver calls fsolve applied to nlsf1a, which is the same as nlsf1 except that only the function values are returned; sparse finite differencing is used to estimate the sparse Jacobian matrix as needed.

Step 1: Write an M-file nlsf1a.m that computes the objective function values
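A sketch of such an M-file, assuming the tridiagonal nonlinear system from the preceding nlsf1 example (the exact expressions depend on that example):

```matlab
function F = nlsf1a(x)
% Evaluate the vector function only; no Jacobian is returned.
n = length(x);
F = zeros(n,1);
i = 2:(n-1);
F(i) = (3-2*x(i)).*x(i) - x(i-1) - 2*x(i+1) + 1;
F(n) = (3-2*x(n)).*x(n) - x(n-1) + 1;
F(1) = (3-2*x(1)).*x(1) - 2*x(2) + 1;
```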

Step 2: Call the system of equations solve routine
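A minimal driver along these lines, assuming Jstr is stored in nlsdat1.mat as described above (the option names follow the optimset interface of this toolbox release):

```matlab
xstart = -ones(1000,1);   % starting point
load nlsdat1              % loads the sparsity pattern Jstr
options = optimset('Display','iter','JacobPattern',Jstr,...
                   'LargeScale','on','PrecondBandWidth',1);
[x,fval,exitflag,output] = fsolve(@nlsf1a,xstart,options);
```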

In this case, fsolve displays iterative output listing, for each major iteration, the function count, f(x) (the sum of squares of the function values), the norm of the step, the first-order optimality measure, and the number of CG iterations.

Alternatively, you can choose a sparse direct linear solver (that is, a sparse QR factorization) by indicating a "complete" preconditioner: if you set PrecondBandWidth to Inf, a sparse direct linear solver is used instead of a preconditioned conjugate gradient iteration,
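Continuing the driver above, this can be sketched as:

```matlab
% Reuse the previous options, but request a sparse direct solve
% by setting a "complete" preconditioner bandwidth.
options = optimset(options,'PrecondBandWidth',Inf);
[x,fval,exitflag,output] = fsolve(@nlsf1a,xstart,options);
```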

and the resulting iterative display changes accordingly.

When the sparse direct solver is used, the CG iteration count is 1 for each (major) iteration, as shown in the output under CG-Iterations. Notice that the final first-order optimality measure and f(x) value (which for fsolve is the sum of squares of the function values) are closer to zero than with the PCG method, which is often the case.

