Conditioning

We define a problem as a function $f : X \to Y$, where $f$ is a (possibly nonlinear) function from a normed vector space $X$ to a normed vector space $Y$. Given an input data $x \in X$, we can say the problem instance is:

$$y = f(x)$$

Perturbations of the input can arise from disturbances, noise, and/or numerical errors. A problem is well-conditioned if small perturbations of $x$ lead to only small changes in $f(x)$, and ill-conditioned if a small perturbation can lead to a large change. But how do we define small/large?

1. The Condition Number

For data $x$ and perturbation $\delta x$, we define the (absolute) condition number $\hat{\kappa}$, or $\hat{\kappa}(x)$, as:

$$\hat{\kappa} = \sup_{\delta x} \frac{\|\delta f\|}{\|\delta x\|}, \qquad \text{where } \delta f = f(x + \delta x) - f(x)$$

More precisely, we are interested in small perturbations:

$$\hat{\kappa} = \lim_{\delta \to 0} \, \sup_{\|\delta x\| \le \delta} \frac{\|\delta f\|}{\|\delta x\|}$$

If $f$ is differentiable, then $\hat{\kappa} = \|J(x)\|$, where $J(x)$ is the Jacobian of $f$ at $x$.
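As a concrete illustration (a minimal Python sketch; the function and derivative here are my own example, not from the notes), in one dimension the Jacobian norm reduces to the absolute value of the derivative:

```python
import math

def abs_condition(dfdx, x):
    """Absolute condition number of a differentiable scalar problem:
    kappa_hat(x) = |f'(x)|, the Jacobian norm in one dimension."""
    return abs(dfdx(x))

# Example: f(x) = sqrt(x) has f'(x) = 1 / (2 sqrt(x)).
kappa_hat = abs_condition(lambda x: 1.0 / (2.0 * math.sqrt(x)), 4.0)
print(kappa_hat)  # 0.25
```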

1.1 Relative Condition Number

We may care about the relative perturbation $\|\delta x\| / \|x\|$, which leads to the relative output change $\|\delta f\| / \|f(x)\|$:

$$\kappa = \lim_{\delta \to 0} \, \sup_{\|\delta x\| \le \delta} \frac{\|\delta f\| / \|f(x)\|}{\|\delta x\| / \|x\|}$$

If $f$ is differentiable, then $\kappa = \dfrac{\|J(x)\| \, \|x\|}{\|f(x)\|}$.
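A short Python sketch of the differentiable-case formula $\kappa = |f'(x)| \, |x| / |f(x)|$ (the two example problems are illustrative assumptions, not from the notes):

```python
import math

def rel_condition(f, dfdx, x):
    """Relative condition number: kappa(x) = |f'(x)| * |x| / |f(x)|."""
    return abs(dfdx(x)) * abs(x) / abs(f(x))

# sqrt is well-conditioned: kappa = 1/2 for every x > 0.
kappa_sqrt = rel_condition(math.sqrt, lambda x: 1.0 / (2.0 * math.sqrt(x)), 100.0)

# Subtracting nearby numbers is ill-conditioned: f(x) = x - 1 near x = 1.
kappa_sub = rel_condition(lambda x: x - 1.0, lambda x: 1.0, 1.0001)

print(kappa_sqrt, kappa_sub)  # 0.5 and roughly 1e4
```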

2. Matrix-Vector Multiplication

Consider computing $f(x) = Ax$ for some matrix $A \in \mathbb{R}^{m \times n}$. We can define the relative condition number of $f$ as:

$$\kappa = \|A\| \, \frac{\|x\|}{\|Ax\|}$$

Because it shows up often, we define the condition number of a square nonsingular matrix $A$ as $\kappa(A) = \|A\| \, \|A^{-1}\|$. This gives a bound on the relative change in $Ax$ given a relative perturbation in $x$: since $\|x\| = \|A^{-1} A x\| \le \|A^{-1}\| \, \|Ax\|$, we have $\kappa \le \kappa(A)$ for every $x$. It is usually computed using induced norms $\|A\| = \sup_{x \ne 0} \|Ax\| / \|x\|$.
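For a small matrix, $\kappa(A) = \|A\| \, \|A^{-1}\|$ can be computed directly. A pure-Python sketch using the induced 1-norm (the maximum absolute column sum); the 2×2 matrix is my own example:

```python
def one_norm(A):
    """Induced matrix 1-norm: maximum absolute column sum."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def inv_2x2(A):
    """Inverse of a 2x2 matrix (assumes a nonzero determinant)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 2.0], [3.0, 4.0]]
cond_A = one_norm(A) * one_norm(inv_2x2(A))
print(cond_A)  # ||A||_1 = 6, ||A^{-1}||_1 = 3.5, so kappa_1(A) = 21.0
```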

2.1 Floating Point Arithmetic

We represent floating point numbers as $x = \pm (1.b_1 b_2 \cdots b_t)_2 \times 2^e$, where:

- $b_1, \ldots, b_t$ are the bits of the mantissa (with an implicit leading 1),
- $e$ is the integer exponent (for IEEE double precision, $t = 52$ and $-1022 \le e \le 1023$).

Now, we can say that for any $x \in \mathbb{R}$ (within range) there exists $\epsilon$ with $|\epsilon| \le \epsilon_{\text{machine}}$ such that $\mathrm{fl}(x) = x(1 + \epsilon)$. This $\epsilon$ is the relative error of rounding $x$. We can restate this as $\dfrac{|\mathrm{fl}(x) - x|}{|x|} \le \epsilon_{\text{machine}}$, where $\epsilon_{\text{machine}} = 2^{-53}$ in double precision. We can denote floating point operations as:

$$x \oplus y = \mathrm{fl}(x + y), \qquad x \ominus y = \mathrm{fl}(x - y), \qquad x \otimes y = \mathrm{fl}(x \times y), \qquad x \oslash y = \mathrm{fl}(x / y)$$

So, every operation of floating point arithmetic introduces a relative error of at most $\epsilon_{\text{machine}}$. We also have the fundamental axiom of floating point arithmetic: for each operation $* \in \{+, -, \times, /\}$ and all $x, y$,

$$x \circledast y = (x * y)(1 + \epsilon), \qquad |\epsilon| \le \epsilon_{\text{machine}}$$
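These bounds can be observed directly in Python (a small demonstration; note that `sys.float_info.epsilon` is $2^{-52}$, the gap between 1 and the next representable double, which is twice the unit roundoff $2^{-53}$):

```python
import sys

eps = sys.float_info.epsilon  # 2**-52 for IEEE double precision
print(eps)

# fl(0.1) + fl(0.2) != fl(0.3): the operands and the sum are each rounded.
print(0.1 + 0.2 == 0.3)  # False

# But the relative error is on the order of machine epsilon, as promised.
rel_err = abs((0.1 + 0.2) - 0.3) / 0.3
print(rel_err < 2 * eps)  # True
```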

3. Stability

We have a (possibly nonlinear) problem function $f : X \to Y$ between normed vector spaces. We now define an algorithm as a function $\tilde{f} : X \to Y$, mapping the input to the computed result. We can then define the relative error of computation as $\dfrac{\|\tilde{f}(x) - f(x)\|}{\|f(x)\|}$. An algorithm $\tilde{f}$ is stable for problem $f$ if for all $x \in X$:

$$\frac{\|\tilde{f}(x) - f(\tilde{x})\|}{\|f(\tilde{x})\|} = O(\epsilon_{\text{machine}}) \quad \text{for some } \tilde{x} \text{ with } \frac{\|\tilde{x} - x\|}{\|x\|} = O(\epsilon_{\text{machine}})$$

An algorithm $\tilde{f}$ is backwards stable if for all $x \in X$:

$$\tilde{f}(x) = f(\tilde{x}) \quad \text{for some } \tilde{x} \text{ with } \frac{\|\tilde{x} - x\|}{\|x\|} = O(\epsilon_{\text{machine}})$$

In other words, an algorithm is backward stable if it produces an exact answer to nearly the right question. Backward stability is the stronger notion: an algorithm can be stable without being backward stable, but backward stability always implies stability.
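A small Python check of backward stability for a single subtraction (my own illustration; `fractions.Fraction` gives exact rational arithmetic, so we can measure the perturbation exactly):

```python
from fractions import Fraction

x, y = 1.0, 0.1
computed = x - y  # one floating point operation: fl(x - y)

# By the fundamental axiom, computed = (x - y)(1 + eps) with
# |eps| <= eps_machine, i.e. computed is the EXACT difference of the
# slightly perturbed input x_tilde = x(1 + eps). Measure |eps| exactly.
exact = Fraction(x) - Fraction(y)
backward_eps = abs(Fraction(computed) - exact) / abs(exact)
print(float(backward_eps))  # on the order of 1e-17, within O(eps_machine)
```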

3.1 Conditioning and Stability

Suppose a backwards stable algorithm $\tilde{f}$ for a problem $f$ with relative condition number $\kappa$ is implemented on a computer. Then the relative errors satisfy:

$$\frac{\|\tilde{f}(x) - f(x)\|}{\|f(x)\|} = O(\kappa(x) \, \epsilon_{\text{machine}})$$

So the relative error of the computed solution depends on both the condition number of the problem and the machine precision.
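A Python illustration of this bound (my own example): computing $f(x) = (1 + x) - 1$ uses only backward stable operations, but for tiny $x$ the final subtraction of nearby numbers is ill-conditioned, so the forward error grows to roughly $\kappa \cdot \epsilon_{\text{machine}}$:

```python
import sys

eps = sys.float_info.epsilon
x = 1e-12

computed = (1.0 + x) - 1.0       # two backward stable operations
rel_err = abs(computed - x) / x  # forward relative error
kappa = (1.0 + x) / x            # condition number of subtracting 1 near 1 + x

print(rel_err)      # about 1e-4: most of the digits of x are lost
print(kappa * eps)  # the O(kappa * eps_machine) bound, about 2e-4
```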
