This is just a short post on the current status of the Cambridge projects. There are about 18 members so far and counting: four are active and the others are just lurking for now, which is fine.

We have started to look at the first project: automated root searching, i.e. finding where f(x) = 0, together with the associated errors. There are three methods:

1) Bisection

2) Direct iteration (which we have covered in MS221)

3) Newton-Raphson (again, covered in MS221)

In this post I will discuss the method of bisection, which is really quite clever and simple. If an interval [a,b] brackets a root of a function f(x), then f(a) and f(b) must have different signs, so that the product f(a)f(b) is less than zero. Let c lie half way between a and b, i.e. c = (a+b)/2, then evaluate the function at c.

Now if f(a)f(c) < 0 then the root lies in the interval [a,c]; otherwise it lies in [c,b]. Replace b or a accordingly and then bisect the new interval. Keep doing this until the interval width |b - a| is less than a predefined tolerance; c then approximates the root of the function to within that tolerance.
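The steps above can be sketched in a few lines of Python. This is only a minimal illustration, not the actual programme mentioned later in the post; the function name, the test function and the tolerance are my own choices.

```python
def bisect(f, a, b, tol=1e-10):
    """Find a root of f in [a, b], assuming f(a) and f(b) differ in sign."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("the interval [a, b] must bracket a root")
    while abs(b - a) > tol:
        c = 0.5 * (a + b)      # midpoint of the current interval
        fc = f(c)
        if fa * fc < 0:
            b, fb = c, fc      # the root lies in [a, c]
        else:
            a, fa = c, fc      # the root lies in [c, b]
    return 0.5 * (a + b)

# Example: f(x) = x^2 - 2 changes sign on [1, 2], so this converges to sqrt(2)
root = bisect(lambda x: x * x - 2.0, 1.0, 2.0)
```

Note that the sign test uses the stored values f(a) and f(c) rather than re-evaluating f, so each step costs only one function evaluation.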

The convergence of the error is linear, as can be seen as follows. Since the method guarantees that the root lies within the current interval, the error at any stage can be no greater than the interval width:

e ≤ |b - a|

As the interval, and hence the error bound, is halved at each step, the error at the (n+1)th step satisfies

e(n+1) = (1/2) e(n).

As this is linearly dependent on the previous error, the convergence is said to be linear. Whilst the method is guaranteed to work, provided the initial interval brackets the root, faster convergence is in fact much preferable. Of the techniques listed above, the Newton-Raphson method has quadratic convergence provided the root is simple; I shall discuss this later.

In the meantime I was able to write a simple programme to implement the method. Choosing the initial interval is made easier by getting the code to plot the function first. Anyway, a good start. I've nearly finished the direct iteration method and I'll post the results of that next week, so it looks like I'm in a good position to finish the initial project by the end of September.
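One concrete consequence of the halving is that the number of steps needed for a given tolerance can be worked out in advance: after n bisections the error bound is |b - a| / 2^n, so we need n ≥ log2(|b - a| / tol). A quick sketch, with an illustrative interval width and tolerance of my own choosing:

```python
import math

# Steps needed to bisect an interval of width 1 down to a tolerance of 1e-10.
width = 1.0   # initial bracket width |b - a| (illustrative)
tol = 1e-10   # required tolerance (illustrative)
n = math.ceil(math.log2(width / tol))
# log2(1e10) is about 33.2, so 34 bisections suffice
```

This slow but utterly predictable behaviour is the trade-off for the method's guaranteed convergence.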

This weekend I also completed the 6th TMA for M208. This was relatively straightforward; the topics covered were as follows:

1) A question on limits and continuity, including a basic question on the epsilon-delta definition of continuity, which I'm beginning to like, mainly through practice.

2) Questions on differentiability from the definition. I found the first part a bit tricky as it was a quotient, and applying the definition directly led to a complete mess, so I was forced to follow the derivation of the quotient rule from the definition. I got there in the end but can't help thinking there was an easier way.

3) Questions on partitions and lower and upper sums for a discontinuous function; these were alright provided you take care as to where the function is defined. Then (at last) some real calculus, involving integration by substitution and the derivation of a recurrence relation.

4) Questions on Taylor series. Prior to embarking on the Cambridge Computing projects this would have seemed a bit dry, but given that Taylor expansions form the basis of most numerical techniques, it has taken on a whole new interest. Those of my fellow students who are bored with Taylor series and haven't already enrolled on the Cambridge projects might like to enlist, even just as lurkers.

Anyway, I really must get to grips with TMA04 for Complex Variables over the next week. I'm giving myself Monday and Tuesday of the week after next off, so I hope to have finished it by then.

I've also started to read Hume's Enquiry in preparation for my Edinburgh philosophy course. He has an interesting phrase for applied maths, which he calls "mixed maths". It's a joy to read and I'm looking forward to studying it in depth.
