\documentclass[12pt]{article}
\topmargin=-0.4in
\textheight=10.4in
\textwidth=6.5in
\oddsidemargin=0.0in
\pagestyle{empty}

\begin{document}

\begin{center}
CALIFORNIA STATE UNIVERSITY, LONG BEACH\\
\vspace*{.10in}
Department of Mathematics and Statistics\\
\vspace*{.15in}
NUMERICAL ANALYSIS COMPREHENSIVE EXAM\\
\vspace*{.10in}
Sample Exam 2\\
\end{center}

\vspace{.4in}
\noindent
Choose any \underline{Five Problems}. You may use only a
\underline{non-graphing calculator}.

\vspace*{.25in}
\noindent
1. a) Let $A \in \mbox{\bf R}^{n \times n}$. Prove that
$$
\|A\|_{1} = \max_{1 \leq j \leq n} \sum_{i=1}^{n} |a_{ij}| .
$$
Include all details.

\vspace*{.1in}
\noindent
b) Let $A, B \in \mbox{\bf R}^{n \times n}$. Let $\| \cdot \|$ be any norm
on $\mbox{\bf R}^{n}$ and let $\|A\|$ be the corresponding natural (induced)
matrix norm of $A$. State the \underline{definition} of $\|A\|$. Also,
\underline{prove} the matrix norm property $\|AB\| \leq \|A\| \|B\|$.
You may use other norm properties in your proof.

\vspace*{.15in}
\noindent
c) Let $A \in \mbox{\bf R}^{n \times n}$. Let $\| \cdot \|$ be
\underline{any} norm on $\mbox{\bf R}^{n}$ and let $\|A\|$ be the
corresponding natural matrix norm of $A$. Let $r_{\sigma}(A)$ be the
spectral radius of $A$. Prove that if $r_{\sigma}(A) < 1$, then
$\lim_{k \rightarrow \infty} \|A^{k}\| = 0$. You may use other facts that
you know about $r_{\sigma}(A)$ in your proof.

\vspace*{.25in}
\noindent
2.
In parts (a), (b), and (c) below you are to construct \underline{direct
proofs} concerning the convergence of fixed-point iteration methods and,
in particular, Newton's method. Do not quote similar results to prove
these parts.

\vspace*{.1in}
\noindent
a) Suppose $g(x) \in C^{1}[a,b]$ has a fixed point $p \in [a,b]$. Suppose
also that $g(x)$ has the following two properties:
1) There is a constant $\lambda > 0$ such that $|g'(x)| \leq \lambda < 1$
for all $x \in [a,b]$.
2) If $x \in [a,b]$, then $g(x) \in [a,b]$.

\noindent
Let $x_{0} \in [a,b]$. Observe that by property (2), each $x_{i}$ generated
by the iteration $x_{n+1} = g(x_{n})$, $n \geq 0$, is in $[a,b]$. Prove
that this sequence converges to the fixed point $p$.

\vspace*{.1in}
\noindent
b) Let $f \in C^{3}$ with $f(p) = 0$ and $f'(p) \neq 0$. Newton's method
can be written as the iteration
$$
x_{n+1} = g(x_{n}) \quad {\rm where} \quad g(x) = x - \frac{f(x)}{f'(x)} \ .
$$
First, show that $p$ is a fixed point of $g(x)$. Then, \underline{prove}
that for $\epsilon > 0$ sufficiently small, this $g(x)$ satisfies all the
conditions of part (a) on the interval $[a,b] = [p - \epsilon, p + \epsilon]$.
\underline{Hint}: For the proof, begin by evaluating $g'(p)$.

\vspace*{.1in}
\noindent
c) Using parts (a) and (b), state carefully what result you have proven
concerning whether or not Newton's method converges to $p$.

\newpage
\noindent
3. Let $A \in \mbox{\bf R}^{n \times n}$ be strictly diagonally dominant,
and let $b \in \mbox{\bf R}^{n}$. Let $x^{(k+1)} = M x^{(k)} + g$ be the
Jacobi iteration in matrix form for solving $Ax = b$.

\vspace*{.2in}
\noindent
a) Use Gerschgorin's theorem to prove that $0$ is not an eigenvalue of $A$.
Thus, $A$ is nonsingular.

\vspace*{.2in}
\noindent
b) From part (a) we know that $Ax = b$ has a unique solution $x$.
Beginning with the matrix equation $Ax = b$, derive the matrix $M$ and the
vector $g$ of the Jacobi iteration. Use your construction to argue that $x$
is a fixed point of the Jacobi iteration.

\vspace*{.2in}
\noindent
c) Show that $\|M\|_{\infty} < 1$. Then, give a direct proof (without
quoting a similar result) that $x^{(k)}$ converges to $x$ as
$k \rightarrow \infty$ for every $x^{(0)} \in \mbox{\bf R}^{n}$.
\underline{Hint}: Derive a suitable form for $(x^{(k)} - x)$.

\vspace*{.5in}
\noindent
4. a) Let $Q \in \mbox{\bf R}^{n \times n}$ be an orthogonal matrix. Prove
that $\|Q\|_{2} = 1$.

\vspace*{.2in}
\noindent
b) Let $x = (12, -4, -4, 0, 3)^{T}$ and $y = (12, -4, *, 0, 0)^{T}$. Choose
the best value for $*$ and construct the Householder matrix $P$ such that
$Px = y$. \underline{Explain} why the value you chose for $*$ is the best
choice.

\vspace*{.2in}
\noindent
c) Suppose the $QR$ factors are known for a general matrix
$A \in \mbox{\bf R}^{m \times n}$, $m > n$, with rank $n$. For
$b \in \mbox{\bf R}^{m}$, use $Q$ and $R$ to derive the least-squares
solution $x^{*} \in \mbox{\bf R}^{n}$ of the system $Ax = b$. Explain why
your $x^{*}$ is a solution. Is the solution unique? Explain.

\vspace*{.5in}
\noindent
5. a) Use the method of undetermined coefficients to derive the Simpson's
rule quadrature formula $Q(f) = A_{0} f(a) + A_{1} f(a+h) + A_{2} f(a+2h)$,
which approximates $\int_{a}^{a+2h} f(x) \, dx$.

\noindent
\underline{Hint}: Without loss of generality you may assume $a = 0$ to
simplify your calculations.

\vspace*{.2in}
\noindent
b) Let $f(x)$ be a smooth function and define
$F(x) \equiv \int_{a}^{x} f(t) \, dt$. Derive the error term of the
Simpson's rule quadrature formula $Q(f)$ which you found in part (a) by
applying Taylor series expansions about $a$ to $F(a+2h)$ and $Q(f)$ in
order to compute $\int_{a}^{a+2h} f(t) \, dt - Q(f)$.
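(For study purposes, the exactness behavior of Simpson's rule can be
checked numerically. The following is a minimal Python sketch, assuming the
standard Simpson weights $A_{0} = A_{2} = h/3$, $A_{1} = 4h/3$; the
function name \verb|simpson| is illustrative, not part of the exam.)

```python
def simpson(f, a, h):
    """Simpson's rule on [a, a+2h]: Q(f) = (h/3)(f(a) + 4 f(a+h) + f(a+2h))."""
    return (h / 3.0) * (f(a) + 4.0 * f(a + h) + f(a + 2.0 * h))

h = 0.5  # interval [0, 2h] = [0, 1]

# Exact on x^3: the integral of x^3 over [0, 2h] is (2h)^4 / 4
q3 = simpson(lambda x: x**3, 0.0, h)
exact3 = (2 * h) ** 4 / 4
print(abs(q3 - exact3))  # ~0 up to rounding: cubics are integrated exactly

# Not exact on x^4: the integral is (2h)^5 / 5
q4 = simpson(lambda x: x**4, 0.0, h)
exact4 = (2 * h) ** 5 / 5
print(abs(q4 - exact4))  # nonzero error, consistent with precision 3
```

This matches the behavior part (c) asks you to establish by direct
calculation: exactness for $x^{3}$ but a nonzero error for $x^{4}$.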
\vspace*{.2in}
\noindent
c) Show by direct calculation (not by quoting results) that Simpson's rule
yields exact results when applied to $x^{3}$ on $[0, 2h]$, but yields an
error when applied to $x^{4}$. Also, \underline{state the precision} of
Simpson's rule.

\newpage
\noindent
6. a) Let $x_{0}, x_{1}, \ldots, x_{n}$ be $(n+1)$ distinct points, and let
$f(x)$ be defined at these points. Prove that the polynomial $P_{n}(x)$ of
degree $\leq n$ which interpolates $f(x)$ at these $(n+1)$ points exists
and is unique. Develop all details needed in your proof and clearly define
any special symbols you use.

\vspace*{.2in}
\noindent
b) Let $x_{0}, x_{1}, \ldots, x_{n}$ be $(n+1)$ distinct points in the
interval $[a,b]$. Define the quadrature formula $Q(f)$ for approximating
$\int_{a}^{b} f(x) \, dx$ by $Q(f) \equiv \sum_{i=0}^{n} A_{i} f(x_{i})$.
Let each coefficient $A_{i}$ have the value
$A_{i} = \int_{a}^{b} l_{i}(x) \, dx$, where $l_{i}(x)$ is the Lagrange
polynomial
$$
l_{i}(x) \equiv \prod_{j=0, j \neq i}^{n}
\left( \frac{x - x_{j}}{x_{i} - x_{j}} \right) .
$$
Let $P(x)$ be \underline{any} polynomial of degree $\leq n$. Prove that
$Q(P) = \int_{a}^{b} P(x) \, dx$. Explain key details in your proof.

\vspace*{.5in}
\noindent
7. For solving numerically the initial-value problem
$y'(x) = f(x,y)$, $y(x_{0}) = y_{0}$, $a \leq x \leq b$,

\vspace*{.2in}
\noindent
a) the Midpoint Runge-Kutta method is given by
$$
w_{i+1} = w_{i} + h \, f\!\left(x_{i} + \frac{h}{2} \ , \
w_{i} + \frac{h}{2} f(x_{i}, w_{i})\right) \ .
$$
Show that the order of the local truncation error for this method is at
least $O(h^{2})$.

\vspace*{.2in}
\noindent
b) Assume that the function $f(x,y)$ above satisfies a Lipschitz condition
in the variable $y$ with Lipschitz constant $L$ for all $a \leq x \leq b$
and $-\infty < y < \infty$, i.e.,
$|f(x,y) - f(x,\bar{y})| \leq L |y - \bar{y}|$ for $a \leq x \leq b$ and
$-\infty < y, \bar{y} < \infty$. Let
$$
\phi(x,y,h) = f\!\left(x + \frac{h}{2} \ , \ y + \frac{h}{2} f(x,y)\right) \ .
$$
\noindent
Prove that $\phi(x,y,h)$ satisfies a Lipschitz condition in the variable
$y$ for all $a \leq x \leq b$, $-\infty < y < \infty$, and
$0 \leq h \leq h_{0}$, where $h_{0} > 0$. Also, state the Lipschitz
constant for $\phi(x,y,h)$.

\vspace*{.2in}
\noindent
c) Analyze the Midpoint method of part (a) under the condition of part (b)
for consistency, stability, and convergence. State any theorems you use in
your analysis and verify that their hypotheses are satisfied. Explain each
of your conclusions.

\end{document}