%%Template for preparing articles for the journal PECEN/CFP/UFCG
\documentclass[12pt,a4paper]{article}
\usepackage[top=2.5cm,left=2.5cm,right=2.5cm,bottom=2.5cm]{geometry}
\usepackage{amssymb,latexsym,amsmath,amsfonts}
\usepackage[normalem]{ulem} 
\usepackage[utf8]{inputenc}
\usepackage[brazil]{babel} 
\usepackage{epsfig} 
\usepackage{epstopdf} 
\usepackage{indentfirst} 
\usepackage{setspace} 
\usepackage{graphicx,color}
\usepackage{pslatex} 
\usepackage[svgnames]{xcolor} 

%Packages added by us
%\usepackage{lineno,hyperref}
\usepackage{algorithm}
\usepackage{textcomp}
\usepackage[noend]{algpseudocode}
\usepackage{subcaption}
\usepackage{systeme}
%End of packages added by us

%Theorem-like environments
\usepackage{amsthm} %provides the proof environment
\newtheorem{theorem}{Theorem}
\newtheorem{remark}{Remark}
\newtheorem{proposition}{Proposition}
\newtheorem{definition}{Definition}
%End of theorem-like environments

\usepackage[round]{natbib}%S%
\usepackage{url}%S%
\urlstyle{rm}%S%

\begin{document}
\thispagestyle{empty} 

%%%%%%%Cover%%%%%%%%

%%%%%Title and authors%%%%%
\vspace{-2.5cm}
\centerline{{\Large \textbf{Métodos de ordem $m$ de Jacobi e Gauss--Seidel e métodos}}}
\centerline{{\Large \textbf{simétricos de Gauss--Seidel}}}
%\vspace{0.8cm}
\vspace{0.4cm}
\centerline{Dedication: To the memory of all researchers in this field of knowledge.}
\vspace{0.4cm}
\centerline{{\large Gustavo Benitez Alvarez$^{1,2}$ \& Diomar Cesar Lobão$^{1,2}$}}
\centerline{{\large Welton Alves  de Menezes$^{3}$}}
\vspace{0.5cm}

%%%%Author information%%%%%%%
{
\fontsize{10pt}{\baselineskip}\selectfont   

\noindent (1) Master’s Programme in Computational Modeling in Science and Technology, Federal Fluminense University, Av. dos Trabalhadores, 420, Vila Santa Cec\'ilia, CEP 27255-125, Volta Redonda, RJ, Brazil. E-mail: benitez.gustavo@gmail.com

\noindent(2) Department of Exact Sciences, Federal Fluminense University, Brazil. E-mail: lobaodiomarcesar@yahoo.ca

\noindent(3) Department of Exact Sciences, Federal Fluminense University, Brazil. E-mail: wamenezes@id.uff.br
}

\noindent \rule{16cm}{2pt}
%%%%%%%%%Resumo and abstract%%%%%%
{
\fontsize{10pt}{\baselineskip}\selectfont   

\noindent \textbf{Resumo:} Aqui, são desenvolvidos métodos de ordem $m$ que conservam a forma dos métodos de primeira ordem. Métodos de ordem $m$ têm uma taxa de convergência maior que sua versão de primeira ordem. Esses métodos de ordem $m$ são subsequências de seu método precursor, onde alguns benefícios do uso de processadores vetoriais e paralelos podem ser explorados. Os resultados numéricos obtidos com as implementações vetoriais mostram vantagens computacionais quando comparadas às versões de primeira ordem.

\noindent {\bf Palavras-chave:} Métodos iterativos de ordem $m$, Taxa média de convergência, Sistema linear de equações, Aceleradores
\\

\centerline{\textbf{The $m$-order Jacobi, Gauss--Seidel and symmetric Gauss--Seidel methods}}


\noindent \textbf{Abstract:} Here, $m$-order methods are developed that conserve the form of the first-order methods. The $m$-order methods have a higher rate of convergence than their first-order version. These $m$-order methods are subsequences of their precursor methods, where some benefits of using vector and parallel processors can be explored. The numerical results obtained with vector implementations show computational advantages when compared to the first-order versions.

\noindent {\bf Key words:} $m$-order iterative methods, Average rate of convergence, Linear system of equations, Accelerators

}

\noindent \rule{16cm}{2pt}
%\noindent \hrulefill  %fills the whole line

%%%%%%%%%%%%%%%%%%%%%%%%

\section*{Introduction}
\label{section1}

Iterative methods for solving linear systems emerged in the 19th century \citep{Young1989, saad2019}. Since then, many researchers have contributed to the development, improvement and understanding of these methods. It would be impossible to mention all the articles and books dedicated to this topic, and in this article we would like to pay tribute to all these researchers. For example, historical reviews with the state of the art in different decades can be found in \cite{Young1989} and \cite{saad2019}. These methods are used in several areas of science and technology, among which we highlight mathematics, physics, chemistry, engineering and scientific computing. For example, when a linear partial differential equation is solved by the finite difference, finite element or finite volume method, the problem is transformed into a system of linear algebraic equations \citep{Young1989}.

Consider the square system of linear algebraic equations of order $n$ and its equivalent defined as
\begin{equation}
\label{eq1}
Ax=b \text{ or } GAx=Gb,
\end{equation}
where the matrix $A$ and the vector $b$ are known data of the problem. For simplicity, assume that the entries of all matrices and vectors are real numbers. If the matrix $G$ is non-singular, each chosen matrix $G$ determines an equivalent system from which a different iterative method can be deduced. If the matrix $A$ is non-singular, its inverse $A^{-1}$ exists, and the matrix equation (\ref{eq1}) has a unique solution $x=A^{-1}b$ or, equivalently, $x=[GA]^{-1}Gb$ \citep{Forsythe, Varga, Golub}. We are interested in solving equation (\ref{eq1}) with iterative methods that can be represented in the form 
\begin{equation}
\label{eq2}
x^{k+1}=Tx^{k}+d,
\end{equation}
where $T=I-GA$ is called the iteration matrix, $d=Gb$ is a vector, and $x^{k+1}$ denotes the approximate solution at iteration $k+1$ \citep{Forsythe, Varga, Young, Saad2003, Francesa2006}. It is well known that the iterative process given by equation (\ref{eq2}) is convergent if $\rho(T)<1$, where $\rho(T)$ denotes the spectral radius of the iteration matrix $T$ \citep{Young1989, Varga, Saad2003, Francesa2006}. The spectral radius of $T$ is defined as the largest eigenvalue of $T$ in modulus, that is, $\rho(T)= \max\limits_{1 \leq i \leq n} \{ \vert \lambda_{i} \vert \}$.
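For concreteness, a minimal Matlab sketch of the generic iteration (\ref{eq2}) is given below. It is only an illustration, not the exact code used in the experiments reported later; the names \texttt{T}, \texttt{d}, \texttt{x0}, \texttt{tol} and \texttt{kmax} are ours.
\begin{verbatim}
% Generic fixed-point iteration x^{k+1} = T x^k + d (illustrative sketch).
function [x, k] = fixed_point(T, d, x0, tol, kmax)
  x = x0;
  for k = 1:kmax
    xnew = T*x + d;                 % one step of equation (2)
    if max(abs(xnew - x)) < tol     % stopping criterion on successive iterates
      x = xnew;
      return
    end
    x = xnew;
  end
end
\end{verbatim}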

The Jacobi, Gauss--Seidel and symmetric Gauss--Seidel methods are iterative methods described by equation (\ref{eq2}). These methods are still being researched, as can be found in the following references: \cite{saad2019}, \cite{Varga}, \cite{Golub}, \cite{Saad2003}, \cite{Francesa2006}, \cite{Krylov2018}, \cite{Paper3}, \cite{JCP2016}, \cite{Paper1}, \cite{Paper2} and \cite{Paper6}. The main goal of the present work is to increase the rate of convergence of these methods without modifying the form of a first-order method. This is done by generating the $m$-order versions of these methods. The $m$-order methods generate subsequences of their precursor methods, where some benefits of using vector and parallel processors can be explored.

This paper is organized as follows. In the next Section, the Jacobi, Gauss--Seidel and symmetric Gauss--Seidel iterative methods, as well as an alternative symmetric Gauss--Seidel method, are presented. In the following Section, the $m$-order methods and a new symmetric Gauss--Seidel-type method are deduced. The new methods are then compared with their precursors in numerical experiments. Finally, some conclusions are presented in the last Section. 


%-----------------------------------------------------------

\section*{The Jacobi, Gauss--Seidel and symmetric Gauss--Seidel methods}
\label{section2}

The matrix $A$ in equation (\ref{eq1}) is split as $A= D + L + U$, where $D$ is the diagonal matrix of the diagonal entries of $A$, $L$ is the strictly lower triangular part of $A$, and $U$ is the strictly upper triangular part of $A$. The well-known Jacobi method (J) is defined as
\begin{equation}
\label{eq3}
x^{k+1}=(I-D^{-1}A)x^{k}+D^{-1}b,
\end{equation}
whose iteration matrix is $T_{J}=I-D^{-1}A$. It is known that if $A$ is strictly diagonally dominant the method is convergent \citep{Forsythe, Varga, Young, Saad2003}. This condition is sufficient but not necessary.
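As an illustration, the splitting and the Jacobi iteration matrix can be assembled in Matlab as sketched below, assuming a square matrix \texttt{A} of order \texttt{n} and a vector \texttt{b} (the variable names are merely illustrative):
\begin{verbatim}
D = diag(diag(A));     % diagonal part of A
L = tril(A, -1);       % strictly lower triangular part of A
U = triu(A, 1);        % strictly upper triangular part of A
TJ = eye(n) - D \ A;   % T_J = I - D^{-1}A, equation (3)
dJ = D \ b;            % d_J = D^{-1}b
\end{verbatim}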

The Gauss--Seidel method or forward Gauss--Seidel method (fGS) is defined as
\begin{equation}
\label{eq4}
x^{k+1}=-(D + L)^{-1}Ux^{k}+(D + L)^{-1}b,
\end{equation}
whose iteration matrix and vector are $T_{fGS}=-(D + L)^{-1}U$ and $d_{fGS}=(D + L)^{-1}b$. Again, convergence is guaranteed if the matrix $A$ is strictly diagonally dominant. There is a method similar to that of equation (\ref{eq4}), known as the backward Gauss--Seidel method (bGS) \citep{Francesa2006, Krylov2018}, which is defined by equation (\ref{eq5}):
\begin{equation}
\label{eq5}
x^{k+1}=-(D + U)^{-1}Lx^{k}+(D + U)^{-1}b,
\end{equation}   
whose iteration matrix and vector are $T_{bGS}=-(D + U)^{-1}L$ and $d_{bGS}=(D + U)^{-1}b$. Note that to obtain the solution in equation (\ref{eq4}) the unknowns are ordered from $1$ to $n$, while in equation (\ref{eq5}) they are ordered from $n$ to $1$. In general, $T_{fGS} \neq T_{bGS}$, and consequently the approximate solutions of the two methods are different. This motivated the development of a method that takes into account both orderings of the unknowns, called the symmetric Gauss--Seidel method \citep{Francesa2006, Krylov2018}.
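Analogously, a minimal sketch of the forward and backward Gauss--Seidel quantities, reusing the splitting above (again, the names are illustrative):
\begin{verbatim}
TfGS = -((D + L) \ U);   % T_fGS of equation (4)
dfGS =  (D + L) \ b;     % d_fGS
TbGS = -((D + U) \ L);   % T_bGS of equation (5)
dbGS =  (D + U) \ b;     % d_bGS
\end{verbatim}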

The well-known symmetric Gauss--Seidel method (sGS) is obtained by combining the forward and backward variants of the Gauss--Seidel method \citep{Young1989, Francesa2006, Krylov2018}.  
\begin{align}
\label{eq6}
& \text{forward} & & (D+L)x^{k+1/2}_{f}=-Ux^{k}_{f}+b, \\
\label{eq7}
& \text{backward} & & (D+U)x^{k+1}_{b}=-Lx^{k+1/2}_{b}+b. 
\end{align}
The method is usually obtained by eliminating $x^{k+1/2}_{f}$ and $x^{k+1/2}_{b}$ from the above equations, assuming that $x^{k+1/2}_{f}=x^{k+1/2}_{b}$. Furthermore, it is assumed that $x^{k+1}=x^{k+1}_{b}$ and $x^{k}=x^{k}_{f}$, to obtain
\begin{align}
\label{eq8}
x^{k+1}= & (D + U)^{-1}L(D + L)^{-1}U x^{k} + (D + U)^{-1}[I - L(D + L)^{-1}] b \nonumber \\
       = & T_{bGS} T_{fGS} x^{k} + d_{bGS} + T_{bGS} d_{fGS}.
\end{align}   

On the other hand, starting from the alternative system (\ref{eq9})--(\ref{eq10}), it is possible to deduce a new symmetric Gauss--Seidel method (nsGS).
\begin{align}
\label{eq9}
& \text{forward} & & (D+L)x^{k+1}_{f}=-Ux^{k+1/2}_{f}+b, \\
\label{eq10}
& \text{backward} & & (D+U)x^{k+1/2}_{b}=-Lx^{k}_{b}+b. 
\end{align}
Analogously, it is possible to assume that $x^{k+1/2}_{f}=x^{k+1/2}_{b}$, and solving the system one obtains
\begin{align}
\label{eq11}
x^{k+1}= & (D + L)^{-1}U(D + U)^{-1}L x^{k} + (D + L)^{-1}[I - U(D + U)^{-1}] b \nonumber \\
       = & T_{fGS} T_{bGS} x^{k} + d_{fGS} + T_{fGS} d_{bGS}.
\end{align} 
Note that $T_{sGS}=(D + U)^{-1}L(D + L)^{-1}U$ and $T_{nsGS}=(D + L)^{-1}U(D + U)^{-1}L$. Furthermore, both methods have an equivalent computational cost. In general, $T_{sGS} \neq T_{nsGS}$ because the product of matrices is not commutative. 
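In the same illustrative Matlab notation, both symmetric variants follow directly from the forward and backward quantities sketched above:
\begin{verbatim}
TsGS  = TbGS * TfGS;          % equation (8)
dsGS  = dbGS + TbGS * dfGS;
TnsGS = TfGS * TbGS;          % equation (11)
dnsGS = dfGS + TfGS * dbGS;
\end{verbatim}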

However, in general $x^{k+1/2}_{f} \neq x^{k+1/2}_{b}$, and this makes it possible to deduce new methods of the symmetric type. In fact, in the two combinations of the forward and backward Gauss--Seidel methods performed above, an intermediate step $x^{k+1/2}$ is introduced and later eliminated. For this reason, the rate of convergence of the methods generated by these combinations should be equal to or greater than that of each Gauss--Seidel method separately. Based on this idea of combining the forward and backward Gauss--Seidel methods, $m$-order methods can be deduced, which have a higher rate of convergence.



%--------------------------------------------------------

\section*{The $m$-order methods and new symmetric Gauss--Seidel methods}
\label{section3}

Consider a new way to combine the forward and backward Gauss--Seidel methods:
\begin{align}
\label{eq12}
& \text{forward} & & (D+L)x^{k+1}_{f}=-Ux^{k+1/2}_{f}+b, \\
\label{eq13}
& \text{forward} & & (D+L)x^{k+1/2}_{f}=-Ux^{k}_{f}+b, \\
\label{eq14}
& \text{backward} & & (D+U)x^{k+1}_{b}=-Lx^{k+1/2}_{b}+b, \\
\label{eq15}
& \text{backward} & & (D+U)x^{k+1/2}_{b}=-Lx^{k}_{b}+b. 
\end{align}
Thus, three new methods can be obtained. The first method, described by equation (\ref{eq16}), is obtained by eliminating $x^{k+1/2}_{f}$ from equations (\ref{eq12}) and (\ref{eq13}). The second method, described by equation (\ref{eq17}), is obtained by eliminating $x^{k+1/2}_{b}$ from equations (\ref{eq14}) and (\ref{eq15}). 
\begin{align}
\label{eq16}
& \text{forward} & & x^{k+1}_{f}=[(D+L)^{-1}U]^2 x^{k}_{f} + [-(D+L)^{-1}U + I](D+L)^{-1}b. \\
\label{eq17}
& \text{backward} & & x^{k+1}_{b}=[(D+U)^{-1}L]^2 x^{k}_{b} + [-(D+U)^{-1}L + I](D+U)^{-1}b. 
\end{align}
These two new methods can be called $2$-order forward and backward Gauss--Seidel methods, because their iteration matrices are defined by $T_{fGSo2}=[(D+L)^{-1}U]^2=T_{fGS}T_{fGS}$ and $T_{bGSo2}=[(D+U)^{-1}L]^2=T_{bGS}T_{bGS}$. Note that the current iteration depends on only one previous step, since the contribution of the intermediate step is transferred to the iteration matrix and the vector $d$. Thus, these methods retain the form of a first-order method, but with a rate of convergence greater than that of their precursor methods given by equations (\ref{eq4}) and (\ref{eq5}). The third method is obtained as a linear combination of equations (\ref{eq16}) and (\ref{eq17}). 
\begin{align}
\label{eq18}
x^{k+1} = & \mu x^{k+1}_{f} + (1-\mu) x^{k+1}_{b} \nonumber \\ 
          = &  \{ \mu [(D+L)^{-1}U]^2 + (1-\mu)[(D+U)^{-1}L]^2 \} x^{k} \nonumber \\ 
            & + \{ \mu [-(D+L)^{-1}U + I](D+L)^{-1} + (1-\mu)[-(D+U)^{-1}L + I](D+U)^{-1} \} b,
\end{align}   
where $x^{k}_{f} = x^{k}_{b}$ was assumed, and the free parameter $\mu \in [0,1]$.
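A minimal Matlab sketch of these three methods, in terms of the quantities defined in the previous sketches (the names, and the choice $\mu = 1/2$, are merely illustrative):
\begin{verbatim}
TfGS2  = TfGS^2;                          % equation (16)
dfGS2  = (eye(n) + TfGS) * dfGS;
TbGS2  = TbGS^2;                          % equation (17)
dbGS2  = (eye(n) + TbGS) * dbGS;
mu     = 0.5;                             % free parameter in [0,1]
TnpsGS = mu * TfGS2 + (1 - mu) * TbGS2;   % equation (18)
dnpsGS = mu * dfGS2 + (1 - mu) * dbGS2;
\end{verbatim}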

Considering all of the above, it is possible to deduce another method such as
\begin{align}
\label{eq19}
x^{k+1} = & \mu x^{k+1}_{f} + (1-\mu) x^{k+1}_{b} \nonumber \\ 
          = &  -[\mu (D+L)^{-1}U + (1-\mu) (D+U)^{-1}L] x^{k} \nonumber \\
					  &  + [\mu (D+L)^{-1} + (1-\mu) (D+U)^{-1} ] b,
\end{align}   
where $x^{k+1}_{f}$ and $x^{k+1}_{b}$ are given by equations (\ref{eq4}) and (\ref{eq5}), respectively. This new symmetric method has as particular cases the methods determined by equations (\ref{eq4}) ($\mu = 1$) and (\ref{eq5}) ($\mu = 0$). However, this method should not be called a $2$-order method, because its iteration matrix is not the product of two iteration matrices.
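A sketch of this method in the same notation, reusing \texttt{mu}:
\begin{verbatim}
TpsGS = mu * TfGS + (1 - mu) * TbGS;   % iteration matrix of equation (19)
dpsGS = mu * dfGS + (1 - mu) * dbGS;
\end{verbatim}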

This procedure can also be used to obtain higher-order Jacobi methods, and with this it is possible to increase the rate of convergence. Consider the composition of $m$ Jacobi steps, whose intermediate iterates will be eliminated:
\begin{align}
\label{eq20}
x^{k+1} = &  (I-D^{-1}A) x^{k+(m-1)/m} + D^{-1}b \nonumber \\ 
x^{k+(m-1)/m} = & (I-D^{-1}A) x^{k+(m-2)/m} + D^{-1}b  \nonumber \\ 
\vdots \ \ \ \ \ \  = & \ \ \ \ \ \ \ \ \ \  \vdots  \nonumber \\
x^{k+1/m}  =  & (I-D^{-1}A) x^{k} + D^{-1}b.
\end{align}   
After eliminating all the intermediate steps, the $m$-order Jacobi method is obtained, given by
\begin{equation}
\label{eq21}
x^{k+1}=[I-D^{-1}A]^{m} x^{k} + \Big[I + \sum_{l=1}^{m-1}  (I-D^{-1}A)^{m-l} \Big] D^{-1}b.
\end{equation}   
Note that $m$-order Jacobi methods have practical utility, since they have a higher rate of convergence than the Jacobi method and retain the property of being parallelizable. 

In general, from a generic method defined by equation (\ref{eq2}), its $m$-order method is determined by
\begin{equation}
\label{eq22}
x^{k+1}=[T]^{m} x^{k} + \Big[I + \sum_{l=1}^{m-1}  [T]^{m-l} \Big] d.
\end{equation} 
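A minimal sketch of this construction for a generic precursor (\texttt{T}, \texttt{d}); as described in the numerical experiments Section, the sum is accumulated with a serial `for' loop:
\begin{verbatim}
Tm = T^m;              % iteration matrix of the m-order method
S  = eye(n);
for l = 1:m-1
  S = S + T^(m - l);   % I + sum_{l=1}^{m-1} T^{m-l}
end
dm = S * d;            % vector d of the m-order method, equation (22)
\end{verbatim}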
In the particular case of the Gauss--Seidel methods, we obtain their forward and backward $m$-order methods, determined by equations (\ref{eq23}) and (\ref{eq24}).
\begin{equation}
\label{eq23}
x^{k+1}=[-(D + L)^{-1}U]^{m} x^{k} + \Big[I + \sum_{l=1}^{m-1}  [-(D + L)^{-1}U]^{m-l} \Big] (D + L)^{-1}b.
\end{equation}     
\begin{equation}
\label{eq24}
x^{k+1}=[-(D + U)^{-1}L]^{m} x^{k} + \Big[I + \sum_{l=1}^{m-1}  [-(D + U)^{-1}L]^{m-l} \Big] (D + U)^{-1}b.
\end{equation}     
Even though the forward and backward Gauss--Seidel methods are not easily parallelizable, their $m$-order methods have some benefits for parallelization. This is because the iteration matrix of the $m$-order methods is the product of $m$ matrices of the precursor method, and the matrix product can be parallelized.

In addition, $m$-order methods can be interpreted as methods that generate subsequences of their precursors. For this reason, a faster convergence can be proved for the $m$-order methods when their precursors are convergent. First, it is necessary to define the rate of convergence of the precursor iterative method described by equation (\ref{eq2}). Let $\epsilon^{k}:= x - x^k$ be the error vector after $k$ iterations. Thus, $\epsilon^k = T \epsilon^{k-1} = \cdots = T^k \epsilon^0$. Consider the average rate of convergence for $k$ iterations defined by Kahan as $R_{K} :=-\frac{\ln(\frac{\Vert \epsilon^{k} \Vert}{\Vert \epsilon^{0} \Vert})}{k}$ \citep{Kahan}, and by Varga as $R_{V} :=-\frac{\ln(\Vert T^k \Vert)}{k}$ \citep{Varga}.
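For illustration, both average rates can be estimated in Matlab using the 2-norm; here \texttt{xexact}, \texttt{xk}, \texttt{x0}, \texttt{T} and \texttt{k} are assumed to be available:
\begin{verbatim}
ek = xexact - xk;                     % error after k iterations
e0 = xexact - x0;                     % initial error
RK = -log(norm(ek) / norm(e0)) / k;   % Kahan's average rate
RV = -log(norm(T^k)) / k;             % Varga's average rate
\end{verbatim}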

\begin{theorem}
\label{teor1}
Consider an iterative method defined by equation (\ref{eq2}) and its corresponding $m$-order method determined by equation (\ref{eq22}). Let $x^{0}$ be the same initial guess for both iterative methods. If the precursor method (\ref{eq2}) is convergent with rates of convergence $R_{K}$ and $R_{V}$, then the $m$-order method is convergent with rate of convergence $\bar{R}_{K} \geq (m-1)R_{V} + R_{K}$.
\end{theorem} 

\begin{proof}
\label{pteor1}
The proof uses the following theorem from mathematical analysis: \emph{a sequence $\{x^{k}\}$ is convergent if, and only if, every subsequence $\{x^{l(k)}\}$ of it is convergent} \citep{Apostol}. If $x^{0}$ is the same initial guess for both iterative methods, then the convergent precursor method defined by equation (\ref{eq2}) generates the sequence $\{x^{k}\}$, and its $m$-order method defined by equation (\ref{eq22}) generates a subsequence $\{\bar{x}^{l} = x^{l(k)}\}$ such that $\bar{x}^{l} = x^{k}$ when $k=ml$ with $l=1, 2, \ldots$. Therefore, the subsequence $\{\bar{x}^{l}\}$ is convergent, and it represents an acceleration of the rate of convergence.

Note that for the $m$-order method we have $\bar{\epsilon}^{k}:= x - \bar{x}^k$ and $\bar{\epsilon}^k = [T^m]^k \bar{\epsilon}^0$. If $\bar{x}^0 = x^0$, it follows that $\bar{\epsilon}^0 = \epsilon^0$ and $\bar{\epsilon}^k = [T^{k}]^{m-1} \epsilon^k$. Therefore,
\begin{equation}
\label{eq25}
\Vert \bar{\epsilon}^{k} \Vert \leq \Vert [T^{k}]^{m-1} \Vert \Vert \epsilon^{k} \Vert \leq \Vert T^{k} \Vert ^{m-1} \Vert \epsilon^{k} \Vert \leq \Vert T \Vert ^{k(m-1)} \Vert \epsilon^{k} \Vert.
\end{equation} 

Analogously to Kahan's definition, it is possible to define $\bar{R}_{K} :=-\frac{\ln(\frac{\Vert \bar{\epsilon}^{k} \Vert}{\Vert \bar{\epsilon}^{0} \Vert})}{k}$ for the $m$-order method. Hence, from inequality (\ref{eq25}) it follows that
\begin{equation}
\label{eq26}
\bar{R}_{K} \geq -\frac{\ln(\Vert [T^{k}]^{m-1} \Vert)}{k} + R_{K} \geq (m-1)R_{V} + R_{K} \geq -(m-1)\ln(\Vert T \Vert) + R_{K},
\end{equation}
which completes the proof. 
\end{proof} 

The previous theorem shows that the $m$-order method is always iteratively faster than its precursor method for $k$ iterations. It must be said that this process of accelerating convergence via subsequences may be more difficult to construct for Krylov subspace methods, among which we highlight the Conjugate Gradient (CG), Conjugate Gradient Squared (CGS), and Generalized Minimal Residual (GMRES) methods. In addition, the $m$-order methods presented here are different from the so-called Chebyshev semi-iterative methods, the second-order Richardson method and the second-degree methods \citep{Golub1961, Young1972}, but the possibility that there is some relationship between them should not be ruled out. Furthermore, all the ideas developed here can be applied to other methods, such as the successive overrelaxation (SOR) and symmetric successive overrelaxation (SSOR) methods.

On the other hand, it is possible to build a linear combination of different methods with a parameter $\mu$ such that $\mu \in [0,1]$ \citep{Traub}. For example, combining the Jacobi and Gauss--Seidel methods as
\begin{align}
\label{eq27}
x^{k+1}= &  \mu [(I-D^{-1}A)x^{k}+D^{-1}b] + (1-\mu)[-(D + L)^{-1}Ux^{k}+(D + L)^{-1}b] \nonumber \\ 
  = &  [\mu (I-D^{-1}A) - (1-\mu)(D + L)^{-1}U]x^{k} + \mu D^{-1}b + (1-\mu)(D + L)^{-1}b,
\end{align}
whose iteration matrix is $T_{JfGS}= \mu T_{J} + (1-\mu) T_{fGS}$. In addition, it is possible to generate new methods using the composition of different methods \citep{Traub}.   
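In the notation of the previous sketches (again illustrative):
\begin{verbatim}
TJfGS = mu * TJ + (1 - mu) * TfGS;   % iteration matrix of equation (27)
dJfGS = mu * dJ + (1 - mu) * dfGS;
\end{verbatim}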

%-------------------------------------------------------------------

\section*{Numerical experiments}
\label{section4}

Seven matrices were chosen to perform the numerical experiments and compare the performance of the methods. The first matrix, $A_1 = A_{4 \times 4}$, is the well-known Hilbert matrix, defined as $a_{i,j} = (i+j-1)^{-1}$ \citep{Forsythe}. In the next four experiments we use the matrices from Example 4.2 of \cite{Francesa2006}, defined as
\begin{align}
\label{eq28}
A_2=
\left[ \begin{array}{ccc}
3 & 0 & 4 \\
7 & 4 & 2 \\
-1 & 1 & 2
\end{array} \right],\ \ \ A_3=
\left[ \begin{array}{ccc}
-3 & 3 & -6 \\
-4 & 7 & -8 \\
5 & 7 & -9
\end{array} \right],  \nonumber \\
A_4=
\left[ \begin{array}{ccc}
4 & 1 & 1 \\
2 & -9 & 0 \\
0 & -8 & -6
\end{array} \right],\ \ \ A_5=
\left[ \begin{array}{ccc}
7 & 6 & 9 \\
4 & 5 & -4 \\
-7 & -3 & 8
\end{array} \right].
\end{align}   
Matrices $A_6= A_{48 \times 48}$ and $A_7= A_{153 \times 153}$ are sparse and were obtained from the SuiteSparse Matrix Collection \citep{matrix}: $A_6$ is the file `bcsstk01' and $A_7$ is the file `bcsstk05', both from the BCSSTRUC1 set. For all experiments the vector $b$ was chosen such that the exact solution is $x_i = i$ $\forall i=1, \ldots, n$, and the same initial guess $x^{0}=0$ was used. However, there are better initial guesses for each method, determined by $x^{0}=d$; this choice reduces the number of iterations needed to satisfy the stopping criterion. The stopping criterion used was $\max\limits_{1 \leq i \leq n} \{ \vert x^{k+1}_i - x^k_i \vert \} < 10^{-14}$.
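A sketch of this experimental setup in Matlab (the names are illustrative):
\begin{verbatim}
n      = size(A, 1);
xexact = (1:n)';        % exact solution x_i = i
b      = A * xexact;    % right-hand side consistent with xexact
x0     = zeros(n, 1);   % initial guess
tol    = 1e-14;         % tolerance of the stopping criterion
\end{verbatim}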

The algorithms developed and implemented in this work make extensive use of vector structures and therefore contain no explicit loops, except for the outer `while' loop that drives the iterative convergence process. Thus, the computational performance of the Jacobi and Gauss--Seidel methods is optimized in the Matlab® language environment. For the methods given by equations (\ref{eq21}), (\ref{eq23}) and (\ref{eq24}), the sum of $m$ terms is implemented with a `for' loop, since in the simulations $m$ was equal to 2 or 10. It is worth remembering that vectorized code does not necessarily mean faster code; given these small values of $m$, the sum was implemented serially without loss of computational efficiency \citep{Kepner}. Furthermore, the implementation considered that the matrices $A_6$ and $A_7$ are sparse, which saves a significant amount of memory and reduces the computation time.

In Tables \ref{tab1} to \ref{tab7} the determinant, the condition number $\kappa(T) = \Vert T \Vert_{2} \Vert T^{-1} \Vert_{2}$, the spectral radius and, for matrices $A_1$ to $A_5$, the number of iterations of each method are presented. For matrix $A_1$ only the Jacobi methods are divergent. For matrix $A_2$ only the method defined by equation (\ref{eq19}) is convergent. In the case of matrix $A_3$ only the forward Gauss--Seidel methods are divergent, for matrix $A_4$ all the methods are convergent, and for matrix $A_5$ only the backward Gauss--Seidel methods are divergent. The numerical results confirm Theorem \ref{teor1}.

\newpage

\begin{table}[h]
    \caption{Iteration matrix of each method for the linear system defined by the matrix $A_1$: determinant, condition number, spectral radius and number of iterations.}
    \label{tab1}
    \centering
\begin{tabular}{|c|c|c|c|c|} \hline
Method & $\det(T)$ & $\kappa(T)$ & $\rho(T)$  & Iterations \\ \hline
J Eq. (\ref{eq3}) & -1.5294270 & 8.4311823 & 2.5820911 & divergent \\ \hline
2-o J Eq. (\ref{eq21}) & 2.3391472 & 29.409302 &  6.6671949 & divergent \\ \hline
10-o J Eq. (\ref{eq21}) & 70.030586 & 3.417$\times10^{6}$ & 13173.942  & divergent \\ \hline
fGS Eq. (\ref{eq4}) & 0 & $\infty$ & 0.9990297  &  22002 \\ \hline
2-o fGS Eq. (\ref{eq16}) & 0 & $\infty$ & 0.9980605  & 12002 \\ \hline
10-o fGS Eq. (\ref{eq23}) & 0 & $\infty$ &  0.9903401 & 2853 \\ \hline
bGS Eq. (\ref{eq5}) & 0 & $\infty$ & 0.9990297  &  22002 \\ \hline
2-o bGS Eq. (\ref{eq17}) & 0 & $\infty$ & 0.9980605  & 13317 \\ \hline
10-o bGS Eq. (\ref{eq24}) & 0 & $\infty$ & 0.9903401  & 2835 \\ \hline
sGS Eq. (\ref{eq8}) & 0 & $\infty$ & 0.9985069   & 15002 \\ \hline
nsGS Eq. (\ref{eq11}) & 0 & $\infty$ & 0.9985069  & 14002 \\ \hline
npsGS $\mu=\frac{1}{2}$ Eq. (\ref{eq18}) & -0.0297230 & 19.554683 & 0.9984568  & 12002 \\ \hline
psGS $\mu=\frac{1}{2}$ Eq. (\ref{eq19}) & -0.0984462 & 16.596875 & 0.9992367   &  26002 \\ \hline
\end{tabular}
\end{table}


\begin{table}[h]
    \caption{Iteration matrix of each method for the linear system defined by the matrix $A_2$: determinant, condition number, spectral radius and number of iterations.}
    \label{tab2}
    \centering
\begin{tabular}{|c|c|c|c|c|} \hline
Method & $\det(T)$ & $\kappa(T)$ & $\rho(T)$  & Iterations \\ \hline
J Eq. (\ref{eq3}) & -1.1666666 & 4.0795444 & 1.1251473 & divergent  \\ \hline
2-o J Eq. (\ref{eq21}) & 1.3611111 & 4.8365021 & 1.2659565 & divergent  \\ \hline
10-o J Eq. (\ref{eq21}) & 4.6716241 & 14.294370 & 3.2515769 & divergent \\ \hline
fGS Eq. (\ref{eq4}) & 0 & $\infty$ & 1.5833333  & divergent \\ \hline
2-o fGS Eq. (\ref{eq16}) & 0 & $\infty$ & 2.5069444  & divergent \\ \hline
10-o fGS Eq. (\ref{eq23}) & 0 & $\infty$ & 99.020142  & divergent \\ \hline
bGS Eq. (\ref{eq5}) & 0 & $\infty$ & 1.0801234  & divergent \\ \hline
2-o bGS Eq. (\ref{eq17}) & 0 & $\infty$ &  1.1666666 & divergent \\ \hline
10-o bGS Eq. (\ref{eq24}) & 0 & $\infty$ &  2.1613940 & divergent \\ \hline
sGS Eq. (\ref{eq8}) & 0 & $\infty$ & 1.5833333 &  divergent \\ \hline
nsGS Eq. (\ref{eq11}) & 0 & $\infty$ & 1.5833333  & divergent \\ \hline
npsGS $\mu=\frac{1}{2}$ Eq. (\ref{eq18}) & 0.6959153 & 4.5638159 & 1.3980206  & divergent \\ \hline
psGS $\mu=\frac{1}{2}$ Eq. (\ref{eq19}) & -0.3767361 & 5.4447600 & 0.7842738 & 142  \\ \hline
\end{tabular}
\end{table}


\begin{table}[tbp]
    \caption{Iteration matrix of each method for the linear system defined by the matrix $A_3$: determinant, condition number, spectral radius and number of iterations.}
    \label{tab3}
    \centering
\begin{tabular}{|c|c|c|c|c|} \hline
Method & $\det(T)$ & $\kappa(T)$ & $\rho(T)$  & Iterations \\ \hline
J Eq. (\ref{eq3}) & -0.2539682 & 28.1625119 &  0.8133091 & 167 \\ \hline
2-o J Eq. (\ref{eq21}) & 0.0644998 & 37.9834491 & 0.6614717 & 18 \\ \hline
10-o J Eq. (\ref{eq21}) & 1.1163$\times10^{-6}$ & 604.18447 & 0.1266357 & 18 \\ \hline
fGS Eq. (\ref{eq4}) & 0 & $\infty$ & 1.1111111 &  divergent \\ \hline
2-o fGS Eq. (\ref{eq16}) & 0 & $\infty$ & 1.2345679  & divergent  \\ \hline
10-o fGS Eq. (\ref{eq23}) & 0 & $\infty$ & 2.8679719  &  divergent \\ \hline
bGS Eq. (\ref{eq5}) & 0 & $\infty$ & 0.9428090 &  565 \\ \hline
2-o bGS Eq. (\ref{eq17}) & 0 & $\infty$ & 0.8888888 & 288  \\ \hline
10-o bGS Eq. (\ref{eq24}) & 0 & $\infty$ & 0.5549289 & 60  \\ \hline
sGS Eq. (\ref{eq8}) & 0 & $\infty$ & 0.7126966  & 103 \\ \hline
nsGS Eq. (\ref{eq11}) & 0 & $\infty$ & 0.7126966 & 101  \\ \hline
npsGS $\mu=\frac{1}{2}$ Eq. (\ref{eq18}) & 0.1854533 & 12.8438001 & 0.8232698  & 165  \\ \hline
psGS $\mu=\frac{1}{2}$ Eq. (\ref{eq19}) & -0.2968002 & 6.8249624 & 0.6993380 & 96  \\ \hline
\end{tabular}
\end{table}

\begin{table}[tbp]
    \caption{Iteration matrix of each method for the linear system defined by the matrix $A_4$: determinant, condition number, spectral radius and number of iterations.}
    \label{tab4}
    \centering
\begin{tabular}{|c|c|c|c|c|} \hline
Method & $\det(T)$ & $\kappa(T)$ & $\rho(T)$ & Iterations  \\ \hline
J Eq. (\ref{eq3}) & 0.0740740 & 6.1081965 & 0.4438188 & 45  \\ \hline
2-o J Eq. (\ref{eq21}) & 0.0054869 & 6.4441038 & 0.1969751 & 7  \\ \hline
10-o J Eq. (\ref{eq21}) & 4.973$\times10^{-12}$ & 16.129493 & 0.0002965  & 7 \\ \hline
fGS Eq. (\ref{eq4}) & 0 & $\infty$ & 0.0185185  & 12 \\ \hline
2-o fGS Eq. (\ref{eq16}) & 0 & $\infty$ & 0.0003429  & 7 \\ \hline
10-o fGS Eq. (\ref{eq23}) & 0 & $\infty$ & 0.0000000  & 3 \\ \hline
bGS Eq. (\ref{eq5}) & 0 & $\infty$ & 0.3013571  & 30 \\ \hline
2-o bGS Eq. (\ref{eq17}) & 0 & $\infty$ & 0.0908161  & 16 \\ \hline
10-o bGS Eq. (\ref{eq24}) & 0 & $\infty$ &  0.0000061 & 5 \\ \hline
sGS Eq. (\ref{eq8}) & 0 & $\infty$ & 0.0185185  & 11 \\ \hline
nsGS Eq. (\ref{eq11}) & 0 & $\infty$ & 0.0185185  & 11 \\ \hline
npsGS $\mu=\frac{1}{2}$ Eq. (\ref{eq18}) & -1.246$\times10^{-5}$  & 71.573476 & 0.0496594  & 13 \\ \hline
psGS $\mu=\frac{1}{2}$ Eq. (\ref{eq19}) & 9.087$\times10^{-3}$ & 5.5394076 & 0.2388210  & 25 \\ \hline
\end{tabular}
\end{table}


\begin{table}[tbp]
    \caption{Iteration matrix of each method for the linear system defined by the matrix $A_5$: determinant, condition number, spectral radius and number of iterations.}
    \label{tab5}
    \centering
\begin{tabular}{|c|c|c|c|c|} \hline
Method & $\det(T)$ & $\kappa(T)$ & $\rho(T)$ & Iterations  \\ \hline
J Eq. (\ref{eq3}) & -0.2142857 & 17.5108316 &  0.6411328 & 79 \\ \hline
2-o J Eq. (\ref{eq21}) & 0.0459183 & 20.473569 & 0.4110512 &  10 \\ \hline
10-o J Eq. (\ref{eq21}) & 2.041$\times10^{-7}$ & 79.631459 & 0.0117349  & 10 \\ \hline
fGS Eq. (\ref{eq4}) & 0 & $\infty$ & 0.7745966  & 136 \\ \hline
2-o fGS Eq. (\ref{eq16}) & 0 & $\infty$ & 0.6000000 &  70 \\ \hline
10-o fGS Eq. (\ref{eq23}) & 0 & $\infty$ & 0.0777599 & 16  \\ \hline
bGS Eq. (\ref{eq5}) & 0 & $\infty$ &  1.0923807 & divergent \\ \hline
2-o bGS Eq. (\ref{eq17}) & 0 & $\infty$ & 1.1932958  & divergent \\ \hline
10-o bGS Eq. (\ref{eq24}) & 0 & $\infty$ & 2.4195832  & divergent \\ \hline
sGS Eq. (\ref{eq8}) & 0 & $\infty$ & 0.4535573  & 46 \\ \hline
nsGS Eq. (\ref{eq11}) & 0 & $\infty$ & 0.4535573  & 46 \\ \hline
npsGS $\mu=\frac{1}{2}$ Eq. (\ref{eq18}) & 0.0799683 & 6.1521714 & 0.7625609  & 115 \\ \hline
psGS $\mu=\frac{1}{2}$ Eq. (\ref{eq19}) & -0.1455420 & 10.015480 & 0.5892481  & 66 \\ \hline
\end{tabular}
\end{table}


\begin{table}[tbp]
    \caption{Iteration matrix of each method for the linear system defined by the matrix $A_6$: determinant, condition number and spectral radius.}
    \label{tab6}
    \centering
\begin{tabular}{|c|c|c|c|} \hline
Method & $\det(T)$ & $\kappa(T)$ & $\rho(T)$   \\ \hline
J Eq. (\ref{eq3}) & 3.844$\times10^{-18}$ & 1.140$\times10^{5}$ & 1.1014522  \\ \hline
2-o J Eq. (\ref{eq21}) & 1.478$\times10^{-35}$ & 2.149$\times10^{6}$ & 1.2131969  \\ \hline
10-o J Eq. (\ref{eq21}) & 6.958$\times10^{-175}$ & 3.740$\times10^{11}$ & 2.6281890  \\ \hline
fGS Eq. (\ref{eq4}) & 0 & $\infty$ & 0.9969136  \\ \hline
2-o fGS Eq. (\ref{eq16}) & 0 & $\infty$ & 0.9938367  \\ \hline
10-o fGS Eq. (\ref{eq23}) & 0 & $\infty$ &  0.9695613 \\ \hline
bGS Eq. (\ref{eq5}) & 0 & $\infty$ & 0.9969136  \\ \hline
2-o bGS Eq. (\ref{eq17}) & 0 & $\infty$ &  0.9938367 \\ \hline
10-o bGS Eq. (\ref{eq24}) & 0 & $\infty$ &  0.9695613 \\ \hline
sGS Eq. (\ref{eq8}) & 0 & $\infty$ &  0.9968851 \\ \hline
nsGS Eq. (\ref{eq11}) & 0 & $\infty$ &  0.9968851 \\ \hline
npsGS $\mu=\frac{1}{2}$ Eq. (\ref{eq18}) & 3.367$\times10^{-69}$ & 5.292$\times10^{5}$ &  0.9946049 \\ \hline
psGS $\mu=\frac{1}{2}$ Eq. (\ref{eq19}) & -1.035$\times10^{-43}$ & 1.980$\times10^{5}$ &  0.9976792 \\ \hline
\end{tabular}
\end{table}


\begin{table}[tbp]
    \caption{Iteration matrix of each method for the linear system defined by the matrix $A_7$: determinant, condition number and spectral radius.}
    \label{tab7}
    \centering
\begin{tabular}{|c|c|c|c|} \hline
Method & $\det(T)$ & $\kappa(T)$ & $\rho(T)$   \\ \hline
J Eq. (\ref{eq3}) & 2.311$\times10^{-51}$ & 2.521$\times10^{3}$ & 2.0149510  \\ \hline
2-o J Eq. (\ref{eq21}) & 5.342$\times10^{-102}$ & 1.760$\times10^{6}$ &  4.0600279 \\ \hline
10-o J Eq. (\ref{eq21}) & 0 & $\infty$ & 1103.1767  \\ \hline
fGS Eq. (\ref{eq4}) & 0 & $\infty$ & 0.9985763  \\ \hline
2-o fGS Eq. (\ref{eq16}) & 0 & $\infty$ &  0.9971546 \\ \hline
10-o fGS Eq. (\ref{eq23}) & 0 & $\infty$ &  0.9858541 \\ \hline
bGS Eq. (\ref{eq5}) & 0 & $\infty$ & 0.9985763  \\ \hline
2-o bGS Eq. (\ref{eq17}) & 0 & $\infty$ & 0.9971546  \\ \hline
10-o bGS Eq. (\ref{eq24}) & 0 & $\infty$ & 0.9858541  \\ \hline
sGS Eq. (\ref{eq8}) & 0 & $\infty$ &  0.9977967 \\ \hline
nsGS Eq. (\ref{eq11}) & 0 & $\infty$ & 0.9977967  \\ \hline
npsGS $\mu=\frac{1}{2}$ Eq. (\ref{eq18}) & -4.112$\times10^{-159}$ & 6.055$\times10^{3}$ &  0.9974169 \\ \hline
psGS $\mu=\frac{1}{2}$ Eq. (\ref{eq19}) & -3.445$\times10^{-97}$ & 1.021$\times10^{4}$ &  0.9987709 \\ \hline
\end{tabular}
\end{table}

Tables \ref{tab8} and \ref{tab9} present three times, in seconds, for each method. The times $t_T$ and $t_d$ correspond to the time taken to compute the iteration matrix $T$ and the vector $d$, respectively. The time $t_W$ corresponds to the time spent in the iterative process (the `while' loop) of each method, obtained with the same initial guess $x^{0}=0$ for all methods. The total time $t_t= t_T + t_d + t_W$ should be used to compare the temporal performance of each method, because the larger $m$ is, the greater the times $t_T$ and $t_d$ and the smaller the time $t_W$. 
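A sketch of how these three times can be collected, using the Jacobi method and the \texttt{fixed\_point} sketch from the Introduction as an example (illustrative only; the actual implementation uses a `while' loop):
\begin{verbatim}
kmax = 30000;                           % iteration cap (illustrative)
tic;  TJ = eye(n) - D \ A;  tT = toc;   % time t_T to build T
tic;  dJ = D \ b;           td = toc;   % time t_d to build d
tic;  [x, k] = fixed_point(TJ, dJ, x0, tol, kmax);  tW = toc;
tt = tT + td + tW;                      % total time t_t
\end{verbatim}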

\begin{table}[tbp]
\caption{Times, in seconds, to compute the iteration matrix ($t_T$) and the vector $d$ ($t_d$), the total time ($t_t$), and the number of iterations for the matrix $A_6$.}
    \label{tab8}
    \centering
\begin{tabular}{|c|c|c|c|c|} \hline
Method & $t_T$ & $t_d$ & $t_t$  & Iterations \\ \hline
J Eq. (\ref{eq3}) & $4.0800\times10^{-4}$ & $3.4040\times10^{-4}$ &  --- & divergent  \\ \hline
2-o J Eq. (\ref{eq21}) & $4.3590\times10^{-4}$ & $6.9340\times10^{-4}$ & ---  &  divergent  \\ \hline
10-o J Eq. (\ref{eq21}) & $5.4720\times10^{-4}$ & $29.6490\times10^{-4}$ & ---  &  divergent \\ \hline
fGS Eq. (\ref{eq4}) & $3.5250\times10^{-4}$ & $3.5250\times10^{-4}$ &  19.517732  & 9829  \\ \hline
2-o fGS Eq. (\ref{eq16}) & $5.9830\times10^{-4}$ & $8.1300\times10^{-4}$ & 5.007476  &  5115 \\ \hline
10-o fGS Eq. (\ref{eq23}) & $48.542\times10^{-4}$ & $433.543\times10^{-4}$ & 0.177706  &  1068  \\ \hline
bGS Eq. (\ref{eq5}) & $3.0350\times10^{-4}$ & $3.5380\times10^{-4}$ &  17.243418  &  9140 \\ \hline
2-o bGS Eq. (\ref{eq17}) & $5.9830\times10^{-4}$ & $8.7980\times10^{-4}$ & 3.950617  &  4703 \\ \hline
10-o bGS Eq. (\ref{eq24}) & $17.686\times10^{-4}$ & $110.108\times10^{-4}$ & 0.157725  &  996  \\ \hline
sGS Eq. (\ref{eq8}) & $6.1640\times10^{-4}$ & $6.1090\times10^{-4}$ &  18.314291  &  9564 \\ \hline
nsGS Eq. (\ref{eq11}) & $5.8820\times10^{-4}$ & $7.4930\times10^{-4}$ &  16.978738  &  9225 \\ \hline
npsGS $\mu=\frac{1}{2}$ Eq. (\ref{eq18}) & $12.462\times10^{-4}$ & $14.577\times10^{-4}$ & 6.007731  &  5685  \\ \hline
psGS $\mu=\frac{1}{2}$ Eq. (\ref{eq19}) & $7.2730\times10^{-4}$ & $5.5010\times10^{-4}$ & 31.678465  &  12597  \\ \hline
\end{tabular}
\end{table}

\begin{table}[tbp]
\caption{Times, in seconds, to compute the iteration matrix ($t_T$) and the vector $d$ ($t_d$), the total time ($t_t$), and the number of iterations for the matrix $A_7$.}
    \label{tab9}
    \centering
\begin{tabular}{|c|c|c|c|c|} \hline
Method & $t_T$ & $t_d$ & $t_t$  & Iterations  \\ \hline
J Eq. (\ref{eq3}) & $9.3470\times10^{-4}$ & $3.1560\times10^{-4}$ & --- & divergent \\ \hline
2-o J Eq. (\ref{eq21}) & $18.3030\times10^{-4}$ & $7.5370\times10^{-4}$ & --- & divergent \\ \hline
10-o J Eq. (\ref{eq21}) & $35.0760\times10^{-4}$ & $371.2180\times10^{-4}$ & --- & divergent \\ \hline
fGS Eq. (\ref{eq4}) & $33.7520\times10^{-4}$ & $28.8670\times10^{-4}$ & 283.972996 & 21246 \\ \hline
2-o fGS Eq. (\ref{eq16}) & $74.0830\times10^{-4}$ & $96.9050\times10^{-4}$ & 76.972961 & 11020 \\ \hline
10-o fGS Eq. (\ref{eq23}) & $325.071\times10^{-4}$ & $1606.472\times10^{-4}$ & 10.532080 & 4002 \\ \hline
bGS Eq. (\ref{eq5}) & $23.8040\times10^{-4}$ & $25.1510\times10^{-4}$ & 279.075221 & 21253 \\ \hline
2-o bGS Eq. (\ref{eq17}) & $67.4790\times10^{-4}$ & $71.0550\times10^{-4}$ & 140.556805 & 15002 \\ \hline
10-o bGS Eq. (\ref{eq24}) & $326.277\times10^{-4}$ & $1725.13\times10^{-4}$ & 4.338155 & 2528 \\ \hline
sGS Eq. (\ref{eq8}) & $105.240\times10^{-4}$ & $89.1220\times10^{-4}$ & 122.439710 & 14011 \\ \hline
nsGS Eq. (\ref{eq11}) & $95.7180\times10^{-4}$ & $79.4130\times10^{-4}$ & 122.298417 &  14002 \\ \hline
npsGS $\mu=\frac{1}{2}$ Eq. (\ref{eq18}) & $155.998\times10^{-4}$ & $145.274\times10^{-4}$ & 88.492540 & 11817 \\ \hline
psGS $\mu=\frac{1}{2}$ Eq. (\ref{eq19}) & $60.7230\times10^{-4}$ & $62.1690\times10^{-4}$ & 392.218996 & 24913 \\ \hline
\end{tabular}
\end{table}


%-----------------------------------------------------------


\section*{Conclusions}
\label{section5}

The $m$-order methods and new symmetric Gauss--Seidel methods were developed. The $m$-order methods have a higher rate of convergence than their first-order versions, but conserve the form of a first-order method. These $m$-order methods are subsequences of their precursor methods, where some benefits of using vector and parallel processors can be explored, because the iteration matrix of an $m$-order method is the product of $m$ matrices of the precursor method. The numerical results obtained with vector implementations show computational advantages when compared to the first-order versions. Theorem \ref{teor1} shows that the $m$-order method is always iteratively faster than its precursor method for $k$ iterations. 

The well-known symmetric Gauss--Seidel method defined by equation (\ref{eq8}) depends on the order used to combine the forward and backward variants of the Gauss--Seidel method, since its iteration matrix is obtained by composing these two variants. If the composition order is reversed, a new symmetric Gauss--Seidel method, defined by equation (\ref{eq11}), is obtained. Furthermore, the computational implementation of these two symmetric methods requires the determination of the product of the iteration matrices of the two Gauss--Seidel variants. 

On the other hand, the new symmetric Gauss--Seidel method defined by equation (\ref{eq19}) does not depend on the order used to combine the forward and backward variants of the Gauss--Seidel method, since its iteration matrix is obtained as a linear combination of these two variants. In addition, it does not require the determination of the product of the iteration matrices of the two Gauss--Seidel variants, so its computational implementation is simpler than that of the methods defined by equations (\ref{eq8}) and (\ref{eq11}). The method defined by equation (\ref{eq19}) proved to be interesting, as it was the only method that converged in all the numerical experiments presented here. However, a better choice of the linear combination parameter $\mu$ should be further investigated; in all our numerical experiments we used $\mu = 1/2$. Finally, as far as the authors know, this is the first time that the methods described by equations (\ref{eq11}), (\ref{eq18}), (\ref{eq19}), (\ref{eq21}), (\ref{eq23}) and (\ref{eq24}) have been presented.



%----------------------------------------------------------------

\section*{Acknowledgements}
%\begin{acks}
This work was supported by the Universidade Federal Fluminense. %\\
%\end{acks}

%----------------------------------------------------------------
%----------------------------------------------------------------


\bibliographystyle{PECEN}%S%
\bibliography{mybibfile}%S%


\end{document}

