Computational Vibrational Spectroscopy as a Tool for the Investigation of Complicated Systems

A short overview of the computational vibrational spectroscopy methodology is given. The main features of the methods used to compute parameters for complete vibrational spectra, including inelastic neutron scattering, infrared and Raman spectra, are described as well. The matrix method for solving the inverse vibrational problem, its limitations and its modifications are discussed. All these algorithms are implemented in the software called ''COSPECO''.


Introduction
The object of computational chemistry is the evaluation of computer models of the system under study. From our point of view, the model of the simulated system should describe all available experimental properties. The creation of a computer model consists of three main stages: model building, model computation and model verification. Some aspects of the first stage, model building, were described previously [1]. The second stage, model computation, is well described in the literature [2] and is based on known methods such as molecular mechanics, molecular dynamics, and ab initio and semiempirical quantum chemistry. The third stage, model verification, is still poorly defined and described. In this paper we would like to present some methods for model verification, which we have applied many times with success during the investigation of complicated problems of industrially important production.
To obtain a correct interpretation of the vibrational spectra of different silica and silica-based glasses, we have developed a program complex for quantum chemical calculation and vibrational-spectra verification of glass and small-particle models. We have used vibrational spectra (inelastic neutron scattering, INS; infrared, IR; and Raman) for model verification because standard structural methods such as X-ray and neutron diffraction do not work for completely amorphous solids.
The main part of a quantum chemical calculation finishes with the determination of the spatial and electronic structure, the heat of formation (or total energy for ab initio methods) and other integral characteristics of the system under study. But for the analysis of complicated systems and their details, such as amorphous objects and their surfaces, and for model building and verification, it is necessary to use more structure-selective properties. We believe that the vibrational spectrum (inelastic neutron scattering (INS), infrared (IR) and Raman spectra), or a combination of different vibrational spectra, is the best computable characteristic of the system under study. This thesis is based on the following well-known premises: it is possible to record vibrational spectra for any compound in any aggregate state (gas, liquid or solid); there is a direct method to compute vibrational spectra using quantum chemistry (the force field matrix is the matrix of second derivatives of the total energy with respect to the atomic displacements); the positions, intensities and shapes of the vibrational bands are extremely sensitive to the spatial and electronic structure; and the vibrational spectrum is a fingerprint of the substance.
In our opinion, a modern computational chemistry investigation of a system should go deeper and consist in building and verifying a computer model that can describe all available experimental data and predict unknown properties. Based on these premises, we have developed and used a methodology, and its software realization, for the computational chemistry investigation of complicated systems. Below we describe the main points of this methodology and their mathematical representation, which were used for the software development. The quantum chemistry methodology is described well enough elsewhere, so here we focus on the computational vibrational spectroscopy part and our realization, the COSPECO software.
The initial data for normal coordinate analysis are the spatial structure of the system under study and its force field. The first data set, the spatial structure, is available from any quantum chemistry or molecular mechanics/molecular dynamics calculation, mainly in the Cartesian coordinate system, and we will not discuss the methods for obtaining it; they are described in the literature very well. We focus on the second problem: how to obtain the force field as the matrix of second derivatives of the total energy with respect to the atomic displacements.

General remarks
Our computational tool for molecule and cluster simulation and verification consists of two main and independent parts: the quantum chemistry software named ''QuChem'' and the vibrational spectroscopy software named ''COSPECO''. Below we describe the main features and basic methods of this software.
The basic semiempirical quantum chemistry methods, their abilities and their limitations are well discussed in the literature [10][11][12], so we shall describe only the main features of our program.
The vibrational problem, as a problem of small deviations of atoms from their equilibrium positions, is defined at a stationary point with a positive definite Hessian matrix. This means that before the force field calculation the system should be optimized with high accuracy. For fast and accurate minimization, most useful optimization methods require the first derivatives of the total energy with respect to the atomic displacements in the coordinate system used. For a chemist this coordinate system is the internal coordinates, i.e. bond lengths, bond angles and dihedral angles. Direct analytical calculation of the total-energy gradient with respect to the atomic shifts, ∂E/∂x, in quantum chemistry methods is performed in Cartesian coordinates, while the structure optimization is performed in internal coordinates and requires the gradient with respect to the internal coordinates, ∂E/∂q. Therefore we transform the Cartesian gradient into the internal coordinate system using the Wilson B matrix [6]:

g_q = (B × B̃)⁺ × B × g_x

where g_x = ∂E/∂x, g_q = ∂E/∂q, and the pseudoinversion (B × B̃)⁺ handles the redundancy of the internal coordinate set. This way of transforming the Cartesian gradient to the internal coordinate system also gives the ability to perform mechanochemical calculations [13].
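The chain-rule transformation of the Cartesian gradient into internal coordinates can be sketched in a few lines. This is a minimal numpy illustration, not the COSPECO implementation; the function name and the toy diatomic model are ours.

```python
import numpy as np

def cartesian_to_internal_gradient(B, g_x):
    """Transform a Cartesian gradient g_x (length 3N) to internal
    coordinates via the Wilson B matrix (n_int x 3N):
        g_q = (B B~)^+ B g_x
    The pseudoinverse handles redundant internal coordinate sets."""
    return np.linalg.pinv(B @ B.T) @ B @ g_x

# Toy check: one bond-length coordinate for a diatomic on the x axis.
# x = (x1, y1, z1, x2, y2, z2); q = x2 - x1, so B = [-1 0 0 1 0 0].
B = np.array([[-1.0, 0, 0, 1.0, 0, 0]])
k, r0 = 2.0, 1.0                      # model energy E = 0.5*k*(q - r0)^2
x = np.array([0.0, 0, 0, 1.3, 0, 0])  # stretched bond, q = 1.3
q = B @ x
g_x = B.T @ (k * (q - r0))            # Cartesian gradient by the chain rule
g_q = cartesian_to_internal_gradient(B, g_x)
# g_q[0] recovers dE/dq = k*(q - r0)
```

For this linear one-coordinate case the transformation is exact; for redundant coordinate sets the pseudoinverse projects out the null-space components.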
Cartesian force field evaluation as the mass-weighted matrix of energy second derivatives with respect to atomic displacements
After complete optimization of the system under study, when the Cartesian (and internal) gradient is close to zero, it is possible to start the force field calculation. It is very important to begin the force field calculation at a stationary point, where the gradient is zero: G(x) = ∂E/∂x = 0. This means that geometry optimization must be applied even to an experimentally determined spatial structure of the system under study, because the energy minimum from experiment and from quantum chemistry may not coincide exactly, sometimes with a noticeable difference. The matrix of second derivatives of the energy with respect to the atomic displacements, F = ||∂²E/∂x²||, is calculated by the finite difference method using the Cartesian gradient. But this method requires a lot of computing time to reach an acceptable accuracy; therefore, in the future, we would like to add an analytical method for the evaluation of this Cartesian force constant matrix.
In general, this finite difference method is very simple. Starting from the equilibrium (stationary) point, where G(x) ≈ 0 (G(x) = ∂E/∂x is the Cartesian gradient vector of length 3n, n being the number of real atoms), we shift every atom along the X, Y and Z axes by a fixed step (step = 0.003 Å) in the two opposite directions and compute G_1 = G(x + step) and G_2 = G(x − step). After that, a column (or row, because the force constant matrix is symmetrical) of the force constant matrix is evaluated as:

F_i = (G_1 − G_2)/(2 × step)
From this equation it is clearly seen that the accuracy of the spatial structure optimization should be better than the step magnitude. Only in this case can one guarantee that during the force field computation the system under study stays close to the local minimum.
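The column-by-column central-difference scheme above, including the subsequent symmetrization, can be sketched as follows. This is an illustrative numpy version under our own naming; the quadratic test surface is a stand-in for a real quantum chemical gradient routine.

```python
import numpy as np

def numerical_hessian(energy_grad, x0, step=0.003):
    """Central-difference Hessian built column by column from an
    analytic gradient function, as described in the text:
        F[:, i] = (G(x + step*e_i) - G(x - step*e_i)) / (2*step)
    energy_grad(x) must return the Cartesian gradient vector."""
    n = len(x0)
    F = np.zeros((n, n))
    for i in range(n):
        xp, xm = x0.copy(), x0.copy()
        xp[i] += step
        xm[i] -= step
        F[:, i] = (energy_grad(xp) - energy_grad(xm)) / (2.0 * step)
    return 0.5 * (F + F.T)   # symmetrize: f'_ij = 0.5*(f_ij + f_ji)

# Check on a quadratic surface E = 0.5 x~ A x, whose exact Hessian is A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
F = numerical_hessian(lambda x: A @ x, np.zeros(2))
```

On a quadratic surface the central difference is exact up to round-off, which is why the step must only be small relative to the anharmonicity of the real surface.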
The total dimension of the force matrix is 3N × 3N, where N is the total number of atoms in the system under study. The force constant matrix F is symmetrical, but numerical methods with limited accuracy (finite computer word length) lead to small deviations between the elements f_ij and f_ji. After the complete column-by-column (or row-by-row) computation of the force matrix, we perform a symmetrizing adjustment, f'_ij = 0.5 × (f_ij + f_ji). The force matrix F is then mass-weighted by multiplying every element f_ij by 1/(m_i^1/2 × m_j^1/2); in matrix notation, F_mw = |m|^−1/2 × F × |m|^−1/2. This mass-weighted matrix is the dynamic matrix in the Cartesian coordinate system. Diagonalization of this matrix produces eigenvalues and corresponding eigenvectors. These data are kept inside the program and used for the IR and Raman spectra calculation by the ''QuChem'' program. After the calculation, this mass-weighted force constant matrix F_mw is recorded, together with the corresponding spatial structure (Cartesian geometry), into files for use by the ''COSPECO'' program.
This matrix also contains some systematic deviations due to the neglect of the Eckart conditions. As a result of the limited double-precision word length and other related reasons, the six (or five for linear systems) smallest eigenvalues of this symmetrical matrix, which represent pure rotations and translations, are not exactly zero but deviate somewhat from zero. This is a typical phenomenon for numerical methods, and these values can be neglected if they do not exceed 10-20 cm⁻¹; but if one or more of them exceeds 10-20 cm⁻¹, it is necessary to verify the results. Sometimes this phenomenon is the result of a low accuracy of the structure optimization, and sometimes it is the result of neglecting the Eckart conditions when heavy atoms are placed far from the center of mass. But if a negative frequency is too large in absolute value, it indicates that the current structure is not at a minimum of the energy but at a saddle point.
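The mass-weighting and the check of the near-zero external modes can be demonstrated on the smallest possible example, a harmonic diatomic. The model and thresholds below are illustrative only, not values from the program.

```python
import numpy as np

# Model diatomic (masses m1, m2) with a harmonic bond of force constant
# k along x. The Cartesian force constant matrix (6x6) has only the xx
# block nonzero: F = k * [[1, -1], [-1, 1]] on the (x1, x2) coordinates.
m1, m2, k = 1.0, 16.0, 5.0
F = np.zeros((6, 6))
F[np.ix_([0, 3], [0, 3])] = k * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Mass-weighting: F_mw = M^{-1/2} F M^{-1/2}, with each atomic mass
# repeated three times (x, y, z) on the diagonal of M.
masses = np.repeat([m1, m2], 3)
inv_sqrt_m = 1.0 / np.sqrt(masses)
F_mw = F * np.outer(inv_sqrt_m, inv_sqrt_m)

# Diagonalize the symmetric dynamic matrix. For an ideal stationary
# point of this linear system, 5 eigenvalues are zero (translations
# and rotations) and one equals k*(1/m1 + 1/m2), the bond stretch.
eigvals = np.linalg.eigvalsh(F_mw)
n_external = int(np.sum(np.abs(eigvals) < 1e-8))
```

In a real calculation the external eigenvalues are only near zero, and their magnitude serves exactly the diagnostic role described in the text.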

IR-intensity calculation. Quantum chemical part
After the Cartesian force field is computed, it is possible to start the infrared and Raman intensity calculations. To compute these intensities it is necessary to have the derivatives of the dipole moment and of the polarizability with respect to the atomic displacements. To prevent accuracy loss during the numerical differentiation, caused by the lower sensitivity of the dipole moment and polarizability to small atomic displacements, we have used a rather different step of 0.1 Å for these calculations, compared to the step of 0.003 Å for the force field calculation. Dipole moments are computed by the standard method even for charged systems. It is known that for a charged system the dipole moment contains an additional term that depends on the position of the system relative to the origin of coordinates. But during the numerical differentiation of the computed dipole moment this additional term cancels out. Therefore this method can be applied to dipole moment derivatives and infrared intensity calculations for charged systems:

∂µ/∂x_i = (µ_1 − µ_2)/(2 × step)

where µ_1 and µ_2 are the system dipole moments at the two opposite atomic shifts and step is the finite difference step. The dipole moment contains three components; therefore the dipole moment derivatives array contains 3 × 3N values.
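The 3 × 3N dipole-derivative array can be assembled with the same central-difference loop as the Hessian, only with the larger step. The point-charge dipole model below is a toy stand-in for a quantum chemical dipole routine; all names are ours.

```python
import numpy as np

def dipole_derivatives(dipole, x0, step=0.1):
    """Central-difference dipole derivatives d(mu)/dx, a 3 x 3N array,
    using the larger step recommended for the less sensitive dipole
    moment (0.1 A, versus 0.003 A for the force field)."""
    n = len(x0)
    D = np.zeros((3, n))
    for i in range(n):
        xp, xm = x0.copy(), x0.copy()
        xp[i] += step
        xm[i] -= step
        D[:, i] = (dipole(xp) - dipole(xm)) / (2.0 * step)
    return D

# Toy model: two point charges, mu = sum_a q_a * r_a, so the derivative
# with respect to atom a's coordinate j is q_a on the matching component.
charges = np.array([0.4, -0.4])
def mu(x):
    return charges @ x.reshape(2, 3)

D = dipole_derivatives(mu, np.zeros(6))
```

For this linear model the finite difference reproduces the charges exactly, which also illustrates why the origin-dependent term of a charged system cancels in the differentiation.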

Raman spectra intensity calculation. Quantum chemical part
Reliable theoretical predictions of the magnitudes of microscopic linear and nonlinear responses are believed to be useful for technologists. It is well known that the polarization P induced in an atom or molecule by an external field E can be expressed as:

P = α·E + β·E² + γ·E³ + …

where the vector quantities P and E are related by the tensor quantities α, β and γ, which are usually referred to as the polarizability, hyperpolarizability and second hyperpolarizability, respectively.
For the quantum chemical calculation of molecular optical properties and nonlinearities, two conceptually different methods, namely the sum-over-states (SOS) method and the derivative method, are generally used [14][15][16]. In the SOS method, or excited-state perturbation expansion, a perturbation expansion over molecular states is applied to account for the effects of an externally applied electromagnetic field on the motion of the electrons of the system under study. To compute the interaction with the external electric field with acceptable accuracy, it is necessary to use at least all singly and doubly excited states over all atoms of the system under study. But semiempirical methods (MNDO, AM1 and PM3) are parameterized for the ground state only; therefore the errors in the results for excited states may be very large and unacceptable. A further limitation of SOS comes from the arbitrary truncation of the summation for the sake of computational efficiency, where only the contributions of a limited number of excited states are included. In practice, the summation over only the low-lying excited states is often used [17,18]. The accuracy of the SOS method, however, is severely limited by the approximations used for the excited electronic states [19].
The derivative method, in its finite-field and analytical variants, is less restricted than the SOS method. It is based on the expansion of the energy or dipole moment as a power series in the applied field [15]:

U(E) = U_0 − µ_0·E − (1/2)·α·E² − (1/6)·β·E³ − (1/24)·γ·E⁴ − …    (5)

where U_0 is the energy of the system under study in the absence of the field and µ_0 is the permanent dipole moment. The coefficient α corresponds to the polarizability of the molecule and is estimated by computing the second derivative of the energy, or the first derivative of the dipole moment, with respect to the electric field of the incident light. The first hyperpolarizability β and the second hyperpolarizability γ are likewise given by the third derivative of the energy (or the second derivative of the dipole moment) and the fourth derivative of the energy (or the third derivative of the dipole moment) with respect to the applied external electric field. These derivatives are then calculated either numerically, as in the finite-field method, or analytically, as in the analytical method. There are, however, no published and well-described methods for the analytical evaluation of these derivatives; they are included in commercial software (GAUSSIAN98), but without detailed explanation. Therefore we use the finite-field method.
The polarizability α, the first hyperpolarizability β and the second hyperpolarizability γ may, in principle, be obtained by computing the gradient of µ(E) with respect to E in the limit of zero field, as allowed by the Hellman-Feynman theorem (since we need only the polarizability for the Raman intensity, we describe in detail only the calculation of α):

α = (∂µ/∂E)|_E=0    (7)

In practice, this is done numerically by computing:

α_ix = [µ_i(+ε,0,0) − µ_i(−ε,0,0)]/(2ε)    (8a)
α_iy = [µ_i(0,+ε,0) − µ_i(0,−ε,0)]/(2ε)    (8b)
α_iz = [µ_i(0,0,+ε) − µ_i(0,0,−ε)]/(2ε)    (8c)

Since µ is a vector of length 3, each of these formulas computes one row (or column) of the 3×3 tensor α. Therefore, for the polarizability α_ij = ∂²E/(∂ε_i ∂ε_j), the derivative required for the Raman intensity is ∂α_ij/∂R_k = ∂³E/(∂ε_i ∂ε_j ∂R_k). In these equations ε_i and ε_j are electric field perturbations in the i (i = x, y, z) and j (j = x, y, z) directions, R_k is a nuclear coordinate in the k (k = x, y, z) direction, and E is the total energy of the system under study.
The finite-difference expansion of these derivatives contains, in particular, five terms, in which the indexes show the directions of the applied external electric fields.
All these formulas give us the ability to check the accuracy of the polarizability calculation and the correctness of the external-electric-field quantum chemical subroutine. One can compute the dipole moment analytically (as implemented in the quantum chemical software) and from the influence of the external electric field: using equation 5, with complete neglect of the higher members of the series (the linear approximation), the dipole moment may be obtained as the first derivative of the total energy with respect to the external field:

µ = −∂U/∂E

In our numerical tests the chosen magnitude of the external electric field (0.002 atomic units, a.u.) gives good numerical accuracy and is far from significant nonlinear effects: the difference between the analytically calculated dipole moment and the numerically calculated values is not more than 2.5% for the simple molecules tested. This means that the quantum chemical subroutine for the external-field calculation works correctly and produces reasonable results.
We have used this finite-field approximation to compute the polarizability and its derivatives for the Raman intensity calculations. The 3×3 polarizability tensor α is computed by the finite-field method at the stationary point. Since the dipole moment µ is a vector with three components, application of the external electric field along each axis in the direct and inverse directions gives two corresponding dipole moments, µ_i(E,0,0) and µ_i(−E,0,0). Using formulas 8a-8c one can calculate a row or a column of the polarizability tensor; since this tensor is symmetric, it is sufficient to keep only six independent values.
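The finite-field construction of α can be sketched with a model dipole response; the linear toy model below replaces a real dipole-in-field quantum chemical call, and all names are ours.

```python
import numpy as np

def polarizability_finite_field(dipole_in_field, field=0.002):
    """Finite-field polarizability: apply the external field +/-E along
    each axis and difference the induced dipole (cf. formulas 8a-8c):
        alpha[:, j] = (mu(+E e_j) - mu(-E e_j)) / (2E)
    field = 0.002 a.u. is the magnitude the text found to be accurate
    yet safely inside the linear regime."""
    alpha = np.zeros((3, 3))
    for j in range(3):
        E = np.zeros(3)
        E[j] = field
        alpha[:, j] = (dipole_in_field(E) - dipole_in_field(-E)) / (2.0 * field)
    return 0.5 * (alpha + alpha.T)   # keep the symmetric part

# Toy model with a known linear response: mu(E) = mu0 + A @ E.
A = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.1],
              [0.0, 0.1, 1.0]])
mu0 = np.array([0.1, 0.0, -0.2])
alpha = polarizability_finite_field(lambda E: mu0 + A @ E)
```

Because the tensor is symmetrized, only six independent values need to be stored, exactly as the text notes.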
Using the same one-atom-displacement technique as for the dipole moment derivatives, one can compute the derivative of the polarizability with respect to the atomic displacement: one evaluates the dipole moment derivatives without the external electric field and then, with the external electric field switched on, evaluates the dipole moment derivatives with respect to both the atomic shift and the external field. The array of polarizability derivatives, with dimension 6 × 3N, is stored for the Raman spectra calculation. The dipole moment derivative and polarizability derivative subroutines can therefore be combined into one subroutine to save computing time.
The ''QuChem'' program finishes its computation with the evaluation of the dipole moment and polarizability derivatives. At the end of the program run we have the spatial and electronic structure, the heat of formation, the dipole and quadrupole moments and the ionization potential. Additional files contain the force field in Cartesian coordinates, the Cartesian coordinates of all atoms, the dipole moment derivatives and the polarizability derivatives. These data are sufficient for a complete normal coordinate analysis and for the intensity calculation of inelastic neutron scattering (INS), infrared (IR) and Raman spectra.

Normal coordinate analysis in the redundant internal coordinate system
Since the analysis of normal vibrations in Cartesian coordinates is very complicated for systems containing more than three atoms, we perform the normal coordinate analysis only in the redundant internal coordinate system, i.e. in bond-length, bond-angle, linear, torsion and out-of-plane vibrational coordinates. These data represent the complete vibrational spectrum of any system under study. Therefore the next step of the computation is the conversion of the quantum chemical data from the Cartesian to the redundant internal coordinate system.
The first step of the normal coordinate analysis is the computation of the kinematic matrix G on the basis of the well-known GF matrix method [20,21]. Since the Cartesian coordinates of all atoms are known, the main problem at this step is how to define the bonds between atoms in the system under study. There are many methods, but we have used the simplest of them, based on standard bond lengths. If the distance between two atoms lies between the lower and upper limits, the program indicates that these two atoms are connected by a chemical bond. If the distance between atoms is below the lower limit, the program indicates a problem with the chemical bond definition and requires additional control by the user. This additional control is based on the density matrix of the system under study and on Wiberg indexes [see definition and references in 22]. After the complete bond definition, the kinematic matrix G is computed by the standard method [20,21], as is the matrix B for the Cartesian-to-internal coordinate conversion. During this computation, the set of equivalent vibrational coordinates (with equivalent force constants) and the set of nonbonded vibrational coordinates (with zero force constants in the valence-force-field approximation [20,21]) are determined simultaneously.
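The standard-bond-length test can be sketched as a simple distance filter. The limit values and geometry below are illustrative only; a production table would cover many element pairs and be calibrated against reference structures.

```python
import numpy as np

def detect_bonds(coords, symbols, limits):
    """Bond detection by standard bond-length windows: two atoms are
    bonded when their distance lies between the lower and upper limit
    for that element pair; a distance below the lower limit is flagged
    for inspection by the user. `limits` maps a sorted element pair to
    (lo, hi) in Angstrom (illustrative values, not a validated table)."""
    bonds, suspicious = [], []
    n = len(symbols)
    for i in range(n):
        for j in range(i + 1, n):
            pair = tuple(sorted((symbols[i], symbols[j])))
            if pair not in limits:
                continue
            lo, hi = limits[pair]
            d = np.linalg.norm(coords[i] - coords[j])
            if lo <= d <= hi:
                bonds.append((i, j))
            elif d < lo:
                suspicious.append((i, j))
    return bonds, suspicious

limits = {("H", "O"): (0.8, 1.2), ("H", "H"): (0.6, 0.9)}
water = np.array([[0.0, 0.0, 0.0],      # O
                  [0.96, 0.0, 0.0],     # H
                  [-0.24, 0.93, 0.0]])  # H
bonds, bad = detect_bonds(water, ["O", "H", "H"], limits)
```

For the water geometry above, both O-H pairs fall inside the window and the H-H pair falls outside it, so only the two O-H bonds are reported.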
The basic equation of the GF-matrix method is:

G F L = L Λ    (9)
where G is the kinematic matrix, F is the force constant matrix, L is the matrix of the forms (relative amplitudes) of the normal vibrations and Λ is the diagonal matrix of the eigenvalues, which represent the vibrational frequencies through λ_i = C × ν_i²; the coefficient C depends on the unit system. Since the matrix product GF is not symmetrical, the standard solution of equation 9 (evaluation of L, the forms of the normal vibrations, and of the eigenvalues Λ for the frequency determination) is performed in two steps. The first step is the diagonalization of the symmetrical kinematic matrix G using standard linear algebra methods and well-known software, which yields the eigenvector matrix L_g and the diagonal eigenvalue matrix Λ_g. The next step is the calculation of the intermediate matrix:

W = Λ_g^1/2 × L̃_g × F × L_g × Λ_g^1/2

where L̃_g is the transposed eigenvector matrix L_g. The intermediate matrix W is symmetric and can also be diagonalized using standard linear algebra methods and well-known software, yielding the diagonal eigenvalue matrix Λ_f and the eigenvector matrix L_f. The forms of the normal vibrations are then obtained by simple matrix multiplication:

L = L_g × Λ_g^1/2 × L_f

The matrix L_p, called the impulse transform matrix, is evaluated as L_p = L_g × Λ_g^−1/2 × L_f. In total, the diagonal eigenvalue matrix Λ_f can be expressed as:

Λ_f = L̃ × F × L

Usually this is the end of the normal coordinate analysis. Additionally, for a better assignment of the normal vibrations, it is possible to use the potential energy distribution (PED) V^k_ij for the normal vibration k. The PED shows the contribution of the deformation energy of each vibrational coordinate to the normal vibration k:

V^k_ij = f_ij × l_ik × l_jk

For every normal vibration k we have a square matrix of size N, where N is the matrix dimension, i.e. the total number of vibrational coordinates.
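The two-step diagonalization above can be verified numerically in a few lines. This is our own sketch of the procedure, checked against a direct eigendecomposition of the nonsymmetric product GF; the random test matrices stand in for real G and F.

```python
import numpy as np

def solve_gf(G, F):
    """Two-step solution of G F L = L Lambda for symmetric G and F:
    1) diagonalize G -> L_g, Lambda_g;
    2) form the symmetric W = Lambda_g^{1/2} L_g~ F L_g Lambda_g^{1/2}
       and diagonalize it -> L_f, Lambda_f;
    3) build the forms L = L_g Lambda_g^{1/2} L_f and the
       impulse-transform matrix L_p = L_g Lambda_g^{-1/2} L_f."""
    lam_g, L_g = np.linalg.eigh(G)
    s = np.sqrt(lam_g)
    W = (s[:, None] * (L_g.T @ F @ L_g)) * s[None, :]
    lam_f, L_f = np.linalg.eigh(W)
    L = L_g @ (s[:, None] * L_f)
    L_p = L_g @ ((1.0 / s)[:, None] * L_f)
    return lam_f, L, L_p

# Check on random symmetric matrices (G positive definite): Lambda_f
# must reproduce the eigenvalues of the nonsymmetric product GF, and
# the forms must satisfy the GF equation itself.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
G = M @ M.T + 5.0 * np.eye(5)
F = rng.standard_normal((5, 5))
F = 0.5 * (F + F.T)
lam_f, L, L_p = solve_gf(G, F)
ref = np.sort(np.linalg.eigvals(G @ F).real)
```

Only symmetric eigenproblems are solved, which is exactly why the intermediate matrix W is introduced instead of diagonalizing GF directly.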
To simplify this method, and since the diagonal force constants are much larger than the off-diagonal ones, the PED matrix can be reduced to the vector:

V^k_i = f_ii × l_ik × l_ik

The biggest elements of this vector indicate the main structural elements (vibrational coordinates) that play an important role in the normal vibration k. These elements (or one element) constitute the assignment of the normal vibration k and are usually put into the assignment table of the normal coordinate analysis.
As pointed out above, the kinematic matrix G can easily be computed by standard methods from the Cartesian coordinates of the atoms [20]. The main problem of computational vibrational spectroscopy is the method for evaluating the force constant matrix F in the redundant internal coordinate system. Below we describe the method for transforming the force matrix from Cartesian coordinates to the redundant internal vibrational coordinate system. It is necessary to point out that the internal vibrational coordinate system must be redundant, because only this kind of coordinate system can represent the complete symmetry of the vibrations without linear combinations of symmetry coordinates. For systems containing more than 5-10 atoms, the redundant coordinate system leads to linear dependencies in the kinematic matrix G. This means that during the diagonalization of the matrix G one or more eigenvalues will be near zero, but not exactly zero, owing to the finite accuracy of the numerical methods. In any case, this kinematic matrix G cannot be inverted directly by the ordinary methods of linear algebra. Because of the finite (small but nonzero) atomic shifts, the Eckart conditions are not satisfied completely, and it is necessary to remove all external degrees of freedom (rotations and translations) from the Cartesian force matrix U_c. We have used the projection matrix method [23], based on the translation and rotation invariance of the matrix B. The projection matrix P can be computed as:

P = B̃ × (B × B̃)⁺ × B

After that the Cartesian force constant matrix U_c is projected onto the translation- and rotation-free space:

U_i = P × U_c × P

where U_i is the translation- and rotation-free Cartesian force constant matrix.
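The effect of the projection can be illustrated with an equivalent projector built directly from the external-mode vectors (three translations plus three infinitesimal rotations) rather than from B; the two routes span the same complement because B annihilates exactly these vectors. All names and the test geometry are ours.

```python
import numpy as np

def external_projector(coords):
    """Projector onto the translation/rotation-free subspace, built
    here by spanning the external modes explicitly: 3 translation
    vectors plus 3 infinitesimal rotations about the center, then
    P = I - T T~ with T orthonormalized by QR."""
    n = len(coords)
    c = coords - coords.mean(axis=0)
    ext = []
    for axis in range(3):                    # rigid translations
        t = np.zeros((n, 3))
        t[:, axis] = 1.0
        ext.append(t.ravel())
    for axis in range(3):                    # infinitesimal rotations
        e = np.zeros(3)
        e[axis] = 1.0
        ext.append(np.cross(c, e).ravel())
    T, _ = np.linalg.qr(np.array(ext).T)     # orthonormal external basis
    return np.eye(3 * n) - T @ T.T

# Projecting the Cartesian force matrix, U_i = P U_c P, annihilates its
# action on any pure translation (and rotation).
coords = np.array([[0.0, 0, 0], [1.0, 0, 0], [0.2, 0.9, 0]])
P = external_projector(coords)
rng = np.random.default_rng(1)
U_c = rng.standard_normal((9, 9))
U_c = 0.5 * (U_c + U_c.T)
U_i = P @ U_c @ P
t_x = np.tile([1.0, 0, 0], 3)   # rigid translation along x
```

After projection, the six external modes become exact zero eigenvectors of U_i regardless of how strongly the raw U_c violated the Eckart conditions.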
To transform the Cartesian force field matrix U_i into the force matrix F of the redundant internal coordinate system, we have used the following treatment. In Cartesian coordinates one can write:

U_mw × D = D × Λ

where Λ is the diagonal matrix of the eigenvalues and D is the eigenvector matrix of the mass-weighted Cartesian force matrix U_mw = |m|^1/2 × U_i × |m|^1/2, |m| being the diagonal matrix of the inverted atomic masses, each repeated three times. In Cartesian coordinates the columns of the eigenvector matrix are the Cartesian displacements of the atoms in the normal vibrations. One can evaluate the same values, the atomic displacements, using the redundant internal coordinates [20,21]:

X = |m| × B̃ × G⁺ × L

so that the internal force matrix is obtained as F = Ã × U_i × A with A = |m| × B̃ × G⁺, where for the inversion of the kinematic matrix G we have used the pseudoinversion:

G⁺ = L_g × Λ_g⁺ × L̃_g

The described method gives us the ability to convert the Cartesian force constant matrix into the redundant internal coordinate system, and it is implemented in the ''COSPECO'' software.
All these methods and algorithms together constitute the ''direct vibrational problem'' solution. At the beginning of the normal coordinate analysis we have the spatial structure and the force constant matrix; at the end of the direct vibrational problem solution we have the frequencies of the normal vibrations and their assignment, obtained from the forms of the normal vibrations and/or from the PED. The next steps of the computational vibrational spectroscopy investigation are the inverse vibrational problem, force constant scaling and the calculation of the vibrational spectra intensities.

Inverse vibrational problem and methods for force constant fitting
In general, the inverse vibrational problem may be defined in two ways: first, as the complete determination of the force constant matrix using only experimental spectral data; second, as the fitting of a starting force field using experimental spectral data and other available data. Below we use the second definition: the inverse vibrational problem is the fitting of force constants using the experimental frequencies of the normal vibrations and other available data. In contrast to the direct vibrational problem, which is uniquely defined and well posed, the inverse vibrational problem is ill conditioned from the mathematical point of view. In general terms, it is the problem of fitting a symmetrical matrix with N×(N+1)/2 elements using N or fewer data. From the purely mathematical point of view there is no solution to this problem unless N = 1. But we need to solve it nevertheless. Below we describe these problems more deeply.
As pointed out before, one can represent the direct vibrational problem solution through the two-step diagonalization described above. The pseudoinversion of the diagonal matrix Λ_g (represented as a vector) of the eigenvalues of the first diagonalization is performed using a threshold value ε: for all elements λ_g(i) ≥ ε we use λ_g(i)^−1/2 = 1/λ_g(i)^1/2, and for the small elements λ_g(i) < ε we use λ_g(i)^−1/2 = 0. By numerical testing we have chosen the value ε = 0.001. The matrix L_p, the impulse transform matrix introduced above, which transforms the frequencies as the eigenvalues Λ_f, is then obtained as a square matrix; this representation of the impulse transform matrix L_p simplifies all the calculations. Any force constant matrix element f_ij can then be obtained as the scalar product of rows i and j of the impulse transform matrix L_p, with the corresponding eigenvalue as the weighting factor for every product of elements of rows i and j. Since the force constant matrix F is symmetric, it is possible to represent it as the product:

F = L_p × Λ_f × L̃_p    (19)

We will use this expression intensively below. If we use the computed eigenvalues Λ_f for the reestablishment of the force constant matrix, we obtain the initial force constant matrix to within the accuracy of the computer word length and the other numerical treatment. If we use the experimental eigenvalues (computed from the experimental frequencies), we obtain the fitted force constant matrix. The main point of this matrix method for the inverse vibrational problem solution is that the direct problem solution gives us the initial matrix of the forms of the normal vibrations; we then replace the computed eigenvalues Λ_f by the experimental ones and neglect the resulting change of the forms of the normal vibrations. This assumption is the more correct, the smaller the difference between the calculated and experimental eigenvalues.
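The eigenvalue-substitution idea can be demonstrated end to end on small random matrices: rebuild F from its own eigenvalues (a round-trip check), then substitute perturbed "experimental" eigenvalues and verify that the fitted field reproduces them exactly. This is our illustrative sketch, not COSPECO code.

```python
import numpy as np

# Direct problem: two-step diagonalization, keeping the impulse
# transform matrix L_p.
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
G = M @ M.T + 4.0 * np.eye(4)            # model kinematic matrix
F = rng.standard_normal((4, 4))
F = 0.5 * (F + F.T)                      # model force constant matrix

lam_g, L_g = np.linalg.eigh(G)
s = np.sqrt(lam_g)
W = (s[:, None] * (L_g.T @ F @ L_g)) * s[None, :]
lam_f, L_f = np.linalg.eigh(W)           # computed eigenvalues
L_p = L_g @ ((1.0 / s)[:, None] * L_f)   # impulse transform matrix

# Round trip: the original field is recovered from its own eigenvalues
# via F = L_p Lambda_f L_p~ (equation 19).
F_back = L_p @ np.diag(lam_f) @ L_p.T

# Fitting: substitute "experimental" eigenvalues for the computed ones,
# keeping the forms of normal vibrations fixed.
lam_exp = lam_f + 0.05 * rng.standard_normal(4)
F_fit = L_p @ np.diag(lam_exp) @ L_p.T
lam_check = np.sort(np.linalg.eigvals(G @ F_fit).real)
```

With the forms frozen, the substitution reproduces the target eigenvalues exactly; the physical approximation lies entirely in assuming the forms do not change.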
If we have the spectra of several isotopically substituted molecules, we can obtain the force field matrices for all isotopic species, average the force fields and repeat this process up to the needed accuracy. This method is very fast, and during this process we are able to fit the whole force field matrix; the method is insensitive to linear dependencies between force constants. To increase the computational accuracy it is possible to use the differential variant of the inverse vibrational problem solution. It is based on the same equation (19), but the experimental eigenvalues are replaced by the differences between the calculated and experimental eigenvalues; the application of equation 19 then yields the increment of the force constant matrix. In practice this method works well enough even for a matrix size N = 1000 or more using single precision.
If our starting force field produces acceptable accuracy for the direct vibrational problem solution (for the harmonic approximation, about 20 cm⁻¹ in the 1000 cm⁻¹ region), we do not need to fit the force field. But if we do need to fit it, we should introduce model restrictions on the force field, depending on the origin of the system. We have used two types of restrictions: first, the force constants of equivalent vibrational coordinates should be equivalent; second, for a system without conjugation, the force constants describing the interaction between nonbonded vibrational coordinates (vibrational coordinates without common atoms) should be zero. This restriction is known as the valence force field model.
Let us compare this method with the popular least squares method (LSQM) [24][25][26] for the inverse vibrational problem solution. The LSQM uses the same procedure for the direct vibrational problem solution and requires a starting force field too. After that it is necessary to choose the force constants for fitting. It is very important that the total number of fitted force constants should not exceed the number of nonzero experimental frequencies; this means that the number of fitted force constants is usually much smaller than the total number of force constants, and there is no systematic method to define which constants must be fitted and which may be neglected. After this step one can build the system of linear equations:

A × X_Δ = L_Δ    (21)

where A = ||∂λ_i/∂f_j|| is the matrix of the partial derivatives of the eigenvalues with respect to the fitted force constants, X_Δ is the column of force constant increments and L_Δ is the difference between the calculated and experimental eigenvalues (squares of the frequencies). Using the classical LSQM, one can obtain the best solution (with the minimal norm of the deviation between calculated and experimental data) by solving:

X_Δ = (Ã × A)⁻¹ × Ã × L_Δ    (22)

The obtained force field increment is introduced into the force field, and this process is repeated up to the requested accuracy or until the allowed number of iterations is exhausted.
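One LSQM iteration can be sketched generically by building the derivative matrix A numerically and solving the least-squares system; this is our own toy version (finite-difference Jacobian, diagonal constants only), not the analytical derivatives a production code would use.

```python
import numpy as np

def eigenvalues(G, F):
    return np.sort(np.linalg.eigvals(G @ F).real)

def lsq_fit_step(G, F, idx, lam_exp, h=1e-6):
    """One LSQM iteration: build A = ||d lambda_i / d f_j|| for the
    selected force constants `idx` by finite differences, solve
    A x = (lam_exp - lam_calc) in the least-squares sense and update
    the (symmetric) force field."""
    lam0 = eigenvalues(G, F)
    A = np.zeros((len(lam0), len(idx)))
    for col, (i, j) in enumerate(idx):
        Fp = F.copy()
        Fp[i, j] += h
        if i != j:
            Fp[j, i] += h
        A[:, col] = (eigenvalues(G, Fp) - lam0) / h
    x, *_ = np.linalg.lstsq(A, lam_exp - lam0, rcond=None)
    F_new = F.copy()
    for col, (i, j) in enumerate(idx):
        F_new[i, j] += x[col]
        if i != j:
            F_new[j, i] += x[col]
    return F_new

rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3))
G = M @ M.T + 3.0 * np.eye(3)
F = np.diag([2.0, 3.0, 4.0]) + 0.1           # starting force field
F_true = np.diag([2.02, 2.98, 4.05]) + 0.1   # hidden "true" field
lam_exp = eigenvalues(G, F_true)             # "experimental" eigenvalues
idx = [(0, 0), (1, 1), (2, 2)]               # fit only diagonal constants
F1 = lsq_fit_step(G, F, idx, lam_exp)
before = np.linalg.norm(eigenvalues(G, F) - lam_exp)
after = np.linalg.norm(eigenvalues(G, F1) - lam_exp)
```

One Gauss-Newton step already reduces the eigenvalue residual substantially when the starting field is close to the target, which is the regime in which the text applies the method.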
Sometimes this iterative process is not successful because the linear equation system is ill conditioned. The result is an abnormal solution of the linear equation system and/or a high sensitivity to the computer accuracy and the floating-point word length. This means that the condition number σ > 1 of the linear equation system (21) may be large. Left multiplication of equation 21 by the transposed matrix Ã and its conversion to equation 22 squares the condition number σ, making equation 22 maximally unstable with respect to experimental data errors and algorithmic round-off errors.
As an alternative, more stable and sophisticated methods are used [27,28].
The LSQM and the matrix method for solving the inverse vibration problem are quite similar. The elements a_ij of the matrix A (the partial derivatives of the eigenvalues with respect to the fitted force constants) are computed as products of the i-th and j-th components of the matrix L, the matrix of normal-vibration forms.
The last equation (23) may be presented as R = L ⊗ L = L_p^(-1) ⊗ L_p^(-1), a direct (Kronecker) matrix product [29, pp. 227-228]. The linear system (23) can then be analyzed by standard methods of linear algebra. The matrix R contains the square of the number of singularities present in the matrices L_p and L, i.e. the number of linear dependencies in the system of vibrational coordinates. One can then solve this system using the pseudoinverse of the matrix R (equation 24). Since the elements of R are products of rows of the normal-vibration-form matrix, the linear system (21) is a subsystem of equation (23) and can be obtained from (24) by selecting the fitted force constants or, in other words, by introducing restrictions. Therefore the difference between the matrix method for the inverse problem solution and LSQM lies only in the way restrictions are introduced. An incorrect or inconsistent choice of force constants for fitting very often makes the matrix A degenerate; in such cases singular value decomposition or other stable methods must be used.
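A minimal sketch of the pseudoinverse solution: numpy.linalg.pinv performs a singular value decomposition and truncates small singular values, so linear dependencies among the vibrational coordinates do not break the solution, and the minimum-norm solution is returned.

```python
import numpy as np

# Solve a possibly singular or ill-conditioned linear system R x = dL
# through the Moore-Penrose pseudoinverse; singular values below
# rcond * max(singular value) are discarded.
def solve_pinv(R, dL, rcond=1e-10):
    return np.linalg.pinv(R, rcond=rcond) @ dL
```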
These data show that the matrix method and LSQM with the same restrictions on the force field lead to the same result. Since the matrix method is more stable and works without selecting individual force constants, we use this method and its modifications in practical work and have implemented these matrix methods in our software.
General analysis of these equations leads to an interesting result. The widely held view of equation 9 treats L as a set of vectors only. But a very simple transformation of this equation (premultiplication by the inverse or pseudoinverse of the matrix of normal-vibration forms) gives

L^(-1) G F L = Λ. (25)

All matrices on the left-hand side of this equation are square, so their product is a square matrix too. Therefore Λ is a diagonal square matrix whose off-diagonal elements are zero, and these zero off-diagonal elements carry significant information as well. From this point of view the postulate that the data are insufficient to fit the complete force constant matrix is also incorrect. This is an additional argument for using the matrix method for force constant fitting and for solving the inverse vibration problem in our software. Hence, independently of the method of inverse problem solution, the choice of the initial force field is very important for the fitting. The smaller the difference between the initial and fitted force fields, the smaller the deviation of the normal-vibration-form matrices and, thus, the smaller the dependence of the fitted force field on the adopted model. That is one side of the problem. The other side is the correctness of the experimental spectra assignment: all mathematical methods for the inverse vibration problem are meaningless for an incorrect assignment. This is another large problem, which we will not discuss in the present article.
Therefore the problem of the first approximation of the force field is very important. Until the last decade, the main method of force field construction was the transfer of force constants within series of analogous molecules or systems, together with empirical correlations between structure and force constants. These methods were useful for organic compounds but much less successful for inorganic and coordination compounds, which lack homologous series. In our opinion this is the main obstacle to the practical application of computational vibrational spectroscopy in ordinary scientific research.

Inverse vibration problem and methods for force constant scaling
We have described the quantum chemical calculation of the force field as the initial model for experimental spectra assignment and fitting. This method and its software implementation can be used not only with semiempirical quantum chemical methods but with any other quantum chemical method as well. However, all known quantum chemical methods give force fields with systematic frequency shifts relative to experiment. To decrease these shifts we have used the force constant scaling methodology, SQFF (scaled quantum mechanical force field) [30][31][32][33]. This method is based on simple formulas for scaling the diagonal and off-diagonal force constants:

F'_ii = s_i F_ii, F'_ij = (s_i s_j)^(1/2) F_ij. (26)

The method is very simple to apply because it needs only as many scaling factors as there are vibrational coordinates, and it scales the whole force field matrix. The scaling factors for ab initio force fields lie between 0.7 and 0.95, depending on the basis set and the type of internal vibrational coordinate. For semiempirical quantum chemical methods the scaling factors vary within much wider limits and can exceed 1, ranging from 0.35 to 1.45 for some organic compounds [31]. Below we present an alternative realization of this scaling methodology and its relation to the inverse vibration problem solution.
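In matrix form the scaling is a single elementwise multiplication. The sketch below assumes the standard Pulay-type formulas F'_ii = s_i F_ii and F'_ij = sqrt(s_i s_j) F_ij, with one scaling factor per vibrational coordinate.

```python
import numpy as np

# SQFF scaling of a quantum-chemical force constant matrix: one factor
# s_i per vibrational coordinate scales the entire matrix at once.
def scale_force_field(F, s):
    s = np.asarray(s, dtype=float)
    # sqrt(s_i * s_j) gives s_i on the diagonal and the geometric mean
    # for every off-diagonal element automatically.
    return F * np.sqrt(np.outer(s, s))
```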
As shown before (eq. 20), the force constant matrix can be presented as the product of a matrix V and its transpose. Part of the components of V is force-field independent and can be computed only once. The first three components of V form the matrix L_p = L_g Λ^(-1/2) L_f, called the impulse transform matrix. Therefore V can be presented as the impulse transform matrix with its columns scaled by the frequencies. The force constant matrix element f_ij can be presented as the scalar product of rows i and j of V; the diagonal element f_ii is the squared length of row i. This presentation fully satisfies the definition of the force constant matrix [26]: it is positive definite (the length of a real vector is always positive), and the off-diagonal elements obey the Cauchy-Schwarz bound |f_ij| ≤ (f_ii f_jj)^(1/2). With this presentation, the scaling procedure amounts to correcting the length of every row of V by the square root of the scaling factor; the off-diagonal scaling is then obtained automatically when the force constant matrix F is restored by equation 20. Imposing model restrictions such as equality of the diagonal force constants is also very simple: it suffices to make equal the lengths of the rows corresponding to equivalent vibrational coordinates. Numerical tests show that in these cases the off-diagonal elements corresponding to interactions of equivalent vibrational coordinates become very similar, with only small deviations, which demonstrates the good applicability of the proposed method. Combining equations 20 and 28 with the method of their computation, an advanced method for the inverse problem solution was developed: starting from the initial force constant matrix, one solves the direct vibration problem and computes the impulse transform matrix L_p; scaling the columns of L_p by the experimental frequencies in the form Λ_f^(1/2), one obtains the matrix V.
Keeping the length of every column of the matrix V constant and making the row lengths equal for equivalent vibrational coordinates, it is easy to construct a self-consistent process for reconciling the matrix V. To increase the computational accuracy one can also use a differential method: the difference between calculated and experimental eigenvalues is used in equation 28 instead of the experimental eigenvalues Λ_f^(1/2). In this case, however, the positive and negative increments to the force constant matrix must be computed separately, to avoid complex numbers in the software. This incremental method significantly accelerates the solution of the inverse vibration problem and is implemented in our software.
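The row-length manipulation can be sketched as follows (shapes and helper names are assumptions): since f_ij is the scalar product of rows i and j of V, restoring the force constant matrix after rescaling row lengths reproduces the off-diagonal scaling automatically.

```python
import numpy as np

# f_ij is the scalar product of rows i and j of V, i.e. F = V V^T,
# so multiplying row i by sqrt(t_i) scales f_ii by t_i and every
# f_ij by sqrt(t_i * t_j) automatically.
def scale_rows(V, t):
    return V * np.sqrt(np.asarray(t, dtype=float))[:, None]

def restore_force_field(V):
    return V @ V.T
```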

Intensity calculation. Normal coordinate analysis part
We have used three types of experimental vibration spectra: inelastic neutron scattering (INS), infrared (IR) and Raman. To exploit the full potential of the experimental data, computational vibrational spectroscopy should compute all these spectra from the same force field, using different weighting factors to convert the calculated vibrational density of states (VDOS) into the real experimental spectra. In other words, we need the coupling coefficients that convert the VDOS into the measured spectra.
The intensities of all three types of experimental vibration spectra (inelastic neutron scattering, infrared and Raman) are calculated from the Cartesian atomic displacements. The displacement matrix D of the atoms can be evaluated from the internal (dependent) coordinates [20,21] by equation 14 and the known matrices. The intensity program reads the matrix B for the Cartesian-to-internal coordinate conversion from external files together with the impulse transform matrix L_p (L_p = L_g Λ^(-1/2) L_f) and computes the displacement matrix D. This part of the calculation is identical for all three intensity programs.
The INS intensity calculation is based on a slightly simplified version of formula 4.1.56 from [34]. Since we integrate the scattering data over all angles and calculate only relative intensities, we can neglect the absolute scattering factors and the angular dependence. INS measurements are performed at low temperature, so the population of the higher vibrational levels can also be neglected.
The resulting formula for a normal vibration with frequency ω is

I(ω) ∝ Σ_{i,j=1..N} (β_i β_j + δ_ij σ_i σ_j) (d_i · d_j) / (m_i m_j)^(1/2),

where β_i is the coherent neutron scattering cross-section of atom i, σ_i the incoherent cross-section of atom i, m_i the mass of atom i, d_i the displacement vector of atom i, and N the total number of atoms in the system under study. The product β_i β_j represents the coherent part of the neutron cross-section and σ_i σ_j the incoherent part; the delta function δ_ij indicates that the incoherent part contributes only for identical atoms. The coherent and incoherent cross-sections are experimental, tabulated values. Therefore, to compute the INS intensity and the INS spectrum we need only the spatial structure and the force field, or, in other words, only the amplitudes of the atomic displacements, without any additional approximations or parameters. For this reason INS spectra may be called the amplitude-weighted density of vibrational states (AWDS), which directly represents the VDOS. There are no selection rules for INS spectra, so in principle one can observe any vibration of the system under study, including those inactive in the other vibration spectra such as IR or Raman. This is a great advantage of INS spectroscopy on one side and a drawback on the other, because it makes the spectra more complicated. The best approach is therefore to use all kinds of vibrational spectra together.
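The cross-section bookkeeping can be sketched as follows. This is a hedged sketch of a relative per-mode INS intensity, assuming the mass weighting is already carried by the displacement vectors; names and shapes are assumptions, not the paper's code.

```python
import numpy as np

# Relative INS intensity of one normal mode: coherent term over all atom
# pairs plus an incoherent term for identical atoms only (delta_ij).
def ins_intensity(disp, b_coh, s_inc):
    """disp: (N, 3) displacement vectors d_i of one mode;
    b_coh, s_inc: per-atom coherent / incoherent cross-sections."""
    b = np.asarray(b_coh, dtype=float)
    s = np.asarray(s_inc, dtype=float)
    dots = disp @ disp.T                      # d_i . d_j for all pairs
    coherent = (np.outer(b, b) * dots).sum()  # beta_i * beta_j part
    incoherent = (s**2 * (disp**2).sum(axis=1)).sum()  # delta_ij part
    return coherent + incoherent
```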
As noted above, the IR intensity represents the change of the dipole moment of the system during the normal vibration. To compute this dipole moment change we use the derivatives of the total dipole moment δµ/δx and the Cartesian atomic displacements (∆x) from the matrix D.
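This contraction can be sketched as follows (the function name and array shapes are assumptions):

```python
import numpy as np

# IR intensity of one normal mode: contract the Cartesian dipole
# derivatives with the mode displacement to obtain the dipole change,
# then take its squared length.
def ir_intensity(dmu_dx, disp):
    """dmu_dx: (3, 3N) derivatives of the three dipole components with
    respect to the Cartesian coordinates; disp: (3N,) mode displacement."""
    dmu = dmu_dx @ disp      # change of the dipole vector for this mode
    return float(dmu @ dmu)  # intensity ~ |d mu / d Q|^2
```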
For the Raman intensity a similar method is used. According to [21], the Raman intensity is determined by the induced dipole moment P = αE, where α is the polarizability and E is the vector of the external electric field. Using the Cartesian derivatives of the polarizability and the Cartesian displacements of every atom, the Raman intensities are defined by two quantities. Using the better notation of [35], which avoids the confusing clash with the symbols used earlier (α as polarizability versus α as matrix trace), the activity coefficient is

I = 45T^2 + 7G^2 (35)

and the depolarization ratio is

ρ = 3G^2 / (45T^2 + 4G^2).

The notation T and G in these two equations is preferable to the frequently used T ⇒ α and G ⇒ γ, because these quantities are not the same as the polarizability tensor α and the second hyperpolarizability γ used before. Reference [15] gives the relation between the polarizability tensor derivatives α_ij computed by the quantum chemical program QuChem, on one side, and the mean (isotropic) polarizability derivative T (the trace of the derivative polarizability tensor matrix) and the anisotropy G^2 of that tensor, on the other. Using these data it is possible to compute the Raman intensity for the current normal vibration. To simulate band shapes we have used the instrumental broadening function, which is known for the INS spectrometers used (KDSOG and NERA at the Joint Institute for Nuclear Research, Dubna, Russia). For IR and Raman spectra we used an initial band-broadening parameter of 10 cm-1; if necessary, these broadening parameters can then be fitted for better agreement between the theoretical and experimental spectra.
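The two quantities can be sketched for a symmetric 3x3 tensor of polarizability derivatives; the standard textbook expressions for the activity and depolarization ratio are assumed, and the function name is hypothetical.

```python
import numpy as np

# T: mean (isotropic) part of the polarizability-derivative tensor;
# G^2: its anisotropy. Activity I = 45 T^2 + 7 G^2,
# depolarization ratio rho = 3 G^2 / (45 T^2 + 4 G^2).
def raman_activity(alpha_prime):
    a = np.asarray(alpha_prime, dtype=float)  # symmetric 3x3 tensor
    T = np.trace(a) / 3.0
    # For a symmetric tensor: G^2 = (3 * sum(a_ij^2) - (tr a)^2) / 2
    G2 = 0.5 * (3.0 * np.sum(a * a) - np.trace(a) ** 2)
    activity = 45.0 * T**2 + 7.0 * G2
    rho = 3.0 * G2 / (45.0 * T**2 + 4.0 * G2)
    return activity, rho
```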
If all three calculated vibration spectra (INS, IR and Raman) of the model under study are sufficiently similar to the corresponding experimental spectra, the model reproduces the spatial and electronic structure of the real system and may be used with confidence for the explanation and prediction of its properties.

Conclusion
We have illustrated a pathway for computational chemistry investigations and, in particular, for model verification using computational vibrational spectroscopy. A brief review of the computational details of vibration spectra calculation, including force field and intensity calculations, has been given. All these features are implemented in working software: QuChem for the quantum chemical calculations and COSPECO for the vibrational spectroscopy investigation.