Model reduction of a large scale system using PCA technique
Citation: Kouadri, A., Zelmat, M. and Namoune, A. (2009). Model reduction of a large scale system using PCA technique. Maltepe Üniversitesi, İnsan ve Toplum Bilimleri Fakültesi. p. 62.
Most of the model reduction techniques proposed in the literature are based on multivariate statistical methods. Linear Principal Component Analysis (PCA) is one of the best-known methods in data analysis. It seeks a subspace of smaller dimension than the initial space and projects the studied data into it with a minimum loss of information; the result is therefore a reduced-dimension representation of the data. To reduce computation when the correlation matrix is large, neural-network formulations of PCA have been proposed. In general, neural-network approaches to PCA are distinguished by two equivalent optimisation criteria for training: maximising the variance of the projected data and minimising the quadratic error of the reconstructed data. Most approaches that use multilayer perceptron networks to obtain the nonlinear PCA model (NLPCA) encounter non-linear optimisation problems, notably difficulties of convergence and of initialisation for this type of network. For this reason, by combining principal curves with Radial Basis Function (RBF) neural networks, we propose an NLPCA approach built from two cascaded three-layer networks. The training problem then reduces to a linear regression in the output-layer weights. The number of nonlinear components to be retained in the NLPCA model is determined by an algorithm based on the accumulated (cumulative) variance.
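The linear PCA step described above — projecting data onto a lower-dimensional subspace and choosing how many components to keep from the accumulated variance — can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the function name `pca_reduce` and the 95% threshold are assumptions for the example.

```python
import numpy as np

def pca_reduce(X, var_threshold=0.95):
    """Illustrative linear PCA: project data onto the leading principal
    components whose accumulated (cumulative) variance reaches
    `var_threshold`, giving a reduced-dimension representation."""
    # Standardise so the decomposition works on the correlation structure.
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    # Eigendecomposition of the (sample) correlation matrix.
    corr = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    # eigh returns eigenvalues in ascending order; sort descending.
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Retain the smallest number of components whose cumulative
    # variance fraction reaches the threshold.
    cum_var = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(cum_var, var_threshold)) + 1
    # Projection into the k-dimensional subspace (k < original dimension).
    T = Xc @ eigvecs[:, :k]
    return T, k, cum_var[:k]
```

On data driven by a few underlying factors, the selection rule keeps only as many components as those factors, which is the dimension-reduction effect the abstract describes.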
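The abstract's point that training "reduces to a linear regression in the output-layer weights" is a general property of RBF networks with fixed hidden units: once the centres and widths are fixed, fitting the output layer is an ordinary least-squares problem, avoiding the non-convex optimisation that MLP-based NLPCA faces. A minimal sketch under those assumptions (the names `train_rbf`/`rbf_predict`, Gaussian kernels, and a shared width are illustrative, not the paper's exact architecture):

```python
import numpy as np

def _activations(X, centers, width):
    # Gaussian hidden-layer activations: one column per RBF centre.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * width ** 2))

def train_rbf(X, Y, centers, width=1.0):
    """With fixed centres/widths, the only free parameters are the
    output-layer weights, found by linear least squares (convex)."""
    H = _activations(X, centers, width)
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W

def rbf_predict(X, centers, W, width=1.0):
    return _activations(X, centers, width) @ W
```

Because the output-layer fit is a closed-form regression, it has none of the convergence or initialisation difficulties mentioned for multilayer perceptrons.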
Source: International Conference of Mathematical Sciences
- Makale Koleksiyonu (Article Collection)