- Application of a balanced identification method for gap-filling in CO2 flux data in a sphagnum peat bog
Computer Research and Modeling, 2019, v. 11, no. 1, pp. 153-171. The method of balanced identification was used to describe the response of net ecosystem exchange of CO2 (NEE) to changes in environmental factors and to fill the gaps in continuous CO2 flux measurements in a sphagnum peat bog in the Tver region. The measurements were carried out in the peat bog by the eddy covariance method from August to November 2017. Due to rainy weather and recurrent periods of low atmospheric turbulence, the proportion of gaps in the measured CO2 fluxes at the experimental site exceeded 40% over the entire measurement period. The model developed for gap filling in long-term experimental data treats NEE as the difference between ecosystem respiration (RE) and gross primary production (GPP), i.e. the key processes of ecosystem functioning, and describes their dependence on incoming solar radiation (Q), soil temperature (T), water vapor pressure deficit (VPD) and ground water level (WL). The balanced identification method applied for this purpose searches for the optimal trade-off between model simplicity and goodness of fit to the data, i.e. the ratio that minimizes the modeling error estimated by cross-validation. The resulting numerical solutions have the minimum necessary nonlinearity (curvature), which provides adequate interpolation and extrapolation properties of the developed models. This is particularly important for filling missing values in NEE measurements. Analysis of the temporal variability of NEE and the key environmental factors revealed a statistically significant dependence of GPP on Q, T and VPD, and of RE on T and WL. At the same time, the error of the applied method in simulating mean daily NEE was less than 10%, and the error of its NEE estimates was higher than that of the REddyProc model, which accounts for a smaller number of environmental parameters. Analysis of the gap-filled NEE time series made it possible to derive the diurnal and day-to-day variability of NEE and to obtain cumulative CO2 fluxes in the peat bog for the selected summer-autumn period. It was shown that the rate of CO2 fixation by the peat bog vegetation in August was significantly higher than the rate of ecosystem respiration, whereas from September onward, due to a strong decrease in GPP, the peat bog turned into a consistent source of CO2 for the atmosphere.
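The gap-filling idea above can be illustrated with a minimal sketch: fit a simple flux model to the available measurements and predict NEE where observations are missing. The exponential respiration and rectangular-hyperbola light-response forms below are common conventions, not the balanced identification method itself, and the variable names are assumptions.

```python
# Minimal gap-filling sketch: NEE = RE(T) - GPP(Q), fitted to non-missing values.
# This is NOT the balanced identification method; it only illustrates the idea of
# filling gaps with a response model fitted to the measured fluxes.
import numpy as np
from scipy.optimize import curve_fit

def nee_model(X, r0, q10, alpha, gpp_max):
    """NEE = RE - GPP: exponential respiration minus a rectangular-hyperbola light response."""
    q, t = X
    re = r0 * q10 ** (t / 10.0)                          # ecosystem respiration vs. soil temperature
    gpp = alpha * q * gpp_max / (alpha * q + gpp_max)    # GPP light-response curve
    return re - gpp

def fill_gaps(q, t, nee):
    """Fit the model on non-missing values and predict NEE where it is NaN."""
    ok = ~np.isnan(nee)
    popt, _ = curve_fit(nee_model, (q[ok], t[ok]), nee[ok],
                        p0=[1.0, 2.0, 0.02, 10.0], maxfev=10000)
    filled = nee.copy()
    filled[~ok] = nee_model((q[~ok], t[~ok]), *popt)
    return filled
```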
- Neuro-fuzzy model of fuzzy rules formation for objects state evaluation in conditions of uncertainty
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 477-492. This article addresses the problem of constructing a neuro-fuzzy model that forms fuzzy rules and uses them to evaluate the state of objects under uncertainty. Traditional methods of mathematical statistics or simulation modeling do not allow adequate models of objects to be built under such conditions. Therefore, the solution of many problems is currently based on intelligent modeling technologies that apply fuzzy logic methods. The traditional approach to constructing fuzzy systems requires an expert to formulate the fuzzy rules and to specify the membership functions used in them. To eliminate this drawback, it is relevant to automate the formation of fuzzy rules using machine learning methods and algorithms. One approach to this problem is to build a fuzzy neural network and train it on data characterizing the object under study. Implementing this approach required choosing the type of fuzzy rules with regard to the specifics of the processed data, as well as developing a logical inference algorithm for rules of the selected type. The steps of this algorithm determine the number of layers in the fuzzy neural network structure and their functionality. A training algorithm for the fuzzy neural network was developed. After the network is trained, a system of fuzzy production rules is formed. A software package was implemented on the basis of the developed mathematical framework. With it, studies of the classifying ability of the formed fuzzy rules were conducted on data analysis examples from the UCI Machine Learning Repository. The results showed that the classifying ability of the formed fuzzy rules is not inferior in accuracy to other classification methods. In addition, the logical inference algorithm on fuzzy rules allows successful classification even when part of the initial data is missing. For practical testing, fuzzy rules were generated to assess the state of water pipelines in the oil industry. Based on initial data on 303 water pipelines, a base of 342 fuzzy rules was formed. Their practical approbation showed high efficiency in solving this problem.
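As a rough illustration of inference on fuzzy production rules, the sketch below fires Gaussian-membership rules with a product t-norm and picks the class with the largest activation; features missing from the input are simply skipped, which mirrors the point about classification with incomplete data. The rule format and membership functions are assumptions and do not reproduce the authors' network.

```python
# Illustrative evaluation of fuzzy production rules with Gaussian membership functions.
# The rule structure and parameters below are hypothetical examples.
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

# Each rule: antecedent = {feature: (center, sigma)}, consequent = class label, weight.
rules = [
    ({"pressure": (2.0, 0.5), "age": (15.0, 4.0)}, "high_risk", 1.0),
    ({"pressure": (1.0, 0.5), "age": (5.0, 4.0)},  "low_risk",  1.0),
]

def classify(sample, rules):
    """Fire all rules (product t-norm) and return the class with the largest activation.
    Features absent from the sample are skipped, so inference still works with missing data."""
    scores = {}
    for antecedent, label, weight in rules:
        degrees = [gauss_mf(sample[f], c, s)
                   for f, (c, s) in antecedent.items() if f in sample]
        activation = weight * np.prod(degrees) if degrees else 0.0
        scores[label] = max(scores.get(label, 0.0), activation)
    return max(scores, key=scores.get)

print(classify({"pressure": 1.8, "age": 12.0}, rules))
```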
- Migration processes modelling: methods and tools (overview)
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1205-1232. Migration has a significant impact on the demographic structure of the population of territories and on the state of regional and local labour markets. As a rule, a rapid change in the working-age population of a territory due to migration results in an imbalance of supply and demand on the labour market and a change in the demographic structure of the population. Migration is also, to a large extent, a reflection of the socio-economic processes taking place in society. Hence, issues related to the study of migration factors, the direction, intensity and structure of migration flows, and the prediction of their magnitude are becoming topical.
Mathematical tools are often used to analyze and predict migration processes and to assess their consequences, allowing migration processes in different territories to be modelled quite accurately on the basis of the available statistical data. In recent years, a considerable number of scientific papers on modelling internal and external migration flows using mathematical methods have appeared both in Russia and abroad. Consequently, there is a need to systematize the methods and tools currently most commonly used in migration modelling in order to form a coherent picture of the main trends and research directions in this field.
The presented review considers the main approaches to migration modelling and the main components of the migration modelling methodology: its stages, methods, models and model classification. A comparative analysis of these components was also conducted, and general recommendations on the choice of mathematical tools for modelling were developed. The review contains two sections: migration modelling methods and migration models. The first section describes the main methods used in model development: econometric methods, cellular automata, system dynamics, probabilistic and balance methods, optimization and cluster analysis. Based on the analysis of recent Russian and foreign publications on migration, the most common classes of models (regression, agent-based, simulation, optimization, probabilistic, balance, dynamic and combined) were identified and described. The features, advantages and disadvantages of the different types of migration process models are considered.
- Analysis of predictive properties of ground tremor using Huang decomposition
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 939-958. A method is proposed for analyzing the tremor of the earth's surface, measured by means of space geodesy, in order to identify prognostic effects of seismicity activation. The method is illustrated by a joint analysis of a set of synchronous time series of daily vertical displacements of the earth's surface on the Japanese Islands for the time interval 2009-2023. The analysis is based on dividing the source data (1047 time series) into blocks (clusters of stations) and sequentially applying the principal component method. The station network is divided into clusters by the K-means method using the criterion of maximum pseudo-F-statistics; for Japan the optimal number of clusters was found to be 15. The Huang method of decomposition into a sequence of independent empirical oscillation modes (EMD, Empirical Mode Decomposition) is applied to the time series of principal components from the station blocks. To ensure the stability of the estimated EMD waveforms, averaging over 1000 independent additive realizations of white noise of limited amplitude was performed. Using the Cholesky decomposition of the covariance matrix of the waveforms of the first three EMD components in a sliding time window, indicators of anomalous tremor behavior were determined. By calculating the correlation function between the averaged indicators of anomalous behavior and the released seismic energy in the vicinity of the Japanese Islands, it was established that bursts in the measure of anomalous tremor behavior precede releases of seismic energy. The purpose of the article is to examine the common hypothesis that movements of the earth's crust recorded by space geodesy may contain predictive information. That displacements recorded by geodetic methods respond to the effects of earthquakes is widely known and has been demonstrated many times, but isolating geodetic effects that predict seismic events is much more challenging. In this paper, we propose one method for detecting predictive effects in space geodesy data.
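One step of the described pipeline, the sliding-window anomaly indicator built from the Cholesky factor of the covariance of the first three EMD components, might look roughly like the sketch below. The specific scalar taken from the Cholesky factor (here the sum of its diagonal) is an assumption for illustration; the clustering, PCA and noise-assisted EMD stages are not reproduced.

```python
# Schematic sliding-window indicator from the Cholesky factor of a 3x3 covariance matrix.
# The choice of scalar summary is hypothetical and made only for illustration.
import numpy as np

def sliding_cholesky_indicator(components, window=90):
    """components: array of shape (n_samples, 3) holding the first three EMD modes.
    Returns an indicator series of length n_samples (NaN until the first full window)."""
    n = components.shape[0]
    indicator = np.full(n, np.nan)
    for end in range(window, n + 1):
        block = components[end - window:end]
        cov = np.cov(block, rowvar=False)                 # 3x3 covariance in the window
        L = np.linalg.cholesky(cov + 1e-12 * np.eye(3))   # small regularization for stability
        indicator[end - 1] = L.diagonal().sum()           # scalar measure of the window's "size"
    return indicator
```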
- Modeling of helix formation in peptides containing aspartic and glutamic residues
Computer Research and Modeling, 2010, v. 2, no. 1, pp. 83-90. In the present work, we used molecular dynamics simulations and quantum chemistry methods to study the concept according to which aspartic and glutamic residues play a key role in initiating helix formation in oligopeptides. It was shown that the first turn of the alpha-helix can be formed from various amino acid sequences with Asp and Glu residues at the N-terminus. The thermodynamic properties of this process were analyzed. The obtained results do not contradict the known experimental and statistical data and substantially refine current views on the early stages of peptide folding.
- On the investigation of plasma turbulence by the analysis of the spectra
Computer Research and Modeling, 2012, v. 4, no. 4, pp. 793-802. The article presents examples of analyzing the spectra of experimental data in order to identify typical structures of the processes forming plasma turbulence. The method is based on an original algorithm close to the one-sample bootstrap. The base model for describing the fine structure of the stochastic processes is a finite location-scale normal mixture. Maximum likelihood estimates are obtained with the well-known EM algorithm. The efficiency of the proposed technique is demonstrated on a number of sets of spectra obtained in different regimes of low-frequency plasma turbulence.
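A finite normal mixture fitted by the EM algorithm, the base model mentioned above, can be illustrated with scikit-learn's GaussianMixture; this is not the authors' implementation, and choosing the number of components by BIC is an assumption for the example.

```python
# Illustrative EM fit of a finite normal mixture with model order chosen by BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for values derived from an experimental spectrum.
data = np.concatenate([rng.normal(0.0, 1.0, 500),
                       rng.normal(3.0, 0.5, 300)]).reshape(-1, 1)

best_model, best_bic = None, np.inf
for k in range(1, 6):                                   # try mixtures with 1..5 components
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    bic = gm.bic(data)
    if bic < best_bic:
        best_model, best_bic = gm, bic

print(best_model.n_components,
      best_model.means_.ravel(),
      np.sqrt(best_model.covariances_.ravel()))          # selected order, means, sigmas
```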
- The long-term empirical macro model of world dynamics
Computer Research and Modeling, 2013, v. 5, no. 5, pp. 883-891. The work discusses the methodological basis and the problems of modeling world dynamics, outlines approaches to constructing a new simulation model of global development, and presents simulation results. The model is built on an empirical approach based on statistical analysis of the main socio-economic indicators. The main variables were identified on the basis of this analysis, and dynamic equations (in continuous differential form) were written for them. Dependencies between the variables were selected from the past dynamics of the indicators and from expert assessments, using econometric techniques based on regression analysis. Calculations were performed for the resulting system of dynamic equations; the results are presented as a bundle of trajectories for those indicators that are directly observable and for which statistics are available. This makes it possible to assess the scatter of the trajectories and to understand the predictive capability of the model.
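The workflow of estimating coefficients from past statistics and then integrating a small system of differential equations can be sketched as below. The two-variable model (population and output) and its functional form are invented for illustration and are not the authors' equations; the coefficients stand in for values that would come from regression on historical data.

```python
# Generic sketch: integrate a small system of dynamic equations whose coefficients
# would, in the real workflow, be estimated by regression on past statistics.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, a, b, c, d):
    p, g = y                        # p: population, g: economic output (arbitrary units)
    dp = a * p - b * p * p / g      # growth limited by output per capita
    dg = c * g - d * p              # output growth minus demographic load
    return [dp, dg]

params = (0.02, 0.0005, 0.03, 0.001)        # hypothetical regression-derived coefficients
sol = solve_ivp(rhs, (2010, 2050), [7.0, 100.0], args=params, dense_output=True)
print(sol.y[:, -1])                          # state at the end of the horizon
```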
- On a possible approach to a sport game with continuous time simulation
Computer Research and Modeling, 2014, v. 6, no. 3, pp. 455-460. This paper discusses methods for statistical modeling of the outcomes of sporting events and, in particular, of matches with continuous time. We propose a simulation-based approach to predicting the outcome of a match that is intermediate between purely statistical methods and agent-based simulation of individual players. An example of retrospective prediction is given.
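A simulation-based match prediction of the kind described above might, in its simplest form, model scoring as two Poisson processes and estimate outcome probabilities by Monte Carlo. The rates and the Poisson assumption are illustrative and are not taken from the paper.

```python
# Monte Carlo estimate of match outcome probabilities under a simple Poisson scoring model.
import numpy as np

def match_outcome_probs(rate_home, rate_away, minutes=90, n_sim=100_000, seed=0):
    rng = np.random.default_rng(seed)
    goals_home = rng.poisson(rate_home * minutes / 90, n_sim)   # simulated home goals
    goals_away = rng.poisson(rate_away * minutes / 90, n_sim)   # simulated away goals
    return {"home win": np.mean(goals_home > goals_away),
            "draw":     np.mean(goals_home == goals_away),
            "away win": np.mean(goals_home < goals_away)}

print(match_outcome_probs(1.6, 1.1))
```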
- R/S method application in neurological speech disorders analyses
Computer Research and Modeling, 2014, v. 6, no. 5, pp. 775-791. Based on a modified rescaled range (R/S) computation algorithm, a technique for estimating the Hurst exponent and its characteristic time is proposed. An approach that increases the accuracy and simplifies the automatic calculation of the Hurst exponent is developed. The Hurst exponent and the characteristic time are calculated for power time series of speech signals with various motor pathologies (aphasias and dysarthrias). The results are statistically analyzed, and the correlation between the Hurst exponent and the characteristic time is estimated.
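For reference, a standard (unmodified) rescaled range estimate of the Hurst exponent looks like the sketch below; the paper's modifications and the estimation of the characteristic time are not reproduced.

```python
# Classical R/S estimate of the Hurst exponent: slope of log(R/S) vs. log(window size).
import numpy as np

def hurst_rs(x, min_window=8):
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = np.unique(np.floor(
        np.logspace(np.log10(min_window), np.log10(n // 2), 20)).astype(int))
    log_rs, log_n = [], []
    for w in sizes:
        rs_vals = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())        # cumulative deviations from the mean
            r = dev.max() - dev.min()                # range of cumulative deviations
            s = seg.std()                            # segment standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_rs.append(np.log(np.mean(rs_vals)))
            log_n.append(np.log(w))
    return np.polyfit(log_n, log_rs, 1)[0]           # slope is the Hurst exponent estimate

# White noise should give a value close to 0.5.
print(hurst_rs(np.random.default_rng(1).normal(size=4000)))
```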
- Empirical testing of institutional matrices theory by data mining
Computer Research and Modeling, 2015, v. 7, no. 4, pp. 923-939. The paper aims to identify the set of environmental and infrastructure parameters that have the most significant impact on which institutional matrix dominates in different countries. The parameters of environmental conditions include raw statistical indices derived directly from open-access databases, as well as complex integral indicators obtained by the method of principal components. The usefulness of the discussed parameters for recognizing the dominant type of institutional matrix (X or Y type) was evaluated by a number of machine learning methods. It was revealed that the greatest informational content is associated with parameters characterizing the risk of natural disasters, the level of urbanization and the development of transport infrastructure, and the monthly averages and seasonal variations of temperature and precipitation.
Keywords: institutional matrices theory, machine learning.
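The described workflow, integral indicators via principal components plus machine-learning ranking of parameters for recognizing the dominant matrix type, can be sketched as follows. The feature names, synthetic data and the choice of a random forest are assumptions made only for the example.

```python
# Illustrative sketch: PCA-based integral indicators and feature ranking for X/Y recognition.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["disaster_risk", "urbanization", "transport_density",
            "mean_temperature", "temperature_seasonality", "precipitation"]
X = rng.normal(size=(120, len(features)))                      # stand-in country-level indices
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=120) > 0).astype(int)  # synthetic X/Y label

# Complex integral indicators as the leading principal components of the raw indices.
pca = PCA(n_components=2).fit(X)
print("explained variance of integral indicators:", pca.explained_variance_ratio_)

# Rank the raw parameters by their usefulness for recognizing the dominant matrix type.
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, clf.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:24s} {imp:.3f}")
```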