All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Wavelet transform with the Morlet wavelet: Calculation methods based on a solution of diffusion equations
Computer Research and Modeling, 2009, v. 1, no. 1, pp. 5-12
Two algorithms for evaluating the continuous wavelet transform with the Morlet wavelet are presented. The first solves a PDE in which the transformed signal plays the role of the initial condition. The second explores the influence of central-frequency variation via diffusion smoothing of the data modulated by harmonic functions. Both approaches are illustrated by an analysis of chaotic oscillations in coupled Rössler systems.
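Below is a minimal illustrative sketch (in Python, not the authors' code) of the second approach: the Morlet transform at a given scale is obtained by modulating the signal with a harmonic function and then smoothing it by integrating the diffusion (heat) equation up to the time corresponding to a Gaussian of width equal to the scale.

```python
# Sketch: Morlet CWT at one scale via diffusion smoothing of the modulated signal.
import numpy as np

def morlet_cwt_by_diffusion(f, t, scale, omega0=6.0):
    """Morlet CWT at one scale: heat-equation smoothing of the harmonically modulated data."""
    dt = t[1] - t[0]
    u = f * np.exp(-1j * omega0 * t / scale)        # modulation by the harmonic function
    tau_final = scale**2 / 2.0                       # heat time a^2/2 gives Gaussian width sigma = scale
    dtau = 0.4 * dt**2                               # explicit-Euler stability: dtau < dt^2 / 2
    nsteps = max(int(np.ceil(tau_final / dtau)), 1)
    dtau = tau_final / nsteps
    for _ in range(nsteps):                          # u_tau = u_xx, periodic boundaries for simplicity
        u = u + dtau * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dt**2
    # demodulate and apply the standard Morlet normalization
    return np.pi**-0.25 * np.sqrt(2.0 * np.pi * scale) * np.exp(1j * omega0 * t / scale) * u

# Example: a chirp analyzed at a single scale
t = np.linspace(0.0, 10.0, 1024)
signal = np.cos(2 * np.pi * (1.0 + 0.3 * t) * t)
W = morlet_cwt_by_diffusion(signal, t, scale=0.3)
```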
-
Investigation of the approximation order of invariant differential operators on a movable irregular quadrangular grid
Computer Research and Modeling, 2011, v. 3, no. 4, pp. 353-364
An a priori analysis of the approximation of the magnetohydrodynamic equations on irregular quadrangular grids is performed. The coefficients that determine the norm of the approximation error for the difference analogs of the gradient and divergence operators are calculated, and the influence of the grid-cell properties on this error is studied. The estimates are confirmed numerically by examples of calculations with identical initial data on different grids.
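As a loose illustration of such a numerical order study (a toy example, not the paper's invariant operators), the sketch below measures the observed approximation order of a simple Green-Gauss difference gradient on randomly perturbed quadrangular grids by comparing errors on two refinements; the perturbation amplitude stands in for the "properties of grid cells".

```python
# Toy sketch: observed order of a Green-Gauss gradient on perturbed quadrangular grids.
import numpy as np

def perturbed_grid(n, amp, rng):
    """Logically rectangular grid on [0,1]^2 with interior nodes shifted by ~amp*h."""
    x, y = np.meshgrid(np.linspace(0, 1, n + 1), np.linspace(0, 1, n + 1), indexing="ij")
    h = 1.0 / n
    dx = amp * h * rng.uniform(-1, 1, x.shape)
    dy = amp * h * rng.uniform(-1, 1, y.shape)
    dx[0, :] = dx[-1, :] = dx[:, 0] = dx[:, -1] = 0.0   # keep boundary nodes in place
    dy[0, :] = dy[-1, :] = dy[:, 0] = dy[:, -1] = 0.0
    return x + dx, y + dy

def green_gauss_gradient_error(n, amp, rng):
    x, y = perturbed_grid(n, amp, rng)
    u = np.sin(x) * np.cos(y)                            # smooth test field at the nodes
    # corner values of every cell, ordered counterclockwise
    xs = [x[:-1, :-1], x[1:, :-1], x[1:, 1:], x[:-1, 1:]]
    ys = [y[:-1, :-1], y[1:, :-1], y[1:, 1:], y[:-1, 1:]]
    us = [u[:-1, :-1], u[1:, :-1], u[1:, 1:], u[:-1, 1:]]
    area = np.zeros_like(xs[0])
    gx = np.zeros_like(xs[0])
    gy = np.zeros_like(xs[0])
    for k in range(4):                                   # loop over the four edges of each cell
        x0, x1 = xs[k], xs[(k + 1) % 4]
        y0, y1 = ys[k], ys[(k + 1) % 4]
        ubar = 0.5 * (us[k] + us[(k + 1) % 4])
        area += 0.5 * (x0 * y1 - x1 * y0)                # shoelace formula
        gx += ubar * (y1 - y0)                           # contour integral of u * n ds
        gy -= ubar * (x1 - x0)
    gx, gy = gx / area, gy / area
    xc, yc = sum(xs) / 4, sum(ys) / 4                    # approximate cell centers
    err = np.hypot(gx - np.cos(xc) * np.cos(yc), gy + np.sin(xc) * np.sin(yc))
    return err.max()

rng = np.random.default_rng(0)
e1, e2 = (green_gauss_gradient_error(n, amp=0.2, rng=rng) for n in (32, 64))
print("observed order ~", np.log2(e1 / e2))
```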
-
Conditions of Rice statistical model applicability and estimation of the Rician signal’s parameters by maximum likelihood technique
Computer Research and Modeling, 2014, v. 6, no. 1, pp. 13-25
The paper develops the theory of a new, so-called two-parametric approach to the analysis and processing of random signals. Mathematical simulation and a comparison of the task solutions are carried out for the Gaussian and Rice statistical models. The applicability of the Rice statistical model is substantiated for data and image processing tasks in which the signal's envelope is analyzed. A technique is developed and theoretically substantiated for noise suppression and reconstruction of the initial image by jointly estimating both statistical parameters, the initial signal's mean value and the noise dispersion, using the maximum likelihood method within the Rice distribution. The peculiarities of this distribution's likelihood function, and the resulting possibilities for signal and noise estimation, are analyzed.
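A minimal numerical sketch of such maximum likelihood estimation, assuming SciPy's parameterization of the Rice distribution (shape b = nu/sigma, scale = sigma); this is an illustration, not the authors' implementation:

```python
# Sketch: joint ML estimation of the Rician signal nu and noise sigma from envelope samples.
import numpy as np
from scipy.stats import rice

rng = np.random.default_rng(1)
nu_true, sigma_true = 2.0, 0.7
# Rician envelope: magnitude of a constant signal plus Gaussian noise in both quadratures
samples = np.abs(nu_true + sigma_true * (rng.standard_normal(10_000)
                                         + 1j * rng.standard_normal(10_000)))

# Maximum likelihood fit; loc is fixed at 0 because the Rice distribution starts at zero
b_hat, loc, sigma_hat = rice.fit(samples, floc=0)
nu_hat = b_hat * sigma_hat
print(f"nu ~ {nu_hat:.3f} (true {nu_true}), sigma ~ {sigma_hat:.3f} (true {sigma_true})")
```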
-
Neural network model for determining the human intoxication functional state in some transport safety problems
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 285-293
This article addresses the problem of determining the intoxication-related functional state of vehicle drivers. Its solution is relevant to transport security during pre-trip medical examination. The solution is based on pupillometry, which allows the driver's state to be evaluated from the pupillary reaction to a change in illumination. The task is to determine the driver's state of inebriation from the parameter values of the pupillogram, a time series characterizing the change in pupil size in response to a short light pulse. A neural network is proposed for analyzing the pupillograms, and a neural network model for determining the drivers' intoxication-related functional state is developed. For its training, specially prepared data samples are used: the values of the following pupillary-reaction parameters, grouped into two classes of drivers' functional states: initial diameter, minimum diameter, half-constriction diameter, final diameter, constriction amplitude, constriction rate, dilation rate, latent reaction time, contraction time, dilation time, half-contraction time, and half-dilation time. An example of the initial data is given. Based on their analysis, a neural network model is constructed as a perceptron with twelve input neurons, one hidden layer of twenty-five neurons, and one output neuron. To increase the model's adequacy, the optimal cut-off point between the decision classes at the network output is determined using ROC analysis. A scheme for determining the driver's intoxication state is proposed, which includes the following steps: video registration of the pupillary reaction, construction of the pupillogram, calculation of the parameter values, data analysis with the neural network model, classification of the driver's condition as "norm" or "deviation from the norm", and decision making on the person being examined. The medical worker conducting the examination is presented with the neural network's assessment of the driver's intoxication state and, on its basis, draws a conclusion on admitting the driver to drive the vehicle or removing him from driving. Thus, the neural network model increases the efficiency of pre-trip medical examination by increasing the reliability of the decisions made.
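The following hedged sketch (synthetic data and scikit-learn; the paper does not state an implementation framework) reproduces the described structure: twelve input features, one hidden layer of twenty-five neurons, one output, and a ROC-based cut-off instead of the default 0.5 threshold.

```python
# Sketch: 12-feature -> 25 hidden neurons -> 1 output classifier with ROC-chosen cut-off.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 12))                      # 12 pupillary-reaction parameters (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000, random_state=0)
model.fit(X, y)

scores = model.predict_proba(X)[:, 1]
fpr, tpr, thresholds = roc_curve(y, scores)
cutoff = thresholds[np.argmax(tpr - fpr)]         # Youden's J: one common ROC-based cut-off
decision = (scores >= cutoff).astype(int)         # "norm" vs "deviation from the norm"
print("chosen cut-off:", round(float(cutoff), 3))
```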
-
Modern methods of mathematical modeling of blood flow using reduced order methods
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 581-604
The study of physiological and pathophysiological processes in the cardiovascular system is an important contemporary issue addressed in many works. In this work, several approaches to the mathematical modelling of blood flow are considered; they are based on spatial order reduction and/or a steady-state approach. Attention is paid to the assumptions and simplifications that limit the scope of such models, and typical mathematical formulations are considered together with a brief review of their numerical implementation. In the first part, we discuss models based on full spatial order reduction and/or a steady-state approach. One of the most popular approaches exploits the analogy between the flow of a viscous fluid in elastic tubes and the current in an electrical circuit. Such models can be used as a standalone tool, and they are also used to formulate boundary conditions in models with one-dimensional (1D) and three-dimensional (3D) spatial coordinates. Dynamical compartment models allow haemodynamics to be described over an extended period (on the order of tens of cardiac cycles or more). Steady-state models, which may use either total spatial reduction or two-dimensional (2D) spatial coordinates, are then considered; this approach is used to simulate blood flow in the microcirculation region. In the second part, we discuss models based on spatial order reduction to one dimension. Models of this type require relatively little computational power compared with 3D models, and within this approach it is possible to include all large vessels of the organism. 1D models allow the haemodynamic parameters to be simulated in every vessel included in the model network; the structure and parameters of such a network can be set according to literature data, and methods of medical-data segmentation also exist. The 1D models may be derived from the 3D Navier – Stokes equations either by asymptotic analysis or by integrating them over a volume. The major assumptions are an axisymmetric flow and a constant shape of the velocity profile over a cross-section; these assumptions are somewhat restrictive and arguable. Some current works pay attention to the validation of 1D models, to comparing different 1D models, and to comparing 1D models with clinical data. The obtained results reveal acceptable accuracy, which allows concluding that the 1D approach can be used in medical applications. 1D models describe several dynamical processes, such as pulse wave propagation and Korotkoff sounds. Some physiological factors may be included in 1D models: gravity, muscle contraction forces, regulation and autoregulation.
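As an illustration of the electrical-circuit analogy mentioned in the first part, here is a minimal two-element Windkessel compartment, C dP/dt = Q(t) - P/R, driven by a pulsatile inflow; the parameter values are illustrative assumptions, not taken from the review.

```python
# Sketch: two-element Windkessel (0D circuit analogy) integrated over ~10 cardiac cycles.
import numpy as np
from scipy.integrate import solve_ivp

R = 1.0     # peripheral resistance, mmHg*s/ml (assumed)
C = 1.5     # arterial compliance, ml/mmHg (assumed)
T = 0.8     # cardiac period, s

def inflow(t):
    """Pulsatile inflow: a half-sine ejection during the first 0.3 s of each cycle."""
    phase = t % T
    return 400.0 * np.sin(np.pi * phase / 0.3) if phase < 0.3 else 0.0

def windkessel(t, p):
    return [(inflow(t) - p[0] / R) / C]

sol = solve_ivp(windkessel, (0.0, 10 * T), [80.0], max_step=1e-3)
print("pressure range over the last cycle, mmHg:",
      sol.y[0][sol.t > 9 * T].min(), sol.y[0][sol.t > 9 * T].max())
```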
-
Cellular automata review based on modern domestic publications
Computer Research and Modeling, 2019, v. 11, no. 1, pp. 9-57
The paper analyses Russian publications on cellular automata (CA) issued in 2013–2017, most of which concern mathematical modeling. Scientometric charts for 1990–2017 confirm the relevance of the subject. The review identifies the main researchers and scientific directions/schools in modern Russian science and reveals their originality, or their derivative character, in comparison with world science. Because the authors chose a national rather than a worldwide publication base, the paper claims completeness; about 200 of the 526 references checked are of importance for science.
The Appendix to the review provides preliminary information about CA: the Game of Life, the Garden of Eden theorem, elementary CA (together with de Bruijn diagrams), Margolus block CA, and alternating CA. Attention is paid to three semantic traditions important for modeling, those of von Neumann, Zuse and Tsetlin, as well as to the relationship with the concepts of neural networks and Petri nets. A nominal list of ten works that should be familiar to any specialist in CA is given. Some important works of the 1990s and later are listed in the Introduction.
The body of publications is then divided into categories (the share of each topic in the array is indicated in parentheses): modifications of CA and other network models (29 %), mathematical properties of CA and connections with mathematics (5 %), hardware implementation (3 %), software implementation (5 %), data processing, recognition and cryptography (8 %), mechanics, physics and chemistry (20 %), biology, ecology and medicine (15 %), and economics, urban studies and sociology (15 %). There is an increase in publications on CA in the humanities, as well as the emergence of hybrid approaches that depart from the classic definition of CA.
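For readers unfamiliar with the elementary CA mentioned in the Appendix, here is a small self-contained sketch (not from the review) that evolves Rule 110 from a single seed cell on a periodic lattice.

```python
# Sketch: one-dimensional elementary cellular automaton with a Wolfram rule number.
import numpy as np

def elementary_ca(rule, width=80, steps=40):
    """Evolve an elementary CA; the 8-entry rule table maps each neighbourhood to a new cell."""
    table = np.array([(rule >> k) & 1 for k in range(8)], dtype=np.uint8)
    state = np.zeros(width, dtype=np.uint8)
    state[width // 2] = 1                                    # single seed cell
    history = [state.copy()]
    for _ in range(steps):
        left, right = np.roll(state, 1), np.roll(state, -1)  # periodic boundaries
        state = table[4 * left + 2 * state + right]          # neighbourhood index -> new value
        history.append(state.copy())
    return np.array(history)

for row in elementary_ca(rule=110):
    print("".join("#" if c else "." for c in row))
```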
-
Stable character of the Rice statistical distribution: the theory and application in the tasks of the signals’ phase shift measuring
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 475-485
The paper studies the peculiarities of the Rice statistical distribution that make possible its efficient application to high-precision phase measurements in optics. A strict mathematical proof of the stable character of the Rician distribution is provided for the example of a differential signal: it is proved that the sum or the difference of two Rician signals also obeys the Rice distribution. In addition, formulas are obtained for the parameters of the Rice distribution of the resulting sum or difference signal. Based on the proved stability of the Rice distribution, a new technique for high-precision measurement of the phase shift between two quasi-harmonic signals is elaborated. The technique is grounded in statistical analysis of the measured sampled amplitudes of both signals and of a third signal equal to the difference of the two signals being compared in phase. The sought phase shift is calculated from geometrical considerations as an angle of a triangle whose sides equal the three indicated amplitudes reconstructed against the noise background. Thereby, the proposed differential technique is based on amplitude measurements only, which significantly reduces the demands on the equipment and simplifies practical implementation. The paper provides both a strict mathematical substantiation of the new phase-shift measuring technique and the results of its numerical testing. The elaborated method of high-precision phase measurement may be efficiently applied to a wide range of tasks in various areas of science and technology, in particular in distance measuring, communication systems, navigation, etc.
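A simplified numerical sketch of the geometric idea (not the paper's Rician reconstruction): the phase shift follows from the law of cosines applied to the amplitudes of the two signals and of their difference; here the amplitudes are estimated naively from RMS values, whereas the paper reconstructs them statistically against the noise background.

```python
# Sketch: phase shift of two quasi-harmonic signals from three amplitude estimates.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 20_000)
a1, a2, dphi_true = 1.0, 0.8, 0.6            # amplitudes and the sought phase shift, rad
noise = 0.05
s1 = a1 * np.cos(2 * np.pi * 50 * t) + noise * rng.standard_normal(t.size)
s2 = a2 * np.cos(2 * np.pi * 50 * t - dphi_true) + noise * rng.standard_normal(t.size)

def amplitude(x):
    """Amplitude of a quasi-harmonic signal from its RMS value (A = sqrt(2) * RMS)."""
    return np.sqrt(2.0) * np.std(x)

A1, A2, Ad = amplitude(s1), amplitude(s2), amplitude(s1 - s2)
dphi = np.arccos((A1**2 + A2**2 - Ad**2) / (2 * A1 * A2))   # law of cosines
print(f"estimated phase shift: {dphi:.4f} rad (true {dphi_true} rad)")
```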
-
Optimal threshold selection algorithms for multi-label classification: property study
Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1221-1238
Multi-label classification models arise in various areas of life, which is explained by the increasing amount of information that requires prompt analysis. One of the mathematical methods for solving this problem is the plug-in approach: at the first stage a ranking function is built for each class, ordering all objects in some way, and at the second stage optimal thresholds are selected such that objects on one side are assigned to the current class and objects on the other side are not. The thresholds are chosen to maximize the target quality measure. The algorithms whose properties are investigated in this article address the second stage of the plug-in approach, the choice of the optimal threshold vector. This step becomes non-trivial if the $F$-measure of average precision and recall is used as the target quality measure, since it does not allow independent threshold optimization in each class. In problems of extreme multi-label classification the number of classes can reach hundreds of thousands, so the original optimization problem is reduced to the search for a fixed point of a specially introduced transformation $\boldsymbol V$, defined on the unit square in the plane of average precision $P$ and recall $R$. Using this transformation, two optimization algorithms are proposed: the $F$-measure linearization method and the method of analyzing the domain of $\boldsymbol V$. The properties of the algorithms are studied on multi-label classification data sets of various sizes and origins, in particular the dependence of the error on the number of classes, on the $F$-measure parameter, and on the internal parameters of the methods under study. A peculiarity of both algorithms was found for problems in which the domain of $\boldsymbol V$ contains large linear boundaries: when the optimal point lies in the vicinity of these boundaries, the errors of both methods do not decrease as the number of classes grows. In this case the linearization method determines the argument (polar angle) of the optimal point quite accurately, while the method of analyzing the domain of $\boldsymbol V$ accurately determines its polar radius.
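The sketch below (not the paper's algorithms) only illustrates the objective: the $F$-measure of class-averaged precision and recall for a given threshold vector. The averaging over classes makes the objective non-separable across classes, so a naive coordinate ascent over the thresholds is used here purely as a baseline.

```python
# Sketch: F-measure of macro-averaged precision/recall and a naive threshold search.
import numpy as np

def f_of_averaged_pr(scores, labels, thresholds, beta=1.0, eps=1e-12):
    """F-measure of the macro-averaged precision and recall for a threshold vector."""
    pred = scores >= thresholds                     # (n_objects, n_classes) boolean
    tp = (pred & labels).sum(axis=0)
    precision = tp / np.maximum(pred.sum(axis=0), 1)
    recall = tp / np.maximum(labels.sum(axis=0), 1)
    P, R = precision.mean(), recall.mean()          # averaging couples the classes together
    return (1 + beta**2) * P * R / (beta**2 * P + R + eps)

rng = np.random.default_rng(3)
n, k = 1000, 20
labels = rng.random((n, k)) < 0.1
scores = 0.6 * labels + 0.4 * rng.random((n, k))    # noisy per-class ranking scores

thresholds = np.full(k, 0.5)
grid = np.linspace(0.05, 0.95, 19)
for _ in range(5):                                  # naive coordinate ascent over thresholds
    for j in range(k):
        candidates = [np.where(np.arange(k) == j, g, thresholds) for g in grid]
        values = [f_of_averaged_pr(scores, labels, c) for c in candidates]
        thresholds = candidates[int(np.argmax(values))]
print("F of averaged P, R:", round(f_of_averaged_pr(scores, labels, thresholds), 4))
```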
-
Introduction to the parallelization of algorithms and programs
Computer Research and Modeling, 2010, v. 2, no. 3, pp. 231-272
The differences between software development for parallel computing and sequential programming are discussed. Arguments are given for introducing new phases into the software engineering process: decomposition of algorithms, assignment of jobs to performers, coordination of performers, and mapping of logical performers onto physical ones. Issues of evaluating the performance of algorithms are briefly discussed, as is the decomposition of algorithms and programs into parts that can be executed in parallel.
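An elementary sketch (not from the paper) of the phases listed above: a job is decomposed into independent parts, the parts are assigned to performers (a process pool), and the results are collected; the mapping of logical performers onto physical cores is left to the operating system.

```python
# Sketch: decomposition of a computation and assignment of the parts to worker processes.
from concurrent.futures import ProcessPoolExecutor
import math

def partial_sum(bounds):
    """One logical performer: sum of 1/k^2 over its own index range."""
    lo, hi = bounds
    return sum(1.0 / k**2 for k in range(lo, hi))

if __name__ == "__main__":
    n, parts = 10_000_000, 8
    step = n // parts
    chunks = [(1 + i * step, 1 + (i + 1) * step if i < parts - 1 else n + 1)
              for i in range(parts)]                  # decomposition into independent jobs
    with ProcessPoolExecutor(max_workers=4) as pool:  # assignment of jobs to performers
        total = sum(pool.map(partial_sum, chunks))
    print(total, "vs pi^2/6 =", math.pi**2 / 6)
```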
-
Theoretical substantiation of the mathematical techniques for joint signal and noise estimation in Rician data analysis
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 445-473
The paper provides a solution of the two-parameter task of joint signal and noise estimation for data obeying the Rice distribution, using the techniques of mathematical statistics: the maximum likelihood method and variants of the method of moments. The considered variants of the method of moments include joint signal and noise estimation based on the 2nd and 4th moments (MM24) and based on the 1st and 2nd moments (MM12). For each of the elaborated methods, explicit systems of equations for the required signal and noise parameters are obtained. An important mathematical result of the investigation is that the solution of the system of two nonlinear equations in two unknowns, the sought signal and noise parameters, is reduced to the solution of just one equation in one unknown. This matters both for the theoretical investigation of the proposed techniques and for their practical application, since it substantially decreases the computational resources required for their realization. The theoretical analysis leads to an important practical conclusion: solving the two-parameter task does not increase the required numerical resources compared with the one-parameter approximation. The task is meaningful for Rician data processing, in particular image processing in magnetic resonance imaging systems. The theoretical conclusions are confirmed by the results of a numerical experiment.
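A minimal sketch of the MM24 variant mentioned above, using the standard moment relations of the Rice distribution, E[x^2] = nu^2 + 2 sigma^2 and E[x^4] = nu^4 + 8 sigma^2 nu^2 + 8 sigma^4 (so that nu^4 = 2 E[x^2]^2 - E[x^4]); this is an illustration rather than the paper's derivation.

```python
# Sketch: MM24 joint (nu, sigma) estimate from the 2nd and 4th sample moments.
import numpy as np

def rice_mm24(samples):
    """Joint (nu, sigma) estimate from the 2nd and 4th moments of Rician samples."""
    m2 = np.mean(samples**2)
    m4 = np.mean(samples**4)
    nu = max(2.0 * m2**2 - m4, 0.0) ** 0.25        # clipped in case of sampling noise
    sigma = np.sqrt(max(m2 - nu**2, 0.0) / 2.0)
    return nu, sigma

rng = np.random.default_rng(4)
nu_true, sigma_true = 1.5, 0.5
data = np.abs(nu_true + sigma_true * (rng.standard_normal(50_000)
                                      + 1j * rng.standard_normal(50_000)))
print(rice_mm24(data))   # should be close to (1.5, 0.5)
```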