All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Effective rank of the problem of estimating a function from error-contaminated measurements of a finite number of its linear functionals
Computer Research and Modeling, 2014, v. 6, no. 2, pp. 189-202
The problem of reconstructing an element f of the Euclidean function space L2(X) from measurements of a finite set of its linear functionals distorted by (random) error is solved. No a priori data are assumed. A family of linear subspaces of maximum (effective) dimension is obtained for which the projections of the element f onto them can be estimated with a given accuracy. The effective rank ρ(δ) of the estimation problem is defined as the function equal to the maximum dimension of an orthogonal component Pf of the element f that can be estimated with an error not exceeding the value δ. An example of reconstructing a radiation spectrum from a finite set of experimental data is given.
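As a hedged illustration (my own sketch, not the paper's construction): for a linear measurement model g = Af + noise, a standard SVD heuristic bounds the number of components of f recoverable with error at most δ by the number of singular values of A exceeding δ.

```python
import numpy as np

def effective_rank(A, delta):
    """Count the singular values of the measurement operator A that
    exceed the error level delta: components along the corresponding
    singular directions can be estimated with error <= delta."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > delta))

# Toy operator with rapidly decaying singular values.
A = np.diag([10.0, 5.0, 1.0, 0.1, 0.01])
print(effective_rank(A, 0.5))  # 3: only three directions survive the noise
```

The effective dimension shrinks monotonically as the admissible error δ grows, mirroring the behavior of ρ(δ) described in the abstract.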
-
Conditions of Rice statistical model applicability and estimation of the Rician signal’s parameters by maximum likelihood technique
Computer Research and Modeling, 2014, v. 6, no. 1, pp. 13-25
The paper develops the theory of a new, so-called two-parametric approach to the analysis and processing of random signals. Mathematical simulation has been carried out and the resulting solutions compared for the Gauss and Rice statistical models. The applicability of the Rice statistical model is substantiated for data- and image-processing tasks in which the signal envelope is analyzed. A technique is developed and theoretically substantiated for noise suppression and initial image reconstruction by jointly estimating both statistical parameters, the initial signal's mean value and the noise dispersion, with the maximum likelihood method within the Rice distribution. The peculiarities of the likelihood function of this distribution, and the possibilities of signal and noise estimation that follow from them, are analyzed.
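The joint maximum-likelihood estimation of the two Rician parameters can be sketched with SciPy's generic numerical MLE (a minimal illustration with synthetic data, not the paper's own algorithm):

```python
import numpy as np
from scipy.stats import rice

# Synthetic Rician envelope data: true signal amplitude nu, noise scale sigma.
nu, sigma = 3.0, 1.0
sample = rice.rvs(nu / sigma, scale=sigma, size=5000,
                  random_state=np.random.default_rng(1))

# Joint ML fit of both parameters (location pinned at 0); SciPy's shape
# parameter is b = nu / sigma, its scale is sigma.
b_hat, loc, scale_hat = rice.fit(sample, floc=0)
nu_hat, sigma_hat = b_hat * scale_hat, scale_hat
print(nu_hat, sigma_hat)  # close to the true (3.0, 1.0)
```

With 5000 samples both estimates land near the true values, which is the regime in which the two-parametric approach is designed to operate.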
-
A neural network model for determining the functional state of human intoxication in some transport safety problems
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 285-293
This article solves the problem of determining the intoxication functional state of vehicle drivers. Its solution is relevant to transport security during pre-trip medical examination. The solution is based on the pupillometry method, which evaluates the driver's state by the pupillary reaction to a change in illumination. The task is to determine the state of driver inebriation by analyzing the parameter values of the pupillogram, a time series characterizing the change in pupil dimensions upon exposure to a short light pulse. A neural network is proposed for analyzing the pupillograms, and a neural network model for determining the drivers' intoxication functional state is developed. For its training, specially prepared data samples are used, containing the values of the following pupillary-reaction parameters grouped into two classes of drivers' functional states: initial diameter, minimum diameter, half-constriction diameter, final diameter, constriction amplitude, constriction rate, expansion rate, latent reaction time, contraction time, expansion time, half-contraction time, and half-expansion time. An example of the initial data is given. Based on their analysis, a neural network model is constructed as a perceptron with twelve input neurons, twenty-five hidden-layer neurons, and one output neuron. To increase the model's adequacy, the optimal cut-off point between the solution classes at the network output is determined by ROC analysis.
A scheme for determining the driver's intoxication state is proposed, which includes the following steps: video registration of the pupillary reaction, pupillogram construction, calculation of parameter values, data analysis with the neural network model, classification of the driver's condition as "norm" or "deviation from the norm", and decision-making about the person being examined. The medical worker conducting the examination is presented with the neural network's assessment of the driver's intoxication state. On the basis of this assessment, a conclusion on admitting the driver to, or removing the driver from, driving the vehicle is drawn. Thus, the neural network model increases the efficiency of pre-trip medical examination by increasing the reliability of the decisions made.
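The ROC-based choice of the cut-off point can be sketched as follows (a generic illustration using Youden's J statistic; the abstract does not specify which ROC criterion the authors use):

```python
import numpy as np

def optimal_cutoff(scores, labels):
    """Youden's J: the cut-off on the network output maximizing TPR - FPR."""
    pos, neg = labels == 1, labels == 0
    best_t, best_j = scores[0], -1.0
    for t in np.unique(scores):           # candidate thresholds
        tpr = np.mean(scores[pos] >= t)   # true-positive rate at t
        fpr = np.mean(scores[neg] >= t)   # false-positive rate at t
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t

# Toy network outputs: "norm" (0) scores low, "deviation" (1) scores high.
scores = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9])
labels = np.array([0,   0,   0,   0,   1,   1,   1,   1])
print(optimal_cutoff(scores, labels))  # 0.6 separates the classes perfectly
```

On real pupillogram data the classes overlap, and the chosen cut-off trades sensitivity against specificity instead of separating them exactly.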
-
Modern methods of mathematical modeling of blood flow using reduced order methods
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 581-604
The study of physiological and pathophysiological processes in the cardiovascular system is an important contemporary issue addressed in many works. In this work, several approaches to the mathematical modelling of blood flow are considered. They are based on spatial order reduction and/or use a steady-state approach. Attention is paid to discussing the assumptions that limit the scope of such models. Some typical mathematical formulations are considered, together with a brief review of their numerical implementation. The first part discusses models based on full spatial order reduction and/or a steady-state approach. One of the most popular approaches exploits the analogy between the flow of a viscous fluid in elastic tubes and the current in an electrical circuit. Such models can be used as standalone tools, and they are also used to formulate boundary conditions in models using one-dimensional (1D) and three-dimensional (3D) spatial coordinates. Dynamical compartment models allow describing haemodynamics over an extended period (on the order of tens of cardiac cycles and more). Then the steady-state models are considered. They may use either total spatial reduction or two-dimensional (2D) spatial coordinates. This approach is used to simulate blood flow in the microcirculation region. The second part discusses models based on spatial order reduction to one dimension. Models of this type require relatively little computational power compared to 3D models. Within this approach it is also possible to include all large vessels of the organism. The 1D models allow simulating the haemodynamic parameters in every vessel included in the model network.
The structure and parameters of such a network can be set according to literature data; methods of medical data segmentation also exist. The 1D models may be derived from the 3D Navier – Stokes equations either by asymptotic analysis or by integrating them over a volume. The major assumptions are symmetric flow and a constant shape of the velocity profile over a cross-section. These assumptions are somewhat restrictive and arguable. Some current works pay attention to validating 1D models, to comparing different 1D models, and to comparing 1D models with clinical data. The obtained results reveal acceptable accuracy, which allows concluding that the 1D approach can be used in medical applications. 1D models can describe several dynamical processes, such as pulse wave propagation and Korotkov's tones. Some physiological conditions may be included in the 1D models: gravity, muscle contraction force, regulation and autoregulation.
-
Cellular automata review based on modern domestic publications
Computer Research and Modeling, 2019, v. 11, no. 1, pp. 9-57Views (last year): 58.The paper contains the analysis of the domestic publications issued in 2013–2017 years and devoted to cellular automata. The most of them concern on mathematical modeling. Scientometric schedules for 1990–2017 years have proved relevance of subject. The review allows to allocate the main personalities and the scientific directions/schools in modern Russian science, to reveal their originality or secondness in comparison with world science. Due to the authors choice of national publications basis instead of world, the paper claims the completeness and the fact is that about 200 items from the checked 526 references have an importance for science.
In the Annex to the review provides preliminary information about CA — the Game of Life, a theorem about gardens of Eden, elementary CAs (together with the diagram of de Brujin), block Margolus’s CAs, alternating CAs. Attention is paid to three important for modeling semantic traditions of von Neumann, Zuse and Zetlin, as well as to the relationship with the concepts of neural networks and Petri nets. It is allocated conditional 10 works, which should be familiar to any specialist in CA. Some important works of the 1990s and later are listed in the Introduction.
Then the crowd of publications is divided into categories: the modification of the CA and other network models (29 %), Mathematical properties of the CA and the connection with mathematics (5 %), Hardware implementation (3 %), Software implementation (5 %), Data Processing, recognition and Cryptography (8 %), Mechanics, physics and chemistry (20 %), Biology, ecology and medicine (15 %), Economics, urban studies and sociology (15 %). In parentheses the share of subjects in the array are indicated. There is an increase in publications on CA in the humanitarian sphere, as well as the emergence of hybrid approaches, leading away from the classic CA definition.
-
Development of network computational models for the study of nonlinear wave processes on graphs
Computer Research and Modeling, 2019, v. 11, no. 5, pp. 777-814In various applications arise problems modeled by nonlinear partial differential equations on graphs (networks, trees). In order to study such problems and various extreme situations arose in the problems of designing and optimizing networks developed the computational model based on solving the corresponding boundary problems for partial differential equations of hyperbolic type on graphs (networks, trees). As applications, three different problems were chosen solved in the framework of the general approach of network computational models. The first was modeling of traffic flow. In solving this problem, a macroscopic approach was used in which the transport flow is described by a nonlinear system of second-order hyperbolic equations. The results of numerical simulations showed that the model developed as part of the proposed approach well reproduces the real situation various sections of the Moscow transport network on significant time intervals and can also be used to select the most optimal traffic management strategy in the city. The second was modeling of data flows in computer networks. In this problem data flows of various connections in packet data network were simulated as some continuous medium flows. Conceptual and mathematical network models are proposed. The numerical simulation was carried out in comparison with the NS-2 network simulation system. The results showed that in comparison with the NS-2 packet model the developed streaming model demonstrates significant savings in computing resources while ensuring a good level of similarity and allows us to simulate the behavior of complex globally distributed IP networks. The third was simulation of the distribution of gas impurities in ventilation networks. 
It was developed the computational mathematical model for the propagation of finely dispersed or gas impurities in ventilation networks using the gas dynamics equations by numerical linking of regions of different sizes. The calculations shown that the model with good accuracy allows to determine the distribution of gas-dynamic parameters in the pipeline network and solve the problems of dynamic ventilation management.
-
Optimal threshold selection algorithms for multi-label classification: property study
Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1221-1238Multi-label classification models arise in various areas of life, which is explained by an increasing amount of information that requires prompt analysis. One of the mathematical methods for solving this problem is a plug-in approach, at the first stage of which, for each class, a certain ranking function is built, ordering all objects in some way, and at the second stage, the optimal thresholds are selected, the objects on one side of which are assigned to the current class, and on the other — to the other. Thresholds are chosen to maximize the target quality measure. The algorithms which properties are investigated in this article are devoted to the second stage of the plug-in approach which is the choice of the optimal threshold vector. This step becomes non-trivial if the $F$-measure of average precision and recall is used as the target quality assessment since it does not allow independent threshold optimization in each class. In problems of extreme multi-label classification, the number of classes can reach hundreds of thousands, so the original optimization problem is reduced to the problem of searching a fixed point of a specially introduced transformation $\boldsymbol V$, defined on a unit square on the plane of average precision $P$ and recall $R$. Using this transformation, two algorithms are proposed for optimization: the $F$-measure linearization method and the method of $\boldsymbol V$ domain analysis. The properties of algorithms are studied when applied to multi-label classification data sets of various sizes and origin, in particular, the dependence of the error on the number of classes, on the $F$-measure parameter, and on the internal parameters of methods under study. The peculiarity of both algorithms work when used for problems with the domain of $\boldsymbol V$, containing large linear boundaries, was found. 
In case when the optimal point is located in the vicinity of these boundaries, the errors of both methods do not decrease with an increase in the number of classes. In this case, the linearization method quite accurately determines the argument of the optimal point, while the method of $\boldsymbol V$ domain analysis — the polar radius.
-
Simulation of turbulent compressible flows in the FlowVision software
Computer Research and Modeling, 2023, v. 15, no. 4, pp. 805-825Simulation of turbulent compressible gas flows using turbulence models $k-\varepsilon$ standard (KES), $k-\varepsilon$ FlowVision (KEFV) and SST $k-\omega$ is discussed in the given article. A new version of turbulence model KEFV is presented. The results of its testing are shown. Numerical investigation of the discharge of an over-expanded jet from a conic nozzle into unlimited space is performed. The results are compared against experimental data. The dependence of the results on computational mesh is demonstrated. The dependence of the results on turbulence specified at the nozzle inlet is demonstrated. The conclusion is drawn about necessity to allow for compressibility in two-parametric turbulence models. The simple method proposed by Wilcox in 1994 suits well for this purpose. As a result, the range of applicability of the three aforementioned two-parametric turbulence models is essentially extended. Particular values of the constants responsible for the account of compressibility in the Wilcox approach are proposed. It is recommended to specify these values in simulations of compressible flows with use of models KES, KEFV, and SST.
In addition, the question how to obtain correct characteristics of supersonic turbulent flows using two-parametric turbulence models is considered. The calculations on different grids have shown that specifying a laminar flow at the inlet to the nozzle and wall functions at its surfaces, one obtains the laminar core of the flow up to the fifth Mach disk. In order to obtain correct flow characteristics, it is necessary either to specify two parameters characterizing turbulence of the inflowing gas, or to set a “starting” turbulence in a limited volume enveloping the region of presumable laminar-turbulent transition next to the exit from the nozzle. The latter possibility is implemented in model KEFV.
-
Quantile shape measures for heavy-tailed distributions
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1041-1077Currently, journal papers contain numerous examples of the use of heavy-tailed distributions for applied research on various complex systems. Models of extreme data are usually limited to a small set of distribution shapes that in this field of applied research historically been used. It is possible to increase the composition of the set of probability distributions shapes through comparing the measures of the distribution shapes and choosing the most suitable implementations. The example of a beta distribution of the second kind shown that the lack of definability of the moments of heavy-tailed implementations of the beta family of distributions limits the applicability of the existing classical methods of moments for studying the distributions shapes when are characterized heavy tails. For this reason, the development of new methods for comparing distributions based on quantile shape measures free from the restrictions on the shape parameters remains relevant study the possibility of constructing a space of quantile measures of shapes for comparing distributions with heavy tails. The operation purpose consists in computer research of creation possibility of space of the quantile’s measures for the comparing of distributions property with heavy tails. On the basis of computer simulation there the distributions implementations in measures space of shapes were been shown. Mapping distributions in space only of the parametrical measures of shapes has shown that the imposition of regions for heavy tails distribution made impossible compare the shape of distributions belonging to different type in the space of quantile measures of skewness and kurtosis. It is well known that shape information measures such as entropy and entropy uncertainty interval contain additional information about the shape measure of heavy-tailed distributions. 
In this paper, a quantile entropy coefficient is proposed as an additional independent measure of shape, which is based on the ratio of entropy and quantile uncertainty intervals. Also estimates of quantile entropy coefficients are obtained for a number of well-known heavy-tailed distributions. The possibility of comparing the distributions shapes with realizations of the beta distribution of the second kind is illustrated by the example of the lognormal distribution and the Pareto distribution. Due to mapping the position of stable distributions in the three-dimensional space of quantile measures of shapes estimate made it possible the shape parameters to of the beta distribution of the second kind, for which shape is closest to the Lévy shape. From the paper material it follows that the display of distributions in the three-dimensional space of quantile measures of the forms of skewness, kurtosis and entropy coefficient significantly expands the possibility of comparing the forms for distributions with heavy tails.
-
On application of the asymptotic tests for estimating the number of mixture distribution components
Computer Research and Modeling, 2012, v. 4, no. 1, pp. 45-53Views (last year): 1. Citations: 2 (RSCI).The paper demonstrates the efficiency of asymptotically most powerful test of statistical hypotheses about the number of mixture components in the adding and splitting component models. Test data are the samples from different finite normal mixtures. The results are compared for various significance levels and weights.
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
The journal is included in the RSCI
International Interdisciplinary Conference "Mathematics. Computing. Education"