All issues
- 2025 Vol. 17
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Effective rank of a problem of function estimation based on measurement with an error of finite number of its linear functionals
Computer Research and Modeling, 2014, v. 6, no. 2, pp. 189-202
The problem of restoring an element f of the Euclidean function space L2(X) from measurements of a finite set of its linear functionals distorted by (random) error is solved. No a priori data are assumed. A family of linear subspaces of the maximum (effective) dimension is obtained for which the projections of the element f onto them can be estimated with a given accuracy. The effective rank ρ(δ) of the estimation problem is defined as the function equal to the maximum dimension of an orthogonal component Pf of the element f that can be estimated with an error not exceeding the value δ. An example of restoring a radiation spectrum from a finite set of experimental data is given.
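The effective-rank idea can be illustrated with a small sketch: for a linear measurement model y = Af + noise, the singular values of the measurement operator A determine along which directions a projection of f can be estimated. The threshold criterion below (per-direction error ≈ noise_level / s_k) and all parameter values are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def effective_rank(A, noise_level, delta):
    """Count singular directions of the measurement operator A along
    which a projection of f can be estimated with error at most delta,
    assuming measurement noise of size noise_level (illustrative
    criterion, not the paper's definition)."""
    s = np.linalg.svd(A, compute_uv=False)
    # Along a direction with singular value s_k, the estimation error
    # scales like noise_level / s_k; keep directions where this error
    # does not exceed delta.
    return int(np.sum(noise_level / s <= delta))

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))  # 20 linear functionals of f in R^50
rho = effective_rank(A, noise_level=0.1, delta=1.0)
```

Shrinking delta shrinks the set of estimable directions, which matches the monotone dependence of ρ(δ) on δ described in the abstract.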
-
Tool for integration of heterogeneous models and its application to loosely coupled sets of differential equations
Computer Research and Modeling, 2009, v. 1, no. 2, pp. 127-136
We develop a software tool for integrating dynamics models that are inhomogeneous in their mathematical properties and/or in their requirements on the time step. A family of algorithms for the parallel computation of heterogeneous models with different time steps is proposed. Analytical estimates and direct measurements of the error of these algorithms are made for weakly coupled ODE sets. The advantage of the algorithms in time cost compared with accurate methods is shown.
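The multirate idea for weakly coupled systems can be sketched as follows: the fast component is advanced with several substeps per macro step while the slow component's value is frozen. The example system, the explicit-Euler scheme, and all coefficients are assumptions for illustration, not the algorithms of the paper.

```python
# Multirate sketch for an assumed weakly coupled pair:
#   slow: x' = -x + eps*y,   fast: y' = -50*y + eps*x.
def multirate_step(x, y, h, n_sub, eps=0.01):
    x_new = x + h * (-x + eps * y)   # one macro step for the slow model
    h_f = h / n_sub
    for _ in range(n_sub):           # n_sub substeps for the fast model,
        y = y + h_f * (-50.0 * y + eps * x)  # slow value x held frozen
    return x_new, y

x, y = 1.0, 1.0
for _ in range(100):
    x, y = multirate_step(x, y, h=0.01, n_sub=10)
# both components decay toward zero; the fast one much sooner
```

Because the coupling is weak (eps small), freezing the slow variable over a macro step introduces only a small splitting error, which is what makes such schemes cheaper than integrating the whole system at the fast step.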
-
Optimal threshold selection algorithms for multi-label classification: property study
Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1221-1238
Multi-label classification models arise in various areas of life, which is explained by the ever-increasing amount of information that requires prompt analysis. One mathematical method for solving this problem is the plug-in approach: at its first stage, a ranking function is built for each class, ordering all objects in some way, and at the second stage optimal thresholds are selected, with objects on one side assigned to the current class and objects on the other side not. The thresholds are chosen to maximize a target quality measure. The algorithms whose properties are investigated in this article address the second stage of the plug-in approach: the choice of the optimal threshold vector. This step becomes non-trivial when the F-measure of average precision and recall is used as the target quality measure, since it does not allow independent threshold optimization in each class. In extreme multi-label classification problems the number of classes can reach hundreds of thousands, so the original optimization problem is reduced to searching for a fixed point of a specially introduced transformation V, defined on the unit square in the plane of average precision P and recall R. Using this transformation, two optimization algorithms are proposed: the F-measure linearization method and the method of analyzing the domain of V. The properties of the algorithms are studied on multi-label classification data sets of various sizes and origins, in particular the dependence of the error on the number of classes, on the F-measure parameter, and on the internal parameters of the methods under study. A peculiarity of both algorithms was found on problems whose domain of V contains large linear boundaries.
When the optimal point lies in the vicinity of these boundaries, the errors of both methods do not decrease as the number of classes grows. In this case the linearization method determines the argument of the optimal point quite accurately, while the method of analyzing the domain of V accurately determines the polar radius.
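The coupling that makes the second stage non-trivial can be seen in a minimal sketch: the micro-averaged F-measure pools precision and recall over all classes, so the best threshold for one class depends on the others. The synthetic data, the shared-threshold grid search, and all names below are illustrative assumptions, not the paper's transformation V or its two algorithms.

```python
import numpy as np

def micro_f(scores, labels, thresholds, beta=1.0):
    """Micro-averaged F-measure for a multi-label problem: precision P
    and recall R are pooled over all classes, so thresholds cannot be
    optimized independently per class."""
    pred = scores >= thresholds           # one threshold per class
    tp = np.sum(pred & (labels == 1))
    fp = np.sum(pred & (labels == 0))
    fn = np.sum(~pred & (labels == 1))
    P = tp / (tp + fp) if tp + fp else 0.0
    R = tp / (tp + fn) if tp + fn else 0.0
    if P + R == 0:
        return 0.0
    return (1 + beta**2) * P * R / (beta**2 * P + R)

rng = np.random.default_rng(1)
labels = (rng.random((200, 5)) < 0.3).astype(int)
scores = 0.6 * labels + 0.4 * rng.random((200, 5))  # separable scores
# coarse grid search over a shared threshold (illustration only)
best = max((micro_f(scores, labels, np.full(5, t)), t)
           for t in np.linspace(0.05, 0.95, 19))
```

With hundreds of thousands of classes such a direct search over threshold vectors is infeasible, which motivates the fixed-point reformulation in the (P, R) plane described in the abstract.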
-
Numerical simulation of the propagation of probing pulses in a dense bed of a granular medium
Computer Research and Modeling, 2024, v. 16, no. 6, pp. 1361-1384
The need to model high-speed flows of compressible media with shock waves in the presence of dense curtains or layers of particles arises when studying various processes, such as the dispersion of particles from a layer behind a shock wave or the propagation of combustion waves in heterogeneous explosives. These directions have been developed successfully over the past few decades, but the corresponding mathematical models and computational algorithms continue to be actively improved. The mechanisms of wave processes in two-phase media differ between models, so it is important to continue studying and improving these models.
The paper is devoted to the numerical study of the propagation of disturbances inside a sand bed under successive impacts of a normally incident air shock wave. The setting of the problem follows the experiments of A. T. Akhmetov and co-authors. The aim of this study is to investigate possible reasons for the signal amplification at the pressure sensor within the bed observed under some conditions in the experiments. The mathematical model is based on a one-dimensional system of Baer–Nunziato equations describing dense flows of two-phase media with allowance for intergranular stresses in the particle phase. The computational algorithm is based on the Godunov method for the Baer–Nunziato equations.
The paper describes the dynamics of waves inside and outside the particle bed after the first and second pressure pulses are applied to it. The main components of the flow within the bed are filtration waves in the gas phase and compaction waves in the solid phase. The compaction wave generated by the first pulse and reflected from the walls of the shock tube interacts with the filtration wave caused by the second pulse. As a result, the signal measured by the pressure sensor inside the bed has a sharp peak, explaining the new effect observed in the experiments.
-
Research and reduction of mathematical model of chemical reaction by Sobol’ method
Computer Research and Modeling, 2016, v. 8, no. 4, pp. 633-646
A technique for simplifying the mathematical model of a chemical reaction by reducing the number of steps in the reaction scheme is proposed, based on analyzing the sensitivity of an objective functional to changes in the model parameters. The functional characterizes the proximity of the values calculated for the initial kinetic scheme to those obtained after perturbing its parameters. The advantage of this technique is the ability to analyze complex kinetic schemes and to reduce kinetic models to a size suitable for practical use. Results of computational experiments under different reaction conditions can be included in the functional, so that the reduced scheme remains consistent with the detailed scheme over the desired range of conditions. Sensitivity analysis of the functional identifies the parameters that make the largest (or smallest) contribution to the result of the process simulation. A mathematical model may contain parameters whose variation affects neither the qualitative nor the quantitative description of the process; the contribution of such parameters to the value of the functional is small, so the stages that are not needed for modeling the kinetic curves of the target substances can be eliminated from consideration. The kinetic scheme of formaldehyde oxidation, whose detailed mechanism includes 25 stages and 15 substances, was investigated by this method. On the basis of local and global sensitivity analysis, the stages most important for the overall dynamics of the target concentrations were identified, and a reduced scheme of the model reaction was obtained.
This scheme describes the behavior of the main substances as well as the detailed scheme does, but with a much smaller number of reaction stages. Results of a comparative analysis of formaldehyde oxidation modeling with the detailed and reduced schemes are given. Computational aspects of applying Sobol's global method to problems of chemical kinetics are illustrated with this reaction, and the local, global, and total sensitivity indices are compared.
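The global sensitivity machinery behind such a reduction can be sketched with a generic Monte Carlo (pick-freeze) estimator of first-order Sobol indices. The test function f = x1 + 2·x2 and the Jansen estimator below are standard illustrative choices, not the paper's 25-stage kinetic model.

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    S_i = V_i / Var(f), using the Jansen estimator
    V_i = Var(f) - 0.5 * E[(f(B) - f(A_B^i))^2],
    where A_B^i is sample A with column i taken from sample B."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S[i] = (var - 0.5 * np.mean((fB - f(ABi)) ** 2)) / var
    return S

# For f = x1 + 2*x2 with x_i ~ U(0,1), the exact indices are
# S1 = (1/12)/(5/12) = 0.2 and S2 = (4/12)/(5/12) = 0.8.
S = sobol_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
```

Stages whose rate constants receive small indices contribute little to the functional and are candidates for elimination, which is exactly the reduction logic described in the abstract.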
-
Conversion of the initial indices of the technological process of the smelting of steel for the subsequent simulation
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 187-199
Production efficiency depends directly on the quality of process control, which in turn relies on the accuracy and efficiency of processing control and measurement information. Developing mathematical methods for studying system relationships and regularities of operation, building mathematical models that account for the structural features of the object under study, and writing software that implements these methods are therefore relevant tasks. Practice shows that the list of parameters involved in studying a complex object of modern production ranges from a few dozen to several hundred items, and the degree of influence of each factor is initially unclear. Determining a model directly under these circumstances is impossible: the amount of required information may be too large, and most of the work of collecting it would be done in vain, because the degree of influence of most factors in the original list on the optimization turns out to be negligible. A necessary step in modeling a complex object is therefore reducing the dimension of the factor space. Most industrial plants involve hierarchical group processes of mass production characterized by hundreds of factors. (Data from the Moldavian steel works were used to implement the mathematical methods and test the constructed models.) To study the system relationships and operating regularities of such complex objects, several informative parameters are usually selected and sampled.
This article describes the sequence of transformations that bring the initial indices of the steel-smelting process into a form suitable for building a predictive mathematical model. This work also lays a basis for developing an automated product quality management system. The following stages are distinguished: collection and analysis of the raw data, construction of a table of correlated parameters, and reduction of the factor space by means of correlation pleiads and the method of weight factors. The results obtained make it possible to streamline the construction of a model of the multifactor process.
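The factor-space reduction step can be sketched in a simplified form: group strongly correlated factors and keep one representative per group. The greedy rule, the 0.9 cutoff, and the synthetic data below are illustrative assumptions standing in for the correlation-pleiad and weight-factor procedure of the article.

```python
import numpy as np

def reduce_by_correlation(X, names, r_max=0.9):
    """Keep a factor only if its correlation with every already-kept
    factor is at most r_max in absolute value -- a simplified stand-in
    for grouping factors into correlation pleiads and keeping one
    representative per pleiad."""
    R = np.corrcoef(X, rowvar=False)
    kept = []
    for j in range(X.shape[1]):
        if all(abs(R[j, k]) <= r_max for k in kept):
            kept.append(j)
    return [names[k] for k in kept]

rng = np.random.default_rng(2)
a = rng.standard_normal(500)
b = a + 0.01 * rng.standard_normal(500)   # near-duplicate of a
c = rng.standard_normal(500)              # independent factor
X = np.column_stack([a, b, c])
kept = reduce_by_correlation(X, ["a", "b", "c"])  # b is dropped
```

After such a reduction, only the surviving factors need to be measured and fed into the predictive model, which is the point of the dimensionality-reduction stage described above.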
-
Four-factor computing experiment for the random walk on a two-dimensional square field
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 905-918
Nowadays random search has become a widespread and effective tool for solving complex optimization and adaptation problems. In this work, the average duration of a random search by one object for another on a square field is studied as a function of various factors. The problem was solved by conducting a full experiment with 4 factors and an orthogonal plan with 54 rows. Within each row, the initial conditions and the cellular-automaton transition rules were simulated and the duration of the search by one object for another was measured. As a result, a regression model was constructed for the average duration of a random search as a function of the four factors considered, which specify the initial positions of the two objects and the conditions of their movement and detection. The most significant of the considered factors determining the average search time were identified, and the constructed model was interpreted in terms of the random search problem. An important result of the work is that the model reveals the qualitative and quantitative influence of the initial positions of the objects, the size of the lattice, and the transition rules on the average search duration. It is shown that initial proximity of the objects on the lattice does not guarantee a quick search if each of them moves. In addition, it is estimated quantitatively how many times the average search time can increase or decrease when the speed of the searching object increases by 1 unit, and when the field size increases by 1 unit, for different initial positions of the two objects. The exponential growth of the number of search steps with increasing lattice size, the other factors being fixed, is revealed.
The conditions for the greatest increase in the average search duration are found: the maximum separation of the objects combined with the immobility of one of them. In this case, changing the field size by 1 unit (for example, from 4×4 to 5×5) can increase the average search duration by a factor of e^1.69 ≈ 5.42. The problem presented in the work may be relevant for applications in state security as well as, for example, in queueing theory.
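The worst case identified above (maximally distant objects, one of them immobile) is easy to reproduce in a toy simulation. The 4-neighbour walk with wall clamping and the sample sizes below are illustrative assumptions, not the paper's exact cellular-automaton transition rules or experimental plan.

```python
import random

def search_time(n, mover_both=False, seed=None):
    """Steps until the searching object lands on the target's cell on an
    n x n grid, moving to a random 4-neighbour cell each step (moves
    into a wall are clamped). Illustrative model only."""
    rng = random.Random(seed)
    def step(p):
        x, y = p
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        return (min(n - 1, max(0, x + dx)), min(n - 1, max(0, y + dy)))
    hunter, target = (0, 0), (n - 1, n - 1)   # maximally distant start
    t = 0
    while hunter != target:
        hunter = step(hunter)
        if mover_both:
            target = step(target)
        t += 1
    return t

mean_4 = sum(search_time(4, seed=s) for s in range(200)) / 200
mean_5 = sum(search_time(5, seed=s) for s in range(200)) / 200
# the average search duration grows quickly with lattice size
```

Even this crude model reproduces the qualitative finding: enlarging the field while keeping the target immobile and maximally distant sharply increases the average search duration.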
-
Hierarchical method for mathematical modeling of stochastic thermal processes in complex electronic systems
Computer Research and Modeling, 2019, v. 11, no. 4, pp. 613-630
A hierarchical method of mathematical and computer modeling of interval-stochastic thermal processes in complex electronic systems for various purposes is developed. The developed concept of hierarchical structuring reflects both the structural hierarchy of a complex electronic system and the hierarchy of mathematical models of heat exchange processes. Thermal processes that take into account various physical phenomena in complex electronic systems are described by systems of stochastic, unsteady, nonlinear partial differential equations, so their computer simulation encounters considerable computational difficulties even on supercomputers. The hierarchical method avoids these difficulties. The hierarchical structure of an electronic system design is, in general, characterized by five levels: level 1 — the active elements (microcircuits, electro-radio elements); level 2 — the electronic module; level 3 — a panel combining a set of electronic modules; level 4 — a block of panels; level 5 — a stand installed in a stationary or mobile room. The hierarchy of models and the modeling of stochastic thermal processes are constructed in the reverse order of the hierarchical structure of the design, and the interval-stochastic thermal processes are modeled by deriving equations for statistical measures. The hierarchical method developed in the article makes it possible to take into account the principal features of thermal processes: the stochastic nature of thermal, electrical, and design factors in the production, assembly, and installation of electronic systems; the stochastic scatter of operating conditions and the environment; the nonlinear temperature dependences of heat exchange factors; and the unsteady nature of the thermal processes.
The equations obtained in the article for the statistical measures of the stochastic thermal processes form a system of 14 unsteady nonlinear first-order ordinary differential equations, whose solution is easily implemented on modern computers by existing numerical methods. The results of applying the method to computer simulation of stochastic thermal processes in electronic systems are considered. The hierarchical method is applied in practice to the thermal design of real electronic systems and the creation of modern competitive devices.
-
Mathematical modeling of the interval stochastic thermal processes in technical systems at the interval indeterminacy of the determinative parameters
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 501-520
Current mathematical and computer modeling of thermal processes in technical systems assumes that all the parameters determining the thermal processes are fully and unambiguously known and identified (i.e., deterministic). Meanwhile, experience shows that these parameters are of an undefined interval-stochastic character, which in turn makes the thermal processes in the electronic system interval-stochastic as well. This means that the actual temperature of each element in a technical system is randomly distributed within its variation interval. Therefore, a deterministic approach to modeling thermal processes, which yields specific values of element temperatures, does not allow one to adequately calculate the temperature distribution in electronic systems. The interval-stochastic nature of the parameters determining the thermal processes stems from three groups of factors: (a) statistical technological variation of the parameters of the elements during manufacture and assembly of the system; (b) the random nature of factors caused by the operation of the technical system (fluctuations in current and voltage; in power, temperatures, and flow rates of the cooling fluid and the medium inside the system); and (c) the randomness of ambient parameters (temperature, pressure, and flow rate). The interval-stochastic indeterminacy of the determinative factors in technical systems is irremediable, and neglecting it causes errors in the design of electronic systems. A method is developed in this paper for modeling unsteady interval-stochastic thermal processes in technical systems, including those with interval indeterminacy of the determinative parameters.
The method is based on deriving and then solving equations for the unsteady statistical measures (mathematical expectations, variances, and covariances) of the temperature distribution in a technical system, given the variation intervals and the statistical measures of the determinative parameters. Application of the method to modeling the interval-stochastic thermal process in a particular electronic system is considered.
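The statistical-measure approach can be illustrated on a deliberately simple model. For the linear heat balance dT/dt = -(T - T_amb)/tau with a random (interval-stochastic) ambient temperature T_amb of mean mu and variance v, the mean m = E[T], variance D = Var(T), and covariance C = Cov(T, T_amb) obey the closed ODEs dm/dt = -(m - mu)/tau, dC/dt = -(C - v)/tau, dD/dt = -2(D - C)/tau. This toy system and its parameters are assumptions for illustration, far simpler than the paper's equations.

```python
# Propagate the statistical measures of T with explicit Euler steps.
def propagate(m, D, C, mu, v, tau, h, steps):
    """m = E[T], D = Var(T), C = Cov(T, T_amb) for the linear heat
    balance dT/dt = -(T - T_amb)/tau with random constant T_amb."""
    for _ in range(steps):
        m += h * (-(m - mu) / tau)
        C += h * (-(C - v) / tau)
        D += h * (-2.0 * (D - C) / tau)
    return m, D, C

m, D, C = propagate(m=20.0, D=0.0, C=0.0, mu=60.0, v=4.0,
                    tau=5.0, h=0.01, steps=10_000)
# steady state: E[T] -> mu and Var(T) -> Var(T_amb)
```

Solving a handful of ODEs for the measures replaces a Monte Carlo sweep over the parameter intervals, which is the computational advantage the method exploits.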
-
The stabilizing role of fish population structure under the influence of fishery and random environment variations
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 609-620
We study the influence of fishery on a structured fish population under random changes of habitat conditions. The population parameters correspond to the dominant pelagic fish species of the Far-Eastern seas of the northwestern Pacific Ocean (pollock, herring, sardine); similar species inhabit various parts of the World Ocean. The body-size distribution was chosen as the main population characteristic: it is easy to measure and adequately reflects key qualities of a specimen such as age, maturity, and other morphological and physiological features. Environmental fluctuations strongly influence individuals in the early stages of development and have little influence on the vital activity of mature individuals. Fishery revenue was chosen as the optimality criterion, with fishing effort as the main control variable. A quadratic dependence of fishing revenue on fishing effort was chosen, in accordance with the accepted economic notion that expenses grow with production volume. The model study shows that the population structure increases population stability. Growth and the loss of individuals to natural mortality smooth the oscillations of population density arising from the strong influence of environmental fluctuations on young individuals; the smoothing role is played by the diffusion component of the growth processes. The fishery, in its turn, smooths the (including random) fluctuations of the environment and has a substantial impact on the abundance of fry and the subsequent population dynamics. The optimal time-dependent fishing-effort strategy was compared with a stationary fishing-effort strategy.
It is shown that, under quickly changing habitat conditions and stochastic dynamics of population replenishment, there exists a stationary fishing effort with approximately the same efficiency as the optimal time-dependent one. This means that a constant or weakly varying fishing effort can be a very efficient strategy in terms of revenue.
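The stationary-effort conclusion can be illustrated with a toy harvested stock model: a logistic population with multiplicative environmental noise, revenue linear in the catch and quadratic in effort through the cost term. The model form and every parameter value below are illustrative assumptions, not the structured population model of the paper.

```python
import random

def revenue(effort, years=200, seed=0):
    """Average annual revenue for a constant fishing effort E in a
    noisy logistic stock model: income p*q*E*x minus quadratic cost
    c*E**2 (all parameter values are illustrative)."""
    rng = random.Random(seed)
    r, K, q, p, c = 0.8, 1.0, 0.5, 10.0, 2.0
    x, total = 0.5, 0.0
    for _ in range(years):
        eps = rng.uniform(0.8, 1.2)  # random environment multiplier
        x = max(0.0, x + eps * r * x * (1 - x / K) - q * effort * x)
        total += p * q * effort * x - c * effort ** 2
    return total / years

# grid search over constant efforts in [0, 1]
best_E = max((E / 20 for E in range(21)), key=revenue)
```

Even in this crude setting the best constant effort is an interior value: zero effort earns nothing and excessive effort depresses the stock and pays the quadratic cost, echoing the trade-off behind the stationary strategy discussed above.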
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
International Interdisciplinary Conference "Mathematics. Computing. Education"