-
ALICE computing update before start of RUN2
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 415-419
The report presents a number of news items and updates in ALICE computing in preparation for RUN2 and RUN3.
These include:
– deployment in production of the new EOS storage system;
– migration to the CVMFS file system for storing the software;
– the plan for solving the “Long-Term Data Preservation” problem;
– an overview of the “O²” (“O square”) concept, which combines offline and online data processing;
– an overview of the existing models for using virtual clouds for ALICE data processing.
The innovations are illustrated using the Russian sites as an example.
-
Reduction of decision rule of multivariate interpolation and approximation method in the problem of data classification
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 475-484
This article explores a machine learning method based on the theory of random functions. One of the main problems of this method is that the decision rule of the model becomes more complicated as the number of training examples increases. The decision rule of the model is the most probable realization of a random function, represented as a polynomial whose number of terms equals the number of training examples. In this article we show a quick way to reduce the number of training examples and, accordingly, the complexity of the decision rule. The reduction is achieved by finding and removing weak elements, which have little effect on the final form of the decision function, and noisy sample elements. For each $(x_i,y_i)$ element of the sample we introduce the concept of its value, expressed as the deviation of the value of the decision function of the model at the point $x_i$, built without the $i$-th element, from the true value $y_i$. We also show the possibility of using weak elements indirectly in the process of training the model without increasing the number of terms in the decision function. In the experimental part of the article we show how the changed amount of data affects the generalization ability of the method in the classification task.
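As an illustration of the leave-one-out “value” criterion described above, here is a minimal hedged sketch in Python. The paper's decision function is the most probable realization of a random function, so the stand-in regressor, the 20% quantile threshold and all names (`element_values`, `knn_fit_predict`) are our assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def element_values(X, y, fit_predict):
    """Leave-one-out 'value' of each training element: the deviation of the
    model's prediction at x_i, built without the i-th element, from y_i."""
    values = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        values[i] = abs(fit_predict(X[mask], y[mask], X[i:i + 1]) - y[i])
    return values

def knn_fit_predict(X_train, y_train, X_query):
    # Any regressor can stand in here for the random-function model.
    model = KNeighborsRegressor(n_neighbors=3).fit(X_train, y_train)
    return model.predict(X_query)[0]

# Keep only the most "valuable" elements (drop weak ones with small deviation).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = np.sign(X[:, 0] + X[:, 1])
vals = element_values(X, y, knn_fit_predict)
keep = vals >= np.quantile(vals, 0.2)   # drop the weakest 20 %
X_reduced, y_reduced = X[keep], y[keep]
```

Elements with a small value (their removal barely changes the prediction at their own point) are the weak candidates for removal or for indirect use during training.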
-
The calculation of hydrodynamic impact on reentry vehicle during splashdown
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 37-46
In the regular mode, the reentry vehicle of the transportation spacecraft being created by RSC Energia makes a soft landing on the land surface using a parachute system and thruster devices. In off-nominal situations, however, the reentry vehicle is also capable of executing a splashdown. In that case it becomes important to determine the hydrodynamic loads on the reentry vehicle at the moment of the first contact with the water surface and during submersion into the water medium, and to study the dynamics of the vehicle behavior at subsequent moments of time.
This article presents the results of numerical studies of the hydrodynamic forces on the conical vehicle during splashdown, performed with the FlowVision software. The paper considers splashdown cases with inactive solid rocket motors on a calm sea as well as cases with interaction between the rocket jets and the water surface. It presents data on the distribution of pressure over the vehicle in the process of its immersion into the water medium and on the dynamics of the vehicle behavior after splashdown. The paper also shows the flow structures in the vicinity of the reentry vehicle at different moments of time, and the integral forces and moments acting on the vehicle.
To simulate processes with moving interphase boundaries, the FlowVision software implements the VOF (volume of fluid) model. The transport of the phase boundary is described by an equation for the volume fraction of the continuous phase in a computational cell; the transport of the contact surface is described by a convection equation, and surface tension is taken into account through the Laplace pressure. A key feature of the method is the splitting of surface cells, into which the data of the corresponding phases are entered. The equations for both phases (continuity, momentum, energy and others) are solved jointly in the surface cells.
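In notation that is ours rather than the paper's, the standard VOF relations the abstract refers to can be sketched as follows: the volume fraction $\alpha$ of the continuous phase is advected with the flow velocity $\mathbf{u}$, and surface tension enters through the Laplace pressure jump with surface tension coefficient $\sigma$ and interface curvature $\kappa$:

$$\frac{\partial \alpha}{\partial t} + \nabla \cdot (\alpha\, \mathbf{u}) = 0, \qquad \Delta p = \sigma\,\kappa.$$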
-
Conversion of the initial indices of the technological process of the smelting of steel for the subsequent simulation
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 187-199
The efficiency of production depends directly on the quality of process control, which in turn relies on the accuracy and efficiency of processing control and measurement information. The development of mathematical methods for studying system relationships and regularities of functioning, the creation of mathematical models that take into account the structural features of the object under study, and the implementation of these methods in software are therefore topical tasks. Practice has shown that the list of parameters involved in the study of a complex object of modern production ranges from a few dozen to several hundred items, and the degree of influence of each factor is initially unclear. Direct determination of the model under these conditions is impossible: the amount of required information may be too large, and most of the work on collecting it would be wasted because the influence of the majority of factors in the original list on the optimization would turn out to be negligible. A necessary step in determining a model of a complex object is therefore reducing the dimension of the factor space. Most industrial plants are hierarchical group processes of mass and large-volume production characterized by hundreds of factors. (Data from the Moldavian Steel Works were taken as the basis for implementing the mathematical methods and testing the constructed models.) To investigate the system relationships and functioning patterns of such complex objects, several informative parameters are usually chosen and sampled. This article describes the sequence of bringing the initial indices of the steel smelting process to a form suitable for building a mathematical model for prediction purposes; this also lays the foundation for developing an automated product quality management system. The following stages are distinguished: collection and analysis of the initial data, construction of the table of correlated parameters, and reduction of the factor space by means of correlation pleiades and the method of weight factors. The results obtained make it possible to optimize the process of building a model of the multi-factor process.
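A minimal hedged sketch of the correlation-pleiad reduction stage mentioned above: parameters whose pairwise correlation exceeds a threshold are grouped into pleiades (connected components of the correlation graph), and one representative per pleiad is kept. The threshold value, the representative-selection rule and all names here are our assumptions, not taken from the paper.

```python
import numpy as np
import pandas as pd

def correlation_pleiades(df: pd.DataFrame, threshold: float = 0.8):
    """Group parameters into 'pleiades': connected components of the graph
    whose edges link columns with |pairwise correlation| >= threshold."""
    corr = df.corr().abs().values
    n = corr.shape[0]
    visited, groups = set(), []
    for start in range(n):
        if start in visited:
            continue
        stack, group = [start], []
        while stack:                      # depth-first walk over correlated columns
            i = stack.pop()
            if i in visited:
                continue
            visited.add(i)
            group.append(df.columns[i])
            stack.extend(j for j in range(n)
                         if j not in visited and corr[i, j] >= threshold)
        groups.append(group)
    return groups

# Keep one representative parameter per pleiad to shrink the factor space.
rng = np.random.default_rng(1)
base = rng.normal(size=(200, 3))
data = pd.DataFrame({
    "temp": base[:, 0], "temp_dup": base[:, 0] + 0.01 * rng.normal(size=200),
    "pressure": base[:, 1], "carbon": base[:, 2],
})
pleiades = correlation_pleiades(data)
reduced = data[[g[0] for g in pleiades]]
```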
-
Development of anisotropic nonlinear noise-reduction algorithm for computed tomography data with context dynamic threshold
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 233-248
The article deals with the development of a noise-reduction algorithm based on anisotropic nonlinear filtering of computed tomography (CT) data. An analysis of domestic and foreign literature has shown that the most effective algorithms for noise reduction of CT data use complex methods of data analysis and processing, such as bilateral, adaptive, three-dimensional and other types of filtering. However, a combination of such techniques is rarely used in practice because of the long processing time per slice. In this regard, it was decided to develop an efficient and fast noise-reduction algorithm based on a simplified bilateral filtering method with three-dimensional data accumulation. The algorithm was developed in the C++11 programming language in Microsoft Visual Studio 2015. The main distinctive feature of the developed algorithm is the use of an improved mathematical model of CT noise, based on the Poisson and Gaussian distributions of the logarithmic value, developed earlier by our team. This allows a more accurate determination of the noise level and, thus, of the data-processing threshold. As a result of applying the noise-reduction algorithm, processed CT data with a lower noise level were obtained. Visual evaluation showed increased information content of the processed data compared to the original data, clearer mapping of homogeneous regions, and a significant reduction of noise in the processed areas. Assessment of the numerical results showed a decrease in the standard deviation (SD) level by more than 6 times in the processed areas, and high values of the coefficient of determination showed that the data were not distorted and changed only due to the removal of noise. The use of the newly developed context dynamic threshold made it possible to decrease the SD level in every area of the data. The main advantages of the developed threshold are its simplicity and speed, achieved by a preliminary estimation of the data array and derivation of threshold values assigned to each pixel of the CT image. Its principle of operation is based on threshold criteria, which fits well both into the developed noise-reduction algorithm based on anisotropic nonlinear filtering and into other noise-reduction algorithms. The algorithm functions successfully as part of the MultiVox workstation and is being prepared for deployment in the unified radiological network of the city of Moscow.
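A minimal hedged sketch of a simplified single-slice bilateral filter of the kind the abstract refers to; the window radius, the spatial and range sigmas, and the omission of the three-dimensional accumulation and of the context dynamic threshold are our simplifications, not the authors' implementation.

```python
import numpy as np

def bilateral_filter_2d(img, radius=2, sigma_s=1.5, sigma_r=20.0):
    """Simplified single-slice bilateral filter: each pixel is replaced by a
    weighted mean of its neighbours, with weights combining spatial distance
    and intensity difference (so edges are preserved)."""
    pad = np.pad(img.astype(np.float64), radius, mode="reflect")
    out = np.zeros_like(img, dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-((window - img[i, j])**2) / (2 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out

# Example on a noisy synthetic slice with one sharp edge.
slice_ = np.zeros((64, 64)) + 40
slice_[20:44, 20:44] = 120
noisy = slice_ + np.random.default_rng(2).normal(0, 15, slice_.shape)
denoised = bilateral_filter_2d(noisy)
```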
-
Neural network analysis of transportation flows of urban agglomeration using the data from public video cameras
Computer Research and Modeling, 2021, v. 13, no. 2, pp. 305-318
Correct modeling of the complex dynamics of urban transportation flows requires the collection of large volumes of empirical data to specify the types of the modes and to identify them. At the same time, setting up a large number of observation posts is expensive and not always technically feasible. All this results in insufficient factual support for traffic control systems as well as for urban planners, with obvious consequences for the quality of their decisions. As one of the means of providing large-scale data collection, at least for qualitative situation analysis, wide-area video cameras are used in various situation centers, where they are analyzed by human operators responsible for observation and control. Some video cameras provide their streams for public access, which makes them a valuable resource for transportation studies. However, obtaining qualitative data from such cameras poses significant problems related to the theory and practice of image processing. This study is devoted to the practical application of mainstream neural network technologies for estimating the essential characteristics of actual transportation flows. The problems arising in processing these data are analyzed and their solutions are suggested. Convolutional neural networks are used for tracking, and methods for obtaining the basic parameters of transportation flows from these observations are studied. Simplified neural networks are used to prepare training sets for the deep learning neural network YOLOv4, which is then used to estimate the speed and density of automobile flows.
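A minimal hedged sketch of how basic flow parameters might be derived from per-frame vehicle tracks produced by a detector such as YOLOv4 followed by a tracker; the pixel-to-metre scale, frame rate, observed road length, track format and all names are assumptions for illustration only, not the authors' pipeline.

```python
import numpy as np

def flow_parameters(tracks, fps=25.0, metres_per_pixel=0.05, road_length_m=100.0):
    """Estimate mean speed (km/h) and density (vehicles/km) from tracks.

    tracks: dict track_id -> list of (frame_index, x_pixel, y_pixel) detections.
    """
    speeds = []
    for points in tracks.values():
        points = sorted(points)
        if len(points) < 2:
            continue
        (f0, x0, y0), (f1, x1, y1) = points[0], points[-1]
        dist_m = np.hypot(x1 - x0, y1 - y0) * metres_per_pixel
        dt_s = (f1 - f0) / fps
        if dt_s > 0:
            speeds.append(dist_m / dt_s * 3.6)          # m/s -> km/h
    density = len(tracks) / (road_length_m / 1000.0)     # vehicles per km
    return (np.mean(speeds) if speeds else 0.0), density

# Toy example: two vehicles tracked over one second of video.
tracks = {
    1: [(0, 100, 50), (25, 300, 52)],
    2: [(0, 400, 80), (25, 560, 78)],
}
mean_speed_kmh, density_per_km = flow_parameters(tracks)
```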
-
Estimation of the probability of spontaneous synthesis of computational structures in relation to the implementation of parallel information processing
Computer Research and Modeling, 2021, v. 13, no. 4, pp. 677-696
We consider a model of spontaneous formation of a computational structure in the human brain for solving a given class of tasks in the course of performing a series of similar tasks. The model is based on a special definition of a numerical measure of the complexity of the solution algorithm. This measure has an informational property: the complexity of a computational structure consisting of two independent structures is equal to the sum of their complexities. The probability of spontaneous formation of a structure then depends exponentially on its complexity. The exponential coefficient requires experimental determination for each type of problem; it may depend on the form of presentation of the source data and on the procedure for issuing the result. This estimation method was applied to the results of a series of experiments that determined the strategy for solving a series of similar problems with a growing number of initial data; these experiments were described in previously published papers. Two main strategies were considered: sequential execution of the computational algorithm, or the use of parallel computing in those tasks where it is effective. These strategies differ in how the calculations are performed. Using an estimate of the complexity of the schemes, the empirical probability of one of the strategies can be used to calculate the probability of the other. The calculations performed showed a good match between the calculated and empirical probabilities, which confirms the hypothesis of spontaneous formation of structures that solve the problem during the initial training of a person. The paper contains a brief description of the experiments, detailed computational schemes, a strict definition of the complexity measure of computational structures, and the derivation of the dependence of the probability of structure formation on its complexity.
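In notation that is ours rather than the paper's, the two properties stated above can be written as: the complexity measure $C$ is additive for independent structures, and the probability of spontaneous formation of a structure $S$ decreases exponentially with its complexity, with an exponential coefficient $\lambda$ determined experimentally for each type of problem:

$$C(S_1 \oplus S_2) = C(S_1) + C(S_2), \qquad P(S) \propto e^{-\lambda\, C(S)}.$$

Under this proportionality assumption, the empirical probability of one strategy yields the probability of the other through the ratio $P_2 / P_1 = e^{-\lambda\,(C_2 - C_1)}$, where $C_1$ and $C_2$ are the complexities of the two computational schemes.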
-
Method for processing acoustic emission testing data to define signal velocity and location
Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1029-1040
Non-destructive acoustic emission testing is an effective and cost-efficient way to examine pressure vessels for hidden defects (cracks, laminations, etc.), and it is the only method that is sensitive to developing defects. The sound velocity in the test object and its adequate definition in the location scheme are of paramount importance for accurate detection of the acoustic emission source. The data processing method proposed here comprises a set of numerical methods and allows the source coordinates and the most probable velocity to be determined for each signal. The method includes pre-filtering of the data by amplitude and by time differences and elimination of electromagnetic interference. A set of numerical methods is then applied to solve the resulting system of nonlinear equations, in particular the Newton–Kantorovich method and a general iterative process. The velocity of a signal from one source is assumed constant in all directions. The center of gravity of the triangle formed by the first three sensors that registered the signal is taken as the initial approximation. The method has an important practical application, and the paper provides an example of its approbation in the calibration of an acoustic emission system at a production facility (a hydrocarbon gas purification absorber). Criteria for pre-filtering of the data are described. The obtained locations are in good agreement with the signal generation sources, and the velocities even reflect the Rayleigh–Lamb division of acoustic waves due to the different distances of the signal sources from the sensors. The article contains a graph of the average signal velocity against the distance from its source to the nearest sensor. The main advantage of the method is its ability to detect the locations of signals with different velocities within a single test. This increases the number of degrees of freedom in the calculations and thereby their accuracy.
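A minimal hedged sketch of the location step: the system $\sqrt{(x - x_k)^2 + (y - y_k)^2} = v\,(t_k - t_0)$ is solved for the source coordinates, velocity and emission time by a Newton-type iteration with a finite-difference Jacobian, starting from the centroid of the first three sensors to register the signal. The undamped iteration, the initial velocity guess and all names are our assumptions; the paper itself uses the Newton–Kantorovich method and a general iterative process together with data pre-filtering.

```python
import numpy as np

def locate_source(sensors, arrival_times, v0=3000.0, iters=50):
    """Solve sqrt((x - xk)^2 + (y - yk)^2) = v * (tk - t0) for (x, y, v, t0)
    with a Newton-type iteration (finite-difference Jacobian). Needs >= 4 sensors."""
    sensors = np.asarray(sensors, dtype=float)
    t = np.asarray(arrival_times, dtype=float)

    def residual(p):
        x, y, v, t0 = p
        dist = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
        return dist - v * (t - t0)

    # Initial approximation: centroid of the first three sensors to register.
    order = np.argsort(t)
    x0, y0 = sensors[order[:3]].mean(axis=0)
    p = np.array([x0, y0, v0, t[order[0]]])
    for _ in range(iters):
        r = residual(p)
        J = np.empty((len(t), 4))
        for j in range(4):                      # finite-difference Jacobian
            dp = np.zeros(4)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (residual(p + dp) - r) / dp[j]
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
        if np.linalg.norm(step) < 1e-9:
            break
    return p   # x, y, v, t0

# Toy check: 4 sensors on a 1 m plate, true source at (0.3, 0.4), v = 3200 m/s.
sensors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
times = np.hypot(np.array(sensors)[:, 0] - 0.3,
                 np.array(sensors)[:, 1] - 0.4) / 3200.0
x, y, v, t0 = locate_source(sensors, times)
```

In practice the iteration may need damping or regularization when the registering sensors are nearly collinear.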
-
The influence of solar flares on the release of seismic energy
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 567-581
The influence of solar activity on various processes on Earth has long been the subject of close study, which gave rise to the term “space weather”. The most striking manifestations of solar activity are the so-called solar flares: explosive releases of energy in the solar atmosphere that produce a flow of photons and charged particles reaching the Earth with a slight delay. Two or three days later a plasma flow reaches the Earth, so a solar flare is an event stretched out in time over several days. The impact of solar flares on human health and the technosphere is a popular subject of discussion and scientific research. This article provides a quantitative assessment of the trigger effect of solar flares on the release of energy in seismic events. The estimate is given as a percentage of the released seismic energy, both worldwide and in 8 areas of the Pacific Ring of Fire. The initial data are a time series of solar flares from July 31, 1996 to the end of 2024. The time points of the largest local extremes of solar flare intensity and of released seismic energy were determined in successive time intervals of 1 day. For each pair of time sequences, in sliding time windows, the “lead measures” of each sequence relative to the other were estimated using a parametric model of the intensity of interacting point processes. The difference between the “direct” lead measure (of the local extremes of solar flare intensity relative to the moments of maximum released seismic energy) and the “reverse” lead measure was calculated. The average value of this difference provides an estimate of the share of the intensity of seismic events for which solar flares act as a trigger.
-
Discrete-element simulation of a spherical projectile penetration into a massive obstacle
Computer Research and Modeling, 2015, v. 7, no. 1, pp. 71-79
A discrete element model is applied to the problem of a spherical projectile penetrating a massive obstacle. According to the model, both the indenter and the obstacle are described by sets of densely packed particles. The interaction between the particles is modeled with the two-parameter Lennard–Jones potential. The computer implementation of the model uses parallelism on GPUs, which provides high spatial and temporal resolution. Based on a comparison of the numerical simulation results with experimental data, the binding energy has been identified as a function of the dynamic hardness of the materials. It is shown that this approach allows the penetration process to be described accurately in the range of projectile velocities 500–2500 m/s.
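For reference, the classical two-parameter Lennard–Jones pair potential (written in our notation; the parameterization used in the paper, where the binding energy is identified from the dynamic hardness of the materials, may be expressed differently) is

$$U(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],$$

where $\varepsilon$ is the depth of the potential well (the binding energy) and $\sigma$ is the distance at which the potential vanishes.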