Search results for 'methods':
Articles found: 636
  1. Kudrov A.I., Sheremet M.A.
    Numerical simulation of corium cooling driven by natural convection in case of in-vessel retention and time-dependent heat generation
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 807-822

    The present study considers numerical simulation of corium cooling driven by natural convection within a horizontal hemicylindrical cavity whose boundaries are assumed isothermal. Corium is a melt of the ceramic fuel of a nuclear reactor and oxides of construction materials.

    Corium cooling is a process occurring during a severe accident associated with core melt. According to the in-vessel retention concept, the accident may be restrained and localized, with the corium contained within the vessel, only if the vessel is cooled externally. This concept has a clear advantage over the melt trap: it can be implemented at nuclear power plants already in operation. Proper numerical analysis of corium cooling has therefore become a relevant area of study.

    In the research, we assume the corium is contained within a horizontal semitube. The corium initially has the temperature of the walls. Despite reactor shutdown, the corium still generates heat owing to radioactive decay, and the amount of heat released decreases with time according to the Way–Wigner formula. The system of equations in the Boussinesq approximation, including the momentum, continuity and energy equations, describes the natural convection within the cavity. Convective flows are taken to be laminar and two-dimensional.
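    For reference, a commonly cited form of the Way–Wigner decay-heat approximation (the paper's exact dimensionless normalization may differ) is

```latex
\frac{P(\tau)}{P_0} = 0.0622\left[\tau^{-0.2} - (\tau_0 + \tau)^{-0.2}\right],
```

    where $\tau$ is the time after shutdown, $\tau_0$ is the reactor operation period before shutdown, and $P_0$ is the operating power.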

    The boundary-value problem of mathematical physics is formulated using the non-dimensional nonprimitive variables «stream function – vorticity». The resulting differential equations are solved numerically using the finite difference method and the locally one-dimensional Samarskii scheme for the equations of parabolic type.
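    For context, the nonprimitive variables are related to the two-dimensional velocity field $(u, v)$ in the standard way (a generic sketch; the paper's dimensionless form may differ):

```latex
u = \frac{\partial \psi}{\partial y}, \qquad
v = -\frac{\partial \psi}{\partial x}, \qquad
\omega = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}, \qquad
\nabla^2 \psi = -\omega.
```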

    As a result of the present research, we have obtained the time behavior of the mean Nusselt number at the top and bottom walls for Rayleigh numbers ranging from $10^3$ to $10^6$. These dependences have been analyzed for various dimensionless operation periods before the accident. Investigations have been performed using streamlines and isotherms as well as time dependences of the convective flow and heat transfer rates.

  2. Safiullina L.F., Gubaydullin I.M.
    Analysis of the identifiability of the mathematical model of propane pyrolysis
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1045-1057

    The article presents the numerical modeling and study of the kinetic model of propane pyrolysis. The study of the reaction kinetics is a necessary stage in modeling the dynamics of the gas flow in the reactor.

    The kinetic model of propane pyrolysis is a nonlinear system of first-order ordinary differential equations with parameters, the role of which is played by the reaction rate constants. Mathematical modeling of the processes is based on the mass conservation law. To solve the initial (forward) problem, implicit methods for stiff ordinary differential equation systems are used. The model contains 60 input kinetic parameters and 17 output parameters corresponding to the reaction substances, of which only 9 are observable. In solving the parameter estimation (inverse) problem, the question arises of the non-uniqueness of the set of parameters satisfying the experimental data. Therefore, before solving the inverse problem, the possibility of determining the model parameters is analyzed (identifiability analysis).
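    As an illustration of the forward problem, the sketch below integrates a toy two-step kinetic scheme (A → 2B, B → C, with invented rate constants, not the paper's 17-species, 60-parameter model) with an implicit stiff solver:

```python
# Toy kinetic scheme A -> 2B, B -> C with hypothetical rate constants;
# the implicit BDF method handles stiff ODE systems of this kind.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 5.0, 0.5  # invented rate constants, 1/s

def rhs(t, c):
    cA, cB, cC = c
    r1 = k1 * cA   # A -> 2B
    r2 = k2 * cB   # B -> C
    return [-r1, 2.0 * r1 - r2, r2]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-10)
cA, cB, cC = sol.y[:, -1]
# Mass conservation: 2*cA + cB + cC stays equal to 2 throughout.
```

    The conserved combination plays the role of the mass conservation law mentioned above.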

    To analyze identifiability, we use the orthogonal method, which has proven itself well for analyzing models with a large number of parameters. The algorithm is based on analysis of the sensitivity matrix by methods of differential and linear algebra, which shows the degree of dependence of the unknown model parameters on the given measurements. The sensitivity and identifiability analysis showed that the model parameters are stably determined from the given set of experimental data. The article presents a list of model parameters ordered from most to least identifiable. Taking the identifiability analysis into account, restrictions were introduced on the search for the less identifiable parameters when solving the inverse problem.
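    A minimal sketch of the orthogonal ranking idea (with a toy sensitivity matrix, not the paper's data): repeatedly pick the parameter column with the largest residual norm, then project the remaining columns onto its orthogonal complement.

```python
# Orthogonal-method-style parameter ranking from a sensitivity matrix S
# (rows = measurements, columns = parameters). Toy data for illustration.
import numpy as np

def rank_parameters(S):
    R = S.astype(float).copy()
    order = []
    remaining = list(range(S.shape[1]))
    for _ in range(S.shape[1]):
        norms = np.linalg.norm(R[:, remaining], axis=0)
        k = remaining[int(np.argmax(norms))]  # most identifiable remaining
        order.append(k)
        remaining.remove(k)
        q = R[:, k] / np.linalg.norm(R[:, k])
        for j in remaining:                    # remove explained component
            R[:, j] -= q * (q @ R[:, j])
    return order

S = np.array([[1.0, 0.9, 0.0],
              [0.0, 0.1, 0.0],
              [0.0, 0.0, 1e-3]])
order = rank_parameters(S)  # parameters from most to least identifiable
```

    The resulting order is the kind of most-to-least identifiable list the abstract refers to.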

    The inverse problem of parameter estimation was solved using a genetic algorithm. The article presents the optimal values found for the kinetic parameters. A comparison of the experimental and calculated dependences of the concentrations of propane and the main and by-products of the reaction on temperature for different flow rates of the mixture is presented. The conclusion about the adequacy of the constructed mathematical model is made on the basis of the correspondence of the obtained results to physicochemical laws and experimental data.

  3. Ostroukhov P.A.
    Tensor methods inside mixed oracle for min-min problems
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 377-398

    In this article we consider problems of min-min type, that is, minimization over two groups of variables. In some ways this is similar to the classic min-max saddle point problem, although saddle point problems are usually more difficult. Min-min problems may arise when groups of variables in convex optimization have different dimensions or different domains. Such a problem structure makes it possible to split the main task into subproblems and to tackle it with mixed oracles. However, existing articles on this topic cover only zeroth- and first-order oracles; in our work we consider high-order tensor methods to solve the inner problem and the fast gradient method to solve the outer problem.

    We assume that the outer problem is constrained to some convex compact set, while for the inner problem we consider both the unconstrained case and constraint to some convex compact set. By definition, tensor methods use high-order derivatives, so the time per iteration depends strongly on the dimensionality of the problem being solved. Therefore, we assume that the dimension of the inner problem variable is not greater than 1000. Additionally, we need some specific assumptions to be able to use mixed oracles. Firstly, we assume that the objective is convex in both groups of variables and that its gradient with respect to both variables is Lipschitz continuous. Secondly, we assume the inner problem is strongly convex and its gradient is Lipschitz continuous. Also, since we are going to use tensor methods for the inner problem, we need its $p$-th order derivative to be Lipschitz continuous ($p > 1$). Finally, we assume strong convexity of the outer problem to be able to use the fast gradient method for strongly convex functions.

    We emphasize that we use the superfast tensor method to tackle the inner subproblem in the unconstrained case, and when we solve the inner problem on a compact set, we use the accelerated high-order composite proximal method.

    Additionally, at the end of the article we compare the theoretical complexity of the obtained methods with the regular gradient method, which solves the mentioned problem as a regular convex optimization problem and does not take its structure into account (Remarks 1 and 2).

  4. Suzdaltsev V.A., Suzdaltsev I.V., Tarhavova E.G.
    Fuzzy knowledge extraction in the development of expert predictive diagnostic systems
    Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1395-1408

    Expert systems imitate the professional experience and thinking process of a specialist to solve problems in various subject areas. An example of a problem that it is expedient to solve with the help of an expert system is the problem of forming a diagnosis, which arises in technology, medicine, and other fields. When solving the diagnostic problem, it is necessary to anticipate the occurrence of critical or emergency situations in the future, i.e., situations that require timely intervention of specialists to prevent critical consequences. Fuzzy set theory provides one approach to solving ill-structured problems, to which diagnosis-making problems belong. The theory of fuzzy sets provides means for forming linguistic variables, which help describe the modeled process. Linguistic variables are elements of fuzzy logical rules that simulate the reasoning of professionals in the subject area. To develop fuzzy rules, it is necessary to survey experts. Knowledge engineers use the experts' opinions to evaluate the correspondence between a typical current situation and the risk of an emergency in the future. The result of knowledge extraction is a description of linguistic variables that includes a combination of signs. Experts are involved in the survey to create descriptions of linguistic variables and are presented with a set of simulated situations.

    When building such systems, the main problem of the survey is the laboriousness of the interaction of knowledge engineers with experts; the main reason is the multiplicity of questions the expert must answer. The paper presents the reasoning behind a method that allows the knowledge engineer to reduce the number of questions posed to the expert, and describes the experiments carried out to test the applicability of the proposed method. An expert system for predicting risk groups for neonatal pathologies and pregnancy pathologies using the proposed knowledge extraction method confirms the feasibility of the proposed approach.
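    As a hedged illustration of a linguistic variable and a fuzzy rule of the kind elicited from experts (the variable names, breakpoints, and the min AND-operator are assumptions of this sketch, not the paper's system):

```python
# A triangular membership function and one fuzzy rule; all names and
# numeric breakpoints here are invented for illustration.
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set with support (a, c) and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Linguistic variable "temperature", term "high" (hypothetical breakpoints)
def temp_high(t):
    return triangular(t, 37.0, 39.0, 41.0)

# Rule: IF temperature is high AND pulse is fast THEN risk is elevated
# (min is a common choice for the fuzzy AND operator)
def risk_elevated(temp, pulse_fast_degree):
    return min(temp_high(temp), pulse_fast_degree)

r = risk_elevated(39.0, 0.8)
```

    Rules of this shape are what the expert survey is meant to parameterize.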

  5. Makarov I.S., Bagantsova E.R., Iashin P.A., Kovaleva M.D., Zakharova E.M.
    Development of and research into a rigid algorithm for analyzing Twitter publications and its influence on the movements of the cryptocurrency market
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 157-170

    Social media is a crucial indicator of the position of assets in the financial market. The paper describes a rigid solution to the classification problem of determining the influence of social media activity on financial market movements. Reputable crypto trader influencers are selected, and packages of their Twitter posts are used as data. The methods of preprocessing the text, which is characterized by heavy use of slang words and abbreviations, consist in lemmatization with Stanza and the use of regular expressions. A word is treated as an element of a data-unit vector in the course of solving the binary classification problem. The best markup parameters for processing Binance candles are searched for. Feature selection, which is necessary for a precise description of the text data and the subsequent process of establishing dependence, is performed by machine learning and statistical analysis. The first approach to feature selection is based on the information criterion; it is implemented in a random forest model and is relevant to selecting features for splitting nodes in a decision tree. The second is based on the rigid compilation of a binary vector during a rough check of the presence or absence of a word in the package and counting the sum of the elements of this vector; a decision is then made depending on whether this sum exceeds a threshold value determined in advance by analyzing the frequency distribution of word mentions. The algorithm used to solve the problem was named benchmark and analyzed as a tool; similar algorithms are often used in automated trading strategies. In the course of the study, observations of the influence of frequently occurring words, used as a basis of dimension 2 and 3 in vectorization, are also described.
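    The "benchmark" rule described above can be sketched as follows (the keyword list and threshold are invented for illustration; the paper derives its threshold from a frequency analysis):

```python
# Binary presence vector over a keyword list; signal fires when the sum
# of the vector reaches a preset threshold. Keywords are hypothetical.
import re

KEYWORDS = ["bullish", "moon", "pump", "breakout"]
THRESHOLD = 2  # assumed cutoff, stands in for the frequency-derived one

def benchmark_signal(post: str) -> bool:
    tokens = re.findall(r"[a-z]+", post.lower())  # crude word extraction
    presence = [1 if w in tokens else 0 for w in KEYWORDS]
    return sum(presence) >= THRESHOLD

sig = benchmark_signal("BTC looks bullish, breakout coming")
```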

  6. Stonyakin F.S., Ablaev S.S., Baran I.V., Alkousa M.S.
    Subgradient methods for weakly convex and relatively weakly convex problems with a sharp minimum
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 393-412

    The work is devoted to the study of subgradient methods with different variations of the Polyak stepsize for the minimization of functions from the class of weakly convex and relatively weakly convex functions that have the corresponding analogue of a sharp minimum. It turns out that, under certain assumptions about the starting point, such an approach can make it possible to justify the convergence of the subgradient method at the speed of a geometric progression. For the subgradient method with the Polyak stepsize, a refined estimate of the rate of convergence is proved for minimization problems for weakly convex functions with a sharp minimum. The feature of this estimate is that it additionally accounts for the decrease of the distance from the current point of the method to the set of solutions as the number of iterations increases. The results of numerical experiments for the phase reconstruction problem (which is weakly convex and has a sharp minimum) are presented, demonstrating the effectiveness of the proposed approach to estimating the rate of convergence compared to the known one. Next, we propose a variation of the subgradient method with switching between productive and non-productive steps for weakly convex problems with inequality constraints and obtain the corresponding analogue of the result on convergence at the rate of a geometric progression. For the subgradient method with the corresponding variation of the Polyak stepsize on the class of relatively Lipschitz and relatively weakly convex functions with a relative analogue of a sharp minimum, conditions were obtained that guarantee the convergence of such a subgradient method at the rate of a geometric progression. Finally, a theoretical result is obtained that describes the influence of the error in the information about the (sub)gradient and the objective function available to the subgradient method on the estimate of the quality of the obtained approximate solution. It is proved that for a sufficiently small error $\delta > 0$, one can guarantee that the accuracy of the solution is comparable to $\delta$.
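    As a toy illustration of the Polyak stepsize on a function with a sharp minimum (here $f(x) = \|x\|_1$ with $f^* = 0$, not the paper's phase reconstruction experiment), the subgradient iteration converges very quickly:

```python
# Subgradient method with the Polyak stepsize on f(x) = ||x||_1,
# a convex function with a sharp minimum at the origin.
import numpy as np

def f(x):
    return np.abs(x).sum()

def subgrad(x):
    return np.sign(x)  # a subgradient of the l1 norm

x = np.array([3.0, -2.0])
f_star = 0.0  # known optimal value, required by the Polyak stepsize
for _ in range(100):
    if f(x) == f_star:
        break
    g = subgrad(x)
    step = (f(x) - f_star) / (g @ g)  # Polyak stepsize
    x = x - step * g
final = f(x)
```

    Knowing $f^*$ exactly is the idealized setting; the abstract's final result concerns what happens when this information carries an error $\delta$.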

  7. Bernadotte A., Mazurin A.D.
    Optimization of the brain command dictionary based on the statistical proximity criterion in silent speech recognition task
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 675-690

    In our research, we focus on the problem of classification for silent speech recognition to develop a brain– computer interface (BCI) based on electroencephalographic (EEG) data, which will be capable of assisting people with mental and physical disabilities and expanding human capabilities in everyday life. Our previous research has shown that the silent pronouncing of some words results in almost identical distributions of electroencephalographic signal data. Such a phenomenon has a suppressive impact on the quality of neural network model behavior. This paper proposes a data processing technique that distinguishes between statistically remote and inseparable classes in the dataset. Applying the proposed approach helps us reach the goal of maximizing the semantic load of the dictionary used in BCI.

    Furthermore, we propose the existence of a statistical predictive criterion for the accuracy of binary classification of the words in a dictionary. Such a criterion aims to estimate the lower and the upper bounds of classifiers’ behavior only by measuring quantitative statistical properties of the data (in particular, using the Kolmogorov – Smirnov method). We show that higher levels of classification accuracy can be achieved by means of applying the proposed predictive criterion, making it possible to form an optimized dictionary in terms of semantic load for the EEG-based BCIs. Furthermore, using such a dictionary as a training dataset for classification problems grants the statistical remoteness of the classes by taking into account the semantic and phonetic properties of the corresponding words and improves the classification behavior of silent speech recognition models.
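    The statistical-remoteness idea can be sketched with the two-sample Kolmogorov – Smirnov statistic on synthetic feature samples (the distributions and the cutoff value below are assumptions of this sketch, not the paper's EEG data or criterion):

```python
# Decide whether two classes of (synthetic) features are separable enough
# to keep both words in the dictionary, via the two-sample KS statistic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
class_a = rng.normal(0.0, 1.0, 500)   # features of word A (synthetic)
class_b = rng.normal(2.0, 1.0, 500)   # features of word B (synthetic)
class_c = rng.normal(0.05, 1.0, 500)  # nearly identical to word A

stat_ab = ks_2samp(class_a, class_b).statistic
stat_ac = ks_2samp(class_a, class_c).statistic

SEPARABLE = 0.3  # hypothetical cutoff on the KS statistic
keep_pair_ab = stat_ab > SEPARABLE   # remote classes: keep both words
keep_pair_ac = stat_ac > SEPARABLE   # inseparable: drop one word
```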

  8. Aksenov A.A., Kashirin V.S., Timushev S.F., Shaporenko E.V.
    Development of acoustic-vortex decomposition method for car tyre noise modelling
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 979-993

    Road noise is one of the key issues in maintaining high environmental standards. At speeds between 50 and 120 km/h, tires are the main source of noise generated by a moving vehicle. It is well known that either the interaction between the tire tread and the road surface or some internal dynamic effects are responsible for tire noise and vibration. This paper discusses the application of a new method for modelling the generation and propagation of sound during tire motion, based on the so-called acoustic-vortex decomposition. Currently, the Lighthill equation and the aeroacoustic analogy are the main approaches used to model tire noise. The aeroacoustic analogy, in addressing the problem of separating the acoustic and vortex (pseudo-sound) modes of vibration, is not a mathematically rigorous formulation for deriving the source (right-hand side) of the acoustic wave equation. In the acoustic-vortex decomposition method, a mathematically rigorous transformation of the equations of motion of a compressible medium is performed to obtain an inhomogeneous wave equation for static enthalpy pulsations with a source term that depends on the velocity field of the vortex mode. In this case, the near-field pressure fluctuations are the sum of acoustic fluctuations and pseudo-sound. Thus, the acoustic-vortex decomposition method makes it possible to adequately model the acoustic field and the dynamic loads that generate tire vibration, providing a complete solution to the problem of modelling tire noise: the turbulent flow around the tire with generation of vortex sound, as well as the dynamic loads and noise emission due to tire vibration. The method is first implemented and tested in the FlowVision software package. The results obtained with FlowVision are compared with those obtained with the LMS Virtual.Lab Acoustics package, and a number of differences in the acoustic field are highlighted.
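    For context, the Lighthill equation mentioned above reads, in its standard form,

```latex
\frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho'
  = \frac{\partial^2 T_{ij}}{\partial x_i \partial x_j},
\qquad
T_{ij} = \rho u_i u_j + \left(p' - c_0^2 \rho'\right)\delta_{ij} - \tau_{ij},
```

    where $T_{ij}$ is the Lighthill stress tensor; the acoustic-vortex decomposition instead derives an inhomogeneous wave equation for the static enthalpy pulsations with a source that depends on the vortex-mode velocity field.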

  9. Plokhotnikov K.E.
    The problem of choosing solutions in the classical format of the description of a molecular system
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1573-1600

    The numerical methods recently developed by the author for calculating a molecular system based on the direct solution of the Schrödinger equation by the Monte Carlo method have revealed a huge uncertainty in the choice of solutions. On the one hand, it turned out to be possible to build many new solutions; on the other hand, the problem of their connection with reality has become sharply aggravated. In ab initio quantum mechanical calculations, the problem of choosing solutions is not so acute after the transition to the classical format of describing a molecular system in terms of potential energy, the method of molecular dynamics, etc. In this paper, we investigate the problem of choosing solutions in the classical format of describing a molecular system without taking quantum mechanical prerequisites into account. As it turns out, the problem of choosing solutions in the classical format reduces to a specific marking of the configuration space in the form of a set of stationary points and reconstruction of the corresponding potential energy function. In this formulation, the choice problem reduces to two possible physical and mathematical problems: to find all stationary points of a given potential energy function (the direct problem), and to reconstruct the potential energy function from a given set of stationary points (the inverse problem). In this paper, using a computational experiment, the direct problem is discussed using the example of a description of a monoatomic cluster. The number and shape of the locally equilibrium (saddle) configurations of the binary potential are numerically estimated. An appropriate measure is introduced to distinguish configurations in space. A format is proposed for constructing the entire chain of multiparticle contributions to the potential energy function: binary, three-particle, etc., up to the multiparticle potential of maximum order. The infinite number of locally equilibrium (saddle) configurations of the maximal multiparticle potential is discussed and illustrated. A method is proposed for varying the number of stationary points by combining multiparticle contributions to the potential energy function. The results listed above are aimed at reducing the huge arbitrariness in the choice of the form of the potential that currently takes place. Reducing the arbitrariness of choice means that the available knowledge about a very specific set of stationary points is made consistent with the corresponding form of the potential energy function.
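    As a toy instance of the direct problem, the sketch below finds the single equilibrium separation of a Lennard-Jones dimer (a common choice of binary potential, used here purely for illustration; the paper's cluster configurations are higher-dimensional) as the root of the potential's derivative:

```python
# Stationary point of the Lennard-Jones pair potential: solve dV/dr = 0.
from scipy.optimize import brentq

def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential."""
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def dlj(r, eps=1.0, sigma=1.0):
    """Derivative dV/dr; its roots are the stationary points."""
    return 4 * eps * (-12 * sigma**12 / r**13 + 6 * sigma**6 / r**7)

r_eq = brentq(dlj, 0.5, 3.0)  # bracketing root search for the equilibrium
```

    For this potential the stationary point is known in closed form, $r_{eq} = 2^{1/6}\sigma$ with $V(r_{eq}) = -\varepsilon$, which makes the sketch easy to check.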

  10. Sergienko A.V., Akimenko S.S., Karpov A.A., Myshlyavtsev A.V.
    Influence of the simplest type of multiparticle interactions on the example of a lattice model of an adsorption layer
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 445-458

    Self-organization of molecules on a solid surface is one of the promising directions for generating materials with unique magnetic, electrical, and optical properties. They can be widely used in fields such as electronics, optoelectronics, catalysis, and biology. However, the structure and physicochemical properties of adsorbed molecules are influenced by many parameters that must be taken into account when studying the self-organization of molecules. Therefore, the experimental study of such materials is expensive and quite often difficult for various reasons. In such situations, it is advisable to use mathematical modeling. One of the parameters in the considered adsorption systems is the multiparticle interaction, which is often not taken into account in simulations due to the complexity of the calculations. In this paper, we evaluated the influence of multiparticle interactions on the total energy of the system using the transfer-matrix method and the Materials Studio software package. The model of monocentric adsorption with nearest-neighbor interactions on a triangular lattice was taken as the basis. Phase diagrams in the ground state were constructed, and a number of thermodynamic characteristics (coverage $\theta$, entropy $S$, susceptibility $\xi$) were calculated at nonzero temperatures. The formation of all four ordered structures (lattice gas with $\theta=0$, $(\sqrt{3} \times \sqrt{3}) R30^{\circ}$ with $\theta = \frac{1}{3}$, $(\sqrt{3} \times \sqrt{3})R^{*}30^{\circ}$ with $\theta = \frac{2}{3}$, and the densest phase with $\theta = 1$) in a system with only pairwise interactions, and the absence of the phase $(\sqrt{3}\times \sqrt{3}) R30^\circ$ when only three-body interactions are taken into account, were found. Using the example of an atomistic model of the trimesic acid adsorption layer, we determined by quantum mechanical methods that in such a system the contribution of multiparticle interactions is 11.44% of the pair interaction energy. At such values there are only quantitative differences: the transition region from the $(\sqrt{3} \times \sqrt{3}) R^{*}30^\circ$ phase to the densest phase shifts to the right by 38.25% at $\frac{\varepsilon}{RT} = 4$ and to the left by 23.46% at $\frac{\varepsilon}{RT} = -2$.
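    The transfer-matrix machinery can be sketched on a 1D lattice gas with nearest-neighbor interaction energy $\varepsilon$ (the paper's 2D triangular lattice requires a much larger matrix; this toy only shows the mechanics):

```python
# Coverage of a 1D nearest-neighbor lattice gas from the dominant
# eigenvector of the symmetrized 2x2 transfer matrix.
import numpy as np

def coverage_1d(mu, eps, beta=1.0):
    """Coverage <n> of a 1D lattice gas (chemical potential mu, bond energy eps)."""
    z = np.exp(beta * mu)   # site activity
    w = np.exp(-beta * eps)  # Boltzmann weight of an occupied-occupied bond
    # Site states: 0 (empty), 1 (occupied); T[s, s'] weights one bond.
    T = np.array([[1.0,        np.sqrt(z)],
                  [np.sqrt(z), z * w]])
    vals, vecs = np.linalg.eigh(T)
    v = vecs[:, -1]          # eigenvector of the largest eigenvalue
    return v[1] ** 2 / (v @ v)  # <n> in the thermodynamic limit

theta = coverage_1d(mu=0.0, eps=0.0)  # non-interacting case
```

    In the non-interacting case at $\mu = 0$ the coverage is exactly $\theta = 1/2$, which serves as a sanity check of the construction.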


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

