Computer simulation of temperature field of blast furnace’s air tuyere
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 117-125
Studying the operation of heating equipment is a relevant problem, since it allows optimal regimes to be determined for reaching the highest efficiency. Computer simulation is very helpful here: it predicts how different heating modes affect the effectiveness of the heating process and the wear of the equipment, provides results whose accuracy has been confirmed by many studies, and requires less cost and time than physical experiments. In the present research, the heating of a blast furnace air tuyere was simulated with FEM software. Background studies showed that the problem can be treated as a flat, axisymmetric one, so the DEFORM-2D package was used. The geometry required for the simulation was designed in SolidWorks, saved in .dxf format, exported to the DEFORM-2D pre-processor, and positioned; initial and boundary conditions were then specified. Several operating regimes were analyzed. To demonstrate the influence of each mode and for better visualization, the point-tracking option of the DEFORM-2D post-processor was applied. The influence on the tuyere’s temperature field of a thermal insulation box inserted into the blow channel, with and without an air gap, and of a thermal coating was investigated. The simulation data demonstrated a significant effect of the thermal insulation box on the tuyere’s temperature field. The model also made it possible to simulate tuyere burnout resulting from interaction with liquid iron. The research demonstrated the effectiveness of DEFORM-2D for simulating heat transfer and heating processes; it will be used in further studies of more complex processes connected with the temperature field of the blast furnace’s air tuyere.
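The heat-transfer setting described above can be illustrated with a far simpler calculation than the paper’s DEFORM-2D model. The sketch below is a 1D explicit finite-difference scheme for transient conduction through a tuyere wall; the material properties, wall thickness, and boundary temperatures are rough assumptions for illustration, not data from the study (which solved a 2D axisymmetric problem):

```python
import numpy as np

# Illustrative sketch only: 1D explicit finite differences for transient
# heat conduction through a water-cooled copper tuyere wall.
alpha = 1.1e-4        # thermal diffusivity of copper, m^2/s (textbook value)
L = 0.02              # wall thickness, m (assumed)
n = 51                # grid nodes
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha   # stable explicit step: alpha*dt/dx**2 <= 0.5

T = np.full(n, 40.0)             # initial wall temperature, deg C (assumed)
T_hot, T_cool = 1200.0, 40.0     # blast side / cooling side, deg C (assumed)

for _ in range(20000):
    T[0], T[-1] = T_hot, T_cool  # fixed-temperature boundaries
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

# After enough steps the profile approaches the linear steady state,
# so the midpoint temperature approaches the mean of the boundaries.
mid_expected = 0.5 * (T_hot + T_cool)
print(abs(T[n // 2] - mid_expected) < 1.0)
```

The explicit scheme is only stable when alpha·dt/dx² ≤ 0.5; implicit schemes, as used in FEM packages such as DEFORM-2D, avoid this restriction.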
Method of estimation of heart failure during a physical exercise
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 311-321
The results of determining the risk of cardiovascular failure in young athletes and adolescents under stressful physical activity are presented, together with a method for screening diagnostics of the risk of developing heart failure. The form of the pulse wave of the radial artery was measured contactlessly using a semiconductor laser autodyne (an RLD-650 laser diode: output power 5 mW, emission wavelength 654 nm). The work solved the problem of reconstructing the form of the reflector movement, where the skin surface over the human artery acts as the reflector; tested the method of assessing the risk of cardiovascular disease during exercise; and analyzed the results of applying it to assess the risk of cardiovascular failure in young athletes. The following indicators were selected for analysis: the steepness of the rise of the fast and slow phases of the systolic portion, the rate of change of the pulse wave catacrota, and the variability of cardio intervals determined from the time intervals between pulse wave peaks. The pulse wave form was analyzed through its first and second derivatives with respect to time. The zeros of the first derivative delimit the systolic rise in time; a minimum of the second derivative corresponds to the end of the fast phase and the beginning of the slow pressure build-up in systole. Using the first and second derivatives made it possible to analyze separately the fast and slow pressure-increase phases of the systolic expansion. It has been established that anomalies in the pulse wave form, combined with vagotonic nervous regulation of the patient’s cardiovascular system, are a sign of danger of circulatory collapse during physical exercise.
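The derivative-based landmark detection described in the abstract can be sketched on a synthetic waveform. Everything below (the sampling rate and the analytic pulse-like shape) is invented for illustration and is not the authors’ measured autodyne signal:

```python
import numpy as np

# Synthetic pulse-like waveform: fast systolic rise, slower secondary
# component, then decay (catacrota). Made-up analytic shape, one cycle.
fs = 1000.0                       # sampling rate, Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
p = t**2 * np.exp(-8 * t) + 0.05 * t**3 * np.exp(-5 * t)

dp = np.gradient(p, t)            # first derivative of the pulse wave
d2p = np.gradient(dp, t)          # second derivative

# systolic peak: first sign change of the first derivative from + to -
peak = int(np.where((dp[:-1] > 0) & (dp[1:] <= 0))[0][0])
# end of the fast phase: minimum of the second derivative before the peak
fast_end = int(np.argmin(d2p[:peak]))

# the fast-phase boundary precedes the systolic peak, early in the cycle
print(0.0 < t[fast_end] < t[peak] < 0.5)
```

On real signals the derivatives would first need smoothing; this sketch only shows how the zeros of the first derivative and the minimum of the second derivative segment the systolic rise.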
Cytokines as indicators of the state of the organism in infectious diseases. Experimental data analysis
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1409-1426
When a person’s disease results from bacterial infection, various characteristics of the organism are used to observe the course of the disease. One such indicator is the dynamics of the concentrations of cytokines, low-molecular-weight proteins produced mainly by cells of the immune system; many types exist in the human body and in many animal species. The study of cytokines is important for interpreting functional disorders of the body’s immune system, assessing disease severity, monitoring the effectiveness of therapy, and predicting the course and outcome of treatment. The body’s cytokine response thus indicates characteristics of the course of the disease. To investigate the regularities of this indication, experiments were conducted on laboratory mice. Experimental data on the development of pneumonia after bacterial infection of mice and on treatment with several drugs are analyzed; the immunomodulatory drugs “Roncoleukin”, “Leikinferon” and “Tinrostim” were used. The data comprise the concentrations of two types of cytokines in lung tissue and in the blood of the animals. Many-sided statistical and non-statistical analysis of the data allowed us to find common patterns of change in the “cytokine profile” of the body and to link them with the properties of the therapeutic preparations. The studied cytokines, interleukin-10 (IL-10) and interferon gamma (IFN$\gamma$), deviate in infected mice from the normal level of intact animals, indicating the development of the disease. Changes in cytokine concentrations in groups of treated mice are compared with those in a group of healthy (not infected) mice and a group of infected untreated mice. The comparison is made for groups of individuals, since cytokine concentrations are individual and differ significantly between animals.
Under these conditions, only groups of individuals can reveal the regularities of the disease course. The groups of mice were observed for two weeks. The dynamics of cytokine concentrations indicates characteristics of the disease course and the efficiency of the therapeutic drugs used. The effect of a drug on the organisms is monitored through the location of these groups of individuals in the space of cytokine concentrations, using the Hausdorff distance between the sets of cytokine-concentration vectors of the individuals, based on the Euclidean distance between the elements of these sets. It was found that the drugs “Roncoleukin” and “Leikinferon” have a generally similar effect on the course of the disease, different from that of “Tinrostim”.
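The group-to-group comparison used above can be sketched directly. The function below computes the symmetric Hausdorff distance between two sets of concentration vectors from pairwise Euclidean distances; the vectors themselves are invented toy values, not the experimental measurements:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A, B (rows = vectors)."""
    # pairwise Euclidean distances between every row of A and every row of B
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # farthest point of each set from the other set, whichever is larger
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# toy (IL-10, IFN-gamma) concentration vectors for two groups of animals
healthy  = np.array([[1.0, 2.0], [1.2, 2.1], [0.9, 1.8]])
infected = np.array([[3.0, 5.0], [2.8, 4.7]])

print(hausdorff(healthy, healthy))        # a set is at distance 0 from itself
print(hausdorff(healthy, infected) > 0)   # separated groups are far apart
```

Because it compares whole sets rather than means, the Hausdorff distance is sensitive to outlying individuals, which fits the observation that concentrations differ strongly between animals.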
Keywords: data processing, experiment, cytokine, immune system, pneumonia, statistics, approximation, Hausdorff distance.
Analysis of the identifiability of the mathematical model of propane pyrolysis
Computer Research and Modeling, 2021, v. 13, no. 5, pp. 1045-1057The article presents the numerical modeling and study of the kinetic model of propane pyrolysis. The study of the reaction kinetics is a necessary stage in modeling the dynamics of the gas flow in the reactor.
The kinetic model of propane pyrolysis is a nonlinear system of first-order ordinary differential equations with parameters, whose role is played by the reaction rate constants. Mathematical modeling of the process is based on the mass conservation law. To solve the initial (forward) problem, implicit methods for stiff systems of ordinary differential equations are used. The model contains 60 input kinetic parameters and 17 output parameters corresponding to the reaction substances, of which only 9 are observable. When solving the parameter estimation (inverse) problem, the question arises of the non-uniqueness of the set of parameters that satisfy the experimental data. Therefore, before solving the inverse problem, the possibility of determining the model parameters is analyzed (identifiability analysis).
For the identifiability analysis, we use the orthogonal method, which has proven itself well for models with a large number of parameters. The algorithm is based on analyzing the sensitivity matrix by methods of differential and linear algebra; this matrix shows the degree of dependence of the unknown model parameters on the given measurements. The sensitivity and identifiability analysis showed that the model parameters are stably determined from the given set of experimental data. The article lists the model parameters from most to least identifiable. Taking the identifiability analysis into account, restrictions were imposed on the search for the less identifiable parameters when solving the inverse problem.
The inverse problem of parameter estimation was solved using a genetic algorithm, and the article presents the optimal values of the kinetic parameters found. A comparison of the experimental and calculated dependences of the concentrations of propane and of the main and by-products of the reaction on temperature, for different flow rates of the mixture, is presented. The conclusion that the constructed mathematical model is adequate is drawn from the correspondence of the obtained results to physicochemical laws and experimental data.
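The forward-problem machinery mentioned above, an implicit solver for a stiff kinetic system, can be sketched on a toy scheme A → B → C with widely separated rate constants. The two rate constants here are arbitrary illustrative values, not parameters of the propane model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stiff kinetic scheme A -> B -> C; the rate constants differ by
# four orders of magnitude, which makes the system stiff (assumed values).
k1, k2 = 1.0e4, 1.0

def rhs(t, y):
    a, b, c = y
    # right-hand sides sum to zero: mass conservation law
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-10)  # implicit stiff solver

total = sol.y.sum(axis=0)          # total mass should remain 1 throughout
print(sol.success, np.allclose(total, 1.0, atol=1e-6))
```

An explicit solver on the same system would need step sizes on the order of 1/k1; the implicit BDF method takes steps governed by the slow dynamics, which is why such methods are prescribed for kinetic models.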
The agent model of intercultural interactions: the emergence of cultural uncertainties
Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1143-1162
The article describes a simulation agent-based model of intercultural interactions in a country whose population belongs to different cultures. It is assumed that the space of cultures can be represented as a Hilbert space in which certain subspaces correspond to different cultures; in the model, the concept of a culture is understood as a structured subspace of this Hilbert space. This makes it possible to describe the state of an agent by a vector in the Hilbert space, each agent being characterized by belonging to a certain «culture». The number of agents belonging to particular cultures is determined by the demographic processes of those cultures, by the depth and integrity of the educational process, and by the intensity of intercultural contacts. Interaction between agents occurs within clusters into which the whole set of agents is divided according to certain criteria. When agents interact, the length and angle characterizing an agent’s state change according to a certain algorithm. In the course of the simulation, depending on the numbers of agents belonging to the different cultures, the intensity of demographic and educational processes, and the intensity of intercultural contacts, aggregates of agents (clusters) form whose members belong to different cultures. Such intercultural clusters do not belong entirely to any of the cultures initially considered in the model, and they create uncertainties in the cultural dynamics. The paper presents the results of simulation experiments illustrating the influence of demographic and educational processes on the dynamics of intercultural clusters, and discusses how the proposed approach could be developed for studying transitional states in the development of cultures.
Fuzzy knowledge extraction in the development of expert predictive diagnostic systems
Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1395-1408
Expert systems imitate the professional experience and reasoning of a specialist to solve problems in various subject areas. One problem that it is expedient to solve with an expert system is forming a diagnosis, a task that arises in technology, medicine, and other fields. When solving a diagnostic problem, it is necessary to anticipate the occurrence of critical or emergency situations in the future, situations which require the timely intervention of specialists to prevent severe consequences. Fuzzy set theory provides one approach to such ill-structured problems, to which diagnosis-making belongs. It provides means for forming linguistic variables that are helpful in describing the modeled process. Linguistic variables are elements of fuzzy logical rules that simulate the reasoning of professionals in the subject area. Developing fuzzy rules requires surveying experts: knowledge engineers use the experts’ opinions to evaluate the correspondence between a typical current situation and the risk of an emergency in the future. The result of knowledge extraction is a description of linguistic variables that includes a combination of signs; experts are involved in the survey to create these descriptions and to present a set of simulated situations. When building such systems, the main difficulty of the survey is the laboriousness of the interaction between knowledge engineers and experts, caused chiefly by the multiplicity of questions the expert must answer. The paper substantiates a method that allows the knowledge engineer to reduce the number of questions posed to the expert, and describes the experiments carried out to test the applicability of the proposed method.
An expert system for predicting risk groups for neonatal pathologies and pregnancy pathologies using the proposed knowledge extraction method confirms the feasibility of the proposed approach.
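A linguistic variable of the kind described can be given a minimal concrete form. The triangular membership function below, with invented breakpoints, maps a numeric sign to a degree of membership in the term “high”; real diagnostic systems would combine many such terms in fuzzy rules:

```python
def tri(x, a, b, c):
    """Triangular membership: 0 at or below a, 1 at b, 0 at or above c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# degree to which a heart rate of 105 bpm counts as "high"
# (breakpoints 90/120/150 are illustrative assumptions, not expert data)
mu_high = tri(105.0, 90.0, 120.0, 150.0)
print(mu_high)   # halfway up the rising edge: 0.5
```

The expert survey described in the abstract is what fixes breakpoints like these; the method proposed by the authors aims to reduce how many such questions each expert must answer.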
Subgradient methods for weakly convex and relatively weakly convex problems with a sharp minimum
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 393-412
The work is devoted to the study of subgradient methods with different variations of the Polyak stepsize for minimizing functions from the classes of weakly convex and relatively weakly convex functions that possess the corresponding analogue of a sharp minimum. It turns out that, under certain assumptions about the starting point, such an approach makes it possible to justify convergence of the subgradient method at the rate of a geometric progression. For the subgradient method with the Polyak stepsize, a refined estimate of the convergence rate is proved for minimization problems with weakly convex functions with a sharp minimum. The feature of this estimate is that it additionally takes into account the decrease of the distance from the current iterate to the set of solutions as the number of iterations grows. The results of numerical experiments for the phase reconstruction problem (which is weakly convex and has a sharp minimum) are presented, demonstrating the effectiveness of the proposed approach to estimating the convergence rate compared to the known one. Next, we propose a variation of the subgradient method with switching between productive and non-productive steps for weakly convex problems with inequality constraints and obtain the corresponding analogue of the result on convergence at the rate of a geometric progression. For the subgradient method with the corresponding variation of the Polyak stepsize on the class of relatively Lipschitz and relatively weakly convex functions with a relative analogue of a sharp minimum, conditions were obtained that guarantee convergence of such a subgradient method at the rate of a geometric progression.
Finally, a theoretical result is obtained that describes the influence of the error in the information about the (sub)gradient and the objective function available to the subgradient method on the estimate of the quality of the obtained approximate solution. It is proved that for a sufficiently small error $\delta > 0$, one can guarantee an accuracy of the solution comparable to $\delta$.
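The Polyak-stepsize subgradient step analyzed above, $x_{k+1} = x_k - \frac{f(x_k) - f^*}{\|g_k\|^2}\, g_k$, can be sketched on a simple function with a sharp minimum. The example below uses $f(x) = \|x\|_1$ with known $f^* = 0$; it only illustrates the geometric decrease of the distance to the solution, not the paper’s constrained or relative settings:

```python
import numpy as np

# f(x) = ||x||_1: convex, with a sharp minimum at 0 and f* = 0.
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)      # a valid subgradient of the l1 norm
f_star = 0.0

x = np.array([3.0, -2.0, 5.0])
dists = [np.linalg.norm(x)]
for _ in range(100):
    g = subgrad(x)
    gn2 = g @ g
    if gn2 == 0:
        break                        # exactly at the minimum
    # Polyak stepsize: (f(x_k) - f*) / ||g_k||^2
    x = x - (f(x) - f_star) / gn2 * g
    dists.append(np.linalg.norm(x))

# distance to the solution shrinks at a geometric rate
print(dists[-1] < 1e-6 * dists[0])
```

For this example the standard estimate $\|x_{k+1}\|^2 \le \|x_k\|^2\bigl(1 - \tfrac{1}{d}\bigr)$ (with $d$ the dimension) already yields the geometric progression; the paper refines such estimates for weakly convex problems.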
The problem of choosing solutions in the classical format of the description of a molecular system
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1573-1600
The numerical methods recently developed by the author for calculating a molecular system based on the direct solution of the Schrödinger equation by the Monte Carlo method revealed a huge uncertainty in the choice of solutions. On the one hand, it turned out to be possible to construct many new solutions; on the other hand, the problem of their connection with reality became sharply aggravated. In ab initio quantum mechanical calculations, the problem of choosing solutions is not so acute after the transition to the classical format of describing a molecular system in terms of potential energy, molecular dynamics, etc. In this paper, we investigate the problem of choosing solutions in the classical format of describing a molecular system, without invoking quantum mechanical prerequisites. As it turns out, the choice problem in this format reduces to a specific marking of configuration space in the form of a set of stationary points and the reconstruction of the corresponding potential energy function. In this formulation, the choice problem splits into two physical and mathematical problems: for a given potential energy function, find all its stationary points (the direct problem), and for a given set of stationary points, reconstruct the potential energy function (the inverse problem). In this paper, the direct problem is discussed by means of a computational experiment, using the description of a monoatomic cluster as an example. The number and shape of the locally equilibrium (saddle) configurations of a binary potential are estimated numerically, and an appropriate measure is introduced to distinguish configurations in space.
A format is proposed for constructing the entire chain of multiparticle contributions to the potential energy function: binary, three-particle, etc., up to the multiparticle potential of maximum partiality. An infinite number of locally equilibrium (saddle) configurations for the maximal multiparticle potential is discussed and illustrated. A method is proposed for varying the number of stationary points by combining multiparticle contributions to the potential energy function. The results listed above are aimed at reducing the huge arbitrariness in the choice of the form of the potential that currently takes place: the available knowledge about a very specific set of stationary points is made consistent with the corresponding form of the potential energy function.
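The direct problem described above, locating stationary points of a binary potential for a monoatomic cluster, can be sketched for the smallest nontrivial case. The code below finds a local minimum of a Lennard-Jones trimer in reduced units; the LJ form merely stands in for the paper’s binary potential, which is not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat):
    """Total pairwise Lennard-Jones energy of a cluster (reduced units)."""
    x = flat.reshape(-1, 3)
    e = 0.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            r = np.linalg.norm(x[i] - x[j])
            e += 4.0 * (r**-12 - r**-6)
    return e

# three atoms near (but not at) the equilateral-triangle minimum
start = np.array([[0.0, 0.0, 0.0],
                  [1.2, 0.1, 0.0],
                  [0.5, 1.0, 0.1]]).ravel()
res = minimize(lj_energy, start, method="BFGS")

# at a stationary point the gradient vanishes; for the LJ trimer the
# minimum has all pair distances 2**(1/6) and energy exactly -3
print(np.linalg.norm(res.jac) < 1e-3, abs(res.fun + 3.0) < 1e-4)
```

Saddle configurations, which the paper also counts, require dedicated saddle-search methods rather than plain minimization; this sketch covers only the locally equilibrium (minimum) case.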
Reinforcement learning-based adaptive traffic signal control invariant to traffic signal configuration
Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1253-1269
In this paper, we propose an adaptive traffic signal control method that is invariant to the configuration of the traffic signal. The method uses a single neural network model to control traffic signals of various configurations, differing both in the number of controlled lanes and in the traffic light control cycle used (the set of phases). The state space is described using both dynamic information about the current state of the traffic flow and static data about the configuration of the controlled intersection. To speed up model training and reduce the amount of data needed for convergence, it is proposed to use an “expert” that provides additional training data; as the expert, we use an adaptive control method based on maximizing the weighted flow of vehicles through the intersection. Experimental studies of the effectiveness of the developed method were carried out in a microscopic simulation software package. The results confirmed the effectiveness of the proposed method in different simulation scenarios, including a scenario not used during training. We compare the proposed method with other baseline solutions, including the method used as the “expert”. In most scenarios, the developed method showed the best results by the criteria of average travel time and average waiting time. Its advantage over the expert method, depending on the scenario, ranged from 2% to 12% by the criterion of average vehicle waiting time and from 1% to 7% by the criterion of average travel time.
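The “expert” rule mentioned above can be sketched as a simple greedy choice: among the phases of the signal cycle, pick the one that releases the largest weighted number of queued vehicles. The lane/phase encoding and weights below are invented for illustration; the paper’s exact formulation may differ:

```python
def expert_phase(queues, phases, weights):
    """Pick the phase index maximizing the weighted flow it would serve.

    queues  -- vehicles waiting per lane
    phases  -- lanes released by each phase of the cycle
    weights -- per-lane weights (e.g. priorities), all 1.0 here
    """
    score = lambda ph: sum(weights[l] * queues[l] for l in ph)
    return max(range(len(phases)), key=lambda i: score(phases[i]))

queues  = {"N": 8, "S": 3, "E": 1, "W": 2}
weights = {"N": 1.0, "S": 1.0, "E": 1.0, "W": 1.0}
phases  = [("N", "S"), ("E", "W")]   # a two-phase cycle (assumed)
print(expert_phase(queues, phases, weights))  # phase 0 serves 11 vs. 3
```

In the paper this kind of rule supplies demonstration data to the reinforcement learning agent, which then generalizes across signal configurations the rule itself cannot.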
Hypergeometric functions in a model of general equilibrium of a multisector economy with monopolistic competition
Computer Research and Modeling, 2017, v. 9, no. 5, pp. 825-836
We show that basic properties of some models of monopolistic competition are described by families of hypergeometric functions. The results are obtained by building a general equilibrium model of a multisector economy producing a differentiated good in $n$ high-tech sectors, in which single-product firms compete monopolistically using the same technology. The homogeneous (traditional) sector is characterized by perfect competition. Workers are motivated to seek jobs in the high-tech sectors, where wages are higher; however, they risk remaining unemployed. Unemployment persists in equilibrium because of labor market imperfections, and wages in the high-tech sectors are set by firms as a result of negotiations with employees. Individuals are assumed to be homogeneous consumers with identical preferences given by a separable utility function of general form. The paper finds conditions under which the general equilibrium in the model exists and is unique. The conditions are formulated in terms of the elasticity of substitution $\mathfrak{S}$ between varieties of the differentiated good, averaged over all consumers. The equilibrium found is symmetric with respect to the varieties of the differentiated good. The equilibrium variables can be represented as implicit functions whose properties are associated with the elasticity $\mathfrak{S}$ introduced by the authors. A complete analytical description of the equilibrium variables is possible for known special cases of the consumers’ utility function, for example power functions, which however cannot correctly describe the response of the economy to changes in the size of the markets. To simplify the implicit functions, we introduce a utility function defined by two one-parameter families of hypergeometric functions.
One of the families describes a pro-competitive, and the other an anti-competitive, response of prices to an increase in the size of the economy. Varying the parameter of each family covers all possible values of the elasticity $\mathfrak{S}$; in this sense, the hypergeometric functions exhaust the natural utility functions. It is established that as the elasticity of substitution between the varieties of the differentiated good increases, the difference between the high-tech and homogeneous sectors is erased. It is shown that when the economy is large, individuals in equilibrium consume a small amount of each variety, as in the case of power preferences. This fact allows approximating the hypergeometric functions by a sum of power functions in a neighborhood of the equilibrium values of the argument. Thus, replacing power utility functions by hypergeometric ones approximated by the sum of two power functions, on the one hand, retains all the flexibility of parameter tuning and, on the other hand, allows describing the effects of changes in the size of the sectors of the economy.
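The approximation step in the last sentence can be sketched numerically: near zero, a Gauss hypergeometric function is close to the leading terms of its power series. The parameter values below are arbitrary and are not the paper’s utility families:

```python
import numpy as np
from scipy.special import hyp2f1

# 2F1(a, b; c; z) ~ 1 + (a*b/c) z + ... for small z (series truncation)
a, b, c = 0.5, 1.5, 2.0      # illustrative parameters (assumed)
z = np.linspace(0.0, 0.03, 7)

exact = hyp2f1(a, b, c, z)
approx = 1.0 + (a * b / c) * z   # two-term power approximation

# truncation error is second order in z, tiny on this interval
print(np.max(np.abs(exact - approx)) < 1e-3)
```

This mirrors the argument in the abstract: when equilibrium consumption of each variety is small, the hypergeometric utility is well approximated by a sum of power functions.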