- Relation between performance of organization and its structure during sudden and smoldering crises
Computer Research and Modeling, 2016, v. 8, no. 4, pp. 685-706. Views (last year): 2. Citations: 2 (RSCI).
The article describes a mathematical model that simulates the performance of a hierarchical organization during the early stage of a crisis. A distinguishing feature of this stage is the presence of so-called early warning signals that contain information about the approaching event. Employees are able to catch the early warnings and to prepare the organization for the crisis based on the signals' meaning. The efficiency of this preparation depends on both the parameters of the organization and the parameters of the crisis. The proposed agent-based simulation model is implemented in the Java programming language and is used for conducting Monte Carlo experiments. The goal of the experiments is to compare how centralized and decentralized organizational structures perform during sudden and smoldering crises. By centralized organizations we mean structures with many hierarchy levels and few direct reports per manager, while decentralized organizations are structures with few hierarchy levels and many direct reports per manager. Sudden crises are distinguished by a short early stage and a low number of warning signals, while smoldering crises have a long-lasting early stage and a high number of warning signals that do not necessarily contain important information. The efficiency of organizational performance during the early stage of a crisis is measured by two parameters: the percentage of early warnings that have been acted upon in order to prepare the organization for the crisis, and the time spent by the top manager on working with early warnings. As a result, we show that during the early stage of smoldering crises centralized organizations process signals more efficiently than decentralized ones, while decentralized organizations handle early warning signals more efficiently during the early stage of sudden crises. However, the workload of top managers during sudden crises is higher in decentralized organizations, whereas during smoldering crises it is higher in centralized organizations. Thus, neither of the two classes of organizational structures is more efficient by both parameters simultaneously. Finally, we conduct a sensitivity analysis to verify the obtained results.
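For illustration only, the toy Monte Carlo harness below mimics the structure of such an experiment in Python (the article's model is an agent-based Java implementation): signals must be escalated through every hierarchy level, and each run reports the share of processed signals and the top manager's workload. The forwarding probability, signal counts and numbers of levels are invented for the sketch and are not expected to reproduce the article's findings.

```python
# Toy Monte Carlo comparison of hierarchies processing early warning signals.
# All parameters (forwarding probability, handling time, signal counts) are
# illustrative assumptions, not values from the article.
import random

def run_trial(levels, n_signals, p_forward=0.8, step_time=1.0):
    """Return (share of signals reaching the top manager, top-manager busy time)."""
    acted, top_time = 0, 0.0
    for _ in range(n_signals):
        # A signal must be forwarded upward through every hierarchy level.
        if all(random.random() < p_forward for _ in range(levels)):
            acted += 1
            top_time += step_time      # top manager handles the escalated signal
    return acted / n_signals, top_time

def monte_carlo(levels, n_signals, trials=1000):
    results = [run_trial(levels, n_signals) for _ in range(trials)]
    share = sum(r[0] for r in results) / trials
    busy = sum(r[1] for r in results) / trials
    return share, busy

# "Sudden" crisis: few signals; "smoldering" crisis: many signals.
for label, n_signals in [("sudden", 5), ("smoldering", 50)]:
    for structure, levels in [("centralized", 5), ("decentralized", 2)]:
        share, busy = monte_carlo(levels, n_signals)
        print(f"{label:10s} {structure:13s} processed={share:.2f} top_time={busy:.1f}")
```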
- Analysis of the physics-informed neural network approach to solving ordinary differential equations
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1621-1636.
We consider the application of physics-informed neural networks based on multilayer perceptrons to solving Cauchy initial value problems in which the right-hand sides of the equation are continuous monotonically increasing, decreasing or oscillating functions. Computational experiments are used to study how the construction of the approximate neural network solution, the network structure, the optimization algorithm and the software implementation affect the learning process and the accuracy of the obtained solution. The efficiency of the most frequently used machine learning frameworks is analyzed for software developed in the Python and C# programming languages. It is shown that using the C# language reduces neural network training time by 20–40%. The choice of activation function affects the learning process and the accuracy of the approximate solution; the most effective functions in the considered problems are the sigmoid and the hyperbolic tangent. For a fixed training time of the neural network model, the minimum of the loss function is achieved at a certain number of neurons in the hidden layer of a single-layer network; further complicating the network structure by increasing the number of neurons does not improve the training results. At the same time, the size of the grid step between the points of the training sample that provides a minimum of the loss function is almost the same for the considered Cauchy problems. For training single-layer neural networks, the Adam method and its modifications are the most effective at solving the optimization problem. The application of two- and three-layer neural networks is also considered. It is shown that in these cases it is reasonable to use the L-BFGS algorithm, which, in comparison with the Adam method, in some cases requires a much shorter training time to achieve the same solution accuracy. The specifics of neural network training are also investigated for Cauchy problems whose solution is an oscillating function with monotonically decreasing amplitude. For these problems, it is necessary to construct a neural network solution with a variable weight coefficient rather than a constant one, which improves the solution in the grid cells located near the end point of the solution interval.
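As an illustration of the general approach (not the code from the paper), the sketch below trains a single-hidden-layer perceptron with a tanh activation to solve a simple Cauchy problem y' = cos x, y(0) = 0, using the Adam optimizer. The architecture, interval, learning rate and number of training points are assumptions made for the example.

```python
# Minimal PINN sketch for a Cauchy problem. The trial solution
# y_hat(x) = y0 + x * N(x) satisfies the initial condition exactly;
# the loss is the mean squared residual of the ODE y' = cos(x).
import math
import torch

torch.manual_seed(0)
y0 = 0.0
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),   # tanh: one of the activations found effective
    torch.nn.Linear(32, 1),
)

x = torch.linspace(0.0, 2 * math.pi, 100).reshape(-1, 1).requires_grad_(True)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    y_hat = y0 + x * net(x)                                              # trial solution
    dy_dx = torch.autograd.grad(y_hat, x, torch.ones_like(y_hat),
                                create_graph=True)[0]
    loss = torch.mean((dy_dx - torch.cos(x)) ** 2)                       # ODE residual
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, y_hat approximates sin(x) on the training interval.
```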
- Statistical modeling of the production processes of the flexible automated assembly in the object-oriented programming environment
Computer Research and Modeling, 2015, v. 7, no. 2, pp. 289-300. Views (last year): 2. Citations: 1 (RSCI).
Using the modern object-oriented programming language C#, a program was developed to simulate the operation of a conveyor for flexible automated PC assembly. A class diagram of the simulation model of a flexible automated PC assembly line operating in mass production mode is presented, and the simulation results are analyzed.
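A minimal sketch of the same idea is shown below in Python (the article's program is written in C#): each PC passes through a sequence of assembly stations with random operation times, and the mean assembly time is estimated statistically. The station names and mean times are assumptions made for the example.

```python
# Toy statistical simulation of a sequential PC-assembly conveyor.
import random

STATIONS = [("mainboard", 4.0), ("cpu_ram", 3.0), ("drives", 2.5), ("test", 5.0)]  # mean minutes

def assemble_one():
    """Total assembly time of a single PC with exponential station times."""
    return sum(random.expovariate(1.0 / mean) for _, mean in STATIONS)

def simulate(n_units=10_000):
    times = [assemble_one() for _ in range(n_units)]
    return sum(times) / n_units

print(f"average assembly time per PC: {simulate():.2f} min")
```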
- Model for building of the radio environment map for cognitive communication system based on LTE
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 127-146.
The paper is devoted to the secondary use of spectrum in telecommunication networks. It is emphasized that one solution to this problem is the use of cognitive radio technologies and dynamic spectrum access, whose successful functioning requires a large amount of information, including the parameters of base stations and network subscribers. Storage and processing of this information should be carried out using a radio environment map, a spatio-temporal database of all activity in the network that allows one to determine the frequencies available for use at a given time. The paper presents a two-level (local and global) model for building the radio environment map of an LTE cellular communication system, described by the following parameters: the set of frequencies, signal attenuation, the signal propagation map, the grid step, and the current time count. The key objects of the model are the base station and the subscriber device. The main parameters of the base station are its name, identifier, cell coordinates, frequency range number, radiated power, the numbers of connected subscriber devices, and the allocated resource blocks. For subscriber devices, the following parameters are used: name, identifier, location, current coordinates of the device cell, base station identifier, frequency range, numbers of resource blocks for communication with the station, radiated power, data transmission status, the list of numbers of the nearest stations, and the schedules of device movement and communication sessions. An algorithm for implementing the model is presented that takes into account the scenarios of movement and communication sessions of subscriber devices. A method is given for calculating the radio environment map at a point of the coordinate grid, taking into account losses during the propagation of radio signals from emitting devices. The software implementation of the model is performed in the MatLab package, and approaches that increase its execution speed are described. In the simulation, the parameters were chosen taking into account data from existing communication systems and the need to economize computing resources. Experimental results of the algorithm for building a radio environment map are demonstrated, confirming the correctness of the developed model.
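The sketch below illustrates, under simplified assumptions, how a radio environment map can be computed on a coordinate grid from the positions and transmit powers of emitting stations using a log-distance path-loss model (in Python rather than the article's MatLab). The station coordinates, powers and path-loss exponent are invented for the example, and the article's own propagation model may differ.

```python
# Toy radio environment map: strongest received signal level at every grid point.
import numpy as np

GRID = 200                      # grid points per side
STEP = 10.0                     # grid step, m
PL_EXP = 3.5                    # path-loss exponent (urban-like assumption)
stations = [                    # ((x, y) in metres, transmit power in dBm)
    ((500.0, 500.0), 43.0),
    ((1500.0, 1200.0), 43.0),
]

xs = np.arange(GRID) * STEP
X, Y = np.meshgrid(xs, xs)

def received_power_dbm(tx_xy, tx_dbm):
    d = np.hypot(X - tx_xy[0], Y - tx_xy[1])
    d = np.maximum(d, 1.0)                       # avoid log(0) at the station cell
    path_loss = 10.0 * PL_EXP * np.log10(d)      # simple log-distance model
    return tx_dbm - path_loss

rem = np.full((GRID, GRID), -np.inf)             # radio environment map
for xy, p in stations:
    rem = np.maximum(rem, received_power_dbm(xy, p))

print("signal level range on the map: %.1f .. %.1f dBm" % (rem.min(), rem.max()))
```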
- The structure of site percolation models on three-dimensional square lattices
Computer Research and Modeling, 2013, v. 5, no. 4, pp. 607-622. Views (last year): 8. Citations: 5 (RSCI).
In this paper we consider the structure of site percolation models on three-dimensional square lattices with various shapes of the (1,π)-neighborhood. For these models, iso- and anisotropic modifications of the invasion percolation algorithm with (1,0)- and (1,π)-neighborhoods are proposed. All of these algorithms are special cases of the anisotropic invasion percolation algorithm on an n-dimensional lattice with a (1,π)-neighborhood. This algorithm forms the basis of the SPSL package, released under the GNU GPL-3 license and written in the free programming language R.
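A rough Python illustration of isotropic invasion percolation with a (1,0)-neighborhood on a three-dimensional square lattice is given below; the SPSL package implements the algorithms described in the article in R, so this sketch only conveys the general mechanism (grow a cluster by always invading the boundary site with the smallest random threshold). The lattice size, number of steps and seed are arbitrary.

```python
# Toy isotropic invasion percolation on a 3D square lattice, (1,0)-neighborhood.
import heapq
import random

def invasion_percolation(size=32, n_steps=2000, seed=0):
    random.seed(seed)
    start = (size // 2, size // 2, size // 2)
    cluster = {start}
    thresholds = {}                                  # fixed random strength of every site
    frontier = []                                    # min-heap of (threshold, site)

    def push_neighbors(site):
        x, y, z = site
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nb = (x + dx, y + dy, z + dz)
            if all(0 <= c < size for c in nb) and nb not in cluster:
                if nb not in thresholds:
                    thresholds[nb] = random.random()
                heapq.heappush(frontier, (thresholds[nb], nb))

    push_neighbors(start)
    for _ in range(n_steps):
        while frontier:
            _, site = heapq.heappop(frontier)        # invade the weakest boundary site
            if site not in cluster:
                cluster.add(site)
                push_neighbors(site)
                break
        else:
            break                                    # frontier exhausted
    return cluster

print("invaded cluster size:", len(invasion_percolation()))
```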
- Analysis of the rate of electron transport through photosynthetic cytochrome b6f complex
Computer Research and Modeling, 2024, v. 16, no. 4, pp. 997-1022.
We consider an approach based on linear algebra methods to analyze the rate of electron transport through the cytochrome b6f complex. In the proposed approach, the dependence of the quasi-stationary electron flux through the complex on the degree of reduction of the pools of mobile electron carriers is treated as a response function characterizing the process. We have developed software in the Python programming language that constructs the master equation for the complex from the scheme of elementary reactions and calculates quasi-stationary electron transport rates through the complex, as well as the dynamics of their changes during the transition process. The calculations are performed in multithreaded mode, which makes it possible to use the resources of modern computing systems efficiently and to obtain data on the functioning of the complex over a wide range of parameters in a relatively short time. The proposed approach can easily be adapted to analyze electron transport in other components of the photosynthetic and respiratory electron-transport chains, as well as other processes in multienzyme complexes containing several reaction centers. Cryo-electron microscopy and redox titration data were used to parameterize the model of the cytochrome b6f complex. We obtained the dependences of the quasi-stationary rate of plastocyanin reduction and plastoquinone oxidation on the degree of reduction of the pools of mobile electron carriers and analyzed the dynamics of rate changes in response to changes in the redox state of the plastoquinone pool. The modeling results are in good agreement with the available experimental data.
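The linear-algebra idea can be illustrated on a toy scheme: build the master-equation rate matrix of a carrier, solve for its stationary distribution, and read off the quasi-stationary flux as a function of the donor-pool reduction level. The two-state scheme and rate constants below are assumptions made for the sketch, not the cytochrome b6f reaction scheme used in the article.

```python
# Quasi-stationary flux of a toy two-state electron carrier via the master equation.
import numpy as np

K_IN, K_OUT = 200.0, 150.0                       # s^-1, assumed rate constants

def quasi_stationary_flux(donor_reduced_fraction):
    k_in = K_IN * donor_reduced_fraction         # reduction of the carrier by the donor pool
    k_out = K_OUT                                # oxidation of the carrier by the acceptor
    # States: 0 = oxidized carrier, 1 = reduced carrier; dp/dt = K @ p.
    K = np.array([[-k_in,  k_out],
                  [ k_in, -k_out]])
    # Stationary distribution: K p = 0 with the normalization sum(p) = 1.
    A = np.vstack([K, np.ones(2)])
    b = np.array([0.0, 0.0, 1.0])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k_out * p[1]                          # electrons transferred per second

for f in (0.1, 0.5, 0.9):
    print(f"donor pool {f:.0%} reduced -> flux {quasi_stationary_flux(f):.1f} e-/s")
```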
- Modeling the kinetics of radiopharmaceuticals with iodine isotopes in nuclear medicine problems
Computer Research and Modeling, 2020, v. 12, no. 4, pp. 883-905.
Radiopharmaceuticals with iodine radioisotopes are now widely used in imaging and non-imaging methods of nuclear medicine. When evaluating the results of radionuclide studies of the structural and functional state of organs and tissues, parallel modeling of the kinetics of radiopharmaceuticals in the body plays an important role. The complexity of such modeling lies between two opposite extremes. On the one hand, excessive simplification of the anatomical and physiological characteristics of the organism when splitting it into compartments may result in the loss or distortion of information important for clinical diagnosis; on the other hand, accounting for all possible interdependencies in the functioning of organs and systems will, on the contrary, produce an excess of data that is useless for clinical interpretation, or make the mathematical model intractable. Our work develops a unified approach to constructing mathematical models of the kinetics of radiopharmaceuticals with iodine isotopes in the human body during diagnostic and therapeutic procedures of nuclear medicine. Based on this approach, three- and four-compartment pharmacokinetic models were developed, and the corresponding calculation programs were written in the C++ programming language for processing and evaluating the results of radionuclide diagnostics and therapy. Various methods for identifying model parameters from quantitative data of radionuclide studies of the functional state of vital organs are proposed. The results of pharmacokinetic modeling for radionuclide diagnostics of the liver, kidneys, and thyroid using iodine-containing radiopharmaceuticals are presented and analyzed. Using clinical and diagnostic data, individual pharmacokinetic parameters of the transport of different radiopharmaceuticals in the body (transport constants, half-life periods, maximum activity in the organ and the time of its achievement) were determined. It is shown that the pharmacokinetic characteristics of each patient are strictly individual and cannot be described by averaged kinetic parameters. Within the framework of the three pharmacokinetic models, “activity–time” relationships were obtained and analyzed for different organs and tissues, including tissues in which the activity of a radiopharmaceutical is impossible or difficult to measure by clinical methods. The features and results of simulation and dosimetric planning of radioiodine therapy of the thyroid gland are also discussed. It is shown that the values of absorbed radiation doses are very sensitive to the kinetic parameters of the compartment model; therefore, special attention should be paid to obtaining accurate quantitative data from ultrasound and thyroid radiometry and to identifying the model parameters from them. The work is based on the principles and methods of pharmacokinetics. For the numerical solution of the systems of differential equations of the pharmacokinetic models, Runge–Kutta methods and the Rosenbrock method were used. The Hooke–Jeeves method was used to find the minimum of a function of several variables when identifying the modeling parameters.
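A minimal sketch of a compartment model of this kind is shown below (in Python with SciPy, whereas the article's programs are written in C++): a three-compartment scheme blood → organ → excretion with radioactive decay is integrated numerically, and the time of maximum organ activity is read off. The transfer constants, the administered activity and the compartment layout are illustrative assumptions; the article identifies individual constants from clinical data and also uses stiff solvers (the Rosenbrock method) where needed.

```python
# Toy three-compartment kinetic model of an iodine radiopharmaceutical (activities in MBq).
import numpy as np
from scipy.integrate import solve_ivp

K12, K23, LAMBDA = 0.8, 0.3, np.log(2) / 13.2      # 1/h; LAMBDA = physical decay of I-123

def rhs(t, a):
    a1, a2, a3 = a                                  # activity in blood, organ, excreted pool
    return [
        -(K12 + LAMBDA) * a1,
        K12 * a1 - (K23 + LAMBDA) * a2,
        K23 * a2 - LAMBDA * a3,
    ]

sol = solve_ivp(rhs, (0.0, 48.0), [100.0, 0.0, 0.0], dense_output=True)   # 48 h, A0 = 100 MBq
t = np.linspace(0.0, 48.0, 481)
organ = sol.sol(t)[1]
print("maximum organ activity %.1f MBq at t = %.1f h" % (organ.max(), t[organ.argmax()]))
```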
- A survey on the application of large language models in software engineering
Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1715-1726.
Large Language Models (LLMs) are transforming software engineering by bridging the gap between natural language and programming languages. These models have revolutionized communication within development teams and the Software Development Life Cycle (SDLC) by enabling developers to interact with code using natural language, thereby improving workflow efficiency. This survey examines the impact of LLMs across various stages of the SDLC, including requirement gathering, system design, coding, debugging, testing, and documentation. LLMs have proven to be particularly useful in automating repetitive tasks such as code generation, refactoring, and bug detection, thus reducing manual effort and accelerating the development process. The integration of LLMs into the development process offers several advantages, including the automation of error correction, enhanced collaboration, and the ability to generate high-quality, functional code based on natural language input. Additionally, LLMs assist developers in understanding and implementing complex software requirements and design patterns. This paper also discusses the evolution of LLMs from simple code completion tools to sophisticated models capable of performing high-level software engineering tasks. However, despite their benefits, there are challenges associated with LLM adoption, such as issues related to model accuracy, interpretability, and potential biases. These limitations must be addressed to ensure the reliable deployment of LLMs in production environments. The paper concludes by identifying key areas for future research, including improving the adaptability of LLMs to specific software domains, enhancing their contextual understanding, and refining their capabilities to generate semantically accurate and efficient code. This survey provides valuable insights into the evolving role of LLMs in software engineering, offering a foundation for further exploration and practical implementation.