-
Comparative analysis of human adaptation to the growth of visual information in the tasks of recognizing formal symbols and meaningful images
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 571-586
We describe an engineering-psychological experiment that continues the study of how people adapt to the increasing complexity of logical problems. Subjects are presented with a series of problems of increasing complexity, where complexity is determined by the volume of the initial data. The tasks require calculations in an associative or a non-associative system of operations. From the way the solution time changes with the number of required operations, we can conclude whether a person solves the problem by a purely sequential method or engages additional brain resources to work in parallel. In a previously published experiment, the subject, while solving the associative problem, recognized color pictures of meaningful images. In the new study, a similar problem is solved for abstract monochrome geometric shapes. Analysis of the results showed that in the second case the probability that a subject switches to parallel processing of visual information is significantly reduced. The research method is based on presenting a person with two types of tasks. One type involves associative calculations and admits a parallel solution algorithm. The other type is a control, containing problems whose calculations are not associative and for which parallel algorithms are ineffective. The task of recognizing and searching for a given object is associative, and a parallel strategy significantly speeds up the solution at a relatively small additional cost. As the control series (to separate parallel work from mere acceleration of a sequential algorithm) we use, as in the previous experiment, a non-associative comparison problem in cyclic arithmetic, presented visually as the game “rock, paper, scissors”. In this problem a parallel algorithm requires a large number of processors with a low efficiency coefficient, so a person is practically unable to switch to a parallel algorithm, and faster handling of the input information is possible only by increasing the processing speed. Comparing the dependence of solution time on the volume of source data for the two types of problems allows us to identify four strategies of adaptation to growing problem complexity: uniform sequential, accelerated sequential, parallel computing (where possible), or a strategy undefined for this method. The reduction in the number of subjects who switch to a parallel strategy when the input information is encoded with formal images demonstrates the effectiveness of codes that evoke associations in the subject: such codes increase the speed of human perception and processing of information. The article contains a preliminary mathematical model that explains this phenomenon; it is based on the appearance of a second set of initial data that arises when a person recognizes the depicted objects.
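As a rough illustration of how such adaptation strategies might be distinguished from measured data, the sketch below (Python, not part of the article) compares linear and logarithmic fits of solution time versus the amount of input data; the model forms and the sample numbers are assumptions for illustration only.

```python
# A minimal, illustrative sketch (not from the article): distinguishing a
# sequential strategy (solution time roughly linear in the amount of data n)
# from a parallel one (roughly logarithmic growth) by comparing least-squares
# fit quality. The model forms and the sample data are assumptions only.
import numpy as np

def _sse(design, t):
    """Sum of squared residuals of the least-squares fit t ~ design @ coef."""
    coef, *_ = np.linalg.lstsq(design, t, rcond=None)
    return float(np.sum((design @ coef - t) ** 2))

def classify_strategy(n, t):
    """n: task sizes, t: mean solution times for correctly solved tasks."""
    n = np.asarray(n, dtype=float)
    t = np.asarray(t, dtype=float)
    linear = np.column_stack([np.ones_like(n), n])               # t ~ a + b*n
    logarithmic = np.column_stack([np.ones_like(n), np.log(n)])  # t ~ a + b*log(n)
    return ("sequential (time ~ n)" if _sse(linear, t) <= _sse(logarithmic, t)
            else "parallel (time ~ log n)")

# hypothetical measurements: time grows roughly linearly -> sequential strategy
print(classify_strategy([4, 8, 12, 16, 20], [2.1, 4.0, 6.2, 7.9, 10.1]))
```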
-
The dynamic model of a high-rise firefighting drone
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 115-126
The use of unmanned aerial vehicles (UAVs) in high-rise firefighting operations is an effective way to reach a fire on upper floors quickly. The article proposes a quadrotor-type firefighting UAV model carrying a launcher that fires a missile containing fire-extinguishing powder into the fire. The kinematic model describing the flight of this UAV is built with the Newton-Euler method, both for normal motion and at the moment of launching the firefighting missile. Simulations testing the validity of the kinematic model and the motion of the UAV show that the variation of the Euler angles, flight angles, and aerodynamic angles during flight remains within an acceptable range and that the overload stays within permissible limits. The UAV flew to the correct position to launch the required fire-extinguishing ammunition. The results of this research form the basis for building a control system for high-rise firefighting drones in Vietnam.
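For context, a generic Newton-Euler rigid-body formulation of quadrotor dynamics, of the kind such models build on, is sketched below; the notation and the treatment of the missile launch are common conventions and assumptions, not the article's exact equations.

```latex
% A generic Newton-Euler rigid-body model of a quadrotor, given only as an
% illustration of the standard formulation (not the article's exact equations).
% m: mass, J: inertia tensor, R: body-to-world rotation, T_i: rotor thrusts,
% \boldsymbol{\tau}: body torques, \boldsymbol{\omega}: body angular velocity.
\[
\begin{aligned}
m\,\ddot{\mathbf{r}} &= m\,\mathbf{g} + R \begin{pmatrix} 0 \\ 0 \\ \sum_{i=1}^{4} T_i \end{pmatrix},\\
J\,\dot{\boldsymbol{\omega}} &= \boldsymbol{\tau} - \boldsymbol{\omega}\times\left(J\,\boldsymbol{\omega}\right),\\
\dot{R} &= R\,\widehat{\boldsymbol{\omega}},
\end{aligned}
\]
% where \widehat{\boldsymbol{\omega}} is the skew-symmetric matrix of
% \boldsymbol{\omega}; launching the missile can be treated as an impulsive
% change of mass plus a reaction force and torque added to the right-hand sides.
```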
-
Molecular dynamics assessment of the mechanical properties of fibrillar actin
Computer Research and Modeling, 2022, v. 14, no. 5, pp. 1081-1092
Actin is a conserved structural protein that is expressed in all eukaryotic cells. When polymerized, it forms long filaments of fibrillar actin, or F-actin, which are involved in the formation of the cytoskeleton, in muscle contraction and its regulation, and in many other processes. The dynamic and mechanical properties of actin are important for its interaction with other proteins and for the realization of its numerous functions in the cell. We performed 204.8 ns long molecular dynamics (MD) simulations of an actin filament segment consisting of 24 monomers, in the absence and in the presence of MgADP, at 300 K with explicit solvent and physiological ionic strength, using the AMBER99SB-ILDN and CHARMM36 force fields in the GROMACS software environment, with modern structural models obtained by high-resolution cryo-electron microscopy as the initial structure. The MD calculations showed that a stationary regime of fluctuations of the long F-actin segment develops 80–100 ns after the start of the MD trajectory. Based on the MD results, the main parameters of the actin helix and its bending, longitudinal, and torsional stiffness were estimated using a section of the computational model sufficiently far from its ends. The estimated axial (2.72–2.75 nm) and angular (165–168°) translation per subunit of the F-actin helix, and its bending (2.8–4.7·10⁻²⁶ N·m²), longitudinal (36–47·10⁻⁹ N), and torsional (2.6–3.1·10⁻²⁶ N·m²) stiffness, are in good agreement with the most reliable experimental results. The MD calculations show that modern structural models of F-actin make it possible to describe its dynamics and mechanical properties accurately, provided that the computational models contain a sufficiently large number of monomers and that modern force fields and sufficiently long MD trajectories are used. Including actin partner proteins, in particular tropomyosin and troponin, in the MD model can help to understand the molecular mechanisms of such important processes as the regulation of muscle contraction.
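As an illustration of one common way such stiffnesses are extracted from MD trajectories (not necessarily the exact procedure used in the article), the sketch below applies the equipartition theorem to the variances of bending, twist, and length fluctuations of a filament segment; all variable names and sample values are hypothetical.

```python
# Illustrative sketch (not the article's procedure): estimating the bending,
# torsional and longitudinal stiffness of a filament segment from the variance
# of its thermal fluctuations via the equipartition theorem. The input arrays
# and the segment length below are hypothetical placeholders.
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0          # temperature, K

def stiffness_from_fluctuations(bend_angles, twist_angles, lengths, L0):
    """bend_angles, twist_angles in radians; lengths and L0 in meters."""
    kappa = KB * T * L0 / np.var(bend_angles)    # bending stiffness, N*m^2
    C = KB * T * L0 / np.var(twist_angles)       # torsional stiffness, N*m^2
    K = KB * T * L0 / np.var(lengths)            # longitudinal stiffness, N
    return kappa, C, K

# hypothetical example: fluctuations of a ~65 nm segment (24 subunits)
rng = np.random.default_rng(0)
L0 = 65e-9
kappa, C, K = stiffness_from_fluctuations(
    rng.normal(0.0, 0.08, 2000),     # bending angle samples, rad
    rng.normal(0.0, 0.09, 2000),     # twist angle samples, rad
    rng.normal(L0, 0.08e-9, 2000),   # instantaneous segment lengths, m
    L0,
)
print(f"kappa ~ {kappa:.2e} N*m^2, C ~ {C:.2e} N*m^2, K ~ {K:.2e} N")
```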
-
Cloud interpretation of the entropy model for calculating the trip matrix
Computer Research and Modeling, 2024, v. 16, no. 1, pp. 89-103
As the population of cities grows, the need to plan the development of transport infrastructure becomes more acute, and transport modelling packages are created for this purpose. These packages usually contain a set of convex optimization problems whose iterative solution leads to the desired equilibrium distribution of flows along the paths. One direction in the development of transport modelling is the construction of more accurate generalized models that take into account different types of passengers, their travel purposes, and the specifics of the personal and public modes of transport available to agents. Another important direction is improving the efficiency of the calculations: because of the large dimension of modern transport networks, the numerical search for the equilibrium distribution of flows along the paths is quite expensive, and the iterative nature of the entire solution process only makes this worse. One approach that reduces the amount of computation is the construction of consistent models that combine the blocks of the four-stage model into a single optimization problem. This eliminates the iterative running of the blocks, replacing the separate optimization problem at each stage with one general problem. Earlier work has shown that such approaches provide equivalent solutions; however, the validity and interpretability of these methods deserve attention. The purpose of this article is to substantiate a single problem that combines both the calculation of the trip matrix and the modal choice for the generalized case in which the transport network contains different layers of demand, types of agents, and classes of vehicles. The article provides possible interpretations of the gauge parameters used in the problem, as well as of the dual factors associated with the balance constraints. The authors also show the possibility of combining the considered problem with the network loading block into a single optimization problem.
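For background, the classical entropy model of trip-matrix calculation that this line of work generalizes can be written as follows; the notation is a common convention and is not taken from the article.

```latex
% The classical entropy model of trip-matrix calculation (Wilson's model), shown
% only as background; the notation is a common convention, not the article's.
% d_{ij}: trips from origin i to destination j, L_i and W_j: origin/destination
% totals, T_{ij}: generalized travel cost, \gamma: cost-sensitivity parameter.
\[
\begin{aligned}
&\max_{d_{ij}\ge 0}\; -\sum_{i,j} d_{ij}\ln d_{ij} \;-\; \gamma \sum_{i,j} d_{ij} T_{ij}\\
&\text{s.t.}\quad \sum_{j} d_{ij} = L_i, \qquad \sum_{i} d_{ij} = W_j,
\end{aligned}
\qquad\Longrightarrow\qquad
d_{ij} = \exp\bigl(-1 - \lambda_i - \mu_j - \gamma T_{ij}\bigr),
\]
% where \lambda_i and \mu_j are the dual multipliers of the balance constraints
% (the "dual factors" mentioned above).
```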
-
Numerical simulation of electromagnetic properties of the saturated rock media with surface conductivity effects
Computer Research and Modeling, 2015, v. 7, no. 5, pp. 1081-1088
A new numerical simulation technique for calculating the electrical properties of rocks with two-phase “oil–water” saturation is proposed. The technique takes into account the surface conductivity of the electrical double layers at the contact between the solid rock and the aqueous solution inside the pore space. It is based on computing the electrical potential distribution in a high-resolution three-dimensional digital model of the porous medium. The digital model incorporates the spatial geometry of the pore channels and contains bulk and surface grid cells. Numerical simulation results demonstrate the importance of surface conductivity effects.
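To indicate the kind of computation involved, below is a heavily simplified Python sketch (not the authors' technique) that relaxes the potential equation on a voxel grid and adds an extra conductivity term to water voxels adjacent to solid, as a crude stand-in for surface conduction; the grid and all conductivity values are hypothetical.

```python
# Illustrative sketch (not the article's code): effective conductivity of a
# voxelized pore-space model from relaxation of div(sigma * grad(phi)) = 0.
# Water voxels touching a non-water voxel get an extra "surface" conductivity,
# a crude stand-in for double-layer conduction. All values are hypothetical.
import numpy as np

def effective_conductivity(is_water, sigma_w=5.0, sigma_surf=1.0, n_iter=2000):
    """is_water: 3D boolean array (True = brine-filled voxel, False = solid/oil)."""
    nx, ny, nz = is_water.shape
    sigma = np.where(is_water, sigma_w, 1e-8)          # near-insulating solid/oil
    solid = ~is_water
    touches_solid = np.zeros_like(is_water)
    for ax in range(3):
        touches_solid |= np.roll(solid, 1, axis=ax) | np.roll(solid, -1, axis=ax)
    sigma += np.where(is_water & touches_solid, sigma_surf, 0.0)

    phi = np.linspace(1.0, 0.0, nx)[:, None, None] * np.ones((nx, ny, nz))
    for _ in range(n_iter):                            # Jacobi relaxation
        num = np.zeros_like(phi)
        den = np.zeros_like(phi)
        for ax in range(3):
            for sh in (1, -1):
                s_nb = np.roll(sigma, sh, axis=ax)
                p_nb = np.roll(phi, sh, axis=ax)
                g = 2.0 * sigma * s_nb / (sigma + s_nb)  # harmonic-mean face conductance
                num += g * p_nb
                den += g
        phi = num / den
        phi[0, :, :] = 1.0                             # fixed inlet/outlet potentials
        phi[-1, :, :] = 0.0

    g01 = 2.0 * sigma[0] * sigma[1] / (sigma[0] + sigma[1])
    current = np.sum(g01 * (phi[0] - phi[1]))          # current through the inlet faces
    return current * (nx - 1) / (ny * nz)              # effective conductivity (cell units)

# tiny synthetic sample: a straight water channel through solid
pores = np.zeros((20, 10, 10), dtype=bool)
pores[:, 3:7, 3:7] = True
print(effective_conductivity(pores))
```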
-
Languages in China provinces: quantitative estimation with incomplete data
Computer Research and Modeling, 2016, v. 8, no. 4, pp. 707-716
This paper formulates and solves a practical data-recovery problem concerning the distribution of languages at the regional level in China. The recovery is needed to determine linguistic diversity indices, which in turn are used to analyze empirically and predict sources of social and economic development and to indicate potential conflicts at the regional level. We use the Ethnologue database and the China census as the initial data sources. For every language spoken in China, the data contain (a) an estimate of the number of China residents who claim this language as their mother tongue, and (b) indicators of the presence of such residents in China's provinces. For each language/province pair, we aim to estimate the number of the province's inhabitants who claim the language as their mother tongue. This base problem reduces to solving an underdetermined system of algebraic equations. Because the Ethnologue database contains data collected at different moments in time, owing to gaps in Ethnologue language surveys and the cost of data collection, we relate these data to a single time moment, which turns the initial task into an ill-posed system of algebraic equations with an imprecisely determined right-hand side. We therefore look for an approximate solution characterized by a minimal discrepancy of the system. Since some languages are much less widespread than others, we minimize a weighted discrepancy, with weights inverse to the right-hand-side elements of the equations. This definition of the discrepancy allows the required variables to be recovered. More than 92% of the recovered variables are robust to a probabilistic modelling procedure for potential errors in the initial data.
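A minimal sketch of the weighted minimal-discrepancy recovery described above might look as follows in Python; the toy matrix, the right-hand side, and the use of SciPy's lsq_linear with nonnegativity bounds are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch (not the authors' code): recovering province-level
# speaker counts x >= 0 from an underdetermined system A x = b by minimizing
# a weighted discrepancy with weights 1/b_i, as described above. The matrix
# and right-hand side here are hypothetical toy data.
import numpy as np
from scipy.optimize import lsq_linear

# toy example: 3 equations (language totals), 5 unknowns (language-province cells)
A = np.array([
    [1.0, 1.0, 0.0, 0.0, 0.0],   # language 1 is present in provinces 1 and 2
    [0.0, 0.0, 1.0, 1.0, 0.0],   # language 2 in provinces 3 and 4
    [0.0, 0.0, 0.0, 0.0, 1.0],   # language 3 in province 5 only
])
b = np.array([1.2e6, 3.0e5, 4.0e4])     # country-wide mother-tongue estimates

W = np.diag(1.0 / b)                     # weights inverse to the right-hand side
res = lsq_linear(W @ A, W @ b, bounds=(0.0, np.inf))  # nonnegative weighted LS
print(res.x)
```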
-
Application of the streamline method for nonlinear filtration problems acceleration
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 709-728
The paper presents a numerical simulation of nonisothermal nonlinear flow in a porous medium. A two-dimensional unsteady problem of the flow of heavy oil, water, and steam is considered. The oil phase consists of two pseudo-components, light and heavy fractions, which, like the water component, can vaporize. The oil exhibits viscoplastic rheology, and its filtration does not obey Darcy's classical linear law. The simulation accounts not only for the dependence of fluid density and viscosity on temperature, but also for the improvement of the oil's rheological properties as the temperature increases.
To solve this problem numerically, we use the streamline method with splitting by physical processes, which separates convective heat transfer directed along the flow from thermal conduction and gravity. The article proposes a new approach to applying streamline methods that makes it possible to simulate nonlinear flow problems with temperature-dependent rheology correctly. The core of the algorithm is to treat the integration process as a sequence of quasi-equilibrium states obtained by solving the system on a global grid; between these states, the system is solved on a streamline grid. Using the streamline method not only accelerates the calculations but also yields a physically reliable solution, since the integration takes place on a grid aligned with the direction of fluid flow.
In addition to the streamline method, the paper presents an algorithm for handling the nonsmooth coefficients that arise when simulating viscoplastic oil flow. This algorithm makes it possible to keep sufficiently large time steps without changing the physical structure of the solution.
The obtained results are compared with known analytical solutions, as well as with the results of a commercial simulation package. Convergence tests with respect to the number of streamlines and to different streamline grids justify the applicability of the proposed algorithm. In addition, the reduction in calculation time compared with traditional methods demonstrates the practical significance of the approach.
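As a toy illustration of the splitting idea (not the authors' algorithm), the runnable Python sketch below alternates an upwind convection step along a single one-dimensional "streamline" with a conduction step on the same grid; all parameters are hypothetical.

```python
# A minimal, runnable sketch of the splitting idea described above (not the
# authors' code): temperature is first advected along a 1D "streamline" with
# an upwind scheme, then a conduction step is applied on the same grid.
import numpy as np

def advect_upwind(T, u, dx, dt):
    """One explicit upwind step of dT/dt + u dT/dx = 0 (u > 0 assumed)."""
    Tn = T.copy()
    Tn[1:] = T[1:] - u * dt / dx * (T[1:] - T[:-1])
    return Tn

def conduct(T, alpha, dx, dt):
    """One explicit step of dT/dt = alpha d2T/dx2 (end cells left unchanged)."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return Tn

# hypothetical parameters: hot water injected at the left end of a streamline
nx, dx = 200, 1.0
u, alpha = 0.5, 0.1
dt = 0.5 * min(dx / u, dx**2 / (2 * alpha))   # stability for both sub-steps
T = np.full(nx, 20.0)
T[0] = 250.0                                  # injection temperature

for _ in range(300):
    T = advect_upwind(T, u, dx, dt)           # convection along the streamline
    T = conduct(T, alpha, dx, dt)             # then "transverse" conduction
    T[0] = 250.0                              # keep the boundary condition

print(f"thermal front position ~ {np.argmax(T < 100)} cells")
```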
-
Experimental identification of the organization of mental calculations of the person on the basis of algebras of different associativity
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 311-327
The work continues research into a person's ability to improve the productivity of information processing by working in parallel or by increasing the performance of the analyzers. A person receives a series of tasks whose solution requires processing a certain amount of information; the time and the correctness of each decision are recorded. The dependence of the average solution time on the amount of information in the problem is determined from the correctly solved problems. In accordance with the proposed method, the problems involve calculating expressions in two algebras, one of which is associative and the other non-associative. To make the subjects' work easier, figurative graphic images of the algebra elements were used in the experiment. The non-associative calculations were implemented as the game “rock-paper-scissors”: it was necessary to determine the winning symbol in a long line of these figures, given that they appear sequentially from left to right and each plays against the previous winning symbol. The associative calculations were based on the recognition of drawings from a finite set of simple images: it was necessary to determine which picture from this set was missing from the line, or to state that all the pictures were present; in each problem at most one picture was missing. Computation in the associative algebra admits parallel counting, whereas in the absence of associativity only sequential computations are possible. Therefore, analysis of the time taken to solve a series of problems distinguishes uniform sequential, accelerated sequential, and parallel computing strategies. In the experiments it was found that all subjects used a uniform sequential strategy to solve the non-associative problems. For the associative task, all subjects used parallel computing, and some of them additionally accelerated their computations as the complexity of the task grew. Judging by the evolution of the solution time, a small proportion of the subjects supplemented the parallel computation at high complexity with a sequential stage of calculations (possibly to check the solution). We developed a special method for assessing the rate at which a person processes input information; it allowed us to estimate the level of parallelism of the computation in the associative task, and parallelism levels of two to three were registered. The characteristic speed of information processing in the sequential case (about one and a half symbols per second) is half the typical speed of human image recognition; apparently, the difference corresponds to the time actually spent on the calculation itself. For the associative problem with the minimum amount of information, the solution time is close to that of the non-associative case, differing by less than a factor of two. This is probably because, for a small number of symbols, recognition almost exhausts the calculations in the non-associative problem used.
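The contrast between the two task types can be illustrated with a short Python sketch (not the article's experimental materials): the rock-paper-scissors "winner of a line" is a strictly left-to-right fold of a non-associative operation, while finding the missing picture is an associative reduction that can be regrouped and parallelized.

```python
# Illustrative sketch (not the article's materials) of the two task types.
# The "winner of a line" in rock-paper-scissors is a left-to-right fold of a
# NON-associative operation, so it cannot be regrouped and must be computed
# sequentially; finding the missing picture is an associative reduction
# (a set union), so parts of the line can be processed in parallel.
from functools import reduce

BEATS = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

def play(a, b):
    """Winner of one round; 'a' is the current winner (ties keep it, an assumption)."""
    return a if (a, b) in BEATS or a == b else b

def line_winner(symbols):
    return reduce(play, symbols)          # strictly left-to-right

def missing_picture(line, full_set):
    """Associative: the set union over any grouping of the line gives the same answer."""
    missing = set(full_set) - set(line)   # order and grouping do not matter
    return missing.pop() if missing else None

line = ["rock", "paper", "paper", "scissors", "rock"]
print(line_winner(line))                                                          # -> rock
print(missing_picture(["sun", "tree", "fish"], ["sun", "tree", "fish", "star"]))  # -> star
```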
-
Migration processes modelling: methods and tools (overview)
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1205-1232
Migration has a significant impact on the demographic structure of a territory's population and on the state of regional and local labour markets. As a rule, a rapid change in the working-age population of a territory due to migration results in an imbalance between supply and demand on the labour market and a change in the demographic structure of the population. Migration is also, to a large extent, a reflection of the socio-economic processes taking place in society. Hence, questions related to the study of migration factors, the direction, intensity and structure of migration flows, and the prediction of their magnitude are becoming topical.
Mathematical tools are often used to analyze and predict migration processes and to assess their consequences, allowing migration to be modelled quite accurately for different territories on the basis of the available statistical data. In recent years, a considerable number of scientific papers on modelling internal and external migration flows with mathematical methods have appeared both in Russia and abroad. Consequently, there is a need to systematize the most commonly used methods and tools of migration modelling in order to form a coherent picture of the main trends and research directions in this field.
The presented review considers the main approaches to migration modelling and the main components of the migration modelling methodology, i. e. stages, methods, models, and model classification. A comparative analysis of these components is carried out, and general recommendations on the choice of mathematical tools for modelling are given. The review contains two sections: migration modelling methods and migration models. The first section describes the main methods used in model development: econometric methods, cellular automata, system dynamics, probabilistic methods, balance methods, optimization, and cluster analysis. Based on an analysis of recent Russian and foreign publications on migration, the most common classes of models (regression, agent-based, simulation, optimization, probabilistic, balance, dynamic, and combined) are identified and described. The features, advantages, and disadvantages of the different types of migration process models are considered.
-
Model for building of the radio environment map for cognitive communication system based on LTE
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 127-146
The paper is devoted to the secondary use of spectrum in telecommunication networks. One solution to this problem is the use of cognitive radio technologies and dynamic spectrum access, whose successful functioning requires a large amount of information, including the parameters of base stations and network subscribers. This information should be stored and processed in a radio environment map, a spatio-temporal database of all activity in the network that makes it possible to determine the frequencies available for use at a given time. The paper presents a two-level model for forming the radio environment map of an LTE cellular communication system, with local and global levels, described by the following parameters: the set of frequencies, signal attenuation, the signal propagation map, the grid step, and the current time count. The key objects of the model are the base station and the subscriber device. The main parameters of a base station include its name, identifier, cell coordinates, frequency range number, radiated power, the numbers of connected subscriber devices, and the allocated resource blocks. For subscriber devices, the following parameters are used: name, identifier, location, current coordinates of the device's cell, base station identifier, frequency range, numbers of the resource blocks used for communication with the station, radiated power, data transmission status, a list of the numbers of the nearest stations, and schedules of device movement and communication sessions. An algorithm implementing the model is presented, taking into account the movement scenarios and communication sessions of the subscriber devices. A method is given for calculating the radio environment map at a point of the coordinate grid, taking into account the losses during the propagation of radio signals from emitting devices. The software implementation of the model is performed in MATLAB, and approaches that increase its execution speed are described. In the simulation, the parameters were chosen taking into account data from existing communication systems and the need to economize computing resources. Experimental results of the algorithm for forming a radio environment map are presented, confirming the correctness of the developed model.
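To illustrate the kind of path-loss calculation involved, below is a small Python sketch (the article's implementation is in MATLAB) that computes the strongest received power on a coordinate grid from several base stations using a log-distance path-loss model; the station positions, powers, and path-loss exponent are hypothetical.

```python
# Illustrative sketch in Python (the article uses MATLAB): received power on a
# coordinate grid from several base stations with a log-distance path-loss
# model. Station positions, powers and the exponent are hypothetical values.
import numpy as np

def radio_map(stations, grid_x, grid_y, n=3.5, pl_d0=40.0, d0=1.0):
    """stations: list of (x, y, tx_power_dBm); returns the strongest received power (dBm) per cell."""
    X, Y = np.meshgrid(grid_x, grid_y, indexing="ij")
    best = np.full(X.shape, -np.inf)
    for sx, sy, p_tx in stations:
        d = np.maximum(np.hypot(X - sx, Y - sy), d0)
        path_loss = pl_d0 + 10.0 * n * np.log10(d / d0)   # log-distance model, dB
        best = np.maximum(best, p_tx - path_loss)
        # a fuller map would also track which station and frequency win in each cell
    return best

grid = np.arange(0.0, 1000.0, 10.0)   # 10 m grid step over a 1 km x 1 km area
rx = radio_map([(200.0, 300.0, 43.0), (800.0, 700.0, 43.0)], grid, grid)
print(rx.shape, rx.max(), rx.min())
```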