-
Comparative analysis of human adaptation to the growth of visual information in the tasks of recognizing formal symbols and meaningful images
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 571-586
We describe an engineering-psychological experiment that continues the study of how a person adapts to the increasing complexity of logical problems. The subject is presented with a series of problems of increasing complexity, where complexity is determined by the volume of initial data. The tasks require calculations in an associative or a non-associative system of operations. From the way the solution time changes with the number of required operations, one can infer whether the subject solves the problem purely sequentially or engages additional brain resources to work in parallel. In a previously published experiment, subjects solving the associative problem recognized color pictures of meaningful objects. In the new study, a similar problem is solved for abstract monochrome geometric shapes. Analysis of the results showed that in the second case the probability of the subject switching to parallel processing of visual information is significantly reduced. The research method is based on presenting a person with two types of tasks. One type involves associative calculations and admits a parallel solution algorithm. The other type is a control, containing problems whose calculations are not associative, so that parallel algorithms are ineffective. The task of recognizing and searching for a given object is associative: a parallel strategy significantly speeds up the solution at a relatively small additional cost. As the control series of problems (to distinguish parallel work from acceleration of a sequential algorithm), we use, as in the previous experiment, a non-associative comparison problem in cyclic arithmetic, presented visually as the game “rock, paper, scissors”. In this problem, a parallel algorithm requires a large number of processors with a small efficiency coefficient; the transition of a person to a parallel algorithm is therefore almost impossible, and input information can be processed faster only by increasing the processing speed. Comparing the dependence of the solution time on the volume of source data for the two types of problems allows us to identify four types of adaptation strategies: uniform sequential, accelerated sequential, parallel computing (where possible), or undefined (for this method). The reduction in the number of subjects who switch to a parallel strategy when the input is encoded with formal images demonstrates the effectiveness of codes that evoke associations in the subject: they increase the speed of human perception and processing of information. The article contains a preliminary mathematical model that explains this phenomenon, based on the appearance of a second set of initial data that arises when a person recognizes the depicted objects.
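As a hedged illustration (not code from the paper), the non-associative control task can be sketched as follows: in cyclic (mod 3) arithmetic each new symbol plays against the previous winner, and because the pairwise "duel" operation is not associative, the reduction cannot be split into independent parallel parts. The symbol encoding and function names are illustrative assumptions.

```python
# Sketch of the non-associative "rock, paper, scissors" comparison task.
# Symbols are encoded as 0, 1, 2; symbol a beats symbol b iff (a - b) % 3 == 1.
# (Encoding and names are illustrative, not taken from the paper.)

def duel(a: int, b: int) -> int:
    """Return the winner of one round in cyclic (mod 3) arithmetic."""
    return a if (a - b) % 3 == 1 else b

def line_winner(symbols: list[int]) -> int:
    """Winner of a line: each new symbol plays the previous winner.
    The fold is inherently left-to-right because duel() is not associative."""
    winner = symbols[0]
    for s in symbols[1:]:
        winner = duel(s, winner)
    return winner

# Non-associativity: grouping changes the result, so the reduction
# cannot be broken into independent chunks computed in parallel.
R, P, S = 0, 1, 2  # rock, paper, scissors
assert duel(duel(R, P), S) != duel(R, duel(P, S))
```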
-
Some relationships between thermodynamic characteristics and water vapor and carbon dioxide fluxes in a recently clear-cut area
Computer Research and Modeling, 2017, v. 9, no. 6, pp. 965-980
The temporal variability of the exergy of short-wave and long-wave radiation and its relationships with sensible heat, water vapor (H2O) and carbon dioxide (CO2) fluxes in a recently clear-cut area of a mixed coniferous and small-leaved forest in the Tver region are discussed. Analysis of the radiation and exergy efficiency coefficients suggested by Yu.M. Svirezhev showed that during the first eight months after clear-cutting the forest ecosystem functioned as a "heat engine", i.e., the processes of energy dissipation dominated over the processes of biomass production. To validate these findings, a statistical analysis of the temporal variability of meteorological parameters, as well as daily fluxes of sensible heat, H2O and CO2, was carried out using trigonometric polynomials. Statistical models linearly dependent on the exergy of short-wave and long-wave radiation were obtained for the mean daily values of CO2 fluxes, the gross primary production of regenerated vegetation, and sensible heat fluxes. The analysis of these dependences also confirmed the results obtained from the radiation and exergy efficiency coefficients. Splitting the time series into separate intervals, e.g. “spring–summer” and “summer–autumn”, revealed that the statistically significant relationships between atmospheric fluxes and exergy strengthened in the summer months as the clear-cut area became overgrown with grassy and young woody vegetation. The linear relationships between the time series of latent heat fluxes and exergy proved statistically insignificant, whereas the linear relationships between latent heat fluxes and temperature were statistically significant. Air temperature was the key factor improving model accuracy, whereas the effect of exergy was insignificant. The results indicate that during active vegetation regeneration within the clear-cut area, the seasonal variability of surface evaporation is governed mainly by temperature variation.
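As a hedged illustration of the kind of trigonometric-polynomial fit the abstract mentions (the actual model specifications are not given here), a minimal harmonic regression by least squares might look as follows; the data, period, and number of harmonics are assumptions.

```python
# Minimal sketch: fitting a trigonometric polynomial (harmonic regression)
# to a daily flux time series by ordinary least squares.
# The data, period, and number of harmonics are illustrative assumptions.
import numpy as np

def fit_harmonics(t_days, flux, period=365.0, n_harmonics=3):
    """Least-squares fit of flux(t) ~ a0 + sum_k [a_k cos + b_k sin](2*pi*k*t/P)."""
    omega = 2.0 * np.pi / period
    columns = [np.ones_like(t_days)]
    for k in range(1, n_harmonics + 1):
        columns.append(np.cos(k * omega * t_days))
        columns.append(np.sin(k * omega * t_days))
    X = np.column_stack(columns)
    coef, *_ = np.linalg.lstsq(X, flux, rcond=None)
    return coef, X @ coef  # coefficients and fitted values

# Example with synthetic data standing in for a daily CO2 flux series:
t = np.arange(365.0)
flux = 2.0 + 1.5 * np.sin(2 * np.pi * t / 365.0) + 0.3 * np.random.randn(t.size)
coef, fitted = fit_harmonics(t, flux)
```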
-
Experimental identification of the organization of human mental calculations based on algebras of different associativity
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 311-327
The work continues research on a person's ability to improve the productivity of information processing, either through parallel work or by improving the performance of the analyzers. A person receives a series of tasks whose solution requires processing a certain amount of information; the solution time and correctness are recorded. The dependence of the average solution time on the amount of information in the problem is determined over the correctly solved problems. In accordance with the proposed method, the problems involve evaluating expressions in two algebras, one associative and the other non-associative. To facilitate the subjects' work, the experiment used figurative graphic images of the algebra elements. The non-associative calculations were implemented as the game “rock-paper-scissors”: it was necessary to determine the winning symbol in a long line of these figures, given that they appear sequentially from left to right and each plays against the previous winner. The associative calculations were based on recognizing drawings from a finite set of simple images: it was necessary to determine which figure from this set was missing from the line, or to state that all the pictures were present; in each problem, at most one picture was missing. Computation in an associative algebra permits parallel counting, whereas without associativity only sequential computation is possible. Therefore, analyzing the time to solve a series of problems distinguishes uniform sequential, accelerated sequential, and parallel computing strategies. The experiments showed that all subjects used a uniform sequential strategy for the non-associative problems. For the associative task, all subjects used parallel computing, and some accelerated their parallel computation as task complexity grew. Judging by the evolution of the solution time, a small fraction of subjects at high complexity supplemented the parallel computation with a sequential stage (possibly to check the solution). We developed a special method for assessing a person's rate of processing input information, which allowed us to estimate the level of parallelism in the associative task: parallelism levels of two to three were registered. The characteristic speed of information processing in the sequential case (about one and a half characters per second) is half the typical speed of human image recognition; apparently, the difference is the processing time actually spent on the calculation itself. For the associative problem with the minimum amount of information, the solution time is close to that of the non-associative case, or smaller by less than a factor of two. This is probably because, for a small number of characters, recognition accounts for almost all of the computation in the non-associative problem used.
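A hedged sketch of why the associative task admits a parallel strategy: set union over observed symbols is associative and commutative, so the line can be split into chunks processed independently and merged in any order. The symbol set, chunking, and names below are illustrative assumptions, not the experiment's materials.

```python
# Sketch of the associative task: find which symbol from a known finite set
# is missing from the line, or report that all symbols are present.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

ALPHABET = frozenset("ABCDE")  # illustrative symbol set, not from the paper

def seen(chunk: str) -> frozenset:
    """Associative 'partial sum': the set of symbols observed in a chunk."""
    return frozenset(chunk)

def missing_symbol(line: str, n_workers: int = 3):
    """Union partial results from chunks; associativity makes order irrelevant."""
    size = max(1, len(line) // n_workers)
    chunks = [line[i:i + size] for i in range(0, len(line), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(seen, chunks))
    observed = reduce(frozenset.union, parts, frozenset())
    absent = ALPHABET - observed
    return next(iter(absent)) if absent else None

assert missing_symbol("ABDEABDAEB") == "C"
assert missing_symbol("ABCDEABC") is None
```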
-
Repressilator with time-delayed gene expression. Part II. Stochastic description
Computer Research and Modeling, 2021, v. 13, no. 3, pp. 587-609
The repressilator is the first genetic regulatory network in synthetic biology, artificially constructed in 2000. It is a closed network of three genetic elements, $lacI$, $\lambda cI$ and $tetR$, which have a natural origin but are not found in nature in such a combination. The promoter of each of the three genes controls the next cistron via negative feedback, suppressing the expression of the neighboring gene. In our previous paper [Bratsun et al., 2018], we proposed a mathematical model of a delayed repressilator and studied its properties within the framework of a deterministic description. We assume that the delay can be natural, i.e., arising during gene transcription/translation due to the multistage nature of these processes, or artificial, i.e., deliberately introduced into the regulatory network using genetic engineering technologies. In this work, we apply a stochastic description of the dynamic processes in a delayed repressilator, an important addition to the deterministic analysis given the small number of molecules involved in gene regulation. The stochastic study is carried out numerically using the Gillespie algorithm, modified for time-delay systems. We present a description of the algorithm, its software implementation, and the results of benchmark simulations for a one-gene delayed autorepressor. When studying the behavior of the repressilator, we show that the stochastic description in a number of cases gives new information about the system's behavior that does not reduce to deterministic dynamics even when averaged over a large number of realizations. We show that in the subcritical range of parameters, where deterministic analysis predicts absolute stability of the system, quasi-regular oscillations may be excited by the nonlinear interaction of noise and delay. Earlier, within the deterministic description, we discovered a long-lived transient regime, represented in phase space by a slow manifold, which reflects the long-term synchronization of protein pulsations in the work of the repressilator genes. In this work, we show that the transition to the cooperative mode of gene operation occurs about two orders of magnitude faster when the effect of intrinsic noise is taken into account. We obtained the probability distribution of the moment when the phase trajectory leaves the slow manifold and determined the most probable time for such a transition. The influence of the intrinsic noise of chemical reactions on the dynamic properties of the repressilator is discussed.
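For orientation, a minimal sketch of a rejection-based stochastic simulation algorithm with delay (in the spirit of Barrio et al., 2006) is given below for a one-gene delayed autorepressor; this is one common way to modify the Gillespie algorithm for delays, not necessarily the paper's exact implementation. All rate constants and the Hill repression form are illustrative assumptions.

```python
# Rejection-based Gillespie SSA with delay for a one-gene autorepressor:
# delayed protein production repressed by the protein itself, immediate
# degradation. Parameters are illustrative, not taken from the paper.
import heapq
import math
import random

def delayed_ssa(t_end=1000.0, k1=1.0, K=10.0, n=2, gamma=0.05, tau=20.0, seed=1):
    rng = random.Random(seed)
    t, protein = 0.0, 0
    pending = []            # min-heap of completion times of delayed production
    history = [(t, protein)]
    while t < t_end:
        a_prod = k1 * K**n / (K**n + protein**n)  # Hill-repressed production
        a_deg = gamma * protein                   # first-order degradation
        a0 = a_prod + a_deg
        if a0 == 0.0 and not pending:
            break
        dt = math.inf if a0 == 0.0 else -math.log(1.0 - rng.random()) / a0
        if pending and t + dt >= pending[0]:
            # A scheduled delayed completion fires first: discard the draw,
            # jump to the completion time and release one protein molecule.
            t = heapq.heappop(pending)
            protein += 1
        elif dt != math.inf:
            t += dt
            if rng.random() * a0 < a_prod:
                heapq.heappush(pending, t + tau)  # schedule delayed completion
            else:
                protein -= 1
        history.append((t, protein))
    return history
```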
-
IX International Conference “Engineering and Telecommunications — En&T – 2022”
Computer Research and Modeling, 2023, v. 15, no. 1, pp. 125-127
-
Numerical study of the mechanisms of propagation of pulsating gaseous detonation in a non-uniform medium
Computer Research and Modeling, 2023, v. 15, no. 5, pp. 1263-1282
In the last few years, significant progress has been observed in the field of rotating detonation engines for aircraft. Scientific laboratories around the world conduct both fundamental research, related, for example, to the effective mixing of separately supplied fuel and oxidizer, and applied development of existing prototypes. The paper provides a brief overview of the main results of the most significant recent computational work on the propagation of a one-dimensional pulsating gaseous detonation wave in a non-uniform medium, and notes the general trends observed by the authors of these works. These works show that perturbations of the parameters ahead of the wave front can lead to regularization and to resonant amplification of the pulsations behind the detonation front. There is thus a practically appealing opportunity to influence the stability of the detonation wave and to control it. The aim of the present work is to create a tool for studying the gas-dynamic mechanisms of these effects.
The mathematical model is based on the one-dimensional Euler equations supplemented by a one-stage model of chemical reaction kinetics. The governing system of equations is written in the shock-attached frame, which makes it necessary to add a shock-change equation. A method for integrating this equation is proposed that takes into account the change in the density of the medium ahead of the wave front. A numerical algorithm for simulating detonation-wave propagation in a non-uniform medium is thereby obtained.
Using the developed algorithm, a numerical study of the propagation of stable detonation in a medium with variable density was carried out. A mode with a relatively small oscillation amplitude is investigated, in which the parameters behind the detonation front fluctuate with the frequency of the density fluctuations of the medium. The oscillation period is shown to be related to the transit time of the C+ and C0 characteristics across the region that can conditionally be considered the induction zone. The phase shift between the oscillations of the detonation-wave velocity and the gas density ahead of the wave is estimated as the maximum transit time of the C+ characteristic through the induction zone.
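For reference, a common textbook form of the governing model named in the abstract, the one-dimensional reactive Euler equations with one-step Arrhenius kinetics, is sketched below; the notation and closure actually used in the paper may differ.

$$
\frac{\partial}{\partial t}\begin{pmatrix}\rho\\ \rho u\\ \rho E\\ \rho Z\end{pmatrix}
+\frac{\partial}{\partial x}\begin{pmatrix}\rho u\\ \rho u^{2}+p\\ u(\rho E+p)\\ \rho u Z\end{pmatrix}
=\begin{pmatrix}0\\ 0\\ 0\\ -\rho k Z\, e^{-E_a/(RT)}\end{pmatrix},
\qquad
E=\frac{p}{\rho(\gamma-1)}+\frac{u^{2}}{2}+QZ,
$$

where $Z$ is the reactant mass fraction, $Q$ the heat release per unit mass, and $k$, $E_a$ the pre-exponential factor and activation energy of the one-step reaction.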
-
Monitoring the spread of Sosnowskyi’s hogweed using a random forest machine learning algorithm in Google Earth Engine
Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1357-1370
Examining the spectral response of plants from remotely sensed data has great potential for solving real-world problems in different fields of research. In this study, we used this spectral property to identify the invasive plant Heracleum sosnowskyi Manden from satellite imagery. H. sosnowskyi is an invasive plant that causes much harm to humans, animals, and the ecosystem at large. We used geolocation samples collected from 2018 to 2020 in the Moscow Region, where this plant occurs, together with Sentinel-2 imagery for the spectral analysis aimed at detecting it. We deployed a Random Forest (RF) machine learning model within the framework of Google Earth Engine (GEE). The algorithm learns from the collected data using the 12 Sentinel-2 bands, a digital elevation model, and several spectral indices as features. The approach is to learn the biophysical parameters of H. sosnowskyi from its reflectances by fitting the RF model directly to the data. Our results demonstrate how the combination of remote sensing and machine learning can assist in locating H. sosnowskyi, which aids in controlling its invasive expansion. Our approach detects the plant with a high accuracy of 96.93%.
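For orientation, a hedged sketch of how such a classification is commonly set up in the GEE Python API follows. The point asset ID, class property name, date range, and feature choice are illustrative placeholders, not the paper's actual configuration; ee.Classifier.smileRandomForest is GEE's Random Forest implementation.

```python
# Hedged sketch of Random Forest classification of Sentinel-2 imagery in
# Google Earth Engine, along the lines described in the abstract.
import ee

ee.Initialize()

points = ee.FeatureCollection('users/example/hogweed_points')  # placeholder
bands = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B8', 'B8A', 'B11', 'B12']

# Cloud-filtered Sentinel-2 median composite for an assumed growing season.
image = (ee.ImageCollection('COPERNICUS/S2_SR')
         .filterDate('2019-05-01', '2019-08-31')
         .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
         .median()
         .select(bands))

# Add an NDVI spectral index and elevation as extra features.
ndvi = image.normalizedDifference(['B8', 'B4']).rename('NDVI')
elevation = ee.Image('USGS/SRTMGL1_003').rename('elevation')
stack = image.addBands(ndvi).addBands(elevation)

# Sample the feature stack at labeled points and train the classifier.
training = stack.sampleRegions(collection=points, properties=['class'], scale=10)
classifier = (ee.Classifier.smileRandomForest(numberOfTrees=100)
              .train(features=training, classProperty='class',
                     inputProperties=stack.bandNames()))
classified = stack.classify(classifier)
```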
-
Experimental comparison of PageRank vector calculation algorithms
Computer Research and Modeling, 2023, v. 15, no. 2, pp. 369-379
Finding the PageRank vector is of great scientific and practical interest due to its applicability in modern search engines. Although the problem reduces to finding an eigenvector of the stochastic matrix $P$, the need for new algorithms is justified by the large size of the input data. To achieve at most linear execution time, various randomized methods have been proposed that return the expected result only with some probability close enough to one. We consider two of them, reducing the problem of calculating the PageRank vector to the problem of finding equilibrium in an antagonistic matrix game, which is then solved using the Grigoriadis–Khachiyan algorithm. This implementation works effectively under the assumption that the input matrix is sparse. As far as we know, there are no previous successful implementations of either the Grigoriadis–Khachiyan algorithm or its application to the task of calculating the PageRank vector; the purpose of this paper is to fill this gap. The article describes the algorithm, giving pseudocode and some implementation details. In addition, it discusses another randomized method of calculating the PageRank vector, Markov chain Monte Carlo (MCMC), in order to compare the results of these algorithms on matrices with different values of the spectral gap. The latter is of particular interest, since the magnitude of the spectral gap strongly affects the convergence rate of MCMC and does not affect the other two approaches at all. The comparison was carried out on two types of generated graphs: chains and $d$-dimensional cubes. As predicted by the theory, the experiments demonstrated the effectiveness of the Grigoriadis–Khachiyan algorithm in comparison with MCMC for sparse graphs with a small spectral gap. The code is publicly available, so anyone can reproduce the results or use this implementation for their own needs. The work has a purely practical orientation; no theoretical results were obtained.
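As a hedged illustration of the MCMC approach compared in the paper, the sketch below estimates PageRank as the endpoint distribution of random walks with geometric stopping (a standard Monte Carlo estimator); the graph representation and parameter values are illustrative assumptions.

```python
# MCMC (random-walk) estimator of the PageRank vector: run many walks with
# teleportation and count where they end. Parameters are illustrative.
import random
from collections import Counter

def pagerank_mcmc(adj, damping=0.85, n_walks=100_000, seed=0):
    """adj: dict mapping node -> list of out-neighbors."""
    rng = random.Random(seed)
    nodes = list(adj)
    counts = Counter()
    for _ in range(n_walks):
        v = rng.choice(nodes)
        # Walk length is geometric: continue with probability `damping`.
        while rng.random() < damping:
            out = adj[v]
            if not out:                  # dangling node: teleport uniformly
                v = rng.choice(nodes)
            else:
                v = rng.choice(out)
        counts[v] += 1
    return {v: counts[v] / n_walks for v in nodes}

# Example on a 4-node chain (one of the graph types used in the paper):
chain = {0: [1], 1: [2], 2: [3], 3: []}
pr = pagerank_mcmc(chain)
```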
-
Modeling of the supply–demand imbalance in the engineering labor market
Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1249-1273
Nowadays, supply-demand imbalances in professional labor markets cause human capital losses and hamper scientific and innovative development. In Russia, supply-demand imbalances in the engineering labor market are associated with deindustrialization and manufacturing decline, which have resulted in a negative public perception of the engineering profession and high rates of graduates not working in their specialty or changing occupation.
To analyze the supply-demand imbalances in the engineering labor market, we developed a macroeconomic model. The model consists of 14 blocks, including blocks for the demand for and supply of engineers and technicians, along with blocks for macroeconomic indicators such as industrial and service-sector output and capital investment. Using this model, we forecast prospective supply-demand imbalances in the engineering labor market in the short term and examined the parameters required to reach supply-demand balance in the medium term.
The results show that a more balanced supply and demand for engineering labor is possible given a simultaneous increase in the share of fixed-asset investment in manufacturing and in relative industrial wages; reaching balance is further facilitated by a decrease in the share of graduates not working in their specialty. It is worth noting that a decrease in the share of graduates not working in their specialty may be driven either by the growth of relative wages in industry and the number of vacancies, or by measures aimed at improving the working conditions of the engineering workforce and increasing the attractiveness of the profession. To summarize, in the simplest scenario, which does not consider additional measures to improve working conditions or raise the profession's attractiveness, achieving supply-demand balance requires slightly lower growth rates of industrial investment than scenarios that involve increasing the share of engineers and technicians working in their specialty after graduation. The latter case, which assumes a gradual decrease in the proportion of those not working in an engineering specialty, probably requires higher investment in attracting specialists and creating new jobs, as well as additional measures to strengthen the attractiveness of the engineering profession.
-
Numerical modeling of swirling flows
Computer Research and Modeling, 2013, v. 5, no. 4, pp. 635-648
This paper is devoted to the investigation of swirl flows, which are widely used in various industrial processes. Swirl flows can be accompanied by time-dependent effects, for example, precession of the vortex core; in turn, the large-scale fluctuations due to vortex precession can damage structures and reduce equipment reliability. Thus, engineering calculations require approaches that describe such flows sufficiently well. This paper presents a technique for calculating swirl flows, tested in the CFD packages Fluent and SigmaFlow. Numerical simulations of several swirl flow test problems were carried out, and the results obtained are compared with each other and with experimental data.