All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Tests for detecting parallel organization in logical computations, based on algebra and automata theory
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 621-638
We construct new tests that allow human information-processing capacity to be increased through the parallel execution of several logical operations of a prescribed type. To verify the causes of this increase, we also develop control tests on the same class of logical operations, for which parallel organization of the computation is ineffective. We use the apparatus of universal algebra and automata theory. This article continues a series of works investigating the human capacity for parallel computation; the main publications on the topic are given in the references. The tasks in the described tests can be stated as computing the result of a sequence of operations of the same type taken from some algebra. If the operation is associative, parallel computation is effective because the operands can be grouped appropriately. In computational terms this corresponds to the simultaneous work of several processors, each of which transforms a certain known number of elements of the input data or intermediate results per unit of time (the processor productivity). It is not yet known what kind of data elements the brain uses for logical or mathematical computation, nor how many elements are processed per unit of time. The test therefore presents a sequence of tasks with different numbers of logical operations over a fixed alphabet; this number serves as the complexity measure of the task. Analysing how the solution time depends on the complexity makes it possible to estimate the processor productivity and the form of organization of the computation. In a purely sequential computation only one processor works, and the solution time is a linear function of complexity. If new processors begin to work in parallel as the task complexity grows, the dependence of the solution time on complexity is represented by a curve that is convex downward. To detect the situation in which a person simply speeds up a single processor as complexity increases, we use series of tasks with similar operations but in a non-associative algebra; for such tasks parallel computation gains little from increasing the number of processors. These form the control set of tests. We also consider one more class of tests, based on computing the trajectory of the states of a formal automaton for a given input sequence. We investigate a special class of automata (relays) whose construction affects the effectiveness of computing the final automaton state in parallel. For all tests we estimate the effectiveness of parallel computation. The article does not contain experimental results.
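A minimal sketch, not taken from the article, of why associativity is what makes parallel grouping pay off: an associative operation over n operands can be evaluated as a balanced tree of depth about log2(n), while a non-associative operation forces a left-to-right chain of depth n-1. The function names below are illustrative only.

```python
# Illustrative sketch: depth of sequential vs. tree-structured evaluation
# of a chain of identical operations. Associativity permits the tree form.
from functools import reduce

def sequential_depth(n):
    """Depth of the computation chain when operands are folded left to right."""
    return n - 1

def tree_depth(n):
    """Depth of a balanced reduction tree for an associative operation."""
    depth = 0
    while n > 1:
        n = (n + 1) // 2   # pairs are combined "in parallel" at each level
        depth += 1
    return depth

# Example: conjunction (associative) over 16 Boolean operands.
xs = [True, False, True, True] * 4
assert reduce(lambda a, b: a and b, xs) == all(xs)   # grouping does not change the result
print(sequential_depth(len(xs)), tree_depth(len(xs)))  # 15 levels vs. 4 levels
```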
-
Model of the formation of primary behavioral patterns in adaptive behavior based on a combination of random search and experience
Computer Research and Modeling, 2016, v. 8, no. 6, pp. 941-950
In this paper we propose an adaptive algorithm that simulates the formation of initial behavioral skills, using an 'eye-arm' animat system as an example. Such formation of initial behavioral skills occurs, for example, when a child learns to control its hands by grasping the relationship between initially unidentified spots on the retina of its eye and the position of a real object. Since body-control skills are not 'hardcoded' in the brain and spinal cord at the level of instincts, a human child, like the young of most other mammals, has to develop these skills in an exploratory-behavior mode. Exploratory behavior begins with trial and error, and its contribution gradually decreases as the organism masters its body and environment. Since correct behavior patterns do not yet exist at this stage of the organism's development, the only way to select the right skills is positive reinforcement upon reaching the goal. A key feature of the proposed algorithm is that, in imprinting mode, only the final action that led to success is fixed and, importantly, only when it led to a familiar, already imprinted situation that reliably leads to success. Over time the continuous chain of correct actions lengthens: previous positive experience is used as much as possible, while negative experience is 'forgotten' and not used.
Thus random search is gradually replaced by purposeful actions, as is observed in real young animals. The algorithm is therefore able to establish a correspondence between the laws of the external world and the 'inner feelings', the internal state of the animat. The proposed animat model uses two types of neural networks: 1) network NET1, whose inputs are the current position of the arm's 'hand' and the target point and whose outputs are motor commands directing the animat's 'hand' (manipulator) to the target point; 2) network NET2, which receives the target coordinates and the current coordinates of the 'hand' and outputs a value estimating the likelihood that the animat already 'knows' this situation and 'knows' how to react to it. With this architecture the animat can rely on the 'experience' of the neural networks to recognize situations in which the response of NET2 is close to 1, and, on the other hand, run a random search when it has no experience of operating in this area of the visual field (the response of NET2 is close to 0).
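A schematic sketch, under stated assumptions, of the control loop this architecture implies: NET2 scores how familiar the current situation is; if familiarity is high the motor command comes from NET1 ("experience"), otherwise a random search step is taken. The placeholder functions net1 and net2 below are hypothetical stand-ins for the trained networks, not the authors' implementation.

```python
# Hypothetical decision loop: trust experience when the situation is familiar,
# fall back to random search otherwise.
import numpy as np

rng = np.random.default_rng(0)

def net1(hand, target):
    # Placeholder for the trained motor network: move the hand toward the target.
    return 0.5 * (target - hand)

def net2(hand, target):
    # Placeholder familiarity score in (0, 1]: high near already-imprinted states.
    return np.exp(-np.linalg.norm(target - hand))

def step(hand, target, threshold=0.5):
    if net2(hand, target) > threshold:                      # familiar situation: use experience
        return hand + net1(hand, target)
    return hand + rng.normal(scale=0.2, size=hand.shape)    # unfamiliar: random search

hand, target = np.zeros(2), np.array([1.0, 1.0])
for _ in range(20):
    hand = step(hand, target)
print(hand)
```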
-
Multifractal and entropy statistics of seismic noise in Kamchatka in connection with the strongest earthquakes
Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1507-1521
The study of the properties of seismic noise in Kamchatka is based on the idea that noise is an important source of information about the processes preceding strong earthquakes. The hypothesis is considered that an increase in seismic hazard is accompanied by a simplification of the statistical structure of seismic noise and by an increase in the spatial correlations of its properties. The entropy of the distribution of squared wavelet coefficients, the support width of the multifractal singularity spectrum, and the Donoho-Johnstone index were used as statistics characterizing the noise. The values of these parameters reflect its complexity: if a random signal is close in its properties to white noise, the entropy is at a maximum and the other two parameters are at a minimum. The statistics are calculated for 6 clusters of stations. For each station cluster, daily median noise properties are computed in successive 1-day time windows, resulting in an 18-dimensional (3 properties times 6 station clusters) time series of properties. To extract the common features of the changes in the noise parameters, principal component analysis is applied to each cluster of stations, compressing the information into a 6-dimensional daily time series of principal components. Spatial noise coherence is estimated as the set of maximum pairwise quadratic coherence spectra between the principal components of the station clusters in a sliding time window 365 days long. By computing histograms of the cluster numbers at which the minimum and maximum values of the noise statistics are attained in a sliding 365-day window, the migration of seismic hazard areas was assessed in comparison with strong earthquakes of magnitude at least 7.
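A minimal sketch, assuming an orthogonal 'db4' wavelet and a daily window passed as a one-dimensional array, of the first of the three statistics named above: squared wavelet coefficients are normalized to a probability distribution and the normalized Shannon entropy is computed. The function name and parameters are illustrative, not the authors' code.

```python
# Normalized entropy of the distribution of squared wavelet coefficients:
# high for noise-like signals, much lower for simple structured signals.
import numpy as np
import pywt

def wavelet_entropy(signal, wavelet="db4"):
    coeffs = np.concatenate(pywt.wavedec(signal, wavelet))   # all approximation + detail coefficients
    p = coeffs**2 / np.sum(coeffs**2)                        # normalized coefficient "energies"
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(len(coeffs))      # entropy scaled to [0, 1]

rng = np.random.default_rng(1)
print(wavelet_entropy(rng.standard_normal(4096)))            # white noise: near the maximum
print(wavelet_entropy(np.sin(np.linspace(0, 40, 4096))))     # simple signal: much lower
```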
-
Investigation of individual-based mechanisms of single-species population dynamics by logical deterministic cellular automata
Computer Research and Modeling, 2015, v. 7, no. 6, pp. 1279-1293
Investigation of logical deterministic cellular automata models of population dynamics makes it possible to reveal detailed individual-based mechanisms. The search for such mechanisms is important in connection with ecological problems caused by overexploitation of natural resources, environmental pollution and climate change. Classical models of population dynamics are phenomenological in nature: they are "black boxes". Phenomenological models fundamentally complicate the study of the detailed mechanisms of ecosystem functioning. We have investigated the role of fecundity and of the duration of resource regeneration in the mechanisms of population growth using four models of a single-species ecosystem. These models are logical deterministic cellular automata and are based on the physical axiomatics of an excitable medium with regeneration. We have modeled a catastrophic die-off of the population caused by an increase in the duration of resource regeneration. It has been shown that greater fecundity accelerates population extinction. The investigated mechanisms are important for understanding the mechanisms of ecosystem sustainability and biodiversity conservation. Prospects of the presented modeling approach as a method of transparent multilevel modeling of complex systems are discussed.
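A hypothetical minimal sketch, not the authors' exact rule set, of a logical deterministic cellular automaton in the spirit of an excitable medium with regeneration: a cell is either free with a regenerated resource, occupied by an individual, or recovering after its resource has been used, with a deterministic regeneration countdown of T steps.

```python
# States: 0 = free cell with resource, 1 = occupied by an individual,
# 2..T+1 = resource regenerating after use (counts down to 0).
import numpy as np

def step(grid, T):
    """One synchronous update on a torus (np.roll gives periodic boundaries)."""
    occupied = (grid == 1).astype(int)
    nbrs = (np.roll(occupied, 1, 0) + np.roll(occupied, -1, 0) +
            np.roll(occupied, 1, 1) + np.roll(occupied, -1, 1))
    new = grid.copy()
    new[grid == 1] = T + 1                   # individual consumes the resource and dies
    new[grid >= 3] -= 1                      # regeneration countdown
    new[grid == 2] = 0                       # resource fully regenerated
    new[(grid == 0) & (nbrs > 0)] = 1        # reproduction into free neighbouring cells
    return new

grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = 1                             # single founder individual
for t in range(200):
    grid = step(grid, T=5)
print("population size:", int(np.sum(grid == 1)))
```

Varying T in such a sketch shows the kind of effect the abstract describes: longer regeneration leaves fewer free cells in the wake of the population wave.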
-
Uncertainty factor in modeling the dynamics of economic systems
Computer Research and Modeling, 2018, v. 10, no. 2, pp. 261-276
We analyze the practical aspects of applying robust control methods, developed in control theory, to the study of economic systems. The main emphasis is placed on results obtained for dynamical systems with structured uncertainty. Practical aspects of applying such results to the control of economic systems are discussed on the basis of dynamical models with uncertain parameters and perturbations (stabilization of the price on the oil market and of inflation in macroeconomic systems). Using a specially constructed aggregate model of oil price dynamics, we study the problem of finding a control that provides minimal deviation of the price from desired levels over a medium-term horizon. The second real problem considered in the article is the determination of a stabilizing control providing minimal deviation of inflation from desired levels, on the basis of a constructed aggregate macroeconomic model of the USA over a medium-term horizon.
Upper levels of parameter uncertainty and control laws guaranteeing stabilizability of the considered real economic systems have been found using the robust control method for systems with structured uncertainty. At the same time, we conclude that the obtained estimates of the upper levels of parameter uncertainty are conservative. Monte Carlo experiments carried out for the article made it possible to analyze the dynamics of the oil price and of inflation under the obtained limit levels of model parameter uncertainty and under the found robust control laws, for the worst and the best scenarios. The results of these experiments show that the obtained robust control laws can be successfully used under less stringent uncertainty constraints than guaranteed by the sufficient conditions of stabilization.
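A minimal hypothetical sketch of a Monte Carlo check of this kind for a scalar deviation model x(k+1) = (a + d) x(k) + b u(k) with an uncertain parameter |d| <= delta and a fixed stabilizing feedback u(k) = -K x(k). All numbers below are illustrative, not taken from the article.

```python
# Monte Carlo inspection of the worst-case deviation of a simple uncertain
# linear system under a fixed robust feedback gain.
import numpy as np

rng = np.random.default_rng(42)
a, b, K = 1.05, 1.0, 0.30          # nominal dynamics and a candidate feedback gain
delta = 0.10                       # assumed upper level of parameter uncertainty
horizon, trials = 40, 10_000

worst = 0.0
for _ in range(trials):
    d = rng.uniform(-delta, delta)
    x = 1.0                        # initial deviation from the desired level
    for _ in range(horizon):
        x = (a + d) * x + b * (-K * x)
    worst = max(worst, abs(x))

# The closed loop is contracting whenever |a + d - b*K| < 1 for all |d| <= delta.
print("worst final deviation over sampled scenarios:", worst)
```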
-
Traffic flow speed prediction on transportation graph with convolutional neural networks
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 359-367
Short-term prediction of road traffic conditions is one of the main tasks of transportation modelling. Its main purposes are traffic control, accident reporting, avoiding traffic jams through knowledge of the traffic flow, and subsequent transportation planning. A number of solutions, both model-driven and data-driven, have proven successful in capturing the dynamics of traffic flow. Nevertheless, most space-time models suffer from high mathematical complexity and low efficiency. Artificial neural networks, one of the prominent data-driven approaches, show promising performance in modelling the complexity of traffic flow. We present a neural network architecture for traffic flow prediction on a real-world road network graph. The model is based on a combination of a recurrent neural network and a graph convolutional neural network: the recurrent neural network models temporal dependencies, and the convolutional neural network is responsible for extracting spatial features from the traffic. To make predictions several steps ahead, an encoder-decoder architecture is used, which reduces noise propagation due to inexact predictions. To model the complexity of the traffic flow we employ a multilayered architecture. Deeper neural networks are more difficult to train; to speed up the training process we use skip connections between layers, so that each layer learns only the residual function with respect to the outputs of the previous layer. The resulting neural network was trained on raw data from traffic flow detectors of the US highway system with a resolution of 5 minutes. Three metrics (mean absolute error, mean relative error and mean squared error) were used to estimate the quality of the prediction. It was found that for all metrics the proposed model achieves lower prediction error than previously published models such as vector autoregression, LSTM and Graph Convolutional GRU.
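A schematic numpy sketch, not the authors' code, of the spatial block such an architecture relies on: one graph-convolution step X' = ReLU(D^{-1/2}(A + I)D^{-1/2} X W) mixes each detector's features with those of its neighbours on the road graph; in the full model the output of such layers feeds the recurrent (GRU) part that handles time. The tiny graph and random weights are placeholders.

```python
# One normalized graph-convolution step over a small road graph.
import numpy as np

def graph_conv(X, A, W):
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)   # ReLU

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],                        # adjacency of a 4-detector road graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 2))                    # e.g. speed and flow per detector
W = rng.standard_normal((2, 8))                    # learned weights (random here)
print(graph_conv(X, A, W).shape)                   # (4, 8): spatial features per node
```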
-
Neuro-fuzzy model of fuzzy rule formation for object state evaluation under uncertainty
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 477-492
This article addresses the problem of constructing a neuro-fuzzy model that forms fuzzy rules and uses them to evaluate the state of objects under uncertainty. Traditional methods of mathematical statistics or simulation modelling do not allow adequate models of objects to be built under these conditions, so the solution of many problems is now based on intelligent modelling technologies using fuzzy logic methods. The traditional approach to constructing fuzzy systems requires an expert to formulate the fuzzy rules and to specify the membership functions used in them. To eliminate this drawback, it is relevant to automate the formation of fuzzy rules on the basis of machine learning methods and algorithms. One approach to this problem is to build a fuzzy neural network and train it on data characterizing the object under study. Implementing this approach required choosing the type of fuzzy rules with regard to the specifics of the processed data, and developing a logical inference algorithm for rules of the selected type. The steps of this algorithm determine the number of layers in the structure of the fuzzy neural network and their functionality. A training algorithm for the fuzzy neural network was developed. After the network is trained, a system of fuzzy production rules is formed. A software package has been implemented on the basis of the developed mathematical tools. It was used to assess the classifying ability of the generated fuzzy rules on a data analysis example from the UCI Machine Learning Repository. The results showed that the classifying ability of the formed fuzzy rules is not inferior in accuracy to other classification methods. In addition, the logical inference algorithm on fuzzy rules allows successful classification even when part of the initial data is missing. As a practical test, fuzzy rules were generated for the problem of assessing the state of water pipelines in the oil industry. Based on initial data for 303 water pipelines, a base of 342 fuzzy rules was formed; their practical approbation showed high efficiency in solving this problem.
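A minimal hypothetical sketch of inference on fuzzy rules of the general kind described: each rule carries Gaussian membership functions over the inputs, its firing strength is the product of memberships, and the class with the largest accumulated strength wins; inputs that are missing are simply skipped, which is what makes classification with incomplete data possible. The rule base and numbers are invented for illustration.

```python
# Product-inference over a tiny hypothetical fuzzy rule base, tolerant to missing inputs.
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Each rule: (membership centres, membership widths, class label)
rules = [
    (np.array([0.2, 0.3]), np.array([0.1, 0.1]), "normal"),
    (np.array([0.8, 0.7]), np.array([0.1, 0.2]), "defective"),
]

def classify(x):
    scores = {}
    for centres, widths, label in rules:
        mask = ~np.isnan(x)                       # ignore missing inputs
        strength = np.prod(gauss(x[mask], centres[mask], widths[mask]))
        scores[label] = scores.get(label, 0.0) + strength
    return max(scores, key=scores.get)

print(classify(np.array([0.75, np.nan])))         # classification with a missing feature
```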
-
Tracking on the BESIII CGEM inner detector using deep learning
Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1361-1381
The reconstruction of charged particle trajectories in tracking detectors is a key problem in the analysis of experimental data for high energy and nuclear physics.
The amount of data in modern experiments is so large that classical tracking methods such as the Kalman filter cannot process it fast enough. To solve this problem, we have developed two neural network track-recognition algorithms, based on deep learning architectures, for local (track by track) and global (all tracks in an event) tracking in the GEM tracker of the BM@N experiment at JINR (Dubna). The advantage of deep neural networks is their ability to detect hidden nonlinear dependencies in data and the capability of parallel execution of the underlying linear algebra operations.
In this work we generalize these algorithms to the cylindrical GEM inner tracker of the BESIII experiment. The neural network model RDGraphNet for global track finding, based on a reversed directed graph, has been successfully adapted. After training on Monte Carlo data, testing showed encouraging results: a recall of 98% and a precision of 86% for track finding.
The local neural network model TrackNETv2 was also successfully adapted to the BESIII CGEM. Since the tracker has only three detecting layers, an additional neuro-classifier to filter out false tracks has been introduced. Preliminary tests demonstrated a recall of 99% at the first stage. After applying the neuro-classifier, the precision was 77% with a slight decrease of the recall to 94%. This result can be improved by further optimization of the model.
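A small sketch, with purely illustrative counts, of how the track-finding quality metrics quoted above are computed: recall is the fraction of true tracks that are found, precision the fraction of kept track candidates that are true.

```python
# Recall/precision of a two-stage track-finding pipeline (illustrative numbers only).
def recall_precision(true_found, true_total, candidates_kept):
    return true_found / true_total, true_found / candidates_kept

# Stage 1 (track-candidate search): almost every true track survives, many ghosts remain.
print(recall_precision(true_found=990, true_total=1000, candidates_kept=2500))
# Stage 2 (after the neuro-classifier): a few true tracks are lost, ghosts are suppressed.
print(recall_precision(true_found=940, true_total=1000, candidates_kept=1220))
```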
-
Model for building a radio environment map for an LTE-based cognitive communication system
Computer Research and Modeling, 2022, v. 14, no. 1, pp. 127-146
The paper is devoted to the secondary use of spectrum in telecommunication networks. One solution to this problem is the use of cognitive radio technologies and dynamic spectrum access, whose successful operation requires a large amount of information, including the parameters of base stations and network subscribers. This information should be stored and processed using a radio environment map: a spatio-temporal database of all activity in the network that makes it possible to determine the frequencies available for use at a given time. The paper presents a two-level model for building the radio environment map of an LTE cellular communication system, with local and global levels, described by the following parameters: the set of frequencies, signal attenuation, signal propagation map, grid step, and current time count. The key objects of the model are the base station and the subscriber device. The main parameters of a base station include its name, identifier, cell coordinates, band number, radiated power, the numbers of connected subscriber devices, and the allocated resource blocks. A subscriber device is described by its name, identifier, location, current coordinates of the device's cell, base station identifier, frequency band, numbers of the resource blocks used for communication with the station, radiated power, data transmission status, list of numbers of the nearest stations, and schedules of movement and of communication sessions. An algorithm implementing the model is presented, which takes into account the scenarios of movement and communication sessions of subscriber devices. A method is given for calculating the radio environment map at a point of the coordinate grid, taking into account losses during the propagation of radio signals from emitting devices. The model is implemented in software using the MATLAB package, and approaches that increase its speed are described. In the simulation, the parameters were chosen with regard to the data of existing communication systems and the economy of computing resources. Experimental results of the radio-map-building algorithm are demonstrated, confirming the correctness of the developed model.
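A hypothetical sketch of the kind of per-point calculation such a map involves: for every point of the coordinate grid, the received power from each emitting device is estimated with a simple log-distance path-loss model and the strongest signal is stored. The path-loss exponent, reference loss and all positions are assumed values; the article's own propagation model may differ, and per frequency band the computation would be repeated.

```python
# Radio-map sketch over a coordinate grid with a log-distance path-loss model.
import numpy as np

def radio_map(tx_positions, tx_powers_dbm, grid_x, grid_y, n=3.5, pl0=30.0):
    """Path loss PL(d) = pl0 + 10*n*log10(d), with d in metres."""
    X, Y = np.meshgrid(grid_x, grid_y)
    best = np.full(X.shape, -np.inf)
    for (tx, ty), p_dbm in zip(tx_positions, tx_powers_dbm):
        d = np.maximum(np.hypot(X - tx, Y - ty), 1.0)   # avoid log of zero at the emitter
        rx = p_dbm - (pl0 + 10.0 * n * np.log10(d))     # received power, dBm
        best = np.maximum(best, rx)                     # keep the strongest signal per point
    return best

grid = np.linspace(0.0, 1000.0, 101)                    # 1 km x 1 km area, 10 m grid step
rmap = radio_map([(250.0, 250.0), (750.0, 600.0)], [43.0, 43.0], grid, grid)
print(rmap.shape, rmap.max())
```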
-
Mathematical investigation of the effect of antiangiogenic monotherapy on heterogeneous tumor progression
Computer Research and Modeling, 2017, v. 9, no. 3, pp. 487-501
In the last decade, along with classical cytotoxic agents, antiangiogenic drugs have been actively used in cancer chemotherapy. They are aimed not at killing malignant cells but at blocking the process of angiogenesis, i.e., the growth of new vessels in the tumor and its surrounding tissues. Agents that stimulate angiogenesis, in particular vascular endothelial growth factor, are actively produced by tumor cells in a state of metabolic stress. It is believed that blocking tumor neovascularization should lead to a shortage of nutrients flowing to the tumor, and thus can stop, or at least significantly slow down, its growth. Clinical practice with the first antiangiogenic drug, bevacizumab, has shown that in some cases such therapy does not influence the growth rate of the tumor, whereas for other types of malignant neoplasms antiangiogenic therapy has a pronounced antitumor effect. However, it has been shown that along with successfully slowing tumor growth, therapy with bevacizumab can induce directed tumor progression towards a more invasive, and therefore more lethal, type. These data require theoretical analysis and a rationale for the evolutionary factors that lead to the observed epithelial-mesenchymal transition. For this purpose we have developed a spatially distributed mathematical model of the growth and antiangiogenic therapy of a heterogeneous tumor consisting of two subpopulations of malignant cells. One subpopulation possesses the inherent characteristics of the epithelial phenotype, i.e., low motility and a high proliferation rate; the other corresponds to the mesenchymal phenotype, having high motility and a low proliferation rate. We have investigated the competition between these subpopulations of the heterogeneous tumor both for tumor growth without therapy and under bevacizumab monotherapy. It is shown that constant use of the antiangiogenic drug enlarges the region in parameter space where the mesenchymal phenotype dominates: within a certain range of parameters the epithelial phenotype is dominant in the absence of therapy, but during bevacizumab administration the mesenchymal phenotype begins to dominate. This result provides a theoretical basis for the clinically observed directed tumor progression to a more invasive type under antiangiogenic therapy.
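A highly simplified hypothetical sketch, not the authors' model, of the kind of spatial competition described: an "epithelial" subpopulation e (high proliferation, low motility) and a "mesenchymal" subpopulation m (low proliferation, high motility) share a common carrying capacity in one spatial dimension; diffusion is handled by explicit finite differences with periodic boundaries. All coefficients are illustrative.

```python
# Reaction-diffusion competition of two tumour phenotypes on a 1-D periodic domain.
import numpy as np

L, N, dt, steps = 10.0, 200, 0.01, 5000
dx = L / N
x = dx * np.arange(N)

D_e, D_m = 0.001, 0.05            # motility (diffusion) coefficients
r_e, r_m = 1.0, 0.4               # proliferation rates

e = np.exp(-((x - L / 2) ** 2))   # initial nodule in the centre, mostly epithelial
m = 0.1 * e.copy()

def laplacian(u):
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

for _ in range(steps):
    total = e + m                 # both phenotypes compete for the same capacity
    e = e + dt * (D_e * laplacian(e) + r_e * e * (1.0 - total))
    m = m + dt * (D_m * laplacian(m) + r_m * m * (1.0 - total))

print("epithelial mass:", e.sum() * dx, "mesenchymal mass:", m.sum() * dx)
```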