-
Conditions of Rice statistical model applicability and estimation of the Rician signal’s parameters by maximum likelihood technique
Computer Research and Modeling, 2014, v. 6, no. 1, pp. 13-25
The paper develops the theory of a new, so-called two-parameter approach to the analysis and processing of random signals. Mathematical simulation and a comparison of task solutions have been carried out for the Gaussian and Rician statistical models. The applicability of the Rician statistical model is substantiated for data and image processing tasks in which the signal's envelope is analyzed. A technique for noise suppression and initial image reconstruction is developed and theoretically substantiated; it is based on the joint estimation of both statistical parameters (the initial signal's mean value and the noise variance) by the maximum likelihood method within the Rice distribution. The peculiarities of this distribution's likelihood function, and the possibilities of signal and noise estimation that follow from them, are analyzed.
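As a companion illustration (not the authors' implementation), the following sketch shows joint maximum-likelihood estimation of the Rician signal amplitude nu and noise sigma from envelope samples; the numerical optimizer, the synthetic data and the parameter values are assumptions of the example, and only standard SciPy routines are used.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e
from scipy.stats import rice

def rice_nll(params, x):
    """Negative log-likelihood of the Rice distribution.
    params = (nu, sigma): signal amplitude and noise standard deviation."""
    nu, sigma = params
    if nu < 0 or sigma <= 0:
        return np.inf
    z = x * nu / sigma**2
    # log I0(z) evaluated stably via the exponentially scaled Bessel function
    log_i0 = z + np.log(i0e(z))
    ll = np.log(x) - 2*np.log(sigma) - (x**2 + nu**2)/(2*sigma**2) + log_i0
    return -np.sum(ll)

rng = np.random.default_rng(0)
nu_true, sigma_true = 3.0, 1.0
# Rician sample: envelope of a complex Gaussian signal with mean nu_true
x = np.abs(nu_true + rng.normal(0, sigma_true, 10_000)
           + 1j*rng.normal(0, sigma_true, 10_000))

res = minimize(rice_nll, x0=[x.mean(), x.std()], args=(x,),
               method="Nelder-Mead")
print("ML estimates (nu, sigma):", res.x)
# Cross-check against SciPy's built-in fit (b = nu/sigma, scale = sigma)
b, loc, scale = rice.fit(x, floc=0)
print("scipy.stats.rice:", b*scale, scale)
```

The log-Bessel term is computed through i0e to avoid overflow at high signal-to-noise ratios.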
-
Grid based high performance computing in satellite imagery. Case study — Perona–Malik filter
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 399-406
The paper discusses an approach to efficient satellite image processing that involves two steps. The first step distributes the steadily increasing volume of satellite data across a Grid infrastructure. The second step accelerates the solution of individual image processing tasks by implementing execution codes that make heavy use of spatial and temporal parallelism. An instance of such code is image processing with the iterative Perona–Malik filter on an FPGA-based application-specific hardware architecture.
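The FPGA data path described in the paper cannot be captured in a few lines; as background on the filter itself, here is a minimal NumPy sketch of iterative Perona–Malik anisotropic diffusion with the exponential edge-stopping function (the parameter values are illustrative assumptions).

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=20.0, lam=0.2):
    """Iterative Perona-Malik anisotropic diffusion (4-neighbour scheme).
    kappa: edge threshold, lam: time step (<= 0.25 for stability)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences to the four nearest neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # Edge-stopping conductance g(d) = exp(-(d/kappa)^2)
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
    return u

# Toy usage: denoise a synthetic noisy step image
img = np.zeros((64, 64)); img[:, 32:] = 100.0
noisy = img + np.random.default_rng(0).normal(0, 10, img.shape)
smoothed = perona_malik(noisy, n_iter=50)
```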
-
Cellular automata review based on modern domestic publications
Computer Research and Modeling, 2019, v. 11, no. 1, pp. 9-57
The paper analyzes domestic (Russian) publications on cellular automata (CA) issued in 2013–2017, most of which concern mathematical modeling. Scientometric plots for 1990–2017 confirm the relevance of the subject. The review makes it possible to identify the main personalities and scientific directions/schools in modern Russian science and to assess their originality or derivative character in comparison with world science. Because the authors chose a national rather than a worldwide publication base, the paper can claim completeness; about 200 of the 526 references examined are judged to be of importance for science.
The Annex to the review provides preliminary information about CA: the Game of Life, the Garden of Eden theorem, elementary CAs (together with de Bruijn diagrams), Margolus block CAs, and alternating CAs. Attention is paid to three semantic traditions important for modeling, those of von Neumann, Zuse and Tsetlin, as well as to the relationship with the concepts of neural networks and Petri nets. Ten works, somewhat arbitrarily chosen, are singled out as ones every specialist in CA should be familiar with. Some important works of the 1990s and later are listed in the Introduction.
The body of publications is then divided into categories (the share of each topic in the array is given in parentheses): modifications of CA and other network models (29 %), mathematical properties of CA and connections with mathematics (5 %), hardware implementation (3 %), software implementation (5 %), data processing, recognition and cryptography (8 %), mechanics, physics and chemistry (20 %), biology, ecology and medicine (15 %), economics, urban studies and sociology (15 %). There is a growth of publications on CA in the humanities, as well as the emergence of hybrid approaches that lead away from the classical definition of CA.
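As background for the Annex's material on elementary cellular automata (this is not code from any of the reviewed publications), a minimal sketch of a 1D elementary CA with Wolfram rule numbering:

```python
import numpy as np

def elementary_ca(rule, width=81, steps=40):
    """Run a 1D elementary cellular automaton (Wolfram rule numbering).
    Returns a (steps+1, width) array of 0/1 states."""
    # Rule table: neighbourhood value (left, centre, right) -> new state
    table = [(rule >> i) & 1 for i in range(8)]
    state = np.zeros(width, dtype=int)
    state[width // 2] = 1                      # single seed cell
    history = [state.copy()]
    for _ in range(steps):
        left, right = np.roll(state, 1), np.roll(state, -1)
        idx = (left << 2) | (state << 1) | right
        state = np.array([table[i] for i in idx])
        history.append(state.copy())
    return np.array(history)

# Rule 110: print the space-time diagram as text
for row in elementary_ca(110, width=61, steps=20):
    print("".join("#" if c else "." for c in row))
```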
-
Development of network computational models for the study of nonlinear wave processes on graphs
Computer Research and Modeling, 2019, v. 11, no. 5, pp. 777-814
Many applications give rise to problems modeled by nonlinear partial differential equations on graphs (networks, trees). To study such problems, as well as the various extreme situations that arise in network design and optimization, a computational model was developed based on solving the corresponding boundary value problems for hyperbolic partial differential equations on graphs (networks, trees). Three different problems were chosen as applications and solved within the general framework of network computational models. The first is traffic flow modeling. Here a macroscopic approach was used, in which the traffic flow is described by a nonlinear system of second-order hyperbolic equations. Numerical simulations showed that the model developed within the proposed approach reproduces well the real situation on various sections of the Moscow transport network over significant time intervals and can also be used to select the most suitable traffic management strategy in the city. The second is modeling of data flows in computer networks. In this problem, the data flows of various connections in a packet-switched network were simulated as flows of a continuous medium. Conceptual and mathematical network models are proposed. The numerical simulation was carried out in comparison with the NS-2 network simulation system. The results showed that, compared with the NS-2 packet-level model, the developed streaming model provides significant savings in computing resources while ensuring a good level of similarity, and allows the behavior of complex, globally distributed IP networks to be simulated. The third is simulation of the propagation of gas impurities in ventilation networks. A computational mathematical model was developed for the propagation of finely dispersed or gaseous impurities in ventilation networks, using the gas dynamics equations and numerical coupling of regions of different sizes. The calculations showed that the model determines the distribution of gas-dynamic parameters in a pipeline network with good accuracy and makes it possible to solve problems of dynamic ventilation management.
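The paper's traffic model is a second-order hyperbolic system; purely as an illustrative stand-in, the sketch below integrates the simpler first-order LWR model on a single road segment with a Lax-Friedrichs scheme (flux function, grid and parameter values are assumptions of the example, not the paper's setup).

```python
import numpy as np

def lwr_edge(rho0, v_max=20.0, rho_max=0.2, dx=10.0, dt=0.2, steps=200):
    """Lax-Friedrichs scheme for the LWR traffic model rho_t + f(rho)_x = 0
    on a single road segment with periodic boundaries.
    f(rho) = rho * v_max * (1 - rho/rho_max) is the Greenshields flux."""
    rho = rho0.copy()
    f = lambda r: r * v_max * (1.0 - r / rho_max)
    for _ in range(steps):
        rp, rm = np.roll(rho, -1), np.roll(rho, 1)      # right/left neighbours
        rho = 0.5 * (rp + rm) - dt / (2 * dx) * (f(rp) - f(rm))
    return rho

# A localized jam on a 2 km road discretized into 200 cells
rho0 = np.full(200, 0.05)
rho0[90:110] = 0.18
rho_final = lwr_edge(rho0)
```

On a graph, solvers of this kind are coupled through flux-conservation conditions at the nodes; that coupling is not shown here.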
-
Stable character of the Rice statistical distribution: the theory and application in the tasks of the signals’ phase shift measuring
Computer Research and Modeling, 2020, v. 12, no. 3, pp. 475-485
The paper studies the peculiarities of the Rice statistical distribution that make it possible to apply it efficiently to high-precision phase measurements in optics. A strict mathematical proof of the stable character of the Rician distribution is provided using the differential signal as an example: it is proved that the sum or difference of two Rician signals also obeys the Rice distribution. Formulas are also obtained for the parameters of the Rice distribution of the resulting sum or difference signal. Based on the proved stability of the Rice distribution, a new technique is elaborated for high-precision measurement of the phase shift between two quasi-harmonic signals. The technique rests on statistical analysis of the sampled amplitude data of both signals and of a third signal equal to the difference of the two signals being compared in phase. The sought phase shift is calculated from geometrical considerations as an angle of a triangle whose sides equal the amplitudes of the three indicated signals, reconstructed against the noise background. Thus the proposed technique of measuring the phase shift via the differential signal relies on amplitude measurements only, which significantly lowers the requirements on the equipment and simplifies practical implementation. The paper provides both a strict mathematical substantiation of the new phase shift measuring technique and the results of its numerical testing. The elaborated method of high-precision phase measurements can be applied to a wide range of tasks in various areas of science and technology, in particular in distance measuring, communication systems and navigation.
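A minimal sketch of the geometric step of the technique, assuming the three amplitudes have already been reconstructed from the noisy (Rician) data: the phase shift follows from the law of cosines applied to the triangle formed by the amplitudes of the two signals and of their difference.

```python
import numpy as np

def phase_shift_from_amplitudes(a1, a2, a_diff):
    """Phase shift between two quasi-harmonic signals recovered from three
    amplitudes: the two signals and their difference signal.
    Law of cosines: a_diff^2 = a1^2 + a2^2 - 2*a1*a2*cos(phi)."""
    cos_phi = (a1**2 + a2**2 - a_diff**2) / (2.0 * a1 * a2)
    return np.arccos(np.clip(cos_phi, -1.0, 1.0))

# Toy check with noiseless harmonic signals
t = np.linspace(0.0, 1.0, 10_000)
phi_true = 0.7
s1 = 2.0 * np.cos(2*np.pi*50*t)
s2 = 1.5 * np.cos(2*np.pi*50*t + phi_true)
a1, a2 = 2.0, 1.5
a_diff = np.max(np.abs(s1 - s2))          # amplitude of the difference signal
print(phase_shift_from_amplitudes(a1, a2, a_diff))   # ~ 0.7
```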
-
The adaptive Gaussian receptive fields for spiking encoding of numeric variables
Computer Research and Modeling, 2025, v. 17, no. 3, pp. 389-400
Conversion of numeric data to spiking form, and the information loss in this process, are serious problems limiting the use of spiking neural networks in applied information systems. While physical values are represented by numbers, the internal representation of information inside spiking neural networks is based on spikes, elementary objects emitted and processed by neurons. The problem is especially hard in reinforcement learning applications, where an agent should learn to behave in the dynamic real world: besides the accuracy of the encoding method, its dynamic characteristics have to be considered as well. An encoding algorithm based on Gaussian receptive fields (GRF) is frequently used. In this method, one numeric variable fed to the network is represented by spike streams emitted by a certain set of network input nodes. The spike frequency in each stream is determined by the proximity of the current variable value to the center of the receptive field corresponding to the given input node. In the standard GRF algorithm the receptive field centers are placed equidistantly; however, this is inefficient when the distribution of the encoded variable is very uneven. In the present paper, an improved version of this method is proposed, based on adaptive selection of the Gaussian centers and spike stream frequencies. The improved GRF algorithm is compared with its standard version in terms of the amount of information lost in the coding process and the accuracy of classification models built on spike-encoded data. The fraction of information retained by the standard and adaptive GRF encodings is estimated by applying the direct and reverse encoding procedures to a large sample from a triangular probability distribution and counting coinciding bits in the original and restored samples. The comparison based on classification was performed on a task of evaluating the current state in reinforcement learning. For this purpose, classification models were created by machine learning algorithms of very different nature: the nearest neighbors algorithm, random forest and a multilayer perceptron. The superiority of our approach is demonstrated in all these tests.
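A hedged sketch of GRF rate encoding with one plausible adaptive choice of centers (sample quantiles); the paper's actual rule for selecting centers and spike stream frequencies may differ.

```python
import numpy as np

def grf_rates(values, centers, sigma, max_rate=100.0):
    """Gaussian receptive field encoding: for each input value, return the
    firing rate (Hz) of every input node; the rate peaks when the value is
    at the node's receptive field center."""
    d = values[:, None] - centers[None, :]
    return max_rate * np.exp(-0.5 * (d / sigma) ** 2)

# Standard GRF: equidistant centers over the observed range
def equidistant_centers(sample, n_fields=10):
    return np.linspace(sample.min(), sample.max(), n_fields)

# Adaptive GRF (one plausible variant): centers at sample quantiles, so that
# densely populated regions of the variable get more receptive fields
def quantile_centers(sample, n_fields=10):
    return np.quantile(sample, np.linspace(0.0, 1.0, n_fields))

rng = np.random.default_rng(0)
sample = rng.triangular(0.0, 0.2, 1.0, size=100_000)   # skewed variable
c_std = equidistant_centers(sample)
c_adp = quantile_centers(sample)
rates = grf_rates(sample[:5], c_adp, sigma=0.05)
print(rates.round(1))
```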
-
Denoising fluorescent imaging data with two-step truncated HOSVD
Computer Research and Modeling, 2025, v. 17, no. 4, pp. 529-542
Fluorescent imaging data are currently widely used in neuroscience and other fields. Genetically encoded sensors based on fluorescent proteins provide a wide inventory enabling scientists to image virtually any process in a living cell and in the extracellular environment. However, especially because of the need for fast scanning, miniaturization, etc., the imaging data can be severely corrupted with multiplicative heteroscedastic noise, reflecting the stochastic nature of photon emission and photomultiplier detectors. Deep learning architectures demonstrate outstanding performance in image segmentation and denoising, but they may require large clean datasets for training, and the actual data transformation is not evident from the network architecture and weight composition. On the other hand, some classical data transforms can provide similar performance combined with a clearer insight into why and how they work. Here we propose an algorithm for denoising dynamic fluorescent imaging data based on the multilinear higher-order singular value decomposition (HOSVD) with optional rank truncation along each axis and thresholding of the tensor of decomposition coefficients. In parallel, we propose a convenient paradigm for validating the algorithm's performance, based on simulated fluorescent data resulting from biophysical modeling of calcium dynamics in spatially resolved, realistic 3D astrocyte templates. This paradigm is convenient in that it allows the noise level and its resemblance to Gaussian noise to be varied, and it provides a ground-truth fluorescent signal against which denoising algorithms can be validated. The proposed denoising method employs truncated HOSVD twice: first, narrow 3D patches spanning the whole recording are processed (local 3D-HOSVD stage); second, 4D groups of 3D patches are processed collaboratively (non-local 4D-HOSVD stage). The effect of the first pass is twofold: a significant part of the noise is removed at this stage, and the noise distribution becomes more Gaussian-like due to the linear combination of multiple samples in the singular vectors. The effect of the second stage is to further improve the SNR. We tune the parameters of the second stage to find the optimal parameter combination for denoising.
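A minimal sketch of a single truncated-HOSVD pass on one 3D patch (factor matrices from mode-wise SVDs of the unfoldings, core projection, reconstruction); the paper's two-stage local/non-local patch-group pipeline and coefficient thresholding are not reproduced, and the toy data below are an assumption of the example.

```python
import numpy as np

def truncated_hosvd(tensor, ranks):
    """Truncated higher-order SVD of an N-way array.
    ranks[k] is the number of leading left singular vectors kept along mode k.
    Returns the low-multilinear-rank reconstruction."""
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-k unfolding: move axis `mode` to the front and flatten the rest
        unfolded = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(u[:, :r])
    # Core tensor: project onto the truncated bases (mode products with U^T)
    core = tensor.copy()
    for mode, u in enumerate(factors):
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1),
                           0, mode)
    # Reconstruction: multiply the core back by the factor matrices
    rec = core
    for mode, u in enumerate(factors):
        rec = np.moveaxis(np.tensordot(u, np.moveaxis(rec, mode, 0), axes=1),
                          0, mode)
    return rec

# Toy usage: a low-rank 3D "movie" patch plus noise
rng = np.random.default_rng(1)
clean = np.einsum("i,j,k->ijk", rng.normal(size=16),
                  rng.normal(size=16), rng.normal(size=64))
noisy = clean + 0.3 * rng.normal(size=clean.shape)
denoised = truncated_hosvd(noisy, ranks=(4, 4, 8))
print(np.linalg.norm(denoised - clean) / np.linalg.norm(noisy - clean))
```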
-
Analytical solution and computer simulation of the task of Rician distribution’s parameters in limiting cases of large and small values of signal-to-noise ratio
Computer Research and Modeling, 2015, v. 7, no. 2, pp. 227-242
The paper provides a solution to the task of calculating the parameters of a Rician-distributed signal on the basis of the maximum likelihood principle in the limiting cases of large and small values of the signal-to-noise ratio. Analytical formulas are obtained for the solution of the maximum likelihood equation system for the required signal and noise parameters, both for the one-parameter approximation, when only one parameter is calculated on the assumption that the second is known a priori, and for the two-parameter task, when both parameters are a priori unknown. Direct calculation of the required signal and noise parameters by these formulas avoids time-consuming numerical solution of the nonlinear equation system and thus shortens the computer processing of signals and images. The results of a computer simulation confirming the theoretical conclusions are presented. The task is meaningful for the purposes of Rician data processing, in particular magnetic resonance imaging.
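The closed-form ML expressions derived in the paper are not reproduced here; as an illustration of the limiting behaviour only, the sketch below uses the standard Gaussian (high-SNR) and Rayleigh (low-SNR) approximations of the Rice density.

```python
import numpy as np

def rice_params_high_snr(x):
    """High-SNR limit: the Rice density approaches a Gaussian, so the signal
    amplitude and noise std are estimated by the sample mean and std."""
    return x.mean(), x.std(ddof=1)

def rice_params_low_snr(x):
    """Low-SNR limit: the Rice density approaches a Rayleigh density with
    E[x^2] = 2*sigma^2 and a vanishing deterministic component."""
    sigma = np.sqrt(0.5 * np.mean(x**2))
    return 0.0, sigma

rng = np.random.default_rng(0)
def rician(nu, sigma, n):     # envelope of a complex Gaussian with mean nu
    return np.abs(nu + rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n))

print(rice_params_high_snr(rician(10.0, 1.0, 100_000)))  # ~ (10, 1)
print(rice_params_low_snr(rician(0.1, 1.0, 100_000)))    # ~ (0, 1)
```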
-
Theoretical substantiation of the mathematical techniques for joint signal and noise estimation at rician data analysis
Computer Research and Modeling, 2016, v. 8, no. 3, pp. 445-473
The paper provides a solution to the two-parameter task of joint signal and noise estimation for data obeying the Rice distribution, using the techniques of mathematical statistics: the maximum likelihood method and variants of the method of moments. The considered variants of the method of moments include joint signal and noise estimation based on measuring the 2nd and 4th moments (MM24) and on measuring the 1st and 2nd moments (MM12). For each of the elaborated methods, explicit systems of equations have been obtained for the required signal and noise parameters. An important mathematical result of the investigation is that solving the system of two nonlinear equations in two variables, the sought signal and noise parameters, has been reduced to solving just one equation in one unknown. This matters both for the theoretical analysis of the proposed techniques and for their practical application, since it substantially decreases the computational resources required. The theoretical analysis leads to an important practical conclusion: solving the two-parameter task does not increase the required numerical resources compared with the one-parameter approximation. The task is meaningful for the purposes of Rician data processing, in particular image processing in magnetic resonance imaging systems. The theoretical conclusions have been confirmed by the results of a numerical experiment.
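A sketch of the MM24 variant using the standard Rice moment identities E[x^2] = nu^2 + 2*sigma^2 and E[x^4] = nu^4 + 8*sigma^2*nu^2 + 8*sigma^4, which yield nu^4 = 2*E[x^2]^2 - E[x^4]; the MM12 variant and the paper's reduction of the ML system to a single equation are not reproduced.

```python
import numpy as np

def rice_mm24(x):
    """Method-of-moments (MM24) estimate of the Rician signal nu and noise
    sigma from the 2nd and 4th sample moments."""
    m2 = np.mean(x**2)
    m4 = np.mean(x**4)
    nu = max(2.0 * m2**2 - m4, 0.0) ** 0.25
    sigma = np.sqrt(max(m2 - nu**2, 0.0) / 2.0)
    return nu, sigma

rng = np.random.default_rng(0)
nu_true, sigma_true = 3.0, 1.0
x = np.abs(nu_true + rng.normal(0, sigma_true, 100_000)
           + 1j * rng.normal(0, sigma_true, 100_000))
print(rice_mm24(x))   # close to (3.0, 1.0)
```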