All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
The use of GIS INTEGRO in exploration tasks for oil and gas deposits
Computer Research and Modeling, 2015, v. 7, no. 3, pp. 439-444
GIS INTEGRO is a geoinformation software system that forms the basis for the integrated interpretation of geophysical data in studies of the deep structure of the Earth. GIS INTEGRO combines a variety of computational and analytical applications for solving geological and geophysical problems. It includes interfaces for changing the representation of data (raster, vector, regular and irregular observation networks), a module for converting map projections, and application blocks, including a block for integrated data analysis and for solving prognostic and diagnostic tasks.
The methodological approach is based on the integration and joint analysis of geophysical data from regional profiles, geophysical potential fields and additional geological information on the study area. The analytical support includes packages for transformations, filtering, statistical processing, calculations, lineament detection, solving direct and inverse problems, and integration of geographic information.
The technology and its software and analytical support were tested in tectonic zoning problems at scales of 1:200 000 and 1:1 000 000 in Yakutia, Kazakhstan and the Rostov region, in studies of the deep structure along the regional profiles 1:S, 1-SC, 2-SAT, 3-SAT and 2-DV, and in oil and gas forecasting in regions of Eastern Siberia and Brazil.
The article describes two possible approaches to parallelizing computations for processing data on 2D or 3D grids in geophysical research. As an example, a GRID-environment implementation of the application package ZondGeoStat (statistical sounding), which creates a 3D grid model from 2D grid data, is presented. Experience has demonstrated the high efficiency of the GRID environment for computations in geophysical research.
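The two parallelization approaches themselves are not detailed in the abstract. As a minimal sketch of the underlying idea, data-parallel processing of a regular grid split into independent blocks, the following hypothetical Python fragment distributes a simple row-band filter over worker processes; the field, the filter and the block count are illustrative and are not part of GIS INTEGRO or ZondGeoStat:

```python
# Hypothetical sketch: data-parallel processing of a 2D geophysical grid.
# The field and the filter are illustrative, not taken from GIS INTEGRO.
import numpy as np
from multiprocessing import Pool

def smooth_rows(block):
    """Apply a simple 1D moving-average filter along each row of a block."""
    kernel = np.ones(5) / 5.0
    return np.array([np.convolve(row, kernel, mode="same") for row in block])

if __name__ == "__main__":
    field = np.random.rand(1000, 1000)          # stand-in for a potential-field grid
    blocks = np.array_split(field, 8, axis=0)   # split the grid into row bands
    with Pool(processes=8) as pool:
        result = np.vstack(pool.map(smooth_rows, blocks))
    print(result.shape)
```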
-
A. S. Komarov’s publications on cellular automata modelling of population-ontogenetic development in plants: a review
Computer Research and Modeling, 2016, v. 8, no. 2, pp. 285-295
The possibilities of cellular automata simulation applied to herbs and dwarf shrubs are described. Basic principles of the discrete description of plant ontogenesis, on which the mathematical modeling is based, are presented. The review discusses the main research results obtained with models that reveal the patterns of functioning of populations and communities. The CAMPUS model and the results of a computer experiment studying the growth of two lingonberry clones with different shoot geometry are described. The paper is dedicated to the works of the founder of this research direction, Prof. A. S. Komarov. A list of his major publications on this subject is given.
Keywords: computer models, individual-based approach.
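As an illustration of the individual-based cellular-automaton approach described above, here is a minimal sketch of clonal growth on a grid with a discrete ontogenetic transition. It is in the spirit of, but does not reproduce, the CAMPUS model; all states and parameters are invented for the example:

```python
# Minimal sketch of an individual-based cellular automaton for clonal growth.
# States and parameters are illustrative, not those of the CAMPUS model.
import numpy as np

rng = np.random.default_rng(0)
EMPTY, YOUNG, ADULT = 0, 1, 2
grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = ADULT  # a single founding ramet

def step(grid, p_spread=0.2):
    new = grid.copy()
    ys, xs = np.where(grid == ADULT)
    for y, x in zip(ys, xs):
        dy, dx = rng.choice([-1, 0, 1]), rng.choice([-1, 0, 1])
        ny, nx = (y + dy) % 50, (x + dx) % 50
        if grid[ny, nx] == EMPTY and rng.random() < p_spread:
            new[ny, nx] = YOUNG            # vegetative propagation into a free cell
    new[grid == YOUNG] = ADULT             # discrete ontogenetic transition
    return new

for _ in range(100):
    grid = step(grid)
print("occupied cells:", np.count_nonzero(grid))
```
-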
Layered Bénard–Marangoni convection with heat transfer according to Newton’s law of cooling
Computer Research and Modeling, 2016, v. 8, no. 6, pp. 927-940
The paper considers mathematical modeling of layered Bénard–Marangoni convection of a viscous incompressible fluid. The fluid moves in an infinitely extended layer. The Oberbeck–Boussinesq system describing layered Bénard–Marangoni convection is overdetermined, since the vertical velocity is identically zero: there are five equations for two components of the velocity vector, temperature and pressure (three equations of momentum conservation, the incompressibility equation and the heat equation). A class of exact solutions is proposed that ensures the solvability of the Oberbeck–Boussinesq system. The structure of the proposed solution is such that the incompressibility equation is satisfied identically, which makes it possible to eliminate the «extra» equation. The emphasis is on the study of heat exchange on the free boundary of the layer, which is considered rigid. In the description of thermocapillary convective motion, heat exchange is specified according to Newton’s law of cooling, and applying this law leads to an initial-boundary value problem of the third kind. It is shown that within the presented class of exact solutions to the Oberbeck–Boussinesq equations the overdetermined initial-boundary value problem reduces to a Sturm–Liouville problem, so the hydrodynamic fields are expressed via trigonometric functions (the Fourier basis). A transcendental equation is obtained for the eigenvalues of the problem; this equation is solved numerically. A numerical analysis of the solutions of the system of evolutionary and gradient equations describing the fluid flow is performed, and the hydrodynamic fields are analyzed by computational experiment. The study of the boundary value problem shows the existence of counterflows in the fluid layer. The existence of counterflows is equivalent to the presence of stagnation points in the fluid, which testifies to the existence of a local extremum of the kinetic energy of the fluid. It is established that each velocity component can have at most one zero, so the flow is separated into two zones in which the tangential stresses have different signs. Moreover, there is a layer thickness at which the tangential stresses on the lower boundary of the fluid layer vanish; this physical effect is possible only for Newtonian fluids. The temperature and pressure fields have the same properties as the velocities. All the nonstationary solutions approach the steady state in this case.
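The paper’s transcendental eigenvalue equation is not reproduced in the abstract. As a hedged stand-in, the sketch below numerically solves the classic equation $\mu \tan \mu = \mathrm{Bi}$ that arises for a Sturm–Liouville problem with a third-kind (Newton cooling) boundary condition, bracketing one root per branch of the tangent; the Biot number is an assumed parameter:

```python
# Illustrative only: the actual eigenvalue equation is in the paper. We solve
# mu*tan(mu) = Bi, the classic third-kind (Robin) boundary-condition case.
import numpy as np
from scipy.optimize import brentq

Bi = 1.0
f = lambda mu: mu * np.tan(mu) - Bi

# Each interval (k*pi, k*pi + pi/2) contains exactly one root of f.
roots = [brentq(f, k * np.pi + 1e-9, k * np.pi + np.pi / 2 - 1e-9)
         for k in range(5)]
print(np.round(roots, 6))
```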
-
Analysis of a point model of fibrin polymerization
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 247-258
Functional modeling of blood clotting and fibrin-polymer mesh formation is of significant value for medical and biophysical applications. Despite some discrepancies present in simplified functional models, their results are of great interest to experimental science as a handy analysis tool for research planning, data processing and verification. Under conditions of good correspondence to experiment, functional models can be used as an element of medical treatment methods and biophysical technologies. The aim of this paper is the modeling of a point system of fibrin-polymer formation as a multistage polymerization process with a sol-gel transition at the final stage. The complex-valued Rosenbrock method of second order (CROS) was used for the computational experiments, whose results are presented and discussed. It is shown that in the physiological range of the model coefficients there is a lag period of approximately 20 seconds between initiation of the reaction and fibrin gel appearance, which fits well with experimental observations of fibrin polymerization dynamics. The possibility of a number of consecutive $(n = 1–3)$ sol-gel transitions is demonstrated as well; such behavior is a consequence of the multistage nature of the fibrin polymerization process. At the final stage the solution of fibrin oligomers of length 10 can reach a semidilute state, leading to extremely fast gel formation controlled by the oligomers’ rotational diffusion. Otherwise, if the semidilute state is not reached, gel formation is controlled by the significantly slower process of translational diffusion. This duality in the sol-gel transition led the authors to introduce a switch function in the equation for the kinetics of fibrin-polymer formation. Consecutive polymerization events can correspond to experimental systems where the formed fibrin mesh is withdrawn from the volume by some physical process such as precipitation. The sensitivity analysis of the presented system shows that the dependence on the first-stage polymerization reaction constant is nontrivial.
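For readers unfamiliar with the method, the following sketch shows the one-stage complex-valued Rosenbrock scheme (CROS) applied to a toy stiff ODE rather than to the paper’s fibrin kinetics; the test problem and step size are illustrative:

```python
# One-stage complex Rosenbrock scheme (CROS), shown on a toy stiff ODE,
# not on the fibrin model from the paper.
import numpy as np

def cros_step(f, jac, y, t, h):
    """One CROS step: solve (I - (1+1j)/2 * h * J) w = f(t, y), advance by Re(w)."""
    n = y.size
    A = np.eye(n, dtype=complex) - (1 + 1j) / 2 * h * jac(t, y)
    w = np.linalg.solve(A, f(t, y).astype(complex))
    return y + h * np.real(w)

# Toy stiff problem: y' = -1000*(y - cos t) - sin t, exact solution y = cos t.
f = lambda t, y: -1000.0 * (y - np.cos(t)) - np.sin(t)
jac = lambda t, y: np.array([[-1000.0]])

y, t, h = np.array([1.0]), 0.0, 0.01
for _ in range(100):
    y = cros_step(f, jac, y, t, h)
    t += h
print(y[0], np.cos(t))  # the two values should agree closely
```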
-
Tests for detecting the parallel organization of logical computations, based on algebras and automata
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 621-638
We construct new tests that make it possible to increase human information-processing capacity through the parallel execution of several logic operations of a prescribed type. To check the causes of this capacity increase, we develop control tests on the same class of logic operations, for which a parallel organization of the computations is ineffective. We use the apparatus of universal algebra and automata theory. This article extends a cycle of works investigating the human capacity for parallel computations; the general publications on this theme are contained in the references. The tasks in the described tests can be defined as computing the result of a sequence of operations of the same type from some algebra. If the operation is associative, the computation can be parallelized effectively by a suitable grouping of the process; in computational terms, this corresponds to the simultaneous work of several processors, as illustrated by the sketch after this abstract. Each processor transforms per unit time a certain known number of elements of the input data or intermediate results (the processor productivity). It is not currently known what kind of data elements the brain uses for logical or mathematical computations, nor how many elements are processed per unit time. Therefore the test contains a sequence of task presentations with different numbers of logical operations over a fixed alphabet; this number serves as the complexity measure of the task. Analyzing how the solution time depends on the complexity makes it possible to estimate the processor productivity and the form of organization of the computation. For sequential computation only one processor works, and the solution time is a linear function of the complexity. If new processors begin to work in parallel as the task complexity increases, the dependence of the solution time on complexity is represented by a downward-convex curve. To detect the situation where a person increases the speed of a single processor as complexity grows, we use task series with similar operations but in a non-associative algebra; for such tasks parallel computation is of little effect, in the sense that increasing the number of processors yields little efficiency gain. This is the control set of tests. We also consider one more class of tests, based on computing the trajectory of states of a formal automaton for a given input sequence. We investigate a special class of automata (relays) whose construction affects the effectiveness of computing the final automaton state in parallel. For all tests we estimate the effectiveness of parallel computation. This article does not contain experimental results.
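The role of associativity can be made concrete: an associative operation can be regrouped and evaluated as a balanced tree, which is exactly what simultaneous processors would exploit, whereas regrouping a non-associative operation changes the result. A small sketch, with illustrative operations:

```python
# Associativity is what makes tree-shaped (parallelizable) evaluation safe.
from functools import reduce

def tree_reduce(op, xs):
    """Reduce xs by pairwise grouping, as parallel processors could."""
    while len(xs) > 1:
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

xs = list(range(1, 9))
assoc = lambda a, b: max(a, b)      # associative: any grouping gives the same result
non_assoc = lambda a, b: a - b      # non-associative: grouping matters

print(tree_reduce(assoc, xs) == reduce(assoc, xs))          # True
print(tree_reduce(non_assoc, xs) == reduce(non_assoc, xs))  # False in general
```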
-
Searching for stochastic equilibria in transport networks by the universal primal-dual gradient method
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 335-345
We consider one of the problems of transport modelling: searching for the equilibrium distribution of traffic flows in a network. We use the classic Beckmann model to describe time costs and flow distribution in a network represented by a directed graph. Meanwhile, agents’ behavior is not completely rational, which is described by introducing Markov logit dynamics: each driver selects a route randomly according to the Gibbs distribution, taking into account the current time costs on the edges of the graph. Thus, the problem reduces to searching for the stationary distribution of these dynamics, which is a stochastic Nash–Wardrop equilibrium in the corresponding population congestion game on the transport network. Since the game is potential, this problem is equivalent to minimizing some functional over flow distributions; in contrast to the non-stochastic case, the stochasticity is reflected in the appearance of an entropy regularization. The dual problem is constructed to obtain a solution of the optimization problem, and the universal primal-dual gradient method is applied. A major specificity of this method lies in its adaptive adjustment to the local smoothness of the problem, which is most important given the complex structure of the objective function and the inability to obtain an a priori smoothness bound with acceptable accuracy. Such a situation occurs in the considered problem, since the properties of the function strongly depend on the transport graph, on which we do not impose strong restrictions. The article describes the algorithm, including numerical differentiation for calculating the objective function value and gradient. In addition, the paper presents a theoretical estimate of the time complexity of the algorithm and the results of numerical experiments conducted on the road network of a small American town.
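A hedged sketch of the adaptive idea behind the universal method follows: the local smoothness constant $L$ is tuned by backtracking at every step instead of being fixed a priori. The toy objective and accuracy parameter stand in for the Beckmann dual, which is not reproduced here:

```python
# Sketch of an adaptive (universal) gradient step: L is found by backtracking
# against the inexact descent inequality. Objective is illustrative.
import numpy as np

def universal_gradient_step(f, grad, x, L, eps):
    """Halve L optimistically, then double it until the descent inequality holds."""
    g = grad(x)
    L = max(L / 2, 1e-12)
    while True:
        x_new = x - g / L
        d = x_new - x
        if f(x_new) <= f(x) + g @ d + L / 2 * d @ d + eps / 2:
            return x_new, L
        L *= 2                     # local smoothness was underestimated

f = lambda x: np.sum(x ** 4)       # toy objective with non-uniform curvature
grad = lambda x: 4 * x ** 3

x, L = np.ones(5), 1.0
for _ in range(200):
    x, L = universal_gradient_step(f, grad, x, L, eps=1e-6)
print(f(x))                        # decreases toward the minimum at 0
```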
-
Stochastic formalization of the gas dynamic hierarchy
Computer Research and Modeling, 2022, v. 14, no. 4, pp. 767-779
Mathematical models of gas dynamics and the computational industry built on them are, in our opinion, far from perfect. We look at this problem from the point of view of a transparent probabilistic micro-model of a gas of hard spheres, relying both on the theory of random processes and on classical kinetic theory in terms of densities of distribution functions in phase space. Namely, we first construct a system of nonlinear stochastic differential equations (SDEs), and then a generalized integro-differential Boltzmann equation, random and nonrandom, taking correlations and fluctuations into account. The key feature of the initial model is the random nature of the intensity of the jump measure and its dependence on the process itself.
We briefly recall the transition to progressively coarser meso- and macro-approximations as the dimensionless parameter, the Knudsen number, decreases. We obtain stochastic and non-random equations, first in phase space (a meso-model in terms of SDEs with respect to the Wiener measure and the Kolmogorov–Fokker–Planck equations), and then in coordinate space (macro-equations that differ from the Navier–Stokes system and from quasi-gas-dynamic systems). The main difference of this derivation is a more accurate averaging over velocity, owing to the analytical solution of the stochastic differential equations with respect to the Wiener measure, in the form of which the intermediate meso-model in phase space is presented. This approach differs significantly from the traditional one, which uses not the random process itself but its distribution function. The emphasis is placed on the transparency of the assumptions in the transition from one level of detail to another, rather than on numerical experiments, which contain additional approximation errors.
The theoretical power of the microscopic representation of macroscopic phenomena is also important as conceptual support for particle methods, as an alternative to finite-difference and finite-element methods.
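The paper’s SDEs with random jump-measure intensity are not reproduced in the abstract. As a toy stand-in for a meso-level velocity model, the sketch below integrates an Ornstein–Uhlenbeck SDE by the Euler–Maruyama scheme and checks its stationary variance; all parameters are illustrative:

```python
# Toy stand-in only: Ornstein-Uhlenbeck velocity SDE dV = -gamma*V dt + sigma dW,
# integrated by Euler-Maruyama over an ensemble of paths.
import numpy as np

rng = np.random.default_rng(1)
gamma, sigma, dt, n_steps, n_paths = 1.0, 0.5, 1e-3, 5000, 10000

V = np.zeros(n_paths)
for _ in range(n_steps):
    V += -gamma * V * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Empirical stationary variance should approach sigma**2 / (2*gamma) = 0.125.
print(V.var())
```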
-
Problems and algorithms for the optimal clustering of multidimensional objects according to a variety of heterogeneous indicators, and their applications in medicine
Computer Research and Modeling, 2024, v. 16, no. 3, pp. 673-693
The work is devoted to the author’s formal statements of the clustering problem for a given number of clusters, algorithms for their solution, and the results of applying this toolkit in medicine.
Because the formulated problems belong to the NP class, exact algorithms cannot solve instances of even relatively low dimension to proven optimality in finite time.
In this regard, we have proposed a hybrid algorithm that combines the advantages of exact methods based on clustering in pairwise distances at the initial stage with the speed of methods that solve the simplified problem of partitioning by cluster centers at the final stage. Developing this direction further, a sequential hybrid clustering algorithm using random search in the swarm-intelligence paradigm has been constructed. The article describes it and presents the results of calculations on applied clustering problems.
To determine the effectiveness of the developed tools for the optimal clustering of multidimensional objects according to a variety of heterogeneous indicators, a number of computational experiments were performed using data sets that include socio-demographic, clinical-anamnestic, electroencephalographic and psychometric data on the cognitive status of patients of a cardiology clinic. An experimental proof of the effectiveness of using local search algorithms in the swarm-intelligence paradigm within the hybrid algorithm for solving optimal clustering problems has been obtained.
The results of the calculations indicate that the main obstacle to using the discrete optimization apparatus, the limit on the tractable dimension of problem instances, has effectively been removed: we have shown that this limitation is eliminated while the clustering results remain acceptably close to optimal. The applied significance of the obtained clustering results also stems from the fact that the developed optimal clustering toolkit is supplemented by an assessment of the stability of the formed clusters. Given the known factors (the presence of stenosis or older age), this makes it possible to additionally identify those patients whose cognitive resources are insufficient to overcome the influence of surgical anesthesia, resulting in a unidirectional postoperative deterioration of complex visual-motor reaction, attention and memory. This effect indicates the possibility of differentiating the classification of patients using the proposed tools.
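The exact form of the author’s hybrid algorithm is not given in the abstract. As a hedged sketch of the final-stage idea, partitioning by cluster centers improved by local search, the fragment below runs a k-medoids-style swap search with random restarts standing in for the swarm-intelligence component; the data, the number of clusters and the distance are illustrative:

```python
# Illustrative sketch: swap-based local search over cluster centers (medoids),
# with random restarts standing in for the swarm-intelligence component.
import numpy as np

rng = np.random.default_rng(2)

def cost(D, medoids):
    """Sum of distances from each object to its nearest medoid."""
    return D[:, medoids].min(axis=1).sum()

def local_search(D, k, n_restarts=20):
    n = D.shape[0]
    best, best_cost = None, np.inf
    for _ in range(n_restarts):
        medoids = list(rng.choice(n, size=k, replace=False))
        improved = True
        while improved:                      # swap medoids while the cost drops
            improved = False
            for i in range(k):
                for cand in range(n):
                    trial = medoids[:i] + [cand] + medoids[i + 1:]
                    if cost(D, trial) < cost(D, medoids):
                        medoids, improved = trial, True
        if cost(D, medoids) < best_cost:
            best, best_cost = medoids, cost(D, medoids)
    return best, best_cost

X = rng.normal(size=(100, 5))                          # stand-in feature vectors
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise distances
print(local_search(D, k=3))
```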
-
CUDA and OpenCL implementations of Conway’s Game of Life cellular automata
Computer Research and Modeling, 2010, v. 2, no. 3, pp. 323-326
This article analyzes the experience of teaching the “CUDA and OpenCL programming” course at the MIPT-2010 high-performance computing summer school. The content of the lectures and practical tasks, as well as the manner of presenting the material, are considered. Performance issues of the different algorithms implemented by students during the practical training sessions are discussed.
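The course material itself used CUDA and OpenCL kernels, which are not reproduced here. As a language-neutral sketch of the per-cell update that each GPU thread computes, here is the same rule expressed as a vectorized NumPy step on a toroidal grid:

```python
# Game of Life update rule; each cell's new state depends only on its 8
# neighbours, which is exactly the work a single GPU thread would do.
import numpy as np

def life_step(grid):
    """One Game of Life generation on a toroidal grid."""
    neighbours = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

grid = (np.random.default_rng(3).random((64, 64)) < 0.3).astype(np.uint8)
for _ in range(100):
    grid = life_step(grid)
print(grid.sum(), "live cells")
```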
-
Modelling spatio-temporal dynamics of circadian rhythms in Neurospora crassa
Computer Research and Modeling, 2011, v. 3, no. 2, pp. 191-213
We derive a new model of circadian oscillations in Neurospora crassa which is suitable for analyzing both the temporal and the spatial dynamics of the proteins responsible for the rhythm mechanism. The model is based on the nonlinear interplay between the proteins FRQ and WCC, products of transcription of the frequency and white collar genes, which form a feedback loop comprising both positive and negative elements. The main component of the oscillation mechanism is assumed to be the time delay in the biochemical reactions of transcription. We show that the model accounts for various features observed in experiments on Neurospora, such as entrainment by light cycles, phase shift under a light pulse, robustness to the action of fluctuations, and so on. Wave patterns excited during the spatial development of the system are studied. It is shown that a wave of synchronization of biorhythms arises under basal transcription factors.
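The full FRQ/WCC model with transcription delay is given in the paper, not in the abstract. The sketch below integrates only a generic delayed negative-feedback oscillator (Goodwin/Mackey–Glass type) with a fixed-step Euler scheme and a history buffer, to show how a transcription delay can produce sustained rhythms; all parameters are illustrative:

```python
# Generic delayed negative-feedback oscillator, NOT the paper's FRQ/WCC model.
# Production is repressed by the delayed concentration via a Hill function.
import numpy as np

k, K, n_hill, d, tau, dt = 1.0, 1.0, 4, 0.3, 6.0, 0.01
lag = int(tau / dt)
steps = 20000
F = np.zeros(steps)
F[0] = 0.1

for t in range(1, steps):
    F_delayed = F[t - 1 - lag] if t - 1 >= lag else F[0]   # history buffer
    dF = k / (1.0 + (F_delayed / K) ** n_hill) - d * F[t - 1]
    F[t] = F[t - 1] + dt * dF

# A sustained rhythm shows up as a persistent spread between min and max.
tail = F[steps // 2:]
print("min/max over second half:", tail.min(), tail.max())
```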