All issues
- 2024 Vol. 16
- 2023 Vol. 15
- 2022 Vol. 14
- 2021 Vol. 13
- 2020 Vol. 12
- 2019 Vol. 11
- 2018 Vol. 10
- 2017 Vol. 9
- 2016 Vol. 8
- 2015 Vol. 7
- 2014 Vol. 6
- 2013 Vol. 5
- 2012 Vol. 4
- 2011 Vol. 3
- 2010 Vol. 2
- 2009 Vol. 1
-
Computational investigation of aerodynamic performance of the generic flying-wing aircraft model using FlowVision computational code
Computer Research and Modeling, 2017, v. 9, no. 1, pp. 67-74. Views (last year): 10. Citations: 1 (RSCI).
The modern approach to modernization of experimental techniques involves the design of mathematical models of the wind-tunnel, which are also referred to as Electronic or Digital Wind-Tunnels. They are meant to supplement experimental data with computational analysis. Using Electronic Wind-Tunnels is supposed to provide accurate information on the aerodynamic performance of an aircraft based on a set of experimental data, to obtain agreement between data from different test facilities, and to compare computational results for flight conditions with data obtained in the presence of the support system and test section.
Completing this task requires some preliminary research, which involves extensive wind-tunnel testing as well as RANS-based computational research with the use of supercomputer technologies. At different stages of the computational investigation one may have to model not only the aircraft itself but also the wind-tunnel test section and the model support system. Modelling such complex geometries will inevitably result in quite complex vortical and separated flows that one will have to simulate. Another problem is that boundary-layer transition is often present in wind-tunnel testing due to quite small model scales and therefore low Reynolds numbers.
In the current article the first stage of the Electronic Wind-Tunnel design program is covered. This stage involves computational investigation of the aerodynamic characteristics of the generic flying-wing UAV model previously tested in the TsAGI T-102 wind-tunnel. Since this stage is preliminary, the model was simulated without taking the test-section and support-system geometry into account. The boundary layer was considered to be fully turbulent.
For the current research the FlowVision computational code was used because of its automatic grid generation feature and the stability of its solver when simulating complex flows. A two-equation k–ε turbulence model was used with special wall functions designed to properly capture flow separation. Computed lift and drag coefficients for different angles of attack were compared to the experimental data.
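As a side note, the lift and drag coefficients compared above are the dimensional forces normalized by the dynamic pressure and a reference area. The minimal sketch below only illustrates this normalization; the density, velocity, reference area and force values are hypothetical placeholders, not data from the article.

```python
import numpy as np

# Hypothetical free-stream conditions and reference area for a small
# wind-tunnel model (placeholders, not values from the article).
rho = 1.225        # air density, kg/m^3
V = 50.0           # free-stream velocity, m/s
S_ref = 0.5        # reference (wing) area, m^2

def aero_coefficients(lift_force, drag_force):
    """Convert dimensional forces (N) into lift and drag coefficients."""
    q = 0.5 * rho * V**2                  # dynamic pressure, Pa
    return lift_force / (q * S_ref), drag_force / (q * S_ref)

# Example polar: dummy forces at several angles of attack.
for alpha, L, D in [(0.0, 120.0, 15.0), (4.0, 260.0, 22.0), (8.0, 390.0, 38.0)]:
    cl, cd = aero_coefficients(L, D)
    print(f"alpha = {alpha:4.1f} deg: CL = {cl:.3f}, CD = {cd:.3f}")
```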
-
Analysis of point model of fibrin polymerization
Computer Research and Modeling, 2017, v. 9, no. 2, pp. 247-258. Views (last year): 8.
Functional modeling of blood clotting and fibrin-polymer mesh formation is of significant value for medical and biophysical applications. Despite some discrepancies present in simplified functional models, their results are of great interest to experimental science as a handy analysis tool for research planning, data processing and verification. Given good correspondence to experiment, functional models can be used as an element of medical treatment methods and biophysical technologies. The aim of the paper at hand is the modeling of a point system of fibrin-polymer formation as a multistage polymerization process with a sol-gel transition at the final stage. The complex-valued Rosenbrock method of second order (CROS) was used for the computational experiments. The results of the computational experiments are presented and discussed. It was shown that in the physiological range of the model coefficients there is a lag period of approximately 20 seconds between initiation of the reaction and fibrin gel appearance, which fits well the experimental observations of fibrin polymerization dynamics. The possibility of a number of consecutive $(n = 1{-}3)$ sol-gel transitions was demonstrated as well. Such specific behavior is a consequence of the multistage nature of the fibrin polymerization process. At the final stage the solution of fibrin oligomers of length 10 can reach a semidilute state, leading to extremely fast gel formation controlled by the oligomers' rotational diffusion. Otherwise, if the semidilute state is not reached, gel formation is controlled by the significantly slower process of translational diffusion. Such duality in the sol-gel transition led the authors to introduce a switch function into the equation for fibrin-polymer formation kinetics. Consecutive polymerization events can correspond to experimental systems where the formed fibrin mesh is withdrawn from the volume by some physical process such as precipitation. The sensitivity analysis of the presented system shows that the dependence on the first-stage polymerization reaction constant is non-trivial.
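The CROS method mentioned above belongs to the family of one-stage Rosenbrock schemes with a complex coefficient. The sketch below shows such a step applied to a generic stiff ODE system; the right-hand side is a toy example, not the article's polymerization kinetics.

```python
import numpy as np

def cros_step(f, y, h, eps=1e-8):
    """One step of the one-stage complex Rosenbrock (CROS) scheme for y' = f(y).

    Solves (I - (1+1j)/2 * h * J) w = f(y) and returns y + h * Re(w),
    where J is a finite-difference approximation of the Jacobian df/dy.
    """
    n = y.size
    f0 = f(y)
    J = np.empty((n, n))
    for k in range(n):                      # simple forward-difference Jacobian
        dy = np.zeros(n)
        dy[k] = eps * max(1.0, abs(y[k]))
        J[:, k] = (f(y + dy) - f0) / dy[k]
    A = np.eye(n, dtype=complex) - 0.5 * (1 + 1j) * h * J
    w = np.linalg.solve(A, f0.astype(complex))
    return y + h * w.real

# Toy stiff two-component kinetics (illustrative only, not the article's model).
def rhs(y):
    return np.array([-50.0 * y[0] + y[1], 50.0 * y[0] - 2.0 * y[1]])

y = np.array([1.0, 0.0])
for _ in range(200):
    y = cros_step(rhs, y, h=0.05)
print(y)
```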
-
Dynamical theory of information as a basis for natural-constructive approach to modeling a cognitive process
Computer Research and Modeling, 2017, v. 9, no. 3, pp. 433-447. Views (last year): 6.
The main statements and inferences of the Dynamic Theory of Information (DTI) are considered. It is shown that DTI makes it possible to reveal two essentially important types of information: objective (unconventional) and subjective (conventional) information. There are two ways of obtaining information: reception (perception of already existing information) and generation (production of new information). It is shown that the processes of generation and perception of information should proceed in two different subsystems of the same cognitive system. The main points of the Natural-Constructive Approach to modeling the cognitive process are discussed. It is shown that any neuromorphic approach faces the problem of the Explanatory Gap between the “Brain” and the “Mind”, i.e. the gap between objectively measurable information about the ensemble of neurons (“Brain”) and subjective information about human consciousness (“Mind”). The Natural-Constructive Cognitive Architecture developed within the framework of this approach is discussed. It is a complex block-hierarchical combination of several neuroprocessors. The main constructive feature of this architecture is the splitting of the whole system into two linked subsystems, by analogy with the hemispheres of the human brain. One of the subsystems is responsible for processing new information, learning, and creativity, i.e. for the generation of information. The other subsystem is responsible for processing already existing information, i.e. for the reception of information. It is shown that the lowest (zero) level of the hierarchy is represented by processors that record images of real objects (distributed memory) as a response to sensory signals, which is objective information (and refers to the “Brain”). The next hierarchy levels are represented by processors containing symbols of the recorded images. It is shown that symbols represent subjective (conventional) information created by the system itself and providing its individuality. The highest hierarchy levels, containing the symbols of abstract concepts, provide the possibility to interpret the concepts of “consciousness”, “sub-consciousness” and “intuition”, referring to the field of “Mind”, in terms of the ensemble of neurons. Thus, DTI provides an opportunity to build a model that allows us to trace how the “Mind” could emerge from the “Brain”.
-
Tests for checking the parallel organization of logical calculations based on algebras and automata
Computer Research and Modeling, 2017, v. 9, no. 4, pp. 621-638. Views (last year): 14. Citations: 1 (RSCI).
We build new tests that make it possible to increase the human capacity for information processing through the parallel execution of several logic operations of a prescribed type. To check the causes of this capacity increase, we develop control tests on the same class of logic operations, in which parallel organization of the calculations is ineffective. We use the apparatus of universal algebra and automata theory. This article extends a cycle of works that investigates the human capacity for parallel calculations; the general publications on this theme are listed in the references. The tasks in the described tests can be defined as the calculation of the result of a sequence of same-type operations from some algebra. If the operation is associative, then parallel calculation is effective thanks to a suitable grouping of the process. In the theory of computation this corresponds to the simultaneous work of several processors. Each processor transforms per unit time a certain known number of elements of the input data or intermediate results (the processor productivity). It is currently unknown what kind of data elements the brain uses for logical or mathematical calculations, and how many elements are processed per unit time. Therefore the test contains a sequence of presentations of tasks with different numbers of logical operations in a fixed alphabet; this number is the measure of the task's complexity. Analysis of the dependence of the solution time on the complexity makes it possible to estimate the processor productivity and the form of the calculation organization. For sequential calculation only one processor works, and the solution time is a linear function of complexity. If new processors begin to work in parallel as the complexity of the task increases, then the dependence of the solution time on complexity is represented by a curve that is convex downward. To detect the situation when a person increases the speed of a single processor under increasing complexity, we use task series with similar operations but in a non-associative algebra. In such tasks parallel calculation gives little gain in the sense of increasing efficiency by increasing the number of processors. That is the control set of tests. In the article we also consider one more class of tests, based on the calculation of the trajectory of the states of a formal automaton for a given input sequence. We investigate a special class of automata (relays) for which the construction affects the effectiveness of parallel calculation of the final automaton state. For all tests we estimate the effectiveness of parallel calculation. This article does not contain experimental results.
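The key point above is that an associative operation admits effective parallel grouping of a chain of same-type operations. The sketch below is purely illustrative (it is not the authors' test software): it evaluates the chain by pairwise grouping, whose passes could run concurrently, and checks that the result matches the sequential left-to-right evaluation.

```python
from functools import reduce

def tree_reduce(op, items):
    """Evaluate an associative operation by pairwise grouping.

    With p processors the pairwise passes could run in parallel,
    giving roughly O(n/p + log p) time instead of the O(n) sequential chain.
    Here the passes are executed serially; only the grouping is shown.
    """
    items = list(items)
    while len(items) > 1:
        paired = [op(items[i], items[i + 1]) for i in range(0, len(items) - 1, 2)]
        if len(items) % 2:            # carry the unpaired last element forward
            paired.append(items[-1])
        items = paired
    return items[0]

op = lambda a, b: (a + b) % 7         # a toy associative operation
data = list(range(1, 100))
assert tree_reduce(op, data) == reduce(op, data)   # same result, different grouping
```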
-
Searching stochastic equilibria in transport networks by universal primal-dual gradient method
Computer Research and Modeling, 2018, v. 10, no. 3, pp. 335-345. Views (last year): 28.
We consider one of the problems of transport modelling: searching for the equilibrium distribution of traffic flows in a network. We use the classic Beckmann model to describe time costs and flow distribution in the network represented by a directed graph. Meanwhile, agents' behavior is not completely rational, which is described by the introduction of Markov logit dynamics: any driver selects a route randomly according to the Gibbs distribution, taking into account the current time costs on the edges of the graph. Thus, the problem is reduced to searching for the stationary distribution of this dynamics, which is a stochastic Nash–Wardrop equilibrium in the corresponding population congestion game on the transport network. Since the game is potential, this problem is equivalent to minimizing some functional over the flow distribution. The stochasticity is reflected in the appearance of an entropy regularization, in contrast to the non-stochastic case. A dual problem is constructed to obtain a solution of the optimization problem. The universal primal-dual gradient method is applied. A major specificity of this method lies in its adaptive adjustment to the local smoothness of the problem, which is most important in the case of a complex structure of the objective function and an inability to obtain an a priori smoothness bound with acceptable accuracy. Such a situation occurs in the considered problem, since the properties of the function strongly depend on the transport graph, on which we do not impose strong restrictions. The article describes the algorithm, including the numerical differentiation used for calculating the objective function value and gradient. In addition, the paper presents a theoretical estimate of the time complexity of the algorithm and the results of numerical experiments conducted on the road network of a small American town.
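The Gibbs (logit) route-choice rule mentioned above can be written as a softmax over the current route time costs. The sketch below assumes a simple parameterization with a temperature-like parameter gamma; the costs and the value of gamma are placeholders, not data from the article.

```python
import numpy as np

def logit_route_choice(route_costs, gamma):
    """Gibbs (logit) distribution over routes for given time costs.

    gamma > 0 plays the role of a temperature: small gamma approaches the
    deterministic best-response choice, large gamma spreads drivers almost
    uniformly over the routes.
    """
    costs = np.asarray(route_costs, dtype=float)
    z = -(costs - costs.min()) / gamma     # shift by the minimum for numerical stability
    w = np.exp(z)
    return w / w.sum()

# Three alternative routes with current time costs (minutes, dummy numbers).
print(logit_route_choice([12.0, 15.0, 20.0], gamma=2.0))
```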
-
Parallel implementation of the grid-characteristic method in the case of explicit contact boundaries
Computer Research and Modeling, 2018, v. 10, no. 5, pp. 667-678. Views (last year): 18.
We consider an application of the Message Passing Interface (MPI) technology for the parallelization of a program code which solves the equations of linear elasticity theory. The solution of these equations describes the propagation of elastic waves in deformable solid bodies. The solution of such a direct problem of seismic wave propagation is of interest in seismic prospecting and geophysics. Our solver implementation uses the grid-characteristic method for the simulations. We consider a technique to reduce the communication time between MPI processes during the simulation. This is important when it is necessary to conduct modeling in complex problem formulations and still maintain a high level of parallel efficiency, even when thousands of processes are used. An efficient solution of the communication problem is extremely important when several computational grids with arbitrary geometry of contacts between them are used in the calculation. The complexity of this task increases if an independent distribution of the grid nodes between processes is allowed. In this paper, a generalized approach is developed for processing contact conditions in terms of reinterpolation of nodes from a given section of one grid to a certain area of the second grid. An efficient way of parallelization and of establishing effective interprocess communications is proposed. For the example problems we provide wave fields and seismograms for both 2D and 3D formulations. It is shown that the algorithm can be realized both on Cartesian and on structured (curvilinear) computational grids. The considered problem statements demonstrate the possibility of carrying out calculations taking into account surface topography and the curvilinear geometry of contacts between geological layers. The application of curvilinear grids allows one to obtain more accurate results than calculations using only Cartesian grids. The resulting parallelization efficiency is almost 100% up to 4096 processes (we used 128 processes as the baseline for computing efficiency). With more than 4096 processes, an expected gradual decrease in efficiency is observed. The rate of decline is not large, so at 16384 processes the parallelization efficiency remains at 80%.
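The communication pattern described above ultimately amounts to exchanging boundary (ghost) node values between processes that own neighbouring parts of the grids. The sketch below is a minimal mpi4py illustration of such a halo exchange for a 1D domain decomposition; it is not the authors' code, and the array sizes are arbitrary.

```python
# Run with, e.g., `mpiexec -n 4 python halo.py` (assumed launch command).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1000                                   # interior nodes per process
u = np.full(n_local + 2, float(rank))            # +2 ghost cells at the ends

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Send the rightmost interior value to the right neighbour and receive the
# left ghost value from the left neighbour, then the mirror exchange.
comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
comm.Sendrecv(u[1:2],  dest=left,  recvbuf=u[-1:], source=right)
```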
-
To the problem of program implementation of the potential-streaming method of description of physical and chemical processes
Computer Research and Modeling, 2018, v. 10, no. 6, pp. 817-832. Views (last year): 12.
Within the framework of modern non-equilibrium thermodynamics (a macroscopic approach to the description and mathematical modeling of the dynamics of real physical and chemical processes), the authors have developed a potential-streaming method for describing and mathematically modeling real physical and chemical processes, applicable in the general case of real macroscopic physicochemical systems. In accordance with this method, the description and mathematical modeling of these processes consists in determining, through the interaction potentials, the thermodynamic forces driving these processes and the kinetic matrix determined by the kinetic properties of the system in question; these in turn determine the dynamics of the physicochemical processes in the system under the influence of the thermodynamic forces acting in it. Knowing the thermodynamic forces and the kinetic matrix of the system, the rates of the physicochemical processes in the system are determined, and from these, according to the conservation laws, the rates of change of its state coordinates are determined. In this way a closed system of equations for the physical and chemical processes in the system is obtained. Knowing the interaction potentials in the system, the kinetic matrices of its simple subsystems (individual processes that are conjugate to each other and not conjugate with other processes), the coefficients entering into the conservation laws, the initial state of the system under consideration and the external flows into the system, one can obtain the complete dynamics of the physicochemical processes in the system. However, in the case of a complex physicochemical system in which a large number of physicochemical processes take place, the dimension of the system of equations for these processes becomes correspondingly large. Hence arises the problem of automating the formation of the described system of equations of the dynamics of physical and chemical processes in the system under consideration. In this article, we develop a library of software data types that implement a user-defined physicochemical system at the level of its calculation scheme (the coordinates of the state of the system, energy degrees of freedom, the physicochemical processes occurring in it, external flows, and the relationships between these components), as well as algorithms operating on these data types and calculating the parameters of the described system. This library includes both program types for the calculation scheme of the user-defined physicochemical system and program data types for the components of this calculation scheme (coordinates of the system state, energy degrees of freedom, the physicochemical processes occurring in it, external flows). The relationship between these components is implemented by reference (index) addressing. This significantly speeds up the calculation of the system characteristics, because it provides faster access to the data.
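As an illustration of the kind of data structures described above (the actual library's types and names are not given in the abstract, so everything here is an assumed sketch), the components of a calculation scheme can be linked by index addressing as follows.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StateCoordinate:            # e.g. amount of a substance, entropy, charge
    name: str
    value: float

@dataclass
class Process:                    # one physicochemical process in the system
    name: str
    coordinate_ids: List[int]     # indices into System.coordinates
    stoichiometry: List[float]    # contribution of the process rate to each coordinate

@dataclass
class ExternalFlow:
    coordinate_id: int            # index of the coordinate the flow feeds
    rate: float

@dataclass
class System:
    coordinates: List[StateCoordinate] = field(default_factory=list)
    processes: List[Process] = field(default_factory=list)
    flows: List[ExternalFlow] = field(default_factory=list)

    def apply_rates(self, process_rates: List[float], dt: float) -> None:
        """Advance the state coordinates by the conservation laws over a small step dt."""
        for p, rate in zip(self.processes, process_rates):
            for cid, s in zip(p.coordinate_ids, p.stoichiometry):
                self.coordinates[cid].value += s * rate * dt
        for f in self.flows:
            self.coordinates[f.coordinate_id].value += f.rate * dt
```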
-
Calculation of absorption spectra of silver-thiolate complexes
Computer Research and Modeling, 2019, v. 11, no. 2, pp. 275-286. Views (last year): 14.
Ligand-protected metal nanoclusters (NCs) have gained much attention due to their unique physicochemical properties and potential applications in materials science. Noble-metal NCs protected with thiolate ligands have been of interest because of their long-term stability. The detailed structures of most ligand-stabilized metal NCs remain unknown due to the absence of crystal structure data for them. Theoretical calculations using quantum chemistry techniques appear to be one of the most promising tools for determining the structure and electronic properties of NCs. That is why finding a cost-effective strategy for such calculations is an important and challenging task. In this work, we compare the performance of different theoretical methods for geometry optimization and absorption spectra calculation of silver-thiolate complexes. We show that second-order Møller–Plesset perturbation theory reproduces nicely the geometries obtained at a higher level of theory, in particular with the RI-CC2 method. We compare the absorption spectra of silver-thiolate complexes simulated with different methods: EOM-CCSD, RI-CC2, ADC(2) and TDDFT. We show that the absorption spectra calculated with the ADC(2) method are consistent with the spectra obtained with the EOM-CCSD and RI-CC2 methods. The CAM-B3LYP functional fails to reproduce the absorption spectra of the silver-thiolate complexes. However, the M062X global hybrid meta-GGA functional seems to be a good compromise given its low computational cost. In our previous study, we have already demonstrated that the M062X functional shows good accuracy compared to the ADC(2) ab initio method in predicting the excitation spectra of silver nanocluster complexes with nucleobases.
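For orientation only, the sketch below shows how a TDDFT excitation-spectrum calculation with a Minnesota meta-GGA functional might be set up in the open-source PySCF package. The article does not specify the software used, and the geometry, basis/ECP choice and functional keyword below are assumptions of this sketch, not values from the paper.

```python
from pyscf import gto, dft, tddft

mol = gto.M(
    atom="Ag 0.0 0.0 0.0; S 0.0 0.0 2.40; H 0.0 1.20 3.10",  # placeholder AgSH geometry, Angstrom
    basis="def2-svp",
    ecp="def2-svp",        # effective core potential for Ag (assumed keyword)
    charge=0,
    spin=0,
)

mf = dft.RKS(mol)
mf.xc = "m06-2x"           # functional keyword as assumed for this sketch
mf.kernel()                # ground-state DFT

td = tddft.TDDFT(mf)
td.nstates = 10            # number of excited states to solve for
td.kernel()
td.analyze()               # prints excitation energies and oscillator strengths
```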
-
Solving the Exner equation for a morphologically complex bed
Computer Research and Modeling, 2019, v. 11, no. 3, pp. 449-461Views (last year): 10.The Exner equation in conjunction phenomenological sediment transport models is widely used for mathematical modeling non-cohesive river bed. This approach allows to obtain an accurate solution without any difficulty if one models evolution of simple shape bed. However if one models evolution of complex shape bed with unstable soil the numerical instability occurs in some cases. It is difficult to detach this numerical instability from the natural physical instability of bed.
This paper analyses the causes of the numerical instability occurring while modeling the evolution of a complex-shaped bed using the Exner equation and phenomenological sediment rate models. The paper shows that two kinds of indeterminateness may occur when numerically solving the Exner equation closed by a phenomenological model of sediment transport. The first indeterminateness occurs in the bed area where the sediment transport is in transit and the bed does not change. The second indeterminateness occurs at the extreme points of the bed profile, where the sediment rate varies while the bed remains the same. The authors closed the Exner equation with an analytical sediment transport model, which allowed the Exner equation to be transformed into a parabolic-type equation. Analysis of the obtained equation showed that its numerical solution does not lead to the indeterminateness mentioned above. The parabolic form of the transformed Exner equation allows an effective and stable implicit central difference scheme to be applied for its solution.
A model problem of bed evolution in the presence of a periodic distribution of the bed shear stress is solved. The authors used an explicit central difference scheme, with and without application of a filtration method, and an implicit central difference scheme for the numerical solution of the problem. It is shown that the explicit central difference scheme is unstable in the region of the bed profile extremum. Using the filtration method resulted in increased dissipation of the solution. The solution obtained with the implicit central difference scheme corresponds to the distribution law of the bed shear stress and is stable throughout the computational domain.
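The implicit central difference scheme favoured above applies to a parabolic (diffusion-type) equation for the bed level. The sketch below is a generic backward-Euler step with central differences for an equation of the form dz/dt = d/dx(D(x) dz/dx); the diffusivity profile, initial bed shape and step sizes are placeholders and do not reproduce the paper's transformed Exner equation.

```python
import numpy as np

def implicit_step(z, D, dx, dt):
    """One backward-Euler step with central differences (Dirichlet ends)."""
    n = z.size
    A = np.zeros((n, n))
    r = dt / dx**2
    A[0, 0] = A[-1, -1] = 1.0            # fixed bed level at the boundaries
    for i in range(1, n - 1):
        Dm = 0.5 * (D[i] + D[i - 1])     # diffusivity at the left half-node
        Dp = 0.5 * (D[i] + D[i + 1])     # diffusivity at the right half-node
        A[i, i - 1] = -r * Dm
        A[i, i] = 1.0 + r * (Dm + Dp)
        A[i, i + 1] = -r * Dp
    return np.linalg.solve(A, z)

x = np.linspace(0.0, 10.0, 101)
z = 0.2 * np.exp(-((x - 5.0) / 1.0) ** 2)               # an initial dune-like bump
D = 0.05 * (1.0 + 0.5 * np.sin(2 * np.pi * x / 10.0))   # periodic stand-in for shear-stress effects
for _ in range(100):
    z = implicit_step(z, D, dx=x[1] - x[0], dt=0.1)
```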
-
Numerical Simulation, Parallel Algorithms and Software for Performance Forecast of the System “Fractured-Porous Reservoir – Producing Well” During its Commissioning Into Operation
Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1069-1075.
The mathematical model, finite-difference schemes and algorithms for the computation of the transient thermo- and hydrodynamic processes involved in commissioning the unified system comprising the oil producing well, the electrical submersible pump and the fractured-porous reservoir with bottom water are developed. These models are implemented in a computer package that simulates the transient processes with simultaneous visualization of the results during the computations. An important feature of the Oil-RWP package is its interaction with the special external program GCS, which simulates the operation of the surface electric control station, and the data exchange between these two programs. The package Oil-RWP sends telemetry data and the current parameters of the operating submersible unit to the program module GCS (direct coupling). The station controller analyzes the incoming data and generates the required control parameters for the submersible pump. These parameters are sent back to Oil-RWP (feedback). Such an approach allows us to consider the developed software as an “Intellectual Well System”.
Some principal results of the simulations can be briefly summarized as follows. The transient time between the idle state and quasi-steady operation of the producing well depends on the water cut of the well stream, the filtration and storage parameters of the oil reservoir, the physical-chemical properties of the phases and the technical characteristics of the submersible unit. For large times, the solution of the nonstationary equations governing the transient processes is practically identical to the solution of the inverse quasi-stationary problem with the same initial data. The developed software package is an effective tool for the analysis, forecast and optimization of the operating parameters of the unified oil-producing complex during its commissioning into the operating regime.
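The two-way coupling described above (telemetry out, control parameters back) can be illustrated by a generic co-simulation loop. The sketch below is purely illustrative: the variable names, setpoint and control law are assumptions of this sketch, not the Oil-RWP or GCS interfaces.

```python
def control_station(telemetry):
    """Toy controller: raise pump frequency when intake pressure is above target."""
    target_pressure = 60.0                                   # assumed setpoint, bar
    error = telemetry["intake_pressure"] - target_pressure
    return {"pump_frequency": telemetry["pump_frequency"] + 0.1 * error}

state = {"intake_pressure": 80.0, "pump_frequency": 45.0}
for _ in range(200):
    control = control_station(state)                         # direct coupling: telemetry out
    state["pump_frequency"] = control["pump_frequency"]      # feedback: control parameters in
    # crude stand-in for the well/reservoir response over one time step:
    # a higher pump frequency draws the intake pressure down
    state["intake_pressure"] += 0.05 * (100.0 - 0.8 * state["pump_frequency"]
                                        - state["intake_pressure"])
print(state)
```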
Indexed in Scopus
Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU
The journal is included in the Russian Science Citation Index
International Interdisciplinary Conference "Mathematics. Computing. Education"