Search results for 'information technology':
Articles found: 31
  1. Nikulin A.S., ZHediaevskii D.N., Fedorova E.B.
    Applying artificial neural network for the selection of mixed refrigerant by boiling curve
    Computer Research and Modeling, 2022, v. 14, no. 3, pp. 593-608

    The paper presents a method for selecting the composition of a refrigerant with a given isobaric cooling curve using an artificial neural network (ANN). The method is based on 1D convolutional layers. To train the network, we built a technological model of a simple heat exchanger in the UniSim Design program using the Peng – Robinson equation of state. With this model we created a synthetic database of isobaric boiling curves for refrigerants of different compositions: an algorithm written in Python uploaded information on the isobaric boiling curves of 1 049 500 compositions via the COM interface, the compositions being generated by the Monte Carlo method. The designed ANN architecture selects the composition of a mixed refrigerant from 101 points of the boiling curve; its output layer gives the mole flows of the mixed-refrigerant components (methane, ethane, propane, nitrogen). The ANN was trained using a cyclical learning rate. To demonstrate the results, we selected an MR composition for a natural gas cooling curve with a minimum temperature drop of 3 K and a maximum temperature drop of no more than 10 K, which turned out better than the compositions predicted by the UniSim SQP optimizer and by the $k$-nearest neighbors algorithm. A significant result of this article is that an artificial neural network can be used to select the optimal refrigerant composition from the cooling curve of natural gas. The method can help engineers select the composition of the mixed refrigerant in real time, which will help reduce the energy consumption of natural gas liquefaction.
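As a rough illustration of the idea in this abstract (not the authors' UniSim/ANN pipeline), the sketch below runs a toy 1D-convolutional forward pass in pure Python: 101 boiling-curve points go in, and a non-negative "mole flow" value per component comes out. All weights are random placeholders, and the single conv-plus-dense stack is an assumption about the architecture.

```python
import math
import random

random.seed(0)

N_POINTS = 101                                # boiling-curve samples fed to the ANN
COMPONENTS = ["methane", "ethane", "propane", "nitrogen"]

def conv1d(signal, kernel, bias=0.0):
    """'Valid' 1D convolution: one feature map from one kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k)) + bias
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def forward(curve, kernel, weights, biases):
    """Conv layer -> global average pooling -> dense layer; softplus
    keeps the predicted mole flows non-negative."""
    feat = relu(conv1d(curve, kernel))
    pooled = sum(feat) / len(feat)
    return [math.log1p(math.exp(w * pooled + b))
            for w, b in zip(weights, biases)]

# Untrained toy parameters, just to run the pipeline end to end.
kernel = [random.uniform(-1, 1) for _ in range(5)]
weights = [random.uniform(-1, 1) for _ in COMPONENTS]
biases = [0.0] * len(COMPONENTS)

# Synthetic isobaric curve, normalized to [0, 1] over its temperature span.
curve = [i / (N_POINTS - 1) for i in range(N_POINTS)]
flows = forward(curve, kernel, weights, biases)   # one value per component
```

A trained version would replace the random parameters with weights fitted on the synthetic boiling-curve database.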

  2. Cheremisina E.N., Senner A.E.
    The use of GIS INTEGRO in searching tasks for oil and gas deposits
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 439-444

    GIS INTEGRO is a geo-information software system that forms the basis for integrated interpretation of geophysical data in studying the deep structure of the Earth. It combines a variety of computational and analytical applications for solving geological and geophysical problems and includes interfaces for changing the representation of data (raster, vector, regular and irregular observation networks), a unit for converting map projections, and application blocks, including a block for integrated data analysis and for solving prognostic and diagnostic tasks.

    The methodological approach is based on the integration and integrated analysis of geophysical data along regional profiles, geophysical potential fields, and additional geological information on the study area. The analytical support includes packages for transformations, filtering, statistical processing, calculations, lineament detection, solving direct and inverse problems, and integration of geographic information.

    The technology and the software and analytical support were tested in solving problems of tectonic zoning at scales of 1:200 000 and 1:1 000 000 in Yakutia, Kazakhstan and the Rostov region, in studying the deep structure along the regional profiles 1:S, 1-SC, 2-SAT, 3-SAT and 2-DV, and in oil and gas forecasting in regions of Eastern Siberia and Brazil.

    The article describes two possible approaches to parallel computations for processing 2D and 3D grids in geophysical research. As an example, the implementation in a GRID environment of the application software ZondGeoStat (statistical sensing), which creates a 3D grid model from 2D grid data, is presented. The experience demonstrated the high efficiency of GRID environments for computations in geophysical research.
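The data-parallel pattern mentioned above (processing a 2D grid by independent chunks) can be sketched as follows; the row-band decomposition and the 3-point mean filter are illustrative assumptions, not ZondGeoStat's actual algorithm.

```python
from concurrent.futures import ThreadPoolExecutor

def mean_filter_rows(band):
    """Apply a 3-point running mean along each row of a band."""
    out = []
    for row in band:
        out.append([sum(row[max(0, j - 1):j + 2]) /
                    len(row[max(0, j - 1):j + 2])
                    for j in range(len(row))])
    return out

def process_grid(grid, n_workers=4):
    """Domain decomposition: split the 2D grid into row bands,
    filter the bands in parallel, then reassemble the result."""
    step = max(1, len(grid) // n_workers)
    bands = [grid[i:i + step] for i in range(0, len(grid), step)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(mean_filter_rows, bands))
    return [row for part in parts for row in part]

grid = [[float(10 * i + j) for j in range(10)] for i in range(8)]
result = process_grid(grid)
```

Because each row is filtered independently, the parallel result is identical to a sequential pass over the whole grid; real geophysical filters with cross-row stencils would need halo exchange between bands.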

    Views (last year): 4.
  3. Kalashnikov S.V., Krivoschapov A.A., Mitin A.L., Nikolaev N.V.
    Computational investigation of aerodynamic performance of the generic flying-wing aircraft model using FlowVision computational code
    Computer Research and Modeling, 2017, v. 9, no. 1, pp. 67-74

    The modern approach to modernization of experimental techniques involves the design of mathematical models of the wind tunnel, also referred to as Electronic or Digital Wind Tunnels, which are meant to supplement experimental data with computational analysis. Electronic Wind Tunnels are expected to provide accurate information on the aerodynamic performance of an aircraft based on a set of experimental data, to reconcile data from different test facilities, and to compare computational results for flight conditions with data obtained in the presence of the support system and test section.

    Completing this task requires preliminary research involving extensive wind-tunnel testing as well as RANS-based computations using supercomputer technologies. At different stages of the computational investigation one may have to model not only the aircraft itself but also the wind-tunnel test section and the model support system. Modelling such complex geometries inevitably produces quite complex vortical and separated flows that have to be simulated. Another problem is that boundary-layer transition is often present in wind-tunnel testing due to quite small model scales and, therefore, low Reynolds numbers.

    The current article covers the first stage of the Electronic Wind-Tunnel design program: a computational investigation of the aerodynamic characteristics of a generic flying-wing UAV model previously tested in the TsAGI T-102 wind tunnel. Since this stage is preliminary, the model was simulated without taking the test-section and support-system geometry into account, and the boundary layer was considered fully turbulent.

    For the current research the FlowVision computational code was used because of its automatic grid generation and the stability of its solver when simulating complex flows. A two-equation k–ε turbulence model was used with special wall functions designed to properly capture flow separation. The computed lift and drag coefficients for different angles of attack were compared with the experimental data.

    Views (last year): 10. Citations: 1 (RSCI).
  4. The present article describes the authors' model for constructing a distributed computer network and performing distributed calculations in it within a software-information environment that manages the information, automation and engineering systems of intelligent buildings. The presented model is based on the functional approach, with non-deterministic calculations and various side effects encapsulated in monadic computations, which makes it possible to apply all the advantages of functional programming to choosing and executing scenarios for managing various aspects of the life activity of buildings and structures. Besides, the described model can be used in the intellectualization of technical and sociotechnical systems to increase the autonomy of decision-making on the values of parameters of a building's internal environment, as well as to implement methods of adaptive control, in particular various techniques and approaches of artificial intelligence. An important part of the model is a directed acyclic graph, an extension of the blockchain that can drastically reduce the cost of transactions while supporting the execution of smart contracts. According to the authors, this will make it possible to realize new technologies and methods (a distributed ledger based on a directed acyclic graph, edge computing, and a hybrid scheme for constructing artificial intelligence systems), which together can be used to increase the efficiency of intelligent-building management. The relevance of the presented model follows from the necessity and importance of moving the management of the life cycle of buildings and structures into the Industry 4.0 paradigm and of applying artificial intelligence methods in management, with the universal introduction of autonomous artificial cognitive agents.
The novelty of the model follows from the combined consideration of distributed calculations within the functional approach and of a hybrid paradigm for constructing artificial intelligent agents for managing intelligent buildings. The work is theoretical. The article will be of interest to scientists and engineers working in the automation of technological and industrial processes, both within intelligent buildings and in the management of complex technical and sociotechnical systems as a whole.
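The encapsulation of side effects in monadic computations that this abstract relies on can be illustrated with a minimal Writer-style monad in Python. The sensor and HVAC names below are hypothetical, and the constant reading stands in for real I/O; this is a sketch of the pattern, not the authors' environment.

```python
class Action:
    """Minimal Writer-style monad: a value plus an append-only log
    of side effects, so that scenario logic stays pure and
    composable."""
    def __init__(self, value, log=()):
        self.value, self.log = value, tuple(log)

    def bind(self, f):
        """Chain the next step and merge its effect log with ours."""
        nxt = f(self.value)
        return Action(nxt.value, self.log + nxt.log)

def read_temperature(_):
    # hypothetical sensor read; a constant stands in for real I/O
    return Action(23.5, ["read temperature sensor"])

def decide_hvac(temp_c):
    mode = "cool" if temp_c > 22.0 else "idle"
    return Action(mode, ["set HVAC mode to " + mode])

# a building-management scenario assembled from pure steps
scenario = Action(None).bind(read_temperature).bind(decide_hvac)
```

Because the effects are only recorded in the log, scenarios can be composed, inspected, and tested before anything is actually executed against the building's systems.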

  5. Koganov A.V., Zlobin A.I., Rakcheeva T.A.
    Research of the possibility of parallel information handling by humans in series of tasks of increasing complexity
    Computer Research and Modeling, 2013, v. 5, no. 5, pp. 845-861

    We propose a computer technology for presenting engineering-psychology tests that reveal subjects who can speed up the solution of a logic task by the simultaneous execution of several standard logic operations. The tests are based on a theory distinguishing two kinds of logic tasks: in the first kind parallel logic processing is effective, and in the second kind it is not. The experiment performed confirms the capability for parallel logic processing in a notable part of the subjects; a substantial speedup of logic operations through simultaneous processing is very uncommon. The efficacy of the methodology is confirmed.

    Views (last year): 1. Citations: 4 (RSCI).
  6. Kutovskiy N.A., Nechaevskiy A.V., Ososkov G.A., Pryahina D.I., Trofimov V.V.
    Simulation of interprocessor interactions for MPI-applications in the cloud infrastructure
    Computer Research and Modeling, 2017, v. 9, no. 6, pp. 955-963

    A new cloud center of parallel computing is to be created in the Laboratory of Information Technologies (LIT) of the Joint Institute for Nuclear Research (JINR), which is expected to improve significantly the efficiency of numerical calculations and to expedite the receipt of new physically meaningful results due to more rational use of computing resources. To optimize a scheme of parallel computations in a cloud environment it is necessary to test this scheme for various combinations of equipment parameters (processor speed and number, throughput of the communication network, etc.). As the test problem, the parallel MPI algorithm for calculations of long Josephson junctions (LDJ) was chosen. The problems of evaluating the impact of the abovementioned factors of the computing environment on the computing speed of the test problem are solved by simulation with the SyMSim program developed in LIT.

    The simulation of the LDJ calculations in the cloud environment enables users, without a series of test runs in a real computer environment, to find the optimal number of CPUs for a certain type of network; this can save significant computation time and resources. The main parameters of the model were obtained from a computational experiment conducted on a special cloud-based testbed. The experiments showed that the pure computation time decreases in inverse proportion to the number of processors but depends significantly on network bandwidth. Comparison of the empirical results with the simulation showed that the model correctly reproduces parallel calculations performed using MPI technology. Besides, it confirms our recommendation: for fast calculations of this type one needs to increase both the number of CPUs and the network throughput at the same time. The simulation results also allow deriving an empirical analytical formula expressing the dependence of the calculation time on the number of processors for a fixed system configuration. The obtained formula can be applied to other similar studies but requires additional tests to determine the values of its coefficients.
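A toy version of such a run-time formula, assuming an ideal 1/p computation term plus a bandwidth- and latency-dependent communication term, can be sketched as below. All coefficients are invented for illustration and are not the paper's fitted values.

```python
def run_time(p, work=3600.0, data_bytes=2e9, bandwidth=1e9,
             latency=1e-4):
    """Toy timing model: an ideal work/p computation term plus a
    communication term set by the exchanged data volume, the network
    bandwidth, and a per-process latency overhead."""
    comm = (p - 1) * latency + data_bytes / bandwidth
    return work / p + comm

# seconds for 1..16 CPUs under the invented coefficients
times = [run_time(p) for p in (1, 2, 4, 8, 16)]
```

The model reproduces the abstract's two observations: time falls roughly as 1/p, and a faster network (larger `bandwidth`) shrinks the floor that communication puts under the total run time.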

    Views (last year): 10. Citations: 1 (RSCI).
  7. Shumov V.V.
    Consideration of psychological factors in models of the battle (conflict)
    Computer Research and Modeling, 2016, v. 8, no. 6, pp. 951-964

    The course and outcome of a battle largely depend on the morale of the troops, characterized by the percentage of losses in killed and wounded at which the troops still continue to fight. Every battle ends as a psychological act: one of the parties refuses to continue it. Typically, models of battle take the psychological factor into account in the termination condition of the Lanchester equations (the condition of equality of forces, when the strength of one of the parties reaches zero). It is emphasized that Lanchester-type models satisfactorily describe the dynamics of battle only in its initial stages. To resolve this contradiction, a modification of Lanchester's equations is proposed that takes into account the fact that at any moment of the battle only those fighters who have not been hit and have not abandoned the battle fire at the enemy. The obtained differential equations are solved numerically and make it possible to account for the influence of the psychological factor on the dynamics and to estimate the completion time of the conflict. Computational experiments confirm a fact well known in military theory: a battle usually ends with the refusal of the soldiers of one of the parties to continue it (avoidance of combat in various forms). Along with the models of temporal and spatial dynamics, it is proposed to use a modification of the contest success function of S. Skaperdas based on the principles of combat. The estimate of the probability of victory of one side takes into account the sides' sensitivity to bloody casualties and the degree of military superiority.
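A minimal numerical sketch of the modified termination condition, assuming classic Lanchester aimed-fire attrition integrated with Euler steps and an explicit morale threshold. The strengths, effectiveness coefficients, and the 30% loss threshold are illustrative, not the paper's values.

```python
def battle(x0, y0, a=0.05, b=0.04, quit_frac=0.3, dt=0.01):
    """Lanchester aimed-fire attrition (dx/dt = -b*y, dy/dt = -a*x)
    integrated with Euler steps. The fight ends when either side's
    losses reach its morale threshold (quit_frac), not when a side
    is annihilated."""
    x, y, t = float(x0), float(y0), 0.0
    while x > (1 - quit_frac) * x0 and y > (1 - quit_frac) * y0:
        x, y = x - b * y * dt, y - a * x * dt   # mutual attrition
        t += dt
    winner = "X" if y <= (1 - quit_frac) * y0 else "Y"
    return winner, t, x, y

# illustrative initial strengths
winner, t_end, x_left, y_left = battle(1000, 800)
```

With these numbers the stronger side X wins well before side Y is destroyed: Y quits after losing 30% of its strength, which is the qualitative behavior the abstract describes.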

    Views (last year): 7. Citations: 4 (RSCI).
  8. Bratsun D.A., Buzmakov M.D.
    Repressilator with time-delayed gene expression. Part II. Stochastic description
    Computer Research and Modeling, 2021, v. 13, no. 3, pp. 587-609

    The repressilator is the first genetic regulatory network in synthetic biology, artificially constructed in 2000. It is a closed network of three genetic elements, $lacI$, $\lambda cI$ and $tetR$, which have a natural origin but are not found in nature in such a combination. The promoter of each of the three genes controls the next cistron via negative feedback, suppressing the expression of the neighboring gene. In our previous paper [Bratsun et al., 2018], we proposed a mathematical model of a delayed repressilator and studied its properties within the framework of a deterministic description. We assume that the delay can be both natural, i.e. arising during the transcription/translation of genes due to the multistage nature of these processes, and artificial, i.e. deliberately introduced into the work of the regulatory network using genetic engineering technologies. In this work, we apply a stochastic description of the dynamic processes in the delayed repressilator, which is an important addition to the deterministic analysis due to the small number of molecules involved in gene regulation. The stochastic study is carried out numerically using the Gillespie algorithm, modified for time-delay systems. We present a description of the algorithm, its software implementation, and the results of benchmark simulations for a one-gene delayed autorepressor. When studying the behavior of the repressilator, we show that the stochastic description in a number of cases gives new information about the behavior of the system, which does not reduce to deterministic dynamics even when averaged over a large number of realizations. We show that in the subcritical range of parameters, where the deterministic analysis predicts the absolute stability of the system, quasi-regular oscillations may be excited due to the nonlinear interaction of noise and delay. Earlier, within the framework of the deterministic description, we discovered a long-lived transient regime, represented in phase space by a slow manifold, which reflects the process of long-term synchronization of protein pulsations in the work of the repressilator genes. In this work, we show that the transition to the cooperative mode of gene operation occurs two orders of magnitude faster when the effect of the intrinsic noise is taken into account. We have obtained the probability distribution of the moment when the phase trajectory leaves the slow manifold and have determined the most probable time for such a transition. The influence of the intrinsic noise of chemical reactions on the dynamic properties of the repressilator is discussed.
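For reference, the core of a standard (no-delay) Gillespie loop for a one-gene autorepressor can be sketched as below. The rate constants and the Hill-type repression term are generic textbook assumptions, not the paper's parameters; the delayed variant studied in the paper would additionally queue production events to fire after a fixed lag.

```python
import random

def gillespie_autorepressor(t_end=200.0, k=20.0, K=10.0, h=2.0,
                            g=0.1, seed=1):
    """Plain Gillespie SSA for one self-repressing gene: protein is
    produced at a Hill-repressed rate and degrades linearly; each
    iteration draws the waiting time and the next reaction from the
    current propensities."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while t < t_end:
        a1 = k / (1.0 + (n / K) ** h)   # repressed production
        a2 = g * n                      # linear degradation
        a0 = a1 + a2
        t += rng.expovariate(a0)        # exponential waiting time
        if rng.random() * a0 < a1:
            n += 1                      # production fires
        else:
            n -= 1                      # degradation fires
    return n

copies = gillespie_autorepressor()      # protein copy number at t_end
```

Because the production propensity never vanishes and degradation is proportional to the copy number, the trajectory fluctuates around the deterministic fixed point instead of settling on it, which is the kind of intrinsic-noise effect the abstract discusses.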

  9. Adamovskiy Y.R., Chertkov V.M., Bohush R.P.
    Model for building of the radio environment map for cognitive communication system based on LTE
    Computer Research and Modeling, 2022, v. 14, no. 1, pp. 127-146

    The paper is devoted to the secondary use of spectrum in telecommunication networks. One solution to this problem is the use of cognitive radio technologies and dynamic spectrum access, whose successful functioning requires a large amount of information, including the parameters of base stations and network subscribers. Storage and processing of this information should rely on a radio environment map: a spatio-temporal database of all activity in the network that makes it possible to determine the frequencies available for use at a given time. The paper presents a two-level model for forming the radio environment map of an LTE cellular communication system, with local and global levels, described by the following parameters: a set of frequencies, signal attenuation, a signal propagation map, the grid step, and the current time count. The key objects of the model are the base station and the subscriber unit. The main parameters of the base station include: name, identifier, cell coordinates, band number, radiation power, numbers of connected subscriber devices, and dedicated resource blocks. For subscriber devices the following parameters are used: name, identifier, location, current coordinates of the device cell, base station identifier, frequency band, numbers of resource blocks for communication with the station, radiation power, data transmission status, list of numbers of the nearest stations, and the movement and communication-session schedules of the devices. An algorithm implementing the model is presented that takes into account the movement scenarios and communication sessions of subscriber devices, along with a method for calculating the radio environment map at a point of the coordinate grid that accounts for losses during the propagation of radio signals from emitting devices. The software implementation of the model is performed in the MatLab package. Approaches that increase the speed of its operation are described. In the simulation, the parameters were chosen taking into account the data of existing communication systems and the economy of computing resources. Experimental results of the algorithm for forming the radio environment map are demonstrated, confirming the correctness of the developed model.
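The per-node calculation behind such a map can be sketched as follows, assuming a generic log-distance path-loss model rather than the paper's exact propagation method; the station positions, powers, grid size, and path-loss exponent are all invented for illustration.

```python
import math

def rx_power_dbm(tx_dbm, d_m, f_mhz=1800.0, n=3.5):
    """Received level under a log-distance path-loss model:
    free-space loss at 1 m plus 10*n*log10(d); an exponent n > 2
    stands in for urban clutter."""
    d = max(d_m, 1.0)
    loss_1m = 32.44 + 20.0 * math.log10(f_mhz) - 60.0   # FSPL at 1 m
    return tx_dbm - loss_1m - 10.0 * n * math.log10(d)

def radio_map(stations, size=20, step=50.0):
    """One layer of a radio environment map: for every grid node
    keep the strongest received level (dBm) over all stations."""
    return [[max(rx_power_dbm(p, math.hypot(i * step - sx, j * step - sy))
                 for sx, sy, p in stations)
             for j in range(size)]
            for i in range(size)]

# two hypothetical LTE base stations: x (m), y (m), EIRP (dBm)
stations = [(0.0, 0.0, 43.0), (900.0, 900.0, 43.0)]
rem = radio_map(stations)
```

Thresholding such a layer per frequency band would give the availability information that dynamic spectrum access needs at each grid node.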

  10. Aksenov A.A., Zhluktov S.V., Kashirin V.S., Sazonova M.L., Cherny S.G., Drozdova E.A., Rode A.A.
    Numerical modeling of raw material atomization and vaporization by a heat-carrier gas flow in furnace technical carbon production in FlowVision
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 921-939

    Technical carbon (soot) is a product obtained by the thermal decomposition (pyrolysis) of hydrocarbons (usually oil) in a stream of heat-carrier gas. It is widely used as a reinforcing component in the production of rubber and plastics; tire production consumes 70% of all carbon produced. In furnace carbon production, the liquid hydrocarbon feedstock is injected through nozzles into the stream of natural gas combustion products. The raw material is atomized and vaporized, with subsequent pyrolysis. It is important that the raw material evaporate completely before the pyrolysis process starts; otherwise coke forms and contaminates the product. Improving the carbon production technology, in particular ensuring complete evaporation of the raw material prior to pyrolysis, is impossible without mathematical modeling of the process itself: modeling is the most important way to obtain complete and detailed information about the peculiarities of reactor operation.

    A three-dimensional mathematical model and a calculation method for raw material atomization and evaporation in the heat-carrier gas flow are developed in the FlowVision software package. Water is selected as the model raw material for working out the modeling technique. The working substances in the reactor chamber are the combustion products of natural gas. The motion and evaporation of raw material droplets in the gas stream are modeled within the Euler approach to the interaction between dispersed and continuous media. Simulation results for raw material atomization and evaporation in a real reactor for technical carbon production are presented. The numerical method makes it possible to determine an important atomization characteristic, the Sauter mean diameter, which can be obtained from the droplet size distribution at each moment of spray formation.
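The Sauter mean diameter mentioned above is the diameter of a sphere with the spray's overall volume-to-surface ratio; for a discrete droplet sample (the sizes below are invented) it can be computed as:

```python
def sauter_mean_diameter(diameters):
    """D32 = sum(d^3) / sum(d^2) over the droplet sample: the
    diameter of a sphere having the spray's volume-to-surface
    ratio."""
    s3 = sum(d ** 3 for d in diameters)
    s2 = sum(d ** 2 for d in diameters)
    return s3 / s2

drops = [10e-6, 20e-6, 40e-6]        # droplet diameters, m (invented)
d32 = sauter_mean_diameter(drops)
```

Since D32 weights large droplets by volume, it always lies between the arithmetic mean and the maximum diameter, and for a monodisperse spray it equals the droplet diameter itself.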


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

