Search results for 'computer program':
Articles found: 62
  1. Minkevich I.G.
    Estimation of maximal values of biomass growth yield based on the mass-energy balance of cell metabolism
    Computer Research and Modeling, 2019, v. 11, no. 4, pp. 723-750

    The biomass growth yield is the ratio of the newly synthesized substance of growing cells to the amount of the consumed substrate, the source of matter and energy for cell growth. The yield characterizes the efficiency of substrate conversion into cell biomass. The conversion is carried out by the cell metabolism, the complete aggregate of biochemical reactions occurring in the cells.

    This work revisits the problem of predicting the maximal cell growth yield, basing the prediction on balances of the whole metabolism of the living cell and of its fragments, called partial metabolisms (PM). The following PMs are used here. For growth on any substrate we consider: i) the standard constructive metabolism (SCM), which consists of pathways that are identical for the growth of various organisms on any substrate. SCM starts from several standard compounds (nodal metabolites): glucose, acetyl-CoA, 2-oxoglutarate, erythrose-4-phosphate, oxaloacetate, ribose-5-phosphate, 3-phosphoglycerate, phosphoenolpyruvate, and pyruvate; and ii) the full forward metabolism (FM), the remaining part of the whole metabolism. The former consumes high-energy bonds (HEB) formed by the latter. In this work we examine a generalized variant of the FM that takes into account the possible presence of extracellular products and the possibility of both aerobic and anaerobic growth. Instead of separate balances for the formation of each nodal metabolite, as in our previous work, this work deals with the whole aggregate of these metabolites at once. This makes the solution more compact and requires fewer biochemical quantities and substantially less computational time.

    An equation has been derived that expresses the maximal biomass yield via the specific amounts of HEB formed and consumed by the partial metabolisms. It includes the specific HEB consumption by the SCM, a universal biochemical parameter applicable to a wide range of organisms and growth substrates. To determine this parameter correctly, the full constructive metabolism and its forward part are considered for the growth of cells on glucose, the most thoroughly studied substrate. We used the previously found properties of the elemental composition of the lipid and lipid-free fractions of cell biomass. A numerical study of the effect of various interrelations between the flows via different nodal metabolites showed that the SCM requirements for high-energy bonds and NAD(P)H are practically constant. The resulting HEB-to-formed-biomass coefficient is an efficient tool for estimating the maximal biomass yield from substrates for which the primary metabolism is known. The ATP-to-substrate ratio necessary for the yield estimation was calculated using the dedicated computer program package GenMetPath.
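
    As an illustrative reading of this balance (the symbols are ours, not the paper's notation): if the forward metabolism forms $a_{\mathrm{FM}}$ high-energy bonds per unit of consumed substrate and the standard constructive metabolism consumes $\sigma_{\mathrm{SCM}}$ of them per unit of formed biomass, then equating HEB formation and consumption gives $Y_{\max} \, \sigma_{\mathrm{SCM}} = a_{\mathrm{FM}}$, i.e. the maximal yield is $Y_{\max} = a_{\mathrm{FM}} / \sigma_{\mathrm{SCM}}$.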

  2. Khorkov A.V., Khorkov A.V.
    Linear and nonlinear optimization models of multiple covering of a bounded plane domain with circles
    Computer Research and Modeling, 2019, v. 11, no. 6, pp. 1101-1110

    Problems of multiple covering ($k$-covering) of a bounded set $G$ with equal circles of a given radius are well known. They are thoroughly studied under the assumption that $G$ is a finite set. There are several papers concerned with studying this problem in the case where $G$ is a connected set. In this paper, we study the problem of minimizing the number of circles that form a $k$-covering, $k \geqslant 1$, provided that $G$ is a bounded convex plane domain.

    For the above-mentioned problem, we state a 0-1 linear model, a general integer linear model, and a nonlinear model, imposing a constraint on the minimum distance between the centers of covering circles. The latter constraint is due to the fact that in practice one can place at most one device at each point. We establish necessary and sufficient solvability conditions for the linear models and describe one (easily realizable) variant of these conditions in the case where the covered set $G$ is a rectangle.
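
    As an illustration (the discretization and notation here are ours, not the authors' formulas): let points $p_1, \dots, p_m$ discretize $G$, let $c_1, \dots, c_n$ be candidate circle centers, and let the binary variable $x_j$ indicate that a circle is placed at $c_j$. The 0-1 model with the distance constraint can then be sketched as minimizing $\sum_{j=1}^n x_j$ subject to $\sum_{j \colon \|p_i - c_j\| \leqslant r} x_j \geqslant k$ for $i = 1, \dots, m$, to $x_j + x_l \leqslant 1$ whenever $\|c_j - c_l\| < d$, and to $x_j \in \{0, 1\}$, where $r$ is the circle radius and $d$ is the minimum allowed distance between centers.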

    We propose some methods for finding an approximate number of circles of a given radius that provide the desired $k$-covering of the set $G$, both with and without constraints on distances between the circles’ centers. We treat the calculated values as approximate upper bounds for the number of circles. We also propose a technique that allows one to get approximate lower bounds for the number of circles that is necessary for providing a $k$-covering of the set $G$. In the general linear model, as distinct from the 0-1 linear model, we require no additional constraint. The difference between the upper and lower bounds for the number of circles characterizes the quality (acceptability) of the constructed $k$-covering.

    We state a nonlinear mathematical model for the $k$-covering problem with the above-mentioned constraints imposed on distances between the centers of covering circles. For this model, we propose an algorithm which (in certain cases) allows one to find more exact solutions to covering problems than those calculated from linear models.

    For implementing the proposed approach, we have developed computer programs and performed numerical experiments. Results of numerical experiments demonstrate the effectiveness of the method.

  3. Bogdanov A.V., Gankevich I.G., Gayduchok V.Yu., Yuzhanin N.V.
    Running applications on a hybrid cluster
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 475-483

    A hybrid cluster implies the use of computational devices with radically different architectures, usually a conventional CPU architecture (e.g. x86_64) and a GPU architecture (e.g. NVIDIA CUDA). Creating and exploiting such a cluster requires some experience: to harness the full computational power of the described system and obtain substantial speedup for computational tasks, many factors should be taken into account. These factors include hardware characteristics (e.g. network infrastructure, type of data storage, GPU architecture) as well as the software stack (e.g. MPI implementation, GPGPU libraries). So, in order to run scientific applications, GPU capabilities, software features, task size and other factors should be considered.

    This report discusses the opportunities and problems of hybrid computations. Some statistics from runs of test programs and applications are demonstrated. The main focus of interest is open-source applications (e.g. OpenFOAM) that support GPGPU (with some parts rewritten to use GPGPU directly or by replacing libraries).

    There are several approaches to organizing heterogeneous computations for different GPU architectures, of which the CUDA library and the OpenCL framework are compared. CUDA is becoming quite typical for hybrid systems with NVIDIA cards, but OpenCL offers portability, which can be a determining factor when choosing a framework for development. We also put emphasis on multi-GPU systems, which are often used to build hybrid clusters. Calculations were performed on a hybrid cluster of the SPbU computing center.

  4. Silaeva V.A., Silaeva M.V., Silaev A.M.
    Estimation of models parameters for time series with Markov switching regimes
    Computer Research and Modeling, 2018, v. 10, no. 6, pp. 903-918

    The paper considers the problem of estimating the parameters of time series described by regression models with Markov switching between two regimes at random instants of time, with independent Gaussian noise. For the solution, we propose a variant of the EM algorithm based on an iterative procedure during which the regression parameters are estimated for a given sequence of regime switchings, and the switching sequence is estimated for the given parameters of the regression models. In contrast to the well-known methods of estimating regression parameters in models with Markov switching, which are based on calculating the posterior probabilities of the discrete states of the switching sequence, here we compute estimates of the switching sequence that are optimal by the criterion of maximum posterior probability. As a result, the proposed algorithm turns out to be simpler and to require fewer calculations. Computer modeling reveals the factors influencing the estimation accuracy: the number of observations, the number of unknown regression parameters, the degree of their difference between the regimes, and the signal-to-noise ratio, which is associated with the coefficient of determination in regression models.

    The proposed algorithm is applied to the problem of estimating the parameters in regression models for the daily rate of return of the RTS index, depending on the returns of the S&P 500 index and Gazprom shares for the period from 2013 to 2018. The parameter estimates found by the proposed algorithm are compared with the estimates formed by the EViews econometric package and with ordinary least squares estimates that ignore regime switching. Taking regime switching into account gives a more accurate picture of the structure of the statistical dependence between the variables under study. In switching models, an increase in the signal-to-noise ratio reduces the differences between the estimates produced by the proposed algorithm and by EViews.
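
    A minimal sketch of this alternating scheme (our illustration, not the authors' code): regression parameters are re-fitted by OLS for a fixed regime labeling, and the regime sequence is then re-estimated as the maximum a posteriori path of a two-state Markov chain via the Viterbi algorithm. The transition matrix P, the noise variance and all identifiers are assumptions.

        import numpy as np

        def fit_switching_regression(X, y, P, sigma2=1.0, n_iter=20):
            # X: (T, p) regressors, y: (T,) responses, P: (2, 2) transition matrix.
            T, K = X.shape[0], 2
            s = np.random.randint(K, size=T)              # initial regime labels
            beta = np.zeros((K, X.shape[1]))
            for _ in range(n_iter):
                # Step 1: OLS per regime for the current labeling.
                for k in range(K):
                    idx = s == k
                    if idx.sum() >= X.shape[1]:
                        beta[k], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
                # Step 2: MAP regime path (Viterbi) for the current parameters.
                loglik = np.stack([-0.5 * (y - X @ beta[k]) ** 2 / sigma2
                                   for k in range(K)], axis=1)   # shape (T, K)
                logP = np.log(P)
                delta, back = loglik[0].copy(), np.zeros((T, K), dtype=int)
                for t in range(1, T):
                    cand = delta[:, None] + logP          # cand[i, j]: regime i -> j
                    back[t] = cand.argmax(axis=0)
                    delta = cand.max(axis=0) + loglik[t]
                s = np.zeros(T, dtype=int)
                s[-1] = delta.argmax()
                for t in range(T - 2, -1, -1):
                    s[t] = back[t + 1, s[t + 1]]
            return beta, s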

  5. Volokhova A.V., Zemlyanay E.V., Kachalov V.V., Rikhvitskiy V.S.
    Simulation of the gas condensate reservoir depletion
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1081-1095

    One of the problems in developing gas condensate fields is that the condensed hydrocarbons in the gas-bearing layer can get stuck in the pores of the formation and hence cannot be extracted. In this regard, research is underway to increase the recoverability of hydrocarbons from such fields. It includes a wide range of studies on the mathematical simulation of the passage of gas condensate mixtures through a porous medium under various conditions.

    In the present work, within the classical approach based on Darcy's law and the law of continuity of flows, we formulate an initial-boundary value problem for a system of nonlinear differential equations that describes the depletion of a multicomponent gas condensate mixture in a porous reservoir. A computational scheme is developed on the basis of a finite-difference approximation and the fourth-order Runge-Kutta method. The scheme can be used for simulations both in the spatially one-dimensional case, corresponding to the conditions of a laboratory experiment, and in the two-dimensional case, when modeling a flat gas-bearing formation with circular symmetry.
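
    A generic skeleton of this scheme type (our sketch under a deliberately simplified model): a finite-difference discretization in space advanced in time with the classical fourth-order Runge-Kutta method. The right-hand side below is a placeholder nonlinear diffusion term, not the paper's multicomponent gas condensate system.

        import numpy as np

        def rhs(p, dx):
            # Second-difference flux term on a uniform 1D grid (no-flow boundaries).
            f = np.zeros_like(p)
            f[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx ** 2
            return p * f                                  # placeholder nonlinearity

        def rk4_step(p, dt, dx):
            # Classical fourth-order Runge-Kutta time step.
            k1 = rhs(p, dx)
            k2 = rhs(p + 0.5 * dt * k1, dx)
            k3 = rhs(p + 0.5 * dt * k2, dx)
            k4 = rhs(p + dt * k3, dx)
            return p + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

        p = np.linspace(2.0, 1.0, 101)                    # initial pressure profile
        for _ in range(1000):
            p = rk4_step(p, dt=1.0e-5, dx=0.01)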

    The computer implementation is based on the combination of C++ and Maple tools, using the MPI parallel programming technique to speed up the calculations. The calculations were performed on the HybriLIT cluster of the Multifunctional Information and Computing Complex of the Laboratory of Information Technologies of the Joint Institute for Nuclear Research.

    Numerical results are compared with the experimental data on the pressure dependence of the output of a nine-component hydrocarbon mixture obtained at a laboratory facility (VNIIGAZ, Ukhta). The calculations were performed for two types of porous filler in the laboratory model of the formation: a terrigenous filler at 25 °C and a carbonate one at 60 °C. It is shown that the developed approach ensures agreement of the numerical results with the experimental data. By fitting the numerical results to the experimental data on the depletion of the laboratory reservoir, we obtained the values of the parameters that determine the inter-phase transition coefficient for the simulated system. Using the same parameters, a computer simulation of the depletion of a thin gas-bearing layer was carried out in the circular symmetry approximation.

  6. Shepelev V.D., Kostyuchenkov N.V., Shepelev S.D., Alieva A.A., Makarova I.V., Buyvol P.A., Parsin G.A.
    The development of an intelligent system for recognizing the volume and weight characteristics of cargo
    Computer Research and Modeling, 2021, v. 13, no. 2, pp. 437-450

    Industrial imaging, or "machine vision", is currently a key technology in many industries, as it can be used to optimize various processes. The purpose of this work is to create a software and hardware complex for measuring the overall (dimensional) and weight characteristics of cargo, based on an intelligent system using neural network identification methods that overcome the technological limitations of similar complexes built on ultrasonic and infrared measuring sensors. The complex to be developed will measure cargo without restrictions on its volume and weight characteristics when cargo is tariffed and sorted within warehouse complexes. The system will include an intelligent computer program that determines the volume and weight characteristics of cargo using machine vision technology, and an experimental sample of a stand for measuring the volume and weight of cargo.

    We analyzed solutions to similar problems and noted that the disadvantages of the studied methods are very strict requirements on the camera position, as well as the need for manual operations when calculating the dimensions, which cannot be automated without significant modifications. In the course of the work, we investigated various methods of object recognition in images in order to filter scenes by the presence of cargo and to measure its overall dimensions. We obtained satisfactory results using cameras that combine an optical image capture method with infrared sensors. As a result, we developed a computer program that captures a continuous stream from Intel RealSense video cameras, extracts a three-dimensional object from the designated area, and calculates the overall dimensions of the object. At this stage, we analyzed computer vision techniques, developed an algorithm implementing the automatic measurement of goods using special cameras, and developed the software for obtaining the overall dimensions of objects in automatic mode.
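
    A minimal sketch of such a capture-and-measure step (our assumptions, not the authors' program; the pyrealsense2 SDK is assumed, and the background distance and thresholds are illustrative): grab one depth frame from an Intel RealSense camera and estimate the pixel-space bounding box and height of everything closer than the background plane.

        import numpy as np
        import pyrealsense2 as rs

        pipe = rs.pipeline()
        cfg = rs.config()
        cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
        profile = pipe.start(cfg)
        scale = profile.get_device().first_depth_sensor().get_depth_scale()

        frames = pipe.wait_for_frames()
        depth = np.asanyarray(frames.get_depth_frame().get_data()) * scale  # meters
        mask = (depth > 0) & (depth < 1.2)   # illustrative: closer than the background
        ys, xs = np.nonzero(mask)
        if xs.size:
            # Pixel-space bounding box; metric width/length would additionally use
            # rs.rs2_deproject_pixel_to_point with the depth stream intrinsics.
            print("bbox (px):", xs.min(), xs.max(), ys.min(), ys.max())
            print("height (m):", 1.2 - depth[mask].min())
        pipe.stop()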

    Upon completion of the work, this development can be used as a ready-made solution for transport companies, logistics centers, warehouses of large industrial and commercial enterprises.

  7. Bogdanov A.V., P. Sone K. Ko, Zaya K.
    Performance of the OpenMP and MPI implementations on UltraSPARC system
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 485-491

    This paper targets programmers and developers interested in utilizing parallel programming techniques to enhance application performance. The Oracle Solaris Studio software provides state-of-the-art optimizing and parallelizing compilers for C, C++ and Fortran, an advanced debugger, and optimized mathematical and performance libraries. Also included are a powerful performance analysis tool for profiling serial and parallel applications, a thread analysis tool for detecting data races and deadlocks in shared-memory parallel programs, and an Integrated Development Environment (IDE). The Oracle Message Passing Toolkit software provides the high-performance MPI libraries and the associated run-time environment needed for message passing applications that can run on a single system or across multiple compute systems connected with high-performance networking, including Gigabit Ethernet, 10 Gigabit Ethernet, InfiniBand and Myrinet. Examples of OpenMP and MPI are provided throughout the paper, including their usage via the Oracle Solaris Studio and Oracle Message Passing Toolkit products for the development and deployment of both serial and parallel applications on SPARC and x86/x64 based systems. Throughout the paper it is demonstrated how to develop and deploy an application parallelized with OpenMP and/or MPI.
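
    The paper's examples use C/Fortran with the Oracle toolchain; purely as a language-neutral illustration of the message-passing pattern discussed, here is the canonical ring exchange in mpi4py (an assumption of ours, not taken from the paper).

        # Run with, e.g.: mpiexec -n 4 python ring.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()

        # Each rank sends its number to the right neighbour and receives from the left.
        token = comm.sendrecv(rank, dest=(rank + 1) % size, source=(rank - 1) % size)
        print("rank %d of %d received token %d" % (rank, size, token))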

  8. Zavodskikh R.K., Efanov N.N.
    Performance prediction for chosen types of loops over one-dimensional arrays with embedding-driven intermediate representations analysis
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 211-224

    A method for mapping a set of intermediate representations (IR) of C and C++ programs to a vector embedding space is considered in order to create an empirical framework for static performance prediction using the LLVM compiler infrastructure. The usage of embeddings makes programs easier to compare, because it avoids direct comparison of Control Flow Graphs (CFG) and Data Flow Graphs (DFG). The method is based on a series of transformations of the initial IR: instrumentation, i.e. injection of artificial instructions in an instrumentation compiler pass depending on the load offset delta between the current instruction and the previous one; mapping of the instrumented IR into a multidimensional vector with IR2Vec; and dimension reduction with the t-SNE (t-distributed stochastic neighbor embedding) method. The D1 cache miss ratio measured with the perf stat tool is used as the performance metric. A heuristic criterion for deciding which programs have a higher or lower cache miss ratio is given; it is based on the embeddings of the programs in 2D space. The instrumentation compiler pass developed in this work is described: how it generates and injects artificial instructions into the IR within the memory model used. The software pipeline that implements the performance estimation on top of the LLVM compiler infrastructure is presented.

    Computational experiments are performed on synthetic tests, i.e. sets of programs with the same CFGs but different sequences of offsets used when accessing a one-dimensional array of a given size. The correlation coefficient between the performance metric and the distance to the worst program's embedding is measured and shown to be negative regardless of the t-SNE initialization, which confirms the heuristic criterion. The process of generating such synthetic tests is also described. Moreover, the spread of the performance metric over the programs of such a test is proposed as a quantity to be improved by exploring further test generators.
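
    A sketch of the last two stages of this pipeline (our assumptions; the data below are random stand-ins, not the paper's measurements): reduce program embeddings to 2D with t-SNE and correlate the performance metric with the distance to the worst program's embedding.

        import numpy as np
        from sklearn.manifold import TSNE

        emb = np.random.rand(50, 300)       # stand-in for IR2Vec program vectors
        miss_ratio = np.random.rand(50)     # stand-in for measured D1 miss ratios

        pts = TSNE(n_components=2, init="random", perplexity=10).fit_transform(emb)
        worst = pts[miss_ratio.argmax()]    # 2D embedding of the worst program
        dist = np.linalg.norm(pts - worst, axis=1)

        # On the paper's synthetic tests this coefficient is reported to be negative.
        r = np.corrcoef(dist, miss_ratio)[0, 1]
        print("corr(distance to worst, miss ratio) =", r)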

  9. Iakushkin O.O., Degtyarev A.B., Shvemberger S.V.
    Decomposition of the modeling task of some objects of archeological research for processing in a distributed computer system
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 533-537

    Although each task of recreating artifacts is truly unique, the modeling process for façades, foundations and building elements can be parametrized. This paper focuses on a complex of existing programming libraries and solutions that need to be united into a single computer system to solve such a task. An algorithm for generating the 3D filling of objects under reconstruction is presented. The solution architecture necessary for adapting the system to a cloud environment is studied.

  10. Smirnov S.A., Tarasov A.S.
    An automated system for program parameters fine tuning in the cloud
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 587-592

    The paper presents a software system aimed at finding the best (in some sense) parameters of an algorithm. The system handles both discrete and continuous parameters and employs the massive parallelism offered by public clouds. The paper gives an overview of the system, a method for measuring the algorithm's performance in the cloud, and numerical results of the system's use on several problem sets.
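
    A minimal sketch of the kind of tuning loop such a system performs (the names, the random-search strategy and the objective are our illustration, not the paper's method): sample mixed discrete/continuous parameter configurations and evaluate them in parallel, keeping the best one.

        import random
        from concurrent.futures import ProcessPoolExecutor

        SPACE = {
            "block_size": [64, 128, 256, 512],   # discrete parameter
            "relax": (0.1, 2.0),                 # continuous parameter range
        }

        def sample():
            return {"block_size": random.choice(SPACE["block_size"]),
                    "relax": random.uniform(*SPACE["relax"])}

        def run_algorithm(params):
            # Placeholder for launching the tuned program; in the paper's setting
            # a cloud worker runs it and reports a performance measurement.
            return (params["relax"] - 0.7) ** 2 + params["block_size"] / 1024.0

        if __name__ == "__main__":
            candidates = [sample() for _ in range(32)]
            with ProcessPoolExecutor() as pool:
                scores = list(pool.map(run_algorithm, candidates))
            best_score, best_params = min(zip(scores, candidates), key=lambda t: t[0])
            print("best score", best_score, "with", best_params)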
