Search results for 'production functions':
Articles found: 37
  1. We develop new tests that measure the increase in human information-processing capacity achieved by executing several logic operations of a prescribed type in parallel. To verify the cause of the capacity increase, we construct control tests on the same class of logic operations in which a parallel organization of the computation is ineffective. The apparatus of universal algebra and automata theory is used. This article extends a cycle of works investigating the human capacity for parallel computation; the main publications on the topic are listed in the references. The tasks in the described tests can be defined as computing the result of a sequence of same-type operations from some algebra. If the operation is associative, parallel computation is effective through suitable grouping of the operands. In the theory of computation this corresponds to the simultaneous work of several processors, each of which transforms per unit time a certain known number of elements of the input data or of the intermediate results (the processor productivity). It is not yet known what kind of data elements the brain uses for logical or mathematical calculation, or how many elements it processes per unit time. The test therefore contains a sequence of task presentations with different numbers of logical operations over a fixed alphabet; this number serves as the complexity measure of the task. Analyzing how the solution time depends on the complexity makes it possible to estimate the processor productivity and the organization of the computation. In purely sequential computation only one processor works, and the solution time is a linear function of the complexity.
If new processors begin to work in parallel as the task complexity grows, the dependence of solution time on complexity is represented by a curve that is convex from below. To detect the situation in which a person speeds up a single processor as the complexity grows, we use task series with similar operations but in a non-associative algebra; in such tasks parallel computation gains little efficiency from increasing the number of processors. These form the control set of tests. The article also considers one more class of tests, based on computing the state trajectory of a formal automaton for a given input sequence. We investigate a special class of automata (relay automata) whose construction affects the effectiveness of parallel computation of the final automaton state. For all tests the effectiveness of parallel computation is estimated. The article contains no experimental results.
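The contrast between sequential and parallel evaluation of an associative operation can be sketched in a few lines. The example below is illustrative only (it is not from the article): a pairwise, tree-shaped reduction gives the same result as a left-to-right fold precisely because the operation is associative, while the number of parallel time steps drops from n − 1 to ⌈log₂ n⌉.

```python
from functools import reduce
from math import ceil, log2

def tree_reduce(op, xs):
    """Pairwise (tree-shaped) reduction: valid only for an associative op.
    With enough processors, each level takes one time unit, so the parallel
    time is ceil(log2(n)) instead of the n - 1 steps of a sequential fold."""
    xs = list(xs)
    levels = 0
    while len(xs) > 1:
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
        levels += 1
    return xs[0], levels

data = list(range(1, 17))                     # 16 elements, 15 operations
seq = reduce(lambda a, b: a + b, data)        # sequential: 15 time units
par, depth = tree_reduce(lambda a, b: a + b, data)
assert seq == par == 136                      # associativity: same result
assert depth == ceil(log2(len(data)))         # only 4 parallel steps
```

For a non-associative operation the regrouping in `tree_reduce` would change the result, which is exactly why the control tests use a non-associative algebra.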

    Views (last year): 14. Citations: 1 (RSCI).
  2. Bozhko A.N.
    Hypergraph approach in the decomposition of complex technical systems
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1007-1022

The article considers a mathematical model of the decomposition of a complex product into assembly units. This is an important engineering problem that affects the organization of discrete production and its operational management. A review of modern approaches to mathematical modeling and computer-aided design of decompositions is given. In these approaches, graphs, networks, matrices, etc. serve as mathematical models of the structures of technical systems. Such models describe the mechanical structure as a binary relation on the set of system elements. The geometric coordination and integrity of machines and mechanical devices during the manufacturing process is achieved by means of basing. In general, basing can be performed on several elements simultaneously; it therefore represents a relation of variable arity, which cannot be correctly described in terms of binary mathematical structures. A new hypergraph model of the mechanical structure of a technical system is described. This model gives an adequate formalization of assembly operations and processes. Assembly operations that are carried out by two working bodies and consist in realizing mechanical connections are considered. Such operations are called coherent and sequential; this is the prevailing type of operation in modern industrial practice. It is shown that the mathematical description of such an operation is the normal contraction of an edge of the hypergraph. A sequence of contractions transforming the hypergraph into a point is a mathematical model of the assembly process. Two important theorems on the properties of contractible hypergraphs and their subgraphs, proved by the author, are presented. The concept of $s$-hypergraphs is introduced; $s$-hypergraphs are correct mathematical models of the mechanical structures of any assembled technical systems. Decomposition of a product into assembly units is defined as the cutting of an $s$-hypergraph into $s$-subgraphs.
The cutting problem is described in terms of discrete mathematical programming. Mathematical models of structural, topological and technological constraints are obtained. Objective functions are proposed that formalize the optimal choice of design solutions in various situations. The developed mathematical model of product decomposition is flexible and open: it allows for extensions that take into account the characteristics of the product and its production.
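The central operation, contracting a hyperedge so that its vertices merge into one compound vertex, can be sketched directly. The following is a minimal illustration, not the author's formalism: vertices are arbitrary hashable labels, hyperedges are frozensets, and a sequence of contractions that reduces the hypergraph to a single vertex plays the role of an assembly process.

```python
def contract_edge(vertices, edges, edge):
    """Contract one hyperedge: merge its vertices into a single compound
    vertex and rewrite every other edge accordingly (a sketch of the
    'normal contraction' used as the model of one assembly operation)."""
    merged = frozenset(edge)                   # the new compound vertex
    new_vertices = (vertices - set(edge)) | {merged}
    new_edges = []
    for e in edges:
        if set(e) == set(edge):
            continue                           # the contracted edge disappears
        ne = frozenset((merged if v in edge else v) for v in e)
        if len(ne) > 1:                        # drop degenerate 1-vertex edges
            new_edges.append(ne)
    return new_vertices, new_edges

# Toy product: parts 1..4 with two mechanical connections.
V = {1, 2, 3, 4}
E = [frozenset({1, 2}), frozenset({2, 3, 4})]
V, E = contract_edge(V, E, frozenset({1, 2}))  # assemble parts 1 and 2
V, E = contract_edge(V, E, E[0])               # then join the rest
assert len(V) == 1 and E == []                 # product fully assembled
```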

  3. Korolev S.A., Maykov D.V.
    Solution of the problem of optimal control of the process of methanogenesis based on the Pontryagin maximum principle
    Computer Research and Modeling, 2020, v. 12, no. 2, pp. 357-367

    The paper presents a mathematical model that describes the process of obtaining biogas from livestock waste. This model describes the processes occurring in a biogas plant for mesophilic and thermophilic media, as well as for continuous and periodic modes of substrate inflow. The values of the coefficients of this model found earlier for the periodic mode, obtained by solving the problem of model identification from experimental data using a genetic algorithm, are given.

For the methanogenesis model, an optimal control problem is formulated as a Lagrange problem whose objective functional is the biogas output over a certain period of time. The control parameter of the problem is the rate of substrate entry into the biogas plant. An algorithm for solving this problem is proposed, based on a numerical implementation of the Pontryagin maximum principle. A hybrid genetic algorithm with an additional search in the vicinity of the best solution by the conjugate gradient method was used for optimization. This numerical method for solving an optimal control problem is universal and applicable to a wide class of mathematical models.

In the course of the study, various modes of substrate supply to the digesters, temperature environments, and types of raw materials were analyzed. It is shown that the rate of biogas production in the continuous feed mode is 1.4–1.9 times higher in the mesophilic medium (1.9–3.2 times in the thermophilic medium) than in the periodic mode over the period of complete fermentation, which is explained by the higher feed rate of the substrate and the greater concentration of nutrients in it. However, the biogas yield over the period of complete fermentation in the periodic mode is twice the yield over the period of a complete change of the substrate in the methane tank in the continuous mode, which indicates incomplete processing of the substrate in the second case. The rate of biogas formation for a thermophilic medium in continuous mode at the optimal rate of raw material supply is three times higher than for a mesophilic medium. Comparison of the biogas output for various types of raw materials shows that the highest output is observed for poultry farm waste and the lowest for cattle farm waste, which is related to the nutrient content per unit of substrate of each type.

  4. Stepin Y.P., Leonov D.G., Papilina T.M., Stepankina O.A.
    System modeling, risks evaluation and optimization of a distributed computer system
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1349-1359

The article deals with the problem of the operational reliability of a distributed system. The system core is an open integration platform that provides the interaction of varied software for modeling gas transportation; some of it is accessed through thin clients under the cloud "software as a service" model. Mathematical models of operation, data transmission, and computation ensure the functioning of an automated dispatching system for oil and gas transportation. The paper presents a system solution based on the theory of Markov random processes and considers the stable operation stage. The stationary operation mode of the Markov chain with continuous time and discrete states is described by a system of Chapman–Kolmogorov equations with respect to the average numbers (mathematical expectations) of the objects in certain states. The objects of research are both system elements that are present in large numbers (thin clients and computing modules) and individual ones (a server and a network manager, i.e. a message broker). Together they form interacting Markov random processes: the interaction arises because the transition probabilities in one group of elements depend on the average numbers of elements in the other groups.
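For a single element type, the stationary Chapman–Kolmogorov (global balance) equations reduce to a linear system. The sketch below is a hypothetical three-state example, not the paper's model: given a generator matrix Q of transition rates, the stationary distribution π solves πQ = 0 with the normalization Σπ = 1; average numbers of objects in each state are then N·π for N identical elements.

```python
import numpy as np

# Hypothetical 3-state element (e.g. a thin client): 0 = idle,
# 1 = computing, 2 = failed.  Q[i, j] is the transition rate i -> j;
# each row of a generator matrix sums to zero.
Q = np.array([[-1.0,  0.8,  0.2],
              [ 2.0, -2.5,  0.5],
              [ 1.0,  0.0, -1.0]])

# Stationary global balance equations: pi @ Q = 0, with the
# normalization sum(pi) = 1 replacing one redundant equation.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(pi @ Q, 0, atol=1e-9)      # balance holds
assert abs(pi.sum() - 1) < 1e-9 and (pi >= 0).all()
```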

The authors propose a multi-criteria dispersion model of risk assessment for such systems (both in the broad and in the narrow sense, in accordance with the IEC standard). The risk is the standard deviation of the estimated object parameter from its average value. The dispersion risk model makes it possible to define optimality criteria and the risks of the functioning of the whole system. In particular, for a thin client the following are calculated: the risk of lost profit, the total risk of losses due to non-productive element states, and the total risk of losses over all system states.

Finally, the paper proposes compromise schemes for solving the multi-criteria problem of choosing the optimal operation strategy based on the selected set of compromise criteria.

  5. Belotelov N.V., Apal’kova T.G., Mamkin V.V., Kurbatova Y.A., Olchev A.V.
    Some relationships between thermodynamic characteristics and water vapor and carbon dioxide fluxes in a recently clear-cut area
    Computer Research and Modeling, 2017, v. 9, no. 6, pp. 965-980

The temporal variability of the exergy of short-wave and long-wave radiation and its relationships with sensible heat, water vapor (H2O) and carbon dioxide (CO2) fluxes on a recently clear-cut area in a mixed coniferous and small-leaved forest in the Tver region is discussed. On the basis of the analysis of the radiation and exergy efficiency coefficients suggested by Yu.M. Svirezhev, it was shown that during the first eight months after clearcutting the forest ecosystem functioned as a "heat engine", i.e. the processes of energy dissipation dominated over the processes of biomass production. To validate these findings, a statistical analysis of the temporal variability of meteorological parameters, as well as of the daily fluxes of sensible heat, H2O and CO2, was carried out using trigonometric polynomials. Statistical models that depend linearly on the exergy of short-wave and long-wave radiation were obtained for the mean daily values of CO2 fluxes, the gross primary production of the regenerating vegetation, and the sensible heat fluxes. The analysis of these dependences also confirmed the results obtained from the radiation and exergy efficiency coefficients. Splitting the time series into separate intervals, e.g. "spring–summer" and "summer–autumn", revealed that the statistically significant relationships between the atmospheric fluxes and exergy strengthened in the summer months as the clear-cut area was overgrown by grassy and young woody vegetation. The analysis of linear relationships between the time series of latent heat fluxes and exergy showed them to be statistically insignificant, whereas the linear relationships between latent heat fluxes and temperature were statistically significant. The air temperature was the key factor improving the accuracy of the models, whereas the effect of exergy was insignificant.
The results indicate that during active vegetation regeneration within the clear-cut area the seasonal variability of surface evaporation is governed mainly by temperature variation.

    Views (last year): 15. Citations: 1 (RSCI).
  6. Sokolov A.V., Mamkin V.V., Avilov V.K., Tarasov D.L., Kurbatova Y.A., Olchev A.V.
    Application of a balanced identification method for gap-filling in CO2 flux data in a sphagnum peat bog
    Computer Research and Modeling, 2019, v. 11, no. 1, pp. 153-171

The method of balanced identification was used to describe the response of the Net Ecosystem Exchange of CO2 (NEE) to changes in environmental factors and to fill the gaps in continuous CO2 flux measurements in a sphagnum peat bog in the Tver region. The measurements were carried out in the peat bog by the eddy covariance method from August to November 2017. Due to rainy weather and recurrent periods of low atmospheric turbulence, the proportion of gaps in the measured CO2 fluxes at the experimental site over the entire measurement period exceeded 40%. The model developed for gap filling in the long-term experimental data considers NEE as the difference between Ecosystem Respiration (RE) and Gross Primary Production (GPP), i.e. the key processes of ecosystem functioning, and their dependence on incoming solar radiation (Q), soil temperature (T), water vapor pressure deficit (VPD) and ground water level (WL). The balanced identification method applied for this purpose searches for the optimal ratio between model simplicity and data-fitting accuracy, namely the ratio providing the minimum of the modeling error estimated by cross-validation. The obtained numerical solutions are characterized by the minimum necessary nonlinearity (curvature), which provides sufficient interpolation and extrapolation properties of the developed models; this is particularly important for filling the missing NEE values. Examination of the temporal variability of NEE and of the key environmental factors revealed a statistically significant dependence of GPP on Q, T and VPD, and of RE on T and WL, respectively. At the same time, the inaccuracy of the applied method in simulating the mean daily NEE was less than 10%, and the error of its NEE estimates was higher than that of the REddyProc model, which considers the influence of a smaller number of environmental parameters on NEE.
Analysis of the gap-filled NEE time series allowed us to derive the diurnal and day-to-day variability of NEE and to obtain the cumulative CO2 fluxes in the peat bog for the selected summer–autumn period. It was shown that the rate of CO2 fixation by the peat bog vegetation in August was significantly higher than the rate of ecosystem respiration, while from September onward, owing to the strong decrease of GPP, the peat bog turned into a consistent source of CO2 for the atmosphere.
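The core idea of balanced identification (pick the simplest model whose cross-validation error is minimal, then use it to fill gaps) can be illustrated on synthetic data. This is a toy sketch, not the authors' implementation: model complexity is represented by polynomial degree, and the degree is chosen by k-fold cross-validation on the observed points only.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)
observed = rng.random(t.size) > 0.4            # ~40 % of points are gaps

def cv_error(degree, t_obs, y_obs, k=5):
    """k-fold cross-validation error of a polynomial model of given degree."""
    idx = np.arange(t_obs.size)
    err = 0.0
    for fold in range(k):
        test = idx % k == fold
        coef = np.polyfit(t_obs[~test], y_obs[~test], degree)
        err += np.mean((np.polyval(coef, t_obs[test]) - y_obs[test]) ** 2)
    return err / k

# Balance simplicity vs. fit: the degree minimizing the CV error.
degrees = range(1, 10)
best = min(degrees, key=lambda d: cv_error(d, t[observed], y[observed]))
coef = np.polyfit(t[observed], y[observed], best)
filled = np.where(observed, y, np.polyval(coef, t))   # gap-filled series
assert filled.shape == y.shape and np.isfinite(filled).all()
```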

    Views (last year): 19.
  7. Katasev A.S.
    Neuro-fuzzy model of fuzzy rules formation for objects state evaluation in conditions of uncertainty
    Computer Research and Modeling, 2019, v. 11, no. 3, pp. 477-492

This article solves the problem of constructing a neuro-fuzzy model for the formation of fuzzy rules and of using them for object state evaluation under uncertainty. Traditional methods of mathematical statistics or simulation modeling do not allow adequate models of objects to be built under these conditions; therefore the solution of many problems now relies on intelligent modeling technologies based on fuzzy logic methods. The traditional approach to constructing fuzzy systems requires involving an expert to formulate the fuzzy rules and specify the membership functions used in them. To eliminate this drawback, it is relevant to automate the formation of fuzzy rules on the basis of machine learning methods and algorithms. One approach to this problem is to build a fuzzy neural network and train it on data characterizing the object under study. Implementing this approach required choosing the type of fuzzy rules with regard to the specifics of the processed data, and developing a logical inference algorithm for rules of the selected type. The steps of this algorithm determine the number and functionality of the layers in the structure of the fuzzy neural network. A training algorithm for the fuzzy neural network was also developed. After training the network, a system of fuzzy production rules is formed. A software package implementing the developed mathematical tools was built, and with it the classifying ability of the formed fuzzy rules was studied on data analysis examples from the UCI Machine Learning Repository. The results showed that the classifying ability of the formed fuzzy rules is not inferior in accuracy to other classification methods. In addition, the logical inference algorithm on fuzzy rules allows successful classification even when part of the initial data is missing.
As a practical test, fuzzy rules were generated for assessing the state of water lines in the oil industry. From initial data on 303 water lines, a base of 342 fuzzy rules was formed; its practical approbation showed high efficiency in solving the problem.
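The basic mechanics of inference on fuzzy production rules can be shown in miniature. The rule base, linguistic terms, and variable names below are hypothetical, invented for illustration (they are not from the article): triangular membership functions, min for AND, and max aggregation over rules with the same conclusion.

```python
def tri(a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return lambda x: max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

terms = {"low": tri(-5, 0, 5), "high": tri(0, 5, 10)}

# Hypothetical rule base:
#   IF pressure is low  AND wear is high THEN state = bad
#   IF pressure is high AND wear is low  THEN state = good
rules = [
    (("low", "high"), "bad"),
    (("high", "low"), "good"),
]

def classify(pressure, wear):
    """Fire every rule with min-AND, aggregate by max, pick the best class."""
    scores = {}
    for (t_p, t_w), label in rules:
        strength = min(terms[t_p](pressure), terms[t_w](wear))
        scores[label] = max(scores.get(label, 0.0), strength)
    return max(scores, key=scores.get)

assert classify(1.0, 9.0) == "bad"
assert classify(9.0, 1.0) == "good"
```

A real neuro-fuzzy system would learn the membership-function parameters and the rule base from data; here they are fixed by hand only to keep the inference step visible.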

    Views (last year): 12.
  8. Shibkov A.A., Kochegarov S.S.
    Computer and physical-chemical modeling of the evolution of a fractal corrosion front
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 105-124

Corrosion damage to metals and alloys is one of the main problems of the strength and durability of metal structures and products operated in contact with chemically aggressive environments. Recently there has been growing interest in computer modeling of the evolution of corrosion damage, especially pitting corrosion, aimed at a deeper understanding of the corrosion process and of its impact on the morphology, the physical and chemical properties of the surface, and the mechanical strength of the material. This is mainly due to the complexity of analytical studies and the high cost of experimental in situ studies of real corrosion processes. However, the computing power of modern computers allows corrosion to be calculated with high accuracy only over relatively small surface areas. The development of new mathematical models that allow large areas to be calculated for predicting the evolution of corrosion damage to metals is therefore an urgent problem.

In this paper, the evolution of the corrosion front during the interaction of a polycrystalline metal surface with a liquid aggressive medium was studied using a computer model based on a cellular automaton. A distinctive feature of the model is the specification of the solid-body structure in the form of Voronoi polygons, used for modeling polycrystalline alloys. Corrosion destruction was modeled by specifying the probability function of transitions between cells of the cellular automaton, taking into account that the corrosion resistance of the grains varies due to crystallographic anisotropy. It is shown that this leads to the formation of a rough phase boundary during the corrosion process. The decrease in the concentration of active particles in the solution of the aggressive medium during the chemical reaction leads to corrosion attenuation in a finite number of calculation iterations. It is established that the final morphology of the phase boundary has a fractal structure with a dimension of 1.323 ± 0.002, close to the dimension of the gradient percolation front, which is in good agreement with the fractal dimension of the etching front of the polycrystalline aluminum–magnesium alloy AlMg6 etched with a concentrated solution of hydrochloric acid. It is shown that corrosion of a polycrystalline metal in a liquid aggressive medium is a new example of a topochemical process whose kinetics is described by the Kolmogorov–Johnson–Mehl–Avrami theory.
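A heavily simplified version of such a cellular automaton fits in a short script. This sketch omits the Voronoi grain structure and anisotropy of the paper's model and keeps only two of its ingredients: probabilistic dissolution of exposed boundary cells, and attenuation of corrosion as the finite supply of active particles in the medium is consumed.

```python
import random

random.seed(1)
W, H = 40, 20
solid = [[True] * W for _ in range(H)]         # row 0 touches the medium
particles = 300                                # active particles in solution
p_corrode = 0.2                                # per-contact reaction probability

def exposed(i, j):
    """A solid cell is exposed if it touches the medium: the top boundary
    or an already dissolved neighbor."""
    if i == 0:
        return True
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < H and 0 <= nj < W and not solid[ni][nj]:
            return True
    return False

while particles > 0:
    front = [(i, j) for i in range(H) for j in range(W)
             if solid[i][j] and exposed(i, j)]
    reacted = [c for c in front if random.random() < p_corrode]
    if not reacted:
        continue
    for i, j in reacted[:particles]:
        solid[i][j] = False                    # dissolve the cell
    particles -= min(len(reacted), particles)  # medium is depleted

# The reaction self-terminates once the active particles are consumed,
# leaving a rough corrosion front in `solid`.
assert particles == 0
assert sum(not c for row in solid for c in row) == 300
```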

  9. Samoylenko I.A., Kuleshov I.V., Raigorodsky A.M.
    The model of two-level intergroup competition
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 355-368

In the mid-2000s, scientists studying the functioning of insect communities identified four basic patterns in the organizational structure of such communities. (i) Cooperation is more developed in groups with strong kinship. (ii) Cooperation in species with large colony sizes is often more developed than in species with small colony sizes, and small colonies often exhibit greater internal reproductive conflict and less morphological and behavioral specialization. (iii) Within a single species, brood size (i.e., in a sense, efficiency) per capita usually decreases as colony size increases. (iv) Advanced cooperation tends to occur when resources are limited and intergroup competition is fierce. Thinking of the functioning of a group of organisms as a two-level competitive market, in which individuals face the problem of allocating their energy between investment in intergroup competition and investment in intragroup competition (i.e., the internal struggle for the share of resources obtained through intergroup competition), we can compare this biological situation with the economic phenomenon of "coopetition": the cooperation of competing agents with the goal of later competitively dividing the resources won. Economic research has described effects similar to (ii): in the competition of a large and a small group, the optimal strategy of the large group is to squeeze out the second group completely and monopolize the market (i.e., large groups tend to act cooperatively); and to (iii): there are conditions under which the size of a group negatively affects the productivity of each of its members (the paradox of group size, or the Ringelmann effect).
The general idea behind modeling such effects is proportionality: each individual (a rational agent) decides what share of its effort to invest in intergroup competition and what share in intragroup competition. The group's gain must be proportional to its total investment in competition, while the individual's gain is proportional to its contribution to intragroup competition. Despite the prevalence of empirical observations, no game-theoretic model had yet been introduced in which the empirically observed effects could be confirmed. This paper proposes a model that eliminates the problems of previously existing ones, and simulation of Nash equilibrium states within the proposed model allows the above effects to be observed in numerical experiments.
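The proportionality idea in its crudest form already produces a Ringelmann-type effect. The snippet below is a minimal illustration, not the paper's model: in a proportional (Tullock-style) two-group contest with fixed equal efforts, the group's share of the resource grows with its size, yet the per-capita payoff falls.

```python
def per_capita(n, m, x=1.0, y=1.0, R=100.0):
    """Per-member payoff in a proportional two-group contest: a group of n
    members each investing x wins the share n*x / (n*x + m*y) of the
    resource R, divided equally among its members."""
    return R * n * x / (n * x + m * y) / n

# Ringelmann effect: individual payoff falls as own group size grows,
# even though the group's total share of R increases.
payoffs = [per_capita(n, m=10) for n in range(1, 21)]
assert all(a > b for a, b in zip(payoffs, payoffs[1:]))
```

The paper's actual contribution is a two-level game where the effort split is chosen strategically and the effects above appear at Nash equilibrium; this sketch only shows why proportional sharing is the natural starting point.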

  10. Puchinin S.M., Korolkov E.R., Stonyakin F.S., Alkousa M.S., Vyguzov A.A.
    Subgradient methods with B.T. Polyak-type step for quasiconvex minimization problems with inequality constraints and analogs of the sharp minimum
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 105-122

In this paper, we consider two variants of the concept of a sharp minimum for mathematical programming problems with a quasiconvex objective function and inequality constraints. We investigate the problem of describing a variant of a simple subgradient method with switching between productive and non-productive steps for which, on a class of problems with Lipschitz functions, convergence to the set of exact solutions or its vicinity at the rate of a geometric progression can be guaranteed. It is important that implementing the proposed method does not require knowing the sharp minimum parameter, which is usually difficult to estimate in practice. To overcome this difficulty, the authors propose a step-adjustment procedure similar to the one previously proposed by B. T. Polyak. However, in comparison with the class of problems without constraints, there arises the problem of knowing the exact minimal value of the objective function. The paper describes conditions on the inexactness of this information that make it possible to preserve convergence at the rate of a geometric progression to a vicinity of the set of minimum points of the problem. Two analogs of the concept of a sharp minimum for problems with inequality constraints are considered. In the first one, the problem of approximating the exact solution arises only up to a pre-selected accuracy level; here we consider the case when the minimal value of the objective function is unknown and only an approximation of this value is given. We describe conditions on the inexact minimal value of the objective function under which convergence to a vicinity of the desired set of points at the rate of a geometric progression is still preserved. The second considered variant of the sharp minimum does not depend on the desired accuracy of the problem.
For it, we propose a slightly different way of checking whether a step is productive, which allows us to guarantee convergence of the method to the exact solution at the rate of a geometric progression in the case of exact information. Convergence estimates are proved under conditions of weak convexity of the constraints and some restrictions on the choice of the initial point, and a corollary is formulated for the convex case, in which the additional assumption on the choice of the initial point is no longer needed. For both approaches it is proved that the distance from the current point to the set of solutions decreases as the number of iterations grows; this, in particular, makes it possible to restrict the requirements on the properties of the functions used (Lipschitz continuity, sharp minimum) to a bounded set. Several computational experiments are performed, including the truss topology design problem.
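The unconstrained building block of such methods, the subgradient step with a Polyak-type step size, is easy to demonstrate. This sketch omits the paper's productive/non-productive switching for constraints and uses a toy problem with a sharp minimum (the l1-norm, with the minimal value f* = 0 assumed known).

```python
import numpy as np

def polyak_subgradient(f, subgrad, x, f_star, iters=100):
    """Subgradient method with the Polyak-type step
    h_k = (f(x_k) - f_star) / ||g_k||**2.
    Under a sharp-minimum condition this converges to the solution set
    at the rate of a geometric progression."""
    history = [f(x)]
    for _ in range(iters):
        g = subgrad(x)
        gap = f(x) - f_star
        if gap <= 1e-12 or not g.any():
            break
        x = x - (gap / (g @ g)) * g
        history.append(f(x))
    return x, history

f = lambda x: np.abs(x).sum()          # sharp minimum at the origin, f* = 0
subgrad = lambda x: np.sign(x)         # a subgradient of the l1-norm
x, hist = polyak_subgradient(f, subgrad, np.array([3.0, -1.0]), 0.0)
assert f(x) < 1e-10                    # the gap f(x_k) - f* collapses
```

The step size needs only the current function gap and the subgradient norm, which is exactly why the inexactness of f* becomes the central issue once constraints are added.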


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

The journal is included in the RSCI
