Search results for 'data model':
Articles found: 232
  1. Ilyasov D.V., Molchanov A.G., Glagolev M.V., Suvorov G.G., Sirin A.A.
    Modelling of carbon dioxide net ecosystem exchange of hayfield on drained peat soil: land use scenario analysis
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1427-1449

    The data of episodic field measurements of carbon dioxide balance components (soil respiration, Rsoil; ecosystem respiration, Reco; net ecosystem exchange, NEE) of a hayfield in use and an abandoned one are interpreted by modelling. The field measurements were carried out during five field campaigns in 2018 and 2019 on the drained part of the Dubna Peatland in Taldom District, Moscow Oblast, Russia. The territory lies within the humid continental climate zone. The peatland was drained for milled peat extraction. After extraction stopped, the residual peat deposit (1–1.5 m) was ploughed and grassed (Poa pratensis L.) for hay production. The current ground water level (GWL) varies from 0.3–0.5 m below the surface during wet periods to 1.0 m during dry ones. The daily dynamics of CO2 fluxes were measured by the dynamic chamber method in 2018 (August) and 2019 (May, June, August) on an abandoned ditch spacing with sanitary mowing only once in 5 years and on a ditch spacing with annual mowing. NEE and Reco were measured on sites with the original vegetation, and Rsoil after vegetation removal. To model the seasonal dynamics of NEE, the dependences of its components (Reco, Rsoil, and gross ecosystem-atmosphere exchange of carbon dioxide, GEE) on soil and air temperature, GWL, photosynthetically active radiation, and belowground and aboveground plant biomass were used. The parametrization of the models was carried out with regard to the stability of the coefficients, estimated by the bootstrap method. R2 (α = 0.05) between simulated and measured Reco was 0.44 (p < 0.0003) on the abandoned hayfield and 0.59 (p < 0.04) on the one in use; for GEE it was 0.57 (p < 0.0002) and 0.77 (p < 0.00001), respectively. Numerical experiments were carried out to assess the influence of different haymaking regimes on NEE. It was found that NEE for the season (May 15 – September 30) did not differ much between the hayfield without mowing (4.5±1.0 tC·ha–1·season–1) and the abandoned one (6.2±1.4). Single mowing during the season increases NEE to 6.5±0.9, and double mowing to 7.5±1.4 tC·ha–1·season–1, which means larger carbon losses and CO2 emission into the atmosphere. Carbon loss on the hayfield under both the single and the double mowing scenario was comparable with that on the abandoned hayfield. The removed phytomass for single and double mowing was 0.8±0.1 and 1.4±0.1 tC·ha–1·season–1 (at 45% carbon content in dry phytomass), or 3.0 and 4.4 t·ha–1·season–1 of hay (at 17% moisture content). In comparison with the abandoned hayfield, the removal of 0.8±0.1 tC·ha–1·season–1 of biomass at single mowing and 1.4±0.1 tC·ha–1·season–1 at double mowing is accompanied by an increase in carbon loss through CO2 emission, i.e., a growth of NEE by 0.3±0.1 and 1.3±0.6 tC·ha–1·season–1, respectively. This corresponds to a growth of NEE of 0.4±0.2 tC·ha–1·season–1 per ton of phytomass withdrawn per hectare at single mowing and 0.9±0.7 tC·ha–1·season–1 at double mowing. Therefore, single mowing is more justified in terms of carbon loss than double mowing: extensive mowing does not increase CO2 emission into the atmosphere much and, in addition, allows part of the carbon loss to be "replaced" by agricultural production.
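
    As a concrete illustration of the bootstrap-based check of coefficient stability mentioned above, here is a minimal Python sketch: it fits an assumed exponential Reco(T) dependence to synthetic stand-in data and bootstraps the coefficients. The functional form, variable names, and data are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch (not the paper's model): bootstrap check of the
# stability of coefficients in an assumed exponential Reco(T) dependence.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for field data: soil temperature (degC) and Reco flux.
t_soil = rng.uniform(5.0, 25.0, size=40)
reco = 0.8 * np.exp(0.09 * t_soil) * rng.lognormal(0.0, 0.15, size=40)

def fit_exponential(t, r):
    # Fit ln(R) = ln(a) + b*T by ordinary least squares.
    b, log_a = np.polyfit(t, np.log(r), 1)
    return np.exp(log_a), b

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(t_soil), size=len(t_soil))  # resample with replacement
    boot.append(fit_exponential(t_soil[idx], reco[idx]))
boot = np.array(boot)

print("a 95% CI:", np.percentile(boot[:, 0], [2.5, 97.5]))
print("b 95% CI:", np.percentile(boot[:, 1], [2.5, 97.5]))
```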

  2. Andreeva A.A., Anand M., Lobanov A.I., Nikolaev A.V., Panteleev M.A.
    Using extended ODE systems to investigate the mathematical model of the blood coagulation
    Computer Research and Modeling, 2022, v. 14, no. 4, pp. 931-951

    Many properties of the solutions of systems of ordinary differential equations are determined by the properties of the equations in variations. An ODE system that includes both the original nonlinear system and the equations in variations will be called an extended system below. When studying the properties of the Cauchy problem for a system of ordinary differential equations, the transition to the extended system allows one to study many subtle properties of solutions. For example, it makes it possible to increase the order of approximation of numerical methods, provides approaches to constructing a sensitivity function without numerical differentiation procedures, and allows methods of increased convergence order to be used for solving the inverse problem. The authors used the Broyden method, which belongs to the class of quasi-Newtonian methods. The Rosenbrock method with complex coefficients was used to solve the stiff systems of ordinary differential equations. In our case, it is equivalent to a second-order approximation method for the extended system.

    As an example of the proposed approach, several related mathematical models of the blood coagulation process were considered. Based on the analysis of the numerical results, the conclusion was drawn that a description of the factor XI positive feedback loop must be included in the system of model equations. Estimates of some reaction constants based on the numerical solution of the inverse problem were given.

    The effect of factor V release on platelet activation was considered. The modification of the mathematical model made it possible to achieve quantitative agreement between the dynamics of thrombin production and experimental data for an artificial system. Based on the sensitivity analysis, the hypothesis was tested that the lipid membrane composition (the number of binding sites for the various clotting factors, except for thrombin sites) does not influence the dynamics of the process.
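
    To make the idea of an extended system concrete, here is a hedged Python sketch that pairs a toy scalar ODE with its equation in variations, so the sensitivity to a parameter is integrated alongside the state instead of being obtained by numerical differentiation. The toy equation and the LSODA solver are stand-ins, not the paper's coagulation model or its complex-coefficient Rosenbrock method.

```python
# Hedged sketch: a toy extended system pairing dy/dt = -k*y^2 with its
# equation in variations for s = dy/dk.
import numpy as np
from scipy.integrate import solve_ivp

k = 0.5  # hypothetical rate constant

def extended_rhs(t, z):
    y, s = z
    dy = -k * y**2                  # original nonlinear equation
    ds = -2.0 * k * y * s - y**2    # equation in variations: ds/dt = f_y*s + f_k
    return [dy, ds]

# Integrate state and sensitivity together; LSODA stands in for a stiff solver.
sol = solve_ivp(extended_rhs, (0.0, 10.0), [1.0, 0.0], method="LSODA", rtol=1e-8)
print("y(T) =", sol.y[0, -1], " dy/dk(T) =", sol.y[1, -1])
```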

  3. Dubinina M.G.
    Spatio-temporal models of ICT diffusion
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1695-1712

    The article proposes a space-time approach to modeling the diffusion of information and communication technologies based on the Fisher – Kolmogorov – Petrovsky – Piskunov equation, in which the diffusion kinetics is described by the Bass model, widely used to model the diffusion of innovations in the market. For this equation, the equilibrium positions are studied and, on the basis of singular perturbation theory, an approximate solution is obtained in the form of a traveling wave, i.e., a solution that propagates at a constant speed while maintaining its shape in space. The wave speed shows how much the "spatial" characteristic that determines a given level of technology dissemination changes in a unit time interval. This speed is significantly higher than the speed of propagation due to diffusion alone. Constructing such an autowave solution makes it possible to estimate the time required for the subject of research to reach the current indicator of the leader.

    The obtained approximate solution was then applied to assess the factors affecting the rate of dissemination of information and communication technologies in the federal districts of the Russian Federation. Various socio-economic indicators were considered as "spatial" variables for the diffusion of mobile communications among the population. Growth poles in which innovation occurs are usually characterized by the highest values of the "spatial" variables. For Russia, Moscow is such a growth pole; therefore, indicators of the federal districts relative to Moscow's indicators were considered as factor indicators. The best approximation to the initial data was obtained for the ratio of the share of R&D costs in GRP to Moscow's indicator, averaged over the period 2000–2009. It was found that at the initial stage of the spread of mobile communications the lag behind the capital was less than one year for the Ural Federal District; 1.4 years for the Central and Northwestern Federal Districts; less than two years for the Volga, Siberian, Southern, and Far Eastern Federal Districts; and a little more than 2 years for the North Caucasian Federal District. In addition, estimates were obtained of the lag of the spread of digital technologies (intranet, extranet, etc.) used by organizations of the federal districts of the Russian Federation behind Moscow's indicators.
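
    The abstract does not reproduce the equation itself; combining the standard FKPP reaction-diffusion form with Bass adoption kinetics presumably gives the following (a reconstruction with assumed notation, not taken from the paper):

$$
\frac{\partial F}{\partial t} = D\,\frac{\partial^2 F}{\partial x^2} + (p + qF)(1 - F),
\qquad F(x,t) = \varphi(x - vt),
$$

    where $F$ is the share of adopters, $p$ and $q$ are the Bass coefficients of innovation and imitation, $D$ is the diffusion coefficient, and $\varphi(x - vt)$ is the traveling-wave ansatz propagating at constant speed $v$.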

  4. Moiseev N.A., Nazarova D.I., Semina N.S., Maksimov D.A.
    Changepoint detection on financial data using deep learning approach
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 555-575

    The purpose of this study is to develop a methodology for detecting change points in time series, including financial data. The theoretical basis of the study rests on research devoted to the analysis of structural changes in financial markets, descriptions of the proposed change-point detection algorithms, and the peculiarities of building classical and deep machine learning models for this type of problem. The development of such tools is of interest to investors and other stakeholders, providing them with additional approaches to the effective analysis of financial markets and interpretation of available data.

    To address the research objective, a neural network was trained. In the course of the study, several ways of forming the training sample were considered, differing in the nature of their statistical parameters. In order to improve the quality of training and obtain more accurate results, a feature-generation methodology was developed to produce the features that serve as input data for the neural network. These features, in turn, were derived from an analysis of the mathematical expectations and standard deviations of the time series over specific intervals. The potential for combining these features to achieve more stable results was also investigated.

    The results of model experiments were analyzed to compare the effectiveness of the proposed model with other existing changepoint detection algorithms that have gained widespread usage in practical applications. A specially generated dataset, developed using proprietary methods, was utilized as both training and testing data. Furthermore, the model, trained on various features, was tested on daily data from the S&P 500 index to assess its effectiveness in a real financial context.

    Alongside the description of the principles of the model's operation, possibilities for its further improvement are considered, including modernization of the proposed model's structure, optimization of training data generation, and feature formation; a sketch of the latter is given below. The authors also set themselves the task of advancing existing concepts toward real-time changepoint detection.
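
    A minimal sketch of the kind of feature generation described above: means and standard deviations of the series in windows on either side of each candidate point, stacked as input rows for the network. The window size, feature set, and data are illustrative assumptions.

```python
# Illustrative feature generation for changepoint detection: compare the
# mean and standard deviation of a series in windows before and after
# each candidate point; the resulting rows feed a classifier.
import numpy as np

def window_features(x: np.ndarray, w: int) -> np.ndarray:
    feats = []
    for t in range(w, len(x) - w):
        left, right = x[t - w:t], x[t:t + w]
        feats.append([left.mean(), right.mean(),
                      left.std(), right.std(),
                      right.mean() - left.mean(),
                      right.std() - left.std()])
    return np.asarray(feats)

rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0, 1, 500), rng.normal(1.5, 2, 500)])
X = window_features(series, w=50)  # input rows for the neural network
```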

  5. Kamenev G.K., Kamenev I.G.
    Multicriterial metric data analysis in human capital modelling
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1223-1245

    The article describes a model of a human in the informational economy and demonstrates a multicriteria optimization approach to the metric analysis of model-generated data. The traditional approach involves identifying the model from time series and then using it for prediction. However, this is not possible when some variables are not explicitly observed and only some typical borders or population features are known, which is often the case in the social sciences, making some models purely theoretical. To avoid this problem, we propose a method of metric data analysis (MMDA) for the identification and study of such models, based on the construction and analysis of Kolmogorov – Shannon metric nets of the general population in a multidimensional space of social characteristics. Using this method, the coefficients of the model are identified and the features of its phase trajectories are studied. In this paper, we describe a human according to their role in information processing, considering awareness and cognitive abilities. We construct two lifetime indices of individual human capital: creative (generalizing cognitive abilities) and productive (generalizing the amount of information mastered by a person) and formulate the problem of their multicriteria (two-criteria) optimization taking life expectancy into account. This approach allows us to identify and economically justify new requirements for the education system and the information environment of human existence. It is shown that a Pareto frontier exists in the optimization problem, and its type depends on the mortality rates: at high life expectancy there is one dominant solution, while at lower life expectancy there are different types of Pareto frontier. In particular, the Pareto principle applies to Russia: a significant increase in an individual's creative human capital (summarizing cognitive abilities) is possible at the cost of a small decrease in productive human capital (summarizing awareness). It is shown that an increase in life expectancy makes the competence approach (focused on the development of cognitive abilities) optimal, while at low life expectancy the knowledge approach is preferable.
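
    For readers unfamiliar with the two-criteria setting, here is a generic sketch of extracting a Pareto frontier from sampled index pairs with both criteria maximized; it illustrates the concept only and is unrelated to the paper's MMDA machinery.

```python
# Generic Pareto-frontier extraction over a cloud of (creative, productive)
# index pairs, both maximized: keep the points no other point dominates.
import numpy as np

def pareto_frontier(points: np.ndarray) -> np.ndarray:
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        # A dominator is at least as good in every criterion and strictly
        # better in at least one.
        dominators = np.all(points >= p, axis=1) & np.any(points > p, axis=1)
        if dominators.any():
            keep[i] = False
    return points[keep]

pts = np.random.default_rng(2).random((200, 2))
front = pareto_frontier(pts)  # non-dominated index pairs
```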

  6. The paper presents the results of applying a scheme of very high accuracy and resolution to obtain numerical solutions of the Navier – Stokes equations of a compressible gas describing the occurrence and development of instability of a two-dimensional laminar boundary layer on a flat plate. The peculiarity of the conducted studies is the absence of the commonly used artificial exciters of instability in the direct numerical modeling. The multioperator scheme used made it possible to observe the subtle effects of the birth of unstable modes and the complex nature of their development, presumably thanks to its small approximation errors. A brief description of the scheme design and its main properties is given. The formulation of the problem and the method of obtaining initial data are described, which makes it possible to reach the established non-stationary regime fairly quickly. A technique is given that allows detecting flow fluctuations with amplitudes many orders of magnitude smaller than the mean values. A time-dependent picture of the appearance of packets of Tollmien – Schlichting waves with varying intensity in the vicinity of the leading edge of the plate and their downstream propagation is presented. The presented amplitude spectra, with peak values expanding in the downstream regions, indicate the excitation of new unstable modes other than those occurring in the vicinity of the leading edge. The analysis of the evolution of instability waves in time and space showed agreement with the main conclusions of the linear theory. The numerical solutions obtained seem to describe for the first time the complete scenario of the possible development of Tollmien – Schlichting instability, which often plays an essential role at the initial stage of the laminar-turbulent transition. They open up the possibility of full-scale numerical modeling of this process, which is extremely important for practice, together with a similar study of the spatial boundary layer.
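
    One generic way to expose fluctuations orders of magnitude below the mean flow, in the spirit of the detection technique mentioned above (the paper's actual procedure may differ): subtract the mean from a probe signal and inspect the amplitude spectrum of the residual.

```python
# Generic diagnostic: amplitude spectrum of a velocity probe signal after
# mean removal, exposing small-amplitude instability waves.
import numpy as np

def fluctuation_spectrum(u: np.ndarray, dt: float):
    resid = u - u.mean()                       # strip the large mean component
    amp = np.abs(np.fft.rfft(resid)) / len(u)  # one-sided amplitude spectrum
    freq = np.fft.rfftfreq(len(u), dt)
    return freq, amp
```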

  7. Timiryanova V.M., Lakman I.A., Larkin M.M.
    Retail forecasting on high-frequency depersonalized data
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1713-1734

    Technological development determines the emergence of data highly detailed in time and space, which expands the possibilities of analysis, allowing us to consider consumer decisions and the competitive behavior of enterprises in all their diversity, taking into account the context of the territory and the characteristics of time periods. Despite the promise of such studies, they are currently scarce in the scientific literature, owing to the range of problems whose solution is considered in this paper. The article draws attention to the complexity of analyzing depersonalized high-frequency data and to the possibility of modeling changes in consumption in time and space on their basis. The features of the new type of data are considered on the example of real depersonalized data received from the fiscal data operator "First OFD" (JSC "Energy Systems and Communications"). It is shown that along with the spectrum of problems inherent in high-frequency data, there are disadvantages associated with the process of generating data on the sellers' side, which requires wider use of data mining tools. A series of statistical tests were carried out on the data under consideration, including a unit-root test, a test for unobserved individual effects, and tests for serial correlation and for cross-sectional dependence in panels, etc. The presence of spatial autocorrelation in the data was tested using modified Lagrange multiplier tests. The tests showed the presence of serial correlation and spatial dependence in the data, which makes it expedient to apply panel and spatial analysis methods to the high-frequency data accumulated by fiscal operators. The constructed models made it possible to substantiate the spatial relationship of sales growth and its dependence on the day of the week. The limitation on increasing the predictive ability of the constructed models and on their further elaboration through the inclusion of explanatory factors was the lack of openly accessible statistics grouped at the required detail in time and space, which makes the creation of high-frequency, geographically structured databases relevant.
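
    As a pointer to what such spatial diagnostics measure, here is a minimal Moran's I statistic, a standard spatial-autocorrelation measure related to (though simpler than) the modified Lagrange multiplier tests used in the paper; y and W are hypothetical placeholders for an observed variable and a spatial weights matrix between locations.

```python
# Moran's I: a basic spatial-autocorrelation diagnostic. y holds a variable
# observed at n locations; W is an n-by-n spatial weights matrix.
import numpy as np

def morans_i(y: np.ndarray, W: np.ndarray) -> float:
    z = y - y.mean()                              # deviations from the mean
    return (len(y) / W.sum()) * (z @ W @ z) / (z @ z)
```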

  8. Podryga V.O., Polyakov S.V.
    3D molecular dynamic simulation of thermodynamic equilibrium problem for heated nickel
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 573-579

    This work is devoted to molecular dynamic modeling of thermal impact processes on a metal sample consisting of nickel atoms. To solve this problem, a continuous mathematical model based on the equations of classical Newtonian mechanics was used; a numerical method based on the Verlet scheme was chosen; a parallel algorithm was proposed and implemented with the MPI and OpenMP technologies. By means of the developed parallel program, the thermodynamic equilibrium of a system of nickel atoms under heating of the sample to a desired temperature was investigated. In the numerical experiments, both the optimal parameters of the calculation procedure and the physical parameters of the analyzed process were determined. The obtained numerical results correspond well to known theoretical and experimental data.
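
    A schematic velocity-Verlet step of the integrator family the abstract names; the force function is a placeholder, not a real interatomic potential for nickel, and the serial form omits the MPI/OpenMP parallelization.

```python
# Schematic velocity-Verlet integration: positions and velocities advance
# with second-order accuracy; force() stands in for a real potential.
import numpy as np

def velocity_verlet(x, v, force, m, dt, steps):
    f = force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * (f / m) * dt**2   # position update
        f_new = force(x)                          # forces at new positions
        v = v + 0.5 * (f + f_new) / m * dt        # velocity update
        f = f_new
    return x, v
```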

  9. Reed R.G., Cox M.A., Wrigley T., Mellado B.
    A CPU benchmarking characterization of ARM based processors
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 581-586

    Big science projects are producing data at ever-increasing rates. Typical techniques involve storing the data on disk, after minor filtering, and then processing them in large computer farms. Data production has reached a point where on-line processing is required in order to filter the data down to manageable sizes. A potential solution involves using low-cost, low-power ARM processors in large arrays to provide massive parallelisation for data stream computing (DSC). The main advantage of using Systems on Chip (SoCs) is inherent in their design philosophy. SoCs are primarily used in mobile devices and hence consume less power while maintaining relatively good performance. A benchmarking characterisation of three different models of ARM processors will be presented.

  10. Wrigley T., Reed R.G., Mellado B.
    Memory benchmarking characterisation of ARM-based SoCs
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 607-613

    Computational intensity is traditionally the focus of large-scale computing system designs, generally leaving such designs ill-equipped to efficiently handle throughput-oriented workloads. In addition, cost and energy consumption considerations for large-scale computing systems in general remain a source of concern. A potential solution involves using low-cost, low-power ARM processors in large arrays in a manner which provides massive parallelisation and high rates of data throughput (relative to existing large-scale computing designs). Giving greater priority to both throughput-rate and cost considerations increases the relevance of primary memory performance and design optimisations to overall system performance. Using several primary memory performance benchmarks to evaluate various aspects of RAM and cache performance, we provide characterisations of the performance of four different models of ARM-based system-on-chip, namely the Cortex-A9, Cortex-A7, Cortex-A15 r3p2 and Cortex-A15 r3p3. We then discuss the relevance of these results to high-volume computing and the potential for ARM processors.
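
    To give a feel for what a primary-memory benchmark measures, here is a rough copy-bandwidth probe (an illustrative assumption; the paper used dedicated benchmark suites): timing large array copies, which are dominated by main-memory traffic.

```python
# Rough memory-bandwidth probe: time a large float64 array copy and report
# an effective GB/s figure counting both the read and the write streams.
import time
import numpy as np

def copy_bandwidth(mib: int = 256, reps: int = 5) -> float:
    src = np.ones(mib * 1024 * 1024 // 8)   # ~mib MiB of float64
    dst = np.empty_like(src)
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        np.copyto(dst, src)
        best = min(best, time.perf_counter() - t0)
    return 2 * src.nbytes / best / 1e9      # read + write, GB/s

print(f"~{copy_bandwidth():.1f} GB/s effective copy bandwidth")
```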
