Search results for 'big data':
Articles found: 12
  1. Krechet V.G., Oshurko V.B., Kisser A.E.
    Cosmological models of the Universe without a Beginning and without a singularity
    Computer Research and Modeling, 2021, v. 13, no. 3, pp. 473-486

    A new type of cosmological models for the Universe that has no Beginning and evolves from the infinitely distant past is considered.

    These models are alternative to the cosmological models based on the Big Bang theory according to which the Universe has a finite age and was formed from an initial singularity.

    In our opinion, there are certain problems in the Big Bang theory that our cosmological models do not have.

    In our cosmological models, the Universe evolves by compression from the infinitely distant past, tending to a finite minimum of distances between objects on the order of the Compton wavelength $\lambda_C$ of hadrons and to the maximum density of matter corresponding to the hadron era of the Universe. It then expands, progressing through all the stages of evolution established by astronomical observations, up to the era of inflation.

    The material basis that sets the fundamental nature of the evolution of the Universe in our cosmological models is a nonlinear Dirac spinor field $\psi(x^k)$ with a nonlinearity of the type $\beta(\bar{\psi}\psi)^n$ in the Lagrangian of the field ($\beta = const$, $n$ is a rational number), where $\psi(x^k)$ is the 4-component Dirac spinor and $\bar{\psi}$ is the conjugate spinor.

    In addition to the spinor field $\psi$, the cosmological models include other components of matter in the form of an ideal fluid with the equation of state $p = w\varepsilon$ ($w = const$) at different values of the coefficient $w$ ($-1 < w < 1$). These additional components affect the evolution of the Universe, and all stages of evolution occur in accordance with established observational data. Here $p$ is the pressure, $\varepsilon = \rho c^2$ is the energy density, $\rho$ is the mass density, and $c$ is the speed of light in a vacuum.

    We have shown that cosmological models with a nonlinear spinor field with a nonlinearity coefficient $n = 2$ are the closest to reality.

    In this case, the nonlinear spinor field is described by the Dirac equation with cubic nonlinearity.

    This is the Ivanenko–Heisenberg nonlinear spinor equation, which W. Heisenberg used to construct a unified spinor theory of matter.
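
    A hedged illustration of the cubic case: for the nonlinearity $\beta(\bar{\psi}\psi)^2$ in the Lagrangian, variation with respect to $\bar{\psi}$ gives a Dirac equation with a cubic self-interaction term,

    $$i\gamma^k \nabla_k \psi - m\psi + 2\beta(\bar{\psi}\psi)\psi = 0,$$

    where the mass term and numerical coefficients depend on the conventions adopted in the paper; Heisenberg's original equation is usually written without the mass term, so this is illustrative rather than the authors' exact equation.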

    It is an amazing coincidence that the same nonlinear spinor equation can be the basis for constructing a theory of two different fundamental objects of nature — the evolving Universe and physical matter.

    The development of the cosmological models is supplemented by computer studies, the results of which are presented graphically in the paper.

  2. Khoruzhnikov S.E., Grudinin V.A., Sadov O.L., Shevel A.Y., Kairkanov A.B.
    Preliminary study of big data transfer over computer network
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 421-427

    The transfer of Big Data over a computer network is an important and unavoidable operation in the past, the present, and any foreseeable future. There are a number of methods and tools for transferring data over the global computer network (the Internet). This paper considers the transfer of a single piece of Big Data from one point on the Internet to another, in general over a long distance of many thousands of kilometers. Several free-of-charge systems for transferring Big Data are analyzed. The most important architectural features are emphasized, and the idea of adding the SDN OpenFlow protocol technique for fine tuning of the data transfer over several parallel data links is suggested.
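
    A minimal Python sketch of the parallel-links idea at the application level (not one of the systems analyzed in the paper): a file is split into chunks and the chunks are pushed concurrently over several TCP connections; the endpoints, port, and chunk size below are hypothetical.

        import socket
        import threading
        from pathlib import Path

        # Hypothetical endpoints, one per parallel data link
        ENDPOINTS = [("10.0.0.1", 9000), ("10.0.1.1", 9000)]
        CHUNK_SIZE = 64 * 1024 * 1024  # 64 MiB per chunk (assumption)

        def send_chunk(endpoint, index, payload):
            """Send one chunk, prefixed with its index so the receiver can reassemble."""
            with socket.create_connection(endpoint) as sock:
                header = index.to_bytes(8, "big") + len(payload).to_bytes(8, "big")
                sock.sendall(header + payload)

        def transfer(path):
            """Split a file into chunks and send them round-robin over all links."""
            data = Path(path).read_bytes()  # sketch only; a real tool would stream
            chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
            threads = []
            for i, chunk in enumerate(chunks):
                endpoint = ENDPOINTS[i % len(ENDPOINTS)]
                t = threading.Thread(target=send_chunk, args=(endpoint, i, chunk))
                t.start()
                threads.append(t)
            for t in threads:
                t.join()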

  3. The work is devoted to the problem of creating a model with stationary parameters using historical data under conditions of unknown disturbances. The case is considered when a representative sample of object states can be formed only from historical data accumulated over a significant period of time. It is assumed that unknown disturbances can act in a wide frequency range and may have low-frequency and trend components. In such a situation, including data from different time periods in the sample can lead to inconsistencies and greatly reduce the accuracy of the model.

    The paper provides an overview of approaches and methods for data harmonization, with the main attention paid to data sampling. The applicability of various data sampling options as a tool for reducing the level of uncertainty is assessed. We propose a method for identifying a model of a self-leveling object using data accumulated over a significant period of time under conditions of unknown disturbances with a wide frequency range. The method is focused on creating a model with stationary parameters that does not require periodic reconfiguration to new conditions. It is based on the combined use of sampling and representation of the data from individual periods of time in the form of increments relative to the initial point in time of the period (see the sketch below). This makes it possible to reduce the number of parameters that characterize unknown disturbances with a minimum of assumptions limiting the application of the method. As a result, the dimensionality of the search problem is reduced and the computational costs associated with setting up the model are minimized. Both linear and, in some cases, nonlinear models can be configured.

    The method was used to develop a model of closed cooling of steel on a unit for continuous hot-dip galvanizing of steel strip. The model can be used for predictive control of thermal processes and for selecting the strip speed. It is shown that the method makes it possible to develop a model of thermal processes in a closed cooling section under conditions of unknown disturbances, including low-frequency components.
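
    A minimal sketch of the increment representation mentioned above; the data, the pooling of periods, and the least-squares step are illustrative assumptions, not the paper's identification algorithm.

        import numpy as np

        def to_increments(period):
            """Represent a period's samples as increments relative to its first sample.

            This removes an unknown slowly varying (trend) component that is roughly
            constant within one period, so data from different periods can be pooled.
            """
            period = np.asarray(period, dtype=float)
            return period - period[0]

        # Hypothetical historical data: inputs u and outputs y from three periods
        periods_u = [np.array([1.0, 1.2, 1.5, 1.7]),
                     np.array([3.0, 3.1, 3.6, 4.0]),
                     np.array([0.5, 0.9, 1.0, 1.4])]
        periods_y = [np.array([10.0, 10.5, 11.2, 11.6]),
                     np.array([20.0, 20.2, 21.3, 22.1]),
                     np.array([5.0, 5.8, 6.0, 6.9])]

        # Pool increments from all periods and fit a single static gain (least squares)
        du = np.concatenate([to_increments(u)[1:] for u in periods_u])
        dy = np.concatenate([to_increments(y)[1:] for y in periods_y])
        gain = np.linalg.lstsq(du[:, None], dy, rcond=None)[0][0]
        print(f"identified gain: {gain:.3f}")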

  4. Nikulin A.S., ZHediaevskii D.N., Fedorova E.B.
    Applying artificial neural network for the selection of mixed refrigerant by boiling curve
    Computer Research and Modeling, 2022, v. 14, no. 3, pp. 593-608

    The paper presents a method for selecting the composition of a refrigerant with a given isobaric cooling curve using an artificial neural network (ANN). The method is based on 1D convolutional neural network layers. To train the neural network, we used a technological model of a simple heat exchanger in the UniSim Design program with the Peng – Robinson equation of state. Using this technological model, we created a synthetic database of isobaric boiling curves of refrigerants of different compositions. To record the database, an algorithm was developed in the Python programming language, and information on isobaric boiling curves for 1 049 500 compositions was collected via the COM interface. The compositions were generated by the Monte Carlo method. The designed ANN architecture selects the composition of a mixed refrigerant from 101 points of a boiling curve; on the output layer, the ANN gives the mole flows of the mixed refrigerant components (methane, ethane, propane, nitrogen). For training the ANN, we used the cyclical learning rate method. To demonstrate the results, we selected an MR composition for a natural gas cooling curve with a minimum temperature drop of 3 K and a maximum temperature drop of no more than 10 K, which turned out better than the composition predicted by the UniSim SQP optimizer and better than that predicted by the $k$-nearest neighbors algorithm. A significant contribution of this article is showing that an artificial neural network can be used to select the optimal composition of the refrigerant by analyzing the cooling curve of natural gas. This method can help engineers select the composition of the mixed refrigerant in real time, which will help reduce the energy consumption of natural gas liquefaction.
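
    A minimal sketch of a 1D convolutional network of the kind described above, mapping a 101-point boiling curve to four component flows; the layer sizes, training settings, and random stand-in data are assumptions, not the architecture reported in the paper.

        import numpy as np
        import tensorflow as tf

        # 1D CNN: input is a 101-point isobaric boiling curve, output is 4 mole flows
        # (methane, ethane, propane, nitrogen). Layer sizes are illustrative only.
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(101, 1)),
            tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
            tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
            tf.keras.layers.GlobalAveragePooling1D(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(4, activation="relu"),  # non-negative flows
        ])
        model.compile(optimizer="adam", loss="mse")

        # Random stand-in for the UniSim-generated database of boiling curves
        x = np.random.rand(256, 101, 1).astype("float32")
        y = np.random.rand(256, 4).astype("float32")
        model.fit(x, y, epochs=2, batch_size=32, verbose=0)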

  5. Suvorov N.V., Shleymovich M.P.
    Mathematical model of the biometric iris recognition system
    Computer Research and Modeling, 2020, v. 12, no. 3, pp. 629-639

    Automatic recognition of personal identity by biometric features is based on unique peculiarities or characteristics of people. The biometric identification process consists of creating reference templates and comparing them with new input data. In practice, iris pattern recognition algorithms demonstrate high accuracy and a low percentage of identification errors. The advantages of the iris pattern over other biometric features are determined by its high number of degrees of freedom (nearly 249), high density of unique features, and constancy over time. A high level of recognition reliability is very important because it enables search in big databases: unlike the one-to-one verification mode, which is applicable only to a small number of comparisons, it allows working in the one-to-many identification mode. Every biometric identification system is probabilistic, and its qualitative characteristics are described by such parameters as recognition accuracy, false acceptance rate, and false rejection rate. These characteristics allow comparing identity recognition methods and assessing system performance under any circumstances. This article describes a mathematical model of iris pattern biometric identification and its characteristics. In addition, the results of comparing the model with the real recognition process are analyzed. For this analysis, a review of existing iris pattern recognition methods based on different unique feature vectors was carried out. A Python-based software package is described that builds up probabilistic distributions and generates large test data sets. Such data sets can also be used to train the neural network that makes the identification decision. Furthermore, a synergy algorithm combining several iris pattern identification methods is suggested to improve the qualitative characteristics of the system in comparison with the use of each method separately.
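
    A minimal sketch of how the false acceptance rate (FAR) and false rejection rate (FRR) mentioned above can be estimated from match-score samples at a decision threshold; the score distributions here are synthetic placeholders, not the article's data.

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic match scores: genuine comparisons score high, impostor comparisons low
        genuine = rng.normal(0.8, 0.1, 10_000)
        impostor = rng.normal(0.3, 0.1, 10_000)

        def far_frr(threshold):
            """False acceptance / false rejection rates at a given decision threshold."""
            far = np.mean(impostor >= threshold)  # impostors wrongly accepted
            frr = np.mean(genuine < threshold)    # genuine users wrongly rejected
            return far, frr

        for t in (0.4, 0.5, 0.6):
            far, frr = far_frr(t)
            print(f"threshold={t:.1f}  FAR={far:.4f}  FRR={frr:.4f}")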

  6. Malkov S.Yu., Davydova O.I.
    Modernization as a global process: the experience of mathematical modeling
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 859-873

    The article analyzes empirical data on the long-term demographic and economic dynamics of the countries of the world for the period from the beginning of the 19th century to the present. Population and GDP of a number of countries for the period 1500–2016 were selected as indicators characterizing this long-term demographic and economic dynamics. Countries were chosen in such a way that they included representatives with different levels of development (developed and developing countries), as well as countries from different regions of the world (North America, South America, Europe, Asia, Africa). A specially developed mathematical model was used for modeling and data processing. The presented model is an autonomous system of differential equations that describes the processes of socio-economic modernization, including the transition from an agrarian society to an industrial and post-industrial one. The model is based on the idea that the process of modernization begins with the emergence in a traditional society of an innovative sector developing on the basis of new technologies. The population gradually moves from the traditional sector to the innovation sector. Modernization is completed when most of the population moves to the innovation sector.

    Statistical methods of data processing and Big Data methods, including hierarchical clustering, were used. Using the developed algorithm based on the random descent method, the parameters of the model were identified and verified on the basis of empirical series, and the model was tested using statistical data reflecting the changes observed in developed and developing countries during the modernization of the past centuries. Testing has demonstrated the high quality of the model: the deviations of the calculated curves from the statistical data are usually small and occur mainly during periods of wars and economic crises. Thus, the analysis of statistical data on the long-term demographic and economic dynamics of the countries of the world made it possible to determine general patterns and formalize them in the form of a mathematical model. The model will be used to forecast demographic and economic dynamics in different countries of the world.
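
    The abstract does not reproduce the model's equations; the sketch below only illustrates the general shape of an autonomous two-sector system (traditional and innovative sectors) integrated numerically, with made-up coefficients rather than the authors' identified parameters.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative autonomous two-sector system: x - traditional sector population,
        # y - innovative sector population. Coefficients are invented for the sketch.
        def rhs(t, state, a=0.02, b=0.05, m=0.03):
            x, y = state
            transfer = m * x * y / (x + y)  # migration from traditional to innovative sector
            return [a * x - transfer, b * y + transfer]

        sol = solve_ivp(rhs, (0, 200), [100.0, 1.0], dense_output=True)
        print(sol.y[:, -1])  # final sector sizes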

  7. Cox M.A., Reed R.G., Mellado B.
    The development of an ARM system on chip based processing unit for data stream computing
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 505-509

    Modern big science projects are becoming highly data intensive to the point where offline processing of stored data is infeasible. For future projects, high data throughput computing, or Data Stream Computing, is required to deal with terabytes of data per second that cannot be stored in long-term storage elements. Conventional data-centres based on typical server-grade hardware are expensive and are biased towards processing power. The overall I/O bandwidth can be increased with massive parallelism, usually at the expense of excessive processing power and high energy consumption. An ARM System on Chip (SoC) based processing unit may address the issues of system I/O and CPU balance, affordability, and energy efficiency, since ARM SoCs are mass-produced and designed to be energy efficient for use in mobile devices. Such a processing unit is currently in development, with a design goal of 20 Gb/s I/O throughput and significant processing power. The I/O capabilities of consumer ARM Systems on Chip are discussed along with performance and I/O throughput test results to date.

  8. Gankevich I.G., Degtyarev A.B.
    Efficient processing and classification of wave energy spectrum data with a distributed pipeline
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 517-520

    Processing of large amounts of data often consists of several steps, e.g. pre- and post-processing stages, which are executed sequentially with data written to disk after each step. However, when the pre-processing stage is different for each task, a more efficient way of processing data is to construct a pipeline which streams data from one stage to another. In a more general case, some processing stages can be factored into several parallel subordinate stages, thus forming a distributed pipeline where each stage can have multiple inputs and multiple outputs. Such a processing pattern emerges in the problem of classification of wave energy spectra based on analytic approximations, which can extract different wave systems and their parameters (e.g. wave system type, mean wave direction) from a spectrum. The distributed pipeline approach achieves good performance compared to conventional “sequential-stage” processing.
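
    A minimal sketch of the streaming idea: one stage feeds several parallel workers of the next stage through in-memory queues instead of intermediate files; the toy classification rule and stage contents are placeholders, not the paper's wave-system algorithm.

        import multiprocessing as mp

        def preprocess(out_q, n_items, n_workers):
            """First stage: stream items downstream, then send one stop marker per worker."""
            for i in range(n_items):
                out_q.put(i * 2.0)  # stand-in for reading/approximating a spectrum
            for _ in range(n_workers):
                out_q.put(None)

        def classify(in_q, out_q):
            """Second stage (run in several parallel workers): toy classification rule."""
            while (item := in_q.get()) is not None:
                out_q.put(("wind sea" if item < 10 else "swell", item))

        if __name__ == "__main__":
            q1, q2 = mp.Queue(), mp.Queue()
            workers = [mp.Process(target=classify, args=(q1, q2)) for _ in range(3)]
            producer = mp.Process(target=preprocess, args=(q1, 10, len(workers)))
            for p in [producer, *workers]:
                p.start()
            for _ in range(10):
                print(q2.get())
            for p in [producer, *workers]:
                p.join()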

  9. Reed R.G., Cox M.A., Wrigley T., Mellado B.
    A CPU benchmarking characterization of ARM based processors
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 581-586

    Big science projects are producing data at ever-increasing rates. Typical techniques involve storing the data to disk, after minor filtering, and then processing it in large computer farms. Data production has reached a point where on-line processing is required in order to filter the data down to manageable sizes. A potential solution involves using low-cost, low-power ARM processors in large arrays to provide massive parallelisation for data stream computing (DSC). The main advantage of using Systems on Chip (SoCs) is inherent in their design philosophy. SoCs are primarily used in mobile devices and hence consume less power while maintaining relatively good performance. A benchmarking characterisation of three different models of ARM processors will be presented.

  10. Dobrynin V.N., Filozova I.A.
    Cataloging technology of information fund
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 661-673

    The article discusses an approach to improving information processing technology on the basis of the Question–Answer–Reaction logical-semantic network (LSN), aimed at the formation and support of a catalog service providing efficient search for answers to questions.

    The basis of such a catalog service is semantic links reflecting the logic of the presentation of the author's thoughts within the framework of a publication, a theme, or a subject area. Structuring and maintaining these links makes it possible to work with a field of meanings, providing new opportunities for studying the corpus of digital library documents. Cataloging of the information fund includes: formation of a lexical dictionary; formation of the classification tree for several bases; classification of the information fund by question–answer topics; formation of search queries adequate to the question–answer classification trees; automated execution of search queries on thematic search engines; analysis of the responses to the queries; and support of the LSN catalog during the operational phase (updating and refinement of the catalog). The technology is considered for two situations: 1) the information fund has already been formed; 2) the information fund is missing and must be created.
