Search results for 'data':
Articles found: 358
  1. Zhluktov S.V., Aksenov A.A., Kuranosov N.S.
    Simulation of turbulent compressible flows in the FlowVision software
    Computer Research and Modeling, 2023, v. 15, no. 4, pp. 805-825

    Simulation of turbulent compressible gas flows using the standard $k-\varepsilon$ model (KES), the $k-\varepsilon$ FlowVision model (KEFV), and the SST $k-\omega$ model is discussed in this article. A new version of the KEFV turbulence model is presented, and the results of its testing are shown. A numerical investigation of the discharge of an over-expanded jet from a conic nozzle into unbounded space is performed. The results are compared against experimental data, and their dependence on the computational mesh and on the turbulence specified at the nozzle inlet is demonstrated. The conclusion is drawn that two-parameter turbulence models must allow for compressibility. The simple method proposed by Wilcox in 1994 is well suited for this purpose; as a result, the range of applicability of the three aforementioned two-parameter turbulence models is substantially extended. Particular values of the constants responsible for accounting for compressibility in the Wilcox approach are proposed, and it is recommended to use these values in simulations of compressible flows with the KES, KEFV, and SST models.
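    For reference, the Wilcox (1994) compressibility correction mentioned above is usually written as a dilatational augmentation of the dissipation rate; the constants below are the values commonly quoted in the literature, while the article proposes its own particular values:

    $$\varepsilon = \varepsilon_s\bigl[1 + \xi^* F(M_t)\bigr], \qquad F(M_t) = \bigl(M_t^2 - M_{t0}^2\bigr)\,H(M_t - M_{t0}), \qquad M_t = \frac{\sqrt{2k}}{a},$$

    where $\varepsilon_s$ is the solenoidal (incompressible) dissipation, $M_t$ is the turbulent Mach number, $a$ is the local speed of sound, $H$ is the Heaviside step function, and Wilcox suggested $\xi^* = 3/2$, $M_{t0} = 1/4$.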

    In addition, the question of how to obtain correct characteristics of supersonic turbulent flows using two-parameter turbulence models is considered. Calculations on different grids have shown that, when a laminar flow is specified at the nozzle inlet and wall functions at its surfaces, the core of the flow remains laminar up to the fifth Mach disk. In order to obtain correct flow characteristics, it is necessary either to specify two parameters characterizing the turbulence of the inflowing gas, or to set a “starting” turbulence in a limited volume enveloping the region of the presumed laminar-turbulent transition next to the nozzle exit. The latter possibility is implemented in the KEFV model.

  2. Sviridenko A.B.
    The iterations’ number estimation for strongly polynomial linear programming algorithms
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 249-285

    A direct algorithm for solving a linear programming (LP) problem given in canonical form is considered. The algorithm consists of two successive stages, at which the following LP problems are solved by a direct method: a non-degenerate auxiliary problem at the first stage and a problem equivalent to the original one at the second. The construction of the auxiliary problem is based on a multiplicative version of the Gaussian elimination method, whose very structure provides the means to: identify inconsistency and linear dependence of constraints; identify variables whose optimal values are obviously zero; actually eliminate primal variables and reduce the dimension of the space in which the solution of the original problem is determined. In the process of actually eliminating variables, the algorithm generates a sequence of multipliers whose main rows form the constraint matrix of the auxiliary problem; the possibility of minimizing the fill-in of the main rows of the multipliers is inherent in the very structure of direct methods. At the same time, there is no need to transfer information (the basis, the plan, and the optimal value of the objective function) to the second stage of the algorithm, or to apply one of the anti-cycling techniques to guarantee finite convergence.

    Two variants of the algorithm for solving the auxiliary problem in conjugate canonical form are presented. The first is based on solving it by a direct algorithm in terms of the simplex method, and the second on solving the dual problem by the simplex method. It is shown that, for the same input data, both variants generate the same sequence of points: the basic solution and the current dual solution given by the vector of row estimates. Hence it is concluded that the direct algorithm is an algorithm of the simplex-method type. A comparison of the numerical schemes also shows that the direct algorithm reduces, in accordance with a cubic law, the number of arithmetic operations needed to solve the auxiliary problem compared with the simplex method. An estimate of the number of iterations is given.

  3. Polosin V.G.
    Quantile shape measures for heavy-tailed distributions
    Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1041-1077

    Journal papers currently contain numerous examples of heavy-tailed distributions used in applied research on various complex systems. Models of extreme data are usually limited to the small set of distribution shapes that has historically been used in a given field of applied research. The set of usable probability distribution shapes can be enlarged by comparing measures of distribution shape and choosing the most suitable implementations. Using the beta distribution of the second kind as an example, it is shown that the undefined moments of heavy-tailed members of the beta family of distributions limit the applicability of the existing classical method of moments for studying distribution shapes characterized by heavy tails. For this reason, it remains relevant to develop new methods for comparing distributions based on quantile shape measures free from restrictions on the shape parameters, and to study the possibility of constructing a space of quantile shape measures for comparing heavy-tailed distributions. The purpose of this work is a computer study of the possibility of constructing such a space of quantile measures for comparing the properties of heavy-tailed distributions. On the basis of computer simulation, distribution implementations were mapped into the space of shape measures. Mapping distributions into the space of parametric shape measures alone showed that the overlap of the regions occupied by heavy-tailed distributions makes it impossible to compare the shapes of distributions of different types in the space of quantile skewness and kurtosis measures. It is well known that information shape measures, such as entropy and the entropy uncertainty interval, carry additional information about the shape of heavy-tailed distributions.
    In this paper, a quantile entropy coefficient is proposed as an additional independent shape measure, based on the ratio of the entropy and quantile uncertainty intervals. Estimates of the quantile entropy coefficient are obtained for a number of well-known heavy-tailed distributions. The possibility of comparing distribution shapes with realizations of the beta distribution of the second kind is illustrated using the lognormal and Pareto distributions. Mapping the positions of stable distributions in the three-dimensional space of quantile shape measures made it possible to estimate the shape parameters of the beta distribution of the second kind whose shape is closest to that of the Lévy distribution. It follows from the material of the paper that mapping distributions into the three-dimensional space of quantile measures of skewness, kurtosis, and the entropy coefficient significantly expands the possibilities of comparing the shapes of heavy-tailed distributions.
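    The quantile entropy coefficient described above can be illustrated with a short sketch. The abstract does not give the exact definition, so the formula below (ratio of the entropy uncertainty interval $e^H$ to an interquantile range) is an assumption for illustration only:

```python
import math

import numpy as np
from scipy import stats


def quantile_entropy_coefficient(dist, p=0.05):
    """Assumed form of the coefficient: ratio of the entropy uncertainty
    interval exp(H), where H is the differential entropy, to the (p, 1-p)
    interquantile uncertainty interval."""
    h = dist.entropy()                     # differential entropy H
    iqr = dist.ppf(1.0 - p) - dist.ppf(p)  # quantile uncertainty interval
    return float(np.exp(h) / iqr)


# Heavy-tailed examples mentioned in the abstract: lognormal and Pareto.
k_lognorm = quantile_entropy_coefficient(stats.lognorm(s=1.0))
k_pareto = quantile_entropy_coefficient(stats.pareto(b=3.0))
```

    Because both $e^H$ and the interquantile range scale linearly with a scale parameter, this ratio is scale-invariant and therefore measures only the shape of the distribution.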

  4. Omarova A.G., Beybalayev V.D.
    Numerical solution of the third initial-boundary value problem for the nonstationary heat conduction equation with fractional derivatives
    Computer Research and Modeling, 2024, v. 16, no. 6, pp. 1345-1360

    Recently, fractional differential calculus has been widely used to describe mathematical models of various physical processes. In this regard, much attention is paid to partial differential equations of fractional order, which generalize partial differential equations of integer order; various problem settings are possible here.

    In the literature, equations that contain values of the solution or of its derivatives on manifolds of lower dimension than the domain of the unknown function are called loaded differential equations. Numerical methods are currently widely used for solving loaded partial differential equations of integer and fractional order, since analytical solution is generally impossible. A fairly effective method for solving problems of this kind is the finite difference method, or grid method.

    We studied the initial-boundary value problem in the rectangle $\overline{D}=\{(x,\,t)\colon 0\leqslant x\leqslant l,\;0\leqslant t\leqslant T\}$ for the loaded differential heat equation with a composition of the Riemann – Liouville and Caputo – Gerasimov fractional derivatives and with boundary conditions of the first and third kind. We obtained a priori estimates in both the differential and the difference formulations. The resulting inequalities imply the uniqueness of the solution and its continuous dependence on the input data of the problem. A difference analogue of the composition of the Riemann – Liouville and Caputo – Gerasimov fractional derivatives of order $(2-\beta)$ is obtained, and a difference scheme is constructed that approximates the original problem with order $O\left(\tau +h^{2-\beta } \right)$. The convergence of the approximate solution to the exact one at a rate equal to the approximation order of the difference scheme is proven.

  5. Adekotujo A.S., Enikuomehin T., Aribisala B., Mazzara M., Zubair A.F.
    Computational treatment of natural language text for intent detection
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1539-1554

    Intent detection plays a crucial role in task-oriented conversational systems. To understand the user’s goal, the system relies on its intent detector to classify the user’s utterance, which may be expressed in different forms of natural language, into intent classes. However, the efficacy of intent detection systems has been hindered by a lack of data and by the fact that user intent text is typically characterized by short, general sentences and colloquial expressions. The process of algorithmically determining user intent from a given statement is known as intent detection. The goal of this study is to develop an intent detection model that accurately classifies and detects user intent. The model calculates similarity scores for the three constituent models in order to compare them. The proposed model uses Contextual Semantic Search (CSS) for semantic search, Latent Dirichlet Allocation (LDA) for topic modeling, the Bidirectional Encoder Representations from Transformers (BERT) semantic matching technique, and the combination of LDA and BERT for text classification and detection. The dataset is the broad twitter corpus (BTC), which comprises various metadata. A pre-processing step was applied to prepare the data for analysis. A sample of 1432 instances was selected out of the 5000 available because manual annotation is required and can be time-consuming. To compare the performance of the model with existing models, the similarity scores, precision, recall, F1 score, and accuracy were computed. The results revealed that LDA-BERT achieved an accuracy of 95.88% for intent detection, BERT 93.84%, and LDA 92.23%, showing that LDA-BERT outperforms the other models. It is hoped that the novel model will aid in ensuring information security and social media intelligence. For future work, an unsupervised LDA-BERT without any labeled data can be studied.

  6. Kiselev M.V., Urusov A.M., Ivanitsky A.Y.
    The adaptive Gaussian receptive fields for spiking encoding of numeric variables
    Computer Research and Modeling, 2025, v. 17, no. 3, pp. 389-400

    Conversion of numeric data to spiking form, and the information losses in this process, are serious problems limiting the use of spiking neural networks in applied information systems. While physical values are represented by numbers, the internal representation of information inside spiking neural networks is based on spikes, elementary objects emitted and processed by neurons. This problem is especially hard in reinforcement learning applications, where an agent should learn to behave in the dynamic real world: besides the accuracy of the encoding method, its dynamic characteristics must be considered as well. The encoding algorithm based on Gaussian receptive fields (GRF) is frequently used. In this method, one numeric variable fed to the network is represented by spike streams emitted by a certain set of network input nodes. The spike frequency in each stream is determined by the proximity of the current variable value to the center of the receptive field corresponding to the given input node. In the standard GRF algorithm, the receptive field centers are placed equidistantly; however, this is inefficient when the distribution of the encoded variable is very uneven. In the present paper, an improved version of this method is proposed, based on adaptive selection of the Gaussian centers and spike stream frequencies. This improved GRF algorithm is compared with its standard version in terms of the amount of information lost in the coding process and of the accuracy of classification models built on spike-encoded data. The fraction of information retained under standard and adaptive GRF encoding is estimated by applying the direct and reverse encoding procedures to a large sample from the triangular probability distribution and counting coinciding bits in the original and restored samples. The classification-based comparison was performed on a task of evaluating the current state in reinforcement learning.
    For this purpose, the classification models were created by machine learning algorithms of very different nature: the nearest neighbors algorithm, random forest, and a multi-layer perceptron. The superiority of our approach is demonstrated in all these tests.
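    A minimal sketch of the encoding idea described above. The quantile-based placement of the adaptive centers is an assumption for illustration, not the paper's exact algorithm:

```python
import numpy as np


def grf_rates(x, centers, sigma, max_rate=100.0):
    """Spike rate of each input node's stream: determined by the proximity
    of the value x to that node's Gaussian receptive-field center."""
    return max_rate * np.exp(-0.5 * ((x - centers) / sigma) ** 2)


rng = np.random.default_rng(0)
# Unevenly distributed variable, as in the abstract's triangular example.
sample = rng.triangular(0.0, 0.1, 1.0, size=10_000)

# Standard GRF: centers placed equidistantly over the variable's range.
std_centers = np.linspace(sample.min(), sample.max(), 8)

# Adaptive variant (assumed): centers at equiprobable sample quantiles,
# so densely populated regions of the variable get more receptive fields.
ada_centers = np.quantile(sample, np.linspace(0.05, 0.95, 8))

rates = grf_rates(0.1, ada_centers, sigma=0.05)
```

    With quantile-based centers, most receptive fields concentrate near the mode of the triangular distribution, which is exactly where equidistant placement wastes resolution.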

  7. Muravlev V.I., Brazhe A.R.
    Denoising fluorescent imaging data with two-step truncated HOSVD
    Computer Research and Modeling, 2025, v. 17, no. 4, pp. 529-542

    Fluorescent imaging data are currently widely used in neuroscience and other fields. Genetically encoded sensors based on fluorescent proteins provide a wide inventory enabling scientists to image virtually any process in a living cell and in the extracellular environment. However, especially due to the need for fast scanning, miniaturization, etc., the imaging data can be severely corrupted by multiplicative heteroscedastic noise, reflecting the stochastic nature of photon emission and photomultiplier detectors. Deep learning architectures demonstrate outstanding performance in image segmentation and denoising; however, they can require large clean datasets for training, and the actual data transformation is not evident from the network architecture and weight composition. On the other hand, some classical data transforms can provide similar performance combined with a clearer insight into why and how they work. Here we propose an algorithm for denoising fluorescent dynamical imaging data based on multilinear higher-order singular value decomposition (HOSVD) with optional rank truncation along each axis and thresholding of the tensor of decomposition coefficients. In parallel, we propose a convenient paradigm for validating the algorithm's performance, based on simulated fluorescent data resulting from biophysical modeling of calcium dynamics in spatially resolved realistic 3D astrocyte templates. This paradigm is convenient in that it allows one to vary the noise level and its resemblance to Gaussian noise, and in that it provides a ground-truth fluorescent signal that can be used to validate denoising algorithms. The proposed denoising method employs truncated HOSVD twice: first, narrow 3D patches spanning the whole recording are processed (the local 3D-HOSVD stage); second, 4D groups of 3D patches are collaboratively processed (the non-local 4D-HOSVD stage).
    The effect of the first pass is twofold: a significant part of the noise is removed at this stage, and the noise distribution becomes more Gaussian-like due to the linear combination of multiple samples in the singular vectors. The effect of the second stage is to further improve the SNR. We tune the parameters of the second stage to find the optimal combination for denoising.
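    A pure-NumPy sketch of one truncated HOSVD pass, the building block applied twice by the method above (the patch extraction, grouping, and coefficient thresholding stages are omitted):

```python
import numpy as np


def unfold(t, mode):
    """Mode-n matricization: the given axis first, remaining axes flattened."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)


def mode_mult(t, m, mode):
    """Multiply tensor t by matrix m along the given mode."""
    return np.moveaxis(np.tensordot(m, np.moveaxis(t, mode, 0), axes=1), 0, mode)


def truncated_hosvd(t, ranks):
    """HOSVD with per-axis rank truncation: factor matrices are the leading
    left singular vectors of each unfolding; the core tensor holds the
    decomposition coefficients (which the denoiser would also threshold)."""
    factors = [np.linalg.svd(unfold(t, k), full_matrices=False)[0][:, :r]
               for k, r in enumerate(ranks)]
    core = t
    for mode, u in enumerate(factors):
        core = mode_mult(core, u.T, mode)
    return core, factors


def reconstruct(core, factors):
    t = core
    for mode, u in enumerate(factors):
        t = mode_mult(t, u, mode)
    return t


# Demo: a tensor of exact multilinear rank (2, 2, 2) is recovered exactly.
rng = np.random.default_rng(1)
t = rng.standard_normal((2, 2, 2))
for mode, n in enumerate((5, 6, 7)):
    t = mode_mult(t, rng.standard_normal((n, 2)), mode)
core, factors = truncated_hosvd(t, (2, 2, 2))
err = np.max(np.abs(reconstruct(core, factors) - t))
```

    On noisy data, truncating the ranks (and thresholding small core coefficients) discards the directions dominated by noise while keeping the low-rank signal subspace, which is what makes this decomposition usable as a denoiser.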

  8. Karpov V.E.
    Introduction to the parallelization of algorithms and programs
    Computer Research and Modeling, 2010, v. 2, no. 3, pp. 231-272

    The differences between software development for parallel computing technology and sequential programming are discussed. Arguments for introducing new phases into the technology of software engineering are given. These phases are: decomposition of algorithms, assignment of jobs to performers, orchestration, and mapping of logical performers to physical ones. Issues of performance evaluation of algorithms are briefly discussed, as is the decomposition of algorithms and programs into parts that can be executed in parallel.

    Views (last year): 53. Citations: 22 (RSCI).
  9. Yakovenko G.N.
    Orbits in the two-body problem in terms of symmetries
    Computer Research and Modeling, 2011, v. 3, no. 1, pp. 39-45

    For the two-body problem, a 12-parameter group of symmetry transformations is computed that maps the obvious solution (uniform motion of the bodies in circular orbits about a common fixed center) into a motion with arbitrary initial data.

  10. Gorshenin A.K.
    On application of the asymptotic tests for estimating the number of mixture distribution components
    Computer Research and Modeling, 2012, v. 4, no. 1, pp. 45-53

    The paper demonstrates the efficiency of the asymptotically most powerful test of statistical hypotheses about the number of mixture components in models of adding and splitting components. The test data are samples from different finite normal mixtures. The results are compared for various significance levels and weights.

    Views (last year): 1. Citations: 2 (RSCI).

Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index
