Search results for 'information':
Articles found: 168
  1. Khudhur H.M., Halil I.H.
    Noise removal from images using the proposed three-term conjugate gradient algorithm
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 841-853

    Conjugate gradient algorithms represent an important class of unconstrained optimization algorithms with strong local and global convergence properties and modest memory requirements. They occupy a place between the steepest descent method and Newton's algorithm: they require only the first derivatives and avoid computing and storing the second derivatives that Newton's algorithm needs, while converging faster than steepest descent and requiring neither the Hessian matrix nor any of its approximations, which makes them widely used in optimization applications. This study proposes a novel method for image restoration that fuses the convex combination method with a hybrid conjugate gradient (CG) method to create a hybrid three-term CG algorithm. The conjugate parameter of the search direction combines the features of the Fletcher-Reeves (FR) parameter and its hybrid variant. The search direction itself combines the current gradient direction, the previous search direction, and the gradient from the previous iteration. We show that the new algorithm possesses the descent property and global convergence when an inexact line search satisfying the standard Wolfe conditions is used, under some additional assumptions. Numerical results on image restoration problems show that, compared to the Fletcher-Reeves (FR) and three-term Fletcher-Reeves (TTFR) methods, the new algorithm achieves high efficiency and accuracy in image restoration and fast convergence.
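
To make the construction concrete, here is a minimal Python sketch of a generic three-term CG iteration of the kind the abstract describes. The Fletcher-Reeves coefficient is standard; the third-term weight `theta` and the Armijo backtracking step are illustrative placeholders, not the paper's exact formulas.

```python
import numpy as np

def three_term_cg(f, grad, x0, iters=200, tol=1e-8):
    """Generic three-term CG sketch:
    d_k = -g_k + beta_k * d_{k-1} + theta_k * g_{k-1},
    with the Fletcher-Reeves coefficient beta_k = ||g_k||^2 / ||g_{k-1}||^2.
    theta_k below is an illustrative placeholder, not the paper's formula."""
    x, g = np.asarray(x0, dtype=float), grad(x0)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search (a stand-in for the Wolfe search).
        alpha, c = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
        theta = (g_new @ d) / (g @ g)       # placeholder third-term weight
        d = -g_new + beta * d - theta * g   # three-term search direction
        x, g = x_new, g_new
    return x

# Tiny check on a convex quadratic: the minimizer should equal A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = three_term_cg(lambda v: 0.5 * v @ A @ v - b @ v,
                  lambda v: A @ v - b, np.zeros(2))
```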

  2. Bozhko A.N.
    Structural models of products in CAD systems
    Computer Research and Modeling, 2024, v. 16, no. 5, pp. 1079-1091

    Computer-aided assembly planning of complex products is an important area of modern information technology. The sequence of assembly and the decomposition of the product into assembly units largely depend on the mechanical structure of the technical system (machine, mechanical device, etc.). In most modern research, the mechanical structure of products is modeled using a graph of connections and its various modifications. The coordination of parts during assembly can be achieved by implementing several connections at the same time. This generates a $k$-ary basing relation on the set of product parts, which cannot be correctly described by graph means. A hypergraph model of the mechanical structure of a product is proposed. Modern discrete manufacturing uses sequential coherent assembly operations. The mathematical description of such operations is the normal contraction of edges of the hypergraph model. A sequence of contractions that transforms the hypergraph into a point is a description of the assembly plan. Hypergraphs for which such a transformation exists are called $s$-hypergraphs. $S$-hypergraphs are correct mathematical models of the mechanical structures of any assembled products. A theorem on necessary conditions for the contractibility of $s$-hypergraphs is given. It is shown that the necessary conditions are not sufficient. An example of a noncontractible hypergraph for which the necessary conditions are satisfied is given. This means that the design of a complex technical system may contain hidden structural errors that make assembly of the product impossible. Therefore, finding sufficient conditions for contractibility is an important task. Two theorems on sufficient conditions for contractibility are proved. They provide a theoretical basis for developing an efficient computational procedure for finding all $s$-subgraphs of an $s$-hypergraph. An $s$-subgraph is a model of any part of a product that can be assembled independently. These are, first of all, assembly units of various levels of hierarchy. The set of all $s$-subgraphs of an $s$-hypergraph, ordered by inclusion, is a lattice. This model can be used to synthesize all possible sequences of assembly and disassembly of a product and its components. The lattice model of the product allows one to analyze geometric obstacles during assembly using algebraic means.
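
As an illustration of the contraction mechanics, the following Python sketch collapses one hyperedge of a hypergraph into a single new vertex. This is a simplified edge contraction: the paper's "normal contraction" imposes extra admissibility conditions that are not modeled here.

```python
def contract_edge(hyperedges, k):
    """Contract hyperedge k: merge all of its vertices into one new vertex.
    A simplified illustration of hyperedge contraction; the paper's normal
    contraction imposes extra admissibility conditions not modeled here.
    hyperedges: list of frozensets of hashable vertex ids."""
    merged = hyperedges[k]
    new_v = tuple(sorted(map(str, merged)))   # id for the new assembly unit
    out = []
    for i, e in enumerate(hyperedges):
        if i == k:
            continue
        e2 = frozenset(v for v in e if v not in merged)
        if e & merged:
            e2 |= {new_v}                     # reattach edge to the new vertex
        if len(e2) > 1:                       # edges shrunk to one vertex vanish
            out.append(e2)
    return out

# Contracting edges one after another until none remain models a sequential
# assembly plan; a hypergraph admitting such an order behaves like the
# s-hypergraphs of the abstract.
H = [frozenset({1, 2}), frozenset({2, 3, 4}), frozenset({1, 4})]
H1 = contract_edge(H, 0)   # parts 1 and 2 become one assembly unit
```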

  3. Ahmad U., Ivanov V.
    Automating high-quality concept banks: leveraging LLMs and multimodal evaluation metrics
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1555-1567

    Interpretability of recent deep learning models has become a focal point of research, particularly in sensitive domains such as healthcare and finance. Concept bottleneck models have emerged as a promising approach for achieving transparency and interpretability by leveraging a set of human-understandable concepts as an intermediate representation before the prediction layer. However, manual concept annotation is discouraged by the time and effort involved. Our work explores the potential of large language models (LLMs) for generating high-quality concept banks and proposes a multimodal evaluation metric to assess the quality of generated concepts. We investigate three key research questions: the ability of LLMs to generate concept banks comparable to existing knowledge bases such as ConceptNet, the sufficiency of unimodal text-based semantic similarity for evaluating concept-class label associations, and the effectiveness of multimodal information in quantifying concept generation quality compared to unimodal concept-label semantic similarity. Our findings reveal that multimodal models outperform unimodal approaches in capturing concept-class label similarity. Furthermore, our generated concepts for the CIFAR-10 and CIFAR-100 datasets surpass those obtained from ConceptNet and the baseline comparison, demonstrating the standalone capability of LLMs in generating high-quality concepts. The ability to automatically generate and evaluate high-quality concepts will enable researchers to quickly adapt and iterate on a new dataset with little to no effort before feeding it into concept bottleneck models.
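
A hedged sketch of how such a multimodal quality metric could be computed: candidate concepts are ranked by a blend of text-to-label and text-to-image cosine similarity. The encoders are passed in as parameters and assumed to share one joint embedding space (e.g., a CLIP-style model); they are assumptions for illustration, not the paper's concrete models.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_concepts(concepts, label, images, embed_text, embed_image, w=0.5):
    """Rank candidate concepts for one class label by blending unimodal
    text-to-label similarity with multimodal text-to-image similarity.
    embed_text / embed_image are assumed to map into one joint space
    (e.g. a CLIP-style encoder); they are placeholders, not a fixed API."""
    label_vec = embed_text(label)
    img_vecs = [embed_image(im) for im in images]
    scored = []
    for c in concepts:
        v = embed_text(c)
        text_sim = cosine(v, label_vec)                        # unimodal score
        img_sim = np.mean([cosine(v, iv) for iv in img_vecs])  # multimodal score
        scored.append((w * text_sim + (1 - w) * img_sim, c))
    return sorted(scored, reverse=True)
```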

  4. Zhitnukhin N.A., Zhadan A.Y., Kondratov I.V., Allahverdyan A.L., Granichin O.N., Petrosian O.L., Romanovskii A.V., Kharin V.S.
    Multi-agent local voting protocol for online DAG scheduling
    Computer Research and Modeling, 2025, v. 17, no. 1, pp. 29-44

    Scheduling computational workflows represented by directed acyclic graphs (DAGs) is crucial in many areas of computer science, such as cloud/edge tasks with distributed workloads and data mining. The complexity of online DAG scheduling is compounded by the large number of computational nodes, data transfer delays, heterogeneity (by type and processing power) of executors, precedence constraints imposed by the DAG, and the nonuniform arrival of tasks. This paper introduces the Multi-Agent Local Voting Protocol (MLVP), a novel approach focused on dynamic load balancing for DAG scheduling in heterogeneous computing environments, where executors are represented as agents. The MLVP employs a local voting protocol to achieve effective load distribution by formulating the problem as differentiated consensus achievement. The algorithm calculates an aggregated DAG metric for each executor-node pair based on node dependencies, node availability, and executor performance. The balance of these metrics as a weighted sum is optimized using a genetic algorithm to assign tasks probabilistically, achieving efficient workload distribution via information sharing and consensus among the executors across the system, thus improving the makespan. The effectiveness of the MLVP is demonstrated through comparisons with a state-of-the-art DAG scheduling algorithm and popular heuristics such as DONF, FIFO, Min-Min, and Max-Min. Numerical simulations show that MLVP achieves makespan improvements of up to 70% on specific graph topologies and an average makespan reduction of 23.99% over DONF (a state-of-the-art DAG scheduling heuristic) across a diverse set of randomly generated DAGs. Notably, the algorithm's scalability is evidenced by enhanced performance with increasing numbers of executors and graph nodes.
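
The following Python sketch shows the general shape of scoring executor-node pairs by a weighted sum of features and assigning tasks probabilistically. The feature set, the softmax assignment, and the fixed weight vector are illustrative stand-ins for the paper's aggregated DAG metric (whose weights are tuned by a genetic algorithm).

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_task(node, executors, w, temp=1.0):
    """Pick an executor for a ready DAG node probabilistically.
    Each executor gets a weighted-sum score over illustrative features
    (speed, queue length, data-transfer cost) standing in for the paper's
    aggregated DAG metric; the weight vector w is fixed here, whereas the
    paper tunes it with a genetic algorithm."""
    feats = np.array([[ex["speed"],
                       -ex["queue_len"],
                       -ex["transfer_cost"][node["id"]]] for ex in executors])
    scores = feats @ w
    p = np.exp((scores - scores.max()) / temp)
    p /= p.sum()                       # softmax -> assignment probabilities
    return rng.choice(len(executors), p=p)

executors = [
    {"speed": 2.0, "queue_len": 3, "transfer_cost": {0: 0.1}},
    {"speed": 1.0, "queue_len": 0, "transfer_cost": {0: 0.5}},
]
k = assign_task({"id": 0}, executors, w=np.array([1.0, 0.5, 1.0]))
```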

  5. Koganov A.V.
    Using complementary information in the problem of inverting averaging operators in function space
    Computer Research and Modeling, 2011, v. 3, no. 3, pp. 241-254

    The dual problem of integral geometry is solved: for a given averaging operator, to define the function class on which inversion of that operator is possible. Such classes are not defined uniquely. A full description of these classes is given in the form of the minimal complementary information that must be known about the function. The possibility of giving a constructive description of the class is investigated, and in the case of a finite averaging system the inversion formulas are given.
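
For intuition, a finite averaging system is just a linear operator, and inversion on a restricted class reduces to a linear solve. In this numpy toy example the least-norm pseudoinverse solution stands in for the class selected by complementary information; the numbers are made up for illustration.

```python
import numpy as np

# Each row of A averages f over a subset of points; recovering f from the
# averages g = A f is the inversion problem. A is rank-deficient, so some
# complementary information must select a unique preimage; the least-norm
# pseudoinverse solution below is one such (illustrative) choice.
A = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
])
f_true = np.array([1.0, 3.0, 2.0, 4.0])
g = A @ f_true                   # observed averages
f_rec = np.linalg.pinv(A) @ g    # one representative of the solution class
```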

  6. Polezhaev V.A.
    Automated citation graph building from a corpus of scientific documents
    Computer Research and Modeling, 2012, v. 4, no. 4, pp. 707-719

    In this paper the problem of automated building of a citation graph from a collection of scientific documents is considered as a sequence of machine learning tasks. The overall data processing technology is described, which consists of six stages: preprocessing, metainformation extraction, bibliography list extraction, splitting bibliography lists into separate bibliography records, standardization of each bibliography record, and record linkage. The goal of this paper is to provide a survey of approaches and algorithms suitable for each stage, to motivate the choice of the best combination of algorithms, and to adapt some of them for multilingual bibliography processing. For some of the tasks, new algorithms and heuristics are proposed and evaluated on a mixed corpus of English and Russian documents.

    Views (last year): 5. Citations: 1 (RSCI).
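
A schematic skeleton of the six-stage pipeline in Python: every stage callable here is a placeholder for the machine-learning components the survey compares, not an actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    raw: str
    fields: dict = field(default_factory=dict)   # author, title, year, ...

def build_citation_graph(documents, stages):
    """Skeleton of the six-stage pipeline described above. Every entry of
    `stages` is a callable placeholder for the corresponding ML component."""
    edges = []
    for doc in documents:
        text = stages["preprocess"](doc)
        meta = stages["extract_meta"](text)           # metainformation
        bib = stages["extract_bibliography"](text)    # bibliography list
        records = stages["split_records"](bib)        # one string per item
        parsed = [stages["standardize"](Record(r)) for r in records]
        for rec in parsed:
            target = stages["link"](rec)              # record linkage
            if target is not None:
                edges.append((meta["id"], target))    # citing -> cited
    return edges
```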
  7. Demianov A.Y., Dinariev O.Y., Lisitsin D.A.
    Numerical simulation of frequency dependence of dielectric permittivity and electrical conductivity of saturated porous media
    Computer Research and Modeling, 2016, v. 8, no. 5, pp. 765-773

    This article presents a numerical simulation technique for determining the effective spectral electromagnetic properties (effective electrical conductivity and relative dielectric permittivity) of saturated porous media. Information about these properties is widely used in the interpretation of petrophysical borehole logging data and in the study of rock core samples. The main feature of the present paper is that it involves three-dimensional saturated digital rock models constructed from combined data on the microscopic structure of the porous medium and on the capillary equilibrium of the oil-water mixture in the pores. Data on the microscopic structure of the model are obtained by means of X-ray microscopic tomography. Information about the distributions of the saturating fluids is based on hydrodynamic simulations with the density functional technique. To determine the electromagnetic properties of the numerical model, the time-domain Fourier transform of the Maxwell equations is considered. In the low-frequency approximation the problem reduces to solving an elliptic equation for the distribution of the complex electric potential. The finite-difference approximation is based on discretization of the model with a homogeneous isotropic orthogonal grid. This discretization implies that each computational cell contains exactly one medium: water, oil, or rock. To obtain a suitable numerical model, the distributions of the saturating components are segmented. This modification avoids the use of heterogeneous grids and excludes from the simulation results the influence of the additional techniques that would be required to determine the properties of cells filled with a mixture of media. The corresponding system of differential equations is solved by means of the biconjugate gradient stabilized method with a multigrid preconditioner. From the computed complex electric potential, average values of the electrical conductivity and relative dielectric permittivity are calculated. For the sake of simplicity, this paper considers only simulations in which the conductivities and permittivities of the model components have no spectral dependence. The results of numerical simulations of the spectral dependence of the effective characteristics of heterogeneously saturated porous media (electrical conductivity and relative dielectric permittivity) over a broad range of frequencies and multiple water saturations are presented in figures and a table. The efficiency of the presented approach for determining the spectral electrical properties of saturated rocks is discussed in the conclusion.

    Views (last year): 8.
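
As a one-dimensional sanity-check analogue of this approach (the paper solves the full three-dimensional elliptic problem for the complex potential), the effective complex conductivity of cells in series can be computed in closed form. The material values below are made up for illustration.

```python
import numpy as np

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def effective_1d(sigmas, eps_rel, omega):
    """Effective complex conductivity of cells in series (1D toy model).
    Each cell has s* = sigma + i*omega*EPS0*eps_r; for series current flow
    the effective value is the harmonic mean. Returns the equivalent
    effective conductivity and relative permittivity."""
    s = np.asarray(sigmas) + 1j * omega * EPS0 * np.asarray(eps_rel)
    s_eff = len(s) / np.sum(1.0 / s)
    return s_eff.real, s_eff.imag / (omega * EPS0)

# Water-filled vs. oil-filled cells (made-up values), swept over frequency:
for f_hz in (1e3, 1e6, 1e9):
    print(f_hz, effective_1d([3.0, 1e-6], [80.0, 2.2], 2 * np.pi * f_hz))
```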
  8. Production efficiency directly depends on the quality of process management, which in turn relies on the accuracy and efficiency of processing control and measurement information. Developing mathematical methods for studying system relationships and regularities of functioning, building mathematical models that take the structural features of the object of research into account, and writing software that implements these methods are therefore relevant tasks. Practice has shown that the list of parameters involved in the study of a complex object of modern production ranges from a few dozen to several hundred items, and the degree of influence of each factor is not clear at the outset. A direct determination of the model under these circumstances is impossible: the amount of required information may be too great, and most of the work on collecting this information would be done in vain, because the influence on the optimization of most factors from the original list would turn out to be negligible. Therefore, a necessary step in determining a model of a complex object is work to reduce the dimension of the factor space. Most industrial plants involve hierarchical group processes and high-volume mass production, characterized by hundreds of factors. (Data from the Moldavian steel works were taken as the basis for implementing the mathematical methods and testing the constructed models.) To investigate the system relationships and patterns of functioning of such complex objects, several informative parameters are usually chosen and sampled. This article describes the sequence of steps for bringing the initial indicators of the steel-smelting process to a form suitable for building a mathematical model for prediction purposes. The introduction of new product types also became the basis for developing a system of automated product quality management. In the course of the correlation analysis, the following stages are distinguished: collection and analysis of the basic data, construction of the table of correlated parameters, and reduction of the factor space by means of correlation pleiades and the method of weight factors. The results obtained make it possible to optimize the process of building a model of a multifactor process.

    Views (last year): 6. Citations: 1 (RSCI).
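
A minimal sketch of the dimensionality-reduction step: group factors whose pairwise correlation exceeds a threshold (a crude stand-in for correlation pleiades) and keep one representative per group. The factor names are hypothetical, and the weight-factor method is not modeled.

```python
import numpy as np

def correlation_pleiades(X, names, threshold=0.8):
    """Greedy single-pass grouping of factors whose pairwise |r| exceeds
    the threshold, keeping one representative per group. A crude stand-in
    for the pleiades method; a production version would use proper
    clustering, and the weight-factor step is not modeled.
    X: (n_samples, n_factors) array of process measurements."""
    r = np.corrcoef(X, rowvar=False)
    n = len(names)
    group = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if abs(r[i, j]) >= threshold:
                group[j] = group[i]          # j joins i's group
    return [names[k] for k in sorted(set(group))]

# Hypothetical factors: two nearly duplicate temperature readings and one
# independent composition parameter.
rng = np.random.default_rng(1)
a = rng.normal(size=200)
X = np.column_stack([a, a + 0.01 * rng.normal(size=200), rng.normal(size=200)])
print(correlation_pleiades(X, ["T_melt", "T_sensor", "C_carbon"]))
```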
  9. Usanov M.S., Kulberg N.S., Morozov S.P.
    Development of anisotropic nonlinear noise-reduction algorithm for computed tomography data with context dynamic threshold
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 233-248

    The article deals with the development of a noise-reduction algorithm based on anisotropic nonlinear filtering of computed tomography (CT) data. Analysis of domestic and foreign literature has shown that the most effective algorithms for noise reduction of CT data use complex methods of data analysis and processing, such as bilateral, adaptive, three-dimensional, and other types of filtering. However, a combination of such techniques is rarely used in practice because of the long processing time per slice. Consequently, it was decided to develop an efficient and fast noise-reduction algorithm based on a simplified bilateral filtering method with three-dimensional data accumulation. The algorithm was developed in the C++11 programming language in Microsoft Visual Studio 2015. The main distinction of the developed noise-reduction algorithm is the use of an improved mathematical model of CT noise, based on the Poisson and Gauss distributions of the logarithmized value, developed earlier by our team. This allows a more accurate determination of the noise level and, thus, of the data-processing threshold. As a result of the noise-reduction algorithm, processed CT data with a lower noise level were obtained. Visual evaluation showed the increased information content of the processed data compared to the original data, clearer mapping of homogeneous regions, and a significant reduction of noise in the processed areas. Numerical assessment of the algorithm's results showed a decrease in the standard deviation (SD) level by more than a factor of 6 in the processed areas, and high values of the coefficient of determination showed that the data were not distorted and changed only due to the removal of noise. Use of the newly developed context dynamic threshold made it possible to decrease the SD level in every area of the data. The main distinction of the developed threshold is its simplicity and speed, achieved by a preliminary estimation of the data array and derivation of threshold values that are put in correspondence with each pixel of the CT image. Its principle of operation is based on threshold criteria, which fit well both into the developed noise-reduction algorithm based on anisotropic nonlinear filtering and into other noise-reduction algorithms. The algorithm successfully functions as part of the MultiVox workstation and is being prepared for deployment in the unified radiological network of the city of Moscow.

    Views (last year): 21.
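
For reference, a minimal single-slice bilateral filter in Python showing the core edge-preserving weighting. The paper's simplified variant with three-dimensional accumulation, the Poisson-Gauss noise model, and the context dynamic threshold are not reproduced here.

```python
import numpy as np

def bilateral_2d(img, radius=2, sigma_s=1.5, sigma_r=30.0):
    """Minimal single-slice bilateral filter: each output pixel is a
    weighted mean of its neighborhood, with weights that fall off both
    with spatial distance and with intensity difference, so edges are
    preserved while homogeneous areas are smoothed."""
    h, w = img.shape
    pad = np.pad(img.astype(float), radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * range_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

noisy = np.random.default_rng(0).normal(100.0, 20.0, (32, 32))
denoised = bilateral_2d(noisy)
```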
  10. We consider a model of the spontaneous formation of a computational structure in the human brain for solving a given class of tasks in the course of performing a series of similar tasks. The model is based on a special definition of a numerical measure of the complexity of the solution algorithm. This measure has an informational property: the complexity of a computational structure consisting of two independent structures is equal to the sum of the complexities of these structures. The probability of the spontaneous occurrence of a structure then depends exponentially on its complexity. The exponential coefficient requires experimental determination for each type of problem; it may depend on the form in which the source data are presented and on the procedure for reporting the result. This estimation method was applied to the results of a series of experiments that determined the strategy for solving a series of similar problems with a growing amount of initial data. These experiments were described in previously published papers. Two main strategies were considered: sequential execution of the computational algorithm, or the use of parallel computing in those tasks where it is effective. These strategies differ in how the calculations are performed. Using the complexity estimates of the schemes, the empirical probability of one strategy can be used to calculate the probability of the other. The calculations performed showed a good match between the calculated and empirical probabilities. This confirms the hypothesis of the spontaneous formation of structures that solve the problem during a person's initial training. The paper contains a brief description of the experiments, detailed computational schemes, a strict definition of the complexity measure of computational structures, and the derivation of the dependence of the probability of structure formation on its complexity.
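
The key quantitative relation can be stated compactly: with an additive complexity $C$ and $P \propto \exp(-\lambda C)$, normalizing over the competing strategies gives their probabilities. A small sketch, with made-up complexity values and an assumed coefficient:

```python
import numpy as np

def strategy_probabilities(complexities, lam):
    """P(structure) ~ exp(-lam * C), normalized over competing strategies.
    lam is the experimentally fitted coefficient mentioned in the abstract;
    the complexity values used below are made up for illustration."""
    w = np.exp(-lam * np.asarray(complexities, dtype=float))
    return w / w.sum()

# With complexities C1 = 5 and C2 = 8, knowing one empirical probability
# fixes lam via p1 / p2 = exp(-lam * (C1 - C2)); here lam is just assumed.
print(strategy_probabilities([5.0, 8.0], lam=0.4))
```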
