Search results for 'regularization':
Articles found: 78
  1. Maksimov F.A.
    Supersonic flow of system of bodies
    Computer Research and Modeling, 2013, v. 5, no. 6, pp. 969-980

    This work is devoted to the aerodynamic properties of a system of bodies in a supersonic flow. The decrease of mutual influence with the growth of the size characterizing the scattering of the system's elements is considered. Flow modeling uses a grid built from a set of component grids. One grid, regular with rectangular cells, accounts for the interference between bodies and serves to describe the external inviscid flow. The other grids are attached to the surfaces of the streamlined bodies and resolve the viscous layers around them. These grids are overlaid on the first without matching any nodes; boundary conditions are realized by interpolating functions on the boundaries from one grid to another.
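    The inter-grid boundary transfer described above can be sketched as bilinear interpolation inside a rectangular cell. This is a minimal illustration under assumed conventions; the function name and the cell parametrization are not taken from the paper.

```python
def bilinear(f00, f10, f01, f11, tx, ty):
    """Bilinear interpolation inside a rectangular cell with corner
    values f00, f10, f01, f11 and local coordinates tx, ty in [0, 1].
    A boundary point of one grid is evaluated from the enclosing cell
    of the other, non-matching grid in exactly this way."""
    return (f00 * (1 - tx) * (1 - ty) + f10 * tx * (1 - ty)
            + f01 * (1 - tx) * ty + f11 * tx * ty)

# A boundary node that falls at the center of a cell of the other grid
value = bilinear(0.0, 1.0, 0.0, 1.0, 0.5, 0.5)
```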

    Views (last year): 1. Citations: 19 (RSCI).
  2. Svistunov I.N., Kolokol A.S., Shimkevich A.L.
    Topological microstructure analysis of the TIP4P-EW water model
    Computer Research and Modeling, 2014, v. 6, no. 3, pp. 415-426

    Molecular dynamics (MD) simulations of the rigid TIP4P-EW water model at ambient conditions were carried out. Delaunay simplexes were considered as structural elements of liquid water. A topological criterion that identifies the water microstructure in a snapshot of the MD cell was used to extract its dense part. Geometrical analysis of the water Delaunay simplexes indicates that they are strongly flattened in comparison with a regular tetrahedron, which is fundamentally different from the results for the dense part of simple liquids. The statistics of TIP4P-EW water clusters was investigated depending on their cardinality and connectivity. It is similar to the statistics for simple liquids, and the structure of the dense part is likewise a fractal surface consisting of the free edges of the Delaunay simplexes.

    Views (last year): 1. Citations: 1 (RSCI).
  3. Cheremisina E.N., Senner A.E.
    The use of GIS INTEGRO in searching tasks for oil and gas deposits
    Computer Research and Modeling, 2015, v. 7, no. 3, pp. 439-444

    GIS INTEGRO is a geo-information software system that forms the basis for the integrated interpretation of geophysical data in studying the deep structure of the Earth. GIS INTEGRO combines a variety of computational and analytical applications for solving geological and geophysical problems. It includes interfaces for changing the representation of data (raster, vector, regular and irregular networks of observations), a unit for converting map projections, and application blocks, including a block for integrated data analysis and for solving prognostic and diagnostic tasks.

    The methodological approach is based on the integration and joint analysis of geophysical data along regional profiles, geophysical potential fields, and additional geological information on the study area. Analytical support includes packages for transformations, filtering, statistical processing, calculations, lineament detection, solving direct and inverse problems, and the integration of geographic information.

    The technology and its software and analytical support were tested in tectonic zoning problems at scales 1:200000 and 1:1000000 in Yakutia, Kazakhstan, and the Rostov region, in studying the deep structure along the regional profiles 1:S, 1-SC, 2-SAT, 3-SAT, and 2-DV, and in oil and gas forecasting in regions of Eastern Siberia and Brazil.

    The article describes two possible approaches to parallel computations for processing 2D or 3D grids in geophysical research. As an example, it presents the implementation in a GRID environment of the application software ZondGeoStat (statistical sensing), which creates a 3D grid model on the basis of 2D grid data. The experience has demonstrated the high efficiency of GRID environments for computations in geophysical research.

    Views (last year): 4.
  4. Different versions of shifting-mode-of-reproduction models describe a set of interacting macroeconomic production subsystems, each of which has a corresponding household. These subsystems differ in the age of the fixed capital they use, as they alternately stop production to update it by their own efforts (to repair equipment and introduce innovations that increase production efficiency). This essentially distinguishes this type of model from models describing the mode of joint reproduction, in which the updating of fixed capital and the production of a product happen simultaneously. Models of the shifting mode of reproduction can describe the mechanisms of such phenomena as money circulation and amortization, describe different types of monetary policy, and offer a new interpretation of the mechanisms of economic growth. Unlike many other macroeconomic models, models of this class, in which competing subsystems take turns gaining an advantage over the others because of updating, are essentially non-equilibrium. They were originally described as systems of ordinary differential equations with abruptly varying coefficients. Numerical calculations carried out for these systems revealed both regular and irregular dynamics, depending on parameter values and initial conditions. This paper shows that the simplest versions of this model can be represented, without additional approximations, in a discrete form (as nonlinear mappings) with different variants (continuous and discrete) of financial flows between subsystems (interpreted as wages and subsidies). This form of representation is more convenient both for obtaining analytical results and for more economical and accurate numerical calculations. In particular, its use made it possible to determine the initial conditions corresponding to coordinated and sustained economic growth without systematic lagging of one subsystem's production behind the others.

    Views (last year): 1. Citations: 4 (RSCI).
  5. Gasnikov A.V., Kubentayeva M.B.
    Searching stochastic equilibria in transport networks by universal primal-dual gradient method
    Computer Research and Modeling, 2018, v. 10, no. 3, pp. 335-345

    We consider one of the problems of transport modelling: searching for the equilibrium distribution of traffic flows in a network. We use the classic Beckmann model to describe time costs and flow distribution in a network represented by a directed graph. The agents' behavior is not completely rational, which is modeled by introducing Markov logit dynamics: each driver selects a route randomly according to the Gibbs distribution, taking into account the current time costs on the edges of the graph. Thus, the problem reduces to finding the stationary distribution of this dynamics, which is a stochastic Nash – Wardrop equilibrium in the corresponding population congestion game on the transport network. Since the game is potential, this problem is equivalent to minimizing some functional over flow distributions. The stochasticity is reflected in the appearance of an entropy regularization, in contrast to the non-stochastic case. The dual problem is constructed to obtain a solution of the optimization problem, and the universal primal-dual gradient method is applied. A major specificity of this method is its adaptive adjustment to the local smoothness of the problem, which is most important when the objective function has a complex structure and a prior smoothness bound cannot be obtained with acceptable accuracy. Such a situation occurs in the considered problem, since the properties of the function strongly depend on the transport graph, on which we do not impose strong restrictions. The article describes the algorithm, including the numerical differentiation used to calculate the objective function value and gradient. In addition, the paper presents a theoretical estimate of the time complexity of the algorithm and the results of numerical experiments conducted on the road network of a small American town.
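    The logit (Gibbs) route choice described above can be sketched in a few lines. This is a hedged illustration: the function name and the inverse-temperature parameter `beta` are ours, not the paper's notation.

```python
import math

def logit_route_choice(costs, beta=1.0):
    """Gibbs (logit) distribution over candidate routes: routes with
    lower time cost are chosen with higher probability; larger beta
    means behavior closer to fully rational (always cheapest route)."""
    weights = [math.exp(-beta * c) for c in costs]
    z = sum(weights)
    return [w / z for w in weights]

# Three candidate routes with time costs 1.0, 2.0 and 3.0
probs = logit_route_choice([1.0, 2.0, 3.0], beta=1.0)
```

    With `beta` small the distribution flattens toward uniform; with `beta` large it concentrates on the cheapest route.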

    Views (last year): 28.
  6. Beed R.S., Sarkar S., Roy A., Dutta Biswas S., Biswas S.
    A hybrid multi-objective carpool route optimization technique using genetic algorithm and A* algorithm
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 67-85

    Carpooling has gained considerable importance as an effective solution for reducing pollution, mitigating traffic congestion, reducing the demand for parking facilities, lowering energy and fuel consumption and, most importantly, cutting carbon emissions, thus improving the quality of life in cities. This work presents a hybrid GA-A* algorithm for obtaining optimal routes for the carpooling problem in the domain of multiobjective optimization with multiple conflicting objectives. Though the Genetic Algorithm provides optimal solutions, the A* algorithm, owing to its efficiency in finding the shortest route between any two points based on heuristics, enhances the optimal routes obtained by the Genetic Algorithm. The refined routes obtained with the GA-A* algorithm are further subjected to a dominance test to obtain non-dominated solutions based on Pareto optimality. The routes obtained maximize the profit of the service provider by minimizing the travel and detour distances as well as pick-up/drop costs while maximizing the utilization of the car. The proposed algorithm has been applied to the Salt Lake area of Kolkata. The route distance and detour distance of the optimal routes obtained with the proposed algorithm are consistently smaller for the same number of passengers than the corresponding results of an existing algorithm. Statistical analyses such as boxplots have also confirmed that the proposed algorithm regularly performs better than the existing algorithm, which uses the Genetic Algorithm alone.
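    The A* component of the hybrid can be illustrated with a minimal grid search. The toy grid and Manhattan heuristic here are our illustrative assumptions, not the paper's road network or cost model.

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 4-connected grid (0 = free, 1 = blocked) with a
    Manhattan-distance heuristic; returns the length of the shortest
    path in steps, or None if the goal is unreachable."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]   # (f = g + h, g, node)
    best = {start: 0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                ng = g + 1
                if ng < best.get((r, c), float("inf")):
                    best[(r, c)] = ng
                    heapq.heappush(open_set, (ng + h((r, c)), ng, (r, c)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
length = a_star(grid, (0, 0), (2, 0))  # forced around the wall
```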

  7. Tran T.T., Pham C.T.
    A hybrid regularizers approach based model for restoring image corrupted by Poisson noise
    Computer Research and Modeling, 2021, v. 13, no. 5, pp. 965-978

    Image denoising is one of the fundamental problems in digital image processing. It usually refers to the reconstruction of an image from an observed image degraded by noise. Many factors cause this degradation, such as transceiver equipment or environmental influences. To obtain higher-quality images, many methods have been proposed for the image denoising problem. Most image denoising methods are based on total variation (TV) regularization, with efficient algorithms developed for solving the related optimization problem. TV-based models have become a standard technique in image restoration owing to their ability to preserve image sharpness.

    In this paper, we focus on Poisson noise, which usually appears in photon-counting devices. We propose an effective regularization model based on a combination of first-order and fractional-order total variation for reconstructing images corrupted by Poisson noise. The proposed model allows us to eliminate noise while preserving edges. An efficient alternating minimization algorithm is employed to solve the optimization problem. Finally, the presented numerical results show that our proposed model preserves more details and achieves higher visual quality than recent state-of-the-art methods.
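    The shape of such an objective can be sketched for a 1-D signal: the Poisson (Kullback–Leibler) data-fidelity term plus a TV penalty. This is a simplified assumption-laden sketch: only the first-order TV term is shown (the paper combines first- and fractional-order TV), and the function name and test signals are ours.

```python
import math

def poisson_tv_objective(u, f, lam):
    """Illustrative TV-regularized Poisson denoising objective for a
    1-D signal u given observed counts f: the Poisson fidelity term
    sum(u_i - f_i * log u_i) plus lam * sum |u_{i+1} - u_i| (first-order
    TV only; requires u_i > 0)."""
    fidelity = sum(ui - fi * math.log(ui) for ui, fi in zip(u, f))
    tv = sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))
    return fidelity + lam * tv

f = [4.0, 5.0, 3.0, 6.0]          # observed photon counts
smooth = [4.5, 4.5, 4.5, 4.5]     # flat candidate: zero TV penalty
noisy = [4.0, 5.0, 3.0, 6.0]      # fits the data exactly but pays TV
```

    A minimizer trades data fit against oscillation, which is why TV-type models smooth noise while keeping jumps (edges).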

  8. Dvinskikh D.M., Pirau V.V., Gasnikov A.V.
    On the relations of stochastic convex optimization problems with empirical risk minimization problems on $p$-norm balls
    Computer Research and Modeling, 2022, v. 14, no. 2, pp. 309-319

    In this paper, we consider convex stochastic optimization problems arising in machine learning applications (e. g., risk minimization) and mathematical statistics (e. g., maximum likelihood estimation). There are two main approaches to such problems: the Stochastic Approximation approach (online approach) and the Sample Average Approximation approach, also known as the Monte Carlo approach (offline approach). In the offline approach, the problem is replaced by its empirical counterpart (the empirical risk minimization problem). The natural question is how to define the sample size, i. e., how many realizations should be sampled so that a sufficiently accurate solution of the empirical problem is also a solution of the original problem with the desired precision. This is one of the main issues in modern machine learning and optimization. In the last decade, significant advances were made in solving convex stochastic optimization problems on Euclidean balls (or the whole space). In this work, we build on these advances and study the case of arbitrary balls in the $p$-norms. We also explore how the parameter $p$ affects the estimates of the required number of terms in the empirical risk.

    In this paper, both convex and saddle point optimization problems are considered. For strongly convex problems, the existing results on equal sample sizes in both approaches (online and offline) were generalized to arbitrary norms. Moreover, it was shown that the strong convexity condition can be weakened: the obtained results are valid for functions satisfying the quadratic growth condition. When this condition is not met, it is proposed to regularize the original problem in an arbitrary norm. In contrast to convex problems, saddle point problems are much less studied. For saddle point problems, the sample size was obtained under the condition of $\gamma$-growth of the objective function. When $\gamma = 1$, this condition is the sharp minimum condition for convex problems. It was shown that the sample size in the case of a sharp minimum is almost independent of the desired accuracy of the solution of the original problem.
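    The regularization step mentioned above, adding a squared $p$-norm term to the empirical objective, can be written as a tiny sketch. The helper names and the choice of regularizer are our illustrative assumptions, not the paper's exact construction.

```python
def p_norm(x, p):
    """||x||_p for a vector given as a list of floats, p >= 1."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def regularized_risk(x, empirical_risk, lam, p):
    """Empirical risk plus lam * ||x||_p^2: a standard way to recover
    strong-convexity-like behavior when the quadratic growth
    condition fails (illustrative sketch, not the paper's method)."""
    return empirical_risk(x) + lam * p_norm(x, p) ** 2

# Zero risk makes the regularizer's contribution easy to read off
value = regularized_risk([3.0, 4.0], lambda x: 0.0, 1.0, 2)
```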

  9. Zhdanova O.L., Zhdanov V.S., Neverova G.P.
    Modeling the dynamics of plankton community considering phytoplankton toxicity
    Computer Research and Modeling, 2022, v. 14, no. 6, pp. 1301-1323

    We propose a three-component discrete-time model of the phytoplankton-zooplankton community, in which toxic and non-toxic phytoplankton species compete for resources. A Holling functional response of type II describes the interaction between zooplankton and phytoplankton. With the Ricker competition model, we describe the limitation of phytoplankton biomass growth by the availability of external resources (mineral nutrition, oxygen, light, etc.). Many phytoplankton species, including diatom algae, are known not to release toxins unless they are damaged. Zooplankton pressure on phytoplankton decreases in the presence of toxic substances. For example, copepods are selective in their food choices and avoid consuming toxin-producing phytoplankton. Therefore, in our model, zooplankton (the predator) consumes only non-toxic phytoplankton species as prey, while toxic phytoplankton species only compete with non-toxic ones for resources.

    We study the proposed model analytically and numerically. Dynamic mode maps allow us to investigate the stability domains of fixed points, bifurcations, and the evolution of the community. Stability loss of fixed points is shown to occur only through a cascade of period-doubling bifurcations. The Neimark – Sacker scenario, leading to the appearance of quasiperiodic oscillations, is also found to occur. Changes in intrapopulation parameters of phytoplankton or zooplankton can lead to abrupt transitions from regular to quasiperiodic dynamics (according to the Neimark – Sacker scenario) and further to cycles with a short period or even stationary dynamics. In multistability areas, varying the initial conditions with all model parameters unchanged can shift the current dynamic mode and/or the community composition.

    The proposed discrete-time model of the community is quite simple and reveals dynamics of the interacting species that coincide with features of experimentally observed dynamics. In particular, the system shows behavior typical of prey-predator models without evolution: the predator fluctuations lag behind those of the prey by about a quarter of the period. Considering the genetic heterogeneity of phytoplankton, in the simplest case of two genetically different forms (toxic and non-toxic), allows the model to demonstrate both long-period antiphase oscillations of predator and prey and cryptic cycles. During a cryptic cycle, the prey density remains almost constant while the predator fluctuates, which corresponds to rapid evolution masking the trophic interaction.
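    The two building blocks named above, Ricker-type density-dependent growth and a Holling type II functional response, can be combined in a minimal two-component discrete map. This is a generic illustrative sketch with made-up parameter values, not the authors' three-component system.

```python
import math

def step(x, y, r=2.0, a=1.0, h=0.5, c=0.5, s=0.8):
    """One iteration of an illustrative discrete prey-predator map:
    Ricker growth x * exp(r * (1 - x)) for the prey and a Holling
    type II response a*x / (1 + a*h*x) coupling prey to predator.
    All parameter values are illustrative, not from the paper."""
    response = a * x / (1.0 + a * h * x)
    x_next = x * math.exp(r * (1.0 - x)) - response * y
    y_next = s * y + c * response * y
    return max(x_next, 0.0), max(y_next, 0.0)

x, y = 0.5, 0.1
for _ in range(200):
    x, y = step(x, y)
```

    Iterating such maps while scanning the parameters is exactly how dynamic mode maps (period-doubling cascades, quasiperiodic windows) are built numerically.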

  10. Pham C.T., Phan M.N., Tran T.T.
    Image classification based on deep learning with automatic relevance determination and structured Bayesian pruning
    Computer Research and Modeling, 2024, v. 16, no. 4, pp. 927-938

    Deep learning’s power stems from complex architectures; however, these can lead to overfitting, where models memorize training data and fail to generalize to unseen examples. This paper proposes a novel probabilistic approach to mitigate this issue. We introduce two key elements: Truncated Log-Uniform Prior and Truncated Log-Normal Variational Approximation, and Automatic Relevance Determination (ARD) with Bayesian Deep Neural Networks (BDNNs). Within the probabilistic framework, we employ a specially designed truncated log-uniform prior for noise. This prior acts as a regularizer, guiding the learning process towards simpler solutions and reducing overfitting. Additionally, a truncated log-normal variational approximation is used for efficient handling of the complex probability distributions inherent in deep learning models. ARD automatically identifies and removes irrelevant features or weights within a model. By integrating ARD with BDNNs, where weights have a probability distribution, we achieve a variational bound similar to the popular variational dropout technique. Dropout randomly drops neurons during training, encouraging the model not to rely heavily on any single feature. Our approach with ARD achieves similar benefits without the randomness of dropout, potentially leading to more stable training.

    To evaluate our approach, we have tested the model on two datasets: the Canadian Institute For Advanced Research (CIFAR-10) for image classification and a dataset of Macroscopic Images of Wood, which is compiled from multiple macroscopic images of wood datasets. Our method is applied to established architectures like Visual Geometry Group (VGG) and Residual Network (ResNet). The results demonstrate significant improvements. The model reduced overfitting while maintaining, or even improving, the accuracy of the network’s predictions on classification tasks. This validates the effectiveness of our approach in enhancing the performance and generalization capabilities of deep learning models.


Indexed in Scopus

Full-text version of the journal is also available on the web site of the scientific electronic library eLIBRARY.RU

The journal is included in the Russian Science Citation Index

