Search results for 'numerical algorithm':
Articles found: 136
  1. Rukavishnikov V.A., Mosolapov A.O.
    Weighted vector finite element method and its applications
    Computer Research and Modeling, 2019, v. 11, no. 1, pp. 71-86

    Mathematical models of many natural processes are described by partial differential equations with singular solutions. Classical numerical methods for determining an approximate solution to such problems are inefficient. In the present paper a boundary value problem for the vector wave equation in an L-shaped domain is considered. The presence of a reentrant corner of angle $3\pi/2$ on the boundary of the computational domain leads to a strong singularity of the solution: it does not belong to the Sobolev space $H^1$, so both classical and special numerical methods have a convergence rate below $O(h)$. Therefore a special weighted set of vector functions is introduced, in which the solution of the considered boundary value problem is defined as an $R_\nu$-generalized one.

    For the numerical determination of the $R_\nu$-generalized solution a weighted vector finite element method is constructed. Its distinctive feature is that the basis functions contain, as a factor, a special weight function raised to a power that depends on the properties of the solution of the original problem. This significantly raises the convergence rate of the approximate solution to the exact one as the mesh is refined. Moreover, the introduced basis functions are solenoidal, so the solenoidal constraint on the solution is satisfied exactly and spurious numerical solutions are avoided.
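
    A schematic illustration only (the notation is ours, not the paper's): if $\varphi_i(x)$ denotes a standard solenoidal vector basis function and $\rho(x)$ the distance to the reentrant corner, a weighted basis function of this kind can be written as

    $$\psi_i(x) = \rho^{\nu^*}\!(x)\,\varphi_i(x),$$

    where the exponent $\nu^*$ is chosen according to the properties of the solution of the original problem.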

    Results of numerical experiments are presented for a series of model problems of different types: some have a solution containing only the singular component, while others have a solution containing both singular and regular components. The experiments showed that when the finite element mesh is refined, the convergence rate of the constructed weighted vector finite element method is $O(h)$, which is more than one and a half times better than that of the special methods developed for this class of problems, namely the singular complement method and the regularization method. Other features of the constructed method are its algorithmic simplicity and the natural way in which the solution is determined, which is beneficial for numerical computations.

  2. Usanov M.S., Kulberg N.S., Morozov S.P.
    Development of anisotropic nonlinear noise-reduction algorithm for computed tomography data with context dynamic threshold
    Computer Research and Modeling, 2019, v. 11, no. 2, pp. 233-248

    The article deals with the development of a noise-reduction algorithm based on anisotropic nonlinear filtering of computed tomography (CT) data. An analysis of the domestic and foreign literature has shown that the most effective CT noise-reduction algorithms use complex methods of data analysis and processing, such as bilateral, adaptive, three-dimensional and other types of filtering. However, combinations of such techniques are rarely used in practice because of the long processing time per slice. For this reason, it was decided to develop an efficient and fast noise-reduction algorithm based on a simplified bilateral filtering method with three-dimensional data accumulation. The algorithm was developed in C++11 in Microsoft Visual Studio 2015. The main distinction of the developed algorithm is the use of an improved mathematical model of CT noise, based on a Poisson and Gaussian distribution of the logarithmic value and developed earlier by our team. This allows a more accurate determination of the noise level and, therefore, of the data-processing threshold.

    As a result of applying the noise-reduction algorithm, processed CT data with a lower noise level were obtained. Visual evaluation showed increased information content of the processed data compared to the original data, clearer mapping of homogeneous regions, and a significant reduction of noise in the processed areas. Numerical assessment showed a decrease in the standard deviation (SD) by more than a factor of 6 in the processed areas, while high values of the coefficient of determination showed that the data were not distorted and changed only due to the removal of noise.

    The use of the newly developed context dynamic threshold made it possible to decrease the SD level in every area of the data. The main distinction of the developed threshold is its simplicity and speed, achieved by a preliminary estimation of the data array and derivation of threshold values that are put in correspondence with each CT pixel. Its principle of operation is based on threshold criteria and fits well both into the developed noise-reduction algorithm based on anisotropic nonlinear filtering and into other noise-reduction algorithms. The algorithm successfully functions as part of the MultiVox workstation and is being prepared for deployment in the unified radiological network of the city of Moscow.
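
    The sketch below is not the authors' implementation; it only illustrates, under our own assumptions, the general idea of a simplified bilateral filter whose range parameter is taken from a per-pixel ("context") threshold map. All names and parameters are chosen for illustration, and the three-dimensional accumulation and the Poisson-Gaussian noise model mentioned above are not reproduced here.

    #include <cmath>
    #include <vector>

    // Illustrative sketch: a simplified 2D bilateral filter whose range parameter
    // is read from a per-pixel threshold map. Names and parameters are assumptions.
    struct Image {
        int width = 0, height = 0;
        std::vector<float> data;                        // row-major, width * height
        float  at(int x, int y) const { return data[y * width + x]; }
        float& at(int x, int y)       { return data[y * width + x]; }
    };

    Image bilateralDenoise(const Image& src, const std::vector<float>& threshold,
                           int radius, float sigmaSpatial) {
        Image dst = src;
        for (int y = radius; y < src.height - radius; ++y) {
            for (int x = radius; x < src.width - radius; ++x) {
                const float center     = src.at(x, y);
                const float sigmaRange = threshold[y * src.width + x]; // per-pixel threshold
                float sum = 0.0f, weightSum = 0.0f;
                for (int dy = -radius; dy <= radius; ++dy) {
                    for (int dx = -radius; dx <= radius; ++dx) {
                        const float v  = src.at(x + dx, y + dy);
                        const float ws = std::exp(-float(dx * dx + dy * dy) /
                                                  (2.0f * sigmaSpatial * sigmaSpatial));
                        const float wr = std::exp(-(v - center) * (v - center) /
                                                  (2.0f * sigmaRange * sigmaRange));
                        sum       += ws * wr * v;
                        weightSum += ws * wr;
                    }
                }
                dst.at(x, y) = sum / weightSum;   // center pixel guarantees weightSum > 0
            }
        }
        return dst;
    }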

  3. Stupitsky E.L., Andruschenko V.A.
    Physical research, numerical and analytical modeling of explosion phenomena. A review
    Computer Research and Modeling, 2020, v. 12, no. 3, pp. 505-546

    The review considers a wide range of phenomena and problems associated with explosions. Detailed numerical studies revealed an interesting physical effect: the formation of discrete vortex structures directly behind the front of a shock wave propagating through dense layers of a heterogeneous atmosphere. The need for further investigation of such phenomena, and for determining the degree of their connection with the possible development of gas-dynamic instability, is shown. A brief analysis of numerous works on the thermal explosion of meteoroids during their high-speed motion in the Earth's atmosphere is given. Much attention is paid to the development of a numerical algorithm for calculating the simultaneous explosion of several meteoroid fragments, and the features of the development of such a gas-dynamic flow are analyzed. It is shown that the explosion calculation algorithms developed earlier can be successfully used to study explosive volcanic eruptions. The paper presents and discusses the results of such studies for both continental and underwater volcanoes, under certain restrictions on the conditions of volcanic activity.

    A mathematical analysis is performed, and the results of analytical studies of a number of important physical phenomena characteristic of explosions of high specific energy in the ionosphere are presented. It is shown that preliminary laboratory physical modeling of the main processes that determine these phenomena is of fundamental importance for the development of sufficiently complete and adequate theoretical and numerical models of such complex phenomena as powerful plasma disturbances in the ionosphere. Laser plasma is the closest object for such a simulation. The results of the corresponding theoretical and experimental studies are presented, and their scientific and practical significance is shown. A brief review of recent work on the use of laser radiation for laboratory physical modeling of the effects of a nuclear explosion on asteroid materials is given.

    As a result of the analysis performed in the review, it was possible to identify and preliminarily formulate several interesting and scientifically significant questions that should be investigated on the basis of the ideas already obtained. These include finely dispersed chemically active systems formed during volcanic eruptions; small-scale vortex structures; and the generation of spontaneous magnetic fields due to the development of instabilities and their role in the transformation of plasma energy during its expansion in the ionosphere. It is also important to study a possible laboratory physical simulation of the thermal explosion of bodies under the influence of a high-speed plasma flow, which so far has only theoretical interpretations.

  4. We consider a model of the spontaneous formation of a computational structure in the human brain for solving a given class of tasks in the course of performing a series of similar tasks. The model is based on a special definition of a numerical measure of the complexity of the solution algorithm. This measure has an informational property: the complexity of a computational structure consisting of two independent structures is equal to the sum of the complexities of these structures. The probability of spontaneous occurrence of a structure then depends exponentially on its complexity. The exponential coefficient requires experimental determination for each type of problem and may depend on the form in which the source data are presented and on the procedure for issuing the result. This estimation method was applied to the results of a series of experiments that determined the strategy for solving a series of similar problems with a growing amount of initial data; these experiments were described in previously published papers. Two main strategies were considered: sequential execution of the computational algorithm, or the use of parallel computing in those tasks where it is effective. These strategies differ in how the calculations are performed. Using the complexity estimate of the computational schemes, the empirical probability of one strategy can be used to calculate the probability of the other. The calculations performed showed good agreement between the calculated and empirical probabilities. This confirms the hypothesis of the spontaneous formation of structures that solve the problem during a person's initial training. The paper contains a brief description of the experiments, detailed computational schemes, a rigorous definition of the complexity measure of computational structures, and the derivation of the dependence of the probability of structure formation on its complexity.
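
    The dependence described above can be restated compactly (the symbols $C$ and $\beta$ are ours): for independent structures the complexity is additive, $C(S_1 \oplus S_2) = C(S_1) + C(S_2)$, and the probability of spontaneous formation of a structure $S$ is modeled as

    $$P(S) \propto e^{-\beta\, C(S)},$$

    where the coefficient $\beta$ has to be determined experimentally for each type of problem.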

  5. The paper studies a multidimensional convection-diffusion equation with variable coefficients and a nonclassical boundary condition. Two cases are considered: in the first case, the first boundary condition contains an integral of the unknown function with respect to the integration variable $x_\alpha^{}$, and in the second case, an integral of the unknown function with respect to the integration variable $\tau$, describing a memory effect. Similar problems arise in studying the transport of impurities along a riverbed. For an approximate solution of the posed problem, a locally one-dimensional difference scheme of A.A. Samarskii with order of approximation $O(h^2+\tau)$ is constructed. Since the equation contains the first derivative of the unknown function with respect to the spatial variable $x_\alpha^{}$, the well-known method proposed by A.A. Samarskii for constructing a monotone scheme of second-order accuracy in $h_\alpha^{}$ for a general parabolic equation containing one-sided derivatives is used, taking into account the sign of $r_\alpha^{}(x,t)$. To raise the approximation of the boundary conditions of the third kind to second-order accuracy in $h_\alpha^{}$, the equation itself was used, on the assumption that it also holds at the boundaries. The uniqueness and stability of the solution were studied using the method of energy inequalities. A priori estimates are obtained for the solution of the difference problem in the $L_2^{}$-norm, which imply the uniqueness of the solution, the continuous and uniform dependence of the solution of the difference problem on the input data, and the convergence of the solution of the locally one-dimensional difference scheme to the solution of the original differential problem in the $L_2^{}$-norm at a rate equal to the order of approximation of the difference scheme. For the two-dimensional problem, a numerical solution algorithm is constructed.
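
    For orientation only, the generic form of Samarskii's locally one-dimensional splitting is as follows (our schematic notation; the operators and right-hand sides in the paper differ in details): the multidimensional problem $\partial u/\partial t = \sum_{\alpha=1}^{p} L_\alpha u + f$ is advanced from $t^n$ to $t^{n+1}$ through $p$ fractional steps,

    $$\frac{y^{n+\alpha/p} - y^{n+(\alpha-1)/p}}{\tau} = \Lambda_\alpha y^{n+\alpha/p} + f_\alpha, \qquad \alpha = 1, \dots, p,$$

    where each $\Lambda_\alpha$ is a monotone one-dimensional difference approximation of $L_\alpha$ in the direction $x_\alpha$, so that the scheme as a whole possesses summary approximation of order $O(h^2 + \tau)$.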

  6. Surov V.S.
    About one version of the nodal method of characteristics
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 29-44

    A variant of the inverse method of characteristics (IMC) is presented whose algorithm introduces an additional fractional time step, which makes it possible to increase the accuracy of calculations due to a more accurate approximation of the characteristics. The calculation formulas of the modified method (MIMC) are given for the equations of the one-velocity model of a gas-liquid mixture, with the help of which one-dimensional as well as plane test problems with self-similar solutions are calculated. When solving multidimensional problems, the original system of equations is split into a number of one-dimensional subsystems, which are calculated using the inverse method of characteristics with a fractional time step. Using the proposed method, the following problems were calculated: the one-dimensional problem of the decay of an arbitrary discontinuity in a dispersed medium; the two-dimensional problem of the interaction of a homogeneous gas-liquid flow with an obstacle with an attached shock wave, as well as a flow with a centered rarefaction wave. The results of numerical calculations of these problems are compared with self-similar solutions, and their satisfactory agreement is noted. On the example of the Riemann problem with a shock wave, a comparison is made with a number of conservative and non-conservative schemes of first and higher orders of accuracy, from which it follows, in particular, that the presented calculation method, i.e. the MIMC, is quite competitive. Although the application of the MIMC requires many times more computing time than the original inverse method of characteristics, calculations can be carried out with an increased time step and, in some cases, more accurate results can be obtained. It is noted that the method with a fractional time step has advantages over the IMC in cases where the characteristics of the system are significantly curvilinear. For this reason, the use of the MIMC for, e.g., the Euler equations is inappropriate, since for the latter the characteristics within a time step differ little from straight lines.
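
    One way to read the "additional fractional time step" (this is our schematic illustration, not the formulas of the paper): for a scalar characteristic equation $dx/dt = \lambda(x, t)$, the plain inverse method traces the characteristic back from the node $(x_i, t^{n+1})$ with the slope frozen at the old time level, $x_* = x_i - \lambda(x_i, t^n)\,\tau$, whereas a half-step re-evaluation of the slope,

    $$x_{1/2} = x_i - \lambda(x_i, t^n)\,\frac{\tau}{2}, \qquad x_* = x_i - \lambda\bigl(x_{1/2}, t^{n+1/2}\bigr)\,\tau,$$

    approximates a curved characteristic more accurately, which is consistent with the observation above that the benefit disappears when the characteristics are nearly straight.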

  7. Pham C.T., Tran T.T., Dang H.P.
    Image noise removal method based on nonconvex total generalized variation and primal-dual algorithm
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 527-541

    In various applications, such as astronomical imaging, electron microscopy, and tomography, images are often damaged by Poisson noise. At the same time, thermal motion leads to Gaussian noise. Therefore, in such applications the image is usually corrupted by mixed Poisson – Gaussian noise.

    In this paper, we propose a novel method for recovering images corrupted by mixed Poisson – Gaussian noise. In the proposed method, we develop a total variation-based model that combines a nonconvex function with total generalized variation regularization, which overcomes staircase artifacts and preserves sharp edges.

    Numerically, we employ the primal-dual method combined with the classical iteratively reweighted $l_1$ algorithm to solve our minimization problem. Experimental results are provided to demonstrate the superiority of the proposed model and algorithm for mixed Poisson – Gaussian noise removal over state-of-the-art numerical methods.
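
    Schematically, models of this type can be written as (the concrete fidelity and penalty functions of the paper may differ)

    $$\min_{u}\; D(u; f) + \lambda\, \Phi\bigl(\mathrm{TGV}(u)\bigr),$$

    where $D(u; f)$ is a data-fidelity term adapted to mixed Poisson – Gaussian noise, $\mathrm{TGV}$ is the total generalized variation, and $\Phi$ is a nonconvex penalty. In the iteratively reweighted $l_1$ strategy the nonconvex term is replaced at each iteration by a weighted convex one, with weights obtained from the previous iterate, and each resulting convex subproblem is solved by the primal-dual method.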

  8. Nefedova O.A., Spevak L.P., Kazakov A.L., Lee M.G.
    Solution to a two-dimensional nonlinear heat equation using null field method
    Computer Research and Modeling, 2023, v. 15, no. 6, pp. 1449-1467

    The paper deals with a heat wave motion problem for a degenerate second-order nonlinear parabolic equation with power nonlinearity. The boundary condition considered specifies, in the plane, the equation of motion of the circular zero front of the heat wave. A new numerical-analytical algorithm for solving the problem is proposed. The solution is constructed step by step in time using difference time discretization. At each time step, a boundary value problem for the Poisson equation corresponding to the original equation at a fixed time is considered. This problem is, in fact, an inverse Cauchy problem in a domain whose initial boundary carries no boundary conditions, while two boundary conditions (Neumann and Dirichlet) are specified on the current boundary (the heat wave). The solution of this problem is constructed as the sum of a particular solution of the nonhomogeneous Poisson equation and a solution of the corresponding Laplace equation satisfying the boundary conditions. Since the inhomogeneity depends on the desired function and its derivatives, an iterative solution procedure is used. The particular solution is sought by the collocation method using an expansion of the inhomogeneity in radial basis functions. The inverse Cauchy problem for the Laplace equation is solved by the null field method as applied to a circular domain with a circular hole. This method is used for the first time to solve such a problem.

    The calculation algorithm is optimized by parallelizing the computations, which allows it to be implemented efficiently on high-performance computing servers. The algorithm is implemented as a program parallelized using the OpenMP standard for the C++ language, suitable for calculations with parallel loops. The effectiveness of the algorithm and the robustness of the program are tested by comparing the calculation results with the known exact solution as well as with the numerical solution obtained earlier by the authors using the boundary element method. The computational experiment carried out shows good convergence of the iteration processes and higher calculation accuracy of the proposed new algorithm compared to the previously developed one. The analysis of the solution makes it possible to select the radial basis functions most suitable for the proposed algorithm.
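
    The OpenMP "parallel loops" mentioned above follow a standard pattern of the kind sketched below; this is an illustration with placeholder names, not the authors' code.

    #include <vector>

    // Illustration: iterations that are independent per collocation point can be
    // distributed over threads with a single OpenMP directive.
    std::vector<double> evaluateAtPoints(const std::vector<double>& points,
                                         double (*evaluateOnePoint)(double)) {
        std::vector<double> values(points.size());
        #pragma omp parallel for
        for (long long i = 0; i < static_cast<long long>(points.size()); ++i) {
            values[i] = evaluateOnePoint(points[i]);   // no data shared between iterations
        }
        return values;
    }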

  9. Dorn Y.V., Shitikov O.M.
    Detecting Braess paradox in the stable dynamic model
    Computer Research and Modeling, 2024, v. 16, no. 1, pp. 35-51

    The work investigates the search for inefficient edges in the stable dynamics model of Nesterov – de Palma (2003). For this purpose, we prove several general theorems about equilibrium properties, including the condition that the costs of all used routes are equal, which can be extended to all paths composed of edges belonging to equilibrium routes. The study demonstrates that the standard problem formulation of finding edges whose removal reduces the travel cost for all participants has no practical significance, because the same edge can be either efficient or inefficient depending on the network's load. We therefore introduce the concept of an inefficient edge based on the sensitivity of the total driver costs to the costs on that edge. The paper provides an algorithm for finding inefficient edges and presents the results of numerical experiments for the transportation network of the city of Anaheim.
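
    A minimal formalization of this notion (our notation; the paper's exact criterion may differ in detail): an edge $e$ is flagged as inefficient at the current load if

    $$\frac{\partial\, C_{\mathrm{total}}}{\partial\, t_e} < 0,$$

    i.e. if a marginal increase of the travel cost $t_e$ on this edge decreases the total equilibrium cost of all drivers.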
