Search results for 'transformation':
Articles found: 96
  1. Aptukov A.M., Bratsun D.A., Lyushnin A.V.
    Modeling of behavior of panicked crowd in multi-floor branched space
    Computer Research and Modeling, 2013, v. 5, no. 3, pp. 491-508

    The collective behavior of a crowd leaving a room is modeled. The model is based on a molecular dynamics approach with a mixture of socio-psychological and physical forces. A new algorithm for a complex branched space is proposed. It suggests that each individual develops an own plan of escape, which is stochastically transformed during the evolution. The algorithm also includes the separation of the original space into rooms, with possible exits selected by individuals according to their probability distribution. The model is calibrated on the basis of empirical data from the fire in the “Lame Horse” nightclub (Perm, 2009). The algorithm is implemented as end-user Java software. It is expected that this tool can help test buildings for human safety.
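
    A minimal social-force style sketch of such a crowd update is given below (assumed Helbing-type driving and repulsion terms and made-up parameter values; this is not the authors' calibrated model or their stochastic escape-plan algorithm):

```python
import numpy as np

# Hypothetical minimal sketch: agents accelerate toward a chosen exit and are
# repelled by nearby agents (socio-psychological force), molecular-dynamics style.
def step(pos, vel, exit_pos, dt=0.05, v_des=1.5, tau=0.5, A=2.0, B=0.3):
    n = len(pos)
    # driving force: relax velocity toward the desired speed along the exit direction
    to_exit = exit_pos - pos
    e = to_exit / np.linalg.norm(to_exit, axis=1, keepdims=True)
    force = (v_des * e - vel) / tau
    # pairwise repulsion from other agents
    for i in range(n):
        d = pos[i] - pos                      # vectors from others to agent i
        dist = np.linalg.norm(d, axis=1)
        mask = dist > 1e-9                    # skip self
        force[i] += np.sum(A * np.exp(-dist[mask, None] / B)
                           * d[mask] / dist[mask, None], axis=0)
    vel = vel + dt * force
    pos = pos + dt * vel
    return pos, vel

# usage: 50 agents in a 10 x 10 m room heading to an exit at (10, 5)
rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(50, 2))
vel = np.zeros((50, 2))
for _ in range(200):
    pos, vel = step(pos, vel, exit_pos=np.array([10.0, 5.0]))
```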

    Views (last year): 7. Citations: 10 (RSCI).
  2. Segmentation is one of the most challenging tasks in medical image analysis. It separates organ or lesion pixels from the background of medical images such as MRI or CT scans, providing critical information about the volumes and shapes of human organs. In scientific imaging, medical imaging is considered one of the most important topics due to the rapid and continuing progress in computerized medical image visualization, advances in analysis approaches and computer-aided diagnosis. Digital image processing is becoming more important in healthcare due to the growing use of direct digital imaging systems for medical diagnostics. Owing to medical imaging techniques, image processing approaches are now applicable in medicine. Generally, various transformations are needed to extract image data. Moreover, a digital image can be considered an approximation of a real scene that includes some uncertainty derived from the constraints of the imaging process, and information on the level of uncertainty influences an expert's judgement. To address this challenge, we propose a novel framework involving the interval concept, which is a good tool for dealing with uncertainty. In the proposed approach, medical images are transformed into an interval-valued representation, and entropies are defined for the image object and background. A threshold is then determined for the lower-bound image and for the upper-bound image, and the mean value is taken as the final output. To demonstrate the effectiveness of the proposed framework, we evaluate it using a synthetic image and its ground truth. Experimental results show how the performance of entropy-based threshold segmentation can be enhanced by the proposed approach to overcome ambiguity.
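
    A minimal sketch of the interval-valued thresholding idea (assuming local min/max filtering to build the lower/upper-bound images and a Kapur-style maximum-entropy threshold; the paper's exact entropy definitions may differ):

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def kapur_threshold(img):
    """Maximum-entropy (Kapur) threshold on an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        pb, pf = p[:t].sum(), p[t:].sum()
        if pb < 1e-12 or pf < 1e-12:
            continue
        hb = -np.sum(p[:t][p[:t] > 0] / pb * np.log(p[:t][p[:t] > 0] / pb))
        hf = -np.sum(p[t:][p[t:] > 0] / pf * np.log(p[t:][p[t:] > 0] / pf))
        if hb + hf > best_h:
            best_t, best_h = t, hb + hf
    return best_t

def interval_threshold(img, size=3):
    # interval-valued representation: lower/upper bounds from local min/max (assumed)
    lower = grey_erosion(img, size=(size, size))
    upper = grey_dilation(img, size=(size, size))
    t_low, t_up = kapur_threshold(lower), kapur_threshold(upper)
    t = 0.5 * (t_low + t_up)          # mean of the two thresholds
    return (img >= t).astype(np.uint8)
```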

  3. Shakhgeldyan K.I., Kuksin N.S., Domzhalov I.G., Pak R.L., Geltser B.I.
    Random forest of risk factors as a predictive tool for adverse events in clinical medicine
    Computer Research and Modeling, 2025, v. 17, no. 5, pp. 987-1004

    The aim of the study was to develop an ensemble machine learning method for constructing interpretable predictive models and to validate it using the example of predicting in-hospital mortality (IHM) in patients with ST-segment elevation myocardial infarction (STEMI).

    A retrospective cohort study was conducted using data from 5446 electronic medical records of STEMI patients who underwent percutaneous coronary intervention (PCI). Patients were divided into two groups: 335 (6.2%) patients who died during hospitalization and 5111 (93.8%) patients with a favourable in-hospital outcome. A pool of potential predictors was formed using statistical methods. Through multimetric categorization (minimizing p-values, maximizing the area under the ROC curve (AUC), and SHAP value analysis), decision trees, and multivariable logistic regression (MLR), predictors were transformed into risk factors for IHM. Predictive models for IHM were developed using MLR, Random Forest Risk Factors (RandFRF), Stochastic Gradient Boosting (XGBoost), Random Forest (RF), Adaptive Boosting, Gradient Boosting, Light Gradient-Boosting Machine, Categorical Boosting (CatBoost), Explainable Boosting Machine and Stacking methods.

    The authors developed the RandFRF method, which integrates the predictive outcomes of modified decision trees, identifies risk factors and ranks them based on their contribution to the risk of adverse outcomes. RandFRF enables the development of predictive models with high discriminative performance (AUC 0.908), comparable to models based on CatBoost and Stacking (AUC 0.904 and 0.908, respectively). In turn, the risk factors provide clinicians with information on the patient's risk group classification and the extent of their impact on the probability of IHM. The risk factors identified by RandFRF can serve not only as a rationale for the prediction results but also as a basis for developing more accurate models.
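
    For illustration only, the sketch below trains a plain random forest on synthetic data and ranks candidate predictors by importance (hypothetical data and features; RandFRF itself additionally categorizes predictors into risk factors and aggregates modified decision trees):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic cohort with a rare adverse outcome; all numbers are made up.
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 10))                      # 10 candidate predictors
logit = 1.5 * X[:, 0] - 1.0 * (X[:, 1] > 1.0) + 0.5 * X[:, 2]
y = rng.uniform(size=5000) < 1 / (1 + np.exp(-(logit - 3)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                            random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
ranking = np.argsort(rf.feature_importances_)[::-1]   # importance-based ranking
print(f"AUC = {auc:.3f}, top risk-factor candidates: {ranking[:3]}")
```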

  4. Varshavsky L.E.
    Control theory methods for creating market structures
    Computer Research and Modeling, 2014, v. 6, no. 5, pp. 839-859

    Control theory methods for creating market structures are discussed for two cases: when market participants pursue the aims 1) of maximal growth and 2) of maximum economic efficiency of their firms. For the first case, a method based on the principles of variable structure systems is developed. For the second case, a dynamic game approach is proposed, based on the computation of Nash–Cournot and Stackelberg strategies with the help of the Z-transform.
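
    As a toy illustration of a Nash–Cournot computation (not the paper's model), best-response iteration for a linear-demand duopoly converges to the equilibrium quantities; all parameter values are assumed:

```python
import numpy as np

# Two-firm Cournot duopoly: inverse demand p = a - b*(q1 + q2), marginal costs c1, c2.
# Iterating the best-response maps converges to the Nash-Cournot equilibrium.
a, b, c1, c2 = 100.0, 1.0, 10.0, 20.0
q1, q2 = 1.0, 1.0
for _ in range(100):
    q1 = max(0.0, (a - c1 - b * q2) / (2 * b))   # firm 1 best response
    q2 = max(0.0, (a - c2 - b * q1) / (2 * b))   # firm 2 best response
# analytic equilibrium: q1* = (a - 2*c1 + c2)/(3b), q2* = (a - 2*c2 + c1)/(3b)
print(q1, q2)
```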

    Views (last year): 4. Citations: 4 (RSCI).
  5. Zharkova V.V., Schelyaev A.E., Fisher J.V.
    Numerical simulation of sportsman's external flow
    Computer Research and Modeling, 2017, v. 9, no. 2, pp. 331-344

    A numerical simulation of the external flow around a moving athlete is presented. A unique method is developed for obtaining integral aerodynamic characteristics as functions of the flow regime (i.e. angle of attack, flow speed) and body position. Individual anthropometric characteristics and the moving boundaries of the athlete (or sports equipment) during the race are taken into consideration.

    The numerical simulation is carried out with the FlowVision CFD software. The software is based on the finite volume method, high-performance numerical methods and reliable mathematical models of physical processes. FlowVision uses a Cartesian computational grid, and grid generation is a completely automated process. Local grid adaptation is applied to resolve high pressure gradients and the complex shape of the object. The flow is simulated by solving the system of equations describing the motion of fluid and/or gas in the computational domain, including the mass, momentum and energy conservation equations, the equations of state, and the turbulence model equations. FlowVision permits flow simulation near moving bodies by transforming the computational domain according to the changes of the athlete's shape during motion. The aerodynamic characteristics of a ski jumper are studied in all phases: take-off in motion, in-run and flight. The investigation defined a simulation method that includes: an inverted statement of the athlete's external flow problem (the velocity of motion equals the air flow velocity, and the object is immobile); a technology for defining the changing boundary of the body; and multiple calculations based on the data of national team members. The research identifies the main factors affecting jumping performance: aerodynamic forces, rotating moments, etc. The developed method was tested with active athletes. Ski jumpers used this method during preparation for the Sochi 2014 Olympic Games. A comparison of the predicted characteristics and experimental data shows good agreement. The versatility of the method is underlined by flow simulations of a swimmer and a skater. The designed technology is applicable to various natural and technical objects.
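
    The integral aerodynamic characteristics mentioned above are conventionally reported as dimensionless coefficients; a minimal sketch of that standard nondimensionalization (placeholder numbers, not FlowVision results):

```python
# Standard nondimensionalization of integral aerodynamic loads; the force values
# here are hypothetical placeholders, not simulation output.
rho, v, A = 1.2, 25.0, 0.7          # air density [kg/m^3], flow speed [m/s], reference area [m^2]
drag, lift = 18.0, 35.0             # integral forces on the athlete [N] (assumed)
q = 0.5 * rho * v**2                # dynamic pressure
cd, cl = drag / (q * A), lift / (q * A)
print(f"Cd = {cd:.3f}, Cl = {cl:.3f}, L/D = {lift/drag:.2f}")
```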

    Views (last year): 29.
  6. Elaraby A.E., Nechaevskiy A.V.
    An effective segmentation approach for liver computed tomography scans using fuzzy exponential entropy
    Computer Research and Modeling, 2021, v. 13, no. 1, pp. 195-202

    Accurate segmentation of the liver plays an important role in contouring during diagnosis and treatment planning. Imaging technology analysis and processing are widely used in medical diagnostics and therapeutic applications. Liver segmentation refers to the process of automatic or semi-automatic detection of liver boundaries in an image. A major difficulty in liver image segmentation is high variability, as human anatomy itself shows major variation modes. In this paper, an approach to computed tomography (CT) liver segmentation is proposed that combines exponential entropy and fuzzy c-partition. The entropy concept has been utilized in various applications in the image computing domain. Entropy-based thresholding techniques have attracted considerable attention in the image analysis and processing literature in recent years and are among the most powerful techniques in image segmentation. In the proposed approach, the liver CT image is transformed into the fuzzy domain, and fuzzy entropies are defined for the image object and background. In the threshold selection procedure, the proposed approach considers not only the information of the image background and object, but also the interactions between them: the threshold is selected by finding a parameter combination of the membership function such that the total fuzzy exponential entropy is maximized. The Differential Evolution (DE) algorithm is used to optimize the exponential entropy measure and obtain the image thresholds. Experiments on different liver CT scans demonstrate the efficiency of the proposed approach. Based on the visual clarity of the segmented images for varied threshold values, it was observed that the visual quality of the segmented liver image is better at higher threshold levels.
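
    A minimal sketch of the optimization step (an assumed two-parameter linear membership function and one common form of exponential fuzzy entropy, maximized with SciPy's differential evolution; the paper's exact membership and entropy definitions may differ):

```python
import numpy as np
from scipy.optimize import differential_evolution

def fuzzy_exponential_entropy(params, hist, levels):
    # Two-parameter linear membership of gray levels in the "object" class (assumed form).
    a, b = sorted(params)
    if b - a < 1e-6:
        return -1e9                                   # penalty for degenerate membership
    mu = np.clip((levels - a) / (b - a), 0.0, 1.0)
    # one common form of exponential fuzzy entropy, weighted by the histogram
    h = hist * (mu * np.exp(1 - mu) + (1 - mu) * np.exp(mu) - 1)
    return h.sum()

def segment(img):
    # img: 8-bit CT slice as an integer array
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    levels = np.arange(256)
    res = differential_evolution(
        lambda p: -fuzzy_exponential_entropy(p, hist, levels),  # DE minimizes
        bounds=[(0, 255), (0, 255)], seed=0)
    a, b = sorted(res.x)
    threshold = 0.5 * (a + b)        # crossover point of the membership function
    return (img >= threshold).astype(np.uint8), threshold
```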

  7. Varshavsky L.E.
    Mathematical methods for stabilizing the structure of social systems under external disturbances
    Computer Research and Modeling, 2021, v. 13, no. 4, pp. 845-857

    The article considers a bilinear model of the influence of external disturbances on the stability of the structure of social systems. Approaches to third-party stabilization of the initial system consisting of two groups are investigated by reducing the initial system to a linear system with uncertain parameters and using results from the theory of linear dynamic games with a quadratic criterion. The influence of the coefficients of the proposed model of the social system and of the control parameters on the quality of stabilization is analyzed with the help of computer experiments. It is shown that the use of a minimax strategy by a third party in the form of feedback control leads to a relatively close convergence of the population of the second group (excited by external influences) to an acceptable level, even under unfavourable periodic dynamic perturbations.

    The influence of one of the key coefficients in the criterion $(\varepsilon)$ used to compensate for the effects of external disturbances (the latter are present in the linear model in the form of uncertainty) on the quality of system stabilization is investigated. Using Z-transform, it is shown that a decrease in the coefficient $\varepsilon$ should lead to an increase in the values of the sum of the squares of the control. The computer calculations carried out in the article also show that the improvement of the convergence of the system structure to the equilibrium level with a decrease in this coefficient is achieved due to sharp changes in control in the initial period, which may induce the transition of some members of the quiet group to the second, excited group.

    The article also examines the influence of the values of the model coefficients that characterize the level of social tension on the quality of management. Calculations show that an increase in the level of social tension (all other things being equal) leads to the need for a significant increase in the third party's stabilizing efforts, as well as the value of control at the transition period.

    The results of the statistical modeling carried out in the article show that the calculated feedback controls successfully compensate for random disturbances on the social system (both in the form of «white» noise, and of autocorrelated disturbances).
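
    A toy sketch of third-party stabilization by linear-quadratic feedback under white-noise disturbances is given below (assumed two-state linear dynamics and weights, not the paper's bilinear model or its minimax strategy):

```python
import numpy as np

# Hypothetical two-group linear system x[t+1] = A x[t] + B u[t] + w[t],
# stabilized by an LQ feedback u = -K x from the discrete Riccati iteration.
A = np.array([[0.95, 0.10],
              [0.05, 1.02]])        # group-interaction coefficients (assumed)
B = np.array([[0.0], [1.0]])        # third party acts on the "excited" group
Q, R = np.eye(2), np.array([[1.0]])

P = Q.copy()
for _ in range(500):                 # value iteration for the Riccati equation
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

rng = np.random.default_rng(1)
x = np.array([1.0, 5.0])             # initial excited population (relative units)
for _ in range(60):
    u = -K @ x                        # feedback control
    x = A @ x + B @ u + 0.05 * rng.normal(size=2)   # white-noise disturbance
print(x)                              # settles near the equilibrium despite noise
```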

  8. Salem N., Hudaib A., Al-Tarawneh K., Salem H., Tareef A., Salloum H., Mazzara M.
    A survey on the application of large language models in software engineering
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1715-1726

    Large Language Models (LLMs) are transforming software engineering by bridging the gap between natural language and programming languages. These models have revolutionized communication within development teams and the Software Development Life Cycle (SDLC) by enabling developers to interact with code using natural language, thereby improving workflow efficiency. This survey examines the impact of LLMs across various stages of the SDLC, including requirement gathering, system design, coding, debugging, testing, and documentation. LLMs have proven to be particularly useful in automating repetitive tasks such as code generation, refactoring, and bug detection, thus reducing manual effort and accelerating the development process. The integration of LLMs into the development process offers several advantages, including the automation of error correction, enhanced collaboration, and the ability to generate high-quality, functional code based on natural language input. Additionally, LLMs assist developers in understanding and implementing complex software requirements and design patterns. This paper also discusses the evolution of LLMs from simple code completion tools to sophisticated models capable of performing high-level software engineering tasks. However, despite their benefits, there are challenges associated with LLM adoption, such as issues related to model accuracy, interpretability, and potential biases. These limitations must be addressed to ensure the reliable deployment of LLMs in production environments. The paper concludes by identifying key areas for future research, including improving the adaptability of LLMs to specific software domains, enhancing their contextual understanding, and refining their capabilities to generate semantically accurate and efficient code. This survey provides valuable insights into the evolving role of LLMs in software engineering, offering a foundation for further exploration and practical implementation.

  9. Lobanov A.I., Mirov F.Kh.
    On the use of difference schemes for the transport equation with a drain in grid modeling
    Computer Research and Modeling, 2020, v. 12, no. 5, pp. 1149-1164

    Modern power transportation systems are complex engineering systems. They include both point facilities (power producers, consumers, transformer substations, etc.) and distributed elements (e.g., power lines). When mathematical models are created, such structures are represented as graphs with different types of nodes. To study the dynamic effects in such systems, it is necessary to solve a system of hyperbolic partial differential equations.

    The work uses an approach similar to one already applied to the modeling of similar problems, together with a new variant of the splitting method proposed by the authors. Unlike most known works, the splitting is not carried out according to physical processes (energy transport without dissipation, dissipative processes separately). Instead, the equations are split into transport equations with a drain and the exchange between Riemann invariants. This splitting makes it possible to construct hybrid schemes for the Riemann invariants with a high order of approximation and minimal dissipation error. An example of constructing such a hybrid difference scheme is described for a single-phase power line. The proposed difference scheme is based on an analysis of the properties of schemes in the space of undetermined coefficients.

    Examples of numerical solutions of the model problem using the proposed splitting and difference scheme are given. The results of the numerical calculations show that the difference scheme reproduces the arising regions of large gradients. It is shown that the difference schemes also allow detecting resonances in such systems.
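
    A minimal first-order upwind sketch for a scalar transport equation with a drain term is given below; it only illustrates the split transport/drain step, not the higher-order hybrid schemes for Riemann invariants constructed in the paper:

```python
import numpy as np

# Scalar transport with a drain: u_t + c u_x = -k u, solved by splitting the
# step into an upwind transport substep and an exact drain substep.
c, k = 1.0, 0.5
nx, L, T = 200, 10.0, 4.0
dx = L / nx
dt = 0.8 * dx / c                      # CFL condition
x = np.linspace(0.0, L, nx)
u = np.exp(-((x - 2.0) ** 2))          # initial pulse

t = 0.0
while t < T:
    u[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])   # first-order upwind transport
    u[0] = 0.0                                        # inflow boundary
    u *= np.exp(-k * dt)                              # exact integration of the drain
    t += dt
```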

  10. Melman A.S., Evsutin O.O.
    Efficient and error-free information hiding in the hybrid domain of digital images using metaheuristic optimization
    Computer Research and Modeling, 2023, v. 15, no. 1, pp. 197-210

    Data hiding in digital images is a promising direction in cybersecurity. Digital steganography methods provide imperceptible transmission of secret data over an open communication channel. The efficiency of information embedding depends on the imperceptibility, capacity, and robustness of the embedding. These quality criteria are mutually inverse, and improving one indicator usually leads to deterioration of the others. A balance between them can be achieved using metaheuristic optimization. Metaheuristics are a class of optimization algorithms that find an optimal or near-optimal solution for a variety of problems, including those that are difficult to formalize, by simulating various natural processes, for example, the evolution of species or the behavior of animals. In this study, we propose an approach to data hiding in the hybrid spatial-frequency domain of digital images based on metaheuristic optimization. Changing a block of image pixels according to some change matrix is considered as the embedding operation. The change matrix is selected adaptively for each block using metaheuristic optimization algorithms. We compare the performance of three metaheuristics, namely the genetic algorithm, particle swarm optimization, and differential evolution, in finding the best change matrix. Experimental results showed that the proposed approach provides high imperceptibility of embedding, high capacity, and error-free extraction of embedded information. At the same time, storage of the change matrices for each block is not required for subsequent data extraction. This improves the user experience and reduces the chance of an attacker discovering the steganographic attachment. Compared to the previous algorithm for embedding information into the coefficients of the discrete cosine transform using the QIM method [Evsutin, Melman, Meshcheryakov, 2021], the metaheuristics increased the imperceptibility indicator, estimated by the PSNR metric, and the capacity by 26.02% and 30.18%, respectively, for the genetic algorithm, by 26.01% and 19.39% for particle swarm optimization, and by 27.30% and 28.73% for differential evolution.
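
    A toy sketch of the general idea, a metaheuristic searching a per-block change matrix subject to an extraction condition while keeping distortion low, is shown below (hypothetical parity-of-mean embedding rule, not the authors' QIM/DCT hybrid-domain scheme):

```python
import numpy as np
from scipy.optimize import differential_evolution

def embed_bit(block, bit):
    # block: 8x8 uint8 pixel block; bit: 0 or 1 to hide (toy extraction rule:
    # the parity of the rounded block mean carries the bit).
    def cost(delta):
        changed = np.clip(block + np.rint(delta).reshape(8, 8), 0, 255)
        ok = int(round(changed.mean())) % 2 == bit        # extraction condition
        return np.sum((changed - block) ** 2) + (0.0 if ok else 1e6)
    res = differential_evolution(cost, bounds=[(-2, 2)] * 64, seed=0, maxiter=60)
    return np.clip(block + np.rint(res.x).reshape(8, 8), 0, 255).astype(np.uint8)

def psnr(a, b):
    # imperceptibility indicator for 8-bit images
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else np.inf
```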
