Search results for 'interpretability':
Articles found: 51
  1. Yakovlev A.A., Abakumov A.I., Kostyushko A.V., Markelova E.V.
    Cytokines as indicators of the state of the organism in infectious diseases. Experimental data analysis
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1409-1426

    When a person's disease is the result of a bacterial infection, various characteristics of the organism are used to observe the course of the disease. One such indicator is the dynamics of cytokine concentrations; cytokines are produced mainly by cells of the immune system. There are many types of these low-molecular-weight proteins in the human body and in many animal species. The study of cytokines is important for the interpretation of functional disorders of the body's immune system, assessment of disease severity, monitoring the effectiveness of therapy, and predicting the course and outcome of treatment. The cytokine response of the body thus indicates characteristics of the course of the disease. To study the regularities of such indication, experiments were conducted on laboratory mice. We analyze experimental data on the development of pneumonia and on the treatment of bacterial infection in mice with several drugs: the immunomodulatory preparations “Roncoleukin”, “Leukinferon” and “Tinrostim”. The data are presented as the concentrations of two types of cytokines in lung tissue and in the blood of the animals. Many-sided statistical and non-statistical analysis of the data allowed us to find common patterns of change in the “cytokine profile” of the body and to link them with the properties of the therapeutic preparations. The concentrations of the studied cytokines “Interleukin-10” (IL-10) and “Interferon Gamma” (IFN$\gamma$) in infected mice deviate from the normal level of intact animals, indicating the development of the disease. Changes in cytokine concentrations in the groups of treated mice are compared with those in a group of healthy (uninfected) mice and a group of infected untreated mice. The comparison is made for groups of individuals, since cytokine concentrations are individual and differ significantly between individuals; under these conditions, only groups of individuals can reveal the regularities of the disease process. These groups of mice were observed for two weeks. The dynamics of cytokine concentrations indicates characteristics of the disease course and the efficiency of the therapeutic drugs used. The effect of a drug on the organisms is monitored by the location of these groups of individuals in the space of cytokine concentrations. In this space, the Hausdorff distance between the sets of cytokine-concentration vectors of individuals is used, based on the Euclidean distance between the elements of these sets. It was found that the drugs “Roncoleukin” and “Leukinferon” have a generally similar effect on the course of the disease, which differs from that of “Tinrostim”.
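
    The Hausdorff distance between two groups of individuals is straightforward to compute from pairwise Euclidean distances. Below is a minimal sketch, assuming NumPy and toy two-cytokine data; the group values and names are illustrative and are not taken from the paper.

        # Minimal sketch: Hausdorff distance between two groups of individuals,
        # where each row is one individual's vector of cytokine concentrations.
        # Group data and names are illustrative, not from the paper.
        import numpy as np

        def hausdorff_distance(A, B):
            """Symmetric Hausdorff distance between point sets A and B,
            based on Euclidean distances between their elements."""
            d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
            d_ab = d.min(axis=1).max()   # farthest point of A from the set B
            d_ba = d.min(axis=0).max()   # farthest point of B from the set A
            return max(d_ab, d_ba)

        # toy example: two groups, two cytokines (e.g. IL-10 and IFN-gamma)
        treated = np.array([[1.2, 0.8], [1.0, 1.1], [0.9, 0.7]])
        control = np.array([[0.5, 0.4], [0.6, 0.5]])
        print(hausdorff_distance(treated, control))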

  2. Musaev A.A., Grigoriev D.A.
    Extracting knowledge from text messages: overview and state-of-the-art
    Computer Research and Modeling, 2021, v. 13, no. 6, pp. 1291-1315

    In general, solving the information explosion problem can be delegated to systems for automatic processing of digital data. These systems are intended for recognizing, sorting, meaningfully processing and presenting data in formats readable and interpretable by humans. The creation of intelligent knowledge extraction systems that handle unstructured data would be a natural solution in this area. At the same time, the evident progress in these tasks for structured data contrasts with the limited success of unstructured data processing, and, in particular, document processing. Currently, this research area is undergoing active development and investigation. The present paper is a systematic survey of both Russian and international publications that are dedicated to the leading trend in automatic text data processing: Text Mining (TM). We cover the main tasks and notions of TM, as well as its place in the current AI landscape. Furthermore, we analyze the complications that arise during natural language processing (NLP) of texts, which are weakly structured and often provide ambiguous linguistic information. We describe the stages of text data preparation, cleaning, and selecting features which, alongside the data obtained via morphological, syntactic, and semantic analysis, constitute the input for the TM process. This process can be represented as mapping a set of text documents to «knowledge». Using the case of stock trading, we demonstrate the formalization of the problem of making a trade decision based on a set of analytical recommendations. Examples of such mappings are methods of Information Retrieval (IR), text summarization, sentiment analysis, document classification and clustering, etc. The common point of all tasks and techniques of TM is the selection of word forms and their derivatives used to recognize content in NL symbol sequences. Considering IR as an example, we examine classic types of search, such as searching for word forms, phrases, patterns and concepts. Additionally, we consider the augmentation of patterns with syntactic and semantic information. Next, we provide a general description of all NLP instruments: morphological, syntactic, semantic and pragmatic analysis. Finally, we end the paper with a comparative analysis of modern TM tools which can be helpful for selecting a suitable TM platform based on the user’s needs and skills.
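
    One concrete instance of such a mapping is classic word-form search via TF-IDF retrieval. Below is a minimal sketch, assuming scikit-learn and a toy corpus; the documents, query and parameters are illustrative and are not taken from the survey.

        # Minimal sketch of classic word-form search via TF-IDF retrieval,
        # one example of mapping text documents to usable "knowledge".
        # The corpus and query are toy examples, not from the survey.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        docs = [
            "The company reported record quarterly earnings",
            "Analysts recommend holding the stock after the earnings call",
            "Central bank raises interest rates amid inflation concerns",
        ]
        query = ["earnings recommendation for the stock"]

        vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
        doc_matrix = vectorizer.fit_transform(docs)           # documents -> TF-IDF vectors
        query_vec = vectorizer.transform(query)               # query in the same vocabulary

        scores = cosine_similarity(query_vec, doc_matrix)[0]  # relevance of each document
        for i in scores.argsort()[::-1]:
            print(f"{scores[i]:.3f}  {docs[i]}")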

  3. Vorontsova D.V., Isaeva M.V., Menshikov I.A., Orlov K.Y., Bernadotte A.
    Frequency, time, and spatial electroencephalogram changes after COVID-19 during a simple speech task
    Computer Research and Modeling, 2023, v. 15, no. 3, pp. 691-701

    We found a predominance of α-rhythm patterns in the left hemisphere in healthy people compared to people with a history of COVID-19. Moreover, we observed a significant decrease in the left hemisphere contribution to the speech center area in people who have had COVID-19 when performing speech tasks.

    Our findings show that the signal in healthy subjects is more spatially localized and synchronized between hemispheres when performing tasks compared to people who recovered from COVID-19. We also observed a decrease in low frequencies in both hemispheres after COVID-19.

    EEG-patterns of COVID-19 are detectable in an unusual frequency domain. What is usually considered noise in electroencephalographic (EEG) data carries information that can be used to determine whether or not a person has had COVID-19. These patterns can be interpreted as signs of hemispheric desynchronization, premature brain ageing, and more significant brain strain when performing simple tasks compared to people who did not have COVID-19.

    In our work, we have shown the applicability of neural networks to detecting the long-term effects of COVID-19 in EEG data. Furthermore, in line with other studies, our data support the hypothesis that the long-term effects of COVID-19 are severe enough to be detected in the EEG data of an EEG-based BCI. The presented findings on the functional activity of the brain–computer interface make it possible to use machine learning methods on simple, non-invasive brain–computer interfaces to detect post-COVID syndrome and to advance neurorehabilitation.

  4. Shakhgeldyan K.I., Kuksin N.S., Domzhalov I.G., Pak R.L., Geltser B.I.
    Random forest of risk factors as a predictive tool for adverse events in clinical medicine
    Computer Research and Modeling, 2025, v. 17, no. 5, pp. 987-1004

    The aim of the study was to develop an ensemble machine learning method for constructing interpretable predictive models and to validate it using the example of predicting in-hospital mortality (IHM) in patients with ST-segment elevation myocardial infarction (STEMI).

    A retrospective cohort study was conducted using data from 5446 electronic medical records of STEMI patients who underwent percutaneous coronary intervention (PCI). Patients were divided into two groups: 335 (6.2%) patients who died during hospitalization and 5111 (93.8%) patients with a favourable in-hospital outcome. A pool of potential predictors was formed using statistical methods. Through multimetric categorization (minimizing p-values, maximizing the area under the ROC curve (AUC), and SHAP value analysis), decision trees, and multivariable logistic regression (MLR), predictors were transformed into risk factors for IHM. Predictive models for IHM were developed using MLR, Random Forest Risk Factors (RandFRF), Stochastic Gradient Boosting (XGboost), Random Forest (RF), Adaptive boosting, Gradient Boosting, Light Gradient-Boosting Machine, Categorical Boosting (CatBoost), Explainable Boosting Machine and Stacking methods.

    The authors developed the RandFRF method, which integrates the predictive outcomes of modified decision trees, identifies risk factors, and ranks them based on their contribution to the risk of adverse outcomes. RandFRF enables the development of predictive models with high discriminative performance (AUC 0.908), comparable to models based on CatBoost and Stacking (AUC 0.904 and 0.908, respectively). In turn, the risk factors provide clinicians with information on the patient’s risk group classification and the extent of their impact on the probability of IHM. The risk factors identified by RandFRF can serve not only as a rationale for the prediction results but also as a basis for developing more accurate models.
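
    To illustrate the kind of multimetric screening described here (univariate p-values, AUC of a random forest, ranking of predictors), a minimal sketch on synthetic data is given below; impurity-based importances stand in for the SHAP analysis, and all names and thresholds are assumptions rather than the paper's actual pipeline.

        # Minimal sketch of multimetric predictor screening in the spirit of the paper:
        # univariate p-values, AUC of a random forest, and a feature ranking.
        # Impurity-based importances stand in for SHAP values; the data are synthetic.
        import numpy as np
        from scipy.stats import mannwhitneyu
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 1000
        X = rng.normal(size=(n, 5))                                           # candidate predictors
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1).astype(int)    # outcome (e.g. IHM)

        # univariate screening: p-value of each predictor against the outcome
        for j in range(X.shape[1]):
            p = mannwhitneyu(X[y == 1, j], X[y == 0, j]).pvalue
            print(f"predictor {j}: p = {p:.3g}")

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])   # discriminative performance
        print("AUC:", round(auc, 3))
        print("feature ranking:", np.argsort(rf.feature_importances_)[::-1])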

  5. Matveev A.V.
    Modeling the kinetics of radiopharmaceuticals with iodine isotopes in nuclear medicine problems
    Computer Research and Modeling, 2020, v. 12, no. 4, pp. 883-905

    Radiopharmaceuticals with iodine radioisotopes are now widely used in imaging and non-imaging methods of nuclear medicine. When evaluating the results of radionuclide studies of the structural and functional state of organs and tissues, parallel modeling of the kinetics of radiopharmaceuticals in the body plays an important role. The difficulty of such modeling lies between two opposite extremes. On the one hand, excessive simplification of the anatomical and physiological characteristics of the organism when splitting it into compartments may result in the loss or distortion of information important for clinical diagnosis; on the other hand, an excessively detailed account of all possible interdependencies in the functioning of organs and systems will, on the contrary, produce a large amount of data that is useless for clinical interpretation, or make the mathematical model intractable. Our work develops a unified approach to the construction of mathematical models of the kinetics of radiopharmaceuticals with iodine isotopes in the human body during diagnostic and therapeutic procedures of nuclear medicine. Based on this approach, three- and four-compartment pharmacokinetic models were developed, and corresponding calculation programs were created in the C++ programming language for processing and evaluating the results of radionuclide diagnostics and therapy. Various methods for identifying model parameters based on quantitative data from radionuclide studies of the functional state of vital organs are proposed. The results of pharmacokinetic modeling for radionuclide diagnostics of the liver, kidney, and thyroid using iodine-containing radiopharmaceuticals are presented and analyzed. Using clinical and diagnostic data, individual pharmacokinetic parameters of the transport of different radiopharmaceuticals in the body (transport constants, half-life periods, maximum activity in the organ and the time of its achievement) were determined. It is shown that the pharmacokinetic characteristics of each patient are strictly individual and cannot be described by averaged kinetic parameters. Within the framework of the three pharmacokinetic models, “Activity–time” relationships were obtained and analyzed for different organs and tissues, including tissues in which the activity of a radiopharmaceutical is impossible or difficult to measure by clinical methods. The features and results of simulation and dosimetric planning of radioiodine therapy of the thyroid gland are also discussed. It is shown that the values of absorbed radiation doses are very sensitive to the kinetic parameters of the compartment model; therefore, special attention should be paid to obtaining accurate quantitative data from ultrasound and thyroid radiometry and to identifying the model parameters based on them. The work is based on the principles and methods of pharmacokinetics. For the numerical solution of the systems of differential equations of the pharmacokinetic models we used Runge–Kutta methods and the Rosenbrock method. The Hooke–Jeeves method was used to find the minimum of a function of several variables when identifying the model parameters.
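
    A minimal sketch of a generic three-compartment kinetic model of this kind, solved with an explicit Runge–Kutta method via SciPy, is given below; the compartments, rate constants and implementation are illustrative assumptions and do not reproduce the author's C++ programs.

        # Minimal sketch of a three-compartment kinetic model for an iodine-labelled
        # radiopharmaceutical (blood -> organ -> excretion), solved with an explicit
        # Runge-Kutta method. Compartments and rate constants are hypothetical.
        import numpy as np
        from scipy.integrate import solve_ivp

        LAMBDA_PHYS = np.log(2) / (8.02 * 24.0)   # physical decay of I-131, 1/h
        k12, k20 = 0.35, 0.05                     # transport constants, 1/h (hypothetical)

        def rhs(t, a):
            a_blood, a_organ, a_excreted = a
            return [
                -(k12 + LAMBDA_PHYS) * a_blood,                 # clearance from blood
                k12 * a_blood - (k20 + LAMBDA_PHYS) * a_organ,  # organ uptake and washout
                k20 * a_organ,                                  # cumulative excretion
            ]

        t = np.linspace(0.0, 72.0, 300)                         # hours after administration
        sol = solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.0, 0.0], t_eval=t, method="RK45")

        a_organ = sol.y[1]                                      # "activity-time" curve of the organ
        i_max = a_organ.argmax()
        print(f"maximum organ activity {a_organ[i_max]:.3f} (relative) at t = {t[i_max]:.1f} h")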

  6. Salem N., Hudaib A., Al-Tarawneh K., Salem H., Tareef A., Salloum H., Mazzara M.
    A survey on the application of large language models in software engineering
    Computer Research and Modeling, 2024, v. 16, no. 7, pp. 1715-1726

    Large Language Models (LLMs) are transforming software engineering by bridging the gap between natural language and programming languages. These models have revolutionized communication within development teams and the Software Development Life Cycle (SDLC) by enabling developers to interact with code using natural language, thereby improving workflow efficiency. This survey examines the impact of LLMs across various stages of the SDLC, including requirement gathering, system design, coding, debugging, testing, and documentation. LLMs have proven to be particularly useful in automating repetitive tasks such as code generation, refactoring, and bug detection, thus reducing manual effort and accelerating the development process. The integration of LLMs into the development process offers several advantages, including the automation of error correction, enhanced collaboration, and the ability to generate high-quality, functional code based on natural language input. Additionally, LLMs assist developers in understanding and implementing complex software requirements and design patterns. This paper also discusses the evolution of LLMs from simple code completion tools to sophisticated models capable of performing high-level software engineering tasks. However, despite their benefits, there are challenges associated with LLM adoption, such as issues related to model accuracy, interpretability, and potential biases. These limitations must be addressed to ensure the reliable deployment of LLMs in production environments. The paper concludes by identifying key areas for future research, including improving the adaptability of LLMs to specific software domains, enhancing their contextual understanding, and refining their capabilities to generate semantically accurate and efficient code. This survey provides valuable insights into the evolving role of LLMs in software engineering, offering a foundation for further exploration and practical implementation.

  7. Chen J., Lobanov A.V., Rogozin A.V.
    Nonsmooth Distributed Min-Max Optimization Using the Smoothing Technique
    Computer Research and Modeling, 2023, v. 15, no. 2, pp. 469-480

    Distributed saddle point problems (SPPs) have numerous applications in optimization, matrix games and machine learning. For example, the training of generative adversarial networks is represented as a min-max optimization problem, and the training of regularized linear models can be reformulated as an SPP as well. This paper studies distributed nonsmooth SPPs with Lipschitz-continuous objective functions. The objective function is represented as a sum of several components that are distributed between groups of computational nodes. The nodes, or agents, exchange information through some communication network that may be centralized or decentralized. A centralized network has a universal information aggregator (a server, or master node) that communicates directly with each of the agents and can therefore coordinate the optimization process. In a decentralized network, all the nodes are equal, the server node is not present, and each agent communicates only with its immediate neighbors.

    We assume that each of the nodes locally holds its objective and can compute its value at given points, i.e., it has access to a zero-order oracle. Zero-order information is used when the gradient of the function is costly or impossible to compute, or when the function is not differentiable. For example, in reinforcement learning one needs to generate a trajectory to evaluate the current policy; this policy-evaluation process can be interpreted as the computation of a function value. We propose an approach that uses a smoothing technique, i.e., applies a first-order method to a smoothed version of the initial function. It can be shown that the stochastic gradient of the smoothed function can be viewed as a random two-point gradient approximation of the initial function. Smoothing approaches have been studied for distributed zero-order minimization, and our paper generalizes the smoothing technique to SPPs.
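
    A minimal sketch of the smoothing idea is given below: a random two-point gradient approximation built from function values only, applied in a toy zero-order descent; the objective, step size and smoothing parameter are illustrative, not from the paper.

        # Minimal sketch of the smoothing technique: a random two-point gradient
        # approximation of f built from function values only, used in a toy
        # zero-order descent. Objective and step size are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)

        def two_point_grad(f, x, tau=1e-2):
            """Two-point gradient estimate of the smoothed function f_tau."""
            e = rng.normal(size=x.shape)
            e /= np.linalg.norm(e)                 # direction uniform on the unit sphere
            return x.size * (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e

        f = lambda x: np.abs(x).sum()              # nonsmooth toy objective
        x = np.ones(5)
        for _ in range(200):
            x -= 0.01 * two_point_grad(f, x)       # zero-order "gradient" step
        print("final objective:", f(x))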

  8. Ilyasov D.V., Molchanov A.G., Glagolev M.V., Suvorov G.G., Sirin A.A.
    Modelling of carbon dioxide net ecosystem exchange of hayfield on drained peat soil: land use scenario analysis
    Computer Research and Modeling, 2020, v. 12, no. 6, pp. 1427-1449

    The data of episodic field measurements of carbon dioxide balance components (soil respiration — Rsoil, ecosystem respiration — Reco, net ecosystem exchange — NEE) of a hayfield in use and an abandoned one are interpreted by modelling. The field measurements were carried out during five field campaigns in 2018 and 2019 on the drained part of the Dubna Peatland in Taldom District, Moscow Oblast, Russia. The territory lies within the humid continental climate zone. The peatland was drained for milled peat extraction. After extraction was stopped, the residual peat deposit (1–1.5 m) was ploughed and grassed (Poa pratensis L.) for hay production. The current ground water level (GWL) varies from 0.3–0.5 m below the surface during wet periods to up to 1.0 m during dry periods. The daily dynamics of CO2 fluxes were measured using the dynamic chamber method in 2018 (August) and 2019 (May, June, August) for the abandoned ditch spacing (with only sanitary mowing once in 5 years) and for the ditch spacing with annual mowing. NEE and Reco were measured on sites with original vegetation, and Rsoil after vegetation removal. To model the seasonal dynamics of NEE, the dependence of its components (Reco, Rsoil, and the gross ecosystem–atmosphere exchange of carbon dioxide — GEE) on soil and air temperature, GWL, photosynthetically active radiation, and underground and aboveground plant biomass was used. The parametrization of the models was carried out considering the stability of the coefficients estimated by the bootstrap method. R2 (α = 0.05) between simulated and measured Reco was 0.44 (p < 0.0003) on the abandoned and 0.59 (p < 0.04) on the in-use hayfield, and for GEE it was 0.57 (p < 0.0002) and 0.77 (p < 0.00001), respectively. Numerical experiments were carried out to assess the influence of different haymaking regimes on NEE. It was found that NEE for the season (May 15 – September 30) did not differ much between the hayfield without mowing (4.5±1.0 tC·ha–1·season–1) and the abandoned one (6.2±1.4). A single mowing during the season increases NEE to 6.5±0.9, and a double mowing to 7.5±1.4 tC·ha–1·season–1, which means an increase in carbon losses and CO2 emission into the atmosphere. Carbon loss on the hayfield under both the single and the double mowing scenario was comparable with that of the abandoned hayfield. The removed phytomass for single and double mowing was 0.8±0.1 and 1.4±0.1 tC·ha–1·season–1 (45% carbon content in dry phytomass), or 3.0 and 4.4 t·ha–1·season–1 of hay (17% moisture content). In comparison with the fallow, the removal of 0.8±0.1 tC·ha–1·season–1 of biomass at single and 1.4±0.1 tC·ha–1·season–1 at double mowing is accompanied by an increase in carbon loss due to CO2 emissions, i.e., a growth of NEE by 0.3±0.1 and 1.3±0.6 tC·ha–1·season–1, respectively. This corresponds to a growth of NEE of 0.4±0.2 tC·ha–1·season–1 per ton of withdrawn phytomass per hectare at single mowing, and 0.9±0.7 tC·ha–1·season–1 at double mowing. Therefore, single mowing is more justified in terms of carbon loss than double mowing. Extensive mowing does not increase CO2 emissions into the atmosphere and, in addition, allows part of the carbon loss to be “replaced” by agricultural production.
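
    A minimal sketch of the parametrization-with-bootstrap step is given below; the exponential dependence of respiration on temperature, the synthetic data and the SciPy-based fitting are assumptions for illustration and are not the models actually used in the study.

        # Minimal sketch of parametrizing a flux model and checking coefficient
        # stability with the bootstrap. The exponential dependence of respiration
        # on temperature and all data here are assumptions for illustration.
        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(1)
        T = rng.uniform(5, 25, size=60)                                    # soil temperature, deg C
        reco = 0.8 * np.exp(0.09 * T) * rng.lognormal(0, 0.15, size=60)   # synthetic Reco flux

        model = lambda T, a, b: a * np.exp(b * T)
        popt, _ = curve_fit(model, T, reco, p0=(1.0, 0.05))

        # bootstrap: refit on resampled data to see how stable the coefficients are
        boot = []
        for _ in range(500):
            idx = rng.integers(0, len(T), len(T))
            p, _ = curve_fit(model, T[idx], reco[idx], p0=popt)
            boot.append(p)
        boot = np.array(boot)
        print("a, b estimates:", popt.round(3))
        print("bootstrap std: ", boot.std(axis=0).round(3))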

  9. Moiseev N.A., Nazarova D.I., Semina N.S., Maksimov D.A.
    Changepoint detection on financial data using deep learning approach
    Computer Research and Modeling, 2024, v. 16, no. 2, pp. 555-575

    The purpose of this study is to develop a methodology for change-point detection in time series, including financial data. The theoretical basis of the study rests on research devoted to the analysis of structural changes in financial markets, descriptions of the proposed change-point detection algorithms, and the peculiarities of building classical and deep machine learning models for this type of problem. The development of such tools is of interest to investors and other stakeholders, providing them with additional approaches to the effective analysis of financial markets and the interpretation of available data.

    To address the research objective, a neural network was trained. In the course of the study, several ways of forming the training sample were considered, differing in the nature of their statistical parameters. To improve the quality of training and obtain more accurate results, a feature-generation methodology was developed to form the features that serve as input data for the neural network. These features, in turn, were derived from an analysis of the means (mathematical expectations) and standard deviations of the time series over specific intervals. The potential for combining these features to achieve more stable results was also investigated.
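
    A minimal sketch of such interval-based features (rolling means and standard deviations that could feed the neural network) is given below; the window sizes, the synthetic series and the pandas implementation are illustrative assumptions, not the authors' methodology.

        # Minimal sketch of interval-based features: rolling means and standard
        # deviations of the series, which could serve as neural-network inputs.
        # Window sizes and the synthetic series are illustrative.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(2)
        # synthetic series with a change point in the middle (shift in mean and variance)
        x = np.concatenate([rng.normal(0, 1, 500), rng.normal(1.5, 2, 500)])
        s = pd.Series(x)

        def make_features(s, windows=(10, 30, 60)):
            """Rolling means and standard deviations over several intervals."""
            feats = {}
            for w in windows:
                feats[f"mean_{w}"] = s.rolling(w).mean()
                feats[f"std_{w}"] = s.rolling(w).std()
            return pd.DataFrame(feats).dropna()

        X = make_features(s)
        print(X.tail())   # each row is a feature vector for one time step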

    The results of model experiments were analyzed to compare the effectiveness of the proposed model with other existing changepoint detection algorithms that have gained widespread usage in practical applications. A specially generated dataset, developed using proprietary methods, was utilized as both training and testing data. Furthermore, the model, trained on various features, was tested on daily data from the S&P 500 index to assess its effectiveness in a real financial context.

    As the principles of the model’s operation are described, possibilities for its further improvement are considered, including the modernization of the proposed model’s structure, optimization of training data generation, and feature formation. Additionally, the authors are tasked with advancing existing concepts for real-time changepoint detection.

  10. The article discusses how the research goals influence the structure of a multivariate regression model (in particular, the procedure for reducing the dimension of the model). It is shown how bringing the specification of the multiple regression model in line with the research objectives affects the choice of modeling methods. Two schemes for constructing a model are compared: the first does not take into account the typology of the primary predictors and the nature of their influence on the outcome characteristics, while the second includes a stage of preliminary division of the initial predictors into groups in accordance with the objectives of the study. Using the example of analyzing the causes of burnout of creative workers, the importance of the stage of qualitative analysis and systematization of the a priori selected factors is shown; this stage is implemented not by computational means but by drawing on the knowledge and experience of specialists in the subject area. The presented example of determining the specification of the regression model combines formalized mathematical and statistical procedures with the preceding stage of classifying the primary factors. The presence of this stage makes it possible to explain the scheme of managing (corrective) actions (softening the leadership style and increasing approval lead to a decrease in the manifestations of anxiety and stress, which, in turn, reduces the severity of the emotional exhaustion of the team members). Preclassification also makes it possible to avoid combining controlled and uncontrolled, regulatory and controlled feature factors in one principal component, which could worsen the interpretability of the synthesized predictors. Using a specific problem as an example, it is shown that the selection of regressor factors is a process that requires an individual solution. In the case under consideration, the following were used in sequence: systematization of features, correlation analysis, principal component analysis, and regression analysis. The first three methods made it possible to significantly reduce the dimension of the problem without compromising the goal for which the task was posed: significant measures of controlling influence on the team were identified, allowing the degree of emotional burnout of its members to be reduced.
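
    A minimal sketch of the correlation analysis, principal component analysis and regression sequence mentioned above is given below; the synthetic data, thresholds and scikit-learn calls are illustrative assumptions, not the study's actual procedure.

        # Minimal sketch of the sequence the abstract describes: correlation
        # screening, principal component analysis, then regression.
        # Data, thresholds and factor names are illustrative, not from the study.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(3)
        n, p = 200, 12
        X = rng.normal(size=(n, p))                                    # a priori selected primary factors
        y = X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n)    # outcome (e.g. exhaustion score)

        # 1) correlation screening: drop factors weakly related to the outcome
        corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
        keep = np.abs(corr) > 0.1
        X_kept = X[:, keep]

        # 2) principal component analysis within the retained group of factors
        Z = StandardScaler().fit_transform(X_kept)
        components = PCA(n_components=0.9).fit_transform(Z)            # keep 90% of variance

        # 3) regression of the outcome on the synthesized predictors
        reg = LinearRegression().fit(components, y)
        print("retained factors:", keep.sum(), "| components:", components.shape[1],
              "| R^2:", round(reg.score(components, y), 3))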
