
Application of deep learning to predict the low serum albumin in new hemodialysis patients

Abstract

Background

Serum albumin level is a crucial nutritional indicator for patients on dialysis. Approximately one-third of patients on hemodialysis (HD) have protein malnutrition. Therefore, the serum albumin level of patients on HD is strongly correlated with mortality.

Methods

In this study, the data sets were obtained from the longitudinal electronic health records of the largest HD center in Taiwan, covering July 2011 to December 2015, and included 1567 new patients on HD who met the inclusion criteria. Multivariate logistic regression was performed to evaluate the association of clinical factors with low serum albumin, and the grasshopper optimization algorithm (GOA) was used for feature selection. The quantile g-computation method was used to calculate the weight ratio of each factor. Machine learning and deep learning (DL) methods were used to predict low serum albumin. The area under the curve (AUC) and accuracy were calculated to determine model performance.

Results

Age, gender, hypertension, hemoglobin, iron, ferritin, sodium, potassium, calcium, creatinine, alkaline phosphatase, and triglyceride levels were significantly associated with low serum albumin. The AUC and accuracy of the GOA quantile g-computation weight model combined with the Bi-LSTM method were 98% and 95%, respectively.

Conclusion

The GOA method was able to rapidly identify the optimal combination of factors associated with serum albumin in patients on HD, and the quantile g-computation with DL methods could determine the most effective GOA quantile g-computation weight prediction model. The serum albumin status of patients on HD can be predicted by the proposed model, which can accordingly provide patients with better prognostic care and treatment.

Introduction

The prevalence of end-stage renal disease (ESRD) has been continually increasing in various countries. According to a 2020 US Renal Data System report, Taiwan ranks among the top five countries globally in terms of the incidence rate of ESRD per million population. ESRD is a condition in which a person’s renal function declines to < 15% of normal renal function [1]. Patients with ESRD experience the symptoms of uremia, including loss of appetite, nausea, vomiting, itchy skin, facial and limb edema, and foul breath [2, 3]. Therefore, dialysis is required to alleviate symptoms and improve the quality of life of patients with ESRD [4]. Hemodialysis (HD) can effectively eliminate toxins and excess water from the body. Patients with ESRD are required to undergo HD in a hospital two to three times per week throughout their life. In addition, receiving HD adversely affects patients’ quality of life and requires them to maintain diet control in terms of potassium, phosphorus, salt, water, and protein intake [5, 6]. Although HD can prolong patients’ lives, it may cause other complications, such as hypotension, hypertension, nausea, and vomiting, which may affect their physiological function and quality of life [7, 8]. Therefore, appropriate care and diet control are crucial for patients on HD [9]. Malnutrition may lead to increased mortality in patients on HD, and serum albumin level is a vital nutritional indicator for these patients [5, 10, 11]. The nutritional status of patients on HD is closely related to their clinical parameters, most notably serum albumin, which may affect their risk of mortality [12]. To effectively prolong the survival of patients on HD, their clinical parameters should be maintained at normal levels.

Many related risk factors affect patients’ disease status, and appropriate medical care based on all possible risk factors cannot currently be provided. Therefore, identifying the most crucial risk factors for diseases from numerous biomarkers is essential. Most previous studies on this topic have recommended consultations with relevant disease specialists to identify risk factors for diseases, with research and analysis then conducted by specialists [13]. Machine learning (ML) methods are now widely used for disease diagnosis and prognosis, including artificial neural networks (ANNs) [14], particle swarm optimization (PSO) [15], biogeography-based optimization [16, 17], and other hybrid techniques [18]. Previously, traditional statistical methods were used to compare data. ML and deep learning (DL) have the advantages of high accuracy, reproducibility, and objectivity. One major limitation of conventional ML techniques is the sometimes complex processing (feature engineering) required to extract the requisite discriminative features [19]. Therefore, significant domain knowledge and data-processing expertise are required to train non-deep-learning models. Deep learning, by contrast, is adept at learning abstract features directly from raw data: different layers of the network automatically learn abstract representations of the data. A single well-designed and well-trained network can yield state-of-the-art results across many applications without the need for significant domain knowledge [20]. Deep learning is thus an extremely powerful tool for learning complex, cognitive problems. However, it is not a comprehensive tool for all healthcare analytics applications. Several commentaries on deep learning for clinical applications have noted that data issues such as low volume, high sparsity, and poor quality can limit the efficacy of deep learning methods. Conventional ML tools can achieve comparable, if not better, performance in this context despite the complex nature of the data. Although deep learning can be applied to many of these fairly standard problems, conventional ML methods may provide simpler, cheaper, and more useful approaches to data modeling; thus, their use for medical diagnosis and prognosis can be beneficial [18]. Traditional regression analysis may be inadequate for dealing with large and complex clinical data [21]. Studies have therefore combined traditional statistics with ML and optimization algorithms to propose effective nursing strategies for patients on HD.

A metaheuristic optimization algorithm is commonly used to solve global optimization problems [22]. Such algorithms search by simulating nature and human intelligence to achieve optimal solutions. Heuristic optimization algorithms were first proposed in the 1960s and are mainly divided into four categories: evolution, swarm intelligence, human intelligence, and physics and chemistry. Nature-inspired metaheuristic algorithms based on swarm intelligence are the most commonly employed [23], including the PSO, grey wolf optimization, and whale optimization algorithms. Many nature-inspired metaheuristic algorithms have been developed and used in combination with other methods to solve complex problems in various fields and obtain the most favorable solution.

The grasshopper optimization algorithm (GOA) is a novel metaheuristic algorithm used for global optimization [24]. The GOA simulates the behavior of locust swarms and applies it to challenging problems in structural optimization. Exploration and exploitation are the two main stages of nature-inspired algorithms. The goal of the GOA is to improve the convergence speed of a search target and avoid local optima. A deep neural network is a DL method in machine learning [25]. Through imitation of the biological nervous system, models with different architectures are established for multiple operations and training to develop the optimal and most effective prediction model [26, 27].

Studies have reported that the serum albumin level in patients on HD is highly correlated with mortality and is a crucial factor for predicting mortality [28, 29]. This study used the GOA to determine the most favorable combination of risk factors for predicting low serum albumin levels. Because interference factors may affect the data, we used the quantile g-computation method for weight adjustment. Finally, we used a DL method to identify the most effective prediction model, which was then used to predict the serum albumin status of new HD patients. The findings of this study can help develop comprehensive prognostic care and treatment strategies for improving the quality of life and survival of new HD patients.

Methods

Data sets

This study used the data sets that were obtained from the longitudinal electronic health records of the largest HD center in Taiwan. A total of 2298 patients who received HD for more than 3 months and continued receiving HD three times a week from July 2011 to December 2015 were selected. We excluded the patients whose age was unknown, those aged < 18 years, those with a time interval of > 4 months between the end of dialysis and the last blood measurement, and those with incomplete data on baseline characteristics and laboratory measurements. Finally, we included 1567 patients who met the inclusion criteria in the analysis. All data were retrospectively collected using an approved data protocol (201800595B0), and the requirement for patients’ informed consent was waived. This study was conducted in accordance with the Declaration of Helsinki. Figure 1 presents the flowchart for the data processing.

Fig. 1 Data preprocessing workflow

Serum albumin level is strongly associated with mortality. This study identified the risk factors for a low serum albumin level and determined whether patients had a low serum albumin level before death in order to predict mortality. Serum albumin levels were recorded monthly; for each patient, we calculated the mean of the levels measured in the three months before the end of the study or, for patients who died, in the three months before death. The threshold used to classify serum albumin was 3.5 g/dL, based on the lower limit of the normal range at Chang Gung Memorial Hospital in Taiwan. The patients were categorized into two groups: those with a mean albumin level ≥ 3.5 g/dL and those with a mean albumin level < 3.5 g/dL. In addition, we collected data on demographics; comorbidities; causes of mortality; and mean albumin level–related clinical data, namely age, gender, diabetes, hypertension, heart failure, cancer, and mortality status. Baseline laboratory parameters included hemoglobin, serum albumin, iron, ferritin, sodium, phosphate, blood urea nitrogen, creatinine, alkaline phosphatase, intact parathyroid hormone, cholesterol, triglyceride, and fasting glucose levels.

Figure 2 illustrates the analytical workflow for predicting low serum albumin levels in patients on HD. In the first step, data were extracted from the longitudinal electronic health records of the largest HD center in Taiwan. We collected data on diagnoses, complications, and laboratory measurements and subsequently cleaned, filtered, and merged the data. In the second step, we used the GOA for feature selection to determine the most favorable combination of risk factors for predicting low serum albumin levels. In the third step, we adjusted the weights of the data. The quantile g-computation method was used to examine the most favorable factor combinations selected using the GOA; this method enabled the ranking of the importance of risk factors for low serum albumin levels and the calculation of the positive and negative weight of each risk factor. The weights were used to adjust blood levels such that they significantly differed from each other. In the fourth step, we established the prediction data. We used the synthetic minority oversampling technique (SMOTE), which is based on the concept of the K-nearest neighbor (KNN) algorithm, to address data imbalance. The data set was split into training and testing sets at a ratio of 7:3; these data were then used to establish prediction models. Seven methods, namely KNN, support vector machine (SVM), random forest (RF), gradient boosting decision tree (GBDT), eXtreme gradient boosting (XGBoost), deep neural network (DNN), and bidirectional long short-term memory (Bi-LSTM), were used to establish three prediction models. In the fifth step, we evaluated the prediction models: we plotted the receiver operating characteristic (ROC) curve of each model and calculated the accuracy, prevalence, sensitivity, specificity, and area under the curve (AUC) to determine and compare the quality of the prediction models. In the sixth step, we evaluated the correlations between the clinical factors, drew a Pearson correlation diagram, and used a heatmap to visualize positive and negative correlations between blood parameters.

Fig. 2 Analysis flowchart

Grasshopper optimization algorithm (GOA)

The GOA, which was proposed by Saremi et al. in 2017, simulates the foraging behavior of grasshoppers [30]. Because of its high compatibility and ability to evaluate complex traits, the GOA has been used for the selection of multiple factors [31, 32]. The GOA can accelerate the integration of complex trait interactions among multiple factors. Moreover, the GOA can be used to solve various optimization problems, including engineering, computer, and feature selection problems [30]. The GOA is significantly superior to classical algorithms such as the PSO algorithm, the differential evolution (DE) algorithm, and the genetic algorithm (GA), and it can be used to manage different data sets [32]. The GOA can yield more favorable results and shorten calculation time in terms of fitness criteria and average classification accuracy. In addition, the GOA can be combined with other methods to develop hybrid GOA variants [33], which improve the accuracy and performance of the original algorithm and can be used in various fields. Therefore, we used a combination of the GOA and the bidirectional long short-term memory (Bi-LSTM) method to improve model performance. In this study, we established an optimal multifactor correlation model by using GOA-based feature selection to determine the relationship between albumin level and clinical factors in patients on HD and to identify the risk factors for low serum albumin levels for the prediction of mortality risk in patients on HD.

The grasshopper is a herbivorous insect that usually appears alone in nature. However, millions of grasshoppers gathered in a cluster can act as pests. They can damage crops and are thus a concern in the agricultural industry. The lifecycle of a grasshopper consists of three stages: egg, nymph, and adult. Grasshoppers can be found in swarms during the life stages of nymph or adult. Slow movement and small steps are the main characteristics of grasshopper swarms in the larval phase. By contrast, sudden and long-distance movements are characteristic of adult groups. Food source seeking is a crucial feature of grasshopper swarms. The GOA is inspired by nature. Exploration and exploitation are the two main stages of nature-inspired algorithms. The algorithm aims to increase the convergence speed of searching for targets and avoid local optima. Search agents tend to move locally in the search space during the exploitation process but are encouraged to move suddenly during the exploration process. Grasshoppers perform these two processes and naturally find their target (food source). The flight path of a group of grasshoppers is affected by three factors: social interaction (\(S_{i}\)), gravity force (\(G_{i}\)), and wind advection force (\(A_{i}\)).

GOA-based feature selection was used to accelerate convergence and identify the risk factors associated with low serum albumin levels in patients on HD. Equation (1) presents a simulation of the swarming behavior of grasshoppers.

$$X_{i} = r_{1} S_{i} + r_{2} G_{i} + r_{3} A_{i}$$
(1)

where \(X_{i}\) defines the position of the i-th grasshopper, \(S_{i}\) is the social interaction in Eq. (2), \(G_{i}\) is the gravity force on the i-th grasshopper in Eq. (4), and \(A_{i}\) is the wind advection in Eq. (5). To ensure random behavior, \(r_{1}\), \(r_{2}\), and \(r_{3}\) are considered random numbers in the range [0, 1].

$$S_{i} = \mathop \sum \limits_{j = 1, j \ne i}^{N} s\left( {d_{ij} } \right)\widehat{{d_{ij} }}$$
(2)

where \(d_{ij}\) is the distance between the i-th and j-th grasshopper, calculated as \(d_{ij} = \left| {X_{i} - X_{j} } \right|\), and \(\widehat{{d_{ij} }} = \frac{{X_{i} - X_{j} }}{{d_{ij} }}\) is a unit vector from the i-th grasshopper to the j-th grasshopper. \(s\) is the function defining the strength of the social force, calculated in Eq. (3) as follows.

$$s\left( r \right) = fe^{{\frac{ - r}{l}}} - e^{ - r}$$
(3)

where f indicates the intensity of attraction, and l is the attraction length scale.

$$G_{i} = - g \times \widehat{{e_{g} }}$$
(4)

where g is the gravitational constant and \(\widehat{{e_{g} }}\) is a unity vector toward the center of the earth.

$$A_{i} = u \times \widehat{{e_{w} }}$$
(5)

where u is a constant drift and \(\widehat{{e_{w} }}\) is a unity vector in the direction of wind.

Nymph grasshoppers have no wings; thus, their movements are highly correlated with wind direction.

Equation (6) is used to determine the current position of the i-th grasshopper, the position of all other grasshoppers, and the position of the target (food source).

$$X_{i}^{d} = c\left( {\mathop \sum \limits_{{j = 1,{ }j \ne i}}^{N} c\frac{{ub_{d} - lb_{d} }}{2}s\left( {\left| {x_{j}^{d} - x_{i}^{d} } \right|} \right)\frac{{x_{j} - x_{i} }}{{d_{ij} }}} \right) + \widehat{{T_{d} }}$$
(6)

where \(ub_{d}\) is the upper bound in the d-th dimension, \(lb_{d}\) is the lower bound in the d-th dimension, \(s\) is the social force function defined in Eq. (3), \(\widehat{{T_{d} }}\) is the value of the d-th dimension in the target (the most favorable solution obtained thus far), and c is a decreasing coefficient used to shrink the comfort, repulsion, and attraction zones. The S component is similar to that in Eq. (1); however, the gravity component G is not considered, and the wind direction A is assumed to be toward the target \(\widehat{{T_{d} }}\).

In Eq. (6), the adaptive parameter c is used twice to simulate the deceleration of the locust that approaches the food source and that eventually consumes it. With an increase in the number of iterations, the outer c is used to reduce the search range of the target grasshopper, whereas the inner c is used to reduce the effect of the attraction and repulsion between grasshoppers in proportion to the number of iterations. To balance exploration and exploitation, the parameter c needs to be reduced in proportion to the number of iterations.

$${\text{c}} = {\text{c}}_{max} - {\text{l}}\frac{{c_{max} - c_{min} }}{L}$$
(7)

where \({\text{c}}_{max}\) is the maximum value of parameter c, \(c_{min}\) is the minimum value of parameter c, l is the current iteration number, and L is the maximum number of iterations.
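The updates above can be sketched in a few lines of Python. This is an illustrative toy, not the authors' implementation: the constants f = 0.5 and l = 1.5 for the social force, the c bounds, and the swarm settings are assumptions for demonstration, and the update drops gravity and points the wind toward the target, as Eq. (6) does.

```python
import numpy as np

def s(r, f=0.5, l=1.5):
    # Social force of Eq. (3): attraction (first term) minus repulsion (second).
    return f * np.exp(-r / l) - np.exp(-r)

def c_schedule(t, T, c_max=1.0, c_min=1e-5):
    # Decreasing coefficient c of Eq. (7) at iteration t out of T.
    return c_max - t * (c_max - c_min) / T

def goa_step(X, target, t, T, lb, ub):
    # One position update per Eq. (6) for a swarm X of shape (N, D);
    # gravity is dropped and wind is assumed to blow toward the target.
    N, D = X.shape
    c = c_schedule(t, T)
    X_new = np.empty_like(X)
    for i in range(N):
        social = np.zeros(D)
        for j in range(N):
            if j == i:
                continue
            d = np.linalg.norm(X[j] - X[i]) + 1e-12
            social += c * (ub - lb) / 2.0 * s(d) * (X[j] - X[i]) / d
        X_new[i] = np.clip(c * social + target, lb, ub)
    return X_new
```

As c shrinks toward \(c_{min}\), the swarm's random exploration fades and the agents contract onto the best solution found so far, which is the exploration-to-exploitation transition described in the text.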

Quantile g-computation

Quantile g-computation is a new method used to estimate the combined effects of mixtures [34]. It was proposed by Keil et al. in 2020 [35]. Quantile g-computation is based on parametric, generalized linear models. This method combines the simplicity of weighted quantile sum (WQS) regression with the flexibility of g-computation to estimate causal effects. Its advantages are that it is computationally efficient and can estimate positive and negative weights. Quantile g-computation does not require the assumption of direction homogeneity. This method redefines the positive and negative weights when directional homogeneity does not hold. The basic model of quantile g-computation is a joint marginal structural model given by the following formula.

$$E\left( {Y^{{X_{q} }} {|}Z,{\uppsi },\eta } \right) = g\left( {{\uppsi }_{0} + {\uppsi }_{1} S_{q} + \eta Z} \right)$$
(8)

where Y denotes the outcome, X denotes the exposures, and Z denotes other possible covariates (e.g., potential confounders). g() is the link function in a generalized linear model (e.g., the inverse logit function of the probability of Y = 1 in a logistic model), \({\uppsi }_{0}\) is the model intercept, \(\eta\) is the vector of model coefficients for the covariates, and \(S_{q}\) is an index representing the joint exposure value.

Quantile g-computation (by default) converts all exposures X to Xq, which recodes each exposure X as discrete scores such as 0, 1, 2, etc. By default, each exposure is cut into four quantile categories; thus, Xq = 0 means that X is below the 25th percentile observed for that exposure. The index \(S_{q}\) sets all exposures to the same value (by default, the discrete values 0, 1, 2, and 3). Thus, the parameter \({\uppsi }_{1}\) quantifies the expected change in the outcome for a simultaneous one-quantile increase in all exposures, possibly adjusted for Z.

Quantile g-computation allows the estimation of both \({\uppsi }_{1}\) and the weights when the directional homogeneity assumption holds; when it does not hold, it still allows valid inferences regarding the effect of the entire exposure mixture as well as individual contributions to that mixture. First, quantile g-computation transforms each exposure Xj into the discretized \(X_{j}^{q}\) through quartiles. Next, a linear model is fitted (other confounders Z are omitted for notational simplicity, but they can also be included):

$$Y_{i} = \beta_{0} + \mathop \sum \limits_{j = 1}^{d} \beta_{j} X_{ji}^{q} + \varepsilon_{i}$$
(9)

Third, under the assumption of directional homogeneity, ψ is given as \(\mathop \sum \limits_{j = 1}^{d} \beta_{j}\), where \(\beta_{j}\) is the effect size of exposure j, and the weight of each exposure k is given by Eq. (10). The weights are defined such that they sum to 1.0.

$$W_{k} = \beta_{k} /\mathop \sum \limits_{j = 1}^{d} \beta_{j}$$
(10)

When directional homogeneity does not hold, quantile g-computation redefines the weights as negative or positive, interpreted as the proportion of the negative or positive partial effect due to a particular exposure; the positive weights and the negative weights each sum to 1.0.
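The mechanics of Eqs. (9)–(10) can be sketched as follows. This is a minimal illustration, not the `qgcomp` package the method comes from: it assumes a continuous outcome fitted by ordinary least squares, a simulated toy data set, and the default four quantile categories.

```python
import numpy as np

def quantize(X, q=4):
    # Recode each exposure column as quantile scores 0..q-1 (the X^q of the text).
    Xq = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        cuts = np.quantile(X[:, j], np.linspace(0, 1, q + 1)[1:-1])
        Xq[:, j] = np.digitize(X[:, j], cuts)
    return Xq

def qgcomp(X, y, q=4):
    # Fit Y = psi0 + sum_j beta_j * X_j^q (Eq. (9)), then report
    # psi1 = sum(beta) and each beta's share of its sign group as a weight.
    Xq = quantize(X, q)
    A = np.column_stack([np.ones(len(y)), Xq])
    beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]
    psi1 = beta.sum()
    pos_sum = beta[beta > 0].sum()
    neg_sum = beta[beta < 0].sum()
    w_pos = {j: b / pos_sum for j, b in enumerate(beta) if b > 0}
    w_neg = {j: b / neg_sum for j, b in enumerate(beta) if b < 0}
    return psi1, w_pos, w_neg
```

By construction the positive weights sum to 1.0 and the negative weights sum to 1.0, mirroring the definition used when directional homogeneity does not hold.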

Synthetic minority over-sampling technique (SMOTE)

SMOTE is a synthetic data sampling algorithm proposed by Chawla et al. in 2002 [36]. SMOTE addresses class imbalance by combining oversampling of the minority class with undersampling of the majority class to synthesize data. Class imbalance is a common problem in classifier training and is often encountered in the medical field; SMOTE increases the number of predicted event samples to make the data easier to train on. The following steps are involved in SMOTE: (1) Find the k nearest neighbors of a minority-class sample \(X_{i}\). (2) Randomly select one of the k neighbors, called \(X_{j}\), to generate a new sample. (3) Calculate the difference \(diff = X_{j} - X_{i}\). (4) Generate a random number \(\eta\) in [0, 1]. (5) Generate a new sample point \(X_{i}^{{\left( {new} \right)}} = X_{i} + \eta \cdot diff\). The data set was split into training and testing sets at a ratio of 7:3; thus, the training set had 1097 patients and the testing set had 470 patients. SMOTE was applied to the training set only, increasing it to 1715 patients.
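Steps (1)–(5) above can be sketched directly in Python. This is an illustrative toy rather than the implementation used in the study (which would typically be a library such as imbalanced-learn); the neighbor count k = 5 and the random seed are assumptions.

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    # Generate n_new synthetic minority samples by interpolating each seed
    # point with one of its k nearest minority neighbours (steps 1-5 above).
    if rng is None:
        rng = np.random.default_rng(0)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]      # step 1: k-NN, skipping self
        j = rng.choice(neighbours)               # step 2: random neighbour
        diff = X_min[j] - X_min[i]               # step 3: difference vector
        eta = rng.random()                       # step 4: eta in [0, 1)
        new.append(X_min[i] + eta * diff)        # step 5: interpolated point
    return np.array(new)
```

Because each synthetic point lies on the segment between two real minority samples, SMOTE never generates points outside the minority class's bounding box, which is why it tends to densify rather than distort the minority region.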

K-nearest neighbor (KNN)

The KNN algorithm was described by Peterson in 2009 [37]. The KNN algorithm is among the most fundamental and simple classification methods and should be one of the first choices for a classification study when little or no prior knowledge is available on the distribution of the data. KNN classification was developed to perform discriminant analysis when reliable parametric estimates of probability densities are unknown or difficult to determine. The traditional KNN method searches the entire set of training samples to classify an input test sample; thus, memory requirements and massive computation are the main challenges in searching for nearest neighbors.
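A minimal sketch of the classifier described above, assuming Euclidean distance and k = 3 (both are illustrative choices, not the study's configuration):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    # Classify x by majority vote among its k nearest training samples
    # (Euclidean distance); ties resolve to the smallest label.
    d = np.linalg.norm(X_train - x, axis=1)
    votes = y_train[np.argsort(d)[:k]]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```

Note that the full distance computation against every training sample is exactly the exhaustive search the text identifies as the method's main cost.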

Support vector machine (SVM)

The SVM was proposed by Vapnik [38]. The algorithm builds a hyperplane that separates positive and negative samples with as large a margin as possible. In practice, however, samples are often not linearly separable, and such a hyperplane does not exist, which can lead to poor performance. Accordingly, the original SVM algorithm is extended to nonlinear classification through the use of kernel functions.
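The max-margin idea can be illustrated with primal subgradient descent on the regularized hinge loss, a common linear-SVM training scheme. This is a hedged sketch, not the study's (kernelized) SVM; the regularization strength, learning rate, and epoch count are toy assumptions.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    # Primal subgradient descent on the regularised hinge loss
    #   lam/2 * ||w||^2 + mean(max(0, 1 - y * (w.x + b))),  y in {-1, +1}.
    n, m = X.shape
    w, b = np.zeros(m), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                     # samples inside the margin
        gw = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        gb = -y[viol].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b
```

Only margin violators contribute to the gradient, which is the algorithmic expression of the fact that the separating hyperplane is determined by the support vectors alone.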

Random forest (RF)

An RF is constructed from a number of decision trees, each of which is trained on a different sample of the data [39]. This method permits the evaluation of the sampling distribution by using random sampling, which is particularly appropriate for some simple models. The following steps are followed for RF classification.

(1) The original training set is prepared, in which the number of cases is X and the number of input features is Y. This set is used to grow the trees.

(2) A secondary training set is randomly created through sampling with replacement (the bootstrap technique) n tree times; hence, n tree bootstrap training sets for the RF are created.

(3) Before features are selected for each nonleaf (internal) node, the technique randomly chooses a fixed number of features from all features, uses them as candidate splitting features of the current decision tree, and chooses the optimal one to split the node. The number of features tried at each split is denoted by mtry, with mtry ≤ M.

(4) Each tree is grown as large as possible without pruning.

(5) The grown trees are combined into an RF. Each tree in the RF casts a single vote for the most popular class, and the classifier result is determined by a majority vote of the trees.

Gradient boosting decision tree (GBDT)

The boosting method based on gradient descent and its corresponding model are called gradient boosting machines (GBMs) [40]. GBMs construct basic learners through repeated calculations by weighting misclassified observations; the prediction model is an ensemble of weak prediction models. GBMs determine the weights by operating on the negative partial derivative of the loss function at each training observation. In GBMs, a decision tree is the most common type of weak model used, giving the gradient boosting decision tree (GBDT). The GBDT is built in a stage-wise manner and can be optimized for any differentiable loss function. The GBDT uses fixed-size regression trees as the basic model and an iterative calculation method to minimize the loss function. Each regression tree uses the residual of the previous trees to select features and split points, and the outputs of all regression trees are summed to form the trained GBDT model.
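The stage-wise residual fitting can be sketched with regression stumps under squared loss, where the negative gradient is exactly the residual. This is an illustrative toy, not the study's GBDT configuration; the tree count and learning rate are assumptions.

```python
import numpy as np

def fit_reg_stump(X, r):
    # Least-squares regression stump on the residuals r: choose the split
    # minimising the squared error, with leaf values = residual means.
    best_sse, best = np.inf, None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f])[:-1]:
            left = X[:, f] <= t
            lv, rv = r[left].mean(), r[~left].mean()
            sse = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if sse < best_sse:
                best_sse, best = sse, (f, t, lv, rv)
    return best

def gbdt_fit(X, y, n_trees=50, lr=0.1):
    # Stage-wise boosting under squared loss: each stump fits the residual
    # (negative gradient) of the current ensemble, scaled by the learning rate.
    base = y.mean()
    pred = np.full(len(y), base)
    trees = []
    for _ in range(n_trees):
        f, t, lv, rv = fit_reg_stump(X, y - pred)
        pred += lr * np.where(X[:, f] <= t, lv, rv)
        trees.append((f, t, lv, rv))
    return base, trees

def gbdt_predict(base, trees, X, lr=0.1):
    # Sum the (shrunken) outputs of all stumps on top of the base prediction.
    pred = np.full(len(X), base)
    for f, t, lv, rv in trees:
        pred += lr * np.where(X[:, f] <= t, lv, rv)
    return pred
```

Each round shrinks the remaining residual by roughly a factor of (1 − lr) on data a stump can fit, which is why many small steps outperform a few large ones.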

eXtreme gradient boosting (XGBoost)

The XGBoost was proposed by Chen and Guestrin in 2016 [41]. XGBoost is an ensemble learning algorithm based on gradient boosting. It provides state-of-the-art results for many bioinformatics problems. XGBoost is essentially an ensemble method based on the gradient boosted tree. The result of the prediction is the sum of scores predicted by trees, as shown in the following equation:

$$\hat{y}_{i} = \mathop \sum \limits_{k = 1}^{K} f_{k} \left( {x_{i} } \right), f_{k} \in F$$
(11)

where \(x_{i}\) is the i-th training sample, \(f_{k} \left( {x_{i} } \right)\) is the score of the k-th tree, and F is the space of functions containing all gradient boosted trees. The objective function can be optimized using the following equation:

$$Obj\left( \theta \right) = \mathop \sum \limits_{i = 1}^{n} l\left( {y_{i} ,\hat{y}_{i} } \right) + \mathop \sum \limits_{k}^{K} \Omega \left( {f_{k} } \right)$$
(12)

where \(\mathop \sum \limits_{i = 1}^{n} l\left( {y_{i} ,\hat{y}_{i} } \right)\) is a differentiable loss function that measures the fit between the training label \(y_{i}\) and the model prediction \(\hat{y}_{i}\), and \(\mathop \sum \limits_{k}^{K} \Omega \left( {f_{k} } \right)\) is a regularization term that penalizes the complexity of the model to avoid overfitting.
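Equation (12) can be made concrete with a small numeric sketch. This assumes a squared-error loss and the commonly used XGBoost penalty Ω(f) = γT + (λ/2)‖w‖², where T is a tree's leaf count and w its leaf scores; these specific choices are illustrative, not taken from the study.

```python
import numpy as np

def xgb_objective(y, y_hat, trees, gamma=1.0, lam=1.0):
    # Eq. (12) with squared-error loss l(y, y_hat) = (y - y_hat)^2 and
    # Omega(f) = gamma * T + (lambda / 2) * ||w||^2 per tree, where each
    # entry of `trees` is that tree's vector of leaf scores w (so T = len(w)).
    loss = np.sum((np.asarray(y) - np.asarray(y_hat)) ** 2)
    penalty = sum(gamma * len(w) + 0.5 * lam * np.sum(np.square(w))
                  for w in trees)
    return loss + penalty
```

The penalty term is what separates XGBoost's objective from plain gradient boosting: deeper trees and larger leaf scores are charged explicitly, trading training fit against model complexity.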

Deep learning (DL)

Deep learning is a branch of ML that uses artificial neural networks to imitate a learning model generated based on the structure of the human brain [42]. The basic unit of an artificial neural network is a neuron. Each neuron is connected to other neurons, can input and output signals, and can transmit information [43]. In the era of big data, DL has been widely used to learn and train models by using large amounts of data to provide future predictions.

The deep neural network (DNN) model is a multilayer perceptron (MLP) neural network that consists of two or more hidden layers and is the basic model of DL [44]. The MLP is a feedforward neural network whose architecture consists of an input layer, hidden layers, and an output layer, each consisting of multiple neurons. In the input layer, each neuron takes the input data X and transmits the signal to the next layer of the network. In the hidden layers, each neuron receives a data signal that is the weighted sum of the outputs of the neurons in the previous layer, and an activation function is applied inside each neuron to control the output. The network thus applies a nonlinear mapping from the input vector to the output, parameterized by weights called the weight vector (W). The variables used in DNNs are the bias b, input x, output y, weight w, weighted-sum function σ, and activation function \(f\left( \sigma \right)\). Each neuron in a DNN uses the following equations:

$$\sigma :Sum = w*x + b$$
(17)
$$y :f\left( \sigma \right) = f\left( {w*x + b} \right)$$
(18)

The input layer has i neurons, there are k hidden layers with j neurons each, and the output layer has O neurons. The weights between layers are denoted by W and are randomly generated when the model is created; they are updated according to the error between the model output and the actual output. The formula for calculating the number of weights (W) between layers is as follows:

$$W = \left( {I*H_{1} } \right) + \mathop \sum \limits_{m = 1}^{k - 1} H_{m} *H_{m + 1} + \mathop \sum \limits_{m = 1}^{k} BiasH_{m} + \left( {H_{k} *O} \right) + BiasO$$
(19)

The MLP used in this study consisted of one input layer, three hidden layers, and one output layer. Both the input and hidden layers used a rectified linear unit (ReLU) activation function, and a dropout probability of 0.1 was applied before the last hidden layer. Because this study examined a classification problem, the output layer used a nonlinear sigmoid activation function. The ReLU and sigmoid activation functions are presented as follows, and Fig. 3 presents the DNN architecture.

$$\sigma \left( x \right) = \left\{ {\begin{array}{*{20}l} {x,} \hfill & {x \ge 0} \hfill \\ {0,} \hfill & {x < 0} \hfill \\ \end{array} } \right. = \max \left( {0,x} \right)$$
(20)
$$f\left( z \right) = \frac{1}{{1 + e^{ - z} }}$$
(21)
Fig. 3 Architecture of the DNN
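Equations (19)–(21) can be checked with a small forward-pass sketch. This is an illustrative toy, not the trained network from the study: the layer sizes (12 inputs, hidden layers of 32/16/8, one output) are assumptions, and dropout is omitted because it is only active during training.

```python
import numpy as np

def n_weights(I, hidden, O):
    # Eq. (19): inter-layer weights plus one bias per hidden/output neuron.
    k = len(hidden)
    total = I * hidden[0]
    total += sum(hidden[m] * hidden[m + 1] for m in range(k - 1))
    total += sum(hidden)                 # hidden-layer biases
    total += hidden[-1] * O + O          # output weights and biases
    return total

def relu(x):
    return np.maximum(0.0, x)            # Eq. (20)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))      # Eq. (21)

def mlp_forward(x, Ws, bs):
    # ReLU on the hidden layers, sigmoid on the output layer.
    for W, b in zip(Ws[:-1], bs[:-1]):
        x = relu(W @ x + b)
    return sigmoid(Ws[-1] @ x + bs[-1])
```

Counting the actual parameter arrays and comparing against `n_weights` is a quick sanity check that Eq. (19) accounts for every weight and bias in the network.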

Bidirectional long short-term memory (Bi-LSTM)

The LSTM employs three custom-built gates to store information [45]. In the original architecture proposed by Hochreiter [46], the update of the cell output state depends on the previous hidden-layer output and the current input. Moreover, a peephole connection can be attached so that the previous cell state is used as an additional parameter. For a single LSTM cell, the data flow between gates and inputs is depicted in Fig. 4. At each time t, \(x_{t}\) is the current input, \(h_{t - 1}\) is the previous hidden state, and \(c_{t - 1}\) is the previous cell output state. The outputs of the three gates can be calculated using Eqs. (22)–(24): the forget gate \(f_{t}\) decides whether \(c_{t - 1}\) is retained, the input gate \(i_{t}\) decides whether the state is updated by the current input \(x_{t}\), and the output gate \(o_{t}\) decides whether \(h_{t - 1}\) is passed to the next cell. At each timestamp t, \(a_{t}\) is the candidate for updating the memory cell. The output of the current LSTM cell \(c_{t}\) and the current hidden state \(h_{t}\) can be calculated according to Eqs. (25)–(27).

$$i_{t} = \sigma \left( {X_{i} x_{t} + H_{i} h_{t - 1} + C_{i} c_{t - 1} + b_{i} } \right)$$
(22)
$$o_{t} = \sigma \left( {X_{o} x_{t} + H_{o} h_{t - 1} + C_{o} c_{t - 1} + b_{o} } \right)$$
(23)
$$f_{t} = \sigma \left( {X_{f} x_{t} + H_{f} h_{t - 1} + C_{f} c_{t - 1} + b_{f} } \right)$$
(24)
$$a_{t} = tanh\left( {X_{a} x_{t} + H_{a} h_{t - 1} + C_{a} c_{t - 1} + b_{a} } \right)$$
(25)
$$c_{t} = f_{t} *c_{t - 1} + i_{t} *a_{t}$$
(26)
$$h_{t} = o_{t} *tanh\left( {c_{t} } \right)$$
(27)
Fig. 4
figure 4

Bi-LSTM architecture

In these equations, * represents the element-wise multiplication operator; X, H, and C are weight matrices; and b denotes the bias vectors.
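A single cell step following Eqs. (22)–(27) can be sketched as follows; the weight containers and dimensions are hypothetical, and the candidate state uses tanh as in the standard LSTM formulation:

```python
import numpy as np

def lstm_cell_step(x_t, h_prev, c_prev, params):
    """One peephole LSTM step following Eqs. (22)-(27). `params` maps each
    gate g in {i, o, f, a} to its weights (X_g, H_g, C_g) and bias b_g."""
    sigma = lambda z: 1.0 / (1.0 + np.exp(-z))
    def gate(name, fn):
        X, H, C, b = params[name]
        return fn(X @ x_t + H @ h_prev + C @ c_prev + b)
    i_t = gate("i", sigma)            # input gate, Eq. (22)
    o_t = gate("o", sigma)            # output gate, Eq. (23)
    f_t = gate("f", sigma)            # forget gate, Eq. (24)
    a_t = gate("a", np.tanh)          # candidate state, Eq. (25)
    c_t = f_t * c_prev + i_t * a_t    # Eq. (26)
    h_t = o_t * np.tanh(c_t)          # Eq. (27)
    return h_t, c_t

# Sanity check with zero weights: every gate opens halfway (sigmoid(0) = 0.5)
d = 1
zeros = (np.zeros((d, d)), np.zeros((d, d)), np.zeros((d, d)), np.zeros(d))
params = {g: zeros for g in "iofa"}
h, c = lstm_cell_step(np.zeros(d), np.zeros(d), np.ones(d), params)
print(c, h)  # c = 0.5 * c_prev = [0.5], h = 0.5 * tanh(0.5) ≈ [0.2311]
```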

Model evaluation

The data set was randomly divided into two groups: 70% for the training set and 30% for the testing set. We used the training set to establish the prediction models and the testing set to evaluate them.
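The split can be sketched as a random permutation of patient indices; the seed is an arbitrary assumption for reproducibility:

```python
import numpy as np

def train_test_split_70_30(n_samples, seed=42):
    """Randomly partition sample indices into 70% training / 30% testing,
    mirroring the split used for model development in this study."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    cut = int(round(0.7 * n_samples))
    return idx[:cut], idx[cut:]

# 1,567 new patients on HD, as in this study
train_idx, test_idx = train_test_split_70_30(1567)
print(len(train_idx), len(test_idx))  # 1097 470
```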

In the binary classification model, the predicted results were combined with the actual results to produce four elements, namely true positives, false positives, true negatives, and false negatives, which are represented by TP, FP, TN, and FN, respectively (T represents a correct prediction, and F represents an incorrect prediction). This process enables the formation of a confusion matrix, from which the following rates are computed [47]:

$$TPR = \frac{TP}{{TP + FN}}$$
(28)
$$FPR = \frac{FP}{{FP + TN}}$$
(29)
$$FNR = \frac{FN}{{TP + FN}}$$
(30)
$$TNR = \frac{TN}{{FP + TN}}$$
(31)

The performance of the models was evaluated using five criteria, namely accuracy, prevalence, sensitivity, specificity, and the area under the curve (AUC). The area under the receiver operating characteristic (ROC) curve was used to calculate the AUC and to identify the model with the highest accuracy [48]. The larger the AUC value, the higher the accuracy. The relevant equations are as follows:

$$Specificity = TNR$$
(32)
$$Sensitivity = TPR$$
(33)
$$Prevalence = \frac{TP + FN}{{TP + TN + FP + FN}}$$
(34)
$$Accuracy = \frac{TP + TN}{{TP + FP + FN + TN}}$$
(35)
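The confusion-matrix counts and the derived criteria can be computed as follows; the toy labels are illustrative, and prevalence is taken here as the proportion of actual positives:

```python
def confusion_counts(y_true, y_pred):
    """Tally TP, FP, TN, FN for a binary classifier."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, tn, fn

def metrics(tp, fp, tn, fn):
    """Rates derived from the confusion matrix."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),    # TPR
        "specificity": tn / (fp + tn),    # TNR
        "prevalence": (tp + fn) / total,  # proportion of actual positives
        "accuracy": (tp + tn) / total,    # correct predictions / all
    }

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 1]
m = metrics(*confusion_counts(y_true, y_pred))
print(m)  # sensitivity 2/3, specificity 0.8, prevalence 0.375, accuracy 0.75
```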

Statistical analysis

Table 1 summarizes the demographic characteristics of the patients on HD and the distribution of albumin-related biomarkers, including the mean (standard deviation), frequency (percentage), and median (interquartile range). Differences between the patients with a 3-month mean albumin level of ≥ 3.5 g/dL and those with a 3-month mean albumin level of < 3.5 g/dL were determined using independent two-sample t-tests or chi-squared tests, as appropriate. Pearson’s correlation analysis was performed, and correlation plots and correlation heatmaps were drawn to assess collinearity between mean albumin levels and biomarkers.

Table 1 Baseline characteristics of new patients on HD divided into 2 categories by 3-month mean albumin level (n = 1567)

Associations between mean albumin levels and individual factors were analyzed using univariate logistic regression analysis. Multivariate logistic regression was used to analyze associations between mean albumin categories and multiple factors. The fully adjusted model included all factors, whereas the GOA model selected factors by using the GOA. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated. The performance of the multiple logistic regression models was compared based on the Akaike information criterion (AIC); a low AIC value indicates a low prediction error for the corresponding model. The quantile g-computation method was used to calculate the factor weights, which were used to adjust the original blood values and highlight the importance of each factor. The SMOTE method was used to address the data imbalance problem. All P values were two-tailed, and a P value of < 0.05 was considered statistically significant. All statistical analyses were performed using R version 4.0.5 (R Development Core Team 2022). The relevant packages included stats, My.stepwise, metaheuristicOpt, e1071, keras, and tensorflow.
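As an illustration of the SMOTE step, a minimal sketch of the oversampling idea (interpolating between a minority sample and one of its minority-class nearest neighbours) is shown below; the feature matrix is simulated, and this is not the exact implementation used in the study:

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE sketch: synthesize n_new minority-class points by
    interpolating between a sampled minority point and one of its k nearest
    minority-class neighbours (Chawla et al., 2002)."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # exclude the point itself
        j = rng.choice(neighbours)
        lam = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack([X_min, np.array(synthetic)])

# Balance 284 low-albumin patients against 1283 normal-albumin patients
X_minority = np.random.default_rng(1).normal(size=(284, 12))
X_balanced = smote_oversample(X_minority, n_new=1283 - 284)
print(X_balanced.shape)  # (1283, 12)
```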

Results

Baseline characteristics and laboratory measurement distributions of patients on HD

Table 1 presents the distribution of clinicopathological characteristics between the patients with a 3-month mean albumin level of ≥ 3.5 g/dL and those with a 3-month mean albumin level of < 3.5 g/dL. Among the 1567 patients on HD included in this study, 1283 and 284 had 3-month mean albumin levels of ≥ 3.5 and < 3.5 g/dL, respectively. The patients on HD with a 3-month mean albumin level of < 3.5 g/dL were older and had a higher prevalence of diabetes mellitus and heart failure and a higher risk of mortality. Moreover, the laboratory measurements differed significantly between the groups.

Individual factors affecting 3-month mean albumin levels

Table 2 presents the results of the univariate logistic regression analysis of 3-month mean albumin levels before death in patients on HD. The results revealed that older age (OR = 1.05, 95% CI = 1.04–1.06, P < 0.001), diabetes mellitus (OR = 1.39, 95% CI = 1.07–1.81, P = 0.013), heart failure (OR = 1.41, 95% CI = 1.04–1.89, P = 0.025), and cancer (OR = 1.50, 95% CI = 1.16–1.95, P = 0.002) were associated with a 3-month mean albumin level of < 3.5 g/dL. In terms of laboratory measurements, low hemoglobin levels (OR = 0.63, 95% CI = 0.57–0.70, P < 0.001), low iron levels (OR = 0.99, 95% CI = 0.99–1.00, P < 0.001), high ferritin levels (OR = 1.001, 95% CI = 1.0007–1.0012, P < 0.001), low sodium levels (OR = 0.90, 95% CI = 0.87–0.94, P < 0.001), low potassium levels (OR = 0.54, 95% CI = 0.44–0.66, P < 0.001), low calcium levels (OR = 0.61, 95% CI = 0.51–0.72, P < 0.001), low phosphate levels (OR = 0.70, 95% CI = 0.63–0.77, P < 0.001), low blood urea nitrogen levels (OR = 0.99, 95% CI = 0.89–0.99, P < 0.001), low creatinine levels (OR = 0.67, 95% CI = 0.63–0.71, P < 0.001), high alkaline phosphatase levels (OR = 1.01, 95% CI = 1.0073–1.0124, P < 0.001), and low cholesterol levels (OR = 0.99, 95% CI = 0.99–1.00, P < 0.001) were associated with a 3-month mean albumin level of < 3.5 g/dL. According to this univariate analysis, the first blood values obtained from the patients 3 months prior to death were significantly associated with the mean albumin levels in the 3 months prior to death.

Table 2 Univariate logistic regression analysis for 3-month mean albumin (n = 1567)

Multifactorial influencing factors of 3-month mean albumin levels determined using the GOA

Table 3 summarizes the results of the multivariate logistic regression analysis of 3-month mean albumin levels in new patients on HD 3 months prior to death, obtained using the fully adjusted model and the GOA feature selection model. Older age (OR = 1.01, 95% CI = 1.01–1.04, P < 0.001), low iron levels (OR = 0.99, 95% CI = 0.98–0.99, P < 0.001), low creatinine levels (OR = 0.77, 95% CI = 0.71–0.84, P < 0.001), and high alkaline phosphatase levels (OR = 1.01, 95% CI = 1.00–1.01, P < 0.001) were determined to be significant in the fully adjusted logistic regression model. Feature selection was performed using the GOA to select 12 out of 20 clinical factors, namely age; gender; hypertension; and hemoglobin, iron, ferritin, sodium, potassium, calcium, creatinine, alkaline phosphatase, and triglyceride levels. Older age (OR = 1.03, 95% CI = 1.02–1.04, P < 0.001), male gender (OR = 1.48, 95% CI = 1.07–2.06, P = 0.018), low hemoglobin levels (OR = 0.83, 95% CI = 0.73–0.95, P = 0.006), low iron levels (OR = 0.99, 95% CI = 0.99–1.00, P < 0.001), high ferritin levels (OR = 1.001, 95% CI = 1.0004–1.0011, P < 0.001), low sodium levels (OR = 0.94, 95% CI = 0.90–0.98, P = 0.005), low potassium levels (OR = 0.79, 95% CI = 0.64–0.98, P = 0.037), low calcium levels (OR = 0.72, 95% CI = 0.59–0.86, P = 0.001), low creatinine levels (OR = 0.77, 95% CI = 0.71–0.83, P < 0.001), high alkaline phosphatase levels (OR = 1.01, 95% CI = 1.00–1.01, P < 0.001), and low triglyceride levels (OR = 0.998, 95% CI = 0.9968–0.9998, P = 0.030) were all significant in the GOA feature selection model. The AIC values of the fully adjusted logistic regression model and the GOA feature selection model were 1173.52 and 1160.71, respectively. These results indicated that the GOA feature selection model had a lower AIC and higher accuracy in selecting risk factors for low serum albumin.

Table 3 Multivariate logistic regression analysis for 3-month mean albumin (n = 1567)

Quantile g-computation adjustment of factor weights

Figure 5 presents the risk factors for low serum albumin selected using the GOA; the weight ratio of each factor was calculated using the quantile g-computation method. Alkaline phosphatase was assigned the highest positive weight, followed by age and ferritin level. Creatinine was assigned the largest negative weight, followed by blood measurements such as iron and hemoglobin levels. In addition, age and creatinine levels were identified as more crucial risk factors for low serum albumin levels than the other clinical factors.

Fig. 5
figure 5

Weights representing the proportion of the positive or negative partial effect of biomarkers selected using the GOA in the quantile g-computation method
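A minimal sketch of the quantile g-computation weighting idea, assuming a linear outcome model and quartile scores; the simulated data and helper names are hypothetical, not the study's implementation:

```python
import numpy as np

def qgcomp_weights(X, y, n_quantiles=4):
    """Minimal quantile g-computation sketch: transform each exposure into
    quantile scores (0..n_quantiles-1), fit a joint linear model, and express
    each coefficient as a proportion of the coefficients sharing its sign."""
    X = np.asarray(X, dtype=float)
    Xq = np.empty_like(X)
    for j in range(X.shape[1]):
        cuts = np.quantile(X[:, j], np.linspace(0, 1, n_quantiles + 1)[1:-1])
        Xq[:, j] = np.searchsorted(cuts, X[:, j])      # quantile score 0..3
    A = np.column_stack([np.ones(len(Xq)), Xq])        # intercept + scores
    beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]    # drop the intercept
    pos, neg = beta[beta > 0].sum(), beta[beta < 0].sum()
    # Within each direction, the weights are proportions that sum to 1
    return np.where(beta > 0, beta / pos if pos else 0.0,
                    beta / neg if neg else 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(size=500)
w = qgcomp_weights(X, y)
print(w)  # the two positive weights sum to 1; the lone negative weight is 1
```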

Prediction of low serum albumin

In this study, we used three models and seven methods to predict low serum albumin. The three models were the fully adjusted, GOA, and GOA quantile g-computation weight models. We compared the prediction performance of the three models by using seven methods, namely KNN, SVM, RF, GBDT, XGBoost, DNN, and Bi-LSTM, based on accuracy, prevalence, sensitivity, specificity, and AUC (Table 4 and Fig. 6). Table 4 presents the prediction results for the three models. For each of the seven methods, the accuracy and AUC of the GOA quantile g-computation weight model were higher than those of the other two models. Compared with the GOA model, the accuracy of the GOA quantile g-computation weight model improved by 0.1, 0.3, 0.6, 0.3, 0.5, and 0.12 when the KNN, SVM, RF, GBDT, XGBoost, and DNN methods were used, respectively. Compared with the fully adjusted and GOA models, the accuracy of the Bi-LSTM method combined with the GOA quantile g-computation weight model improved by at least 0.16 and at most 0.21. The Bi-LSTM method combined with the GOA quantile g-computation weight model yielded the most favorable results for predicting low serum albumin. To objectively verify the performance of the proposed model, fivefold cross-validation was performed on the data set, and the averaged results are shown in Table 5.
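The fivefold splitting used for this validation can be sketched as follows (index bookkeeping only; model fitting is omitted):

```python
import numpy as np

def five_fold_indices(n_samples, seed=0):
    """Split sample indices into five folds for cross-validation; each fold
    serves once as the test set while the rest form the training set."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), 5)
    for k, test in enumerate(folds):
        train = np.concatenate([f for i, f in enumerate(folds) if i != k])
        yield k, train, test

for k, train, test in five_fold_indices(1567):
    print(k, len(train), len(test))
# 1567 samples split into fold sizes 314 + 314 + 313 + 313 + 313
```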

Table 4 Comparison of the prediction performance for 3-month mean albumin with 2 categories
Fig. 6
figure 6

ROC curves for the A KNN, B SVM, C RF, D GBDT, E XGBoost, F DNN, and G Bi-LSTM methods

Table 5 Comparison of the prediction performance of the GOA quantile g-computation weight model with fivefold cross-validation

Figure 6 presents a comparison of the ROC curves of the seven methods for the three models. The results revealed that the AUC of the GOA quantile g-computation weight model was higher than those of the other two models. The AUC values obtained using the KNN, SVM, RF, GBDT, XGBoost, DNN, and Bi-LSTM methods with the GOA quantile g-computation weight model were 0.87, 0.86, 0.91, 0.95, 0.94, 0.96, and 0.98, respectively. Moreover, the prediction performance of the Bi-LSTM method combined with the GOA quantile g-computation weight model was significantly higher than that of the other methods.
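AUC values such as those above can be reproduced conceptually from predicted scores via the rank (Mann–Whitney) formulation of the AUC; the toy scores below are illustrative:

```python
def auc_from_scores(y_true, scores):
    """AUC computed as the probability that a randomly chosen positive
    receives a higher predicted score than a randomly chosen negative
    (ties count one half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc_from_scores(y_true, scores))  # 8/9 ≈ 0.889
```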

Correlations between biomarkers and serum albumin

Figure 7 presents a heatmap depicting the correlations between serum albumin levels and 15 biomarkers. The saturation and size of each circle indicate the magnitude of the correlation. Blue indicates a positive correlation, and red indicates a negative correlation.

Fig. 7
figure 7

Pearson correlations between studied biomarkers and serum albumin levels

Strong positive correlations were observed between mean albumin and creatinine levels [49], between creatinine and phosphate levels [50], and between phosphate and blood urea nitrogen levels.

Strong negative correlations were observed between age and creatinine levels, between age and mean albumin levels, and between alkaline phosphatase and mean albumin levels.

In summary, positive and negative correlations were noted among the biomarkers. The strongly correlated factors were related to nutritional status and clinical significance [51]. For example, advanced age may affect basal metabolism and nutrient absorption, and creatinine is mainly related to metabolites released by muscle activity. For patients on HD, dietary control is crucial to health. Phosphate is obtained from the diet, and its intake should be balanced.
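The pairwise correlations underlying such a heatmap can be computed directly; the simulated biomarker columns below are illustrative assumptions chosen to mimic the strong positive and negative correlations described above:

```python
import numpy as np

def pearson_matrix(columns):
    """Pairwise Pearson correlations between biomarker columns, as used to
    build the correlation heatmap; rows are patients, columns are markers."""
    return np.corrcoef(np.asarray(columns, dtype=float), rowvar=False)

rng = np.random.default_rng(0)
albumin = rng.normal(3.8, 0.4, size=200)
creatinine = 2.0 * albumin + rng.normal(0, 0.2, size=200)   # strongly positive
age = -10.0 * albumin + rng.normal(0, 2.0, size=200)        # strongly negative
r = pearson_matrix(np.column_stack([albumin, creatinine, age]))
print(np.round(r, 2))  # r[0, 1] near +1, r[0, 2] near -1
```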

Discussion

This study used data from the longitudinal electronic health records of the largest HD center in Taiwan. Many studies have reported that serum albumin level is a nutritional indicator for HD, and previous studies using long-term clinical data have demonstrated a relationship between hypoalbuminemia and mortality in patients on HD [52, 53]. In this study, we observed that the albumin levels of Taiwanese patients receiving maintenance HD were unstable in the 3 months before death, and their albumin values were mostly below the normal threshold of 3.5 g/dL. Therefore, we used the DL method to predict whether the mean albumin level of patients on HD was low 3 months before death, with the first measurement obtained 3 months before death used as the predictor. The results of this study indicated that the use of the GOA quantile g-computation weight model combined with the DL method can improve the efficiency of clinical factor screening and the accuracy of low serum albumin prediction.

Principal results

A complex interaction exists among clinical biomarkers. The preliminary analysis in this study revealed that the 3-month mean albumin level in patients on HD was 3.8 ± 0.4 g/dL. Furthermore, the 3-month mean albumin levels before the end of the study follow-up and before death were 3.8 ± 0.4 and 3.4 ± 0.5 g/dL, respectively, and the levels significantly differed between the patients who survived and those who died (P < 0.001). The 3-month mean albumin level before death was correlated with mortality. This study identified risk factors associated with low serum albumin. The results of the univariate logistic regression analysis revealed that the first laboratory values of the patients on HD obtained 3 months before death were significantly correlated with their albumin levels in the 3 months before death. Furthermore, the multivariate logistic regression analysis indicated that some factors significantly correlated with albumin level in the univariate model exhibited nonsignificant correlations in the fully adjusted multivariate model; this finding might be due to interactions among factors. Therefore, we used the GOA feature selection method to identify crucial risk factors for low serum albumin. The advantages of the GOA feature selection method are its high compatibility and its accelerated convergence toward a global optimal solution. Using the GOA for feature selection, we selected 12 of 20 clinical factors, namely age; gender; hypertension; and hemoglobin, iron, ferritin, sodium, potassium, calcium, creatinine, alkaline phosphatase, and triglyceride levels; all these factors were significant.

We determined that women (OR = 0.66, 95% CI = 0.51–0.86, P = 0.002) had a significantly lower risk of low serum albumin in the univariate model, whereas men had a nonsignificantly higher risk of low serum albumin in the fully adjusted multivariate model (OR = 1.41, 95% CI = 1.00–1.99, P = 0.051). Among the factors selected by the GOA, male gender (OR = 1.48, 95% CI = 1.07–2.06, P = 0.018) was associated with a higher risk of low serum albumin. Moreover, we observed that a low triglyceride level (OR = 0.999, 95% CI = 0.9971–1.0003, P = 0.133) was associated with a higher risk of low serum albumin in the fully adjusted multivariate model; however, this association was not significant. By contrast, in the GOA model, a low triglyceride level (OR = 0.998, 95% CI = 0.9968–0.9998, P = 0.030) was significantly associated with a higher risk of low serum albumin. These findings indicate that these factors can be used in combination to predict low serum albumin and possibly reflect interactions between biomarkers.

For prediction, this study used three models, namely the fully adjusted, GOA, and GOA quantile g-computation weight models, and seven methods, namely KNN, SVM, RF, GBDT, XGBoost, DNN, and Bi-LSTM. The GOA quantile g-computation weight model used the GOA to select the most favorable combination of factors associated with low serum albumin. Subsequently, the quantile g-computation method was used to calculate the weight of each factor. These weights were used to adjust the original blood values such that the important blood factors had a greater influence on model fitting, thus improving the predictive ability of the model. In addition, data imbalance is a common problem in medical data; we therefore used the SMOTE method to address this problem and subsequently used each of the seven methods to compare the performance of the models. The results revealed no significant differences between the accuracy and AUC of the fully adjusted model and those of the GOA model for any of the seven methods. However, the accuracy and AUC of the GOA quantile g-computation weight model were significantly higher than those of the other two models across all seven methods, and they were highest when the DL methods were used. This finding may have arisen because DL simulates the basic operating principles of the nervous system in the human brain: with the weight-adjusted values and the powerful self-learning ability of the DL method, the model constantly recalculates its weights during training. The DL method thus exerted a multiplier effect and improved the predictive ability of the model.

Comparison with prior studies

Hypoalbuminemia in patients on HD is associated with malnutrition, inflammation, and increased mortality [54, 55]. Figure 8 presents the distribution of albumin levels in the 3 months before the death of patients on HD. The dots on the left represent the distribution of albumin levels 1 month before death, and those on the right represent the distribution 3 months before death. The blue dots represent the albumin levels of the patients who survived, and the red dots represent those of the patients who died. When the distribution in Fig. 8 is viewed from right to left, the patients who died had lower albumin levels 3 months before death than did the patients who survived. The middle dots present the distribution of albumin levels 2 months before death; these levels exhibited a downward trend, and most of these patients eventually had an albumin level of ≤ 3 g/dL. Finally, the distribution of albumin levels in the month before death is shown on the far left. The albumin levels of these patients were between 2 and 3 g/dL, and a few extreme values below 2 g/dL were noted. These findings indicate that low serum albumin is associated with mortality, which is consistent with previous studies.

Fig. 8
figure 8

Scatter diagram of the distribution of albumin levels in the 3 months before the death of patients on HD

This study identified and predicted factors associated with low serum albumin. These factors can be used to predict the mortality risk of patients on HD. We used the GOA quantile g-computation weight model combined with the DL method to determine the optimal combination of factors associated with low serum albumin levels in patients on HD. The related factors included age; gender; hypertension; and hemoglobin, iron, ferritin, sodium, potassium, calcium, creatinine, alkaline phosphatase, and triglyceride levels. According to previous studies and clinical viewpoints, organ failure eventually occurs in older patients, resulting in the impairment of some repair and absorption mechanisms, which may easily lead to malnutrition and indirectly increase the risk of mortality [56, 57]. Patients with chronic kidney disease often experience loss of appetite. Inflammation is highly correlated with appetite, and men have a higher risk of anorexia than do women [55]. Because of differences in body composition between men and women, such as in hormones, muscle mass, and body water content, the severity of related symptoms may differ [56, 57]. Female patients on HD appear to have a survival advantage over male patients on HD because of the presence of sex hormones, which reduce the likelihood of women developing anorexia and malnutrition [58]. In addition, appetite may affect biomarkers and physical indicators, and decreased appetite may lead to decreased concentrations of nutrition-related biomarkers, such as serum albumin and creatinine [55]. Moreover, dialysis concentration may affect dialysis efficacy [59]. A study reported that the dialysis efficacy of patients who died was lower than that of those who survived; lower dialysis efficacy results in lower levels of calcium and creatinine and a lower ultrafiltration volume [60]. Impaired nutritional status results in lower levels of triglycerides and lipoprotein cholesterol and a lower body mass index [61, 62]. In summary, the optimal factors associated with low serum albumin levels in patients on HD determined using the GOA appeared to be strongly correlated with nutritional status.

Limitations

This study has some limitations because of its retrospective nature. First, previous studies have reported that albumin indicators are related to nutritional status; this study did not consider patients’ body composition or the discomfort caused by inappropriate dialysis doses. Second, our results may be limited by potential residual confounding effects, such as daily physical activity, dietary intake, and quality of life. Finally, factors associated with low serum albumin might differ between genders, and this study did not analyze the genders separately. Previous studies have reported that gender differences affect biomarkers, and in this study, we observed that gender affected albumin levels. Therefore, separate analyses by gender should be conducted in future studies to improve clinical care. Furthermore, studies should examine the effects of additional clinical factors on patients on HD, including comorbidities, medication, and dietary intake.

Conclusions

Malnutrition is often observed in patients receiving long-term HD treatment. Previous studies have reported that the all-cause mortality of patients on HD is related to nutritional status. In this study, the GOA was used to select the factors most strongly associated with low serum albumin. Because the data may be affected by interference factors, we used the quantile g-computation method to calculate weights for adjustment. The GOA selected 12 parameters, namely age; gender; hypertension; and hemoglobin, iron, ferritin, sodium, potassium, calcium, creatinine, alkaline phosphatase, and triglyceride levels, all of which were significantly associated with low serum albumin. By selecting factors through the GOA and using the quantile g-computation method for weight adjustment in combination with the DL method, we determined the most effective prediction model. The GOA quantile g-computation weight model combined with the DL method can accurately predict low serum albumin in new patients on HD. The selected factors should be considered in the further nutritional management of patients on HD. Appropriate prognostic care and treatment are essential for improving the quality of life of patients on HD.

Availability of data and materials

Not applicable.

References

  1. Burrows NR, Koyama A, Pavkov ME. Reported cases of end-stage kidney disease—United States, 2000–2019. Am J Transpl. 2022;22(5):1483–6.

    Article  Google Scholar 

  2. Cox KJ, Parshall MB, Hernandez SH, Parvez SZ, Unruh ML. Symptoms among patients receiving in-center hemodialysis: a qualitative study. Hemodial Int. 2017;21(4):524–33.

    Article  PubMed  Google Scholar 

  3. Zucker I, Yosipovitch G, David M, Gafter U, Boner G. Prevalence and characterization of uremic pruritus in patients undergoing hemodialysis: uremic pruritus is still a major problem for patients with end-stage renal disease. J Am Acad Dermatol. 2003;49(5):842–6.

    Article  PubMed  Google Scholar 

  4. Xie J, Song C. Analysis of quality of life and risk factors in 122 patients with persistent hemodialysis. Pakistan J Med Sci. 2022;38:1026.

    Google Scholar 

  5. Kaysen GA, et al. Relationships among inflammation nutrition and physiologic mechanisms establishing albumin levels in hemodialysis patients. Kidney Int. 2002;61(6):2240–9.

    Article  PubMed  Google Scholar 

  6. Chen J-B, Lee W-C, Cheng B-C, Moi S-H, Yang C-H, Lin Y-D. Impact of risk factors on functional status in maintenance hemodialysis patients. Eur J Med Res. 2017;22(1):1–8.

    Article  Google Scholar 

  7. Shoji T, Tsubakihara Y, Fujii M, Imai E. Hemodialysis-associated hypotension as an independent risk factor for two-year mortality in hemodialysis patients. Kidney Int. 2004;66(3):1212–20.

    Article  PubMed  Google Scholar 

  8. Hörl MP, Hörl WH. Hemodialysis-associated hypertension: pathophysiology and therapy. Am J Kidney Dis. 2002;39(2):227–44.

    Article  PubMed  Google Scholar 

  9. Bergström J. Nutrition and mortality in hemodialysis. J Am Soc Nephrol. 1995;6(5):1329–41.

    Article  PubMed  Google Scholar 

  10. Owen WF Jr, Lew NL, Liu Y, Lowrie EG, Lazarus JM. The urea reduction ratio and serum albumin concentration as predictors of mortality in patients undergoing hemodialysis. N Engl J Med. 1993;329(14):1001–6.

    Article  PubMed  Google Scholar 

  11. Kaysen GA, Stevenson FT, Depner TA. Determinants of albumin concentration in hemodialysis patients. Am J Kidney Dis. 1997;29(5):658–68.

    Article  CAS  PubMed  Google Scholar 

  12. Leavey SF, Strawderman RL, Jones CA, Port FK, Held PJ. Simple nutritional indicators as independent predictors of mortality in hemodialysis patients. Am J Kidney Dis. 1998;31(6):997–1006.

    Article  CAS  PubMed  Google Scholar 

  13. Cheng T-H, Wei C-P, Tseng VS. Feature selection for medical data mining: comparisons of expert judgment and automatic approaches. In: 19th IEEE symposium on computer-based medical systems (CBMS'06), 2006, pp. 165–170: IEEE.

  14. Agatonovic-Kustrin S, Beresford R. Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research. J Pharm Biomed Anal. 2000;22(5):717–27.

    Article  CAS  PubMed  Google Scholar 

  15. Kennedy J, Eberhart R, Particle swarm optimization. In: Proceedings of ICNN'95-international conference on neural networks, 1995, vol. 4, pp. 1942–1948: IEEE.

  16. D. J. I. t. o. e. c. Simon, "Biogeography-based optimization," vol. 12, no. 6, pp. 702–713, 2008.

  17. L.-Y. Chuang, G.-Y. Chen, S.-H. Moi, F. Ou-Yang, M.-F. Hou, and C.-H. J. B. R. I. Yang, "Relationship between Clinicopathologic Variables in Breast Cancer Overall Survival Using Biogeography-Based Optimization Algorithm," vol. 2019, 2019.

  18. Wang P, Li Y, Reddy CK. Machine learning for survival analysis: a survey. ACM Comput Surv. 2019;51(6):1–36.

    Article  Google Scholar 

  19. Li Q, Cai W, Wang X, Zhou Y, Feng DD, Chen M, Medical image classification with convolutional neural network. In: 2014 13th international conference on control automation robotics & vision (ICARCV), 2014, pp. 844–848: IEEE.

  20. Iandola F, Moskewicz M, Karayev S, Girshick R, Darrell T, Keutzer KJAPA. Densenet: Implementing efficient convnet descriptor pyramids. Science. 2014;5:7889.

    Google Scholar 

  21. Kononenko I. Machine learning for medical diagnosis: history, state of the art and perspective. Artif Intell Med. 2001;23(1):89–109.

    Article  CAS  PubMed  Google Scholar 

  22. Yang X-S. Metaheuristic optimization: nature-inspired algorithms and applications. In: Artificial intelligence, evolutionary computing and metaheuristics: Springer, 2013, pp. 405–420.

  23. Fister Jr I, Yang X-S, Fister I, Brest J, Fister D. A brief review of nature-inspired algorithms for optimization, arXiv preprint arXiv:1307.4186, 2013.

  24. Saremi S, Mirjalili S, Lewis A. Grasshopper optimisation algorithm: theory and application. Adv Eng Softw. 2017;105:30–47.

    Article  Google Scholar 

  25. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44.

    Article  CAS  PubMed  Google Scholar 

  26. Canziani A, Paszke A, Culurciello E. An analysis of deep neural network models for practical applications, arXiv preprint arXiv:1605.07678, 2016.

  27. Hacibeyoglu M, Ibrahim MH. A novel multimean particle swarm optimization algorithm for nonlinear continuous optimization: application to feed-forward neural network training. Sci Program. 2018;2:5589.

    Google Scholar 

  28. Goldwasser P, et al. Predictors of mortality in hemodialysis patients. J Am Soc Nephrol. 1993;3(9):1613–22.

    Article  CAS  PubMed  Google Scholar 

  29. Chen J-B, Cheng B-C, Yang C-H, Hua M-S. An association between time-varying serum albumin level and the mortality rate in maintenance haemodialysis patients: a five-year clinical cohort study. BMC Nephrol. 2016;17(1):1–7.

    Article  CAS  Google Scholar 

  30. Mafarja M, Aljarah I, Faris H, Hammouri AI, Alam A-Z, Mirjalili S. Binary grasshopper optimisation algorithm approaches for feature selection problems. Expert Syst Appl. 2019;117:267–86.

    Article  Google Scholar 

  31. Hichem H, Elkamel M, Rafik M, Mesaaoud MT, Ouahiba C. A new binary grasshopper optimization algorithm for feature selection problem. J King Saud Univ-Comput Inf Sci. 2019;2:866.

    Google Scholar 

  32. Meraihi Y, Gabis AB, Mirjalili S, Ramdane-Cherif A. Grasshopper optimization algorithm: theory, variants, and applications. IEEE Access. 2021;9:50001–24.

    Article  Google Scholar 

  33. Niehoff NM, et al. Metals and trace elements in relation to body mass index in a prospective study of US women. Environ Res. 2020;184:109396.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  34. Keil AP, Buckley JP, O’Brien KM, Ferguson KK, Zhao S, White AJ. A quantile-based g-computation approach to addressing the effects of exposure mixtures. Environ Health Perspect. 2020;128(4):047004.

    Article  PubMed  PubMed Central  Google Scholar 

  35. Carrico C, Gennings C, Wheeler DC, Factor-Litvak P. Characterization of weighted quantile sum regression for highly correlated data in a risk analysis setting. J Agric Biol Environ Stat. 2015;20(1):100–20.

    Article  PubMed  Google Scholar 

  36. Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: synthetic minority over-sampling technique. Journal of artificial intelligence research. 2002;16:321–57.

    Article  Google Scholar 

  37. Cover T, Hart P. Nearest neighbor pattern classification. IEEE Trans Inf Theory. 1967;13(1):21–7.

    Article  Google Scholar 

  38. Vapnik V. Statistical learning theory new york. New York: Wiley; 1998.

    Google Scholar 

  39. Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.

    Article  Google Scholar 

  40. Friedman JH. Greedy function approximation: a gradient boosting machine. Ann Stat. 2001;5:1189–232.

  41. Chen T, Guestrin C. XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2016. p. 785–94.

  42. Janiesch C, Zschech P, Heinrich K. Machine learning and deep learning. Electron Mark. 2021;31(3):685–95.

  43. Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw. 2015;61:85–117.

  44. Gardner MW, Dorling S. Artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric sciences. Atmos Environ. 1998;32(14–15):2627–36.

  45. Nunez JC, Cabido R, Pantrigo JJ, Montemayor AS, Velez JF. Convolutional neural networks and long short-term memory for skeleton-based human activity and hand gesture recognition. Pattern Recogn. 2018;76:80–94.

  46. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9(8):1735–80.

  47. Luque A, Carrasco A, Martín A, de Las Heras A. The impact of class imbalance in classification performance metrics based on the binary confusion matrix. Pattern Recogn. 2019;91:216–31.

  48. Wang Q, Guo A. An efficient variance estimator of AUC and its applications to binary classification. Stat Med. 2020;39(28):4281–300.

  49. Kaysen GA, Chertow GM, Adhikarla R, Young B, Ronco C, Levin NW. Inflammation and dietary protein intake exert competing effects on serum albumin and creatinine in hemodialysis patients. Kidney Int. 2001;60(1):333–40.

  50. Klonoff-Cohen H, Barrett-Connor EL, Edelstein SL. Albumin levels as a predictor of mortality in the healthy elderly. J Clin Epidemiol. 1992;45(3):207–12.

  51. Chertow GM, Johansen KL, Lew N, Lazarus JM, Lowrie EG. Vintage, nutritional status, and survival in hemodialysis patients. Kidney Int. 2000;57(3):1176–81.

  52. Kaysen GA, Rathore V, Shearer GC, Depner TA. Mechanisms of hypoalbuminemia in hemodialysis patients. Kidney Int. 1995;48(2):510–6.

  53. Misra DP, Loudon JM, Staddon GE. Albumin metabolism in elderly patients. J Gerontol. 1975;30(3):304–6.

  54. Myers OB, et al. Age, race, diabetes, blood pressure, and mortality among hemodialysis patients. J Am Soc Nephrol. 2010;21(11):1970–8.

  55. Carrero JJ, et al. Comparison of nutritional and inflammatory markers in dialysis patients with reduced appetite. Am J Clin Nutr. 2007;85(3):695–701.

  56. Hecking M, et al. Sex-specific differences in hemodialysis prevalence and practices and the male-to-female mortality rate: the Dialysis Outcomes and Practice Patterns Study (DOPPS). PLoS Med. 2014;11(10):e1001750.

  57. Garagarza C, Flores AL, Valente A. Influence of body composition and nutrition parameters in handgrip strength: are there differences by sex in hemodialysis patients? Nutr Clin Pract. 2018;33(2):247–54.

  58. Stenvinkel P, et al. Inflammation and outcome in end-stage renal failure: does female gender constitute a survival advantage? Kidney Int. 2002;62(5):1791–8.

  59. Held PJ, et al. The dose of hemodialysis and patient mortality. Kidney Int. 1996;50(2):550–6.

  60. Ikeda-Taniguchi M, Takahashi K, Shishido K, Honda H. Total iron binding capacity is a predictor for muscle loss in maintenance hemodialysis patients. Clin Exp Nephrol. 2022;26(6):583–92.

  61. Sameiro-Faria MD, et al. Risk factors for mortality in hemodialysis patients: two-year follow-up study. Dis Markers. 2013;35(6):791–8.

  62. Yamamoto S, et al. Medical director practice of advising increased dietary protein intake in hemodialysis patients with hyperphosphatemia: associations with mortality in the dialysis outcomes and practice patterns study. J Ren Nutr. 2022;32(2):243–50.

Funding

This work was partly supported by the Ministry of Science and Technology, Taiwan, R.O.C. (grant 111-2221-E-165-002-MY3).

Author information

Authors and Affiliations

Authors

Contributions

CY and LC conceptualized, designed, and supervised this study. YC, JC, HH, and LC were in charge of data collection and data analysis. YC, JC, and HH drafted the manuscript. JC, HH, and LC interpreted the results of the analysis. All authors have read and approved the final manuscript.

Corresponding authors

Correspondence to Jin-Bor Chen, Hsiu-Chen Huang or Li-Yeh Chuang.

Ethics declarations

Ethics approval and consent to participate

All data were retrospectively collected using an approved data protocol (201800595B0) with a waiver of informed consent from patients. Written informed consent was obtained from all participants.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Yang, CH., Chen, YS., Chen, JB. et al. Application of deep learning to predict the low serum albumin in new hemodialysis patients. Nutr Metab (Lond) 20, 24 (2023). https://doi.org/10.1186/s12986-023-00746-z


Keywords

  • Hemodialysis
  • Serum albumin
  • Grasshopper optimization algorithm
  • Quantile g-computation
  • Deep learning